Tuning-Math Digests messages 2325 - 2349


Message: 2325

Date: Fri, 07 Dec 2001 07:55:23

Subject: Re: The grooviest linear temperaments for 7-limit music

From: dkeenanuqnetau

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:
> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
> > Huh? Obviously any badness metric _must_ slope down towards (0,0)
> > on the (cents,gens) plane.
> 
> The badness metric does, but the results don't. The results have a 
> similar distribution everywhere on the plane, but only when gens^2 
> cents is the badness metric.

You're not making any sense. The results are all just discrete points 
on the badness surface with respect to gens and cents, so they have 
exactly the same slope. The results have a similar distribution of 
what? Everywhere on what plane?




Message: 2326

Date: Fri, 07 Dec 2001 05:34:40

Subject: Re: The grooviest linear temperaments for 7-limit music

From: paulerlich

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:
> 
> > Yes, I was just going to say we should write the whole paper first
> > in the 5-limit.
> 
> There's not much to the 5-limit--it basically is a mere comma search,
> and that can be done expeditiously using a decent 5-limit notation.

A decent 5-limit notation?




Message: 2327

Date: Fri, 07 Dec 2001 07:56:54

Subject: Re: The grooviest linear temperaments for 7-limit music

From: genewardsmith

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> I may be able to answer that when someone explains what is flat with
> respect to what.

Paul did that. An analogy would be to use n^(4/3) cents when searching 
for 7-limit ets; this will give you a list which does not favor 
either high or low numbers n, but it has nothing to do with human 
perception, and you would use a different exponent in a different 
prime limit--n^2 cents in the 3-limit, n^(3/2) cents in the 5-limit, 
and so forth.
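
To make that concrete, here is a minimal sketch in Python (the names 
are mine, not anything Gene posted; it follows the recipe he spells 
out later in message 2346: map each prime p to round(n*log2(p)), take 
the worst 7-limit error in steps, and scale by 1200*n^(1/3), which is 
the same as measuring the error in cents and scaling by n^(4/3)):

from math import log

LOG2 = {3: log(3, 2), 5: log(5, 2), 7: log(7, 2)}
# 7-limit consonances as exponent vectors over the primes 3, 5, 7:
# 3, 5, 7, 5/3, 7/3, 7/5 (octaves are taken as pure)
CONSONANCES = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
               (-1, 1, 0), (-1, 0, 1), (0, -1, 1)]

def flat_badness(n):
    # Step errors of each prime in the n-et
    err = {p: round(n * LOG2[p]) - n * LOG2[p] for p in LOG2}
    # A compound interval's error is the matching combination of prime errors
    worst = max(abs(sum(e * err[p] for e, p in zip(vec, (3, 5, 7))))
                for vec in CONSONANCES)
    return 1200 * n ** (1.0 / 3) * worst

print(flat_badness(12))   # ~910.16, matching Gene's list in message 2332
print(flat_badness(31))   # ~580.78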

> > It doesn't commit to any particular theory about what humans are
> > like and what they should want, and I think that's a good plan.
> 
> Don't the cutoffs have to be based on a theory about what humans are
> like?

I don't think you can have much of a theory about what a bunch of 
cranky individualists might like, but I hope we could cut it off when 
the difference could no longer be perceived. Can anyone hear the 
difference between Ennealimmal and just?

> If a "flat" system was miles from anything related what humans are 
> like, would you still be interested in it?

I might, most people would not be. I've discovered though that even 
the large, "useless" ets have uses.




Message: 2328

Date: Sat, 08 Dec 2001 06:22:38

Subject: Re: More lists

From: dkeenanuqnetau

--- In tuning-math@y..., graham@m... wrote:
> It may depend on whether or not you include the zero error for 1/1
> in the mean.

I don't. Seems like a silly idea. And that wouldn't change _where_ the 
minimum occurs.

Are you able to look at the Excel spreadsheet I gave the URL for in my 
previous message in this thread?




Message: 2329

Date: Sat, 08 Dec 2001 19:44:42

Subject: Re: The grooviest linear temperaments for 7-limit music

From: genewardsmith

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> > Global relative badness means what, exactly? This makes no sense
> > to me.
> 
> It means if two ETs have around the same badness number then they are
> about as bad as each other, no matter how far apart they are on the
> spectrum.

This strikes me as subjective to the point of being meaningless.




Message: 2330

Date: Sat, 08 Dec 2001 06:37:30

Subject: Re: Diophantine approximation alternatives

From: dkeenanuqnetau

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> Dave was questioning the lack of alternatives, so let's look at the 
> standard Diophantine approximation ones.

Why not look at alternatives outside Diophantine approximation?

> In general, if f(n)>0 is such that its integral is unbounded, then 
> for d irrational numbers xi, 
> max f(n)^(-1/d) |round(n*xi) - n*xi| < c
> "almost always" has an infinite number of solutions. This isn't as 
> tight a theorem as when we use exponents, but in practice it works 
> for our problem.
...
> I don't see any advantages here, but there it is.

There probably aren't any advantages here. 

But why does badness have to be of the form
f(n)* |round(n*xi) - n*xi|
at all?

Why not 
f(n) + |round(n*xi) - n*xi| 
or
f(n) * g(|round(n*xi) - n*xi|)
?




Message: 2331

Date: Sat, 08 Dec 2001 20:04:25

Subject: Re: What's so Super about Superparticularity?

From: genewardsmith

--- In tuning-math@y..., "unidala" <JGill99@i...> wrote:

> > GWS: If they had done the same with superparticulars with square or
> > triangular or fourth power, etc. numerators it would have been more
> > to the point, if so.
> 
> JG: Or would it? Can anyone demonstrate an implicit advantage of
> utilizing superparticular scale interval ratios with large valued
> integers existing in the numerators and/or denominators of such scale
> interval ratios? Or am I missing something regarding Gene's points
> made regarding "square or triangular or fourth power" numerators?

I was talking about commas, not intervals. Commas appear as the 
ratios between the superparticulars associated to branches of the 
tree, and hence to nodes of the tree. From 3/2, we have branches going 
to 4/3 and 5/3, labeled by 9/8 and 10/9; the ratio is 81/80, which 
has a fourth power as numerator. We might define a comma function in 
this way, which maps from fractions to commas; then comma(3/2)=81/80, 
comma(4/3)=64/63, comma(5/3)=126/125, and so forth. The numerators of 
these involve various polynomial functions.

Thanks for prodding me, I think I'll code "comma".
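
A minimal sketch of what such a comma function might look like, 
assuming the mediant-based reading of the above (comma(r) = 
r^2/(left child * right child), inverted if necessary so it exceeds 
1); Gene's own code may well differ:

from fractions import Fraction

def sb_children(r):
    # Left and right children of r (> 0) in the Stern-Brocot tree, found
    # by descending from the root 1/1 while tracking the bounding
    # fractions; (1, 0) stands for infinity, kept as a tuple to avoid
    # division by zero.
    lo, hi = (0, 1), (1, 0)
    node = Fraction(lo[0] + hi[0], lo[1] + hi[1])   # mediant: starts at 1/1
    while node != r:
        if r < node:
            hi = (node.numerator, node.denominator)
        else:
            lo = (node.numerator, node.denominator)
        node = Fraction(lo[0] + hi[0], lo[1] + hi[1])
    left = Fraction(node.numerator + lo[0], node.denominator + lo[1])
    right = Fraction(node.numerator + hi[0], node.denominator + hi[1])
    return left, right

def comma(r):
    # Ratio of the two branch labels at node r, taken >= 1.
    r = Fraction(r)
    left, right = sb_children(r)
    c = (r / left) / (right / r)    # = r**2 / (left * right)
    return c if c >= 1 else 1 / c

# comma(Fraction(3, 2)) -> 81/80, comma(Fraction(4, 3)) -> 64/63,
# comma(Fraction(5, 3)) -> 126/125, as in Gene's examples.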




Message: 2332

Date: Sat, 08 Dec 2001 06:42:50

Subject: Re: The grooviest linear temperaments for 7-limit music

From: genewardsmith

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> ET list 2
> 
> Steps   7-limit
> per     RMS
> octave  error (cents)
> ---------------------
> 15       18.5
> 19       12.7
> 22        8.6
> 24       15.1
> 26       10.4
> 27        7.9
> 31        4.0
> 35        9.9
> 36        8.6
> 37        7.6
> 41        4.2

If you're going to do this, let's at least do it right and use the 
right list:

1   884.3587134
2   839.4327178
4   647.3739047
5   876.4669184
9   920.6653451
10   955.6795096
12   910.1603254
15   994.0402775
31   580.7780905
41   892.0787789
72   892.7193923
99   716.7738001
171   384.2612749
270   615.9368489
342   968.2768986
441   685.5766666
1578   989.4999106

The first point to note is that the two lists are clearly not 
intended to do the same thing. The second is that while you object to 
this characterization, your list seems to want to do our thinking for 
us more than mine; you've decided the important place to look is 
around 27. The third thing to notice is that if you want to look at a 
limited range, you always can. Suppose I look from 10 to 50 and see 
what the top 11 are, using my measure:

10  .796
12  .758
15  .828
16 1.113
19  .906
22  .898
26 1.122
27  .924
31  .484
41  .743
46 1.181

I'm afraid I like this list better than yours, but your mileage may 
vary.




Message: 2333

Date: Sat, 8 Dec 2001 21:32 +00

Subject: Re: Wedge products

From: graham@xxxxxxxxxx.xx.xx

Update!

The code at <############################################################################### *> has been updated to do 
most of the stuff I used to use matrices and Numeric for, but with wedge 
products and standard Python 1.5.2.  It's passed all the tests I've tried 
so far, still some cleaning up to do.

My wedge invariants can't be made unique and invariant in all cases, but 
they work most of the time.  I could have a method for declaring whether 
two wedgable objects are equivalent.  Also, my invariant is very 
different to Gene's.

I still don't get the process for calculating unison vectors with wedge 
products, especially in the general case.

One good thing is that the generator mapping (ignoring the period mapping) 
which I'm using as my invariant key, is simply the octave-equivalent part 
of the wedge product of the commatic unison vectors!

Example:

>>> h31 = temper.PrimeET(31, temper.primes[:4])
>>> h41 = temper.PrimeET(41, temper.primes[:4])
>>> h31^h41
{(2, 3): 15, (0, 4): 15, (1, 4): 3, (1, 2): -25, (0, 3): -2, (2, 4): 59, 
(0, 2): -7, (3, 4): 49, (1, 3): -20, (0, 1): 6}
>>> (h31^h41).invariant()
(6, -7, -2, 15, -25, -20, 3, 15, 59, 49)
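
For readers without Graham's temper module, here is a self-contained 
sketch in modern Python (patent_val and wedge are illustrative names 
of mine, not his API) that reproduces the coefficients above:

from math import log

def patent_val(n, primes=(2, 3, 5, 7, 11)):
    # The n-et's mapping of each prime: round(n * log2(p))
    return [int(round(n * log(p, 2))) for p in primes]

def wedge(v1, v2):
    # Wedge product of two vals: all 2x2 minors, keyed by basis pairs
    return {(i, j): v1[i] * v2[j] - v1[j] * v2[i]
            for i in range(len(v1)) for j in range(i + 1, len(v1))}

h31, h41 = patent_val(31), patent_val(41)
print(wedge(h31, h41))
# Same coefficients as Graham's output above, e.g. (0, 1): 6,
# (2, 4): 59, (3, 4): 49 -- only the dictionary's print order differs.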


Gene:
> First you order the basis so that a wedge product taken from two ets 
> or two unison vectors will correspond:
> 
> Yahoo groups: /tuning-math/message/1553 *

I've got mine ordered, but it looks like a different order to yours.

> Then you put the wedge product into a standard form, by
> 
> (1) Dividing through by the gcd of the coefficients, and

Okay, done that

> (2) Changing sign if need be, so that the 5-limit comma (or unison)
> 2^w[6] * 3^(-w[2])*5^w[1] where w is the wedgie, is greater than 1. 
> If it equals 1, go on to the next invariant comma, which leaves out 
> 5, and if that is 1 also to the one which leaves out 3. See
> 
> Yahoo groups: /tuning-math/message/1555 *
> 
> for the invariant commas. The result of this standardization is the 
> wedge invariant, or wedgie, which uniquely determines the temperament.

Done something like this.


The problem is with zeroes.  As it stands, the 5-limit interval 5:4 is the 
same as the 7-limit interval 5:4 as far as the wedge products are 
concerned.  But some zero elements aren't always present.  Either I can 
get rid of them, which might mean that different products have the same 
invariant, or enumerate the missing bases when I calculate the invariant.

The latter problem is the same as the one I'm trying to solve to get all 
combinations of a list of unison vectors.

Another thing would be to ignore the invariants, and add a weak comparison 
function.

As to the unison vectors, in the 7-limit I seem to be getting 4 when I 
only wanted 2, so how can I be sure they're linearly independent?


             Graham




Message: 2334

Date: Sat, 08 Dec 2001 07:09:35

Subject: Logarithmic "flatness"

From: genewardsmith

One reason why the n^(4/3) cents measure seems somehow "flat" to me, 
and evidently to Paul, is that aside from an initial bias in favor of 
small ets due to the infinite relative perfection of the 0-et, it is 
a logarithmic measure. In other words, the size of the ets grows 
roughly exponentially, so that there are about the same number from 
11 to 100 as from 101 to 1000 and from 1001 to 10000, etc. Also, the 
size of the interval around the et defined by its neighbors on the 
list is proportional to the size of the et.
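
A quick way to see this numerically is to count how many ets clear a 
fixed cutoff in each decade; this sketch repeats the flat_badness 
function from the note under message 2327 so that it runs on its own:

from math import log

LOG2 = {3: log(3, 2), 5: log(5, 2), 7: log(7, 2)}
CONSONANCES = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
               (-1, 1, 0), (-1, 0, 1), (0, -1, 1)]

def flat_badness(n):
    err = {p: round(n * LOG2[p]) - n * LOG2[p] for p in LOG2}
    worst = max(abs(sum(e * err[p] for e, p in zip(vec, (3, 5, 7))))
                for vec in CONSONANCES)
    return 1200 * n ** (1.0 / 3) * worst

for lo, hi in [(11, 100), (101, 1000), (1001, 10000)]:
    hits = [n for n in range(lo, hi + 1) if flat_badness(n) < 1000]
    print("%5d-%5d: %d ets %s" % (lo, hi, len(hits), hits))
# The first two ranges should come out with comparable counts: Gene's
# sub-1000 list gives 12, 15, 31, 41, 72, 99 and then 171, 270, 342, 441.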




Message: 2335

Date: Sat, 8 Dec 2001 21:32 +00

Subject: Re: More lists

From: graham@xxxxxxxxxx.xx.xx

Me:
> > It may depend on whether or not you include the zero error for 1/1
> > in the mean.

Dave:
> I don't. Seems like a silly idea. And that wouldn't change _where_ the 
> minimum occurs.

Yes, won't change the position.  But, looking carefully at your previous 
mail, I see you're including 1/3, 9/3 and 9/1, so that'll be it.  I remove 
the duplicates.

> Are you able to look at the Excel spreadsheet I gave the URL for in my 
> previous message in this thread?

I'll be able to look at it on Monday, when I get back to work.  I *might* 
be able to check it in Star Office first, but probably won't.


                  Graham




Message: 2336

Date: Sat, 08 Dec 2001 07:14:09

Subject: Re: The grooviest linear temperaments for 7-limit music

From: dkeenanuqnetau

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
> 
> > ET list 2
> > 
> > Steps   7-limit
> > per     RMS
> > octave  error (cents)
> > ---------------------
> > 15       18.5
> > 19       12.7
> > 22        8.6
> > 24       15.1
> > 26       10.4
> > 27        7.9
> > 31        4.0
> > 35        9.9
> > 36        8.6
> > 37        7.6
> > 41        4.2
> 
> If you're going to do this, let's at least do it right and use the 
> right list:
> 
> 1   884.3587134
> 2   839.4327178
> 4   647.3739047
> 5   876.4669184
> 9   920.6653451
> 10   955.6795096
> 12   910.1603254
> 15   994.0402775
> 31   580.7780905
> 41   892.0787789
> 72   892.7193923
> 99   716.7738001
> 171   384.2612749
> 270   615.9368489
> 342   968.2768986
> 441   685.5766666
> 1578   989.4999106

But this doesn't look like an approximate isobad. It looks like a list  
of ETs less than a certain badness. i.e. it's a top 17. Right?

We can do it that way if you like. So I'll have to give my top 17. I 
wasn't proposing that we give the badness measure (since it was meant 
to be an isobad). But I guess we could if it's a top 17. However I 
don't want people distracted by 9 significant digits of badness. 
Couldn't we normalise to a 10-point scale and only give whole numbers? 
And you need to supply the RMS error.

> The first point to note is that the two lists are clearly not 
> intended to do the same thing.

Mine is intended to pack the maximum number of ETs likely to be of 
interest to musicians, composers, music theorists etc. who are 
interested in 7-limit music, into a list of a given size. Maybe you 
need to explain what yours is intended to do. 

> The second is that while you object to this characterization, your
> list seems to want to do our thinking for us more than mine; you've
> decided the important place to look is around 27.

Not at all. It just comes out that way. I simply decided that an extra 
note per octave was worth about the same badness as an increase of 0.5 
cents in the RMS error. This comes through experience and tuning list 
discussions.

> The third thing to notice is that if you want to look at a limited
> range, you always can. Suppose I look from 10 to 50 and see what the
> top 11 are, using my measure:
> 
> 10  .796
> 12  .758
> 15  .828
> 16 1.113
> 19  .906
> 22  .898
> 26 1.122
> 27  .924
> 31  .484
> 41  .743
> 46 1.181

Sure. I can do that too.

> I'm afraid I like this list better than yours, but your mileage may
> vary.

I might like it better than mine too.  Mine's still got problems. But 
you had to arbitrarily limit it to 10<n<50 to get this list. This is 
clearly doing our thinking for us.

I thought we were talking about a single published list, not a piece 
of software that lets you enter your favourite limits.




Message: 2337

Date: Sat, 08 Dec 2001 21:40:20

Subject: Stern-Brocot commas

From: genewardsmith

There has been some discussion of what should count as a comma, and 
what doesn't. One possible definition, which we might call an SB-
comma, is that something is an SB-comma if it is in the image of the 
comma map I defined: take the ratio of the two intervals between a 
node of the Stern-Brocot tree and its two subnodes; this gives a map 
from nodes to SB-commas, or in other words from positive rationals to 
SB-commas.

Here is a list of 11-limit SB-commas for numbers with denominators 
less than 52:

9801/9800, 4375/4374, 6250/6237, 1375/1372, 441/440, 8019/8000, 
5120/5103, 243/242, 225/224, 2200/2187, 2835/2816, 1728/1715, 
126/125, 245/243, 1944/1925, 81/80, 875/864, 2079/2048, 64/63, 
405/392, 126/121, 135/128, 77/72, 27/25, 35/32, 10/9, 9/8

It looks a little cheesy, now that I look at it; I'd better check my 
program. :)




Message: 2338

Date: Sat, 08 Dec 2001 07:23:40

Subject: Re: The grooviest linear temperaments for 7-limit music

From: genewardsmith

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> But this doesn't look like an approximate isobad. It looks like a
> list of ETs less than a certain badness. i.e. it's a top 17. Right?

Right, but your list looked like a top 11 in a certain range also.

> 
> We can do it that way if you like. So I'll have to give my top 17. I
> wasn't proposing that we give the badness measure (since it was meant
> to be an isobad). 

The things on your list didn't make sense to me as an isobad, and I 
didn't know that was what it was supposed to be. Trying a top n and 
comparing makes more sense to me, but I need to pick a range.

> Mine is intended to pack the maximum number of ETs likely to be of 
> interest to musicians, composers, music theorists etc. who are 
> interested in 7-limit music, into a list of a given size. 

It needs work.

> Maybe you need to explain what yours is intended to do. 

Mine is intended to show what the relatively best 7-limit ets are, in 
a measurement which has the logarithmic flatness I describe in 
another posting.

> I might like it better than mine too.  Mine's still got problems. But
> you had to arbitrarily limit it to 10<n<50 to get this list. This is
> clearly doing our thinking for us.

And I can reduce that problem to essentially nil, by putting in a 
high cut-off and leaving it at that. You are stuck with it as an 
intrinsic feature.




Message: 2339

Date: Sat, 08 Dec 2001 21:55:49

Subject: Re: Wedge products

From: genewardsmith

--- In tuning-math@y..., graham@m... wrote:

> The code at <############################################################################### *> has been updated to do
> most of the stuff I used to use matrices and Numeric for, but with
> wedge products and standard Python 1.5.2.  It's passed all the tests
> I've tried so far, still some cleaning up to do.

What's the best version of Python for Win98, do you know? In 
particular, what is the deal with the "stackless" version?

> My wedge invariants can't be made unique and invariant in all cases,
> but they work most of the time.  I could have a method for declaring
> whether two wedgable objects are equivalent.  

You don't need to use my system; you could make the first non-zero 
coefficient in the basis ordering you use positive.

> Also, my invariant is very different to Gene's.

It should differ only in the sign or order of basis elements.

> I still don't get the process for calculating unison vectors with
> wedge products, especially in the general case.

One way to think of the general case is to get the associated matrix 
of what I call "vals", reduce by dividing out by gcds, and solve the 
resultant system of linear Diophantine equations, which set each of 
the val maps to zero.
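
Gene's route goes through linear Diophantine equations; for small 
cases, a brute-force search over bounded exponent vectors finds the 
same kernel and is trivial to write. A sketch under that substitution 
(my names, modern Python rather than 1.5.2):

from itertools import product
from math import log

def patent_val(n, primes=(2, 3, 5, 7)):
    # The n-et's val: each prime p maps to round(n * log2(p))
    return [int(round(n * log(p, 2))) for p in primes]

def unison_vectors(vals, bound=6):
    # Monzos with exponents in [-bound, bound] that every val maps to 0
    dim = len(vals[0])
    for monzo in product(range(-bound, bound + 1), repeat=dim):
        if any(monzo) and all(sum(m * v for m, v in zip(monzo, val)) == 0
                              for val in vals):
            yield monzo

# For 12&19 (meantone), the kernel includes (-4, 4, -1, 0) = 81/80 and
# (1, 2, -3, 1) = 126/125, plus products of such commas.
kernel = list(unison_vectors([patent_val(12), patent_val(19)]))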

> One good thing is that the generator mapping (ignoring the period
> mapping) which I'm using as my invariant key, is simply the
> octave-equivalent part of the wedge product of the commatic unison
> vectors!

Or of the wedge product of two ets.

> I've got mine ordered, but it looks like a different order to yours.

That's not surprising; the order is not determined by the definition 
of wedge product, and I chose mine in a way I thought made sense from 
the point of view of usability for music theory.

> Done something like this.
> 
> 
> The problem is with zeroes.  As it stands, the 5-limit interval 5:4
> is the same as the 7-limit interval 5:4 as far as the wedge products
> are concerned. 

This has me confused, because it's the same as far as I'm concerned 
too, unless you mean its vector representation.

> But some zero elements aren't always present.  Either I can get rid
> of them, which might mean that different products have the same
> invariant, or enumerate the missing bases when I calculate the
> invariant.

I don't know what is going on here.

> As to the unison vectors, in the 7-limit I seem to be getting 4 when
> I only wanted 2, so how can I be sure they're linearly independent?

They are never linearly independent. Why do they need to be?




Message: 2340

Date: Sat, 08 Dec 2001 00:30:26

Subject: Re: More lists

From: genewardsmith

--- In tuning-math@y..., graham@m... wrote:

> I need to be able to take all combinations.  So far, I can only do
> that for the 7-limit, where they're pairs.  I'll have to think about
> the general case.  It'll probably involve recursion.

It's certainly possible to start with a certain prime limit, and use 
that for the next one; I've been thinking about that from the point 
of view of 5-->7.

> I'm also worried about the speed of this search, because there are
> going to be a lot more unison vector combinations than ET pairs for
> the higher limits.

Already for the 11-limit you need to wedge three unison vectors to 
get the wedgie for a linear temperament, but only two ets. However, 
to get the wedgie for a *planar* temperament, it is two unisons vs 
three ets, and in higher limits it gets more involved yet.
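
Since Python 1.5.2 predates itertools, the general case Graham 
describes can be handled by exactly the recursion he anticipates; a 
sketch (unison_vector_list below is a placeholder, not a variable in 
his actual code):

def combinations(seq, r):
    # All r-element subsets of seq, by recursion on the first element
    if r == 0:
        return [[]]
    if len(seq) < r:
        return []
    head, rest = seq[0], seq[1:]
    return ([[head] + tail for tail in combinations(rest, r - 1)]
            + combinations(rest, r))

# e.g. for an 11-limit linear temperament, wedge together each triple:
# for triple in combinations(unison_vector_list, 3): ...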




Message: 2341

Date: Sat, 08 Dec 2001 22:00:01

Subject: Re: Stern-Brocot commas

From: genewardsmith

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> It looks a little cheesy, now that I look at it; I'd better check my
> program. :)

I think the problem is my definition; I should confine it to the 
branch of the tree coming from 3/2.




Message: 2342

Date: Sat, 08 Dec 2001 00:47:17

Subject: Re: The grooviest linear temperaments for 7-limit music

From: paulerlich

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:
> 
> > > If with all quantities positive we have g^2 c < A and c > B, then
> > > 1/c < 1/B, and so g^2 < A/B and g < sqrt(A/B). However, it probably
> > > makes more sense to use g>=1, so that if g^2 c <= A then c <= A.
> 
> > Are you saying that using g>=1 is enough to make this a closed
> > search?
> 
> All it does is put an upper limit on how far out of tune the worst
> cases can be, so we really need to bound c below or g above to get a
> finite search.

So do you still stand by this statement:

"If we bound one of them and gens^2 cents, we've bound the other; 
that's what I'd do."

(which you wrote after I said that a single cutoff point wouldn't be 
enough, that we would need a cutoff curve)?




Message: 2343

Date: Sat, 08 Dec 2001 08:21:24

Subject: Re: The grooviest linear temperaments for 7-limit music

From: dkeenanuqnetau

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
> 
> > But this doesn't look like an approximate isobad. It looks like a
> > list of ETs less than a certain badness. i.e. it's a top 17. Right?
> 
> Right, but your list looked like a top 11 in a certain range also.

It happens to also be the top 11 by the 0.5*steps + cents metric, but 
not limited to any range.

> > We can do it that way if you like. So I'll have to give my top 17.
> > I wasn't proposing that we give the badness measure (since it was
> > meant to be an isobad). 
> 
> The things on your list didn't make sense to me as an isobad,

Obviously they wouldn't, given what your isobad looked like.

> and I 
> didn't know that was what it was supposed to be.

I thought I made that pretty clear.

> Trying a top n and 
> comparing makes more sense to me,

Fine.

> but I need to pick a range.

Objectively of course. Ha ha. If you have to pick a range then your 
so-called badness metric obviously isn't really a badness metric at 
all!

> > Mine is intended to pack the maximum number of ETs likely to be of 
> > interest to musicians, composers, music theorists etc. who are 
> > interested in 7-limit music, into a list of a given size. 
> 
> It needs work.

I think I said that.

> Mine is intended to show what the relatively best 7-limit ets are,
> in a measurement which has the logarithmic flatness I describe in
> another posting.

Even if you and Paul are the only folks on the planet who find that 
interesting? In that case I think it's very misleading to call it a 
badness metric when it only gives relative badness _locally_.

> > I might like it better than mine too.  Mine's still got problems.
> > But you had to arbitrarily limit it to 10<n<50 to get this list.
> > This is clearly doing our thinking for us.
> 
> And I can reduce that problem to essentially nil, by putting in a 
> high cut-off and leaving it at that. 

How high? How will this fix the problem that folks will assume you're 
saying that 3-tET and 1547-tET are about as useful as 22-tET for 
7-limit?

> You are stuck with it as an 
> intrinsic feature.

And a damn fine feature it is too. :-) Seriously, mine was proposed 
without any great amount of research or deliberation to show that it 
is easy to find alternatives that do _much_ better than yours 
_globally_ and about the same _locally_.




Message: 2344

Date: Sat, 08 Dec 2001 00:58:28

Subject: Re: More lists

From: genewardsmith

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
> --- In tuning-math@y..., graham@m... wrote:
> > Oh yes, I forgot to say before.  Here's the difference RMS errors
> > make in the 11-limit:
> ...
> > The left hand column is the minimax ranking in terms of the RMS
> > one, and the other one is the other way round.  So they mostly
> > agree on the best ones, but disagree on the mediocre ones.
> 
> Ok. Thanks. That was a good way of showing it.
> 
> > To check my RMS optimization's working, is a 116.6722643 cent
> > generator right for Miracle in the 11-limit?  RMS error of 1.9732
> > cents.
> 
> I get 116.678 and 1.9017.

I got 116.672264296056... which checks with Graham, so that's 
progress of some kind.




Message: 2345

Date: Sat, 08 Dec 2001 09:07:02

Subject: Re: What's so Super about Superparticularity?

From: genewardsmith

--- In tuning-math@y..., J Gill <JGill99@i...> wrote:


> If the significance is one of "Farey adjacence" (where the absolute
> value of N1*D2 - D1*N2 = 1), then it seems that the sequence 3/2,
> 5/3, 7/4, 9/5 (which also possesses this characteristic), while not
> made up of "superparticular" ratios, would also possess similar
> properties...

Farey adjacence explains some of why superparticular ratios show up in 
music theory. If you take the ratios of the above sequence, you get 
(5/3)/(3/2) = 10/9, (7/4)/(5/3) = 21/20, (9/5)/(7/4) = 36/35, and 
in general T/(T-1), where T is triangular of even order--that is, the 
sum from 1 to an even number. These numbers and superparticulars like 
them appear often as scale steps in JI scales. Moreover, you get 
superparticular ratios of superparticular ratios, for instance 
(9/8)/(10/9) = 81/80, or (15/14)/(16/15) = 225/224; these are of a 
common type, having in one case the square of a triangular number, 
and in the other case a fourth power (square of a square) as 
numerator.
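
A small check of the T/(T-1) claim in Python (the names are mine; 
triangular(k) is the k-th triangular number):

from fractions import Fraction

def triangular(k):
    # k-th triangular number: 1 + 2 + ... + k
    return k * (k + 1) // 2

# Successive ratios of the Farey-adjacent sequence 3/2, 5/3, 7/4, 9/5, 11/6
seq = [Fraction(2 * k + 1, k + 1) for k in range(1, 6)]
steps = [b / a for a, b in zip(seq, seq[1:])]   # 10/9, 21/20, 36/35, 55/54
# Each step is T/(T-1) with T triangular of even order, as stated:
assert steps == [Fraction(triangular(k), triangular(k) - 1)
                 for k in (4, 6, 8, 10)]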




Message: 2346

Date: Sat, 08 Dec 2001 01:48:14

Subject: Re: The grooviest linear temperaments for 7-limit music

From: dkeenanuqnetau

Thanks Gene, for taking the time to explain this in a way that a 
mere computer scientist can understand. :-)

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
> 
> > So ... What is n? What is a 7-limit et? How does one use n^(4/3)
> > to get a list of them? How would one check to see whether the list
> > favours high or low n?
> 
> "n" is how many steps to the octave, or in other words what 2 is 
> mapped to. By a "7-limit et" I mean something which maps 7-limit 
> intervals to numbers of steps in a consistent way. Since we are 
> looking for the best, we can safely restrict these to what we get by 
> rounding n*log2(3), n*log2(5) and n*log2(7) to the nearest integer, 
> and defining the n-et as the map one gets from this.

OK so far.

> Let's call this map "h"; for the 12-et, h(2)=12, h(3)=19, h(5)=28 and
> h(7)=34; this entails that h(5/3) = h(5)-h(3) = 9, h(7/3)=15 and
> h(7/5)=6. 

Fine.

> I can now measure the relative badness of "h" by taking the 
> sum, or maximum, or rms, of the differences of |h(3)-n*log2(3)|, 
> |h(5)-n*log2(5)|, |h(7)-n*log2(7)|, |h(5/3)-n*log2(5/3)|, 
> |h(7/3)-n*log2(7/3)| and |h(7/5)-n*log2(7/5)|. 

I'd say this is just one component of badness. It's the error expressed 
as a proportion of the step size. The number of steps in the octave, n, 
has an effect on badness independent of the relative error.

> This measure of badness is flat in the sense that the density is the 
> same everywhere, so that we would be selecting about the same number 
> of ets in a range around 12 as we would in a range around 1200.

Yes. I believe this. See the two charts near the end of
Harmonic errors in equal tempered musical scales *
although it uses an error weighting that only includes the primes 
(only the "rooted" intervals), which I now find dubious.

> I don't really want this sort of "flatness", 

Hardly anyone would. Not without some additional penalty for large n, 
even if it's just a crude sudden cutoff. But _why_ don't you want this 
sort of flatness? Did you reject it on "objective" grounds? Is there 
some other sort of flatness that you _do_ want? If so, what is it? How 
many sorts of flatness are there and how did you choose between them?

> so I use the theory of Diophantine approximation to tell me that if
> I multiply this badness by the cube root of n, so that the density
> falls off at a rate of n^(-1/3), I will still get an infinite list
> of ets, but if I make it fall off faster I probably won't. 

Here's where the real leap-of-faith occurs. 

First of all, I take it that when you say you will (or won't) "get an 
infinite list of ets", you mean "when the list is limited to ETs whose 
badness does not exceed a given badness limit, greater than zero".

There are an infinite number of ways of defining badness to achieve a 
finite list with a cutoff only on badness itself. Most of these will 
produce a finite list that is of absolutely no interest to 99.99% 
of the population (of people who are interested in the topic at all).

Why do you immediately leap to the theory of Diophantine approximation 
as giving the best way to achieve a finite list?

I think a good way to achieve it is simply to add an amount k*n to the 
error in cents (absolute, not relative to step size). I suggest 
initially trying a k of about 0.5 cents per step.
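
A sketch of that additive metric (my implementation; the RMS here is 
over the six 7-limit consonances with pure octaves, and Dave's exact 
interval set and weighting may differ, so his published figures won't 
be reproduced exactly):

from math import log, sqrt

LOG2 = {3: log(3, 2), 5: log(5, 2), 7: log(7, 2)}
CONSONANCES = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
               (-1, 1, 0), (-1, 0, 1), (0, -1, 1)]

def rms_error_cents(n):
    # RMS error of the 7-limit consonances in the n-et, octaves pure
    err = {p: 1200.0 * (round(n * LOG2[p]) - n * LOG2[p]) / n for p in LOG2}
    sq = [sum(e * err[p] for e, p in zip(vec, (3, 5, 7))) ** 2
          for vec in CONSONANCES]
    return sqrt(sum(sq) / len(sq))

def additive_badness(n, k=0.5):
    # Dave's proposal: absolute error in cents plus k cents per step
    return rms_error_cents(n) + k * n

# e.g. sorted(range(5, 100), key=additive_badness)[:11] picks out a band
# of moderate-sized ets; with no range cutoff, large n are priced out by
# the k*n term rather than by an explicit bound.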

The only way to tell if this is better than something based on the 
theory of Diophantine equations is to suck it and see. Some of us have 
been on the tuning lists long enough to know what a lot of other 
people find useful or interesting, even though we don't necessarily 
find them so ourselves.

> I can use either the maximum of the above numbers, or the sum, or
> the rms, and the same conclusion holds; in fact, I can look at the
> 9-limit instead of the 7-limit and the same conclusion holds. If I
> look at the maximum, and multiply by 1200 so we are looking at units
> of n^(4/3) cents, I get the following list of ets which come out as
> less than 1000, for n going from 1 to 2000:
> 
> 1     884.3587134
> 2     839.4327178
> 4     647.3739047
> 5     876.4669184
> 9     920.6653451
> 10    955.6795096
> 12    910.1603254
> 15    994.0402775
> 31    580.7780905
> 41    892.0787789
> 72    892.7193923
> 99    716.7738001
> 171   384.2612749
> 270   615.9368489
> 342   968.2768986
> 441   685.5766666
> 1578  989.4999106
> 
> This list just keeps on going, so I cut it off at 2000. I might look
> at it, and decide that it doesn't have some important ets on it, such
> as 19,22 and 27; I decide to put those on, not really caring about
> any other range, by raising the ante to 1200; I then get the
> following additions:
> 
> 3     1154.683345
> 6     1068.957518
> 19    1087.886603
> 22    1078.033523
> 27    1108.589256
> 68    1090.046322
> 130   1182.191130
> 140   1091.565279
> 202   1143.628876
> 612   1061.222492
> 1547  1190.434242
> 
> My decision to add 19,22, and 27 leads me to add 3 and 6 at the low 
> end, and 68 and so forth at the high end. It tells me that if I'm 
> interested in 27 in the range around 31, I should also be interested 
> in 68 in the range around 72, in 140 and 202 around 171, 612 around 
> 441, and 1547 near 1578. That's the sort of "flatness" Paul was 
> talking about; it doesn't favor one range over another.

But this is nonsense. It simply isn't true that 3, 6, 612 and 1547 are 
of approximately equal interest to 19, 22 and 27. Sure you'll always 
be able to find one person who'll say they are. But ask anyone who has 
actually used 19-tET or 22-tET when they plan to try 3-tET or 
1547-tET. It's just a joke. I suspect you've been seduced by the 
beauty of the math and forgotten your actual purpose. This metric 
clearly favours both very small and very large n over middle ones.

> > But no matter what you come up with I can't see how you can get
> > past the fact that gens and cents are fundamentally incommensurable
> > quantities, so somewhere there has to be a parameter that says how
> > bad they are relative to each other.
> 
> "n" and cents are incommeasurable also, 

Yes. 

> and n^(4/3) is only right for 
> the 7 and 9 limits, and wrong for everything else, so I don't think 
> this is the issue if we adopt this point of view.
> 
> > Why not use k*gens + cents. e.g. if badness was simply gens + cents
> > and you listed everything with badness not more than 30 then you
> > don't need any additional cutoffs. You automatically eliminate
> > anything with gens > 30 or cents > 30 (actually cents > 29 because
> > gens can't go below 1).
> 
> Gens^3 cents also automatically cuts things off, but I rather like 
> the idea of keeping it "flat" in the above sense and doing the 
> cutting off deliberately, it seems more objective.

_Seems_ more objective? You mean that subjectively, to you, it seems 
more objective?

Well I'm afraid that it seems to me that this quest for an "objective" 
badness metric (with ad hoc cutoffs) is the silliest thing I've heard 
in quite a while.

If you're combining two or more incommensurable quantities into a 
single badness metric, the choice of the constant of proportionality 
between them (and the choice of whether this constant should relate 
the plain quantities or their logarithms or whatever) should be 
decided so that as many people as possible agree that it actually 
gives something like what they perceive as badness, even if it's only 
roughly so.

An isobad that passes near 3, 6, 19, 22, 612 and 1547, isn't one. The 
fact that it's based on the theory of Diophantine equations is utterly 
irrelevant.




Message: 2347

Date: Sat, 08 Dec 2001 09:20:22

Subject: Re: The grooviest linear temperaments for 7-limit music

From: genewardsmith

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
> --- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> > --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
> > 
> > > But this doesn't look like an approximate isobad. It looks like
> > > a list of ETs less than a certain badness. i.e. it's a top 17.
> > > Right?
> > 
> > Right, but your list looked like a top 11 in a certain range also.
> 
> It happens to also be the top 11 by the 0.5*steps + cents metric,
> but not limited to any range.

You could describe my top 11 in the range from 10 to 50 as the top 11 
using a measure which multiplies our badness measure by a function 
equal to 1 from 10 to 50, and 10^n otherwise, and so ends up with a 
top 11 "not limited by range". The difference is that you have blurry 
outlines to your chosen region, which seems to me to be a bad thing, 
not a good one. It allows you to imagine you have not chosen a range, 
which hardly clarifies matters, since in effect you have.

> Objectively of course. Ha ha. If you have to pick a range then your 
> so-called badness metric obviously isn't really a badness metric at 
> all!

See above; I can screw it up in an _ad hoc_ way and make it a 
screwed-up, _ad hoc_ measure also, but why should I want to?

> Even if you and Paul are the only folks on the planet who find that 
> interesting? In that case I think its very misleading to call it a 
> badness metric when it only gives relative badness _locally_.

Global relative badness means what, exactly? This makes no sense to 
me.

> How high? How will this fix the problem that folks will assume
> you're saying that 3-tET and 1547-tET are about as useful as 22-tET
> for 7-limit?

I think you would be one of the very few who looked at it that way. 
After all, this is hardly the first time such a thing has been done.




Message: 2348

Date: Sat, 08 Dec 2001 02:11:04

Subject: Re: More lists

From: dkeenanuqnetau

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> I got 116.672264296056... which checks with Graham, so that's 
> progress of some kind.

So what's wrong with this spreadsheet?

http://uq.net.au/~zzdkeena/Music/Miracle/Miracle11RMS.xls *

