Tuning-Math Digests messages 9425 - 9449

This is an opt-in archive. We would like to hear from you if you want your posts included. For the contact address, see About this archive. All posts are copyright (c).


Message: 9425

Date: Thu, 22 Jan 2004 20:39:13

Subject: Re: 114 7-limit temperaments

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
> >> >In the 3-limit, there's only one kind of regular TOP temperament:
> >> >equal TOP temperament. For any instance of it, the complexity can
> >> >be assessed by either
> >> >
> >> >() Measuring the Tenney harmonic distance of the commatic unison
> >> >vector
> >> >
> >> >5-equal: log2(256*243) = 15.925, log3(256*243) = 10.047
> >> >12-equal: log2(531441*524288) = 38.02, log3(531441*524288) = 23.988
> >> >
> >> >() Calculating the number of notes per pure octave or 'tritave':
> >> >
> >> >5-equal: TOP octave = 1194.3 -> 5.0237 notes per pure octave;
> >> >.........TOP tritave = 1910.9 -> 7.9624 notes per pure tritave.
> >> >12-equal: TOP octave = 1200.6 -> 11.994 notes per pure octave;
> >> >.........TOP tritave = 1901 -> 19.01 notes per pure tritave.
> >> >
> >> >The latter results are precisely the former divided by 2: in
> >> >particular, the base-2 Tenney harmonic distance gives 2 times the
> >> >number of notes per tritave, and the base-3 Tenney harmonic
> >> >distance gives 2 times the number of notes per octave. A funny
> >> >'switch' but agreement (up to a factor of exactly 2) nonetheless.
> >> >In some way, both of these methods of course have to correspond
> >> >to the same mathematical formula . . .
> >> 
> >> Ok, great!
> >> 
> >> >In the 5-limit, there are both 'linear' and equal TOP
> >> >temperaments. 
> >> >For the 'linear' case, we can use the first method above (Tenney
> >> >harmonic distance) to calculate complexity.
> >> 
> >> Did you repeat the above comparison for the two methods in the
> >> 5-limit?
> >
> >How can you? Linear temperaments and equal temperaments are
> >different entities in the 5-limit.
> 
> For linear temperaments can't you use both the map-based and
> comma based approach, and see if the factor of 2 holds?

What's the map-based approach, explicitly?
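
For concreteness, the 3-limit cross-check above can be scripted. This is a sketch, not from the thread: it assumes the standard codimension-1 TOP result that the optimal Tenney-weighted error of an equal temperament tempering out the comma n/d is |log2(n/d)| / log2(n*d), with each prime detuned by that fraction in the direction that nulls the comma.

```python
import math

L3 = math.log2(3)  # size of the 'tritave' in octaves

def top_et_3limit(a, b):
    """TOP octave and tritave (in cents) of the 3-limit equal
    temperament tempering out the comma 2^a * 3^b, via the assumed
    codimension-1 formula: weighted error = |log2(n/d)| / log2(n*d)."""
    eps = (a + b * L3) / (abs(a) + abs(b) * L3)  # signed TOP error
    # each prime is detuned by |eps|, in the direction that nulls the comma
    octave = 1200 * (1 - (eps if a > 0 else -eps))
    tritave = 1200 * L3 * (1 - (eps if b > 0 else -eps))
    return octave, tritave

for name, (a, b), n in [("5-equal", (8, -5), 5), ("12-equal", (-19, 12), 12)]:
    octave, tritave = top_et_3limit(a, b)
    step = octave / n                 # one step of the temperament
    thd = abs(a) + abs(b) * L3        # base-2 Tenney harmonic distance
    print(name, round(octave, 1), round(tritave, 1))
    print("  base-2 THD", round(thd, 3),
          "vs 2 * notes per pure tritave", round(2 * 1200 * L3 / step, 3))
    print("  base-3 THD", round(thd / L3, 3),
          "vs 2 * notes per pure octave", round(2 * 1200 / step, 3))
```

Running this reproduces Paul's figures (1194.3/1910.9 for 5-equal, 1200.6/1901 for 12-equal) and the 'switched' factor-of-2 agreement.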



________________________________________________________________________
________________________________________________________________________




------------------------------------------------------------------------
Yahoo! Groups Links

To visit your group on the web, go to:
 Yahoo groups: /tuning-math/ *

To unsubscribe from this group, send an email to:
 tuning-math-unsubscribe@xxxxxxxxxxx.xxx

Your use of Yahoo! Groups is subject to:
 Yahoo! Terms of Service *




Message: 9428

Date: Sat, 24 Jan 2004 02:47:18

Subject: Re: 114 7-limit temperaments

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Gene Ward Smith" <gwsmith@s...> 
wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> 
wrote:
> 
> > Gene seems to be implying
> > that the norm of the wedgie gives this, but I'd love to see him
> > (when he has time) show how the cross-checking for the 3- and
> > 5-limit cases works out.
> 
> I'd need to know what you mean by cross-checking first.

The cross-checking that I showed in the 3-limit case (except I was 
off by a factor of 2). In each limit and dimension, the complexity 
measure should arise from a single general formula -- ||Wedgie||, I 
suppose, but with a full elaboration for the Grassmann-unaware -- 
and our paper should show how this reduces,

in the dimension-1 case, to the number of notes per log(frequency) 
unit (assume we will also explain Fokker determinants), and

in the codimension-1 case, to the length (scaled with a factor of 
1/2 or however it works out) of the comma = the width of 
the 'periodicity slice'.
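
A small sanity check on the codimension-1 claim (a sketch; in the 5-limit a 'linear' temperament is codimension 1): wedging two equal-temperament vals gives the wedgie, and the same three integers, reordered with a sign flip, are the monzo of the comma, which is presumably why ||Wedgie|| reduces to a (rescaled) length of the comma there.

```python
def wedge(v, w):
    """Exterior product of two 3-component vals: the 5-limit wedgie."""
    return (v[0]*w[1] - v[1]*w[0],   # coefficient on e2^e3
            v[0]*w[2] - v[2]*w[0],   # coefficient on e2^e5
            v[1]*w[2] - v[2]*w[1])   # coefficient on e3^e5

v12 = (12, 19, 28)           # patent val of 12-equal
v19 = (19, 30, 44)           # patent val of 19-equal
w = wedge(v12, v19)          # (-1, -4, -4): meantone, up to overall sign
comma = (w[2], -w[1], w[0])  # complement: (-4, 4, -1), the monzo of 81/80
```

Both vals annihilate the recovered comma, so in this case the norm of the wedgie and the length of the comma measure the same object.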







Message: 9429

Date: Sun, 25 Jan 2004 05:04:45

Subject: top complexity

From: Paul Erlich

Orthogonalization.

since it doesn't matter which comma basis you choose, you can always 
choose a basis where the commas each involve n-1 primes (this is 
probably one of the matrix reduction or decomposition methods MATLAB 
is happy to do). then it's trivial, up to torsion, to express the 
complexity in terms of the complexities of the commas, since the 
relevant {length, area, volume . . .} measure will just be that of a 
rectangular solid . . . or do you have to iteratively cascade down 
the dimensions (brain foggy . . .)?




Message: 9431

Date: Sun, 25 Jan 2004 05:38:15

Subject: Re: top complexity

From: Paul Erlich

The orthogonalized basis for pajara would be {64:63, 50:49}. The 
complexity should be the product of the complexities of these two 
commas, with the suitable "dimension boost" factors = log of the 
remaining prime. Yup?
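
A quick check (a sketch; the `monzo` helper is just exact factorization over the 7-limit primes) that this basis has the "each comma involves n-1 primes" property from the previous message: each of the two commas omits exactly one prime.

```python
from fractions import Fraction

def monzo(q, primes=(2, 3, 5, 7)):
    """Prime-exponent vector of a ratio over the given primes."""
    q = Fraction(q)
    exps = []
    for p in primes:
        e = 0
        while q.numerator % p == 0:    # divide out p from the numerator
            q /= p
            e += 1
        while q.denominator % p == 0:  # divide out p from the denominator
            q *= p
            e -= 1
        exps.append(e)
    assert q == 1, "not a ratio over the given primes"
    return exps

print(monzo(Fraction(64, 63)))  # [6, -2, 0, -1]  -- prime 5 absent
print(monzo(Fraction(50, 49)))  # [1, 0, 2, -2]   -- prime 3 absent
```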







Message: 9432

Date: Sun, 25 Jan 2004 22:43:03

Subject: rank complexity explanation updated

From: Carl Lumma

When a listener hears a melody in a fixed scale, we assume she *

-Carl







Message: 9433

Date: Tue, 27 Jan 2004 02:33:30

Subject: Re: 114 7-limit temperaments

From: Dave Keenan

--- In tuning-math@xxxxxxxxxxx.xxxx "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:
> > --- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
> > > This list is attractive, but Meantone, Magic, Pajara, maybe
> > > Injera to name a few are too low for my taste, if I'm reading
> > > these errors right (they're weighted here, I take it).
> > 
> > I think log-flat badness has outlived its popularity :)
> 
> Not with me. However, an alternative which isn't simply ad-hoc
> randomness would be nice.

Thanks for the list Gene.

Gene or Paul,

Can one of you easily plot these 7-limit temperaments on an error vs.
complexity graph (log-log or whatever seemed best with 5-limit) so we
can all think about what our subjective badness contours might look like?

Please label the points with the numbers-plus-names Gene gave them in
his list.




Message: 9436

Date: Tue, 27 Jan 2004 13:09:49

Subject: Re: rank complexity explanation updated

From: Carl Lumma

>> When a listener hears a melody in a fixed scale, we assume she *
>
>Of course, you are talking about "external" intervals here, right?
>If you take ALL the intervals in a septachord, you get an interval
>vector that totals up to 21. (Hexachords, 15, Pentachords, 10) It
>would be cool if someone could tie "interval-vectors" (The kind
>Jon Wild has compiled- hey that rhymes!) into the main discussion. 
>I'm still trying to find a good use for them! -And trying to find
>a rule for Z-relations...

Hi Paul,

I'm afraid I don't know what an "external" interval is.  Here's
the interval matrix of the diatonic scale in 12-equal, as given
by Scala...

100.0 : 2  4  5  7  9  11 12
200.0 : 2  3  5  7  9  10 12
400.0 : 1  3  5  7  8  10 12
500.0 : 2  4  6  7  9  11 12
700.0 : 2  4  5  7  9  10 12
900.0 : 2  3  5  7  8  10 12
1100.0: 1  3  5  6  8  10 12

The ruler is...

0...1...2...3...4...5...6...7...8...9...10..11..12

The list of adjacent marks on the ruler:

1

The rank complexity:

0

As for the things that Jon Wild compiled, I don't have a clue
what they are... which is akin to saying I don't know of any
use for them.

-Carl
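
One way to reproduce those numbers (a sketch; "rank complexity 0" is read here as "the ruler has every mark, so there is a single adjacent-gap size and nothing beyond it" -- that reading is an assumption, not Carl's definition):

```python
def interval_matrix(pcs, period=12):
    """Interval matrix of an octave-repeating scale: row i lists the
    intervals from degree i up to every later degree, octave last."""
    n = len(pcs)
    return [[(pcs[(i + j) % n] - pcs[i]) % period or period
             for j in range(1, n + 1)]
            for i in range(n)]

diatonic = [0, 2, 4, 5, 7, 9, 11]
m = interval_matrix(diatonic)   # m[0] == [2, 4, 5, 7, 9, 11, 12], etc.

# the 'ruler': every interval occurring anywhere in the matrix, plus 0
marks = sorted({0} | {iv for row in m for iv in row})
gaps = sorted({b - a for a, b in zip(marks, marks[1:])})
print(marks)   # all thirteen marks 0..12 are present
print(gaps)    # [1]: one adjacent-gap size, matching Carl's figures
```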




Message: 9437

Date: Tue, 27 Jan 2004 04:48:37

Subject: Re: 114 7-limit temperaments

From: Dave Keenan

--- In tuning-math@xxxxxxxxxxx.xxxx Herman Miller <hmiller@I...> wrote:
> On Wed, 21 Jan 2004 09:08:14 -0000, "Gene Ward Smith" <gwsmith@s...>
> wrote:
> 
> >Number 82
> >
> >[6, -2, -2, -17, -20, 1] [[2, 2, 5, 6], [0, 3, -1, -1]]
> >TOP tuning [1203.400986, 1896.025764, 2777.627538, 3379.328030]
> >TOP generators [601.7004928, 230.8749260]
> >bad: 79.825592 comp: 4.619353 err: 3.740932

...

> When I plug 10 and 16 into the temperament finder, this is what I
> end up with.
> 
> 5/13, 229.4 cent generator
> 
> basis:
> (0.5, 0.191135896755)
> 
> mapping by period and generator:
> [(2, 0), (2, 3), (5, -1), (6, -1)]
> 
> mapping by steps:
> [(16, 10), (25, 16), (37, 23), (45, 28)]
> 
> highest interval width: 4
> complexity measure: 8  (10 for smallest MOS)
> highest error: 0.014573  (17.488 cents)

This comparison of different outputs for the same temperament shows up
the need to correctly normalise the new weighted error and complexity
figures so they actually have units we can relate to, i.e. cents for
the error and gens per interval for the complexity.

This should be simple to do.

I think the correct normalisation of a weighted norm is the one where,
if every individual value happened to be X, then the norm would also
be X, irrespective of the weights.

e.g. if the individual errors are E1, E2, ... En, and the respective
weights are W1, W2, ... Wn (all positive), I think the p-norm should
not be

[(|W1E1|**p + |W2E2|**p + ... + |WnEn|**p)/n]**(1/p) 

but instead

[(|W1E1|**p + |W2E2|**p + ... + |WnEn|**p)/(W1**p + W2**p + ... +
Wn**p)]**(1/p) 

i.e. n is replaced by (W1**p + W2**p + ... + Wn**p)

However it bothers me slightly that for minimax (p -> oo), this is
equivalent to 

Max(|W1E1|, |W2E2|, ... |WnEn|)/Max(W1, W2, ... Wn)

It seems like I'd rather have

Max(|W1E1|, |W2E2|, ... |WnEn|)/Mean(W1, W2, ... Wn)

but I guess that would be inconsistent.
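
The proposed normalisation in code (a sketch of my reading of the formula above, over a finite list of errors):

```python
def weighted_pnorm(errors, weights, p):
    """Weighted p-norm normalised so that if every error equals X,
    the result is X regardless of the weights."""
    num = sum(abs(w * e) ** p for e, w in zip(errors, weights))
    den = sum(abs(w) ** p for w in weights)
    return (num / den) ** (1.0 / p)

# invariance check: if all errors are equal, the norm is that value
print(weighted_pnorm([3, 3, 3], [1, 2, 5], 2))   # 3.0
# and for large p the result approaches Max(|WiEi|) / Max(Wi), as noted
```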




Message: 9438

Date: Tue, 27 Jan 2004 13:13:25

Subject: Re: Graef article on rationalization of scales

From: Carl Lumma

>Does anyone know why Barlow changed Euler's (p-1) weighting to
>2(p-1+1/(2p))?
>
>Graef gives examples of "rationalizing" 12-equal and 1/4-comma 
>meantone in the 5-limit. Rationalizing means moving to a nearby value 
>so that a certain badness figure is minimized, where we look at the 
>entire matrix of intervals and work locally (interval pair by 
>interval pair) not globally. Adding the last condition makes the 
>problem much more complicated, and I don't see the point in it. He 
>got the duodene by rationalizing equal temperament, and 
>syndie2=fogliano1 by rationalizing 1/4-comma.

This raises an interesting question.  What is our approved method
for finding Fokker blocks for an arbitrary irrational scale?
Such a method would surely make Graf's look silly.

-Carl




Message: 9442

Date: Tue, 27 Jan 2004 22:55:52

Subject: Re: 114 7-limit temperaments

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> 
wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Gene Ward Smith" <gwsmith@s...>
> wrote:
> > --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> 
wrote:
> > > --- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> 
wrote:
> > > > This list is attractive, but Meantone, Magic, Pajara, maybe
> > > > Injera to name a few are too low for my taste, if I'm reading
> > > > these errors right (they're weighted here, I take it).
> > > 
> > > I think log-flat badness has outlived its popularity :)
> > 
> > Not with me. However, an alternative which isn't simply ad-hoc
> > randomness would be nice.
> 
> Thanks for the list Gene.
> 
> Gene or Paul,
> 
> Can one of you easily plot these 7-limit temperaments on an error
> vs. complexity graph (log-log or whatever seemed best with 5-limit)
> so we can all think about what our subjective badness contours might
> look like?

I could do that, but as this was done with a log-flat badness cutoff, 
there will be a huge gaping hole in the graph. That's why I'm trying 
to figure out the whole deal for myself, but no one's helping.




Message: 9443

Date: Tue, 27 Jan 2004 23:09:00

Subject: Re: Graef article on rationalization of scales

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
> >Does anyone know why Barlow changed Euler's (p-1) weighting to
> >2(p-1+1/(2p))?
> >
> >Graef gives examples of "rationalizing" 12-equal and 1/4-comma
> >meantone in the 5-limit. Rationalizing means moving to a nearby
> >value so that a certain badness figure is minimized, where we look
> >at the entire matrix of intervals and work locally (interval pair by
> >interval pair) not globally. Adding the last condition makes the
> >problem much more complicated, and I don't see the point in it. He
> >got the duodene by rationalizing equal temperament, and
> >syndie2=fogliano1 by rationalizing 1/4-comma.
> 
> This raises an interesting question.  What is our approved method
> for finding Fokker blocks for an arbitrary irrational scale?
> Such a method would surely make Graef's look silly.

All such methods are silly, but I prefer the hexagonal (rhombic 
dodecahedral, etc.) or Kees blocks that result from the min-"odd-
limit" criterion. But the whole idea of rationalizing a tempered 
scale is completely backwards and misses the point in a big way.




Message: 9444

Date: Tue, 27 Jan 2004 22:59:31

Subject: Re: 114 7-limit temperaments

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> 
wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx Herman Miller <hmiller@I...> 
wrote:
> > On Wed, 21 Jan 2004 09:08:14 -0000, "Gene Ward Smith" 
<gwsmith@s...>
> > wrote:
> > 
> > >Number 82
> > >
> > >[6, -2, -2, -17, -20, 1] [[2, 2, 5, 6], [0, 3, -1, -1]]
> > >TOP tuning [1203.400986, 1896.025764, 2777.627538, 3379.328030]
> > >TOP generators [601.7004928, 230.8749260]
> > >bad: 79.825592 comp: 4.619353 err: 3.740932
> 
> ...
> 
> > When I plug 10 and 16 into the temperament finder, this is what I
> > end up with.
> > 
> > 5/13, 229.4 cent generator
> > 
> > basis:
> > (0.5, 0.191135896755)
> > 
> > mapping by period and generator:
> > [(2, 0), (2, 3), (5, -1), (6, -1)]
> > 
> > mapping by steps:
> > [(16, 10), (25, 16), (37, 23), (45, 28)]
> > 
> > highest interval width: 4
> > complexity measure: 8  (10 for smallest MOS)
> > highest error: 0.014573  (17.488 cents)
> 
> This comparison of different outputs for the same temperament shows
> up the need to correctly normalise the new weighted error and
> complexity figures so they actually have units we can relate to,
> i.e. cents for the error and gens per interval for the complexity.
> 
> This should be simple to do.
> 
> I think the correct normalisation of a weighted norm is the one
> where, if every individual value happened to be X, then the norm
> would also be X, irrespective of the weights.
> 
> e.g. if the individual errors are E1, E2, ... En,

You realize that there are an infinite number of errors in the TOP 
case.




Message: 9445

Date: Tue, 27 Jan 2004 15:53:46

Subject: Re: rank complexity explanation updated

From: Carl Lumma

>> Here's the interval matrix of the diatonic scale in
>> 12-equal, as given by Scala...
>> 
>> 100.0 : 2  4  5  7  9  11 12
>> 200.0 : 2  3  5  7  9  10 12
>> 400.0 : 1  3  5  7  8  10 12
>> 500.0 : 2  4  6  7  9  11 12
>> 700.0 : 2  4  5  7  9  10 12
>> 900.0 : 2  3  5  7  8  10 12
>> 1100.0: 1  3  5  6  8  10 12
//
>Interesting. What I meant was really "adjacent, outer" intervals:
>This row:
>
>100.0 : 2  4  5  7  9  11 12 has a vector count of
>(2,5,0,0,0,0) "outer" intervals.

I'm lost.  There are 2, 5 and 0 of what?

-Carl




Message: 9447

Date: Tue, 27 Jan 2004 16:11:22

Subject: Re: 114 7-limit temperaments

From: Carl Lumma

>>Can one of you easily plot these 7-limit temperaments on an error
>>vs. complexity graph (log log or whatever seemed best with 5-limit)
>>so we can all think about what our subjective badness contours might
>>look like.
>
>I could do that, but as this was done with a log-flat badness cutoff, 
>there will be a huge gaping hole in the graph. That's why I'm trying 
>to figure out the whole deal for myself, but no one's helping.

Since I usually like the way you figure things out, I'll do whatever
I can to help, which may not be much.  If there are any particular
msg. #s associated with this, I'll reread them.  I didn't follow your
orthogonalization posts at all. :(

Part of the problem is these contours represent musical values, so
they're ultimately a matter of opinion.  Log-flat badness has some
nice things going for it, I suppose.  Back when I was coding it I
was asking things like, 'every time I double the number of commas
I search, how much of my top-10 list ought to change?'... without
much closure, I might add.

-Carl




Message: 9448

Date: Tue, 27 Jan 2004 18:27:12

Subject: Re: rank complexity explanation updated

From: Carl Lumma

>> >> When a listener hears a melody in a fixed scale, we assume she *
>> >
>> Hi Paul,
>> 
>> I'm afraid I don't know what an "external" interval is.  Here's
>> the interval matrix of the diatonic scale in 12-equal, as given
>> by Scala...
>> 
>> 100.0 : 2  4  5  7  9  11 12
>> 200.0 : 2  3  5  7  9  10 12
>> 400.0 : 1  3  5  7  8  10 12
>> 500.0 : 2  4  6  7  9  11 12
>> 700.0 : 2  4  5  7  9  10 12
>> 900.0 : 2  3  5  7  8  10 12
>> 1100.0: 1  3  5  6  8  10 12
>
>Q: Shouldn't the first row be 000.0?

It is quite safe to ignore the values before the colon, as they
are merely an artifact of Scala's output.

>Another point,
>
>One can obtain "my" interval vector from "your" interval matrix
>by tallying all the intervals from 1 to 6 and ignoring 7 to 12.
>You subsequently obtain (2,5,4,3,6,1)

Sorry, but how does tallying numbers in the above matrix lead to
(2,5,4,3,6,1)?

-Carl
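
One plausible reconciliation (a sketch, assuming the standard set-theoretic interval vector is meant): tallying values 1-6 straight out of the matrix actually gives (2,5,4,3,6,2), because the tritone is its own inverse and so shows up once in each of two rows per pair; the interval vector counts every unordered pair of notes exactly once, which halves the tritone entry.

```python
from itertools import combinations

def interval_class_vector(pcs, period=12):
    """Pitch-class-set interval vector: counts of interval classes
    1..period//2, one count per unordered pair of notes."""
    vec = [0] * (period // 2)
    for a, b in combinations(pcs, 2):
        d = (b - a) % period
        vec[min(d, period - d) - 1] += 1   # fold 7..11 down onto 5..1
    return vec

print(interval_class_vector([0, 2, 4, 5, 7, 9, 11]))  # [2, 5, 4, 3, 6, 1]
```

The entries sum to 21, matching the "totals up to 21" remark for septachords earlier in the thread.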




Message: 9449

Date: Tue, 27 Jan 2004 18:29:50

Subject: Re: Graef article on rationalization of scales

From: Carl Lumma

>> This raises an interesting question.  What is our approved method
>> for finding Fokker blocks for an arbitrary irrational scale?
>> Such a method would surely make Graf's look silly.
>
>All such methods are silly,

Perhaps you mean Graef's idea of people wanting "just" versions of
arbitrary scales is silly.  That's for sure.

Is a method which could show when there is (and isn't) a reasonable
Fokker-block interp. of, say, scales taken from field measurements
silly?  I think not.

>but I prefer the hexagonal (rhombic dodecahedral, etc.) or Kees
>blocks that result from the min-"odd-limit" criterion. But the
>whole idea of rationalizing a tempered scale is completely
>backwards and misses the point in a big way.

I think you missed my point.

-Carl

