Message: 5400

Date: Tue, 11 Dec 2001 08:06:23

Subject: Re: More lists

From: paulerlich

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:
> 
> > Shouldn't you weight them _twice_ if they're occurring twice as 
> > often? How can you justify equal-weighting?
> 
> I do weight them twice, more or less, depending on how you define 
> this. 3 is weighted once as a 3, and then its error is doubled, so it 
> is weighted again 4 times as much from 3 and 9 together; so 3 is 
> weighted 5 times, or 9 1.25 times, from one point of view. Then we 
> double dip with 5/3, 5/9 etc. with similar effect.

I don't see it that way. 9:1 is an interval of its own and needs to 
be weighted independently of whether any 3:1s or 9:3s are actually 
used. 5/3 and 5/9 could be seen as weighting 5 commensurately more, but 
I don't buy the "double dip" bit one bit!

These are conclusions I've reached after years of playing with 
tunings with large errors and comparing them and thinking hard about 
this problem.


Message: 5401

Date: Tue, 11 Dec 2001 12:55 +0

Subject: Re: Wedge products

From: graham@xxxxxxxxxx.xx.xx

In-Reply-To: <9v36kb+mvhf@xxxxxxx.xxx>

gene wrote:

> The 5-limit wedge product of two ets is the corresponding comma, and 
> of two commas the corresponding et. We've been doing this all along; 
> you can consider it to be a cross-product, or a matrix determinant. 
> The wedgie of a 5-limit linear temperament would reduce the cross-
> product until it was not a power (getting rid of any torsion) and 
> standardize it to be greater than one.

The thing about the 5-limit is that the complements can have the same 
dimensions. Here's where the problem arises:

>>> comma = temper.WedgableRatio(81,80)
>>> comma.octaveEquivalent()
{(2,): -1, (1,): 4}
>>> comma.octaveEquivalent().complement()
{(2,): 4, (1,): 1}

That shows the octave-equivalent part of the syntonic comma is not the 
meantone mapping. You need to take the complement. But in your system, 
which ignores this distinction, how do you know that the vector as it 
stands isn't right? You don't get any clues from the dimensions, 
because they're the same. Do you have an algorithm that would give 
(0 4 1) as the invariant of (-4 4 -1) wedged with (1 0 0)? I don't, so 
I explicitly take the complement in the code.

Me:
> > But before you said the definition of wedge products was
> > 
> > ei^ej = -ej^ei
> > 
> > nothing about keeping zero elements.

Gene:
> That's the definition, but it's an element in a vector space. You 
> can't wish away a basis element ei^ej simply because it has a 
> coefficient of zero, that isn't the way linear algebra works. A zero 
> vector is not the same as the number zero.

So how about ei^ei, can I wish that away?

Seriously, until we get to complements and listifying, it doesn't make 
any difference if those dimensions go. At least, not the way I wrote 
the code it doesn't. I can always listify the generator and ET 
mappings. The only problem would be if a temperament-defining wedgie 
didn't depend on a particular prime interval, in which case I don't 
think it would define an n-limit temperament.

> > > > If I could enumerate over all pairs, I could fix that. But that 
> > > > still leaves the general problem of all combinations of N items 
> > > > taken from a set.
> > > > I'd prefer to get rid of zero elements altogether.

> Why not simply order a list of size n choose m, and if one entry has 
> the value zero, so be it? A function which goes from combinations of 
> the first n integers, taken m at a time, to unique integers in the 
> range from 1 to n choose m might help.

Yes, it's getting the combinations of the first n integers that's the 
problem. But I'm sure I can solve it if I sit down and think about it. 
To make you happy, I'll do that.

>>> def combinations(number, input):
...     if number==1:
...         return [[x] for x in input]
...     output = []
...     for i in range(len(input)-number+1):
...         for element in combinations(number-1, input[i+1:]):
...             output.append([input[i]]+element)
...     return output

(Hopefully the indentation will be okay on that, at least in View Source)

Nonexistent entries are already the same as entries with the value 
zero, in that w[1,2] will be zero if w doesn't have an element (1,2). 
For example

>>> comma[0,]
-4
>>> comma[1,]
4
>>> comma[2,]
-1
>>> comma[3,]
0

So the 5-limit comma I used above is already a 7-limit interval. What 
I'd like to change is for w[1,2]=0 to remove the element (1,2) instead 
of assigning zero to it.

Graham
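For reference, here is a quick check of the combinations function above 
together with one way of writing the combinations-to-unique-integers 
function Gene asks for. This is only a sketch: the 0-based lexicographic 
ranking is my own choice, and math.comb needs a much more recent Python 
than the one this thread was written against.

from math import comb

def combinations(number, input):
    # Graham's recursive helper, restated with its indentation intact
    if number == 1:
        return [[x] for x in input]
    output = []
    for i in range(len(input) - number + 1):
        for element in combinations(number - 1, input[i+1:]):
            output.append([input[i]] + element)
    return output

def combination_rank(combo, n):
    # Map an increasing m-combination of range(n) to a unique index in
    # range(comb(n, m)), in lexicographic order.
    m = len(combo)
    rank, prev = 0, -1
    for i, c in enumerate(combo):
        for v in range(prev + 1, c):
            rank += comb(n - v - 1, m - i - 1)
        prev = c
    return rank

print(combinations(2, [0, 1, 2, 3]))
# [[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]]
print([combination_rank(c, 4) for c in combinations(2, [0, 1, 2, 3])])
# [0, 1, 2, 3, 4, 5]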
Message: 5402

Date: Tue, 11 Dec 2001 12:55 +0

Subject: Re: Systems of Diophantine equations

From: graham@xxxxxxxxxx.xx.xx

In-Reply-To: <9v35h4+amle@xxxxxxx.xxx>

In article <9v35h4+amle@xxxxxxx.xxx>, genewardsmith@xxxx.xxx 
(genewardsmith) wrote:

> That's a fancy new method for an old classic problem, and presumably 
> becomes interesting mostly when the number of simultaneous 
> Diophantine equations is high. Can you solve a system of linear 
> equations over the rationals in Python?

With Numeric, you can solve for floating point numbers (using a wrapper 
around the Fortran LAPACK) but there's no support for integers or 
rationals.

Graham
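For what it's worth, exact solution over the rationals is easy to sketch 
in pure Python with the fractions module, which postdates this thread. 
This is only a naive Gauss-Jordan elimination of my own, not anything 
Numeric provides, and it makes no attempt at efficiency (and raises 
StopIteration on a singular matrix):

from fractions import Fraction

def solve_rational(A, b):
    # Solve A x = b exactly over the rationals, A a square list of lists.
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])]
         for i in range(n)]
    for col in range(n):
        # find a nonzero pivot and swap it into place
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        # clear this column from every other row
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    return [row[n] for row in M]

print(solve_rational([[2, 1], [1, 3]], [3, 5]))
# [Fraction(4, 5), Fraction(7, 5)]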
Message: 5403

Date: Tue, 11 Dec 2001 12:55 +0

Subject: Re: More lists

From: graham@xxxxxxxxxx.xx.xx

In-Reply-To: <9v3e5l+102l7@xxxxxxx.xxx>

Paul:
> > > Sorry, I have to disagree. Graham is specifically considering the 
> > > harmonic entity that consists of the first N odd numbers, in a 
> > > chord.

I'm considering a set of consonant intervals with equal weighting. 9:3 
and 3:1 are the same interval. Perhaps I should improve my minimax 
algorithm so the whole debate becomes moot.

Gene:
> > That may be what Graham was doing, but it wasn't what I was doing; I 
> > seldom go beyond four parts.

Paul:
> Even if you don't, don't you think chords like
> 
> 1:3:5:9
> 1:3:7:9
> 1:3:9:11
> 10:12:15:18
> 12:14:18:21
> 18:22:24:33
> 
> which contain only 11-limit consonant intervals, would be important 
> to your music?

Yes, but so is 3:4:5:6, which involves both 2:3 and 3:4. And 
1/1:11/9:3/2, which has two neutral thirds (and so far I've not used 
11-limit temperaments in which this doesn't work), so should they be 
weighted double?

My experience so far of Miracle is that the "wolf fourth" of 4 secors 
is also important, but I don't have a rational approximation (21:16 
isn't quite right). It may be that chords of 0-2-4-6-8 secors become 
important, in which case 8:7 should be weighted three times as high as 
12:7 and twice as high as 3:2.

I'd much rather stay with the simple rule that all consonant intervals 
are weighted equally until we can come up with an improved, subjective 
weighting. For that, I'm thinking of taking Partch at his word, 
weighting more complex intervals higher. But Paul was talking about a 
Tenney metric, which would have the opposite effect. So it looks like 
we're not going to agree on that one.

Graham
Message: 5404

Date: Wed, 12 Dec 2001 02:21:53

Subject: Re: More lists

From: paulerlich

--- In tuning-math@y..., graham@m... wrote:

> Yes, but so is 3:4:5:6, which involves both 2:3 and 3:4.

But you can do that with _any_ interval.

> And 1/1:11/9:3/2, which has two neutral thirds (and so far I've not 
> used 11-limit temperaments in which this doesn't work), so should 
> they be weighted double?

Only if that were your target harmony. I thought hexads were your 
target harmony.

> My experience so far of Miracle is that the "wolf fourth" of 4 secors 
> is also important, but I don't have a rational approximation (21:16 
> isn't quite right).

What do you mean it's important?

> It may be that chords of 0-2-4-6-8 secors become important, in which 
> case 8:7 should be weighted three times as high as 12:7 and twice as 
> high as 3:2.

If that was the harmony you were targeting, sure.

> I'd much rather stay with the simple rule that all consonant 
> intervals are weighted equally until we can come up with an improved, 
> subjective weighting. For that, I'm thinking of taking Partch at his 
> word, weighting more complex intervals higher. But Paul was talking 
> about a Tenney metric, which would have the opposite effect. So it 
> looks like we're not going to agree on that one.

If you don't agree with me that you're targeting the hexad (I thought 
you had said as much at one point, when I asked you to consider running 
some lists for other saturated chords), then maybe we better go to 
minimax (of course, we'll still have a problem in cases like paultone, 
where the maximum error is fixed -- what do we do then, go to 2nd-worst 
error?).
Message: 5405

Date: Wed, 12 Dec 2001 20:20:39

Subject: Re: Badness with gentle rolloff

From: clumma

> I now understand that Gene's logarithmically flat distribution 
> is a very important starting point.

Wow -- how did that happen? One heckuva switch from the last post I can 
find in this thread. Not that I understand what any of this is about. 
Badness??

-Carl
Message: 5406

Date: Wed, 12 Dec 2001 21:04:37

Subject: Re: More lists

From: paulerlich

--- In tuning-math@y..., graham@m... wrote:

> so if you'd like to check, this should be 
> Paultone minimax:
> 
> 2/11, 106.8 cent generator

That's clearly wrong, as the 7:4 is off by 17.5 cents!

> basis:
> (0.5, 0.089035952556318909)
> 
> mapping by period and generator:
> [(2, 0), (3, 1), (5, -2), (6, -2)]
> 
> mapping by steps:
> [(12, 10), (19, 16), (28, 23), (34, 28)]
> 
> highest interval width: 3
> complexity measure: 6 (8 for smallest MOS)
> highest error: 0.014573 (17.488 cents)
> unique

I don't think it should count as unique since

7:5 =~ 10:7
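The 17.5-cent figure is easy to check from the mapping quoted above. 
This is my own arithmetic, assuming the usual reading of the (period, 
generator) columns with a 600-cent period:

import math

period = 600.0                            # half-octave period, from the basis
gen = 0.089035952556318909 * 1200         # generator in cents, from the basis
tempered_7_4 = 6*period - 2*gen - 2*1200  # prime 7 maps to (6, -2); drop two octaves
just_7_4 = 1200 * math.log2(7.0 / 4.0)
print(round(tempered_7_4 - just_7_4, 3))  # 17.488 -- the "highest error" above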
Message: 5407

Date: Wed, 12 Dec 2001 04:13:51

Subject: Re: More lists

From: dkeenanuqnetau

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> If you don't agree with me that you're targeting the hexad (I thought 
> you had said as much at one point, when I asked you to consider 
> running some lists for other saturated chords), then maybe we better 
> go to minimax (of course, we'll still have a problem in cases like 
> paultone, where the maximum error is fixed -- what do we do then, go 
> to 2nd-worst error?).

Yes. That's what I do. You still give the error as the worst one, but 
you give the optimum generator based on the worst error that actually 
_depends_ on the generator (as opposed to being fixed because it only 
depends on the period).
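A rough sketch of that procedure, applied to the paultone mapping from 
Graham's post. The interval list, the brute-force grid search and its 
step size are my own choices, not anything from the thread:

import math

PERIOD = 600.0   # half octave
# (periods, generators) for the 7-limit consonances under the mapping
# [(2, 0), (3, 1), (5, -2), (6, -2)]
CONSONANCES = {
    '3/2': (1, 1), '5/4': (1, -2), '7/4': (2, -2),
    '5/3': (2, -3), '7/5': (1, 0), '7/6': (1, -3),
}

def errors(gen):
    # signed error in cents of each consonance for a given generator
    just = lambda n, d: 1200 * math.log2(n / d)
    return {name: p * PERIOD + g * gen - just(*map(int, name.split('/')))
            for name, (p, g) in CONSONANCES.items()}

# optimise only over the intervals whose size depends on the generator
dependent = [name for name, (p, g) in CONSONANCES.items() if g != 0]
grid = [95 + i / 1000.0 for i in range(30000)]            # 95 to 125 cents
worst, best_gen = min((max(abs(errors(g)[n]) for n in dependent), g)
                      for g in grid)
print(best_gen, worst)                                 # about 109.36 cents, 12.4 cents
print(max(abs(e) for e in errors(best_gen).values()))  # still ~17.5, fixed by 7:5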
Message: 5408

Date: Wed, 12 Dec 2001 21:12:22

Subject: Re: Badness with gentle rolloff

From: paulerlich

--- In tuning-math@y..., graham@m... wrote:

> In-Reply-To: <9v741u+62fl@e...>
> Gene wrote:
> 
> > This seems to be a big improvement, though the correct power is 
> > steps^2 for the 7 or 9 limit, and steps^(5/3) for the 11-limit. It 
> > still seems to me that a rolloff is just as arbitrary as a sharp 
> > cutoff, and disguises the fact that this is what it is, so tastes 
> > will differ about whether it is a good idea.
> 
> A sharp cutoff won't be what most people want. For example, in 
> looking for an 11-limit temperament I might have thought, well, I 
> don't want more than 24 notes in the scale because then it can't be 
> mapped to two keyboard octaves. So, if I want three identical, 
> transposable hexads in that scale I need to set a complexity cutoff 
> at 21. But I'd still be very pleased if the program throws up a red 
> hot temperament with a complexity of 22, because it was only an 
> arbitrary criterion I was applying.

That's why I suggested that we place our sharp cutoffs where we find 
some big gaps -- typically right after the "capstone" temperaments.

> I suggest the flat badness be calculated first, and then shelving 
> functions applied for worst error and complexity. The advantage of a 
> sharp cutoff would be that you could store the temperaments in a 
> database, to save repetitive calculations, and get the list from a 
> single SQL statement, like
> 
> SELECT * FROM Scales WHERE complexity<25 AND minimax<10.0 ORDER BY 
> goodness
> 
> but you'd have to go to all the trouble of setting up a database.
> 
> Graham

Well, database or no, I still like the idea of using a flat badness 
measure, since it doesn't automatically have to be modified just 
because we decide to look outside our original range.
Message: 5410

Date: Wed, 12 Dec 2001 22:44:37

Subject: Re: Badness with gentle rolloff

From: paulerlich

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
> --- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> > --- In tuning-math@y..., David C Keenan <d.keenan@u...> wrote:
> > 
> > > steps^(4/3) * exp((cents/k)^r)
> > 
> > This seems to be a big improvement, though the correct power is 
> > steps^2 for the 7 or 9 limit, and steps^(5/3) for the 11-limit.
> 
> Ok. Sorry. But I find I don't really understand _log_ flat. I only 
> understand flat. When I plot steps*cents against steps I can _see_ 
> that this is flat. I expect steps*cents to be flat irrespective of 
> the odd-limit. If I change the steps axis to logarithmic it's still 
> gonna look flat, and anything with a higher power of steps is gonna 
> have a general upward trend to the right.
> 
> So please tell me again how I can tell if something is log flat, in 
> such a way that I can check it empirically in my spreadsheet. And 
> please tell me why a log-flat distribution should be of more interest 
> than a simply flat one.

My (incomplete) understanding is that flatness is flatness. It's what 
you achieve when you hit the critical exponent. The "logarithmic" 
character that we see is simply a by-product of the criticality. I look 
forward to a fuller and more accurate reply from Gene.
Message: 5412

Date: Wed, 12 Dec 2001 23:11:19

Subject: Re: Badness with gentle rolloff

From: paulerlich

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> Badness??

The "badness" of a linear temperament is a function of two components 
-- how many generators it takes to get the consonant intervals, and how 
large the deviations from JI are in the consonant intervals. Total 
"badness" is therefore some function of these two components.
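In symbols, something like the following sketch. The exponent and the 
choice of rms versus minimax error are exactly the things being debated 
in this thread, so treat the defaults as placeholders:

def badness(complexity, error_cents, exponent=4.0/3):
    # complexity: generators (or scale steps) needed to reach the consonances
    # error_cents: some summary of the deviations from JI (rms or minimax)
    # exponent: 4/3 is the figure Paul quotes for ETs in message 5416;
    #           other powers are proposed for linear temperaments
    return complexity ** exponent * error_cents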
Message: 5413

Date: Wed, 12 Dec 2001 10:47 +0

Subject: Re: More lists

From: graham@xxxxxxxxxx.xx.xx

In-Reply-To: <9v6f01+1bdh@xxxxxxx.xxx>

Paul wrote:

> If you don't agree with me that you're targeting the hexad (I thought 
> you had said as much at one point, when I asked you to consider 
> running some lists for other saturated chords), then maybe we better 
> go to minimax (of course, we'll still have a problem in cases like 
> paultone, where the maximum error is fixed -- what do we do then, go 
> to 2nd-worst error?).

The hexads are targeted by the complexity formula. But that's because 
it's the simplest such measure, not because I actually think they're 
musically useful. I'm coming to the opinion that anything over a 
7-limit tetrad is quite ugly, but some smaller 11-limit chords (and 
some chords with the four-secor wolf) are strikingly beautiful, if 
they're tuned right. So Blackjack is a good 11-limit scale although it 
doesn't contain any hexads.

I've always preferred minimax as a measure, but currently speed is the 
most important factor. The RMS optimum can be calculated much faster, 
and although I can improve the minimax algorithm I don't think it can 
be made as fast.

Scales such as Paultone can be handled by excluding all intervals that 
don't depend on the generator. But the value used for rankings still 
has to include all intervals. My program should be doing this, but I'm 
not sure if it is working correctly, so if you'd like to check, this 
should be Paultone minimax:

2/11, 106.8 cent generator

basis:
(0.5, 0.089035952556318909)

mapping by period and generator:
[(2, 0), (3, 1), (5, -2), (6, -2)]

mapping by steps:
[(12, 10), (19, 16), (28, 23), (34, 28)]

highest interval width: 3
complexity measure: 6 (8 for smallest MOS)
highest error: 0.014573 (17.488 cents)
unique

9:7 =~ 32:25 =~ 64:49
8:7 =~ 9:8
4:3 =~ 21:16
35:32 =~ 10:9

consistent with: 10, 12, 22

Hmm, why isn't 7:5 =~ 10:7 on that list?

Graham
Message: 5414

Date: Wed, 12 Dec 2001 23:34:52

Subject: Re: Badness with gentle rolloff

From: dkeenanuqnetau

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> > I now understand that Gene's logarithmically flat distribution 
> > is a very important starting point.
> 
> Wow -- how did that happen? One heckuva switch from the last 
> post I can find in this thread.

Hey. I had a coupla days to cool off. :-)

I'm only saying it's a starting point, and I probably should have 
written "I now understand that Gene's flat distribution is a very 
important starting point", since I now realise I don't really 
understand the "logarithmically-flat" business.

While I'm waiting for clarification on that, I should point out that 
once we go to the rolled-off version, the power that "steps" is raised 
to is not an independent parameter (it can be subsumed in k), so it 
doesn't really matter where we start from.

steps^p * exp((cents/k)^r)
  = ( steps * (exp((cents/k)^r))^(1/p) )^p
  = ( steps * exp((cents/k)^r / p) )^p
  = ( steps * exp((cents/(k * p^(1/r)))^r) )^p

Now raising badness to a positive power doesn't affect the ranking, so 
we can just use

steps * exp((cents/(k * p^(1/r)))^r)

and we can simply treat k * p^(1/r) as a new version of k. So we have

steps * exp((cents/k)^r)

so my old k of 2.1 cents becomes a new one of 3.7 cents, and my 
proposed badness measure becomes

steps * exp(sqrt(cents/3.7))

> Not that I understand what 
> any of this is about. Badness??

As has been done many times before, we are looking for a single figure 
that combines the error in cents with the number of notes in the tuning 
to give a single figure-of-demerit with which to rank tunings, for the 
purpose of deciding what to leave out of a published list or catalog 
for which limited space is available. One simply lowers the maximum 
badness bar until the right number of tunings get under it.

It is ultimately aimed at automatically generated linear temperaments. 
But we are using 7-limit ETs as a trial run since we have much more 
collective experience of their subjective badness to draw on.

So "steps" is the number of divisions in the octave and "cents" is the 
7-limit rms error.

I understand that Paul and Gene favour a badness metric for these that 
looks like this

steps^2 * cents * if(min<=steps<=max, 1, infinity)

I think they have agreed to set min = 1, and max will correspond to 
some locally-good ET, but I don't know how they will decide exactly 
which one. This sharp cutoff in number of steps seems entirely 
arbitrary to me and (as Graham pointed out) doesn't correspond to the 
human experience of these things. I would rather use a gentle rolloff 
that at least makes some attempt to represent the collective subjective 
experience of people on the tuning lists.
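A quick numerical check of that algebra (my own; the ET sizes and error 
figures below are made-up placeholders): the folded form is just the 
original raised to the power 1/p, which is a monotone map on positive 
numbers, so the two must rank any set of tunings identically.

import math

def badness_full(steps, cents, p=4.0/3, k=2.1, r=0.5):
    return steps**p * math.exp((cents / k)**r)

def badness_folded(steps, cents, p=4.0/3, k=2.1, r=0.5):
    # p folded into k and the overall power of p dropped, as above;
    # note k * p**(1/r) = 2.1 * (4/3)**2, about 3.7 -- Dave's "new" k
    return steps * math.exp((cents / (k * p**(1 / r)))**r)

ets = [(12, 15.0), (19, 10.5), (22, 9.5), (31, 4.0), (72, 1.6)]  # placeholders
rank = lambda f: sorted(ets, key=lambda e: f(*e))
assert rank(badness_full) == rank(badness_folded)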
Message: 5415

Date: Wed, 12 Dec 2001 10:47 +0

Subject: Re: Badness with gentle rolloff

From: graham@xxxxxxxxxx.xx.xx

In-Reply-To: <9v741u+62fl@xxxxxxx.xxx>

Gene wrote:

> This seems to be a big improvement, though the correct power is 
> steps^2 for the 7 or 9 limit, and steps^(5/3) for the 11-limit. It 
> still seems to me that a rolloff is just as arbitrary as a sharp 
> cutoff, and disguises the fact that this is what it is, so tastes 
> will differ about whether it is a good idea.

A sharp cutoff won't be what most people want. For example, in looking 
for an 11-limit temperament I might have thought, well, I don't want 
more than 24 notes in the scale because then it can't be mapped to two 
keyboard octaves. So, if I want three identical, transposable hexads 
in that scale I need to set a complexity cutoff at 21. But I'd still 
be very pleased if the program throws up a red hot temperament with a 
complexity of 22, because it was only an arbitrary criterion I was 
applying.

I suggest the flat badness be calculated first, and then shelving 
functions applied for worst error and complexity. The advantage of a 
sharp cutoff would be that you could store the temperaments in a 
database, to save repetitive calculations, and get the list from a 
single SQL statement, like

SELECT * FROM Scales WHERE complexity<25 AND minimax<10.0 ORDER BY 
goodness

but you'd have to go to all the trouble of setting up a database.

Graham
Message: 5416

Date: Wed, 12 Dec 2001 23:56:39

Subject: Re: Badness with gentle rolloff

From: paulerlich

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> But we are using 7-limit ETs as a trial run since we have much more 
> collective experience of their subjective badness to draw on.
> 
> So "steps" is the number of divisions in the octave and "cents" is 
> the 7-limit rms error.
> 
> I understand that Paul and Gene favour a badness metric for these 
> that looks like this
> 
> steps^2 * cents * if(min<=steps<=max, 1, infinity)

The exponent would be 4/3, not 2, for ETs.
Message: 5417

Date: Wed, 12 Dec 2001 13:26:14

Subject: Re: Badness with gentle rolloff

From: dkeenanuqnetau

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., David C Keenan <d.keenan@u...> wrote:
> 
> > steps^(4/3) * exp((cents/k)^r)
> 
> This seems to be a big improvement, though the correct power is 
> steps^2 for the 7 or 9 limit, and steps^(5/3) for the 11-limit.

Ok. Sorry. But I find I don't really understand _log_ flat. I only 
understand flat. When I plot steps*cents against steps I can _see_ 
that this is flat. I expect steps*cents to be flat irrespective of the 
odd-limit. If I change the steps axis to logarithmic it's still gonna 
look flat, and anything with a higher power of steps is gonna have a 
general upward trend to the right.

So please tell me again how I can tell if something is log flat, in 
such a way that I can check it empirically in my spreadsheet. And 
please tell me why a log-flat distribution should be of more interest 
than a simply flat one.
Message: 5418

Date: Wed, 12 Dec 2001 15:52 +0

Subject: Temperament calculations online

From: graham@xxxxxxxxxx.xx.xx

<temperament finding scripts *>

Early days yet, but it is working.

Graham
Message: 5419

Date: Thu, 13 Dec 2001 16:25 +0

Subject: Re: A hidden message (was: Re: Badness with gentle rolloff)

From: graham@xxxxxxxxxx.xx.xx

In-Reply-To: <9vaje4+gncf@xxxxxxx.xxx>

paulerlich wrote:

> Take a look at the two pictures in
> 
> Yahoo groups: /tuning-math/files/Paul/ *
> 
> (I didn't enforce consistency, but we're only focusing on 
> the "goodest" ones, which are consistent anyway).
> 
> In both of them, you can spot the same periodicity, occurring 60 
> times with regular frequency among the first 100,000 ETs.
> 
> Thus we see a frequency of about 1670 in the wave, agreeing closely 
> with the previous estimate?
> 
> What the heck is going on here? Riemann zeta function weirdness?

I don't know either, but I'll register an interest in finding out. 
I've thought for a while that the set of consistent ETs may have 
properties similar to the set of prime numbers. It really gets down to 
details of the distribution of rational numbers. One thing I noticed 
is that you seem to get roughly the same number of consistent ETs 
within any linear range. Is that correct?

As to these diagrams, one thing I notice is that the resolution is way 
below the number of ETs being considered. So could this be some sort 
of aliasing problem? Best way of checking is to be sure each bin 
contains the same *number* of ETs, not merely that the x axis is 
divided into near-enough equal parts.

Graham
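For the record, the equal-count binning Graham suggests is only a few 
lines. This is a sketch of my own: et_sizes stands for the list of ETs 
behind each plot, is_consistent() is a stand-in for whatever test 
produced them, and the bin count is arbitrary.

def equal_count_bins(et_sizes, n_bins):
    # bin edges chosen so each bin holds (nearly) the same number of ETs,
    # rather than spanning equal stretches of the x axis
    xs = sorted(et_sizes)
    edges = [xs[(i * len(xs)) // n_bins] for i in range(n_bins)]
    edges.append(xs[-1])
    return edges

# e.g. equal_count_bins([n for n in range(1, 100001) if is_consistent(n)], 500)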
Message: 5420

Date: Thu, 13 Dec 2001 19:46:45

Subject: A hidden message (was: Re: Badness with gentle rolloff)

From: paulerlich

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> Result: one big giant spike right at 1665-1666.

Actually, the Nyquist resolution (?) prevents me from saying whether 
it's 1659.12658227848 (the nominal peak) or something plus or minus a 
dozen or so. But clearly my visual estimate of 1664 has been 
corroborated.
Message: 5421

Date: Thu, 13 Dec 2001 16:30:51

Subject: the 75 "best" 7-limit ETs below 100,000-tET

From: paulerlich

Out of the consistent ones:

rank  ET     "badness"
1     171    0.20233
2     18355  0.25034
3     31     0.30449
4     84814  0.33406
5     4      0.33625
6     270    0.34265
7     99     0.35282
8     3125   0.35381
9     441    0.37767
10    6691   0.42354
11    72     0.44575
12    3566   0.45779
13    10     0.46883
14    5      0.47022
15    11664  0.48721
16    41     0.48793
17    12     0.49554
18    342    0.50984
19    21480  0.51987
20    68     0.52538
21    3395   0.53654
22    19     0.53668
23    15     0.53966
24    27     0.54717
25    140    0.5502
26    9      0.55191
27    6      0.55842
28    22     0.56091

Up through this point in the list, most of the results tend to be 
"small" ETs . . . hereafter, they don't.

29    1578   0.56096
30    6520   0.56344
31    14789  0.56416
32    39835  0.5856
33    612    0.58643
34    33144  0.59123
35    202    0.59316
36    130    0.59628
37    1547   0.59746
38    5144   0.5982
39    11835  0.62134
40    63334  0.63002
41    36710  0.63082
42    2954   0.63236
43    53     0.63451
44    2019   0.63526
45    3296   0.6464
46    44979  0.65123
47    8269   0.65424
48    51499  0.67526
49    301    0.68163
50    1376   0.68197
51    51670  0.68505
52    1848   0.68774
53    66459  0.68876
54    14960  0.68915
55    103    0.69097
56    16     0.69137
57    33315  0.69773
58    1749   0.70093
59    1407   0.70125
60    46     0.71008
61    37     0.71553
62    26624  0.732
63    4973   0.73284
64    1106   0.73293
65    239    0.73602
66    472    0.75857
67    30019  0.76
68    26     0.76273
69    9816   0.76717
70    62     0.76726
71    58     0.77853
72    1718   0.77947
73    15230  0.7845
74    25046  0.78996
75    58190  0.79264

What seems to be happening is that whatever effect is creating 60 
equally-spaced "waves" in the data starts to dominate the result after 
about the top 30 or so; or, after the cutoff 'badness' value exceeds 
approximately e times its "global" minimum value, the logarithmic 
character of the results begins to loosen its grip . . . is there 
anything to this observation, Gene?
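For what it's worth, the head of this list can be reproduced from first 
principles. The sketch below is my own reconstruction -- nearest-step 
("patent") mapping of the primes, rms over the six 7-odd-limit 
consonances, errors taken as fractions of an octave, exponent 4/3 -- so 
treat the exact recipe as inferred rather than stated in the post:

import math

PRIMES = (2, 3, 5, 7)
# 7-odd-limit consonances as exponent vectors over (2, 3, 5, 7)
CONSONANCES = {
    '3/2': (-1, 1, 0, 0), '5/4': (-2, 0, 1, 0), '7/4': (-2, 0, 0, 1),
    '5/3': (0, -1, 1, 0), '7/5': (0, 0, -1, 1), '7/6': (-1, -1, 0, 1),
}

def et_errors(n):
    # signed errors in cents, taking each prime to its nearest whole
    # number of steps of n-ET
    step = 1200.0 / n
    val = [round(n * math.log2(p)) for p in PRIMES]
    errs = {}
    for name, monzo in CONSONANCES.items():
        tempered = step * sum(v * m for v, m in zip(val, monzo))
        just = 1200 * sum(m * math.log2(p) for m, p in zip(monzo, PRIMES))
        errs[name] = tempered - just
    return errs

def consistent(n):
    # 7-limit consistency: every consonance lands within half a step of just
    return all(abs(e) < 600.0 / n for e in et_errors(n).values())

def badness(n):
    errs = list(et_errors(n).values())
    rms = math.sqrt(sum(e * e for e in errs) / len(errs)) / 1200  # in octaves
    return n ** (4.0 / 3) * rms

best = sorted((n for n in range(1, 1000) if consistent(n)), key=badness)
print([(n, round(badness(n), 5)) for n in best[:6]])
# the head should come out as 171, 31, 4, 270, 99, 441, with badness
# values close to those quoted above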
Message: 5422

Date: Thu, 13 Dec 2001 20:03:39

Subject: A hidden message (was: Re: Badness with gentle rolloff)

From: paulerlich

I wrote,

> But clearly my visual estimate of 1664 has been 
> corroborated.

1664 = 2^7 * 13

Pretty spooky!!

Thus,

103169 = 2^8 * 13 * 31 + 1
Message: 5423

Date: Thu, 13 Dec 2001 16:40:43

Subject: A hidden message (was: Re: Badness with gentle rolloff)

From: paulerlich

--- In tuning-math@y..., graham@m... wrote:

> I don't know either, but I'll register an interest in finding out. 
> I've thought for a while that the set of consistent ETs may have 
> properties similar to the set of prime numbers.

Well, this pattern I found shows up regardless of whether you look at 
consistent ETs only, or fail to enforce consistency at all.

> It really gets down to details of the distribution of rational 
> numbers. One thing I noticed is that you seem to get roughly the 
> same number of consistent ETs within any linear range. Is that 
> correct?

Yup -- in the 7-limit, it's always half! You know how to view this 
table:

range          #inconsistent
1-10000        5006
10001-20000    4996
20001-30000    5004
30001-40000    5002
40001-50000    4996
50001-60000    4996
60001-70000    4996
70001-80000    5002
80001-90000    5006
90001-100000   4999 (the first odd number so far)

> As to these diagrams, one thing I notice is that the resolution is 
> way below the number of ETs being considered. So could this be some 
> sort of aliasing problem?

No, because the same exact behavior showed up in the Excel chart, no 
matter how I stretched it out . . .

> Best way of checking is to be sure each bin contains the same 
> *number* of ETs, not merely that the x axis is divided into 
> near-enough equal parts.

Hmm . . . all the maxima are visible, so I'm not sure this is relevant 
anyway.
Message: 5424

Date: Thu, 13 Dec 2001 20:30:48

Subject: the 75 "best" 5-limit ETs below 2^17-tET

From: paulerlich

Assuming a "critical exponent" of 3/2 for this case (is that right?):

Out of the consistent ones:

rank  ET      "badness"
1     4296    0.20153554902775
2     78005   0.253840852090173
3     118     0.298051576414275
4     3       0.325158891374691
5     53      0.361042754847595
6     1783    0.376704792560154
7     2513    0.38157807050998
8     25164   0.410594002644579
9     19      0.410991123902702
10    12      0.417509911542676
11    612     0.436708226862349
12    730     0.440328484445999
13    34      0.458833616575689
14    171     0.461323498406156
15    20868   0.462440101460723
16    7       0.479263869467813
17    4       0.517680428544775
18    441     0.525786933473794
19    1171    0.54066707734392
20    8592    0.570028613470703
21    65      0.580261609859836
22    52841   0.584468600555837
23    73709   0.592105848504379
24    6809    0.613067695688349
25    15      0.644650341848039
26    5       0.654939089766412
27    31      0.659243117645396
28    289     0.666113665527379
29    22      0.713295533690924
30    1342    0.734143972117584
31    16572   0.736198397866562
32    323     0.744599492497238
33    559     0.763541910323762
34    152     0.785452598431966
35    9       0.804050483021927
36    29460   0.806162085936717
37    98873   0.808456458619207
38    1053    0.816063953343609
39    10      0.831348880236421
40    27677   0.840139252565266
41    236     0.843017163303497
42    6079    0.854618300478436
43    87      0.855517482964681
44    1901    0.875919286932322
45    8       0.885030392786763
46    3684    0.886931822414785
47    48545   0.889578653724097
48    11105   0.89024911748373
49    84      0.908733078006219
50    46      0.91773712251282
51    6       0.919688228216577
52    99      0.929380204600093
53    205     0.94016876679068
54    103169  0.941197892471069
55    57137   0.942202383987665
56    12276   0.953572058507306
57    31973   0.95740445574325
58    494     0.959416685644995
59    270     0.962515005704479
60    23381   0.982018213968414
61    5467    0.999256787729657
62    2954    1.01217901495476
63    130846  1.01554445408754
64    16      1.02089438881021
65    106     1.02118312100403
66    3125    1.02398156287357
67    7980    1.03541593154556
68    12888   1.04720943128646
69    46032   1.06067411398872
70    3566    1.06548205329903
71    5026    1.07926576483874
72    41      1.08648217274233
73    72      1.09019587056164
74    2395    1.12961831514844
75    82301   1.14050108729716