Tuning-Math Digests messages 9700 - 9724






Message: 9700

Date: Mon, 02 Feb 2004 03:03:02

Subject: Re: Back to the 5-limit cutoff (was: 60 for Dave)

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Gene Ward Smith" <gwsmith@s...> 
wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> 
> wrote:
> 
> > It'll generally be different people, though. For a given
> > individual, very low error doesn't make complexity any easier to
> > tolerate than merely tolerable error. Nevertheless, I think we
> > should use something close to a straight line, slightly convex or
> > concave to best fit a "moat".
> 
> Recall that any goofy, ad-hoc weirdness may need to be both
> explained and justified.

What did you have in mind?




Message: 9701

Date: Mon, 02 Feb 2004 05:50:33

Subject: Re: Back to the 5-limit cutoff

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
> >> But Yes, true.  Increasing my tolerance for complexity
> >> simultaneously increases my tolerance for error, since this is
> >> Max().
> >
> >I have no idea why you say that. However, when I said "more of
> >one", I didn't mean "more tolerance for one", I simply meant
> >"higher values of one".
> 
> If I have a certain expectation of max error and a separate
> expectation of max complexity, but I can't measure them directly
> and have to use Dave's formula, I wind up with more of whatever I
> happened to expect less of.

More of whatever you happened to expect less of? What do you mean? 
Can you explain with an example?

> Dave's function is thus a badness
> function, since it represents both error and complexity.

A badness function has to take error and complexity as inputs, and 
give a number as output.




Message: 9702

Date: Mon, 02 Feb 2004 09:26:03

Subject: Re: Back to the 5-limit cutoff

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
> >> >> >I'm arguing that, along this particular line of thinking, 
> >> >> >complexity does one thing to music, and error another, but
> >> >> >there's no urgent reason more of one should limit your
> >> >> >tolerance for the other . . .
> //
> >> 
> >> If I'm bounding a list of temperaments with Dave's formula only,
> >> and I desire that error not exceed 10 cents rms and complexity
> >> not exceed 20 notes (and a and b somehow put cents and notes into
> >> the same units), what bound on Dave's formula should I use?
> >
> >You'd pick a and b such that max(cents/10,complexity/20) < 1.
> 
> Ok, I walked into that one by giving fixed bounds on what I wanted.
> But re. your original suggestion (above), for any fixed version of
> the formula, more of one *increases* my tolerance for the other.

Nonsense.
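
For concreteness, here is a minimal Python sketch of the rectangular
max() cutoff being argued about, using the 10-cent and 20-note bounds
from Carl's example; the function name and the example values are
illustrative only, not from the thread.

def passes_rectangular_cutoff(error_cents, complexity_notes,
                              max_error=10.0, max_complexity=20.0):
    # True when max(error/10, complexity/20) < 1, i.e. when both the
    # error bound and the complexity bound are satisfied
    return max(error_cents / max_error,
               complexity_notes / max_complexity) < 1

print(passes_rectangular_cutoff(4.0, 12.0))   # True: both bounds met
print(passes_rectangular_cutoff(11.0, 12.0))  # False: error bound exceeded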




Message: 9703

Date: Mon, 02 Feb 2004 03:05:36

Subject: Re: 114 7-limit temperaments

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Gene Ward Smith" <gwsmith@s...> 
wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx Graham Breed <graham@m...> 
wrote:
> > You're using temperaments to construct scales, aren't you?
> 
> Not me, for the most part. I think the non-keyboard composer is
> simply being ignored in these discussions, and I think I'll stand
> up for him.

And the new-keyboard composer!

> > Oh, yes, I think the 9-limit calculation can be done by giving 3
> > a weight of a half.  That places 9 on an equal footing with 5 and
> > 7, and I think it works better than vaguely talking about the
> > number of consonances.
> 
> You also get a lattice which is two symmetrical lattices glued 
> together that way.

Really? Can you elaborate? Do you still get an infinite number of 
images of each ratio, times 9/3^2, 3^2/9, 9^2/3^4, etc.?




Message: 9704

Date: Mon, 02 Feb 2004 05:03:26

Subject: Re: Back to the 5-limit cutoff

From: Dave Keenan

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> 
> wrote:
> > My favourite cutoff for 5-limit temperaments is now.
> > 
> > (error/8.13)^2 + (complexity/30.01)^2 < 1
> > 
> > This has an 8.5% moat, in the sense that we must go out to 
> > 
> > (error/8.13)^2 + (complexity/30.01)^2 < 1.085
> > 
> > before we will include another temperament (semisixths).

That was wrong. I forgot to square the radius.

It has an 8.5% moat in the sense that we must go out to 

(error/8.13)**2 + (complexity/30.01)**2 < 1.085**2

before we will include another temperament (semisixths).

I'm trying to remember to use "**" for power now that "^" is wedge
product.

> > It includes the following 17 temperaments.
> 
> is this in order of (error/8.13)^2 + (complexity/30.01)^2 ?

Yes. Or if it isn't, it's pretty close to it. The last four are
essentially _on_ the curve, so their order is irrelevant.

> > meantone	80:81
> > augmented	125:128
> > porcupine	243:250
> > diaschismic	2025:2048
> > diminished	625:648
> > magic	3072:3125
> > blackwood	243:256
> > kleismic	15552:15625
> > pelogic	128:135
> > 6561/6250	6250:6561
> > quartafifths (tetracot)	19683:20000
> > negri	16384:16875
> > 2187/2048	2048:2187
> > neutral thirds (dicot)	24:25
> > superpythag	19683:20480
> > schismic	32768:32805
> > 3125/2916	2916:3125
> > 
> > Does this leave out anybody's "must-have"s?
> > 
> > Or include anybody's "no-way!"s?
> 
> I suspect you could find a better moat if you included semisixths 
> too -- but you might need to hit at least one axis at less than a 90-
> degree angle. Then again, you might not.

If you keep the power at 2, there is no better moat that includes
semisixths. The best such only has a 6.7% moat. 

This is

(error/8.04)**2 + (complexity/32.57)**2 = 1**2

However if the power is reduced to 1.75 then we get a 9.3% moat outside of

(error/8.25)**1.75 + (complexity/32.62)**1.75 = 1**1.75

which adds only semisixths to the above list.

However I'm coming around to thinking that the power of 2 (and the
resultant meeting the axes at right angles) is far easier to justify
than anything else.

Has anyone (e.g. Herman) expressed any particular interest in semisixths?
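
A minimal Python sketch of the cutoff/moat test being compared here,
using the constants quoted above (8.13/30.01 at power 2 with the 1.085
moat radius, and 8.25/32.62 at power 1.75); error and complexity are
assumed to be the scalings Dave defines in message 9715 below, and the
function names are illustrative.

def cutoff_value(error, complexity, e0, c0, power):
    return (error / e0)**power + (complexity / c0)**power

def inside(error, complexity, e0=8.13, c0=30.01, power=2.0, radius=1.0):
    # radius=1.0 tests the cutoff curve itself; radius=1.085 tests the
    # outer edge of the 8.5% moat (the radius is raised to the same power)
    return cutoff_value(error, complexity, e0, c0, power) < radius**power

# power-1.75 variant that additionally admits semisixths:
# inside(error, complexity, e0=8.25, c0=32.62, power=1.75)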




Message: 9705

Date: Mon, 02 Feb 2004 05:48:12

Subject: Re: Weighting

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> 
wrote:

> I'm equating "field of attraction" with the width from the top of 
one
> hump (maximum) to the next. I guess what I'm adding to Partch is 
that
> once an interval is mistuned so much that it's outside of the 
original
> field of attraction and into that of another ratio, then it is no
> longer meaningful to call it an approximation of the original ratio.
> 
> And so, with TOP weighting you will more easily do this with the 
more
> complex ratios.

We don't yet know what harmonic entropy says about the tolerance of 
the tuning of individual intervals in a consonant chord. And in the 
past, complexity computations have often been geared around complete 
consonant chords. They're definitely an important consideration . . .

For dyads, you have more of a point. As I mentioned before, TOP can 
be viewed as an optimization over *only* a set of equally-complex, 
fairly complex ratios, all containing the largest prime in your 
lattice as one term, and a number within a factor of sqrt(2) or so of 
it as the other. So as long as these ratios have a standard of error 
applied to them which keeps them "meaningful", you should have no 
objection. Otherwise, you had no business including that prime in 
your lattice in the first place, something I've used harmonic entropy 
to argue before. But clearly you are correct in implying we'll need 
to tighten our error tolerance when we do 13-limit "moats", etc. I 
think that's true but really just tells us that with the kinds of 
timbres and other musical factors that high-error low-limit timbres 
are useful for, you simply won't have access to any 13-limit
effects -- from dyads alone.




Message: 9706

Date: Mon, 02 Feb 2004 09:28:57

Subject: Re: Back to the 5-limit cutoff

From: Dave Keenan

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
> >Even if you accept this (which I don't), wouldn't it merely tell you 
> >that the power should be *at least 2* or something, rather than 
> >*exactly 2*?
> 
> Yes.  I was playing with things like comp**5(err**2) back in the day.
> But I may have been missing out on the value of adding...

Taking the log of the whole thing doesn't change anything since you
can just take the log of the cutoff value too, and the log of a
constant is still a constant. So the above is equivalent to

5*log(comp) + 2*log(err)
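
A quick numeric check of this equivalence, assuming Carl's
comp**5(err**2) means the product comp**5 * err**2 (as the
coefficients 5 and 2 suggest); the example values are arbitrary.

from math import log

comp, err = 12.0, 3.5                 # arbitrary example values
print(log(comp**5 * err**2))          # log of the product cutoff quantity
print(5*log(comp) + 2*log(err))       # same number, so the same ranking and cutoff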




Message: 9707

Date: Mon, 02 Feb 2004 01:35:14

Subject: Re: Back to the 5-limit cutoff

From: Carl Lumma

>> Ok, I walked into that one by giving fixed bounds on what I wanted.
>> But re. your original suggestion (above), for any fixed version of
>> the formula, more of one *increases* my tolerance for the other.
>
>Nonsense.

Nonsense, eh?  This is pretty much the definition of Max().  It
throws away information on the smaller thing.  You can tweak your
precious constants after the fact to fix it, but not before the
fact.

-Carl




Message: 9708

Date: Mon, 02 Feb 2004 03:09:35

Subject: Re: 114 7-limit temperaments

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
> >> I don't think that's quite what Partch says. Manuel, at least,
> >> has always insisted that simpler ratios need to be tuned more
> >> accurately, and harmonic entropy and all the other discordance
> >> functions I've seen show that the increase in discordance for a
> >> given amount of mistuning is greatest for the simplest intervals.
> >
> >Did you ever track down what Partch said?
> 
> Observation One: The extent and intensity of the influence of a
> magnet is in inverse proportion to its ratio to 1.
> 
> "To be taken in conjunction with the following"
> 
> Observation Two: The intensity of the urge for resolution is in
> direct proportion to the proximity of the temporarily magnetized
> tone to the magnet.

Thanks, Carl.

> >It also shows that, if all intervals are equally mistuned, the
> >more complex ones will have the highest entropy.
> 
> ?  The more complex ones already have the highest entropy.  You mean
> they gain the most entropy from the mistuning?  I think Paul's
> saying the entropy gain is about constant per mistuning of either
> complex or simple putative ratios.

It's a lot greater for the simple ones.

> Thus if consonance really *does*
> deteriorate at the same rate for all ratios as Paul claims,

Where did I claim that?




Message: 9709

Date: Mon, 02 Feb 2004 05:05:34

Subject: Harmonic Entropy (was: Re: Question for Dave Keenan)

From: Paul Erlich

Action on the harmonic entropy list, where I just explained the 
calculation again, hopefully more clearly . . .




Message: 9710

Date: Mon, 02 Feb 2004 05:54:01

Subject: Re: Back to the 5-limit cutoff

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> 
wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> 
wrote:
> > Also try including semisixths *and* wuerschmidt -- for a list of
> > 19 -- particularly if you're willing to try a straighter curve.
> 
> No. There's no way to get a better moat by adding wuerschmidt. It's
> too close to aristoxenean, and if you also add aristoxenean it's too
> close to ... etc.

Sorry -- I was looking at an unlabeled graph and thought semisixths 
was wuerschmidt for a second . . .




Message: 9711

Date: Mon, 02 Feb 2004 03:11:53

Subject: Re: Weighting (was: 114 7-limit temperaments

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Graham Breed <graham@m...> wrote:

> Oh no, the simple intervals gain the most entropy.  That's Paul's
> argument for them being well tuned.  After a while, the complex
> intervals stop gaining entropy altogether, and even start losing it.
> At that point I'd say they should be ignored altogether, rather than
> included with a weighting that ensures they can never be important.
> Some of the temperaments being bandied around here must get way
> beyond that point.

Examples?

> Actually, any non-unique temperament will be a problem.

?




Message: 9712

Date: Mon, 02 Feb 2004 05:10:50

Subject: Re: Weighting

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:

> 
> >To me, that's an argument for why TOP isn't necessarily what you
> >want.
> 
> The entropy minima are wider for simple ratios, but that doesn't
> mean that error is less damaging to them.  What it does mean is
> that you're less likely to run afoul of extra-JI effects when
> measuring error from a rational interval when that interval is
> simple.

How does measuring error run you afoul of extra-JI effects, and what 
are these extra-JI effects?




Message: 9713

Date: Mon, 02 Feb 2004 05:58:58

Subject: Re: 7-limit horagrams

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
> >I'm sorry.
> >
> >Green-black-green-black-green-black-green-black-green-black-green-
> >black . . .
> >
> >Wasn't that your idea in the first place?
> 
> I think so, and this is one of the patterns (including even, odd,
> prime, proper ...) I ruled out.  Look at decimal.bmp.  10 & 14
> are both green.
> 
> -Carl

I guess my debugging is at fault, then! Wow, I sure win the ass award 
for suggesting you needed to look more!




Message: 9714

Date: Mon, 02 Feb 2004 09:39:11

Subject: Re: Back to the 5-limit cutoff

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
> >> Ok, I walked into that one by giving fixed bounds on what I
> >> wanted.  But re. your original suggestion (above), for any fixed
> >> version of the formula, more of one *increases* my tolerance for
> >> the other.
> >
> >Nonsense.
> 
> Nonsense, eh?  This is pretty much the definition of Max().  It
> throws away information on the smaller thing.

Yes -- thus more of one has no effect on the tolerance for the other
-- it's either the bigger thing, making the tolerance for the other
irrelevant anyway, or it's the smaller thing, in which case the
tolerance for the other is a constant.

> You can tweak your
> precious constants after the fact to fix it, but not before the
> fact.

Isn't this true of any badness criterion? I don't see how one with
rectangular contours is suddenly any different.




Message: 9715

Date: Mon, 02 Feb 2004 03:11:28

Subject: Re: Back to the 5-limit cutoff

From: Dave Keenan

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:
> You should know how to calculate them by now: log(n/d)*log(n*d) 
> and log(n*d) respectively.

You mean 

log(n/d)/log(n*d)

where n:d is the comma that vanishes.

I prefer these scalings

complexity = lg2(n*d)

error = comma_size_in_cents / complexity
      = 1200 * log(n/d) / log(n*d)

My favourite cutoff for 5-limit temperaments is now.

(error/8.13)^2 + (complexity/30.01)^2 < 1

This has an 8.5% moat, in the sense that we must go out to 

(error/8.13)^2 + (complexity/30.01)^2 < 1.085

before we will include another temperament (semisixths).

Note that I haven't called it a "badness" function, but rather a
"cutoff" function. So there's no need to see it as competing with
log-flat badness. What it is competing with is log-flat badness plus
cutoffs on error and complexity (or epimericity).

Yes it's arbitrary, but at least it's not capricious, thanks to the
existence of a reasonable-sized moat around it.
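
A minimal Python sketch of these scalings and the cutoff, taking each
comma as n:d with n < d (as in the list below) so the comma size comes
out positive; the constants are the ones given above.

from math import log2

def complexity(n, d):
    return log2(n * d)

def error(n, d):
    # comma size in cents divided by complexity
    return 1200 * log2(d / n) / log2(n * d)

def inside_cutoff(n, d, e0=8.13, c0=30.01):
    return (error(n, d) / e0)**2 + (complexity(n, d) / c0)**2 < 1

print(inside_cutoff(80, 81))        # True: meantone is well inside
print(inside_cutoff(32768, 32805))  # True: schismic is just inside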

It includes the following 17 temperaments.

meantone	80:81
augmented	125:128
porcupine	243:250
diaschismic	2025:2048
diminished	625:648
magic	3072:3125
blackwood	243:256
kleismic	15552:15625
pelogic	128:135
6561/6250	6250:6561
quartafifths (tetracot)	19683:20000
negri	16384:16875
2187/2048	2048:2187
neutral thirds (dicot)	24:25
superpythag	19683:20480
schismic	32768:32805
3125/2916	2916:3125

Does this leave out anybody's "must-have"s?

Or include anybody's "no-way!"s?

The more I think about this sum-of-squares type of cutoff function,
the more I think it is the sort of thing I might have suggested a very
long time ago if log-flat badness (with error and complexity cutoffs)
wasn't being pushed so hard by a certain Erlich (who shall remain
nameless). ;-)




Message: 9716

Date: Mon, 02 Feb 2004 05:12:56

Subject: Re: Back to the 5-limit cutoff

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
> >> >> >I'm arguing that, along this particular line of thinking, 
> >> >> >complexity does one thing to music, and error another, but
> >> >> >there's no urgent reason more of one should limit your
> >> >> >tolerance for the other . . .
> >> >> 
> >> >> Taking this to its logical extreme, wouldn't we abandon
> >> >> badness altogether?
> >> >> 
> >> >> -Carl
> >> >
> >> >No, it would just become 'rectangular', as Dave noted.
> >> 
> >> I didn't follow that.
> >
> >Your badness function would become max(a*complexity, b*error),
> >thus having rectangular contours.
> 
> More of one can here influence the tolerance for the other.

Not true.

> >> Maybe you could explain how it explains
> >> how someone who sees no relation between error and complexity
> >> could possibly be interested in badness.
> >
> >Dave and I abandoned badness in favor of a "moat".
> 
> "Badness" to me is any combination of complexity and error, which
> I took your moat to be.

No, it's just a single in/out dividing line, which can be taken as a 
single arbitrary contour of some badness function, but doesn't have 
to be.




Message: 9717

Date: Mon, 02 Feb 2004 06:01:21

Subject: Re: Back to the 5-limit cutoff

From: Dave Keenan

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> 
> wrote:
> > However I'm coming around to thinking that the power of 2 (and the
> > resultant meeting the axes at right angles) is far easier to justify
> > than anything else.
> 
> How so?

It's what you said yesterday (I think).

At some point (1 cent, 0.5 cent?) the error is so low and the
complexity so high, that any further reduction in error is irrelevant
and will not cause you to allow any further complexity. So it should
be straight down to the complexity axis from there. 

Similarly, at some point (10 notes per whatever, 5?) the complexity is
so low and the error so high, that any further reduction will not
cause you to allow any further error. So it should be straight across
to the error axis from there.

It also corresponds to mistuning-pain being the square of the error.
As you pointed out, that may have just been used by JdL as it is
convenient, but don't the bottoms of your HE notches look parabolic?

To justify using the square of complexity (as I think Carl suggested)
we also have the fact that the number of intervals is O(comp**2).




Message: 9718

Date: Mon, 02 Feb 2004 09:48:47

Subject: Re: Back to the 5-limit cutoff

From: Dave Keenan

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
> > >> Ok, I walked into that one by giving fixed bounds on what I
> > >> wanted.  But re. your original suggestion (above), for any
> > >> fixed version of the formula, more of one *increases* my
> > >> tolerance for the other.
> > >
> > >Nonsense.
> > 
> > Nonsense, eh?  This is pretty much the definition of Max().  It
> > throws away information on the smaller thing.
> 
> Yes -- thus more of one has no effect on the tolerance for the other
> -- it's either the bigger thing, making the tolerance for the other
> irrelevant anyway, or it's the smaller thing, in which case the
> tolerance for the other is a constant.
> 
> > You can tweak your
> > precious constants after the fact to fix it, but not before the
> > fact.
> 
> Isn't this true of any badness criterion? I don't see how one with
> rectangular contours is suddenly any different.

Paul and Carl,

I think you're both right. You're just talking about slightly
different things.

As a function, max(x,y) "depends on" both x and y but at any given
point on the "curve" it only "depends on" one of them in the sense
that if you take the partial derivatives wrt x and y, one of them will
always be zero.
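
A numeric illustration of that last point (the values are arbitrary):
nudging the smaller argument of max() leaves the result unchanged,
while nudging the larger one moves it one-for-one.

x, y, eps = 3.0, 7.0, 1e-6
print((max(x + eps, y) - max(x, y)) / eps)   # 0.0 -- x is the smaller argument
print((max(x, y + eps) - max(x, y)) / eps)   # ~1.0 -- y is the larger argument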




Message: 9719

Date: Mon, 02 Feb 2004 05:14:29

Subject: Re: The true top 32 in log-flat?

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
> >> >> >> > TOP generators [1201.698520, 504.1341314]
> >> >> 
> >> >> So how are these generators being chosen?  Hermite?
> >> >
> >> >No, just assume octave repetition, find the period (easy) and
> >> >then the unique generator that is between 0 and 1/2 period.
> >> >
> >> >> I confess
> >> >> I don't know how to 'refactor' a generator basis.
> >> >
> >> >What do you have in mind?
> >> 
> >> Isn't it possible to find alternate generator pairs that give
> >> the same temperament when carried out to infinity?
> >
> >Yup! You can assume tritave-equivalence instead of octave-
> >equivalence, for one thing . . .
> 
> And can doing so change the DES series?
> 
> -Carl

Well of course . . . can you think of any octave-repeating DESs that 
are also tritave-repeating?
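
A minimal Python sketch of the generator reduction Paul describes
(octave repetition assumed; the numbers are the TOP generator pair
quoted earlier in the thread, and the function name is illustrative).

def reduce_generator(generator, period):
    # fold the generator into the range 0 .. period/2
    g = generator % period
    if g > period / 2:
        g = period - g
    return g

period, gen = 1201.698520, 504.1341314
print(reduce_generator(gen, period))          # already in range: 504.134...
print(reduce_generator(period - gen, period)) # the complement maps to the same generator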




Message: 9720

Date: Mon, 02 Feb 2004 06:03:45

Subject: Re: The true top 32 in log-flat?

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Gene Ward Smith" <gwsmith@s...> 
wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> 
> wrote:
> 
> > I'm using your formula from
> > 
> > Yahoo groups: /tuning-math/message/8806 *
> > 
> > but instead of "max", I'm using "sum" . . .
> 
> So these cosmically great answers are coming from the L1 norm
> applied to the scaling we got from vals, where we divide by
> log2(p)'s. What does that mean, I wonder?

The formulas, I think, are basically the same ones being used in 
the "cross-check" post -- did you have a chance to think about it?

Yahoo groups: /tuning-math/message/9052 *




Message: 9721

Date: Mon, 02 Feb 2004 09:52:44

Subject: Re: Back to the 5-limit cutoff

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> 
wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> 
wrote:
> > --- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
> > > >> Ok, I walked into that one by giving fixed bounds on what I
> > > >> wanted.  But re. your original suggestion (above), for any
> > > >> fixed version of the formula, more of one *increases* my
> > > >> tolerance for the other.
> > > >
> > > >Nonsense.
> > > 
> > > Nonsense, eh?  This is pretty much the definition of Max().  It
> > > throws away information on the smaller thing.
> > 
> > Yes -- thus more of one has no effect on the tolerance for the
> > other -- it's either the bigger thing, making the tolerance for
> > the other irrelevant anyway, or it's the smaller thing, in which
> > case the tolerance for the other is a constant.
> > 
> > > You can tweak your
> > > precious constants after the fact to fix it, but not before the
> > > fact.
> > 
> > Isn't this true of any badness criterion? I don't see how one
> > with rectangular contours is suddenly any different.
> 
> Paul and Carl,
> 
> I think you're both right. You're just talking about slightly
> different things.
> 
> As a function, max(x,y) "depends on" both x and y but at any given
> point on the "curve" it only "depends on" one of them in the sense
> that if you take the partial derivatives wrt x and y, one of them
> will always be zero.

That's what I was saying. So what was Carl saying?




Message: 9723

Date: Mon, 02 Feb 2004 01:54:36

Subject: Re: Back to the 5-limit cutoff

From: Carl Lumma

>Yes -- thus more of one has no effect on the tolerance for the
>other -- it's either the bigger thing, making the tolerance for
>the other irrelevant anyway, or it's the smaller thing, in which
>case the tolerance for the other is a constant.

If you make the bigger one bigger, you're also allowing the smaller
one to get bigger without knowing about it.  Or maybe I'm
misunderstanding "tolerance" here, or the setup of the procedure.

>> You can tweak your
>> precious constants after the fact to fix it, but not before
>> the fact.
>
>Isn't this true of any badness criterion?

Yes, that's why I said someone who wants to change his expectations
of error without changing his expectations of complexity shouldn't
use badness.

-Carl




Message: 9724

Date: Mon, 02 Feb 2004 05:15:22

Subject: Re: 7-limit horagrams

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
> >> >> Beautiful!  I take it the green lines are proper scales?
> >> >> 
> >> >> -C.
> >> >
> >> >Guess again (it's easy)!
> >> 
> >> Obviously not easy enough if we've had to exchange three
> >> messages about it.
> >> 
> >> -Carl
> >
> >Then you can't actually be looking at the horagrams ;)
> 
> Why not just explain things rather than riddling your users?

Because I'm trying to encourage some looking.

