Saturday, March 16, 2013

Georg Ohm: birthday

Albert Einstein (born in Ulm on March 14th, 1879) is such a formidable personality that I gave up the idea of writing a biography. The problem is that I know too much about him, and other people know even more about Einstein etc. Ohm is an easier task.

Georg Simon Ohm was born in Erlangen, Holy Roman Empire (100 km from the Czech border) on March 16th, 1789. He died in Munich, Kingdom of Bavaria (200 km from the Czech border), at the age of 65. When he was just four months old, he could have stormed the Bastille but he decided not to.

His father was officially uneducated but he was actually one of the most widely respected autodidacts. Georg's mother died when he was ten years old. Of seven siblings, only three survived to adulthood: sister Elizabeth Barbara, Georg Simon, and his younger brother Martin Ohm, who would become a famous mathematician (during their lifetimes, perhaps more famous than Georg). Martin Ohm figured out what \(a^b\) was for \(a,b\in\mathbb{C}\); I loved this problem when I was 8 years old or so.




Imagine that it's just 200 years ago – and 4/7 of the children were dying before they reached adulthood. Medicine and related fields have made amazing progress since Ohm's time. On the other hand, one could worry that this cruel fate of the children was still imposing some "creative" natural selection on mankind, a selection that became absent sometime in the 20th century. So far, things look fine but who knows what people will think about this issue in 2100.




Georg Simon Ohm most famously worked as a high school teacher – that's when he discovered his Ohm's law\[

U = RI.

\] I can't forget the joke about the three Ohm's laws: \(U=RI\), \(I=U/R\), \(R=U/I\). I guess that many people, especially among the girls, wouldn't view this as a joke.
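The three "forms" from the joke are of course one and the same relation. A trivial sketch in Python – the numerical values are made up for illustration, not taken from the post:

```python
# Ohm's law, U = R*I, solved for each of the three quantities.

def voltage(R, I):
    """U = R * I (volts)."""
    return R * I

def current(U, R):
    """I = U / R (amperes)."""
    return U / R

def resistance(U, I):
    """R = U / I (ohms)."""
    return U / I

# A 2 A current through a 5-ohm resistor drops 10 V:
assert voltage(5.0, 2.0) == 10.0
assert current(10.0, 5.0) == 2.0
assert resistance(10.0, 2.0) == 5.0
```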

Note that the unit "one ohm" of resistance is denoted by the capital Omega, \(\rm\Omega\), because the letter sounds almost like Ohm's name. Ohm made the discovery in 1827, when he was playing with the electrochemical cell invented by Alessandro Volta. Ohm's law is clearly paramount for all electric circuits but sadly enough, electric circuits only began to be a hot topic more than 50 years later. Imagine how rich he would be if he could collect royalties from Ohm's law patents today.

Ohm's career involved various teaching jobs – including some at universities (not the most famous ones) – that paid so little that he almost starved to death. And that's true despite the fact that he was hired by none other than the Prussian king at one moment. The king loved Ohm's book and work. Some low-brow colleges that used to employ Ohm didn't share that opinion, so they fired him, and so on.

Johann Dirichlet was among Ohm's students.

There's another law that Ohm proposed, the so-called Ohm's other law or Ohm's acoustic law. In modern language, it says that the human ear is a Fourier analyzer that measures \(|\tilde f(\omega)|^2\) for all accessible frequencies \(\omega\).

This statement, known to be partly false, is pretty fascinating. For example, it implicitly says that the relative phases don't matter. To see this, look at the graphs of the functions \(\sin(x)+\sin(2x)\) and \(\sin(x)+\cos(2x)\). The graphs of the position of the speaker as a function of time look very different (the second graph is time-reversal-symmetric while the first one is far from it, for example) but because the Fourier amplitudes have the same absolute values, we can't hear the difference between these two sounds.
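This phase-deafness is easy to check numerically. A minimal sketch with NumPy, using the two functions from the text (the sampling parameters are mine, chosen for illustration):

```python
import numpy as np

# Sample one period of each signal at N points.
N = 1024
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
f1 = np.sin(x) + np.sin(2.0 * x)
f2 = np.sin(x) + np.cos(2.0 * x)

# The waveforms themselves are clearly different...
assert not np.allclose(f1, f2)

# ...but their Fourier magnitude spectra |f~(omega)| are identical,
# so an ideal phase-deaf "Ohm analyzer" cannot tell them apart.
assert np.allclose(np.abs(np.fft.fft(f1)), np.abs(np.fft.fft(f2)))
```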

When I was a kid, I played the piano and I was very confused by certain basic things. For example, when I was 7, I was convinced that if you play "C" and "E" at the same moment, the ear must hear the tone in between, "D", to pick a prominent example. That's of course rubbish – each frequency has its own independent "account" – as I understood a year later (even though I surely had to "hear" this fact – hear chords – long before that). But it was still confusing to me why we can't hear the differences between the functions above, for example, and why interference never cancels the same tone coming from two sources etc. You're welcome to offer your opinion.

Ohm's acoustic law answers most of these questions. Nevertheless, musicians have generally hated this law from the beginning – it became a major reason why musicians distrust physicists. It has to be wrong, they feel and hear (?). Well, I am sure that the law "ears are Fourier analyzers" can't be quite true. On the other hand, I haven't found any coherent description by the musicians that would clarify what they really dislike about the law.

Well, I would say that the ear only hears some frequencies, from \(20\) to \(20,000\,{\rm Hz}\) or so; frequencies outside this interval are simply (gradually) eliminated. Moreover, it must be able to partly determine the phase of the cycle for low enough frequencies. And it must suffer from limitations of the resolution with which frequencies may be distinguished; good musicians generally have a more precise sense of pitch. And the brain of course can't remember too much information about the function \(|\tilde f(\omega)|^2\), so it compresses it in some way – it determines the loudest components (frequencies) and/or describes the remaining sound as "some sort of noise" etc. Beyond that, I can't really think of other limitations that the law could have.
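The "compression" idea – keep the loudest Fourier components, write the rest off as noise – can be sketched in a few lines. The signal below (a two-tone chord plus weak noise) and all its parameters are my own illustrative choices:

```python
import numpy as np

fs = 8000                        # sample rate (Hz)
t = np.arange(fs) / fs           # one second of audio
rng = np.random.default_rng(0)
# A chord (440 Hz + a quieter 550 Hz) buried in low-level noise:
signal = (np.sin(2*np.pi*440*t) + 0.5*np.sin(2*np.pi*550*t)
          + 0.01*rng.standard_normal(fs))

spectrum = np.fft.rfft(signal)
power = np.abs(spectrum)**2

# Keep only the k loudest components; discard the rest as "some sort of noise".
k = 10
threshold = np.sort(power)[-k]
compressed = np.where(power >= threshold, spectrum, 0.0)

# Almost all the energy survives in a handful of frequencies:
kept = np.sum(np.abs(compressed)**2) / np.sum(power)
assert kept > 0.95
```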

Can you help me? My guess is that the dissatisfied musicians must misunderstand some Fourier maths even if their ear is subconsciously doing a good job in the Fourier analysis. And artists may always hate science for "making things dull" (not true!). So I would guess that the opposition is ultimately irrational but I am ready to be proved wrong.


snail feedback (15) :


reader Philip Gibbs said...

Take a sound that stops suddenly. Fourier analyse it and then change some phases; it no longer falls silent. Do we hear the difference? Of course we do.

Hearing might be better modelled by some kind of wavelet analysis.
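The phase-scrambling experiment described in this comment is easy to simulate. A sketch with illustrative parameters of my own (a 440 Hz tone cut off halfway through one second):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 8000
t = np.arange(fs) / fs
# A 440 Hz tone that stops abruptly at the half-second mark:
tone = np.sin(2*np.pi*440*t)
tone[fs//2:] = 0.0

# Scramble the phases while keeping every magnitude |f~(omega)| fixed
# (DC and Nyquist bins stay real so the inverse transform remains real):
spectrum = np.fft.rfft(tone)
phases = rng.uniform(0.0, 2.0*np.pi, spectrum.shape)
phases[0] = phases[-1] = 0.0
scrambled = np.fft.irfft(np.abs(spectrum) * np.exp(1j*phases), n=fs)

# Same magnitude spectrum, so a purely phase-deaf analyzer sees no change...
assert np.allclose(np.abs(np.fft.rfft(scrambled)), np.abs(spectrum))

# ...yet the abrupt silence is gone: the sound now fills the whole second.
assert np.max(np.abs(tone[fs//2:])) == 0.0
assert np.max(np.abs(scrambled[fs//2:])) > 0.01
```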


reader Luboš Motl said...

Well, isn't that what I said – that it's possible to hear the abrupt end for low frequencies – or do you claim that this is possible to hear for ordinary middle-range frequencies/tones as well?

Take tones starting with the concert pitch, 440 Hz. Does the ear care (can it tell) whether the tones and their higher harmonics stop at time "t" or 1/440 of a second later?


reader Philip Gibbs said...

I think this must be true at all audible frequencies. The ear may work like a set of damped harmonic oscillators where you sense the energy but not the phase in each one. The damping constants would determine how sharply you can detect a change at a given frequency.



reader Luboš Motl said...

Dear Phil, I obviously believe that you may detect "when" tones occur with some precision "dt". But I think that for normal tones, like 440 Hz, this "dt" is much longer than 1/440 of a second.


Take something like a 0.05 second interval of time - something I believe is a universal resolution of "dt" by the human ear - and make some Fourier expansion of the pressure in this interval. You get modes with "df" spaced by or uncertain by 20 Hz or so. For longer tones, I can hear the frequency much more accurately than plus minus 20 Hz but for shorter ones, it's impossible because of the "uncertainty principle".


Still, I believe that when you make a decomposition in time intervals such as 0.05 s, the human ear only hears the Fourier components |f(omega)|^2, and the only thing that changes for longer, more stable tones is that the resolution of the frequency becomes better, as one may Fourier-decompose the sound over longer intervals of time.
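The 1/T spacing of Fourier modes quoted above (20 Hz for a 0.05 s window) can be checked directly; a minimal sketch, with a standard 44.1 kHz sample rate as my assumed parameter:

```python
import numpy as np

fs = 44100                      # sample rate in Hz (common audio value)
for T in (0.05, 0.5, 5.0):      # window lengths in seconds
    n = int(round(T * fs))
    # FFT bins of an n-sample window are spaced by df = fs/n = 1/T:
    df = np.fft.rfftfreq(n, d=1.0/fs)[1]
    assert abs(df - 1.0/T) < 1e-9
# A 0.05 s window therefore cannot resolve frequencies better than
# about 1/0.05 = 20 Hz: the df*dt ~ 1 "uncertainty principle" in action.
```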


reader Art Slartibartfast said...

The way our ear works is what makes MP3 compression successful. We do not hear soft frequencies near loud ones, for example, so we can leave this information out without audibly affecting the sound quality. Wikipedia has an interesting article on it.

I am no fan of MP3 though, because it is often abused and the quantisation noise then invariably makes music sound 'cheap', as if people could not afford the bandwidth to treat the sound properly. By the way, quantisation noise shows up much sooner in classical music than it does in modern pop songs. Says something about the signal-to-noise level...


reader Philip Gibbs said...

I think dt might come from the damping time which would normally be longer than the cycle time. I am assuming that the receptors in the ear resonate but that this must be damped, so it would continue to vibrate for a time dt after the tone stops. I have not tried to look this up so it may not work like that at all. The time resolution of the nerves could equally well be what counts.


reader Gordon Wilson said...

Interesting post. The brain filters and audits the reality impinging on its inputs to an amazing extent, and fills in gaps with pattern recognition---an example is your blind spot that you are totally unaware of.

OT but fascinating---elephants communicate using infrasound--- http://www.birds.cornell.edu/brp/elephant/cyclotis/language/infrasound.html


reader papertiger0 said...

The ear doesn't infill gaps with pattern recognition; instead we hear a muddy, smeared sound, technically called "envelope distortion".

Here's a paper on it. http://www.ncbi.nlm.nih.gov/pubmed/1787236

I've been thinking about buying a pedal to cure this problem.

The BBE Sonic Stomp Sonic Maximizer.
The sonic maximizer time-shifts the fundamental components of the sound so that the higher frequencies arrive at the ear ahead of the lower frequencies and harmonics, as they should.



Can't tell you more about it until I buy the gizmo or at least give it a test drive.


reader papertiger0 said...

In the example where you play a C and E note, the implied "D" note is audible as a sort of undertone harmonic distortion. I don't know if it would be so clear on a piano, but it's definitely present on an overdriven guitar.


reader Rafa Spoladore Ψ said...

Human Time-Frequency Acuity Beats the Fourier Uncertainty Principle: http://prl.aps.org/abstract/PRL/v110/i4/e044301


reader Andrew Tan said...

Yes, of course the ear is a Fourier analyser! I don't know any musicians that think otherwise.

There are interesting things beyond that, for example, the pitch of the missing fundamental. This is not a beat frequency: {400,600,800 Hz} does not have the same pitch as {410,610,810 Hz}.
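The distinction in this comment can be contrasted with a naive "difference tone" model in which the perceived pitch is the GCD of the component frequencies. The comparison below uses the frequencies from the comment; the claim that the perceived pitch stays near 200 Hz is standard psychoacoustics, not something established in this thread:

```python
from functools import reduce
from math import gcd

# Naive model: pitch = largest common difference frequency (integer Hz).
def gcd_pitch(freqs):
    return reduce(gcd, freqs)

# For exact harmonics of 200 Hz, the model matches the missing fundamental:
assert gcd_pitch((400, 600, 800)) == 200

# But shift every component by 10 Hz and the GCD collapses to 10 Hz,
# while the perceived pitch stays near 200 Hz (only slightly shifted) --
# so the missing fundamental is not a simple beat/difference frequency.
assert gcd_pitch((410, 610, 810)) == 10
```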


reader Luboš Motl said...

This is complete rubbish. The uncertainty principle isn't about the limitations of some particular measurement methods that ears or anything else could beat. It is a provable, universal limit that describes how things actually are.



I don't dispute that the brain may distinguish various things, so that "locally" the same sound may be measured with a very low df and, in a different context (an abrupt change), its timing may be measured with a very low dt, so that df,dt seem to exceed the inequality – but their interpretation is wrong. It's not the same situation, because in one and the same situation, timing and frequency can't even be *defined* with a better accuracy than the uncertainty inequality says.


reader Robertson Smithwoods said...

Some time ago, I had to work some of this out on behalf of a colleague who was evaluating approaches to tinnitus (ringing in the ears). Apologies if my memory is a bit hazy. I found Chris Darwin's notes invaluable. There we learn that Ohm's description is an approximation.

For instance, if two tones are close enough (much closer than your C and E), the ear can interpret this as an intermediate tone - this is the basis of some of the "blue" notes in jazz on a keyboard.

For tones below 3kHz, which is about 2 octaves above middle C, there is some encoding of phase. The nerves phase lock on the sound and fire spikes at some multiple of the frequency. So the phase is encoded, but with ambiguities. There are some papers that suggest that this is exploited in source location (i.e. where is the sound coming from).

The response of the ear is also significantly non-linear. I quote Professor Darwin: "In normal ears the response of the basilar membrane to sound is actually non-linear - there is significant distortion."

* If you double the input to the basilar membrane, the output less than doubles (saturating non-linearity).

* If you add a second tone at a different frequency, the response to the first tone decreases (two-tone suppression).

* If you play two tones (say 1000 & 1200 Hz) a third tone can appear (at 800 Hz) - the so-called Cubic Difference Tone.
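The cubic difference tone indeed pops out of any cubic nonlinearity, which is a crude stand-in for the basilar membrane's distortion (the cubic coefficient below is an arbitrary illustrative choice):

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs            # one second of samples
f1, f2 = 1000, 1200
two_tone = np.cos(2*np.pi*f1*t) + np.cos(2*np.pi*f2*t)

# Add a small cubic distortion term to the two-tone signal:
distorted = two_tone + 0.1 * two_tone**3

# With a 1 s window, rfft bin k corresponds to k Hz.
clean = np.abs(np.fft.rfft(two_tone))
dirty = np.abs(np.fft.rfft(distorted))

# The input has nothing at 800 Hz, but the cubic term creates a
# component at 2*f1 - f2 = 800 Hz: the cubic difference tone.
assert clean[2*f1 - f2] < 1e-6
assert dirty[2*f1 - f2] > 0.01 * dirty[f1]
```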


reader Luboš Motl said...

Very informative, thanks! Sort of reproduces much of the list of limitations I've mentioned, too.