Beauty of Gray



Friday, July 19, 2002
 

The second law is neither a law, nor about seconds. Discuss.


Bill Quick at DailyPundit noticed this interesting article on the BBC website about an Australian research group apparently demonstrating a violation of the Second Law of Thermodynamics. Steven Den Beste at the USS Clueless then took a whack at responding. I agree with some of what he says in his response. Specifically, the BBC news article was overwrought. This discovery is not at odds with what physicists had previously thought, and does not imply that the second law is false, or that we need to fundamentally change the way we think about the universe. However, I think Den Beste has misconstrued the experimental result (not hard to do when working from a second-hand, popular press report). The experiment in question really is an observation of a system which violates the second law.

But how can this be, you might be asking? Isn’t the second law a fundamental principle of the universe? How can I say that an experiment has violated it, yet at the same time say that this is no big deal? The basic reason is that the second law isn’t really a law in the same sense that the Uncertainty Principle (to take another often misunderstood example) is. The second law is a probabilistic statement rather than an absolute principle about the nature of matter. It’s a statistical law; it just so happens that the sample size is so huge in almost all cases that you can treat it as though it really were an absolute principle of the universe, one which must always hold. But if you dig down to the underlying physics, this isn’t the case.

One way of expressing this fact is that, at the microscopic level, the laws of physics are symmetric with respect to time—that is, processes can run equally well both in forward and in reverse. But going one way, the entropy will be increasing, while going the other way, it will be decreasing. So at some level, it’s possible for interactions to happen which decrease entropy. Now, in the everyday world, entropy increases and you can’t reverse things—if you drop a vase and break it, you can’t wiggle your hips and watch it reassemble and jump back into your hands. So there’s a disconnect here somewhere.

On the one hand, there’s the second law, which says that entropy must increase. On the other hand, there’s the time symmetry of the basic laws, which says that processes can run in reverse. The bridge between the two is the probabilistic nature of the second law. But exactly what the probabilities involved are (how small a system do you have to look at, and for how short a time, to see an entropy decrease?) was only recently given an accurate theoretical treatment.

A group derived a formula for just how unlikely a given process is, as a function of the entropy change of that process. Processes which increase entropy are much more likely than those which decrease it, so on the whole, entropy will increase. But if you set your system up just right, you can observe some of the fleeting processes which actually decrease entropy. The experiment described in the BBC report (and published in Phys. Rev. Lett., v. 89, no. 5) was a test of this proposed theory, the Fluctuation Theorem.
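In one common form (and this is my paraphrase, not the authors' exact statement), the fluctuation theorem says that the probability of a trajectory producing an amount of entropy A over some interval, divided by the probability of a trajectory consuming that same amount, is e^A, with A measured in units of Boltzmann’s constant. A quick numerical sketch, with made-up numbers chosen just to show the scales involved:

    import math

    # Rough sketch of the fluctuation theorem's central claim: the odds of
    # producing entropy A versus consuming that same amount grow
    # exponentially with A (A is dimensionless here, i.e. measured in
    # units of Boltzmann's constant).
    def odds_ratio(entropy_produced):
        return math.exp(entropy_produced)

    print(odds_ratio(3))     # a few k_B: a "backwards" run is only ~20-to-1 against
    print(odds_ratio(100))   # ~e^100: already hopeless to ever observe

The point is that for a tiny system producing only a few k_B of entropy over a fraction of a second, entropy-consuming trajectories are rare but observable, while for anything macroscopic the ratio is beyond astronomical.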

So what is entropy? Entropy was first discovered experimentally: as chemists and physicists were working out the laws of thermodynamics, they found they couldn’t explain what they saw without introducing this new measure, entropy (which was somehow related to the temperature). As late as 1905, the Encyclopaedia Britannica could still write that entropy “does not correspond to any directly measurable physical property, but is merely a mathematical function…”

However, as is usually the case when there’s some mysterious, important property, there really is an underlying physical basis for it. In this case, entropy is related to the number of microscopic states that are available to a system. Quantum mechanics says that particles (and systems of particles) can only exist in certain discrete states. The entropy of a system tells you how many such states are available to it.
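The precise relationship is Boltzmann’s formula: the entropy is the logarithm of the number of accessible microstates, scaled by Boltzmann’s constant, S = k ln W. A minimal sketch of what the logarithm means in practice (the function name and sample numbers are just mine, for illustration):

    import math

    k_B = 1.380649e-23  # Boltzmann's constant, in J/K

    # Boltzmann's formula: entropy from a count of accessible microstates.
    def entropy(num_microstates):
        return k_B * math.log(num_microstates)

    # Because of the logarithm, multiplying the number of states by a factor
    # of 100 adds only k_B * ln(100) of entropy, no matter how many states
    # you started with.
    print(entropy(1e32) - entropy(1e30))   # = k_B * ln(100), about 6.4e-23 J/K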

A basic principle of statistical mechanics is that a system is equally likely to be in any microscopic state that is available to it. Now, a closed system can distribute its energy (and other variables) across the particles in that system in any way consistent with the conservation of energy. (That is, the total energy has to add up to the same amount.) However, different distributions of energy will have different probabilities, because each will have a different number of microscopic states open to the system.

Since all states are equally likely, probability theory says that the system will spend the most time with the distribution of energy that has the most available microscopic states. As it turns out, the probability function for even a very small system (as little as a few thousand atoms) is extremely sharply peaked, so that for practical purposes you can assume the system spends all its time in the most likely state. Or, to put it another way, the system arranges itself to maximize its entropy. But this is a consequence of probability.
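To get a feel for how sharp that peak is, here is the simplest toy model I can think of (my own illustration, not anything from the paper): N particles, each independently equally likely to be in either half of a box. The typical fluctuation in the fraction found on the left shrinks like one over the square root of N:

    import math

    # Fraction of N independent 50/50 particles found in the left half of a
    # box: the mean is 0.5 and the standard deviation is 0.5 / sqrt(N), so
    # the distribution narrows rapidly as N grows.
    for n in (10, 1000, 10**6, 10**20):
        print(n, 0.5 / math.sqrt(n))

For ten particles the fraction wanders by about 16%, for a thousand by under 2%, and for anything macroscopic the fluctuations are immeasurably small.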

Let me give a quick example, which will hopefully demonstrate this effect. Imagine a box, divided in two, with gas in one half and nothing in the other. Then, remove the divider in the box. The gas expands to fill the whole box. Since each gas molecule now has twice as much space to roam around in, it has increased its total number of available states, which increases the entropy of the system. This is an example of the second law in action. One statement of the second law is that, when you remove any constraint in a closed system, the entropy will increase. Which makes sense. Removing a constraint opens up new states to the particles in the system, which increases the entropy.
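This case can even be put in numbers with a standard textbook calculation (nothing specific to the experiment under discussion): doubling each molecule’s available volume doubles its number of positional states, so the total entropy rises by N times k ln 2.

    import math

    k_B = 1.380649e-23   # Boltzmann's constant, J/K
    N_A = 6.02214076e23  # Avogadro's number: molecules in one mole

    # Free expansion into twice the volume: Delta S = N * k_B * ln(2),
    # since each of the N molecules gets twice as many positional states.
    def expansion_entropy(num_molecules):
        return num_molecules * k_B * math.log(2)

    print(expansion_entropy(N_A))   # about 5.76 J/K for one mole of gas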

However, the gas is really just a bunch of molecules whizzing around and bouncing off each other in a more or less random fashion. So if you look at just one molecule, it goes this way and that, and ends up bouncing all around the entire box. If you look at some instant, that molecule is equally likely to be anywhere inside the box. So it’s got a 50% chance of being in the original half of the box where all the gas started. Now, consider a second gas molecule, at the same time. It also has a 50% chance of being in the first half of the box. (I know the probabilities are conditional here, so they won’t all be exactly 50%, but the underlying point holds.)

Extending this, it’s theoretically possible that, at some instant, all the gas molecules will be back in the original half of the box. However, since there are billions of billions of gas molecules in the box, you’ll never see that happen. You could watch the box until the universe collapsed and died and still never see it; the probability is that small. But the probability isn’t zero. And if you scale the system down far enough, the probabilities are no longer so small, and you might see momentary fluctuations where all the gas is over toward one side of the box.
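To put rough numbers on this (again, my own illustration): the chance that all N molecules happen to sit in the original half at a given instant is one in 2^N, which is merely unlikely for a dozen molecules but unimaginably small for a macroscopic sample.

    import math

    # Probability that all N molecules are in the original half at one
    # instant: (1/2)**N
    for n in (10, 100):
        print(n, 0.5 ** n)       # about 1 in a thousand, then about 1 in 10^30

    # For a macroscopic sample (~10^23 molecules) the number is too small
    # to represent directly, so just report the order of magnitude.
    n = 1e23
    print("1 in 10^%.2e" % (n * math.log10(2)))   # 1 in 10^(3.01e+22)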

In a much more complicated way, that’s sort of what the experimenters in this case did. They designed a very special system that allowed them to monitor a particle’s trajectory and measure the entropy change along that trajectory. They found that, in this system, on time scales of tenths of seconds, there was a reasonable chance that the particle would actually follow a trajectory which decreased the total entropy. On the whole, the particle tended to increase the entropy, and if you watched it for more than a second or two it was almost guaranteed that the total entropy would increase. But there was some chance this wouldn’t happen. And the probabilities they measured seemed to fit the predictions of the fluctuation theorem.

So, yes, this is an observation of the second law being violated. But it’s under very special circumstances, and it doesn’t contradict our theoretical understanding of thermodynamics. The title of the article itself points out the special nature of these observations: "Experimental Demonstration of Violations of the Second Law of Thermodynamics for Small Systems and Short Time Scales." All the chemists and physicists doing calculations relying on the second law are still perfectly fine. What the report might indicate (although this remains to be determined) is that these violations of the second law could occur on time and length scales that matter for some proposed nanotechnology applications, complicating their operation. But that is still hypothetical, and would depend on the particular nanostructure in question.


Link posted by Doug Turnbull at 11:13 AM


 
What was the point again?


In the post-attack investigations, one thing that has become clear is that the 9-11 attacks represented a significant failure of the US intelligence services. There were a lot of pieces out there, but due to a lack of coordination and aggressive pursuit of leads, the pieces were never assembled until afterwards. As more information came to light, the magnitude of these intelligence failures became more apparent, and the calls for reform grew louder.

Bush responded with his proposal for a new Department of Homeland Security. Unfortunately, as it turns out, this department is a purely cosmetic effort. As the Washington Post reports, the Homeland Security office will have essentially no intelligence role. It will be completely at the mercy of the FBI and CIA to gather and analyze intelligence. To put it another way, the Homeland Security department doesn't address any of the underlying problems in intelligence which allowed the 9-11 attacks to occur. All it does is add another layer of bureaucracy to the process of responding to what intelligence we are able to gather.

So, the Homeland Security office doesn't address the most serious problems facing the country in the war on terror. But at least it can gather together some of the other threads and coordinate them. Things like immigration, border control, FEMA, the Coast Guard, etc. Right? Wrong. In Congressional turf wars, many congressmen on both sides have decided that their own personal interests are more important than protecting the country from future terrorist attacks.

Leading the charge is Rep. Don Young, who thinks his own personal power of oversight over the Coast Guard is much more important than national security. The lives of his countrymen are one thing, but pork barrel politics is quite another. After all, those terrorists are only killing New Yorkers anyway. Alaska is safe. Let the mainlanders die so I can maintain my influence, is apparently Young's position. Nice. If the Republican leadership had any guts Young would immediately be sacked from his position and prevented from ever holding another meaningful committee seat for the rest of his hopefully short tenure in Congress. It won't happen, but I can dream.

Link posted by Doug Turnbull at 8:04 AM



Wednesday, July 17, 2002
 
The myth of utopia


In an exchange between Steven Den Beste and Eric Raymond about this interesting (and extensively blogged) article in the NY Times magazine about diet and obesity, Den Beste closed with this statement:

Silverman is making the incorrect assumption that an ideal, harm-free diet actually is possible. That assumption isn't actually justified; there's no reason that a perfect diet must exist at all. We aren't designed to last forever.

This is a fundamentally important point, not just for nutritionists, but for all fields of human endeavor. In its general form, this statement is that “not all good things are compatible.” Applied to diet, what this says is that there is no single ideal diet. You can’t say that any single nutritional regimen is the best, because different diets maximize different things, all of which are considered good. Do you want to be thin? Do you want to avoid a heart attack? Live as long as possible? Maximize your enjoyment over your life? Prevent one form of cancer or another?

Everyone would like to achieve all of these goals. Unfortunately, you can’t maximize them all at the same time—all these good results are not compatible with each other. So tradeoffs are necessary. At best, dieticians and doctors can determine the effect of diet on each of these different medical areas. What they can’t do is lay down some canonical best diet that everyone can follow. Not just because the effect of diets varies from individual to individual, but because the very concept of a best diet is impossible. Determining what is best depends on what goals an individual wishes to achieve, and the balancing of these different goals is an individual choice. So an optimal diet only exists relative to a particular individual’s desires—their payoff or utility function, their measure of effectiveness, or whatever terminology you want to use.

There is a philosophical tradition going back to Plato, and certainly present in the Enlightenment philosophers, that holds the opposite: that the good is a subset of the true, and since all truths must be consistent with each other, all good things are as well, and so perfection is possible. This conception is common in many fields. The very term political science is a testament to its strength in political philosophy: the idea that there is a discoverable truth in politics, some objective underlying reality against which propositions can be tested.

This can lead to pernicious outcomes, specifically the numerous persistent conceptions of utopia which intermittently pop up. The problem is, people are perverse, and no utopia you design will be to everyone’s taste. In designing your own perfect society, you will have made tradeoffs—how much freedom do you allow, what morals do you enforce, how much redistribution of wealth is present, what tradeoffs do you make between privacy and security, and on and on. So any utopia is sustainable only by force—those who disagree or are displeased with it must be suppressed. (And, of course, the fact that there are citizens unhappy with the society means it’s not really a utopia after all.)

This is, to me, the most convincing argument for a liberal, pluralistic society. Although the underlying skepticism about the compatibility of goods has been around for a long time (going back to Machiavelli), as far as I know the political implications of this position were only fully developed recently, in the writings of the late intellectual historian and philosopher Isaiah Berlin. Since individuals can’t agree on the proper ends of mankind, the state should, as far as possible, stay out of their way. Its role is to regulate interactions, not to legislate private behavior.

Berlin’s writings are interesting if you like intellectual history. Even if you don’t, his Four Essays on Liberty is essential reading. But beyond the realm of politics and philosophy, the fundamental idea stands—not all good things are compatible with each other, and life involves tradeoffs. This is one reason why one-sided critiques of policies aren’t particularly valuable. Simply carping about the costs of a particular course of action is an unconvincing argument, because every course of action involves costs. There is no perfect means that will produce nothing but light and goodness.

This sort of argument is extremely common—on many contentious issues you’ll see each side presenting only half the argument, with both either failing to see or ignoring the fact that every decision involves both costs and benefits. To come to a reasoned conclusion requires seeing both sides and trying to balance them. Life involves tradeoffs. It isn’t enough to say that course A is bad; you must show that course A is worse than course B.


Link posted by Doug Turnbull at 8:11 AM









