So, I need to get back into the explanatory mindset, and I figured I'd do it with something of interest to everybody: the butterfly effect. What is it, how does it work, how strong is it, and how does it apply to AH? Well, hopefully I can explain.
We'll begin, as usual, with some history. In the mid-19th century, the invention of the telegraph made it possible to start moving bulk data quickly, and one field this opened up was weather prediction. Initially (and, indeed, for quite a while after) this was fairly basic: look at the weather upwind, extrapolate it downwind. The big obstacles to more accurate prediction were the lack of good models for how weather works, and the fact that even with such a model you'd need implausibly large amounts of calculation. The second problem, of course, came under control in the 1950s, when modern digital computers opened up whole new vistas of computation. The first turned out to be a little tougher.
The first attempts were pretty crude stuff - just throwing some fluid-dynamics equations at the computer and seeing what came out. For obvious reasons, the "predictions" these models made weren't much better than chance. But by the early 60s, the models were a lot more detailed, and could model systems that looked quite a bit like the real world[1]. Unfortunately, to the chagrin of just about everyone involved, these new models weren't significantly better at making predictions than the ones from a decade earlier. As an example (albeit one well out of the league of 1960s modeling): the Florida Keys get hit by a hurricane just about every hurricane season. A given program might correctly model this, but the particulars of the hurricane - like the date it arrives, exactly the sort of thing you want predicted - would be "typical" of Atlantic hurricanes while lining up with the real world no better than you'd get by altering the dates on the previous year's records and calling it a "prediction". Given the time, money and effort that had gone into computer weather forecasting, this wasn't a good result to have. But most people just thought it was because the models, or the data being fed them, or both, weren't detailed enough.
In 1961 Edward Lorenz was trying to make yet another model when he ran into some interesting weather patterns. He decided to take another look at the simulation to try and work out where this interesting feature was coming from. But, with limited memory and I/O capacity, the way his program "saved" old states was to print out a snapshot of all the variables as three-digit floating-point numbers every couple of simulated hours. So Lorenz worked his way back to the last printout before the cool stuff appeared, manually reentered every variable, and ran the program again. To his dismay, the interesting feature never reappeared: the weather ran on, then ran a little differently, then a lot differently, and by the time the fun stuff had originally shown up the program was cheerfully chugging along in a similar - but totally different - pattern. Since the program was completely deterministic, this wasn't supposed to happen, and Lorenz spent quite a while checking everything - his input, the software, the hardware - for the glitch that had to have caused this. Finally he came upon the issue: the program used six-digit floating-point numbers to do its calculations, but only printed out the three most significant digits. Whereas the original run might have had values of, say, 0.652712, 0.534012, 0.282402... at print time, Lorenz had been implicitly reentering 0.652000, 0.534000, 0.282000... and the resulting disparity in the fourth place was throwing the simulation reruns off.
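You can reproduce the flavor of Lorenz's accident with any chaotic system. Here's a minimal sketch - using the logistic map, a standard toy chaotic system, rather than Lorenz's actual weather model - with starting values echoing the example above:

```python
# Sketch: how reentering a three-digit printout derails a rerun. The
# logistic map stands in for the weather model; the starting values
# echo the 0.652712-vs-0.652 example from the text.

def logistic_step(x):
    return 3.9 * x * (1.0 - x)   # r = 3.9 puts the map in its chaotic regime

full = 0.652712    # the six-digit state the machine actually held
rerun = 0.652      # what you'd reenter from a three-digit printout

max_gap = 0.0
for _ in range(30):
    full = logistic_step(full)
    rerun = logistic_step(rerun)
    max_gap = max(max_gap, abs(full - rerun))

# The two runs agree to three decimals at first, then part ways completely.
print(max_gap)
```

Within a few dozen steps the discrepancy in the fourth digit has grown to the full size of the system, and the "rerun" tells you nothing about the original.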
Lorenz dropped everything and started looking into this. He ran tests to see what part of the program was causing this behavior - called "extreme sensitivity to initial conditions" - and discovered that similar behavior appeared in almost any dynamical system with any complexity to it. He eventually got his program down to the simulation of a single convection cell, where the entire mathematical basis for the program can be summed up as
- dx/dt = 10y - 10x
- dy/dt = 28x - xz - y
- dz/dt = xy - 8z/3
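Those three convection-cell equations (written out again in the comments below) are simple enough to step forward yourself. This sketch uses naive Euler integration with a step size of my choosing - not Lorenz's own numerics - and counts how often the trajectory hops between the two lobes of the figure it traces out:

```python
# Sketch: stepping Lorenz's convection-cell equations with simple Euler
# integration. Step size, run length, and starting point are illustrative
# choices, not Lorenz's originals.

def step(state, dt=0.005):
    x, y, z = state
    return (x + dt * (10.0 * (y - x)),        # dx/dt = 10y - 10x
            y + dt * (28.0 * x - x * z - y),  # dy/dt = 28x - xz - y
            z + dt * (x * y - (8.0 / 3.0) * z))  # dz/dt = xy - 8z/3

state = (1.0, 1.0, 1.0)
lobe_switches = 0
prev_sign = 1.0
for _ in range(20000):            # 100 time units at dt = 0.005
    state = step(state)
    if state[0] * prev_sign < 0:  # x changed sign: crossed to the other lobe
        lobe_switches += 1
        prev_sign = -prev_sign

# The trajectory stays bounded but never settles down or repeats,
# hopping between the two lobes at irregular intervals.
print(lobe_switches)
```

The irregular hopping is the "nonperiodic" part: the system is bounded and deterministic, yet never falls into a repeating cycle.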
Plot the trajectory this system traces through (x, y, z) space and you get the famous two-lobed figure[2]. With a little thought you can see why it is so sensitive. The trajectory starts at the very outside bottom, at the end of the line; actually, we can imagine it to be two separate trajectories, so close together that at this resolution we can't distinguish them. They loop around the right circuit and end up right towards the center of the left circuit. Then they slowly spiral out clockwise; each lap around the left circuit roughly doubles the distance to the innermost edge of the circuit, so the distance between the two trajectories doubles each lap as well. Eventually they reach the outer half of the left circuit and switch back over to the right circuit. They then loop counter-clockwise around the right circuit (distance from the center, and thus from each other, doubling each pass) until they reach the outer half of the right circuit and cross back over to the left, and so on forever. Where the sensitivity comes in is now clear: since the distance between the two trajectories keeps growing, eventually we reach a point where one is on the outer half of a circuit and the other is on the inner; the first crosses over while the other takes one last loop, and from there it's all over. The two trajectories are now independent of each other and will only ever come close again by chance.
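"Doubles each lap" is just exponential growth, and you can see it numerically: follow two nearby trajectories and the logarithm of their separation climbs roughly linearly in time until the gap saturates at the size of the whole figure. A sketch (Euler stepping and the one-part-in-a-billion nudge are my illustrative choices):

```python
import math

# Sketch: the log of the separation between two nearby trajectories of
# Lorenz's equations grows roughly linearly in time - i.e., the gap
# itself grows exponentially - before eventually saturating.

def step(state, dt=0.005):
    x, y, z = state
    return (x + dt * (10.0 * (y - x)),
            y + dt * (28.0 * x - x * z - y),
            z + dt * (x * y - (8.0 / 3.0) * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.000000001)   # nudged by one part in a billion
log_gaps = []
for i in range(4000):          # 20 time units at dt = 0.005
    a, b = step(a), step(b)
    gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    if i % 400 == 0:
        log_gaps.append(math.log(gap))

# Successive entries rise by roughly similar amounts on average:
# exponential growth of the gap, just as "doubling each lap" predicts.
print(log_gaps)
```

Start the two copies a billion times closer together and you only delay the divergence; you never prevent it.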
Lorenz now realized he had the explanation for the failure of weather forecasting over the previous decade. If weather is a system of this sort[3] then the model isn't the problem; even if your model is identical to the thing modeled, you literally need infinite precision in your input to guarantee accuracy. This is very different from how traditional mathematical physics worked: in a classical problem, say calculating the trajectory of a cannonball, the accuracy of your prediction scales pretty much with the accuracy of the numbers you feed in. In a chaotic system, it's the length of time for which your prediction stays valid that scales with the number of accurate digits you feed in; after that, your predictions are essentially useless. Long-term weather prediction is impossible. Lorenz published a paper on the subject, "Deterministic Nonperiodic Flow", in 1963; around the same time he noted that if the theory were true, "one flap of a seagull's wings could change the course of weather forever". In later talks and in popular culture, the metaphor was refined. Imagine you have a model of the Earth's weather that is perfect. Every gust, every cloud, every hill and valley, every North Korean nuclear weapons test, is modeled exactly. You've accounted for the Milankovitch cycles and the fall of micrometeorites and the fluctuations of the sun. Your model is exact - except that it misses a single butterfly flapping its wings in the Amazon Rainforest. What chaos theory says is that, as a result, your model will eventually get completely out of sync with the real world.
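That scaling can be made concrete. If errors grow like e^(λt), an initial error δ reaches your tolerance Δ at roughly t = ln(Δ/δ)/λ - so each extra digit of measurement precision buys only a fixed extra slice of forecast time, no matter how many digits you already have. A sketch with made-up illustrative numbers (a two-day error-doubling time; real atmospheric figures vary):

```python
import math

# Sketch: forecast horizon vs. input precision, with an illustrative
# two-day error-doubling time (a made-up number, not a measured one).
doubling_days = 2.0
lam = math.log(2.0) / doubling_days   # exponential growth rate per day

def valid_days(initial_error, tolerance=1.0):
    """Days until an error growing like e^(lam*t) reaches the tolerance."""
    return math.log(tolerance / initial_error) / lam

for digits in range(1, 7):
    err = 10.0 ** -digits   # measuring the starting state to `digits` decimals
    print(digits, "digits ->", round(valid_days(err), 1), "days")

# Each additional digit adds the same ~6.6 days, forever: precision buys
# forecast time only logarithmically.
```

Going from one digit of precision to six multiplies your measurement effort enormously, but only multiplies your usable forecast window by six.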
The math and physical measurements we have on the subject say that a model that good would be meaningfully accurate for something rather less than a year.
------------------------------------------------------------------------
Tomorrow: so how does this apply to AH?
[1] Well, like toy versions of the real world. Lorenz's model mentioned above is trying to explain how air behaves when heated from below (it forms convection cells) with nothing else involved. Predicting hurricanes, as in my example, was well beyond the goals of early-sixties modeling.
[2] Called the Lorenz Attractor; its butterfly shape is not where the term "Butterfly Effect" comes from, although it does make for a nice illustration.
[3] This sort of system is called a "chaotic system", although you may note the Lorenz Attractor above is totally deterministic and even rather orderly. Mathematical "chaos" and chaos in the conventional sense don't have much in common - really it's a bad name, although it sounds snazzy.