Monday, March 26, 2007

The Probability of Clogging

Much higher if you're in Holland ...


General Probabilistic Approach to the Filtration Process

N. Roussel, T.L.H. Nguyen, and P. Coussot

PRL 98, 114502 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e114502

The authors start with a somewhat surprising experimental observation. Imagine taking a mesh grid and using it to filter particles of a characteristic size. You would probably expect one of the two following outcomes:
• If the particles are smaller than the holes in the mesh, all the particles pass through.
• If the particles are larger than the holes, none of them pass through.
Interpolating between these two, you might expect some linear crossover as the particles get larger. The authors' surprising result is that perfect clogging takes place before the particle size exceeds the mesh size, and the crossover is rapid.

To explain this observation, they develop a probabilistic model. In the model, a pore will become clogged if the right number of particles arrive at the right location within a certain period of time. Their model reproduces experimental observations quite well, with perfect clogging for particles smaller than the holes as long as the volume fraction is large enough. It contains only a single free parameter that depends on properties of the fluid flow.
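The basic coincidence idea is easy to play with. Here is a toy Monte Carlo sketch of my own (not the authors' actual model): particles arrive at a pore as a Poisson process whose rate stands in for the volume fraction, and the pore is declared clogged if at least m arrivals fall within a time window tau. The threshold m and window tau are hypothetical placeholders for the details the authors fold into their single fitting parameter.

    import numpy as np

    rng = np.random.default_rng(0)

    def clog_probability(rate, m=5, tau=0.5, t_max=50.0, trials=2000):
        """Fraction of runs in which at least m Poisson arrivals fall within some window tau."""
        clogged = 0
        for _ in range(trials):
            n = rng.poisson(rate * t_max)
            times = np.sort(rng.uniform(0.0, t_max, n))
            # clogged if the (i+m-1)-th arrival comes within tau of the i-th, for any i
            if n >= m and np.any(times[m - 1:] - times[:n - m + 1] <= tau):
                clogged += 1
        return clogged / trials

    for rate in [0.5, 1.0, 1.5, 2.0, 3.0]:   # arrival rate, a stand-in for volume fraction
        print(rate, clog_probability(rate))

Even this toy version switches from almost never clogging to almost always clogging over a modest range of arrival rates, which is the kind of sharp, all-or-nothing crossover the paper is about.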

The authors extend their model to a porous medium by considering the medium as a stack of meshes with some characteristic spacing. Their model of a single sieve determines the outflow as a fraction of the inflow, so the extension to many layers leads to a geometric series.
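To make the stacking argument concrete, here is a minimal sketch in my own notation (not the authors'): if each sieve transmits a fraction p of the particles that reach it, the flux after n layers is p^n of the original, the amount trapped in layer n is p^(n-1)(1-p), and the totals are just geometric series.

    # Toy stacked-sieve bookkeeping: each layer transmits a fraction p of what reaches it.
    # The value of p is hypothetical; in the paper it would come from the single-sieve model.
    p = 0.8
    n_layers = 10

    flux = 1.0
    for n in range(1, n_layers + 1):
        trapped = flux * (1 - p)      # trapped in layer n: p**(n-1) * (1 - p)
        flux *= p                     # flux continuing on to layer n+1: p**n
        print(f"layer {n:2d}: trapped {trapped:.4f}, outflow {flux:.4f}")

    print("total trapped:", 1 - p**n_layers)   # the geometric series sums to 1 - p**n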

The model and its predictions don't run contrary to my intuition, but I would not say the results are "intuitively obvious to even the most casual of observers." This is a very nice example of model building. The authors captured the essential features of clogging in a simple coincidence model, and collected all the complications of fluid dynamics into a single parameter. It's easy to add terms to the model to make it more realistic, but the authors have isolated the features that lead to the general "all or nothing" behavior observed in experiments.

Lorentz Boost in Graphene

Novel Electric Field Effects on Landau Levels in Graphene

Vinu Lukose, R. Shankar, and G. Baskaran

PRL 98, 116802 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e116802

Pretty slick! The authors solve a problem in graphene by using a Lorentz boost to transform away the electric field.

The authors investigate a graphene system with a magnetic field perpendicular to the surface and an electric field parallel to the plane. The magnetic field leads to the formation of Landau levels.

Since the low-energy physics is described by a Lorentz invariant Hamiltonian with the speed of light replaced by the Fermi velocity, the authors use a Lorentz boost to eliminate the electric field. This transformation works as long as the electric field is smaller than the magnetic field. (If B is smaller, can one transform to a frame where there is only an electric field?) The resulting Hamiltonian is a Landau-level Hamiltonian with a rescaled magnetic field.

The authors solve the model, then transform back to the lab frame. The surprising result is that the spacing of the Landau levels decreases with increasing electric field, and the levels become degenerate for E = B. This does not happen in a conventional 2DEG, where the level spacing is independent of E. Another difference between graphene and the 2DEG is that the centers of the harmonic oscillator functions in graphene shift with E, and the dependence on position vanishes when E=B!
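If I remember the result correctly, the level spacing picks up a factor (1 - beta^2)^(3/4), with beta = E/(v_F B), so the levels squeeze together as the electric field grows and collapse at beta = 1. A quick numerical sketch of that scaling (the prefactor is the standard zero-field graphene Landau-level formula; treat the whole thing as my reconstruction rather than the paper's expression):

    import numpy as np

    # Graphene Landau levels in crossed fields, using the collapse factor as I understand it:
    # E_n ~ sgn(n) * sqrt(2 * hbar * e * v_F**2 * B * |n|) * (1 - beta**2)**0.75, beta = E/(v_F*B).
    hbar, e = 1.054e-34, 1.602e-19
    v_F = 1.0e6        # m/s, typical graphene Fermi velocity
    B = 10.0           # tesla

    def landau_level(n, beta):
        return np.sign(n) * np.sqrt(2 * hbar * e * v_F**2 * B * abs(n)) * (1 - beta**2) ** 0.75

    for beta in [0.0, 0.5, 0.9, 0.99]:
        spacing_meV = (landau_level(1, beta) - landau_level(0, beta)) / e * 1e3
        print(f"beta = {beta:4.2f}:  E_1 - E_0 = {spacing_meV:6.1f} meV")

At beta = 0 this gives the familiar ~100 meV gap at 10 T; by beta = 0.99 it has shrunk by more than an order of magnitude.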

To show that these results are not a superficial consequence of the low-energy theory, the authors diagonalize a tight-binding model and demonstrate the same phenomena. The shift of the centers of the various Landau levels leads them to predict a new kind of dielectric breakdown for graphene.

I found the idea of using a boost to eliminate the electric field very clever.

Thursday, March 22, 2007

Aharonov-Bohm Effect for Neutral Particles

Classical and Quantum Interaction of the Dipole

Jeeva Anandan

PRL 85, 1354-1357 (2000)

URL: http://prola.aps.org/abstract/PRL/v85/i7/p1354_1


In this paper, Anandan derives a relativistic Lagrangian for a neutral particle interacting with an electromagnetic field. The particle interacts via its electric and magnetic dipole moments. Anandan derives covariant expressions for the Lagrangian, velocity, and forces on the particle that take advantage of the duality of the electric and magnetic dipoles.

There are two main results derived in this paper:
• A neutral particle has its own analog of the Aharonov-Bohm effect. The dipole moment (the sum of the electric and magnetic dipoles) leads to a Yang-Mills field strength and a topological phase, just as a gauge potential does for charged particles.
• At low energies, the dipole moment enters the Hamiltonian in exactly the same way that the gauge potential enters the Hamiltonian of a charged particle.

When Anandan calculates the forces on a neutral particle with dipole moments, he finds several new terms in the forces acting on the particle in its rest frame. He concludes by proposing some experiments that would demonstrate the Aharonov-Bohm effect for neutral particles.

I really enjoyed the derivation of the relativistic Lagrangian. I've not seen quantum mechanics derived from a covariant Lagrangian --- only quantum field theories. The relativistic Lagrangian would fit right into the machinery of path integrals. Perhaps there's no real benefit in treating the problem this way --- maybe you would just end up with the same forces Anandan arrived at by his Hamiltonian methods. Still, it saves you the trouble of quantizing the theory.

I found this paper when I came across a letter to the editor by Ivezic: http://link.aps.org/abstract/PRL/v98/e108901.

Ivezic's main point seems to be that Anandan should have used the 4-velocity dx/d(tau) instead of dx/dt. I don't see any major differences, although Ivezic claims his work "significantly influences" the low-energy Lagrangian. Strangely, he does not show what the corrections to Anandan's expressions are. That would have been useful.

Wednesday, March 21, 2007

Atomic Tests of GR

Testing General Relativity with Atom Interferometry

Savas Dimopoulos, Peter W. Graham, Jason M. Hogan, and Mark A. Kasevich

PRL 98, 111102 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e111102

Wild! The authors demonstrate how an atomic physics experiment could be used to probe non-Newtonian effects predicted by general relativity. They are not the first to propose using the methods of atomic physics for this purpose, but they claim to be the first to have really worked out all the details of their setup.

The setup is similar to the initial stage of a fountain clock. A cold cloud of atoms is launched upward, then falls under the influence of gravity back down to the bottom of the chamber. The authors propose using a series of laser pulses to make an atomic version of a Mach-Zehnder interferometer.

The interferometer works by splitting a beam of coherent light. One part of the beam passes through a sample while the other does not. In the absence of a sample, the two paths are identical, and there is no phase shift between the two beams. The sample alters the path of one of the beams, and information about the sample is obtained from the phase difference.

In the atomic version described by the authors, a photon is used as the beam splitter. An atom on its way up interacts with a photon which puts it into a superposition of velocity eigenstates. According to GR, the geodesics an atom follows are determined by its initial position and velocity. As a result, the atom follows two geodesics simultaneously and interferes with itself when it recombines!

The geodesics can be calculated by solving the equations of general relativity. The phase difference can then be calculated. The metric from which the geodesics are derived contains Newtonian gravity and higher order terms. By studying the effects of these higher order terms on the phase difference of the two paths, one can test GR's predictions of non-Newtonian phenomena.
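For a sense of scale, the leading (purely Newtonian) contribution to the phase difference in a light-pulse interferometer of this type is the standard k_eff * g * T^2 term, and the relativistic corrections the authors tabulate are tiny fractions of it. A rough numerical sketch with generic numbers of my own choosing (not the authors' parameters):

    import numpy as np

    # Leading-order phase shift of a light-pulse atom interferometer: phi = k_eff * g * T**2.
    # All numbers below are generic illustrations, not values from the paper.
    wavelength = 780e-9                    # m, Rb D2 line
    k_eff = 2 * (2 * np.pi / wavelength)   # two-photon effective wave vector
    g = 9.81                               # m/s^2
    T = 1.0                                # s, time between interferometer pulses

    phi = k_eff * g * T**2
    print(f"Newtonian phase:        {phi:.3e} rad")
    # a correction suppressed by, say, one part in 10^9 relative to the leading term
    print(f"part-in-1e9 correction: {phi * 1e-9:.3e} rad")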

Table II in the text lists 8 different contributions to the phase difference in the proposed experiment. Four of these are terms from general relativity that are not present in Newton's theory of gravity.

The current precision of atom interferometry is not good enough to probe these effects, but such experiments could already provide more precise tests of the equivalence principle. The authors believe that technical developments in the field will eventually make the precision good enough to probe the other effects.

One big improvement would come in the signal to noise ratio. For uncorrelated atoms, this scales with the square root of the number of atoms. If the atoms are entangled, however, the ratio scales linearly with the number of atoms. This could improve the precision by several orders of magnitude.
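To put numbers on that: with N ~ 10^6 uncorrelated atoms the signal-to-noise ratio goes as sqrt(N) ~ 10^3, while the same number of entangled atoms would in principle give N ~ 10^6 --- a factor of a thousand, i.e. three orders of magnitude.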

The benefits of atomic experiments over astrophysical tests of GR are twofold. First, there is the accuracy. Clocks can be synchronized to a part in 10^16. More important is control. In an atomic physics experiment, you can modify the setup to isolate some effect you wish to observe. If you're looking through a telescope, you simply collect data on the setup Nature has provided.

I find the idea of using entangled quantum states to probe general relativity very interesting. Popularizations would have you believe the two theories are totally incompatible. If that's the case, then the experiments proposed simply won't work. If not, then Brian Greene and his buddies need to do a better job of explaining exactly where the two theories clash.

Monday, March 19, 2007

MOND on Earth

Is Violation of Newton's Second Law Possible?

A.Y. Ignatiev

PRL 98, 101101 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e101101

Ignatiev analyzes the classical equations of motion for a particle in a noninertial frame to determine the feasibility of testing theories of Modified Newtonian Dynamics on earth.

MOND is the idea that Newton's Second Law, F=ma, is not valid for very small accelerations. Below some characteristic acceleration a0, the force vanishes faster than a --- something like F=ka^p, where p>1. The theory can explain all current data on galactic rotation, and Bekenstein has generalized MOND into a covariant theory. Ignatiev now proposes that we could test MOND with a ground-based experiment.
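To be a bit more precise about the standard formulation (this is my summary of Milgrom's prescription, not anything specific to Ignatiev's paper): the Newtonian relation is multiplied by an interpolating function,

F = m a \mu(a/a_0), where \mu(x) \to 1 for x >> 1 and \mu(x) \to x for x << 1,

so in the deep-MOND regime F \approx m a^2 / a_0 --- the p = 2 case of the form above --- with a_0 \approx 1.2 x 10^-10 m/s^2.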

He starts by looking at the equation of motion for a particle on earth with respect to the center of mass of the galaxy. The acceleration of the particle in this noninertial frame depends on five parameters, and Ignatiev has already neglected several terms that are small compared to a0: Coriolis acceleration of the sun, variations in the length of the day on Earth, precession and nutation of the earth's rotation axis, polar motion, and something called Chandler's wobble.

Ignatiev uses a pretty clever continuity argument to show that there are at least two times during the year where the acceleration vanishes. The acceleration during the summer solstice is greater than zero, and that during the winter solstice is less than zero; therefore, the acceleration must vanish at least twice a year (roughly at the time of the spring and autumn equinoxes).

For a brief window of time --- on the order of a second --- the acceleration parallel to the Earth's angular velocity vanishes. This can be used to find two specific locations on the Earth's surface where the perpendicular acceleration vanishes as well.

This leads the author to predict that there is a window of opportunity twice a year of about 1 second at two antipodal regions on the earth where the acceleration relative to the center of the galaxy is smaller than a0. At this instant, it should be possible to observe any effects of MOND that are believed to occur in astrophysical processes.

Later, the author goes on to analyze the case where the apparatus is moving with constant velocity relative to the lab frame. This relaxes the constraint on geographic location, because the velocity can be tuned to effect the same cancellation of terms that resulted from location for a stationary apparatus.

The effect one should look for in one of these experiments is a spontaneous displacement of a test body at the instant the total acceleration changes sign.

Ignatiev predicts a displacement amplitude on the order of 10^-17 m in a time interval of about 0.5 ms. This sounds undetectable, but he points out that LIGO is supposed to measure displacements an order of magnitude smaller than this. He also mentions the possibility of measuring the effects with a torsional balance of the variety used to look for deviations from the 1/R^2 dependence of the gravitational force at small distances.
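The quoted numbers are at least self-consistent with a naive kinematic estimate: if the residual acceleration is of order a0 for a window of half a millisecond, the accumulated displacement is (1/2) a0 t^2. A back-of-the-envelope check (mine, not the paper's):

    a0 = 1.2e-10   # m/s^2, the commonly quoted MOND acceleration scale
    t = 0.5e-3     # s, the window quoted by Ignatiev

    print(f"displacement ~ {0.5 * a0 * t**2:.1e} m")   # of order 10^-17 m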

Ignatiev's point seems to be that although MOND was created to explain phenomena on astrophysical scales, it makes predictions for terrestrial objects that can be tested with existing technology.

It seems to me that the ability to gather data only twice a year for less than a second would put serious limits on statistics. But I guess the guys at Fermilab are claiming to have observed the Higgs boson based on three or four events. An entire second worth of data might convince these guys!

I can't imagine anyone getting the funding to build another LIGO experiment in Greenland to test this prediction. If you take money out of the equation, you can build your detector in space, free from the earth's rotation. If you put it at one of the Lagrange points, you could also eliminate most gravitational effects. Might this allow for longer observation windows? Or are the gravitational and rotational effects necessary to cancel one another?

The idea that MOND has a preferred reference point --- the center of the galaxy --- that gives rise to special points on the Earth's surface gives it a mystical quality. I could see people building something like Stonehenge around these points, gathering twice a year to watch the needle on a detector move, indicating the exact cancellation of all accelerations.

Of course, there is no universal reference frame. The center of the galaxy is only preferred in this case because it's the biggest gravitational mass around. In Andromeda, it would be the center of Andromeda: i.e., there is a locally preferred reference frame, but not a universally preferred one.

Friday, March 16, 2007

Faraday Waves in a BEC

Observation of Faraday Waves in a Bose-Einstein Condensate

P. Engels, C. Atherton, and M.A. Hoefer

PRL 98, 095301 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e095301

In 1831, Faraday observed the formation of patterns on thin layers of liquid on top of an oscillating piston. (There's an excellent video of this on YouTube, created by Robert Deegan, Florian Merkt, and Harry Swinney of the Center for Nonlinear Dynamics at the University of Texas.) According to the authors, this is one of the first scientific investigations of dynamically generated pattern formation.

One interesting aspect of Faraday waves is their nonlinear origin. The container is simply driven up and down, yet a pattern of waves, oscillating at half the driving frequency, forms on the surface. These waves are not fundamental modes of a vibrating cylindrical membrane; they arise from the nonlinear terms describing the fluid motion.
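The half-frequency response is the classic signature of parametric resonance. Here is a minimal sketch of the mechanism using a damped Mathieu oscillator standing in for a single surface mode --- the parameter values are illustrative, not anything from the paper:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Damped Mathieu equation: x'' + gamma*x' + w0^2*(1 + eps*cos(w_d*t))*x = 0.
    # Modulating the "spring constant" at w_d = 2*w0 pumps the mode, which then
    # rings near w0, i.e. at half the driving frequency -- the Faraday subharmonic.
    w0, gamma, eps = 1.0, 0.02, 0.3
    w_d = 2.0 * w0

    def rhs(t, y):
        x, v = y
        return [v, -gamma * v - w0**2 * (1.0 + eps * np.cos(w_d * t)) * x]

    sol = solve_ivp(rhs, [0.0, 200.0], [1e-3, 0.0], max_step=0.05)
    print("initial amplitude:  ", 1e-3)
    print("late-time amplitude:", np.abs(sol.y[0][-500:]).max())   # grows: parametric instability

Detune the drive away from 2*w0 or crank up the damping and the growth disappears; the saturation into a steady pattern is where the nonlinearities come in.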

The authors do something similar with a Bose-Einstein condensate. They create a cigar-shaped BEC, force the radial potential confining the atoms to oscillate periodically, and then observe periodic variations in the density along the long axis of the cigar. They also investigate the regime of very strong driving by exciting a radial breathing mode of the condensate and driving it near its resonant frequency.

This didn't sound too surprising until I considered the symmetry of the system. Imagine taking an infinite cylinder (or, like the authors, a 250 micron BEC cigar), and periodically varying the radius. The system still has translational symmetry on the long axis, yet waves form. The continuous translation symmetry is broken into a discrete symmetry by driving radial fluctuations.

As with many good experimental papers, the graphs tell the story remarkably well. No wonder many people lament the decline in popularity of printed journals. You don't get to browse through all the figures in an online version. What's to catch your eye and tempt you to read an article outside your area of specialization? Very few titles in PRL are "catchy" ...

Watching a BEC Condense

Observing the Formation of Long-Range Order during Bose-Einstein Condensation

Stephan Ritter, Anton Öttl, Tobias Donner, Thomas Bourdel, Michael Köhl, and Tilman Esslinger

PRL 98, 090402 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e090402

Neat experiment!

The authors watched a Bose-Einstein condensate form. They took a cloud of bosonic atoms and shock cooled it (by lowering the temperature and getting rid of the hottest 30% of the population). This left a non-equilibrium gas of atoms whose equilibrium state was a Bose-Einstein condensate.

By measuring the formation of an interference pattern between two separated regions of the cloud, the authors were able to watch off-diagonal long-range order develop in real time. The visibility of the interference pattern measures the phase coherence of the two regions.

The feature that differentiates a Bose-Einstein condensate from a cloud of cold atoms is off-diagonal long-range order. This means that the wave functions describing the density at regions far apart from one another maintain phase coherence --- kind of like entanglement, I suppose. It's known that this is the correct order parameter to study for BEC, superfluid, and superconducting phase transitions, but exactly how it goes from zero to nonzero --- the dynamics of the phase transition --- is an interesting question.
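In symbols (the standard definition, not a formula quoted from the paper), the order parameter is the first-order correlation function

g_1(r, r') = \langle \Psi^\dagger(r) \Psi(r') \rangle,

and off-diagonal long-range order means g_1 stays finite as |r - r'| goes to infinity. The fringe visibility the authors measure is essentially a measure of |g_1| at the separation of the two outcoupled regions.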

There are two stages of the condensation process.

• Kinematic: Collisions between particles bring the system into its equilibrium state --- i.e. the energy distribution is thermalized. The time scale of this phase of condensation is set by the collision time.

• Coherent: In this stage of condensation, small regions of coherent atoms merge into a single coherent state.

The authors find that the size of coherent regions grows at a speed about 1/5 the maximum speed of sound in the cloud. As far as I can tell, there is no theoretical estimate of what this speed should be --- just the intuitive idea that it shouldn't be faster than the speed of sound.

The plots in the paper really tell the story. The authors have done an excellent job of presenting their data.

Thursday, March 15, 2007

Dissipated Work

Dissipation: The Phase-Space Perspective

R. Kawai, J.M.R. Parrondo, C. Van den Broeck

PRL 98, 080602 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e080602

I attended a couple of interesting focus sessions on non-equilibrium thermodynamics at this year's APS March Meeting in Denver. This is the first paper I've read on the subject since returning.

The authors show how the dissipated work (the work you have to do in addition to the free energy change) for a non-equilibrium process can be calculated. They imagine a Hamiltonian with a single control parameter evolving from H(A) to H(B). By considering the trajectory in phase space and the time-reverse of the same trajectory, they show how one can calculate the dissipated work, even though the system does not remain in equilibrium.

Their derivation makes use of three ideas:
⁃ When the system is in equilibrium, the density of states in phase space is determined by the Hamiltonian and the partition function.
⁃ The phase space density is conserved along a Hamiltonian trajectory. (I believe this is Liouville's Theorem.)
⁃ Since the evolution is deterministic, the entire trajectory of the system through phase space is determined by its location in phase space at some particular time. (Sounds similar to Hamilton's principle.)

The third idea implies that if we know the location of the system in phase space at some particular time, we can extrapolate backward to the initial state by following the trajectory that passes through this point. The second idea implies that the density of states is the same at both of these points. The first means that if the evolution began in an equilibrium state, we can relate the density of states at any point along the trajectory to the initial density of states. Putting this all together, if we know the density of states at any time during the evolution, we can determine the Hamiltonian of the initial state of the system.

In order to calculate the work, we need to know the total change in the Hamiltonian over the course of the evolution:

W = H(B) - H(A)

If the system remained in equilibrium, or if it was in equilibrium at the end of the process, we could determine the Hamiltonian from the density of states. But this is not the case, in general.

To get around this, the authors consider the time reverse of the evolution. Starting in equilibrium with the final state, the state is evolved backward in time to the initial state. For this process, knowledge of the density of states at any point along the trajectory allows one to determine H(B).

By taking the ratio of the density of states for the forward and reverse paths at the same point in time, one can calculate the dissipated work for the process connecting the initial and final states, even if the system does not remain in equilibrium. To quote the authors, "The dissipated work is fully revealed by the phase-space density of forward and backward processes at any intermediate time of the experiment."

The authors establish the equivalence of their calculation and a well-known concept (to experts in the field) called the relative entropy. They go on to show that their algorithm can place a lower bound on the dissipated work even if one does not have perfect knowledge of the density of states. I.e., coarse graining the density of states decreases the calculated quantity. It provides a better estimate of the dissipated work than the second law of thermodynamics, which says the dissipated work is greater than or equal to zero.
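Writing it out the way I understand it (so take the notation as mine, not necessarily the paper's), the central identity is

W_{diss} = \langle W \rangle - \Delta F = k_B T \int d\Gamma \, \rho(\Gamma, t) \ln [ \rho(\Gamma, t) / \tilde\rho(\tilde\Gamma, t) ],

where \rho is the forward phase-space density, \tilde\rho the density of the reverse process, and \tilde\Gamma the time-reversed phase point (momenta flipped), all evaluated at the same intermediate time t. The right-hand side is k_B T times a relative entropy, and coarse graining can only decrease a relative entropy, which is why imperfect knowledge of the densities still yields a lower bound.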

Wednesday, March 14, 2007

Finger Rafting

Finger Rafting: A Generic Instability of Floating Elastic Sheets

Dominic Vella and J.S. Wettlaufer

PRL 98, 088303 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e088303

Another good recommendation from the editors of PRL.

The authors show that the phenomenon of finger rafting observed in ice floes is not related to an intrinsic property of water, but is a general occurrence for floating elastic sheets. Finger rafting describes the interlocking protrusions that form when two ice sheets collide. As shown in Figure 1, it looks very similar to interlocking fingers.

The authors analyze the plate equation for a two dimensional semi-infinite sheet floating on water with a delta function source at the edge. The plate equation describes vertical displacements of the sheet. It is similar to Laplace's equation for an electrostatic potential. However, Laplace's equation is quadratic in derivatives while the plate equation is quartic. This leads to solutions that both decay exponentially and oscillate.

I was able to solve the one-dimensional version of the equation using a Fourier transform, and it illustrates the origin of this phenomenon. Laplace's equation is quadratic in k. Inverting the transform, one has two simple poles which lie either on the real axis or the imaginary axis. If the poles lie on the real axis, the function oscillates; if they lie on the imaginary axis, the function decays exponentially. When inverting the transform for the quartic equation, there are four poles, two of which must be included in the contour integral. For the plate equation, the poles lie on the lines Re(z) = +/- Im(z) --- i.e., they have both real and imaginary parts. This leads to functions that decay exponentially and oscillate.
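For anyone who wants to reproduce the 1D calculation: after nondimensionalizing, the equation is w'''' + w = \delta(x), its Fourier transform is w(k) = 1/(k^4 + 1), and inverting numerically shows the decaying oscillation directly. A quick sketch of my own check (arbitrary units):

    import numpy as np

    # Invert w(k) = 1/(k^4 + 1), the Green's function of the nondimensionalized
    # 1D plate-on-water equation w'''' + w = delta(x).  Units are arbitrary.
    k = np.linspace(-40.0, 40.0, 2**16)
    dk = k[1] - k[0]
    wk = 1.0 / (k**4 + 1.0)

    x = np.linspace(0.0, 15.0, 16)
    w = np.array([(wk * np.cos(k * xi)).sum() * dk for xi in x]) / (2 * np.pi)

    # Closed form (up to normalization): exp(-x/sqrt(2)) * cos(x/sqrt(2) - pi/4),
    # so w changes sign periodically while its envelope decays exponentially.
    for xi, wi in zip(x, w):
        print(f"x = {xi:5.2f}   w = {wi:+.5f}")

The first zero of this function is what sets the finger width in the authors' argument.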

The oscillations are what give rise to finger rafting. The authors model the collision process as two semi-infinite plates colliding, with one plate having a slight protrusion that rises over the other. This leads to a localized deformation in each plate, but the forcing terms in the respective plate equations have opposite signs. As a result, the oscillations are exactly out of phase: the maxima of one plate face the minima of the other plate. The plates only touch where the displacement is zero, so the zeros of the displacement function determine the size of the fingers. Since the function decays exponentially, the authors only consider the first zero.

They find reasonable agreement with experiment, although not all of their experimental data for wax fall on the theoretical line. They attribute discrepancies to reasonable causes like nonuniform edges and buckling. The authors also point out that the formation of fingers would be a dynamic process --- the deformations travel at the wave velocity of the floating sheet. For sea ice, they estimate this to be about 5 m/s. That would be a neat process to watch!

An interesting feature of the results is their universality. The characteristic length scale and pressure are set by material properties, and the results are given in terms of these scaled variables. As a result, the theory can be applied to elastic sheets in microscopic systems as well as to the dynamics of tectonic plates.

A final note: The two-dimensional version of the plate equation involves an integral very similar to the one I carried out in the one-dimensional case. The exponential is replaced by a Bessel function, the measure becomes k dk, and the limit of integration is 0 to infinity rather than all space. I have not been able to work this one out yet.

Tuesday, March 13, 2007

Photonic Honeycomb Lattice

Conical Diffraction and Gap Solitons in Honeycomb Photonic Lattices

Or Peleg, Guy Bartal, Barak Freedman, Ofer Manela, Mordechai Segev, and Demetrios N. Christodoulides

PRL 98, 103901 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e103901


(For anyone who's not me reading this, let me add at this point that I am a condensed matter theorist studying carbon nanotubes and graphene. These low-dimensional carbon systems have a lot of interesting properties due to the unusual band structure of the honeycomb lattice.)

Interesting article. The authors have created a photonic honeycomb lattice. I don't understand how it works. In the authors' words, "We use the optical induction technique to induce a honeycomb lattice on a photorefractive SBN:75 crystal." Is the crystal similar to a diffraction grating? Perhaps interference of reflections from different parts of the crystal generates a standing wave pattern whose maxima are the vertices of a honeycomb lattice.

The authors report both theoretical and experimental results. Their theoretical work suggests that the linear dispersion near the Dirac points (diabolical points) of a honeycomb lattice would lead to a phenomenon called conical diffraction. Apparently, Hamilton discovered conical points back in 1837! They are a hot topic in modern research now; it's hard to believe they've been known for so long.
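As a reminder of where that linear dispersion comes from (this is just the textbook nearest-neighbor tight-binding result for a honeycomb lattice, not anything specific to the photonic system):

    import numpy as np

    # Nearest-neighbor tight-binding band of a honeycomb lattice (nearest-neighbor distance 1):
    # E(k) = +/- t * |1 + exp(i k.a1) + exp(i k.a2)|, which vanishes linearly at the K points.
    t = 1.0
    a1 = np.array([1.5,  np.sqrt(3) / 2])
    a2 = np.array([1.5, -np.sqrt(3) / 2])
    K  = np.array([2 * np.pi / 3, 2 * np.pi / (3 * np.sqrt(3))])   # a Dirac (diabolical) point

    def energy(k):
        return t * abs(1 + np.exp(1j * k @ a1) + np.exp(1j * k @ a2))

    for dk in [0.2, 0.1, 0.05, 0.025]:
        print(f"|k - K| = {dk:5.3f}   E = {energy(K + np.array([dk, 0.0])):.4f}")
    # E is proportional to |k - K| (slope 3t/2 along this direction): a cone in reciprocal space.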

Apparently, the diabolical points discovered by Hamilton exist in the space of possible polarizations in a biaxial crystal. If a randomly polarized beam strikes a biaxial crystal at the proper angle, it will be refracted into a cone. The diabolic points of the honeycomb are quite different, as the authors point out. They arise from the symmetry of the lattice. Moreover, the conical points in the honeycomb lattice occur in reciprocal space, not the space of possible polarizations.

I'd have to review the paraxial approximation before I made any deeper investigations.

It's interesting to me how big the photonic lattice is compared to the carbon lattices I study. The lattice constant of graphene is about 2.5 angstroms, or a quarter of a nanometer. The lattice spacing in the authors' experiment is 8 microns --- it's larger by a factor of 32 000! The crystal they use to generate the photonic lattice is measured in millimeters: you could see it sitting on a table, unlike a graphene sample. A nanotube can be centimeters long, but it's only a nanometer wide --- smaller than the wavelength of visible light. You'll never be able to see a nanotube on the table.

There are apparently some unique features in the soliton spectrum of the honeycomb lattice, but I know very little about solitons. Looking at the graphs, it is clear to me that the solitons inherit the threefold rotational symmetry of the underlying lattice. Solitons sound similar to excitons. They only exist in bandgaps, so you only see them for higher bands in graphene and metallic nanotubes.

The key feature of the honeycomb lattice, at least with respect to the optics experiments here, is conical diffraction. You send in a Gaussian, bell-shaped beam, and what comes out is a ring of constant thickness whose radius grows linearly as it propagates through the lattice.

That's a pretty amazing result. Could something similar happen in graphene? If you sent in an electron wave packet with a Gaussian profile, would you see some interesting conical diffraction? One can also consider the converse: do any of the interesting properties of graphene have an analog in the photonic lattice? There are many possibilities: quantized conductance, the relativistic quantum Hall spectrum, antilocalization, absence of backscattering. Obviously light couples to fields very differently than electrons do, but it seems that some of the scattering and transport properties might be common to both systems.

Keplerian Rotation

Spinning Discs in the Lab

Steven A. Balbus
Nature 444, p. 281-2 (2006)

URL: http://www.nature.com/nature/journal/v444/n7117/pdf/444281a.pdf

A relatively simple table-top experiment has shed light on astrophysical processes like accretion.

An accretion disc --- or any other gravitationally bound, rotating fluid disc --- has a velocity that is proportional to the square root of the distance from the center. Apparently this is well known. I can see it from an order of magnitude estimate. The mass inside a disc of radius R is roughly

M \sim \rho\pi R^2 t

where t is the thickness. Equating the centripetal acceleration and the acceleration due to gravity from this mass, one finds

v^2 = G \pi \rho t r.

However, when I try to actually integrate the potential for a uniform disc, I get logarithmic corrections that depend on a cutoff length at the center:

v^2 = G \pi \rho r \, \log [ (R^2 - r^2) / a^2 ]

where R is the radius of the disc, and a is a cutoff to handle the logarithmic divergence in my calculations. The agreement with the order of magnitude estimate is pretty good for r less than about R/2.

Of course, I haven't included any fluid dynamics considerations here --- it's just a continuum model --- no viscosity, no vorticity, no Navier, no Stokes. (It's times like these I wish my undergraduate education included a course on fluid dynamics --- or my graduate education, for that matter.)

O.K. Let's take v^2 \sim r for a Keplerian fluid, and see what the experiments had to say.

Researchers used two independently rotating cylinders (one inside the other) to simulate Keplerian rotation, where the velocity varies inversely with the square root of radius. The shocking result was ... nothing happened. The fluid rotated without turbulence.

According to Balbus, Rayleigh deduced this result for differentially rotating fluids a long time ago. The Rayleigh criterion states that if the specific angular momentum of a fluid increases with radius, then the flow is stable. But the Rayleigh criterion applies only to small, rotationally invariant disturbances. At high enough Reynolds number, it might not hold.

The experiment summarized by Balbus achieved a Reynolds number of 2 000 000, and found stable circulation.

What's interesting about this result is that turbulence in such a rotating fluid was thought to be a major channel for accretion discs to dissipate energy and transport angular momentum. If an accretion disc is rotating with all its particles in stable orbits, it won't accrete!

So what do the theorists do now? They turn from hydrodynamics to magnetohydrodynamics. Charged particles. While a neutral fluid is stable when the Rayleigh criterion is satisfied, "A magnetized gas becomes unstable when the angular velocity decreases as one moves away from the center," as is the case for Keplerian rotation. (This was a little confusing when I first read it. Balbus emphasizes that velocity grows as the square root of radius for a neutral disc, but then talks about the angular velocity decreasing with radius for the magnetized gas. The angular velocity of a Keplerian disc decreases with the square root of the radius.)

This article highlights an aspect of astrophysics that amuses me. Astrophysicists have some major problems getting things to work. They can't get accretion discs to accrete without magnetohydrodynamics, supernovae to explode without neutrinos, galaxies to rotate fast enough without dark matter or modified theories of gravity, or a universe to expand correctly without dark energy. It's surprising (to me, at least) that the simplest models of these phenomena just don't work. It's an exciting field.

Spacetime on a Chip

Better Geometry Through Chemistry

Randall D. Kamien
Science 315, p. 1083-4 (2007)

URL: http://www.sciencemag.org/cgi/content/summary/315/5815/1083

Short but sweet. In this article, Kamien describes an experimental method developed by Klein to print a metric onto a two-dimensional sheet. The idea is very simple, but the applications seem far-reaching.

By changing the spatial concentration of a chemical that contracts when heated, Klein and his colleagues can control where and how much a surface will curve.

You can make the perfect pringle. You can make the egg-crate potential that's so popular with theorists. You could reproduce the Rocky Mountains at the molecular scale.

There are two applications that seem especially interesting to me. The first has to do with optics. You could pattern a photonic lattice as accurately as you like. You could put the defects in by hand, exactly where you want. (As a condensed matter theorist, I find the idea of controlling the defects in a lattice quite appealing.) Unlike photonic lattices created with lasers, you could pattern a lattice once and use it over and over. You could make it in your lab in Europe, put it in your pocket, and take it to your buddy in California to do tests on.

Another prospect that interests me is the study of Brownian motion in curved space. People probably do this already, but I'm not aware of it. If you read a popularization of general relativity, you'll undoubtedly find an analogy to a stretched rubber sheet. It's very elegant to think of marbles rolling along geodesics, following the curvature of the sheet. But what about a marble that gets a bunch of random kicks due to thermal fluctuations? You wouldn't see this kind of thing with planetary orbits, but you could certainly watch some fluorescent particles move around in your custom-made solar potential. Do particles with a drift velocity follow an approximate geodesic?

It's interesting to think about introducing random fluctuations to general relativity on a scale that is experimentally accessible. Maybe you could even explore "thermal foam" instead of quantum foam. Would these thermal fluctuations be analogous to quantum fluctuations of spacetime with a Planck length of a micron? It sounds sort of like another dream of Mr. Tompkins ...

I'm way off track now. In short, the ability to take your favorite 2D spacetime metric and print it on a chip could open up some interesting avenues of research.

Monday, March 12, 2007

Thermal Runaway

Spontaneous Thermal Runaway as an Ultimate Failure Mechanism of Materials

S. Braeck and Y.Y. Podladchikov

PRL 98, 095504 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e095504


Very interesting article! I find that the PRL editor's recommendations almost always prove to be interesting reads.

The authors investigated a feedback process that can occur in viscoelastic systems (which, I just learned, means systems that show features of both viscous and elastic materials). Two observations allowed them to obtain simplified equations for the strain profile:
⁃ Conservation of momentum implies the stress does not depend explicitly on position.
⁃ Vanishing velocity at the boundaries allows one to write the time derivative of the strain field in terms of an integral over the temperature profile.

The temperature profile is the solution of a diffusion equation with a forcing term that describes viscous dissipation. This set of coupled differential equations is the starting point for the authors' investigation.

After deriving the coupled set of partial differential equations, the authors first performed a linear stability analysis (LSA). I've seen this in a couple other papers. It seems like the basic approach is to approximate the equations of motion by linear equations, then solve these. If the solutions grow exponentially, the system is unstable. If they decay exponentially, the system is stable.
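As a generic illustration of the procedure (a toy system of my own, nothing to do with the authors' equations): linearize about a fixed point, write the perturbation as delta*exp(lambda*t), and check the sign of the real part of lambda, which for a system of ODEs comes down to the eigenvalues of the Jacobian.

    import numpy as np

    # Generic linear stability analysis: for du/dt = f(u), perturb a fixed point u*,
    # u = u* + delta*exp(lambda*t); the lambdas are the eigenvalues of J = df/du at u*.
    # Toy system: f(x, y) = (r*x - y - x**3, x - y), fixed point (0, 0), control parameter r.
    def jacobian(r):
        return np.array([[r, -1.0],
                         [1.0, -1.0]])

    for r in [0.5, 0.9, 1.5, 2.0]:
        eigs = np.linalg.eigvals(jacobian(r))
        verdict = "unstable" if eigs.real.max() > 0 else "stable"
        print(f"r = {r:3.1f}: eigenvalues {np.round(eigs, 3)} -> {verdict}")

In the paper, the analogous role of the control parameter is played by the two dimensionless ratios identified below.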

Applying LSA to the system under study, the authors find that the crossover from stable to unstable solutions depends on two dimensionless ratios:
⁃ \sigma_0/\sigma_c, the ratio of the stress at the boundaries to a critical stress that depends only on the physical properties of the material in question
⁃ \tau_r/\tau_d, the ratio of the stress relaxation time of the system to the thermal diffusion time.

When the stress relaxation time is much smaller than the diffusion time, the condition for stability is very simple: the applied stress must be smaller than the critical stress: \sigma_0 < \sigma_c.

Next the authors investigate the maximum temperature rise \Delta T during the evolution of the system as a function of all the control parameters. The authors say there are 13 dimensional parameters in the equations. Depending on how I count, I can get this number, or a few more: L, h, x, \sigma_0, \sigma, T, T_{bg}, T_0, G, E, A, n, \kappa, R, and C. The authors simplify this down to 6 dimensionless combinations using dimensional analysis.

Investigating the dependence on control parameters numerically, the authors find that the scaled maximum temperature is a function of only two dimensionless parameters --- the same two that emerged from their LSA! The phase diagram of \Delta T is divided into two regions: a stable deformation region and an adiabatic thermal runaway region.

The boundary between these is a critical region where the temperature distribution is highly localized at the center of the slab. In the adiabatic region, the temperature profile is roughly constant across the slab, but in the critical region, it is sharply peaked at the center, with the maximum value being several orders of magnitude larger than the rest of the slab. (This is beautifully illustrated in Figure 2.)

The nonlinear equations give rise to a feedback between the temperature and strain profiles that results in a self-localizing runaway process. The result is a shear band localized to a region much smaller than the width of the perturbed region. I did not see any expression for the width of this region. It would be interesting to know how it scales with the other control parameters of the system, or whether it is infinitely localized.

The motivation for the work was to explain why real crystals seem to fail at stresses lower than a limit established by Frenkel in 1926: \sigma = G/10. The authors use material parameters from mantle rocks and metallic glass and find thermal runaway occurs at values lower than Frenkel's limit. The range of values given by the authors includes the values given by experimental data on both systems. They conclude by noting that the critical stress is of order 1 GPa because the kinetic terms do not appear in the expression for \sigma_c.

I thought this was a rather well-written paper. The authors did a good job of motivating their research and gave a very clear explanation of exactly what they did. The simplification gained through linear stability analysis and dimensional analysis gives a lot of insight into the problem. It is nice to see the numerics justify the conclusions drawn from the simplified version of the system, which I would not always expect from a nonlinear system.

Description of the Blog

What is this blog?

Basically, it's a place for me to store summaries of physics research articles I've read.

My filing cabinet is arranged alphabetically by author's last name. This makes it hard to track down articles if I only remember the subject or the title. By storing a summary and my comments on the Web, it will be much easier for me to track down an article, and I can spend less time worrying about my filing system.

I'm a graduate student at Penn right now. In my effort to learn about current developments in physics, I print out and read anywhere from 3-7 articles from the Physical Review Letters each week. I also read a lot of the News and Views pieces from Nature, and some longer articles from Physical Review B and Reviews of Modern Physics. Often, I dig into the archives of these journals to read classic papers or to follow up on the references from other papers I've read.

If I read a paper and find it interesting or useful enough to put into my filing cabinet instead of my ever-growing pile of scrap paper, I'll post a summary here. As a result, this blog will probably be updated a few times a week. It's likely that I'll go off on tangents as I'm writing my summaries, so think of this as more a journal than any kind of reputable academic resource.

Although this blog is designed to meet my own needs, others might find it interesting or useful as well. I welcome and encourage any comments, feedback, and discussion, as well as references to related articles and other suggested readings on relevant topics.