Friday, April 27, 2007

Metamaterials and Cloaking

METAMATERIALS

David R. Smith was this year's Walter Selove lecturer at Penn. He gave two talks on his work in developing metamaterials and using them in a cloaking device. I did some background reading to understand the principles behind the cloaking device. There were two papers, published back to back in Science that I read through:

Optical Conformal Mapping
Ulf Leonhardt
Science 312, 1777--1780 (2006)

Controlling Electromagnetic Fields
J.B. Pendry, D. Schurig, and D.R. Smith
Science 312, 1780--1782 (2006)

They make a lot more sense after hearing Dr. Smith's lectures and talking with him over lunch.

Question 1: What is a material?

Dr. Smith made a convincing argument that, from the perspective of electrodynamics, a material is something with a permittivity and a permeability. These are the only parameters that enter Maxwell's equations for describing electromagnetic fields in matter. This is already an effective theory --- fundamentally, QED would describe any electromagnetic phenomena. The permittivity and permeability represent a type of coarse-graining in which the atomic properties are averaged over distances that are large compared to the atomic scale (say the Bohr radius), but small compared with the wavelength of light in question.

Question 2: What is a metamaterial?

If the wavelength of light is large enough, we can imagine averaging the fields over scales large enough to be manipulated by intelligent beings. For visible light, one might consider nanoscale patterns etched on a wafer. For microwaves, objects as large as wires and loops could form the effective medium.

A metamaterial is a medium in which you control the atoms. Dr. Smith cited the work of John Pendry as demonstrating how one could build up a material with a negative index of refraction (negative permeability and permittivity). A negative value of the permittivity is not uncommon. It occurs in metals near a plasmon resonance. Dr. Smith was part of a team at UCSD that built an array of wire loops and posts with both a negative permittivity and a negative permeability.

Question 3: What does a metamaterial do?

Control over the permeability and permittivity allows for some interesting phenomena. Dr. Smith's group demonstrated a negative index of refraction by performing a simple Snell's law experiment. Light is bent in the opposite direction. Many other possibilities were discussed by a Soviet physicist named Veselago in 1968, such as a phase velocity opposite the direction of propagation, and lensing from a flat slab. More recently, Pendry proposed a "perfect lens" which would allow focusing of non-propagating modes --- the near fields that are always ignored in textbook problems.
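
To make "bent in the opposite direction" concrete, here is Snell's law with a negative index of refraction. The indices are made up for illustration; they are not numbers from the experiment.

    # Snell's law, n1*sin(theta1) = n2*sin(theta2), with an invented negative n2.
    # The refraction angle comes out negative: the ray emerges on the opposite
    # side of the normal compared with an ordinary material.
    import numpy as np

    n1, n2 = 1.0, -1.5                # vacuum into a hypothetical negative-index medium
    theta1 = np.radians(30.0)         # angle of incidence
    theta2 = np.degrees(np.arcsin(n1 * np.sin(theta1) / n2))
    print(theta2)                     # about -19.5 degrees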

Another phenomenon is cloaking. It's gotten a lot of media attention, and Dr. Smith's work has made the covers of several scientific publications.


CLOAKING

Both the papers from Science address the possibility of cloaking. An interesting mathematical property of Maxwell's equations is that a coordinate transformation can be completely described by a transformation of the fields, the permittivity, and the permeability. This has practical consequences, as both papers illustrate.

Suppose you want to cloak a region of space --- i.e., you don't want any electromagnetic fields to penetrate the region, and you don't want any waves reflected or absorbed. Light rays follow geodesics (e.g., Fermat's principle). If the geodesics were to travel around the region to be cloaked, that would be a neat solution to the problem. A coordinate transformation can implement this solution: it leads to field lines that curve around the cloaked region, along with the corresponding permittivity and permeability. This is where metamaterials come in. They allow one to engineer the permittivity and permeability as needed.
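
For concreteness, here is a sketch of the ideal cylindrical-cloak prescription as I understand it from the Pendry, Schurig, and Smith paper: a linear radial map squeezes the disc r < R2 into the shell R1 < r < R2, and the permittivity and permeability inherit factors of the Jacobian of that map. The radii and the grid below are arbitrary choices of mine, and a real metamaterial can only approximate these profiles.

    # Ideal material parameters for a cylindrical cloak (my reading of the
    # transformation-optics prescription; only the shell R1 < r < R2 is shown).
    import numpy as np

    R1, R2 = 0.5, 1.0                      # inner (cloaked) and outer radii, arbitrary units
    r = np.linspace(R1 + 1e-3, R2, 200)    # radial points inside the cloaking shell

    eps_r  = mu_r  = (r - R1) / r
    eps_th = mu_th = r / (r - R1)
    eps_z  = mu_z  = (R2 / (R2 - R1))**2 * (r - R1) / r

    # The angular components diverge and the radial components vanish at the
    # inner boundary, which is why the built device is only an approximation.
    print(eps_r[0], eps_th[0], eps_z[-1])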

Dr. Smith's group at Duke performed numerical simulations to determine the properties their metamaterial would need to cloak a disc from microwave radiation. They built the required structure (rather, an approximation to the ideal structure), then demonstrated that waves pass right around the central disc for the most part, even when a strong scatterer is placed inside.

So has Dr. Smith ushered in the age of Klingon cloaking devices? Not yet. He is the first to point out the limitations of his devices. They have a very small bandwidth, meaning they only work for a very narrow range of wavelengths. In addition, it's hard to manipulate matter on very small scales, so cloaking in the optical range is still a technical challenge, even for a single frequency. To effectively cloak an object, one would need to cover a large range of frequencies.

Cloaking was more a proof of principle than The Next Big Thing. More practical applications include antennas and lenses that can do things they don't tell you about in freshman physics.

The mechanism behind cloaking --- an effective warping of spacetime --- got me thinking about general relativity. Are there gravitational objects that could actually warp spacetime in the same way, so that light would pass right around them? Black holes pull everything in. I suppose a very dense region of antigravity would be required to deflect light around an object. Still, if this were possible, the cloak would work at all frequencies, because the actual geodesics of spacetime would curve around the object. It would not be the result of an effective dielectric constant that depends on frequency. Light, particles, rocks, and anything else would travel along the same geodesics. It would be cloaked from everything, not just light!

Electroabsorption Spectroscopy of Carbon Nanotubes

Elucidation of the Electronic Structure of Semiconducting Single-Walled Carbon Nanotubes by Electroabsorption Spectroscopy

Hongbo Zhao and Sumit Mazumdar

PRL 98, 166805 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e166805

That title is quite a mouthful!

The paper discusses how electroabsorption might be used to determine exciton binding energies and the free-particle excitation gap in carbon nanotubes.

The basic idea of electroabsorption spectroscopy is very simple. You make an absorption measurement on a nanotube sample, then you turn on an electric field and repeat the measurement. Subtracting off the zero-field data isolates the effect of the electric field:

da(E,w) = a(E,w) - a(0,w),

where a is the absorption coefficient, E the applied field, and w the photon frequency.

Mazumdar and Zhao identify three noteworthy features in the nanotube spectrum, two of which can be used to determine the exciton binding energy. Surprisingly, it seems this quantity is not known. The splitting between different exciton peaks is usually studied in photoluminescence experiments, but the free-particle excitation gap is harder to probe. Without knowing the free-particle gap, one cannot infer the exciton binding energy.

The most prominent features in the graphs are the oscillations in the free-particle continuum. I believe these are the Franz-Keldysh oscillations studied by Perebeinos, et alii. I feel that Perebeinos's description of the features and the physical mechanisms responsible is much clearer. That paper was published in Nano Letters, but I've only read the arXiv version so far.

After reading this letter, I finally understand what a Fano resonance is. It is a coupling between bound and continuum states. I don't know what its effects are, however. It seems like these would occur often in semiconductors for excitons in any band higher than the first. I don't see how it could occur for an atomic system, except in Rydberg systems where the ionization energy is very small.

Interband Excitons in Carbon Nanotubes

Polarized Photoluminescence Excitation Spectrum of Single-Walled Carbon Nanotubes

J. Lefebvre and P. Finnie

PRL 98, 167406 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e167406

The authors report photoluminescence measurements in carbon nanotubes. They present data for light polarized parallel to the nanotube axis, similar to previous experiments. However, this paper is the first I've seen to probe the photoluminescence spectrum of light polarized perpendicular to the nanotube axis.

Louie, et alii published a paper in 1995 --- a theoretical investigation of the polarizability of carbon nanotubes. They predict the response to light polarized along the nanotube axis to be an order of magnitude larger than for perpendicular polarization. This prediction seems to have been borne out in experiments, especially the one reported in this Letter.

The authors see familiar patterns of absorption at E22 and emission at E11, plus new data on the E12 peak (which is identical to the E21 peak, according to the authors). They claim to see dependence on the chiral angle that has not been included in an analytic theory yet.

Overall, I think the authors have done an excellent job collecting and presenting their data. However, I am confused by their analysis. They report two sidebands of the E11 peak, but it seems they have devoted far too much space to what these sidebands are not. After four paragraphs, I still don't know what the authors believe the sidebands are.

This is definitely a good paper for my "Excitons: Experiments" folder. The data presented here might be relevant to the next phase of my research.

Quantum Chernoff Bound

Discriminating States: The Quantum Chernoff Bound

K.M.R. Audenaert, et al.

PRL 98, 160501 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e160501


As the title implies, this Letter demonstrates a quantum analogue of the classical Chernoff bound in information theory. Some of the mathematical formalisms in the paper were beyond me, but I did learn a few cool math tricks.

What is the Chernoff bound? Suppose you have a signal, and you know that the output comes from one of two sources. If you perform N measurements, what is the probability of attributing the data to the wrong source? In 1952, Chernoff showed that the probability of error decreases exponentially for a large number of measurements: P(N) -> exp(-kN). The largest possible k is known as the Chernoff bound (among other things, as the authors point out in footnote 2). It provides a concept of distance between probability distributions.
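
Here is a small numerical illustration of the classical quantity, using two made-up three-outcome distributions. The exponent is k = -log min_s sum_x p(x)^s q(x)^(1-s), minimized over 0 <= s <= 1; this is my own toy example, not a calculation from the Letter.

    # Classical Chernoff exponent for two invented discrete distributions.
    import numpy as np

    p = np.array([0.5, 0.3, 0.2])   # hypothetical source 1
    q = np.array([0.2, 0.3, 0.5])   # hypothetical source 2

    s_grid = np.linspace(0.0, 1.0, 1001)
    overlap = np.array([np.sum(p**s * q**(1 - s)) for s in s_grid])
    k = -np.log(overlap.min())

    print("Chernoff exponent k =", k)
    print("error probability after N = 100 samples ~", np.exp(-k * 100))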

Until this Letter was published, there was apparently no quantum version of Chernoff's analysis. The authors analyze the problem of discriminating between two quantum states. They claim that the probability of error in determining the state also obeys P(N) -> exp(-kN). The quantum Chernoff bound is determined by minimizing, over a parameter s, the trace of a product of powers of the two density matrices. In the classical limit, the expression reduces to that of the classical Chernoff bound.
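
As I understand the quantum version, the sum over outcomes gets replaced by a trace over fractional powers of the two density matrices: the exponent is -log min_s Tr[rho^s sigma^(1-s)]. The single-qubit states below are invented for illustration.

    # Quantum Chernoff quantity for two made-up qubit density matrices.
    import numpy as np

    def mpow(rho, s):
        # Fractional power of a positive semidefinite matrix via eigendecomposition.
        w, v = np.linalg.eigh(rho)
        w = np.clip(w, 0.0, None)
        return (v * w**s) @ v.conj().T

    rho   = np.array([[0.8, 0.1], [0.1, 0.2]])       # hypothetical state 1
    sigma = np.array([[0.4, -0.2], [-0.2, 0.6]])     # hypothetical state 2

    s_grid = np.linspace(0.0, 1.0, 1001)
    overlap = [np.trace(mpow(rho, s) @ mpow(sigma, 1 - s)).real for s in s_grid]
    print("quantum Chernoff exponent =", -np.log(min(overlap)))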

The one statement I could not verify was the one that follows Eq. (4). The authors say "the upper bound trivially follows" from a mathematical relation that is not at all obvious to me. Somehow, the authors are able to break the logarithm of the trace of a matrix product into a sum of two logarithms:

log [ Tr{ (A P^n)^s (B Q^n)^(1-s) } ] = log[ A^s B^(1-s) ] + n log[ Tr{ P^s Q^(1-s) } ]

This is stated without reference or justification. I don't see how the logarithm of a sum (the trace) can be rewritten as the sum of two logarithms. The authors are correct, however. If this relation is true, their expression for the Chernoff bound follows immediately.

The first paragraph of the "Proof" section contains two neat math tricks. One allows me to rewrite the power of a number (or positive definite matrix) as an integral, and the other allows me to rewrite the difference of two numbers as the integral of a single function. The second trick is similar to the Feynman trick we used in quantum field theory for combining denominators. It's interesting that it can be extended to matrices.
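
My guess is that the first trick is the standard integral representation of a fractional power, a^s = [sin(pi s)/pi] * integral from 0 to infinity of [a/(a+t)] t^(s-1) dt for 0 < s < 1, with a/(a+t) replaced by A(A + tI)^(-1) in the matrix case. I haven't checked this against the paper's equations, but the scalar identity itself is easy to verify numerically:

    # Numerical check of a^s = (sin(pi*s)/pi) * Int_0^inf a/(a+t) * t^(s-1) dt.
    import numpy as np
    from scipy.integrate import quad

    a, s = 3.7, 0.42                 # arbitrary positive number and exponent

    def integrand(t):
        return a / (a + t) * t**(s - 1)

    # Split the range to help quad with the integrable singularity at t = 0.
    integral = quad(integrand, 0, 1)[0] + quad(integrand, 1, np.inf)[0]
    print(np.sin(np.pi * s) / np.pi * integral, "vs", a**s)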

I didn't get much out of the rest of the paper. There are apparently a lot of subtle points and nice mathematical properties of the quantum Chernoff bound, but I lack the background to appreciate them.

The idea behind the Chernoff bound is interesting, and it seems relevant to a variety of fields. For instance, in cryptography, you might know a coded message uses one of two encryption schemes. Chernoff's theory says you only need to know the true value of a certain number of bits and perhaps something about the content of the message before you could distinguish between the two encryption schemes and focus your efforts.

Another area is in the analysis of experimental data. With a large data set, the probability of attributing your results to the wrong theory (distribution) becomes exponentially small. Of course, this assumes you know the distributions before you do the experiment.

In short, it's hard to make a wrong guess about the distribution if you have enough data.

Friday, April 20, 2007

Macroscopic Laser Trapping

An All-Optical Trap for a Gram-Scale Mirror

Thomas Corbitt, et alii

PRL 98, 150802 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e150802

This experimental group has demonstrated laser cooling and trapping of a large object: a 1 gram mirror. (That doesn't sound large, but it's roughly 10^22 atoms --- a lot larger than the typical atomic and molecular clouds in these types of experiments.)

The authors were able to achieve an effective temperature of 0.8 K along the direction of the beams. (Temperature is a scalar quantity, so how can it have a direction? What the authors really measured were the mean-square position fluctuations of the mirror along the beam. The equipartition theorem allows one to turn this quantity into an effective temperature.)
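
For the record, the conversion I assume they used is one-dimensional equipartition for the trapped mode,

(1/2) k_B T_eff = (1/2) m w^2 <x^2>, so T_eff = m w^2 <x^2> / k_B,

where w is the frequency of the optical trap and <x^2> is the measured mean-square displacement along the beam. (That's my reconstruction of the standard argument, not a formula quoted from the paper.)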

It's not hard to point a laser at a mirror, so why hadn't someone done this before? The authors point out that there are two types of radiation pressure effects: damping forces and restoring forces. A damping force slows things down. It leads to effects like optical molasses. A restoring force keeps objects in a certain region, like in optical tweezers.

For a mirror in an optical cavity, both effects can be implemented by detuning a laser from the resonant frequency of the cavity, but not simultaneously. If the laser is above resonance, it will produce a restoring force, but also an anti-damping force. If the laser is below resonance, it gives rise to a damping force, but also an anti-restoring force. It is impossible to trap a mirror and damp its motion with a single laser. (Sounds like Heisenberg: you can't fix the momentum and position simultaneously. If you had a laser beam tuned to resonance, but with fluctuations above and below, could you achieve both effects with the same beam? Would the spread in momentum and position obey some uncertainty principle? Or would the whole system be totally unstable?)

The authors get around this difficulty by using two beams. One is tuned above resonance, the other below. The frequencies are chosen so that one gives a large restoring force with small anti-damping and the other a large damping force with small anti-restoring. The result is a very stable, rigid localizing force.

What caught my attention in this article was the rigidity. The authors imagine replacing the laser beam with a rigid rod of the same diameter. To achieve the same stiffness (spring constant) as their trap, you would need a material 20% stiffer than diamond!

The caveat of all this work is that the confinement is only along one dimension. The mirror still shows room temperature (or higher) fluctuations in the directions perpendicular to the beam. Perhaps a cubic mirror could be cooled and trapped just as effectively in all three directions.

Origami the Easy Way

Capillary Origami: Spontaneous Wrapping of a Droplet with an Elastic Sheet

Charlotte Py, Paul Reverdy, Lionel Doppler, Jose Bico, Benoit Roman, and Charles N. Baroud

PRL 98, 156103 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e156103

This article contains some of the best pictures I've seen in a PRL article.

The authors place little sheets of polydimethylsiloxane (PDMS) on a hydrophobic surface and add a drop of water. As the water evaporates, the sheets fold up. The shape and thickness of the sheet determine what the final object will be. In this type of origami, Nature does the work for you. Like the chia pet, "Just add water!" (Of course, I am sure the authors invested a considerable amount of labor before they added the water and let Nature take over.)

The pictures on the first page show the folding of a square into a tube and a triangle into a tetrahedron. On the final page, the authors show off the expertise of their lab by folding a flower into a sphere, a cross into a cube, and a square with two rounded corners into a triangle with a tube at the bottom.

The theoretical explanation in this paper is excellent. If I had to summarize their entire theory of folding in one word, it would be "competition." Competition between bending energy and surface tension sets the fundamental length scale and determines the shape of the liquid-membrane interface. Competition between bending energy and stretching energy determines whether a sheet will bend or crumple. The authors explain these ideas and their model clearly.
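
If I remember the standard scaling correctly, the length set by the first competition is the elastocapillary length,

L_EC = sqrt(B/gamma), with B = E h^3 / [12 (1 - nu^2)],

where gamma is the surface tension and B is the bending stiffness of a sheet with thickness h, Young's modulus E, and Poisson ratio nu. Sheets much larger than L_EC get wrapped up by the drop; smaller ones resist bending and stay flat. (That's my reconstruction, not a formula copied from the paper.)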

This paper demonstrates a good balance between experiment and theory. Two-dimensional membranes are an interesting topic, because the systems are simple enough that analytic models can be derived and solved. However, new effects are reported frequently. Although the field has been around for hundreds of years, it continues to be a fertile area for research.

Wednesday, April 18, 2007

General Adiabatic Theorem

Sufficiency Criterion for the Validity of the Adiabatic Approximation

D.M. Tong, K. Singh, L.C. Kwek, and C.H. Oh

PRL 98, 150402 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e150402

The authors take a close look at the adiabatic approximation. They find that the usual criterion presented in textbooks (for instance, Messiah) is valid for a restricted set of systems, and they show that two additional criteria are required for a general quantum system. This was a difficult read, but the result was worth the effort.

The adiabatic theorem sounds so sensible that I'm not surprised it was used without being proven for so many years: If you have a system that is in an eigenstate |n(0)> and you modify the system very slowly, then it will remain in the instantaneous eigenstate |n(t)>. For instance, if I have a magnetic dipole that points in the direction of an applied magnetic field, the adiabatic theorem says that if I slowly rotate the magnetic field, the dipole continues to point along it. (For the quantum version, replace "magnetic dipole" with "spin.") Or, if I am in the ground state of a harmonic oscillator and I slowly change the spring constant, then at any time the system will be in the ground state of the current oscillator.
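
For reference, the textbook criterion I have in mind (quoting from memory, so take the exact form with a grain of salt) is

|<m(t)| dH/dt |n(t)>| / [E_n(t) - E_m(t)]^2 << 1 for all m != n,

i.e., the rate at which the Hamiltonian changes should be tiny compared with the square of the energy gap. This is the condition the authors argue is not sufficient on its own.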

It turns out that there are cases where common sense is misleading. I read such a counterexample just a couple months ago. The system was designed to satisfy the requirement of the adiabatic approximation, yet violate its prediction. The authors of the current paper point out that the adiabatic theorem assumes the first time derivative is small and all higher derivatives are smaller. This is not the case in the counterexample.

Clearly the adiabatic theorem works a lot of the time. Otherwise, it wouldn't still be used and taught to graduate students. The question raised by the counterexample is, How can you tell if the adiabatic theorem will work or not? In one and a half pages of straightforward but tedious calculations, the authors derive three criteria that apply to any quantum system. (That's 1.5 PRL pages -- probably equivalent to 5 normal pages.) If these criteria are satisfied, then the system will be in |n(t)> at time t with a probability that approaches 1.

The first of these criteria is the industry standard. It applies to systems where the energy difference between states is a constant and the time evolution of the inner product of two states is a constant. The other two criteria involve integrals that can't be evaluated in general. However, an upper limit can be placed on the integrals by replacing the integrand with its largest value. This gives a product of the maximum value and the time interval that must be small, so it sets limits on how long you might expect the adiabatic approximation to be valid.

This could be quite useful. I've never seen a calculation that suggests how long you might expect the adiabatic theorem to hold. In my examples above, I used the term "slowly." The authors have given theorists a way to quantify exactly what we mean by "slowly." They apply their criteria to a spin 1/2 system in a rotating magnetic field and find that the adiabatic theorem will only be valid for a fixed number of periods. Even "slow evolution" isn't allowed to take forever!

Tuesday, April 17, 2007

Rack and Pinion a la Casimir

Noncontact Rack and Pinion Powered by the Lateral Casimir Force

Arash Ashourvan, MirFaez Miri, and Ramin Golestanian

PRL 98, 140801 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e140801

The authors propose a nanoscale rack and pinion where the Casimir force (rather than contact between the cogs) allows the pinion to move the rack. They find two regimes. There is a contact regime, where the gears move as if they were in direct contact with one another, and a skipping regime where the teeth can slip by one another. The interesting aspect of this device is that the cogs are never in physical contact with one another.

The equation of motion for the system presented in Eq. (1) is rather simple. You could write it down without knowing much about racks, pinions, or vacuum fluctuations. It simply describes the motion of a pendulum with damping and a driving torque. It's a nonlinear equation that cannot be solved analytically, but it is a classical system.
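
To get a feel for the kind of equation they are working with, here is my own toy version, not the authors' Eq. (1): a damped pendulum driven by a constant torque plus a sinusoidal, lateral-Casimir-like coupling to the moving rack. All parameter values are invented.

    # Toy rack-and-pinion: damped rotor with constant torque and a sinusoidal
    # coupling between the pinion angle and the position of the moving rack.
    import numpy as np
    from scipy.integrate import solve_ivp

    I, Gamma, tau = 1.0, 0.3, 0.1      # moment of inertia, damping, external torque
    F, lam, R, v = 1.0, 1.0, 1.0, 0.5  # force amplitude, corrugation wavelength, pinion radius, rack velocity

    def rhs(t, y):
        theta, omega = y
        force = F * np.sin(2 * np.pi * (R * theta - v * t) / lam)
        return [omega, (-Gamma * omega + tau + R * force) / I]

    sol = solve_ivp(rhs, (0, 200), [0.0, 0.0], max_step=0.05)
    print("mean pinion angular velocity:", np.mean(sol.y[1][len(sol.t) // 2:]))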

The authors go on to study the system in four perturbative regimes. First, they study the case of no damping or torque. This system can be integrated, and it shows crossover behavior between the contact and skipping regimes mentioned earlier. Depending on the velocity of the rack, the pinion can rotate in either direction. When a torque is applied, the same general behavior is observed. The main difference is that the boundary between the two regions depends on the applied torque.

The case with dissipation cannot be solved exactly. In the case of weak dissipation, the authors treat the damping as a small perturbation using something called the Melnikov Method. This introduces an interesting regime where the pinion velocity is independent of the torque, and a load at which the velocity drops to zero --- the stall force.

In the case of strong dissipation, the authors discard the acceleration term (like a Langevin equation) and integrate the approximate equation of motion. There is again a stall force, and the behavior is similar to case of weak damping above the skipping velocity.

Finally, the authors investigate the actual form of the Casimir force in their system. As I said earlier, the above analysis is for a damped pendulum under constant torque. To say anything meaningful about the role of the Casimir force, the authors have to introduce it into this phenomenological model. The force decays exponentially with increasing separation between the rack and pinion. The skipping velocity is a power law at small separations and decays exponentially at large separations (like a gamma distribution, I suppose). This is quite useful in applications.

I kept this article in my files because it seems like the type of fundamental nanotechnology research needed to move the field forward. We can't just build little versions of big machines because effects like thermal fluctuations and friction can ruin everything on tiny scales. These authors have shown how you can take advantage of an effect that only happens at these small scales. The rack and pinion steering of an automobile will never utilize the Casimir force, but if you want to build a tiny ratchet out of nanotubes and buckyballs, then you might not have any other option. Well done guys.

High Powered Lasers

Laser physics: Extreme light

Ed Gerstner

Nature 446, 16--18 (2007)

URL: http://www.nature.com/news/2007/070226/full/446016a.html

Every so often, I come across an article that really excites me. It makes me feel like running to the library and checking out 3 or 4 textbooks so I can learn more about the subject. These books usually sit on my shelves for a couple months, until I come across another article. I feel guilty about having 8 library books out at a time, so I trade in my first set for a different one, a little disappointed that I don't have the knowledge I was so excited about learning a couple months ago. Well, I think it's about time for a trip to the library.

This article blew me away. Ed wrote about the advances in laser technology that have taken place in the last 50 years, and about the new facilities under construction.

The first tid-bit that caught my attention was that an electric field of 8 x 10^18 V/m will make the vacuum boil. It will rip apart the pairs of virtual particles that pop into and out of existence. This is called the Schwinger limit.

Sounds exciting, but all kinds of neat things are supposed to happen when you probe the Planck length too. The exciting thing about the Schwinger limit is that experimentalists are only 3 orders of magnitude away right now. (For comparison, I think the people at CERN will still be 15 orders of magnitude away from the Planck energy.)

You hear about lasers all the time. What's so great about them? Nonlinear optics. "In the 1960s, the fact that early lasers were powerful enough to change the refractive index of the medium through which they travelled opened up fresh vistas in nonlinear optics." Today's lasers can accelerate all the electrons around them to relativistic speeds. The next generation will be able to do the same for ions.

What a neat idea! I could shine a laser on a block of metal on my desk and be able to probe relativistic interactions between charged particles.

Another really neat idea is that these lasers could produce accelerations the same order of magnitude as the gravitational accelerations in black holes. Einstein said that gravity and acceleration are the same. Hawking said gravity can make radiation. In the 1970s, Unruh connected these ideas and said that an accelerated particle will see Hawking-like radiation, even if it's not in a gravitational field.

To quote one of Gerstner's sources, Bob Bingham, "The vacuum really doesn't care if it's an electric field, a magnetic field, a gravitational field, ... If you can pack enough energy in, you can excite particles out of the vacuum. ... Nothing generates fields even close to those produced by an ultra-high intensity laser --- except perhaps a black hole."

Another source points out the analogy between ultrarelativistic lasers and the nonlinear optics that has developed from the 1960s through the present: "We're going to change the index of refraction of the vacuum." What a concept!

Gerstner gives a very clear description of how these lasers are able to do what they do. They take a lot of energy, but not a whole lot --- on the order of a kilowatt hour. But instead of spreading the energy out over an hour, they compress it down to a few femtoseconds. The energy is the same, but the difference in power is huge.
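
To put numbers on that: a kilowatt hour is about 3.6 x 10^6 J. Spread over an hour, that is just a kilowatt. Compressed into, say, 10 femtoseconds (my number, chosen only for the estimate), it becomes

P ~ 3.6 x 10^6 J / 10^-14 s ~ 4 x 10^20 W,

seventeen orders of magnitude more power from the same energy.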

Excellent article.

The Best Lorentz Frame for Calculations

Noninvariance of Space and Time Scale Ranges under a Lorentz Transformation and the Implications for the Study of Relativistic Interactions

J.L. Vay

PRL 98, 130405 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e130405

I don't think this was a particularly well-written article, but the ideas are quite interesting. The basic premise is that you can exploit time dilation and length contraction to find a frame that makes calculations simple.

In most experiments and numerical simulations, there is a hierarchy of length and time scales. For instance, in a carbon nanotube, the nanotube radius is probably the smallest important length scale; the largest might be the length of the nanotube, which can be thousands or millions of tube radii. If I had to carry out simulations that described all length scales, I would have a lot of grid points to worry about.

Vay says, "Well, Jesse, if you boosted to a frame that moves quickly enough, you could end up with a nanotube whose length is equal to or SMALLER than its radius. Einstein tells us that an experiment in this moving frame is just as good as one in the nanotube frame. Why not make it easy on yourself?"

Maybe it wouldn't help me out that much, but for simulations of relativistic beams of electrons, Vay shows that calculation times can be reduced by a factor of a thousand or more.

The first section of the paper is devoted to an example that shows the opposite: that the ratio of the longest to the shortest relevant length (and time) scales can be made extremely large depending on the Lorentz frame you choose for the calculation. Reading the paper a second time, I realized that the point was to demonstrate separation of scales, but since it contradicts the claim of the abstract, I was really confused the first time through.

The three physical examples in the second half of the paper clearly demonstrate the utility of choosing the right Lorentz frame.

To show that this approach is practical, Vay performed the same calculation in two different frames: the passage of a relativistic beam of protons through a cylinder, colliding with an electron gas. In the lab frame, the experiment spans a few kilometers, while the pipe radius is just a centimeter, so the length scales span 5 orders of magnitude.
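
Here is a cartoon of why the boost helps (my own toy estimate, not Vay's analysis): the kilometer-scale structure is at rest in the lab, so in a frame moving with the beam at Lorentz factor gamma it contracts by a factor of gamma, while the centimeter-scale pipe radius, being transverse, does not change. I've taken 3 km for "a few kilometers."

    # How the spread of length scales shrinks in the boosted frame.
    L_structure = 3.0e3   # lab-frame length of the interaction region (m), my choice
    r_pipe = 1.0e-2       # pipe radius (m), transverse, the same in every frame

    for gamma in [1, 10, 100, 1000]:
        ratio = (L_structure / gamma) / r_pipe
        print(f"gamma = {gamma:5d}: longest/shortest length scale ~ {ratio:.1e}")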

The lab frame calculation took a week of supercomputer time. The boosted frame calculation took half an hour on the same computer. That's an amazing improvement, but I don't really know when this method will work. I wish Vay had devoted more time to explaining what types of calculations can be improved.

Boron Nitride vs. Carbon Nanotubes

Theory of Graphitic Boron Nitride Nanotubes

Angel Rubio, Jennifer L. Corkill, and Marvin L. Cohen

PRB 49, 5081--5084 (1994)



URL: http://link.aps.org/abstract/PRB/v49/p5081

This paper came out just a year after the discovery of carbon nanotubes. It really was a recent discovery when this paper was written. The authors discuss what to expect from boron nitride nanotubes.

I didn't realize how different boron nitride was until I read this paper. It's definitely not just graphene with a sublattice asymmetry! The bond length is similar: 1.45 angstroms in BN, 1.42 in graphene. Apparently, the average on-site potential is similar as well. The authors say this can be inferred from the band widths of the parent crystals. However, the similarity ends there.

Boron nitride is a semiconductor with an INDIRECT gap on the order of 5 electron volts. All boron nitride nanotubes are semiconducting as well.

The most surprising result is the scaling of bandgap with tube radius. In carbon nanotubes, the well-known result is that the band gap is inversely proportional to the nanotube radius. In boron nitride, the band gap INCREASES with increasing radius until it approaches the free BN sheet band gap. The effect of curvature reduces the band gap of boron nitride.

This made no sense to me, with my background in nanotubes. The authors point out that the band gap of hexagonal boron nitride decreases with increasing pressure. Equating the strain of a curved tube with pressure, I can at least make sense of the effect.
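
For comparison, the textbook carbon-nanotube scaling I had in my head is

E_g ~ 2 a_CC gamma_0 / d ~ 0.8 eV / d[nm],

with a_CC ~ 0.142 nm the carbon-carbon bond length and gamma_0 ~ 2.9 eV the hopping integral (my usual numbers, not values from this paper), so the gap falls off as 1/d. Boron nitride tubes do exactly the opposite.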

It is interesting that only some of the boron nitride nanotubes are indirect gap semiconductors even though the parent BN sheet is. The (n,0) tubes are direct gap. All semiconducting carbon nanotubes, by contrast, have direct gaps.

The authors use a tight-binding model with first and second nearest neighbor interactions. I'd like to know how many parameters are in their model. Surely more than the two parameters of the corresponding graphene model.

I'll have to keep these differences in mind as I continue my research. I would not have expected such disparity between these two similar structures.

Tuesday, April 10, 2007

Every Rock Cracks the Same Way

Scaling and Universality in Rock Fracture

Jorn Davidsen

PRL 98, 125502 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e125502

When a rock is squeezed, tiny cracks form inside. Each crack makes a sound. Davidsen recorded the waiting time between cracking sounds in a variety of rock samples and performed a statistical analysis. When the waiting times were divided by the mean waiting time for a given experiment, all the probability distributions collapsed onto a single curve. Even earthquake data falls onto the same curve when scaled by the mean waiting time between aftershocks.

The probability distribution for the scaled waiting time is a gamma distribution: a power law multiplied by an exponential. I suppose the name comes from the normalization constant. Davidsen showed that the distribution is independent of the sample, the mechanism used to crush the rock, and the cutoff intensity. (You get the same distribution even if you ignore the cracks you can't hear.) All of this suggests that the probability distribution is a universal feature of rock fracture.
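
For reference, the gamma distribution has the form

f(x) = x^(k-1) exp(-x/b) / [Gamma(k) b^k],

so the name really does come from the normalization: the Gamma function Gamma(k) is what makes the power law times the exponential integrate to one. (The fitted exponent and scale are in the paper; this is just the generic form.)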

A universal mechanism for cracking suggests that a detailed analysis of the molecular properties and bonding is unnecessary. Davidsen doesn't address this directly, but PhysicsWeb stressed the point. It could provide a useful check for numerical and analytic models of crack formation. If you use your favorite model to generate a series of cracks, then analyze the scaled waiting times, your data should reproduce the same gamma distribution as real rocks and earthquakes. If not, then your model has failed to capture whatever mechanism is responsible for this universality.

Is it possible to work backwards with the renormalization group? I.e., knowing the universal scaling relation of the scaled waiting times, can one deduce something useful about the interactions on the microscopic level?

In concluding, Davidsen makes two interesting observations. First, although the waiting times for rock fracture and earthquakes have the same probability distribution, the correlations between waiting times do not. Second, the statistics of rare events often generate a Poisson distribution. The fact that the waiting times do not is important.

Friday, April 6, 2007

Foolproof 3D Quasicrystals

Growing Perfect Decagonal Quasicrystals by Local Rules

Hyeong-Chai Jeong

PRL 98, 135501 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e135501

Perfect Penrose Tiling (PPT)

Penrose developed a set of rules which allow a perfect tiling of the 2D plane in a nonrepeating pattern. Such tilings are called quasicrystals. His rules allow such a tiling, but they do not guarantee it. At the edges, there are legal attachments that introduce defects and prevent you from tiling the entire plane. However, another guy called Onoda showed that if you start with a defect in the center, then local growth rules will fill the rest of the plane with a PPT. The only defect is the seed at the center.

To extend 2D tilings to 3D crystals, people thought the best you could do was stack 2D planes on top of one another. This would introduce a line defect where all the decagons overlap. The author of this Letter developed an algorithm that starts with two defects, but his vertical and lateral attachment rules allow you to fill the rest of space --- "the bulk" --- with PPTs.

A decapod has 10 edges, and the way these growth rules work is to assign an arrow to each edge. Since the arrow at each edge can point in either of two directions, there are 1024 possible decagons. Symmetry under reflection and rotation reduces this number to 62. I'd like to see how that counting works, because 2x31 seems a difficult number to get from 2^10! Anyway, there are 62 unique decagons that will fill a plane with local growth rules. Of these, only one can be filled in: the cartwheel.

Using a special decagon at the bottom and a cartwheel on top of it, the author was able to add a vertical growth rule that overcomes any dead zones. As a result, he is able to grow a 3D quasicrystal from a single point defect.

Who cares about local growth rules? Well, Nature for one. If I had a set of Penrose tiles (which I'd love to get my hands on), I could sit and meticulously place them one by one until I used up my bag and ended up with a perfect tiling. However, Nature might not be as attentive as me. She'd probably take a tile and try to fit it at an edge. If it stuck (i.e., it satisfied the local growth rules), Nature would be happy and move on to the next tile. After enough of this, she'd eventually end up with nothing but dead zones and defects. Nature would have to be really lucky to make a quasicrystal as large as mine.

With a decagon in the middle, though, Nature couldn't go wrong. Every tile that fit on an edge would continue the pattern perfectly. Thus, a point defect at the center, the decagon seed, would allow a random growth algorithm to build a perfect quasicrystal except for the defect. The beauty of the author's work is that he showed you only need a point defect --- not an infinite line of them --- to do the same thing in three dimensions. He came up with a simple set of rules that make it impossible to mess up the tiling if you start with two special decagons stacked on top of each other. It's foolproof!

Forked Fountains

Splitting of a Liquid Jet

Srinivas Paruchuri and Michael P. Brenner

PRL 98, 134502 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e134502


Interesting work --- hard to believe this problem had not been studied earlier! I guess that's the effect of a well-written paper: The ideas are presented so clearly they seem obvious. This was, in my opinion, a very well-written paper.

The authors analyzed the conditions under which a jet of fluid can split. They derive a Navier-Stokes equation for their model jet, then solve it numerically. The numerical results are used as the basis of an analytic model that captures the essential features of the numerical work. The authors demonstrated a very nice interplay between numerical and analytic work.

The conclusions of the authors are that tangential stress on a jet can lead to splitting, but normal stress cannot. Tangential stress must overcome the surface tension of the fluid; thus, there is a critical stress, below which splitting cannot occur.

Doodling in my notepad, I made a simple model for the pinching and splitting of two surfaces. It is entirely mathematical and does not include any physical parameters. I compared my sketches with the authors' results, and was surprised to see sharp cusps in their cross-sections. In my model, there is a linear crossing at one instant in time, but before and after, the surfaces are smooth. It seems to me that a smooth membrane would be vastly lower in energy than one with a cusp. Of course, when the membranes touch, it's got to lead to some kind of singularity in the differential equation, so a numerical routine or an analytic solution to the full equations might not "know what to do" after the membranes meet.

Thursday, April 5, 2007

Efficiency of Non-Ideal Engines

Collective Working Regimes for Coupled Heat Engines

B. Jimenez de Cisneros and A. Calvo Hernandez

PRL 98, 130602 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e130602

The authors consider the efficiency of an array of coupled heat engines operating between two reservoirs at temperatures t and T, with T > t. Long ago, Carnot showed that the maximum efficiency is

e = 1 - (t/T).

This efficiency can only be realized in a reversible, quasistatic process, which takes infinitely long to complete.

In the late 50s and mid 70s, physicists extended Carnot's analysis to finite-time endoreversible processes and found

e = 1 - sqrt(t/T).

This is called the Curzon-Ahlborn efficiency. (For T = 4t, this implies a reduction in efficiency from 75% to 50% -- significant!) The authors claim this efficiency provides a good approximation to the observed efficiency of several power plants, which suggests they are closer to the relevant theoretical limit than one might have thought. If you want your power in finite time, you might have to settle for significantly less efficiency.
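
A quick side-by-side of the two formulas for a few temperature ratios (the ratios are illustrative, not data from the paper):

    # Carnot vs. Curzon-Ahlborn efficiency for a few reservoir temperature ratios.
    for x in [0.5, 0.25, 0.1]:            # x = t/T
        e_carnot = 1 - x
        e_ca = 1 - x**0.5
        print(f"t/T = {x:4.2f}: Carnot = {e_carnot:.2f}, Curzon-Ahlborn = {e_ca:.2f}")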

The authors analyze an array of coupled heat engines between two reservoirs. First, they show that the efficiency only depends on the endpoints and not the intermediate mechanisms. They derive an efficiency that depends on the heat fluxes at the ends, not the temperatures:

e = 1 - j/J.

They also calculate the rate of entropy production, which leads to an analysis of thermodynamic forces and Onsager coefficients, which I am not familiar with. The authors show that the Carnot efficiency is realized when the rate of entropy production is zero.

The rest of the paper is devoted to solutions of a Riccati differential equation the authors derive for the Onsager coefficients. They show that the Carnot and Curzon-Ahlborn efficiencies are specific cases of their more general theory.

A surprising result is that global optimization of the total power of the system does not require that every element perform at its individual maximum power. I also infer from this analysis that the key to improving efficiency is reducing entropy production, or isolating the system from the environment.

A final note: I am nearly certain that Eq. (20) or (21) is incorrect. I can solve the differential equation, and the solution to Eq. (20) is not Eq. (21). I'm not sure where the error is, but something is amiss.

Phonon Mediated Spin Relaxation

Experimental Signature of Phonon Mediated Spin Relaxation in a Two-Electron Quantum Dot

T. Meunier, I.T. Vink, L.H. Willems van Beveren, K.J. Tielrooij, R. Hanson, F.H.L. Koppens, H.P. Tranitz, W. Wegscheider, L.P. Kouwenhoven, and L.M.K. Vandersypen

PRL 98, 126601 (2007)

URL: http://link.aps.org/abstract/PRL/v98/e126601


The authors measured the singlet-triplet relaxation time in a two electron quantum dot. They vary the splitting of the two levels using an applied magnetic field. They find a minimum relaxation time between zero field and the field at which the singlet and triplet states become degenerate. This would not be expected if the only mechanism at work was Zeeman splitting.

The only relevant pathway for energy dissipation in this setup is coupling to acoustic phonons. This coupling is indirect, as phonons cannot couple states with different spin. The phonons can couple different orbital levels, and spin-orbit coupling provides a pathway for phonons to carry away the spin-flip energy.

To explore whether this model of phonon-assisted spin flips could explain the minimum in the spin relaxation time, the authors developed a simple but elegant model. It reproduces the qualitative features of the data quite well. The phonon coupling is strongest --- and the relaxation time shortest --- when the energy splitting of the singlet and triplet states corresponds to a phonon whose wavelength is twice the size of the dot. I.e., there is a resonant acoustic mode at this energy, which maximizes the rate of phonon assisted spin-flips.

When the phonon wavelength is large compared to the dot size, couplings to the singlet and triplet states are roughly equal, and their respective contributions to the electron-phonon interaction cancel. When the phonon wavelength is small, the coupling to both states is small individually, and the overall coupling is small. It is the resonant phonon energy that maximizes the coupling and leads to the observed minimum in decay time.
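
Out of curiosity, here is a back-of-envelope estimate of where that resonance should sit, using numbers I am guessing for a lateral GaAs dot (they are not values from the paper): the resonant splitting is E = hbar * c_s * q with q = 2 pi / lambda and lambda = 2d.

    # Rough estimate of the singlet-triplet splitting at the phonon resonance.
    import numpy as np

    hbar = 1.0546e-34    # J s
    c_s = 4.7e3          # m/s, rough longitudinal sound speed in GaAs (my guess)
    d = 100e-9           # m, rough lateral dot size (my guess)

    E = np.pi * hbar * c_s / d           # hbar * c_s * q with q = pi/d
    print(E / 1.602e-19 * 1e6, "micro-eV, i.e. roughly 0.1 meV")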

I've made some notes on the model.