The Standing Invitation

Archive for the ‘Science’ Category

The Origin of Opacity

with 2 comments

A while back I wrote a post about vision and why it is that some things simply cannot, even in principle, be described in visual terms. I focused (see how hard it is to avoid metaphors of sight?) on things smaller than atoms, but I didn’t need to go that far. Right now, you are reading these words through at least several inches of air – real-world, macroscale stuff that you can feel or hear when it moves, but cannot see.

Transparency is something magical. As a child I was fascinated by glass: solid, hard, heavier than water – and yet invisible. I asked how this could be possible, and was never really satisfied with any answer I got. And it turns out this is because I was asking the wrong question. It turns out that glass’s seemingly magical transparency is not the phenomenon demanding an explanation. To gain the deep understanding I missed as a child, we must consider the origin of opacity.

Ranked in order of decreasing wavelength, the electromagnetic spectrum begins with radio waves and continues with microwaves, the infrared, the ultraviolet, X-rays, and gamma rays. Note the omission: I have deliberately excluded visible light. Why?

The portion of the electromagnetic spectrum that we can actually see is vanishingly small. You could blink and miss it, though of course if you blink you do miss it. Visible light – colour – is an astoundingly narrow selection of the available wavelengths between infrared and ultraviolet. One might wonder why this particular chunk of real estate, between 390 and 750 nm, happens to be the one that we can see. And if you ask it in these terms, you are still asking the wrong question.

Recall that your “seeing” something corresponds to your brain detecting a chemical change in a substance called 11-cis-retinal in your eyeball. 11-cis-retinal only absorbs radiation with wavelengths between 390 nm and 750 nm; anything outside this range has no effect on it, and so is invisible. So this is why only some of the light gets “seen”. But this only pushes the question back one step. Why do our eyes employ 11-cis-retinal, and not some other chemical with absorbance in another wavelength range?

We can narrow the possibilities using an understanding of chemistry. There are no known chemical compounds that undergo a chemical change on exposure to radio waves. This means that no organism dependent on chemistry as we know it could ever treat radio waves as its own personal “visible light”. The same appears to go for microwaves, though this is contested. X-rays and gamma rays do cause chemical changes in molecules, but at wavelengths this short it would be quite a challenge to evolve an eye that could handle them (an essay by Arthur C Clarke suggests an animal with a metal box for an eye and a microscopic pinhole to focus it, but only to illustrate the difficulties involved). So the restrictions of photochemistry leave us a window about 3500 nm wide available for seeing – and yet evolution has caused us to see only a fraction of that. Why? And why did it “choose” for us the wavelength range that it did?
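The photochemistry argument comes down to photon energy. A quick back-of-the-envelope sketch (my own, not from the post) uses E = hc/λ: chemical bonds are typically held together by energies of roughly 1–10 eV, so only photons in that energy range can flip a molecule like retinal without either doing nothing or tearing it apart.

```python
# Photon energy in eV as a function of wavelength, E = hc/lambda.
# Bond energies are typically ~1-10 eV, so only photons from roughly
# the UV through the near-infrared can drive photochemistry.
HC_EV_NM = 1239.84  # h*c expressed in eV*nm

def photon_energy_ev(wavelength_nm):
    return HC_EV_NM / wavelength_nm

for name, nm in [("radio (1 m)", 1e9),
                 ("microwave (1 cm)", 1e7),
                 ("red edge (750 nm)", 750),
                 ("violet edge (390 nm)", 390),
                 ("X-ray (1 nm)", 1)]:
    print(f"{name:20s} {photon_energy_ev(nm):.2e} eV")
```

A radio photon carries about a millionth of an electronvolt – hopelessly feeble for chemistry – while an X-ray photon carries over a thousand electronvolts, enough to break the molecule rather than gently switch it.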

Well, consider some possibilities. What if we saw in the range of about 100 to 200 nm? Chemically it’s possible. But no organism on Earth would evolve to see in that range. Our atmosphere is about 78% nitrogen, and nitrogen absorbs light at about 100 nm. If we saw in that range, air would not be transparent: it would be totally opaque. The ability to see in this wavelength range would be worthless, just as it would be worthless to see around 1450 nm, where water absorbs – and we evolved from creatures that needed to see in water. Here is the answer to the problem of transparency, and the problem is revealed to be that the question was backwards. Air (or water, or glass) is not transparent in itself; it is transparent to us, because eyes that did not find air transparent would be of no use to us. The transparency of air is the result of the environment our genes have designed us to live in. Of course, a subterranean creature like a mole might welcome a design of eye that made soil transparent – while simultaneously leaving worms opaque and visible. But the chemistry for that does not exist, and moles have to make do with being blind.

Practical considerations aside, it’s interesting to ask whether X-ray vision might have been useful in evolutionary terms. If we saw in the X-ray region, most matter would be transparent to us, including our own bodies. This would be useful for some things, like spotting tumours or broken bones. But we would struggle to pick fruit, or detect approaching thunderclouds, or build tools out of wood. As a species, we are better off with the kind of eyes that can detect the chemical difference between an unripe fruit (green) and a ripe one (red). Evolution has selected for us a sense of vision that operates in the part of the spectrum that is richest in information relevant to our survival. Other animals make use of slightly different wavelength ranges: bees, for instance, prefer the shorter ultraviolet wavelengths rich in information about the availability of nectar in flowers.

In fact, it’s arresting to imagine an alien world, lit by a sun that emits different wavelengths of light from our own – populated by aliens based on very different chemistry from ours, with strange eyes for detecting wavelengths we cannot ever hope to see. If ever they came to visit us, their children might well look at us in fascination, wondering why it is that we humans are as transparent to them as glass…

REFERENCES

http://en.wikipedia.org/wiki/Infrared

http://en.wikipedia.org/wiki/Ultraviolet

Daniel C Dennett: Consciousness Explained

Richard Dawkins: Unweaving the Rainbow

Arthur C Clarke: Report on Planet Three and Other Speculations


Written by The S I

May 6, 2012 at 1:10 am

The Cost of Perfection

leave a comment »

Take two particles.*

Particles are fussy things. They don’t like being too close to each other because atomic nuclei repel each other very strongly; at the same time, the shifting clouds of electrons surrounding the nuclei can be mildly attractive to one another, so they don’t like being too far apart either. This balance of attractive and repulsive forces is familiar from all social gatherings: when you’re talking to someone at a party you want to be close enough to hear them, but if they’re standing right in your face it’s uncomfortable.

What this means for particles is that there’s a sweet spot, an optimum separation between the two particles that makes them both happy. It’s the lowest-energy arrangement of particles, in that once they’re at this distance, it would require an input of energy to push them closer together or pull them further apart.
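The standard toy model of this balance (my own illustration, not something the post names) is the Lennard-Jones potential: a steep repulsion at short range plus a gentler attraction at long range, with a single energy minimum in between.

```python
# Lennard-Jones pair potential: 4*eps*((sigma/r)^12 - (sigma/r)^6).
# The r^-12 term is the strong short-range repulsion; the r^-6 term
# is the mild long-range attraction between electron clouds.
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    return 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

# The sweet spot sits at r = 2**(1/6) * sigma, where the energy bottoms
# out at -epsilon; moving in either direction costs energy.
r_min = 2 ** (1 / 6)
print(lennard_jones(r_min))                         # approximately -1.0 (i.e. -epsilon)
print(lennard_jones(0.9) > lennard_jones(r_min))    # True: pushing closer costs energy
print(lennard_jones(2.0) > lennard_jones(r_min))    # True: pulling apart costs energy
```

Any potential with this shape – hard wall on the inside, gentle pull on the outside – produces the same sweet-spot behaviour; Lennard-Jones is just the textbook example.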

So this idea of the optimum distance between two particles is straightforward. And the same thing applies when you have billions of particles at once. They will move around at random until they find the lowest-energy arrangement, where the average distance between particles is as close as possible to the ideal separation.

When billions of particles try to reach their lowest-energy arrangement, they will try to form a lattice.

Lattices are three-dimensional patterns of points in space. They are infinitely large, infinitely repeating, purely mathematical constructs that can only be approached, never exactly attained. A lattice is a map of where particles should sit in space in order to be at the right distance from each other.

When particles arrange themselves on a lattice, we call the result a crystal. Crystals, like diamond, are simply regular arrangements of particles – and they really are very regular, repeating themselves almost perfectly for millions and millions of layers.

But remember, lattices are mathematical ideals, perfect and Platonic, while crystals are real-world lumps of matter. The particles in a crystal may try to reach the perfect state of a lattice, but they will never reach it. There will always be defects – points at which an atom is not sitting where the lattice says it should be. These are the microscopic imperfections that mean the ideal of a lattice will never be attained. Even though all particles in a crystal would benefit from being in a perfect lattice (achieving the optimum separation from other particles), the defects are nevertheless unavoidable.

There are two kinds of defects: extrinsic and intrinsic. Extrinsic defects are easy to understand. They are simply impurities. A diamond crystal is supposedly a regular arrangement of carbon atoms, but since no source of carbon is perfectly pure, no diamond will be perfectly pure. The most common impurity in diamond is a nitrogen atom taking the place of a carbon; diamonds, supposedly pure carbon, can contain as much as 1% nitrogen. As well as being an impurity in itself, a nitrogen atom causes local distortions in the crystal surrounding it, as the adjacent carbons move slightly from their ideal lattice positions to compensate for it.

But even if some perfectly pure source of carbon could be found and a diamond crystal grown from it, would that crystal approach a perfect lattice? Not quite, because of the second kind of defect – intrinsic defects. An intrinsic defect occurs when a particle isn’t where it should be: a lattice site is left vacant, or a particle squeezes into a gap between sites.

These intrinsic defects are interesting because they are unavoidable. They are the inevitable consequence of the great trade-off between enthalpy and entropy. Because although there is an enthalpic benefit to having particles sitting an ideal distance from each other, there is also an entropic cost to having them perfectly ordered. This again makes sense to anybody who has ever tried to organise anything. Of course you want things organised; things are more efficient when they are organised, and so the more organised, the better – but organising things takes time, effort, and energy. Ultimately a compromise has to be reached: you accept the amount of organisation you can achieve for the amount of energy you’re willing to expend on putting things in order.

Although particles like to be separated by an ideal distance, they also like moving around, particularly at high temperatures. For this reason, truly perfect crystals are impossible to grow. Imperfections will always sneak in. In fact, imperfections are necessary. Crystals exist because it is energetically favourable for particles to be organised; but because of the inevitable cost of organising anything, it is also energetically favourable for there to be defects. And the interesting thing is that because the imperfections are the result of particle movement, and movement depends on temperature, it is possible to predict how many imperfections there will be in a crystal at a given temperature. You can’t say where they will be, but you can say how many there will be per cubic centimetre. The defects obey exact laws that can be understood and exploited. They are perfect imperfections.
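That last claim – that the number of defects at a given temperature can be predicted – can be sketched with the standard equilibrium formula for vacancies, n/N = exp(−Ef/kT). The formation energy of 1 eV below is an assumed, order-of-magnitude figure (typical of simple metals), purely for illustration.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def vacancy_fraction(formation_energy_ev, temp_k):
    """Equilibrium fraction of vacant lattice sites: n/N = exp(-Ef / kT)."""
    return math.exp(-formation_energy_ev / (K_B * temp_k))

EF = 1.0  # assumed vacancy formation energy in eV (illustrative value)
for t in (300, 600, 1000):
    # roughly 1e-17 at room temperature, rising to ~1e-5 near 1000 K
    print(f"{t:5d} K  n/N = {vacancy_fraction(EF, t):.1e}")
```

The exponential means the defect population explodes as a crystal approaches its melting point – the hotter the particles, the more they move, and the more imperfections sneak in, exactly as the paragraph above describes.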

 

* In this post, by particles I mean atoms, ions, molecules or some colloids; things smaller than atoms behave rather differently.

Written by The S I

April 9, 2012 at 10:18 am

Things That Don’t Look Like Anything

with 2 comments

When I talk about things like molecules, atoms and particles with nonscientists, a question I am often asked is what these things look like. And they never seem satisfied with my response: that, really, they don’t look like anything at all. It’s not that they’re invisible as such; it’s just that sentences involving what they look like don’t make any sense. You can’t describe their appearance because they don’t have an appearance to describe.

The thought makes people uncomfortable.

The idea of something not looking like anything is not a new one. Sounds do not look like anything. We know that sounds exist, but that physical appearance is not something we can ascribe to them. When we talk about sounds, we describe them in nonvisual terms.

Sounds, or ideas or desires or smells, have a certain abstract quality that seems to excuse this. But particles are stuff. They are physical objects whose masses are known to remarkable degrees of accuracy, and since everything we can see is made up of aggregates of them, it seems impossible that they cannot be described visually.

Let’s consider what happens when you see something, step by step.

An object is illuminated by a bombardment of photons. These photons interact with the surface of the object. Some are absorbed by the object – it is this absorption that gives the object its colour. The photons that are not absorbed are scattered around in all directions, and many of them enter through the pupil of your eye. These photons reach the retina, where they cause chemical changes in molecules like 11-cis-retinal; electrical reports of these changes are transmitted to the brain, where they are interpreted as ‘seeing’ those photons.

So to ‘see’ something means that photons bouncing off the thing cause chemical changes in your eye. This is fine for large objects like apples and oranges, but what if the object is smaller? Most people can’t make out objects smaller than about 0.1 mm, because the eye cannot resolve details that small and too few photons reflect off them to register. We get around this problem by using stronger illumination and magnifying lenses, allowing us to see things like blood cells.

But what about objects that are even smaller?

Well, here we start to have a problem. For objects smaller than about 0.002 mm, the wavelength of visible light becomes comparable to the size of the object itself, and diffraction blurs away the detail. In order to resolve features at this scale, probes with much shorter wavelengths than visible photons need to be used. This is how electron microscopy works: instead of using reflected photons, you use reflected electrons, whose wavelengths are thousands of times shorter and so better able to probe the surface of what you’re examining.
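Just how much shorter an electron’s wavelength is can be sketched with the de Broglie relation. The 10 kV accelerating voltage below is an assumed, typical figure for an electron microscope, not one from the post, and the formula is the non-relativistic approximation.

```python
import math

# de Broglie wavelength of an electron accelerated through V volts:
# lambda = h / sqrt(2 * m_e * e * V)  (non-relativistic approximation)
H = 6.626e-34         # Planck constant, J*s
M_E = 9.109e-31       # electron mass, kg
E_CHARGE = 1.602e-19  # elementary charge, C

def electron_wavelength_nm(volts):
    return H / math.sqrt(2 * M_E * E_CHARGE * volts) * 1e9

# An assumed 10 kV beam gives a wavelength of ~0.012 nm,
# tens of thousands of times shorter than visible light (390-750 nm).
print(electron_wavelength_nm(10_000))
```

At 0.012 nm, the probing wave is far smaller than an atom, which is why electrons can resolve detail that visible photons smear into a blur.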

Is this really ‘seeing’ the object? The microscopic object under examination is not being studied with light, remember. This is why electron microscope images are monochrome. Light isn’t involved in the process at any point until a computer screen shows you, with light, the pattern of reflected electrons. Still, we are presented with pictures of the object’s surface, so it’s certainly like seeing, and the object certainly has an appearance that can be discovered, even if only indirectly.

What if the object is smaller?

Eventually an object can be so small that not even electrons can give you good enough resolution, and even more indirect means of gathering information must be used. One of them, atomic force microscopy, is more analogous to touch than sight: it drags a tiny needle across a surface to register bumps in the surface where the individual atoms are. But apart from the atoms’ locations in space, there’s no information here about their appearance. Atoms do not interact with light in a way that gives meaning to the phrase ‘looks like’. They do absorb light, and so might be said to have colour in a technical sense, but no picture of an atom could ever be drawn from its interaction with light. And particles smaller than atoms don’t scatter light in any way that could form an image. You can’t see them, ever, because there is nothing there to see.

But still, some picture of a very tiny object might be drawn. Questions about its shape, for example, are not meaningless – but on a small enough scale, questions of shape become questions about properties rather than appearance. The question ‘is x round?’ becomes ‘are all the points on x’s surface the same distance from one central point?’. This is a question that can be answered, but only because it is a mathematical question about the properties of a certain type of object. And it turns out that the equations describing these objects reveal them to be strange and wonderful things – things that behave in ways that make absolutely no sense to people used to objects the size of apples and oranges. They cannot be seen, but they can be described, and this description is better than seeing them. A mathematical description of a particle is more precise and less fallible than the clumsy tool of vision that evolution gave us to survive in a world full of large-scale objects. And we can reach this level of acquaintance with particles that no one has ever seen because, even though we can’t see them, we can imagine them.

Written by The S I

March 26, 2012 at 3:00 am

Burning Curiosity

leave a comment »

Something about writing this thesis makes me think of setting things on fire. This prompted me to wonder exactly what a flame is.

Although a flame appears to be a stable, defined structure, we know that this is an illusion. Particles enter at the bottom and leave at the top, becoming visible for only a part of that journey; the region of space in which they’re visible we call a ‘flame’. It’s rather like a queue: it has a shape, a duration and a certain characteristic behaviour, but nothing about it is permanent. (Remember how no part of our bodies is the same after 20 years…?)

So a flame is really a time-averaged aggregate of microscopic events. But what light-emitting events lead to the thing we call a flame?

Candle wax is made of long-chain hydrocarbons that have a low melting point. Heat turns wax from a white solid to a clear liquid, and then to a gas. Heat rises, and the gas molecules are carried upwards. When they are hot enough, they react with oxygen to form carbon dioxide and water vapour, like this:
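The equation image from the original post is missing here; taking C₂₅H₅₂ as an illustrative wax alkane, the balanced overall reaction would be along these lines:

```latex
\mathrm{C_{25}H_{52} + 38\,O_2 \longrightarrow 25\,CO_2 + 26\,H_2O}
```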

 

But this is too simple. The process of combustion is incredibly complex, with countless steps and short-lived intermediates – something a bit more like this.

Many of these reactions give off heat. The heat excites electrons in nearby molecules, and these electrons relax back to their original energy levels by emitting light. The colour of the light given off depends on how hot the molecule was. This is how astronomers measure how hot the stars are, and what they’re made of.
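The colour-to-temperature link can be sketched with Wien’s displacement law, treating the hot material as an approximate blackbody (an idealisation, and my own illustration rather than anything in the post).

```python
# Wien's displacement law: the peak emission wavelength of a blackbody
# is inversely proportional to its temperature.
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_nm(temp_k):
    return WIEN_B / temp_k * 1e9

print(peak_wavelength_nm(5778))   # the Sun's surface: ~500 nm, mid-visible
print(peak_wavelength_nm(1673))   # soot at ~1400 degC: ~1730 nm, in the infrared
```

Note that glowing soot at candle temperatures peaks in the infrared; the yellow we see is just the visible tail of that emission, which is why the sooty region looks a “cool” yellow while the hotter base burns blue.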

The hottest part of a candle flame is the very bottom, where oxygen is plentiful and there is a high density of heat-generating combustion reactions occurring per second, driving up the temperature. The molecules here burn at about 1400 °C and give off a hot blue colour.

These very efficient reactions near the bottom effectively starve the rest of the flame of oxygen. Oxygen still gets in through the sides, but not as efficiently. The result is incomplete combustion: the wax is converted into particles of soot carried upwards by the current of air generated by the heat. These are still hot enough to glow, but the temperature is much lower, and so this region of the flame is a cool yellow.

The flame is only the part of the process we can see. It is misleading to say that soot particles emit light within the flame; better to say that the flame-space is defined as that region within which the particles are hot enough to glow. The tapering shape of the flame comes directly from the low availability of oxygen. When you trap a flame under a glass, the flame extends before going out, because the lifetime of a glowing particle is longer in the absence of oxygen.

This is demonstrated in a lovely picture from NASA of a flame in microgravity. Because there is no ‘up’ for the air currents to go, the flame burns in all directions. This is a much more efficient use of space: oxygen can get in from all directions, so the fire burns strongly, with no soot to give it a yellow colour.

 

REFERENCES

http://en.wikipedia.org/wiki/Candle

Also, see a fascinating and rather whimsical discussion on the ‘philosophy of candles’ by the mighty Michael Faraday here.


Written by The S I

November 8, 2011 at 11:59 pm

Functions of State

leave a comment »

What one thing would you change about the world?

Restricting ourselves to changing just one thing makes us into scientists. A scientist might express it in this way: in an experiment in which all other things are held constant, what variable would you alter in order to maximise the happiness of the world?

Even the most naïve scientist acknowledges that there are not many problems that can be solved by changing just one thing. But even that is an interesting observation. Let’s consider it in more detail – with graphs.

Here are three graphs whose y-axis is some imaginary scale of ‘aggregated societal happiness’ – a grotesque utilitarian caricature, but bear with me, I’m trying to make a point. What we vary lies along the horizontal axis: we change its value and watch to see how happiness goes up or down.

Graph A shows a simple relationship where the more you have of X, the better off everyone is. X might be something like availability of food, ranging from 0% to 100% – if one more person can eat, the world is a little bit better off for it.

Graph B shows the opposite, where the more you have of X, the worse off everyone is. X here might be prevalence of smallpox; under no circumstance does more X mean more happiness.

In graph C, there is a certain value of X that ensures a maximum of happiness, and too little X or too much is actually a bad thing. X here might be freedom of expression. If you object to this, then I’m sure you won’t object to me hanging a Nazi poster in your bedroom. There’s only so much freedom of expression you can have before it starts to clash with other freedoms you enjoy, like your freedom of privacy.

But the really interesting case, to me, is a graph like this one.


Here we have two happiness maxima – two clearly different ways of organising a society, one, perhaps, happier than another – but separated by a chasm of misery for some levels of X.*

What are examples of X that would generate this curve? They are instances where everyone benefits from acting the same way, and society suffers a little more for every person who deviates… until the deviants become the majority, at which point it is the people who refuse to deviate who make everyone suffer. One good example of such an X is the tendency to drive on the left side of the road: it’s great if everyone does it, great if nobody does it, but chaos if exactly half the people do it.
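The driving example can be put into a toy formula (my own sketch, not from the post): if a fraction f of drivers keep left and the rest keep right, then two drivers meeting head-on agree with probability f² + (1 − f)² – take that as the “happiness” of the convention.

```python
# Toy coordination-game payoff: fraction f of drivers keep left.
# Two random drivers agree (both left or both right) with probability
# f^2 + (1-f)^2, which has maxima at f=0 and f=1 and a minimum at f=0.5.
def happiness(f):
    return f ** 2 + (1 - f) ** 2

for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"{f:.2f}  {happiness(f):.3f}")
```

The curve has exactly the double-hump shape described above: full happiness at either convention, and a chasm of misery when the population is split down the middle.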

But what if you want to change from driving on the left to driving on the right – to move from one maximum of utility to the other? It has to be done in one step, overnight, to avoid the dangers of the middle ground. It can be done, and indeed has: Sweden did exactly this in 1967. But in some cases, a transition from one stable state to another is impossible because the middle ground is simply too costly; other things will have to change to accommodate it.

REFERENCES

http://en.wikipedia.org/wiki/Dagen_h

The argument about freedom of expression as pertains to Nazi posters was stolen from Chomsky, Understanding Power. Can’t find the page number.

* It’s important to be clear that this is separate from the idea that you have to make things worse now in order to make them better later – which is itself an important concept, but not under discussion here. Happiness levels at a given X are taken to be instantaneous and without memory; they are functions of state, not functions of path.

Written by The S I

November 6, 2011 at 11:59 pm

Linguistic Legislation

leave a comment »

Another quick link to a video. I like Steven Pinker a lot. Usually when I tell this to my philosopher friends, they nod, smiling, and say “Ah, Pinker.” Usually when I tell my linguist friends, they roll their eyes and say “Ugh. Pinker.” Which of these grotesque caricatures best describes you? Watch his talk and decide for yourself. It’s over an hour long, but even if you do nothing else today, your life will be immeasurably enriched by an anecdote beginning at 20 minutes and 33 seconds…

Written by The S I

November 4, 2011 at 11:59 pm

Posted in Science


Xiangqi

with one comment

We like board games here at the SI, and the latest craze sweeping the office is xiangqi – Chinese chess.

Xiangqi is played on the intersections of a grid nine lines wide by ten lines long, and is similar to Western chess in many ways: it is turn-based, zero-sum, and is won by checkmating the opponent’s king (or ‘general’). It is easy to see how the two games might share a common ancestor. Many of the pieces move in similar ways – the pieces called ‘chariots’ move exactly like rooks, the ‘horses’ almost exactly like knights, the ‘soldiers’ and ‘elephants’ a little bit like pawns and bishops. Some pieces have no equivalent in Western chess – ‘cannons’ capture by jumping over exactly one other piece, and ‘advisors’ protect the general – but, rather pleasingly, once the pieces’ moves have been learnt, the tactics and strategy of chess carry over very well. If you are good at chess, you’re not too bad at xiangqi either, even if you don’t know it yet.

One important difference is that the xiangqi general is confined to a small space – a three-by-three area called the palace. This has the effect that checkmate is much easier to achieve in xiangqi than in chess.

We may define this exactly. Consider the set of all possible combinations of up to 32 pieces on an 8×8 chessboard – a huge number; call it C. Now imagine the much smaller (though still gigantic) set of possible combinations of pieces that are checkmates for black. Call this smaller number C+. Now consider the equivalent numbers in xiangqi – the set of all possible positions, X, and the subset of all possible checkmates, X+. When we say that checkmate is easier in xiangqi than in chess, we mean that (X+ / X) is much bigger than (C+ / C) – checkmates make up a much bigger proportion of the available positions.

We’re talking about huge numbers when we discuss X and C, but still finite – Dennett fans would call them Vast. The number of possible positions is an upper bound, a kind of mathematical worst-case scenario. In fact the number can be made smaller by realising that some positions are unreachable. In chess, the set of positions in which a pawn sits on the first row may be removed from C; likewise in xiangqi the positions in which the two generals illegally face each other. These legally accessible positions are called the state spaces of xiangqi and chess.

State space sizes are difficult to calculate. People begin with an upper bound, then whittle this Vast figure down by working out the mathematical consequences of the games’ rules. Exact answers have been obtained for simple games like noughts and crosses: 765. Chess’s has been estimated as 100,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 (about 10^47), xiangqi’s as ten times bigger, but take these as approximations. The set of all possible games, of course, is much, much larger.
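The noughts-and-crosses figure is small enough to check by brute force. This sketch (my own, not from the post) enumerates every position reachable in legal play, then collapses rotations and reflections – the 765 quoted above counts positions that are essentially different, i.e. distinct up to symmetry.

```python
# Enumerate all legal noughts-and-crosses positions, then count the
# essentially different ones (distinct up to the board's 8 symmetries).
WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for i, j, k in WIN_LINES:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return None

def explore(board, player, seen):
    """Depth-first search over positions reachable in legal play."""
    if board in seen:
        return
    seen.add(board)
    if winner(board) or ' ' not in board:   # game over: no further moves
        return
    nxt = 'O' if player == 'X' else 'X'
    for i in range(9):
        if board[i] == ' ':
            explore(board[:i] + player + board[i+1:], nxt, seen)

def symmetries(board):
    """All 8 rotations/reflections of a board string."""
    def rot(b):  # rotate 90 degrees clockwise
        return ''.join(b[(2 - c) * 3 + r] for r in range(3) for c in range(3))
    def ref(b):  # mirror left-right
        return ''.join(b[r * 3 + (2 - c)] for r in range(3) for c in range(3))
    out, cur = [], board
    for _ in range(4):
        out += [cur, ref(cur)]
        cur = rot(cur)
    return out

seen = set()
explore(' ' * 9, 'X', seen)
canonical = {min(symmetries(b)) for b in seen}
print(len(seen), len(canonical))   # raw reachable positions, then up-to-symmetry
```

The same whittling-down shows up here in miniature: the naive upper bound of 3⁹ = 19,683 boards shrinks to 5,478 legally reachable positions, and to 765 once symmetry is accounted for.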

Don’t bother playing draughts, by the way. It’s been solved: the entire decision tree of a game of draughts was worked out in 2007. From the starting position, draughts will always end in a draw if played perfectly. The game only stays interesting because people make mistakes.

REFERENCES

http://fragrieu.free.fr/SearchingForSolutions.pdf

http://en.wikipedia.org/wiki/Solved_game

Written by The S I

November 2, 2011 at 11:59 pm