Although the phrase “ghost imaging” might sound like it involves photographing phantoms, it is in fact a real method of physical imaging; otherwise I would be unlikely to be writing about it now. The basic concept is that an object can be photographed without the beam of particles that produces the photograph ever interacting with the object directly. To do this, one beam of photons is sent to a high-resolution multi-pixel detector. The other beam of photons, split off from the same source, is aimed at the object being photographed, behind which sits a single-pixel plate detector known as a bucket. Thinking one pixel at a time makes this simpler to understand. If both the multi-pixel detector and the bucket receive a signal at a particular pixel (and the corresponding location on the plate), then neither photon was blocked, so the object was not covering this pixel. If the bucket receives no photon at a particular pixel coordinate, then the object obscures this location, and so an image can be formed despite the multi-pixel photons never being blocked. Since this is a quantum phenomenon it is not quite as simple as this, and complex light distributions are actually used rather than single-pixel illumination. What relation this has to quantum entanglement is still up for debate.
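The classical (computational) cousin of this correlation trick can be sketched in a few lines. Everything below is invented for illustration: the object, pattern count and sizes are arbitrary, and the quantum subtleties mentioned above are ignored entirely; it only shows how correlating bucket readings with reference patterns recovers an image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D scene: a transparent field with an opaque square "object".
N = 16
obj = np.ones((N, N))
obj[5:11, 5:11] = 0.0          # the object blocks these pixels

# Random speckle patterns play the role of the structured reference beam.
patterns = rng.random((5000, N, N))

# The bucket detector records only the total light that passes the object.
bucket = np.array([(p * obj).sum() for p in patterns])

# Correlating bucket values with the reference patterns recovers the image:
# G(x, y) = <B * I(x, y)> - <B><I(x, y)>
ghost = (bucket[:, None, None] * patterns).mean(axis=0) \
        - bucket.mean() * patterns.mean(axis=0)

# Blocked pixels correlate less with the bucket signal than clear ones.
print(ghost[5:11, 5:11].mean(), ghost[0:4, 0:4].mean())
```

The reference beam never touches the object; only the statistical correlation between the two arms carries the image, which is the essence of the scheme.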
It has also recently been demonstrated that this imaging technique works with massive particles as well as photons. By smashing two Bose–Einstein condensates together, correlated pairs of slow-moving helium atoms were created, which were then used to produce the same kind of ghost images of objects. Although this may be impractical as an imaging technique, it is still very valuable information: the research can be developed to try to prove quantum entanglement and test Bell’s inequality for atoms rather than photons.
In condensed matter physics it is useful to see things in terms of particles. The models that describe collective physical behaviour as particles are called quasiparticles. When thinking about an electron flowing as part of a current, it is a major simplification to ignore its interaction with the surrounding material. When an electron is present in an ionic lattice it attracts the positive ions and repels the negative ones. This distortion creates a phonon (a quantum of lattice vibration, the mechanical analogue of the photon) that the electron then interacts with to create the quasiparticle known as the polaron. The phonon cloud moves with the electron, screening its charge, increasing its effective mass and generally hindering the charge’s movement. This had only been a theory until a recent experiment provided some empirical support for it. To test the concept, manganese oxide (MnO) was scrutinised for signs of electron–lattice interactions using electron diffraction: a cathode is pumped with a laser to promote its electrons, ultraviolet bursts then force it to emit photoelectrons, and those electrons are scattered off the sample to measure how much the lattice has moved. It was found that both the electron cloud and the lattice shape changed when the two interacted, with both coming to a middle ground rather than one dictating the deformation of the other. The confirmation of such a long-standing theory, combined with more detailed information about what exactly occurs, makes this research some of the most important you’ll find.
Materials are placed into categories to make their behaviour easier to classify. Metals, plastics and rubbers are all categories that should be quite familiar. Amorphous materials are heard of less often, but they are simply materials with no repeating structure that carries on throughout; concrete and glass both fall under this definition. Some amorphous materials, when heated, become viscous and more rubber-like; if they do, they are known as a “glass,” although they are not all necessarily glass themselves. Viscous liquids can also undergo the phase change in reverse: by being cooled very quickly they become brittle and form an amorphous solid, known as a structural glass, in a process called vitrification. The rapid cooling is necessary because otherwise crystals would form and the liquid would simply freeze; however, the slower the cooling rate, the lower the temperature at which a particular material becomes a glass. Walter Kauzmann pointed out that if the transition temperature could be pushed low enough, the cooling liquid would pass a point, before reaching absolute zero, where it had less entropy than its own crystal form, despite the crystal being the most stable form of any material. This is called the Kauzmann paradox and is often resolved by introducing the concept of an “ideal glass.” Researchers have now presented a different idea that completely circumvents the Kauzmann paradox by introducing the idea of a “perfect glass.” A perfect glass must be:
- Amorphous and completely uniform in its disorder.
- Without an energy minimum (a most stable state), so that even if left to heat up it would never crystallise.
- Free of any freezing point, unlike the ideal glass model.
It has been found that such a material is mathematically possible, and the primary concern now is to discover whether any real materials have some, or perhaps all, of these features.
As the weeks progress towards Christmas I am expecting the scientific world to slow down more and more, although I am sure there will always be something interesting to write about. This week’s major news was that scientists from Imperial College London have been questioning whether the speed of light is really stuck at 3×10⁸ metres per second. They believe it might have been possible for light to have travelled faster in the early universe, as an explanation for why the universe is so homogeneous. Interesting fact: for the first 379,000 years, and then again for the years 385,000–390,000 (all estimates), the universe was completely ionised, so any photons were constantly scattered off electrons and as a result the universe was opaque. Apparently these Einstein sceptics are designing an experiment to see if they can prove their theory. People are also still raving about the EM-Drive, the reactionless engine that has apparently been built. There have still yet to be any recreated experiments showing the same conclusion, which is not very surprising, as the equipment and finely tuned sensors required are not easy to come by, and NASA, of course, has some of the best in the world. Overall things are still going well, and hopefully, despite the seasonal slowdown, scientists can continue to produce the high-quality and interesting research that continues to stimulate our imaginations.
Until tomorrow, goodnight.
When you search for a website, your computer has to make sense of what you have typed in. For instance, you may type in “google.” Your computer will then contact a local Domain Name System (DNS) server and ask it what “google” means. The DNS server will either already know that the word “google” translates to a specific set of numbers, which it sends back, or it will act as a client for another DNS server, which performs the same process. Eventually your computer gets back the numerical address corresponding to the name you asked for. If someone were to crack (maliciously hack) into a DNS server, they could tell it to send back the address of their own fake website when it receives the legitimate website’s name; as a result, DNS servers have to be secure and as quick as possible in their tasks.
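The lookup chain described above can be sketched as a toy resolver. To be clear, the class, names and address below are all invented for illustration and bear no relation to real DNS software; each server either knows the answer or asks another server on the client's behalf.

```python
# Toy sketch of the DNS lookup chain (not a real resolver):
# each server either knows the answer or forwards to an upstream server.

class ToyDNSServer:
    def __init__(self, records, upstream=None):
        self.records = records      # name -> address mappings this server knows
        self.upstream = upstream    # another server to ask when we don't know

    def resolve(self, name):
        if name in self.records:
            return self.records[name]
        if self.upstream is not None:
            # Act as a client for the upstream server and cache the answer.
            address = self.upstream.resolve(name)
            self.records[name] = address
            return address
        raise LookupError(f"no record for {name!r}")

# An illustrative address only; real resolution returns whatever is current.
root = ToyDNSServer({"google.com": "142.250.187.206"})
local = ToyDNSServer({}, upstream=root)

print(local.resolve("google.com"))    # fetched from upstream, then cached
print("google.com" in local.records)  # True: the local server now knows it
```

The caching step is also why a cracked server is so dangerous: a poisoned answer gets stored and handed out to every later client that asks.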
It is in this kind of system that quantum computing can be very useful. When communication is established over a quantum internet, both quantum key bits and quantum entangled bits, which I have mentioned before, are sent. The quantum keys are based on the idea that any unauthorised attempt to measure the nature of quantum particles will change the particles themselves and make the intrusion obvious, a concept based on Heisenberg’s uncertainty principle. The quantum entangled bits would immediately change if their partners did, and so these correlations over the maximum entangled distance, which is currently about one hundred kilometres, would be instantaneous. Two computers communicating in this way would be the most basic form of the quantum internet, which we could possibly see in the future. There is, however, a fundamental cap on the performance of such a system due to photon loss in the optical fibres and the quantum nature of the bits. This limit is something that cannot be solved but rather has to be worked around in order to create a working quantum communication system.
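The measure-and-disturb idea behind the quantum keys can be sketched classically. This is a toy simulation loosely in the spirit of the BB84 protocol (my assumption, since no specific protocol is named above): an eavesdropper who measures in the wrong basis randomises the bit, so comparing a sample of the sifted key exposes the intrusion as an error rate of roughly 25%.

```python
import random

random.seed(1)

def bb84_error_rate(n_bits, eavesdrop):
    """Toy BB84-style sketch: Alice sends bits in random bases, Bob measures
    in random bases, and an eavesdropper measuring en route disturbs states."""
    alice_bits  = [random.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [random.randint(0, 1) for _ in range(n_bits)]
    bob_bases   = [random.randint(0, 1) for _ in range(n_bits)]

    received = []
    for bit, basis in zip(alice_bits, alice_bases):
        if eavesdrop:
            eve_basis = random.randint(0, 1)
            if eve_basis != basis:          # wrong-basis measurement...
                bit = random.randint(0, 1)  # ...collapses to a random bit
                basis = eve_basis           # photon re-sent in Eve's basis
        received.append((bit, basis))

    # Keep only positions where Bob's basis matched Alice's (the sifted key),
    # then compare those bits publicly to detect intrusion.
    errors = total = 0
    for (bit, basis), a_bit, a_basis, b_basis in zip(
            received, alice_bits, alice_bases, bob_bases):
        if b_basis != a_basis:
            continue
        measured = bit if b_basis == basis else random.randint(0, 1)
        total += 1
        errors += (measured != a_bit)
    return errors / total

print(bb84_error_rate(2000, eavesdrop=False))  # 0.0: undisturbed channel
print(bb84_error_rate(2000, eavesdrop=True))   # ~0.25: intrusion shows up
```

Without an eavesdropper the sifted key agrees perfectly; with one, about a quarter of the compared bits disagree, which is exactly the tell-tale disturbance the paragraph describes.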
Today I learnt one of the most interesting things I have ever come across. Below is a graph of the first digit of many different physical constants against the number of constants with that first digit.
Of course this graph doesn’t look right at all. It seems to show that the probability of any universal constant beginning with the digit 1 is highest, and that this probability decreases as the first digit rises, with very few constants beginning with 9. The truth is that this is exactly right, and it doesn’t just apply to physical constants. If you look at the population, surface area or border length of all countries, you’ll find that about 30% of these values begin with 1. This holds provided the data set covers several orders of magnitude and is not completely randomly generated. This 30% is just an average: for the specific set of numbers 1 to 9999, the probability of beginning with 1 drops to about 11%, but when 1 to 19999 is examined the chance of beginning with 1 rises to about 56%. The probability fluctuates between these two extremes and averages at about 30% for an unknown set size. An interesting application is that when humans make up numbers, it is part of the human psyche to try to distribute the digits evenly. If these made-up numbers are supposed to represent coefficients in scientific papers, then we can see that they are clearly forged, as real data would generally follow this first-digit law, also known as Benford’s law. This interesting quirk of probability can be used to spot fraudulent physics papers and other cases where humans attempt to forge data.
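The numbers quoted above are easy to check directly: Benford’s law gives P(d) = log₁₀(1 + 1/d), and counting first digits over the ranges 1–9999 and 1–19999 reproduces the 11% and 56% figures (a brute-force count, purely for illustration).

```python
import math

# Benford's law: P(first digit = d) = log10(1 + 1/d)
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
print(round(benford[1], 3))   # 0.301 -> the ~30% average quoted above

def leading_digit_share(upper, digit=1):
    """Fraction of the integers 1..upper whose first digit is `digit`."""
    count = sum(1 for n in range(1, upper + 1) if str(n)[0] == str(digit))
    return count / upper

print(round(leading_digit_share(9999), 3))    # 0.111: the 11% case
print(round(leading_digit_share(19999), 3))   # 0.556: the 56% case
```

For 1–9999 the numbers starting with 1 are 1, 10–19, 100–199 and 1000–1999, i.e. 1111 of 9999; adding 10000–19999 brings in ten thousand more, which is why the share jumps so sharply.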
The Mpemba effect is quite well known, though maybe not by name outside of physics. It is the idea that hot water will freeze faster than cold water. I can remember a competition held by the Royal Society of Chemistry four years ago that had the goal of finding an explanation for the Mpemba effect. But slowing down might be required to explain where this almost paradoxical claim came from. The story I heard, and believe to be for the most part true, is that a boy called Erasto Mpemba, back in the sixties, was making ice cream as part of a class food project. He was running behind schedule, and so instead of letting his mix cool before putting it in the freezer he put it in still hot. He was surprised to find his ice cream was ready before that of his classmates, who had put their cold mixtures in before or at the same time as him. Apart from the eponymous example there are many recorded instances of this observation throughout history, even as far back as Aristotle. The explanation I believe to be most valid is that the hot water produces convection currents that are maintained even as the water cools, meaning the rate of cooling is always faster in the hotter container. Other theories point out that cold water freezing forms a layer of ice over the surface, which insulates the water below and slows the cooling rate, whereas in hot water ice forms around the base and sides first, so this insulating layer never gets a chance to form. A paper has just been released that comes to the simplest conclusion of all, however: the Mpemba effect doesn’t actually exist. Generally the Mpemba effect is seen more in home freezing set-ups than in laboratories. The empirical studies that do show the Mpemba effect only ever find the difference in freezing time to be very small, and many can’t find any evidence for it at all.
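For what it’s worth, the convection explanation can be made quantitative with Newton’s law of cooling. The coefficients below are made up purely for illustration: the sketch only assumes that convection roughly doubles the hot sample’s effective heat-transfer coefficient, which is precisely the assumption that controlled laboratory conditions call into question.

```python
import math

T_ENV = -20.0   # assumed freezer temperature, in degrees Celsius

def time_to_zero(t0, k):
    """Newton's law of cooling, T(t) = T_ENV + (t0 - T_ENV) * exp(-k t):
    time for water starting at t0 degrees C to reach 0 degrees C, given an
    effective heat-transfer coefficient k (per minute)."""
    return math.log((t0 - T_ENV) / (0.0 - T_ENV)) / k

# Made-up coefficients: convection is assumed to double k for the hot sample.
cold_time = time_to_zero(25.0, k=0.010)
hot_time  = time_to_zero(70.0, k=0.020)

print(round(cold_time, 1), round(hot_time, 1))
print(hot_time < cold_time)   # True under these assumed numbers
```

With these invented numbers the hot sample reaches 0 °C first despite its head start in temperature; with equal coefficients it never can, which is why the whole argument stands or falls on whether convection really persists as the water cools.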
It is possible that under strictly controlled conditions the effect is reduced to nothing, in which case its source must be some uncontrolled, everyday factor. This result does raise an interesting question: if an effect is commonly observed in nature but can’t be recreated or observed under laboratory conditions, what conclusion is science supposed to draw?