Lunar swirls are distinctive features on the Moon’s surface, made highly visible by their high albedo (reflectivity). On the right is an image of Reiner Gamma, one of the most prominent lunar swirls and one visible through most telescopes. Lunar swirls show the optical characteristics of young regolith (the layer of broken stones and dust covering solid rock), although they aren’t associated with any particular rock type. These swirls aren’t mountains or craters, and although they are often found superimposed on crater sites they don’t themselves represent any particular change in surface topography. One connection that has been noted is that every lunar swirl observed lies in a region of strong magnetic field, though the reverse is not always true. Now, the Moon doesn’t have an active core dynamo, and in fact may never have had one, so the magnetic fields that exist are purely geological, more magnetic anomalies than a proper globe-spanning field.
Superconductivity is one of the most interesting effects in modern physics. When a material undergoes the transition to a superconducting state, Bardeen, Cooper and Schrieffer (BCS) theory tells us that lattice vibrations within the metal have caused electrons to form Cooper pairs, turning them from fermions into quasi-bosons which can now move through the metal with no resistance. This theory works very well to explain the observed properties of superconductivity, and its particle-pairing principle has even been reapplied in areas such as nuclear physics to explain phenomena there. It has been hypothesised that pretty much any crystalline material has the potential to become a superconductor at low enough temperatures, provided Cooper pair formation occurs.
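To give a feel for the numbers, here is a minimal sketch of the standard weak-coupling BCS estimate for the transition temperature. The Debye temperature and the dimensionless electron–phonon coupling below are illustrative textbook values for aluminium, not fitted results:

```python
import math

# Weak-coupling BCS estimate of the critical temperature:
#   k_B * T_c ≈ 1.13 * k_B * theta_D * exp(-1 / (N0 * V))
# theta_D is the Debye temperature and N0*V the dimensionless
# electron-phonon coupling strength. The values below are rough
# textbook numbers for aluminium (an assumption for illustration).
theta_D = 428.0   # Debye temperature of aluminium, in kelvin
coupling = 0.18   # assumed N(0)V for aluminium

T_c = 1.13 * theta_D * math.exp(-1.0 / coupling)
print(round(T_c, 2))  # a few kelvin, the right ballpark for aluminium
```

The exponential dependence on the coupling is the key point: small changes in how strongly the lattice binds the pairs shift the transition temperature enormously, which is why most conventional superconductors only work close to absolute zero.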
Vectors can be imagined as arrows (we won’t need the more rigorous definition of a vector here). A vector field is, therefore, a collection of these arrows, one at every point in space. A vector field is a very useful way to describe aspects of nature: there are velocity fields, which describe the velocity of a medium at every point; stress fields, which describe the tensions within materials; and force fields, which might represent electromagnetic or gravitational forces. The problem is that many vector fields aren’t easy to calculate from theory alone, and so computational work has to be done to understand them. Tracking tracer particles can reveal perturbations at points within a field, and the behaviour of these particles within a solid elucidates particular points of mechanical interest. In biological environments there is also the possibility of directly tracking the objects under investigation, such as bacteria.
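The idea of following a tracer through a field can be sketched in a few lines. This is a toy example, assuming a simple analytic vortex field and forward-Euler time stepping, rather than any particular simulation package:

```python
import math

def velocity(x, y):
    # A toy analytic vector field: rigid-body rotation about the origin.
    # A real velocity field would come from measurement or simulation.
    return -y, x

def advect(x, y, dt, steps):
    # Forward-Euler tracking of a passive tracer: at each step, read the
    # field at the particle's position and move it along that arrow.
    for _ in range(steps):
        vx, vy = velocity(x, y)
        x += vx * dt
        y += vy * dt
    return x, y

# Release a tracer at (1, 0) and follow it for one time unit.
x, y = advect(1.0, 0.0, dt=0.001, steps=1000)
print(x, y)  # the tracer has swept out about one radian around the origin
```

Recording the tracer's path like this is the computational analogue of watching dye or bacteria move through a flow: the trajectory reveals the structure of the field without ever writing the field down in closed form.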
Models which predict how the Earth will change under various intensities of global warming need to account for a variety of features. Ocean currents and plankton activity, storm rates and atmospheric flows have all been studied in great detail. The complex models employed to foresee these changes give us reasonably accurate predictions over time scales of 10 to 100 years, the time scale in which changes will have a direct impact on us. It would be of great scientific interest to be able to estimate the impact of current changes on the long-term scale of over 1,000 years, all the way up to millions of years. But there is a problem. Like I said, the current models are very complex, which means that our current computing power imposes a limit on how far these models can be extended into the future.
Game theory is the analysis of competition between individuals. Since the 1950s the value seen in game theory has only grown. The mathematician John Nash produced his famous Nash equilibrium, in which every player settles on an optimal strategy (though not necessarily one that produces optimal results), and equilibria have since been a staple of the analysis of games. In a way this can be a bit limiting. In small games with only a few players, equilibria are normally quite easy to find and are quite useful. However, in real games like the economy (Nash actually won the Nobel prize in economics for his work), there are many players all interacting with slightly different goals and strategies. Some concepts from smaller games carry over to many-player games, but it’s highly unlikely that any player in these games has worked out the optimal strategy and is heading for equilibrium.
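For a small game, "easy to find" really is easy: an outcome is a pure-strategy Nash equilibrium exactly when neither player can do better by deviating on their own. A minimal sketch, using the classic prisoner's dilemma payoffs as the example:

```python
def pure_nash(payoff_a, payoff_b):
    """Return all pure-strategy Nash equilibria of a 2-player bimatrix game."""
    rows, cols = len(payoff_a), len(payoff_a[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            # Row player cannot gain by switching away from row i...
            best_row = all(payoff_a[i][j] >= payoff_a[k][j] for k in range(rows))
            # ...and column player cannot gain by switching away from column j.
            best_col = all(payoff_b[i][j] >= payoff_b[i][l] for l in range(cols))
            if best_row and best_col:
                equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma: strategy 0 = cooperate, 1 = defect.
a = [[-1, -3], [0, -2]]   # row player's payoffs
b = [[-1, 0], [-3, -2]]   # column player's payoffs
print(pure_nash(a, b))    # → [(1, 1)]: mutual defection, despite (0, 0) being better for both
```

Note how the equilibrium found is not the outcome that produces the best results for the players, exactly the distinction made above. The brute-force search here also shows why large games are hard: the number of outcomes to check grows exponentially with the number of players.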
The Hamiltonian is a concept from (unsurprisingly) Hamiltonian mechanics; in quantum mechanics it is an operator associated with a system. In many cases it corresponds to the total energy of the system, and so I’ll continue to treat it as analogous to the total energy, although this is not always true. Just picture it as an equation with each term representing one of the kinds of energy a system contains. Now, often you want to find the ground state, the lowest possible energy, of the system, as this provides useful information for solving it. But what if this Hamiltonian’s ground state is very complicated, if not impossible, to find algebraically? Well, this is the basis of an adapted form of quantum computing called adiabatic quantum computing.
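For a small enough system you can sidestep the algebra entirely and find the ground state numerically. A minimal sketch, assuming a toy two-level Hamiltonian with made-up numbers (one diagonal energy-splitting term, one off-diagonal coupling term):

```python
import numpy as np

# A toy two-level Hamiltonian (illustrative numbers, not a real system):
# the diagonal entries set the energy splitting, the off-diagonal
# entries couple the two levels.
H = np.array([[1.0, -0.5],
              [-0.5, -1.0]])

# Diagonalising the matrix gives the energy spectrum; eigh returns
# eigenvalues in ascending order, so the first is the ground state.
energies, states = np.linalg.eigh(H)
ground_energy = energies[0]
ground_state = states[:, 0]
print(round(ground_energy, 4))  # → -1.118, i.e. -sqrt(1.25)
```

The catch is scaling: a system of n qubits has a 2^n-by-2^n Hamiltonian, so this brute-force diagonalisation becomes hopeless very quickly. That is exactly the gap adiabatic quantum computing aims to fill, by letting a physical system relax into the ground state instead of computing it.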
There are many occasions in engineering where it is desirable for a material to absorb light. Thermophotovoltaic cells, for instance, want to absorb as much energy as they can. Any reflection off a surface is a loss of energy they could have been using, and so represents an inherent limit on the efficiency of these cells. In a way this is also true for devices that produce heat. Any heat produced by an electric circuit that can be recaptured and reused will drastically increase the circuit’s overall efficiency, as thermal loss is the main source of inefficiency.