The Monte Carlo method is an interesting example in physics of something being given a very good name. The names that academics select usually fall into one of two categories: incomprehensibly dense jargon or the mundanely blunt. The Monte Carlo method, however, has a genuinely creative name that ties neatly into the theory behind it.
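The classic illustration of the method is estimating π by random sampling. A minimal sketch (the function name and sample count are my own choices, not from the post): throw random points at the unit square and count the fraction that land inside the quarter circle.

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by uniform sampling in the unit square and
    counting the fraction of points inside the quarter circle."""
    rng = random.Random(seed)  # seeded so the run is repeatable
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # The quarter circle has area pi/4, the square has area 1.
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))  # typically within a few hundredths of pi
```

The estimate converges slowly (the error shrinks as 1/√n), but the approach works for integrals where nothing analytical is available, which is the method's whole appeal.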
Vectors can be imagined as arrows (we won't need the more formal definition of a vector here). A vector field is therefore a collection of these arrows, one at every point in space. Vector fields are a very useful way to describe aspects of nature: velocity fields describe the motion of a medium at every point; stress fields describe the tensions within materials; and force fields might represent electromagnetic or gravitational forces. The problem is that many vector fields are not easy to calculate from theory alone, and so computational work has to be done to understand them. Tracking tracer particles can reveal perturbations at points within a field, and the behaviour of these particles within a solid can highlight points of particular mechanical interest. In biological settings there is also the possibility of directly tracking the objects under investigation, such as bacteria.
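As a toy version of the tracer-particle idea, the sketch below (the field, step size, and step count are invented for illustration) advects a single particle through a simple rotational velocity field using forward Euler steps and records its path.

```python
def velocity(x, y):
    """A hypothetical 2D velocity field: rigid rotation about the origin,
    so a tracer should sweep out a circle."""
    return -y, x

def trace(x, y, dt=0.01, steps=628):
    """Advect a tracer particle through the field with forward Euler steps,
    returning the list of positions it visits."""
    path = [(x, y)]
    for _ in range(steps):
        vx, vy = velocity(x, y)
        x, y = x + vx * dt, y + vy * dt
        path.append((x, y))
    return path

path = trace(1.0, 0.0)  # 628 steps of 0.01 is roughly one full orbit
```

With these numbers the particle returns close to its starting point after one orbit, though forward Euler makes it drift slightly outward; a real simulation would use a higher-order integrator for exactly that reason.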
As bacterial colonies grow they eventually become a fixed appendage to whatever surface they're developing on. A post I wrote two weeks ago covers the shape that colonies take as they form and how it relates to individual surface forces. These sessile communities of microbes are the infamous biofilms which plague so many filtration systems and pieces of medical equipment. These surface colonies are interesting not just because of their damaging and infectious nature but also because there is a distinct possibility that they'll play a role in the biocatalysts of the future.
Game theory is the analysis of competition between individuals. Since the 1950s the value seen in game theory has only grown. The mathematician John Nash introduced his famous Nash equilibrium, a state in which every player settles on a strategy that is optimal given the others' choices (though not necessarily one that produces the best overall outcome), and equilibria have been a staple of game analysis ever since. In a way this can be a bit limiting. In small games with only a few players, equilibria are usually quite easy to find and quite useful. However, in real games, like the economy (Nash actually won the Nobel prize for economics for his work), there are many players all interacting with slightly different goals and strategies. Some concepts from smaller games can be carried up to many-player games, but it's highly unlikely that any player in these games has worked out the optimal strategy and is heading for equilibrium.
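For a small game the equilibrium hunt really is mechanical. The sketch below (the payoff numbers are the standard illustrative prisoner's-dilemma values, not taken from the post) checks every strategy pair of a two-player game and keeps the pairs where neither player can gain by changing only their own move, which is exactly Nash's condition.

```python
# Payoff tables for a prisoner's dilemma (illustrative numbers):
# rows = player 1's move, columns = player 2's move; 0 = cooperate, 1 = defect.
P1 = [[3, 0],
      [5, 1]]
P2 = [[3, 5],
      [0, 1]]

def is_nash(a, b):
    """A strategy pair is a Nash equilibrium if neither player can
    improve their own payoff by unilaterally switching moves."""
    p1_ok = all(P1[a][b] >= P1[alt][b] for alt in (0, 1))
    p2_ok = all(P2[a][b] >= P2[a][alt] for alt in (0, 1))
    return p1_ok and p2_ok

equilibria = [(a, b) for a in (0, 1) for b in (0, 1) if is_nash(a, b)]
print(equilibria)  # -> [(1, 1)]: mutual defection, the textbook result
```

Note that the equilibrium found, mutual defection, pays each player less than mutual cooperation would; this is the sense in which an equilibrium is optimal for each player individually without producing the best overall result.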
The Hamiltonian is a concept from (unsurprisingly) Hamiltonian mechanics; in quantum mechanics it is the operator associated with a system's energy. In many cases it corresponds to the total energy of the system, so I'll continue to treat it as analogous to the total energy, although this is not always true. Just picture it as an equation with each term representing one of the kinds of energy the system contains. Now often you want to find the ground state, the lowest possible energy of the system, as this provides useful information for solving it. But what if this Hamiltonian's ground state is very complicated, if not impossible, to find algebraically? Well, this is the basis of an adapted form of quantum computing called adiabatic quantum computing.
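For a Hamiltonian small enough to write down as a matrix, the ground-state energy is just its lowest eigenvalue. A minimal sketch, assuming a two-level system with illustrative parameters (the matrix and numbers are my own example, not from the post):

```python
import math

def ground_state_energy(eps, delta):
    """Lowest eigenvalue of the 2x2 Hamiltonian
        H = [[ eps,  -delta],
             [-delta, -eps  ]]
    (a generic two-level system). For a symmetric 2x2 matrix the
    eigenvalues come from the trace and determinant:
        lambda = (tr +/- sqrt(tr^2 - 4*det)) / 2
    """
    tr = eps + (-eps)                  # trace is zero for this H
    det = -eps * eps - delta * delta   # determinant
    disc = math.sqrt(tr * tr - 4.0 * det)
    return (tr - disc) / 2.0           # minus branch = ground state

print(ground_state_energy(1.0, 0.5))  # -> -sqrt(1.25), about -1.118
```

The catch motivating adiabatic quantum computing is that for a system of n interacting spins the matrix is 2^n by 2^n, so this direct diagonalisation becomes hopeless very quickly.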
In 1975 Gordon Moore, co-founder of Intel, made the prediction that the number of transistors on an integrated circuit chip would double every two years for at least the next decade. This is Moore's law, and it neatly captures how rapidly computing was progressing at the time and, amazingly, still is progressing. Moore's law has actually held since 1975 and is used as a target by industry. It's quite incredible to think that the number of transistors we can now jam onto a chip is past ten thousand million (10,000,000,000 – ten billion for those using the short scale), and that in two years' time we will have managed to fit that many on again.
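The doubling itself is simple exponential arithmetic; a quick sketch (the starting count and time horizon are illustrative, not Moore's figures):

```python
def transistor_count(start_count, years, doubling_period=2.0):
    """Projected transistor count under Moore's law: the count
    doubles once every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# Starting from ten billion transistors, two years adds another ten billion:
print(transistor_count(10_000_000_000, 2))   # -> 20000000000.0
# And twenty years of doubling every two years is a factor of 2**10 = 1024:
print(transistor_count(10_000_000_000, 20))  # -> a thousandfold increase
```

The second call is the striking one: sustained doubling means a decade of progress is not ten steps forward but a factor of thirty-two.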
I have a friend who, for his PhD, booked his university's central computer for a solid month in order to model every ion in the Sun. From what I gathered it was every nuclear interaction across roughly 10^56 nuclei. After a month of chugging away, the computer had managed to model the entire Sun for a duration of about 10^-18 seconds. This was considered very impressive, and as a result he's now a Dr (not the kind that helps people). Hopefully this anecdote shows how incredibly powerful modern computation is, but also how incredibly complicated (or perhaps just big) the things being modelled are.