The Hamiltonian is (unsurprisingly) a concept from Hamiltonian mechanics which represents an operator acting on a system. In many cases it corresponds to the total energy of the system, and so I’ll continue to use it as analogous to the total energy, although this is not always true. Just picture it as an equation in which each term represents one of the kinds of energy the system contains. Now, often you want to find the ground state, the lowest possible energy of the system, as this provides useful information for solving it. But what if this Hamiltonian’s ground state is very complicated, if not impossible, to find algebraically? Well, this is the basis of an adapted form of quantum computing called adiabatic quantum computing.
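For a small enough system, "finding the ground state" is just finding the lowest eigenvalue of a matrix. A minimal sketch, using an invented toy Hamiltonian (a single spin in a transverse field, with all values in arbitrary units):

```python
import numpy as np

# Toy two-level Hamiltonian: H = -h * sigma_x, with h = 1 (arbitrary units).
# This example Hamiltonian is an illustrative assumption, not from the post.
sigma_x = np.array([[0.0, 1.0],
                    [1.0, 0.0]])
H = -1.0 * sigma_x

# eigh returns eigenvalues in ascending order, so index 0 is the ground state.
energies, states = np.linalg.eigh(H)
ground_energy = energies[0]
ground_state = states[:, 0]

print(ground_energy)                  # -1.0 (to floating-point precision)
print(np.round(ground_state**2, 3))  # [0.5 0.5]: equal weight on both spin states
```

For anything beyond a handful of particles the matrix grows exponentially, which is exactly why the algebraic (or brute-force numerical) route fails and an adiabatic approach becomes attractive.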
In 1975 Gordon Moore, cofounder of Intel, predicted that the number of transistors on an integrated circuit chip would double every two years for at least the next decade. This is Moore’s law, and it represents how rapidly computer science was progressing at the time and, amazingly, still is progressing. Moore’s law has actually held since 1975 and is used as a target by industry. It’s quite incredible to think that the number of transistors we can now jam onto a circuit is past ten thousand million (10,000,000,000 – ten billion for those using the short scale), and that in two years’ time we will have managed to fit that many on again.
I have a friend who, for a PhD, booked his university’s central computer for a solid month in order to model every ion in the Sun. From what I gathered it was every nuclear interaction across 10⁵⁶ nuclei. After a month of chugging away, the computer had managed to model the entire Sun for a duration of about 10⁻¹⁸ seconds. This was considered very impressive, and as a result he’s now a Dr (not the kind that helps people). Hopefully this anecdote shows how incredibly powerful modern computation is, but also how incredibly complicated (or perhaps just big) the things being modelled are.
Quantum tunnelling is an effect which often comes up when quantum effects are discussed. It is the idea that a particle can move “through” an object. A more technical way of looking at it is that the particle is trapped in an energy well (as seen on the right) where it simply does not have the energy to escape. Yet somehow the particle manages to appear outside the well and fly off. The diagram on the right shows the particles as waves, which wave–particle duality allows us to do, and this is the key to the first explanation. The waves represent the probability of the particle being at each location (more accurately the squared modulus of the wavefunction, but that’s irrelevant here), and so are drawn within the well. Yet what isn’t shown in the diagram is that the waves actually extend outside the well, decaying very rapidly towards nothing. There is, therefore, a low but non-zero chance that the particle will be found there, and so the particle can appear to “hop” through the barrier and end up outside the well, to the amazement of everyone watching.
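That "low but non-zero chance" can be put into numbers with the standard WKB-style estimate for a rectangular barrier, T ≈ exp(−2κL) with κ = √(2m(V−E))/ħ. A minimal sketch; the particle and barrier values below are illustrative assumptions, not from the post:

```python
import math

# Physical constants (SI units).
hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
m_e = 9.109_383_7015e-31   # electron mass, kg
eV = 1.602_176_634e-19     # one electronvolt in joules

# Assumed scenario: an electron with 0.5 eV of energy meets a 1 eV barrier
# that is 1 nm wide -- classically it can never get through.
V, E = 1.0 * eV, 0.5 * eV  # barrier height and particle energy
L = 1e-9                   # barrier width in metres

kappa = math.sqrt(2 * m_e * (V - E)) / hbar  # decay rate of the wave inside the barrier
T = math.exp(-2 * kappa * L)                 # approximate tunnelling probability

print(T)  # small, but decidedly not zero
```

The exponential dependence on width and mass is the punchline: electrons tunnel through nanometre barriers routinely, while anything macroscopic never will.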
Over a year ago I wrote this post, in which I attempted to describe the concept of ghost imaging. I wouldn’t say it was a bad explanation, more that it’s a hard concept to describe without a diagram, and so here one is:
The laser on the left produces a beam of photons. This beam is polarised and split: one beam is sent towards the object that will be imaged and the “bucket detector”, and the other is sent towards a CCD camera. The bucket detector only detects whether a beam is incident on it or not. The CCD camera detects the specific pixel the beam is incident on. So when the beam tracks across both detectors at the same time, the CCD will have every pixel triggered and the bucket will confirm whether the beam reached it or not. The beam will not trigger the bucket when the object is in the way. So a shadow of the object is produced if you only take the CCD readings from when the bucket was also sending a signal.
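The coincidence trick above can be sketched in a few lines: scan a beam over a small grid, record the CCD pixel and whether the bucket fired, and keep only the coincident readings. The 4×4 object mask is an invented example:

```python
import numpy as np

# Invented "object": a 2x2 opaque square in the middle of a 4x4 scene.
object_mask = np.zeros((4, 4), dtype=bool)
object_mask[1:3, 1:3] = True

image = np.zeros((4, 4))
for y in range(4):
    for x in range(4):
        ccd_pixel = (y, x)                    # the CCD always sees where the beam went
        bucket_fired = not object_mask[y, x]  # blocked beam -> no bucket signal
        if bucket_fired:
            image[ccd_pixel] += 1             # keep coincident readings only

# The zeros left in `image` trace out the object's shadow.
print(image)
```

The key point survives the simplification: the detector that actually faces the object has no spatial resolution at all; the picture is assembled entirely from the reference arm.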
Improvements to this process start with the spatial polarisation of the laser (by the barium borate crystal). It is then possible to compare the photons in the imaging beam to those in the reference beam in order to produce the ghost image. However, a direct comparison to the reference beam is unnecessary, as its distribution can be calculated from the phase-dependent patterns produced on the polariser. This is often called computational ghost imaging, and its main disadvantage is that the number of measurements (the sampling rate) needs to be very high for a detailed image to be produced.
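A minimal sketch of the computational version, under simplifying assumptions: the illumination patterns are random speckle fields we know exactly (rather than calculating them from a real modulator), the only measurement is the total transmitted intensity per pattern, and the image is recovered by correlating those bucket values with the known patterns. The test object and pattern count are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented test object: a bright rectangle on a dark background.
obj = np.zeros((8, 8))
obj[2:6, 3:5] = 1.0

# Many known random illumination patterns -- the high sampling rate the
# post mentions is exactly this large number of measurements.
n_patterns = 4000
patterns = rng.random((n_patterns, 8, 8))

# The bucket detector records one number per pattern: total light through the object.
bucket = np.einsum("nij,ij->n", patterns, obj)

# Correlation reconstruction: G(x,y) = <B_i * I_i(x,y)> - <B> * <I(x,y)>.
ghost = (np.einsum("n,nij->ij", bucket, patterns) / n_patterns
         - bucket.mean() * patterns.mean(axis=0))

# Bright pixels in `ghost` line up with the object.
print(np.round(ghost, 2))
```

Cutting `n_patterns` down makes the reconstruction visibly noisier, which is the regime the deep-learning approach below is aimed at.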
This paper suggests a process named Ghost Imaging using Deep Learning (GIDL), which aims to reduce the sampling rate while keeping good image quality in the ghost images produced. Deep learning is a neural-network computing process in which the neural network is very, very big. The network is trained by feeding large amounts of data into it, which allows it to improve its responses. Much like any person, the network begins to notice more and more trends in complex data, and so its predictive capabilities keep rising. This paper shows, in both purely mathematical and optical experiments, that GIDL performs better at producing an image than standard ghost imaging or computational ghost imaging. This effect became very pronounced when the number of measurements relative to the noise in the experiment was decreased. With fewer measurements GIDL is predicted to easily outshine its competitors, and it offers a serious improvement on ghost imaging techniques.
Paper links: Deep-learning-based ghost imaging
Biomechanical modelling is the production of formulae for the motion and form of the human body. An interesting example I can think of off the top of my head is in animation. When an animator wishes to make a character walk, obviously its legs need to move. But even if the motion of the legs is done perfectly, the cartoon may still have something strange about it. It is the realisation that the human leg alters speed during one walk cycle that can solve this problem: removing some frames (which increases the perceived speed) at the correct point will produce a much more realistic walk in the animation. Although this is quite a nontechnical example, it still illustrates that the motions of the body are not obvious even for large movements; it can be imagined how many times more complicated the subtler motions of the face are, and that is the topic of today’s post.
My ability to use computer programs is incredibly limited. I can often make them do roughly what I want (and always what I tell them), but the most infuriating part for me was when I’d (thought I’d) copied a piece of code perfectly from a different program, yet it wasn’t working in its new context. I think this is no doubt a universal feeling: failed imitation leads to frustration. Now for the brain. Everyone knows that the brain can be described as a computer, running algorithms and producing thoughts. But this is a purely external view; it doesn’t take into account what the brain is actually like. The other main model of the brain is the neural network. This matches the physical structure of the brain: a series of many units which work together in a highly dynamic pattern. But no neural network we’ve ever been able to model has produced any higher-level cognitive function.
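The "many simple units" idea can be made concrete with a textbook toy: a hand-wired two-layer network of threshold units that computes XOR, something no single unit can do on its own. The weights here are picked by hand for illustration, not learned:

```python
import numpy as np

def unit(inputs, weights, bias):
    """A single neuron: fire (1) if the weighted input clears the bias, else 0."""
    return int(np.dot(inputs, weights) + bias > 0)

def xor_net(a, b):
    h1 = unit([a, b], [1, 1], -0.5)       # fires if a OR b
    h2 = unit([a, b], [1, 1], -1.5)       # fires if a AND b
    return unit([h1, h2], [1, -2], -0.5)  # OR but not AND -> XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Three trivially simple units, wired together, compute something none of them could alone; the gulf between that and higher-level cognition is the point the paragraph above is making.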