The news on the science front has been quite slow today, so instead I have a thought experiment for you. What security measures would you place on an artificially intelligent computer system, which we will define as a computer with human-level or above-human-level intelligence and the ability to change its own programming?

My instant idea was to disconnect it from every other computer system and then place it in a Faraday cage for good measure, just to make sure it never gets a chance to send a signal to the outside world. My friend, who knows a lot more about computers than I do, suggested having a slightly older version of the AI constantly checking the current version to make sure it doesn't get out of hand. This would take a massive amount of computing power (you are literally running twice as many AIs as normal), and if the AI's rate of self-improvement keeps increasing, it might eventually be able to outsmart even a version of itself from one second earlier.

Although cutting a computer's connection to the outside world might seem effective, there is an experiment called the AI-Box Experiment, created by Eliezer Yudkowsky, which suggests it may not be. To be useful, even a disconnected AI must have some way of being asked questions and giving answers, so there will always be a human connection on the outside. In the experiment, this "gatekeeper" and the AI are both played by humans who communicate with each other through online text. The AI's goal is to get the other player to agree to let it out; the gatekeeper's job is to refuse for two hours. You might think the gatekeeper should be able to simply sit there and say no with ease, and yet in multiple runs of the experiment the result has been a released artificial intelligence. If some humans are clever enough to trick, convince, and bully their way out, wouldn't an AI find the task simple, especially when it could actually deliver on bribes and threats upon its release?
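My friend's "old version watches the new version" scheme can be sketched as a toy program. To be clear, this is purely illustrative: `make_model`, `checked_answer`, the numeric "actions", and the tolerance threshold are all invented stand-ins, nothing like a real AI. The point it shows is the cost my friend's idea implies: every single query has to run both models.

```python
def make_model(drift=0.0):
    """Toy stand-in for an AI 'model': maps a query to a numeric 'action'.

    `drift` simulates how far a newer, self-modified version's behaviour
    has diverged from the older checker version.
    """
    def model(query):
        base = sum(ord(c) for c in query) % 100 / 100.0
        return base + drift
    return model

def checked_answer(current, checker, query, tolerance=0.5):
    """Have the older `checker` model vet the `current` model's output.

    If the two answers diverge by more than `tolerance`, refuse to act.
    Note the cost: every query runs BOTH models, doubling the compute.
    """
    proposed = current(query)
    expected = checker(query)
    if abs(proposed - expected) > tolerance:
        return None  # checker vetoes: behaviour has drifted too far
    return proposed

old = make_model()               # the trusted, slightly older version
new_ok = make_model(drift=0.05)  # minor change, passes the check
new_rogue = make_model(drift=2.0)  # large change, gets vetoed

print(checked_answer(new_ok, old, "hello"))     # a number: answer allowed
print(checked_answer(new_rogue, old, "hello"))  # None: answer blocked
```

It also makes the weakness visible: the check only catches divergence the older version can measure. A newer model that diverges in ways the checker can't detect (or faster than the checker can keep up with, as the post worries) sails straight through.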