The first half of the 20th century saw the growing popularity of artificially intelligent robots in science fiction. It may have begun with the "heartless" Tin Man from The Wizard of Oz, or perhaps even earlier with Talos, the giant bronze automaton of Greek mythology created to protect Europa in Crete from pirates and invaders.
By the 1950s, a generation of scientists, mathematicians, philosophers and other like-minded individuals had laid the foundations of machine intelligence, with the concept of artificial intelligence already culturally assimilated in their minds. One of the most famous was Alan Turing, a young British mathematician who explored the possibility of "thinking" machines. He reasoned that since people use available information, together with reason, to solve problems and make decisions, machines should be able to do the same.
In 1950, Turing published a paper, "Computing Machinery and Intelligence", in which he discussed how intelligent machines might be built and how their intelligence could be measured.
Between the 1950s and the mid-1970s, AI research and development advanced quickly. Computers could store more information, ran on faster processors, and became cheaper to own. Machine learning algorithms also improved, giving computers better problem-solving capabilities. The advocacy of leading researchers motivated governments to increase funding for artificial intelligence, as they were keen on developing machines that could transcribe and translate spoken language. High-throughput data processing also became a priority, as the digitalisation of this period made data handling increasingly reliant on computers.
Researchers were optimistic that the development of artificial intelligence would be exponential. However, natural language processing, image recognition and abstract thinking were still many years away.
Even though government funding for AI research had dwindled by the 1990s, research still flourished, and many milestones were achieved in the 1990s and early 2000s. In 1997, IBM's Deep Blue, a chess-playing computer, defeated reigning world chess champion Garry Kasparov. It was the first time a world chess champion had lost to a computer, a massive milestone for AI researchers and a big step towards artificially intelligent decision-making programs. That same year, Dragon Systems developed speech recognition software that ran on Windows.
With the development of the Internet and the World Wide Web came the rapid rise of Big Data. We now have the capacity to collect amounts of information that would take a human several hundred years to process manually. The application of AI to data analytics and processing would greatly alter the way mankind uses information, and breakthroughs in computer science and other fields would follow from being able to automatically extract valuable insights from billions of data points.
The state of AI in 2019 is remarkable. The technology is advancing faster than ever and is being integrated into fields such as healthcare, manufacturing, autonomous vehicles and online shopping, to name just a few. Artificial intelligence already makes many decisions for us, whether we recognise it or not. In our article on Deep Learning, we discussed how online shopping platforms and services like Netflix use forms of artificial intelligence to improve their service and increase customer retention.
Several companies are leading the way in artificial intelligence research. One notable example is Boston Dynamics, a world leader in mobile robots whose machines tackle some of the biggest challenges in AI. With a dedicated team of engineers and scientists who combine analytical thinking with bold engineering, they create some of the most advanced robots in existence today.