In our introductory machine learning article, we touched upon the fact that machine learning is a branch of artificial intelligence. In this article, we will go more in-depth on what artificial intelligence is and where it originated. However, it will not be a tech-heavy read. If you would like to gain a more concrete understanding of artificial intelligence, feel free to check out the other articles in the Cybiant Knowledge Centre.
Artificial intelligence techniques have developed rapidly in the 21st century as technology has become more advanced and our capabilities have grown along with it.
What is AI?
Artificial intelligence, sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans. It is a wide-ranging branch of computer science concerned with constructing machines capable of performing tasks that naturally require human intelligence. Artificial Intelligence is an interdisciplinary science with multiple methods. The term is often used to describe machines or computers that can replicate cognitive functions that humans associate with the mind, such as learning and problem solving.
Movies and science fiction books often depict artificial intelligence as humanoid robots that conquer the world. However, the current evolution of AI technologies is nowhere near that intelligent. AI has instead evolved to provide specific, task-oriented benefits across many industries.
Since the 1950s, scientists have argued over what qualifies as “thinking” and “intelligence” when it comes to machines. Alan Turing is generally credited with coming up with the idea that “thinking machines” could reason at the level of a human being.
The ambitious goal of artificial intelligence has given rise to many debates and questions, so much so that no single definition of artificial intelligence has been universally accepted. The major difficulty in defining AI is first determining what artificial intelligence actually is, and what makes a machine intelligent.
“Our intelligence is what makes us human, and AI is an extension of that quality.” – Yann LeCun
Brief history of Artificial Intelligence
Figure 1: Progress of Artificial Intelligence over the years
The first half of the 20th century saw the growing popularity of artificially intelligent robots in science fiction. It may have begun with the “heartless” Tin Man from The Wizard of Oz, or perhaps much earlier with Talos, a giant automaton made of bronze in Greek mythology, created to protect Europa in Crete from pirates and invaders.
By the 1950s, there was a generation of scientists, mathematicians, philosophers and other like-minded individuals who contributed to the foundation of machine science, people for whom the concept of artificial intelligence was culturally ingrained. One of the most famous was Alan Turing, a young British mathematician who suggested the possibility of “thinking” machines. He proposed that since people use available information as well as reason to solve problems and make decisions, machines should be able to do the same.
In 1950, Alan Turing published a paper, Computing Machinery and Intelligence, in which he discussed the various possibilities of how to build intelligent machines and how to measure their intelligence.
Between the 1950s and the mid-1970s, AI advanced quickly in development and research. During this period, computers became more capable: they could store more information, run on faster processors, and became cheaper to own. Machine learning algorithms also improved, which led to better problem-solving capabilities. The advocacy of leading researchers motivated governments to increase funding in artificial intelligence, as they were keen on developing machines that could transcribe and translate spoken language. High-throughput data processing also became a priority, as the digitalisation of this period meant that data handling was increasingly reliant on computers.
Researchers were optimistic that the development of artificial intelligence would be exponential. However, there were still many years to go before natural language processing, image recognition and abstract thinking could be achieved.
Even though government funding for AI research dwindled by the 1990s, research still flourished. Many milestones in artificial intelligence were achieved in the 90s and early 2000s. In 1997, IBM’s Deep Blue, a chess-playing computer program, defeated world chess champion Garry Kasparov. This was the first time that a world chess champion lost to a computer, and it proved to be a massive milestone for AI researchers and a big step towards artificially intelligent decision-making programs. The same year, Dragon Systems developed speech recognition software that was implemented on Windows.
With the development of the Internet and the World Wide Web came the rapid emergence of Big Data. We now have the capacity to collect massive amounts of information that would take a human several hundred years to process manually. The application of AI in data analytics and processing would greatly alter the way mankind uses information. Breakthroughs in computer science and other areas would occur as a result of being able to automatically extract valuable information from billions of data points.
The state of AI in 2019 is rather remarkable. The technology is advancing faster than ever, and is being integrated into industries such as healthcare, manufacturing, autonomous vehicles and online shopping, to name a few. Artificial intelligence is already making a lot of decisions for us, whether we recognise it or not. In our article on Deep Learning, we discussed how online shopping websites and platforms like Netflix use forms of artificial intelligence to improve their service and increase customer retention.
Several start-ups are leading the way in artificial intelligence research. One notable company is Boston Dynamics, a world leader in mobile robots. Their robots tackle some of the biggest challenges in AI. With a dedicated team of engineers and scientists who combine analytical thinking with bold engineering, they are able to create some of the most advanced robots today.
Basics of Artificial Intelligence
The biggest challenge of artificial intelligence is that machine systems cannot explain their thinking in the way that humans do. They do not possess the common sense that humans have. A good image recognition algorithm could identify an animal from a picture, but it would not be able to explain the difference between a cat and a dog unless it was pre-fed with that information. A computer only knows as much as the information you feed it. However, some machine learning algorithms are designed so that computers can form their own decisions and reasoning based on historical data and patterns.
Take the example of an image recognition program designed to recognise dogs in pictures. A common approach would be to program explicit rules, such as: dogs have floppy ears, fur, big eyes, and so on. But if you then fed it an image of something that merely resembled a dog, like a hyena, what would the program do? A better approach is to let the machine learn for itself what is and is not a dog by feeding it countless images of dogs. Over time it would find patterns in these pictures, and its accuracy would improve substantially.
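The idea above, learning patterns from examples rather than hand-writing rules, can be sketched in a few lines of code. This is a minimal, illustrative toy: the feature vectors (ear floppiness, fur length, eye size) and their values are entirely hypothetical stand-ins for what a real system would extract from pixels, and the nearest-centroid rule is one of the simplest possible learning methods, not what production image recognition uses.

```python
# Toy sketch: "learning" what a dog is from labelled examples instead of
# hand-written if/else rules. Features are hypothetical, scaled 0-1:
# (ear_floppiness, fur_length, eye_size). Label 1 = dog, 0 = not a dog.

examples = [
    ((0.9, 0.8, 0.7), 1),   # dog
    ((0.8, 0.9, 0.6), 1),   # dog
    ((0.2, 0.3, 0.4), 0),   # not a dog
    ((0.3, 0.2, 0.5), 0),   # not a dog
]

def centroid(points):
    """Average the feature vectors of one class -- its learned 'pattern'."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

dog_pattern = centroid([x for x, label in examples if label == 1])
other_pattern = centroid([x for x, label in examples if label == 0])

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def classify(features):
    """Label a new example by whichever learned pattern it sits closer to."""
    if distance(features, dog_pattern) < distance(features, other_pattern):
        return "dog"
    return "not a dog"

# A hyena-like input the program never saw: it is judged against the
# patterns learned from data, not against any hand-coded rule.
print(classify((0.7, 0.85, 0.65)))
```

With more (and more varied) examples, the learned patterns would become more robust, which is exactly the "accuracy improves over time" behaviour described above.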
Figure 2: Elements of artificial intelligence
Artificial intelligence generally falls under two main categories:
- Narrow AI: Sometimes referred to as “Weak AI,” this kind of artificial intelligence operates within a limited context and is a simulation of human intelligence. Narrow artificial intelligence is often focused on performing specific, singular tasks extremely well. While these machines may seem to possess intelligence, they are functioning under far more limitations than even the most basic human intelligence.
A few examples of Narrow AI include:
– Google search
– Image recognition
– Personal digital assistants (Siri, Alexa)
– Autonomous vehicles
– Google’s DeepMind
- Artificial General Intelligence (AGI): AGI, sometimes referred to as “Strong AI,” is the kind of artificial intelligence we see in science fiction, like the robots from Star Wars or the supercomputers in The Terminator. AGI is a machine with general intelligence and, much like a human, it can apply that intelligence to solve any problem.
Teaching machines to learn for themselves is a favourable outcome of artificial intelligence research, as it would save countless hours of writing, programming and testing code.
It should be remembered that artificial intelligence is a holistic scientific field that branches into more specific fields such as machine learning, deep learning and neural networks. All three of these AI concepts can enable machines to “think” and act dynamically, outside the limitations of code. Understanding these basics can lead to more advanced AI topics, including artificial general intelligence, super-intelligence, as well as ethics in AI.
If you are interested in learning more about these concepts, check out our other topics in machine learning and artificial intelligence in the Cybiant Knowledge Centre.