What do the machines think?

AI and human intelligence

Artificial Intelligence (AI) has started to permeate our everyday lives through a diverse range of products and services. From matchmaking and identifying proteins and cells to powering robots and performing predictive analytics, AI is now being developed to automate and improve a wide range of tasks.

Alan Turing famously predicted the future of artificial intelligence in the early 1950s:


It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. They would be able to converse with each other to sharpen their wits. At some stage therefore, we should have to expect the machines to take control.

AI has been defined as any machine that imitates, and attempts to replicate and predict, human intelligence. The Turing Test, otherwise known as the ‘imitation game’, set out to define the conditions under which a computer could be described as having intelligence.


A computer would deserve to be called intelligent if it could deceive a human into believing that it was human

Alan Turing


Around the same time, science fiction writers such as the critically acclaimed Isaac Asimov wrote novels and short stories projecting AI’s possible future. His work has influenced and inspired public opinion and, in turn, scientific research. His so-called ‘Three Laws of Robotics’ were designed to stop our future creations from taking over, a notion still speculated about today across many forms of popular culture. These laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm

  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws

Also in the 1950s, John McCarthy became a key figure. He organised a seminal workshop at Dartmouth College where he, along with others such as Marvin Minsky, famously coined the term ‘artificial intelligence’.


We propose that a 2-month, 10-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

From there, different approaches to developing AI were debated:

  • Top-down (deductive): translate the logic of an existing intelligent system onto a computer, e.g. recreating the rules of language to produce speech and language interpretation systems. 

  • Bottom-up (inductive): create basic elements that can evolve and develop in response to their environment, e.g. genetic algorithms, inspired by biological mechanisms such as mutation and natural selection, that help solve complex problems (a minimal sketch follows this list).
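
As a rough, hypothetical illustration of the bottom-up idea (not drawn from the original debates), the sketch below evolves a population of bit-strings towards a toy objective using selection, crossover and mutation; the genome length, population size and mutation rate are arbitrary choices.

```python
# Minimal genetic algorithm sketch: evolve bit-strings towards "all 1s".
import random

random.seed(0)
GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 40

def fitness(genome):
    return sum(genome)  # toy objective: count the 1s in the genome

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # "Natural selection": keep the fitter half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]

    # Crossover and mutation: breed children from random pairs of survivors.
    children = []
    while len(survivors) + len(children) < POP_SIZE:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, GENOME_LEN)
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:            # occasional mutation
            i = random.randrange(GENOME_LEN)
            child[i] = 1 - child[i]
        children.append(child)
    population = survivors + children

print(max(fitness(g) for g in population))  # typically approaches GENOME_LEN
```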

Following the workshop, Minsky and McCarthy were able to secure substantial funding to pursue the top-down approach. In 1958, McCarthy created LISP, a new programming language designed specifically for artificial intelligence research. It has since been immortalised as ‘God's own programming language’.

The cover of Byte Magazine, August 1979


Despite a subsequent flurry of academic activity and investment, the mood changed in the mid-1970s: AI’s promise of new beginnings was starting to crumble as initial results proved disappointing and limited. So began the first “AI Winter”, a period of reduced funding and interest in AI research. 

But some research continued, and in 1980 Digital Equipment Corporation deployed XCON, an expert system that went beyond previous efforts by demonstrating the practical usefulness of machine intelligence. By 1986 it had proved its ability to generate commercial value, delivering $40 million in annual savings for the company and paving the way for future corporate use of AI.

1986 also marked the inception of Deep Learning (a subset of Machine Learning), thanks to the computer scientists Geoffrey Hinton and Yann LeCun, both known as ‘Godfathers of Deep Learning and AI’ for discoveries that changed the course of Machine Learning. Hinton demonstrated that multi-layer neural networks could be trained using backpropagation (an algorithm that calculates the gradient of the error with respect to the network’s weights, so that gradient descent can update them) to improve word prediction and shape recognition - foundations of Machine Learning. The success of learning from errors prompted a reassessment of how human intelligence should inspire computing: should it rest on symbolic logic or on connected, distributed representations? Hinton went on to coin the term “Deep Learning” in 2006. 

Geoffrey Hinton (left) and Yann LeCun (right)


Whilst AI was entering its second winter, IBM researchers in 1988 took the bottom-up approach further, designing AI systems that estimate the probability of different outcomes from training data, rather than being trained to follow explicit rules. This probabilistic design is often considered a much closer parallel to the cognitive processes of the human brain, and it forms the basis of today’s machine learning.

Following on from Hinton’s discoveries, LeCun produced the first practical demonstration of backpropagation in 1989, combining convolutional neural networks with backpropagation to read handwritten digits - work that eventually led to systems for reading the numbers on handwritten cheques. To this day, backpropagation and gradient descent remain central to the foundations of Deep Learning. 
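
To make the mechanics concrete, here is a minimal sketch, in plain NumPy, of the backpropagation-plus-gradient-descent loop described above: a tiny two-layer network learns the XOR pattern by repeatedly computing the gradient of its error with respect to its weights and nudging them downhill. The task, network size and learning rate are illustrative assumptions, not a reconstruction of Hinton’s or LeCun’s original systems.

```python
# Minimal backpropagation + gradient descent sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a pattern a single-layer network cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A tiny 2-8-1 network with randomly initialised weights.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10_000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    error = out - y

    # Backward pass (backpropagation): gradient of the squared error
    # with respect to every weight and bias, layer by layer.
    d_out = error * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: nudge each parameter against its gradient.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # typically converges towards [[0], [1], [1], [0]]
```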

1991 started a tidal wave of change: the World Wide Web arrived. CERN researcher Tim Berners-Lee put the world’s first website online, along with the hypertext transfer protocol (HTTP). The web catalysed the shift from computers sharing data only within educational institutions and large businesses to society at large plugging into the online world. Millions would soon be generating and sharing data that would fuel AI training at a previously incomprehensible rate.

Fast forward to 1997, after the end of the second AI Winter, when a supercomputer called Deep Blue - capable of sifting through up to 200 million chess positions a second - took on world chess champion Garry Kasparov, beating him so convincingly that he suspected a human was behind the controls; an echo of Turing’s test for machine intelligence. 

Artificial intelligence battling in chess

The 21st century brought new and exciting developments to AI. 2012 was a big milestone because it highlighted the possibilities of deep learning. Stanford and Google created an AI that learned to recognise pictures of cats by processing around 10 million images from YouTube videos during training. Although the published paper highlighted the new ability to build an artificial network containing around 1 billion connections, it conceded that significantly more work was needed to build an “artificial brain” that mimics the human brain - thought to contain around 10 trillion connections. 

DeepMind, founded in 2010, brought scientists, engineers and machine learning experts together with the key goal of accelerating the field of AI. Among its achievements, its feat at the game of Go has become famous. Go, the most challenging of classical games, originated in China over 3,000 years ago and demands layer upon layer of strategic thinking as players use black and white stones to surround and capture their opponent’s. AlphaGo, a computer program that merged an advanced search tree with deep neural networks, takes the Go board as input and processes it through many different network layers. Beating the game’s strongest players meant capturing the intuitive side of play, or coming as close to it as possible, making this a huge step for AI. AlphaGo won the first ever game against a professional Go player in October 2015 and then, in March 2016, was victorious over the legendary Lee Sedol, winner of 18 world titles.


I thought AlphaGo was based on probability calculation and that it was merely a machine. But when I saw this move, I changed my mind. Surely, AlphaGo is creative.
— Lee Sedol

As a result, a further challenge was set: playing against itself, beginning with completely random play so that it would not be limited by human knowledge. Learning only from its own games - in effect from the strongest player in the world, itself - AlphaGo accumulated thousands of years’ worth of human Go knowledge in just a few days and quickly surpassed all previous performances, human and machine alike. 

More recently, in 2020, AI once again made incredible strides. OpenAI, a San Francisco-based AI research lab, created GPT-3 (Generative Pre-trained Transformer 3), the third version of its language model. It is a very large language model that uses deep learning to produce human-like text, generated by algorithms trained on around 570GB of internet text (499 billion tokens, roughly two orders of magnitude more than GPT-2). This means it can answer questions, write essays, translate languages and even create computer code. Although its progress is revolutionary, particularly as it could change the make-up and design of new apps and software, the CEO of OpenAI says it is ‘just an early glimpse’ of AI’s future potential. For reference, training GPT-3 on the equivalent of a single low-cost cloud GPU would take an estimated 355 years and cost around $4.6 million.

This 21st-century resurgence can be attributed to the wider availability of large-scale data for system training, developments in Machine Learning frameworks and tools, and the increased parallel processing power provided by GPUs (Graphics Processing Units), all of which have come together to power the explosion in AI.

AI is becoming one of the most popular technologies used by businesses across the globe. Since the outbreak of COVID-19, 88% of finance and insurance companies, and 76% of those in IT, have accelerated their adoption of automation and AI. Various studies over the past year also suggest that more than 77% of the devices we use incorporate some form of AI. Investment in AI has increased sixfold since 2000, with the global AI market expected to reach $60 billion by 2025. It seems everyone is now jumping on the bandwagon of the future of Artificial Intelligence.


Breaking down AI

Pattern recognition

Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind such as learning and problem-solving.
— Investopedia

From its early creations, the development of AI has clearly been revolutionary, each advance bringing machines a step closer to mirroring the human brain. 


AI is the idea of a level above pattern recognition
— James Leigh, CTO and Co-Founder of Pimloc
The greatest thing about the promise of AI is its ability to universally take anything and transform the way it works; but that is also its current downfall.
— Will Davies, ML Developer at Pimloc

Machine Learning (ML) is a critical element of Artificial Intelligence. An ML algorithm is mathematically driven and directed at a specific task: it searches for patterns within large amounts of data (a dataset) in order to complete that task. Deep Learning, a subset of Machine Learning, tackles such tasks using deep neural networks, with methods that are either supervised (as most ML is) or unsupervised. 

Simply put, ML is pattern recognition. For example, an ML algorithm specialising in visual data and tasked with detecting faces would “learn” which combinations of pixels - the textures of a nose, mouth or eyes - make up a face. 

To break it down further, ML’s “learning” begins with an initial training stage in which the system is shown examples of labelled data to learn from. Once trained, the system can use what it has learned to look for similar patterns in new data, either to detect specific data types or to classify them.
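
As a concrete sketch of this train-then-recognise loop (assuming the widely used scikit-learn library, which is not mentioned above), the snippet below trains a classifier on labelled images of handwritten digits and then asks it to label digits it has never seen.

```python
# Supervised learning sketch: train on labelled examples, then classify new data.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Labelled examples: 8x8 pixel images of digits, each tagged with the digit it shows.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# Training stage: the model learns the pixel patterns associated with each label.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Inference stage: label data the model has never seen before.
predictions = model.predict(X_test)
print(f"accuracy on unseen digits: {accuracy_score(y_test, predictions):.2f}")
```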


A lot of physics is similar in terms of knowing when your approximations are valid or not; you have to know the bounds of when you can take shortcuts - this is important for any kind of machine learning because you don’t want to over-fit your data; you need to make sure it is learning things that are relevant.
— Ryan Kavanagh, ML Developer at Pimloc and Master of Physics at the University of Oxford
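
A minimal illustration of the over-fitting concern Ryan raises, again assuming scikit-learn: an unconstrained model can fit its training data almost perfectly yet score lower on held-out data, and that gap is exactly what practitioners watch for.

```python
# Over-fitting sketch: compare training accuracy with held-out accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# max_depth=None lets the tree grow until it effectively memorises the training set.
for depth in (None, 3):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X_train, y_train)
    print(f"max_depth={depth}: "
          f"train={tree.score(X_train, y_train):.2f}, "
          f"test={tree.score(X_test, y_test):.2f}")
```

Constraining the model (here, by limiting tree depth) usually narrows that gap, which is the practical meaning of knowing the bounds of when you can take shortcuts.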

But do the recent advances in machine learning equate to intelligence?

AI mirroring brain functions

That’s the magic question. Everyone calls it AI when nearly all of it is actually basic pattern recognition; Machine Learning. None of it is AI in the sense of deeply understanding the world, it’s just getting good at recognising and differentiating patterns… AI is the part that comes next, which is still quite a lofty research goal.

Most of ML is what’s called “supervised learning”, which is when a person has gone through and labelled all the data to say: this is object A and object B, which is obviously quite time consuming. What you really want to be able to do is learn unsupervised - without images being labelled. Ultimately, if it could learn more it would be better, but would it really be able to understand it - no.
— James Leigh
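
For a concrete, hypothetical contrast between the two regimes James describes (again using scikit-learn), the supervised model below is given hand-assigned labels to learn from, while the unsupervised one sees only the raw measurements and has to group them itself, with no notion of what each group means.

```python
# Supervised vs unsupervised learning sketch on the same dataset.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Supervised: the hand-assigned labels y drive the learning.
supervised = LogisticRegression(max_iter=1000).fit(X, y)
print(supervised.predict(X[:5]), y[:5])  # predicted labels vs true labels

# Unsupervised: only the unlabelled measurements X are used; the algorithm
# groups similar samples itself, without knowing what each group "means".
unsupervised = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(unsupervised.labels_[:5])  # cluster ids with no attached meaning
```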

Fully unsupervised learning is yet to be convincingly demonstrated. This is not to say that ML is limited as a result; there is still plenty of scope for how far ML can go in helping to solve a wide range of problems. For example, in 2018 researchers at Google AI Healthcare created a learning algorithm called LYNA (Lymph Node Assistant) that was able to identify suspicious regions, indistinguishable to the human eye, that indicated cancerous cells. Its reported accuracy in distinguishing cancerous from non-cancerous tissue was 99%, and it halved the average case review time.

As previously stated, ML solutions “learn” from data: when presented with new data, they find similar patterns; they do not yet have intuition. So the more representative the data, the more accurate they can become. Access to relevant, diverse data within and across target domains is the main limiting factor for improving system performance over time. A machine would need to learn all the possible representations of a system, and the interconnections of the entities within it, in order to solve more generalised problems - a tall task given the complexity and interdependence of general human life. For now, most activity is focused on narrow-domain challenges, where systems can be trained around discrete tasks and datasets.

That being said, the pursuit of general AI continues to progress in leading academic labs and global tech businesses. It is helped by the continual growth in public datasets: the total amount of data created globally was forecast to reach 59 zettabytes (59 trillion gigabytes) in 2020, and to more than double by 2024. ML offers a way to interpret this ever-increasing data, and of course the more accurate an ML solution is at a specific task, the more useful it is for that task - even if it cannot yet “understand” other, different tasks. 


This reliance on data for system training opens up wider questions on data access, protection and biases. 


This is one of the biggest issues in ML right now: how can AI systems, and the data they have been trained on, be regulated? Where should data be made available, how should it be made available, who owns it and is it representative? All these factors play right into the heart of who will be able to build the best solutions, how we can ensure that these systems are fair and not biased, and that their deployment is done responsibly.
— Simon Randall, CEO and Co-Founder of Pimloc

Regulating data access is tricky when not only commercial value but also data privacy is at stake. Training an AI means using lots of data - but who owns that data, and where is it sourced from?


Even though an AI can be 99% accurate, is it trustworthy as a self-functioning, decision making tool, without any human supervision?
— James Leigh

Look at the progress in driverless cars: phenomenal advances have been made in a vehicle’s ability to navigate a road network automatically. But teaching it every possible scenario it might face so that it knows what action to take, or trusting it to do so when someone’s life is at stake, is still proving out of reach - even with the increasing volume of data being captured and money being invested.


We are 90% of the way there. But it is that last bit which is the toughest. Being able reliably to do the right thing every single time, whether it’s raining, snowing, fog, is a bigger challenge than anticipated.
— Professor Nick Reed, a transport consultant who ran UK self-driving trials

Despite these concerns surrounding edge cases, Ryan Kavanagh, one of our ML Developers, is still optimistic:


Would I trust a driverless car more than a person? The answer is yes. In terms of the fringe cases, you’ll experience them in proportion to their probability of occurring - the difference is that a driverless car can learn from its errors, where there is no replay button for a human’s mistakes. I don’t think it will be hard to prove long term that the error rate with cars will be much less than a human’s, and as long as you can prove that the error rate is far less and diminished, I would consider that fully trusting in a societal sense.
— Ryan Kavanagh

Although narrow AI can be very effective, the desire for general, human-like capability currently outstrips realistic expectations of what these systems can achieve. True human intelligence is yet to be replicated within a computer. However, there is no denying that AI is moving the world forward, with new advances constantly being deployed. Whether real machine intelligence will arrive in our lifetime is hard to say - but many leading academics think it is possible, with some still predicting it could happen as early as 2030. What is clear is that the current trend of applying ML for practical and economic reasons is driving discovery and accuracy forward in specific, narrow domains.




With special thanks to the Pimloc team for their contribution and insight for this article.

References:

  1. https://en.wikipedia.org/wiki/Turing_test 

  2. https://www.bbc.co.uk/teach/ai-15-key-moments-in-the-story-of-artificial-intelligence/zh77cqt 

  3. https://en.wikipedia.org/wiki/Isaac_Asimov 

  4. https://en.wikipedia.org/wiki/Lisp_(programming_language) 

  5. https://en.wikipedia.org/wiki/Genetic_algorithm 

  6. https://towardsdatascience.com/what-is-a-gpu-and-do-you-need-one-in-deep-learning-718b9597aa0d 

  7. https://www.forbes.com/sites/cognitiveworld/2019/06/10/how-far-are-we-from-achieving-artificial-general-intelligence/#:~:text=And%20experts%20have%20predicted%20the,singularity%20by%20the%20year%202060 

  8. https://www.theguardian.com/technology/2021/jan/03/peak-hype-driverless-car-revolution-uber-robotaxis-autonomous-vehicle 

  9. https://www.wired.com/insights/2015/03/ai-resurgence-now/ 

  10. https://www.technologyreview.com/2018/11/17/103781/what-is-machine-learning-we-drew-you-another-flowchart/ 

  11. https://deepmind.com/research/case-studies/alphago-the-story-so-far 

  12. https://www.forbes.com/sites/bernardmarr/2018/12/31/the-most-amazing-artificial-intelligence-milestones-so-far/?sh=4df434707753 

  13. https://bernardmarr.com/default.asp?contentID=2108 

  14. https://www.forbes.com/sites/bernardmarr/2020/10/05/what-is-gpt-3-and-why-is-it-revolutionizing-artificial-intelligence/?sh=7bcdf463481a 

  15. https://www.investopedia.com/terms/a/artificial-intelligence-ai.asp 

  16. https://theconversation.com/after-75-years-isaac-asimovs-three-laws-of-robotics-need-updating-74501#:~:text=The%20Three%20Laws&text=They%20are%3A,conflict%20with%20the%20First%20Law 

  17. https://www.mckinsey.com/featured-insights/future-of-work/what-800-executives-envision-for-the-postpandemic-workforce 

  18. https://en.wikipedia.org/wiki/Geoffrey_Hinton 

  19. https://en.wikipedia.org/wiki/Yann_LeCun 

  20. https://builtin.com/artificial-intelligence/deep-learning-history 

  21. https://www.dataversity.net/brief-history-deep-learning/#:~:text=The%20history%20of%20Deep%20Learning,to%20mimic%20the%20thought%20process

Images Accessed, from top to bottom:

  1. Title image: https://www.fundcalibre.com/artificial-intelligence-making-the-most-of-lockdown

  2. Alan Turing: https://www.aplustopper.com/alan-turing-biography/

  3. Byte Magazine: https://archive.org/details/byte-magazine-1979-08

  4. Hinton/LeCun: https://ladieslearningcode.github.io/llc-intro-to-ai-master/slides.html#slide1

  5. Other images: shutterstock
