Machine learning is the process by which computer programs improve through experience.
This is not science fiction, where robots keep advancing until they take over the world.
When we talk about machine learning, we are mostly talking about very clever algorithms.
In 1950, the mathematician Alan Turing argued that it was a waste of time to ask whether machines could think. Instead, he proposed a game: a judge holds two written conversations, one with another person and one with a machine, and must decide, based on those exchanges alone, which is which.
This “imitation game” would serve as a test of artificial intelligence. But how would we program a machine to play it?
Turing suggested that we teach machines the way we teach children: guide them to follow a set of rules, while letting them make small adjustments based on experience.
For computers, the learning process looks a little different.
First, we need to feed them lots of data: anything from pictures of everyday objects to details of bank transactions.
Then we have to tell the computer how to process that information.
Programmers do this by writing sets of step-by-step instructions, or algorithms, that help the computer recognize patterns in large amounts of data.
Based on the patterns it finds, the computer develops a “model” of how the system works.
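To make that loop concrete, here is a minimal sketch in Python using the open-source scikit-learn library. The toy transaction data, the labels, and the choice of a decision tree are illustrative assumptions, not details from the article.

```python
# Minimal sketch: labeled data in, patterns found, model out.
from sklearn.tree import DecisionTreeClassifier

# Toy data: each row describes a bank transaction as [amount, hour_of_day],
# and the label says whether it was later flagged as fraudulent (1) or not (0).
transactions = [
    [12.50, 14],
    [9.99, 10],
    [2500.00, 3],
    [1800.00, 2],
    [40.00, 18],
    [3200.00, 4],
]
labels = [0, 0, 1, 1, 0, 1]

# "Learning" step: the algorithm looks for patterns that separate the two
# classes and stores them as a model.
model = DecisionTreeClassifier()
model.fit(transactions, labels)

# The model can now make a prediction about a transaction it has never seen.
print(model.predict([[2700.00, 3]]))  # with this toy data, likely [1]: flagged
```

The library and the fraud example are stand-ins; any labeled dataset and any learning algorithm would illustrate the same data-to-model step.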
For example, some programmers are using machine learning to develop medical software. First, they might feed the program hundreds of MRI scans that have already been labeled. Then they have the computer build a model that can classify MRI images it has never seen before. That way, the medical software can spot problems in a patient’s scans or flag certain records for review.
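The sketch below mirrors that workflow under stated assumptions: random arrays stand in for real, de-identified MRI scans, the labels are invented, and the tiny Keras network is for illustration only, not a clinical tool.

```python
# Hedged sketch of the MRI workflow: train on labeled scans,
# then classify a scan the model has never seen before.
import numpy as np
from tensorflow import keras

# Pretend we have 200 labeled 64x64 grayscale scans:
# label 1 = "flag for review", label 0 = "looks normal".
scans = np.random.rand(200, 64, 64, 1).astype("float32")
labels = np.random.randint(0, 2, size=(200,))

model = keras.Sequential([
    keras.layers.Conv2D(8, 3, activation="relu", input_shape=(64, 64, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(1, activation="sigmoid"),  # probability of "flag for review"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Build the model from the labeled examples...
model.fit(scans, labels, epochs=2, verbose=0)

# ...then classify an unseen scan.
new_scan = np.random.rand(1, 64, 64, 1).astype("float32")
print(model.predict(new_scan))  # a value near 1.0 would mean "flag for review"
```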
Complex models like this usually require many hidden computational steps. For structure, programmers organize all of those processing decisions into layers. That is where the term “deep learning” comes from.
These layers loosely mimic the structure of the human brain, where neurons send signals to other neurons. That’s why we also call them “neural networks.”
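Here is a bare-bones sketch of the “layers” idea: each layer takes the signals from the previous one, combines them with weights, and passes the result on, loosely echoing neurons signalling other neurons. The layer sizes and random weights are illustrative assumptions, and no training happens here.

```python
import numpy as np

def layer(inputs, weights, biases):
    """One layer of artificial neurons: weighted sum followed by a nonlinearity."""
    return np.maximum(0, inputs @ weights + biases)  # ReLU activation

rng = np.random.default_rng(0)
x = rng.random(4)  # input signals, e.g. four pixel values

hidden = layer(x, rng.random((4, 3)), rng.random(3))        # hidden layer: 3 "neurons"
output = layer(hidden, rng.random((3, 2)), rng.random(2))   # output layer: 2 "neurons"

print(output)  # the network's raw output before any learning has taken place
```

Stacking more of these layers, and adjusting the weights from data rather than leaving them random, is what turns this toy into a deep neural network.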
Neural networks underpin services we use every day, such as digital voice assistants and online translation tools. Over time, they get better at listening and responding to the information we give them, which makes those services more and more accurate.
Machine learning is not something locked away in academic labs, though. Many machine learning algorithms are open source and widely available, and they are already used in things that affect our lives, both large and small.
People have used these open source tools to do everything from training pets to creating experimental art to monitoring wildfires.
They have also been used for things that are morally questionable, such as creating deepfake videos with deep learning. And because the data and algorithms machines learn from are produced by error-prone people, they can contain bias. An algorithm can carry its makers’ biases into the model, exacerbating problems such as racism and sexism.
But the technology shows no sign of stopping. People keep finding more complex applications for it, some of which will automate things we are used to doing ourselves, like using neural networks to help run driverless cars. Given the complexity of those tasks, some of these applications require equally sophisticated algorithmic tools.
However far that goes, these systems still have a lot to learn.