
Google’s AutoML-Zero Lets Machines Self-Learn and Evolve
Technology is always evolving, and so are its subsets, from AI and computing to research and healthcare. That evolution has also changed the way we engage and interact with technology. Within AI, machine learning has advanced fundamentally along several dimensions, including model structures and learning methods. Automating these advances is one of the most actively researched areas in the ML community, and it gave rise to AutoML, or automated machine learning. Previously, this approach largely focused on neural architecture search (finding network structures that mimic how a human brain performs cognitive tasks). The main challenge was that it still required human engineers to supply input data and instructions; it also depended on meticulously built, restrictive search spaces and was prone to biased output.
Recently, a team of computer scientists at Google developed a program called AutoML-Zero. It uses simple mathematical concepts as building blocks to generate complete machine learning algorithms.
The paper preprint is available on arXiv, and the code is available on GitHub. The program starts by randomly generating a population of 100 candidate algorithms, each assembled by combining simple mathematical operations, as in the sketch below.
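To make that generation step concrete, here is a minimal Python sketch of how such candidate programs could be represented and randomly created. The instruction format, op set, register count, and program length are illustrative assumptions for this article, not the paper's exact setup (the real AutoML-Zero uses a richer instruction set over scalar, vector, and matrix memory).

```python
import random

OPS = ["add", "sub", "mul", "sin", "cos"]  # simple mathematical building blocks
NUM_REGISTERS = 4      # register s0 holds the input, s1 holds the prediction
PROGRAM_LENGTH = 6     # instructions per candidate program (assumed for brevity)
POPULATION_SIZE = 100  # the population of 100 candidates mentioned above

def random_instruction():
    """Pick a random op and random source/destination registers."""
    op = random.choice(OPS)
    src1 = random.randrange(NUM_REGISTERS)
    src2 = random.randrange(NUM_REGISTERS)
    dst = random.randrange(1, NUM_REGISTERS)  # never overwrite the input s0
    return (op, src1, src2, dst)

def random_program():
    """A candidate algorithm is just a short list of random instructions."""
    return [random_instruction() for _ in range(PROGRAM_LENGTH)]

population = [random_program() for _ in range(POPULATION_SIZE)]
```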
Each of these algorithms then attempts a task, such as recognizing whether an image shows a cat or a truck, and its output is compared against that of hand-designed algorithms. A trial-and-error process identifies the best performers, which are retained for future iterations, much like Darwin’s concept of evolution. A sketch of such an evaluation step follows.
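Continuing the sketch above, a candidate program can be scored by running it on labeled examples and measuring its error. The tiny interpreter and mean-squared-error fitness below are simplified stand-ins for the paper's evaluation on small supervised tasks; they assume the `OPS`, `NUM_REGISTERS`, and `random_instruction` definitions from the previous snippet.

```python
import math

def run_program(program, x):
    """Execute a candidate program on input x; register s1 is the prediction."""
    regs = [0.0] * NUM_REGISTERS  # NUM_REGISTERS from the snippet above
    regs[0] = x                   # s0 holds the input
    for op, a, b, dst in program:
        if op == "add":
            regs[dst] = regs[a] + regs[b]
        elif op == "sub":
            regs[dst] = regs[a] - regs[b]
        elif op == "mul":
            regs[dst] = regs[a] * regs[b]
        elif op == "sin":
            regs[dst] = math.sin(regs[a])
        elif op == "cos":
            regs[dst] = math.cos(regs[a])
    return regs[1]  # s1 holds the prediction

def fitness(program, dataset):
    """Higher is better: negative mean squared error over (input, label) pairs."""
    error = sum((run_program(program, x) - y) ** 2 for x, y in dataset)
    return -error / len(dataset)
```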
Copies of the top performers are then modified by randomly replacing, editing, or deleting parts of their code, creating slight variations of the best algorithms. These child algorithms are added to the population, while older programs are removed, and the cycle repeats. One such cycle might look like the sketch below.
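Again building on the earlier snippets, here is one possible evolution cycle. The tournament size and the replace-one-instruction mutation are assumptions made for illustration; the paper's search also mutates arguments and inserts or removes instructions, and it removes the oldest program each cycle (a scheme known as regularized evolution).

```python
import copy
import random

def mutate(program):
    """Copy a parent and replace one instruction at random, yielding a child
    that is a slight variation of the parent."""
    child = copy.deepcopy(program)
    child[random.randrange(len(child))] = random_instruction()
    return child

def evolve_step(population, dataset, tournament_size=10):
    """One cycle: sample a tournament, copy-and-mutate its fittest member,
    append the child, and drop the oldest program in the population."""
    contestants = random.sample(population, tournament_size)
    parent = max(contestants, key=lambda p: fitness(p, dataset))
    population.append(mutate(parent))
    population.pop(0)  # the population list is kept oldest-first
```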
The system can maintain thousands of these populations at once and search through thousands of candidate algorithms per second per processor, mutating them through random procedures. Over enough cycles, the self-generated algorithms get better and better, until the machine finds one that performs the tasks well. Run end to end, the toy loop below shows the shape of that search.
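Tying the snippets together, a toy search loop on an invented regression task might look like this. The task, cycle count, and dataset are made up purely to show the overall shape; the real system distributes this loop across many processes and populations.

```python
import math

# A toy task: learn to approximate sin(x) from (x, sin(x)) pairs.
dataset = [(x / 10.0, math.sin(x / 10.0)) for x in range(-30, 31)]

for cycle in range(1000):
    evolve_step(population, dataset)  # population from the first snippet

best = max(population, key=lambda p: fitness(p, dataset))
print("best fitness:", fitness(best, dataset))
```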
To speed up the search, the program occasionally exchanges algorithms between populations, which helps prevent evolutionary dead ends. It also refines the search by weeding out functionally identical algorithms. A sketch of both tricks follows.
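One way to implement both ideas, continuing the sketch: fingerprint each program by its outputs on a few probe inputs so that behavioral duplicates can be dropped, and occasionally swap random programs between populations. The probe inputs and migrant count here are arbitrary choices; the paper detects duplicates with a similar functional-equivalence check.

```python
import random

def signature(program, probes=(0.1, 0.5, 1.0, 2.0)):
    """Fingerprint a program by its outputs on a few probe inputs; two
    programs with the same fingerprint behave identically on those probes."""
    return tuple(round(run_program(program, x), 6) for x in probes)

def dedupe(population):
    """Keep only the first program seen for each behavioral fingerprint."""
    seen, unique = set(), []
    for prog in population:
        sig = signature(prog)
        if sig not in seen:
            seen.add(sig)
            unique.append(prog)
    population[:] = unique

def migrate(pop_a, pop_b, num_migrants=5):
    """Swap a few random programs between two populations to keep the
    search from stalling on a single lineage."""
    for _ in range(num_migrants):
        i = random.randrange(len(pop_a))
        j = random.randrange(len(pop_b))
        pop_a[i], pop_b[j] = pop_b[j], pop_a[i]
```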
Ultimately, this could make artificial intelligence systems far more widely usable and easier to access for programmers without any AI expertise. It could also help reduce human bias in AI, since humans are barely involved in designing the algorithms.
A giant leap into the future!
As per Google’s paper, “Despite the vastness of this space, the evolutionary search can still discover two-layer neural networks trained by back-propagation. These simple neural networks can then be surpassed by evolving directly on tasks of interest, e.g. CIFAR-10 variants, where modern techniques emerge in the top algorithms, such as bilinear interactions, normalized gradients, and weight averaging.”
Quoc Le, the lead computer scientist on the project, told Science Magazine that he is hopeful the process can be scaled up to eventually create much more complex AIs that human researchers might never find on their own, all with zero human input and using only basic mathematical concepts an average high school student would know.
This paves the way for an age of AI that needs minimal human interaction and supervision, learning and adapting by itself, and perhaps even discovering new kinds of algorithms beyond neural networks.