Achieve Artificial Intelligence With Only Five Lines of Code
Artificial intelligence can be achieved with only five lines of code.
Most artificial-intelligence systems are complicated and lengthy, demanding a great deal of energy and space. But Johannes Overvelde and his team at AMOLF, a government-funded Dutch physics research institute, showed that it doesn’t have to be that way in a new study published this week in the Proceedings of the National Academy of Sciences.
Using only five lines of code, they created a team of robots that work together toward a common goal, each with a single sensor and no ability to communicate with the others. The findings have potential impacts on everything from self-healing materials to medical nanobots. Each of the simple robots can sense its own position and push or pull on an adjacent robot; a disconnected robot pushes and pulls on nothing, so the robots must work together to accomplish their goal.
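The study’s actual code isn’t reproduced here, but the idea of a tiny, memoryless controller can be sketched. In this hypothetical example, each robot senses only its own extension and reverses its actuator at the limits; the names, bounds, and step size are illustrative assumptions, not the team’s implementation.

```python
def controller(length, direction, lo, hi):
    """Five-line sense-act rule (illustrative sketch, not the AMOLF code):
    reverse the actuator at either limit, otherwise keep going."""
    if length >= hi:
        return -1  # fully extended: start contracting
    if length <= lo:
        return +1  # fully contracted: start extending
    return direction  # in between: no change

# Minimal harness: one robot oscillating between its limits.
length, direction = 5, +1
history = []
for _ in range(40):
    direction = controller(length, direction, 0, 10)
    length += direction  # actuate by one unit per tick
    history.append(length)
```

Because the rule has no memory beyond the current direction, the oscillation emerges purely from sensing and reacting, which matches the article’s point that the system carries no model of itself.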
To illustrate the robots’ design philosophy, Overvelde said in an interview, “We aimed for simplicity over complexity. Robustness over optimal behaviour.” Building the robot itself, however, was not the team’s main focus: their goal was to develop the algorithm, and they succeeded. Simple processes can lead to complex behaviours. In a flock of birds, for example, each bird follows simple rules based on its neighbours, and together the flock has a better chance of survival. The robots were designed to be connected like that flock of birds. In a further advancement, the robots can share their current phase with their neighbours, which then weigh that additional input in various ways.
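One common way such phase sharing is modelled, shown here only as a hedged illustration and not as the study’s method, is a weighted coupling rule in the style of the Kuramoto model: each robot nudges its phase toward its neighbours’ phases, scaled by per-neighbour weights. All names and parameters below are assumptions for the sketch.

```python
import math

def update_phase(phi, neighbour_phis, weights, dt=0.1, omega=1.0):
    """Hypothetical phase-sharing rule: advance this robot's phase phi
    at natural frequency omega, plus a weighted pull toward each
    neighbour's phase (Kuramoto-style coupling)."""
    coupling = sum(w * math.sin(p - phi)
                   for w, p in zip(weights, neighbour_phis))
    return phi + dt * (omega + coupling)

# Two robots that treat each other as neighbours drift into sync.
phi = [0.0, 1.0]
for _ in range(200):
    phi = [update_phase(phi[0], [phi[1]], [1.0]),
           update_phase(phi[1], [phi[0]], [1.0])]
```

Varying the weights is one way to model robots “weighing that additional input in various ways”: a larger weight makes a robot follow that neighbour more closely.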
At this stage, the robots resemble neural networks. The simple algorithms could also be applied to more complex situations, such as a car steering itself down a lane; as long as the lane lines are clearly visible, that is a very simple task for a robot. Perhaps most essential is that the system has no meaningful memory. This is not a machine-learning neural network that can be trained in iterative simulations to produce human-like behaviour. The robots have no model of themselves; they simply have a task and try to accomplish it without knowing what is going on in the world. The team intentionally damaged a robot so that it could no longer push and pull on its neighbour, expecting it to simply give up. Unexpectedly, the robot kept actuating its motor, finding a way to contribute even though it didn’t appear to be helping the cause.
Overvelde says this kind of behaviour is seen among living things, including fungi and slime mould, organisms that can solve mazes despite not having a central nervous system. They become “smarter” by cooperating with other cells.