Technion Researchers Accelerate AI Learning Process
February 14, 2019
Researchers in the Technion’s Andrew and Erna Viterbi Faculty of Electrical Engineering have developed a platform that accelerates the learning process of artificial intelligence systems 1,000-fold. Professor Shahar Kvatinsky and doctoral student Tzofnat Greenberg-Toledo, together with students Roee Mazor and Ameer Haj-Ali, recently published their findings in IEEE Transactions on Circuits and Systems.
In recent years, the field of artificial intelligence has made major progress, largely thanks to deep neural networks (DNNs): sets of algorithms inspired by the human brain and designed to recognize patterns. These networks learn from vast pools of examples, often represented as images, and have achieved unprecedented success on complex tasks such as autonomous driving, natural language processing, image recognition, and the development of innovative medical treatments. The technology is advancing rapidly in academic research groups, and leading companies such as Facebook and Google are applying it to their own needs.
Learning by example requires large-scale computing power and is therefore carried out on computers equipped with graphics processing units (GPUs) suited to the task. Yet these units consume considerable amounts of energy, and their speed lags behind the learning rate the neural networks require, hindering the learning process. “In fact, we are dealing with hardware originally intended mostly for graphics purposes, and it fails to keep up with the fast-paced activity of the neural networks,” explains Kvatinsky. “To solve this problem, we need to design hardware that is compatible with deep neural networks.”
Prof. Kvatinsky and his research group have developed a hardware system specifically designed to work with these networks, enabling the neural network to perform the learning phase faster and with lower energy consumption. “Compared to GPUs, the new hardware’s calculation speed is 1,000 times faster, and it reduces power consumption by 80%.”
Although the research is still at the theoretical stage, the team has already demonstrated the design in simulation. “Currently, our development is intended to work with momentum-based learning algorithms, but we intend to continue developing the hardware so that it will be compatible with other learning algorithms as well. We may be able to develop dynamic, multi-purpose hardware that can adapt to various algorithms, instead of requiring a number of different hardware components,” Kvatinsky added.
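For readers unfamiliar with the momentum learning algorithms the article mentions, the sketch below shows the classic textbook stochastic-gradient-descent-with-momentum update rule. This is the generic algorithm only, not the Technion team’s hardware or implementation; all names and parameter values here are illustrative.

```python
import numpy as np

def momentum_step(w, v, grad, lr=0.01, mu=0.9):
    """One SGD-with-momentum update: the velocity accumulates a
    decaying sum of past gradients, smoothing the descent path."""
    v = mu * v - lr * grad   # decay old velocity, add the new gradient step
    w = w + v                # move the weights along the velocity
    return w, v

# Toy example: minimize f(w) = w^2, whose gradient is 2w.
w, v = np.array([5.0]), np.array([0.0])
for _ in range(100):
    w, v = momentum_step(w, v, 2 * w)
# After 100 steps, w has converged close to the minimum at 0.
```

Because each update is a handful of multiply-accumulate operations per weight, repeated across millions of weights, this is exactly the kind of regular arithmetic that specialized hardware can perform far more efficiently than a general-purpose GPU.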
This research is supported by the European Research Council under the Horizon 2020 Research and Innovation Program.
For more than a century, the Technion-Israel Institute of Technology has pioneered in science and technology education and delivered world-changing impact. Proudly a global university, the Technion has long leveraged boundary-crossing collaborations to advance breakthrough research and technologies. Now with a presence in three countries, the Technion will prepare the next generation of global innovators. Technion people, ideas and inventions make immeasurable contributions to the world, innovating in fields from cancer research and sustainable energy to quantum computing and computer science to do good around the world.
The American Technion Society supports visionary education and world-changing impact through the Technion-Israel Institute of Technology. Based in New York City, we represent thousands of US donors, alumni and stakeholders who invest in the Technion’s growth and innovation to advance critical research and technologies that serve the State of Israel and the global good. Over more than 75 years, our nationwide supporter network has funded new Technion scholarships, research, labs and facilities that have helped deliver world-changing contributions and extend Technion education to campuses in three countries.