Artificial Intelligence (AI) can be defined as a machine’s ability to perform logical analysis, acquire knowledge and adapt to an environment that varies over time or in a given context.
AI is already being used in many different applications today, such as:

  • Computing-intensive AIs that help doctors make diagnoses by sifting through terabytes of patient data to offer the best advice;
  • Self-driving cars that interpret traffic conditions and help keep drivers and other road users safe while making journeys more efficient;
  • Chatbots, often indistinguishable from human operators, able to answer complex questions in real time;
  • Online shopping experiences that are tailored to individual preferences;
  • Personal voice assistants that are rapidly becoming pervasive to simplify everyday life.

This is only the beginning. The Internet of Things (IoT) is enabling tens of billions of intelligent connected devices that will make our lives easier and make the environments in which we live and work safer and more efficient, often by providing more natural human-machine communication. The addition of AI capabilities to these Smart Things will significantly enhance their functionality and usefulness, especially when the full power of these networked devices is harnessed – a trend that is often called AI on the Edge.

Artificial Intelligence & Deep Learning

AI uses an assembly of nature-inspired computational methods to address complex real-world problems where mathematical or traditional modeling has proven ineffective. Examples include processes that are too complex for analytical modeling, or processes that contain unknowns due to their intrinsic dynamic behavior. Many real-life problems cannot be described in exact terms and fall into this category, so they cannot be handled by traditional computing systems.

Artificial Intelligence uses an approximation of the way the human brain reasons, using inexact and incomplete knowledge to produce decisions and actions in an adaptive way, with experience built up over time.

The basic concepts behind AI have been around since the 1950s, but modern programming techniques (such as Python), the availability of huge quantities of data, open-source tools for neural-network training, powerful computing centers, and ever-improving embedded-processing systems are contributing to AI taking off as a world-changing technology today.

Machine Learning (ML) is a subset of Artificial Intelligence and refers to techniques which enable machines to recognize underlying patterns and learn to make predictions and recommendations by analyzing data and experiences, rather than through traditional explicit programming instructions. ML adapts using new data and experiences to improve prediction performance over time.
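The difference between explicit programming and learning from data can be shown with a toy sketch (the conversion task and linear model below are our own illustration, not from any ST product): instead of hand-coding a rule, ordinary least squares recovers the same rule from example input/output pairs.

```python
import numpy as np

# Traditional explicit programming: the rule is hand-written.
def fahrenheit_explicit(c):
    return c * 9.0 / 5.0 + 32.0

# Machine learning: the same rule is recovered from example data.
celsius = np.array([-40.0, 0.0, 20.0, 37.0, 100.0])
fahrenheit = np.array([-40.0, 32.0, 68.0, 98.6, 212.0])

# Fit f = w*c + b by least squares on the (input, output) pairs.
A = np.vstack([celsius, np.ones_like(celsius)]).T
(w, b), *_ = np.linalg.lstsq(A, fahrenheit, rcond=None)

print(w, b)                       # close to 1.8 and 32.0
print(fahrenheit_explicit(25.0))  # 77.0
print(w * 25.0 + b)               # close to 77.0
```

Feeding the fit more (or noisier) data is exactly the "adapts using new data and experiences" behavior described above: the learned parameters, not the program, change.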


Deep Learning (DL) is a subset of Machine Learning. It aims to learn data patterns and dependencies using a hierarchy of multiple layers that mimics the neural connections of the human brain; these layers make up a deep neural network. Deep Learning techniques work with very large datasets, analyzing data, recognizing patterns and making predictions about new data points.

Before the advent of deep-learning techniques, creating and testing algorithms to solve some problems required detailed subject expertise, along with writing and debugging dedicated, hand-crafted, and often very complicated programs.
With Deep Learning, a computer can train itself with a large set of data collected for this purpose. The learning stage, in which the neural network learns to classify different patterns, may use datasets labeled in advance, a process referred to as Supervised Learning. With unlabeled datasets the process is called Unsupervised Learning: during training, the neural network tries to cluster the dataset into groups with similar patterns.
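The two learning modes can be illustrated on the same toy dataset (the two-group synthetic data and the plain k-means loop below are our own sketch, not a production algorithm): with labels the group centers are computed directly, and without labels k-means discovers essentially the same groups.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated synthetic groups of 2-D points (illustrative data).
group_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2))
group_b = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(50, 2))
x = np.vstack([group_a, group_b])

# Supervised Learning: labels are provided in advance, so the
# class centers can be fit against the known labels directly.
labels = np.array([0] * 50 + [1] * 50)
sup_centers = np.array([x[labels == k].mean(axis=0) for k in (0, 1)])

# Unsupervised Learning: same data, no labels; k-means clusters
# the dataset into groups with similar patterns.
centers = x[[np.argmin(x.sum(axis=1)), np.argmax(x.sum(axis=1))]]
for _ in range(10):
    dists = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    assign = dists.argmin(axis=1)          # nearest-center assignment
    centers = np.array([x[assign == k].mean(axis=0) for k in (0, 1)])

print(np.round(sup_centers, 1))  # near (0, 0) and (3, 3)
print(np.round(centers, 1))      # k-means recovers the same centers
```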

In both cases the result is an Artificial Neural Network (ANN) that contains all the information necessary to carry out the task. The ANN uses the knowledge acquired during training to infer data features from new incoming data. This is called the inference stage, and it can be deployed on embedded devices with memory and processing capabilities orders of magnitude smaller than those of the servers used to train the ANN itself.
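A minimal sketch of what the inference stage amounts to (the network size and weights below are a hand-picked toy example, not output from an ST tool): a forward pass that needs only the trained parameters, which is why it fits on a small device.

```python
import numpy as np

# Parameters produced by a (hypothetical) earlier training stage.
# At inference time only these numbers are needed -- not the
# training data, and not the training algorithm.
W1 = np.array([[20.0, -20.0],
               [20.0, -20.0]])   # input -> hidden weights
b1 = np.array([-10.0, 30.0])     # hidden biases
W2 = np.array([[20.0],
               [20.0]])          # hidden -> output weights
b2 = np.array([-30.0])           # output bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def infer(x):
    """One forward pass through a tiny 2-2-1 network."""
    hidden = sigmoid(x @ W1 + b1)
    return float(sigmoid(hidden @ W2 + b2)[0])

# This particular set of weights happens to implement XOR.
for x in ([0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]):
    print(x, round(infer(np.array(x))))   # prints 0, 1, 1, 0
```

On a microcontroller the same forward pass would typically be written in C over fixed-size arrays, but the structure is identical: stored weights, a few matrix multiplies, and an activation function.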


AI on the Edge

Artificial Neural Networks (ANNs) are available in various types, topologies, and complexities to address a variety of problems across a wide spectrum of applications. They can exploit the data provided by the exploding number of heterogeneous sensors present in our homes, offices, cars, factories, and personal items.

If we consider a model where the raw data from all these sensors is sent to a powerful central remote intelligence, we quickly see the escalation in required data bandwidth and computational capability in the Cloud, especially when processing audio, video, or images from millions of end devices, not to mention the latency introduced by such a centralized system.

AI enables much more efficient end-to-end solutions by switching from a centralized to a distributed intelligence system, where some of the analysis done in the Cloud is moved closer to the sensing and actions. This distributed approach significantly reduces both the bandwidth required for data transfer and the processing capacity required of cloud servers. It also offers data-privacy advantages, as personal source data is pre-analyzed locally and passed to service providers at a higher level of interpretation.


Moreover, ANN solutions span a wide range of complexity levels, in terms of the number of operations per inference run and the memory required, and can be matched to the task’s input data rate, real-time processing requirements, and allowable latency.

So AI and Deep Learning allow low-power, software-only or mixed software/hardware solutions to be deployed close to the sensor, enabling true edge computing.



Artificial Intelligence @ ST

ST has been engaged in AI research and development for several years. As a leading supplier of high-volume, broad-market, embedded-processing solutions, we are focused on developing scalable, flexible products and technologies that allow AI approaches to benefit a wide variety of devices, supporting a virtually unlimited number of use cases.

AI on STM32 Microcontrollers

In the future, nearly any device with a 32-bit microcontroller will be able to use AI techniques. More concretely, they will be able to run Deep Neural Networks (DNNs) that have been trained to do specific tasks.

While most microcontrollers today do not have the memory and processing power to run the learning algorithms needed to create DNNs, they can run the DNNs themselves – provided that the networks are optimized for microcontrollers.

ST has created a tool to perform that optimization of DNNs for a microcontroller. STM32CubeMx.AI is planned for release later this year as part of the STM32 software ecosystem.

The tool takes the pre-trained neural network model output from a broad range of the most popular AI frameworks (including Caffe, CNTK, Keras, Lasagne, TensorFlow, and Theano), and maps it to an optimized DNN that is adapted to the memory and processing-power capabilities of a target STM32 microcontroller.

The tool can also check the functionality of the adapted DNN, which can be 10x smaller than the original with negligible loss of accuracy.
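One family of techniques behind such size reductions is post-training weight quantization. The standalone sketch below is our own illustration, not the STM32CubeMx.AI implementation: it stores a float32 weight tensor as int8 values plus a single scale factor, cutting storage 4x with a per-weight error bounded by half a quantization step.

```python
import numpy as np

# A hypothetical float32 weight tensor from one trained layer.
rng = np.random.default_rng(42)
w = rng.normal(scale=0.1, size=(64, 64)).astype(np.float32)

# Symmetric post-training quantization: keep int8 weights plus
# one float scale per tensor.
scale = float(np.abs(w).max()) / 127.0
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

# At inference time the weights are dequantized on the fly (or the
# whole multiply-accumulate is done in integer arithmetic).
w_hat = w_q.astype(np.float32) * scale

print(w.nbytes // w_q.nbytes)           # 4 -> 4x smaller storage
print(float(np.abs(w - w_hat).max()))   # worst-case error <= scale / 2
```

Further reductions, toward the 10x figure quoted above, typically come from combining quantization with pruning of near-zero weights and layer-level optimizations.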

Check out this video for a deeper explanation of how the tool works from one of ST’s AI experts.

Advanced R&D for Dedicated AI Processing Hardware

ST has developed an advanced System-on-Chip demonstrator for ultra-energy-efficient Deep Convolutional Neural Network (DCNN) processing. It addresses the challenging data-rate and real-time processing requirements of image, video, and natural-language processing. The demonstrator combines 8 convolutional accelerators, 8 dual-DSP clusters, and an optimized distributed-memory architecture in a 28nm FD-SOI System on Chip. At the beginning of 2017 it achieved an efficiency of 2.9 TOPS/W at 200 MHz and 0.575 V.

ST-Published papers on Artificial Intelligence

The Orlando Project: A 28nm FD-SOI Low Memory Embedded Neural Network ASIC, Giuseppe Desoli, Valeria Tomaselli, Emanuele Plebani, Giulio Urlini, Danilo Pau, Viviana D’Alto, Tommaso Majo, Fabio De Ambroggi, Thomas Boesch, Surinder-pal Singh, Elio Guidetti, Nitin Chawla, Advanced Concepts for Intelligent Vision Systems (ACIVS), 2016.

2.9TOPS/W Deep Convolutional Neural Network SoC in FD-SOI 28nm for Intelligent Embedded Systems, Giuseppe Desoli, Nitin Chawla, Thomas Boesch, Surinder-pal Singh, Elio Guidetti, Fabio De Ambroggi, Tommaso Majo, Paolo Zambotti, Manuj Ayodhyawasi, Harvinder Singh, Nalin Aggarwal, IEEE International Solid-State Circuits Conference (ISSCC), 2017.
Also presented at the 17th International Forum on MPSoC.

Detecting Changes at the Sensor Level in Cyber-Physical Systems: Methodology and Technological Implementation, Cesare Alippi, Viviana D’Alto, Mirko Falchetto, Danilo Pau, Manuel Roveri, International Joint Conference on Neural Networks (IJCNN), 14-19 May 2017.

Complexity and Accuracy of Hand-Crafted Detection Methods Compared to Convolutional Neural Networks, Valeria Tomaselli, Emanuele Plebani, Mauro Strano, Danilo Pau, 19th International Conference on Image Analysis and Processing (ICIAP), 11-15 September 2017.

Intelligent Embedded and Real-Time ANN-based Motor Control for Multi-Rotor Unmanned Aircraft Systems, George Michael, Nectarios Efstathiou, Kyriacos Mantis, Theocharis Theocharides, Danilo Pau, 25th IFIP/IEEE International Conference on Very Large Scale Integration (VLSI-SoC), Abu Dhabi, UAE, 23-25 October 2017.
