AI transparency report

DISCLAIMER: No part of this document may be reproduced in any form without the written permission of STMicroelectronics (ST). The contents of this document may be revised by ST in its sole discretion, without notice, due to continued progress in our AI methodology, any changes in applicable laws, regulations or related guidance, or for any other reason. ST has no liability for any error or damage of any kind resulting from the use of this document, its contents or the information provided therewith. The contents of this document and other information conveyed are for informational purposes only and do not constitute legal advice (and should not be relied upon as such).

Introduction

We are committed to the responsible development and use of artificial intelligence (AI). We are dedicated to ensuring that AI deployed in our application (which we will refer to as AI features) not only meets regulatory requirements, but also aligns with industry best practices and the following AI principles:

  1. Transparency by design: We prioritize openness and explainability through in-product notices where appropriate and updates to our AI Transparency Report.
  2. Privacy-centric data protection: Transparent data collection, accountable use, and responsible management of data are the key pillars of protecting data in AI features.
  3. Algorithmic accountability: We require a clear system of accountability for managing AI risks.
  4. Fairness & inclusivity: We strive to mitigate the risks of unwanted biases and discriminatory practices in our AI features. We believe AI should empower everyone and contribute to overall prosperity, inclusivity, and growth for all.
  5. Data Stewardship: Our AI features are designed with industry-recognized information security safeguards.
  6. Safety and reliability assurance: Our AI features are designed to perform safely and reliably.

This AI Transparency Report provides information about the AI features we have deployed through 'AI feature cards' that outline important information about each feature, such as its purpose, methodology, and risk rating. The AI feature cards are organized into categories based on their functionality and the data used in their development and operation.

1 - Description of the AI feature card

Name of the AI feature, solution, and product: The name of the AI feature appears at the top of the AI feature card. The solution(s) and product(s) the AI feature is associated with are listed in the first row. In some cases, the AI feature is applied platform-wide.

Purpose: It describes the use case and function of the AI feature.

Methodology: It provides the name of the AI model used for the AI feature, details on the operation of the AI feature, and information on any training data used in development.

Data Processed: It outlines the data used during execution to deliver the AI feature. Categories include public and proprietary sources and customer data, also called customer content (i.e., data or information submitted by or on behalf of a customer to the AI assistant/chatbot).

Please note that customer data is never used to train the AI models. However, we record prompts and responses for offline retraining or fine-tuning. The recorded data is not used to re-create user content or to attempt to re-identify users.

Controls: It details the options available to customers to enable or disable the AI feature.
  • Default: indicates whether the AI feature is on or off by default (i.e., as the pre-selected option).
  • Option to Disable: indicates whether the customer administrator can turn the feature on or off. Controls for enabling/disabling AI features are found in the AI feature table below.

EU AI Act Risk Rating: As part of our commitment to ethical AI development, ST does not develop ‘Prohibited AI systems’ as defined under the EU AI Act and is not planning to offer any high-risk AI features or systems as part of the services. Below are the descriptions of each AI risk category based on the risk qualification criteria outlined in the EU AI Act:
  • High-risk: AI features that pose a high probability of harm to health and safety or an adverse impact on the fundamental rights of individuals. Article 6 of the EU AI Act describes the thresholds that lead to an AI system being “high-risk.” Either such a system meets the safety component criteria in Article 6(1) or falls into a category referred to in Annex III (Specific List of High-Risk AI systems).
  • Limited (transparency) risk: AI features that use General Purpose AI models to create original content as well as AI features that interact directly with individuals. Examples include autonomous chatbots; technology that summarizes long-form content, autonomously creates software code, generates digital images from natural language, and produces articles or creative stories based on given prompts; music composition tools that create original scores and songs; and video generation tools that produce animated sequences or edit videos based on text descriptions.
  • Minimal risk: AI features that do not fall into the High or Limited/Transparency risk categories. Examples include spam filters, inventory management systems, and business process improvement tools.

2 - Categorization

We organize our AI features into categories based on their functionality and similar data use.

AI Chat Assistant

This category includes AI features that deliver knowledge and answer customer questions by responding to input with data from trusted STMicroelectronics sources, such as product documentation, product information, and Insight articles made publicly available on the STMicroelectronics website. These AI features use natural language interfaces to deliver customized information, often with links to supporting material included for deeper reference and validation.

These AI features do not use any customer data apart from the prompts typed in by the customer to drive feature execution.

AI feature: STM32 Sidekick

Specification Description

Solution(s) and Product(s)

Community page on www.ST.com.

Purpose

Conversational chatbot that answers customers’ custom questions about STM32 documentation. Responses are generated from STMicroelectronics product documentation. By delivering customized answers to specific customer questions, this AI feature saves customers time on research and finding information across our customer-facing resources.

Methodology

Which model is used?

Azure OpenAI text-embedding and Azure OpenAI GPT variants.

How does it work?

A retrieval-augmented generation (RAG) technique converts our documentation into vector representations using the Azure OpenAI text-embedding model, compares the user's question against those vectors to retrieve the most relevant content, and passes that content to the Azure OpenAI GPT model to generate the response. An illustrative sketch of this flow is provided after this feature card.

Training Data

None.

Data Processed

Customer-facing product documentation, product information, and Insight articles.
Customer input is processed exclusively to execute the model.

Controls

Default: Off

Option to disable: No

EU AI Act Risk Rating

Limited risk
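
The methodology above follows the general retrieval-augmented generation pattern: embed the documentation, retrieve the passages most similar to the question, and let the GPT model answer from those passages. The sketch below is a minimal illustration of that pattern only, written against the Azure OpenAI Python SDK; the endpoint, API key handling, deployment names (text-embedding-3-small, gpt-4o), and the two document snippets are hypothetical placeholders and do not describe ST's actual implementation.

# Minimal RAG sketch. Endpoint, deployment names, and documents are
# hypothetical placeholders, not details of the STM32 Sidekick service.
import os
import numpy as np
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],          # placeholder credential
    api_version="2024-02-01",
    azure_endpoint="https://example.openai.azure.com",   # hypothetical endpoint
)

# Hypothetical snippets of customer-facing STM32 documentation.
documents = [
    "STM32CubeMX user manual: configuring clocks, pins, and middleware ...",
    "STM32F4 reference manual: DMA controller configuration and streams ...",
]

def embed(texts):
    # Convert text into vector representations with the text-embedding model.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

doc_vectors = embed(documents)

def answer(question):
    # 1. Embed the question and find the most similar documentation snippet
    #    by cosine similarity.
    q_vec = embed([question])[0]
    scores = (doc_vectors @ q_vec) / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec)
    )
    context = documents[int(np.argmax(scores))]

    # 2. Pass the retrieved snippet to the GPT model to generate the response.
    chat = client.chat.completions.create(
        model="gpt-4o",   # placeholder GPT deployment name
        messages=[
            {"role": "system",
             "content": "Answer using only the provided STM32 documentation."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content

print(answer("How do I set up a DMA transfer on an STM32F4?"))

A real deployment would index many documents in a vector store rather than an in-memory list, but the embed, retrieve, and generate steps follow the same pattern.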
