
Computer vision: artificial intelligence that sees and understands

Computer vision is an artificial intelligence technique for analyzing images and videos captured by cameras. Thanks to this technology, AI can understand and process visual information in real time.

With the increasing quality of video sensors, AI algorithms are now able to "see" the world: they categorize objects, analyze their movements and interpret visual contexts. These analyses then generate alerts or information accessible to humans, facilitating decision-making.

A concrete example: one of our algorithms is trained to detect fire outbreaks. It immediately alerts the relevant authorities, enabling a rapid and effective response.
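As an illustration only, the detection-to-alert step described above could be sketched like this; `Detection`, `should_alert` and the confidence threshold are hypothetical names for this sketch, not XXII's actual API:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One object found in a video frame (hypothetical structure)."""
    label: str
    confidence: float

def should_alert(detections, label="fire", threshold=0.8):
    """Alert if any detection of `label` exceeds the confidence threshold."""
    return any(d.label == label and d.confidence >= threshold
               for d in detections)

# A frame containing a low-confidence smoke hit and a strong fire hit:
frame = [Detection("smoke", 0.55), Detection("fire", 0.91)]
print(should_alert(frame))  # True
```

In a real deployment the detections would come from a trained model running on the live video stream, and the alert would be routed to the relevant security center rather than printed.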


Multiple uses for local authorities and infrastructures

Cameras are already deployed in many public and private spaces, for a variety of functions such as:

  • Mobility counting,

  • Flow analysis,

  • Building protection,

  • Surveillance of high-risk areas (theft, assault),

  • Traffic offence detection.

Without AI, human operators must analyze all these images themselves. But given the exponential growth of data, which doubles every year, and the limited capacity of our brains (only an estimated 70% of visual data is actually perceived), computer vision supports and relieves the human eye.


A major technical challenge

The visual world is complex: an object may appear from different angles, be partially hidden, or be subject to changing lighting conditions.

To be effective, a computer vision system must be able to recognize an object in all these situations, which represents a real technical challenge.


Legal and ethical issues

The massive deployment of cameras in public and private spaces raises legitimate questions about the protection of individual liberties.

At XXII, we strictly comply with the GDPR (known in French as the RGPD):

  • We never analyze personal data.

  • We only distinguish categorized silhouettes (human, dog, car, bike...) without using biometric data.

  • Any personal data remains "dormant" and is not integrated into AI analyses.


Training data and deep learning

AI requires large volumes of image and video data to learn, test and improve.

This stage, known as deep learning, relies on artificial neural networks composed of many layers, which interpret and transmit information.
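As a minimal sketch of the layered idea described above (a toy forward pass, not a real deep network), each layer combines the previous layer's outputs through weights, and a non-linearity passes the result on to the next layer; all numbers here are made up for illustration:

```python
def relu(values):
    """Element-wise rectified linear unit: negative values become 0."""
    return [max(0.0, v) for v in values]

def dense(inputs, weights, biases):
    """One fully connected layer; weights[j][i] connects input i to unit j."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

x = [0.5, -1.0]                                              # two input features
hidden = relu(dense(x, [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]))  # first layer
output = dense(hidden, [[1.0, 1.0]], [0.0])                    # second layer
print(output)  # [0.5]
```

Real computer-vision networks stack many such layers (often convolutional ones) and learn the weights from annotated data instead of fixing them by hand.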

When building datasets, we ensure strict compliance with the GDPR, often accompanied by a data protection impact assessment.


Creation of responsible databases

We create databases dedicated to internal scientific research at XXII, always respecting legislation and the people concerned.

To limit the risks associated with personal data, we also work with artificially generated synthetic data.

Our datasets are:

  • targeted by use case (e.g. learning to detect a new object),

  • diversified (to test robustness in different situations),

  • anonymized or pseudonymized,

  • derived from open-source datasets, or collected with consent.
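Pseudonymization, as mentioned in the list above, can be sketched with a keyed hash: a direct identifier is replaced by a stable token that cannot be reversed without the key. The key and identifier format below are illustrative assumptions, not XXII's actual scheme:

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # illustrative placeholder

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).
    The mapping is stable, but irreversible without the secret key."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

print(pseudonymize("camera-42/clip-007")[:16])  # first chars of the token
```

The same input always yields the same token, so records can still be linked for analysis without exposing the original identifier.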

Annotation and learning

Algorithm development begins with data annotation: each image is tagged according to precise criteria (object types, viewing angles, weather conditions, etc.).

This annotated data is then used to train the AI models, often deep neural networks, to analyze and understand the images received.
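A hypothetical annotation record for one image might look like this; the field names and values are assumptions for illustration, not XXII's actual schema:

```python
import json

annotation = {
    "image": "frames/000123.jpg",
    "objects": [
        {"label": "car",  "bbox": [34, 50, 120, 90]},   # x, y, width, height
        {"label": "bike", "bbox": [200, 60, 40, 80]},
    ],
    "conditions": {"weather": "rain", "view_angle_deg": 35},
}
print(json.dumps(annotation, indent=2))
```

Storing the viewing angle and weather alongside each label is what later allows robustness to be tested per condition, as described below.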

Performance evaluation

We evaluate our models by measuring:

  • the true positive rate,

  • the false positive rate,

  • the true negative rate,

  • the false negative rate.

These measurements, calculated per object category, help us build summary indicators (sensitivity, specificity) that are essential for understanding the strengths and limitations of our algorithms.
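Sensitivity and specificity follow directly from those four counts; a minimal sketch (the counts below are made-up example values):

```python
def rates(tp, fp, tn, fn):
    """Sensitivity (true positive rate) and specificity (true negative rate)."""
    sensitivity = tp / (tp + fn)  # share of actual positives detected
    specificity = tn / (tn + fp)  # share of actual negatives correctly ignored
    return sensitivity, specificity

sens, spec = rates(tp=90, fp=5, tn=95, fn=10)
print(sens, spec)  # 0.9 0.95
```

Computing these per object category, as the text describes, reveals whether a model is strong on cars but weak on bikes, for instance.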

These tests are performed on pseudonymized data, using proprietary or in-house developed software.


Combating bias

A major challenge is to create bias-free algorithms: for example, the AI must detect a red car just as reliably as a blue one.

To achieve this, we constantly enrich our models with a wide variety of datasets.
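One simple way to check for such bias is to break detection results down by attribute and compare the rates; the colors and outcomes below are made up for illustration:

```python
from collections import defaultdict

# Each entry: (attribute value, whether the object was detected)
results = [("red", True), ("red", True), ("blue", True), ("blue", False)]

by_color = defaultdict(list)
for color, detected in results:
    by_color[color].append(detected)

for color, hits in sorted(by_color.items()):
    print(color, sum(hits) / len(hits))  # detection rate per color
```

A large gap between the per-attribute rates would signal a bias to be corrected by enriching the training data for the under-performing group.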


The importance of data diversity

The diversity of viewing angles is also crucial. It enables us to check that the algorithms detect objects correctly, whatever the camera position.


This rigorous approach enables XXII to offer reliable, high-performance and ethical solutions, capable of supporting humans in analyzing and understanding the visual world.

GDPR & ethics at XXII


CORE, a decision-making tool

Computer vision often processes video streams containing personal data. That's why compliance with the GDPR and related regulations is a priority for us.

We clearly reaffirm that CORE is a decision-making tool. In no way does it replace the human eye, but complements it.

Our platform does not trigger any automated procedure following a suspected offence. It simply facilitates access to information already available in a security center. In this way, the use of CORE always remains subject to human intervention.


Our commitment in accordance with the CNIL

  • Necessity: The system must serve a clear and legitimate purpose.

  • Proportionality: CORE acts only on images already covered by video protection, with no additional impact on individuals.

  • Data minimization: XXII does not store any video streams. Our software operates exclusively in real time.

  • Non-identification: Algorithms analyze silhouettes, without collecting personal data. Only position in space is studied, never a specific person. Analysis is always performed on groups, never on targeted individuals.


CORE thus combines performance, respect for privacy and human complementarity to offer an ethical and effective solution.

