EP4128055A1 - Device and method for decision support of an artificial cognitive system - Google Patents

Device and method for decision support of an artificial cognitive system

Info

Publication number
EP4128055A1
Authority
EP
European Patent Office
Prior art keywords
data
representation
unit
processing unit
decision support
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP21709420.0A
Other languages
English (en)
French (fr)
Inventor
Andrea Ancora
Matthieu DA-SILVA-FILARDER
Maxime DEROME
Maurizio FILIPPONE
Pietro Michiardi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ampere SAS
Original Assignee
Renault SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Renault SAS filed Critical Renault SAS
Publication of EP4128055A1
Legal status: Pending

Links

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • G06N3/0442Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0475Generative networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/094Adversarial learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/096Transfer learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Definitions

  • Artificial cognitive systems are equipped with sensors configured to capture physical quantities and collect information.
  • the raw data collected by the different sensors can be merged before processing.
  • the purpose of merging data from multiple sensors or multiple data sources is to combine that data so that the resulting information has less uncertainty than what would be obtained when these data sources are used individually. Reducing uncertainty may mean obtaining more precise, complete, or more reliable information, or may refer to the outcome of an emergent view, such as stereoscopic viewing (e.g., calculating depth information by combining two-dimensional images from two cameras at different points of view).
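The stereoscopic example above can be made concrete with a minimal sketch of the classic pinhole stereo relation, depth = focal length × baseline / disparity. The focal length, baseline, and pixel coordinates below are illustrative values, not taken from the patent.

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         x_left: float, x_right: float) -> float:
    """Recover depth (metres) from two 2-D views of the same point.

    Combining the two cameras yields information (depth) that neither
    image contains on its own, which is the 'emergent view' idea.
    """
    disparity = x_left - x_right  # horizontal shift of the point between views
    if disparity <= 0:
        raise ValueError("point must project with positive disparity")
    return focal_px * baseline_m / disparity

# A point seen at x = 340 px in the left image and x = 300 px in the right,
# with a 700 px focal length and a 12 cm camera baseline:
z = depth_from_disparity(700.0, 0.12, 340.0, 300.0)  # 2.1 metres
```

A larger disparity for the same camera pair means a closer point, which is why the fused result carries less uncertainty than either image alone.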
  • the data fusion algorithm is derived from a machine learning algorithm.
  • the invention further provides a decision support process for a cognitive system from data originating from a plurality of data sources, comprising the steps of:
  • the embodiments of the invention allow, through the reconstruction of sensor data, the detection of faults or anomalies of the sensors and the generation of virtual sensor data from existing sensors.
  • the embodiments of the invention offer increased reliability and precision thanks to the combination of all the information available to reconstruct the data of an improved sensor.
  • Figure 1 is a diagram showing a device 10 for decision support of a cognitive system 1, according to certain embodiments of the invention.
  • the cognitive system 1 can be part of or can be a tracking system, a detection system, a surveillance system, a navigation system, or an intelligent transport system.
  • an air traffic control radar (for example a primary radar or a secondary radar) can be used as a data source.
  • the cognitive system 1 can be implemented in an airborne surveillance system operating, for example, inside a surveillance aircraft.
  • a data source 11-i can be any type of image or video acquisition device configured to acquire image streams or video streams from the environment in which the cognitive system 1 operates.
  • Applications of the cognitive system 1 to robotics include connected autonomous vehicles (Internet of Vehicles, with vehicle-to-vehicle, vehicle-to-infrastructure, or vehicle-to-everything communications), home automation/smart homes, smart cities, wearable technology, and connected health.
  • the data coming from the different data sources 11-i, for i varying from 1 to N, can have different representations in different data representation spaces.
  • the embodiments of the invention provide a device 10 for decision support of a cognitive system 1 from data from the plurality of data sources 11-i, with i ranging from 1 to N, implementing machine learning techniques to merge and reconstruct the data from the data sources 11-i, in order to provide the decision unit 12 with a data representation from which the decision unit 12 can determine one or more actions to be implemented by the cognitive system 1.
  • the last layer of each auto-encoder implemented in each encoding unit 1030-i can produce mean and variance vectors characterizing a multivariate normal distribution of the same dimension for the set of encoding units 1030-i associated with the plurality of data sources 11-i.
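A minimal sketch of such a "last layer" producing mean and variance vectors, in the style of a variational auto-encoder head. The latent dimension, the random placeholder weights, and the reparameterization sampling step are illustrative assumptions, not details fixed by the patent.

```python
import math
import random

LATENT_DIM = 4  # assumed latent dimension, shared by all encoding units

def encoder_head(features, w_mu, w_logvar):
    """Map a feature vector to the (mu, var) vectors of a diagonal Gaussian.

    Predicting log-variance and exponentiating keeps the variance positive.
    """
    mu = [sum(w * x for w, x in zip(row, features)) for row in w_mu]
    logvar = [sum(w * x for w, x in zip(row, features)) for row in w_logvar]
    var = [math.exp(lv) for lv in logvar]
    return mu, var

def sample(mu, var, rng):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1)."""
    return [m + math.sqrt(v) * rng.gauss(0.0, 1.0) for m, v in zip(mu, var)]

rng = random.Random(0)
features = [0.5, -1.0, 2.0]  # toy feature vector from earlier layers
w_mu = [[rng.uniform(-1, 1) for _ in features] for _ in range(LATENT_DIM)]
w_logvar = [[rng.uniform(-1, 1) for _ in features] for _ in range(LATENT_DIM)]
mu, var = encoder_head(features, w_mu, w_logvar)
z = sample(mu, var, rng)
```

Because every encoding unit emits a distribution of the same dimension, downstream fusion can operate uniformly on the per-source (mean, variance) pairs.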
  • the environment representation model (also called the latent world model) can be used by the decision unit 12 to determine an action to be implemented by the cognitive system 1, and/or can be used for other tasks of the cognitive system 1, such as context understanding, trajectory planning, or any other decision-making task.
  • a processing unit 103-i associated with the data source 11-i, for i varying from 1 to N, can further comprise a comparison unit 1032-i configured to compare the data from the data source 11-i with the reconstructed data representation determined by the data reconstruction unit 1031-i associated with the data source 11-i.
  • the comparison makes it possible to detect random or systematic errors (for example, false alarms due to 'phantom' detections, which can be highlighted by comparing the data coming from the data source with the reconstructed data), to calibrate data sources (for example, detecting misalignment of a sensor's mounting after a shock, or estimating a delay), and, more generally, to detect whether a data source deviates from its nominal operating point, using the reconstructed data as a reference for uncorrupted nominal data calibrated at the factory.
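The deviation check described above can be sketched as reconstruction-error thresholding. The mean-squared-error metric, the threshold value, and the toy samples are illustrative assumptions, not parameters from the patent.

```python
def reconstruction_error(raw, reconstructed):
    """Mean squared error between raw samples and their reconstruction."""
    return sum((x - r) ** 2 for x, r in zip(raw, reconstructed)) / len(raw)

def source_deviates(raw, reconstructed, threshold=0.25):
    """Flag a source whose output drifts from its nominal operating point.

    The reconstruction serves as the reference for uncorrupted nominal data.
    """
    return reconstruction_error(raw, reconstructed) > threshold

# Toy range scan: the reconstruction from the other sources stays close to
# the healthy readings, so a spurious 'phantom' return stands out.
reconstruction = [1.1, 1.9, 3.0, 4.1]
nominal_scan = [1.0, 2.0, 3.0, 4.0]  # healthy sensor: small error
ghost_scan = [1.0, 2.0, 9.0, 4.0]    # one 'phantom' detection: large error
```

The same error signal can also drive calibration, for example by estimating a constant offset or delay that minimizes it.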
  • generative adversarial networks can be used as discriminators to help the data reconstruction units 1031-1 to 1031-3 provide images of better fidelity.
  • These generative adversarial networks are, for example, hybridized with variational auto-encoders.
  • the reconstruction of data according to the embodiments of the invention also allows the transfer of information between the data sources, by extending the set of training data for one data source with data from the other data sources. This data transfer is agnostic to the nature and position of the data sources.
  • the decision unit 12 can be configured to determine an action to be implemented by the cognitive system 1 according to the environment representation model, and/or according to the comparisons made between the original data coming from the data sources 11-i and the reconstructed data representations determined by the data reconstruction units 1031-i, for i ranging from 1 to N, and/or as a function of the reconstructed representations.
  • the data reconstruction units 1031-i can also be co-located with the data sources 11-i and the encoding units 1030-i, for i ranging from 1 to N.
  • a return channel carrying the a posteriori world model from the data fusion unit 105 to the processing unit of the data source 11-i can be implemented.
  • This architecture advantageously makes it possible to co-locate the comparison units 1032-i online, which can be configured to implement general tasks of monitoring data sources 11-i including for example dynamic activation / deactivation, calibration, time stamping, anomaly detection, and communication of the general state of the data source 11-i.
  • the data coming from the data sources can comprise data generated in real time by the data sources, data previously processed, and / or contextual data.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
EP21709420.0A 2020-03-23 2021-03-08 Device and method for decision support of an artificial cognitive system Pending EP4128055A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR2002789A FR3108423B1 (fr) 2020-03-23 2020-03-23 Device and method for decision support of an artificial cognitive system
PCT/EP2021/055762 WO2021190910A1 (fr) 2020-03-23 2021-03-08 Device and method for decision support of an artificial cognitive system

Publications (1)

Publication Number Publication Date
EP4128055A1 true EP4128055A1 (de) 2023-02-08

Family

ID=70614252

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21709420.0A Pending EP4128055A1 (de) 2020-03-23 2021-03-08 Device and method for decision support of an artificial cognitive system

Country Status (5)

Country Link
US (1) US12499670B2 (de)
EP (1) EP4128055A1 (de)
CN (1) CN115461756A (de)
FR (1) FR3108423B1 (de)
WO (1) WO2021190910A1 (de)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11691637B2 (en) * 2020-06-19 2023-07-04 Ghost Autonomy Inc. Handling input data errors in an autonomous vehicle
EP4431974A1 (de) * 2023-03-15 2024-09-18 Zenseact AB Erzeugung einer darstellung einer umgebung eines fahrzeugs
FR3154842A1 (fr) * 2023-10-31 2025-05-02 Thales Procédé de contrôle de l'observation par un système de pistage d'un espace et dispositif associé
CN118468197B (zh) * 2024-07-10 2024-09-24 衢州海易科技有限公司 一种多通道特征融合车联网异常检测方法及系统

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180275658A1 (en) * 2017-03-23 2018-09-27 DeepScale, Inc. Data synthesis for autonomous control systems

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007098979A (ja) * 2005-09-30 2007-04-19 Clarion Co Ltd Parking assistance device
FR2956215B1 (fr) * 2010-02-09 2014-08-08 Renault Sa Method for estimating the location of a motor vehicle
US10656657B2 (en) * 2017-08-08 2020-05-19 Uatc, Llc Object motion prediction and autonomous vehicle control
EP3682349A4 (de) * 2017-09-13 2021-06-16 HRL Laboratories, LLC Independent component analysis of tensors for sensor data fusion and reconstruction
KR101939349B1 (ko) 2018-07-09 2019-04-11 장현민 Method for providing an around-view image for a vehicle using a machine learning model
US11214268B2 (en) * 2018-12-28 2022-01-04 Intel Corporation Methods and apparatus for unsupervised multimodal anomaly detection for autonomous vehicles
KR102097742B1 (ko) * 2019-07-31 2020-04-06 주식회사 딥노이드 Artificial-intelligence-based medical image retrieval system and method of operating the same
CN112579745B (zh) * 2021-02-22 2021-06-08 Institute of Automation, Chinese Academy of Sciences Dialogue emotion error-correction system based on graph neural networks

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180275658A1 (en) * 2017-03-23 2018-09-27 DeepScale, Inc. Data synthesis for autonomous control systems

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LE GRUENWALD ET AL: "Using data mining to handle missing data in multi-hop sensor network applications", DATA ENGINEERING FOR WIRELESS AND MOBILE ACCESS, ACM, 2 PENN PLAZA, SUITE 701 NEW YORK NY 10121-0701 USA, 6 June 2010 (2010-06-06), pages 9 - 16, XP058163895, ISBN: 978-1-4503-0151-0, DOI: 10.1145/1850822.1850825 *
See also references of WO2021190910A1 *

Also Published As

Publication number Publication date
FR3108423B1 (fr) 2022-11-11
US20230306729A1 (en) 2023-09-28
US12499670B2 (en) 2025-12-16
CN115461756A (zh) 2022-12-09
FR3108423A1 (fr) 2021-09-24
WO2021190910A1 (fr) 2021-09-30

Similar Documents

Publication Publication Date Title
WO2021190910A1 (fr) Device and method for decision support of an artificial cognitive system
US10921245B2 (en) Method and systems for remote emission detection and rate determination
Miclea et al. Visibility enhancement and fog detection: Solutions presented in recent scientific papers with potential for application to mobile systems
JP7536006B2 (ja) Multi-channel, multi-polarization imaging for improved perception
DE102022102189A1 (de) Multimodal segmentation network for improved semantic labeling in map generation
CN106682592B (zh) Automatic image recognition system and method based on neural networks
KR20230023530A (ko) Semantic annotation of sensor data using unreliable map annotation inputs
CN119131674A (zh) Slope monitoring method and device based on multimodal data
EP3828866A1 (de) Method and device for determining the trajectories of mobile elements
CA3233479A1 (fr) Obstacle detection method
CN120528948B (zh) Natural resource survey and monitoring method and device, and electronic equipment
US20260073702A1 (en) Systems and methods for predicting occupancy in a voxel representation of an environment
FR3107359A1 (fr) Method and device for determining altitude obstacles
US20250113112A1 (en) Detecting sensor defects for three-dimensional time-of-flight sensors
US12505566B2 (en) Detecting and filtering the edge pixels of 3D point clouds obtained from time-of-flight sensors
US20240319375A1 (en) Precision prediction of time-of-flight sensor measurements
US20250111528A1 (en) Identification and fusion of super pixels and super voxels captured by time-of-flight sensors
US20240062349A1 (en) Enhanced high dynamic range pipeline for three-dimensional image signal processing
Chen et al. Intelligent Aerial-Based Recognition and Positioning System for Camel Grazing
Sarkar et al. Development of an Infrastructure Based Data Acquisition System to Naturalistically Collect the Roadway Environment
Guinoubi Survey on Lidar Sensing Technology for Vehicular Networks
Barbier Urcullu Navigation in challenging environments
Shi et al. Sensor Technologies
FR3105511A1 (fr) Method for automatic road-sign recognition for an autonomous vehicle
WO2024009026A1 (fr) Method and device for classifying and locating objects in image sequences, and associated system, computer program, and storage medium

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20220922

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230608

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: AMPERE SAS

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20250721