MA54537B1 - System and method for reading and analyzing behavior, including verbal, bodily and facial expressions, in order to determine a person's congruence - Google Patents
System and method for reading and analyzing behavior, including verbal, bodily and facial expressions, in order to determine a person's congruence

Info
- Publication number
- MA54537B1 (application MA54537A)
- Authority
- MA
- Morocco
- Prior art keywords
- congruence
- person
- bodily
- reading
- data
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/49—Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/176—Dynamic expression
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
- A61B5/164—Lie detection
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
- A61B5/167—Personality evaluation
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Medical Informatics (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Public Health (AREA)
- Animal Behavior & Ethology (AREA)
- Surgery (AREA)
- Veterinary Medicine (AREA)
- Heart & Thoracic Surgery (AREA)
- Pathology (AREA)
- Signal Processing (AREA)
- Mathematical Physics (AREA)
- Hospice & Palliative Care (AREA)
- Human Computer Interaction (AREA)
- Data Mining & Analysis (AREA)
- Child & Adolescent Psychology (AREA)
- Social Psychology (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Developmental Disabilities (AREA)
- Educational Technology (AREA)
- Physiology (AREA)
- Psychology (AREA)
- Fuzzy Systems (AREA)
- Computational Linguistics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Audiology, Speech & Language Pathology (AREA)
Abstract
The present invention relates to a data processing system for determining the congruence or incongruence between a person's bodily expression and speech, comprising a self-learning machine, such as a neural network, configured to receive as input a data set comprising: approved data from a set of analyzed speech samples of persons, said approved data comprising, for each analyzed speech sample: * a set of video sequences, comprising audio sequences and visual sequences, each audio sequence corresponding to a visual sequence, and * an approved congruence indicator for each of said video sequences; said self-learning machine being trained so that the data processing system is capable of providing a congruence indicator as its output.
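The data flow described in the abstract can be sketched in code. The sketch below is an illustrative reconstruction, not the patented implementation: the feature dimensions, the synthetic data, the correlation-based notion of congruence, and the use of a plain logistic-regression learner in place of the claimed neural network are all assumptions made for the example. Each "analyzed speech sample" is reduced to a paired audio/visual feature vector carrying an approved congruence label, and the learner is trained to output a congruence indicator.

```python
# Toy reconstruction of the claimed training setup (all data and
# dimensions are hypothetical): paired audio/visual features per
# analyzed speech sample, each with an approved congruence label
# (1 = congruent, 0 = incongruent). A logistic-regression model
# stands in for the neural network named in the claims.
import numpy as np

rng = np.random.default_rng(0)

# 200 analyzed speech samples, each reduced to an 8-dim audio
# feature vector and an 8-dim visual feature vector.
n, d = 200, 8
audio = rng.normal(size=(n, d))
visual = rng.normal(size=(n, d))
# Assumed toy definition: a sample is "congruent" when its audio
# and visual features are positively aligned.
labels = (np.sum(audio * visual, axis=1) > 0).astype(float)

# Joint representation: raw features plus their elementwise product,
# so alignment between the two modalities is directly visible.
X = np.hstack([audio, visual, audio * visual])
w = np.zeros(X.shape[1])
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent on the logistic loss.
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - labels)) / n
    b -= lr * np.mean(p - labels)

# The trained system emits a congruence indicator per sequence.
pred = sigmoid(X @ w + b) > 0.5
accuracy = np.mean(pred == labels)
```

A system along the claimed lines would substitute real audio/visual feature extraction (e.g. spectral features and facial landmarks) and a neural classifier, but the flow is the same: paired sequences and approved labels in, a congruence indicator out.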
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CH001571/2018A CH715893A9 (fr) | 2018-12-20 | 2018-12-20 | System and method intended to read and analyze behavior, including verbal language, body language and facial expressions, in order to determine a person's congruence. |
| EP19839416.5A EP3897388B1 (fr) | 2018-12-20 | 2019-12-20 | System and method for reading and analyzing behavior, including verbal, bodily and facial expressions, in order to determine a person's congruence |
| PCT/IB2019/061184 WO2020128999A1 (fr) | 2018-12-20 | 2019-12-20 | System and method for reading and analyzing behavior, including verbal, bodily and facial expressions, in order to determine a person's congruence |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| MA54537A (fr) | 2022-03-30 |
| MA54537B1 (fr) | 2024-08-30 |
Family
ID=64959049
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| MA54537A MA54537B1 (fr) | 2018-12-20 | 2019-12-20 | System and method for reading and analyzing behavior, including verbal, bodily and facial expressions, in order to determine a person's congruence |
Country Status (7)
| Country | Link |
|---|---|
| US (1) | US20220067353A1 (fr) |
| EP (1) | EP3897388B1 (fr) |
| CA (1) | CA3122729A1 (fr) |
| CH (1) | CH715893A9 (fr) |
| IL (1) | IL283516A (fr) |
| MA (1) | MA54537B1 (fr) |
| WO (1) | WO2020128999A1 (fr) |
Families Citing this family (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111657971A (zh) * | 2020-07-07 | 2020-09-15 | 电子科技大学 | Non-contact lie detection system and method based on fusion of micro-Doppler and visual perception |
| US20220101873A1 (en) * | 2020-09-30 | 2022-03-31 | Harman International Industries, Incorporated | Techniques for providing feedback on the veracity of spoken statements |
| CN112991076A (zh) * | 2021-02-08 | 2021-06-18 | 支付宝(杭州)信息技术有限公司 | Information processing method and device |
| US12062367B1 (en) * | 2021-06-28 | 2024-08-13 | Amazon Technologies, Inc. | Machine learning techniques for processing video streams using metadata graph traversal |
| US11900327B2 (en) * | 2021-06-30 | 2024-02-13 | Capital One Services, Llc | Evaluation adjustment factoring for bias |
| EP4252643A1 (fr) * | 2022-03-29 | 2023-10-04 | Emotion Comparator Systems Sweden AB | System and method for interpretation of human interpersonal interaction |
| EP4325517B1 (fr) * | 2022-08-18 | 2024-11-20 | Carl Zeiss Vision International GmbH | Procédés et dispositifs pour effectuer une procédure de test de vision sur une personne |
| US12293010B1 (en) * | 2024-07-08 | 2025-05-06 | AYL Tech, Inc. | Context-sensitive portable messaging based on artificial intelligence |
| CN119993474B (zh) * | 2024-12-13 | 2026-01-06 | 华东师范大学 | Artificial intelligence automatic recognition system based on facial expressions |
Family Cites Families (15)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7046300B2 (en) * | 2002-11-29 | 2006-05-16 | International Business Machines Corporation | Assessing consistency between facial motion and speech signals in video |
| WO2008063155A2 (fr) | 2005-10-18 | 2008-05-29 | Drexel University | Détection de la supercherie au moyen d'une spectroscopie dans l'infrarouge proche fonctionnelle |
| WO2008063527A2 (fr) | 2006-11-13 | 2008-05-29 | Faro Scott H | Détection de mensonge et de vérité à l'aide d'une imrf du cerveau |
| US20080260212A1 (en) | 2007-01-12 | 2008-10-23 | Moskal Michael D | System for indicating deceit and verity |
| US8903176B2 (en) * | 2011-11-14 | 2014-12-02 | Sensory Logic, Inc. | Systems and methods using observed emotional data |
| US20130139259A1 (en) | 2011-11-30 | 2013-05-30 | Elwha Llc | Deceptive indicia profile generation from communications interactions |
| US9832510B2 (en) | 2011-11-30 | 2017-11-28 | Elwha, Llc | Deceptive indicia profile generation from communications interactions |
| US9026678B2 (en) | 2011-11-30 | 2015-05-05 | Elwha Llc | Detection of deceptive indicia masking in a communications interaction |
| US8848068B2 (en) | 2012-05-08 | 2014-09-30 | Oulun Yliopisto | Automated recognition algorithm for detecting facial expressions |
| CN104537361A (zh) | 2015-01-15 | 2015-04-22 | 上海博康智能信息技术有限公司 | Video-based lie detection method and lie detection system |
| US10368792B2 (en) | 2015-06-02 | 2019-08-06 | The Charles Stark Draper Laboratory Inc. | Method for detecting deception and predicting interviewer accuracy in investigative interviewing using interviewer, interviewee and dyadic physiological and behavioral measurements |
| US9697833B2 (en) * | 2015-08-25 | 2017-07-04 | Nuance Communications, Inc. | Audio-visual speech recognition with scattering operators |
| US10049263B2 (en) | 2016-06-15 | 2018-08-14 | Stephan Hau | Computer-based micro-expression analysis |
| US20180160959A1 (en) * | 2016-12-12 | 2018-06-14 | Timothy James Wilde | Modular electronic lie and emotion detection systems, methods, and devices |
| CN107578015B (zh) | 2017-09-06 | 2020-06-30 | 竹间智能科技(上海)有限公司 | A deep-learning-based first-impression recognition and feedback system and method |
- 2018
  - 2018-12-20 CH CH001571/2018A patent/CH715893A9/fr not_active Application Discontinuation
- 2019
  - 2019-12-20 EP EP19839416.5A patent/EP3897388B1/fr active Active
  - 2019-12-20 US US17/416,344 patent/US20220067353A1/en active Pending
  - 2019-12-20 WO PCT/IB2019/061184 patent/WO2020128999A1/fr not_active Ceased
  - 2019-12-20 CA CA3122729A patent/CA3122729A1/fr active Pending
  - 2019-12-20 MA MA54537A patent/MA54537B1/fr unknown
- 2021
  - 2021-05-27 IL IL283516A patent/IL283516A/en unknown
Also Published As
| Publication number | Publication date |
|---|---|
| IL283516A (en) | 2021-07-29 |
| MA54537A (fr) | 2022-03-30 |
| EP3897388C0 (fr) | 2024-05-01 |
| CH715893A9 (fr) | 2023-06-30 |
| US20220067353A1 (en) | 2022-03-03 |
| EP3897388A1 (fr) | 2021-10-27 |
| EP3897388B1 (fr) | 2024-05-01 |
| WO2020128999A1 (fr) | 2020-06-25 |
| CH715893A2 (fr) | 2020-08-31 |
| CA3122729A1 (fr) | 2020-06-25 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| MA54537B1 (fr) | System and method for reading and analyzing behavior, including verbal, bodily and facial expressions, in order to determine a person's congruence | |
| Busch et al. | Auditory environment across the life span of cochlear implant users: Insights from data logging | |
| Hopp et al. | The extended Moral Foundations Dictionary (eMFD): Development and applications of a crowd-sourced approach to extracting moral intuitions from text | |
| Zhao et al. | See your mental state from your walk: Recognizing anxiety and depression through Kinect-recorded gait data | |
| Nasreen et al. | Alzheimer’s dementia recognition from spontaneous speech using disfluency and interactional features | |
| Strand et al. | Individual differences in susceptibility to the McGurk effect: Links with lipreading and detecting audiovisual incongruity | |
| Rey‐Martinez et al. | Vestibulo‐ocular reflex gain values in the suppression head impulse test of healthy subjects | |
| CN109765991A (zh) | Social interaction system, system for helping a user to interact socially, and non-transitory computer-readable storage medium | |
| CN109493968A (zh) | A cognitive assessment method and device | |
| GB2590201A (en) | Apparatus for estimating mental/neurological disease | |
| Michaels et al. | Racial differences in the appraisal of microaggressions through cultural consensus modeling | |
| Chandler et al. | Overcoming the bottleneck in traditional assessments of verbal memory: Modeling human ratings and classifying clinical group membership | |
| Needle et al. | Gendered associations of English morphology | |
| Stegmann et al. | Automated semantic relevance as an indicator of cognitive decline: out‐of‐sample validation on a large‐scale longitudinal dataset | |
| Takeshige-Amano et al. | Digital detection of Alzheimer’s disease using smiles and conversations with a chatbot | |
| Pan et al. | Exploring the ability of vocal biomarkers in distinguishing depression from bipolar disorder, schizophrenia, and healthy controls | |
| Beltrán et al. | Recognition of audible disruptive behavior from people with dementia | |
| Nault et al. | Investigating the influence of local and personal common ground on memory for conversation using an online referential communication task. | |
| Rosenwald et al. | Political ideologies of social workers: An under explored dimension of practice | |
| Mitsuyoshi et al. | Mental status assessment of disaster relief personnel by vocal affect display based on voice emotion recognition | |
| Ramotowska et al. | Most, but not more than half, is proportion-dependent and sensitive to individual differences | |
| KR102496412B1 (ko) | Method of operating an auditory perception ability training system | |
| Gupta et al. | REDE-Detecting human emotions using CNN and RASA | |
| Kaushan et al. | Personalized and Interactive Demented Care and Learning Mate with a Virtual System Using Emotion Recognition | |
| US20220180871A1 (en) | Information processing device, information processing method, and program |