CN109460728A - Big data security management platform based on emotion recognition - Google Patents

Big data security management platform based on emotion recognition

Info

Publication number
CN109460728A
CN109460728A
Authority
CN
China
Prior art keywords
mood
module
information
face
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811291285.3A
Other languages
Chinese (zh)
Inventor
李光者
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Asb Technology Co Ltd
Original Assignee
Shenzhen Asb Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Asb Technology Co Ltd filed Critical Shenzhen Asb Technology Co Ltd
Priority to CN201811291285.3A priority Critical patent/CN109460728A/en
Publication of CN109460728A publication Critical patent/CN109460728A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Hospice & Palliative Care (AREA)
  • Acoustics & Sound (AREA)
  • Child & Adolescent Psychology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a big data security management platform based on emotion recognition, comprising: an emotion recognition subsystem, which performs emotion analysis on collected facial feature information and voice information and obtains an emotion value, and starts the capture subsystem when the emotion value exceeds a warning value; a capture subsystem, which captures the facial image whose emotion value exceeds the warning value, records the capture time and location, identifies the identity of the captured face using face recognition technology, and forms capture data that are uploaded to a server; a server, which detects whether capture data have been transmitted and, if so, stores the capture data, generates an alarm signal and sends it to a user terminal; and a user terminal, which receives the capture data and the alarm signal and promptly displays them to the user. By combining emotion recognition technology with face recognition technology, the invention uses emotion recognition to discover potential security threats in time and determines the identity information associated with those threats, which is of great significance to the security field.

Description

Big data security management platform based on emotion recognition
Technical field
The invention belongs to the technical field of emotion recognition, and in particular relates to a big data security management platform based on emotion recognition.
Background art
Emotion is a state that combines a person's feelings, thoughts and behaviour; it includes the psychological response of a person to external or internal stimuli, as well as the physiological reactions that accompany that psychological response.
In the prior art, emotion recognition methods mainly include self-report, autonomic nervous system measurement, behavioural measurement, brain measurement, language measurement and facial expression measurement. Language and facial expression are important carriers of human communication: they not only express a person's affective state, cognitive activity and personality traits, but the behavioural information they carry is also closely associated with other factors such as affective state, mental state and health. Facial emotion recognition is an important component of human-computer interaction and affective computing research, involving fields such as psychology, sociology, anthropology, life science, cognitive science and computer science, and is of great significance to intelligent and harmonious human-computer interaction.
Emotion recognition is also of great significance in the security field: it can screen for potential safety hazards or for persons who are in danger. Compared with face recognition alone, its screening capability makes it possible to discover situations in which an incident may occur more quickly and efficiently. However, emotion recognition by itself has a defect: it can only screen; it cannot qualitatively and deterministically obtain the identity information of a person.
Summary of the invention
In order to solve the above problems in the prior art, the object of the present invention is to provide a big data security management platform based on emotion recognition.
The technical solution adopted by the invention is as follows: a big data security management platform based on emotion recognition, comprising:
an emotion recognition subsystem, which extracts feature information from the faces of persons in a monitored area, collects voice information from the monitored area, performs emotion analysis on the collected facial feature information and voice information, and obtains an emotion value;
a capture subsystem, which captures the facial image whose emotion value exceeds a warning value, records the capture time and location, and identifies the identity of the captured face using face recognition technology, forming capture data; the capture data include the captured facial image information, the identified facial identity information, the capture time information and the capture location information;
a server, which detects whether capture data transmitted by the capture subsystem are present and, if so, stores the capture data and generates an alarm signal; and
a user terminal, which receives the capture data and the alarm signal and promptly displays them to the user.
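For illustration only, the following Python sketch models one monitoring cycle of the control flow described above; the function names, the 0-to-1 emotion scale and the warning value of 0.8 are assumptions and are not part of the disclosure:

```python
from datetime import datetime, timezone

WARNING_VALUE = 0.8        # assumed threshold on a 0-to-1 emotion scale
server_storage = []        # stands in for the server's capture-data store


def run_monitoring_cycle(emotion_value, face_image, identity, location):
    """One cycle: emotion analysis -> capture -> server storage -> alarm."""
    if emotion_value <= WARNING_VALUE:
        return None                                  # below the warning value: no capture
    capture_data = {                                 # formed by the capture subsystem
        "face_image": face_image,                    # captured high-definition image
        "identity": identity,                        # identity from face recognition
        "time": datetime.now(timezone.utc).isoformat(),
        "location": location,
    }
    server_storage.append(capture_data)              # server stores the capture data
    alarm = {"type": "emotion_alarm", "capture": capture_data}
    print("User terminal reminder:", alarm["type"], "at", location)
    return alarm


alarm = run_monitoring_cycle(0.93, b"<jpeg bytes>", "unknown-person-17", "Gate 3 camera")
```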
Optionally, the emotion recognition subsystem comprises:
a facial emotion recognition device, which performs emotion analysis on the facial feature information and obtains a facial emotion value;
a voice emotion recognition device, which performs emotion analysis on the voice information and obtains a voice emotion value; and
a comparison module, which compares the obtained facial emotion value and voice emotion value with a preset facial emotion threshold and a preset voice emotion threshold, respectively; if the facial emotion value is greater than the preset facial emotion threshold and/or the voice emotion value is greater than the preset voice emotion threshold, the comparison module sends a signal to the capture subsystem.
Optionally, the facial emotion recognition device comprises:
a first camera module, which collects facial expression images in the monitored area;
an image pre-processing module, which crops the collected facial expression image to remove hair, background and contour regions, and then performs size normalization and grayscale normalization to obtain a pure face image;
a facial feature extraction module, which extracts key feature points relevant to the expression from the pure face image (the key feature points include the eyebrows, eyes, lips and chin), grades the intensity of the key feature points, and generates an expression feature image; and
an expression emotion judgment module, which compares the generated expression feature image against standard expression images in a database in order to identify the emotion value of the generated expression feature image, i.e. the facial emotion value; the standard expression images are stored in the database by class, each class corresponds to a different emotion value, and the closer a standard expression image is to the alarm condition, the larger the emotion value it represents.
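As a non-limiting sketch of the expression emotion judgment step, the Python fragment below matches an expression feature vector against a small class-labelled template database and returns the emotion value of the nearest class; the feature representation, distance measure and numeric values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy database: emotion class -> (standard expression templates, emotion value).
# The 16-dimensional vectors stand in for expression feature images built from key points.
STANDARD_EXPRESSIONS = {
    "terror":            (rng.random((5, 16)), 0.95),
    "panic":             (rng.random((5, 16)), 0.90),
    "extreme_agitation": (rng.random((5, 16)), 0.85),
    "neutral":           (rng.random((5, 16)), 0.10),
}


def facial_emotion_value(expression_features):
    """Return (class, emotion value) of the nearest standard expression class."""
    best_class, best_dist = None, np.inf
    for name, (templates, _) in STANDARD_EXPRESSIONS.items():
        dist = np.min(np.linalg.norm(templates - expression_features, axis=1))
        if dist < best_dist:
            best_class, best_dist = name, dist
    return best_class, STANDARD_EXPRESSIONS[best_class][1]


emotion_class, value = facial_emotion_value(rng.random(16))
```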
Optionally, the voice emotion recognition device comprises:
a voice acquisition module, which collects the mixed audio stream data produced by multiple sound sources in the monitored area;
a voice separation module, which separates the mixed audio stream data into independent audio stream data corresponding to each sound source;
an audio feature vector extraction module, which extracts the audio feature vector of a speech segment in the independent audio stream data, where a speech segment corresponds to one utterance in the independent audio stream data;
an emotion matching module, which matches the extracted audio feature vector against multiple emotion feature models, where the multiple emotion feature models respectively correspond to multiple voice emotion classes, each voice emotion class corresponds to a different emotion value, and the closer an emotion feature model is to the alarm condition, the larger the emotion value it represents; and
a voice emotion identification module, which takes the voice emotion class corresponding to the matched emotion feature model as the emotion class of the speech segment, the emotion value corresponding to that class being the voice emotion value.
The audio feature vector includes one or more of the following audio features: energy features, pitch frequency features, formant features and Mel-frequency cepstral coefficient (MFCC) features.
Optionally, the multiple emotion feature models are established by pre-learning from the respective audio feature vectors of multiple preset speech segments that carry emotion class labels corresponding to the multiple voice emotion classes.
Optionally, the pre-learning process includes: clustering the respective audio feature vectors of the multiple preset speech segments that carry emotion class labels to obtain a clustering result for the preset emotion classes; and, according to the clustering result, training the audio feature vectors of the preset speech segments within each cluster into the corresponding emotion feature model.
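One plausible reading of this pre-learning step is sketched below in Python: the labelled audio feature vectors are grouped into one cluster per preset emotion class, and a Gaussian mixture model (an assumption; the model type is not fixed at this point in the disclosure) is trained on each cluster:

```python
import numpy as np
from sklearn.mixture import GaussianMixture


def pretrain_emotion_models(feature_vectors, labels, n_components=2, seed=0):
    """Group labelled audio feature vectors by emotion class (one cluster per
    preset class) and fit one Gaussian mixture model per cluster."""
    feature_vectors = np.asarray(feature_vectors, dtype=float)
    labels = np.asarray(labels)
    models = {}
    for emotion in np.unique(labels):
        cluster = feature_vectors[labels == emotion]          # vectors of this class
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag", random_state=seed)
        models[emotion] = gmm.fit(cluster)
    return models


# Toy usage: 40 random 20-dimensional feature vectors for two preset classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 20))
y = ["panic"] * 20 + ["neutral"] * 20
emotion_models = pretrain_emotion_models(X, y)
```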
Optionally, the capture subsystem comprises:
a setting unit, which sets the facial feature information whose emotion value exceeds the warning value as the capture target;
a second camera module, which collects image information of the current monitored area and generates a real-time image information stream;
a picture processing module, which evaluates the real-time image information stream against the capture target and, when the real-time image information stream matches the capture target, generates high-definition picture information of the capture target, i.e. the facial image whose emotion value exceeds the warning value;
a face recognition module, which extracts the facial features from the high-definition picture information and compares them with facial features of known identities to obtain the identity information of the capture target, i.e. the facial identity information (an illustrative matching sketch follows this list);
a timing module, which records the time at which the high-definition picture information was generated, i.e. the capture time information;
a positioning module, which provides the location information of the capture target, i.e. the capture location information; and
a data transmission module, which forms the capture data from the high-definition picture information generated by the picture processing module, the facial identity information obtained by the face recognition module, the capture time information provided by the timing module and the capture location information provided by the positioning module, and uploads the capture data to the server.
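A minimal sketch of the face recognition step, assuming the facial features are fixed-length embeddings compared by cosine similarity against a gallery of known identities; the gallery contents, embedding size and similarity threshold are placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder gallery: identity -> 128-dimensional face embedding.
KNOWN_FACES = {"person_001": rng.random(128), "person_002": rng.random(128)}


def identify_face(embedding, threshold=0.6):
    """Return the gallery identity with the highest cosine similarity,
    or 'unknown' if nothing exceeds the assumed similarity threshold."""
    best_name, best_sim = "unknown", threshold
    for name, reference in KNOWN_FACES.items():
        sim = float(np.dot(embedding, reference) /
                    (np.linalg.norm(embedding) * np.linalg.norm(reference)))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name, best_sim


identity, similarity = identify_face(rng.random(128))
```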
Optionally, the picture processing module comprises:
a judgment module, which judges whether the real-time image information stream matches the capture target; and
a capture module, which, when the real-time image information stream matches the capture target, performs the capture and generates the high-definition picture information of the capture target.
The beneficial effects of the invention are as follows: the emotion recognition subsystem extracts feature information from the faces of persons in the monitored area, collects voice information from the monitored area, performs emotion analysis on the collected facial feature information and voice information, and obtains an emotion value; when the emotion value exceeds the warning value, the capture subsystem starts operating. The capture subsystem captures the facial image whose emotion value exceeds the warning value, records the capture time and location, identifies the identity of the captured face using face recognition technology, and forms capture data that are uploaded to the server. The server detects whether capture data have been transmitted and, if so, stores the capture data, generates an alarm signal and sends it to the user terminal. The user terminal receives the capture data and the alarm signal and promptly displays them to the user. By combining emotion recognition technology with face recognition technology, the invention uses emotion recognition to discover potential security threats in time and determines the identity information associated with those threats, which is of great significance to the security field.
Specific embodiment
The present invention is further explained below with reference to a specific embodiment.
Embodiment
This embodiment provides a big data security management platform based on emotion recognition, comprising:
an emotion recognition subsystem, which extracts feature information from the faces of persons in a monitored area, collects voice information from the monitored area, performs emotion analysis on the collected facial feature information and voice information, and obtains an emotion value; if the emotion value exceeds a warning value, the capture subsystem starts operating;
a capture subsystem, which captures the facial image whose emotion value exceeds the warning value, records the capture time and location, identifies the identity of the captured face using face recognition technology, and forms capture data that are uploaded to the server; the capture data include the captured facial image information, the identified facial identity information, the capture time information and the capture location information;
a server, which detects whether capture data transmitted by the capture subsystem are present and, if so, stores the capture data, generates an alarm signal and sends it to the user terminal; and
a user terminal, which receives the capture data and the alarm signal and promptly displays them to the user.
Optionally, the emotion recognition subsystem comprises:
a facial emotion recognition device, which performs emotion analysis on the facial feature information and obtains a facial emotion value;
a voice emotion recognition device, which performs emotion analysis on the voice information and obtains a voice emotion value; and
a comparison module, which compares the obtained facial emotion value and voice emotion value with a preset facial emotion threshold and a preset voice emotion threshold, respectively; if the facial emotion value is greater than the preset facial emotion threshold and/or the voice emotion value is greater than the preset voice emotion threshold, the comparison module sends a signal to the capture subsystem.
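For illustration, the comparison module's decision can be written as a single predicate; the threshold values below are assumptions:

```python
def should_signal_capture(face_value, voice_value,
                          face_threshold=0.8, voice_threshold=0.8):
    """Signal the capture subsystem if the facial emotion value and/or the
    voice emotion value exceeds its preset threshold."""
    return face_value > face_threshold or voice_value > voice_threshold


assert should_signal_capture(0.9, 0.1)        # facial emotion alone triggers capture
assert not should_signal_capture(0.2, 0.3)    # both values below their thresholds
```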
Optionally, the facial emotion recognition device comprises:
a first camera module, which collects facial expression images in the monitored area;
an image pre-processing module, which crops the collected facial expression image to remove hair, background and contour regions, and then performs size normalization and grayscale normalization to obtain a pure face image;
a facial feature extraction module, which extracts key feature points relevant to the expression from the pure face image (the key feature points include the eyebrows, eyes, lips and chin), grades the intensity of the key feature points, and generates an expression feature image; and
an expression emotion judgment module, which compares the generated expression feature image against standard expression images in a database in order to identify the emotion value of the generated expression feature image, i.e. the facial emotion value; the standard expression images are stored in the database by class, each class corresponds to a different emotion value, and the closer a standard expression image is to the alarm condition, the larger the emotion value it represents.
The standard expression images may show expressions such as terror, panic, ferocity and extreme agitation; these expression data are collected from victims of emergencies such as traffic accidents and personal-safety incidents (for example robbery, murder or hostage-taking), or from perpetrators at the moment a crime was committed.
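A minimal pre-processing sketch using OpenCV is given below; the Haar detector, the output size and histogram equalisation are assumptions standing in for the cropping, size normalization and grayscale normalization described above:

```python
import cv2

# Haar face detector shipped with OpenCV.
_FACE_DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def preprocess_face(bgr_image, size=(96, 96)):
    """Crop the detected face region (discarding hair/background as far as the
    detector box allows), then apply size and grayscale normalisation."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    faces = _FACE_DETECTOR.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                   # no face found in this frame
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep the largest face
    face = gray[y:y + h, x:x + w]
    face = cv2.resize(face, size)                     # size normalisation
    face = cv2.equalizeHist(face)                     # grayscale normalisation
    return face
```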
Optionally, the voice emotion recognition device comprises:
a voice acquisition module, which collects the mixed audio stream data produced by multiple sound sources in the monitored area;
a voice separation module, which separates the mixed audio stream data into independent audio stream data corresponding to each sound source;
an audio feature vector extraction module, which extracts the audio feature vector of a speech segment in the independent audio stream data, where a speech segment corresponds to one utterance in the independent audio stream data;
an emotion matching module, which matches the extracted audio feature vector against multiple emotion feature models, where the multiple emotion feature models respectively correspond to multiple voice emotion classes, each voice emotion class corresponds to a different emotion value, and the closer an emotion feature model is to the alarm condition, the larger the emotion value it represents; and
a voice emotion identification module, which takes the voice emotion class corresponding to the matched emotion feature model as the emotion class of the speech segment, the emotion value corresponding to that class being the voice emotion value.
The audio feature vector includes one or more of the following audio features: energy features, pitch frequency features, formant features and Mel-frequency cepstral coefficient (MFCC) features; the multiple voice emotion classes include emotions such as terror, panic, ferocity and extreme agitation.
More specifically, the energy features include the first-order difference of the short-time energy and/or the energy below a preset frequency; the pitch frequency features include the pitch frequency and/or the first-order difference of the pitch frequency; the formant features include one or more of the following: the first formant, the second formant, the third formant, the first-order difference of the first formant, the first-order difference of the second formant, and the first-order difference of the third formant; and the MFCC features include the 1st- to 12th-order Mel-frequency cepstral coefficients and/or the first-order differences of the 1st- to 12th-order Mel-frequency cepstral coefficients.
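The following sketch extracts a comparable but narrower feature vector with librosa: short-time energy and its first-order difference, pitch statistics, and 12 MFCCs with their first-order differences. Formant features are omitted, and the sampling rate and pitch range are assumptions:

```python
import numpy as np
import librosa


def audio_feature_vector(path, sr=16000):
    """Per-segment feature vector: RMS energy and its delta, pitch (F0)
    statistics, and 12 MFCCs with their first-order differences."""
    y, sr = librosa.load(path, sr=sr)
    rms = librosa.feature.rms(y=y)[0]                       # short-time energy
    rms_delta = np.diff(rms, prepend=rms[0])                # its first-order difference
    f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                            fmax=librosa.note_to_hz("C7"), sr=sr)
    f0 = f0[~np.isnan(f0)]                                  # keep voiced frames only
    f0_mean = f0.mean() if f0.size else 0.0
    f0_delta = np.diff(f0).mean() if f0.size > 1 else 0.0
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=12)      # 1st- to 12th-order MFCCs
    mfcc_delta = librosa.feature.delta(mfcc)                # their first-order differences
    return np.concatenate([
        [rms.mean(), rms_delta.mean(), f0_mean, f0_delta],
        mfcc.mean(axis=1), mfcc_delta.mean(axis=1),
    ])
```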
Optionally, the multiple emotion feature models are established by pre-learning from the respective audio feature vectors of multiple preset speech segments that carry emotion class labels corresponding to the multiple voice emotion classes.
Optionally, the pre-learning process includes: clustering the respective audio feature vectors of the multiple preset speech segments that carry emotion class labels to obtain a clustering result for the preset emotion classes; and, according to the clustering result, training the audio feature vectors of the preset speech segments within each cluster into the corresponding emotion feature model.
Specifically, when the emotion feature models are Gaussian mixture models, the matching process of the emotion matching module is as follows: compute the likelihood of the audio feature vector of the speech segment under each of the multiple emotion feature models. The identification process of the voice emotion identification module is as follows: take the emotion class corresponding to the emotion feature model whose likelihood is greater than a preset threshold and is the largest among the models as the emotion class of the speech segment.
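A sketch of this Gaussian-mixture matching and identification step, using scikit-learn GMMs such as those produced by the pre-learning sketch earlier; the log-likelihood threshold is an assumption:

```python
import numpy as np
from sklearn.mixture import GaussianMixture


def match_voice_emotion(feature_vector, emotion_models, min_log_likelihood=-50.0):
    """Score the feature vector under every per-class GMM and return the class
    with the highest likelihood, provided it exceeds the preset threshold."""
    x = np.asarray(feature_vector, dtype=float).reshape(1, -1)
    scores = {emotion: float(gmm.score(x))        # mean log-likelihood per model
              for emotion, gmm in emotion_models.items()}
    best = max(scores, key=scores.get)
    if scores[best] < min_log_likelihood:
        return None, scores                        # no model matches well enough
    return best, scores


# Example usage with models shaped like those from the pre-learning sketch:
# emotion, scores = match_voice_emotion(feature_vector, emotion_models)
```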
Optionally, the capture subsystem comprises:
a setting unit, which sets the facial feature information whose emotion value exceeds the warning value as the capture target;
a second camera module, which collects image information of the current monitored area and generates a real-time image information stream;
a picture processing module, which evaluates the real-time image information stream against the capture target and, when the real-time image information stream matches the capture target, generates high-definition picture information of the capture target, i.e. the facial image whose emotion value exceeds the warning value;
a face recognition module, which extracts the facial features from the high-definition picture information and compares them with facial features of known identities to obtain the identity information of the capture target, i.e. the facial identity information;
a timing module, which records the time at which the high-definition picture information was generated, i.e. the capture time information;
a positioning module, which provides the location information of the capture target, i.e. the capture location information; and
a data transmission module, which forms the capture data from the high-definition picture information generated by the picture processing module, the facial identity information obtained by the face recognition module, the capture time information provided by the timing module and the capture location information provided by the positioning module, and uploads the capture data to the server.
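For illustration, the data transmission module could bundle and upload the capture data as follows; the server URL, payload field names and transport (HTTP POST with a JSON body) are assumptions rather than parts of the disclosure:

```python
import base64
from datetime import datetime, timezone

import requests  # third-party HTTP client, used only to illustrate the upload

SERVER_URL = "https://security-server.example/api/captures"  # hypothetical endpoint


def upload_capture(image_bytes, identity, location):
    """Bundle the high-definition picture, recognised identity, capture time
    and capture location into one record and upload it to the server."""
    capture_data = {
        "face_image_b64": base64.b64encode(image_bytes).decode("ascii"),
        "identity": identity,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "location": location,              # e.g. camera ID or GPS coordinates
    }
    response = requests.post(SERVER_URL, json=capture_data, timeout=10)
    response.raise_for_status()            # server stores the data and raises the alarm
    return capture_data
```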
Optionally, the picture processing module comprises:
a judgment module, which judges whether the real-time image information stream matches the capture target; and
a capture module, which, when the real-time image information stream matches the capture target, performs the capture and generates the high-definition picture information of the capture target.
The present invention is not limited to the above optional embodiments; under the inspiration of the present invention, anyone may derive products in various other forms. The above specific embodiments should not be understood as limiting the scope of protection of the present invention; the scope of protection of the present invention shall be defined by the claims, and the description may be used to interpret the claims.

Claims (8)

1. A big data security management platform based on emotion recognition, characterized by comprising:
an emotion recognition subsystem, which extracts feature information from the faces of persons in a monitored area, collects voice information from the monitored area, performs emotion analysis on the collected facial feature information and voice information, and obtains an emotion value;
a capture subsystem, which captures the facial image whose emotion value exceeds a warning value, records the capture time and location, and identifies the identity of the captured face using face recognition technology, forming capture data, the capture data comprising the captured facial image information, the identified facial identity information, the capture time information and the capture location information;
a server, which detects whether capture data transmitted by the capture subsystem are present and, if so, stores the capture data and generates an alarm signal; and
a user terminal, which receives the capture data and the alarm signal and promptly displays them to the user.
2. The big data security management platform based on emotion recognition according to claim 1, characterized in that the emotion recognition subsystem comprises:
a facial emotion recognition device, which performs emotion analysis on the facial feature information and obtains a facial emotion value;
a voice emotion recognition device, which performs emotion analysis on the voice information and obtains a voice emotion value; and
a comparison module, which compares the obtained facial emotion value and voice emotion value with a preset facial emotion threshold and a preset voice emotion threshold, respectively, and sends a signal to the capture subsystem if the facial emotion value is greater than the preset facial emotion threshold and/or the voice emotion value is greater than the preset voice emotion threshold.
3. The big data security management platform based on emotion recognition according to claim 2, characterized in that the facial emotion recognition device comprises:
a first camera module, which collects facial expression images in the monitored area;
an image pre-processing module, which crops the collected facial expression image to remove hair, background and contour regions, and then performs size normalization and grayscale normalization on the facial expression image to obtain a pure face image;
a facial feature extraction module, which extracts key feature points relevant to the expression from the obtained pure face image, the key feature points including the eyebrows, eyes, lips and chin, grades the intensity of the key feature points, and generates an expression feature image; and
an expression emotion judgment module, which compares the generated expression feature image against standard expression images in a database in order to identify the emotion value of the generated expression feature image, i.e. the facial emotion value; wherein the standard expression images stored in the database are stored by class, each class corresponds to a different emotion value, and the closer a standard expression image is to the alarm condition, the larger the emotion value it represents.
4. The big data security management platform based on emotion recognition according to claim 2, characterized in that the voice emotion recognition device comprises:
a voice acquisition module, which collects the mixed audio stream data produced by multiple sound sources in the monitored area;
a voice separation module, which separates the mixed audio stream data into independent audio stream data corresponding to each sound source;
an audio feature vector extraction module, which extracts the audio feature vector of a speech segment in the independent audio stream data, wherein a speech segment corresponds to one utterance in the independent audio stream data;
an emotion matching module, which matches the extracted audio feature vector against multiple emotion feature models, wherein the multiple emotion feature models respectively correspond to multiple voice emotion classes, each voice emotion class corresponds to a different emotion value, and the closer an emotion feature model is to the alarm condition, the larger the emotion value it represents; and
a voice emotion identification module, which takes the voice emotion class corresponding to the matched emotion feature model as the emotion class of the speech segment, the emotion value corresponding to that class being the voice emotion value.
5. The big data security management platform based on emotion recognition according to claim 4, characterized in that the multiple emotion feature models are established by pre-learning from the respective audio feature vectors of multiple preset speech segments that carry emotion class labels corresponding to the multiple voice emotion classes.
6. The big data security management platform based on emotion recognition according to claim 5, characterized in that the pre-learning process comprises: clustering the respective audio feature vectors of the multiple preset speech segments that carry emotion class labels to obtain a clustering result for the preset emotion classes; and, according to the clustering result, training the audio feature vectors of the preset speech segments within each cluster into the corresponding emotion feature model.
7. The big data security management platform based on emotion recognition according to claim 1, characterized in that the capture subsystem comprises:
a setting unit, which sets the facial feature information whose emotion value exceeds the warning value as the capture target;
a second camera module, which collects image information of the current monitored area and generates a real-time image information stream;
a picture processing module, which evaluates the real-time image information stream against the capture target and, when the real-time image information stream matches the capture target, generates high-definition picture information of the capture target, i.e. the facial image whose emotion value exceeds the warning value;
a face recognition module, which extracts the facial features from the high-definition picture information and compares them with facial features of known identities to obtain the identity information of the capture target, i.e. the facial identity information;
a timing module, which records the time at which the high-definition picture information was generated, i.e. the capture time information;
a positioning module, which provides the location information of the capture target, i.e. the capture location information; and
a data transmission module, which forms the capture data from the high-definition picture information generated by the picture processing module, the facial identity information obtained by the face recognition module, the capture time information provided by the timing module and the capture location information provided by the positioning module, and uploads the capture data to the server.
8. The big data security management platform based on emotion recognition according to claim 7, characterized in that the picture processing module comprises:
a judgment module, which judges whether the real-time image information stream matches the capture target; and
a capture module, which, when the real-time image information stream matches the capture target, performs the capture and generates the high-definition picture information of the capture target.
CN201811291285.3A 2018-10-31 2018-10-31 Big data security management platform based on emotion recognition Pending CN109460728A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811291285.3A CN109460728A (en) 2018-10-31 2018-10-31 Big data security management platform based on emotion recognition

Publications (1)

Publication Number Publication Date
CN109460728A true CN109460728A (en) 2019-03-12

Family

ID=65609002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811291285.3A Pending CN109460728A (en) 2018-10-31 2018-10-31 A kind of big data safeguard management platform based on Emotion identification

Country Status (1)

Country Link
CN (1) CN109460728A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140043983A (en) * 2012-10-04 2014-04-14 지봉필 Prevention method for dry eye syndrome utilizing camera
CN107545699A (en) * 2016-10-31 2018-01-05 郑州蓝视科技有限公司 Intelligent campus security system
CN107729882A (en) * 2017-11-19 2018-02-23 济源维恩科技开发有限公司 Emotion recognition judgment method based on image recognition
CN108122552A (en) * 2017-12-15 2018-06-05 上海智臻智能网络科技股份有限公司 Speech emotion recognition method and device
CN108427916A (en) * 2018-02-11 2018-08-21 上海复旦通讯股份有限公司 Monitoring system and monitoring method for customer-service agent emotion

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112037820B (en) * 2019-05-16 2023-09-05 杭州海康威视数字技术股份有限公司 Security alarm method, device, system and equipment
CN112037820A (en) * 2019-05-16 2020-12-04 杭州海康威视数字技术股份有限公司 Security alarm method, device, system and equipment
CN110222623A (en) * 2019-05-31 2019-09-10 深圳市恩钛控股有限公司 Micro-expression analysis method and system
CN110206330B (en) * 2019-06-10 2020-03-03 广东叠一网络科技有限公司 Campus floor intelligent protection system based on big data
CN110206330A (en) * 2019-06-10 2019-09-06 邱鑫梅 Campus floor intelligent protection system based on big data
CN112102850B (en) * 2019-06-18 2023-06-20 杭州海康威视数字技术股份有限公司 Emotion recognition processing method and device, medium and electronic equipment
CN112102850A (en) * 2019-06-18 2020-12-18 杭州海康威视数字技术股份有限公司 Processing method, device and medium for emotion recognition and electronic equipment
CN111401198A (en) * 2020-03-10 2020-07-10 广东九联科技股份有限公司 Audience emotion recognition method, device and system
CN111401198B (en) * 2020-03-10 2024-04-23 广东九联科技股份有限公司 Audience emotion recognition method, device and system
CN111767367A (en) * 2020-05-13 2020-10-13 上海光数信息科技有限公司 Method and system for tracking student moods and extracting emotional features
CN113269032A (en) * 2021-04-12 2021-08-17 北京华毅东方展览有限公司 Exhibition early warning method and system for exhibition hall
CN113538810A (en) * 2021-07-16 2021-10-22 中国工商银行股份有限公司 Security method, security system and automatic teller machine equipment
WO2023137995A1 (en) * 2022-01-24 2023-07-27 中国第一汽车股份有限公司 Monitoring method for preventing scratching and theft of vehicle body, and vehicle body controller and vehicle
CN115174620A (en) * 2022-07-01 2022-10-11 北京博数嘉科技有限公司 Intelligent tourism comprehensive service system and method

Similar Documents

Publication Publication Date Title
CN109460728A (en) Big data security management platform based on emotion recognition
Zhang Expression-EEG based collaborative multimodal emotion recognition using deep autoencoder
Wang et al. Channel selection method for EEG emotion recognition using normalized mutual information
Li et al. Single-channel EEG-based mental fatigue detection based on deep belief network
CN110811649A (en) Fatigue driving detection method based on bioelectricity and behavior characteristic fusion
Acharya et al. A long short term memory deep learning network for the classification of negative emotions using EEG signals
CN108427916A (en) A kind of monitoring system and monitoring method of mood of attending a banquet for customer service
WO2018151628A1 (en) Algorithm for complex remote non-contact multichannel analysis of a psycho-emotional and physiological condition of a subject from audio and video content
CN107669266A (en) A kind of animal brain electricity analytical system
CN109199412A (en) Abnormal emotion recognition methods based on eye movement data analysis
CN111000556A (en) Emotion recognition method based on deep fuzzy forest
CN109222966A (en) A kind of EEG signals sensibility classification method based on variation self-encoding encoder
Wickramasuriya et al. Online and offline anger detection via electromyography analysis
CN115691762A (en) Autism child safety monitoring system and method based on image recognition
Ooi et al. Prediction of clinical depression in adolescents using facial image analaysis
Li et al. Multi-modal emotion recognition based on deep learning of EEG and audio signals
KR102285482B1 (en) Method and apparatus for providing content based on machine learning analysis of biometric information
Kaur et al. Impact of ageing on EEG based biometric systems
Sulthan et al. Emotion recognition using brain signals
KR102528595B1 (en) Method and apparatus for supporting user's learning concentration based on analysis of user's voice
Alam et al. GeSmart: A gestural activity recognition model for predicting behavioral health
CN111723869A (en) Special personnel-oriented intelligent behavior risk early warning method and system
Patil et al. Goal-oriented auditory scene recognition
Jęśko Vocalization Recognition of People with Profound Intellectual and Multiple Disabilities (PIMD) Using Machine Learning Algorithms
Tapia et al. Learning to predict fitness for duty using near infrared periocular iris images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190312

RJ01 Rejection of invention patent application after publication