WO2020168468A1 - Method and apparatus for calling for help based on facial expression recognition, electronic device, and storage medium - Google Patents

Method and apparatus for calling for help based on facial expression recognition, electronic device, and storage medium

Info

Publication number
WO2020168468A1
Authority
WO
WIPO (PCT)
Prior art keywords
preset
user
face image
motion
face
Prior art date
Application number
PCT/CN2019/075482
Other languages
English (en)
French (fr)
Inventor
王林
Original Assignee
深圳市汇顶科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市汇顶科技股份有限公司
Priority application: CN201980000283.0A (published as CN110121715A)
Priority application: PCT/CN2019/075482 (published as WO2020168468A1)
Publication: WO2020168468A1

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • G06V40/174: Facial expression recognition
    • G06V40/176: Dynamic expression
    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00: Individual registration on entry or exit
    • G07C9/00174: Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
    • G07C9/00563: Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys using personal physical data of the operator, e.g. finger prints, retinal images, voice patterns

Definitions

  • This application relates to the field of information processing technology, and in particular to a method, device, electronic equipment and storage medium for calling for help based on facial expression recognition.
  • identity verification methods based on 3D face recognition technology are widely used in various smart terminals, such as mobile phones, computers, electronic door locks, and so on.
  • the smart terminal verifies the user's identity by collecting facial features.
  • the present invention provides a method, device, electronic equipment and storage medium for calling for help based on facial expression recognition, which can analyze the user's emotion from changes in the user's facial expression and automatically generate a distress signal when an emotional abnormality is detected, thereby realizing a covert call for help and ensuring the user's personal and property safety.
  • an embodiment of the present invention provides a method for calling for help based on facial expression recognition, including:
  • collecting the user's face image sequence includes:
  • the preset requirement means that the face image contains a complete facial area and the sharpness of the face image is greater than a preset threshold;
  • extracting motion features from the face image sequence includes:
  • determining whether the motion feature belongs to the target category according to the mapping relationship between the motion feature and the emotion category includes:
  • the current emotional category includes: happiness, calm, anger, fear, pain, and sadness.
  • the target categories include: fear, pain, and preset expressions; the preset expressions are custom expressions entered by the user in advance.
  • sending a preset distress signal to the preset platform includes:
  • the preset platform includes: a community security platform, a public security bureau alarm platform.
  • an embodiment of the present invention provides a distress call device based on facial expression recognition, including:
  • the collection module is used to collect the user's face image sequence
  • a determining module configured to determine whether the motion feature belongs to the target category according to the mapping relationship between the motion feature and the emotion category
  • the sending module is used to send a preset distress signal to the preset platform when the motion feature belongs to the target category.
  • the acquisition module is specifically used for:
  • the preset requirement means that the face image contains a complete facial area and the sharpness of the face image is greater than a preset threshold;
  • the extraction module is specifically used for:
  • the determining module is specifically used for:
  • the current emotional category includes: happiness, calm, anger, fear, pain, and sadness.
  • the target categories include: fear, pain, and preset expressions; the preset expressions are custom expressions entered by the user in advance.
  • the sending module is specifically used for:
  • the preset platform includes: a community security platform, a public security bureau alarm platform.
  • an embodiment of the present invention provides an electronic device, including: an image collector, a processor, and a memory; the memory stores an algorithm program, the image collector is used to collect the user's face image, and the processor is used to retrieve the algorithm program from the memory and execute the method for calling for help based on facial expression recognition as described in any one of the first aspect.
  • an embodiment of the present invention provides an access control system, including: an image collector, a processor, a memory, a door lock, and a communication device; the memory stores an algorithm program, the image collector is used to collect the user's face image, and the processor is configured to retrieve the algorithm program from the memory and execute the method for calling for help based on facial expression recognition as described in any one of the first aspect; wherein:
  • if the motion feature belongs to the target category, the door lock is controlled to delay opening or refuse to open, and a preset distress signal is sent to the preset platform through the communication device.
  • an embodiment of the present invention provides a computer-readable storage medium including program instructions which, when run on a computer, cause the computer to execute them so as to implement the method for calling for help based on facial expression recognition described in any one of the first aspect.
  • the method, device, equipment, and storage medium for calling for help based on facial expression recognition collect a user's face image sequence; extract motion features from the face image sequence; determine, according to the mapping relationship between the motion features and emotion categories, whether the motion features belong to a target category; and, if so, send a preset distress signal to a preset platform.
  • the user's emotion can be analyzed from changes in the user's expression, and when an emotional abnormality is detected, a distress signal is automatically generated, realizing a covert call for help and ensuring the user's personal and property safety.
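The four claimed steps above can be sketched as a small pipeline. This is a minimal illustration, not the patented implementation; the function parameters (`collect`, `extract`, `classify`, `is_target`, `send_sos`) are hypothetical placeholders for the four steps.

```python
def expression_based_sos(collect, extract, classify, is_target, send_sos):
    """End-to-end flow of the claimed method: collect a face image
    sequence, extract motion features, map them to an emotion category,
    and send the preset distress signal if it is a target category."""
    sequence = collect()            # user's face image sequence
    features = extract(sequence)    # motion features per motion unit
    category = classify(features)   # emotion category via the mapping
    if is_target(category):         # e.g. fear, pain, preset expression
        send_sos()                  # covert call for help
        return True
    return False
```

With stub callables this runs end to end; on a real device `collect` would read camera frames and `send_sos` would contact the preset platform.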
  • Figure 1 is a schematic diagram of the principle of an application scenario of the present invention
  • FIG. 2 is a flowchart of a method for calling for help based on facial expression recognition according to Embodiment 1 of the present invention
  • FIG. 3 is a schematic structural diagram of a distress call device based on facial expression recognition according to Embodiment 2 of the present invention
  • FIG. 4 is a schematic structural diagram of an electronic device according to Embodiment 3 of the present invention.
  • the face area can be divided into a number of AUs (Action Units) that are mutually independent yet interconnected.
  • Professor Ekman analyzed the motion characteristics of these action units, the main areas they control, and the expressions related to them, and in 1976 proposed the Facial Action Coding System (FACS).
  • the present invention provides a method for calling for help based on facial expression recognition, which can recognize the facial micro-expressions of the person being identified and then determine that person's emotional state. If fear, pain, or a specific micro-expression is recognized on the face, an SOS emergency call for help is made, secretly sending out the message that the victim is being hijacked or coerced.
  • the method of the present invention has the following characteristics: 1) it uses the facial action coding system to recognize the emotion on a human face without the criminal's knowledge, and automatically activates an SOS call for help when a pained or fearful expression is detected; 2) a specific micro-expression can be set to trigger the SOS call for help. The method can be applied to face recognition terminal devices, mobile phones, tablets, and consumer electronics such as door locks.
  • FIG. 1 is a schematic diagram of the principle of an application scenario of the present invention.
  • the distress call device 10 based on facial expression recognition includes: a collection module, an extraction module, a determination module, and a sending module.
  • the collection module of the distress device 10 is used to collect the user's face image sequence; the extraction module is used to extract motion features from the face image sequence; the determining module is used to determine, according to the mapping relationship between motion features and emotion categories, whether the motion features belong to the target category; and the sending module is used to send a preset distress signal to the preset platform 20 when the user's current emotion category belongs to the target category.
  • the preset platform 20 can simultaneously receive distress signals from multiple distress devices 10 based on facial expression recognition.
  • the application of the above method can analyze the user's emotion according to the change of the user's facial expression, and automatically generate a distress signal when the user's emotional abnormality is found, so as to realize a secret call for help and ensure the user's personal and property safety.
  • Fig. 2 is a flowchart of a method for calling for help based on facial expression recognition according to Embodiment 1 of the present invention. As shown in Fig. 2, the method in this embodiment may include:
  • S101: Collect the user's face image sequence.
  • at least one camera is used to collect the user's face image. It is determined whether the face image meets the preset requirement: if so, the image is saved; if not, the user's face image is re-collected. The preset requirement means that the face image contains a complete facial area and its sharpness is greater than a preset threshold. It is then judged whether the number of face images has reached the preset number; if so, the preset number of face images are arranged in shooting-time order to form the user's face image sequence; if not, face images are re-collected until the preset number has been gathered.
  • At least one camera is used to collect the user's face image
  • for example, two cameras are used, one a visible-light camera and the other a near-infrared camera that filters out visible light.
  • the two cameras photograph the user's face at the same time or within a preset time difference. It is then verified that the face image contains a complete facial area and that its sharpness is greater than the preset threshold. If these requirements are not met, the camera refocuses, widens or narrows the shooting range, and photographs again to re-collect the user's face image. Next, it is judged whether the number of face images has reached the preset number: if so, the preset number of face images are arranged in shooting-time order to form the user's face image sequence; if not, face images are re-collected until the preset number has been gathered.
  • the collection of a face image sequence is the basis of the present invention, and the collected sequence must meet both image quality and image quantity requirements. In this process, it is necessary to verify that the object being captured is a living body rather than an imitation, such as a static photo.
  • the human face has 42 muscles, controlled by different areas of the brain; some can be controlled directly by conscious effort, while others are difficult to control consciously.
  • the sequence of facial images constitutes the changing process of facial expressions, from which user emotions can be extracted.
  • this embodiment does not limit the collection equipment for the face image, and those skilled in the art can add or reduce the collection equipment for the face image according to the actual situation.
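The collection loop of S101 can be sketched as follows. This is an illustrative sketch under stated assumptions: `FaceImage` and its quality attributes (`has_complete_face`, `sharpness`) are hypothetical stand-ins for whatever the camera pipeline reports, and the refocus/reframe step is reduced to simply retrying.

```python
from dataclasses import dataclass

@dataclass
class FaceImage:
    """A captured frame plus the quality attributes checked in S101."""
    timestamp: float          # shooting time, used to order the sequence
    has_complete_face: bool   # complete facial area present?
    sharpness: float          # clarity measure compared to the threshold

def collect_face_sequence(capture, preset_count=10,
                          sharpness_threshold=0.5, max_attempts=100):
    """Keep capturing until `preset_count` images meet the preset
    requirement, then return them arranged in shooting-time order."""
    accepted = []
    for _ in range(max_attempts):
        if len(accepted) >= preset_count:
            break
        img = capture()  # on a real device: grab a frame, possibly refocus
        if img.has_complete_face and img.sharpness > sharpness_threshold:
            accepted.append(img)        # save the qualifying image
        # otherwise: re-collect the user's face image
    if len(accepted) < preset_count:
        raise RuntimeError("could not collect enough qualifying images")
    return sorted(accepted, key=lambda f: f.timestamp)
```

A liveness check, as the text notes, would also belong in this loop before an image is accepted.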
  • S102: Extract motion features from the face image sequence. In this embodiment, the facial area of the face image sequence is divided into multiple motion units, and feature extraction is performed on these units to obtain the motion feature corresponding to each unit.
  • the facial area of a human face can be divided into multiple motion units such as a left eye area, a right eye area, a nose area, a lips area, a left cheek area, and a right cheek area.
  • according to the facial muscles, the facial area can be further divided into motion units such as the upper eyelid, lower eyelid, nose wing, philtrum, and lower jaw. From the face image sequence, the motion features of these units are extracted: for example, the eyebrows rising and furrowing, the upper eyelid rising, the lower eyelid tensing, and the lips retracting toward the ears.
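As a rough sketch of S102, the frames can be divided into named regions and a simple motion feature computed per region. The region names and the displacement feature are illustrative assumptions; the patent does not prescribe a particular feature, and a full system would use FACS-style action-unit intensity estimators.

```python
# Hypothetical motion units from the coarse division described above.
ACTION_UNITS = ["left_eye", "right_eye", "nose", "lips",
                "left_cheek", "right_cheek"]

def extract_motion_features(landmark_sequence):
    """Given, per frame, one representative landmark position per motion
    unit (a single number here for simplicity), compute each unit's total
    displacement across the sequence as its motion feature."""
    features = {}
    for unit in ACTION_UNITS:
        track = [frame[unit] for frame in landmark_sequence]
        features[unit] = sum(abs(b - a) for a, b in zip(track, track[1:]))
    return features
```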
  • S103: Determine whether the motion feature belongs to the target category according to the mapping relationship between motion features and emotion categories.
  • in this embodiment, the emotion category of the motion feature corresponding to each motion unit is determined through the mapping between motion features and emotion categories in the facial action coding system FACS; the user's current emotion category is then determined from the emotion categories of the individual motion units. The current emotion categories include: happiness, calm, anger, fear, pain, and sadness.
  • the mapping relationship between the motion feature and the emotion is obtained through the facial behavior coding system FACS, and then the emotion corresponding to each motion unit is determined according to the emotion corresponding to the motion feature.
  • for example, raised and furrowed eyebrows express fear and sadness;
  • a raised upper eyelid expresses fear and sadness;
  • tense lower eyelids express fear and sadness;
  • lips retracting toward the ears express fear.
  • the probabilities of the different emotions corresponding to each motion unit are combined; a weight can also be set for each motion unit, and the category with the highest composite score is taken as the user's current emotion category. For example, if the recognized motion features are raised and furrowed eyebrows, a raised upper eyelid, tense lower eyelids, and lips retracting toward the ears, fear receives the highest probability and sadness the next highest, so the user's emotion is finally judged to be fear.
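The weighted vote described above can be sketched as follows. The mapping table is a toy stand-in for the FACS mapping, covering only the four example features from the text; the weighting and score combination are illustrative assumptions.

```python
# Toy FACS-style mapping: each motion feature votes for emotion categories.
FEATURE_TO_EMOTIONS = {
    "brows_raised_and_furrowed":  ["fear", "sadness"],
    "upper_eyelid_raised":        ["fear", "sadness"],
    "lower_eyelid_tense":         ["fear", "sadness"],
    "lips_retracted_toward_ears": ["fear"],
}

def classify_emotion(observed_features, unit_weights=None):
    """Combine per-unit votes, optionally weighted per motion unit, and
    return the emotion category with the highest composite score."""
    weights = unit_weights or {}
    scores = {}
    for feature in observed_features:
        w = weights.get(feature, 1.0)
        for emotion in FEATURE_TO_EMOTIONS.get(feature, []):
            scores[emotion] = scores.get(emotion, 0.0) + w
    # no recognized features: treat the user as calm
    return max(scores, key=scores.get) if scores else "calm"
```

With all four example features observed, fear collects four votes against three for sadness, matching the worked example in the text.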
  • S104: If it is determined that the user's current emotion category belongs to the target category, a preset distress signal is sent to the preset platform.
  • the target categories include: fear, pain, and preset expressions; the preset expressions are custom expressions entered by the user in advance.
  • optionally, the preset distress signal can be sent to the preset platform through a local communication device and/or through a pre-associated terminal; the preset platform includes a community security platform and a public security bureau alarm platform.
  • specifically, if the user's emotion is recognized as fear, pain, or the preset expression, indicating a high probability that the user is being held by criminals, a call-for-help message is sent to the 110 alarm platform (China's police emergency number).
  • the call for help message can be in any form such as phone calls, short messages, and videos.
  • the call-for-help information may include the location, time, and identity of the person calling for help. Environmental images or videos can further be captured by the camera and sent to the distress platform.
  • the preset platform may also be an emergency contact set independently by the user, such as parents. If the user's current emotion belongs to the target category, a distress signal is sent to the emergency contact.
  • the user can also set a specific expression as the trigger for the distress signal; for example, glaring three times can be entered in advance as the trigger, so that if the face image sequence contains this preset expression, a distress signal is sent to the preset platform.
  • this embodiment does not limit the specific type of the preset platform; those skilled in the art can add or remove platform types according to the actual situation, for example using the 110 alarm platform, or an emergency contact set by the user, as the preset platform.
  • this embodiment likewise does not limit the specific type of the distress signal; those skilled in the art can add or remove signal types according to the actual situation.
  • the distress message can take any form, such as a phone call, a text message, or a video.
  • the distress information can include the location, time, and identity of the person calling for help, and may further include environmental images or videos captured by the camera.
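The content and delivery of the distress signal described above can be sketched as a small payload builder. The field names and the `senders` fan-out are hypothetical; the text only requires that location, time, person, and optional environment media can be included and that the message can go out via the local communication device and/or a pre-associated terminal.

```python
TARGET_CATEGORIES = {"fear", "pain", "preset_expression"}

def build_distress_message(emotion, location, person, timestamp, media=()):
    """Assemble the preset distress payload, or return None when the
    recognized emotion is not a target category (no alarm is raised)."""
    if emotion not in TARGET_CATEGORIES:
        return None
    return {
        "type": "SOS",
        "emotion": emotion,
        "location": location,   # where help is needed
        "person": person,       # who is calling for help
        "time": timestamp,      # when the call was raised
        "media": list(media),   # optional environment images/videos
    }

def send_distress(message, senders):
    """Fan the message out over every configured channel, e.g. the local
    communication device and a pre-associated terminal. Returns the
    number of channels used."""
    if message is None:
        return 0
    for send in senders:
        send(message)
    return len(senders)
```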
  • in this embodiment, the user's face image sequence is collected; motion features are extracted from it; whether the motion features belong to the target category is determined according to the mapping between motion features and emotion categories; and, if they do, the preset distress signal is sent to the preset platform.
  • the user's emotion can be analyzed from changes in the user's expression, and when an emotional abnormality is detected, a distress signal is automatically generated, realizing a covert call for help and ensuring the user's personal and property safety.
  • FIG. 3 is a schematic structural diagram of a distress call apparatus based on expression recognition provided by Embodiment 2 of the present invention.
  • the distress call apparatus based on facial expression recognition in this embodiment may include:
  • the collection module 31 is used to collect the user's face image sequence
  • the extraction module 32 is used to extract motion features from the face image sequence
  • the determining module 33 is configured to determine whether the motion feature belongs to the target category according to the mapping relationship between the motion feature and the emotion category;
  • the sending module 34 is configured to send a preset distress signal to the preset platform when the motion feature belongs to the target category.
  • the acquisition module 31 is specifically used for:
  • the extraction module 32 is specifically used for:
  • feature extraction is performed on the motion units to obtain the motion feature corresponding to each unit; the motion features of all units together constitute the motion features of the face image.
  • the determining module 33 is specifically used for:
  • the user's current emotion category is determined; the current emotion category includes: happiness, calm, anger, fear, pain, and sadness.
  • the target categories include: fear, pain, and preset expressions; the preset expressions are custom expressions entered by the user in advance.
  • the sending module 34 is specifically used for:
  • the device for calling for help based on facial expression recognition of this embodiment can execute the technical solution of the method shown in FIG. 2; for the specific implementation process and technical principles, refer to the related description of that method, which will not be repeated here.
  • in this embodiment, the user's facial image sequence is collected; motion features are extracted from it; the user's current emotion category is determined according to the mapping between motion features and emotions; and, if the current emotion belongs to the target category, a distress signal is sent to the preset platform.
  • the user's emotion can be analyzed from changes in the user's expression, and when an emotional abnormality is detected, a distress signal is automatically generated, realizing a covert call for help and ensuring the user's personal and property safety.
  • FIG. 4 is a schematic structural diagram of an electronic device provided in Embodiment 3 of the present invention. As shown in FIG. 4, the electronic device 40 in this embodiment includes:
  • an image collector 44, a processor 41, and a memory 42, wherein:
  • the image collector 44 is used to collect the user's face image.
  • the memory 42 is used to store executable instructions; the memory may also be a flash memory.
  • the processor 41 is configured to execute executable instructions stored in the memory to implement various steps in the methods involved in the foregoing embodiments. For details, refer to the related description in the foregoing method embodiment.
  • the memory 42 may be independent or integrated with the processor 41.
  • the electronic device 40 may further include:
  • the bus 43 is used to connect the memory 42 and the processor 41.
  • the electronic device in this embodiment can execute the method shown in FIG. 2, and for its specific implementation process and technical principle, please refer to the related description in the method shown in FIG. 2, which will not be repeated here.
  • the electronic device may be a face recognition terminal device, a mobile phone, a tablet, or a consumer electronics product such as a door lock.
  • an embodiment of the present invention also provides an access control system, including: an image collector, a processor, a memory, a door lock, and a communication device; the memory stores an algorithm program, the image collector is used to collect the user's face image, and the processor is configured to retrieve the algorithm program from the memory and execute the method for calling for help based on facial expression recognition as shown in FIG. 2; wherein:
  • if the motion feature belongs to the target category, the door lock is controlled to delay opening or refuse to open, and a preset distress signal is sent to the preset platform through the communication device.
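The access-control behavior can be sketched as a policy function. The `lock` and `notify` interfaces are hypothetical; the choice between delaying and refusing, and the exact distress payload, are configuration details the text leaves open.

```python
DISTRESS_CATEGORIES = {"fear", "pain", "preset_expression"}

def handle_access_attempt(identity_verified, emotion, lock, notify):
    """Open the door only when identity verification succeeds and no
    distress emotion is detected; otherwise delay or refuse opening and
    send the preset distress signal through the communication device."""
    if emotion in DISTRESS_CATEGORIES:
        lock.refuse()        # or lock.delay(seconds), per configuration
        notify("SOS")        # distress signal to the preset platform
        return "refused_with_alarm"
    if identity_verified:
        lock.open()
        return "opened"
    lock.refuse()
    return "refused"
```

Note that the alarm branch fires even when identity verification succeeds, which is the coercion scenario the system is designed to catch.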
  • in this embodiment, the user's face image sequence is collected; motion features are extracted from it; whether the motion features belong to the target category is determined according to the mapping between motion features and emotion categories; and, if they do, the preset distress signal is sent to the preset platform.
  • the user's emotion can be analyzed from changes in the user's expression, and when an emotional abnormality is detected, a distress signal is automatically generated, realizing a covert call for help and ensuring the user's personal and property safety.
  • the embodiments of the present application also provide a computer-readable storage medium.
  • the computer-readable storage medium stores computer-executable instructions.
  • when the computer-executable instructions are run, the user equipment performs the various possible methods described above.
  • the computer-readable medium includes a computer storage medium and a communication medium, where the communication medium includes any medium that facilitates the transfer of a computer program from one place to another.
  • the storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
  • An exemplary storage medium is coupled to the processor, so that the processor can read information from the storage medium and can write information to the storage medium.
  • the storage medium may also be an integral part of the processor.
  • the processor and the storage medium may be located in an application specific integrated circuit (ASIC).
  • the application specific integrated circuit may be located in the user equipment.
  • the processor and the storage medium may also exist as discrete components in the communication device.
  • the storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Alarm Systems (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The present invention provides a method, device, electronic equipment and storage medium based on facial expression recognition. The method includes: collecting a user's face image sequence; extracting motion features from the face image sequence; determining, according to the mapping relationship between the motion features and emotion categories, whether the motion features belong to a target category; and, if so, sending a preset distress signal to a preset platform. The user's emotion can thus be analyzed from changes in the user's facial expression, and a distress signal is automatically generated when an emotional abnormality is detected, realizing a covert call for help and ensuring the user's personal and property safety.

Description

基于表情识别的呼救方法、装置、电子设备及存储介质 技术领域
本申请涉及信息处理技术领域,尤其涉及一种基于表情识别的呼救方法、装置、电子设备及存储介质。
背景技术
随着信息技术和网络技术的迅猛发展,人们对身份识别技术的需求越来越多,对其安全可靠性的要求也越来越严格。基于传统密码认证的身份识别技术在实际信息网络应用中己经暴露出许多不足之处,而基于生物特征辨别的身份识别技术近年来日益成熟,并在实际应用中展现出极大的优越性。
目前,基于3D人脸识别技术的身份验证方式被广泛地应用于各种智能终端中,例如:手机、电脑,电子门锁等等。智能终端通过采集人脸特征来进行用户身份的验证。
但是,这种3D人脸识别技术容易被犯罪分子利用,如通过胁迫被害人来完成身份验证,从而导致了用户的财产损失。
发明内容
本发明提供一种基于表情识别的呼救方法、装置、电子设备及存储介质,可以根据用户的表情变化来解析用户的情绪,并在发现用户情绪异常时,自动生成呼救信号,从而实现隐秘呼救,保证用户的人身和财产安全。
第一方面,本发明实施例提供一种基于表情识别的呼救方法,包括:
采集用户的人脸图像序列;
从所述人脸图像序列中提取出运动特征;
根据所述运动特征与情绪类别之间的映射关系,确定所述运动特征是否属于目标类别;
若所述运动特征属于目标类别,则向预设平台发送预设的呼救信号。
在一种可能的设计中,采集用户的人脸图像序列,包括:
通过至少一个摄像头采集用户的人脸图像;
判断所述人脸图像是否符合预设要求,若符合预设要求,则保存所述人脸图像;若不符合预设要求,则重新采集用户的人脸图像;其中,所述预设要求是指:所述人脸图像中包含完整的面部区域,且所述人脸图像的清晰度大于预设阈值;
判断所述人脸图像的数量是否达到预设数量,若是,则将所述预设数量的人脸图像按照拍摄时间顺序排列,以构成用户的人脸图像序列;若否,则重新采集用户的人脸图像,直到采集到预设数量的人脸图像。
在一种可能的设计中,从所述人脸图像序列中提取出运动特征,包括:
将所述人脸图像序列的面部区域划分成多个运动单元;
对所述运动单元进行特征提取,得到各个运动单元对应的运动特征;其中,所有运动单元的运动特征构成所述人脸图像的运动特征。
在一种可能的设计中,根据所述运动特征与情绪类别之间的映射关系,确定所述运动特征是否属于目标类别,包括:
通过面部行为编码系统FACS中运动特征与情绪类别之间的映射关系,确定各个运动单元对应的运动特征所属的情绪类别;
根据各个运动单元所对应的情绪类别,确定用户当前情绪类别;所述当前情绪类别包括:愉快、平静、愤怒、恐惧、痛苦、悲伤。
在一种可能的设计中,所述目标类别包括:恐惧、痛苦、预设表情;所述预设表情为用户预先录入的自定义表情。
在一种可能的设计中,若所述运动特征属于目标类别,则向预设平台发送预设的呼救信号,包括:
通过本地通讯设备向预设平台发送预设的呼救信号,和/或通过预先关联的终端向预设平台发送预设的呼救信号;其中,所述预设平台包括:小区安保平台、公安局报警平台。
第二方面,本发明实施例提供一种基于表情识别的呼救装置,包括:
采集模块,用于采集用户的人脸图像序列;
提取模块,用于从所述人脸图像序列中提取出运动特征;
确定模块,用于根据所述运动特征与情绪类别之间的映射关系,确定所述运动特征是否属于目标类别;
发送模块,用于在所述运动特征属于目标类别时,向预设平台发送预设的呼救信号。
在一种可能的设计中,所述采集模块,具体用于:
通过至少一个摄像头采集用户的人脸图像;
判断所述人脸图像是否符合预设要求,若符合预设要求,则保存所述人脸图像;若不符合预设要求,则重新采集用户的人脸图像;其中,所述预设要求是指:所述人脸图像中包含完整的面部区域,且所述人脸图像的清晰度大于预设阈值;
判断所述人脸图像的数量是否达到预设数量,若是,则将所述预设数量的人脸图像按照拍摄时间顺序排列,以构成用户的人脸图像序列;若否,则重新采集用户的人脸图像,直到采集到预设数量的人脸图像。
在一种可能的设计中,所述提取模块,具体用于:
将所述人脸图像序列的面部区域划分成多个运动单元;
对所述运动单元进行特征提取,得到各个运动单元对应的运动特征;其中,所有运动单元的运动特征构成所述人脸图像的运动特征。
在一种可能的设计中,所述确定模块,具体用于:
通过面部行为编码系统FACS中所述运动特征与情绪类别之间的映射关系,确定各个运动单元对应的运动特征所属的情绪类别;
根据各个运动单元所对应的情绪类别,确定用户当前情绪类别;所述当前情绪类别包括:愉快、平静、愤怒、恐惧、痛苦、悲伤。
在一种可能的设计中,所述目标类别包括:恐惧、痛苦、预设表情;所述预设表情为用户预先录入的自定义表情。
在一种可能的设计中,所述发送模块,具体用于:
通过本地通讯设备向预设平台发送预设的呼救信号,和/或通过预先关联的终端向预设平台发送预设的呼救信号;其中,所述预设平台包括:小区安保平台、公安局报警平台。
第三方面,本发明实施例提供一种电子设备,包括:图像采集器、处理器和存储器;所述存储器中存储有算法程序,所述图像采集器用于采集用户的人脸图像;所述处理器用于调取所述存储器中的算法程序,执行如第一方面中任一项所述的基于表情识别的呼救方法。
第四方面,本发明实施例提供一种门禁系统,包括:图像采集器、处 理器、存储器、门锁、通讯设备;所述存储器中存储有算法程序,所述图像采集器用于采集用户的人脸图像;所述处理器用于调取所述存储器中的算法程序,执行如第一方面中任一项所述的基于表情识别的呼救方法;其中:
若运动特征属于目标类别,则控制门锁延迟开启或者拒绝开启,并通过所述通讯设备向预设平台发送预设的呼救信号。
第四方面,本发明实施例提供一种计算机可读存储介质,包括:程序指令,当其在计算机上运行时,使得计算机执行所述程序指令,以实现如第一方面中任一项所述的基于表情识别的呼救方法。
本发明提供的基于表情识别的呼救方法、装置、设备及存储介质,通过采集用户的人脸图像序列;从所述人脸图像序列中提取出运动特征;根据所述运动特征与情绪类别之间的映射关系,确定所述运动特征是否属于目标类别;若所述运动特征属于目标类别,则向预设平台发送预设的呼救信号。从而可以根据用户的表情变化来解析用户的情绪,并在发现用户情绪异常时,自动生成呼救信号,从而实现隐秘呼救,保证用户的人身和财产安全。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图进行简单的介绍。显而易见地,下面描述中的附图是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1为本发明一应用场景的原理示意图;
图2为本发明实施例一提供的基于表情识别的呼救方法的流程图;
图3为本发明实施例二提供的基于表情识别的呼救装置的结构示意图;
图4为本发明实施例三提供的电子设备的结构示意图。
通过上述附图,已示出本公开明确的实施例,后文中将有更详细的描述。这些附图和文字描述并不是为了通过任何方式限制本公开构思的范围,而是通过参考特定实施例为本领域技术人员说明本公开提到的概念。
具体实施方式
为使本发明实施例的目的、技术方案和优点更加清楚,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整的描述。显然,所描述的实施例是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
本发明的说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第三”、“第四”等(如果存在)是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本发明的实施例,能够以除了在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含。例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
下面以具体的实施例对本发明的技术方案进行详细说明。下面这几个具体的实施例可以相互结合,对于相同或相似的概念或过程可能在某些实施例不再赘述。
随着信息技术和网络技术的迅猛发展,人们对身份识别技术的需求越来越多,对其安全可靠性的要求也越来越严格。基于传统密码认证的身份识别技术在实际信息网络应用中己经暴露出许多不足之处,而基于生物特征辨别的身份识别技术近年来日益成熟,并在实际应用中展现出极大的优越性。
目前,基于3D人脸识别技术的身份验证方式被广泛地应用于各种智能终端中,例如:手机、电脑,电子门锁等等。智能终端通过采集人脸特征来进行用户身份的验证。
但是,这种3D人脸识别技术容易被犯罪分子利用,如通过胁迫被害人来完成身份验证,从而导致了用户的财产损失。例如,犯罪分子需要通过胁迫受害人,进行人脸识别开门、转账等操作,从而造成了受害人的财物损失。
根据人脸的解剖学特点,可以将人脸区域划分成若干既相互独立又相互联系的AU(Action Unit,运动单元)。Ekman教授分析了这些运动单元 的运动特征及其所控制的主要区域以及与之相关的表情,并于1976提出来面部行为编码系统(Facial Action Coding System,FACS)。
本发明提供一种基于表情识别的呼救方法,可以对识别人的脸部微表情进行识别,进而判断识别人情绪状态结果。若识别到人脸恐惧、痛苦或特定的微表情时,则进行SoS紧急呼救,将受害人被劫持和胁迫的消息隐秘地发送出去。本发明的方法具有以下特点:1)在犯罪分子不知情的情况下利用面部行为编码系统对人脸的情绪识别,在判断情况痛苦或恐惧表情时,自动启动SoS呼救。2)可以设定特定的微表情进行SoS启动呼救操作。本方法可以应用于人脸识别的终端设备、手机、平板以及门锁类消费电子系统商品。
在具体的实现过程中,图1为本发明一应用场景的原理示意图,如图1所示,基于表情识别的呼救装置10,包括:采集模块、提取模块、确定模块、发送模块。基于表情识别的呼救装置10的采集模块,用于采集用户的人脸图像序列;基于表情识别的呼救装置10的提取模块,用于从人脸图像序列中提取出运动特征;基于表情识别的呼救装置10的确定模块,用于根据运动特征与情绪类别之间的映射关系,确定所述运动特征是否属于目标类别;基于表情识别的呼救装置10的发送模块,用于在用户当前情绪类别属于目标类别时,向预设平台20发送预设的呼救信号。
需要说明的是,预设平台20可以同时接收多个基于表情识别的呼救装置10的呼救信号。
应用上述方法可以根据用户的表情变化来解析用户的情绪,并在发现用户情绪异常时,自动生成呼救信号,从而实现隐秘呼救,保证用户的人身和财产安全。
下面以具体地实施例对本发明的技术方案以及本申请的技术方案如何解决上述技术问题进行详细说明。下面这几个具体的实施例可以相互结合,对于相同或相似的概念或过程可能在某些实施例中不再赘述。下面将结合附图,对本发明的实施例进行描述。
图2为本发明实施例一提供的基于表情识别的呼救方法的流程图,如图2所示,本实施例中的方法可以包括:
S101、采集用户的人脸图像序列。
本实施例中,通过至少一个摄像头采集用户的人脸图像;判断人脸图 像是否符合预设要求,若符合预设要求,则保存人脸图像;若不符合预设要求,则重新采集用户的人脸图像;其中,预设要求是指:人脸图像中包含完整的面部区域,且人脸图像的清晰度大于预设阈值;判断人脸图像的数量是否达到预设数量,若是,则将预设数量的人脸图像按照拍摄时间顺序排列,以构成用户的人脸图像序列;若否,则重新采集用户的人脸图像,直到采集到预设数量的人脸图像。
具体地,通过至少一个摄像头采集用户的人脸图像,例如,采用两个摄像头,其中一个为拍摄可见光摄像头,另一个滤除可见光的近红外摄像头。两个摄像头为同时或者在预设时间差内对用户的脸部进行拍照。然后,在人脸图像中判断包含完整的面部区域,且人脸图像的清晰度大于预设阈值。若不符合该要求,则摄像头重新对焦,扩大或缩小拍照范围,重新进行拍照,采集用户的人脸图像。再然后,判断人脸图像的数量是否达到预设数量。若是,则将预设数量的人脸图像按照拍摄时间顺序排列,以构成用户的人脸图像序列;若否,则重新采集用户的人脸图像,直到采集到预设数量的人脸图像。这是因为人脸图像序列的采集是本发明的基础,采集的图像序列既要满足图像质量要求,也要满足图像数量要求。在该过程中,要有效地判断被采集的对象是活体,而不是其他仿照物,例如静态照片等。人类面部有42块肌肉,它们分别由大脑的不同区域控制,有些是可以由意识直接控制,有些则不容易用意识控制。人脸图像的序列,构成了人脸表情的变化过程,可以从其中提取用户的情绪。
It should be noted that this embodiment does not limit the face-image capture device; those skilled in the art may add or remove capture devices according to actual circumstances.
S102: Extract motion features from the face image sequence.
In this embodiment, the facial region of the face image sequence is divided into multiple motion units, and feature extraction is performed on the motion units to obtain the motion feature corresponding to each motion unit.
Specifically, the facial region may be divided into motion units such as the left-eye region, right-eye region, nose region, lip region, left-cheek region, and right-cheek region. The facial region may also be divided more finely according to the facial muscles, into units such as the upper eyelid, lower eyelid, nostril wings, philtrum, and jaw. The motion features of these units are then extracted from the face image sequence, for example: brows raised and drawn together, upper eyelid raised, lower eyelid tensed, or lips stretched back toward the ears.
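The region division described above can be sketched by grouping facial landmark points into per-unit bounding boxes. This is an illustrative assumption, not the patent's method: the landmark names and groupings are hypothetical, and a real system would obtain the points from a landmark detector (e.g. a 68-point model) before tracking how each region moves across the sequence.

```python
from typing import Dict, List, Tuple

Point = Tuple[int, int]

# Hypothetical mapping from motion unit to the landmarks that bound it.
MOTION_UNIT_LANDMARKS: Dict[str, List[str]] = {
    "left_eye":  ["left_eye_outer", "left_eye_inner"],
    "right_eye": ["right_eye_inner", "right_eye_outer"],
    "mouth":     ["mouth_left", "mouth_right"],
}


def bounding_box(points: List[Point]) -> Tuple[int, int, int, int]:
    """Axis-aligned box (x_min, y_min, x_max, y_max) around the points."""
    xs, ys = zip(*points)
    return min(xs), min(ys), max(xs), max(ys)


def split_into_motion_units(landmarks: Dict[str, Point]):
    """Return one bounding box per motion unit, built from its landmarks."""
    return {
        unit: bounding_box([landmarks[name] for name in names])
        for unit, names in MOTION_UNIT_LANDMARKS.items()
    }
```

Motion features such as "upper eyelid raised" would then be computed by comparing each unit's landmark positions across consecutive frames of the sequence.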
S103: Determine, according to the mapping between motion features and emotion categories, whether the motion features belong to a target category.
In this embodiment, the emotion category of the motion feature of each motion unit is determined through the mapping between motion features and emotion categories in the Facial Action Coding System (FACS); the user's current emotion category is then determined from the emotion categories of the individual motion units. The emotion categories include: happiness, calm, anger, fear, distress, and sadness.
Specifically, the mapping between motion features and emotions is obtained from FACS, and the emotion corresponding to each motion unit is determined from its motion feature. For example, brows raised and drawn together indicate fear or sadness; a raised upper eyelid indicates fear or sadness; a tensed lower eyelid indicates fear or sadness; and lips stretched back toward the ears indicate fear. The probabilities of the different emotions indicated by the individual motion units are combined, optionally with a weight assigned to each motion unit, and the category with the highest combined score is taken as the user's current emotion. For example, if the motion features of brows raised and drawn together, upper eyelid raised, lower eyelid tensed, and lips stretched back toward the ears are recognized in the face image sequence, the probability of fear is highest and that of sadness second, so the user's emotion is finally determined to be fear.
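The weighted voting described above can be sketched as follows, assuming a small hand-written fragment of the feature-to-emotion mapping; the real FACS table is far larger, and the feature names, default weights, and fallback category here are illustrative choices, not part of the patent.

```python
from collections import defaultdict
from typing import Dict, Iterable

# Hypothetical fragment of the FACS-style mapping: each motion feature
# votes for one or more emotion categories (taken from the worked example).
FEATURE_TO_EMOTIONS = {
    "brows_raised_and_drawn_together": ("fear", "sadness"),
    "upper_lid_raised":                ("fear", "sadness"),
    "lower_lid_tensed":                ("fear", "sadness"),
    "lips_stretched_toward_ears":      ("fear",),
}


def classify_emotion(observed_features: Iterable[str],
                     weights: Dict[str, float] = None) -> str:
    """Score each emotion by the (optionally weighted) votes of the
    observed motion features; return the highest-scoring category."""
    weights = weights or {}
    scores = defaultdict(float)
    for feature in observed_features:
        w = weights.get(feature, 1.0)  # default weight of 1 per unit
        for emotion in FEATURE_TO_EMOTIONS.get(feature, ()):
            scores[emotion] += w
    # No recognized features: fall back to "calm" (an assumed default).
    return max(scores, key=scores.get) if scores else "calm"
```

With the four example features, fear collects four votes and sadness three, reproducing the outcome in the worked example above.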
S104: If the motion features belong to a target category, send a preset call-for-help signal to a preset platform.
In this embodiment, if the user's current emotion category is determined to belong to a target category, a call-for-help signal is sent to the preset platform. The target categories include: fear, distress, and a preset expression, where the preset expression is a custom expression recorded by the user in advance.
Optionally, the preset call-for-help signal may be sent to the preset platform through a local communication device and/or through a pre-associated terminal, where the preset platform includes: a community security platform and a police alarm platform.
Specifically, if the user's emotion is recognized as fear, distress, or the preset expression, the probability that the user is being held by criminals is high, and call-for-help information is sent to the 110 alarm platform. The call-for-help information may take any form, such as a phone call, text message, or video, and may include information such as the location, time, and identity of the caller. An image or video of the surroundings may further be captured by the camera and sent to the platform.
Optionally, the preset platform may also be an emergency contact set by the user, such as a parent: if the user's current emotion belongs to a target category, the call-for-help signal is sent to the emergency contact. The user may also set a specific expression of their own as the trigger for the call-for-help signal. For example, if widening the eyes three times is recorded in advance as the trigger, the call-for-help signal is sent to the preset platform whenever the face image sequence contains that preset expression.
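The trigger logic of step S104 can be sketched as follows. The `HelpSignal` fields, the target-category set, and the injected `send` callback are illustrative assumptions; in a device the callback would hand the message to the local communication hardware or a pre-associated terminal.

```python
from dataclasses import dataclass
from typing import Callable

# Target categories per this embodiment (the preset custom expression is
# handled as a separate boolean flag below).
TARGET_CATEGORIES = {"fear", "distress"}


@dataclass
class HelpSignal:
    """Hypothetical payload: location, time, and identity of the caller."""
    location: str
    timestamp: str
    user_id: str
    kind: str = "sms"  # could also be "call" or "video"


def maybe_send_help(emotion: str,
                    custom_expression_seen: bool,
                    signal: HelpSignal,
                    send: Callable[[HelpSignal], None]) -> bool:
    """Send the preset help signal when the current emotion belongs to a
    target category, or when the user's pre-recorded trigger expression
    (e.g. widening the eyes three times) was seen. Returns True if sent."""
    if emotion in TARGET_CATEGORIES or custom_expression_seen:
        send(signal)
        return True
    return False
```

Because the decision happens on-device from the image sequence alone, nothing visible signals the coercer that a message has been dispatched.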
It should be noted that this embodiment does not limit the specific type of the preset platform; those skilled in the art may add or remove platform types according to actual circumstances, for example using the 110 alarm platform or a user-defined emergency contact as the preset platform. Nor does this embodiment limit the specific type of the call-for-help signal; those skilled in the art may add or remove signal types according to actual circumstances, for example a phone call, text message, or video.
It should also be noted that this embodiment does not limit the specific content of the call-for-help signal; those skilled in the art may add or remove content according to actual circumstances. For example, the call-for-help information may include the location, time, and identity of the caller, and may further include an environment image or video captured by the camera.
In this embodiment, a sequence of face images of the user is captured; motion features are extracted from the sequence; whether the motion features belong to a target category is determined according to the mapping between motion features and emotion categories; and if they do, a preset call-for-help signal is sent to a preset platform. The user's emotion can thus be inferred from changes in the user's expression, and a call-for-help signal is generated automatically when an abnormal emotion is detected, enabling a covert call for help and safeguarding the user's personal and property safety.
FIG. 3 is a schematic structural diagram of the expression-recognition-based call-for-help apparatus provided in Embodiment 2 of the present invention. As shown in FIG. 3, the apparatus of this embodiment may include:
an acquisition module 31, configured to capture a sequence of face images of the user;
an extraction module 32, configured to extract motion features from the face image sequence;
a determination module 33, configured to determine, according to the mapping between motion features and emotion categories, whether the motion features belong to a target category; and
a sending module 34, configured to send a preset call-for-help signal to a preset platform when the motion features belong to a target category.
In one possible design, the acquisition module 31 is specifically configured to:
capture face images of the user through at least one camera;
determine whether each face image meets preset requirements: if so, save the face image; if not, re-capture a face image of the user, where the preset requirements are that the face image contains a complete facial region and that the sharpness of the face image exceeds a preset threshold; and
determine whether the number of face images has reached a preset number: if so, arrange the preset number of face images in shooting-time order to form the user's face image sequence; if not, continue capturing face images of the user until the preset number has been captured.
In one possible design, the extraction module 32 is specifically configured to:
divide the facial region of the face image sequence into multiple motion units; and
perform feature extraction on the motion units to obtain the motion feature corresponding to each motion unit, where the motion features of all motion units constitute the motion features of the face image.
In one possible design, the determination module 33 is specifically configured to:
determine, through the mapping between motion features and emotion categories in the Facial Action Coding System (FACS), the emotion category of the motion feature of each motion unit; and
determine the user's current emotion category from the emotion categories of the individual motion units, where the current emotion categories include: happiness, calm, anger, fear, distress, and sadness.
In one possible design, the target categories include: fear, distress, and a preset expression, where the preset expression is a custom expression recorded by the user in advance.
In one possible design, the sending module 34 is specifically configured to:
send the preset call-for-help signal to the preset platform through a local communication device and/or through a pre-associated terminal, where the preset platform includes: a community security platform and a police alarm platform.
The expression-recognition-based call-for-help apparatus of this embodiment can carry out the technical solution of the method shown in FIG. 2; for its specific implementation and technical principles, see the description of the method in FIG. 2, which is not repeated here.
In this embodiment, a sequence of face images of the user is captured; motion features are extracted from the sequence; the user's current emotion category is determined according to the mapping between motion features and emotions; and if the current emotion belongs to a target category, a call-for-help signal is sent to a preset platform. The user's emotion can thus be inferred from changes in the user's expression, and a call-for-help signal is generated automatically when an abnormal emotion is detected, enabling a covert call for help and safeguarding the user's personal and property safety.
FIG. 4 is a schematic structural diagram of the electronic device provided in Embodiment 3 of the present invention. As shown in FIG. 4, the electronic device 40 of this embodiment includes:
an image collector 44, a processor 41, and a memory 42, where:
the image collector 44 is configured to capture face images of the user;
the memory 42 is configured to store executable instructions, and may also be a flash memory; and
the processor 41 is configured to execute the executable instructions stored in the memory to implement the steps of the methods in the above embodiments; see the description in the method embodiments above for details.
Optionally, the memory 42 may be independent or integrated with the processor 41.
When the memory 42 is a device independent of the processor 41, the electronic device 40 may further include:
a bus 43 connecting the memory 42 and the processor 41.
The electronic device of this embodiment can carry out the method shown in FIG. 2; for its specific implementation and technical principles, see the description of the method in FIG. 2, which is not repeated here.
Optionally, the electronic device may be a face-recognition terminal device, a mobile phone, a tablet, or a door-lock consumer electronics product.
An embodiment of the present invention further provides an access-control system, including: an image collector, a processor, a memory, a door lock, and a communication device. The memory stores an algorithm program; the image collector is configured to capture face images of the user; and the processor is configured to retrieve the algorithm program from the memory and execute the expression-recognition-based call-for-help method shown in FIG. 2, where:
if the motion features belong to a target category, the door lock is controlled to open with a delay or to refuse to open, and the preset call-for-help signal is sent to the preset platform through the communication device.
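The lock-side decision can be sketched as follows; this is a minimal illustration under assumed names, combining the emotion check with an ordinary identity check, with the help-signal dispatch injected as a callback so it stays covert.

```python
from typing import Callable

# Target categories per this embodiment (assumed labels).
TARGET_CATEGORIES = {"fear", "distress", "preset_expression"}


def handle_access_request(emotion: str,
                          identity_verified: bool,
                          send_help_signal: Callable[[], None]) -> str:
    """Decide what the door lock does for one recognition attempt.

    If the recognized emotion falls in a target category, the lock is
    delayed or refused and the covert help signal is dispatched through
    the communication device; otherwise the door opens only when
    identity verification succeeded."""
    if emotion in TARGET_CATEGORIES:
        send_help_signal()
        return "delay_or_refuse"
    return "open" if identity_verified else "refuse"
```

Delaying rather than flatly refusing could avoid tipping off the coercer while help is on the way; that trade-off is left to the integrator.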
It should be noted that this embodiment does not limit the specific type of the electronic device; the method shown in FIG. 2 can be loaded into any electronic device with a face-recognition function. For its specific implementation and technical principles, see the description of the method in FIG. 2, which is not repeated here.
In this embodiment, a sequence of face images of the user is captured; motion features are extracted from the sequence; whether the motion features belong to a target category is determined according to the mapping between motion features and emotion categories; and if they do, a preset call-for-help signal is sent to a preset platform. The user's emotion can thus be inferred from changes in the user's expression, and a call-for-help signal is generated automatically when an abnormal emotion is detected, enabling a covert call for help and safeguarding the user's personal and property safety.
In addition, an embodiment of the present application further provides a computer-readable storage medium storing computer-executable instructions; when at least one processor of a user device executes the computer-executable instructions, the user device performs the various possible methods described above.
Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium accessible to a general-purpose or special-purpose computer. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC), which may in turn reside in the user device. Of course, the processor and the storage medium may also exist as discrete components in a communication device.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments can be performed by hardware directed by program instructions. The program may be stored in a computer-readable storage medium; when executed, it performs the steps of the above method embodiments. The storage medium includes: read-only memory (ROM), random-access memory (RAM), magnetic disks, optical disks, and other media capable of storing program code.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. The present invention is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the disclosure indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (15)

  1. A call-for-help method based on expression recognition, characterized by comprising:
    capturing a sequence of face images of a user;
    extracting motion features from the face image sequence;
    determining, according to a mapping between the motion features and emotion categories, whether the motion features belong to a target category; and
    if the motion features belong to the target category, sending a preset call-for-help signal to a preset platform.
  2. The method according to claim 1, characterized in that capturing the sequence of face images of the user comprises:
    capturing face images of the user through at least one camera;
    determining whether a face image meets preset requirements: if so, saving the face image; if not, re-capturing a face image of the user, wherein the preset requirements are that the face image contains a complete facial region and that the sharpness of the face image exceeds a preset threshold; and
    determining whether the number of face images has reached a preset number: if so, arranging the preset number of face images in shooting-time order to form the user's face image sequence; if not, re-capturing face images of the user until the preset number of face images has been captured.
  3. The method according to claim 1, characterized in that extracting the motion features from the face image sequence comprises:
    dividing a facial region of the face image sequence into a plurality of motion units; and
    performing feature extraction on the motion units to obtain the motion feature corresponding to each motion unit, wherein the motion features of all the motion units constitute the motion features of the face image.
  4. The method according to claim 3, characterized in that determining, according to the mapping between the motion features and emotion categories, whether the motion features belong to the target category comprises:
    determining, through the mapping between motion features and emotion categories in the Facial Action Coding System (FACS), the emotion category of the motion feature of each motion unit; and
    determining the user's current emotion category from the emotion categories of the individual motion units, wherein the current emotion categories comprise: happiness, calm, anger, fear, distress, and sadness.
  5. The method according to claim 1, characterized in that the target category comprises: fear, distress, and a preset expression, wherein the preset expression is a custom expression recorded by the user in advance.
  6. The method according to any one of claims 1-5, characterized in that, if the motion features belong to the target category, sending the preset call-for-help signal to the preset platform comprises:
    sending the preset call-for-help signal to the preset platform through a local communication device and/or through a pre-associated terminal, wherein the preset platform comprises: a community security platform and a police alarm platform.
  7. A call-for-help apparatus based on expression recognition, characterized by comprising:
    an acquisition module, configured to capture a sequence of face images of a user;
    an extraction module, configured to extract motion features from the face image sequence;
    a determination module, configured to determine, according to a mapping between the motion features and emotion categories, whether the motion features belong to a target category; and
    a sending module, configured to send a preset call-for-help signal to a preset platform when the motion features belong to the target category.
  8. The apparatus according to claim 7, characterized in that the acquisition module is specifically configured to:
    capture face images of the user through at least one camera;
    determine whether a face image meets preset requirements: if so, save the face image; if not, re-capture a face image of the user, wherein the preset requirements are that the face image contains a complete facial region and that the sharpness of the face image exceeds a preset threshold; and
    determine whether the number of face images has reached a preset number: if so, arrange the preset number of face images in shooting-time order to form the user's face image sequence; if not, re-capture face images of the user until the preset number of face images has been captured.
  9. The apparatus according to claim 7, characterized in that the extraction module is specifically configured to:
    divide a facial region of the face image sequence into a plurality of motion units; and
    perform feature extraction on the motion units to obtain the motion feature corresponding to each motion unit, wherein the motion features of all the motion units constitute the motion features of the face image.
  10. The apparatus according to claim 9, characterized in that the determination module is specifically configured to:
    determine, through the mapping between the motion features and emotion categories in the Facial Action Coding System (FACS), the emotion category of the motion feature of each motion unit; and
    determine the user's current emotion category from the emotion categories of the individual motion units, wherein the current emotion categories comprise: happiness, calm, anger, fear, distress, and sadness.
  11. The apparatus according to claim 7, characterized in that the target category comprises: fear, distress, and a preset expression, wherein the preset expression is a custom expression recorded by the user in advance.
  12. The apparatus according to any one of claims 7-11, characterized in that the sending module is specifically configured to:
    send the preset call-for-help signal to the preset platform through a local communication device and/or through a pre-associated terminal, wherein the preset platform comprises: a community security platform and a police alarm platform.
  13. An electronic device, characterized by comprising: an image collector, a processor, and a memory, wherein the memory stores an algorithm program, the image collector is configured to capture face images of a user, and the processor is configured to retrieve the algorithm program from the memory and perform the expression-recognition-based call-for-help method according to any one of claims 1-6.
  14. An access-control system, characterized by comprising: an image collector, a processor, a memory, a door lock, and a communication device, wherein the memory stores an algorithm program, the image collector is configured to capture face images of a user, and the processor is configured to retrieve the algorithm program from the memory and perform the expression-recognition-based call-for-help method according to any one of claims 1-6, wherein:
    if motion features belong to a target category, the door lock is controlled to open with a delay or to refuse to open, and a preset call-for-help signal is sent to a preset platform through the communication device.
  15. A computer-readable storage medium, characterized by comprising program instructions which, when run on a computer, cause the computer to execute the program instructions so as to implement the expression-recognition-based call-for-help method according to any one of claims 1-6.