WO2019029266A1 - Procédé de reconnaissance de mouvement corporel, robot et support de stockage - Google Patents

Procédé de reconnaissance de mouvement corporel, robot et support de stockage

Info

Publication number
WO2019029266A1
WO2019029266A1, PCT/CN2018/091370, CN2018091370W
Authority
WO
WIPO (PCT)
Prior art keywords
information
limb motion
limb
motion
feature information
Prior art date
Application number
PCT/CN2018/091370
Other languages
English (en)
Chinese (zh)
Inventor
袁晖
Original Assignee
深圳市科迈爱康科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市科迈爱康科技有限公司 filed Critical 深圳市科迈爱康科技有限公司
Publication of WO2019029266A1 publication Critical patent/WO2019029266A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features

Definitions

  • the present disclosure relates to the field of video processing, and in particular, to a limb motion recognition method, a robot, and a storage medium.
  • The human visual system has limited temporal and spatial sensitivity, but some signals below the threshold of human perception still carry a certain amount of information. For example, as blood circulates, a person's skin color undergoes slight changes that are invisible to the human eye, yet these changes can be used to aid in the diagnosis of human health. Similarly, small-amplitude motions that are invisible or barely visible to the human eye can be amplified to reveal meaningful medical behavior and the wonderful world around us.
  • the main purpose of the present disclosure is to provide a limb motion recognition method, a robot, and a storage medium, which aim to solve the technical problem that the limb motion recognition is not refined in the prior art.
  • the present disclosure provides a limb motion recognition method, the method comprising the following steps:
  • the first limb motion information includes change information of the first color, change information of the first limb motion, and first motion degree information;
  • the limb motion in the to-be-processed video is determined according to the comparison result to implement the limb motion recognition.
  • The present disclosure further provides a limb motion recognition device, including a memory, a processor, and a limb motion recognition program stored on the memory and operable on the processor, the limb motion recognition program being configured to implement the steps of the limb motion recognition method described above.
  • The present disclosure also provides a robot, including a memory, a processor, and a limb motion recognition program stored on the memory and operable on the processor, the limb motion recognition program being configured to implement the steps of the limb motion recognition method described above.
  • The present disclosure further provides a storage medium on which a limb motion recognition program is stored; when the limb motion recognition program is executed by a processor, the steps of the limb motion recognition method described above are implemented.
  • The limb motion recognition method proposed by the present disclosure extracts limb motion information from a video to be processed, processes the limb motion information to generate a limb motion label, and compares the label with a preset limb motion database, thereby implementing refined recognition of limb motions and improving the accuracy of limb motion recognition.
  • FIG. 1 is a schematic structural diagram of a video database of a hardware operating environment involved in an embodiment of the present disclosure
  • FIG. 2 is a schematic flow chart of a first embodiment of a method for recognizing a limb motion according to the present disclosure
  • FIG. 3 is a schematic flow chart of a second embodiment of a method for recognizing a limb motion according to the present disclosure
  • FIG. 4 is a schematic flow chart of a third embodiment of a method for recognizing a limb motion according to the present disclosure.
  • FIG. 1 is a schematic structural diagram of a video database of a hardware operating environment according to an embodiment of the present disclosure.
  • the user terminal may include a processor 1001, such as a CPU, a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005.
  • the communication bus 1002 is used to implement connection communication between these components.
  • the user interface 1003 can include a display, an input unit such as a keyboard, and the optional user interface 1003 can also include a standard wired interface, a wireless interface.
  • the network interface 1004 can optionally include a standard wired interface, a wireless interface (such as a WI-FI interface).
  • The memory 1005 may be a high-speed RAM memory or a stable (non-volatile) memory, such as disk storage.
  • the memory 1005 can also optionally be a storage device independent of the aforementioned processor 1001.
  • The structure shown in FIG. 1 does not constitute a limitation of the video database, which may include more or fewer components than illustrated, combine some components, or use a different arrangement of components.
  • the memory 1005 as a storage medium may include an operating system, a network communication module, a user interface module, and a limb motion recognition program.
  • the network interface 1004 is mainly used for connecting a video interface and performing data communication with a video interface;
  • The user interface 1003 is mainly used for connecting a user terminal and performing data communication with that terminal. In the video database of the present disclosure, the processor 1001 and the memory 1005 may be disposed in a limb motion recognition device; the processor 1001 calls the limb motion recognition program stored in the memory 1005 and performs the following operations:
  • the first limb motion information includes change information of the first color, change information of the first limb motion, and first motion degree information;
  • the limb motion in the to-be-processed video is determined according to the comparison result to implement the limb motion recognition.
  • the processor 1001 may call the limb motion recognition program stored in the memory 1005, and further perform the following operations:
  • the processor 1001 may call the limb motion recognition program stored in the memory 1005, and further perform the following operations:
  • the processor 1001 may call the limb motion recognition program stored in the memory 1005, and further perform the following operations:
  • Corresponding third part feature information is determined according to the second correspondence relationship between the change information of the first limb motion and the third part feature information.
  • the processor 1001 may call the limb motion recognition program stored in the memory 1005, and further perform the following operations:
  • The first motion level information is converted into a graphic, and the imaged first motion level information is combined with the part feature information to generate a first limb motion label.
  • the processor 1001 may call the limb motion recognition program stored in the memory 1005, and further perform the following operations:
  • the second limb motion information including change information of the second color, change information of the second limb motion, and second motion degree information;
  • the processor 1001 may call the limb motion recognition program stored in the memory 1005, and further perform the following operations:
  • the processor 1001 may call the limb motion recognition program stored in the memory 1005, and further perform the following operations:
  • The limb motion information in the video to be processed is extracted and processed to generate a limb motion label, which is then compared with a preset limb motion database, thereby implementing refined recognition of limb motions and improving the accuracy of limb motion recognition.
  • FIG. 2 is a schematic flow chart of a first embodiment of a method for recognizing a limb motion according to the present disclosure.
  • the limb motion recognition method comprises the following steps:
  • Step S10 Extracting first limb motion information in the to-be-processed video, where the first limb motion information includes change information of the first color, change information of the first limb motion, and first motion degree information;
  • A minute motion of the human body may produce a color change.
  • In this embodiment, Eulerian video magnification is used to amplify the video to be processed so as to obtain the color change information, the limb motion change information, and the motion degree information. The technique is not limited to Eulerian video magnification; any other technique that implements the same or a similar function may be used, which is not limited in this embodiment.
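  • The temporal amplification described above can be sketched in a few lines. Below is a minimal, illustrative model of Eulerian-style magnification applied to a single pixel's intensity over time; the function name, parameters, and the choice of an FFT band-pass filter are assumptions for illustration, not the patent's implementation:

```python
import numpy as np

def amplify_temporal_band(signal, fps, low_hz, high_hz, alpha):
    """Amplify variation within [low_hz, high_hz] of a per-pixel
    intensity time series, in the spirit of Eulerian video magnification."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= low_hz) & (freqs <= high_hz)   # keep only the band of interest
    filtered = np.fft.irfft(spectrum * band, n=len(signal))
    return signal + alpha * filtered                 # add back the amplified band

# A faint 1 Hz oscillation (e.g. breathing-induced color change)
# buried in an almost constant pixel intensity.
fps = 30
t = np.arange(fps * 4) / fps
subtle = 100.0 + 0.05 * np.sin(2 * np.pi * 1.0 * t)
amplified = amplify_temporal_band(subtle, fps, 0.5, 2.0, alpha=50.0)
```

After amplification the peak-to-peak swing of the oscillation grows by roughly a factor of (1 + alpha), turning an invisible change into a measurable one.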
  • A subtle movement of the human body produces a color change; for example, under the same lighting conditions, the breathing of the human body causes the nose area to rise and fall, and this rising and falling produces a color change.
  • the change information of the movement of the limb may include minute movements such as a breathing motion, a blinking motion, a heartbeat motion, a pulse motion, and a knee motion.
  • The motion degree information may be the amplitude, intensity, and frequency of a body motion. For example, when a person sprints, the arm swing frequency is relatively high at first, but as exercise time accumulates and the body's energy is consumed, the arm swing frequency gradually weakens. Using the motion degree information as one of the reference inputs for limb motion recognition improves the refinement of the recognition.
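  • As a hedged illustration of how a frequency component of the motion degree information might be extracted, the dominant frequency of a motion trajectory can be estimated from its spectrum; the function name and the FFT-peak approach are illustrative assumptions, not the patent's method:

```python
import numpy as np

def dominant_frequency(signal, fps):
    """Estimate the dominant frequency (Hz) of a motion signal,
    e.g. an arm-swing trajectory, via the peak of its FFT magnitude."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))  # remove DC first
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

fps = 30
t = np.arange(fps * 4) / fps
fast_swing = np.sin(2 * np.pi * 2.0 * t)   # fresh sprint: ~2 Hz arm swing
slow_swing = np.sin(2 * np.pi * 0.5 * t)   # fatigued: swing weakens to ~0.5 Hz
```

Tracking how this estimate drifts over time captures exactly the "swing frequency gradually weakens" cue described above.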
  • Step S20 determining corresponding first part feature information according to the change information of the first color and the change information of the first limb motion
  • the part information corresponding to the motion of the limb is determined.
  • The corresponding part information is determined according to the specific features of each part: for example, the face position is confirmed from the color change characteristic, the nose position from the breathing motion characteristic, the eye position from the blinking motion characteristic, the heart position from the heartbeat motion characteristic, the wrist position from the pulse motion characteristic, and the knee position from the leg motion characteristic.
  • In addition, the position of the sole of the foot can be confirmed from the weight distribution feature, so the part information can be determined more accurately through the color change information, the limb motion change information, and the weight distribution feature information.
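  • The cue-to-part correspondences listed above amount to a lookup table. A minimal sketch follows; the dictionary keys and helper name are hypothetical, introduced only to make the mapping concrete:

```python
# Hypothetical mapping from the motion/color cues named above to body parts.
FEATURE_TO_PART = {
    "color_change": "face",
    "breathing": "nose",
    "blinking": "eyes",
    "heartbeat": "heart",
    "pulse": "wrist",
    "leg_motion": "knee",
    "weight_distribution": "sole",
}

def locate_parts(detected_features):
    """Resolve each detected cue to its body part, ignoring unknown cues."""
    return {f: FEATURE_TO_PART[f] for f in detected_features
            if f in FEATURE_TO_PART}
```

For example, detecting blinking and a pulse cue resolves to the eyes and the wrist, giving the part feature information used in the next step.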
  • Step S30 generating the first limb motion label by combining the first part feature information with the first action level information
  • the limb motion information is symbolized, so that when the limb motion recognition is performed, the limb motion can be more accurately recognized.
  • The part information may be connected into lines to obtain a three-dimensional feature figure, and the feature figure is combined with the action degree information to generate a limb motion label. For example, when the body performs a motion, the positions of the eyes, nose, wrist, knee, and sole are confirmed; these parts are connected into lines to generate the three-dimensional feature figure, which is then combined with the motion level information to generate the limb motion tag.
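  • One plausible representation of such a label — the located parts joined into a skeleton figure plus the motion degree information — is sketched below; the class name and field names are illustrative assumptions, not the patent's data format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LimbMotionLabel:
    """A limb motion label: part positions joined into a skeleton figure,
    combined with motion degree information (names are illustrative)."""
    skeleton: tuple       # ordered sequence of (part_name, (x, y, z)) points
    amplitude: float      # motion amplitude component of the degree info
    frequency_hz: float   # motion frequency component of the degree info

def make_label(parts_3d, amplitude, frequency_hz):
    # Sorting gives a canonical ordering so identical motions compare equal.
    skeleton = tuple(sorted(parts_3d.items()))
    return LimbMotionLabel(skeleton, amplitude, frequency_hz)

parts = {"eyes": (0.0, 1.70, 0.00),
         "nose": (0.0, 1.65, 0.05),
         "wrist": (0.3, 1.00, 0.00)}
label = make_label(parts, amplitude=0.02, frequency_hz=0.3)
```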
  • The limb motion tag may record a limb motion and its corresponding status information. For example, a "baby about to wake up" label records the limb motion information of a baby that is about to wake up, such as small facial movements like the eyes beginning to open, obtained from the captured minute facial motions of the baby.
  • Step S40 comparing the first limb motion label with a limb motion label in the preset limb motion database
  • the limb motion label is compared with the limb motion label in the preset limb motion database, and the accuracy of the limb motion recognition is determined by the comparison result.
  • The preset limb motion database is a pre-established database containing correspondences between limb motion labels and limb motion information; refined limb motion recognition is performed according to these correspondences, thereby improving the accuracy of limb motion recognition.
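  • Under simple assumptions, the comparison against the preset database could be a nearest-match over the label's numeric components; the tolerance, field names, and distance measure below are illustrative, not specified by the patent:

```python
def match_label(label, database, tolerance=0.1):
    """Return the best-matching recorded motion for a generated label,
    or None if nothing in the database is close enough. Illustrative only."""
    best_name, best_dist = None, float("inf")
    for name, ref in database.items():
        # L1 distance over the motion-degree components of the label.
        dist = (abs(label["amplitude"] - ref["amplitude"])
                + abs(label["frequency_hz"] - ref["frequency_hz"]))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= tolerance else None

database = {
    "resting_breathing": {"amplitude": 0.02, "frequency_hz": 0.25},
    "running_arm_swing": {"amplitude": 0.40, "frequency_hz": 2.00},
}
observed = {"amplitude": 0.03, "frequency_hz": 0.28}
```

A close observation resolves to "resting_breathing"; an observation far from every entry yields None, which is what makes abnormal motions detectable later.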
  • a limb motion database may be established in advance, and the preset limb motion database is established as follows:
  • the environment feature information in the to-be-processed video is extracted, and the part feature information, the environment feature information, and the action degree information are combined to generate a limb motion tag.
  • Adding environmental information can improve the accuracy of limb motion recognition. For example, the person in the video is bending down, and the system judges that the person may be taking a rest or may be picking something up from the ground; but when, in the process of extracting environmental information, item information for a shoe is extracted next to the person, it can be judged that the person is likely tying a shoe, thereby improving the accuracy of recognizing the motion.
  • the limb motion label is classified, and a correspondence between the limb motion and the limb motion label is established according to the classification result, and a preset limb motion database is generated according to the correspondence.
  • The limb motion labels are classified, for example into "standing"; the limb motion information corresponding to such a label is standing-related limb motion, so the limb motion can be quickly recognized from the label, thereby improving the response time of limb motion recognition.
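  • In a minimal sketch, building the classified database described above reduces to grouping labels by their class so that recognition-time lookup is a single key access; the category and label names are illustrative:

```python
from collections import defaultdict

def build_motion_database(labeled_examples):
    """Group (category, label) pairs into a category -> labels mapping,
    preserving the classification -> label correspondence."""
    db = defaultdict(list)
    for category, label in labeled_examples:
        db[category].append(label)
    return dict(db)

examples = [("standing", "upright_still"),
            ("standing", "upright_sway"),
            ("bending", "tie_shoelace")]
db = build_motion_database(examples)
```

At recognition time, a candidate classified as "standing" only needs to be compared against the two standing labels rather than the whole database, which is the response-time gain the text describes.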
  • Step S50 Determine a limb motion in the to-be-processed video according to the comparison result to implement limb motion recognition.
  • The limb motion information in the video to be processed is extracted and processed to generate a limb motion label, which is then compared with a preset limb motion database, thereby implementing refined recognition of limb motions and improving the accuracy of limb motion recognition.
  • the to-be-processed video includes first environment feature information.
  • the step S30 includes:
  • step S301 the first environment feature information in the to-be-processed video is extracted, and the first part feature information and the first action level information are combined with the first environment feature information to generate a first limb action tag.
  • the environment feature information in the to-be-processed video is extracted, and the part feature information, the environment feature information, and the action degree information are combined to generate a limb motion tag.
  • Adding environmental information can improve the accuracy of limb motion recognition. For example, the person in the video is bending down, and the system judges that the person may be resting or picking something up from the ground; but when item information for a shoe is extracted next to the person, it can be judged that the person is likely tying a shoe, thereby improving the accuracy of recognizing the motion.
  • The environment feature information is extracted and combined with the part feature information and the action degree information to generate the limb motion tag, which refines the tag and improves the correctness of limb motion recognition.
  • the method includes:
  • Step S00 The captured video is amplified using Eulerian video magnification to obtain an amplified captured video, the amplified captured video is used as the video to be processed, and the first color information in the video to be processed is extracted.
  • the limb motion information in the embodiment may be minute limb motion information
  • the minute limb motion information may be pulse motion information, respiratory motion information, blink motion information, heartbeat motion information, and the like.
  • the small body motion information is sometimes not visible to the human eye.
  • Eulerian video magnification amplifies the limb motion information, so the technique can be used to identify smaller limb motions.
  • The captured video is amplified by Eulerian video magnification to obtain the amplified captured video, which is used as the video to be processed, so that the feature information in the captured video can be identified more accurately.
  • the first part information includes second part information and third part information
  • the method includes:
  • Step S201 determining corresponding second part feature information according to the first correspondence between the change information of the first color and the second part feature information
  • The limb motion recognition device may be a device that learns from basic feature information and may have a basic learning capability. The device stores a correspondence between color feature information and part feature information, and the part information can be determined through this correspondence; for example, the face position is confirmed from the color change feature information.
  • Step S202 Determine corresponding third part feature information according to the second correspondence between the change information of the first limb motion and the third part feature information.
  • The limb motion recognition device stores a correspondence between the change information of the limb motion and the part feature information, and the part information can be determined through this correspondence.
  • The feature information of the other parts is confirmed from the change characteristics of the limb motion, where the corresponding part information can be determined according to the specific features of each part: for example, the face position is confirmed from the color change feature, the nose position from the breathing motion feature, the eye position from the blinking motion feature, the heart position from the heartbeat motion feature, the wrist position from the pulse motion feature, and the knee position from the leg motion feature.
  • step S30 includes:
  • Step S302 Perform image processing on the first motion level information, and generate the first limb motion label by combining the imaged first motion level information with the part feature information.
  • The motion degree information may be the amplitude, intensity, and frequency of a body motion. For example, when a person sprints, the arm swing frequency is relatively high at first, but as exercise time accumulates and the body's energy is consumed, the arm swing frequency gradually weakens. Using the motion degree information as one of the reference inputs for limb motion recognition improves the refinement of the recognition.
  • By imaging the amplitude, intensity, and frequency of the body's limb motion and combining the result with the three-dimensional figure constructed from the part information to generate the limb motion label, the accuracy and readability of the symbolized limb motion are enhanced.
  • the method includes:
  • The corresponding fourth part feature information is determined according to the change information of the second color and the change information of the second limb motion; the second environment feature information in the to-be-processed video is extracted, and the fourth part feature information, the second environment feature information, and the second action level information are combined to generate a second limb motion tag; the second limb motion tag is classified, a third correspondence between the second limb motion and the second limb motion tag is established according to the classification result, and the preset limb motion database is generated according to the third correspondence.
  • a refined limb motion database is established, and the database is compared with the limb motion label generated by the limb motion information in the captured video, and the accuracy of the limb motion recognition is improved by comparing the results.
  • the environment feature information in the to-be-processed video is extracted, and the part feature information, the environment feature information, and the action degree information are combined to generate a limb motion tag.
  • Adding environmental information can improve the accuracy of limb motion recognition. For example, the person in the video is bending down, and the system judges that the person may be taking a rest or may be picking something up from the ground; but when, in the process of extracting environmental information, item information for a shoe is extracted next to the person, it can be judged that the person is likely tying a shoe, thereby improving the accuracy of recognizing the motion.
  • the limb motion label is classified, and a correspondence between the limb motion and the limb motion label is established according to the classification result, and a preset limb motion database is generated according to the correspondence.
  • The limb motion labels are classified, for example into "standing"; the limb motion information corresponding to such a label is standing-related limb motion, so the limb motion can be quickly recognized from the label, thereby improving the response time of limb motion recognition.
  • The robot can undergo deep learning and training so that it has the ability to recognize fine limb motions. For example, an intelligent robot can, through video detection, database information comparison, and limb motion recognition, confirm whether an elderly person whose eyes are closed is asleep and confirm that their breathing is normal; if the limb motion characteristics of sleep apnea are detected, a warning message is immediately transmitted to relatives or caregivers.
  • the solution provided by this embodiment compares the database with the body motion label in the captured video by establishing a refined limb motion database, thereby improving the accuracy of the limb motion recognition.
  • the method includes:
  • Step S01 extracting each feature information in the to-be-processed video, and comparing the feature information with the preset skin color feature information;
  • To improve the accuracy of the extracted limb motion information, it is first determined whether a human body is present in the video to be processed; only when a human body is present is the limb motion information of that body extracted from the video.
  • It is determined whether the feature information in the video to be processed includes the skin color feature information. If the skin color feature information is included, this indicates that a person is present in the video and subsequent processing proceeds; if there is no skin color feature information, there is no feature information of a person in the video, no subsequent judgment is performed, and the efficiency of limb motion recognition is thereby improved.
  • step S02 when the preset skin color feature information is included in each feature information in the to-be-processed video, the step of extracting the first limb motion information in the to-be-processed video is performed.
  • the solution provided in this embodiment determines the presence or absence of the human body by determining the skin color feature information of the video to be processed, and improves the efficiency of performing the limb motion recognition.
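  • A common way to implement such a skin-color presence check — though the patent does not specify its criteria — is a YCrCb chrominance threshold heuristic; the conversion coefficients and thresholds below are conventional values, assumed here for illustration:

```python
import numpy as np

def contains_skin(frame_rgb, min_fraction=0.01):
    """Decide whether a frame contains skin-colored pixels using a
    standard YCrCb threshold heuristic (not the patent's exact criteria)."""
    r, g, b = (frame_rgb[..., i].astype(float) for i in range(3))
    # RGB -> Cr/Cb chrominance (ITU-R BT.601 full-range approximation).
    cr = 128 + 0.500 * r - 0.419 * g - 0.081 * b
    cb = 128 - 0.169 * r - 0.331 * g + 0.500 * b
    mask = (cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)
    return mask.mean() >= min_fraction   # enough skin-toned pixels?

skin_patch = np.full((8, 8, 3), (200, 150, 120), dtype=np.uint8)  # skin tone
sky_patch = np.full((8, 8, 3), (60, 120, 220), dtype=np.uint8)    # blue sky
```

Frames that fail this check can be skipped entirely, which is the efficiency gain the step describes.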
  • step S50 includes:
  • Step S501 When the third limb motion label is consistent with a limb motion label in the preset limb motion database, and the fourth limb motion label is inconsistent with the limb motion labels in the preset limb motion database, an alarm message indicating that the limb motion is abnormal is sent.
  • the limb motion label in the video to be processed may include a plurality of limb motion labels, such as a limb motion label of the hand, a limb motion label of the head, and the like.
  • Among the plurality of limb motion labels in the video to be processed, some labels may be consistent with the labels in the preset limb motion database while others are inconsistent; in that case, alarm information indicating the abnormal limb motion can be transmitted, thereby improving the accuracy of limb motion recognition.
  • For example, the limb motion labels of the person's hand and leg are consistent with the labels in the preset limb motion database, but the limb motion label of the head is inconsistent; an alarm message indicating an abnormal head motion can then be transmitted.
  • The alarm information for an abnormal limb motion is transmitted after, for example, video detection, database information comparison, and limb motion recognition confirm whether an elderly person whose eyes are closed is asleep and whether their breathing is normal; if the limb motion characteristics of sleep apnea are detected, the warning message is immediately transmitted to relatives or caregivers.
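  • The partial-match alarm logic of step S501 can be sketched as follows; the label names and message format are hypothetical, chosen only to show the "some match, some do not" condition:

```python
def check_for_abnormal_motion(observed_labels, preset_labels):
    """If some observed labels match the preset database and others do not,
    return an alarm message naming the mismatched (abnormal) labels."""
    matched = [l for l in observed_labels if l in preset_labels]
    abnormal = [l for l in observed_labels if l not in preset_labels]
    if matched and abnormal:
        return "ALERT: abnormal limb motion: " + ", ".join(abnormal)
    return None   # all labels match (or nothing matched at all)

preset = {"hand_rest", "leg_rest", "head_still", "normal_breathing"}
observed = ["hand_rest", "leg_rest", "head_jerk"]
```

Here the hand and leg labels match the database while the head label does not, so an alarm naming the head motion is produced, mirroring the example above.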
  • the solution provided by this embodiment improves the user experience by identifying abnormal limb motion information.
  • the embodiment of the present disclosure further provides a storage medium, where the limb motion recognition program is stored, and when the limb motion recognition program is executed by the processor, the following operations are implemented:
  • the first limb motion information includes change information of the first color, change information of the first limb motion, and first motion degree information;
  • the limb motion in the to-be-processed video is determined according to the comparison result to implement the limb motion recognition.
  • Corresponding third part feature information is determined according to the second correspondence relationship between the change information of the first limb motion and the third part feature information.
  • the first motion level information is imaged, and the first motion level information is combined with the part feature information to generate a first limb motion label.
  • the second limb motion information including change information of the second color, change information of the second limb motion, and second motion degree information;
  • The limb motion recognition method proposed in this embodiment extracts the limb motion information from the video to be processed, processes it to generate a limb motion label, and compares the label with the preset limb motion database, thereby implementing fine-grained recognition of limb motions and improving the accuracy of limb motion recognition.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

Disclosed are a limb motion recognition method, a robot, and a storage medium. The method comprises: extracting limb motion information from a video to be processed, the limb motion information comprising color change information, limb motion change information, and motion degree information; determining corresponding part feature information according to the color change information and the limb motion change information; combining the part feature information with the motion degree information to generate a limb motion label; comparing the limb motion label with the limb motion labels in a preset limb motion database; and determining the limb motion in the video to be processed according to the comparison result so as to perform limb motion recognition.
PCT/CN2018/091370 2017-08-07 2018-06-15 Procédé de reconnaissance de mouvement corporel, robot et support de stockage WO2019029266A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710668382.9 2017-08-07
CN201710668382.9A CN107609474B (zh) 2017-08-07 2017-08-07 肢体动作识别方法、装置、机器人及存储介质

Publications (1)

Publication Number Publication Date
WO2019029266A1 true WO2019029266A1 (fr) 2019-02-14

Family

ID=61064365

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/091370 WO2019029266A1 (fr) 2017-08-07 2018-06-15 Procédé de reconnaissance de mouvement corporel, robot et support de stockage

Country Status (2)

Country Link
CN (1) CN107609474B (fr)
WO (1) WO2019029266A1 (fr)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107609474B (zh) * 2017-08-07 2020-05-01 深圳市科迈爱康科技有限公司 Limb motion recognition method and apparatus, robot, and storage medium
CN108391162B (zh) * 2018-01-31 2021-12-03 科大讯飞股份有限公司 Volume adjustment method and apparatus, storage medium, and electronic device
CN110314344B (zh) * 2018-03-30 2021-08-24 杭州海康威视数字技术股份有限公司 Exercise reminder method, apparatus, and system
CN109411050A (zh) * 2018-09-30 2019-03-01 深圳市科迈爱康科技有限公司 Exercise prescription execution method, system, and computer-readable storage medium
CN111107279B (zh) * 2018-10-26 2021-06-29 北京微播视界科技有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN111126100B (zh) * 2018-10-30 2023-10-17 杭州海康威视数字技术股份有限公司 Alarm method and apparatus, electronic device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103399637A (zh) * 2013-07-31 2013-11-20 西北师范大学 Human-computer interaction method for an intelligent robot based on Kinect human skeleton tracking
CN105245828A (zh) * 2015-09-02 2016-01-13 北京旷视科技有限公司 Article analysis method and device
CN105867630A (zh) * 2016-04-21 2016-08-17 深圳前海勇艺达机器人有限公司 Gesture recognition method and apparatus for a robot, and robot system
CN205486164U (zh) * 2016-01-21 2016-08-17 合肥君达高科信息技术有限公司 Facial 3D expression and motion recognition system
CN107609474A (zh) * 2017-08-07 2018-01-19 深圳市科迈爱康科技有限公司 Limb motion recognition method and apparatus, robot, and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8269834B2 (en) * 2007-01-12 2012-09-18 International Business Machines Corporation Warning a user about adverse behaviors of others within an environment based on a 3D captured image stream
US7840031B2 (en) * 2007-01-12 2010-11-23 International Business Machines Corporation Tracking a range of body movement based on 3D captured image streams of a user
JP6206411B2 (ja) * 2012-09-06 2017-10-04 ソニー株式会社 Information processing apparatus, information processing method, and program
CN103679154A (zh) * 2013-12-26 2014-03-26 中国科学院自动化研究所 Method for recognizing three-dimensional gesture motions based on depth images
CN104665789A (zh) * 2015-01-26 2015-06-03 周常安 Physiological feedback system
CN106778450B (zh) * 2015-11-25 2020-04-24 腾讯科技(深圳)有限公司 Facial recognition method and apparatus
CN106022208A (zh) * 2016-04-29 2016-10-12 北京天宇朗通通信设备股份有限公司 Human body motion recognition method and apparatus
CN106156757B (zh) * 2016-08-02 2019-08-09 中国银联股份有限公司 Face recognition method and face recognition system combined with liveness detection technology


Also Published As

Publication number Publication date
CN107609474A (zh) 2018-01-19
CN107609474B (zh) 2020-05-01

Similar Documents

Publication Publication Date Title
WO2019029266A1 (fr) Limb motion recognition method, robot, and storage medium
AU2018327869B2 (en) Method and system for detecting dangerous situation
WO2017026680A1 (fr) Method for detecting biometric information and electronic device using same
WO2019085495A1 (fr) Micro-expression recognition method, apparatus, and system, and computer-readable storage medium
WO2021107506A1 (fr) Electronic device for providing augmented reality service and operating method thereof
WO2015102156A1 (fr) Method and apparatus for measuring body fat using a mobile device
WO2018135884A1 (fr) Electronic device for acquiring fingerprints and control method thereof
WO2016133349A1 (fr) Electronic device and method for measuring biometric information
EP3350679A1 (fr) Electronic device and gesture processing method therefor
WO2018048054A1 (fr) Method and device for producing a virtual reality interface based on single-camera 3D image analysis
WO2021025482A1 (fr) Electronic device and method for generating an attestation certificate based on a fused key
WO2017148112A1 (fr) Fingerprint input method and terminal
WO2019088555A1 (fr) Electronic device and method for determining degree of conjunctival hyperemia using same
WO2017084337A1 (fr) Identity verification method, apparatus, and system
WO2019000801A1 (fr) Data synchronization method, apparatus, and device, and computer-readable storage medium
WO2013022226A4 (fr) Method and apparatus for generating customer personal information, recording medium therefor, and POS system
WO2017111468A1 (fr) Method, storage medium, and electronic device for executing a function based on a biometric signal
EP3868124A1 (fr) Electronic device comprising an earphone and method for controlling the electronic device
WO2020209624A1 (fr) Head-mounted display device and operating method thereof
WO2020251073A1 (fr) Massage device
WO2017080195A1 (fr) Audio recognition method and device
WO2019194651A1 (fr) Method and device for measuring biometric information in an electronic device
WO2020061887A1 (fr) Heart rate measurement method and device, and computer-readable storage medium
WO2019156433A1 (fr) Method for generating heart rate variability information relating to an external object using a plurality of filters, and device therefor
WO2019144535A1 (fr) Terminal unlocking method, apparatus, and device, and readable storage medium

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 18844285

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 18844285

Country of ref document: EP

Kind code of ref document: A1