CN107609474B - Limb action recognition method and device, robot and storage medium - Google Patents

Info

Publication number
CN107609474B
Authority
CN
China
Prior art keywords
limb
information
action
limb action
video
Prior art date
Legal status
Active
Application number
CN201710668382.9A
Other languages
Chinese (zh)
Other versions
CN107609474A (en
Inventor
袁晖
李凝华
Current Assignee
Shenzhen Ikmak Technology Co ltd
Original Assignee
Shenzhen Ikmak Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Ikmak Technology Co ltd filed Critical Shenzhen Ikmak Technology Co ltd
Priority to CN201710668382.9A priority Critical patent/CN107609474B/en
Publication of CN107609474A publication Critical patent/CN107609474A/en
Priority to PCT/CN2018/091370 priority patent/WO2019029266A1/en
Application granted
Publication of CN107609474B publication Critical patent/CN107609474B/en


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/40 — Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a limb action recognition method, a limb action recognition device, a robot and a storage medium. The method comprises the following steps: extracting limb action information from a video to be processed, wherein the limb action information comprises color change information, limb movement change information and action degree information; determining corresponding part characteristic information according to the color change information and the limb movement change information; combining the part characteristic information with the action degree information to generate a limb action label; comparing the limb action label with the limb action labels in a preset limb action database; and determining the limb action in the video to be processed according to the comparison result, thereby realizing limb action recognition. By converting the limb action information into labels and recognizing limb actions against the corresponding limb action database, the invention improves the granularity of limb action recognition.

Description

Limb action recognition method and device, robot and storage medium
Technical Field
The invention relates to the field of video processing, in particular to a limb action recognition method, a limb action recognition device, a robot and a storage medium.
Background
The human visual system has limited spatiotemporal sensitivity, but many signals that fall below its recognition threshold still carry useful information. For example, a person's skin color varies slightly with blood circulation; although this variation is invisible to the naked eye, it can be used to assist in diagnosing human health. Likewise, small-amplitude motions that are invisible or barely visible to the human eye can, once magnified, reveal medically meaningful behavior and other phenomena in the world around us.
Disclosure of Invention
The invention mainly aims to provide a limb action recognition method, a limb action recognition device, a robot and a storage medium, and aims to solve the technical problem that limb action recognition in the prior art is not fine enough.
In order to achieve the above object, the present invention provides a limb movement recognition method, including the following steps:
extracting first limb action information in a video to be processed, wherein the first limb action information comprises change information of a first color, change information of first limb movement and first action degree information;
determining corresponding first part characteristic information according to the change information of the first color and the change information of the first limb movement;
combining the first part characteristic information with the first action degree information to generate a first limb action label;
comparing the first limb action label with a limb action label in a preset limb action database;
and determining the limb action in the video to be processed according to the comparison result so as to realize limb action identification.
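The five steps above form a complete recognition pipeline. The following Python sketch is purely illustrative: every function name, label format and database entry is invented here to show the data flow, and none of them comes from the patent itself.

```python
# Hypothetical sketch of the five-step recognition flow described above.
# All names and label formats are illustrative, not from the patent.

def extract_limb_action_info(video_frames):
    # Step 1: extract color-change, limb-movement-change and
    # action-degree information (stubbed with fixed values here).
    return {"color_change": "facial_flush", "movement": "arm_swing", "degree": 0.8}

def determine_part_features(color_change, movement):
    # Step 2: map the change cues to body-part features.
    part_map = {("facial_flush", "arm_swing"): ["face", "arm"]}
    return part_map.get((color_change, movement), ["unknown"])

def build_label(parts, degree):
    # Step 3: combine part features with action-degree info into one tag.
    return f"{'+'.join(parts)}@{degree:.1f}"

def recognize(video_frames, database):
    # Steps 4-5: compare the tag against a preset tag database and
    # report the matched limb action, if any.
    info = extract_limb_action_info(video_frames)
    parts = determine_part_features(info["color_change"], info["movement"])
    label = build_label(parts, info["degree"])
    return database.get(label, "unrecognized")

database = {"face+arm@0.8": "running"}
print(recognize([], database))  # expected: running
```

The dictionary stands in for the "preset limb action database"; a real system would hold many tags per action class.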
Preferably, the video to be processed includes first environmental characteristic information;
the generating a first limb action tag by combining the first part feature information with the first action degree information specifically includes:
extracting first environment characteristic information in the video to be processed, and combining the first part characteristic information and the first action degree information with the first environment characteristic information to generate a first limb action label.
Preferably, before the extracting the first limb action information in the video to be processed, the method includes:
amplifying the collected video by using the Eulerian video magnification technique, acquiring the amplified collected video, taking the amplified collected video as the to-be-processed video, and extracting change information of a first color, change information of first limb movement, first action degree information and first environment characteristic information in the to-be-processed video.
Preferably, the first part characteristic information includes second part characteristic information and third part characteristic information;
before determining corresponding first part characteristic information according to the change information of the first color and the change information of the first limb movement, the method comprises the following steps:
determining corresponding second part characteristic information according to the first corresponding relation between the change information of the first color and the second part characteristic information;
and determining corresponding third part characteristic information according to the second corresponding relation between the change information of the first limb movement and the third part characteristic information.
Preferably, the generating a first limb action tag by combining the part feature information with the first action degree information specifically further includes:
and converting the first action degree information into image form, and combining the imaged first action degree information with the part characteristic information to generate a first limb action label.
Preferably, before comparing the limb motion label with a limb motion label in a preset limb motion database, the method comprises:
acquiring a sample video, and extracting second limb action information in the sample video, wherein the second limb action information comprises change information of a second color, change information of second limb movement and second action degree information;
determining corresponding fourth part characteristic information according to the change information of the second color and the change information of the second limb movement;
extracting second environment characteristic information in the sample video, and combining the fourth part characteristic information, the second environment characteristic information and the second action degree information to generate a second limb action label;
classifying the second limb action labels, establishing a third corresponding relation between the second limb action and the second limb action labels according to a classification result, and generating a preset limb action database according to the third corresponding relation.
Preferably, before the extracting the first limb action information in the video to be processed, the method further includes:
extracting each feature information in a video to be processed, and comparing each feature information with preset skin color feature information;
and when each feature information in the video to be processed contains the preset skin color feature information, executing the step of extracting the first body action information in the video to be processed.
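The skin-color pre-check above gates the more expensive extraction step. A minimal sketch of such a gate is shown below; the YCrCb thresholds are common skin-detection heuristics, not values specified in the patent, and the conversion coefficients are the standard ITU-R BT.601 ones:

```python
import numpy as np

def contains_skin(frame_rgb, min_fraction=0.01):
    # Hypothetical pre-check: proceed with limb-action extraction only if
    # the frame contains enough skin-colored pixels. Thresholds below are
    # common YCrCb skin heuristics, not values from the patent.
    r = frame_rgb[..., 0].astype(float)
    g = frame_rgb[..., 1].astype(float)
    b = frame_rgb[..., 2].astype(float)
    cr = 128 + 0.5 * r - 0.4187 * g - 0.0813 * b   # BT.601 Cr
    cb = 128 - 0.1687 * r - 0.3313 * g + 0.5 * b   # BT.601 Cb
    mask = (cr >= 133) & (cr <= 173) & (cb >= 77) & (cb <= 127)
    return bool(mask.mean() >= min_fraction)

# A patch of a typical skin tone (RGB ~ (200, 150, 120)) passes the gate.
skin_patch = np.full((8, 8, 3), (200, 150, 120), dtype=np.uint8)
print(contains_skin(skin_patch))  # expected: True
```

If the gate returns False, the extraction step is skipped, which matches the conditional execution described above.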
Preferably, the first limb action tag comprises a third limb action tag and a fourth limb action tag;
the step of comparing the first limb action tag with limb action tags in a preset limb action database specifically comprises:
and sending alarm information of an abnormal limb action when the third limb action label is consistent with a limb action label in the preset limb action database and the fourth limb action label is inconsistent with the limb action labels in the preset limb action database.
In addition, in order to achieve the above object, the present invention further provides a limb movement recognition device, including: a memory, a processor and a limb motion recognition program stored on the memory and executable on the processor, the limb motion recognition program being configured to implement the steps of a limb motion recognition method as described above.
In addition, to achieve the above object, the present invention also provides a robot comprising: a memory, a processor and a limb motion recognition program stored on the memory and executable on the processor, the limb motion recognition program being configured to implement the steps of a limb motion recognition method as described above.
In addition, to achieve the above object, the present invention further provides a storage medium, wherein the storage medium stores a limb movement recognition program, and the limb movement recognition program realizes the steps of the limb movement recognition method as described above when being executed by a processor.
According to the limb action recognition method provided by the invention, the limb action information in the video to be processed is extracted and processed to generate a limb action label, and the label is then compared with a preset limb action database, so that limb actions are recognized in fine detail and the accuracy of limb action recognition is improved.
Drawings
FIG. 1 is a schematic diagram of a video database architecture of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a limb movement recognition method according to a first embodiment of the present invention;
FIG. 3 is a flowchart illustrating a limb movement recognition method according to a second embodiment of the present invention;
FIG. 4 is a flowchart illustrating a limb movement recognition method according to a third embodiment of the present invention;
FIG. 5 is a flowchart illustrating a fourth embodiment of a method for recognizing limb movements according to the present invention;
FIG. 6 is a flowchart illustrating a limb movement recognition method according to a fifth embodiment of the present invention;
FIG. 7 is a flowchart illustrating a limb movement recognition method according to a sixth embodiment of the present invention;
FIG. 8 is a flowchart illustrating a seventh embodiment of a method for recognizing limb movements according to the present invention;
Fig. 9 is a flowchart illustrating a limb movement recognition method according to an eighth embodiment of the invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic diagram of a video database structure of a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the user terminal may include: a processor 1001 such as a CPU, a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable communication among these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the video database structure shown in fig. 1 does not constitute a limitation of the video database and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a storage medium, may include therein an operating system, a network communication module, a user interface module, and a limb motion recognition program.
In the video database shown in fig. 1, the network interface 1004 is mainly used for connecting the video interface and performing data communication with the video interface; the user interface 1003 is mainly used for connecting a user terminal and performing data communication with the terminal; the processor 1001 and the memory 1005 in the video database of the present invention may be provided in a body motion recognition apparatus, and the body motion recognition apparatus calls the body motion recognition program stored in the memory 1005 through the processor 1001 and executes the following operations:
extracting first limb action information in a video to be processed, wherein the first limb action information comprises change information of a first color, change information of first limb movement and first action degree information;
determining corresponding first part characteristic information according to the change information of the first color and the change information of the first limb movement;
combining the first part characteristic information with the first action degree information to generate a first limb action label;
comparing the first limb action label with a limb action label in a preset limb action database;
and determining the limb action in the video to be processed according to the comparison result so as to realize limb action identification.
Further, the processor 1001 may call the limb motion recognition program stored in the memory 1005, and further perform the following operations:
extracting first environment characteristic information in the video to be processed, and combining the first part characteristic information and the first action degree information with the first environment characteristic information to generate a first limb action label.
Further, the processor 1001 may call the limb motion recognition program stored in the memory 1005, and further perform the following operations:
amplifying the collected video by using the Eulerian video magnification technique, acquiring the amplified collected video, taking the amplified collected video as the to-be-processed video, and extracting change information of a first color, change information of first limb movement, first action degree information and first environment characteristic information in the to-be-processed video.
Further, the processor 1001 may call the limb motion recognition program stored in the memory 1005, and further perform the following operations:
determining corresponding second part characteristic information according to the first corresponding relation between the change information of the first color and the second part characteristic information;
and determining corresponding third part characteristic information according to the second corresponding relation between the change information of the first limb movement and the third part characteristic information.
Further, the processor 1001 may call the limb motion recognition program stored in the memory 1005, and further perform the following operations:
and imaging the first action degree information, and combining the imaged first action degree information with the part characteristic information to generate a first limb action label.
Further, the processor 1001 may call the limb motion recognition program stored in the memory 1005, and further perform the following operations:
acquiring a sample video, and extracting second limb action information in the sample video, wherein the second limb action information comprises change information of a second color, change information of second limb movement and second action degree information;
determining corresponding fourth position characteristic information according to the change information of the second color and the change information of the second limb movement;
extracting second environment characteristic information in the sample video, and combining the fourth part characteristic information, the second environment characteristic information and the second action degree information to generate a second limb action label;
classifying the second limb action labels, establishing a third corresponding relation between the second limb action and the second limb action labels according to a classification result, and generating a preset limb action database according to the third corresponding relation.
Further, the processor 1001 may call the limb motion recognition program stored in the memory 1005, and further perform the following operations:
extracting each feature information in a video to be processed, and comparing each feature information with preset skin color feature information;
and when each feature information in the video to be processed contains the preset skin color feature information, executing the step of extracting the first body action information in the video to be processed.
Further, the processor 1001 may call the limb motion recognition program stored in the memory 1005, and further perform the following operations:
and sending alarm information of an abnormal limb action when the third limb action label is consistent with a limb action label in the preset limb action database and the fourth limb action label is inconsistent with the limb action labels in the preset limb action database.
According to the method and the device, the limb action information in the video to be processed is extracted and processed to generate the limb action label, and the limb action label is compared with the preset limb action database, so that the limb action is finely recognized, and the accuracy of limb action recognition is improved.
Based on the hardware structure, the embodiment of the limb action recognition method is provided.
Referring to fig. 2, fig. 2 is a flowchart illustrating a limb movement recognition method according to a first embodiment of the present invention.
In a first embodiment, the limb motion recognition method includes the following steps:
step S10, extracting first limb action information in the video to be processed, wherein the first limb action information comprises change information of a first color, change information of first limb movement and first action degree information;
It should be noted that, in general, under the same illumination conditions, a slight motion of the human body produces a color change. In this embodiment, the video to be processed is amplified by the Eulerian video magnification technique, and the amplified color change information, limb movement change information and action degree information are obtained.
In order to improve the accuracy of limb action recognition, the color change information, the limb movement change information and the action degree information in the limb action information are acquired. Under constant illumination, fine movements of the human body produce color changes; for example, the breathing motion of the human body causes the nose to rise and fall, and this rise and fall produces a color change.
The movement change information of the limbs can comprise small movements such as breathing movement, blinking movement, heartbeat movement, pulse movement, knee movement and the like.
The action degree information may be the amplitude, strength and frequency of a human body's limb actions. For example, when a person runs, the arms swing; at the start of exercise the person is full of energy and the arm swing frequency is relatively high, but as exercise time increases and the body's energy is consumed, the swing frequency slowly decreases. Using the action degree information as one of the reference inputs for limb action recognition improves the granularity of the recognition.
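The frequency component of the action degree information can be estimated from a tracked limb coordinate over time. The sketch below is one hypothetical way to do this with a Fourier transform; the sampling setup and signal are invented for illustration:

```python
import numpy as np

def dominant_frequency(signal, fps):
    # Hypothetical estimate of the "frequency" part of action degree
    # information: the dominant oscillation frequency of a tracked limb
    # coordinate sampled once per video frame.
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

fps = 30
t = np.arange(0, 4, 1.0 / fps)           # 4 seconds of video at 30 fps
arm_swing = np.sin(2 * np.pi * 2.0 * t)  # arm swinging at 2 Hz
print(dominant_frequency(arm_swing, fps))  # expected: 2.0
```

A falling dominant frequency over successive windows would correspond to the slowing arm swing described above.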
Step S20, determining corresponding first part characteristic information according to the change information of the first color and the change information of the first limb movement;
In order to improve the accuracy of limb movement recognition, the corresponding part information is determined according to the specific characteristics of each cue: for example, the face position is determined from the color change characteristics, the nose position from the breathing movement characteristics, the eye position from the blinking movement characteristics, the heart position from the heartbeat movement characteristics, the wrist position from the pulse movement characteristics, and the knee position from the leg movement characteristics.
In order to further improve the accuracy of confirming the part characteristic information, the positions of the soles of the feet can also be confirmed by acquiring the weight distribution characteristics, so that the part information can be determined more accurately through the color change information, the limb movement change information and the weight distribution characteristic information.
Step S30, generating a first limb movement label by combining the first part feature information with the first movement degree information;
In order to improve the accuracy of limb action recognition, the limb action information is converted into labels, so that limb actions can be recognized more accurately at recognition time.
In order to more accurately create the limb movement label, after the part information is confirmed, the part information may be connected and aligned to obtain a characteristic three-dimensional graph, and the characteristic three-dimensional graph is combined with the movement degree information to generate the limb movement label.
The limb action tag records a limb action and its corresponding state information. For example, a tag for "infant about to wake up" records the limb action information of an infant who is about to wake, such as the minute facial motion of the eyes beginning to open; the corresponding limb action tag is generated from the acquired minute facial motion information of the infant.
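One hypothetical way to represent such a tag in code is a small record type that bundles the part features, the action degree information and the recorded state; the field names and values below are illustrative only:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LimbActionTag:
    # Hypothetical tag structure combining the information streams named
    # in this embodiment; field names and values are illustrative.
    parts: tuple   # body parts inferred from color/movement changes
    degree: str    # action degree info (amplitude / strength / frequency)
    state: str     # recorded state, e.g. the infant-waking example above

tag = LimbActionTag(parts=("eyes", "face"),
                    degree="minute",
                    state="infant about to wake")
print(tag.state)  # expected: infant about to wake
```

Making the record immutable (`frozen=True`) lets tags serve as dictionary keys in a label database.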
Step S40, comparing the first limb action label with a limb action label in a preset limb action database;
in order to improve the accuracy of limb movement identification, in this embodiment, the limb movement tags are compared with limb movement tags in a preset limb movement database, and the accuracy of limb movement identification is determined through a comparison result.
The preset limb action database is a pre-established database, the database comprises a corresponding relation between the limb action label and the limb movement information, and the limb action is finely identified according to the corresponding relation, so that the accuracy of limb action identification is improved.
It should be noted that, in order to perform refined identification on the limb actions, a limb action database may be established in advance, and the establishment method of the preset limb action database is as follows:
firstly, acquiring a limb action sample video, and extracting limb action information in the sample video, wherein the limb action information comprises color change information, limb movement change information and action degree information; determining corresponding part characteristic information according to the color change information and the limb movement change information;
Then, in order to further improve the accuracy of limb action recognition, the environmental characteristic information in the sample video is extracted, and the part characteristic information, the environmental characteristic information and the action degree information are combined to generate a limb action label. As noted above, adding environmental information improves recognition accuracy: for example, when the video captures a person squatting, the system may judge that the person is resting or picking something up from the ground, and the environmental context helps distinguish the two.
And finally, classifying the limb action labels, establishing a corresponding relation between the limb actions and the limb action labels according to a classification result, and generating a preset limb action database according to the corresponding relation.
In order to identify limb actions quickly, the limb action labels are classified. For example, if the limb action is standing, its label is grouped with the labels of other standing-related limb actions, so that limb actions can be identified quickly by label class, reducing the response time of limb action recognition.
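The classification step amounts to building an index from action class to its tags so that comparisons are restricted to one class. A minimal sketch, with invented class and tag names, might look like this:

```python
from collections import defaultdict

def build_index(tag_records):
    # Hypothetical index for the preset limb action database: group tags
    # by action class so a query only compares against one class's tags.
    # Class names and tag strings are illustrative, not from the patent.
    index = defaultdict(list)
    for action_class, tag in tag_records:
        index[action_class].append(tag)
    return index

records = [
    ("standing", "legs_straight+torso_upright"),
    ("standing", "legs_straight+arms_down"),
    ("squatting", "knees_bent+torso_low"),
]
index = build_index(records)
print(len(index["standing"]))  # expected: 2
```

A lookup for a "standing" candidate then touches two tags instead of the whole database, which is the response-time saving described above.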
And step S50, determining the limb action in the video to be processed according to the comparison result so as to realize limb action recognition.
According to the method and the device, the limb action information in the video to be processed is extracted and processed to generate the limb action label, and the limb action label is compared with the preset limb action database, so that the limb action is finely recognized, and the accuracy of limb action recognition is improved.
Further, as shown in fig. 3, a second embodiment of the limb movement identification method according to the present invention is provided based on the first embodiment, in this embodiment, the to-be-processed video includes first environmental characteristic information;
the step S30 specifically includes:
step S301, extracting first environment characteristic information in the video to be processed, and combining the first part characteristic information and the first action degree information with the first environment characteristic information to generate a first limb action label.
In order to further improve the accuracy of limb movement identification, the environmental characteristic information in the video to be processed is extracted, and the part characteristic information, the environmental characteristic information and the movement degree information are combined to generate a limb movement label.
According to the scheme provided by the embodiment, the environment characteristic information is extracted, and the position characteristic information and the action degree information are combined with the environment characteristic information to generate the limb action label, so that the limb action label is refined, and the accuracy of limb action identification is improved.
Further, as shown in fig. 4, a third embodiment of the limb movement identification method of the present invention is proposed based on the first embodiment, and in this embodiment, before the step S10, the method includes:
And step S00, amplifying the collected video by using the Eulerian video magnification technique to obtain an amplified collected video, taking the amplified collected video as the to-be-processed video, and extracting change information of a first color, change information of first limb movement, first action degree information and first environment characteristic information in the to-be-processed video.
The limb movement information in this embodiment may be minute limb movement information, such as pulse, respiratory, blink and heartbeat movement information. Such minute movements may be invisible to the human eye; they are amplified by the Eulerian video magnification technique, which makes the minute limb movements recognizable.
In this embodiment, the collected video is amplified by the Eulerian video magnification technique to obtain the amplified video, and the amplified video is used as the video to be processed, so that the characteristic information in the video can be identified more accurately.
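The core idea of Eulerian magnification is to band-pass each pixel's intensity over time and add the filtered variation back amplified. The one-dimensional sketch below shows only that temporal step; a full implementation also decomposes each frame into a spatial pyramid, which is omitted here for brevity, and the band limits and gain are invented values:

```python
import numpy as np

def magnify_temporal(signal, fps, lo, hi, alpha):
    # Minimal 1-D sketch of the Eulerian magnification idea: keep only
    # the temporal frequencies in [lo, hi] Hz of a pixel's intensity and
    # add them back amplified by alpha. A real implementation applies
    # this per level of a spatial pyramid; that step is omitted here.
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)
    filtered = np.fft.irfft(spectrum * band, n=len(signal))
    return signal + alpha * filtered

fps = 30
t = np.arange(0, 4, 1.0 / fps)
pixel = 100 + 0.2 * np.sin(2 * np.pi * 1.0 * t)  # faint 1 Hz pulse-like signal
magnified = magnify_temporal(pixel, fps, lo=0.5, hi=2.0, alpha=50)
# The 1 Hz oscillation grows from ~0.2 to ~10 intensity units peak.
```

The faint periodic signal, imperceptible at amplitude 0.2 out of 100, becomes easily detectable after amplification, which is what makes the minute-movement extraction above possible.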
In order to identify the corresponding part information, the part information can be identified in a refined manner by extracting the change information of the color, the change information of the limb movement, the action degree information and the environment characteristic information in the video to be processed.
According to the scheme provided by the embodiment, the collected video is amplified by utilizing the Euler video amplification technology, so that finer limb action information is extracted, and the identification of the part information is improved through the finer limb action information.
Further, as shown in fig. 5, a fourth embodiment of the limb movement recognition method according to the present invention is proposed based on the first embodiment, in this embodiment, the first location information includes second location information and third location information;
before the step S20, the method includes:
step S201, determining corresponding second part characteristic information according to a first corresponding relation between the change information of the first color and the second part characteristic information;
Before determining the part characteristic information, the limb action recognition device may already have learned basic characteristic information; that is, the device has a basic learning capability and stores the correspondence between color characteristic information and part characteristic information, through which the part information can be determined. For example, the face position can be confirmed from the color change characteristic information.
Step S202, determining corresponding third part characteristic information according to the second corresponding relation between the change information of the first limb movement and the third part characteristic information.
To further improve the accuracy of part feature information identification, the correspondence between the change information of the limb movement and the part feature information is also stored in the limb action recognition device, and the part information can be identified through this correspondence; for example, the feature information of other parts can be identified through the limb movement change feature information.
In the scheme provided by this embodiment, the correspondence between basic limb movements and part information is stored in the limb action recognition device in advance, and the part feature information corresponding to a limb movement is recognized through this correspondence.
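The stored correspondences described in steps S201 and S202 amount to two lookup tables: one from color change features to parts, one from limb movement change features to parts. A minimal sketch (all feature names and mappings below are hypothetical examples, not values from the patent):

```python
# Hypothetical first correspondence: color change features -> parts.
COLOR_CHANGE_TO_PART = {
    "skin_tone_flush": "face",
    "skin_tone_pale": "face",
}

# Hypothetical second correspondence: movement change features -> parts.
MOVEMENT_CHANGE_TO_PART = {
    "pendulum_swing": "arm",
    "alternating_lift": "leg",
}

def identify_parts(color_changes, movement_changes):
    """Resolve part feature information via the stored correspondences;
    features without a stored correspondence are simply skipped."""
    parts = [COLOR_CHANGE_TO_PART[c] for c in color_changes
             if c in COLOR_CHANGE_TO_PART]
    parts += [MOVEMENT_CHANGE_TO_PART[m] for m in movement_changes
              if m in MOVEMENT_CHANGE_TO_PART]
    return parts

print(identify_parts(["skin_tone_flush"], ["pendulum_swing"]))  # ['face', 'arm']
```

A learned device would refine these tables over time; the lookup step itself stays this simple.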
Further, as shown in fig. 6, a fifth embodiment of the limb action recognition method of the present invention is proposed based on the first embodiment. In this embodiment, the step S30 specifically includes:
step S302, performing imaging processing on the first action degree information, and combining the imaged first action degree information with the part feature information to generate a first limb action tag.
It should be noted that the action degree information may be the amplitude, intensity, and frequency of the limb actions of the human body. For example, when a person runs, the arms swing; at the start of the exercise the person is full of energy and the arm swing frequency is relatively high, but as the exercise time increases and the person's energy is consumed, the swing frequency gradually weakens. Using the action degree information as one of the references for limb action identification therefore refines the identification.
In this embodiment, the limb action label is generated by imaging the amplitude, intensity, and frequency of the limb action and combining the result with the three-dimensional graph constructed from the part information, which enhances the accuracy and readability of the symbolized limb action.
According to the scheme provided by this embodiment, the limb action label is generated by imaging the action degree information and combining it with the three-dimensional graph constructed from the part information, enhancing the accuracy and readability of the symbolized limb action.
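One simple way to "image" the degree information and combine it with the part information into a symbolic label is to quantize amplitude, intensity, and frequency onto a coarse scale and embed them in a tag string. This sketch is one possible encoding under assumed thresholds, not the patent's actual label format:

```python
def quantize(value, low, high):
    """Map a raw measurement onto a coarse low/medium/high scale
    (the thresholds are illustrative assumptions)."""
    if value < low:
        return "low"
    return "high" if value > high else "medium"

def make_action_tag(part, amplitude, intensity, frequency):
    """Combine part feature information with quantized degree
    information into a symbolic limb action tag."""
    return "{}:{}-amp/{}-int/{}-freq".format(
        part,
        quantize(amplitude, 0.2, 0.8),
        quantize(intensity, 0.2, 0.8),
        quantize(frequency, 1.0, 3.0),
    )

tag = make_action_tag("arm", amplitude=0.9, intensity=0.5, frequency=2.0)
print(tag)  # arm:high-amp/medium-int/medium-freq
```

Such symbolic tags are directly comparable against a preset database, which is the use the later embodiments make of them.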
Further, as shown in fig. 7, a sixth embodiment of the limb action recognition method of the present invention is proposed based on the first embodiment. In this embodiment, before the step S40, the method includes:
step S401, a sample video is obtained, and second limb action information in the sample video is extracted, wherein the second limb action information comprises change information of a second color, change information of second limb movement and second action degree information;
step S402, determining corresponding fourth position characteristic information according to the change information of the second color and the change information of the second limb movement;
step S403, extracting second environment characteristic information in the video to be processed, and combining the fourth part characteristic information, the second environment characteristic information and the second action degree information to generate a second limb action label;
step S404, classifying the second limb action labels, establishing a third corresponding relation between the second limb action and the second limb action labels according to the classification result, and generating a preset limb action database according to the third corresponding relation.
In this embodiment, a refined limb action database is established; the database is compared with the limb action labels generated from the limb action information collected in the video, and the comparison result improves the accuracy of limb action identification.
The specific steps for establishing the database are as follows:
firstly, acquiring a limb action sample video, and extracting limb action information in the sample video, wherein the limb action information comprises color change information, limb movement change information and action degree information; determining corresponding part characteristic information according to the color change information and the limb movement change information;
Then, to further improve the accuracy of limb action recognition, the environment feature information in the video to be processed is extracted, and the part feature information, the environment feature information, and the action degree information are combined to generate a limb action label. As noted above, adding the environment information improves the accuracy of limb action recognition; for example, when the limb action of a person squatting is collected in the video, the system may judge that the person is resting or may be picking something up from the ground.
And finally, classifying the limb action labels, establishing a corresponding relation between the limb actions and the limb action labels according to a classification result, and generating a preset limb action database according to the corresponding relation.
To identify limb actions quickly, the limb action labels are classified. For example, if the limb action is standing, the limb action information corresponding to that label covers the limb actions related to standing, so the limb action can be identified quickly from its label, which shortens the response time of limb action identification.
By establishing a more refined database, the robot can be trained through deep learning so that it gains the capability of recognizing precise limb actions. For example, after video detection, database comparison, and limb action recognition, the intelligent robot can confirm whether an elderly person with closed eyes is asleep and whether his or her breathing is normal. If the limb action characteristic of sleep apnea is detected, a warning message is immediately sent to the relatives or caregivers.
According to the scheme provided by this embodiment, a refined limb action database is established and compared with the limb action labels in the collected video, which improves the accuracy of limb action identification.
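Steps S401-S404 classify (action, label) pairs harvested from sample videos into a preset database keyed by label, against which later recognition runs a lookup. A minimal sketch (the sample data and tag strings are invented for illustration):

```python
from collections import defaultdict

def build_action_database(samples):
    """Classify (action, tag) pairs from sample videos into a preset
    database keyed by tag; one tag may map to several actions."""
    database = defaultdict(set)
    for action, tag in samples:
        database[tag].add(action)
    return database

def recognise(database, tag):
    """Look up a tag; several candidate actions may match, e.g.
    squatting could mean resting or picking something up."""
    return sorted(database.get(tag, set()))

db = build_action_database([
    ("resting", "squat:low-freq"),
    ("picking_up", "squat:low-freq"),
    ("running", "arm:high-freq"),
])
print(recognise(db, "squat:low-freq"))  # ['picking_up', 'resting']
```

Grouping by tag is what makes the later comparison fast: recognition reduces to a single dictionary lookup instead of a scan over all stored actions.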
Further, as shown in fig. 8, a seventh embodiment of the limb action recognition method of the present invention is proposed based on the first embodiment. In this embodiment, before the step S10, the method includes:
step S00, extracting each feature information in the video to be processed, and comparing each feature information with preset skin color feature information;
It should be noted that, before the limb action information is extracted, in order to improve its accuracy, it is first determined whether a human body is present in the video to be processed; the limb action information is extracted only when a human body is present.
To identify the feature information of a human body, each piece of feature information in the video to be processed is examined to determine whether it contains skin color feature information. If it does, a person is present in the video to be processed; if not, no person is present, no further judgment is made, and the efficiency of limb action identification is improved.
Step S01, when each feature information in the video to be processed includes the preset skin color feature information, executing the step of extracting the first limb action information in the video to be processed.
According to the scheme provided by the embodiment, the judgment of the skin color characteristic information of the video to be processed is carried out, so that whether a human body exists or not is judged, and the efficiency of limb action recognition is improved.
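The skin color presence check of steps S00-S01 is commonly done by thresholding chrominance channels. The sketch below uses a widely cited Cr/Cb skin range on a YCrCb frame; the range and the minimum-fraction parameter are illustrative heuristics, not the patent's preset values:

```python
import numpy as np

# Illustrative Cr/Cb skin range (a common heuristic, not the
# patent's preset skin color feature information).
CR_RANGE = (133, 173)
CB_RANGE = (77, 127)

def contains_skin(frame_ycrcb, min_fraction=0.01):
    """Return True when at least min_fraction of the pixels fall in
    the skin color range, i.e. a person is likely present."""
    cr = frame_ycrcb[..., 1]
    cb = frame_ycrcb[..., 2]
    mask = ((cr >= CR_RANGE[0]) & (cr <= CR_RANGE[1]) &
            (cb >= CB_RANGE[0]) & (cb <= CB_RANGE[1]))
    return bool(mask.mean() >= min_fraction)

# A frame whose centre block sits inside the skin range triggers the
# extraction step; an all-zero frame is skipped.
frame = np.zeros((8, 8, 3), dtype=np.uint8)
frame[2:6, 2:6] = (120, 150, 100)           # Y, Cr, Cb inside range
print(contains_skin(frame))                  # True
print(contains_skin(np.zeros((8, 8, 3))))    # False
```

Because this gate is cheap relative to limb action extraction, running it first is what yields the efficiency gain the embodiment claims.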
Further, as shown in fig. 9, an eighth embodiment of the limb action recognition method of the present invention is proposed based on the first embodiment. In this embodiment, the first limb action label includes a third limb action label and a fourth limb action label;
the step S50 specifically includes:
Step S501, when the third limb action label is consistent with a limb action label in the preset limb action database but the fourth limb action label is inconsistent with the labels in the preset limb action database, sending alarm information of an abnormal limb action.
It should be noted that the limb action labels in the video to be processed may include a plurality of labels, for example a limb action label of the hand, a limb action label of the head, and so on.
When these labels are compared with the labels in the preset database, it can happen that some of the labels in the video to be processed are consistent with the preset limb action database while others are not; in this situation, alarm information of an abnormal limb action can be sent, which improves the accuracy of limb action identification. For example, when the limb action labels of the hand and the leg of the human body match the labels in the preset limb action database but the label of the head does not, the limb action of the head is abnormal, and alarm information of the abnormal head action can be sent.
To improve the accuracy of limb action identification, alarm information is sent when the limb action information indicates an abnormal limb action. For example, after video detection, database comparison, and limb action recognition, the device can confirm whether an elderly person with closed eyes is asleep and whether the breathing is normal; if the limb action characteristic of sleep apnea is detected, a warning message is immediately sent to the relatives or caregivers.
According to the scheme provided by the embodiment, the abnormal limb action information is identified, so that the user experience is improved.
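The partial-match alarm logic of this embodiment can be sketched as a per-part comparison: raise an alarm exactly when some parts match the preset database and others do not. The tag values and part names below are invented for illustration:

```python
def compare_tags(video_tags, preset_db):
    """Compare each per-part tag against the preset database; if some
    parts match but others do not, report the abnormal parts."""
    abnormal = [part for part, tag in video_tags.items()
                if preset_db.get(part) != tag]
    if abnormal and len(abnormal) < len(video_tags):
        return "ALARM: abnormal limb action at {}".format(
            ", ".join(sorted(abnormal)))
    return "normal" if not abnormal else "no match"

preset = {"hand": "swing", "leg": "stride", "head": "steady"}
observed = {"hand": "swing", "leg": "stride", "head": "drooping"}
print(compare_tags(observed, preset))  # ALARM: abnormal limb action at head
```

Distinguishing "no match" (an unrecognized action) from a partial mismatch (a recognized action with one abnormal part) is what lets the device alarm on the abnormal part specifically, as in the head example above.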
In addition, an embodiment of the present invention further provides a storage medium, where a limb movement recognition program is stored on the storage medium, and when executed by a processor, the limb movement recognition program implements the following operations:
extracting first limb action information in a video to be processed, wherein the first limb action information comprises change information of a first color, change information of first limb movement and first action degree information;
determining corresponding first part characteristic information according to the change information of the first color and the change information of the first limb movement;
combining the first part characteristic information with the first action degree information to generate a first limb action label;
comparing the first limb action label with a limb action label in a preset limb action database;
and determining the limb action in the video to be processed according to the comparison result so as to realize limb action identification.
Further, the limb motion recognition program when executed by the processor further implements the following operations:
extracting first environment characteristic information in the video to be processed, and combining the first part characteristic information and the first action degree information with the first environment characteristic information to generate a first limb action label.
Further, the limb motion recognition program when executed by the processor further implements the following operations:
amplifying the collected video by using an Euler video amplification technology, acquiring the amplified collected video, taking the amplified collected video as a to-be-processed video, and extracting change information of a first color, change information of first limb movement, first action degree information and first environment characteristic information in the to-be-processed video.
Further, the limb motion recognition program when executed by the processor further implements the following operations:
determining corresponding second part characteristic information according to the first corresponding relation between the change information of the first color and the second part characteristic information;
and determining corresponding third part characteristic information according to the second corresponding relation between the change information of the first limb movement and the third part characteristic information.
Further, the limb motion recognition program when executed by the processor further implements the following operations:
and imaging the first action degree information, and combining the imaged first action degree information with the part characteristic information to generate a first limb action label.
Further, the limb motion recognition program when executed by the processor further implements the following operations:
acquiring a sample video, and extracting second limb action information in the sample video, wherein the second limb action information comprises change information of a second color, change information of second limb movement and second action degree information;
determining corresponding fourth part characteristic information according to the change information of the second color and the change information of the second limb movement;
extracting second environment characteristic information in the video to be processed, and combining the fourth part characteristic information, the second environment characteristic information and the second action degree information to generate a second limb action label;
classifying the second limb action labels, establishing a third corresponding relation between the second limb action and the second limb action labels according to a classification result, and generating a preset limb action database according to the third corresponding relation.
Further, the limb motion recognition program when executed by the processor further implements the following operations:
extracting each feature information in a video to be processed, and comparing each feature information with preset skin color feature information;
and when each feature information in the video to be processed contains the preset skin color feature information, executing the step of extracting the first limb action information in the video to be processed.
Further, the limb motion recognition program when executed by the processor further implements the following operations:
and sending alarm information of an abnormal limb action when the third limb action label is consistent with a limb action label in the preset limb action database and the fourth limb action label is inconsistent with the labels in the preset limb action database.
According to the limb action recognition method provided by the embodiment, the limb action information in the video to be processed is extracted and processed to generate the limb action label, and the limb action label is compared with the preset limb action database, so that the limb action is finely recognized, and the accuracy of limb action recognition is improved.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (9)

1. A limb motion recognition method is characterized by comprising the following steps:
extracting first limb action information in a video to be processed, wherein the first limb action information comprises change information of a first color, change information of first limb movement and first action degree information;
determining corresponding first part characteristic information according to the change information of the first color and the change information of the first limb movement;
combining the first part characteristic information with the first action degree information to generate a first limb action label, wherein the first limb action label is formed by connecting points according to the first part characteristic information to form a line to obtain a characteristic three-dimensional graph, and the characteristic three-dimensional graph is combined with the first action degree information to generate the first limb action label;
comparing the first limb action label with a limb action label in a preset limb action database;
determining the limb action in the video to be processed according to the comparison result so as to realize limb action identification;
the video to be processed comprises first environment characteristic information;
the generating a first limb action tag by combining the first part feature information with the first action degree information specifically includes:
extracting first environment characteristic information in the video to be processed, and combining the first part characteristic information and the first action degree information with the first environment characteristic information to generate a first limb action label;
determining corresponding first part characteristic information according to the change information of the first color and the change information of the first limb movement, wherein the method comprises the following steps:
acquiring weight distribution characteristic information, and determining corresponding first part characteristic information according to the change information of the first color, the change information of the first limb movement and the weight distribution characteristic information.
2. The limb motion recognition method according to claim 1, wherein the first part information includes second part information and third part information;
before determining corresponding first part characteristic information according to the change information of the first color and the change information of the first limb movement, the method comprises the following steps:
determining corresponding second part characteristic information according to the first corresponding relation between the change information of the first color and the second part characteristic information;
and determining corresponding third part characteristic information according to the second corresponding relation between the change information of the first limb movement and the third part characteristic information.
3. The limb motion recognition method according to claim 1, wherein the generating a first limb motion label by combining the part feature information with the first motion degree information specifically includes:
and imaging the first action degree information, and combining the imaged first action degree information with the part characteristic information to generate a first limb action label.
4. The limb motion recognition method according to claim 1, wherein before comparing the limb motion label with a limb motion label in a preset limb motion database, the method comprises:
acquiring a sample video, and extracting second limb action information in the sample video, wherein the second limb action information comprises change information of a second color, change information of second limb movement and second action degree information;
determining corresponding fourth part characteristic information according to the change information of the second color and the change information of the second limb movement;
extracting second environment characteristic information in the video to be processed, and combining the fourth part characteristic information, the second environment characteristic information and the second action degree information to generate a second limb action label;
classifying the second limb action labels, establishing a third corresponding relation between the second limb action and the second limb action labels according to a classification result, and generating a preset limb action database according to the third corresponding relation.
5. The limb motion recognition method according to claim 1, wherein before extracting the first limb motion information in the video to be processed, the method further comprises:
extracting each feature information in a video to be processed, and comparing each feature information with preset skin color feature information;
and when each feature information in the video to be processed contains the preset skin color feature information, executing the step of extracting the first body action information in the video to be processed.
6. The limb motion recognition method of claim 1, wherein the first limb motion label comprises a third limb motion label and a fourth limb motion label;
the step of comparing the first limb action tag with limb action tags in a preset limb action database specifically comprises:
and sending alarm information of an abnormal limb action when the third limb action label is consistent with a limb action label in the preset limb action database and the fourth limb action label is inconsistent with the labels in the preset limb action database.
7. A limb motion recognition device, comprising: memory, a processor and a limb motion recognition program stored on the memory and executable on the processor, the limb motion recognition program being configured to implement the steps of the limb motion recognition method as claimed in any one of claims 1 to 6.
8. A robot, characterized in that the robot comprises: memory, a processor and a limb motion recognition program stored on the memory and executable on the processor, the limb motion recognition program being configured to implement the steps of the limb motion recognition method as claimed in any one of claims 1 to 6.
9. A storage medium, characterized in that the storage medium has a limb motion recognition program stored thereon, which when executed by a processor implements the steps of the limb motion recognition method according to any one of claims 1 to 6.
CN201710668382.9A 2017-08-07 2017-08-07 Limb action recognition method and device, robot and storage medium Active CN107609474B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710668382.9A CN107609474B (en) 2017-08-07 2017-08-07 Limb action recognition method and device, robot and storage medium
PCT/CN2018/091370 WO2019029266A1 (en) 2017-08-07 2018-06-15 Body movement recognition method, robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710668382.9A CN107609474B (en) 2017-08-07 2017-08-07 Limb action recognition method and device, robot and storage medium

Publications (2)

Publication Number Publication Date
CN107609474A CN107609474A (en) 2018-01-19
CN107609474B true CN107609474B (en) 2020-05-01

Family

ID=61064365

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710668382.9A Active CN107609474B (en) 2017-08-07 2017-08-07 Limb action recognition method and device, robot and storage medium

Country Status (2)

Country Link
CN (1) CN107609474B (en)
WO (1) WO2019029266A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107609474B (en) * 2017-08-07 2020-05-01 深圳市科迈爱康科技有限公司 Limb action recognition method and device, robot and storage medium
CN108391162B (en) * 2018-01-31 2021-12-03 科大讯飞股份有限公司 Volume adjustment method and device, storage medium and electronic equipment
CN110314344B (en) * 2018-03-30 2021-08-24 杭州海康威视数字技术股份有限公司 Exercise reminding method, device and system
CN109411050A (en) * 2018-09-30 2019-03-01 深圳市科迈爱康科技有限公司 Exercise prescription executes method, system and computer readable storage medium
CN111107279B (en) * 2018-10-26 2021-06-29 北京微播视界科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111126100B (en) * 2018-10-30 2023-10-17 杭州海康威视数字技术股份有限公司 Alarm method, alarm device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101238981A (en) * 2007-01-12 2008-08-13 国际商业机器公司 Tracking a range of body movement based on 3D captured image streams of a user
CN105245828A (en) * 2015-09-02 2016-01-13 北京旷视科技有限公司 Item analysis method and equipment
CN106022208A (en) * 2016-04-29 2016-10-12 北京天宇朗通通信设备股份有限公司 Human body motion recognition method and device

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8269834B2 (en) * 2007-01-12 2012-09-18 International Business Machines Corporation Warning a user about adverse behaviors of others within an environment based on a 3D captured image stream
US10469416B2 (en) * 2012-09-06 2019-11-05 Sony Corporation Information processing device, information processing method, and program
CN103399637B (en) * 2013-07-31 2015-12-23 西北师范大学 Based on the intelligent robot man-machine interaction method of kinect skeleton tracing control
CN103679154A (en) * 2013-12-26 2014-03-26 中国科学院自动化研究所 Three-dimensional gesture action recognition method based on depth images
CN104665789A (en) * 2015-01-26 2015-06-03 周常安 Biofeedback system
CN106778450B (en) * 2015-11-25 2020-04-24 腾讯科技(深圳)有限公司 Face recognition method and device
CN205486164U (en) * 2016-01-21 2016-08-17 合肥君达高科信息技术有限公司 Novel people's face 3D expression action identification system
CN105867630A (en) * 2016-04-21 2016-08-17 深圳前海勇艺达机器人有限公司 Robot gesture recognition method and device and robot system
CN106156757B (en) * 2016-08-02 2019-08-09 中国银联股份有限公司 In conjunction with the face identification method and face identification system of In vivo detection technology
CN107609474B (en) * 2017-08-07 2020-05-01 深圳市科迈爱康科技有限公司 Limb action recognition method and device, robot and storage medium


Also Published As

Publication number Publication date
CN107609474A (en) 2018-01-19
WO2019029266A1 (en) 2019-02-14

Similar Documents

Publication Publication Date Title
CN107609474B (en) Limb action recognition method and device, robot and storage medium
CN107103733B (en) One kind falling down alarm method, device and equipment
JP7229174B2 (en) Person identification system and method
CN110477925A (en) A kind of fall detection for home for the aged old man and method for early warning and system
JP2020518051A (en) Face posture detection method, device and storage medium
CN112949417A (en) Tumble behavior identification method, equipment and system
CN108171138B (en) Biological characteristic information acquisition method and device
JP6277736B2 (en) State recognition method and state recognition device
KR20150106425A (en) Leveraging physical handshaking in head mounted displays
JP2018504960A (en) Method and apparatus for processing human body feature data
EP3944188A1 (en) Image processing device, image processing method, and recording medium in which program is stored
Joshi et al. A fall detection and alert system for an elderly using computer vision and Internet of Things
CN111710381A (en) Remote diagnosis method, device, equipment and computer storage medium
CN114764912A (en) Driving behavior recognition method, device and storage medium
CN116687394A (en) Tumble detection method, device, equipment and storage medium based on millimeter wave radar
CN111652192A (en) Tumble detection system based on kinect sensor
Chua et al. Vision-based hand grasping posture recognition in drinking activity
Roy et al. CovidAlert-a wristwatch-based system to alert users from face touching
CN109480852A (en) Sign monitoring method, system, wearable signal collecting device
JP2018082766A (en) Diagnostic system, diagnostic method and program
CN110739077A (en) Epileptic seizure early warning method and system
Hai et al. PCA-SVM algorithm for classification of skeletal data-based eigen postures
Safarzadeh et al. Real-time fall detection and alert system using pose estimation
CN114732377A (en) Health management method, device and equipment based on AI and readable storage medium
CN113673318A (en) Action detection method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant