CN114489328A - Robot control method and system based on gesture distribution template motion recognition


Info

Publication number: CN114489328A (application CN202111662379.9A; granted as CN114489328B)
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 衡进, 孙贇, 姚郁巍, 苏瑞
Assignee: Chongqing Terminus Technology Co Ltd
Legal status: Granted; active
Prior art keywords: gesture, probability, template, circumscribed cuboid

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures

Abstract

The application provides a robot control method and system based on gesture distribution template motion recognition, belonging to the field of pattern recognition. The method comprises: creating various gesture distribution templates and the robot instruction corresponding to each gesture distribution template; collecting the circumscribed cuboid of the human body of an object to be recognized; dividing that circumscribed cuboid into N×M×P small cubes; calculating the matching probability corresponding to each gesture distribution template; taking the largest matching probability that exceeds a minimum threshold as the final matching result; and executing the robot instruction corresponding to the matched gesture distribution template. The system comprises: a template instruction creating module, an acquisition module, a segmentation module, a calculation module, a matching result module and an execution module. The method and system improve the robustness and accuracy of motion recognition, require little computation, and are suitable for online real-time recognition.

Description

Robot control method and system based on gesture distribution template motion recognition
Technical Field
The application belongs to the technical field of pattern recognition, and particularly relates to a robot control method and system based on gesture distribution template motion recognition.
Background
With the development of robot technology, gesture-based motion recognition is widely applied. An express delivery robot, in particular, needs some way for people to communicate with it smoothly and without obstacles. In the prior art, people generally communicate with the robot by voice or by entering a series of operations on a mobile phone. In practical applications, however, the environment of an express delivery robot is noisy, so voice commands are difficult to recognize accurately; and mobile-phone input is unsuitable for elderly people who cannot use a mobile phone, and often fails in places with poor mobile-phone signal.
Communicating with a robot through gesture recognition is therefore a feasible option. In the prior art, however, most gesture and motion recognition methods build a particular algorithm model and project the gestures collected by the camera onto that model to obtain a recognition result. Such methods easily recognize the same gesture incorrectly when it is viewed from different angles; that is, the gesture recognition result is inaccurate.
Disclosure of Invention
In order to solve the technical problems, the application provides a robot control method and system based on gesture distribution template motion recognition.
In a first aspect, the application provides a robot control method based on gesture distribution template motion recognition, including the following steps:
creating, according to the application requirements, the various gesture distribution templates and the robot instruction corresponding to each gesture distribution template;
collecting the circumscribed cuboid of the human body of the object to be recognized;
dividing the circumscribed cuboid of the object to be recognized into N×M×P small cubes, where N, M and P are the numbers of subdivisions along the length, width and height of the circumscribed cuboid;
calculating the matching probability corresponding to each gesture distribution template from the small cubes passed through during the gesture movement of the object to be recognized;
taking the largest matching probability that exceeds the minimum threshold as the final matching result;
and executing the robot instruction corresponding to the matched gesture distribution template according to the final matching result.
Creating the various gesture distribution templates and the robot instruction corresponding to each gesture distribution template comprises the following steps:
defining the various gestures and the meaning of each gesture according to the application requirements;
collecting data sets for the various gestures, and establishing the various gesture distribution templates;
and establishing a one-to-one correspondence between the various gesture distribution templates and the robot instructions according to the meanings of the gestures.
Collecting the data sets for the various gestures and establishing the various gesture distribution templates comprises the following steps:
collecting the circumscribed cuboid of the human body of each object to be recognized in the data set;
dividing the circumscribed cuboid into N×M×P small cubes, where N, M and P are the numbers of subdivisions along the length, width and height of the circumscribed cuboid;
for a certain gesture, counting the probability between every two small cubes as objects to be recognized in the data set move from a starting small cube to an ending small cube;
for that gesture, setting the probability between every two small cubes not involved in the movement to zero;
obtaining the probability matrix over all N×M×P small cubes for that gesture, this probability matrix being the gesture distribution template corresponding to that gesture;
and thus obtaining the gesture distribution templates corresponding to all the gestures.
The circumscribed cuboid of the human body is defined as the cuboid bounded by the outermost extent of the person's limbs during the motion.
Calculating the matching probability corresponding to each gesture distribution template from the small cubes passed through during the gesture movement of the object to be recognized comprises the following steps:
recording the sequence of small cubes passed through in the gesture movement, from the starting small cube to the ending small cube;
extracting, from a given gesture distribution template, the probability between every two successive small cubes on the path from the starting small cube to the ending small cube;
summing these probabilities to obtain the matching probability corresponding to that gesture distribution template;
and repeating this to obtain the matching probability corresponding to every gesture distribution template.
Taking the largest matching probability that exceeds the minimum threshold as the final matching result comprises the following steps:
when every matching probability is smaller than the minimum threshold, the final matching result is that the gesture is invalid, and the robot does not respond;
when exactly one matching probability is greater than or equal to the minimum threshold, that matching probability is the final matching result;
and when two or more matching probabilities are greater than or equal to the minimum threshold, the largest of them is taken as the final matching result.
In a second aspect, the present application provides a robot control system based on gesture distribution template motion recognition, including: the device comprises a template instruction creating module, an acquisition module, a segmentation module, a calculation module, a matching result module and an execution module;
the template instruction creating module, the collecting module, the dividing module, the calculating module, the matching result module and the executing module are sequentially connected;
the template instruction creating module is used for creating various gesture distribution templates and robot instructions corresponding to the gesture distribution templates according to application requirements;
the acquisition module is used for collecting the circumscribed cuboid of the human body of the object to be recognized;
the segmentation module is used for dividing the circumscribed cuboid of the object to be recognized into N×M×P small cubes, where N, M and P are the numbers of subdivisions along the length, width and height of the circumscribed cuboid;
the calculation module is used for calculating the matching probability corresponding to each gesture distribution template from the small cubes passed through during the gesture movement of the object to be recognized;
the matching result module is used for taking the largest matching probability that exceeds the minimum threshold as the final matching result;
and the execution module is used for executing the robot instruction corresponding to the matched gesture distribution template according to the final matching result.
The template instruction creating module comprises: a gesture definition unit, a template establishing unit and a corresponding relation establishing unit;
the gesture definition unit, the template establishing unit and the corresponding relation establishing unit are sequentially connected;
the gesture definition unit is used for defining various gestures and meanings of the corresponding gestures according to the application requirements;
the template establishing unit is used for acquiring data sets aiming at various gestures and establishing various gesture distribution templates;
the corresponding relation establishing unit is used for establishing one-to-one corresponding relation between various gesture distribution templates and robot instructions according to the meanings of the corresponding gestures.
The template establishing unit comprises: an external cuboid establishing unit, an external cuboid dividing unit, a probability statistic unit and a distribution template establishing unit;
the external cuboid establishing unit, the external cuboid dividing unit, the probability statistic unit and the distribution template establishing unit are connected in sequence;
the external cuboid establishing unit is used for collecting the circumscribed cuboid of the human body of each object to be recognized in the data set;
the external cuboid dividing unit is used for dividing the circumscribed cuboid of the human body into N×M×P small cubes, where N, M and P are the numbers of subdivisions along the length, width and height of the circumscribed cuboid;
the probability statistic unit is used for counting, for a certain gesture, the probability between every two small cubes as objects to be recognized in the data set move from a starting small cube to an ending small cube, the probability between every two small cubes not involved in that gesture being zero;
the distribution template establishing unit is used for obtaining the probability matrix over all N×M×P small cubes for that gesture, this probability matrix being the gesture distribution template corresponding to that gesture, and thus obtaining the gesture distribution templates corresponding to all the gestures.
The calculation module comprises: a recording unit, a probability extraction unit and a probability summing unit;
the recording unit, the probability extraction unit and the probability summing unit are connected in sequence;
the recording unit is used for recording the sequence of small cubes passed through in the gesture movement, from the starting small cube to the ending small cube;
the probability extraction unit is used for extracting, from a given gesture distribution template, the probability between every two successive small cubes on the path from the starting small cube to the ending small cube;
the probability summing unit is used for summing these probabilities to obtain the matching probability corresponding to that gesture distribution template, and thus the matching probability corresponding to every gesture distribution template.
The beneficial technical effects are as follows:
the application provides a robot control method and system based on gesture distribution template motion recognition, robustness and accuracy of motion recognition are improved, calculated amount is small, and the robot control method and system are suitable for on-line real-time recognition.
Drawings
Fig. 1 is a flowchart of a robot control method based on gesture distribution template motion recognition according to an embodiment of the present application;
FIG. 2 is a flowchart of robot instructions for creating various gesture distribution templates and corresponding to each gesture distribution template according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of creating various gesture distribution templates according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating a process of calculating each matching probability corresponding to a gesture distribution template according to an embodiment of the present disclosure;
fig. 5 is a flowchart of obtaining a final matching result according to an embodiment of the present application;
fig. 6 is a schematic block diagram of a robot control system based on gesture distribution template motion recognition according to an embodiment of the present disclosure;
FIG. 7 is a schematic view of the circumscribed cuboid according to an embodiment of the present application, where (a) is the circumscribed cuboid when the object to be recognized is a child, and (b) is the circumscribed cuboid when the object to be recognized is an adult;
FIG. 8 is a schematic view of the circumscribed cuboid divided into small cubes according to an embodiment of the present application;
FIG. 9 is a schematic diagram of the small cubes involved in the waving motion according to an embodiment of the present application.
Detailed Description
the present application is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present application is not limited thereby.
The embodiment of the application is an express delivery robot, which needs to recognize the gestures of people near its destination and execute different machine instructions for different gestures. For example, the hand-raising motion corresponds to opening the upper cover of the express delivery robot so that a parcel can be taken out or put in, and the waving motion corresponds to hailing the express delivery robot, requiring it to stop walking rather than continue forward. Users can edit the robot instructions corresponding to different gestures for different application scenarios.
In a first aspect, the present application provides a robot control method based on gesture distribution template motion recognition, as shown in fig. 1, including the following steps:
step S1: according to application requirements, various gesture distribution templates and a robot instruction corresponding to each gesture distribution template are created;
step S2: collecting the circumscribed cuboid of the human body of the object to be recognized;
step S3: dividing the circumscribed cuboid of the object to be recognized into N×M×P small cubes, where N, M and P are the numbers of subdivisions along the length, width and height of the circumscribed cuboid;
It should be noted that when the various gesture distribution templates are created, the circumscribed cuboid corresponding to each template is likewise divided into N×M×P small cubes, and the division applied to the object to be recognized must use the same specification as the division used in the templates. For example, if the templates divide the circumscribed cuboid into 6 × 4 × 8 small cubes (N = 6, M = 4, P = 8), then the circumscribed cuboid of the object to be recognized is correspondingly divided into 6 × 4 × 8 small cubes.
Step S4: calculating the matching probability corresponding to each gesture distribution template from the small cubes passed through during the gesture movement of the object to be recognized;
step S5: taking the maximum matching probability which exceeds the lowest threshold as the final matching result;
step S6: and executing the robot instruction corresponding to the gesture distribution template according to the final matching result.
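The flow of steps S4 to S6 can be sketched as follows. The dictionary representation of templates and instructions, and all function names, are illustrative assumptions, not data structures specified by the patent:

```python
def recognise_and_execute(templates, instructions, observed_path, threshold=0.35):
    """Sketch of steps S4-S6: score the observed small-cube path against every
    gesture distribution template, keep the best score if it reaches the
    minimum threshold, and look up the corresponding robot instruction.

    `templates` maps a template name to a dict of
    (from_cube, to_cube) -> probability; `observed_path` is the ordered list
    of small-cube labels passed through during the gesture.
    """
    scores = {
        name: sum(t.get(pair, 0.0)  # uninvolved transitions contribute zero
                  for pair in zip(observed_path, observed_path[1:]))
        for name, t in templates.items()
    }
    best = max(scores, key=scores.get)
    if scores[best] < threshold:
        return None  # invalid gesture: the robot does not respond
    return instructions[best]
```

For example, scoring the waving path A → B → C → B → A against a waving template and a hand-raising template returns the instruction bound to the waving template.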
Creating, according to the application requirements, the various gesture distribution templates and the robot instruction corresponding to each gesture distribution template, as shown in fig. 2, comprises the following steps:
step S1.1: defining various gestures and meanings of the corresponding gestures according to the application requirements;
To explain the application process clearly, this embodiment defines two gestures and their meanings; more gesture definitions may be added according to actual needs. This embodiment defines: the hand-raising motion corresponds to opening the upper cover of the express delivery robot, so that a parcel can conveniently be taken out or put in; the waving motion corresponds to hailing the express delivery robot, requiring it to stop walking rather than continue forward.
Step S1.2: collecting data sets aiming at various gestures, and establishing various gesture distribution templates;
To make the created templates more robust, multiple people perform the hand-raising and waving motions to form the data set. This embodiment uses 50 people, each waving 10 times and raising a hand 10 times, and the two gesture distribution templates are established from these recordings.
Step S1.3: and establishing a one-to-one corresponding relation between various gesture distribution templates and the robot instructions according to the meanings of the corresponding gestures.
Collecting the data sets for the various gestures and establishing the various gesture distribution templates, as shown in fig. 3, comprises the following steps:
step S1.2.1: collecting a human body external cuboid of an object to be identified in a data set;
step S1.2.2: dividing the human body external cuboid into NxMxP small cubes, wherein N is the length of the human body external cuboid, M is the width of the human body external cuboid, and P is the height of the human body external cuboid; the present embodiment is divided into 6 × 4 × 8 small cubes, where N is 6, M is 4, and P is 8 as shown in fig. 8, so that the small cubes are called as external cuboid for human body, the divided cubes are smaller than the external cuboid for human body, and the total of all the small cubes constitutes one external cuboid for human body.
The circumscribed cuboid of the human body is defined as the cuboid bounded by the outermost extent of the person's limbs during the motion; that is, limb extension during the motion must be taken into account, and however the limbs extend, they remain inside the circumscribed cuboid. As shown in fig. 7, (a) shows the circumscribed cuboid when the object to be recognized is a child, the outermost limb boundary including the child's extended arms, and (b) shows the circumscribed cuboid when the object to be recognized is an adult, the outermost limb boundary including the adult's extended arms.

As fig. 7(a) and (b) show, the circumscribed cuboids of people of different heights and builds, and of different performances of a motion, differ in size, but this does not affect the final recognition result. Because the application works with the probabilities of the small cubes involved in a motion, it remains robust even when the camera views the body from very different angles, when image sharpness varies widely, or when individual motion styles (such as the waving motions in fig. 7) differ greatly. If the camera angle deviates strongly, the image is unsharp, or a person's motion amplitude is small, the matching probability computed for every gesture distribution template decreases uniformly by a small amount, but the ordering of the matching probabilities is unchanged: each probability drops slightly, yet the largest can still be distinguished. The final recognition result is therefore unaffected, which resolves the problem of inaccurate recognition.
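Assuming the positions of the person's body joints over the whole motion are available (for example from a depth camera, a sensor the patent does not specify), the circumscribed cuboid can be sketched as the axis-aligned box enclosing the outermost joint positions:

```python
def circumscribed_cuboid(joint_positions):
    """Axis-aligned cuboid enclosing the outermost limb positions.

    `joint_positions` is an iterable of (x, y, z) body-joint coordinates
    collected over the whole motion, so extended arms are included.
    Returns the (min_corner, max_corner) of the cuboid.
    """
    xs, ys, zs = zip(*joint_positions)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))
```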
Step S1.2.3: for a certain gesture, counting the probability between every two small cubes as objects to be recognized in the data set move from a starting small cube to an ending small cube;
Step S1.2.4: for that gesture, setting the probability between every two small cubes not involved in the movement to zero;
For simplicity and clarity of description (the probability between every two small cubes not involved in the two gestures is set to zero), the application takes only the three small cubes A, B and C involved in the waving motion, as shown in fig. 9, and describes the specific process in detail:
For example, an adult's waving motion generally passes through the small cubes A → B → C → B → A, so the probability between every two small cubes is counted:
template 1:
the corresponding established probabilities are as in table 1:
table 1: probability alphabet representation corresponding to template 1
Figure BDA0003447227880000081
Wherein the content of the first and second substances,
Figure BDA0003447227880000082
the probability value of template 1 from the small cube a to the small cube a is shown, other letters are analogized in turn, and the probability between every two small cubes is counted after 50 persons wave 10 times, as shown in table 2:
table 2: template 1 statistical probability values
A B C
A 0.86 0.82 0.84
B 0.65 0.75 0.57
C 0.54 0.63 0.61
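One plausible reading of these statistics, which the patent does not spell out, is that each entry is the fraction of recorded demonstrations containing that transition. Under that assumption, template estimation could be sketched as follows (the function name and data layout are illustrative):

```python
from collections import Counter

def build_template(sequences):
    """Estimate a gesture distribution template from recorded cube sequences.

    Each sequence is the ordered list of small-cube labels one demonstration
    passed through (e.g. ['A', 'B', 'C', 'B', 'A'] for a wave).  The template
    maps each (from_cube, to_cube) pair to the fraction of demonstrations
    containing that transition; pairs never observed keep an implicit
    probability of zero, as the patent specifies.
    """
    counts = Counter()
    for seq in sequences:
        counts.update(set(zip(seq, seq[1:])))  # count each transition once per demo
    total = len(sequences)
    return {pair: c / total for pair, c in counts.items()}
```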
An adult's hand-raising motion typically passes through the small cubes B → D, so the probability between every two small cubes is counted:
Template 2:
The corresponding symbolic probabilities are established as in table 3:
Table 3: symbolic probability representation corresponding to template 2
[Table 3 is given as an image in the source. Its entries are the symbols p2_XY, where p2_AA denotes the probability value of template 2 from small cube A to small cube A, and the other entries are defined analogously.]
After the 50 people each raise a hand 10 times, the probability between every two small cubes is counted, as shown in table 4:
Table 4: statistical probability values of template 2
[Table 4 is given as an image in the source; its numeric values are not recoverable.]
Because the hand-raising motion generally involves only the transition between small cubes B and D, the corresponding entries are large and the other probability values are small. Small cubes that are not involved at all have a probability value of zero in the gesture distribution template.
Step S1.2.5: obtaining the probability matrix over all N×M×P small cubes for a certain gesture, this probability matrix being the gesture distribution template corresponding to that gesture, and thus obtaining the gesture distribution templates corresponding to all the gestures.
Calculating the matching probability corresponding to each gesture distribution template from the small cubes passed through during the gesture movement of the object to be recognized, as shown in fig. 4, comprises the following steps:
Step S4.1: recording the sequence of small cubes passed through in the gesture movement, from the starting small cube to the ending small cube;
in this embodiment the waving motion passes through the small cubes A → B → C → B → A;
Step S4.2: extracting, from a given gesture distribution template, the probability between every two successive small cubes on the path from the starting small cube to the ending small cube;
Step S4.3: summing these probabilities to obtain the matching probability corresponding to that gesture distribution template, and thus the matching probability corresponding to every gesture distribution template.
Matching probability of template 1, summing the transition probabilities of the observed path A → B → C → B → A taken from template 1:

P1 = p1_AB + p1_BC + p1_CB + p1_BA

Matching probability of template 2, summing the same path's transition probabilities taken from template 2:

P2 = p2_AB + p2_BC + p2_CB + p2_BA
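Assuming a template is stored as a dictionary mapping (from-cube, to-cube) pairs to probabilities (an illustrative representation, not the patent's notation), the summation of step S4.3 is direct:

```python
def match_probability(template, path):
    """Sum the template's transition probabilities along the observed path.

    `template` maps (from_cube, to_cube) -> probability; transitions absent
    from the template contribute zero, matching the rule that small cubes
    not involved in a gesture have probability zero.
    """
    return sum(template.get(pair, 0.0) for pair in zip(path, path[1:]))
```

With the table 2 values and the observed path A → B → C → B → A, this sums p1_AB + p1_BC + p1_CB + p1_BA.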
Taking the largest matching probability that exceeds the minimum threshold as the final matching result, as shown in fig. 5, comprises the following steps:
Step S5.1: when every matching probability is smaller than the minimum threshold, the final matching result is that the gesture is invalid, and the robot does not respond;
Step S5.2: when exactly one matching probability is greater than or equal to the minimum threshold, that matching probability is the final matching result;
Step S5.3: when two or more matching probabilities are greater than or equal to the minimum threshold, the largest of them is taken as the final matching result.
Clearly, the matching probability of template 1 is the largest in this embodiment. The minimum threshold set in this embodiment is 0.35, and both matching probabilities are above it, so the largest matching probability is selected as the final matching result, namely the waving motion corresponding to template 1, and the robot instruction requiring the robot to stop walking rather than continue forward is executed.
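The three-way decision of steps S5.1 to S5.3 reduces to a single selection over the matching probabilities. The dictionary representation and function name below are illustrative assumptions:

```python
def final_match(match_probs, threshold=0.35):
    """Apply steps S5.1-S5.3: return the name of the best-matching template,
    or None for an invalid gesture.

    `match_probs` maps template names to matching probabilities; 0.35 is the
    minimum threshold used in this embodiment.
    """
    best = max(match_probs, key=match_probs.get)
    if match_probs[best] < threshold:
        return None  # all probabilities below the threshold: invalid gesture
    return best      # covers both the single-match and the multi-match case
```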
In a second aspect, the present application provides a robot control system based on gesture distribution template motion recognition, as shown in fig. 6, including: the device comprises a template instruction creating module, an acquisition module, a segmentation module, a calculation module, a matching result module and an execution module;
the template instruction creating module, the collecting module, the dividing module, the calculating module, the matching result module and the executing module are sequentially connected;
the template instruction creating module is used for creating various gesture distribution templates and robot instructions corresponding to the gesture distribution templates according to application requirements;
the acquisition module is used for collecting the circumscribed cuboid of the human body of the object to be recognized;
the segmentation module is used for dividing the circumscribed cuboid of the object to be recognized into N×M×P small cubes, where N, M and P are the numbers of subdivisions along the length, width and height of the circumscribed cuboid;
the calculation module is used for calculating the matching probability corresponding to each gesture distribution template from the small cubes passed through during the gesture movement of the object to be recognized;
the matching result module is used for taking the largest matching probability that exceeds the minimum threshold as the final matching result;
and the execution module is used for executing the robot instruction corresponding to the matched gesture distribution template according to the final matching result.
The template instruction creating module comprises: a gesture definition unit, a template establishing unit and a corresponding relation establishing unit;
the gesture definition unit, the template establishing unit and the corresponding relation establishing unit are sequentially connected;
the gesture definition unit is used for defining various gestures and meanings of the corresponding gestures according to the application requirements;
the template establishing unit is used for acquiring data sets aiming at various gestures and establishing various gesture distribution templates;
the corresponding relation establishing unit is used for establishing one-to-one corresponding relation between various gesture distribution templates and robot instructions according to the meanings of the corresponding gestures.
The template establishing unit comprises an external cuboid establishing unit, an external cuboid dividing unit, a probability statistics unit and a distribution template establishing unit;
the external cuboid establishing unit, the external cuboid dividing unit, the probability statistics unit and the distribution template establishing unit are sequentially connected;
the external cuboid establishing unit is used for acquiring the human body external cuboid of an object to be identified in the data set;
the external cuboid dividing unit is used for dividing the human body external cuboid into NxMxP small cubes, wherein N is the length of the human body external cuboid, M is the width of the human body external cuboid, and P is the height of the human body external cuboid;
the probability statistics unit is used for counting, for a certain gesture, the probability between every two small cubes as the object to be recognized in the data set moves from the starting small cube to the ending small cube; for that gesture, the probability between every two of the remaining uninvolved small cubes is zero;
the distribution template establishing unit is used for obtaining, for a certain gesture, the probability matrix over all the NxMxP small cubes, the probability matrix being the gesture distribution template corresponding to that gesture; the gesture distribution templates corresponding to all gestures are obtained in the same way.
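As a concrete illustration of the template establishing unit, the sketch below builds one gesture distribution template from recorded samples. The trajectory format (sequences of (i, j, k) indices into the NxMxP division) and the normalisation by the total pair count are assumptions of this sketch, not details fixed by the description.

```python
from collections import defaultdict

def build_gesture_template(trajectories, n, m, p):
    """Build the gesture distribution template for one gesture: a table of
    probabilities keyed by pairs of small cubes, where each small cube is an
    (i, j, k) index into the N x M x P division of the human body external
    cuboid. Pairs never traversed in the data set keep probability zero, so
    they are simply left out of the table."""
    counts = defaultdict(int)
    total = 0
    for traj in trajectories:
        for i, j, k in traj:  # sanity-check indices against the division
            assert 0 <= i < n and 0 <= j < m and 0 <= k < p
        for a, b in zip(traj, traj[1:]):  # consecutive small-cube pairs
            counts[(a, b)] += 1
            total += 1
    # normalise the pair counts into probabilities
    return {pair: c / total for pair, c in counts.items()}
```

Storing only the traversed pairs keeps the template sparse: the full NxMxP-by-NxMxP probability matrix is mostly zeros for any single gesture.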
The calculation module comprises a recording unit, a probability extraction unit and a probability summing unit;
the recording unit, the probability extraction unit and the probability summing unit are sequentially connected;
the recording unit is used for recording the starting small cube and the ending small cube swept in the gesture movement;
the probability extraction unit is used for extracting, for a certain gesture distribution template, the probability between every two small cubes between the starting small cube and the ending small cube;
the probability summing unit is used for summing the probabilities between every two small cubes to obtain the matching probability corresponding to that gesture distribution template; the matching probabilities corresponding to all the gesture distribution templates are obtained in the same way.
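The recording, extraction and summing steps can be sketched as follows; the template representation (a dict of probabilities keyed by small-cube pairs) is an assumption of this sketch, as is computing all template scores in one pass.

```python
def match_probability(trajectory, template):
    """Sum the template's probabilities over every consecutive pair of small
    cubes swept between the starting and the ending small cube; pairs absent
    from the template contribute zero, as required for uninvolved cubes."""
    return sum(template.get((a, b), 0.0)
               for a, b in zip(trajectory, trajectory[1:]))

def match_all(trajectory, templates):
    """Repeat the computation for every gesture distribution template,
    yielding one matching probability per template name."""
    return {name: match_probability(trajectory, tpl)
            for name, tpl in templates.items()}
```

The matching result module then only has to compare these scores against the lowest threshold and pick the maximum.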
The applicant has described and illustrated the embodiments of the invention in detail with reference to the accompanying drawings. Those skilled in the art should understand, however, that these embodiments are merely preferred embodiments of the invention, that the detailed description is intended only to help the reader better understand the spirit of the invention and not to limit its protective scope, and that, on the contrary, any improvement or modification made within the spirit of the invention shall fall within the protective scope of the invention.

Claims (10)

1. A robot control method based on gesture distribution template motion recognition is characterized by comprising the following steps:
according to application requirements, various gesture distribution templates and a robot instruction corresponding to each gesture distribution template are created;
collecting a human body external cuboid of an object to be identified;
dividing the human body external cuboid of the object to be recognized into NxMxP small cubes, wherein N is the length of the human body external cuboid, M is the width of the human body external cuboid, and P is the height of the human body external cuboid;
calculating the matching probability corresponding to each of the various gesture distribution templates according to the small cubes swept in the gesture movement of the object to be recognized;
taking the largest matching probability that reaches the lowest threshold as the final matching result;
and executing the robot instruction corresponding to the gesture distribution template according to the final matching result.
2. The robot control method based on gesture distribution template motion recognition according to claim 1, wherein creating the various gesture distribution templates and the robot instruction corresponding to each gesture distribution template according to application requirements comprises the following steps:
defining various gestures and meanings of the corresponding gestures according to the application requirements;
collecting data sets aiming at various gestures, and establishing various gesture distribution templates;
and establishing a one-to-one corresponding relation between various gesture distribution templates and the robot instructions according to the meanings of the corresponding gestures.
3. The robot control method based on gesture distribution template motion recognition according to claim 2, wherein collecting data sets for various gestures and establishing various gesture distribution templates comprises the following steps:
collecting a human body external cuboid of an object to be identified in a data set;
dividing the human body external cuboid into NxMxP small cubes, wherein N is the length of the human body external cuboid, M is the width of the human body external cuboid, and P is the height of the human body external cuboid;
for a certain gesture, counting the probability between every two small cubes as the object to be recognized in the data set moves from the starting small cube to the ending small cube;
for that gesture, the probability between every two of the remaining uninvolved small cubes is zero;
obtaining, from all the NxMxP small cubes, the probability matrix corresponding to the certain gesture, the probability matrix being the gesture distribution template corresponding to that gesture;
and then obtaining the gesture distribution templates corresponding to all gestures.
4. The robot control method based on gesture distribution template motion recognition according to claim 3, wherein the human body external cuboid is defined as the cuboid bounded by the outermost extent of the limbs during the motion of the human body.
5. The robot control method based on gesture distribution template motion recognition according to claim 1, wherein calculating the matching probability corresponding to each of the various gesture distribution templates according to the small cubes swept in the gesture movement of the object to be recognized comprises the following steps:
recording the starting small cube and the ending small cube swept in the gesture movement;
extracting, for a certain gesture distribution template, the probability between every two small cubes between the starting small cube and the ending small cube;
summing the probabilities between every two small cubes to obtain the matching probability corresponding to that gesture distribution template;
and then obtaining the matching probabilities corresponding to all the gesture distribution templates.
6. The robot control method based on gesture distribution template motion recognition according to claim 1, wherein taking the largest matching probability that reaches the lowest threshold as the final matching result comprises the following steps:
when all the matching probabilities are smaller than the lowest threshold, the final matching result is that the gesture is an invalid gesture, and the robot does not respond;
when exactly one matching probability is greater than or equal to the lowest threshold, that matching probability is the final matching result;
and when two or more matching probabilities are greater than or equal to the lowest threshold, the maximum of them is taken as the final matching result.
7. A robot control system based on gesture distribution template motion recognition is characterized by comprising a template instruction creating module, an acquisition module, a segmentation module, a calculation module, a matching result module and an execution module;
the template instruction creating module, the collecting module, the dividing module, the calculating module, the matching result module and the executing module are sequentially connected;
the template instruction creating module is used for creating various gesture distribution templates and robot instructions corresponding to the gesture distribution templates according to application requirements;
the acquisition module is used for acquiring a human body external cuboid of an object to be identified;
the segmentation module is used for dividing the human body external cuboid of the object to be identified into NxMxP small cubes, wherein N is the length of the human body external cuboid, M is the width of the human body external cuboid, and P is the height of the human body external cuboid;
the calculation module is used for calculating the matching probability corresponding to each of the various gesture distribution templates according to the small cubes swept in the gesture movement of the object to be recognized;
the matching result module is used for taking the largest matching probability that reaches the lowest threshold as the final matching result;
and the execution module is used for executing the robot instruction corresponding to the gesture distribution template according to the final matching result.
8. The robot control system based on gesture distribution template motion recognition of claim 7, wherein the template instruction creating module comprises a gesture definition unit, a template establishing unit and a corresponding relation establishing unit;
the gesture definition unit, the template establishing unit and the corresponding relation establishing unit are sequentially connected;
the gesture definition unit is used for defining various gestures and meanings of the corresponding gestures according to the application requirements;
the template establishing unit is used for collecting data sets for various gestures and establishing various gesture distribution templates;
the corresponding relation establishing unit is used for establishing one-to-one corresponding relation between various gesture distribution templates and robot instructions according to the meanings of the corresponding gestures.
9. The robot control system based on gesture distribution template motion recognition of claim 7, wherein the template establishing unit comprises an external cuboid establishing unit, an external cuboid dividing unit, a probability statistics unit and a distribution template establishing unit;
the external cuboid establishing unit, the external cuboid dividing unit, the probability statistics unit and the distribution template establishing unit are sequentially connected;
the external cuboid establishing unit is used for acquiring the human body external cuboid of an object to be identified in the data set;
the external cuboid dividing unit is used for dividing the human body external cuboid into NxMxP small cubes, wherein N is the length of the human body external cuboid, M is the width of the human body external cuboid, and P is the height of the human body external cuboid;
the probability statistics unit is used for counting, for a certain gesture, the probability between every two small cubes as the object to be recognized in the data set moves from the starting small cube to the ending small cube; for that gesture, the probability between every two of the remaining uninvolved small cubes is zero;
the distribution template establishing unit is used for obtaining, for a certain gesture, the probability matrix over all the NxMxP small cubes, the probability matrix being the gesture distribution template corresponding to that gesture; the gesture distribution templates corresponding to all gestures are obtained in the same way.
10. The robot control system based on gesture distribution template motion recognition of claim 7, wherein the calculation module comprises a recording unit, a probability extraction unit and a probability summing unit;
the recording unit, the probability extraction unit and the probability summing unit are sequentially connected;
the recording unit is used for recording the starting small cube and the ending small cube swept in the gesture movement;
the probability extraction unit is used for extracting, for a certain gesture distribution template, the probability between every two small cubes between the starting small cube and the ending small cube;
the probability summing unit is used for summing the probabilities between every two small cubes to obtain the matching probability corresponding to that gesture distribution template; the matching probabilities corresponding to all the gesture distribution templates are obtained in the same way.
CN202111662379.9A 2021-12-30 2021-12-30 Robot control method and system based on gesture distribution template action recognition Active CN114489328B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111662379.9A CN114489328B (en) 2021-12-30 2021-12-30 Robot control method and system based on gesture distribution template action recognition

Publications (2)

Publication Number Publication Date
CN114489328A true CN114489328A (en) 2022-05-13
CN114489328B CN114489328B (en) 2024-04-05

Family

ID=81508743

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111662379.9A Active CN114489328B (en) 2021-12-30 2021-12-30 Robot control method and system based on gesture distribution template action recognition

Country Status (1)

Country Link
CN (1) CN114489328B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080143975A1 (en) * 2006-10-25 2008-06-19 International Business Machines Corporation System and method for interacting with a display
US20100031202A1 (en) * 2008-08-04 2010-02-04 Microsoft Corporation User-defined gesture set for surface computing
US20150199018A1 (en) * 2014-01-14 2015-07-16 Microsoft Corporation 3d silhouette sensing system
CN105389539A (en) * 2015-10-15 2016-03-09 电子科技大学 Three-dimensional gesture estimation method and three-dimensional gesture estimation system based on depth data
JP2018097889A (en) * 2018-01-17 2018-06-21 セイコーエプソン株式会社 Object recognition device, object recognition method, object recognition program, robot system, and robot
CN108596948A (en) * 2018-03-16 2018-09-28 中国科学院自动化研究所 The method and device of human body head posture is identified based on depth camera
JP2021020285A (en) * 2019-07-29 2021-02-18 株式会社キーエンス Robot setting device and robot setting method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU Hui et al.: "Natural interaction model and algorithm for virtual interface", Journal of Zhejiang University (Engineering Science), vol. 50, no. 06, 30 June 2015 (2015-06-30), pages 1167-1175 *
LIU Jie et al.: "Template-matching-based 3D gesture recognition algorithm", Journal of Computer-Aided Design & Computer Graphics, vol. 28, no. 08, 31 August 2016 (2016-08-31), pages 1365-1372 *

Also Published As

Publication number Publication date
CN114489328B (en) 2024-04-05

Similar Documents

Publication Publication Date Title
CN106650687B (en) Posture correction method based on depth information and skeleton information
CN111931701B (en) Gesture recognition method and device based on artificial intelligence, terminal and storage medium
CN108595008B (en) Human-computer interaction method based on eye movement control
CN107765852A (en) Multi-modal interaction processing method and system based on visual human
CN108052884A (en) A kind of gesture identification method based on improvement residual error neutral net
CN110633004B (en) Interaction method, device and system based on human body posture estimation
CN102592115B (en) Hand positioning method and system
CN112036261A (en) Gesture recognition method and device, storage medium and electronic device
CN112418135A (en) Human behavior recognition method and device, computer equipment and readable storage medium
CN102831408A (en) Human face recognition method
CN106503619B (en) Gesture recognition method based on BP neural network
Rao et al. Neural network classifier for continuous sign language recognition with selfie video
CN107704813A (en) A kind of face vivo identification method and system
CN107103271A (en) A kind of method for detecting human face
CN109993135A (en) A kind of gesture identification method based on augmented reality, system and device
CN111339940B (en) Video risk identification method and device
CN113705466A (en) Human face facial feature occlusion detection method used for occlusion scene, especially under high-imitation occlusion
CN109669537A (en) A kind of man-machine interactive system based on computer virtual interface
CN114489328A (en) Robot control method and system based on gesture distribution template motion recognition
Wang et al. Human posture recognition based on convolutional neural network
CN112149517A (en) Face attendance checking method and system, computer equipment and storage medium
CN211025083U (en) Game equipment is felt to wall body based on monocular camera gesture control
CN115826764A (en) Gesture control method and system based on thumb
CN112633224B (en) Social relation recognition method and device, electronic equipment and storage medium
CN113239858A (en) Face detection model training method, face recognition method, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant