CN109176512A - Method, robot, and control device for motion-sensing control of a robot - Google Patents

Method, robot, and control device for motion-sensing control of a robot

Info

Publication number
CN109176512A
CN109176512A
Authority
CN
China
Prior art keywords
joint point
operator
color image
three-dimensional coordinate
connection relationship
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811009318.0A
Other languages
Chinese (zh)
Inventor
刘艺成
周宸
李元媛
费小平
郭汉超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang And Germany Communications Technology Co Ltd
Shanghai Wind Communication Technologies Co Ltd
Original Assignee
Nanchang And Germany Communications Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang And Germany Communications Technology Co Ltd filed Critical Nanchang And Germany Communications Technology Co Ltd
Priority to CN201811009318.0A priority Critical patent/CN109176512A/en
Publication of CN109176512A publication Critical patent/CN109176512A/en
Pending legal-status Critical Current

Classifications

    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B25 — HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J — MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 — Programme-controlled manipulators
    • B25J9/16 — Programme controls
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present invention relate to the field of visual recognition technology and disclose a method, robot, and control device for motion-sensing control of a robot. The method comprises: acquiring a color image and a depth image containing the operator; inputting the color image into a convolutional neural network to obtain the two-dimensional coordinates of the operator's joint points in the color image and the connection relationships between the joint points; registering the depth image with the color image to obtain the three-dimensional coordinates to which the operator's two-dimensional joint coordinates map in the depth image; calculating the angle of each of the operator's joints according to the three-dimensional coordinates and the connection relationships between the joint points; and controlling the robot to follow the operator's movements according to the three-dimensional coordinates and the joint angles. The method, robot, and control device provided by the present invention enable the robot to recognize the operator's posture accurately and in real time even when multiple people are present, and thus to imitate the operator's posture accurately and in real time.

Description

Method, robot, and control device for motion-sensing control of a robot
Technical field
Embodiments of the present invention relate to the field of visual recognition technology, and in particular to a method, robot, and control device for motion-sensing control of a robot.
Background art
Vision-based monitoring and recognition are becoming more and more widely applied. In the security, traffic, and entertainment fields, human skeleton extraction is the basis of many activity-recognition and motion-sensing interaction applications. After the successful application of deep networks, this technology achieved a good improvement on the Microsoft COCO dataset. Since the large-scale application of deep learning, the common methods of recognizing the human body are as follows:
Prior art 1: each individual pedestrian in the image is selected with a pedestrian-detection box, and the single-person posture inside the box is then judged. In prior art 1, however, when there are too many people in the picture, short distances between people, occlusion, or people standing sideways easily cause the single-person detection boxes to fail; moreover, a separate single-person posture judgment must be performed for each detected human body, so the detection time lengthens as the number of people increases.
Prior art 2: all human keypoints in the image are detected, and the skeletons are then connected by a global calculation. Because prior art 2 performs this global inference over all human joint points by solving an integer linear programming problem, and integer linear programming is NP-hard, the average time to solve such a problem is on the order of several minutes to several hours; real-time performance therefore cannot be guaranteed.
In summary, when a robot is controlled to follow human movements using the existing ways of detecting human actions, the operator's posture cannot be recognized accurately and in real time when several people are present, and the robot therefore cannot imitate the operator's posture accurately and in real time.
Summary of the invention
The purpose of embodiments of the present invention is to provide a method, robot, and control device for motion-sensing control of a robot, enabling the robot to recognize the operator's posture accurately and in real time among multiple people and to imitate that posture accurately and in real time.
To solve the above technical problem, embodiments of the present invention provide a method for motion-sensing control of a robot, comprising: acquiring a color image and a depth image containing the operator; inputting the color image into a convolutional neural network to obtain the two-dimensional coordinates of the operator's joint points in the color image and the connection relationships between the joint points; registering the depth image with the color image to obtain the three-dimensional coordinates to which the operator's two-dimensional joint coordinates map in the depth image; calculating the angle of each of the operator's joints according to the three-dimensional coordinates and the connection relationships between the joint points; and controlling the robot to follow the operator's movements according to the three-dimensional coordinates and the joint angles.
Embodiments of the present invention also provide a robot, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the above method for motion-sensing control of a robot.
Embodiments of the present invention also provide a control device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the above method for motion-sensing control of a robot.
Compared with the prior art, embodiments of the present invention provide a method for motion-sensing control of a robot, comprising: acquiring a color image and a depth image containing the operator; inputting the color image into a convolutional neural network to obtain the joint coordinates of all human bodies in the color image and the connection relationships between the joint points; obtaining the operator's two-dimensional joint coordinates and connection relationships from those of all human bodies; registering the depth image with the color image to obtain the three-dimensional coordinates to which the operator's two-dimensional joint coordinates map in the depth image; calculating the angle of each of the operator's joints according to the three-dimensional coordinates and the connection relationships; and controlling the robot to follow the operator's movements according to the three-dimensional coordinates and the joint angles. Predicting the operator's joint points and their connection relationships with a convolutional neural network largely solves the accuracy problems caused by occlusion and side views. Because the network predicts the joint coordinates and connection relationships of all people in the image at once, and the operator's are then extracted, the amount of computation is reduced, no complex NP-hard programming problem needs to be solved, and the running speed is improved. The running time does not grow in proportion to the number of people, alleviating the problem of running time and complexity rising with the number of people in the image, so the operator's two-dimensional joint coordinates and the connection relationships between the operator's joint points are obtained quickly and accurately in multi-person images. Registering the depth image with the color image yields the three-dimensional coordinates of the operator's joint points in the depth map; combining these with the connection relationships between the joint points yields each joint angle and hence the operator's three-dimensional posture, so that the robot can accurately imitate the operator's posture from the three-dimensional coordinates and joint angles and act on a person's movements accurately and in real time.
In addition, calculating the angle of each of the operator's joints according to the three-dimensional coordinates and the connection relationships between the joint points specifically comprises: establishing a coordinate system at each of the operator's joints according to the three-dimensional coordinates; and calculating the angle of each of the operator's joints in its joint coordinate system according to the three-dimensional coordinates and the connection relationships between the joint points.
In addition, the step of inputting the color image into the convolutional neural network to obtain the operator's two-dimensional joint coordinates in the color image and the connection relationships between the joint points specifically comprises: inputting the color image into the convolutional neural network to obtain the two-dimensional joint coordinates of all human bodies in the color image and the connection relationships between the joint points; identifying the operator in the color image; and extracting the two-dimensional coordinates of each of the identified operator's joint points and the connection relationships between them. Identifying the operator enables the robot to imitate the operator specifically.
In addition, the operator in the color image is identified specifically by face recognition or by position recognition.
In addition, the step of inputting the color image into the convolutional neural network to obtain the two-dimensional joint coordinates of all human bodies in the color image and the connection relationships between the joint points specifically comprises: reducing the resolution of the color image; inputting the reduced-resolution color image into the convolutional neural network to obtain the two-dimensional pixel coordinates of the joint points of all human bodies and the connection relationships between the joint points; and scaling the two-dimensional pixel coordinates of the joint points of all human bodies back up to the original image resolution to obtain the two-dimensional joint coordinates of all human bodies in the original-resolution color image. Reducing the resolution of the color image before inputting it into the convolutional neural network improves the network's processing speed.
Brief description of the drawings
One or more embodiments are illustrated by the figures in the corresponding drawings. These exemplary illustrations do not limit the embodiments. Elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures in the drawings are not drawn to scale.
Fig. 1 is a flowchart of the method for motion-sensing control of a robot according to the first embodiment of the present invention;
Fig. 2 is a flowchart of the method for motion-sensing control of a robot according to the second embodiment of the present invention;
Fig. 3 is a schematic structural diagram of the robot according to the third embodiment of the present invention;
Fig. 4 is a schematic structural diagram of the control device according to the fourth embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments of the present invention are described in detail below with reference to the accompanying drawings. Those skilled in the art will understand, however, that many technical details are given in each embodiment to help the reader understand the present application; the technical solutions claimed in the present application can still be implemented without these technical details and with various changes and modifications based on the following embodiments.
The first embodiment of the present invention relates to a method for motion-sensing control of a robot. The core of the present embodiment is to provide such a method, comprising: acquiring a color image and a depth image containing the operator; inputting the color image into a convolutional neural network to obtain the operator's two-dimensional joint coordinates in the color image and the connection relationships between the joint points; registering the depth image with the color image to obtain the three-dimensional coordinates to which the operator's two-dimensional joint coordinates map in the depth image; calculating the angle of each of the operator's joints according to the three-dimensional coordinates and the connection relationships between the joint points; and controlling the robot to follow the operator's movements according to the three-dimensional coordinates and the joint angles. Predicting the operator's joint points and their connection relationships with a convolutional neural network largely solves the accuracy problems caused by occlusion and side views. Because the network predicts the joint coordinates and connection relationships of all people in the image at once, and the operator's are then extracted, the amount of computation is reduced, no complex NP-hard programming problem needs to be solved, and the running speed is improved; the running time does not grow in proportion to the number of people, alleviating the problem of running time and complexity rising with the number of people in the image, so the operator's two-dimensional joint coordinates and the connection relationships between the operator's joint points are obtained quickly and accurately in multi-person images. Registering the depth image with the color image yields the three-dimensional coordinates of the operator's joint points in the depth map; the connection relationships between the joint points then yield each joint angle and hence the operator's three-dimensional posture, so that the robot can accurately imitate the operator's posture from the three-dimensional coordinates and joint angles and act on a person's movements accurately and in real time.
The implementation details of the method for motion-sensing control of a robot of the present embodiment are described below. The following content is provided only for ease of understanding and is not necessary for implementing this solution.
Fig. 1 shows the flow of the method for motion-sensing control of a robot in the present embodiment:
Step 101: acquire a color image and a depth image containing the operator.
Specifically, in the present embodiment the robot is equipped with a Kinect depth camera. The depth camera acquires an RGB color image and a depth image containing the operator; the color image and the depth image have the same resolution of 640x480. The acquired color image and depth image are then passed to the next processing step.
Preferably, the color image is first normalized before the next processing step. This pre-processing limits the influence of geometric transformations such as affine transformations on the image and can improve computational accuracy.
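For ease of understanding only, a minimal sketch of such a normalization step is given below (the scaling to [0, 1] and the per-channel standardization are assumptions made for this sketch; the embodiment does not specify the exact normalization):

```python
import numpy as np

def normalize_color_image(bgr: np.ndarray) -> np.ndarray:
    """Scale an 8-bit color image to [0, 1] and standardize each channel."""
    img = bgr.astype(np.float32) / 255.0          # map pixel values to [0, 1]
    mean = img.mean(axis=(0, 1), keepdims=True)   # per-channel mean
    std = img.std(axis=(0, 1), keepdims=True) + 1e-8
    return (img - mean) / std                     # zero-mean, unit-variance channels
```

The normalized 640x480 frame would then be passed to the convolutional neural network in step 102.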
Step 102: input the color image into the convolutional neural network to obtain the operator's two-dimensional joint coordinates in the color image and the connection relationships between the joint points.
Specifically, the convolutional neural network is a pre-trained Caffe network whose training image set consists of multi-person images captured from different viewpoints by a multi-camera system. Training a network to predict human posture essentially means training a model that can fit the complex high-dimensional function of the skeleton points in a picture. The model is trained on a dataset, and its predictive ability depends strongly on the scale and labeling precision of the training dataset as well as on the network itself. In other words, if the annotated pictures in the training set contain more human-body information — for example, pictures containing more than ten people, all accurately annotated — the trained network's predictive ability will also be stronger. Therefore, when building the training dataset, a multi-camera system captures multi-person images from different viewpoints; the large number of captured multi-person pictures are annotated with multiple human bodies while many incorrect results are filtered out, and this is iterated to obtain an accurate multi-person posture training dataset, which is then used as the input for training the convolutional neural network.
In the present embodiment the Caffe network consists of a VGG19 network, a first branch (the confidence-map branch), and a second branch (the part-affinity-field, PAF, branch). The color image is passed through the first ten layers of VGG19, initialized and fine-tuned, to obtain a feature map of the original image. Feeding the feature map into the confidence-map branch yields the confidence maps of the human joint points, i.e., a two-dimensional representation of the confidence that a joint point appears at each pixel position; feeding the feature map into the PAF branch yields the connection relationships between joint points, i.e., whether individual joint points belong to the same person. Both branches comprise seven stages: in stage 1 the small convolution kernel size is 3x3; in stages 2 to 7 the large convolution kernel size is 7x7; and in each later stage, the predictions of the two branches from the previous stage are concatenated with the original feature map to generate refined predictions.
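For ease of understanding only, the following is a minimal PyTorch sketch of this two-branch, multi-stage layout (the number of convolutions per stage, the channel widths, and the joint/limb counts are illustrative assumptions; the embodiment itself describes a Caffe implementation):

```python
import torch
import torch.nn as nn

def stage(in_ch: int, out_ch: int, k: int) -> nn.Sequential:
    """One prediction stage: a few convolutions with kernel size k."""
    p = k // 2
    return nn.Sequential(
        nn.Conv2d(in_ch, 128, k, padding=p), nn.ReLU(inplace=True),
        nn.Conv2d(128, 128, k, padding=p), nn.ReLU(inplace=True),
        nn.Conv2d(128, out_ch, 1),
    )

class TwoBranchPoseNet(nn.Module):
    def __init__(self, feat_ch=128, n_joints=18, n_limbs=19, n_stages=7):
        super().__init__()
        self.conf_stages = nn.ModuleList()
        self.paf_stages = nn.ModuleList()
        for s in range(n_stages):
            # Stage 1 sees only the backbone features (3x3 kernels);
            # later stages also see both previous predictions (7x7 kernels).
            in_ch = feat_ch if s == 0 else feat_ch + n_joints + 2 * n_limbs
            k = 3 if s == 0 else 7
            self.conf_stages.append(stage(in_ch, n_joints, k))
            self.paf_stages.append(stage(in_ch, 2 * n_limbs, k))

    def forward(self, features: torch.Tensor):
        conf, paf = self.conf_stages[0](features), self.paf_stages[0](features)
        preds = [(conf, paf)]
        for c_stage, p_stage in zip(self.conf_stages[1:], self.paf_stages[1:]):
            x = torch.cat([features, conf, paf], dim=1)  # refine prior predictions
            conf, paf = c_stage(x), p_stage(x)
            preds.append((conf, paf))
        return preds  # per-stage predictions, used for the summed loss below
```

Each stage outputs one prediction pair, which is what allows the per-stage supervision described further below.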
The prior art generally uses a human-body detection operator to determine a set of human detection boxes for the input image, i.e., rectangular boxes locating the people in the image; the loss function of such algorithms is the error measure between the positions of the detection boxes and the labeled data. Single-person joint detection is then performed inside each detection box, and finally the joint points extracted from all boxes are combined into the joint coordinates of multiple people. This detection scheme can handle multiple people, but its real-time performance is poor, because the detection time increases with the number of human bodies.
The present embodiment does not use a human detection operator to determine multiple detection boxes. Instead, the two branches of the Caffe network predict the joint coordinates and the joint connection relationships respectively, and the loss functions of the two branches serve as the error measure to be minimized. After each frame of the image is processed, the joint points are connected into individual persons according to the predicted connection relationships. This improves the real-time performance of multi-person detection.
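For ease of understanding only, a simplified sketch of one limb-matching step in such an assembly is shown below, assuming a score matrix already derived from the part affinity fields (the Hungarian assignment and the 0.05 threshold are illustrative choices, not taken from the embodiment):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_limb(scores: np.ndarray):
    """Match candidate parent joints to candidate child joints for one limb type.

    scores[i, j] is the PAF-derived connection score between the i-th candidate
    of the parent joint and the j-th candidate of the child joint.
    """
    rows, cols = linear_sum_assignment(-scores)  # maximize the total score
    return [(i, j) for i, j in zip(rows, cols) if scores[i, j] > 0.05]

# Example: three candidate shoulders vs. two candidate elbows.
scores = np.array([[0.9, 0.1], [0.2, 0.8], [0.0, 0.1]])
print(match_limb(scores))  # [(0, 0), (1, 1)]
```

Repeating such a matching over every limb type links the detected joints into per-person skeletons.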
It is worth noting that the Caffe network of the present embodiment uses multiple layers of large convolution kernels to obtain a larger receptive field, which also better predicts occluded joints. Because the output of each stage of the Caffe network is concatenated with the input of the next, the confidence maps output by earlier stages serve as inputs to later stages, i.e., the predicted confidence maps are refined repeatedly. The loss function of each branch is the sum of the loss functions of all of that branch's stages, which guarantees that the deeper network can still be trained and gradually improves the accuracy of multi-person detection.
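For ease of understanding only, the summed per-stage loss might be sketched as follows (the use of mean-squared error against ground-truth confidence maps and fields is an assumption consistent with this kind of regression):

```python
import torch
import torch.nn.functional as F

def multistage_loss(preds, conf_gt, paf_gt):
    """Sum the per-stage losses of both branches, as described above.

    preds:   list of (confidence_map, paf) tensors, one pair per stage
    conf_gt: ground-truth confidence maps, same shape as each confidence_map
    paf_gt:  ground-truth part affinity fields, same shape as each paf
    """
    loss = 0.0
    for conf, paf in preds:  # every stage contributes, so deep stages still get gradient
        loss = loss + F.mse_loss(conf, conf_gt) + F.mse_loss(paf, paf_gt)
    return loss
```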
The color image is input into the above pre-trained convolutional neural network (the Caffe network). Predicting the operator's joint points and their connection relationships with this pre-trained network largely solves the accuracy problems caused by occlusion and side views. By simultaneously predicting the joint points and connection relationships of all people in the image, the network obtains the operator's joint points and the connection relationships between the operator's joint points, so the operator's two-dimensional joint coordinates and connection relationships are obtained quickly and accurately in multi-person images.
Step 103: register the depth image with the color image, and obtain the three-dimensional coordinates to which the operator's two-dimensional joint coordinates map in the depth image.
Specifically, optical-flow image processing is applied to the depth image and the color image to compute the data offset between each pixel of the depth image and the corresponding pixel of the color image; the color image data and the depth image data are then matched according to this offset so that the information of each pixel is consistent, i.e., the depth image and the color image are brought into a unified coordinate system. Since the depth image gives the depth of each point in the image (the distance from the camera to the point in real space), the depth of each human joint point can be obtained after the depth image and the color image are registered; combined with the operator's two-dimensional joint coordinates, this yields the three-dimensional coordinates to which the operator's two-dimensional joint coordinates map in the depth image.
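For ease of understanding only, once the two images share a coordinate system, a joint's pixel position and depth can be back-projected with a standard pinhole camera model (the intrinsic parameters below are placeholders; real values come from the depth camera's calibration):

```python
import numpy as np

# Placeholder intrinsics for a 640x480 depth camera (calibration-dependent).
FX, FY = 525.0, 525.0   # focal lengths in pixels (assumed)
CX, CY = 319.5, 239.5   # principal point (assumed)

def joint_to_3d(u: float, v: float, depth_m: float) -> np.ndarray:
    """Back-project a registered pixel (u, v) with depth z into camera space."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

# Example: a wrist detected at pixel (400, 260) at 1.8 m from the camera.
wrist_xyz = joint_to_3d(400, 260, 1.8)
```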
Step 104: calculate the angle of each of the operator's joints according to the three-dimensional coordinates and the connection relationships between the joint points.
Specifically, this comprises: establishing a coordinate system at each of the operator's joints according to the three-dimensional coordinates, and calculating the angle of each of the operator's joints in its joint coordinate system according to the three-dimensional coordinates and the connection relationships between the joint points. For example, with the three-dimensional coordinate of the wrist as the origin of a coordinate system, the angle between the hand and the forearm — the wrist angle — can be obtained from the three-dimensional coordinates and connection relationships of the forearm, the hand, and the joint points connected to them. Likewise, with the three-dimensional coordinate of the knee as the origin of a coordinate system, the angle between the lower leg and the thigh — the knee angle — can be obtained from the three-dimensional coordinates and connection relationships of the lower leg, the thigh, and the joint points connected to them.
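For ease of understanding only, a minimal sketch of computing one such joint angle from three connected joint positions (the elbow-wrist-hand triple is just an example):

```python
import numpy as np

def joint_angle(parent: np.ndarray, joint: np.ndarray, child: np.ndarray) -> float:
    """Angle (radians) at `joint` between the bones joint->parent and joint->child."""
    a = parent - joint                     # e.g. wrist -> elbow gives the forearm bone
    b = child - joint                      # e.g. wrist -> hand gives the hand bone
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Example: wrist angle from the elbow, wrist, and hand 3D coordinates.
elbow, wrist, hand = map(np.array, ([0.0, 0, 0], [0.25, 0, 0], [0.32, 0.05, 0]))
print(np.degrees(joint_angle(elbow, wrist, hand)))
```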
It is worth noting that the joint angles obtained above are specifically Euler angles. Euler angles are the triple of independent angle parameters that uniquely determine the orientation of a body rotating about a fixed point, comprising the nutation angle, the precession angle, and the rotation (spin) angle. Using Euler angles as the angle parameters for controlling the robot makes the robot's movements more precise.
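For ease of understanding only, extracting precession, nutation, and spin angles from a joint frame's rotation matrix could be sketched with SciPy as follows (the intrinsic z-x-z convention shown is one common choice; the embodiment does not fix a convention):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def joint_euler_angles(R: np.ndarray) -> np.ndarray:
    """Precession, nutation, and spin angles of a joint's rotation matrix R.

    R maps the joint's local coordinate system (step 104) into the camera frame.
    """
    return Rotation.from_matrix(R).as_euler("ZXZ")  # intrinsic z-x-z convention

# Round-trip example: recover the angles of a known z-x-z rotation.
R = Rotation.from_euler("ZXZ", [0.3, 0.7, 0.1]).as_matrix()
print(joint_euler_angles(R))  # approximately [0.3, 0.7, 0.1]
```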
Step 105: control the robot to follow the operator's movements according to the three-dimensional coordinates and the joint angles.
Specifically, the robot uses the three-dimensional coordinates of the joint points and the angles of the joints to reproduce the human body's current posture, acting according to the operator's movements.
It should be emphasized that steps 101 to 105 can be performed in real time, so that the robot can not only imitate a person's posture but also perform a continuous series of movements following the person's movements.
Compared with the prior art, the present embodiment provides a method for motion-sensing control of a robot, comprising: acquiring a color image and a depth image containing the operator; inputting the color image into a convolutional neural network to obtain the operator's two-dimensional joint coordinates in the color image and the connection relationships between the joint points; registering the depth image with the color image to obtain the three-dimensional coordinates to which the operator's two-dimensional joint coordinates map in the depth image; calculating the angle of each of the operator's joints according to the three-dimensional coordinates and the connection relationships; and controlling the robot to follow the operator's movements according to the three-dimensional coordinates and the joint angles. Predicting the operator's joint points and their connection relationships with a convolutional neural network largely solves the accuracy problems caused by occlusion and side views. Using the network to jointly predict the joint coordinates and connection relationships of all people in the image, and then extracting those of the operator, reduces the amount of computation: no complex NP-hard programming problem needs to be solved, the running speed is improved, and the running time does not grow in proportion to the number of people, alleviating the problem of running time and complexity rising with the number of people in the image. The operator's two-dimensional joint coordinates and the connection relationships between the operator's joint points are thus obtained quickly and accurately in multi-person images. Registering the depth image with the color image yields the three-dimensional coordinates of the operator's joint points in the depth map; the connection relationships then yield each joint angle and hence the operator's three-dimensional posture, so that the robot can accurately imitate the operator's posture from the three-dimensional coordinates and joint angles and act on a person's movements accurately and in real time.
The second embodiment of the present invention relates to a method for motion-sensing control of a robot. The second embodiment is an improvement on the first; the main improvement is that the step of inputting the color image into the convolutional neural network to obtain the operator's two-dimensional joint coordinates in the color image and the connection relationships between the joint points specifically comprises: inputting the color image into the convolutional neural network to obtain the two-dimensional joint coordinates of all human bodies in the color image and the connection relationships between the joint points; identifying the operator in the color image; and extracting the two-dimensional coordinates of each of the identified operator's joint points and the connection relationships between them. Identifying the operator enables the robot to imitate the operator specifically.
Fig. 2 shows the flow of the method for motion-sensing control of a robot in the present embodiment, which specifically comprises:
Step 201: acquire a color image and a depth image containing the operator.
Step 201 is substantially the same as step 101 in the first embodiment and is not repeated here.
Step 202: input the color image into the convolutional neural network to obtain the two-dimensional joint coordinates of all human bodies in the color image and the connection relationships between the joint points.
Specifically, this step comprises: reducing the resolution of the color image; inputting the reduced-resolution color image into the convolutional neural network to obtain the two-dimensional pixel coordinates of the joint points of all human bodies in the color image and the connection relationships between the joint points; and scaling the two-dimensional pixel coordinates of the joint points of all human bodies back up to the original image resolution to obtain the two-dimensional joint coordinates of all human bodies in the original-resolution color image. Before the color image is input into the convolutional neural network, its resolution is first reduced — for example, to 200x150 — to speed up detection by the convolutional neural network, so that fast detection is still achieved when the picture contains many people. After the reduced-resolution color image has been processed by the network, the obtained two-dimensional pixel coordinates of the joint points of all human bodies are scaled back up to the original image resolution.
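For ease of understanding only, this downscale-infer-rescale pattern might look as follows (run_network is a hypothetical wrapper around the convolutional neural network):

```python
import cv2
import numpy as np

def detect_joints_fast(color: np.ndarray, run_network) -> np.ndarray:
    """Run joint detection at reduced resolution, then rescale coordinates.

    run_network(small_image) -> (N, 2) array of joint pixel coordinates (x, y)
    in the small image (hypothetical network wrapper).
    """
    h, w = color.shape[:2]                 # e.g. 480, 640
    small = cv2.resize(color, (200, 150))  # reduce resolution to speed up inference
    joints_small = run_network(small)      # (N, 2) pixel coords in the 200x150 image
    scale = np.array([w / 200.0, h / 150.0])
    return joints_small * scale            # map back to the original resolution
```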
Step 203: identify the operator in the color image.
Step 204: extract the two-dimensional coordinates of each of the identified operator's joint points and the connection relationships between them.
For steps 203 and 204: specifically, the operator in the color image is identified by face recognition or by position recognition. After the operator has been identified by face recognition or position recognition, the two-dimensional coordinates of each of the operator's joint points and the connection relationships between them are extracted from the joint coordinates and connection relationships of all human bodies.
It will be understood that the ways of identifying the designated operator in the color image in the present embodiment are not limited to face recognition and position recognition; other ways of identifying the designated operator in the color image also fall within the protection scope of the present embodiment.
Step 205: register the depth image with the color image, and obtain the three-dimensional coordinates to which the operator's two-dimensional joint coordinates map in the depth image.
Step 206: calculate the angle of each of the operator's joints according to the three-dimensional coordinates and the connection relationships between the joint points.
Step 207: control the robot to follow the operator's movements according to the three-dimensional coordinates and the joint angles.
Steps 205 to 207 are substantially the same as steps 103 to 105 in the first embodiment and are not repeated here.
Compared with the prior art, the present embodiment provides a method for motion-sensing control of a robot in which the step of inputting the color image into the convolutional neural network to obtain the operator's two-dimensional joint coordinates in the color image and the connection relationships between the joint points specifically comprises: inputting the color image into the convolutional neural network to obtain the two-dimensional joint coordinates of all human bodies in the color image and the connection relationships between the joint points; identifying the operator in the color image; and extracting the two-dimensional coordinates of each of the identified operator's joint points and the connection relationships between them. Identifying the operator enables the robot to imitate the operator specifically.
The division of the above methods into steps is only for clarity of description. When implemented, steps may be merged into a single step, or a step may be split into multiple steps; as long as the same logical relationship is included, they are within the protection scope of this patent. Adding insignificant modifications to, or introducing insignificant designs into, an algorithm or a flow without changing the core design of the algorithm and flow is also within the protection scope of this patent.
The third embodiment of the present invention relates to a robot which, as shown in Fig. 3, comprises at least one processor 301 and a memory 302 communicatively connected to the at least one processor 301. The memory 302 stores instructions executable by the at least one processor 301; the instructions are executed by the at least one processor 301 to enable the at least one processor 301 to perform the method for motion-sensing control of a robot in any of the above embodiments.
The memory 302 and the processor 301 are connected by a bus, which may comprise any number of interconnected buses and bridges linking one or more processors and the memory 302. The bus may also link together various other circuits such as peripheral devices and voltage regulators, all of which are well known in the art and are therefore not described further here. A bus interface is provided between the bus and a transceiver. The transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, providing a unit for communicating with various other devices over a transmission medium. Data processed by the processor is transmitted over a wireless medium via an antenna; further, the antenna also receives data and transfers the data to the processor 301.
The processor 301 is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions, while the memory 302 may be used to store data used by the processor when performing operations.
The fourth embodiment of the present invention relates to a control device which, as shown in Fig. 4, comprises at least one processor 401 and a memory 402 communicatively connected to the at least one processor 401. The memory 402 stores instructions executable by the at least one processor 401; the instructions are executed by the at least one processor 401 to enable the at least one processor 401 to perform the method for motion-sensing control of a robot in any of the above embodiments.
The memory 402 and the processor 401 are connected by a bus, which may comprise any number of interconnected buses and bridges linking one or more processors and the memory 402. The bus may also link together various other circuits such as peripheral devices and voltage regulators, all of which are well known in the art and are therefore not described further here. A bus interface is provided between the bus and a transceiver. The transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, providing a unit for communicating with various other devices over a transmission medium. Data processed by the processor is transmitted over a wireless medium via an antenna; further, the antenna also receives data and transfers the data to the processor 401.
The processor 401 is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions, while the memory 402 may be used to store data used by the processor when performing operations.
The fifth embodiment of the present invention relates to a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method for motion-sensing control of a robot in any of the above embodiments.
That is, those skilled in the art will understand that all or part of the steps of the methods of the above embodiments may be completed by a program instructing the relevant hardware. The program is stored in a storage medium and includes instructions that cause a device (which may be a microcontroller, a chip, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, read-only memory (ROM), random access memory (RAM), a magnetic disk, or an optical disk.
Those skilled in the art will understand that the above embodiments are specific embodiments for realizing the present invention, and that in practical applications various changes may be made to them in form and detail without departing from the spirit and scope of the present invention.

Claims (9)

1. A method for motion-sensing control of a robot, characterized in that it comprises:
acquiring a color image and a depth image containing an operator;
inputting the color image into a convolutional neural network to obtain the two-dimensional joint-point coordinates of all human bodies in the color image and the connection relationships between the joint points;
obtaining the operator's two-dimensional joint-point coordinates and the connection relationships between the operator's joint points from the two-dimensional joint-point coordinates of all human bodies and the connection relationships between the joint points;
registering the depth image with the color image, and obtaining the three-dimensional coordinates to which the operator's two-dimensional joint-point coordinates map in the depth image;
calculating the angle of each of the operator's joints according to the three-dimensional coordinates and the connection relationships between the joint points;
controlling the robot to follow the operator's movements according to the three-dimensional coordinates and the angles of the joints.
2. The method for motion-sensing control of a robot according to claim 1, characterized in that calculating the angle of each of the operator's joints according to the three-dimensional coordinates and the connection relationships between the joint points specifically comprises:
establishing a coordinate system at each of the operator's joints according to the three-dimensional coordinates;
calculating the angle of each of the operator's joints in its joint coordinate system according to the three-dimensional coordinates and the connection relationships between the joint points.
3. The method for motion-sensing control of a robot according to claim 1, characterized in that, before the step of obtaining the operator's two-dimensional joint-point coordinates and the connection relationships between the operator's joint points from those of all human bodies, the method further comprises:
identifying the operator in the color image;
and the step of obtaining the operator's two-dimensional joint-point coordinates and the connection relationships between the operator's joint points specifically comprises:
obtaining, according to the identified operator, the two-dimensional coordinates of each of the operator's joint points and the connection relationships between them from the two-dimensional joint-point coordinates of all human bodies and the connection relationships between the joint points.
4. The method for motion-sensing control of a robot according to claim 3, characterized in that the operator in the color image is identified specifically by face recognition or position recognition.
5. The method for motion-sensing control of a robot according to claim 3, characterized in that the step of inputting the color image into the convolutional neural network to obtain the two-dimensional joint-point coordinates of all human bodies in the color image and the connection relationships between the joint points specifically comprises:
reducing the resolution of the color image;
inputting the reduced-resolution color image into the convolutional neural network to obtain the two-dimensional pixel coordinates of the joint points of all human bodies in the color image and the connection relationships between the joint points;
scaling the two-dimensional pixel coordinates of the joint points of all human bodies back up to the original image resolution, to obtain the two-dimensional joint-point coordinates of all human bodies in the original-resolution color image.
6. The method for motion-sensing control of a robot according to claim 1, characterized in that the convolutional neural network comprises a confidence-map branch and a part-affinity-field branch;
and the method for motion-sensing control of a robot further comprises:
the loss function of the confidence-map branch of the convolutional neural network is the sum of the loss functions of all stages of the confidence-map branch during training of the convolutional neural network;
the loss function of the part-affinity-field branch of the convolutional neural network is the sum of the loss functions of all stages of the part-affinity-field branch during training of the convolutional neural network.
7. The method for motion-sensing control of a robot according to claim 1, characterized in that the training image set of the convolutional neural network consists of multi-person images captured from different viewpoints by a multi-camera system.
8. A robot, characterized in that it comprises:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method for motion-sensing control of a robot according to any one of claims 1 to 7.
9. A control device, characterized in that it comprises:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method for motion-sensing control of a robot according to any one of claims 1 to 7.
CN201811009318.0A 2018-08-31 2018-08-31 Method, robot, and control device for motion-sensing control of a robot Pending CN109176512A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811009318.0A CN109176512A (en) Method, robot, and control device for motion-sensing control of a robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811009318.0A CN109176512A (en) Method, robot, and control device for motion-sensing control of a robot

Publications (1)

Publication Number Publication Date
CN109176512A (en) 2019-01-11

Family

ID=64917708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811009318.0A Pending CN109176512A (en) Method, robot, and control device for motion-sensing control of a robot

Country Status (1)

Country Link
CN (1) CN109176512A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103170973A (en) * 2013-03-28 2013-06-26 上海理工大学 Man-machine cooperation device and method based on Kinect video camera
CN103386683A (en) * 2013-07-31 2013-11-13 哈尔滨工程大学 Kinect-based motion sensing-control method for manipulator
US20170193298A1 (en) * 2014-03-19 2017-07-06 Neurala, Inc. Methods and apparatus for autonomous robotic control
CN105631861A (en) * 2015-12-21 2016-06-01 浙江大学 Method of restoring three-dimensional human body posture from unmarked monocular image in combination with height map
CN105787439A (en) * 2016-02-04 2016-07-20 广州新节奏智能科技有限公司 Depth image human body joint positioning method based on convolution nerve network
CN106625658A (en) * 2016-11-09 2017-05-10 华南理工大学 Method for controlling anthropomorphic robot to imitate motions of upper part of human body in real time
CN107170011A (en) * 2017-04-24 2017-09-15 杭州司兰木科技有限公司 A kind of robot vision tracking and system
CN106956266A (en) * 2017-05-16 2017-07-18 北京京东尚科信息技术有限公司 robot control method, device and robot
CN107545242A (en) * 2017-07-25 2018-01-05 大圣科技股份有限公司 A kind of method and device that human action posture is inferred by 2D images
CN108052896A (en) * 2017-12-12 2018-05-18 广东省智能制造研究所 Human bodys' response method based on convolutional neural networks and support vector machines
CN108171748A (en) * 2018-01-23 2018-06-15 哈工大机器人(合肥)国际创新研究院 A kind of visual identity of object manipulator intelligent grabbing application and localization method

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111435535A (en) * 2019-01-14 2020-07-21 株式会社日立制作所 Method and device for acquiring joint point information
CN111435535B (en) * 2019-01-14 2024-03-08 株式会社日立制作所 Method and device for acquiring joint point information
CN110070573A (en) * 2019-04-25 2019-07-30 北京卡路里信息技术有限公司 Joint figure determines method, apparatus, equipment and storage medium
CN110070573B (en) * 2019-04-25 2021-07-06 北京卡路里信息技术有限公司 Joint map determination method, device, equipment and storage medium
WO2020228217A1 (en) * 2019-05-13 2020-11-19 河北工业大学 Human body posture visual recognition method for transfer carrying nursing robot, and storage medium and electronic device
CN111949111B (en) * 2019-05-14 2022-04-26 Oppo广东移动通信有限公司 Interaction control method and device, electronic equipment and storage medium
CN111949111A (en) * 2019-05-14 2020-11-17 Oppo广东移动通信有限公司 Interaction control method and device, electronic equipment and storage medium
CN113892112A (en) * 2019-07-10 2022-01-04 赫尔实验室有限公司 Action classification using deep-nested clustering
WO2021008158A1 (en) * 2019-07-15 2021-01-21 深圳市商汤科技有限公司 Method and apparatus for detecting key points of human body, electronic device and storage medium
CN110480634A (en) * 2019-08-08 2019-11-22 北京科技大学 A kind of arm guided-moving control method for manipulator motion control
CN110515384A (en) * 2019-09-09 2019-11-29 深圳市三宝创新智能有限公司 A kind of the human body follower method and robot of view-based access control model mark
CN112568898A (en) * 2019-09-29 2021-03-30 杭州福照光电有限公司 Method, device and equipment for automatically evaluating injury risk and correcting motion of human body motion based on visual image
CN110826405A (en) * 2019-09-30 2020-02-21 许昌许继软件技术有限公司 Equipment control method and device based on human body posture image
CN111055275B (en) * 2019-12-04 2021-10-29 深圳市优必选科技股份有限公司 Action simulation method and device, computer readable storage medium and robot
CN111055275A (en) * 2019-12-04 2020-04-24 深圳市优必选科技股份有限公司 Action simulation method and device, computer readable storage medium and robot
CN111208783B (en) * 2019-12-30 2021-09-17 深圳市优必选科技股份有限公司 Action simulation method, device, terminal and computer storage medium
CN111208783A (en) * 2019-12-30 2020-05-29 深圳市优必选科技股份有限公司 Action simulation method, device, terminal and computer storage medium
US11940774B2 (en) 2019-12-30 2024-03-26 Ubtech Robotics Corp Ltd Action imitation method and robot and computer readable storage medium using the same
CN111462234A (en) * 2020-03-27 2020-07-28 北京华捷艾米科技有限公司 Position determination method and device
JP7382415B2 (en) 2020-06-01 2023-11-16 深▲せん▼華鵲景医療科技有限公司 Upper limb function evaluation device and method and upper limb rehabilitation training system and method
JP2022536439A (en) * 2020-06-01 2022-08-17 深▲せん▼華鵲景医療科技有限公司 Upper limb function evaluation device and method, and upper limb rehabilitation training system and method
CN112070835A (en) * 2020-08-21 2020-12-11 达闼机器人有限公司 Mechanical arm pose prediction method and device, storage medium and electronic equipment
CN112070835B (en) * 2020-08-21 2024-06-25 达闼机器人股份有限公司 Mechanical arm pose prediction method and device, storage medium and electronic equipment
CN112766153A (en) * 2021-01-19 2021-05-07 合肥工业大学 Three-dimensional human body posture estimation method and system based on deep learning
WO2023273093A1 (en) * 2021-06-30 2023-01-05 奥比中光科技集团股份有限公司 Human body three-dimensional model acquisition method and apparatus, intelligent terminal, and storage medium
CN117340914A (en) * 2023-10-24 2024-01-05 哈尔滨工程大学 Humanoid robot human body feeling control method and control system
CN117340914B (en) * 2023-10-24 2024-05-14 哈尔滨工程大学 Humanoid robot human body feeling control method and control system

Similar Documents

Publication Publication Date Title
CN109176512A (en) Method, robot, and control device for motion-sensing control of a robot
EP3971686A1 (en) Ar scenario-based gesture interaction method and apparatus, storage medium, and communication terminal
Fang et al. Visual SLAM for robot navigation in healthcare facility
CN106650630B (en) Target tracking method and electronic device
CN109816725A (en) Monocular-camera object pose estimation method and device based on deep learning
CN101619984B (en) Mobile robot visual navigation method based on color road signs
CN110020633A (en) Training method of gesture recognition model, image recognition method, and device
CN107990899A (en) Localization method and system based on SLAM
CN106897697A (en) Person and pose detection method based on a visual compiler
CN109800689A (en) Target tracking method based on spatio-temporal feature fusion learning
CN103930944B (en) Adaptive tracking system for space input equipment
CN104346816A (en) Depth determination method and device, and electronic equipment
CN108875902A (en) Neural network training method and device, vehicle detection estimation method and device, storage medium
CN104463191A (en) Robot visual processing method based on attention mechanism
CN109934847A (en) Method and apparatus for pose estimation of weakly textured three-dimensional objects
CN109325456A (en) Target identification method, device, target identification equipment, and storage medium
CN108229559A (en) Clothing detection method, device, electronic equipment, program, and medium
CN108805016A (en) Head-and-shoulder region detection method and device
CN109740454A (en) Human body posture recognition method based on YOLO-V3
Suzuki et al. Enhancement of gross-motor action recognition for children by CNN with OpenPose
CN108829233B (en) Interaction method and device
CN113989944B (en) Operation action recognition method, device and storage medium
CN110532883A (en) Improving online tracking using an offline tracking algorithm
CN102853830A (en) Robot vision navigation method based on general object recognition
CN111178170B (en) Gesture recognition method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190111