CN108628306A - Robot walking obstacle detection method and apparatus, computer device and storage medium - Google Patents

Robot walking obstacle detection method and apparatus, computer device and storage medium

Info

Publication number
CN108628306A
CN108628306A (application CN201810314149.5A; granted as CN108628306B)
Authority
CN
China
Prior art keywords
image
coordinate
coordinate system
human body
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810314149.5A
Other languages
Chinese (zh)
Other versions
CN108628306B (en)
Inventor
Zeng Wei (曾伟)
Zhou Bao (周宝)
Wang Jianzong (王健宗)
Xiao Jing (肖京)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology (Shenzhen) Co., Ltd.
Priority to CN201810314149.5A (granted as CN108628306B)
Priority to PCT/CN2018/102854 (published as WO2019196313A1)
Publication of CN108628306A
Application granted
Publication of CN108628306B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0248 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means in combination with a laser
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Optics & Photonics (AREA)
  • Human Computer Interaction (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present invention provides a robot walking obstacle detection method and apparatus. An image is acquired by a camera, and an image-based human detection algorithm yields the image coordinates (u, v) of each human joint point in the image coordinate system; the coordinates (u, v) are converted to the physical coordinates (x, y) of the image physical coordinate system; from x and y, the vertical deflection angle α and horizontal deflection angle β at which the laser emitter illuminates a key human joint point are determined, and the laser emitter is started to measure the distance Z' from the human body to the robot; from x, y and Z', the coordinates (X, Y, Z) of the joint point in the camera coordinate system are computed; from the camera-coordinate positions (X1, Y1, Z1) and (X2, Y2, Z2) of the corresponding joint points of any two persons, the distance L between those joint points is computed; and the walking direction is determined from the distances L, achieving accurate pedestrian avoidance. A computer device and a storage medium are also provided.

Description

Robot walking obstacle detection method and apparatus, computer device and storage medium
Technical field
The present invention relates to the technical field of robot obstacle avoidance, and in particular to a robot walking obstacle detection method and apparatus, a computer device, and a storage medium storing computer-readable instructions.
Background technology
In robot navigation, obstacle detection is a key factor in successful navigation. Compared with stationary objects, moving pedestrians add many difficulties to obstacle avoidance. Existing obstacle detection methods have a number of problems: ultrasonic detection has a limited range, and even with more sensors it is difficult to cover three-dimensional space comprehensively; laser detection is more precise but suffers from the same limitation; depth cameras cover a larger range, but because of the heavy data processing and the limited visual depth of field, their detection accuracy is not high. Moreover, these methods are aimed mainly at static objects. Treating pedestrians like any other obstacle makes the generated avoidance strategy passive when pedestrians appear, reducing the efficiency of effective avoidance.
Invention content
The purpose of the present invention is to address at least one of the above technical deficiencies, in particular the difficulty of avoiding pedestrians.
The present invention provides a robot walking obstacle detection method. The robot has a camera, a laser emitter and a laser receiver, and the method includes the following steps:
Step S10: acquire an image with the camera, and apply an image-based human detection algorithm to obtain the image coordinates (u, v) of each human joint point in the image coordinate system;
Step S20: convert the image coordinates (u, v) in the image coordinate system to the physical coordinates (x, y) of the image physical coordinate system;
Step S30: determine from x and y the vertical deflection angle α and horizontal deflection angle β at which the laser emitter illuminates a key human joint point, and start the laser emitter to measure the distance Z' from the human body to the robot;
Step S40: compute from x, y and Z' the coordinates (X, Y, Z) of the joint point in the camera coordinate system;
Step S50: from the camera-coordinate positions (X1, Y1, Z1) and (X2, Y2, Z2) of the corresponding joint points of any two persons, compute the distance L between those joint points;
Step S60: determine the walking direction from the distances L between the corresponding joint points of any two persons.
In one embodiment, step S20 includes:
converting the image coordinates in the image coordinate system to the physical coordinates of the image physical coordinate system, with the conversion relationship:
x = (u − u0)·dx, y = (v − v0)·dy
where (u, v) are the image coordinates in the image coordinate system, (x, y) the coordinates in the image physical coordinate system, u0 and v0 the coordinates of the camera optical axis in the image coordinate system, and dx and dy the physical size of one pixel along the x-axis and y-axis.
In one embodiment, step S30 includes:
determining the vertical deflection angle α and horizontal deflection angle β at which the laser emitter illuminates the key human joint point, and starting the laser emitter to measure the distance Z' from the human body to the robot, where:
du is the offset of the camera coordinate system relative to the image physical coordinate system, dv the offset of the laser coordinate system relative to the image physical coordinate system, and d the horizontal distance from the laser emitter to the camera imaging plane.
In one embodiment, step S40 includes:
computing from x, y, d and Z' the coordinates (X, Y, Z) of each human joint point in the camera coordinate system, and computing from (X, Y, Z) the distance L between the corresponding joint points of any two persons, where the relationship between the camera coordinates (X, Y, Z) and the image physical coordinates (x, y) is:
X = x·Z/f, Y = y·Z/f
where f is the camera focal length and Z = Z' + d.
The present invention also provides a robot walking obstacle detection apparatus. The robot has a camera, a laser emitter and a laser receiver, and the apparatus includes:
an image coordinate acquisition module, for acquiring an image with the camera and applying an image-based human detection algorithm to obtain the image coordinates (u, v) of each human joint point in the image coordinate system;
a physical coordinate acquisition module, for converting the image coordinates (u, v) in the image coordinate system to the physical coordinates (x, y) of the image physical coordinate system;
a human distance acquisition module, for determining from x and y the vertical deflection angle α and horizontal deflection angle β at which the laser emitter illuminates a key human joint point, and starting the laser emitter to measure the distance Z' from the human body to the robot;
a camera coordinate acquisition module, for computing from x, y and Z' the coordinates (X, Y, Z) of the joint point in the camera coordinate system;
a person spacing acquisition module, for computing the distance L between the corresponding joint points of any two persons from their camera-coordinate positions (X1, Y1, Z1) and (X2, Y2, Z2);
a walking direction determination module, for determining the walking direction from the distances L between the corresponding joint points of any two persons.
In one embodiment, the physical coordinate acquisition module is configured to:
convert the image coordinates in the image coordinate system to the physical coordinates of the image physical coordinate system, with the conversion relationship:
x = (u − u0)·dx, y = (v − v0)·dy
where (u, v) are the image coordinates in the image coordinate system, (x, y) the coordinates in the image physical coordinate system, u0 and v0 the coordinates of the camera optical axis in the image coordinate system, and dx and dy the physical size of one pixel along the x-axis and y-axis.
In one embodiment, the human distance acquisition module is configured to:
determine the vertical deflection angle α and horizontal deflection angle β at which the laser emitter illuminates the key human joint point, and start the laser emitter to measure the distance Z' from the human body to the robot, where:
du is the offset of the camera coordinate system relative to the image physical coordinate system, dv the offset of the laser coordinate system relative to the image physical coordinate system, and d the horizontal distance from the laser emitter to the camera imaging plane.
In one embodiment, the camera coordinate acquisition module is configured to:
compute from x, y, d and Z' the coordinates (X, Y, Z) of each human joint point in the camera coordinate system, and compute from (X, Y, Z) the distance L between the corresponding joint points of any two persons, where the relationship between the camera coordinates (X, Y, Z) and the image physical coordinates (x, y) is:
X = x·Z/f, Y = y·Z/f
where f is the camera focal length and Z = Z' + d.
The present invention also provides a computer device including a memory and a processor. Computer-readable instructions are stored in the memory; when executed by the processor, they cause the processor to perform the steps of the robot walking obstacle detection method of any of the above embodiments.
The present invention also provides a storage medium storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the robot walking obstacle detection method of any of the above embodiments.
With the above robot walking obstacle detection method, apparatus, computer device and storage medium, an image is acquired by the camera and an image-based human detection algorithm yields the image coordinates (u, v) of each human joint point in the image coordinate system; those coordinates are converted to the physical coordinates (x, y) of the image physical coordinate system; from x and y the vertical deflection angle α and horizontal deflection angle β at which the laser emitter illuminates the key human joint point are determined, and the laser emitter measures the distance Z' from the human body to the robot; from x, y and Z' the camera-coordinate position (X, Y, Z) of each joint point is computed; from the positions (X1, Y1, Z1) and (X2, Y2, Z2) of the corresponding joint points of any two persons the distance L between them is computed; and the walking direction is determined from L. By combining image recognition with laser ranging to determine the actual position of each human body (its coordinates in the camera coordinate system), the real-time position of each person is obtained accurately, the spacing L between any two persons is determined, and the walking path is judged from L in real time, achieving accurate pedestrian avoidance detection.
Additional aspects and advantages of the present invention will be set forth in part in the following description; they will become apparent from that description or be learned through practice of the invention.
Description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken with the accompanying drawings, in which:
Fig. 1 is a schematic diagram of the internal structure of the computer device of one embodiment;
Fig. 2 is a flow diagram of the robot walking obstacle detection method of one embodiment;
Fig. 3 is a flow diagram of human detection in one embodiment;
Fig. 4 is an example of human detection in one embodiment;
Fig. 5 is a module diagram of the robot walking obstacle detection apparatus of one embodiment.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, where throughout the same or similar reference numerals denote the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present invention, and are not to be construed as limiting the claims.
Those skilled in the art will appreciate that, unless expressly stated otherwise, the singular forms "a", "an", "said" and "the" used herein may also include the plural forms. It should be further understood that the word "comprising" used in this specification refers to the presence of the stated features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
Those skilled in the art will appreciate that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It should also be understood that terms such as those defined in general dictionaries should be understood to have meanings consistent with their meaning in the context of the prior art and, unless specifically defined as here, will not be interpreted in an idealized or overly formal sense.
Fig. 1 is a schematic diagram of the internal structure of the computer device of one embodiment. As shown in Fig. 1, the computer device includes a processor, a non-volatile storage medium, a memory and a network interface connected through a system bus. The non-volatile storage medium stores an operating system, a database and computer-readable instructions; the database may store control information sequences. When the computer-readable instructions are executed by the processor, the processor implements a robot walking obstacle detection method. The processor provides the computing and control capability that supports the operation of the entire computer device. Computer-readable instructions may also be stored in the memory; when executed by the processor, they cause the processor to perform the robot walking obstacle detection method. The network interface is used to connect and communicate with a terminal. Those skilled in the art will understand that the structure shown in Fig. 1 is only a block diagram of the parts relevant to this application and does not limit the computer device to which this application is applied; a specific computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
The robot walking obstacle detection method described below can be applied to intelligent robots, such as customer service robots, cleaning robots and so on.
Fig. 2 is a flow diagram of the robot walking obstacle detection method of one embodiment.
A robot walking obstacle detection method: the robot has a camera, a laser emitter and a laser receiver, and the method includes the following steps:
Step S10: acquire an image with the camera, and apply an image-based human detection algorithm to obtain the image coordinates (u, v) of each human joint point in the image coordinate system.
An image-based human detection algorithm is a technique that identifies human bodies through image recognition. There are three families of such methods: those based on global features, those based on body parts, and those based on stereo vision.
Methods based on global features
These are currently the more mainstream human detection methods. They describe the human body mainly with static image features such as edge features, shape features, statistical features or transform features; representative features include Haar wavelet features, HOG features, Edgelet features, Shapelet features and contour template features.
(1) Methods based on Haar wavelet features
Papageorgiou and Poggio first proposed the concept of Haar wavelets; Viola et al. introduced the integral image, which accelerated the extraction of Haar features, and applied the method to human detection, building a detection system that combines the motion and appearance patterns of the human body. It achieved good detection results and laid a foundation for the development of human detection techniques.
(2) Methods based on HOG features
The histogram of oriented gradients (HOG) has been used for human detection, achieving a detection success rate of nearly 100% on the MIT pedestrian database, and about 90% on the INRIA pedestrian database, which includes variations in viewpoint, illumination and background.
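As a rough illustration of what a HOG-style descriptor computes, the sketch below builds an orientation histogram of gradient magnitudes per cell. It is a simplified, single-cell-normalised version for illustration only, not the full Dalal-Triggs pipeline (no block normalisation or sliding-window classifier) behind the detection rates cited above.

```python
import numpy as np

def hog_cell_histogram(patch, n_bins=9):
    """Orientation histogram of gradient magnitudes for one cell."""
    gy, gx = np.gradient(patch.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned, [0, 180)
    bin_idx = (orientation / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    for b in range(n_bins):
        hist[b] = magnitude[bin_idx == b].sum()
    return hist / (np.linalg.norm(hist) + 1e-9)  # L2 normalisation of the cell

def hog_feature(image, cell=8):
    """Concatenate per-cell histograms over an image whose sides divide by `cell`."""
    h, w = image.shape
    feats = [hog_cell_histogram(image[i:i + cell, j:j + cell])
             for i in range(0, h - cell + 1, cell)
             for j in range(0, w - cell + 1, cell)]
    return np.concatenate(feats)

img = np.zeros((16, 16))
img[:, 4:] = 1.0            # vertical edge inside the first cell column
f = hog_feature(img)
print(f.shape)              # (36,) = 4 cells x 9 bins
```

A vertical edge produces purely horizontal gradients, so almost all of the magnitude falls into the 0-degree bin of the cells containing the edge.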
(3) Methods based on Edgelet features
"Edgelet" features, i.e. short straight-line or curve segments, applied to human detection in single images of complex scenes, achieved a detection rate of about 92% on the CAVIAR database.
(4) Methods based on Shapelet features
Machine learning can be used to derive features automatically, namely Shapelet features. The algorithm first extracts gradient information in different directions from the training samples, then trains with the AdaBoost algorithm to obtain the Shapelet features.
(5) Methods based on contour templates
This method builds templates from information such as the edge contour, texture and grey levels of the target object in the image, and detects the target by template matching.
Methods based on body parts
The basic idea of these methods is to divide the human body into several component parts, detect each part separately in the image, then integrate the detection results under certain constraints and finally decide whether a human body is present.
Methods based on stereo vision
These methods acquire images with two or more cameras, then analyse the three-dimensional information of the target in the images to identify human bodies.
In this embodiment, the image-based human detection algorithm is not specifically limited, as long as it can recognise the human body and its joint points. Human joint points are the key nodes of key body parts such as the head, neck, shoulders, upper arms, elbows, wrists, waist, chest, thighs, knees, calves and ankles; each person has multiple joint points.
In this embodiment, a part-based method may be used for human detection. OpenPose is an open-source library written in C++ using OpenCV and Caffe for real-time, multithreaded multi-person keypoint detection. Its human detection mainly consists of two parts: detection of the joint points, and detection of the associations between joint points. Joint detection uses the Convolutional Pose Machine: by adding convolutional neural networks and using the texture information, spatial structure information and centre maps of the image, it infers the positions of human keypoints through supervised learning. In parallel, another convolutional neural network detects the associations between joint points, i.e. which joint points should be connected together.
As shown in Fig. 3, when an image containing a human body is collected, the whole image is used as the input of the detection network and is simultaneously fed to two parallel sub-networks that detect the confidence of the joint points and the degree of association between joint points. The joint confidence maps and joint association maps are then grouped together with the input image as the input of the next stage; in the actual detection process, six stages are cascaded to form the detection network.
As shown in Fig. 4, panel (a) is the complete input image, panel (b) the joint confidence maps, panel (c) the joint association maps, panel (d) the result after matching candidate joint points by a sparse method, and panel (e) the final result after connecting the joint points of each body.
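The cascade described above can be sketched schematically. The `stage` function below is only a stand-in for a real convolutional stage: it produces outputs with the right shapes (joint confidence maps plus part-affinity fields), with the 18-joint/19-limb sizes of the OpenPose COCO model assumed for concreteness; the values are random placeholders, not learned outputs.

```python
import numpy as np

H, W, J, LIMBS, STAGES = 46, 46, 18, 19, 6

def stage(features):
    """Stand-in for one convolutional stage (ignores its input here).
    A real stage is a small CNN producing J confidence maps and
    2 channels per limb of part-affinity fields."""
    rng = np.random.default_rng(0)
    conf = rng.random((J, H, W))
    paf = rng.random((2 * LIMBS, H, W))
    return conf, paf

image_features = np.zeros((32, H, W))   # backbone output feeding every stage
x = image_features
for _ in range(STAGES):                 # six cascaded stages, as in the text
    conf, paf = stage(x)
    # each later stage sees the image features plus the previous stage's maps
    x = np.concatenate([image_features, conf, paf], axis=0)

print(conf.shape, paf.shape)            # (18, 46, 46) (38, 46, 46)
```

The point of the sketch is the data flow: confidence and association maps are concatenated with the image features and fed forward as the next stage's input.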
Multiple people may be recognised in an image, so in this embodiment the camera acquires an image and the image-based human detection algorithm yields a series of image coordinates {(u_k1, v_k1), ..., (u_ki, v_ki), ..., (u_kn, v_kn)}, where k denotes the k-th person and i the i-th joint point. For ease of understanding, some of the calculations below use only one joint point as an example.
The image coordinate system is the two-dimensional coordinate system of the camera imaging plane; its origin is usually at the top-left corner of the picture, the width (horizontal direction) of the picture is the x-axis and the height (vertical direction) the y-axis, with pixels as the unit. By recognising the position of the human body in the picture, the image coordinates (u, v) of each body node are obtained, where u is the u-th pixel along the x-axis and v the v-th pixel along the y-axis.
Step S20: convert the image coordinates (u, v) in the image coordinate system to the physical coordinates (x, y) of the image physical coordinate system. Since the image coordinate system uses pixels as its unit, it is inconvenient for calculation; the image coordinates (u, v) are therefore converted to the physical coordinates (x, y) of the image physical coordinate system to facilitate calculation. The image physical coordinate system is the image coordinate system expressed in physical units (such as metres); it can be understood as a translated version of the image coordinate system, and is likewise two-dimensional.
In one embodiment, the detailed process of step S20 includes:
converting the image coordinates in the image coordinate system to the physical coordinates of the image physical coordinate system, with the conversion relationship (originally written as a matrix equation):
x = (u − u0)·dx, y = (v − v0)·dy
where (u, v) are the image coordinates in the image coordinate system, (x, y) the coordinates in the image physical coordinate system, u0 and v0 the coordinates of the camera optical axis in the image coordinate system, and dx and dy the physical size of one pixel along the x-axis and y-axis.
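As a minimal sketch of this conversion, assuming the standard relation x = (u − u0)·dx, y = (v − v0)·dy implied by the variable definitions (the sensor size and pixel pitch in the example are made-up illustration values):

```python
def pixel_to_physical(u, v, u0, v0, dx, dy):
    """Convert pixel coordinates (u, v) to image-plane physical coordinates (x, y).
    u0, v0: principal point (optical axis) in pixels; dx, dy: size of one pixel."""
    return (u - u0) * dx, (v - v0) * dy

# example: 640x480 sensor, principal point at the centre, 6 um square pixels
x, y = pixel_to_physical(400, 300, 320, 240, 6e-6, 6e-6)
print(x, y)   # approximately 0.00048 and 0.00036 (metres on the image plane)
```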
Step S30: determine from x and y the vertical deflection angle α and horizontal deflection angle β at which the laser emitter illuminates the key human joint point, and start the laser emitter to measure the distance Z' from the human body to the robot.
The key human joint point here is at least one of the joint points above; for example, it may be all of the joint points, or several important ones such as the head, shoulders, elbows, wrists, knees and ankles. In some embodiments, for convenience, only one important joint point, such as the head, may be chosen as the key joint point. Obtaining the distance from the key joint point to the robot amounts to obtaining the distance from the human body to the robot.
In one embodiment, the detailed process of step S30 includes:
determining the vertical deflection angle α and horizontal deflection angle β at which the laser emitter illuminates the key human joint point, and starting the laser emitter to measure the distance Z' from the human body to the robot, where:
du is the offset of the camera coordinate system relative to the image physical coordinate system, dv the offset of the laser coordinate system relative to the image physical coordinate system, and d the horizontal distance from the laser emitter to the camera imaging plane. The camera coordinate system is three-dimensional; its origin is the optical centre of the camera, its X- and Y-axes are parallel to the x- and y-axes of the image coordinate system, and its Z-axis is the optical axis of the camera, perpendicular to the imaging plane.
The vertical deflection angle α and horizontal deflection angle β are defined with respect to the camera coordinate system. After α and β are computed, the laser emitter is adjusted from its current attitude to the corresponding position according to α and β so as to aim at the key human joint point, and then emits laser light to measure the distance Z' from the human body to the robot.
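The patent's own formula for α and β appears as an image that is not present in this text, so the sketch below is only a plausible reconstruction under a simple aiming model: the laser is steered toward the image-plane point (x, y), corrected by the axis offsets du and dv, with d the laser-to-imaging-plane distance. The function name and the exact geometry are assumptions, not the patent's equation.

```python
import math

def laser_deflection(x, y, d_u, d_v, d):
    """Plausible reconstruction of the aiming angles: steer the laser toward the
    image-plane point (x, y), offset by d_u, d_v, at standoff distance d."""
    beta = math.atan2(x - d_u, d)    # horizontal deflection angle
    alpha = math.atan2(y - d_v, d)   # vertical deflection angle
    return alpha, beta

# example values (made up): a point 0.3 mm above and 0.5 mm right of centre,
# with a 2 cm laser-to-imaging-plane distance and zero axis offsets
alpha, beta = laser_deflection(x=0.0005, y=0.0003, d_u=0.0, d_v=0.0, d=0.02)
print(math.degrees(alpha), math.degrees(beta))
```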
Through the above steps, x, y and Z' of each human joint point are obtained, so that the specific position (X, Y, Z) of each joint point can be determined in the subsequent steps, along with the spacing between any two persons.
Step S40: compute from x, y and Z' the coordinates (X, Y, Z) of the joint point in the camera coordinate system.
The position of the imaging plane of the camera coordinate system and the origin of the laser coordinate system differ by d along the Z-axis. Ignoring the small positional deviation between the laser emitter and the camera (the two are usually close together in space), the laser measurement equals the actual world-coordinate value corresponding to the camera coordinate system, so the spacing of two people in true world coordinates can be found from the proportional relationship between camera coordinate values and true world coordinate values.
In one embodiment, the detailed process of step S40 includes:
computing from x, y, d and Z' the coordinates (X, Y, Z) of each human joint point in the camera coordinate system, and computing from (X, Y, Z) the distance L between the corresponding joint points of any two persons, where the relationship between the camera coordinates (X, Y, Z) and the image physical coordinates (x, y) is:
X = x·Z/f, Y = y·Z/f
where f is the camera focal length and Z = Z' + d.
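Steps S40 and S50 can be sketched together, assuming the standard pinhole relations X = x·Z/f, Y = y·Z/f with Z = Z' + d, consistent with the definitions above. The focal length, offset and coordinate values in the example are made up for illustration.

```python
import math

def camera_coords(x, y, z_laser, f, d):
    """Pinhole back-projection by similar triangles: X/Z = x/f, Y/Z = y/f,
    with Z = Z' + d (laser range plus the laser-to-imaging-plane offset)."""
    Z = z_laser + d
    return (x * Z / f, y * Z / f, Z)

def joint_distance(p1, p2):
    """Euclidean distance L between two corresponding joint points."""
    return math.dist(p1, p2)

f, d = 0.004, 0.02                        # 4 mm focal length, 2 cm offset (assumed)
P1 = camera_coords(0.0004, 0.0003, 2.0, f, d)
P2 = camera_coords(-0.0006, 0.0003, 2.5, f, d)
L = joint_distance(P1, P2)
print(round(L, 3))   # approximately 0.767 metres between the two joint points
```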
Step S50: from the camera-coordinate positions (X1, Y1, Z1) and (X2, Y2, Z2) of the corresponding joint points of any two persons, compute the distance L between those joint points. The spacing L of two people in true world coordinates can be found from the proportional relationship between camera coordinate values and true world coordinate values.
Step S60: determine the walking direction from the distances L between the corresponding joint points of any two persons. Once the position of each person, and hence the distance L between any two persons, has been determined, the robot can decide its walking strategy. In deciding the strategy it can also take into account factors such as the distance between the human body and the robot, the walking speed of the human, and the walking speed of the robot.
Assuming that human joint points human body image coordinate is { (xk1,yk1),...,(xki,yki),...,(xkn,ykn), respectively The spacing for calculating wherein arbitrary two people, finally chooses wherein best spacing distance and passes through.
Assume the gap between two people P1 and P2 is to be calculated, where:
P1={ (x11,y11),...,(x1i,y1i),...,(x1n,y1n)}
P2={ (x21,y21),…,(x2i,y2i),…,(x2n,y2n)}
Since the distance between two people is proportional to their separation in the X direction, only the X coordinates need be considered, so all detected points of the two people can be placed in a single queue:
P={ x11,x21,…,x1i,x2i,…,x1n,x2n}
Using a quick-sort algorithm, the nearest or intersecting points between the two discrete point groups can be found quickly. The distances between these points can then be calculated from the aforementioned proportional relationship between camera coordinate values and true world coordinate values, and by continuously checking the distance between every two people, the best passing gap can be found in real time.
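A minimal sketch of this gap search (how joints are grouped per person and the use of min/max extents are illustrative assumptions, not the patent's exact procedure):

```python
def best_gap(person_xs):
    """Find the widest horizontal gap between adjacent people.

    person_xs: one list of joint X coordinates (camera-coordinate metres)
    per detected person. Returns (gap_width, index_of_left_person);
    the index is None if no positive gap exists.
    """
    # Reduce each person to the X extent [min, max] covered by their joints,
    # then sort the extents left to right (the "queue" described above).
    extents = sorted((min(xs), max(xs)) for xs in person_xs)
    best = (0.0, None)
    for i in range(len(extents) - 1):
        gap = extents[i + 1][0] - extents[i][1]  # next left edge - right edge
        if gap > best[0]:
            best = (gap, i)
    return best
```

With three people occupying X ranges [-1.2, -0.8], [0.0, 0.3] and [1.5, 1.9], the widest gap (about 1.2 m) lies between the second and third person, so the robot would steer for that opening.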
Fig. 5 is a block diagram of a robot walking obstacle detection apparatus according to one embodiment. Corresponding to the robot walking obstacle detection method above, the present invention also provides a robot walking obstacle detection apparatus. The robot has a camera, a laser emitter, and a laser receiver, and the apparatus includes: an image coordinate acquisition module 100, a physical coordinate acquisition module 200, a human-body distance acquisition module 300, a camera coordinate acquisition module 400, a crowd spacing acquisition module 500, and a direction-of-travel determination module 600.
The image coordinate acquisition module 100 acquires an image through the camera and applies an image human-detection algorithm to obtain the image coordinates (u, v) of each human joint point in the image coordinate system. The physical coordinate acquisition module converts the image coordinates (u, v) in the image coordinate system into the image physical coordinates (x, y) in the image physical coordinate system. The human-body distance acquisition module determines, from x and y, the vertical deflection angle α and the horizontal deflection angle β at which the laser emitter irradiates a key human joint point, and starts the laser emitter to emit laser light to measure the distance Z' from the person to the robot. The camera coordinate acquisition module calculates the coordinates (X, Y, Z) of each human joint point in the camera coordinate system from x, y and Z'. The crowd spacing acquisition module calculates the distance L between the corresponding joint points of any two people from their camera-coordinate-system coordinates (X1, Y1, Z1) and (X2, Y2, Z2). The direction-of-travel determination module determines the direction of travel according to the distance L between the corresponding joint points of any two people.
The image coordinate acquisition module 100 acquires an image through the camera and applies an image human-detection algorithm to obtain the image coordinates (u, v) of each human joint point in the image coordinate system.
An image human-detection algorithm is a technique for identifying human bodies through image recognition. There are three categories of such methods: methods based on global features, methods based on body parts, and methods based on stereo vision.
Methods based on global features

Such methods are currently the mainstream of human detection. They describe the human body using static image features such as edge features, shape features, statistical features, or transform features; representative features include Haar wavelet features, HOG features, Edgelet features, Shapelet features, and contour template features.
(1) Methods based on Haar wavelet features

Papageorgiou and Poggio first proposed the concept of Haar wavelets; Viola et al. introduced the integral image, which greatly accelerated the extraction of Haar features, and applied the method to human detection, combining human motion and appearance patterns to build a human detection system. It achieved good detection results and laid the foundation for the development of human detection techniques.
(2) Methods based on HOG features

The Histogram of Oriented Gradients (HOG) was applied to human detection and achieved a detection success rate of nearly 100% on the MIT human database; on the INRIA human database, which includes variations in viewpoint, illumination, and background, it still achieved a detection success rate of about 90%.
(3) Methods based on Edgelet features

The "edgelet" feature, i.e., short straight-line or curve segments, was applied to human detection in single images of complex scenes and achieved a detection rate of about 92% on the CAVIAR database.
(4) Methods based on Shapelet features

Features can also be derived automatically by machine learning, yielding Shapelet features. The algorithm first extracts gradient information in different directions from the training samples, then trains with AdaBoost to obtain the Shapelet features.
(5) Methods based on contour templates

This method builds templates from information such as the edge contour, texture, and gray level of the target object in the image, and detects the target by template matching.
Methods based on body parts

The basic idea of such methods is to divide the human body into several parts, detect each part in the image separately, and finally integrate the detection results according to certain constraints to decide whether a human body is present.
Methods based on stereo vision

Such methods acquire images with two or more cameras and then analyze the three-dimensional information of targets in the images to identify human bodies.
In this embodiment, the image human-detection algorithm is not specifically limited, as long as it can recognize human bodies and their joint points. Human joint points are the key nodes of key body parts such as the head, neck, shoulders, upper arms, elbows, wrists, waist, chest, thighs, knees, calves, and ankles; each person has multiple joint points.
In this embodiment, a body-part-based method may be used for human detection. OpenPose is an open-source library written in C++ using OpenCV and Caffe for real-time, multi-threaded, multi-person keypoint detection. Human detection mainly comprises two parts: joint-point detection and joint-point association detection. Joint-point detection uses the Convolutional Pose Machine. By adding convolutional neural networks, a pose machine can infer the positions of human keypoints through supervised learning, using the texture information, spatial structure information, and center map of the image. In parallel, another convolutional neural network detects the associations between joint points, i.e., which joint points should be connected together.
As shown in Fig. 3, when an image containing human bodies is collected, the whole image is used as the input to the detection network and is fed into two parallel sub-networks, which simultaneously detect the joint-point confidence and the joint-point affinities. The joint-point confidence maps and joint-point affinity maps are then concatenated with the input image as the input to the next stage; in the actual detection process, six such stages are cascaded as the detection network.
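The data flow of this cascade can be sketched at the shape level as follows (the resolution, joint count, and limb count are illustrative assumptions; the real network replaces the placeholder stage function with learned convolutions):

```python
import numpy as np

H, W = 368, 368             # input resolution (illustrative)
N_JOINTS, N_LIMBS = 18, 19  # keypoints and limb connections (illustrative)

def stage(features):
    """Placeholder for one detection stage: two parallel branches emit
    joint-point confidence maps and joint-point affinity maps."""
    conf = np.zeros((H, W, N_JOINTS))    # one confidence map per joint
    paf = np.zeros((H, W, 2 * N_LIMBS))  # 2-D affinity field per limb
    return conf, paf

image = np.zeros((H, W, 3))
x = image
for _ in range(6):                       # six cascaded stages
    conf, paf = stage(x)
    # confidence maps, affinity maps and the image together feed the next stage
    x = np.concatenate([image, conf, paf], axis=-1)
```

After the last stage, x carries 3 + 18 + 38 = 59 channels per pixel, matching the description that each stage's input groups the two feature maps with the original image.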
As shown in Fig. 4, panel (a) is the complete input image, panel (b) shows the joint-point confidence maps, panel (c) shows the joint-point affinity maps, panel (d) shows the result after matching candidate joint points by a sparse method, and panel (e) shows the final result after the joint points of each body have been connected.
Multiple people may be recognized in an image, so in this embodiment the image is acquired by the camera and the image human-detection algorithm yields a series of image coordinates {(uk1, vk1), ..., (uki, vki), ..., (ukn, vkn)}, where k denotes the k-th person and i denotes the i-th human joint point. For ease of understanding, some of the calculations below use only a single joint point as an example.
The image coordinate system is a two-dimensional coordinate system in the camera's imaging plane. Its origin is usually at the upper-left corner of the picture, the x-axis runs along the picture width (horizontal), the y-axis along the picture height (vertical), and the unit is the pixel. By locating the human body in the picture, the image coordinates (u, v) of each human joint point in the image coordinate system are obtained, where u denotes the u-th pixel along the x-axis and v the v-th pixel along the y-axis.
The physical coordinate acquisition module 200 converts the image coordinates (u, v) in the image coordinate system into the image physical coordinates (x, y) in the image physical coordinate system. Because the image coordinate system uses the pixel as its unit, it is inconvenient for calculation, so the image coordinates (u, v) must be converted into the image physical coordinates (x, y). The image physical coordinate system is the image coordinate system expressed in physical units (e.g., metres); it can be understood as a translated version of the image coordinate system.
In one embodiment, the physical coordinate acquisition module 200 is specifically configured to:
convert the image coordinates in the image coordinate system into the image physical coordinates in the image physical coordinate system, the conversion relationship being:

u = x/dx + u0, v = y/dy + v0

where (u, v) are the image coordinates in the image coordinate system, (x, y) are the image physical coordinates in the image physical coordinate system, u0 and v0 are the coordinates of the camera optical axis in the image coordinate system, and dx and dy are the physical sizes of each pixel along the x-axis and y-axis.
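A sketch of this conversion (the calibration values in the example are invented; in practice u0, v0, dx and dy come from camera calibration):

```python
def pixel_to_physical(u, v, u0, v0, dx, dy):
    """Convert image coordinates (u, v), in pixels, to image physical
    coordinates (x, y), in metres, by inverting u = x/dx + u0, v = y/dy + v0."""
    x = (u - u0) * dx
    y = (v - v0) * dy
    return x, y

# e.g. principal point (320, 240) and 5-micrometre square pixels:
x, y = pixel_to_physical(400, 300, 320, 240, 5e-6, 5e-6)
```

Here a joint detected 80 pixels right of and 60 pixels below the principal point maps to roughly (0.4 mm, 0.3 mm) on the imaging plane.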
The human-body distance acquisition module 300 determines, from x and y, the vertical deflection angle α and the horizontal deflection angle β at which the laser emitter irradiates a key human joint point, and starts the laser emitter to emit laser light to measure the distance Z' from the person to the robot.
A key human joint point here is at least one of the human joint points above; for example, it may be all of the joint points, or several important ones such as the head, shoulders, elbows, wrists, knees, and ankles. In some embodiments, for convenience, only one important joint point, such as the head, is chosen as the key joint point. Obtaining the distance from the key joint point to the robot gives the distance from the person to the robot.
In one embodiment, the human-body distance acquisition module 300 is specifically configured to:
determine the vertical deflection angle α and the horizontal deflection angle β at which the laser emitter irradiates the key human joint point, and start the laser emitter to emit laser light to measure the distance Z' from the person to the robot, where:

du is the offset of the camera coordinate system relative to the image physical coordinate system, dv is the offset of the laser coordinate system relative to the image physical coordinate system, and d is the horizontal distance from the laser emitter to the camera imaging plane. The camera coordinate system is a three-dimensional coordinate system whose origin is the optical center of the camera; its X-axis and Y-axis are parallel to the x-axis and y-axis of the image coordinate system, and its z-axis is the optical axis of the camera, perpendicular to the camera imaging plane.
The vertical deflection angle α and the horizontal deflection angle β are calculated relative to the camera coordinate system. After α and β are calculated, the laser emitter is adjusted from its current attitude according to α and β so as to aim at the key human joint point, and then emits laser light to measure the distance Z' from the person to the robot.
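Assuming the arctangent geometry suggested by the definitions of du, dv and d above (this exact form is an assumption, since the patent gives the deflection-angle formula only as an image), the aiming step might look like:

```python
import math

def aim_angles(x, y, du, dv, d):
    """Deflection angles for pointing the laser at the image-physical point
    (x, y). NOTE: the arctan form below is an assumption, not the patent's
    verbatim formula; du, dv and d are as defined in the text."""
    beta = math.atan2(x - du, d)   # horizontal deflection angle
    alpha = math.atan2(y - dv, d)  # vertical deflection angle
    return alpha, beta
```

Under this assumption, a target offset 0.1 m to the right at d = 0.1 m would require a 45-degree horizontal deflection.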
The camera coordinate acquisition module 400 calculates the coordinates (X, Y, Z) of each human joint point in the camera coordinate system from x, y and Z'.
The imaging plane of the camera coordinate system and the coordinate origin of the laser coordinate system differ by a distance d along the z-axis direction. Ignoring the small positional deviation between the laser emitter and the camera (the two are usually mounted close together in space), the laser measurement can be taken as the true world-coordinate value corresponding to the camera coordinate system, so the spacing between two people in the true world coordinate system can be obtained from the proportional relationship between camera coordinate values and true world coordinate values.
In one embodiment, the camera coordinate acquisition module 400 is specifically configured to:
calculate the coordinates (X, Y, Z) of each human joint point in the camera coordinate system from x, y, d and Z', and calculate the distance L between the corresponding joint points of any two people from (X, Y, Z), where the relationship between the camera-coordinate-system coordinates (X, Y, Z) and the image-physical-coordinate-system coordinates (x, y) is:

X = x·Z/f, Y = y·Z/f

where f is the focal length of the camera and Z = Z' + d.
The crowd spacing acquisition module 500 calculates the distance L between the corresponding joint points of any two people from their camera-coordinate-system coordinates (X1, Y1, Z1) and (X2, Y2, Z2). The spacing L of the two people in the true world coordinate system can be obtained from the proportional relationship between camera coordinate values and true world coordinate values.
The direction-of-travel determination module 600 determines the direction of travel according to the distance L between the corresponding joint points of any two people. Once each person's position, and hence the distance L between any two people, has been determined, the robot can decide its travel strategy. When deciding the travel strategy, at least one factor such as the distance between a person and the robot, the person's walking speed, and the robot's walking speed may be taken into account.
Assume the image coordinates of the human joint points are {(xk1, yk1), ..., (xki, yki), ..., (xkn, ykn)}. The spacing between every pair of people is calculated, and the best gap is finally selected to pass through.
Assume the gap between two people P1 and P2 is to be calculated, where:
P1={ (x11,y11),...,(x1i,y1i),...,(x1n,y1n)}
P2={ (x21,y21),…,(x2i,y2i),…,(x2n,y2n)}
Since the distance between two people is proportional to their separation in the X direction, only the X coordinates need be considered, so all detected points of the two people can be placed in a single queue:
P={ x11,x21,…,x1i,x2i,…,x1n,x2n}
The direction-of-travel determination module 600 uses a quick-sort algorithm to quickly find the nearest or intersecting points between the two discrete point groups. The distances between these points can then be calculated from the aforementioned proportional relationship between camera coordinate values and true world coordinate values, and by continuously checking the distance between every two people, the best passing gap can be found in real time.
The present invention also provides a computer device including a memory and a processor. The memory stores computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the robot walking obstacle detection method described in any of the embodiments above.
The present invention also provides a storage medium storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the robot walking obstacle detection method described in any of the embodiments above.
In the robot walking obstacle detection method, apparatus, computer device, and storage medium above, an image is acquired through the camera and an image human-detection algorithm obtains the image coordinates (u, v) of each human joint point in the image coordinate system; the image coordinates (u, v) are converted into the image physical coordinates (x, y) in the image physical coordinate system; the vertical deflection angle α and horizontal deflection angle β at which the laser emitter irradiates a key human joint point are determined from x and y, and the laser emitter is started to emit laser light to measure the distance Z' from the person to the robot; the coordinates (X, Y, Z) of each human joint point in the camera coordinate system are calculated from x, y and Z'; the distance L between the corresponding joint points of any two people is calculated from their camera-coordinate-system coordinates (X1, Y1, Z1) and (X2, Y2, Z2); and the direction of travel is determined according to L. By combining image recognition with laser ranging to determine the actual human-body coordinates (in the camera coordinate system), the real-time position of each person can be obtained accurately, so that the spacing L between any two people is determined and the walking path is judged in real time from L, achieving accurate pedestrian avoidance detection.
A person of ordinary skill in the art will understand that all or part of the flows in the above method embodiments can be implemented by a computer program instructing the relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, may include the flows of the embodiments of the methods above. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).
It should be understood that although the steps in the flowcharts of the drawings are shown sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on their execution, and they may be executed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps, or with sub-steps or stages of other steps.
The above are only some embodiments of the present invention. It should be noted that, for those of ordinary skill in the art, various improvements and modifications may be made without departing from the principle of the present invention, and these improvements and modifications shall also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A robot walking obstacle detection method, wherein the robot has a camera, a laser emitter, and a laser receiver, the method comprising the following steps:
Step S10: acquiring an image through the camera, and applying an image human-detection algorithm to obtain image coordinates (u, v) of human joint points in an image coordinate system;
Step S20: converting the image coordinates (u, v) in the image coordinate system into image physical coordinates (x, y) in an image physical coordinate system;
Step S30: determining, from x and y, a vertical deflection angle α and a horizontal deflection angle β at which the laser emitter irradiates a key human joint point, and starting the laser emitter to emit laser light to measure a distance Z' from the person to the robot;
Step S40: calculating coordinates (X, Y, Z) of the human joint points in a camera coordinate system from x, y and Z';
Step S50: calculating a distance L between the corresponding human joint points of any two people from their camera-coordinate-system coordinates (X1, Y1, Z1) and (X2, Y2, Z2);
Step S60: determining a direction of travel according to the distance L between the corresponding human joint points of any two people.
2. The robot walking obstacle detection method according to claim 1, wherein step S20 comprises:

converting the image coordinates in the image coordinate system into the image physical coordinates in the image physical coordinate system, the conversion relationship being:

u = x/dx + u0, v = y/dy + v0

wherein (u, v) are the image coordinates in the image coordinate system, (x, y) are the image physical coordinates in the image physical coordinate system, u0 and v0 are the coordinates of the camera optical axis in the image coordinate system, and dx and dy are the physical sizes of each pixel along the x-axis and y-axis.
3. The robot walking obstacle detection method according to claim 2, wherein step S30 comprises:

determining the vertical deflection angle α and the horizontal deflection angle β at which the laser emitter irradiates the key human joint point, and starting the laser emitter to emit laser light to measure the distance Z' from the person to the robot, wherein:

du is the offset of the camera coordinate system relative to the image physical coordinate system, dv is the offset of the laser coordinate system relative to the image physical coordinate system, and d is the horizontal distance from the laser emitter to the camera imaging plane.
4. The robot walking obstacle detection method according to claim 3, wherein step S40 comprises:

calculating the coordinates (X, Y, Z) of each human joint point in the camera coordinate system from x, y, d and Z', and calculating the distance L between the corresponding human joint points of any two people from (X, Y, Z), wherein the relationship between the camera-coordinate-system coordinates (X, Y, Z) and the image-physical-coordinate-system coordinates (x, y) is:

X = x·Z/f, Y = y·Z/f

wherein f is the focal length of the camera and Z = Z' + d.
5. A robot walking obstacle detection apparatus, wherein the robot has a camera, a laser emitter, and a laser receiver, the apparatus comprising:
an image coordinate acquisition module, configured to acquire an image through the camera and apply an image human-detection algorithm to obtain image coordinates (u, v) of human joint points in an image coordinate system;
a physical coordinate acquisition module, configured to convert the image coordinates (u, v) in the image coordinate system into image physical coordinates (x, y) in an image physical coordinate system;
a human-body distance acquisition module, configured to determine, from x and y, a vertical deflection angle α and a horizontal deflection angle β at which the laser emitter irradiates a key human joint point, and to start the laser emitter to emit laser light to measure a distance Z' from the person to the robot;
a camera coordinate acquisition module, configured to calculate coordinates (X, Y, Z) of the human joint points in a camera coordinate system from x, y and Z';
a crowd spacing acquisition module, configured to calculate a distance L between the corresponding human joint points of any two people from their camera-coordinate-system coordinates (X1, Y1, Z1) and (X2, Y2, Z2);
a direction-of-travel determination module, configured to determine a direction of travel according to the distance L between the corresponding human joint points of any two people.
6. The robot walking obstacle detection apparatus according to claim 5, wherein the physical coordinate acquisition module is configured to:

convert the image coordinates in the image coordinate system into the image physical coordinates in the image physical coordinate system, the conversion relationship being:

u = x/dx + u0, v = y/dy + v0

wherein (u, v) are the image coordinates in the image coordinate system, (x, y) are the image physical coordinates in the image physical coordinate system, u0 and v0 are the coordinates of the camera optical axis in the image coordinate system, and dx and dy are the physical sizes of each pixel along the x-axis and y-axis.
7. The robot walking obstacle detection apparatus according to claim 6, wherein the human-body distance acquisition module is configured to:

determine the vertical deflection angle α and the horizontal deflection angle β at which the laser emitter irradiates the key human joint point, and start the laser emitter to emit laser light to measure the distance Z' from the person to the robot, wherein:

du is the offset of the camera coordinate system relative to the image physical coordinate system, dv is the offset of the laser coordinate system relative to the image physical coordinate system, and d is the horizontal distance from the laser emitter to the camera imaging plane.
8. The robot walking obstacle detection apparatus according to claim 7, wherein the camera coordinate acquisition module is configured to:

calculate the coordinates (X, Y, Z) of each human joint point in the camera coordinate system from x, y, d and Z', and calculate the distance L between the corresponding human joint points of any two people from (X, Y, Z), wherein the relationship between the camera-coordinate-system coordinates (X, Y, Z) and the image-physical-coordinate-system coordinates (x, y) is:

X = x·Z/f, Y = y·Z/f

wherein f is the focal length of the camera and Z = Z' + d.
9. A computer device comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the robot walking obstacle detection method according to any one of claims 1 to 4.
10. A storage medium storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the robot walking obstacle detection method according to any one of claims 1 to 4.
CN201810314149.5A 2018-04-10 2018-04-10 Robot walking obstacle detection method and device, computer equipment and storage medium Active CN108628306B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810314149.5A CN108628306B (en) 2018-04-10 2018-04-10 Robot walking obstacle detection method and device, computer equipment and storage medium
PCT/CN2018/102854 WO2019196313A1 (en) 2018-04-10 2018-08-29 Robot walking obstacle detection method and apparatus, computer device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810314149.5A CN108628306B (en) 2018-04-10 2018-04-10 Robot walking obstacle detection method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN108628306A true CN108628306A (en) 2018-10-09
CN108628306B CN108628306B (en) 2021-06-25

Family

ID=63704999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810314149.5A Active CN108628306B (en) 2018-04-10 2018-04-10 Robot walking obstacle detection method and device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN108628306B (en)
WO (1) WO2019196313A1 (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111324114A (en) * 2020-01-22 2020-06-23 南宁职业技术学院 Sweeping robot and path planning method thereof
CN112230652A (en) * 2020-09-04 2021-01-15 安克创新科技股份有限公司 Walking robot, method of controlling movement of walking robot, and computer storage medium
CN112116529A (en) * 2020-09-23 2020-12-22 浙江浩腾电子科技股份有限公司 PTZ camera-based conversion method for GPS coordinates and pixel coordinates

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102499692A (en) * 2011-11-30 2012-06-20 沈阳工业大学 Ultrasonic gait detection device and method
US20130166137A1 (en) * 2011-12-23 2013-06-27 Samsung Electronics Co., Ltd. Mobile apparatus and localization method thereof
JP2017021829A (en) * 2016-09-12 2017-01-26 シャープ株式会社 Information processor and control program
CN206224246U (en) * 2016-10-19 2017-06-06 九阳股份有限公司 A kind of robot for realizing target positioning and tracking
CN107423729A (en) * 2017-09-20 2017-12-01 湖南师范大学 A kind of remote class brain three-dimensional gait identifying system and implementation method towards under complicated visual scene

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6250911A (en) * 1985-08-30 1987-03-05 Kajima Corp Self-traveling type robot
CN104375505B (en) * 2014-10-08 2017-02-15 北京联合大学 Robot automatic road finding method based on laser ranging
CN105676845A (en) * 2016-01-19 2016-06-15 中国人民解放军国防科学技术大学 Security service robot and intelligent obstacle avoidance method of robot in complex environment
CN105550667B (en) * 2016-01-25 2019-01-25 同济大学 A kind of framework information motion characteristic extracting method based on stereoscopic camera
CN106291535B (en) * 2016-07-21 2018-12-28 触景无限科技(北京)有限公司 A kind of obstacle detector, robot and obstacle avoidance system
CN106020207B (en) * 2016-07-26 2019-04-16 广东宝乐机器人股份有限公司 Self-movement robot traveling method and device
CN106313046A (en) * 2016-09-27 2017-01-11 成都普诺思博科技有限公司 Multi-level obstacle avoidance system of mobile robot
CN207182092U (en) * 2017-05-09 2018-04-03 叶仕通 A kind of drive device for mobile robot


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109508688A (en) * 2018-11-26 2019-03-22 Ping An Technology (Shenzhen) Co., Ltd. Skeleton-based behavior detection method, terminal device and computer storage medium
CN109508688B (en) * 2018-11-26 2023-10-13 Ping An Technology (Shenzhen) Co., Ltd. Skeleton-based behavior detection method, terminal device and computer storage medium
CN110119682A (en) * 2019-04-04 2019-08-13 北京理工雷科电子信息技术有限公司 Infrared remote sensing image fire point recognition method
CN112753210A (en) * 2020-04-26 2021-05-04 SZ DJI Technology Co., Ltd. Movable platform, control method thereof, and storage medium
CN113268063A (en) * 2021-06-03 2021-08-17 北京京东乾石科技有限公司 Robot control method and device, and non-volatile computer-readable storage medium
CN113506340A (en) * 2021-06-15 2021-10-15 Zhejiang Dahua Technology Co., Ltd. Method, device and computer-readable storage medium for predicting pan-tilt (gimbal) pose
CN113506340B (en) * 2021-06-15 2024-08-20 Zhejiang Dahua Technology Co., Ltd. Method, device and computer-readable storage medium for predicting pan-tilt (gimbal) pose

Also Published As

Publication number Publication date
WO2019196313A1 (en) 2019-10-17
CN108628306B (en) 2021-06-25

Similar Documents

Publication Publication Date Title
CN108628306A (en) Robot ambulation disorder detection method, device, computer equipment and storage medium
Fan et al. Pothole detection based on disparity transformation and road surface modeling
Hill et al. Model-based interpretation of 3D medical images
Ruan et al. Multi-correlation filters with triangle-structure constraints for object tracking
Zou et al. Coslam: Collaborative visual slam in dynamic environments
CN108926355A (en) X-ray system and method for a standing object
CN106052646A (en) Information processing apparatus and information processing method
CN105869166B (en) Human motion recognition method and system based on binocular vision
KR102450931B1 (en) Image registration method and associated model training method, apparatus, and device
CN109086706A (en) Action recognition method based on a segmented human body model for human-robot collaboration
CN104392223A (en) Method for recognizing human postures in two-dimensional video images
Wang et al. Point linking network for object detection
CN110263605A (en) Pedestrian clothing color recognition method and device based on two-dimensional human pose estimation
CN115035546B (en) Three-dimensional human body posture detection method and device and electronic equipment
Yang et al. Precise measurement of position and attitude based on convolutional neural network and visual correspondence relationship
CN107357426A (en) Motion sensing control method for a virtual reality device
JP2003061936A (en) Moving three-dimensional model formation apparatus and method
CN114565976A (en) Training intelligent test method and device
Darujati et al. Facial motion capture with 3D active appearance models
Ito et al. Probe localization using structure from motion for 3D ultrasound image reconstruction
CN113313824A (en) Three-dimensional semantic map construction method
Lim et al. Use of log polar space for foveation and feature recognition
CN111881744B (en) Face feature point positioning method and system based on spatial position information
CN115359513A (en) Multi-view pedestrian detection method based on key point supervision and grouping feature fusion
Le 3-D human pose estimation in traditional martial art videos

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant