CN110175523A - Self-moving robot animal identification and avoidance method and storage medium - Google Patents

Self-moving robot animal identification and avoidance method and storage medium

Info

Publication number
CN110175523A
CN110175523A (Application CN201910342589.6A)
Authority
CN
China
Prior art keywords
frame
animal
self-moving robot
movement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910342589.6A
Other languages
Chinese (zh)
Other versions
CN110175523B (en)
Inventor
黄骏
周晓军
陶明
孙赛
王行
李骊
盛赞
李朔
杨淼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Huajie Imi Software Technology Co Ltd
Original Assignee
Nanjing Huajie Imi Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Huajie Imi Software Technology Co Ltd filed Critical Nanjing Huajie Imi Software Technology Co Ltd
Priority to CN201910342589.6A priority Critical patent/CN110175523B/en
Publication of CN110175523A publication Critical patent/CN110175523A/en
Application granted granted Critical
Publication of CN110175523B publication Critical patent/CN110175523B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Abstract

A self-moving robot animal identification and avoidance method and storage medium therefor. The method collects environmental information around the self-moving robot to obtain RGB images and depth maps, and identifies the animal with a CNN. The animal's pixels are removed and visual-inertial odometry computes the transition matrix T_r between the robot's b1 frame and b2 frame; the animal's pixels are extracted and the animal's transition matrix T_a between the b1 frame and the b2 frame is computed; the animal's depth maps are converted into point clouds, and ICP matches the point clouds of the b1 frame and the b2 frame to obtain the animal's transition matrix T_b1 under the b1 frame coordinate system; the robot is then driven to move so that the transition matrix between its post-motion coordinate system and the b1 frame of reference is T_b1, keeping the robot's pose relationship with the animal constant. The invention both improves the self-moving robot's ability to escape entrapment and improves its practicality, intelligence and environmental interactivity.

Description

Self-moving robot animal identification and avoidance method and storage medium
Technical field
The present invention relates to the field of self-moving robots, and in particular to a method and storage medium by which a self-moving robot identifies an animal, estimates the animal's motion and avoids the animal, thereby improving the robot's practicality, intelligence and environmental interactivity.
Background technique
Self-moving robots work in indoor environments, where pets are the most common animals. A self-moving robot is not only affected by animals while moving, it can also affect them: for example, a robot chased by an animal during its movement may be damaged, and the animal itself may be injured. Most current indoor self-moving robots cannot identify or avoid animals, so these problems arise easily, leaving such robots with certain defects in practicality, intelligence and environmental interactivity.
Therefore, how to identify an animal and, when the pose between the animal and the self-moving robot falls below a preset threshold, avoid the animal while maintaining a constant pose relationship with it, has become a technical problem that the prior art urgently needs to solve.
Summary of the invention
An object of the present invention is to provide a self-moving robot animal identification and avoidance method and storage medium that enable a self-moving robot to move while maintaining a constant pose relationship with an animal, both improving the robot's practicality, intelligence and environmental interactivity and enhancing its ability to escape entrapment.
To this end, the present invention adopts the following technical solutions:
A self-moving robot animal identification and avoidance method, characterized by comprising the following steps:
Animal identification step S110: obtain an RGB image and a depth map of the area in front of the moving self-moving robot, and perform animal identification on the RGB image with a convolutional neural network (CNN); when an animal is recognized, determine the animal's pose relative to the robot from the depth image, and when that pose is below a preset threshold, carry out the following steps of the method;
Transition matrix T_r and T_a calculation step S120:
compute the transition matrix T_r between the robot's b1 frame and b2 frame using the RGB image and depth map with the animal removed;
extract the depth pixel data corresponding to the animal's RGB pixels in the b1 and b2 frames, and compute the animal's transition matrix T_a between the b1 frame and the b2 frame;
Step S130, calculating the transition matrix T_b1 of the two animal point clouds under the b1 frame of reference: convert the animal depth maps of the two frames into point clouds, iterate on the two animal point clouds transformed into the b1 frame coordinate system, and compute their transition matrix T_b1 under the b1 frame of reference;
Actuation step S140: drive the self-moving robot to move so that the transition matrix between its post-motion coordinate system and the b1 frame of reference is T_b1, keeping the pose relationship between the robot and the animal constant.
Optionally, in animal identification step S110, performing animal identification on the RGB image with a convolutional neural network (CNN) specifically comprises: the CNN uses convolutional layers, pooling layers and fully connected layers to build a classifier for predictive identification; the convolutional layers multiply the input with convolution kernels to obtain output matrices, extracting features from the image; the pooling layers reduce the feature-vector dimensionality, mitigating over-fitting and noise propagation; the fully connected layer flattens the pooling-layer tensor into a vector, multiplies it by weights, applies a ReLU activation function and optimizes the parameters by gradient descent to produce the classifier; prediction is finally performed with this classifier.
Optionally, after the animal is identified, the previously obtained RGB image and depth map are also used to obtain an RGB image and depth map without the animal, and an RGB image and depth map containing only the animal, for estimating the initial value of the animal's motion.
Optionally, the transition matrix T_r is calculated as follows: the robot's angular velocity and acceleration are obtained with the IMU; the IMU data between the b1 frame and the b2 frame are pre-integrated to obtain the IMU measurement residual between them; the image residual is computed from the re-projection error; a sliding-window method detects whether the latest frame b2 and its previous frame b1 share stable features, and if so the latest frame is added to the sliding window; and a sliding-window tightly coupled visual-inertial odometry (visual VIO) computes the transition matrix T_r between the b1 frame and the b2 frame; and/or
the transition matrix T_a is calculated as follows: the depth pixel data corresponding to the animal's RGB pixels in the b1 and b2 frames are extracted, and the animal's transition matrix T_a between the b1 frame and the b2 frame is computed from the animal's RGB image and depth map using the direct linear transform (DLT).
Optionally, step S130 of calculating the transition matrix T_b1 of the two animal point clouds under the b1 frame of reference specifically comprises:
converting the animal depth maps of the reference frame b1 and the current frame b2 into point clouds; transforming the animal point cloud of the current frame b2 into the b1 frame coordinate system with the transition matrix T_r between the two frames; iterating on the two animal point clouds in the b1 frame coordinate system with the ICP (Iterative Closest Point) algorithm, using the product of the two transition matrices, T_r·T_a, as the ICP initial value, which makes the two animal point clouds converge quickly; and thereby computing the transition matrix T_b1 of the two animal point clouds under the b1 frame of reference.
Optionally, the self-moving robot has a depth camera and an IMU; the depth camera collects environmental information around the robot to obtain the RGB image and depth map, and the IMU obtains the robot's angular velocity and acceleration.
Optionally, the self-moving robot runs steps S110-S140 in a loop: it obtains the next frame's animal RGB image and depth map, identifies the animal, calculates T_r and T_a, calculates the transition matrix T_b1 of the two animal point clouds under the b1 frame of reference, and drives itself to move so that the transition matrix between its post-motion coordinate system and the b1 frame of reference is T_b1, keeping its pose relationship with the animal constant.
The invention also discloses a storage medium for storing computer-executable instructions, characterized in that:
the computer-executable instructions, when executed by a processor, perform the above self-moving robot animal identification and avoidance method.
The invention further discloses a self-moving robot having the above storage medium, characterized in that: the storage medium performs the above self-moving robot animal identification and avoidance method.
The invention further discloses a self-moving robot, characterized in that: the self-moving robot has a depth camera and an IMU and can perform the above self-moving robot animal identification and avoidance method.
In summary, the self-moving robot of the present invention can identify an animal, estimate its motion, avoid it and maintain a constant pose relationship with it. This both improves the robot's ability to escape entrapment and improves its practicality, intelligence and environmental interactivity. The vast majority of current self-moving robots lack such functions, which allow the robot and the animal to interact amicably.
Detailed description of the invention
Fig. 1 is a flowchart of the self-moving robot animal identification and avoidance method according to a specific embodiment of the present invention;
Fig. 2 shows the steps for obtaining the iteration initial value according to a specific embodiment of the present invention;
Fig. 3 shows the steps of calculating the animal's motion and driving the self-moving robot according to a specific embodiment of the present invention.
Specific embodiment
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the invention, not to limit it. It should also be noted that, for convenience of description, the drawings show only the parts related to the invention rather than the entire structure.
In the invention, the self-moving robot has a depth camera and an IMU (Inertial Measurement Unit). The depth camera collects environmental information around the robot, obtaining RGB images and depth maps used to identify the animal and estimate its motion. When the pose between the animal and the robot falls below a preset threshold, the robot moves so as to maintain a constant pose relationship with the animal, so that the animal cannot come any closer to the robot.
Specifically, a convolutional neural network identifies the animal; the animal's pixels are removed, and the IMU and camera are fused to realize visual-inertial odometry, which computes the transition matrix T_r between the robot's b1 frame and b2 frame; the animal's pixels are extracted and the animal's transition matrix T_a between the b1 frame and the b2 frame is computed; the transition matrix T_b1 of the two animal point clouds under the b1 frame of reference is computed; and the robot is driven to move so that the transition matrix between its post-motion coordinate system and the b1 frame of reference is T_b1, keeping its pose relationship with the animal constant.
Specifically, referring to Fig. 1, the flowchart of the self-moving robot animal identification and avoidance method comprises the following steps:
Animal identification step S110: obtain an RGB image and a depth map of the area in front of the moving self-moving robot, and perform animal identification on the RGB image with a convolutional neural network (CNN); when an animal is recognized, determine the animal's pose relative to the robot from the depth image, and when that pose is below a preset threshold, carry out the following steps of the method.
In an optional embodiment, performing animal identification on the RGB image with a convolutional neural network (CNN) specifically comprises: the CNN uses convolutional layers, pooling layers and fully connected layers to build a classifier for predictive identification; the convolutional layers multiply the input with convolution kernels to obtain output matrices, extracting features from the image; the pooling layers reduce the feature-vector dimensionality, mitigating over-fitting and noise propagation; the fully connected layer flattens the pooling-layer tensor into a vector, multiplies it by weights, applies a ReLU activation function and optimizes the parameters by gradient descent to produce the classifier; prediction is finally performed with this classifier.
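The layer sequence described above (convolution, pooling, flattening into a vector, fully connected layer with ReLU, classifier) can be sketched as a single forward pass. This is an illustrative toy with random weights, not the patent's trained network; the shapes and the two-class (animal / no animal) output are assumptions:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution: multiply the sliding window with the kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Max pooling: reduces feature-map dimensions, noise and over-fitting."""
    h, w = x.shape[0] // size * size, x.shape[1] // size * size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cnn_forward(img, kernel, w_fc, w_out):
    feat = np.maximum(conv2d(img, kernel), 0)   # convolutional layer + ReLU
    pooled = max_pool(feat)                     # pooling layer
    vec = pooled.ravel()                        # flatten tensor into a vector
    hidden = np.maximum(vec @ w_fc, 0)          # fully connected layer + ReLU
    return softmax(hidden @ w_out)              # classifier probabilities

rng = np.random.default_rng(0)
img = rng.random((12, 12))            # stand-in for one image channel
kernel = rng.standard_normal((3, 3))
w_fc = rng.standard_normal((25, 8))   # 12x12 conv-> 10x10, pool -> 5x5 = 25
w_out = rng.standard_normal((8, 2))   # 2 classes: animal / no animal
probs = cnn_forward(img, kernel, w_fc, w_out)
```

In practice the weights would be optimized by gradient descent on labelled animal images, as the text describes.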
Further, after the animal is identified, the previously obtained RGB image and depth map are also used to obtain an RGB image and depth map without the animal, and an RGB image and depth map containing only the animal, for the subsequent calculation of the transition matrices.
In the present invention, the self-moving robot has a depth camera and an IMU; the depth camera collects environmental information around the robot to obtain the RGB image and depth map, and the IMU obtains the robot's angular velocity and acceleration.
Transition matrix T_r and T_a calculation step S120:
this step comprises computing the transition matrix T_r between the robot's b1 frame and b2 frame using the RGB image and depth map with the animal removed;
and extracting the depth pixel data corresponding to the animal's RGB pixels in the b1 and b2 frames and computing the animal's transition matrix T_a between the b1 frame and the b2 frame.
The transition matrix T_r is calculated as follows: the robot's angular velocity and acceleration are obtained with the IMU; the IMU data between the b1 frame and the b2 frame are pre-integrated to obtain the IMU measurement residual between them; the image residual is computed from the re-projection error; a sliding-window method detects whether the latest frame b2 and its previous frame b1 share stable features, and if so the latest frame is added to the sliding window; and a sliding-window tightly coupled visual-inertial odometry (visual VIO) computes the transition matrix T_r between the b1 frame and the b2 frame.
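The IMU pre-integration between frames b1 and b2 can be sketched as a first-order accumulation of the gyroscope and accelerometer samples into a relative rotation, velocity and position increment. This is a minimal sketch only: a full VIO such as the sliding-window tightly coupled odometer would additionally model gravity, sensor biases and noise covariances:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector, so that skew(w) @ v == cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def preintegrate(gyro, accel, dt):
    """Accumulate IMU samples between two camera frames into a relative
    rotation R, velocity increment v and position increment p."""
    R = np.eye(3)
    v = np.zeros(3)
    p = np.zeros(3)
    for w, a in zip(gyro, accel):
        p = p + v * dt + 0.5 * (R @ a) * dt ** 2
        v = v + (R @ a) * dt
        # rotation update: Rodrigues formula for the small rotation w*dt
        th = np.linalg.norm(w) * dt
        if th > 1e-12:
            K = skew(w / np.linalg.norm(w))
            dR = np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * K @ K
        else:
            dR = np.eye(3)
        R = R @ dR
    return R, v, p

# 100 samples at 100 Hz of a constant 0.5 rad/s yaw rate, zero acceleration
gyro = np.tile([0.0, 0.0, 0.5], (100, 1))
accel = np.zeros((100, 3))
R, v, p = preintegrate(gyro, accel, dt=0.01)
```

After one second of pure yaw, R is a rotation of 0.5 rad about the z axis while v and p stay zero, matching the inputs.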
The transition matrix T_a is calculated as follows: the depth pixel data corresponding to the animal's RGB pixels in the b1 and b2 frames are extracted, and the animal's transition matrix T_a between the b1 frame and the b2 frame is computed from the animal's RGB image and depth map using the direct linear transform (DLT).
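The patent specifies DLT for computing T_a. As a hedged sketch of the same estimation problem (aligning the 3-D correspondences obtained by back-projecting the animal's RGB pixels with the depth map) here is the closed-form SVD solution (Kabsch/Umeyama), a common alternative to DLT for 3-D/3-D correspondences:

```python
import numpy as np

def rigid_transform(P, Q):
    """Best-fit rotation R and translation t with Q ~ R @ P + t, via SVD.
    Returns the 4x4 transition matrix in the format [R t; 0 1]."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

rng = np.random.default_rng(1)
P = rng.random((30, 3))                          # animal points in frame b1
a = 0.3
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.05])
Q = P @ Rz.T + t_true                            # same points in frame b2
T_a = rigid_transform(P, Q)
```

With exact correspondences the closed form recovers the motion exactly; with noisy depth data it gives the least-squares estimate.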
In this step, the two transition matrices are calculated to lay the foundation for calculating the iteration initial value in the next step.
In the present invention, the depth camera shoots the RGB image and the depth map simultaneously; the current frame b2 and the reference frame b1 therefore refer above all to the moments at which the images were shot.
Referring to Fig. 2, the steps required to estimate the transition matrices T_r and T_a and the iteration initial value T_r·T_a are shown.
Both transition matrices have the format

T = | R  t |
    | 0  1 |

where R is the rotation matrix and t is the translation vector.
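Packing R and t into this 4x4 homogeneous form is what lets the transforms of the method chain and invert by plain matrix algebra. A minimal sketch (the frame names are illustrative):

```python
import numpy as np

def make_T(R, t):
    """Pack a 3x3 rotation R and translation t into the 4x4 form [R t; 0 1]."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def inv_T(T):
    """Closed-form inverse of a rigid transform: [R t]^-1 = [R^T  -R^T t]."""
    R, t = T[:3, :3], T[:3, 3]
    return make_T(R.T, -R.T @ t)

# rigid transforms chain by matrix product: T(b1->b3) = T(b1->b2) @ T(b2->b3)
a = 0.2
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
T12 = make_T(Rz, np.array([1.0, 0.0, 0.0]))
T23 = make_T(np.eye(3), np.array([0.0, 0.5, 0.0]))
T13 = T12 @ T23
```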
Step S130, calculating the transition matrix T_b1 of the two animal point clouds under the b1 frame of reference: convert the animal depth maps of the two frames into point clouds, iterate on the two animal point clouds transformed into the b1 frame coordinate system, and compute their transition matrix T_b1 under the b1 frame of reference.
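Converting a depth map into a point cloud uses the standard pinhole back-projection; the intrinsics fx, fy, cx, cy below are hypothetical placeholders for the depth camera's calibration, not values from the patent:

```python
import numpy as np

def depth_to_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map to a point cloud with the pinhole model:
    X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth[v, u].
    Pixels with zero depth (no measurement) are dropped."""
    v, u = np.nonzero(depth > 0)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.column_stack([x, y, z])

depth = np.zeros((4, 4))
depth[1, 2] = 2.0        # one valid animal pixel at (u=2, v=1), 2 m away
cloud = depth_to_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```

In the method, this conversion is applied only to the animal's pixels in the b1 and b2 frames.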
Specifically: the animal depth maps of the reference frame b1 and the current frame b2 are converted into point clouds; the animal point cloud of the current frame b2 is transformed into the b1 frame coordinate system with the transition matrix T_r between the two frames; the two animal point clouds in the b1 frame coordinate system are iterated with the ICP (Iterative Closest Point) algorithm, using the product of the two transition matrices, T_r·T_a, as the ICP initial value, which makes the two animal point clouds converge quickly; and the transition matrix T_b1 of the two animal point clouds under the b1 frame of reference is thereby computed.
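A minimal ICP loop in the spirit of the step above: match each source point to its nearest destination point, re-fit the rigid transform in closed form, and repeat until the transform stops changing. Brute-force nearest neighbours are used for brevity (a real implementation would use a k-d tree, and would seed T_init with the T_r·T_a product described in the text):

```python
import numpy as np

def best_fit(P, Q):
    """Least-squares rotation and translation aligning P onto Q (SVD)."""
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(src, dst, T_init=np.eye(4), iters=20):
    """Iterative Closest Point from initial guess T_init."""
    T = T_init.copy()
    for _ in range(iters):
        moved = src @ T[:3, :3].T + T[:3, 3]
        # brute-force nearest neighbours (fine for small animal clouds)
        nn = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1).argmin(1)
        R, t = best_fit(src, dst[nn])
        T_new = np.eye(4)
        T_new[:3, :3], T_new[:3, 3] = R, t
        if np.allclose(T_new, T, atol=1e-10):
            break
        T = T_new
    return T

rng = np.random.default_rng(2)
src = rng.random((50, 3))                    # animal cloud from frame b1
t_true = np.array([0.02, -0.01, 0.015])      # small animal motion in frame b1
dst = src + t_true                           # animal cloud after the motion
T_b1 = icp(src, dst)
```

A good initial value matters because ICP only converges to the nearest local minimum; that is exactly why the method seeds it from the previously computed matrices.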
Actuation step S140: drive the self-moving robot to move so that the transition matrix between its post-motion coordinate system and the b1 frame of reference is T_b1, keeping the pose relationship between the robot and the animal constant.
Referring to Fig. 3, the steps required to calculate the animal's motion and drive the self-moving robot according to a specific embodiment of the present invention are shown.
Therefore, through steps S110-S140, the self-moving robot can maintain a constant pose relationship with the animal, enhancing the robot's practicality, intelligence and interactivity with its environment.
Further, the self-moving robot runs steps S110-S140 in a loop: it obtains the next frame's animal RGB image and depth map, identifies the animal, calculates T_r and T_a, calculates the transition matrix T_b1 of the two animal point clouds under the b1 frame of reference, and drives itself to move so that the transition matrix between its post-motion coordinate system and the b1 frame of reference is T_b1, keeping its pose relationship with the animal constant.
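The patent does not specify the motion-control law, only the target: the robot's post-motion pose relative to b1 must equal T_b1. Under that assumption, the remaining motion can be computed by pose arithmetic alone; `motion_command` and the interface that consumes its output are hypothetical:

```python
import numpy as np

def inv_T(T):
    """Closed-form inverse of a rigid transform [R t; 0 1]."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3], Ti[:3, 3] = R.T, -R.T @ t
    return Ti

def motion_command(T_r, T_b1):
    """The robot's pose relative to b1 is currently T_r (its b1->b2 transform);
    the goal is for it to become T_b1 (the animal's motion under the b1 frame).
    The remaining motion, expressed in the robot's current b2 frame, is
    T_r^-1 @ T_b1; a driver accepting relative transforms could execute it."""
    return inv_T(T_r) @ T_b1

# the robot has moved 1 m along x; the animal moved 1 m along x and 0.5 m along y
T_r = np.eye(4);  T_r[:3, 3] = [1.0, 0.0, 0.0]
T_goal = np.eye(4);  T_goal[:3, 3] = [1.0, 0.5, 0.0]
cmd = motion_command(T_r, T_goal)   # remaining motion: 0.5 m along y
```

Re-running this each loop iteration keeps the robot mirroring the animal's motion, which is what holds the relative pose constant.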
The invention further discloses a storage medium for storing computer-executable instructions; the computer-executable instructions, when executed by a processor, perform the above self-moving robot animal identification and avoidance method.
The invention also discloses a self-moving robot having the above storage medium and able to perform the above self-moving robot animal identification and avoidance method.
Alternatively, a self-moving robot has a depth camera and an IMU and is able to perform the above self-moving robot animal identification and avoidance method.
In summary, the self-moving robot of the present invention can identify an animal, estimate its motion, avoid it and maintain a constant pose relationship with it. This both improves the robot's ability to escape entrapment and improves its practicality, intelligence and environmental interactivity. The vast majority of current self-moving robots lack such functions, which allow the robot and the animal to interact amicably.
Obviously, those skilled in the art will understand that each of the above units or steps of the invention can be implemented with a general-purpose computing device; they may be concentrated on a single computing device, implemented with program code executable by a computing device and stored in a storage device for execution, fabricated separately as individual integrated-circuit modules, or multiple modules or steps among them may be fabricated as a single integrated-circuit module. The invention is thus not limited to any specific combination of hardware and software.
The above is a further detailed description of the invention in conjunction with specific preferred embodiments, but the specific embodiments of the invention cannot be considered limited thereto. For those of ordinary skill in the art to which the invention belongs, several simple deductions or substitutions may be made without departing from the inventive concept, and all shall be regarded as falling within the scope of protection determined by the claims submitted.

Claims (10)

1. A self-moving robot animal identification and avoidance method, characterized by comprising the following steps:
animal identification step S110: obtaining an RGB image and a depth map of the area in front of the moving self-moving robot, and performing animal identification on the RGB image with a convolutional neural network (CNN); when an animal is recognized, determining the animal's pose relative to the robot from the depth image, and when that pose is below a preset threshold, carrying out the following steps of the method;
transition matrix T_r and T_a calculation step S120:
computing the transition matrix T_r between the robot's b1 frame and b2 frame using the RGB image and depth map with the animal removed;
extracting the depth pixel data corresponding to the animal's RGB pixels in the b1 and b2 frames and computing the animal's transition matrix T_a between the b1 frame and the b2 frame;
step S130 of calculating the transition matrix T_b1 of the two animal point clouds under the b1 frame of reference: converting the animal depth maps of the two frames into point clouds, iterating on the two animal point clouds transformed into the b1 frame coordinate system, and computing their transition matrix T_b1 under the b1 frame of reference;
actuation step S140: driving the self-moving robot to move so that the transition matrix between its post-motion coordinate system and the b1 frame of reference is T_b1, keeping the pose relationship between the robot and the animal constant.
2. The self-moving robot animal identification and avoidance method according to claim 1, characterized in that:
in animal identification step S110, performing animal identification on the RGB image with a convolutional neural network (CNN) specifically comprises: the CNN uses convolutional layers, pooling layers and fully connected layers to build a classifier for predictive identification; the convolutional layers multiply the input with convolution kernels to obtain output matrices, extracting features from the image; the pooling layers reduce the feature-vector dimensionality, mitigating over-fitting and noise propagation; the fully connected layer flattens the pooling-layer tensor into a vector, multiplies it by weights, applies a ReLU activation function and optimizes the parameters by gradient descent to produce the classifier; prediction is finally performed with this classifier.
3. The self-moving robot animal identification and avoidance method according to claim 2, characterized in that:
after the animal is identified, the previously obtained RGB image and depth map are also used to obtain an RGB image and depth map without the animal, and an RGB image and depth map containing only the animal, for estimating the initial value of the animal's motion.
4. The self-moving robot animal identification and avoidance method according to claim 1, characterized in that:
the transition matrix T_r is calculated as follows: the robot's angular velocity and acceleration are obtained with the IMU; the IMU data between the b1 frame and the b2 frame are pre-integrated to obtain the IMU measurement residual between them; the image residual is computed from the re-projection error; a sliding-window method detects whether the latest frame b2 and its previous frame b1 share stable features, and if so the latest frame is added to the sliding window; and a sliding-window tightly coupled visual-inertial odometry (visual VIO) computes the transition matrix T_r between the b1 frame and the b2 frame; and/or
the transition matrix T_a is calculated as follows: the depth pixel data corresponding to the animal's RGB pixels in the b1 and b2 frames are extracted, and the animal's transition matrix T_a between the b1 frame and the b2 frame is computed from the animal's RGB image and depth map using the direct linear transform (DLT).
5. The self-moving robot animal identification and avoidance method according to claim 1, characterized in that:
step S130 of calculating the transition matrix T_b1 of the two animal point clouds under the b1 frame of reference specifically comprises:
converting the animal depth maps of the reference frame b1 and the current frame b2 into point clouds; transforming the animal point cloud of the current frame b2 into the b1 frame coordinate system with the transition matrix T_r between the two frames; iterating on the two animal point clouds in the b1 frame coordinate system with the ICP (Iterative Closest Point) algorithm, using the product of the two transition matrices, T_r·T_a, as the ICP initial value, which makes the two animal point clouds converge quickly; and thereby computing the transition matrix T_b1 of the two animal point clouds under the b1 frame of reference.
6. The self-moving robot animal identification and avoidance method according to claim 1, characterized in that:
the self-moving robot has a depth camera and an IMU; the depth camera collects environmental information around the robot to obtain the RGB image and depth map, and the IMU obtains the robot's angular velocity and acceleration.
7. The self-moving robot animal identification and avoidance method according to claim 1, characterized in that:
the self-moving robot runs steps S110-S140 in a loop: it obtains the next frame's animal RGB image and depth map, identifies the animal, calculates T_r and T_a, calculates the transition matrix T_b1 of the two animal point clouds under the b1 frame of reference, and drives itself to move so that the transition matrix between its post-motion coordinate system and the b1 frame of reference is T_b1, keeping its pose relationship with the animal constant.
8. A storage medium for storing computer-executable instructions, characterized in that:
the computer-executable instructions, when executed by a processor, perform the self-moving robot animal identification and avoidance method of any one of claims 1-7.
9. A self-moving robot having the storage medium of claim 8, characterized in that:
the storage medium performs the self-moving robot animal identification and avoidance method of any one of claims 1-7.
10. A self-moving robot, characterized in that:
the self-moving robot has a depth camera and an IMU and can perform the self-moving robot animal identification and avoidance method of any one of claims 1-7.
CN201910342589.6A 2019-04-26 2019-04-26 Self-moving robot animal identification and avoidance method and storage medium thereof Active CN110175523B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910342589.6A CN110175523B (en) 2019-04-26 2019-04-26 Self-moving robot animal identification and avoidance method and storage medium thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910342589.6A CN110175523B (en) 2019-04-26 2019-04-26 Self-moving robot animal identification and avoidance method and storage medium thereof

Publications (2)

Publication Number Publication Date
CN110175523A true CN110175523A (en) 2019-08-27
CN110175523B CN110175523B (en) 2021-05-14

Family

ID=67690149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910342589.6A Active CN110175523B (en) 2019-04-26 2019-04-26 Self-moving robot animal identification and avoidance method and storage medium thereof

Country Status (1)

Country Link
CN (1) CN110175523B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112884838A (en) * 2021-03-16 2021-06-01 重庆大学 Robot autonomous positioning method
CN113470591A (en) * 2020-03-31 2021-10-01 京东方科技集团股份有限公司 Monitor toning method and device, electronic equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180278820A1 (en) * 2014-07-01 2018-09-27 Brain Corporation Optical detection apparatus and methods
EP3007025A1 (en) * 2014-10-10 2016-04-13 LG Electronics Inc. Robot cleaner and method for controlling the same
CN105137973A (en) * 2015-08-21 2015-12-09 华南理工大学 Method for robot to intelligently avoid human under man-machine cooperation scene
CN106096559A (en) * 2016-06-16 2016-11-09 深圳零度智能机器人科技有限公司 Obstacle detection method and system and moving object
CN107995962A (en) * 2017-11-02 2018-05-04 深圳市道通智能航空技术有限公司 A kind of barrier-avoiding method, device, loose impediment and computer-readable recording medium
CN108805906A (en) * 2018-05-25 2018-11-13 哈尔滨工业大学 A kind of moving obstacle detection and localization method based on depth map
CN108958263A (en) * 2018-08-03 2018-12-07 江苏木盟智能科技有限公司 A kind of Obstacle Avoidance and robot
CN109461185A (en) * 2018-09-10 2019-03-12 西北工业大学 A kind of robot target automatic obstacle avoidance method suitable for complex scene

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MIKAEL SVENSTRUP ET AL.: "Pose Estimation and Adaptive Robot Behaviour for Human-Robot Interaction", 《2009 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION》 *
WU GUOSHENG ET AL.: "A Robot Dynamic Collision Avoidance Algorithm Based on Polar Coordinates", Proceedings of the 2006 Chinese Control and Decision Conference *
WU KANG ET AL.: "A Mobile Robot Obstacle Avoidance Algorithm Based on Fuzzy Recognition", Journal of Southeast University (Natural Science Edition) *
ZHANG YI ET AL.: "Dynamic Obstacle Avoidance Algorithm for Mobile Robots Based on Depth Images", Control Engineering of China *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470591A (en) * 2020-03-31 2021-10-01 BOE Technology Group Co., Ltd. Monitor color matching method and device, electronic equipment and storage medium
CN113470591B (en) * 2020-03-31 2023-11-14 京东方科技集团股份有限公司 Monitor color matching method and device, electronic equipment and storage medium
CN112884838A (en) * 2021-03-16 2021-06-01 重庆大学 Robot autonomous positioning method
CN112884838B (en) * 2021-03-16 2022-11-15 重庆大学 Robot autonomous positioning method

Also Published As

Publication number Publication date
CN110175523B (en) 2021-05-14

Similar Documents

Publication Publication Date Title
CN109544636B (en) Rapid monocular visual odometry navigation and positioning method integrating the feature point method and the direct method
EP3698323B1 (en) Depth from motion for augmented reality for handheld user devices
US10769411B2 (en) Pose estimation and model retrieval for objects in images
CN112767373B (en) Robot indoor complex scene obstacle avoidance method based on monocular camera
Biswas et al. Gesture recognition using microsoft kinect®
US11064178B2 (en) Deep virtual stereo odometry
CN111462135A (en) Semantic mapping method based on visual SLAM and two-dimensional semantic segmentation
WO2019031083A1 (en) Method and system for detecting action
US20190301871A1 (en) Direct Sparse Visual-Inertial Odometry Using Dynamic Marginalization
CN104794737B (en) Depth-information-assisted particle filter tracking method
WO2015126443A1 (en) Moving object localization in 3d using a single camera
WO2021098765A1 (en) Key frame selection method and apparatus based on motion state
JP2022546643A (en) Image processing system and method for landmark position estimation with uncertainty
CN105930790A (en) Human body behavior recognition method based on kernel sparse coding
CN103914855A (en) Moving object positioning method and system
CN110175523A (en) Self-moving robot animal identification and avoidance method and storage medium thereof
CN114004883B (en) Visual perception method and device for curling ball, computer equipment and storage medium
CN111798373A (en) Rapid unmanned aerial vehicle image stitching method based on local plane hypothesis and six-degree-of-freedom pose optimization
CN112581540A (en) Camera calibration method based on human body posture estimation in large scene
CN112488067B (en) Face pose estimation method and device, electronic equipment and storage medium
CN109725699A (en) Identification code recognition method, device and equipment
CN110520813A (en) Predicting multi-agent adversarial movements through signature formations using the Radon cumulative distribution transform and canonical correlation analysis
US11238604B1 (en) Densifying sparse depth maps
WO2022228391A1 (en) Terminal device positioning method and related device therefor
CN110377033B (en) Small soccer robot recognition, tracking and grabbing method based on RGBD information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant