CN110175523B - Self-moving robot animal identification and avoidance method and storage medium thereof - Google Patents
- Publication number
- CN110175523B CN110175523B CN201910342589.6A CN201910342589A CN110175523B CN 110175523 B CN110175523 B CN 110175523B CN 201910342589 A CN201910342589 A CN 201910342589A CN 110175523 B CN110175523 B CN 110175523B
- Authority
- CN
- China
- Prior art keywords
- animal
- frame
- self
- moving robot
- frames
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
Abstract
A self-moving robot animal identification and avoidance method and a storage medium therefor. Environmental information around the self-moving robot is collected to obtain an RGB map and a depth map, and a CNN is used to identify the animal. The animal's pixels are removed so that a visual-inertial odometer can be realized, and the transformation matrix of the self-moving robot between the b1 and b2 frames is computed; the animal's pixels are extracted, and the transformation matrix of the animal between the b1 and b2 frames is calculated. The animal's depth maps are converted into point cloud maps, and the ICP algorithm is used to match the point clouds between the b1 and b2 frames, yielding the transformation matrix of the animal in the b1 frame coordinate system. The self-moving robot is then driven to move so that the transformation matrix between its post-motion coordinate system and the b1 frame reference system equals this matrix, so that the self-moving robot keeps a constant pose relation with the animal. The invention improves the ability of the self-moving robot to get out of difficulty, and also improves its practicability, intelligence and environmental interactivity.
Description
Technical Field
The present invention relates to the field of self-moving robots, and more particularly, to a method and a storage medium for identifying an animal, estimating a motion of the animal, and avoiding the animal by a self-moving robot, so as to improve the practicability, intelligence, and environmental interactivity of the self-moving robot.
Background
Self-moving robots work in indoor environments, where pets are the most common animals. During movement the self-moving robot is not only affected by animals but also affects them: for example, a robot chased by an animal while moving may be damaged and may also hurt the animal. At present, most indoor self-moving robots cannot identify or avoid animals, so these problems arise easily, and the practicability, intelligence and environmental interactivity of self-moving robots are deficient in this respect.
Therefore, how to identify an animal, avoid it when the relative pose between the animal and the self-moving robot falls below a preset value, and keep a constant pose relationship with it becomes a technical problem to be urgently solved in the prior art.
Disclosure of Invention
The invention aims to provide a self-moving robot animal identification and avoidance method and a storage medium thereof, wherein the method can enable the self-moving robot to move and keep a constant pose relation with an animal. The practicability, intelligence and environment interactivity of the self-moving robot are improved, and the ability of the self-moving robot to get rid of difficulties is enhanced.
In order to achieve the purpose, the invention adopts the following technical scheme:
a self-moving robot animal identification and avoidance method is characterized by comprising the following steps:
an animal identification step S110: acquiring an RGB image and a depth image in front of the moving self-moving robot; identifying the animal in the RGB image with a convolutional neural network (CNN); when an animal is identified, judging the pose of the animal relative to the self-moving robot from the depth image; and performing the following steps of the method when the pose is smaller than a preset value;
a transformation matrix calculation step S120: calculating the transformation matrix of the self-moving robot between the b1 and b2 frames using the RGB map and depth map that do not contain the animal; and extracting the depth pixel data corresponding to the animal's RGB pixels in the b1 and b2 frames, and calculating the transformation matrix of the animal between the b1 and b2 frames;
a calculation step S130 for the transformation matrix of the two frames of animal point clouds in the b1 frame reference system: converting the animal depth maps of the two frames into point cloud maps, transforming them into the b1 frame coordinate system, iterating over the two frames of animal point clouds, and calculating the transformation matrix of the two frames of animal point clouds in the b1 frame reference system;
a driving step S140: driving the self-moving robot to move so that the transformation matrix between its post-motion coordinate system and the b1 frame reference system equals the transformation matrix calculated in step S130, so that the self-moving robot keeps a constant pose relationship with the animal.
Optionally, in the animal identification step S110, identifying the animal in the RGB map with the convolutional neural network (CNN) specifically comprises: the convolutional neural network builds a classifier for predictive identification using convolutional layers, pooling layers and fully connected layers; the convolutional layers convolve the image with convolution kernels to obtain output matrices, extracting features from the image; the pooling layers reduce the dimension of the feature vectors, suppressing over-fitting and the propagation of noise; the fully connected layer flattens the pooled tensor into a vector, multiplies it by the weights and applies a ReLU activation function, with the parameters optimized by gradient descent, producing the classifier; predictive identification is finally performed by the classifier.
Optionally, after the animal is identified, the previously acquired RGB map and depth map are further used to obtain an RGB map and depth map that do not contain the animal, and an RGB map and depth map that contain only the animal, for estimating the initial value of the animal's motion.
Optionally, the calculation of the transformation matrix of the self-moving robot is specifically as follows: obtaining the angular velocity and acceleration of the self-moving robot with the IMU; pre-integrating the IMU data between the b1 and b2 frames to obtain the IMU measurement residual between the b1 and b2 frames; calculating the image residual from the reprojection error; detecting with a sliding-window method whether stable features exist between the latest frame b2 and the preceding frame b1, and if so, adding the latest frame to the sliding window; and calculating the transformation matrix between the b1 and b2 frames with a sliding-window-based tightly coupled visual-inertial odometer (VIO); and/or,
wherein the calculation of the transformation matrix of the animal is specifically as follows: extracting the depth pixel data corresponding to the animal's RGB pixels in the b1 and b2 frames, and calculating the transformation matrix of the animal between the b1 and b2 frames from the animal-only RGB map and depth map using the direct linear transformation (DLT).
Optionally, the calculation step S130 for the transformation matrix of the two frames of animal point clouds in the b1 frame reference system specifically comprises:
converting the animal depth images of the reference frame b1 and the current frame b2 into point cloud images; transforming the animal point cloud data of the current frame b2 into the coordinate system of the reference frame b1 by means of the transformation matrices between the b1 and b2 frames; iterating over the two animal point clouds in the b1 frame coordinate system with the ICP (Iterative Closest Point) algorithm, taking the transformation matrices obtained above as the initial value of the ICP iteration, which allows the two animal point clouds to converge quickly; and calculating the transformation matrix of the two animal point clouds in the b1 frame reference system.
Optionally, the self-moving robot has a depth camera for collecting environmental information around the self-moving robot to obtain an RGB map and a depth map, and an IMU for obtaining an angular velocity and an acceleration of the self-moving robot.
Optionally, the self-moving robot cyclically runs steps S110 to S140: it acquires the RGB map and depth map of the next frame, identifies the animal, calculates the robot and animal transformation matrices, calculates the transformation matrix of the two frames of animal point clouds in the b1 frame reference system, and is driven to move so that the transformation matrix between its post-motion coordinate system and the b1 frame reference system equals that matrix, so that the self-moving robot keeps a constant pose relation with the animal.
The invention also discloses a storage medium for storing computer executable instructions, which is characterized in that:
the computer executable instructions, when executed by the processor, perform the self-moving robotic animal identification and avoidance method described above.
The invention further discloses a self-moving robot, which is provided with the storage medium and is characterized in that: the storage medium executes the self-moving robot animal identification and avoidance method.
The invention further discloses a self-moving robot, which is characterized in that: the self-moving robot is provided with a depth camera and an IMU, and can execute the self-moving robot animal identification and avoidance method.
In conclusion, the self-moving robot can identify the animal, estimate the motion of the animal, avoid the animal and keep a constant pose relation with the animal. The self-moving robot has the advantages that the difficulty-escaping capability of the self-moving robot is improved, and the practicability, intelligence and environment interactivity of the self-moving robot are also improved. At present, most self-moving robots do not have the function, and the function can enable the self-moving robots to keep friendly interactivity with animals.
Drawings
Fig. 1 is a flow chart of a self-moving robotic animal identification and avoidance method according to an embodiment of the present invention;
fig. 2 shows the steps of estimating the transformation matrices and the ICP initial value according to an embodiment of the present invention;
fig. 3 shows the steps of calculating the animal motion and driving the self-moving robot according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
The invention provides a self-moving robot with a depth camera and an IMU (inertial measurement unit); the depth camera acquires environmental information from the robot's surroundings, obtaining the RGB map and depth map used to identify the animal and estimate its motion. When the relative pose between the animal and the self-moving robot is smaller than a preset value, the self-moving robot moves while keeping a constant pose relation with the animal, so that the animal cannot further approach it.
Specifically, a convolutional neural network is used to recognize the animal; the animal's pixels are removed, and the IMU and the camera are fused to realize a visual-inertial odometer, computing the transformation matrix of the self-moving robot between the b1 and b2 frames; the animal's pixels are extracted, and the transformation matrix of the animal between the b1 and b2 frames is calculated; the transformation matrix of the two frames of animal point clouds in the b1 frame reference system is then calculated; finally, the self-moving robot is driven to move so that the transformation matrix between its post-motion coordinate system and the b1 frame reference system equals that matrix, keeping a constant pose relation between the self-moving robot and the animal.
In particular, referring to fig. 1, a flow chart of a self-moving robot animal identification and avoidance method is shown, comprising the steps of:
an animal identification step S110: acquiring an RGB image and a depth image in front of the moving self-moving robot; identifying the animal in the RGB image with a convolutional neural network (CNN); when an animal is identified, judging the pose of the animal relative to the self-moving robot from the depth image; and performing the following steps of the method when the pose is smaller than a preset value.
In an alternative embodiment, the animal identification of the RGB map by the convolutional neural network (CNN) is specifically as follows: the convolutional neural network builds a classifier for predictive identification using convolutional layers, pooling layers and fully connected layers; the convolutional layers convolve the image with convolution kernels to obtain output matrices, extracting features from the image; the pooling layers reduce the dimension of the feature vectors, suppressing over-fitting and the propagation of noise; the fully connected layer flattens the pooled tensor into a vector, multiplies it by the weights and applies a ReLU activation function, with the parameters optimized by gradient descent, producing the classifier; predictive identification is finally performed by the classifier.
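The CNN pipeline described above (convolution, ReLU activation, pooling, fully connected classifier) can be sketched in miniature. This is an illustrative forward pass only, assuming NumPy; the layer sizes, kernel, and weights below are arbitrary stand-ins, not the network the patent trains.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid convolution: slide the kernel over the image, multiply and sum."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Pooling layer: reduces feature-map dimension and suppresses noise."""
    H2, W2 = x.shape[0] // size, x.shape[1] // size
    return x[:H2 * size, :W2 * size].reshape(H2, size, W2, size).max(axis=(1, 3))

def forward(img, kernel, fc_weights):
    feat = np.maximum(conv2d(img, kernel), 0.0)  # convolution + ReLU activation
    pooled = max_pool(feat)                      # pooling layer
    vec = pooled.reshape(-1)                     # cut the tensor into a vector
    return fc_weights @ vec                      # fully connected layer -> class scores

rng = np.random.default_rng(0)
img = rng.random((8, 8))          # toy 8x8 single-channel "image"
kernel = rng.random((3, 3))       # one 3x3 convolution kernel
fc_weights = rng.random((2, 9))   # 6x6 conv output -> 3x3 pooled -> 9 features, 2 classes
scores = forward(img, kernel, fc_weights)
```

In the patent the classifier's weights would be trained by gradient descent; here they are random, so the scores are meaningless and the sketch only shows the data flow through the layers.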
Further, after the animal is identified, the previously acquired RGB map and depth map are used to obtain an RGB map and depth map that do not contain the animal, and an RGB map and depth map that contain only the animal, for the subsequent transformation matrix calculations.
In the invention, the self-moving robot is provided with a depth camera and an IMU, wherein the depth camera is used for collecting environment information around the self-moving robot to obtain an RGB (red, green and blue) map and a depth map, and the IMU is used for obtaining the angular speed and the acceleration of the self-moving robot.
This step comprises calculating the transformation matrix of the self-moving robot between the b1 and b2 frames using the RGB map and depth map that do not contain the animal;
and extracting the depth pixel data corresponding to the animal's RGB pixels in the b1 and b2 frames, and calculating the transformation matrix of the animal between the b1 and b2 frames.
The calculation of the transformation matrix of the self-moving robot is specifically as follows: obtaining the angular velocity and acceleration of the self-moving robot with the IMU; pre-integrating the IMU data between the b1 and b2 frames to obtain the IMU measurement residual between the b1 and b2 frames; calculating the image residual from the reprojection error; detecting with a sliding-window method whether stable features exist between the latest frame b2 and the preceding frame b1, and if so, adding the latest frame to the sliding window; and calculating the transformation matrix between the b1 and b2 frames with a sliding-window-based tightly coupled visual-inertial odometer (VIO).
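The pre-integration idea can be illustrated for the gyroscope alone: successive angular-velocity samples between frames b1 and b2 are composed into a single relative rotation. This is a minimal sketch under stated assumptions (NumPy, a small constant timestep); the accelerometer terms, bias handling, and the full VIO residuals used by the patent are omitted.

```python
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: rotation vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate_gyro(omegas, dt):
    """Compose gyroscope samples between frames b1 and b2 into one relative rotation."""
    R = np.eye(3)
    for w in omegas:
        R = R @ so3_exp(np.asarray(w, dtype=float) * dt)
    return R

# constant yaw rate of 0.5 rad/s sampled 10 times at 20 ms -> 0.1 rad total yaw
omegas = [(0.0, 0.0, 0.5)] * 10
R_b1b2 = preintegrate_gyro(omegas, dt=0.02)
```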
The calculation of the transformation matrix of the animal is specifically as follows: extracting the depth pixel data corresponding to the animal's RGB pixels in the b1 and b2 frames, and calculating the transformation matrix of the animal between the b1 and b2 frames from the animal-only RGB map and depth map using the direct linear transformation (DLT).
In this step, the two transformation matrices are calculated to serve as the initial value for the iteration in the next step.
In the present invention, the depth camera captures the RGB image and the depth image simultaneously, so the current frame b2 and the reference frame b1 each correspond to a single capture time at which both images were taken.
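Because RGB and depth are captured together, each depth frame can be back-projected into a camera-frame point cloud with the pinhole model, which is what the later point-cloud steps consume. An illustrative sketch in NumPy; the intrinsics fx, fy, cx, cy are placeholders for the actual camera calibration, not values from the patent.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image into camera-frame 3-D points (pinhole model).
    Pixels with zero depth (no return, or masked-out non-animal pixels) are dropped."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel coordinates
    z = depth.astype(float)
    valid = z > 0
    x = (u[valid] - cx) * z[valid] / fx
    y = (v[valid] - cy) * z[valid] / fy
    return np.stack([x, y, z[valid]], axis=1)       # (N, 3) point cloud

depth = np.zeros((4, 4))
depth[1, 2] = 2.0  # a single valid pixel at (u=2, v=1), 2 m from the camera
pts = depth_to_pointcloud(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
```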
Referring to fig. 2, the corresponding steps required for estimating the two transformation matrices and the initial value of the ICP iteration are shown.
The two transformation matrices have the form of a 4x4 homogeneous matrix T = [R t; 0 1], where R is a rotation matrix and t is a translation vector.
The calculation step S130 for the transformation matrix of the two frames of animal point clouds in the b1 frame reference system: the animal depth maps of the two frames are converted into point cloud maps and transformed into the b1 frame coordinate system; the two frames of animal point clouds are iterated, and their transformation matrix in the b1 frame reference system is calculated.
The method specifically comprises the following steps: converting the animal depth images of the reference frame b1 and the current frame b2 into point cloud images; transforming the animal point cloud data of the current frame b2 into the coordinate system of the reference frame b1 by means of the transformation matrices between the b1 and b2 frames; iterating over the two animal point clouds in the b1 frame coordinate system with the ICP (Iterative Closest Point) algorithm, where the product of the two matrices obtained above serves as the initial value of the ICP iteration, allowing the two animal point clouds to converge quickly; and calculating the transformation matrix of the two animal point clouds in the b1 frame reference system.
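The ICP iteration described above can be sketched with brute-force nearest-neighbour matching and an SVD (Kabsch) re-fit. This is a minimal NumPy illustration, not the patent's implementation: real point clouds would use a spatial index for matching and the VIO/DLT product as the initial value, while the toy example below recovers a small pure translation.

```python
import numpy as np

def best_fit_transform(A, B):
    """Kabsch/SVD: rigid transform (R, t) aligning paired points A onto B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(src, dst, R0=None, t0=None, iters=10):
    """Iterate: apply current estimate, match nearest neighbours, re-fit."""
    R = np.eye(3) if R0 is None else R0   # initial value (VIO/DLT product in the patent)
    t = np.zeros(3) if t0 is None else t0
    for _ in range(iters):
        moved = src @ R.T + t
        d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2)  # brute-force NN
        R, t = best_fit_transform(src, dst[d.argmin(axis=1)])
    return R, t

g = np.arange(3.0)
src = np.stack(np.meshgrid(g, g, g), axis=-1).reshape(-1, 3)  # 27-point grid "animal"
t_true = np.array([0.10, -0.05, 0.08])
dst = src + t_true            # second frame: the animal translated slightly
R_est, t_est = icp(src, dst)
```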
Driving step S140: the self-moving robot is driven to move so that the transformation matrix between its post-motion coordinate system and the b1 frame reference system equals the transformation matrix calculated in step S130, so that the self-moving robot keeps a constant pose relation with the animal.
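The driving target can be expressed as a relative motion command. As an illustrative sketch with assumptions of our own (4x4 homogeneous transforms; the symbols T_r for the robot's b1-to-b2 transform and T_p for the step-S130 matrix are not the patent's notation), the required motion from the current frame b2 follows from composing transforms: T_{b1,b3} = T_{b1,b2} @ T_{b2,b3}.

```python
import numpy as np

def motion_command(T_r, T_p):
    """Relative motion the robot must execute from its current frame b2 so that its
    post-motion frame b3 satisfies T_{b1,b3} = T_p, i.e. the robot reproduces the
    animal's motion and the robot-animal pose stays constant:
        T_{b1,b3} = T_{b1,b2} @ T_{b2,b3}  =>  T_{b2,b3} = inv(T_{b1,b2}) @ T_p
    """
    return np.linalg.inv(T_r) @ T_p

# toy check: between b1 and b2 the robot advanced 1.0 m along x,
# while the animal's transform in the b1 frame is 1.5 m along x.
T_r = np.eye(4); T_r[0, 3] = 1.0
T_p = np.eye(4); T_p[0, 3] = 1.5
cmd = motion_command(T_r, T_p)   # robot must still advance 0.5 m along x
```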
Referring to fig. 3, the corresponding steps required to calculate animal motion and drive the self-moving robot according to a specific embodiment of the present invention are shown.
Therefore, through steps S110 to S140, the self-moving robot can keep a constant pose relationship with the animal, which enhances the practicability, intelligence and environmental interactivity of the self-moving robot.
Further, the self-moving robot cyclically runs steps S110 to S140: it acquires the RGB map and depth map of the next frame, identifies the animal, calculates the robot and animal transformation matrices, calculates the transformation matrix of the two frames of animal point clouds in the b1 frame reference system, and is driven to move so that the transformation matrix between its post-motion coordinate system and the b1 frame reference system equals that matrix, keeping a constant pose relation with the animal.
The invention further discloses a storage medium for storing computer-executable instructions which, when executed by a processor, perform the self-moving robotic animal identification and avoidance method described above.
The invention also discloses a self-moving robot which is provided with the storage medium and can execute the animal identification and avoidance method of the self-moving robot.
Alternatively, a self-moving robot having a depth camera and an IMU is capable of performing the above-described self-moving robot animal recognition and avoidance method.
In conclusion, the self-moving robot can identify the animal, estimate the motion of the animal, avoid the animal and keep a constant pose relation with the animal. The self-moving robot has the advantages that the difficulty-escaping capability of the self-moving robot is improved, and the practicability, intelligence and environment interactivity of the self-moving robot are also improved. At present, most self-moving robots do not have the function, and the function can enable the self-moving robots to keep friendly interactivity with animals.
It will be apparent to those skilled in the art that the various elements or steps of the invention described above may be implemented using a general purpose computing device, they may be centralized on a single computing device, or alternatively, they may be implemented using program code that is executable by a computing device, such that they may be stored in a memory device and executed by a computing device, or they may be separately fabricated into various integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
While the invention has been described in further detail with reference to specific preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A self-moving robot animal identification and avoidance method is characterized by comprising the following steps:
an animal identification step S110: acquiring an RGB image and a depth image in front of the moving self-moving robot; identifying the animal in the RGB image with a convolutional neural network (CNN); when an animal is identified, judging the pose of the animal relative to the self-moving robot from the depth image; and carrying out the following steps when the pose is smaller than a preset value;
a transformation matrix calculation step S120: calculating the transformation matrix of the self-moving robot between the b1 and b2 frames using the RGB map and depth map that do not contain the animal; and extracting the depth pixel data corresponding to the animal's RGB pixels in the b1 and b2 frames, and calculating the transformation matrix of the animal between the b1 and b2 frames;
a calculation step S130 for the transformation matrix of the two frames of animal point clouds in the b1 frame reference system: converting the animal depth maps of the two frames into point cloud maps, transforming them into the b1 frame coordinate system, iterating over the two frames of animal point clouds, and calculating the transformation matrix of the two frames of animal point clouds in the b1 frame reference system; and
a driving step S140: driving the self-moving robot to move so that the transformation matrix between its post-motion coordinate system and the b1 frame reference system equals the transformation matrix calculated in step S130, so that the self-moving robot keeps a constant pose relationship with the animal.
2. The self-moving robotic animal identification and avoidance method of claim 1, wherein:
in the animal recognition step S110, the animal recognition of the RGB map by the convolutional neural network CNN specifically includes: the convolutional neural network generates a classifier by utilizing a convolutional layer, a pooling layer and a full-link layer to carry out prediction identification; obtaining an output matrix by multiplying the convolution layer with a convolution kernel, and extracting features from the image; the pooling layer reduces the dimension of the characteristic vector, reduces the over-fitting phenomenon and reduces the noise transmission; the full connection layer cuts the tensor of the pooling layer into vectors, multiplies the vectors by the weights, uses a ReLU activation function for the vectors, optimizes parameters by a gradient descent method, and generates a classifier; and finally performing prediction identification through the classifier.
3. The self-moving robotic animal identification and avoidance method of claim 2, wherein:
after the animal is identified, the RGB image and the depth image which are acquired in advance are also used for respectively obtaining the RGB image and the depth image which do not contain the animal and the RGB image and the depth image which only contain the animal so as to be used for estimating the initial value of the animal movement.
4. The self-moving robotic animal identification and avoidance method of claim 1, wherein:
wherein the calculation of the transformation matrix of the self-moving robot is specifically as follows: obtaining the angular velocity and acceleration of the self-moving robot with the IMU; pre-integrating the IMU data between the b1 and b2 frames to obtain the IMU measurement residual between the b1 and b2 frames; calculating the image residual from the reprojection error; detecting with a sliding-window method whether stable features exist between the latest frame b2 and the preceding frame b1, and if so, adding the latest frame to the sliding window; and calculating the transformation matrix between the b1 and b2 frames with a sliding-window-based tightly coupled visual-inertial odometer (VIO); and/or,
wherein the calculation of the transformation matrix of the animal is specifically as follows: extracting the depth pixel data corresponding to the animal's RGB pixels in the b1 and b2 frames, and calculating the transformation matrix of the animal between the b1 and b2 frames from the animal-only RGB map and depth map using the direct linear transformation (DLT).
5. The self-moving robotic animal identification and avoidance method of claim 1, wherein:
transformation matrix of two frames of animal point clouds under b1 frame reference systemThe calculating step S130 specifically includes:
converting the animal depth images of the reference frame b1 and the current frame b2 into point cloud images; transforming the animal point cloud data of the current frame b2 into the coordinate system of the reference frame b1 by means of the transformation matrices between the b1 and b2 frames; iterating over the two animal point clouds in the b1 frame coordinate system with the ICP (Iterative Closest Point) algorithm, taking the transformation matrices obtained above as the initial value of the ICP iteration, which allows the two animal point clouds to converge quickly; and calculating the transformation matrix of the two animal point clouds in the b1 frame reference system.
6. The self-moving robotic animal identification and avoidance method of claim 1, wherein:
the self-moving robot is provided with a depth camera and an IMU, wherein the depth camera is used for collecting environment information around the self-moving robot to obtain an RGB (red, green and blue) graph and a depth graph, and the IMU is used for obtaining the angular speed and the acceleration of the self-moving robot.
7. The self-moving robotic animal identification and avoidance method of claim 1, wherein:
the self-moving robot cyclically runs steps S110 to S140: it acquires the RGB map and depth map of the next frame, identifies the animal, calculates the robot and animal transformation matrices, calculates the transformation matrix of the two frames of animal point clouds in the b1 frame reference system, and is driven to move so that the transformation matrix between its post-motion coordinate system and the b1 frame reference system equals that matrix, so that the self-moving robot keeps a constant pose relation with the animal.
8. A storage medium for storing computer-executable instructions, characterized in that:
the computer executable instructions, when executed by a processor, perform the self-moving robotic animal identification and avoidance method of any of claims 1-7.
9. A self-moving robot having the storage medium of claim 8, characterized in that:
the storage medium performs the self-moving robotic animal identification and avoidance method of any of claims 1-7.
10. A self-moving robot, characterized by:
the self-moving robot has a depth camera and an IMU, and is capable of performing the self-moving robot animal recognition and avoidance method of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910342589.6A CN110175523B (en) | 2019-04-26 | 2019-04-26 | Self-moving robot animal identification and avoidance method and storage medium thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110175523A CN110175523A (en) | 2019-08-27 |
CN110175523B true CN110175523B (en) | 2021-05-14 |
Family
ID=67690149
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910342589.6A Active CN110175523B (en) | 2019-04-26 | 2019-04-26 | Self-moving robot animal identification and avoidance method and storage medium thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110175523B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113470591B (en) * | 2020-03-31 | 2023-11-14 | 京东方科技集团股份有限公司 | Monitor color matching method and device, electronic equipment and storage medium |
CN112884838B (en) * | 2021-03-16 | 2022-11-15 | 重庆大学 | Robot autonomous positioning method |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105137973A (en) * | 2015-08-21 | 2015-12-09 | 华南理工大学 | Method for robot to intelligently avoid human under man-machine cooperation scene |
EP3007025A1 (en) * | 2014-10-10 | 2016-04-13 | LG Electronics Inc. | Robot cleaner and method for controlling the same |
CN107995962A (en) * | 2017-11-02 | 2018-05-04 | 深圳市道通智能航空技术有限公司 | A kind of barrier-avoiding method, device, loose impediment and computer-readable recording medium |
CN108805906A (en) * | 2018-05-25 | 2018-11-13 | 哈尔滨工业大学 | A kind of moving obstacle detection and localization method based on depth map |
CN108958263A (en) * | 2018-08-03 | 2018-12-07 | 江苏木盟智能科技有限公司 | A kind of Obstacle Avoidance and robot |
CN109461185A (en) * | 2018-09-10 | 2019-03-12 | 西北工业大学 | A kind of robot target automatic obstacle avoidance method suitable for complex scene |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9848112B2 (en) * | 2014-07-01 | 2017-12-19 | Brain Corporation | Optical detection apparatus and methods |
CN106096559A (en) * | 2016-06-16 | 2016-11-09 | 深圳零度智能机器人科技有限公司 | Obstacle detection method and system and moving object |
Non-Patent Citations (3)
Title |
---|
Pose Estimation and Adaptive Robot Behaviour for Human-Robot Interaction; Mikael Svenstrup et al.; 2009 IEEE International Conference on Robotics and Automation; 20091231; pp. 3571-3576 * |
A robot dynamic collision avoidance algorithm in polar coordinates; Wu Guosheng et al.; Proceedings of the 2006 Chinese Control and Decision Conference; 20061231; pp. 1409-1411, 1415 * |
Dynamic obstacle avoidance algorithm for mobile robots based on depth images; Zhang Yi et al.; Control Engineering of China; 20130731; Vol. 20, No. 4; pp. 663-666, 675 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11216971B2 (en) | Three-dimensional bounding box from two-dimensional image and point cloud data | |
US11361196B2 (en) | Object height estimation from monocular images | |
EP3405910B1 (en) | Deep machine learning methods and apparatus for robotic grasping | |
US10275649B2 (en) | Apparatus of recognizing position of mobile robot using direct tracking and method thereof | |
EP3414710B1 (en) | Deep machine learning methods and apparatus for robotic grasping | |
US11064178B2 (en) | Deep virtual stereo odometry | |
CN106780608B (en) | Pose information estimation method and device and movable equipment | |
US10399228B2 (en) | Apparatus for recognizing position of mobile robot using edge based refinement and method thereof | |
KR102462799B1 (en) | Method and apparatus for estimating pose | |
US20190301871A1 (en) | Direct Sparse Visual-Inertial Odometry Using Dynamic Marginalization | |
US20210081791A1 (en) | Computer-Automated Robot Grasp Depth Estimation | |
CN109323709B (en) | Visual odometry method, device and computer-readable storage medium | |
CN111322993B (en) | Visual positioning method and device | |
US11822621B2 (en) | Systems and methods for training a machine-learning-based monocular depth estimator | |
CN113052907B (en) | Positioning method of mobile robot in dynamic environment | |
US11403764B2 (en) | Method and computing system for processing candidate edges | |
CN110175523B (en) | Self-moving robot animal identification and avoidance method and storage medium thereof | |
JP6901803B2 (en) | A learning method and learning device for removing jittering from video generated by a swaying camera using multiple neural networks for fault tolerance and fracture robustness, and a test method and test device using it. | |
Ruf et al. | Real-time on-board obstacle avoidance for UAVs based on embedded stereo vision | |
Ge et al. | Vipose: Real-time visual-inertial 6d object pose tracking | |
US11551379B2 (en) | Learning template representation libraries | |
US11417063B2 (en) | Determining a three-dimensional representation of a scene | |
CN110919644B (en) | Method and system for positioning interaction by using camera equipment and robot | |
US11657506B2 (en) | Systems and methods for autonomous robot navigation | |
WO2022151507A1 (en) | Movable platform and method and apparatus for controlling same, and machine-readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||