CN110175523B - Self-moving robot animal identification and avoidance method and storage medium thereof

Self-moving robot animal identification and avoidance method and storage medium thereof

Info

Publication number
CN110175523B
CN110175523B
Authority
CN
China
Prior art keywords
animal
frame
self
moving robot
frames
Prior art date
Legal status
Active
Application number
CN201910342589.6A
Other languages
Chinese (zh)
Other versions
CN110175523A (en)
Inventor
黄骏
周晓军
陶明
孙赛
王行
李骊
盛赞
李朔
杨淼
Current Assignee
Nanjing Huajie Imi Technology Co ltd
Original Assignee
Nanjing Huajie Imi Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Huajie Imi Technology Co ltd
Priority to CN201910342589.6A
Publication of CN110175523A
Application granted
Publication of CN110175523B
Legal status: Active

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands


Abstract

A self-moving robot animal identification and avoidance method and a storage medium thereof. Environment information around the self-moving robot is collected to obtain an RGB image and a depth image, and a CNN is used to identify an animal. The pixels belonging to the animal are removed, and a visual inertial odometer is run on the remaining data to compute the transformation matrix T_robot of the self-moving robot between the b1 frame and the b2 frame. The pixels of the animal part are extracted, and the transformation matrix T_animal of the animal between the b1 frame and the b2 frame is calculated. The depth maps of the animal are converted into point clouds, and ICP is used to match the animal point clouds of the b1 and b2 frames, yielding the transformation matrix T_icp of the animal in the b1 frame coordinate system. The self-moving robot is then driven so that the transformation matrix between its moved coordinate system and the b1 frame reference system is T_icp, so that the self-moving robot keeps a constant pose relation with the animal. The invention improves the ability of the self-moving robot to escape from difficult situations, and also improves its practicability, intelligence and environment interactivity.

Description

Self-moving robot animal identification and avoidance method and storage medium thereof
Technical Field
The present invention relates to the field of self-moving robots, and more particularly, to a method and a storage medium for identifying an animal, estimating a motion of the animal, and avoiding the animal by a self-moving robot, so as to improve the practicability, intelligence, and environmental interactivity of the self-moving robot.
Background
Self-moving robots work in indoor environments, and pets are the most common animals in such environments. A self-moving robot is not only affected by animals while it moves, but also affects the animals' environment; for example, a self-moving robot that is chased by an animal while moving may be damaged and may also injure the animal. At present, most indoor self-moving robots cannot identify and avoid animals, so such problems arise easily, and the practicability, intelligence and environment interactivity of these robots are therefore limited.
Therefore, how to identify an animal, avoid it when the relative pose between the animal and the self-moving robot is smaller than a preset value, and keep a constant pose relationship with it has become a technical problem that urgently needs to be solved in the prior art.
Disclosure of Invention
The invention aims to provide a self-moving robot animal identification and avoidance method and a storage medium thereof, by which the self-moving robot can move while keeping a constant pose relation with an animal. The practicability, intelligence and environment interactivity of the self-moving robot are improved, and its ability to escape from difficult situations is enhanced.
In order to achieve the purpose, the invention adopts the following technical scheme:
a self-moving robot animal identification and avoidance method is characterized by comprising the following steps:
an animal identification step S110: acquiring an RGB (red, green and blue) image and a depth image in front of the moving self-moving robot, identifying an animal in the RGB image with a convolutional neural network (CNN), judging, when an animal is identified, the pose of the animal relative to the self-moving robot from the depth image, and performing the following steps of the method when the pose is smaller than a preset value;
a transformation matrix T_robot and T_animal calculation step S120:
calculating the transformation matrix T_robot of the self-moving robot between the b1 frame and the b2 frame by using the RGB map and the depth map without the animal; extracting the depth pixel data corresponding to the RGB pixels of the animal in the b1 and b2 frames, and calculating the transformation matrix T_animal of the animal between the b1 frame and the b2 frame;
a step S130 of calculating the transformation matrix T_icp of the two frames of animal point clouds in the b1 frame reference system: converting the depth maps of the two frames of the animal into point clouds, iterating on the two frames of animal point clouds in the b1 frame coordinate system, and calculating the transformation matrix T_icp of the two frames of animal point clouds in the b1 frame reference system;
a driving step S140: driving the self-moving robot to move so that the transformation matrix between the moved coordinate system and the b1 frame reference system is T_icp, whereby the self-moving robot and the animal keep a constant pose relationship.
Optionally, in the animal identification step S110, identifying an animal in the RGB map with the convolutional neural network (CNN) specifically comprises: the convolutional neural network builds a classifier from convolutional layers, pooling layers and a fully-connected layer and uses it for prediction and identification; the convolutional layers convolve the image with convolution kernels to obtain output feature maps and extract features from the image; the pooling layers reduce the dimension of the feature vectors, reduce over-fitting and reduce noise propagation; the fully-connected layer flattens the pooled tensor into a vector, multiplies it by the weights and applies a ReLU activation function, and the parameters are optimized by gradient descent to generate the classifier; prediction and identification are finally performed by the classifier.
Optionally, after the animal is identified, the RGB map and the depth map which are acquired in advance are used to obtain the RGB map and the depth map which do not contain the animal, and the RGB map and the depth map which only contain the animal, respectively, so as to estimate the initial value of the animal movement.
Optionally, the transformation matrix T_robot is calculated as follows: the angular velocity and acceleration of the self-moving robot are obtained with the IMU; the IMU data between the b1 frame and the b2 frame are pre-integrated to obtain the IMU measurement residual between the b1 frame and the b2 frame; the image residual is calculated from the reprojection error; a sliding-window method is used to detect whether stable features exist between the latest b2 frame and the preceding b1 frame, and if so, the latest frame is added to the sliding window; the transformation matrix T_robot between the b1 frame and the b2 frame is then calculated with a sliding-window-based tightly-coupled visual inertial odometer (VIO);
and/or,
the transformation matrix T_animal is calculated as follows: the depth pixel data corresponding to the RGB pixels of the animal in the b1 and b2 frames are extracted, and the transformation matrix T_animal of the animal between the b1 frame and the b2 frame is calculated from the RGB map and the depth map of the animal by direct linear transformation (DLT).
Optionally, the step S130 of calculating the transformation matrix T_icp of the two frames of animal point clouds in the b1 frame reference system specifically comprises:
converting the depth images of the animal in the reference frame b1 and the current frame b2 into point clouds; converting, through the transformation matrix T_robot between the b1 frame and the b2 frame, the point cloud data of the animal in the current frame b2 into the point cloud of the animal in the b1 frame coordinate system; iterating on the two frames of animal point clouds converted into the b1 frame coordinate system with the ICP (Iterative Closest Point) algorithm, using the initial value T_init obtained from the transformation matrices T_robot and T_animal, which enables the two frames of animal point clouds to converge quickly; and calculating the transformation matrix T_icp of the two frames of animal point clouds in the b1 frame reference system.
Optionally, the self-moving robot has a depth camera for collecting environmental information around the self-moving robot to obtain an RGB map and a depth map, and an IMU for obtaining an angular velocity and an acceleration of the self-moving robot.
Optionally, the self-moving robot runs steps S110 to S140 cyclically: it acquires the RGB image and depth image of the next frame containing the animal, identifies the animal, calculates T_robot and T_animal, calculates the transformation matrix T_icp of the two frames of animal point clouds in the b1 frame reference system, and drives itself to move so that the transformation matrix between the moved coordinate system and the b1 frame reference system is T_icp, whereby the self-moving robot keeps a constant pose relation with the animal.
The invention also discloses a storage medium for storing computer executable instructions, which is characterized in that:
the computer executable instructions, when executed by the processor, perform the self-moving robotic animal identification and avoidance method described above.
The invention further discloses a self-moving robot, which is provided with the storage medium and is characterized in that: the storage medium executes the self-moving robot animal identification and avoidance method.
The invention further discloses a self-moving robot, which is characterized in that: the self-moving robot is provided with a depth camera and an IMU, and can execute the self-moving robot animal identification and avoidance method.
In conclusion, the self-moving robot of the invention can identify an animal, estimate its motion, avoid it and keep a constant pose relation with it. This improves the robot's ability to escape from difficult situations and also improves its practicability, intelligence and environment interactivity. At present most self-moving robots do not have this function, which allows the self-moving robot to interact with animals in a friendly way.
Drawings
Fig. 1 is a flow chart of a self-moving robotic animal identification and avoidance method according to an embodiment of the present invention;
FIG. 2 shows the step of estimating the iteration initial value T_init according to an embodiment of the present invention;
fig. 3 is a step of calculating animal motion and driving a self-moving robot according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
The invention provides a self-moving robot with a depth camera and an IMU (inertial measurement unit); the depth camera collects environment information around the robot to obtain an RGB map and a depth map, which are used to identify the animal and estimate its movement. When the relative pose between the animal and the self-moving robot is smaller than a preset value, the self-moving robot moves and keeps a constant pose relation with the animal, so that the animal cannot further approach it.
Specifically, a convolutional neural network is used to recognize the animal; the pixels of the animal are removed, and the IMU and the camera are fused to realize a visual inertial odometer, which computes the transformation matrix T_robot of the self-moving robot between the b1 frame and the b2 frame. The pixels of the animal part are extracted, and the transformation matrix T_animal of the animal between the b1 frame and the b2 frame is calculated. The transformation matrix T_icp of the two frames of animal point clouds in the b1 frame reference system is then calculated. Finally, the self-moving robot is driven to move so that the transformation matrix between the moved coordinate system and the b1 frame reference system is T_icp, so that the self-moving robot keeps a constant pose relation with the animal.
In particular, referring to fig. 1, a flow chart of a self-moving robotic animal identification and avoidance method is shown, comprising the steps of:
and an animal identification step S110, acquiring an RGB (red, green and blue) image and a depth image in front of the movement of the mobile robot, identifying the RGB image by a Convolutional Neural Network (CNN), judging the pose of the animal from the mobile robot by the depth image when the animal is identified, and performing the following steps of the method when the pose is smaller than a preset value.
In an alternative embodiment, identifying an animal in the RGB map with the convolutional neural network (CNN) specifically comprises: the convolutional neural network builds a classifier from convolutional layers, pooling layers and a fully-connected layer and uses it for prediction and identification; the convolutional layers convolve the image with convolution kernels to obtain output feature maps and extract features from the image; the pooling layers reduce the dimension of the feature vectors, reduce over-fitting and reduce noise propagation; the fully-connected layer flattens the pooled tensor into a vector, multiplies it by the weights and applies a ReLU activation function, and the parameters are optimized by gradient descent to generate the classifier; prediction and identification are finally performed by the classifier.
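For concreteness, a minimal sketch of such a classifier in Python (PyTorch) is given below; the layer sizes, the input resolution and the two-class animal / no-animal output are illustrative assumptions rather than values prescribed by the invention.

    # Minimal sketch (PyTorch) of the conv -> pool -> fully-connected classifier
    # described above. Input size (3x128x128), channel counts and the number of
    # classes are illustrative assumptions, not values taken from the patent.
    import torch
    import torch.nn as nn

    class AnimalClassifier(nn.Module):
        def __init__(self, num_classes: int = 2):
            super().__init__()
            # Convolutional layers: convolve the image with kernels to extract feature maps.
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),            # pooling reduces dimensionality and noise
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            # Fully-connected layers: flatten the pooled tensor into a vector,
            # multiply by weights, apply ReLU, then produce class scores.
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 32 * 32, 128), nn.ReLU(),
                nn.Linear(128, num_classes),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x))

    # Parameters are optimized by gradient descent on a labelled data set, e.g.:
    model = AnimalClassifier()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    criterion = nn.CrossEntropyLoss()
    images = torch.randn(4, 3, 128, 128)      # placeholder batch of RGB images
    labels = torch.randint(0, 2, (4,))        # placeholder labels (animal / no animal)
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()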
Further, after the animal is identified, the already-acquired RGB map and depth map are used to obtain, respectively, an RGB map and a depth map that do not contain the animal and an RGB map and a depth map that contain only the animal, which are used for the subsequent transformation matrix calculations.
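As a rough illustration of this splitting, the following sketch assumes the recognizer also yields a per-pixel animal mask; the mask format and the function name are hypothetical and not specified by the invention.

    # Minimal sketch: split RGB-D data into background-only and animal-only images,
    # assuming a boolean per-pixel animal mask is available (an assumption).
    import numpy as np

    def split_by_mask(rgb: np.ndarray, depth: np.ndarray, animal_mask: np.ndarray):
        """Return (rgb/depth with the animal removed, rgb/depth with only the animal)."""
        mask = animal_mask.astype(bool)
        rgb_bg, depth_bg = rgb.copy(), depth.copy()
        rgb_an, depth_an = np.zeros_like(rgb), np.zeros_like(depth)
        rgb_bg[mask] = 0            # zero out animal pixels -> background-only images
        depth_bg[mask] = 0          # (these drive the visual inertial odometer)
        rgb_an[mask] = rgb[mask]    # keep only animal pixels -> animal-only images
        depth_an[mask] = depth[mask]
        return (rgb_bg, depth_bg), (rgb_an, depth_an)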
In the invention, the self-moving robot is provided with a depth camera and an IMU, wherein the depth camera is used for collecting environment information around the self-moving robot to obtain an RGB (red, green and blue) map and a depth map, and the IMU is used for obtaining the angular speed and the acceleration of the self-moving robot.
Transformation matrix T_robot and T_animal calculation step S120:
this step comprises calculating the transformation matrix T_robot of the self-moving robot between the b1 frame and the b2 frame by using the RGB map and the depth map without the animal; and extracting the depth pixel data corresponding to the RGB pixels of the animal in the b1 and b2 frames and calculating the transformation matrix T_animal of the animal between the b1 frame and the b2 frame.
The transformation matrix T_robot is calculated as follows: the angular velocity and acceleration of the self-moving robot are obtained with the IMU; the IMU data between the b1 frame and the b2 frame are pre-integrated to obtain the IMU measurement residual between the b1 frame and the b2 frame; the image residual is calculated from the reprojection error; a sliding-window method is used to detect whether stable features exist between the latest b2 frame and the preceding b1 frame, and if so, the latest frame is added to the sliding window; the transformation matrix T_robot between the b1 frame and the b2 frame is then calculated with a sliding-window-based tightly-coupled visual inertial odometer (VIO).
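For orientation, a minimal sketch of the IMU pre-integration between the b1 frame and the b2 frame is given below; bias estimation, gravity compensation, noise-covariance propagation, the reprojection residual and the sliding-window optimization of the tightly-coupled VIO itself are all omitted, so this shows only the pre-integration building block, not the full odometer.

    # Minimal sketch of IMU pre-integration between two frames: accumulate the
    # relative rotation, velocity and position terms used in the IMU residual.
    import numpy as np

    def skew(w):
        return np.array([[0, -w[2], w[1]],
                         [w[2], 0, -w[0]],
                         [-w[1], w[0], 0]])

    def expm_so3(phi):
        """Rodrigues formula: exponential map from so(3) to SO(3)."""
        theta = np.linalg.norm(phi)
        if theta < 1e-9:
            return np.eye(3) + skew(phi)
        K = skew(phi / theta)
        return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

    def preintegrate(gyro, accel, dt):
        """gyro, accel: (N,3) IMU samples between b1 and b2; dt: sample period.
        Accelerometer bias and gravity handling are omitted in this sketch."""
        R = np.eye(3)       # pre-integrated rotation
        v = np.zeros(3)     # pre-integrated velocity
        p = np.zeros(3)     # pre-integrated position
        for w, a in zip(gyro, accel):
            p += v * dt + 0.5 * (R @ a) * dt**2
            v += (R @ a) * dt
            R = R @ expm_so3(w * dt)
        return R, v, p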
The transformation matrix T_animal is calculated as follows: the depth pixel data corresponding to the RGB pixels of the animal in the b1 and b2 frames are extracted, and the transformation matrix T_animal of the animal between the b1 frame and the b2 frame is calculated from the RGB map and the depth map of the animal by direct linear transformation (DLT).
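The sketch below estimates a rigid transform of the animal between the two frames from matched 3D points (animal RGB pixels back-projected with their depth values); it uses the closed-form SVD (Kabsch) solution as a stand-in for the DLT computation described above, so it should be read as an illustrative assumption rather than the exact formulation of the invention.

    # Minimal sketch: rigid transform of the animal between frames b1 and b2 from
    # matched 3D points (SVD/Kabsch stand-in for the DLT described in the text).
    import numpy as np

    def rigid_transform_from_points(P1: np.ndarray, P2: np.ndarray) -> np.ndarray:
        """P1, P2: (N,3) matched animal points in frames b1 and b2.
        Returns a 4x4 homogeneous T_animal with P2 ~ R @ P1 + t."""
        c1, c2 = P1.mean(axis=0), P2.mean(axis=0)
        H = (P1 - c1).T @ (P2 - c2)          # cross-covariance of centred points
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:             # enforce a proper rotation (no reflection)
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = c2 - R @ c1
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, t
        return T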
The two transformation matrices calculated in this step serve as the basis for the initial value of the iteration in the next step.
In the present invention, the depth camera captures the RGB image and the depth image simultaneously, so the current frame b2 and the reference frame b1 refer to the times at which the images were captured.
Referring to FIG. 2, the steps of estimating the transformation matrices T_robot and T_animal and the initial value T_init are shown.
Both transformation matrices have the homogeneous form

    T = | R  t |
        | 0  1 |

where R is a rotation matrix and t is a translation vector.
Step S130 of calculating the transformation matrix T_icp of the two frames of animal point clouds in the b1 frame reference system: the depth maps of the two frames of the animal are converted into point clouds, the two frames of animal point clouds in the b1 frame coordinate system are iterated on, and the transformation matrix T_icp of the two frames of animal point clouds in the b1 frame reference system is calculated.
Specifically: the depth images of the animal in the reference frame b1 and the current frame b2 are converted into point clouds; through the transformation matrix T_robot between the b1 frame and the b2 frame, the point cloud data of the animal in the current frame b2 are converted into the point cloud of the animal in the b1 frame coordinate system; the ICP (Iterative Closest Point) algorithm is used to iterate on the two frames of animal point clouds converted into the b1 frame coordinate system, with T_init as the initial value of the ICP iteration. The initial value T_init is the product of the two matrices T_robot and T_animal; this value enables the two frames of animal point clouds to converge quickly. The transformation matrix T_icp of the two frames of animal point clouds in the b1 frame reference system is then calculated.
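A minimal sketch of this step is given below, assuming the Open3D library for ICP; the camera intrinsics, the depth scale, the correspondence distance threshold and the exact convention of T_robot (whether the matrix or its inverse maps b2 coordinates into the b1 frame) are illustrative assumptions.

    # Minimal sketch of step S130: depth maps -> point clouds, convert the b2
    # animal cloud into the b1 frame, then ICP with T_init as the initial value.
    import numpy as np
    import open3d as o3d

    def depth_to_points(depth, fx, fy, cx, cy, depth_scale=1000.0):
        """Back-project a depth image (integer depth units) into an (N,3) point cloud."""
        v, u = np.nonzero(depth)                  # pixels with valid depth
        z = depth[v, u] / depth_scale
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.stack([x, y, z], axis=1)

    def animal_motion_in_b1(depth_b1, depth_b2, T_robot, T_init, intr):
        """Return T_icp: the animal's motion expressed in the b1 frame reference system.
        intr = (fx, fy, cx, cy); T_init is formed from T_robot and T_animal as described above."""
        cloud_b1 = depth_to_points(depth_b1, *intr)
        cloud_b2 = depth_to_points(depth_b2, *intr)
        # Convert the b2 animal cloud into the b1 coordinate system using the robot's
        # ego-motion (4x4 homogeneous matrix); the sign/inverse convention is assumed.
        cloud_b2_in_b1 = (T_robot[:3, :3] @ cloud_b2.T).T + T_robot[:3, 3]
        src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(cloud_b1))
        dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(cloud_b2_in_b1))
        # ICP seeded with T_init so the two animal clouds converge quickly.
        result = o3d.pipelines.registration.registration_icp(
            src, dst, 0.05, T_init,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        return result.transformation              # T_icp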
Driving step S140: the self-moving robot is driven to move so that the transformation matrix between the moved coordinate system and the b1 frame reference system is T_icp, so that the self-moving robot keeps a constant pose relation with the animal.
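One way to read this driving step is that the robot, currently displaced from its b1 pose by T_robot, still has to execute the relative motion inv(T_robot) * T_icp so that its new pose relative to the b1 frame equals T_icp; the sketch below follows that reading, and both this interpretation and the planar command interface are assumptions rather than details given by the invention.

    # Minimal sketch of deriving a drive command from T_robot and T_icp,
    # under the assumptions stated above. The command interface is hypothetical.
    import numpy as np

    def compute_drive_command(T_robot: np.ndarray, T_icp: np.ndarray) -> np.ndarray:
        """Return the 4x4 relative transform the robot still has to execute,
        expressed in its current (b2) coordinate system."""
        return np.linalg.inv(T_robot) @ T_icp

    def to_planar_command(T_rel: np.ndarray):
        """Reduce the relative transform to a planar (dx, dy, dyaw) motion command."""
        dx, dy = T_rel[0, 3], T_rel[1, 3]
        dyaw = np.arctan2(T_rel[1, 0], T_rel[0, 0])
        return dx, dy, dyaw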
Referring to fig. 3, the corresponding steps required to calculate animal motion and drive the self-moving robot according to a specific embodiment of the present invention are shown.
Therefore, through steps S110 to S140, the self-moving robot and the animal can be kept in a constant pose relationship, and the practicability, intelligence and environment interactivity of the self-moving robot are enhanced.
Further, the self-moving robot runs steps S110 to S140 cyclically: it acquires the RGB image and depth image of the next frame containing the animal, identifies the animal, calculates T_robot and T_animal, calculates the transformation matrix T_icp of the two frames of animal point clouds in the b1 frame reference system, and drives itself to move so that the transformation matrix between the moved coordinate system and the b1 frame reference system is T_icp, so that the self-moving robot keeps a constant pose relation with the animal.
The invention further discloses a storage medium for storing computer-executable instructions which, when executed by a processor, perform the self-moving robotic animal identification and avoidance method described above.
The invention also discloses a self-moving robot which is provided with the storage medium and can execute the animal identification and avoidance method of the self-moving robot.
Alternatively, a self-moving robot having a depth camera and an IMU is capable of performing the above-described self-moving robot animal recognition and avoidance method.
In conclusion, the self-moving robot of the invention can identify an animal, estimate its motion, avoid it and keep a constant pose relation with it. This improves the robot's ability to escape from difficult situations and also improves its practicability, intelligence and environment interactivity. At present most self-moving robots do not have this function, which allows the self-moving robot to interact with animals in a friendly way.
It will be apparent to those skilled in the art that the various elements or steps of the invention described above may be implemented using a general purpose computing device, they may be centralized on a single computing device, or alternatively, they may be implemented using program code that is executable by a computing device, such that they may be stored in a memory device and executed by a computing device, or they may be separately fabricated into various integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
While the invention has been described in further detail with reference to specific preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A self-moving robot animal identification and avoidance method is characterized by comprising the following steps:
an animal identification step S110: acquiring an RGB (red, green and blue) image and a depth image in front of the moving self-moving robot, identifying an animal in the RGB image with a convolutional neural network CNN, judging, when an animal is identified, the pose of the animal relative to the self-moving robot from the depth image, and performing the following steps when the pose is smaller than a preset value;
a transformation matrix T_robot and T_animal calculation step S120:
calculating the transformation matrix T_robot of the self-moving robot between the b1 frame and the b2 frame by using the RGB map and the depth map without the animal; extracting the depth pixel data corresponding to the RGB pixels of the animal in the b1 and b2 frames, and calculating the transformation matrix T_animal of the animal between the b1 frame and the b2 frame;
a step S130 of calculating the transformation matrix T_icp of the two frames of animal point clouds in the b1 frame reference system: converting the depth maps of the two frames of the animal into point clouds, iterating on the two frames of animal point clouds in the b1 frame coordinate system, and calculating the transformation matrix T_icp of the two frames of animal point clouds in the b1 frame reference system;
a driving step S140: driving the self-moving robot to move so that the transformation matrix between the moved coordinate system and the b1 frame reference system is T_icp, whereby the self-moving robot and the animal keep a constant pose relationship.
2. The self-moving robotic animal identification and avoidance method of claim 1, wherein:
in the animal recognition step S110, the animal recognition of the RGB map by the convolutional neural network CNN specifically includes: the convolutional neural network generates a classifier by utilizing a convolutional layer, a pooling layer and a full-link layer to carry out prediction identification; obtaining an output matrix by multiplying the convolution layer with a convolution kernel, and extracting features from the image; the pooling layer reduces the dimension of the characteristic vector, reduces the over-fitting phenomenon and reduces the noise transmission; the full connection layer cuts the tensor of the pooling layer into vectors, multiplies the vectors by the weights, uses a ReLU activation function for the vectors, optimizes parameters by a gradient descent method, and generates a classifier; and finally performing prediction identification through the classifier.
3. The self-moving robotic animal identification and avoidance method of claim 2, wherein:
after the animal is identified, the RGB image and the depth image which are acquired in advance are also used for respectively obtaining the RGB image and the depth image which do not contain the animal and the RGB image and the depth image which only contain the animal so as to be used for estimating the initial value of the animal movement.
4. The self-moving robotic animal identification and avoidance method of claim 1, wherein:
wherein the transformation matrix T_robot is calculated as follows: the angular velocity and acceleration of the self-moving robot are obtained with the IMU; the IMU data between the b1 frame and the b2 frame are pre-integrated to obtain the IMU measurement residual between the b1 frame and the b2 frame; the image residual is calculated from the reprojection error; a sliding-window method is used to detect whether stable features exist between the latest b2 frame and the preceding b1 frame, and if so, the latest frame is added to the sliding window; the transformation matrix T_robot between the b1 frame and the b2 frame is then calculated with a sliding-window-based tightly-coupled visual inertial odometer VIO;
and/or,
wherein the transformation matrix T_animal is calculated as follows: the depth pixel data corresponding to the RGB pixels of the animal in the b1 and b2 frames are extracted, and the transformation matrix T_animal of the animal between the b1 frame and the b2 frame is calculated from the RGB image and the depth image of the animal by direct linear transformation (DLT).
5. The self-moving robotic animal identification and avoidance method of claim 1, wherein:
transformation matrix of two frames of animal point clouds under b1 frame reference system
Figure FDA0002953566230000031
The calculating step S130 specifically includes:
reference frame b1Current frame b2The depth image of the animal in the frame is converted into a point cloud image and passes through two frame images b1Frame and b2Transition matrix between frames
Figure FDA0002953566230000032
The current frame b2Converting point cloud data of animals in frame into reference frame b1Point clouds of animals in frame coordinate system by pair conversion to b using ICP (Iterative Closest Point) algorithm1Two frames of animal point clouds in the frame coordinate system are iterated, and
Figure FDA0002953566230000033
as the initial value of the above ICP iteration, the value enables two frames of animal point clouds to be converged quickly, and a transformation matrix of the two frames of animal point clouds under a b1 frame reference system is calculated
Figure FDA0002953566230000034
6. The self-moving robotic animal identification and avoidance method of claim 1, wherein:
the self-moving robot is provided with a depth camera and an IMU, wherein the depth camera is used for collecting environment information around the self-moving robot to obtain an RGB (red, green and blue) graph and a depth graph, and the IMU is used for obtaining the angular speed and the acceleration of the self-moving robot.
7. The self-moving robotic animal identification and avoidance method of claim 1, wherein:
the self-moving robot circularly runs the steps S110 to S140, and the self-moving robot circularly runs the steps S110 to S140, obtains an RGB (red, green and blue) image and a depth image of the next frame of animal, identifies the animal, and calculates
Figure FDA0002953566230000035
Calculating a transformation matrix of two frames of animal point clouds under a b1 frame reference system
Figure FDA0002953566230000036
Let the coordinate system after movement and b1The transformation matrix of the frame reference system is
Figure FDA0002953566230000037
The self-moving robot is driven to move, so that the self-moving robot keeps a constant pose relation with the animal.
8. A storage medium for storing computer-executable instructions, characterized in that:
the computer executable instructions, when executed by a processor, perform the self-moving robotic animal identification and avoidance method of any of claims 1-7.
9. A self-moving robot having the storage medium of claim 8, characterized in that:
the storage medium performs the self-moving robotic animal identification and avoidance method of any of claims 1-7.
10. A self-moving robot, characterized by:
the self-moving robot has a depth camera and an IMU, and is capable of performing the self-moving robot animal recognition and avoidance method of any one of claims 1-7.
CN201910342589.6A 2019-04-26 2019-04-26 Self-moving robot animal identification and avoidance method and storage medium thereof Active CN110175523B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910342589.6A CN110175523B (en) 2019-04-26 2019-04-26 Self-moving robot animal identification and avoidance method and storage medium thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910342589.6A CN110175523B (en) 2019-04-26 2019-04-26 Self-moving robot animal identification and avoidance method and storage medium thereof

Publications (2)

Publication Number Publication Date
CN110175523A CN110175523A (en) 2019-08-27
CN110175523B (en) 2021-05-14

Family

ID=67690149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910342589.6A Active CN110175523B (en) 2019-04-26 2019-04-26 Self-moving robot animal identification and avoidance method and storage medium thereof

Country Status (1)

Country Link
CN (1) CN110175523B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470591B (en) * 2020-03-31 2023-11-14 京东方科技集团股份有限公司 Monitor color matching method and device, electronic equipment and storage medium
CN112884838B (en) * 2021-03-16 2022-11-15 重庆大学 Robot autonomous positioning method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105137973A (en) * 2015-08-21 2015-12-09 华南理工大学 Method for robot to intelligently avoid human under man-machine cooperation scene
EP3007025A1 (en) * 2014-10-10 2016-04-13 LG Electronics Inc. Robot cleaner and method for controlling the same
CN107995962A (en) * 2017-11-02 2018-05-04 深圳市道通智能航空技术有限公司 A kind of barrier-avoiding method, device, loose impediment and computer-readable recording medium
CN108805906A (en) * 2018-05-25 2018-11-13 哈尔滨工业大学 A kind of moving obstacle detection and localization method based on depth map
CN108958263A (en) * 2018-08-03 2018-12-07 江苏木盟智能科技有限公司 A kind of Obstacle Avoidance and robot
CN109461185A (en) * 2018-09-10 2019-03-12 西北工业大学 A kind of robot target automatic obstacle avoidance method suitable for complex scene

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9848112B2 (en) * 2014-07-01 2017-12-19 Brain Corporation Optical detection apparatus and methods
CN106096559A (en) * 2016-06-16 2016-11-09 深圳零度智能机器人科技有限公司 Obstacle detection method and system and moving object

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3007025A1 (en) * 2014-10-10 2016-04-13 LG Electronics Inc. Robot cleaner and method for controlling the same
CN105137973A (en) * 2015-08-21 2015-12-09 华南理工大学 Method for robot to intelligently avoid human under man-machine cooperation scene
CN107995962A (en) * 2017-11-02 2018-05-04 深圳市道通智能航空技术有限公司 A kind of barrier-avoiding method, device, loose impediment and computer-readable recording medium
CN108805906A (en) * 2018-05-25 2018-11-13 哈尔滨工业大学 A kind of moving obstacle detection and localization method based on depth map
CN108958263A (en) * 2018-08-03 2018-12-07 江苏木盟智能科技有限公司 A kind of Obstacle Avoidance and robot
CN109461185A (en) * 2018-09-10 2019-03-12 西北工业大学 A kind of robot target automatic obstacle avoidance method suitable for complex scene

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Pose Estimation and Adaptive Robot Behaviour for Human-Robot Interaction; Mikael Svenstrup et al.; 2009 IEEE International Conference on Robotics and Automation; 2009-12-31; pp. 3571-3576 *
A robot dynamic collision avoidance algorithm based on polar coordinates; Wu Guosheng et al.; Proceedings of the 2006 Chinese Control and Decision Conference; 2006-12-31; pp. 1409-1411, 1415 *
Dynamic obstacle avoidance algorithm for mobile robots based on depth images; Zhang Yi et al.; Control Engineering of China; 2013-07-31; Vol. 20, No. 4; pp. 663-666, 675 *

Also Published As

Publication number Publication date
CN110175523A (en) 2019-08-27

Similar Documents

Publication Publication Date Title
US11216971B2 (en) Three-dimensional bounding box from two-dimensional image and point cloud data
US11361196B2 (en) Object height estimation from monocular images
EP3405910B1 (en) Deep machine learning methods and apparatus for robotic grasping
US10275649B2 (en) Apparatus of recognizing position of mobile robot using direct tracking and method thereof
EP3414710B1 (en) Deep machine learning methods and apparatus for robotic grasping
US11064178B2 (en) Deep virtual stereo odometry
CN106780608B (en) Pose information estimation method and device and movable equipment
US10399228B2 (en) Apparatus for recognizing position of mobile robot using edge based refinement and method thereof
KR102462799B1 (en) Method and apparatus for estimating pose
US20190301871A1 (en) Direct Sparse Visual-Inertial Odometry Using Dynamic Marginalization
US20210081791A1 (en) Computer-Automated Robot Grasp Depth Estimation
CN109323709B (en) Visual odometry method, device and computer-readable storage medium
CN111322993B (en) Visual positioning method and device
US11822621B2 (en) Systems and methods for training a machine-learning-based monocular depth estimator
CN113052907B (en) Positioning method of mobile robot in dynamic environment
US11403764B2 (en) Method and computing system for processing candidate edges
CN110175523B (en) Self-moving robot animal identification and avoidance method and storage medium thereof
JP6901803B2 (en) A learning method and learning device for removing jittering from video generated by a swaying camera using multiple neural networks for fault tolerance and fracture robustness, and a test method and test device using it.
Ruf et al. Real-time on-board obstacle avoidance for UAVs based on embedded stereo vision
Ge et al. Vipose: Real-time visual-inertial 6d object pose tracking
US11551379B2 (en) Learning template representation libraries
US11417063B2 (en) Determining a three-dimensional representation of a scene
CN110919644B (en) Method and system for positioning interaction by using camera equipment and robot
US11657506B2 (en) Systems and methods for autonomous robot navigation
WO2022151507A1 (en) Movable platform and method and apparatus for controlling same, and machine-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant