CN112363495A - Navigation method of inspection robot for livestock and poultry farm - Google Patents


Info

Publication number
CN112363495A
CN112363495A (application CN202011038150.3A)
Authority
CN
China
Prior art keywords
path
image
inspection robot
point
livestock
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011038150.3A
Other languages
Chinese (zh)
Inventor
张铁民
卢锦枫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Agricultural University
Original Assignee
South China Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Agricultural University
Priority to CN202011038150.3A
Publication of CN112363495A
Legal status: Pending

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a navigation method for an inspection robot in a livestock and poultry farm, comprising the following steps: S1, determining a starting point and an end point on the global map of the livestock and poultry farm, and automatically planning an optimal path; S2, the inspection robot moves through the farm along the optimal path and, while moving, captures in real time images of the path marker and QR codes laid on the ground; it acquires drive control parameters, derives navigation parameters from the path information and the drive control parameters, and corrects its motion deviation in real time according to those navigation parameters; it simultaneously checks for obstacles while moving and, if one is found, avoids it; the drive control parameters include a lateral deviation and an average angular deviation. By processing the captured images of the path marker and QR codes to obtain path information and navigation parameters, and controlling the inspection platform's motion according to those parameters, the invention enables the platform to patrol the livestock and poultry farm autonomously.

Description

Navigation method of inspection robot for livestock and poultry farm
Technical Field
The invention relates to the technical field of safe and efficient livestock and poultry breeding informatization, and in particular to a navigation method for an inspection robot in a livestock and poultry farm.
Background
During autonomous navigation, the inspection robot must know its own position; only with an accurate estimate of that position can it move precisely to a target point.
Existing navigation control for inspection robots either drives the robot along magnetic strips laid on the ground or uses control systems based on GPS or radar. Laying magnetic strips places high demands on the site environment, is time-consuming and expensive, and makes redeploying the robot to a new scene inconvenient; GPS navigation is mostly used outdoors and performs poorly indoors.
With the rapid development of electronics, navigation based on lidar, millimeter-wave radar, and inertial sensing has also made significant breakthroughs and is widely applied in automatic inspection, agricultural machinery, automated logistics, autonomous driving, and related fields, overcoming the shortcomings of magnetic, GPS, and radar navigation to some extent. However, lidar is still strongly affected by ambient light, and inertial navigation combined with GPS, while suitable outdoors, accumulates error over long periods of use. In a commercial livestock and poultry farm, the inspection robot operates in a fixed building or an enclosed, structured space, performing regular or ad-hoc inspections.
Therefore, the industry needs a low-cost, highly reliable navigation control method or system for inspection robots in livestock and poultry farms.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art by providing a navigation method for an inspection robot in a livestock and poultry farm.
The purpose of the invention is realized by the following technical scheme:
a navigation method of a patrol robot in a livestock and poultry farm comprises the following steps:
s1, determining a starting point and a terminal point on the overall map of the livestock and poultry farm, and automatically planning an optimal path;
s2, the inspection robot moves in the livestock and poultry farm according to the optimal path, and in the moving process, the path identifier and the image of the QR two-dimensional code which are arranged on the ground are obtained in real time, the central navigation path is found out by using a centerline method, and the QR two-dimensional code is identified and decoded; acquiring a drive control parameter, acquiring a navigation parameter according to the path information and the drive control parameter, and correcting the motion deviation in real time by the inspection robot according to the navigation parameter; detecting whether an obstacle exists or not simultaneously in the moving process; if so, avoiding the obstacle until the inspection robot moves to the terminal point; wherein the drive control parameters include a lateral deviation and an average angular deviation.
Preferably, step S2 includes: capturing images of the path marker and QR codes laid on the ground, and a depth image of obstacles ahead of the robot; acquiring the inspection robot's drive control parameters in real time; and processing the path-marker image, the QR-code image, and the obstacle depth image, then fusing the processed image information with the drive control parameters to obtain the navigation parameters.
Preferably, step S2 further includes: converting the captured image of the path marker and QR code from the RGB color space to the HSV color space and resizing it to 640x480; splitting the converted image into its H, S, and V channels and traversing all pixels: when a pixel satisfies H_min <= H <= H_max, S_min <= S <= S_max, and V_min <= V <= V_max, the corresponding output pixel is set to 255, and otherwise to 0, where H_min, H_max, S_min, S_max, V_min, and V_max are the thresholds for path segmentation; after the path is segmented, scanning rows 0 through 479 from top to bottom and, within each row, scanning outward from the middle column 319 toward both sides, from column 319 down to column 9 on the left and from column 319 up to column 630 on the right; for a given row, on the left side, when the 8 consecutive pixels to the left of some pixel are all 0, that pixel is the left boundary point of the path, denoted left; on the right side, when the 8 consecutive pixels to the right of some pixel are all 0, that pixel is the right boundary point, denoted right; the value (left + right)/2 is the center point of the path, and connecting all center points forms the path centerline; setting a line threshold and fitting a straight line through the center of the navigation path by Hough line transform; computing the inspection robot's current lateral deviation and angular deviation from the path centerline; and formulating a fuzzy rule, then determining the navigation parameters from the lateral deviation and the average angular deviation obtained by fusing the angular deviation, the heading angle fed back by the IMU, and the θ stored in the QR code.
Preferably, automatically planning the optimal path includes: setting the ID number of the QR code at the starting point on the global map of the livestock and poultry farm; on the global map each QR code is represented by a rectangular box, the number inside the box is the QR code's position ID, the values between boxes are edge weights, and lines with double-headed arrows indicate the directions in which the inspection robot may move; each QR code is associated with its position ID and the absolute pose (X, Y, θ) of its location; the optimal path is generated with Dijkstra's algorithm, as follows: (a) divide all vertices of the global map into two sets, used vertices and unused vertices; first mark the chosen starting point as used, then find the unused vertex closest to the starting point; (b) use the nearest vertex obtained in step (a) to update the distances of the unused vertices, so that after the vertex nearest the starting point is found, the next-nearest vertex can be found in turn.
Preferably, recognizing and decoding the QR code includes: converting the captured QR-code image to grayscale and smoothing it with a median filter; binarizing the smoothed image; finding all contours in the binary image with the contour-extraction function findContours; using the contour hierarchy to test whether a contour contains two nested sub-contours, thereby screening out the three black finder-pattern contours in the image; computing the center coordinates of the three finder patterns; finding the corner point with the largest angle among the center coordinates; obtaining an affine transformation matrix with getAffineTransform; applying an affine transformation with warpAffine using that matrix to rectify the distorted picture; and calling the ZBar library to recognize and decode the rectified QR code.
Preferably, detecting obstacles includes: first acquiring a depth map with a depth camera and converting it into a point cloud; binarizing the point-cloud image with a depth threshold; applying erosion and dilation to the binary image to remove noise; extracting the contours of all objects, discarding contours whose area is too small, and sorting the remaining contours by distance; marking the screened obstacle and non-obstacle contours with rectangular boxes of different colors; judging whether the middle path satisfies the width required for operation and, if not, comparing the sizes of the obstacles on the left and right; if the left obstacle is larger, the inspection robot turns right; once no obstacle is detected, it records the IMU heading angle, turns left by the negative of the recorded heading angle until the original path is detected again, and then continues along the original path; if the right obstacle is larger, the inspection robot turns left; once no obstacle is detected, it records the IMU heading angle, turns right by the negative of the recorded heading angle until the original path is detected again, and then continues along the original path.
Compared with the prior art, the invention has the following advantages:
By processing the captured images of the path marker and QR codes to obtain path information and navigation parameters, and controlling the inspection platform's motion according to those parameters, the invention enables the platform to inspect a livestock and poultry farm autonomously; at the same time, it monitors ambient temperature and humidity, harmful-gas concentration, and wind speed, and wirelessly streams video of livestock and poultry behavior to a PC terminal in real time. The navigation control method that fuses the marked path with QR-code information overcomes the drawback of being influenced by ambient light; compared with an inertial navigation control method, it corrects the accumulated error that the inertial system and encoder develop over long operation, so the robot can reach its target point accurately.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a flowchart illustrating a navigation method of the inspection robot for the livestock and poultry farm according to the present invention;
FIG. 2 is a path planning diagram of a navigation method of the inspection robot for the livestock and poultry farm according to the present invention;
FIG. 3 is a flow chart of the identification path recognition of the navigation method of the inspection robot for the livestock and poultry farm according to the present invention;
FIG. 4 is a QR two-dimensional code recognition flow chart of the navigation method of the inspection robot for the livestock and poultry farm according to the present invention;
FIG. 5 is a navigation parameter acquisition flow chart of the navigation method of the inspection robot for the livestock and poultry farm according to the present invention;
fig. 6 is an obstacle avoidance flow chart of the navigation method of the inspection robot for the livestock and poultry farm of the present invention.
Detailed Description
The invention is further illustrated by the following figures and examples.
Referring to fig. 1, a navigation method of a patrol robot for a livestock and poultry farm includes:
s1, determining a starting point and a terminal point on the overall map of the livestock and poultry farm, and automatically planning an optimal path;
in this embodiment, the inspection robot is further configured between the step S1 and the step S2 to detect whether the electric quantity of the robot meets the threshold requirement, and if not, the robot is provided with a buzzer to alarm; if yes, proceed to step S2;
S2, the inspection robot moves through the livestock and poultry farm along the optimal path under autonomous navigation. While moving, it captures in real time images of the path marker and QR codes laid on the ground, finds the central navigation path with a centerline method, and recognizes and decodes the QR codes; it acquires drive control parameters, derives navigation parameters from the path information and the drive control parameters, and corrects its motion deviation in real time according to those navigation parameters; it simultaneously checks for obstacles while moving and, if one is found, avoids it, until the inspection robot reaches the end point. The drive control parameters include a lateral deviation and an average angular deviation. While deriving the navigation parameters from the path information and drive control parameters, the system checks whether the CCD camera has detected a QR-code image: when no QR code is detected, the average angular deviation is the mean of the angular deviation obtained from image processing and the heading angle fed back by the IMU module; when a QR code is detected, it is the mean of the angular deviation from image processing, the heading angle fed back by the IMU module, and the θ stored in the QR code.
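The averaging rule above can be sketched as a small function. This is an illustrative reconstruction, not the patent's code; the function name and the use of a plain arithmetic mean are assumptions, and all angles are assumed to share one unit.

```python
def average_angle_deviation(image_dev, imu_heading, qr_theta=None):
    """Fuse the angle deviation from image processing with the IMU heading
    angle, and with the QR code's theta when a code has been detected."""
    if qr_theta is None:
        # no QR code in view: average the two available sources
        return (image_dev + imu_heading) / 2.0
    # QR code detected: average all three sources
    return (image_dev + imu_heading + qr_theta) / 3.0
```

The fuzzy rule table would then map this average deviation, together with the lateral deviation, to a concrete drive command.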
in the present embodiment, step S2 includes: shooting an image of a path identifier and a QR two-dimensional code which are arranged on the ground and an image of the depth of an obstacle in front of the movement; the path mark is a yellow colored ribbon, a central navigation path is found out by using a centerline method according to the ground, and the inspection robot moves on the central navigation path. Acquiring a driving control parameter of the inspection robot in real time; and processing the path identifier, the image of the QR two-dimensional code and the barrier depth image, and fusing the processed image information and the drive control parameter to obtain a navigation parameter. The QR two-dimensional code is associated with an ID number of a corresponding position and an absolute pose (X, Y, theta) of the current position.
In this embodiment, a clean, refined path (the central navigation path) is obtained by processing the images of the path marker and QR codes on the ground, and the inspection robot's motion deviation (lateral deviation and angular deviation) is computed; referring to fig. 3, this proceeds as follows:
converting the captured image of the path marker and QR code from the RGB color space to the HSV color space, and resizing it to 640x480;
splitting the converted image into its H, S, and V channels and traversing all pixels: when a pixel satisfies H_min <= H <= H_max, S_min <= S <= S_max, and V_min <= V <= V_max, the corresponding output pixel is set to 255, and otherwise to 0, where H_min, H_max, S_min, S_max, V_min, and V_max are the thresholds for path segmentation;
after the path is segmented, scanning rows 0 through 479 from top to bottom and, within each row, scanning outward from the middle column 319 toward both sides: from column 319 down to column 9 on the left, and from column 319 up to column 630 on the right; for a given row, on the left side, when the 8 consecutive pixels to the left of some pixel are all 0, that pixel is the left boundary point of the path, denoted left; on the right side, when the 8 consecutive pixels to the right of some pixel are all 0, that pixel is the right boundary point, denoted right; the value (left + right)/2 is the center point of the path, and connecting all center points forms the path centerline; setting a line threshold and fitting a straight line through the center of the navigation path by Hough line transform; computing the inspection robot's current lateral deviation and angular deviation from the path centerline; and formulating a fuzzy rule, then determining the navigation parameters from the lateral deviation and the average angular deviation obtained by fusing the angular deviation, the heading angle fed back by the IMU, and the θ stored in the QR code.
In this embodiment, automatically planning the optimal path includes: setting the ID number of the QR code at the starting point on the global map of the livestock and poultry farm. On the global map each QR code is represented by a rectangular box, and the number inside the box is the QR code's position ID; in fig. 2, the numbers 1-16 label the individual QR codes. The lower-left corner is the coordinate origin; X is the label's abscissa on the global map, Y is its ordinate, and θ is the set deviation angle. For example:
the (X, Y, θ) of label 1 is (0, 0, 0) and is taken as the map origin;
the (X, Y, θ) of label 2 is (0, 10, 0);
and so on.
The values between the rectangular boxes are edge weights, and lines with double-headed arrows indicate the directions in which the inspection robot may move; each QR code is associated with its position ID and the absolute pose (X, Y, θ) of its location; the optimal path is generated with Dijkstra's algorithm.
the method for generating the optimal path by utilizing the Dijkstra algorithm comprises the following steps: (a) dividing all vertexes of the global map of the livestock and poultry farm into two types, wherein one type is a used vertex, and the other type is an unused vertex; firstly, taking a set starting point as a used point, and then finding out a point closest to the starting point from the unused points; x is the abscissa position of the label in the whole map, Y is the coordinate position, and theta is the set deviation angle; (b) updating the unused point by using the nearest point obtained in the step (a) so as to find out the next nearest point to the starting point after the point nearest to the starting point is found out in the last step.
In this embodiment, referring to fig. 4, recognizing and decoding the QR code includes: converting the captured QR-code image to grayscale and smoothing it with a median filter; binarizing the smoothed image; finding all contours in the binary image with the contour-extraction function findContours; using the contour hierarchy to test whether a contour contains two nested sub-contours, thereby screening out the three black finder-pattern contours in the image; computing the center coordinates of the three finder patterns; finding the corner point with the largest angle among the center coordinates; obtaining an affine transformation matrix with getAffineTransform; applying an affine transformation with warpAffine using that matrix to rectify the distorted picture; and calling the ZBar library to recognize and decode the rectified QR code.
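The hierarchy test for finder patterns can be sketched as follows, using OpenCV's [next, previous, first_child, parent] hierarchy layout as returned by findContours with RETR_TREE. This pure-Python filter is an illustrative assumption; real code would also check contour area and squareness before accepting the three candidates.

```python
def finder_pattern_candidates(hierarchy):
    """Keep indices of contours that contain two nested sub-contours,
    the dark/light/dark ring signature of a QR finder pattern.
    `hierarchy` is a list of [next, previous, first_child, parent]."""
    keep = []
    for i, (_, _, child, _) in enumerate(hierarchy):
        # a finder pattern has a child contour that itself has a child
        grandchild = hierarchy[child][2] if child != -1 else -1
        if child != -1 and grandchild != -1:
            keep.append(i)
    return keep
```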
In this embodiment, referring to fig. 5 to 6, detecting obstacles includes: first acquiring a depth map with a depth camera and converting it into a point cloud; binarizing the point-cloud image with a depth threshold; applying erosion and dilation to the binary image to remove noise; extracting the contours of all objects, discarding contours whose area is too small, and sorting the remaining contours by distance; marking the screened obstacle and non-obstacle contours with rectangular boxes of different colors; judging whether the middle path satisfies the width required for operation and, if not, comparing the sizes of the obstacles on the left and right; if the left obstacle is larger, the inspection robot turns right; once no obstacle is detected, it records the IMU heading angle, turns left by the negative of the recorded heading angle until the original path is detected again, and then continues along the original path; if the right obstacle is larger, the inspection robot turns left; once no obstacle is detected, it records the IMU heading angle, turns right by the negative of the recorded heading angle until the original path is detected again, and then continues along the original path.
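The left/right avoidance rule above can be sketched as a small decision function. The returned labels and the width/area parameters are hypothetical stand-ins for the robot's actual command interface, not the patent's code.

```python
def avoidance_turn(left_area, right_area, path_width, min_width):
    """Decide the avoidance manoeuvre from the left/right obstacle contour
    sizes: keep going if the middle path is wide enough, otherwise turn
    away from the larger obstacle (the later counter-turn by the negated
    recorded IMU heading is handled elsewhere)."""
    if path_width >= min_width:
        return "continue"          # middle path satisfies the width requirement
    # turn right past a larger left obstacle, left past a larger right one
    return "turn_right" if left_area > right_area else "turn_left"
```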
In step S2, the obstacle information obtained by the depth camera, the path-marker and QR-code information obtained by the CCD camera, and the navigation parameters obtained after image processing are simultaneously transmitted, together with the drive control parameters, to the PC host computer, and the information the host sends back is received.
It should be noted that whenever the inspection robot is moving, both the encoder and the IMU module feed back real-time data. If no QR code is detected, a suitable navigation parameter is selected from the fuzzy-rule table according to the lateral deviation computed after image processing and the average of the image-derived angular deviation and the IMU heading angle; if a QR code is detected, the selection uses the lateral deviation computed after image processing and the average of the image-derived angular deviation, the IMU heading angle, and the θ in the QR code.
the above-mentioned embodiments are preferred embodiments of the present invention, and the present invention is not limited thereto, and any other modifications or equivalent substitutions that do not depart from the technical spirit of the present invention are included in the scope of the present invention.

Claims (6)

1. A navigation method of an inspection robot for a livestock and poultry farm, characterized by comprising the following steps:
S1, determining a starting point and an end point on the global map of the livestock and poultry farm, and automatically planning an optimal path;
S2, the inspection robot moves through the livestock and poultry farm along the optimal path; while moving, it captures in real time images of the path marker and QR codes laid on the ground, finds the central navigation path with a centerline method, and recognizes and decodes the QR codes; it acquires drive control parameters, derives navigation parameters from the path information and the drive control parameters, and corrects its motion deviation in real time according to those navigation parameters; it simultaneously checks for obstacles while moving and, if one is found, avoids it, until the inspection robot reaches the end point; wherein the drive control parameters include a lateral deviation and an average angular deviation.
2. The navigation method of the inspection robot for the livestock and poultry farm according to claim 1, wherein the step S2 comprises:
capturing images of the path marker and QR codes laid on the ground, and a depth image of obstacles ahead of the robot;
acquiring a driving control parameter of the inspection robot in real time;
and processing the path-marker image, the QR-code image, and the obstacle depth image, then fusing the processed image information with the drive control parameters to obtain the navigation parameters.
3. The navigation method for the inspection robot of the livestock and poultry farm according to claim 1, wherein the step S2 further comprises:
converting the captured image of the path marker and QR code from the RGB color space to the HSV color space, and resizing it to 640x480;
splitting the converted image into its H, S, and V channels and traversing all pixels: when a pixel satisfies H_min <= H <= H_max, S_min <= S <= S_max, and V_min <= V <= V_max, the corresponding output pixel is set to 255, and otherwise to 0, where H_min, H_max, S_min, S_max, V_min, and V_max are the thresholds for path segmentation;
after the path is segmented, scanning rows 0 through 479 from top to bottom and, within each row, scanning outward from the middle column 319 toward both sides: from column 319 down to column 9 on the left, and from column 319 up to column 630 on the right; for a given row, on the left side, when the 8 consecutive pixels to the left of some pixel are all 0, that pixel is the left boundary point of the path, denoted left; on the right side, when the 8 consecutive pixels to the right of some pixel are all 0, that pixel is the right boundary point, denoted right; the value (left + right)/2 is the center point of the path, and connecting all center points forms the path centerline;
setting a line threshold and fitting a straight line through the center of the navigation path by Hough line transform;
computing the inspection robot's current lateral deviation and angular deviation from the path centerline;
and formulating a fuzzy rule, then determining the navigation parameters from the lateral deviation and the average angular deviation obtained by fusing the angular deviation, the heading angle fed back by the IMU, and the θ in the QR code.
4. The navigation method of the inspection robot for the livestock and poultry farm according to claim 1, wherein automatically planning the optimal path comprises:
setting the ID number of the starting-point QR code on the global map of the livestock and poultry farm; on the global map, each QR code is represented by a rectangular box, the number inside the box is the position ID of that QR code, the value between boxes is an edge weight, and a line with a double-headed arrow indicates a direction in which the inspection robot can travel; each QR code is associated with its position ID and the absolute pose (X, Y, θ) of its location;
generating the optimal path using Dijkstra's algorithm;
the method for generating the optimal path with Dijkstra's algorithm is as follows: (a) dividing all vertices of the global map of the livestock and poultry farm into two classes, used vertices and unused vertices; initially taking the set starting point as the only used vertex, then finding the unused vertex closest to the starting point;
(b) using the closest vertex obtained in step (a) to update the distances of the unused vertices, so that once the vertex nearest the starting point has been found, the next-nearest vertex can be found; this repeats until the goal vertex is reached.
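Steps (a) and (b) describe Dijkstra's algorithm. A minimal sketch, with QR-code position IDs as vertices and the inter-code values as edge weights (the graph structure and IDs below are illustrative, not from the patent's map):

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on the farm's global map.

    graph maps each QR-code position ID to {neighbor_id: edge_weight}.
    Returns (total_cost, [IDs along the optimal path]).
    """
    dist = {start: 0}
    prev = {}
    visited = set()           # the "used" vertex class from step (a)
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)        # closest unused vertex becomes used
        if u == goal:
            break
        for v, w in graph.get(u, {}).items():
            nd = d + w        # step (b): relax edges out of the new used vertex
            if v not in visited and nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return dist[goal], path[::-1]
```

The double-headed arrows on the map correspond to undirected edges, i.e. each weight appears in both vertices' adjacency dictionaries.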
5. The navigation method of the inspection robot for the livestock and poultry farm according to claim 1, wherein the recognizing and decoding of the QR two-dimensional code comprises:
graying the captured QR code image, and smoothing the grayscale image with median filtering;
binarizing the smoothed image;
using the contour-extraction function findContours to find all contours in the binary image;
judging via the contour hierarchy whether a contour contains two nested sub-contours, thereby screening out the three black positioning-corner (finder pattern) contours in the image;
calculating the center coordinates of the three positioning corners;
finding the point with the largest angle among the center coordinates;
obtaining the affine transformation matrix via the getAffineTransform function;
applying the transformation matrix to the image via the warpAffine function to correct the distorted picture;
and calling the ZBar library together with OpenCV to recognize and decode the corrected QR code information.
6. The navigation method of the inspection robot for the livestock and poultry farm according to claim 1, wherein the detecting of the obstacle comprises:
firstly, acquiring a depth map with the depth camera and converting the depth map into a point cloud image;
binarizing the point cloud image with a depth threshold;
applying erosion and dilation operations to the binary image to remove noise;
extracting the contours of all objects, deleting contours whose area is too small, and reordering the remaining contours by distance;
marking the screened obstacle contours and non-obstacle contours with rectangular boxes of different colors;
determining whether the middle path meets the width required for operation;
if the middle path does not meet the width requirement, comparing the sizes of the obstacles on the left and right sides; if the left obstacle is larger, the inspection robot turns right; once no obstacle is detected, the heading angle of the IMU is recorded, and the robot turns left by the negative of the recorded heading angle until the original path is detected, then continues along the original path;
and if the right obstacle is larger, the inspection robot turns left; once no obstacle is detected, the heading angle of the IMU is recorded, and the robot turns right by the negative of the recorded heading angle until the original path is detected, then continues along the original path.
CN202011038150.3A 2020-09-28 2020-09-28 Navigation method of inspection robot for livestock and poultry farm Pending CN112363495A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011038150.3A CN112363495A (en) 2020-09-28 2020-09-28 Navigation method of inspection robot for livestock and poultry farm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011038150.3A CN112363495A (en) 2020-09-28 2020-09-28 Navigation method of inspection robot for livestock and poultry farm

Publications (1)

Publication Number Publication Date
CN112363495A true CN112363495A (en) 2021-02-12

Family

ID=74508055

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011038150.3A Pending CN112363495A (en) 2020-09-28 2020-09-28 Navigation method of inspection robot for livestock and poultry farm

Country Status (1)

Country Link
CN (1) CN112363495A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113239134A * 2021-05-07 2021-08-10 Henan Muyuan Intelligent Technology Co., Ltd. Pig house navigation map establishing method and device, electronic equipment and storage medium
CN114019977A * 2021-11-03 2022-02-08 Noblelift Intelligent Equipment Co., Ltd. Path control method and device for mobile robot, storage medium and electronic device
CN114019977B * 2021-11-03 2024-06-04 Noblelift Intelligent Equipment Co., Ltd. Path control method and device for mobile robot, storage medium and electronic device
CN117270548A * 2023-11-23 2023-12-22 Anhui Lingyun IoT Technology Co., Ltd. Intelligent inspection robot with route correction function
CN117270548B * 2023-11-23 2024-02-09 Anhui Lingyun IoT Technology Co., Ltd. Intelligent inspection robot with route correction function

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105388899A * 2015-12-17 2016-03-09 Hefei Institutes of Physical Science, Chinese Academy of Sciences An AGV navigation control method based on two-dimension code image tags
CN106054900A * 2016-08-08 2016-10-26 University of Electronic Science and Technology of China Temporary robot obstacle avoidance method based on depth camera
CN107943051A * 2017-12-14 2018-04-20 South China University of Technology Indoor AGV navigation methods and systems based on Quick Response Code guiding with visible light-seeking
CN109460029A * 2018-11-29 2019-03-12 South China Agricultural University Livestock and poultry cultivation place inspection mobile platform and its control method
CN111046776A * 2019-12-06 2020-04-21 Hangzhou Chengtang Technology Co., Ltd. Mobile robot traveling path obstacle detection method based on depth camera

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Zhuang Xiaolin: "Machine-Vision-Based Path Recognition and Obstacle Avoidance Navigation System", China Master's Theses Full-text Database, Agricultural Science and Technology Series *
Cao Yong: "Design and Implementation of a Warehouse AGV Navigation and Positioning System Based on Multi-Sensor Fusion", China Master's Theses Full-text Database, Information Science and Technology Series *
Li Linhui: "Research on Composite Navigation Combining Multi-Camera Visual Guidance and Inertial Measurement for Intelligent AGVs", China Master's Theses Full-text Database, Information Science and Technology Series *
Wang Songtao: "Research on Vision-Based AGV Path Recognition and Tracking Control", China Master's Theses Full-text Database, Information Science and Technology Series *

Similar Documents

Publication Publication Date Title
Kalinov et al. WareVision: CNN barcode detection-based UAV trajectory optimization for autonomous warehouse stocktaking
US20210190512A1 (en) System and method of detecting change in object for updating high-definition map
Duggal et al. Plantation monitoring and yield estimation using autonomous quadcopter for precision agriculture
US20210012124A1 (en) Method of collecting road sign information using mobile mapping system
EP3842751B1 (en) System and method of generating high-definition map based on camera
CN106017477A (en) Visual navigation system of orchard robot
CN112363495A (en) Navigation method of inspection robot for livestock and poultry farm
CN109828267A Obstacle detection and distance measurement method for an intelligent inspection robot based on instance segmentation and a depth camera
CN111198496A (en) Target following robot and following method
Chen et al. Global path planning in mobile robot using omnidirectional camera
CN114998276A (en) Robot dynamic obstacle real-time detection method based on three-dimensional point cloud
CN110705385A (en) Method, device, equipment and medium for detecting angle of obstacle
CN113378701B (en) Ground multi-AGV state monitoring method based on unmanned aerial vehicle
CN113807309A (en) Orchard machine walking route planning method based on deep learning
CN112540382B (en) Laser navigation AGV auxiliary positioning method based on visual identification detection
CN112083732A (en) Robot navigation method and system for detecting visible line laser
CN114445494A (en) Image acquisition and processing method, image acquisition device and robot
CN112330748A (en) Tray identification and positioning method based on binocular depth camera
Shimoda et al. Autonomous motion control of a mobile robot using marker recognition via deep learning in GPS-denied environments
CN117367425B (en) Mobile robot positioning method and system based on multi-camera fusion
CN114782626B (en) Transformer substation scene map building and positioning optimization method based on laser and vision fusion
Wietrzykowski et al. Terrain classification for autonomous navigation in public urban areas
US20230351755A1 (en) Processing images for extracting information about known objects
CN117330090A (en) Mobile device repositioning method, mobile device and storage medium
CN110109460B AGV trolley navigation based on cross-shaped features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210212