CN108534788B - AGV navigation method based on kinect vision - Google Patents

AGV navigation method based on kinect vision

Info

Publication number
CN108534788B
CN108534788B (application CN201810185790.3A)
Authority
CN
China
Prior art keywords
agv
frame image
contour
image
moving range
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810185790.3A
Other languages
Chinese (zh)
Other versions
CN108534788A (en)
Inventor
朱静
全永彬
黄文恺
何海城
叶谱生
韩晓英
姚佳岷
Current Assignee
Guangzhou University
Original Assignee
Guangzhou University
Priority date
Filing date
Publication date
Application filed by Guangzhou University filed Critical Guangzhou University
Priority to CN201810185790.3A priority Critical patent/CN108534788B/en
Publication of CN108534788A publication Critical patent/CN108534788A/en
Application granted granted Critical
Publication of CN108534788B publication Critical patent/CN108534788B/en
Legal status: Active (granted)

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 — Navigation specially adapted for navigation in a road network
    • G01C21/28 — Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 — Map- or contour-matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an AGV navigation method based on Kinect vision. A walking path is preset for each AGV; a kinect instrument placed above the AGV moving range acquires images in real time, and a two-dimensional map is constructed; each AGV is tracked through the AGV contours in each frame image. When a current frame image is obtained, the slope of each AGV contour's deviation is calculated from the current frame image and the previous frame image, determining whether to correct the AGV's current walking track. When a current frame image is acquired, whether to control the traveling of the AGVs is determined by calculating the distance between two adjacent AGVs traveling in the same direction, and the traveling of each AGV is also controlled by judging whether multiple AGVs will arrive at the same position in the AGV moving range at the next moment. The method performs global positioning through Kinect vision and enables reasonable and effective intelligent obstacle avoidance.

Description

AGV navigation method based on kinect vision
Technical Field
The invention relates to an AGV navigation method, in particular to an AGV navigation method based on kinect vision.
Background
Traditional manual and semi-mechanized factory logistics are costly and inefficient, and cannot satisfy the requirements of production automation and intelligence. The AGV (Automated Guided Vehicle), as a novel intelligent logistics device, is highly automated, highly integrated and highly flexible, and can be quickly and organically combined with various AS/RS input/output ports, production lines, assembly lines, conveying lines, platforms, shelves, work stations and the like; different combinations can realize different functions according to different requirements. It can shorten the logistics turnover period to the greatest extent, reduce material turnover losses, realize flexible connection between incoming material and processing, logistics and production, finished products and sales, and maximize the working efficiency of a production system; it is therefore widely applied in industries such as warehousing and manufacturing.
In recent years the e-commerce industry has developed rapidly, and logistics is an important part of it; the efficiency of logistics sorting greatly influences the industry's development speed. Traditional manual operation will gradually be replaced, since it cannot keep pace with the industry's growth; replacing manual labor with robots to improve efficiency and accuracy and to reduce cost is the inevitable trend of development. An AGV logistics sorting system that loads and carries express parcels with a large number of mobile robots can greatly improve working efficiency.
Existing automated logistics sorting in China relies mainly on large-scale sorting equipment. Although such equipment achieves high sorting efficiency, its scale dictates a large work place, which greatly limits the application range of logistics sorting. The development of automated logistics sorting in China is thus still at the large-scale stage: it is mainly suitable for large warehouses and for the automated sorting of large packages. Yet the application range of small-package and miniaturized logistics sorting is becoming wider and wider.
Only a small portion of current logistics sorting in China is automated; for example, some large enterprises or large logistics companies operate large automated sorting warehouses. Only in these large automated logistics warehouses is specialized automated sorting equipment available for the automated sorting of logistics packages.
In most cases, logistics sorting in China is still finished manually. Generally, after goods to be sorted arrive at a warehouse, logistics staff scan the bar codes on the express packages to record the logistics information into a system, then classify the packages into stacks according to that information; when the number of goods to be sorted is large, they are conveyed to the corresponding warehouse area in one batch. In such cases, however, goods are often stacked in disorder or damaged because of worker misoperation or inattention, which greatly increases the damage rate and error rate of sorted goods.
Traditional AGV navigation mainly adopts electromagnetic guidance and optical guidance. These navigation modes require a path to be laid on the ground, and motion control is achieved from the deviation between the central axis of the AGV body and the path; flexibility is poor, and the path is difficult to change or extend.
The positioning function of a mobile robot is the most important function among the many aspects of navigation and the most basic link in completing a robot navigation task; the accuracy and reliability of positioning directly determine whether the mobile robot can correctly complete the navigation function. There are generally two ways of AGV positioning, relative positioning and absolute positioning. In relative positioning, once the initial pose of the AGV is determined, the change of pose between the current moment and the previous moment is calculated using internal sensors such as a photoelectric encoder and an accelerometer, so as to update the relative pose between the current state and the initial state; this is the most widely used way of AGV positioning. The method has high positioning accuracy over short movements, but because of its incremental calculation, the positioning error accumulates continuously as movement distance and time increase, and an additional sensor must be used to eliminate the error. In contrast, absolute positioning directly calculates the absolute position of the AGV using an external sensor, so there is no accumulated error, and the "kidnapping" problem of the AGV under non-autonomous movement can be solved.
Disclosure of Invention
The invention aims to overcome the defects in the prior art, and provides an AGV navigation method based on Kinect vision.
The purpose of the invention is realized by the following technical scheme: an AGV navigation method based on kinect vision comprises the following steps:
step S1, presetting a walking path for each AGV in the AGV moving range;
step S2, acquiring an image of the AGV moving range in real time through a kinect instrument placed above the AGV moving range;
step S3, constructing a two-dimensional map corresponding to the AGV moving range according to the current frame image of the AGV moving range acquired by the kinect instrument;
meanwhile, for the current frame image of the AGV moving range acquired by the kinect instrument, graying and binarization are performed on the current frame image in sequence, object contours are extracted from the grayed and binarized image, the object contours are filtered to obtain the AGV contours, and the center-of-mass coordinates of each AGV contour are acquired according to the two-dimensional map of the AGV moving range constructed under the frame image;
step S4, tracking each AGV contour according to the mass center coordinates of each AGV contour extracted from the current frame image and the previous frame image;
step S5, aiming at each AGV contour in the current frame image, acquiring the mass center coordinate of the AGV contour according to the two-dimensional map constructed under the previous frame image, then calculating the deviation slope of the AGV contour, judging whether the slope is greater than a certain value, and if so, controlling the AGV to correct the current walking track;
meanwhile, the center-of-mass coordinates of all AGV contours in the current frame image are obtained according to the two-dimensional map constructed under the current frame image; whether multiple AGVs will arrive at any position in the AGV moving range at the next moment is judged according to the center-of-mass coordinates of the AGV contours and the walking paths of the AGVs, and if so, the AGVs are controlled to pass through that position in sequence; meanwhile, whether the distance between two adjacent AGVs traveling in the same direction is smaller than a certain value e is judged according to the center-of-mass coordinates of each AGV contour and the walking path of each AGV, and if so, one of the AGVs is controlled to stop walking first;
step S6, judging whether the current frame image of the AGV moving range acquired by the kinect instrument is the last frame image, if not, returning to the step S3 when the kinect instrument acquires the next frame image of the AGV moving range; if yes, the process is ended.
Preferably, in step S3, for the current frame image of the AGV moving range acquired by the kinect instrument, a specific process of constructing a two-dimensional map corresponding to the AGV moving range according to the frame image is as follows:
firstly, aiming at a current frame image of an AGV moving range, constructing image coordinates (u, v) according to the frame image, and then converting the image coordinates into actual coordinates (x, y):
x=au;
y=bv;
wherein:
a = dimension_x / width;
b = dimension_y / hight;
where width is the width of the current frame image of the AGV moving range, hight is the height (length) of the current frame image, dimension_x is the actual width of the ground within the AGV moving range, and dimension_y is its actual length.
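As an illustration, the pixel-to-ground conversion above can be sketched in a few lines; the function name and the example frame and floor dimensions below are hypothetical, not taken from the patent.

```python
def make_pixel_to_world(width, hight, dimension_x, dimension_y):
    """Map image coordinates (u, v) to ground coordinates (x, y)
    via x = a*u and y = b*v, with a = dimension_x/width, b = dimension_y/hight."""
    a = dimension_x / width   # ground units per pixel, horizontally
    b = dimension_y / hight   # ground units per pixel, vertically
    def to_world(u, v):
        return (a * u, b * v)
    return to_world

# Hypothetical example: a 640x480 frame covering a 320 cm x 240 cm floor area
to_world = make_pixel_to_world(640, 480, 320.0, 240.0)
print(to_world(320, 240))  # centre of the frame -> (160.0, 120.0)
```

With this mapping, a centroid measured in pixels on any frame is converted directly into a position on the two-dimensional map of the floor.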
Preferably, in step S3, for the current frame image of the AGV moving range acquired by the kinect instrument, the process of performing the graying and binarization processing on the frame image is as follows:
after the kinect instrument acquires a current frame image of the AGV moving range, a new Mat image src is created for the frame image; the image is then grayed through the cvtColor() function and stored in the newly created Mat image src; finally, a suitable threshold is selected through the Threshold() function, and the gray value of each pixel of the grayed image src is set to 0 or 255;
wherein the formula of the cvtColor () function is as follows:
Gray=0.299R+0.587G+0.114B;
wherein Gray is the Gray value of the image after graying.
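The graying formula and the subsequent thresholding can be sketched without OpenCV as follows; `to_gray` and `binarize` are illustrative names, and a strict greater-than comparison against the threshold is assumed (OpenCV's binary thresholding behaves the same way).

```python
def to_gray(r, g, b):
    """Gray = 0.299R + 0.587G + 0.114B, the formula used by cvtColor."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def binarize(gray_pixels, threshold):
    """Set every gray value above the threshold to 255 and the rest to 0,
    as Threshold() does in binary mode."""
    return [255 if p > threshold else 0 for p in gray_pixels]

# A bright pixel and a dark pixel after graying and thresholding
pixels = [to_gray(250, 240, 245), to_gray(20, 25, 30)]
print(binarize(pixels, 127))  # -> [255, 0]
```

After this step the AGVs appear as solid white blobs on a black background, which is what makes the contour extraction of the next step straightforward.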
Preferably, in step S3, after extracting the object contour from the image after the graying and binarization processing, the process of filtering the object contour to obtain the AGV contour is as follows:
for each object contour extracted from the image, judging whether the object contour meets the following conditions: the length is greater than a first threshold value g, and the width is greater than a second threshold value h;
if yes, determining that the object contour is an AGV contour in the image;
if not, judging that the object contour is not the AGV contour in the image, and removing the object contour.
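A minimal sketch of this size filter, assuming each extracted contour is represented by its bounding-box `length` and `width` (a hypothetical representation chosen for illustration):

```python
def filter_agv_contours(contours, g, h):
    """Keep only object contours whose length exceeds g and whose width
    exceeds h; smaller contours are treated as noise or clutter."""
    return [c for c in contours if c["length"] > g and c["width"] > h]

contours = [
    {"length": 40, "width": 25},  # plausible AGV-sized contour
    {"length": 6,  "width": 4},   # small noise blob, removed
]
print(filter_agv_contours(contours, g=20, h=15))  # keeps only the first
```

As the patent notes later, g and h would be set from the length and width of the actual AGV.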
Preferably, in step S4, the process of tracking each AGV contour according to the coordinates of the center of mass of each AGV contour extracted from the current frame image and the previous frame image is specifically as follows:
firstly, presetting the distance traveled by each AGV between two frames of images as S;
when the kinect instrument acquires a current frame image of the AGV moving range, then for each AGV contour in the current frame image, the AGV contour in the previous frame image whose distance to it is smaller than S is regarded as the same AGV; if no AGV contour in the previous frame image is within distance S of it, the contour is taken as a new AGV appearing in the AGV moving range;
wherein the distance between the AGV outline in the previous frame image and the AGV outline in the current frame image is s:
s = √((x′ − x″)² + (y′ − y″)²);
wherein (x ", y") is the coordinate of the center of mass of an AGV contour in the two-dimensional map constructed under the previous image, and (x ', y') is the coordinate of the center of mass of the AGV contour in the two-dimensional map constructed under the current image.
Further, S is 6 cm.
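The tracking rule above, associating each current contour with the nearest previous centroid closer than S and otherwise declaring a new AGV, might be sketched as follows; the dictionary representation of centroids is an assumption for illustration:

```python
import math

def match_contours(prev, curr, S):
    """prev, curr: {agv_id: (x, y)} centroid maps for consecutive frames.
    Each current contour inherits the id of the closest previous centroid
    within distance S; contours with no such neighbour are new AGVs."""
    matches, new_agvs = {}, []
    for cid, (xc, yc) in curr.items():
        best = None
        for pid, (xp, yp) in prev.items():
            d = math.hypot(xc - xp, yc - yp)
            if d < S and (best is None or d < best[1]):
                best = (pid, d)
        if best is not None:
            matches[cid] = best[0]
        else:
            new_agvs.append(cid)
    return matches, new_agvs

prev = {"agv1": (0.0, 0.0)}
curr = {"c1": (3.0, 4.0), "c2": (20.0, 20.0)}
print(match_contours(prev, curr, S=6.0))  # c1 matches agv1 (distance 5); c2 is new
```

The choice S = 6 cm implicitly bounds the AGV speed relative to the frame rate: an AGV must move less than S between consecutive frames for the association to hold.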
Preferably, in step S5, the slope k of the deviation of the AGV profile is calculated as:
k = (y2 − y1) / (x2 − x1);
where (x2, y2) is the coordinates of the center of mass of an AGV profile in the two-dimensional map constructed under the current frame image, and (x1, y1) is the coordinates of the center of mass of the AGV profile in the two-dimensional map constructed under the previous frame image.
Furthermore, when the slope k of the deviation of the AGV contour is judged to be greater than the predetermined value 0.17, the AGV is controlled to automatically correct its traveling track.
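A sketch of this correction test, assuming (as a slope threshold implies) that nominal travel is along the x axis of the two-dimensional map; the function name and the tie-breaking choice for a purely lateral move are illustrative:

```python
def needs_correction(p_prev, p_curr, k_max=0.17):
    """Slope of the centroid displacement between two frames.
    Assumes nominal travel along the x axis of the two-dimensional map;
    a purely lateral move (x unchanged) is treated as maximal deviation."""
    (x1, y1), (x2, y2) = p_prev, p_curr
    if x2 == x1:
        return True
    k = (y2 - y1) / (x2 - x1)
    return abs(k) > k_max

print(needs_correction((0.0, 0.0), (6.0, 0.5)))  # k ~ 0.083 -> False
print(needs_correction((0.0, 0.0), (6.0, 2.0)))  # k ~ 0.333 -> True
```

With the 6 cm inter-frame distance of step S4, the 0.17 threshold corresponds to a lateral drift of roughly 1 cm per frame before a correction is triggered.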
Preferably, the predetermined value e is 10 cm.
Preferably, in step S1, the upper computer presets the travel path of each AGV within the AGV moving range, where each travel path comprises a starting point, corner points and a target point, and each AGV returns to the starting point after reaching the target point from the starting point; the starting points of all AGVs in the same AGV moving range are the same. When the AGV at the starting point has received the travel path information preset by the upper computer and the goods have been successfully received, the AGV moves to the target point along the preset travel path; after reaching the target point and unloading the goods, the AGV returns from the target point to the starting point along the preset travel path.
Compared with the prior art, the invention has the following advantages and effects:
(1) In the AGV navigation method based on kinect vision of the invention, a walking path is first preset for each AGV in the AGV moving range. When the AGVs in the moving range start sorting work, a kinect instrument placed above the moving range acquires images of it in real time, and a two-dimensional map of the moving range is constructed in real time from the acquired images; each AGV is tracked through the AGV contours in each frame image acquired by the kinect instrument. When the kinect instrument acquires a current frame image, the slope of each AGV contour's deviation is calculated from the current and previous frame images, and that slope determines whether the AGV's current walking track is corrected. In addition, when the current frame image is acquired, whether the AGVs' traveling must be controlled is determined by calculating the distance between two adjacent AGVs traveling in the same direction, and by judging whether multiple AGVs will reach the same position in the AGV moving range at the next moment. The method thus performs global positioning through Kinect vision and knows the surrounding environment of each AGV; compared with the traditional approach of obstacle avoidance by sensors (such as infrared sensors) mounted on the AGV, it enables reasonable and effective intelligent obstacle avoidance.
(2) In the AGV navigation method based on kinect vision of the invention, for each frame image of the AGV moving range acquired by the kinect instrument, graying and binarization are first performed in sequence, and object contours are then extracted from the grayed and binarized image. Extracting object contours from the grayed and binarized image effectively and clearly distinguishes the AGVs from their surroundings, which facilitates the subsequent feature extraction of the AGV contours.
Drawings
FIG. 1 is a flowchart of an AGV navigation method based on kinect vision according to the present invention.
FIG. 2 is a flow chart of the travel of each AGV in the method of the present invention.
FIG. 3 is a schematic diagram of the method of the present invention with the kinect instrument positioned above the reach of the AGV.
FIG. 4 is a schematic diagram of the travel path of each AGV in the two-dimensional map according to the method of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
The embodiment of the invention discloses an AGV navigation method based on kinect vision, which comprises the following steps as shown in FIG. 1:
step S1, a walking path is preset for each AGV in the AGV moving range; in this embodiment, the upper computer presets the traveling path of each AGV, where the traveling path comprises a starting point, corner points and a target point; as shown in fig. 2, each AGV travels from the starting point to its target point and then returns to the starting point, and the starting points of all AGVs within the same AGV moving range are the same. When the AGV at the starting point has received the preset walking path information sent by the upper computer and the goods have been successfully received, the AGV moves to the target point along the preset walking path; after reaching the target point and unloading the goods, the AGV returns from the target point to the starting point along the preset walking path. The AGV receives the walking path information and the goods information from the upper computer through a WiFi communication module.
Step S2, when the AGVs in the AGV moving range start sorting work, an image of the AGV moving range is acquired in real time through a kinect instrument placed above the AGV moving range and transmitted to the upper computer; FIG. 3 is a schematic diagram of the kinect instrument placed above the AGV moving range. In this embodiment, the placement height of the kinect is related to its visual coverage, which preferably just covers the AGV moving range.
Step S3, constructing a two-dimensional map corresponding to the AGV moving range according to the current frame image of the AGV moving range acquired by the kinect instrument; in this embodiment, after receiving the current frame image of the AGV range of motion sent by the kinect instrument, the upper computer constructs a two-dimensional map of the AGV range of motion according to the frame image, specifically: firstly, aiming at a current frame RGB image of an AGV moving range, constructing image coordinates (u, v) according to the frame RGB image, and then converting the image coordinates into actual coordinates (x, y):
x=au;
y=bv;
wherein:
a = dimension_x / width;
b = dimension_y / hight;
where width is the width of the current frame image of the AGV moving range, hight is the height (length) of the current frame image, dimension_x is the actual width of the ground within the AGV moving range, and dimension_y is its actual length. Fig. 4 is a schematic diagram of the preset traveling paths of 6 AGVs in the two-dimensional map; the 6 AGVs are the first to sixth AGVs, and each arrow indicates the traveling direction of an AGV. The first AGV travels from the starting point 1 to the first target point 2-1 and then returns from the first target point 2-1 to the starting point 1; the second AGV travels from the starting point 1 to the second target point 2-2 and then returns from the second target point 2-2 to the starting point 1; the third AGV travels to the third target point 2-3 and then returns from the third target point 2-3 to the starting point 1; the fourth AGV travels to the fourth target point 2-4 and then returns from the fourth target point 2-4 to the starting point 1; the fifth AGV travels to the fifth target point 2-5 and then returns from the fifth target point 2-5 to the starting point 1; the sixth AGV travels to the sixth target point 2-6 and then returns from the sixth target point 2-6 to the starting point 1.
Meanwhile, for a current frame image of the AGV moving range acquired by a kinect instrument, sequentially processing graying and binaryzation on the current frame image, extracting an object contour from the image after the graying and binaryzation processing, filtering the object contour to obtain the AGV contour, and acquiring the mass center coordinate of each AGV contour according to a two-dimensional map of the AGV moving range constructed under the frame image;
in this embodiment, for a current frame image of the AGV activity range acquired by the kinect instrument, the graying and binarization processing of the frame image is as follows:
after the kinect instrument acquires a current frame RGB image of the AGV moving range, a new Mat image src is created for the frame RGB image; the RGB image is then grayed through the cvtColor() function and stored in the newly created Mat image src; finally, a suitable threshold is selected through the Threshold() function, and the gray value of each pixel of the grayed image src is set to 0 or 255; wherein the formula of the cvtColor() function is as follows:
Gray=0.299R+0.587G+0.114B;
wherein Gray is a Gray value of the RGB image after graying.
In this embodiment, in the above steps, after extracting the object contour from the image after the graying and binarization processing, the process of filtering the object contour to obtain the AGV contour is as follows: for each object contour extracted from the image, judging whether the object contour meets the following conditions: the length is greater than a first threshold value g, and the width is greater than a second threshold value h; if yes, determining that the object contour is an AGV contour in the image; if not, judging that the object contour is not the AGV contour in the image, and removing the object contour. The values of the first threshold value g and the second threshold value h are set according to the length and the width of the actual AGV.
Step S4, tracking each AGV contour according to the mass center coordinates of each AGV contour extracted from the current frame image and the previous frame image; the method specifically comprises the following steps:
firstly, the distance traveled by each AGV between two frame images is preset as S; in this example, S is 6 cm. When the kinect instrument acquires a current frame image of the AGV moving range, then for each AGV contour in the current frame image, the AGV contour in the previous frame image whose distance to it is smaller than S is regarded as the same AGV; if the current frame image is the first frame image of the AGV moving range acquired by the kinect instrument, each AGV contour is directly marked;
wherein the distance between the AGV outline in the previous frame image and the AGV outline in the current frame image is s:
s = √((x′ − x″)² + (y′ − y″)²);
wherein (x ", y") is the coordinate of the center of mass of an AGV contour in the two-dimensional map constructed under the previous image, and (x ', y') is the coordinate of the center of mass of the AGV contour in the two-dimensional map constructed under the current image.
In this step, if no AGV contour in the previous frame image is within distance S of an AGV contour in the current frame image, that contour is determined to be a new AGV appearing in the AGV moving range.
Step S5, aiming at each AGV contour in the current frame image, acquiring the mass center coordinate of the AGV contour according to the two-dimensional map constructed under the previous frame image, then calculating the deviation slope of the AGV contour, judging whether the deviation slope of the AGV contour is larger than a certain value, if so, controlling the AGV to correct the current walking track, and if not, not correcting the walking track of the AGV; in this embodiment, during the traveling process of each AGV in the AGV moving range as shown in fig. 2, the traveling track needs to be corrected according to the above method.
In this embodiment, the slope k of the deviation of the AGV profile is:
k = (y2 − y1) / (x2 − x1);
where (x2, y2) are the center-of-mass coordinates of an AGV contour in the two-dimensional map constructed under the current frame image, and (x1, y1) are the center-of-mass coordinates of the same AGV contour in the two-dimensional map constructed under the previous frame image. In this embodiment, it is judged whether the slope of the deviation of the AGV contour is greater than the predetermined value 0.17; if so, the AGV is controlled to correct its current travel track.
Meanwhile, the mass center coordinates of all AGV contours in the current frame image are obtained according to a two-dimensional map constructed under the current frame image, whether multiple AGVs arrive at each position in the AGV moving range at the next moment or not is judged according to the mass center coordinates of all AGV contours and the walking paths of all the AGVs, and if yes, all the AGVs are sequentially controlled to pass through the positions; meanwhile, judging whether the distance between two adjacent AGVs in the same walking direction is smaller than a certain value e or not according to the mass center coordinate of each AGV profile and the walking path of each AGV, if so, controlling one of the AGVs to stop walking first until the distance between the two AGVs is larger than the certain value e; the constant value e may be set to 10cm in the embodiment. In this embodiment, each AGV in the AGV activity range needs to be executed according to the above method in the process of traveling as shown in fig. 2, so as to realize cooperation and obstacle avoidance of multiple AGVs.
Step S6, judging whether the current frame image of the AGV moving range acquired by the kinect instrument is the last frame image, if not, returning to the step S3 when the kinect instrument acquires the next frame image of the AGV moving range; if yes, the process is ended.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (10)

1. An AGV navigation method based on kinect vision is characterized by comprising the following steps:
step S1, presetting a walking path for each AGV in the AGV moving range;
step S2, acquiring an image of the AGV moving range in real time through a kinect instrument placed above the AGV moving range;
step S3, constructing a two-dimensional map corresponding to the AGV moving range according to the current frame image of the AGV moving range acquired by the kinect instrument;
meanwhile, for the current frame image of the AGV moving range acquired by the kinect instrument, graying and binarization are performed on the current frame image in sequence, object contours are extracted from the grayed and binarized image, the object contours are filtered to obtain the AGV contours, and the center-of-mass coordinates of each AGV contour are acquired according to the two-dimensional map of the AGV moving range constructed under the frame image;
step S4, tracking each AGV contour according to the mass center coordinates of each AGV contour extracted from the current frame image and the previous frame image;
step S5, aiming at each AGV contour in the current frame image, acquiring the mass center coordinate of the AGV contour according to the two-dimensional map constructed under the previous frame image, then calculating the deviation slope of the AGV contour, judging whether the slope is greater than a certain value, and if so, controlling the AGV to correct the current walking track;
meanwhile, the center-of-mass coordinates of all AGV contours in the current frame image are obtained according to the two-dimensional map constructed under the current frame image; whether multiple AGVs will arrive at any position in the AGV moving range at the next moment is judged according to the center-of-mass coordinates of the AGV contours and the walking paths of the AGVs, and if so, the AGVs are controlled to pass through that position in sequence; meanwhile, whether the distance between two adjacent AGVs traveling in the same direction is smaller than a certain value e is judged according to the center-of-mass coordinates of each AGV contour and the walking path of each AGV, and if so, one of the AGVs is controlled to stop walking first;
step S6, judging whether the current frame image of the AGV moving range acquired by the kinect instrument is the last frame image, if not, returning to the step S3 when the kinect instrument acquires the next frame image of the AGV moving range; if yes, the process is ended.
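The spacing rule at the end of step S5 reduces to a centroid-distance check against e. A minimal sketch follows; which AGV halts, and how pairs are chosen, is left as policy and is not taken from the claim:

```python
import math

def agvs_too_close(c1, c2, e):
    """True when the centroid distance between two AGVs travelling in the
    same direction drops below the safety spacing e (10 cm per claim 9),
    in which case one of them is stopped first.
    """
    return math.hypot(c1[0] - c2[0], c1[1] - c2[1]) < e
```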
2. The method according to claim 1, wherein in step S3, for the current frame image of the AGV moving range acquired by the kinect instrument, the specific process of constructing the two-dimensional map corresponding to the AGV moving range according to the frame image is as follows:
firstly, for the current frame image of the AGV moving range, constructing image coordinates (u, v) from the frame image, and then converting the image coordinates into actual coordinates (x, y):
x=au;
y=bv;
wherein:
a = dimension_x / width;
b = dimension_y / height;
wherein width is the width of the current frame image of the AGV moving range, height is the length of the current frame image of the AGV moving range, dimension_x is the actual width of the ground of the AGV moving range, and dimension_y is the actual length of the ground of the AGV moving range.
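As an illustration, the pixel-to-floor conversion of claim 2 can be written directly from the formulas above (a minimal sketch; the function name and the metre units are assumptions, not from the patent):

```python
def image_to_world(u, v, width, height, dimension_x, dimension_y):
    """Convert image coordinates (u, v) to actual floor coordinates (x, y).

    a = dimension_x / width and b = dimension_y / height are the per-pixel
    scale factors from claim 2; then x = a*u and y = b*v.
    """
    a = dimension_x / width
    b = dimension_y / height
    return a * u, b * v
```

For example, with a 640x480 frame covering an 8 m x 6 m floor, the image centre (320, 240) maps to the floor centre (4.0, 3.0).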
3. The method according to claim 1, wherein in step S3, for the current frame image of the AGV moving range acquired by the kinect instrument, the graying and binarization processing of the current frame image is as follows:
after the current frame image of the AGV moving range is acquired by the kinect instrument, a new Mat image src is created for the frame image, and the image is then grayed through the cvtColor() function and stored in the new Mat image src; finally, a suitable threshold is selected through the threshold() function, and the gray value of each pixel of the grayed image src is set to 0 or 255;
wherein the formula of the cvtColor () function is as follows:
Gray=0.299R+0.587G+0.114B;
wherein Gray is the gray value of the pixel after graying, and R, G, B are its red, green and blue components.
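Claim 3 relies on OpenCV's cvtColor() and threshold(); the same weighting can be reproduced without the library in a few lines of pure Python (an illustrative sketch; the default threshold value of 128 is chosen arbitrarily):

```python
def gray_and_binarize(pixels, thresh=128):
    """Gray each (R, G, B) pixel with Gray = 0.299R + 0.587G + 0.114B,
    then binarize: gray values above thresh become 255, others 0.

    pixels is a list of rows, each row a list of (R, G, B) tuples.
    """
    return [
        [255 if 0.299 * r + 0.587 * g + 0.114 * b > thresh else 0
         for (r, g, b) in row]
        for row in pixels
    ]
```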
4. The method for navigating the AGV based on kinect vision according to claim 1, wherein in step S3, after extracting the object contour from the grayed and binarized image, the process of filtering the object contour to obtain the AGV contour is as follows:
for each object contour extracted from the image, judging whether the object contour meets the following conditions: the length is greater than a first threshold value g, and the width is greater than a second threshold value h;
if yes, determining that the object contour is an AGV contour in the image;
if not, judging that the object contour is not the AGV contour in the image, and removing the object contour.
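The filtering rule of claim 4 amounts to a size test on each contour's bounding box. A hedged sketch, where contours are represented by precomputed (length, width) pairs (e.g. from cv2.boundingRect) rather than raw point lists:

```python
def filter_agv_contours(contours, g, h):
    """Keep only contours whose bounding-box length exceeds the first
    threshold g and whose width exceeds the second threshold h (claim 4);
    everything else is judged not to be an AGV and removed.

    contours: iterable of (length, width) bounding-box sizes.
    """
    return [(length, width) for (length, width) in contours
            if length > g and width > h]
```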
5. The method according to claim 1, wherein in step S4, the process of tracking each AGV contour according to the coordinates of the center of mass of each AGV contour extracted from the current frame image and the previous frame image is as follows:
firstly, presetting S as the maximum distance an AGV can travel between two frames of images;
when the kinect instrument acquires the current frame image of the AGV moving range, for each AGV contour in the current frame image, if there is an AGV contour in the previous frame image whose distance to it is smaller than S, the two are regarded as the same AGV contour; if no AGV contour in the previous frame image is within distance S of it, the contour is regarded as a new AGV appearing in the AGV moving range;
wherein the distance s between an AGV contour in the previous frame image and an AGV contour in the current frame image is:
s = sqrt((x'' - x')^2 + (y'' - y')^2);
wherein (x'', y'') are the center-of-mass coordinates of the AGV contour in the two-dimensional map constructed under the previous frame image, and (x', y') are the center-of-mass coordinates of the AGV contour in the two-dimensional map constructed under the current frame image.
6. The method of claim 5, wherein S is 6 cm.
7. The method according to claim 1, wherein in step S5, the deviation slope k of the AGV contour is calculated as:
k = (y2 - y1) / (x2 - x1);
where (x2, y2) are the center-of-mass coordinates of an AGV contour in the two-dimensional map constructed under the current frame image, and (x1, y1) are the center-of-mass coordinates of the AGV contour in the two-dimensional map constructed under the previous frame image.
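The deviation slope of claim 7 is the slope of the line through the two centroids; a minimal sketch (the convention of returning infinity for a purely vertical move is an assumption):

```python
def deviation_slope(p_prev, p_curr):
    """Slope k = (y2 - y1) / (x2 - x1) between the centroid in the previous
    frame (x1, y1) and in the current frame (x2, y2); claim 8 triggers a
    track correction when k exceeds 0.17.
    """
    (x1, y1), (x2, y2) = p_prev, p_curr
    if x2 == x1:
        return float('inf')  # purely vertical move: slope is undefined
    return (y2 - y1) / (x2 - x1)
```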
8. The method of claim 7, wherein when the deviation slope k of the AGV contour is greater than the certain value, which is 0.17, the AGV is controlled to automatically correct its travel track.
9. The method of claim 1, wherein the certain value e is 10 cm.
10. The method according to claim 1, wherein in step S1, the upper computer presets a walking path for each AGV within the AGV moving range; the walking path of each AGV comprises a starting point, corner points and a target point, and each AGV returns to the starting point after reaching the target point from the starting point, wherein all AGVs in the same AGV moving range share the same starting point; when an AGV at the starting point has received the walking path information preset by the upper computer and has successfully received the goods, the AGV moves to the target point along the preset walking path, and after reaching the target point and unloading the goods, the AGV returns from the target point to the starting point along the preset walking path.
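Claim 10's out-and-back path (start point, corner points, target point, then back to the start) can be represented as a simple waypoint list (an illustrative sketch; the list representation is an assumption, not from the patent):

```python
def round_trip(path):
    """Given path = [start, corner1, ..., target], return the full sequence
    of waypoints for one delivery: out to the target, then back to the start
    through the same corners in reverse order.
    """
    return path + path[-2::-1]
```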
CN201810185790.3A 2018-03-07 2018-03-07 AGV navigation method based on kinect vision Active CN108534788B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810185790.3A CN108534788B (en) 2018-03-07 2018-03-07 AGV navigation method based on kinect vision


Publications (2)

Publication Number Publication Date
CN108534788A CN108534788A (en) 2018-09-14
CN108534788B true CN108534788B (en) 2020-06-05

Family

ID=63486459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810185790.3A Active CN108534788B (en) 2018-03-07 2018-03-07 AGV navigation method based on kinect vision

Country Status (1)

Country Link
CN (1) CN108534788B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111007855A (en) * 2019-12-19 2020-04-14 厦门理工学院 AGV navigation control method
CN112149555B (en) * 2020-08-26 2023-06-20 华南理工大学 Global vision-based multi-warehouse AGV tracking method
CN114202687B (en) * 2021-08-12 2024-07-26 昆明理工大学 Automatic tobacco plant extraction and counting method and system based on unmanned aerial vehicle images


Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
WO2010108047A2 (en) * 2009-03-18 2010-09-23 Intouch Graphics, Inc. Systems, methods, and software for providing orientation and wayfinding data to blind travelers

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN105843223A (en) * 2016-03-23 2016-08-10 东南大学 Mobile robot three-dimensional mapping and obstacle avoidance method based on space bag of words model
CN106774310A (en) * 2016-12-01 2017-05-31 中科金睛视觉科技(北京)有限公司 A kind of robot navigation method

Non-Patent Citations (2)

Title
Obstacle Avoidance for AGV with Kinect Sensor; Jianping Han; International Journal of Smart Home; 2016-10-31; Vol. 10, No. 8; pp. 65-74 *
Exploration of SLAM and Navigation Methods Based on Vision and Distance Sensors; Hua Gangchen; Journal of Shenzhen Institute of Information Technology; 2015-03-15; Vol. 13, No. 1; pp. 83-88 *


Similar Documents

Publication Publication Date Title
US10810544B2 (en) Distributed autonomous robot systems and methods
US20230381954A1 (en) Robotic system with enhanced scanning mechanism
CN111496770B (en) Intelligent carrying mechanical arm system based on 3D vision and deep learning and use method
US11488323B2 (en) Robotic system with dynamic packing mechanism
US11797926B2 (en) Robotic system with automated object detection mechanism and methods of operating the same
CN108534788B (en) AGV navigation method based on kinect vision
CN107065861A (en) Robot collection intelligence is carried, is loaded and unloaded on integral method and apparatus
WO2023005384A1 (en) Repositioning method and device for mobile equipment
WO2022121460A1 (en) Agv intelligent forklift, and method and apparatus for detecting platform state of floor stack inventory areas
CN108622590B (en) intelligent transportation robot that commodity circulation warehouse was used
CN114089735A (en) Method and device for adjusting shelf pose of movable robot
CN114537940A (en) Shuttle vehicle for warehousing system, warehousing system and control method of shuttle vehicle
CN112935703A (en) Mobile robot pose correction method and system for identifying dynamic tray terminal
Beinschob et al. Advances in 3d data acquisition, mapping and localization in modern large-scale warehouses
CN114862301A (en) Tray forklift AGV automatic loading method based on two-dimensional code auxiliary positioning
US11797906B2 (en) State estimation and sensor fusion switching methods for autonomous vehicles
CN101497420B (en) Automatic carrier system for industry and operation method thereof
CN115289966A (en) Goods shelf detecting and positioning system and method based on TOF camera
CN111470244B (en) Control method and control device for robot system
CN114283193A (en) Pallet three-dimensional visual positioning method and system
CN111056197B (en) Automatic container transferring method based on local positioning system
TWI715358B (en) State estimation and sensor fusion methods for autonomous vehicles
US20230236600A1 (en) Operational State Detection for Obstacles in Mobile Robots
CN111056195B (en) Butt joint control method for automatic loading and unloading of containers for unmanned equipment
TW202411139A (en) Container storage method and robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20180914

Assignee: GUANGZHOU DAWEI COMMUNICATION CO.,LTD.

Assignor: Guangzhou University

Contract record no.: X2022980024622

Denomination of invention: An AGV Navigation Method Based on Kinect Vision

Granted publication date: 20200605

License type: Common License

Record date: 20221202

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20180914

Assignee: QIANHAI JUYING (SHENZHEN) PRECISION TECHNOLOGY Co.,Ltd.

Assignor: Guangzhou University

Contract record no.: X2022980024921

Denomination of invention: An AGV Navigation Method Based on Kinect Vision

Granted publication date: 20200605

License type: Common License

Record date: 20221207

Application publication date: 20180914

Assignee: Shenzhen Jinsui Fangyuan Technology Co.,Ltd.

Assignor: Guangzhou University

Contract record no.: X2022980024991

Denomination of invention: An AGV Navigation Method Based on Kinect Vision

Granted publication date: 20200605

License type: Common License

Record date: 20221207

Application publication date: 20180914

Assignee: Shenzhen Zhonghaocheng Technology Development Co.,Ltd.

Assignor: Guangzhou University

Contract record no.: X2022980024911

Denomination of invention: An AGV Navigation Method Based on Kinect Vision

Granted publication date: 20200605

License type: Common License

Record date: 20221207