CN112070759B - Forklift pallet detection and positioning method and system - Google Patents


Info

Publication number
CN112070759B
CN112070759B (application CN202010970716.XA)
Authority
CN
China
Prior art keywords: template, target, tray, detected, image
Prior art date
Legal status
Active
Application number
CN202010970716.XA
Other languages
Chinese (zh)
Other versions
CN112070759A (en)
Inventor
黄泽仕
朱程利
余小欢
陈嵩
白云峰
Current Assignee
Zhejiang Guangpo Intelligent Technology Co ltd
Original Assignee
Zhejiang Guangpo Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Guangpo Intelligent Technology Co ltd filed Critical Zhejiang Guangpo Intelligent Technology Co ltd
Priority to CN202010970716.XA
Publication of CN112070759A
Application granted
Publication of CN112070759B
Legal status: Active

Classifications

    • G — PHYSICS › G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T 7/001 — Industrial image inspection using an image reference approach
    • G06T 7/11 — Region-based segmentation
    • G06T 7/187 — Segmentation or edge detection involving region growing, region merging or connected component labelling
    • G06T 7/66 — Analysis of geometric attributes of image moments or centre of gravity
    • G06V 10/751 — Comparing pixel values or feature values having positional relevance, e.g. template matching
    • G06T 2207/10028 — Range image; depth image; 3D point clouds
    • G06T 2207/30108 — Industrial image inspection

Abstract

The invention discloses a forklift pallet detection and positioning method. The acquired depth image of the current scene is rasterized to generate a corresponding raster image. Region growing is performed on the raster image to obtain candidate frame regions for a number of targets to be detected; the inclination degree of each target to be detected is calculated, and template adaptation is applied to a preset standard tray raster-image template according to that inclination, generating a tray raster-image template corresponding to each target. Pixel matching is then performed between the candidate frame region of each target to be detected and its adapted tray raster-image template; if the matching succeeds, the candidate frame region of the target tray is obtained. Finally, the raster image of the target tray's candidate frame region is converted back into the corresponding depth image, yielding the target tray in the depth image. The invention can accurately detect and position forklift pallets.

Description

Forklift pallet detection and positioning method and system
Technical Field
The invention relates to the technical field of computer vision, in particular to a forklift pallet detection and positioning method and system.
Background
With the development of modern logistics technology, Automated Guided Vehicles (AGVs) play an increasingly important role in intelligent warehousing, and the detection and identification of forklift pallets has become correspondingly important. Goods are stacked on pallets, which are forked and carried by forklifts to designated positions, realizing automated industrial transport.
Forklift pallet identification means that a sensor mounted on the forklift detects and identifies the pallet through machine vision and image processing algorithms; forklift ranging means that, using the identification result, the three-dimensional coordinate and attitude information of the forklift relative to the pallet is computed from the sensor data through a mathematical model. Patent application No. 2017106117652 describes a cargo pallet detection method based on planar contour matching of a point cloud; since that scheme operates at the point-cloud level, its running speed is low. It also adopts a contour matching algorithm, and because contour matching is easily affected by depth-image quality, fluctuating edges are difficult to match and the pallet holes are not accurately positioned.
Disclosure of Invention
Based on the above, the invention aims to provide a forklift pallet detection and positioning method and system, which can accurately detect and position forklift pallets.
In order to achieve the above object, the present invention provides a forklift pallet detecting and positioning method, including:
S1, rasterizing the acquired depth image of the current scene to generate a corresponding raster image;
S2, performing region growing on the raster image to obtain a plurality of candidate frame regions, each candidate frame region corresponding to one target to be detected;
S3, calculating the inclination degree of each target to be detected;
S4, performing template adaptation on a preset standard tray raster-image template according to the inclination degree of each target to be detected, generating a tray raster-image template corresponding to each target to be detected;
S5, performing pixel matching between the candidate frame region of each target to be detected and the corresponding adapted tray raster-image template, and, if the matching succeeds, obtaining the candidate frame region of the target tray;
S6, converting the raster image of the candidate frame region of the target tray back into the corresponding depth image to obtain the target tray in the depth image.
Preferably, step S1 further includes cropping the depth image, specifically including:
converting the depth image into a point cloud using a depth-image-to-point-cloud conversion;
and, based on a preset detection range, deleting the pixels of the depth image that correspond to points outside the detection range, obtaining the cropped depth image.
Preferably, step S1 further includes:
setting the length and width, in the depth image, of a single grid cell of the raster image, wherein the depth-image length divided by the single-grid length gives the number of horizontal pixels of the raster image, and the depth-image width divided by the single-grid width gives the number of vertical pixels of the raster image;
and taking, for all depth-image pixels that fall inside one grid cell, the average of their pixel values as the depth value of that cell, thereby generating the raster image.
Preferably, step S2 specifically includes:
performing region growing on the raster image according to a region growing algorithm to obtain a plurality of connected regions; calculating the average of the depth values of all pixels in each connected region, and taking this average as the average distance of the connected region;
and calculating the rectangular area occupied by each connected region.
Preferably, step S2 further includes a step of screening the connected regions, specifically including:
setting a length threshold and a width threshold for connected regions;
if both the length and the width of a connected region are within the length threshold and the width threshold, the connected region is a candidate connected region;
and if a candidate connected region is within a preset aspect-ratio threshold, it is a candidate frame region, and the candidate frame region corresponds to a target to be detected.
Preferably, step S3 includes:
acquiring the leftmost and rightmost points of the region of the target to be detected;
if the difference between a point in a preset neighborhood of the leftmost point and the average distance of the region of the target to be detected is smaller than a first distance threshold, adding that point to the neighborhood of the leftmost point, accumulating the values of all pixels in the neighborhood of the leftmost point, and taking the average to obtain the reliable depth value of the leftmost point;
if the difference between a point in the neighborhood of the rightmost point and the average distance of the region of the target to be detected is smaller than the first distance threshold, adding that point to the neighborhood of the rightmost point, accumulating the values of all pixels in the neighborhood of the rightmost point, and taking the average to obtain the reliable depth value of the rightmost point;
and calculating the inclination degree of the target to be detected according to formula (1):
t = (leftz - rightz) / (leftx - rightx) (1);
where t is the inclination degree, leftx is the x-coordinate of the leftmost point in the raster image, leftz is the reliable depth value of the leftmost point, rightx is the x-coordinate of the rightmost point in the raster image, and rightz is the reliable depth value of the rightmost point.
Preferably, step S4 includes:
constructing, based on the tray type, a standard tray raster-image template corresponding to that type, the standard templates including a long-distance template and a short-distance template;
selecting a template according to the average distance of the region of the target to be detected: if the average distance is larger than a second distance threshold, selecting the long-distance template for matching, otherwise selecting the short-distance template;
dividing the selected long-distance or short-distance template down the middle into a left half-template and a right half-template, calculating the adaptive width of the left half-template according to formula (2), and the adaptive width of the right half-template according to formula (3);
wherein l_w is the adaptive width of the left half-template, m_w is the width of the complete template, k is a preset value, and t is the inclination degree of the target to be detected;
r_w = m_w - l_w (3);
wherein r_w is the adaptive width of the right half-template;
scaling the left and right half-templates to their adaptive widths, and splicing the processed halves into a complete template;
and scaling the spliced complete template to obtain the adapted tray raster-image template.
Preferably, step S5 includes:
matching each pixel of the candidate frame region of the current target to be detected against the corresponding pixel of the tray raster-image template, and, whenever they agree, counting that pixel as matched, until all pixels have been compared and the total number of matched pixels is counted;
dividing the total number of matched pixels by the total number of pixels of the candidate frame region of the current target to be detected to obtain the matching score of the current target, and likewise obtaining the matching score of every target to be detected;
if the matching score is smaller than a score threshold, deleting the target to be detected, otherwise retaining it; and calculating the final score of each retained target based on formula (4):
s = s_template - s_distancex - s_distancey (4);
where s is the final score, s_template is the matching score, s_distancex is a weighted value of the deviation of the center of the target to be detected from the raster-image center point in x, and s_distancey is a weighted value of the deviation of the center of the target to be detected from the raster-image center point in y;
and sorting the calculated final scores of the targets to be detected, the target with the highest final score being the detected target tray.
Preferably, the method further comprises:
cutting out a candidate frame area of the target tray in the depth image;
selecting a pixel of the candidate frame region of the target tray and traversing; if the difference between the pixel's depth and the average distance of the candidate frame region exceeds a third distance threshold, deleting the pixel, and so on, until all remaining pixels form the surface region of the target tray;
performing region growth on the surface region of the target tray to obtain a fine candidate frame region of the target tray;
calculating to obtain the fine inclination degree of the target tray in the depth image;
cropping the middle portion of the fine candidate frame region of the target tray, and cropping the middle tray-leg portion from the fine candidate frame region according to the fine inclination degree of the target tray;
and acquiring a candidate frame area of the tray leg, and taking the central position of the area as the central position of the target tray.
In order to achieve the above object, the present invention provides a forklift pallet detecting and positioning system, comprising:
the grid module is used for carrying out rasterization on the acquired depth image in the current scene to generate a corresponding grid image;
the region growing module is used for carrying out region growing on the grid image to obtain a plurality of candidate frame regions, and each candidate frame region corresponds to one target to be detected;
the inclination module is used for calculating the inclination degree of each object to be detected;
the template adaptation module is used for performing template adaptation on a preset standard tray raster-image template according to the inclination degree of each target to be detected, generating a tray raster-image template corresponding to each target to be detected;
the matching module is used for carrying out pixel matching on the candidate frame area of each target to be detected and the corresponding self-adaptive tray raster image template, and if the matching is successful, the candidate frame area of the target tray is obtained;
and the confirmation module is used for converting the grid image of the candidate frame area of the target tray into a corresponding depth image to obtain the target tray in the depth image.
Compared with the prior art, the forklift pallet detection and positioning method and system have the following beneficial effects: the forklift pallet can be accurately identified and detected, and its pallet holes can be accurately positioned; the scheme operates at the depth-image level, so it runs fast and efficiently; and by adopting raster-image matching, the difficulty of matching the forklift pallet is reduced, making it easier to match the pallet against the tray template.
Drawings
Fig. 1 is a flow chart of a forklift pallet detection and positioning method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a long-distance template according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a short-distance template according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of template cutting according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of a template splice according to an embodiment of the invention.
Fig. 6 is a system schematic diagram of a forklift pallet detection and positioning system according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the specific embodiments shown in the drawings, but the present invention is not limited to these embodiments; structural, methodological, or functional modifications made by those skilled in the art based on these embodiments are all included in the scope of the present invention.
In one embodiment of the present invention as shown in fig. 1, the present invention provides a forklift pallet detecting and positioning method, which includes:
S1, rasterizing the acquired depth image of the current scene to generate a corresponding raster image;
S2, performing region growing on the raster image to obtain a plurality of candidate frame regions, each candidate frame region corresponding to one target to be detected;
S3, calculating the inclination degree of each target to be detected;
S4, performing template adaptation on a preset standard tray raster-image template according to the inclination degree of each target to be detected, generating a tray raster-image template corresponding to each target to be detected;
S5, performing pixel matching between the candidate frame region of each target to be detected and the corresponding adapted tray raster-image template, and, if the matching succeeds, obtaining the candidate frame region of the target tray;
S6, converting the raster image of the candidate frame region of the target tray back into the corresponding depth image to obtain the target tray in the depth image.
In step S1, parameter calibration is performed for the TOF camera, including calibration of the depth camera's focal length, optical center, and other parameters, and a depth image of the current scene is acquired. The depth image is then filtered; filtering preserves the integrity of obstacle edges in the depth image. Many filtering algorithms exist, and they can be combined appropriately according to the quality of the depth image. In this embodiment, a median filter and a bilateral filter are applied to the depth image to obtain the filtered result. Flying pixels are then removed from the filtered depth image, which screens out the outlier pixels. In a specific embodiment of the present invention, step S1 further includes cropping the depth image, specifically: converting the depth image into a point cloud using a depth-image-to-point-cloud conversion, and, based on a preset detection range, deleting the pixels of the depth image corresponding to points outside the detection range, obtaining the cropped depth image and completing the cropping.
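The cropping step above can be sketched as follows (Python with NumPy). The patent only specifies "a preset detection range"; the particular range used here (a maximum depth z_max plus a lateral corridor of half-width x_half_width, via the pinhole model) and all function and parameter names are illustrative assumptions, not the patent's definition.

```python
import numpy as np

def crop_depth_by_range(depth, fx, cx, z_max, x_half_width):
    """Project each depth pixel to a lateral world coordinate via the pinhole
    model and zero out pixels whose point falls outside the assumed range."""
    h, w = depth.shape
    us = np.arange(w, dtype=np.float32)[None, :]   # pixel column indices
    z = depth.astype(np.float32)
    x = (us - cx) * z / fx                         # lateral world coordinate
    # keep points in front of the camera, within z_max and the lateral corridor
    keep = (z > 0) & (z <= z_max) & (np.abs(x) <= x_half_width)
    out = depth.astype(np.float32).copy()
    out[~keep] = 0                                 # deleted pixels marked invalid
    return out
```

Invalid pixels are marked 0 rather than removed, so the image grid stays rectangular for the later rasterization step.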
The cropped depth image is rasterized to obtain the corresponding raster image. Specifically, the size of each grid cell in the raster image can be set as needed. The length and width, in the depth image, of a single grid cell are set; the depth-image length divided by the single-grid length gives the number of horizontal pixels of the raster image, and the depth-image width divided by the single-grid width gives the number of vertical pixels. For all depth-image pixels falling inside one grid cell, the average of their pixel values is taken as the depth value of that cell, thereby generating the raster image. For example, let the length and width of each single grid cell be x_length and y_length, and let the depth-image resolution be width × height; the raster-image resolution is then (width/x_length) × (height/y_length).
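A minimal sketch of this rasterization, averaging each block of depth pixels into one grid cell (function name and the choice of skipping invalid zero pixels are assumptions; the patent simply averages the pixels in each cell):

```python
import numpy as np

def rasterize_depth(depth, x_length, y_length):
    """Downsample a depth image into a raster image: each grid cell takes the
    mean of the valid depth pixels it covers (resolution w//x_length x h//y_length)."""
    h, w = depth.shape
    gw, gh = w // x_length, h // y_length
    raster = np.zeros((gh, gw), dtype=np.float32)
    for gy in range(gh):
        for gx in range(gw):
            cell = depth[gy*y_length:(gy+1)*y_length, gx*x_length:(gx+1)*x_length]
            valid = cell[cell > 0]                 # ignore invalid (0) pixels
            raster[gy, gx] = valid.mean() if valid.size else 0
    return raster
```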
In step S2, region growing is performed on the raster image based on a region growing algorithm. Specifically, the seed points of the region growing algorithm are obtained by traversing the raster image from top-left to bottom-right; each is used in turn as a seed from which a region is grown, and cells already absorbed into a grown region are not used as seeds when reached later in the traversal. Region growing on the raster image yields a plurality of connected regions; the average of the depth values of all pixels in each connected region is calculated and taken as that region's average distance. The rectangular area occupied by each connected region is then calculated, giving a candidate frame, parallel to the image's x and y axes, that fully encloses the region.
According to an embodiment of the present invention, step S2 further includes screening the connected regions, specifically: the regions grown from the raster image are screened by setting a length threshold and a width threshold; if both the length and the width of a connected region fall within these thresholds, it is a candidate connected region. If a candidate connected region also falls within a preset aspect-ratio threshold, it is a candidate frame region corresponding to a target to be detected. The length and width thresholds are set according to the region's average distance: the larger the average distance, the smaller the thresholds, because of the near-large/far-small perspective of the acquired depth image. Setting these thresholds eliminates regions that are too large or too small. Because the forklift pallet is a standardized object, its aspect ratio after depth-image processing is fixed, so shape screening is performed with an aspect-ratio threshold that includes some margin; the candidate frame regions that survive screening are the targets to be detected.
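The region growing of step S2 can be sketched as a 4-connected flood fill over the raster image; the depth-similarity rule (neighbours join when their depth values differ by at most depth_tol) is an assumption, since the patent does not state its growing criterion, and the returned dictionary fields are illustrative names:

```python
import numpy as np
from collections import deque

def grow_regions(raster, depth_tol):
    """4-connected region growing: seeds are taken in top-left to bottom-right
    order; grown cells are never reused as seeds. Returns, per region, its
    average distance (mean depth) and enclosing bbox (x, y, w, h)."""
    h, w = raster.shape
    labels = -np.ones((h, w), dtype=int)
    regions, next_label = [], 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != -1 or raster[sy, sx] == 0:
                continue                       # already grown, or invalid cell
            q, cells = deque([(sy, sx)]), []
            labels[sy, sx] = next_label
            while q:
                y, x = q.popleft()
                cells.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1
                            and raster[ny, nx] > 0
                            and abs(raster[ny, nx] - raster[y, x]) <= depth_tol):
                        labels[ny, nx] = next_label
                        q.append((ny, nx))
            ys, xs = zip(*cells)
            regions.append({
                'mean_depth': float(np.mean([raster[c] for c in cells])),
                'bbox': (min(xs), min(ys), max(xs) - min(xs) + 1,
                         max(ys) - min(ys) + 1),
            })
            next_label += 1
    return regions
```

The screening step would then simply filter the returned bboxes against the length, width, and aspect-ratio thresholds.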
In step S3, the inclination degree of each target to be detected is calculated. Specifically, based on the region growing step, the leftmost and rightmost points of the region of the target to be detected are acquired. If the difference between a point in a preset neighborhood of the leftmost point and the average distance of the region is smaller than a first distance threshold, that point is added to the neighborhood of the leftmost point; the values of all pixels in this neighborhood are accumulated and averaged to obtain the reliable depth value of the leftmost point. Likewise, if the difference between a point in the neighborhood of the rightmost point and the region's average distance is smaller than the first distance threshold, that point is added to the neighborhood of the rightmost point, and the averaged accumulation gives the reliable depth value of the rightmost point. The inclination degree of the target is then calculated according to formula (1):
t = (leftz - rightz) / (leftx - rightx) (1);
where t is the inclination degree, leftx is the x-coordinate of the leftmost point in the raster image, leftz is the reliable depth value of the leftmost point, rightx is the x-coordinate of the rightmost point in the raster image, and rightz is the reliable depth value of the rightmost point.
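The reliable-depth averaging and formula (1) can be sketched as below; the square neighbourhood radius and the function names are illustrative assumptions (the patent only speaks of a "preset neighborhood"):

```python
import numpy as np

def reliable_depth(raster, point, region_mean, dist_thresh, radius=1):
    """Average the raster values in a small square neighbourhood of `point`,
    keeping only cells whose depth is within dist_thresh of the region's
    average distance."""
    y, x = point
    h, w = raster.shape
    vals = []
    for ny in range(max(0, y - radius), min(h, y + radius + 1)):
        for nx in range(max(0, x - radius), min(w, x + radius + 1)):
            if abs(raster[ny, nx] - region_mean) < dist_thresh:
                vals.append(raster[ny, nx])
    return float(np.mean(vals))

def tilt_degree(left_x, left_z, right_x, right_z):
    # formula (1): t = (leftz - rightz) / (leftx - rightx)
    return (left_z - right_z) / (left_x - right_x)
```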
In step S4, the tray raster-image template to be matched is adapted according to the inclination degree of each target to be detected. Specifically, a standard tray raster-image template is constructed for each tray type; because this template must be matched against the raster image, its pixel size corresponds to the pixel size of the pre-designed raster image. The template values are 0 and 1 (any two distinct values may be used, as long as the tray surface area and the hollowed-out areas can be distinguished). The standard templates include a long-distance template and a short-distance template, chosen according to the distance of the target object from the TOF camera, such as the long-distance template shown in fig. 2 and the short-distance template shown in fig. 3. At short range the target object is limited by the TOF camera's field of view: the two outer edges of the forklift pallet cannot be captured and only the middle edge is visible, so at short range the matching switches to the short-distance template. A template is selected according to the average distance of the region of the target to be detected: if the average distance is larger than a second distance threshold, the long-distance template is selected for matching, otherwise the short-distance template is selected. The selected long-distance or short-distance template is divided down the middle into a left half-template and a right half-template, as shown in fig. 4; the adaptive width of the left half-template is calculated according to formula (2), and that of the right half-template according to formula (3),
wherein l_w is the adaptive width of the left half-template, m_w is the width of the complete template, k is a preset value, and t is the inclination degree of the target to be detected;
r_w = m_w - l_w (3);
where r_w is the adaptive width of the right half-template. The left and right half-templates are scaled to their adaptive widths with the height kept unchanged, and the processed halves are spliced into a complete template, as shown in fig. 5. The spliced complete template is then scaled so that its length and width equal those of the candidate frame region of the target to be detected, yielding the adapted tray raster-image template. Through these steps, the selected long-distance or short-distance template is adapted to the inclination degree of the target to be detected, so that template matching can be performed more effectively.
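A sketch of the split/scale/splice adaptation follows. Note a loud caveat: formula (2) is not reproduced in the text, so the form l_w = m_w/2 + k·t used below is an assumption, chosen only to be consistent with the listed variables (l_w, m_w, k, t) and with formula (3), r_w = m_w - l_w. The nearest-neighbour resize stands in for whatever image scaling the patent uses:

```python
import numpy as np

def nn_resize_width(img, new_w):
    """Nearest-neighbour horizontal resize; the height is kept unchanged."""
    h, w = img.shape
    cols = np.arange(new_w) * w // new_w
    return img[:, cols]

def adapt_template(template, t, k):
    """Split the standard template down the middle, rescale each half to its
    adaptive width, and splice the halves back together.
    ASSUMPTION: l_w = m_w/2 + k*t stands in for the missing formula (2);
    r_w = m_w - l_w is formula (3)."""
    m_w = template.shape[1]
    l_w = int(round(m_w / 2 + k * t))      # assumed formula (2)
    l_w = max(1, min(m_w - 1, l_w))        # keep both halves non-empty
    r_w = m_w - l_w                        # formula (3)
    left, right = template[:, :m_w // 2], template[:, m_w // 2:]
    return np.hstack([nn_resize_width(left, l_w), nn_resize_width(right, r_w)])
```

With t = 0 the halves keep their original widths; a positive tilt widens the left half at the expense of the right, mimicking the perspective skew of an inclined pallet face.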
In step S5, pixel matching is performed between the candidate frame region of each target to be detected and the corresponding adapted tray raster-image template; if the matching succeeds, the candidate frame region of the target tray is obtained. Because the template has already been adapted to each target's inclination degree in the preceding steps and scaled to the same length and width as the candidate frame region, template matching is performed by comparing the template and the candidate frame region pixel position by pixel position. Specifically, each pixel of the candidate frame region of the current target is matched against the corresponding template pixel; whenever they agree, the pixel is counted as matched, until all pixels have been compared and the total number of matched pixels is obtained. Dividing this total by the total number of pixels of the candidate frame region gives the matching score of the current target, and likewise for every target to be detected. If a matching score is smaller than the score threshold, that target is deleted; otherwise it is retained, and the final score of each retained target is calculated based on formula (4):
s = s_template - s_distancex - s_distancey (4);
where s is the final score, s_template is the matching score, s_distancex is a weighted value of the deviation of the center of the target to be detected from the raster-image center point in x, and s_distancey is a weighted value of the deviation in y. The calculated final scores of the targets are sorted, and the target with the highest final score is the detected target tray. This distance-weighted comparison reflects that the farther the center of a target deviates from the center of the raster image, the lower the probability that it is a tray.
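The per-pixel score and formula (4) can be sketched directly; the function names are illustrative, and the computation of the two distance-penalty weights is left to the caller since the patent does not give their weighting factors:

```python
import numpy as np

def match_score(candidate, template):
    """Fraction of raster cells whose binary value (surface vs hollow) agrees
    with the adapted template; both arrays must have the same shape."""
    assert candidate.shape == template.shape
    return float(np.count_nonzero(candidate == template)) / candidate.size

def final_score(s_template, s_distancex, s_distancey):
    # formula (4): s = s_template - s_distancex - s_distancey
    return s_template - s_distancex - s_distancey
```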
The coordinate positions in the raster image of the candidate frame region of the target tray are then converted to the coordinate positions of the corresponding depth image. The conversion is as follows: the coordinates of each pixel of the candidate frame region of the target tray are multiplied by the length and the width of a single grid cell, respectively, giving the candidate frame region of the target tray in the depth-image coordinate system and hence the target tray in the depth image, completing the tray detection.
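The coordinate conversion above amounts to multiplying raster coordinates by the single-grid size (the function name and the bbox tuple layout are assumptions):

```python
def raster_bbox_to_depth(bbox, x_length, y_length):
    """Convert a candidate-frame bbox (x, y, w, h) from raster-image
    coordinates to depth-image coordinates by multiplying by the size of a
    single grid cell."""
    x, y, w, h = bbox
    return (x * x_length, y * y_length, w * x_length, h * y_length)
```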
According to a specific embodiment of the invention, a technical scheme for positioning the tray holes of a tray is provided. Specifically, the candidate frame region of the target tray is cut out of the depth image. The candidate frame region is then traversed pixel by pixel: if the difference between a pixel's depth and the average distance of the candidate frame region exceeds a third distance threshold, the pixel is deleted, and so on; all remaining pixels form the surface region of the target tray. Region growing is performed on the surface region of the target tray to obtain a fine candidate frame region of the target tray, which fits the tray region more closely. Consistent with the implementation of step S3, the fine inclination degree of the target tray in the depth image is calculated; by converting to a point cloud in the world coordinate system, the inclination degree can be converted into a slope. A middle part is cut from the fine candidate frame region of the target tray, with the cutting mode and cutting position determined by the tray type, and the middle tray-leg part is cut out according to the fine inclination degree of the target tray. The tray-leg part is then finely positioned to obtain the candidate frame region of the tray leg; the center position of this candidate frame region is taken as the center position of the target tray, completing the accurate positioning of the tray hole.
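The surface-extraction step above (keep only pixels close to the region's average distance) can be sketched as follows; the function name and the treatment of zero-depth pixels as invalid are assumptions:

```python
import numpy as np

def tray_surface(region_depths, third_distance_threshold):
    """Return a boolean mask of the pixels that form the tray surface region:
    valid pixels (depth > 0) whose depth differs from the region's mean depth
    by no more than the third distance threshold."""
    valid = region_depths > 0
    mean_d = region_depths[valid].mean()
    return valid & (np.abs(region_depths - mean_d) <= third_distance_threshold)
```

Region growing would then be run on the surviving pixels to obtain the fine candidate frame region.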
In one embodiment of the present invention as shown in fig. 6, the present invention provides a forklift pallet detection and positioning system, the system comprising:
the grid module 60 performs rasterization processing on the acquired depth image in the current scene to generate a corresponding grid image;
the region growing module 61 performs region growing on the grid image to obtain a plurality of candidate frame regions, wherein each candidate frame region corresponds to a target to be detected;
a gradient module 62 for calculating a gradient degree of each object to be detected;
the template self-adaptation module 63 performs template self-adaptive processing on a preset tray raster image standard template according to the inclination degree of each target to be detected, and generates a tray raster image template corresponding to each target to be detected;
the matching module 64 performs pixel matching on the candidate frame area of each target to be detected and the corresponding self-adaptive tray raster image template, and if the matching is successful, the candidate frame area of the target tray is obtained;
and the confirmation module 65 converts the grid image of the candidate frame area of the target tray into a corresponding depth image to obtain the target tray in the depth image.
The grid module acquires a depth image of the current scene through a TOF camera and cuts the depth image. The cut depth image is then rasterized to obtain the corresponding raster image.
Based on a region growing algorithm, the region growing module performs region growing on the grid image to obtain a plurality of link regions. The average of the depth values of all pixel points in each link region is calculated and taken as the average distance of that link region. The link regions are then screened: link regions that are too large or too small are eliminated, the remaining link regions are screened by shape, and the candidate frame regions obtained after screening are the targets to be detected.
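A minimal region-growing sketch on a grid image, assuming 4-connectivity and a depth tolerance for joining neighbouring cells (the patent does not specify the exact growth criterion, so `depth_tol` is a hypothetical parameter; zero cells are treated as empty):

```python
from collections import deque
import numpy as np

def grow_regions(grid, depth_tol):
    """Label connected link regions in a grid image: a neighbouring cell joins
    the current region when its depth differs by at most depth_tol.
    Returns (label image, number of regions); label 0 means empty."""
    h, w = grid.shape
    labels = np.zeros((h, w), dtype=int)
    n = 0
    for sy in range(h):
        for sx in range(w):
            if grid[sy, sx] == 0 or labels[sy, sx]:
                continue
            n += 1                      # start a new region from this seed
            labels[sy, sx] = n
            q = deque([(sy, sx)])
            while q:
                y, x = q.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w and grid[ny, nx] != 0
                            and not labels[ny, nx]
                            and abs(float(grid[ny, nx]) - float(grid[y, x])) <= depth_tol):
                        labels[ny, nx] = n
                        q.append((ny, nx))
    return labels, n
```

Each labelled region's mean depth would then serve as its average distance, and size/shape screening yields the candidate frame regions.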
The inclination module calculates the inclination degree of each target to be detected; for the specific implementation, see the method embodiment above.
The template self-adaptation module performs template self-adaptation on the tray raster image template to be matched according to the inclination degree of each target to be detected. The selected long-distance or short-distance template is adaptively processed according to the inclination degree of the target to be detected, so that the template matches better.
And the matching module performs pixel matching on the candidate frame region of each target to be detected and the corresponding self-adaptive tray raster image template, and if the matching is successful, the candidate frame region of the target tray is obtained. And acquiring the target tray based on the distance weighted comparison and the matching score.
The confirming module converts the coordinate position of the grid image of the candidate frame area of the target tray into the coordinate position of the corresponding depth image, and further obtains the target tray in the depth image, and the detection of the tray is completed.
Although the preferred embodiments of the present invention have been disclosed for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and substitutions are possible, without departing from the scope and spirit of the invention as disclosed in the accompanying claims.

Claims (9)

1. The forklift pallet detection and positioning method is characterized by comprising the following steps:
s1, rasterizing the acquired depth image in the current scene to generate a corresponding raster image;
s2, carrying out region growth on the raster image to obtain a plurality of candidate frame regions, wherein each candidate frame region corresponds to a target to be detected;
s3, calculating the inclination degree of each target to be detected;
s4, performing template self-adaption processing on a preset tray raster image standard template according to the inclination degree of each target to be detected, and generating a tray raster image template corresponding to each target to be detected;
s5, carrying out pixel matching on the candidate frame region of each target to be detected and the corresponding self-adaptive tray raster image template, and if matching is successful, acquiring the candidate frame region of the target tray;
s6, converting the grid image of the candidate frame area of the target tray into a corresponding depth image to obtain the target tray in the depth image;
the step S4 includes:
based on the tray type, constructing a tray raster image standard template corresponding to the tray type, wherein the tray raster image standard template comprises a long-distance template and a short-distance template;
selecting a template for matching according to the average distance of the region of the target to be detected: if the average distance is larger than a second distance threshold, selecting the short-distance template for matching, otherwise selecting the long-distance template for matching;
dividing the selected long-distance template or short-distance template into a left half template and a right half template from the middle, calculating the self-adaptive width of the left half template according to a formula (2), and calculating the self-adaptive width of the right half template according to a formula (3);
wherein l_w is the self-adaptive width of the left half template, m_w is the width of the complete template, k is a preset value, and t is the inclination degree of the target to be detected;
r_w = m_w - l_w (3);
wherein r_w is the self-adaptive width of the right half template;
performing image scaling processing on the left half template and the right half template according to the self-adaptive widths of the left half template and the right half template, and splicing the processed left half template and right half template into a complete template;
and performing image scaling processing on the spliced complete template to obtain the self-adaptive tray raster image template.
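The template-adaptation steps of claim 1 can be sketched as below. Note that formula (2) is not reproduced in the extracted text (it appeared as an image in the original), so a linear dependence of the left-half width on the inclination degree is assumed here purely for illustration; formula (3), r_w = m_w - l_w, is taken from the text.

```python
import numpy as np

def adapt_template(template, k, t):
    """Split a template down the middle, resize each half to its adaptive
    width, and splice the halves back into a full template.
    l_w = m_w/2 + k*t is an ASSUMED form of formula (2)."""
    h, m_w = template.shape
    l_w = int(round(m_w / 2 + k * t))   # assumed formula (2): skewed split
    l_w = max(1, min(m_w - 1, l_w))     # keep both halves non-empty
    r_w = m_w - l_w                     # formula (3)
    left, right = template[:, : m_w // 2], template[:, m_w // 2 :]

    def resize_w(img, new_w):
        # nearest-neighbour width scaling
        cols = (np.arange(new_w) * img.shape[1] / new_w).astype(int)
        return img[:, cols]

    return np.hstack([resize_w(left, l_w), resize_w(right, r_w)])
```

The spliced template would then be scaled once more to the size of the candidate frame region before pixel matching.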
2. The forklift pallet detecting and positioning method according to claim 1, wherein the step S1 further includes a cutting process of the depth image, and specifically includes:
converting the depth image into a point cloud based on a calculation method of the depth image turning point cloud;
and deleting pixel points on the depth image corresponding to points which are not in the detection range in the point cloud based on a preset detection range, and obtaining the cut depth image.
3. The forklift pallet detection and positioning method according to claim 2, wherein said step S1 further comprises:
setting the length and the width of a single grid in the grid image in the depth image, wherein the length of the depth image divided by the length of the single grid is the number of transverse pixels of the grid image, and the width of the depth image divided by the width of the single grid is the number of longitudinal pixels of the grid image;
and taking the average of the pixel values of all the pixel points in the depth image that fall within one grid as the depth value of that grid, thereby generating the grid image.
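The rasterization of claim 3 amounts to block averaging of the depth image; a sketch (the function name is assumed, and for brevity the depth image is cropped to a whole number of grid cells):

```python
import numpy as np

def rasterize(depth, cell_w, cell_h):
    """Grid image: each cell holds the mean of the depth-image pixels inside
    it. Horizontal cell count = image width // cell width, and likewise for
    the vertical count."""
    h, w = depth.shape
    gy, gx = h // cell_h, w // cell_w
    cropped = depth[: gy * cell_h, : gx * cell_w]
    return cropped.reshape(gy, cell_h, gx, cell_w).mean(axis=(1, 3))
```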
4. The forklift pallet detecting and positioning method according to claim 3, wherein the step S2 specifically includes:
performing region growth on the grid image according to a region generation algorithm to obtain a plurality of link regions; calculating the average value of the depth values of all the pixel points in each link area, and taking the average value of the depth values as the average distance of the link area;
the rectangular area occupied by each link area is calculated.
5. The forklift pallet detecting and positioning method according to claim 4, wherein the step S2 further comprises a step of screening the link area, and the method specifically comprises:
setting a length threshold value and a width threshold value of a link area;
if the length and the width of the link region are both within the length threshold value and the width threshold value, the link region is a candidate link region;
and if the candidate link region is within a preset length-width ratio threshold, the candidate link region is a candidate frame region, and the candidate frame region corresponds to the target to be detected.
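The screening of claim 5 (length/width thresholds, then aspect ratio) can be sketched as follows; the data layout and threshold names are assumptions:

```python
def screen_regions(regions, len_range, wid_range, max_aspect):
    """Keep link regions whose bounding box is within the length and width
    thresholds and whose length/width ratio does not exceed max_aspect.
    `regions` maps a region id to its (length, width) bounding box."""
    candidates = {}
    for rid, (length, width) in regions.items():
        if not (len_range[0] <= length <= len_range[1]):
            continue                      # too long or too short
        if not (wid_range[0] <= width <= wid_range[1]):
            continue                      # too wide or too narrow
        if length / width <= max_aspect:  # plausible pallet shape
            candidates[rid] = (length, width)
    return candidates
```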
6. The forklift pallet detecting and positioning method according to claim 5, wherein said step S3 comprises:
acquiring leftmost and rightmost points of the region of the target to be detected;
if the difference between the average distance between the point in the preset area of the leftmost point and the area of the target to be detected is smaller than a first distance threshold value, adding the point into the neighborhood of the leftmost point, accumulating the values of all the pixel points in the neighborhood of the leftmost point, and taking an average value to obtain the reliable depth value of the leftmost point;
if the difference between a point in the preset area of the rightmost point and the average distance of the region of the target to be detected is smaller than the first distance threshold, adding the point into the neighborhood of the rightmost point, accumulating the values of all the pixel points in the neighborhood of the rightmost point, and taking an average value to obtain the reliable depth value of the rightmost point;
calculating the inclination degree of the target to be detected according to a calculation formula (1);
t=(leftz-rightz)/(leftx-rightx) (1);
where t is the degree of tilt, leftx is the x-coordinate of the leftmost point in the raster image, leftz is the reliable depth value of the leftmost point, rightx is the x-coordinate of the rightmost point in the raster image, and rightz is the reliable depth value of the rightmost point.
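Formula (1) is a simple slope between the leftmost and rightmost reliable depth values; as a sketch (function name assumed):

```python
def tilt_degree(leftx, leftz, rightx, rightz):
    """Formula (1): t = (leftz - rightz) / (leftx - rightx), the depth slope
    of the pallet face across the raster image."""
    return (leftz - rightz) / (leftx - rightx)
```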
7. The forklift pallet detecting and positioning method of claim 6, wherein said step S5 comprises:
matching a pixel point of a candidate frame area of a current target to be detected with a pixel point of a corresponding tray raster image template, if the matching is consistent, accumulating and counting the pixel point into the matched pixel points until all the pixel points are matched, and counting the total number of the matched pixel points;
dividing the total number of the matched pixel points by the total number of all the pixel points of the candidate frame area of the current target to be detected to obtain the matching score of the current target to be detected, and the like to obtain the matching score of each target to be detected;
if the matching score is smaller than a score threshold, deleting the target to be detected, otherwise, reserving the target to be detected;
calculating the final grading value of the reserved target to be detected based on a formula (4);
s = s_template - s_distancex - s_distancey (4); wherein s is the final score value, s_template is the matching score, s_distancex is a weighted value of the deviation of the center position of the target to be detected from the x-coordinate of the raster image center point, and s_distancey is a weighted value of the deviation of the center position of the target to be detected from the y-coordinate of the raster image center point;
and sequencing the calculated final grading values of the targets to be detected, wherein the target to be detected with the highest final grading value is the target tray obtained by detection.
8. The forklift pallet detection and positioning method of claim 7, wherein said method further comprises:
cutting out a candidate frame area of the target tray in the depth image;
selecting a pixel point of a candidate frame area of the target tray to start traversing, deleting the pixel point if the difference of the average distance between the pixel point and the candidate frame area exceeds a third distance threshold value, and so on, wherein all the rest pixel points form a surface area of the target tray;
performing region growth on the surface region of the target tray to obtain a fine candidate frame region of the target tray; calculating to obtain the fine inclination degree of the target tray in the depth image;
cutting a middle part of a fine candidate frame area from the fine candidate frame area of the target tray, and cutting a middle tray leg part from the fine candidate frame area of the target tray according to the fine inclination degree of the target tray;
and acquiring a candidate frame area of the tray leg, and taking the central position of the area as the central position of the target tray.
9. A forklift pallet detection and positioning system, the system comprising:
the grid module is used for carrying out rasterization on the acquired depth image in the current scene to generate a corresponding grid image;
the region growing module is used for carrying out region growing on the grid image to obtain a plurality of candidate frame regions, and each candidate frame region corresponds to one target to be detected;
the inclination module is used for calculating the inclination degree of each object to be detected;
the template self-adaptation module is used for carrying out template self-adaptive processing on a preset tray raster image standard template according to the inclination degree of each target to be detected, so as to generate a tray raster image template corresponding to each target to be detected;
the matching module is used for carrying out pixel matching on the candidate frame area of each target to be detected and the corresponding self-adaptive tray raster image template, and if the matching is successful, the candidate frame area of the target tray is obtained;
the confirming module is used for converting the grid image of the candidate frame area of the target tray into a corresponding depth image to obtain the target tray in the depth image;
the template self-adaptation module is specifically used for:
based on the tray type, constructing a tray raster image standard template corresponding to the tray type, wherein the tray raster image standard template comprises a long-distance template and a short-distance template;
selecting a template for matching according to the average distance of the region of the target to be detected: if the average distance is larger than a second distance threshold, selecting the short-distance template for matching, otherwise selecting the long-distance template for matching;
dividing the selected long-distance template or short-distance template into a left half template and a right half template from the middle, calculating the self-adaptive width of the left half template according to a formula (2), and calculating the self-adaptive width of the right half template according to a formula (3);
wherein l_w is the self-adaptive width of the left half template, m_w is the width of the complete template, k is a preset value, and t is the inclination degree of the target to be detected;
r_w = m_w - l_w (3);
wherein r_w is the self-adaptive width of the right half template;
performing image scaling processing on the left half template and the right half template according to the self-adaptive widths of the left half template and the right half template, and splicing the processed left half template and right half template into a complete template;
and performing image scaling processing on the spliced complete template to obtain the self-adaptive tray raster image template.
CN202010970716.XA 2020-09-16 2020-09-16 Fork truck tray detection and positioning method and system Active CN112070759B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010970716.XA CN112070759B (en) 2020-09-16 2020-09-16 Fork truck tray detection and positioning method and system

Publications (2)

Publication Number Publication Date
CN112070759A CN112070759A (en) 2020-12-11
CN112070759B true CN112070759B (en) 2023-10-24


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112614183A (en) * 2020-12-25 2021-04-06 深圳市镭神智能系统有限公司 Tray pose detection method, device, equipment and storage medium
CN112766181B (en) * 2021-01-22 2022-09-23 电子科技大学 Method for improving line graph identification accuracy
CN112935703B (en) * 2021-03-19 2022-09-27 山东大学 Mobile robot pose correction method and system for identifying dynamic tray terminal
CN113435524A (en) * 2021-06-30 2021-09-24 兰剑智能科技股份有限公司 Intelligent stacker and method, device and equipment for identifying position abnormality of tray
CN113344917B (en) * 2021-07-28 2021-11-23 浙江华睿科技股份有限公司 Detection method, detection device, electronic equipment and storage medium
CN114162463A (en) * 2021-11-13 2022-03-11 深圳市坤同智能仓储科技有限公司 Material box based on image recognition and material box positioning method
CN114078220B (en) * 2022-01-19 2022-05-27 浙江光珀智能科技有限公司 Tray identification method based on depth camera
CN117132590B (en) * 2023-10-24 2024-03-01 威海天拓合创电子工程有限公司 Image-based multi-board defect detection method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2439487A1 (en) * 2010-10-06 2012-04-11 Sick Ag Volume measuring device for mobile objects
CN105139416A (en) * 2015-10-10 2015-12-09 北京微尘嘉业科技有限公司 Object identification method based on image information and depth information
CN105976375A (en) * 2016-05-06 2016-09-28 苏州中德睿博智能科技有限公司 RGB-D-type sensor based tray identifying and positioning method
CN107507167A (en) * 2017-07-25 2017-12-22 上海交通大学 A kind of cargo pallet detection method and system matched based on a cloud face profile
CN109160452A (en) * 2018-10-23 2019-01-08 西安中科光电精密工程有限公司 Unmanned transhipment fork truck and air navigation aid based on laser positioning and stereoscopic vision
CN109784145A (en) * 2018-12-05 2019-05-21 北京华捷艾米科技有限公司 Object detection method and storage medium based on depth map
CN110058591A (en) * 2019-04-24 2019-07-26 合肥柯金自动化科技股份有限公司 A kind of AGV system based on laser radar Yu depth camera hybrid navigation
CN111369544A (en) * 2020-03-09 2020-07-03 广州市技田信息技术有限公司 Tray positioning detection method and device and intelligent forklift
CN111445517A (en) * 2020-03-14 2020-07-24 苏州艾吉威机器人有限公司 Robot vision end positioning method and device and computer readable storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7277187B2 (en) * 2001-06-29 2007-10-02 Quantronix, Inc. Overhead dimensioning system and method
KR101283262B1 (en) * 2011-10-21 2013-07-11 한양대학교 산학협력단 Method of image processing and device thereof
US10124489B2 (en) * 2016-02-26 2018-11-13 Kinema Systems Inc. Locating, separating, and picking boxes with a sensor-guided robot
US10721451B2 (en) * 2016-03-23 2020-07-21 Symbol Technologies, Llc Arrangement for, and method of, loading freight into a shipping container
US10262222B2 (en) * 2016-04-13 2019-04-16 Sick Inc. Method and system for measuring dimensions of a target object
US9990535B2 (en) * 2016-04-27 2018-06-05 Crown Equipment Corporation Pallet detection using units of physical length
US10614319B2 (en) * 2016-08-10 2020-04-07 John Bean Technologies Corporation Pallet localization systems and methods
US10328578B2 (en) * 2017-04-21 2019-06-25 X Development Llc Methods and systems for detecting, recognizing, and localizing pallets
US10640347B2 (en) * 2017-12-22 2020-05-05 X Development Llc Pallet tracking during engagement and disengagement

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Object detection method based on region growing; Gao Jing; Cai Xingfu; Liu Zhiqiang; Chang Yan; Journal of Beijing University of Technology; Vol. 42, No. 06; pp. 856-861 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant