CN116486119A - Flat carriage detection method and device based on unmanned carrier and unmanned carrier - Google Patents

Flat carriage detection method and device based on unmanned carrier and unmanned carrier

Info

Publication number
CN116486119A
Authority
CN
China
Prior art keywords
tail
head
image
data stream
space coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310257404.8A
Other languages
Chinese (zh)
Inventor
于新莉
任宇飞
薄涵文
尹晓旭
孟德强
孙志成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qilu Aerospace Information Research Institute
Original Assignee
Qilu Aerospace Information Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qilu Aerospace Information Research Institute filed Critical Qilu Aerospace Information Research Institute
Priority to CN202310257404.8A priority Critical patent/CN116486119A/en
Publication of CN116486119A publication Critical patent/CN116486119A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/757 - Matching configurations of points or features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/54 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a flat carriage detection method and device based on an unmanned carrier, and an unmanned carrier. The position of the flat carriage can be accurately identified in complex scenes where the truck parking position is not fixed and cargo types and specifications vary, providing a basis for dividing cargo positions in combination with cargo specifications, facilitating accurate navigation to a designated cargo position, and enabling flexible loading.

Description

Flat carriage detection method and device based on unmanned carrier and unmanned carrier
Technical Field
The invention relates to the technical field of computer vision, in particular to a method and a device for detecting a flat carriage based on an unmanned carrier and the unmanned carrier.
Background
With the rapid development of computer and robot technologies, automated guided vehicles (Automated Guided Vehicle, AGV) are widely applied in the logistics industry, marking the development of warehouse logistics technology towards automation and intelligence. Existing loading research focuses on fixed-station loading under specific equipment: for example, a laser radar is arranged at the entrance and exit of a garage, and loading is carried out by a mechanical arm or by a telescopic conveyor. Such schemes require the loading site to be modified and are suited to container loading that depends on the specific equipment.
However, for a flat wagon in a complex warehouse logistics scene with irregular parking, unfixed positions, and various cargo types and specifications, uneven stacking and dumping risks easily occur and this fixed-station loading is not applicable, so an AGV cannot complete loading smoothly when facing the complex warehouse logistics scene of a flat wagon.
Disclosure of Invention
The invention provides a method and a device for detecting a flat carriage based on an unmanned carrier and the unmanned carrier, which are used for solving the defect that a cargo space cannot be accurately positioned due to irregular parking of the flat carriage in the loading process.
The invention provides a flat carriage detection method based on an unmanned carrier, which comprises the following steps:
Under the condition that the interval distance between the unmanned carrier and the head point in the first length side edge of the platform carriage to be detected is larger than or equal to a first preset threshold value, starting a camera shooting module on the unmanned carrier so as to continuously receive RGB data streams acquired by the camera shooting module; wherein, the RGB data stream takes the head image and the tail image of the flat car to be detected as a start frame and a stop frame respectively;
matching a head reference picture and a tail reference picture corresponding to the RGB data stream according to a flat car graphic library; the flat car graphic library stores partial images of head and tail endpoints of a reserved flat car and endpoint pixel coordinates marked on the partial images;
performing feature point matching on the head image and the tail image in the RGB data stream and the head reference image and the tail reference image corresponding to the RGB data stream respectively, and determining a head pixel coordinate and a tail pixel coordinate;
acquiring a head space coordinate and a tail space coordinate of the flat car to be detected under an absolute space coordinate system based on the head pixel coordinate, the tail pixel coordinate and car head and tail point cloud data; the carriage head-tail point cloud data are determined according to the depth data stream acquired by the camera module while the RGB data stream is acquired and the camera internal parameters of the camera module;
Constructing a linear equation by utilizing the head space coordinate and the tail space coordinate, and determining the position information of the first length side edge of the flat carriage to be detected under an absolute space coordinate system; the position information of the first length side edge of the flat carriage to be detected under the absolute space coordinate system is used for controlling the unmanned carrier to carry out loading operation from the first length side edge.
According to the method for detecting the flat carriage based on the unmanned carrier, the feature point matching is carried out on the head image and the tail image in the RGB data stream and the head reference image and the tail reference image corresponding to the RGB data stream respectively, and the head pixel coordinate and the tail pixel coordinate are determined, and the method comprises the following steps:
extracting feature points and descriptors from the head image and the head reference image to obtain a first feature point set and a second feature point set;
determining a feature point, of which the distance between the pixel coordinates in the second feature point set and the head end point pixel coordinates carried by the head reference image is smaller than a second preset threshold, as a first detection key point;
screening out a point pair containing the first detection key point from a first point pair set obtained by carrying out feature point matching on the first feature point set and the second feature point set, and setting the pixel coordinate of the first detection key point as the head pixel coordinate corresponding to a head end point in the head image;
Correspondingly, extracting feature points and descriptors from the vehicle tail image and the vehicle tail reference image to obtain a third feature point set and a fourth feature point set;
determining a feature point with a distance between a pixel coordinate in the third feature point set and a pixel coordinate of a tail end point carried by the tail reference image smaller than the second preset threshold value as a second detection key point;
and screening out a point pair containing the second detection key point from a second point pair set obtained by carrying out characteristic point matching on the third characteristic point set and the fourth characteristic point set, and setting the pixel coordinate of the second detection key point as the tail pixel coordinate corresponding to a tail end point in the tail image.
According to the method for detecting the flat carriage based on the unmanned carrier, provided by the invention, the method for acquiring the head space coordinate and the tail space coordinate of the flat carriage to be detected under an absolute space coordinate system based on the head pixel coordinate, the tail pixel coordinate and the carriage head and tail point cloud data comprises the following steps:
indexing in the carriage head-tail point cloud data by utilizing the head pixel coordinates and the tail pixel coordinates to obtain the head coordinates and the tail coordinates of the to-be-detected flat carriage under a camera coordinate system;
Performing coordinate system conversion on the head coordinate and the tail coordinate to obtain a head space coordinate and a tail space coordinate of the flat car to be detected under an absolute space coordinate system;
the absolute space coordinate system is a geodetic coordinate system or an absolute coordinate system with an origin fixed at any corner point in the storage space.
According to the method for detecting the flat carriage based on the unmanned carrier, before the head space coordinate and the tail space coordinate of the flat carriage to be detected under the absolute space coordinate system are obtained based on the head pixel coordinate, the tail pixel coordinate and the carriage head and tail point cloud data, the method further comprises the following steps:
based on the depth data stream and the camera internal and external parameters of the camera module, respectively carrying out distortion correction on the depth data stream and the RGB data stream and then aligning;
and filtering the depth data stream after distortion correction, mapping the depth data stream to a point cloud image, and obtaining the head-tail point cloud data of the carriage aligned with the RGB data stream.
According to the method for detecting the flat carriage based on the unmanned carrier, after the characteristic point matching is carried out on the head image and the tail image in the RGB data stream and the head reference image and the tail reference image corresponding to the RGB data stream respectively, the method further comprises the steps of:
Constructing a linear equation based on the carriage width, the head space coordinate and the tail space coordinate of the flat carriage to be detected, and determining the position information of the second length side edge of the flat carriage to be detected under an absolute space coordinate system;
and controlling the unmanned carrier to carry out loading operation on the corresponding side by utilizing the position information of the first length side and/or the second length side of the to-be-detected flat car under the absolute space coordinate system.
According to the method for detecting the flat carriage based on the unmanned carrier, provided by the invention, the position information of the first length side edge and/or the second length side edge of the flat carriage to be detected under the absolute space coordinate system is utilized to control the unmanned carrier to carry out loading operation on the corresponding side edge, and the method comprises the following steps:
determining a cargo operation position based on the position information of the first length side edge and/or the second length side edge of the flat carriage to be detected under the absolute space coordinate system and the cargo attribute information;
and the goods operation position is issued to the unmanned carrier, so that the unmanned carrier controls the fork arms to carry the goods borne on the fork arms to the goods operation position from the corresponding length side edges, and the goods loading operation is completed.
The invention also provides a flat carriage detection device based on the unmanned carrier, which comprises:
the image acquisition module is used for starting the camera module on the unmanned carrier to continuously receive the RGB data stream acquired by the camera module under the condition that the interval distance between the unmanned carrier and the head point in the first length side edge of the platform car to be detected is larger than or equal to a first preset threshold value; wherein, the RGB data stream takes the head image and the tail image of the flat car to be detected as a start frame and a stop frame respectively;
the image retrieval module is used for matching the head reference image and the tail reference image corresponding to the RGB data stream according to the flatbed graphic library; the flat car graphic library stores partial images of head and tail endpoints of a reserved flat car and endpoint pixel coordinates marked on the partial images;
the image matching module is used for matching the head image and the tail image in the RGB data stream with the head reference image and the tail reference image corresponding to the RGB data stream respectively to determine the head pixel coordinate and the tail pixel coordinate;
the head-tail positioning module is used for acquiring the head space coordinate and the tail space coordinate of the flat car to be detected under an absolute space coordinate system based on the head pixel coordinate, the tail pixel coordinate and the car head-tail point cloud data; the carriage head-tail point cloud data are determined according to the depth data stream acquired by the camera module while the RGB data stream is acquired and the camera internal parameters of the camera module;
The carriage fitting module is used for constructing a linear equation by utilizing the head space coordinate and the tail space coordinate and determining the position information of the first length side edge of the flat carriage to be detected under an absolute space coordinate system; the position information of the first length side edge of the flat carriage to be detected under the absolute space coordinate system is used for controlling the unmanned carrier to carry out loading operation from the first length side edge.
The invention also provides an unmanned carrier, which comprises an unmanned carrier body, a camera module arranged along the longitudinal axis direction of the unmanned carrier body, and a processor arranged on the unmanned carrier body, wherein the processor implements the flat carriage detection method based on the unmanned carrier as described above when executing its stored program.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the automated guided vehicle-based flatbed detection method as described in any of the above.
The present invention also provides a computer program product comprising a computer program which when executed by a processor implements the automated guided vehicle-based flatbed detection method as described in any of the above.
According to the flat carriage detection method and device based on the unmanned carrier and the unmanned carrier provided by the invention, an image retrieval network is used to extract, from a flat carriage graphic library that is updated and maintained in real time, the head reference image and the tail reference image closest to the RGB data stream monitored in real time; these reference images are then matched by feature points against the head image and the tail image in the RGB data stream to locate pixel coordinates; the located head pixel coordinate and tail pixel coordinate are used to index the carriage head-tail point cloud data and obtain the head space coordinate and the tail space coordinate in an absolute space coordinate system; and a linear equation describing the first length side is fitted so that loading operation positions can be guided according to each position along the first length side. The position of the flat carriage can be accurately identified in complex scenes where the truck parking position is not fixed and cargo types and specifications vary, a basis is provided for dividing cargo positions in combination with cargo specifications, accurate navigation to a designated cargo position is facilitated, and flexible loading is achieved.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of an automated guided vehicle-based method for detecting a flat car according to the present invention;
FIG. 2 is one of the image matching visualization schematics provided by the present invention;
FIG. 3 is a second view of an image matching visualization provided by the present invention;
FIG. 4 is a second flow chart of the method for detecting a flat car based on an automated guided vehicle according to the present invention;
FIG. 5 is a schematic structural view of the detection device for a flat car based on an unmanned carrier;
FIG. 6 is a schematic structural view of the automated guided vehicle according to the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first," "second," and the like in the description of the present application, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, as appropriate, such that embodiments of the present application may be implemented in sequences other than those illustrated or described herein, and that the objects identified by "first," "second," etc. are generally of a type and not limited to the number of objects, e.g., the first object may be one or more.
It is to be understood that the terminology used in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The terms "comprises" and "comprising" indicate the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Fig. 1 is a schematic flow chart of a method for detecting a flat car based on an unmanned carrier. As shown in fig. 1, the method for detecting a flat carriage based on an unmanned carrier provided by the embodiment of the invention comprises the following steps: step 101, under the condition that the interval distance between the unmanned carrier and the head point in the first length side edge of the platform car to be detected is larger than or equal to a first preset threshold value, starting a camera module on the unmanned carrier so as to continuously receive RGB data streams collected by the camera module.
And the RGB data stream takes the head image and the tail image of the flat car to be detected as a start frame and a stop frame respectively.
It should be noted that the execution subject of the flat carriage detection method based on the unmanned carrier according to the embodiment of the present invention is a flat carriage detection device based on the unmanned carrier. This device may be a central processing unit (Central Processing Unit, CPU) built into the unmanned carrier, or a CPU-based integrated development board, so as to process information and run programs.
The application scene of the flat carriage detection method based on the unmanned carrier provided by the embodiment of the invention is as follows: when the unmanned carrier travels to the flat carriage for cargo loading and is at a certain safe distance from it, the camera module is controlled to start a visual task, and the head and the tail of the flat carriage are identified and positioned.
The camera module comprises a color camera and a depth camera. Depth camera types include, but are not limited to, binocular cameras, TOF cameras, structured light cameras and the like; the different types of depth cameras differ only in ranging principle. Within the effective ranging range, the ranging precision of a depth camera can reach the centimeter level or even higher; the smaller the ranging range, the higher the precision, and the variation of the effective ranging result remains at the centimeter level.
It should be noted that the first preset threshold is a threshold set for the vertical distance between the unmanned carrier and the head end point in the first length side of the flat carriage to be detected. Its value is usually limited by the perception distance and perception precision of the depth camera, and additionally takes into account factors such as the field of view of the camera, the forklift arm length and the forklift path planning; the distance is usually set to no more than 3 meters.
Illustratively, the present invention sets a first preset threshold, denoted D, of 2 meters that triggers the initiation of a visual task.
It should be noted that the flat carriage is formed by horizontally splicing a number of flat boards on the vehicle frame and the vehicle tray, and all four sides of the flat carriage can be opened for cargo loading operation. The sides parallel to the longitudinal central axis of the body of the flat carriage are the length sides, and the sides perpendicular to the longitudinal central axis of the body are the width sides.
The first length side is the length side facing the lens of the camera module; its end point close to the vehicle head is the head end point, and its end point far from the vehicle head is the tail end point.
Specifically, in step 101, the flat carriage detection device based on the automated guided vehicle may monitor the distance between the automated guided vehicle and surrounding objects in real time through a sensing element while the vehicle is running. When the distance between the automated guided vehicle and the head end point in the first length side of the flat carriage to be detected is monitored to be greater than or equal to the first preset threshold, the visual task on the automated guided vehicle is started; the camera module is started while the automated guided vehicle is controlled to move from the head end point to the tail end point of the flat carriage keeping a fixed distance from the vehicle body, and the trigger mode of the camera module is set to continuous triggering so as to obtain the RGB data stream.
The RGB data stream comprises all carriage-local RGB images continuously collected by the camera module during the movement. The starting image frame of the RGB data stream is the head image in which the carriage head end point lies at the center of the camera module's field of view, and the ending image frame is the tail image in which the carriage tail end point lies at the center of the field of view.
It will be appreciated that the visual task may also be triggered manually by a user judging from the visualized image at the front end.
And 102, matching the head reference map and the tail reference map corresponding to the RGB data stream according to a flat car graphic library. The flat car graphic library stores partial images of head and tail end points of the reserved flat car and end point pixel coordinates marked on the partial images.
The flat car graphic library comprises a plurality of flat car side images collected at different angles, different heights and under different lighting; to reduce the amount of computation, the images are cropped so that only partial images of the head end and tail end of the flat car are retained. The end points of the flat carriage are manually annotated with the labelme software, and the annotated end point pixel coordinates are stored in json files.
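For illustration only, a minimal Python sketch of reading the annotated end point pixel coordinate back from such a labelme json file is given below; the file name and label text are hypothetical, and only the standard labelme "shapes" layout is assumed.

```python
import json

def load_endpoint_pixel(json_path):
    """Read the manually annotated end point pixel coordinate from a labelme json file.

    Assumes the end point was annotated as a single "point" shape; the label text
    (e.g. "head_endpoint" or "tail_endpoint") is a hypothetical naming choice.
    """
    with open(json_path, "r", encoding="utf-8") as f:
        ann = json.load(f)
    for shape in ann.get("shapes", []):
        if shape.get("shape_type") == "point":
            u, v = shape["points"][0]          # annotated pixel coordinate (u, v)
            return float(u), float(v), shape.get("label", "")
    raise ValueError("no point annotation found in " + json_path)

# Example with a hypothetical file name:
# u, v, label = load_endpoint_pixel("library/head_0001.json")
```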
Specifically, in step 102, the flat carriage detection device based on the automated guided vehicle may perform feature extraction on the RGB data stream, then screen out from the flat carriage graphic library the endpoint local pictures whose similarity to the extracted feature vector exceeds a certain threshold, and output them as the corresponding head reference picture and tail reference picture according to the endpoint type annotated in each endpoint local picture.
The methods for screening in the flat car graphic library according to vector similarity include, but are not limited to, Euclidean distance, Mahalanobis distance, Manhattan distance, Chebyshev distance, Minkowski distance, Hamming distance, cosine similarity, Pearson correlation coefficient and the like.
Preferably, the embodiment of the invention can adopt image retrieval networks such as MobileNetVlad and the like for processing, and the specific implementation process comprises a training stage and an application stage, wherein:
In the training stage, in order to reduce the amount of computation, the pictures in the flat car graphic library are resampled to half of their original size and input into the MobileNetVlad network model; the model extracts the features of each picture and stores them as a vector. The similarity is then expressed by computing the difference between two such vectors, and the model parameters are repeatedly updated through gradient back propagation until convergence, so as to optimize the model.
In the application stage, the RGB data stream is directly input into a MobileNetVlad network, and the network outputs pictures which are the most similar to the head images and the tail images in the RGB data stream in a flat car graphic library and have the similarity exceeding a certain threshold value as head reference pictures and tail reference pictures.
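As an illustration of the retrieval step, the following Python sketch compares a query descriptor of the current frame against descriptors pre-computed for the graphic library using cosine similarity; the descriptor extraction itself (e.g. by a MobileNetVlad-style network) is not shown, and the threshold value is an assumed example.

```python
import numpy as np

def retrieve_reference(query_desc, library_descs, library_paths, sim_threshold=0.75):
    """Return the library picture most similar to the query frame descriptor.

    query_desc:    (D,) global descriptor of the current head/tail frame
    library_descs: (N, D) descriptors pre-computed for the flat car graphic library
    sim_threshold: minimum cosine similarity for accepting a reference picture (assumed value)
    """
    q = query_desc / np.linalg.norm(query_desc)
    lib = library_descs / np.linalg.norm(library_descs, axis=1, keepdims=True)
    sims = lib @ q                            # cosine similarity against every library picture
    best = int(np.argmax(sims))
    if sims[best] < sim_threshold:
        return None                           # no sufficiently similar reference picture found
    return library_paths[best], float(sims[best])
```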
And 103, matching characteristic points of the head image and the tail image in the RGB data stream with the head reference image and the tail reference image corresponding to the RGB data stream respectively, and determining the head pixel coordinates and the tail pixel coordinates.
Specifically, in step 103, for the head image and the closest head reference image in the graphic library, the flat carriage detection device based on the unmanned carrier takes the pixel point in the head image that matches the end point annotated in the corresponding head reference image as the head end point, assigns to it the pixel coordinate corresponding to the annotated point in the head reference image, and outputs the head pixel coordinate.
And similarly, regarding the tail image and the tail reference image closest to the tail image in the image library, taking the pixel point matched with the end point marked by the tail reference image in the tail image as a tail end point, assigning the pixel coordinate corresponding to the mark point in the tail reference image to the tail end point, and outputting the tail pixel coordinate.
And 104, acquiring the head space coordinate and the tail space coordinate of the flat car to be detected under an absolute space coordinate system based on the head pixel coordinate, the tail pixel coordinate and car head and tail point cloud data.
The carriage head-tail point cloud data are determined according to the depth data stream acquired by the camera shooting module while the RGB data stream is acquired and the camera internal parameters of the camera shooting module.
The carriage head and tail point cloud data refers to a set of three-dimensional point cloud coordinates obtained by converting a depth map in a depth data stream obtained when an RGB data stream is acquired from an image coordinate system to a world coordinate system by using camera internal and external parameters of a camera shooting module.
Specifically, in step 104, the flat carriage detection device based on the unmanned carrier indexes into the carriage head-tail point cloud data using the head pixel coordinate and the tail pixel coordinate respectively, and converts the indexed three-dimensional point cloud coordinates from the world coordinate system to the absolute space coordinate system, obtaining the head space coordinate (denoted (X1, Y1, Z1)) and the tail space coordinate (denoted (X2, Y2, Z2)) of the first length side of the flat carriage to be detected in the absolute space coordinate system.
And 105, constructing a linear equation by using the head space coordinates and the tail space coordinates, and determining the position information of the first length side edge of the flat carriage to be detected under an absolute space coordinate system. The position information of the first length side edge of the flat carriage to be detected under the absolute space coordinate system is used for controlling the unmanned carrier to carry out loading operation from the first length side edge.
Specifically, in step 105, the flat car detection device based on the unmanned carrier constructs a linear equation in the absolute space coordinate system using the head space coordinate (X1, Y1, Z1) and the tail space coordinate (X2, Y2, Z2), i.e. the two-point form of the straight line through the head and tail end points:
(X - X1)/(X2 - X1) = (Y - Y1)/(Y2 - Y1) = (Z - Z1)/(Z2 - Z1)
Therefore, the position information of the first length side of the flat car to be detected under the absolute space coordinate system can be determined, and the carriage length and the carriage height can be fitted. Then, according to the size of the pallet cargo, corresponding cargo positions are divided from the position information of the first length side under the absolute space coordinate system, so as to guide the forklift loading navigation to the operation position from the first length side.
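As a minimal sketch of this step (illustrative Python, with variable names chosen here), the first length side can be represented parametrically by the straight line through the head and tail space coordinates, from which the carriage length follows directly:

```python
import numpy as np

def fit_first_side(head_xyz, tail_xyz):
    """Describe the first length side as a parametric line through the head and tail points.

    Returns the line origin, its unit direction and the fitted carriage length; the
    carriage height can be read from the Y components of the two end points.
    """
    p1 = np.asarray(head_xyz, dtype=float)
    p2 = np.asarray(tail_xyz, dtype=float)
    direction = p2 - p1
    length = float(np.linalg.norm(direction))
    return p1, direction / length, length

# Any point of the side edge is p1 + t * direction with t in [0, length].
```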
According to the embodiment of the invention, an image retrieval network is used to extract, from a flat car graphic library that is updated and maintained in real time, the head reference image and the tail reference image closest to the RGB data stream monitored in real time; feature point pairs are then matched against the head image and the tail image in the RGB data stream to locate pixel coordinates; the head pixel coordinate and the tail pixel coordinate are used to index the carriage head-tail point cloud data and obtain the head space coordinate and the tail space coordinate in an absolute space coordinate system; and a linear equation describing the first length side is fitted so that loading operation positions can be guided according to each position along the first length side. The position of the flat carriage can be accurately identified in complex scenes where the truck parking position is not fixed and cargo types and specifications vary, a basis is provided for dividing cargo positions in combination with cargo specifications, accurate navigation to a designated cargo position is facilitated, and flexible loading is achieved.
On the basis of any one of the above embodiments, performing feature point matching on the head image and the tail image in the RGB data stream and the head reference image and the tail reference image corresponding to the RGB data stream, respectively, to determine a head pixel coordinate and a tail pixel coordinate, including: and extracting feature points and descriptors from the head image and the head reference image to obtain a first feature point set and a second feature point set.
Specifically, in step 103, the flat car detection device based on the unmanned carrier adopts the SuperPoint network to extract feature points and descriptors of the head image and the head reference image, so as to obtain a first feature point set and a second feature point set respectively.
The embodiment of the invention does not specifically limit the parameter setting of the SuperPoint network.
Illustratively, the confidence threshold is set to 0.25, the weights are set to outdoor detection, and the non-maximum suppression radius is 4. The feature points are corner points that do not change with factors such as lighting and shooting angle.
And judging the feature points, of which the distance between the pixel coordinates in the second feature point set and the pixel coordinates of the head end point carried by the head reference image is smaller than a second preset threshold value, as first detection key points.
The second preset threshold value is a threshold value set for a distance between a feature point and a marking point of the head reference image, and the smaller the value is, the higher the reliability that the corresponding feature point in the head reference image is the marking point is.
Illustratively, the present invention sets the second preset threshold to 5 pixels.
Specifically, the flat carriage detection device based on the unmanned carrier reads the json annotation file of the head reference picture and, in combination with the second preset threshold, screens out of the second feature point set the feature point closest to the pixel coordinate of the annotation point of the head reference picture; if this feature point lies within five pixels of the annotation point, it is taken as the first detection key point of the flat car head end in the reference picture.
If the first detection key point is not found, the confidence threshold of the feature point extracted by the SuperPoint network is reduced, and feature extraction is carried out again.
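A minimal Python sketch of this selection step is given below for illustration; it assumes the reference-picture feature points are available as an (N, 2) array of pixel coordinates (the SuperPoint inference itself is not shown) and uses the 5-pixel second preset threshold from the text.

```python
import numpy as np

def pick_detection_keypoint(ref_keypoints, endpoint_uv, max_dist=5.0):
    """Pick the reference-picture feature point closest to the annotated end point.

    ref_keypoints: (N, 2) pixel coordinates of feature points extracted from the reference picture
    endpoint_uv:   (2,) annotated end point pixel coordinate read from the json file
    max_dist:      second preset threshold in pixels (5 in the embodiment)
    Returns the index of the detection key point, or None if no feature point is close enough,
    in which case the SuperPoint confidence threshold is lowered and extraction is repeated.
    """
    dists = np.linalg.norm(ref_keypoints - np.asarray(endpoint_uv, dtype=float), axis=1)
    idx = int(np.argmin(dists))
    return idx if dists[idx] < max_dist else None
```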
And screening out a point pair containing the first detection key point from a first point pair set obtained by carrying out characteristic point matching on the first characteristic point set and the second characteristic point set, and setting the pixel coordinate of the first detection key point as the head pixel coordinate corresponding to a head end point in the head image.
Specifically, the flat carriage detection device based on the unmanned carrier can adopt a SuperGlue network to perform feature point matching on the first feature point set and the second feature point set to obtain a first point pair set comprising a plurality of matching point pairs, a point pair where a first detection key point is located is taken from the first point pair set, the other feature point in the point pair is positioned as a head end point in a head image, and pixel coordinates of the first detection key point are assigned to the feature point to output the head pixel coordinates.
Fig. 2 is one of the image matching visualization schematics provided by the present invention. The matching result of the head image and the head reference image is shown in fig. 2, where the thickened connecting line is the matching result of the point pair in which the first detection key point is located.
If the point pairs containing the first detection key point cannot be screened out from the first point pair set, correspondingly reducing the matching threshold parameters of the SuperGlue network structure so as to expand the matching point pairs as much as possible.
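For illustration, the screening of the matched point pairs can be sketched as follows; the match-array layout (one entry per reference feature point, -1 meaning unmatched) mirrors a typical SuperGlue output but is an assumption here, and the coordinate handling follows the text above.

```python
import numpy as np

def locate_endpoint_in_frame(matches, ref_idx, frame_keypoints, endpoint_uv):
    """Locate the carriage end point in the live camera frame from the matching result.

    matches:         (N,) array; matches[i] is the index of the frame feature point matched
                     to reference feature point i, or -1 if it was not matched
    ref_idx:         index of the detection key point in the reference picture
    frame_keypoints: (M, 2) feature point pixel coordinates of the head/tail image
    endpoint_uv:     annotated end point pixel coordinate carried by the reference picture
    """
    j = int(matches[ref_idx])
    if j < 0:
        # Not matched: lower the SuperGlue matching threshold and retry, as described above.
        return None
    # The matched feature point in the frame is taken as the head/tail end point; the text
    # assigns the pixel coordinate of the detection key point to it when outputting.
    return frame_keypoints[j], np.asarray(endpoint_uv, dtype=float)
```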
Correspondingly, extracting feature points and descriptors from the vehicle tail image and the vehicle tail reference image to obtain a third feature point set and a fourth feature point set.
And judging the feature points, of which the distance between the pixel coordinates in the third feature point set and the pixel coordinates of the tail end point carried by the tail reference image is smaller than the second preset threshold value, as second detection key points.
And screening out a point pair containing the second detection key point from a second point pair set obtained by carrying out characteristic point matching on the third characteristic point set and the fourth characteristic point set, and setting the pixel coordinate of the second detection key point as the tail pixel coordinate corresponding to a tail end point in the tail image.
Specifically, the flat carriage detection device based on the unmanned carrier uses the processing logic described in the above flow to extract feature points from the tail image and the tail reference image, obtaining the third feature point set and the fourth feature point set, and takes the feature point within five pixels around the annotation point of the tail reference image from the fourth feature point set as the second detection key point. The point pair in which the second detection key point is located is taken from the second point pair set obtained by matching the third feature point set and the fourth feature point set, the other feature point in that pair is located as the tail end point in the tail image, and the pixel coordinate of the second detection key point is assigned to that feature point so as to output the tail pixel coordinate.
FIG. 3 is a second view of the image matching visualization provided by the present invention. The matching result of the tail image and the tail reference image is shown in fig. 3, wherein the thickened connecting line is the matching result of the point pair where the second detection key point is located.
If the point pairs containing the second detection key points cannot be screened out from the second point pair set, correspondingly lowering the matching threshold parameters of the SuperGlue network structure so as to expand the matching point pairs as much as possible.
The embodiment of the invention utilizes a feature extraction network to extract feature points of a head image and a head reference image respectively, takes the feature point closest to a marking point in the head reference image as a detection key point of the head end of a reference image flat car, and takes the pixel coordinates of a matching point of the detection key point of the head end of the flat car as the pixel coordinates of the head of the flat car in the process of matching the feature points. And then the tail pixel coordinates are positioned from the tail image and the tail reference image by using the same processing flow. The head-tail end key position of the flat carriage can be accurately identified under the complex scene that the parking position of the truck is not fixed.
On the basis of any one of the above embodiments, acquiring the head space coordinate and the tail space coordinate of the flat car to be detected in the absolute space coordinate system based on the head pixel coordinate, the tail pixel coordinate and the car head-tail point cloud data includes: and indexing the head pixel coordinates and the tail pixel coordinates in the carriage head and tail point cloud data to obtain the head coordinates and the tail coordinates of the to-be-detected flat carriage under a camera coordinate system.
Specifically, the flat carriage detection device based on the unmanned carrier indexes into the carriage head-tail point cloud data according to the head pixel coordinate and the tail pixel coordinate, respectively obtaining the head coordinate (denoted (x1, y1, z1)) and the tail coordinate (denoted (x2, y2, z2)) of the flat carriage to be detected under the camera coordinate system.
It can be understood that, if the indexed head or tail coordinate is empty, the coordinate of an adjacent pixel is obtained and output as the head or tail coordinate.
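A minimal Python sketch of this indexing step is shown below, assuming the carriage head-tail point cloud is organized as an H x W x 3 array aligned with the RGB image; the neighbor search radius used for the empty-point fallback is an assumed value.

```python
import numpy as np

def index_point_cloud(cloud, u, v, search_radius=3):
    """Look up the camera-frame 3D coordinate of pixel (u, v) in an organized point cloud.

    cloud: (H, W, 3) point cloud aligned with the RGB image; empty points are NaN or all zero.
    If the indexed point is empty, the nearest valid neighbor within search_radius pixels
    is returned instead (the radius value is an assumption).
    """
    def valid(p):
        return bool(np.all(np.isfinite(p))) and not np.allclose(p, 0.0)

    h, w = cloud.shape[:2]
    u, v = int(round(u)), int(round(v))
    if valid(cloud[v, u]):
        return cloud[v, u]
    for r in range(1, search_radius + 1):
        for dv in range(-r, r + 1):
            for du in range(-r, r + 1):
                vv, uu = v + dv, u + du
                if 0 <= vv < h and 0 <= uu < w and valid(cloud[vv, uu]):
                    return cloud[vv, uu]
    return None
```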
And converting the coordinate system of the head coordinate and the tail coordinate to obtain the head space coordinate and the tail space coordinate of the flat car to be detected under the absolute space coordinate system.
The absolute space coordinate system is a geodetic coordinate system or an absolute coordinate system with an origin fixed at any corner point in the storage space.
Since AGV navigation depends on the signals received by the antenna, the position of the forklift in the absolute space coordinate system is actually the antenna position (Xh, Zh). Ignoring the height information on the Y axis, the camera coordinate system is reduced in dimension and the vehicle head coordinate is simplified to (x1, z1). The antenna coordinate system is obtained by translating the camera plane coordinate system; the distance difference between the origins of the two coordinate systems in the X-axis direction is denoted Δx, and the distance difference in the Z-axis direction is denoted Δz. The antenna coordinate system is then rotated by θ around the Y axis to obtain the absolute space coordinate system.
The absolute space coordinate system may be a custom coordinate system in a logistics field, which is not specifically limited in the embodiment of the present invention. For example, the absolute space coordinate system may be customized to set the origin as a certain corner in the object field.
Alternatively, the absolute space coordinate system may be a geocentric coordinate system whose origin coincides with the centroid of the earth and which is established with the reference ellipsoid and its normal as references.
Specifically, in step 104, the flat car detection device based on the unmanned carrier first converts the head coordinate (x1, z1) into the antenna coordinate system, where its coordinate is (x1 + Δx, z1 + Δz); with the antenna position at this moment set to (Xh, Zh), these values are substituted into the following formulas to calculate the head space coordinate (X1, Y1, Z1):
X1 = (x1 + Δx)cosθ + (z1 + Δz)sinθ + Xh
Y1 = y1
Z1 = -(x1 + Δx)sinθ + (z1 + Δz)cosθ + Zh
Similarly, with the position of the forklift antenna in the absolute space coordinate system at the moment the tail image is captured set to (Xt, Zt), the tail coordinate (x2, z2) becomes (x2 + Δx, z2 + Δz) in the antenna coordinate system. The forklift coordinate (Xt, Zt) is substituted into the following formulas to calculate the tail space coordinate (X2, Y2, Z2):
X2 = (x2 + Δx)cosθ + (z2 + Δz)sinθ + Xt
Y2 = y2
Z2 = -(x2 + Δx)sinθ + (z2 + Δz)cosθ + Zt
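The transformation can be sketched directly from these formulas; the following Python helper is illustrative only, with Δx, Δz, θ and the antenna position treated as calibration inputs whose names are chosen here.

```python
import math

def camera_to_absolute(x, y, z, dx, dz, theta, x_ant, z_ant):
    """Convert a camera-frame point to the absolute space coordinate system.

    (x, y, z):      point in the camera coordinate system (head or tail coordinate)
    (dx, dz):       translation between the camera origin and the antenna origin (Δx, Δz)
    theta:          rotation of the antenna coordinate system about the Y axis, in radians
    (x_ant, z_ant): antenna position in the absolute space coordinate system at capture time
    """
    xa, za = x + dx, z + dz                        # camera plane -> antenna coordinate system
    X = xa * math.cos(theta) + za * math.sin(theta) + x_ant
    Y = y                                          # height information is carried over unchanged
    Z = -xa * math.sin(theta) + za * math.cos(theta) + z_ant
    return X, Y, Z

# Head and tail use the antenna positions recorded at their respective capture instants:
# X1, Y1, Z1 = camera_to_absolute(x1, y1, z1, dx, dz, theta, Xh, Zh)
# X2, Y2, Z2 = camera_to_absolute(x2, y2, z2, dx, dz, theta, Xt, Zt)
```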
The embodiment of the invention firstly utilizes the head pixel coordinates and the tail pixel coordinates to index in point cloud data, converts the head coordinates and the tail coordinates under the obtained camera coordinate system into the absolute space coordinate system of AGV navigation, and obtains the head space coordinates and the tail space coordinates. The method can accurately identify the position of the head end and the tail end of the flat carriage in the storage space under the complex scene that the parking position of the truck is not fixed.
On the basis of any one of the above embodiments, before the acquiring, based on the head pixel coordinate, the tail pixel coordinate, and the car head-tail point cloud data, the head-space coordinate and the tail-space coordinate of the flat car to be detected in the absolute space coordinate system, the method further includes: and based on the depth data stream and the camera internal and external parameters of the camera shooting module, respectively carrying out distortion correction on the depth data stream and the RGB data stream and then aligning.
Specifically, before step 104, the flat carriage detection device based on the unmanned carrier performs the corresponding distortion correction on the depth data stream and the RGB data stream, and then performs a coordinate system conversion on the depth data stream using the camera internal and external parameters of the camera module, so that the depth data stream is aligned with the distortion-corrected RGB data stream.
And filtering the depth data stream after distortion correction, mapping the depth data stream to a point cloud image, and obtaining the head-tail point cloud data of the carriage aligned with the RGB data stream.
Specifically, the depth data stream is filtered by the flat carriage detection device based on the unmanned carrier, noise points are removed, and then the depth data stream is mapped to a point cloud image to obtain carriage head-tail point cloud data aligned with RGB data.
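As an illustrative sketch of mapping the corrected, RGB-aligned depth stream to an organized point cloud, the following Python function back-projects a depth frame with the camera intrinsics under a simple pinhole model; the depth unit scale is an assumption.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, depth_scale=0.001):
    """Back-project a depth image (H, W) into an organized point cloud (H, W, 3) in meters.

    depth:          distortion-corrected depth image aligned with the RGB stream, in raw units
    fx, fy, cx, cy: intrinsics of the RGB-aligned depth frame
    depth_scale:    raw unit to meter conversion (assumed 1 unit = 1 mm)
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32) * depth_scale
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    cloud = np.stack((x, y, z), axis=-1)
    cloud[z <= 0] = np.nan                 # mark filtered or invalid pixels as empty points
    return cloud
```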
According to the embodiment of the invention, the corresponding distortion correction flow is set for the depth data flow and the RGB data flow, so that the accuracy can be greatly improved in the subsequent image processing process.
On the basis of any one of the above embodiments, after the feature point matching is performed on the head image and the tail image in the RGB data stream and the head reference image and the tail reference image corresponding to the RGB data stream, determining the head pixel coordinate and the tail pixel coordinate, the method further includes: and constructing a linear equation based on the carriage width, the head space coordinate and the tail space coordinate of the flat carriage to be detected, and determining the position information of the second length side edge of the flat carriage to be detected under an absolute space coordinate system.
Specifically, after step 104, the flat car detection device based on the unmanned carrier may further convert the head space coordinate and the tail space coordinate of the first length side to the head and tail end point coordinates of the second length side by combining the car width of the flat car to be detected, and fit the position information of the second length side under the absolute space coordinate system in a linear equation constructed by the head and tail end point coordinates of the second length side.
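For illustration, deriving the second-side end points from the first-side end points and the carriage width can be sketched as follows, assuming the offset is taken horizontally (in the X-Z plane) and perpendicular to the first side; the sign of the offset depends on which side the camera observed.

```python
import numpy as np

def second_side_endpoints(head_xyz, tail_xyz, carriage_width):
    """Offset the first-side end points across the carriage to obtain the second side.

    The offset direction is the horizontal normal of the first side (X-Z plane); whether
    to add or subtract it is an application-specific choice depending on the observed side.
    """
    p1 = np.asarray(head_xyz, dtype=float)
    p2 = np.asarray(tail_xyz, dtype=float)
    d = p2 - p1
    d[1] = 0.0                                    # keep the offset horizontal
    d /= np.linalg.norm(d)
    normal = np.array([-d[2], 0.0, d[0]])         # horizontal direction perpendicular to the side
    return p1 + carriage_width * normal, p2 + carriage_width * normal
```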
And controlling the unmanned carrier to carry out loading operation on the corresponding side by utilizing the position information of the first length side and/or the second length side of the to-be-detected flat car under the absolute space coordinate system.
Specifically, the flat carriage detection device based on the unmanned carrier can plan the AGV operation position according to the position information of the first length side edge or the second length side edge under the absolute space coordinate system, and control the unmanned carrier to sequentially carry out loading operation on the first length side edge or the second length side edge according to planned operation points.
The second length side is parallel to the first length side and does not appear in the view of the camera module; its end point close to the vehicle head is its head end point, and its end point far from the vehicle head is its tail end point.
Preferably, the planning of the double-side operation positions of the AGV can be further performed by utilizing the position information of the first length side edge and the second length side edge under the absolute space coordinate system, and the unmanned carrier is controlled to perform loading operation on the first length side edge and the second length side edge according to the planned operation points.
It can be understood that the first length side and the second length side can be respectively provided with an unmanned carrier provided with a platform carriage detection device based on the unmanned carrier, and the two sides can simultaneously carry out visual tasks and analyze the linear equations of the corresponding sides.
According to the embodiment of the invention, the head and tail end points and the carriage edge linear equation of the other side can be determined from the carriage width of the flat car and the head space coordinate and tail space coordinate of either side, so that loading business can be conveniently carried out on both sides of the carriage at the same time, improving the working efficiency of the unmanned carrier.
On the basis of any one of the above embodiments, controlling the automated guided vehicle to perform loading operation on the corresponding side by using the position information of the first length side and/or the second length side of the to-be-detected flat car under the absolute space coordinate system, includes: and determining the cargo operation position based on the position information of the first length side edge and/or the second length side edge of the to-be-detected flat carriage under the absolute space coordinate system and the cargo attribute information.
Specifically, the flat car detection device based on the unmanned carrier can predict the storage scale of the cargo according to cargo attribute information (such as basic attribute information of cargo type, cargo name, cargo size, cargo weight, cargo quantity and the like), divide an area matched with the storage scale according to the position information of the first length side and/or the second length side in an absolute space coordinate system, and plan the cargo operation position according to the position contained in the area.
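A minimal Python sketch of dividing cargo operation positions along a fitted side edge from the pallet footprint length is given below; the clearance between neighboring pallets and the centering policy are assumptions.

```python
import numpy as np

def plan_cargo_positions(side_origin, side_direction, side_length, pallet_length, gap=0.05):
    """Divide a length side into cargo operation positions for equally sized pallets.

    side_origin, side_direction: parametric description of the side edge (unit direction vector)
    side_length:   fitted carriage length along that side, in meters
    pallet_length: footprint length of one pallet along the side, in meters
    gap:           clearance left between neighboring pallets (assumed 5 cm)
    Returns the center point of each cargo position in the absolute space coordinate system.
    """
    slot = pallet_length + gap
    n = int(side_length // slot)
    offsets = (np.arange(n) + 0.5) * slot          # center of each slot along the side
    return [np.asarray(side_origin, dtype=float) + t * np.asarray(side_direction, dtype=float)
            for t in offsets]
```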
And the goods operation position is issued to the unmanned carrier, so that the unmanned carrier controls the fork arms to carry the goods borne on the fork arms to the goods operation position from the corresponding length side edges to finish the loading operation.
Specifically, the flat car detection device based on the unmanned carrier issues the planned cargo operation position to the unmanned carrier.
The unmanned carrier receives and responds to the control instruction carrying the goods operation position, and after driving the unmanned carrier to move to the position of the corresponding length side edge, the control fork arm moves to the goods operation position to carry out the goods loading operation on the goods with corresponding specifications.
Fig. 4 is a second flow chart of the method for detecting a flat car based on an unmanned carrier according to the present invention. As shown in fig. 4, the embodiment of the invention provides a complete implementation process of a method for detecting a flat carriage based on an unmanned carrier:
step one, building a flat car graphic library:
and acquiring a plurality of side images of the flat car at different angles, different heights and different light rays, and reserving a local image of the end point of the flat car for cutting the images so as to reduce the calculated amount. Manually labeling flatbed car endpoints using labelme software
Step two, continuously acquiring RGBD images by using a TOF camera:
and after distortion correction is carried out on RGB data flow in the RGBD image, the depth data flow is aligned with the image after the RGB distortion correction by utilizing the camera internal and external parameters, and then noise is removed by utilizing a filter and mapped to the point cloud image, so that the head and tail point cloud data of the carriage aligned with the RGB data flow is obtained.
Step three, an image retrieval module:
the RGB data stream of the head end of the flat car obtained by the camera is input into a MobileNetVlad network, and the network outputs a picture which is the most similar to the picture in the flat car graphic library and has the similarity exceeding a certain threshold value, and the picture is called a head reference picture.
Step four, extracting features:
and extracting characteristic points and descriptors from the RGB data of the head end of the flat car and the head reference diagram by adopting a SuperPoint network, and reading a json annotation file of the head reference diagram, acquiring the nearest characteristic point near the annotation point, and taking the point as a first detection key point of the head end of the flat car of the reference diagram if the point is in five pixels around the annotation point.
Step five, a feature matching module:
and carrying out feature point matching on the reference image and the matching image by adopting a SuperGlue network, and taking the pixel coordinates of the matching points of the first detection key point at the head end of the flat car from a plurality of matching point pairs.
Step six, a ranging module:
and (3) acquiring the coordinates of the head of the flat carriage according to the point cloud data acquired in the step (II) from the index of the first detection key point, and acquiring the pixel coordinates of the adjacent points of the space point coordinates as the coordinates of the head of the flat carriage when the space point coordinates are empty.
If the resolution of the RGB image of the camera is 1280×720, the steps one to six take about 170ms.
Step seven, coordinate transformation:
and converting the head space coordinates under the camera coordinate system into the head space coordinates under the absolute space coordinate system of AGV navigation.
Step eight, acquiring space coordinates of the vehicle tail
Steps two to six are repeated using the tail image in the RGB data stream, and the tail space coordinate of the flat carriage tail end under the absolute space coordinate system is obtained.
Step nine, constructing a linear equation
In the absolute space coordinate system, a linear equation is constructed by utilizing the head space coordinate and the tail space coordinate, so that the length of the flat car and the height of the carriage can be determined.
And dividing cargo positions on the flat car according to the size of the pallet cargo, and determining the accurate position of forklift loading navigation. In addition, according to the width of the carriage of the flat car, the straight line equation of the head and tail end points and the carriage edge at the other side of the flat car can be determined, so that loading business can be conveniently and simultaneously carried out at two sides of the carriage.
According to the embodiment of the invention, corresponding cargo positions are divided along the first length side and/or the second length side according to the cargo attribute information of the pallet cargo, and the cargo delivery operation positions are planned so as to guide the unmanned carrier to operate at those positions. The exact positions for forklift loading navigation can thus be determined, which provides an effective solution for cargo space division and flexible loading.
Fig. 5 is a schematic structural diagram of the flat carriage detection device based on the unmanned carrier. On the basis of any of the above embodiments, as shown in fig. 5, the apparatus includes an image acquisition module 510, an image retrieval module 520, an image matching module 530, a head-to-tail positioning module 540, and a car fitting module 550, wherein:
the image acquisition module 510 is configured to, when it is determined that a separation distance between the unmanned carrier and a head point in a first length side of the flatbed to be detected is greater than or equal to a first preset threshold, start a camera module on the unmanned carrier, so as to continuously receive an RGB data stream acquired by the camera module. And the RGB data stream takes the head image and the tail image of the flat car to be detected as a start frame and a stop frame respectively.
And the image retrieval module 520 is configured to match the head reference map and the tail reference map corresponding to the RGB data stream according to the flatbed graphic library. The flat car graphic library stores partial images of head and tail end points of the reserved flat car and end point pixel coordinates marked on the partial images.
The image matching module 530 is configured to match the head image and the tail image in the RGB data stream with the head reference image and the tail reference image corresponding to the RGB data stream, respectively, to determine the head pixel coordinate and the tail pixel coordinate.
And the head-tail positioning module 540 is configured to obtain the head space coordinate and the tail space coordinate of the flat car to be detected in the absolute space coordinate system based on the head pixel coordinate, the tail pixel coordinate and the car head-tail point cloud data. The carriage head-tail point cloud data are determined according to the depth data stream acquired by the camera shooting module while the RGB data stream is acquired and the camera internal parameters of the camera shooting module.
And the carriage fitting module 550 is configured to construct a linear equation by using the head space coordinate and the tail space coordinate, and determine the position information of the first length side edge of the flat carriage to be detected under the absolute space coordinate system. The position information of the first length side edge of the flat carriage to be detected under the absolute space coordinate system is used for controlling the unmanned carrier to carry out loading operation from the first length side edge.
Specifically, the image acquisition module 510, the image retrieval module 520, the image matching module 530, the head-to-tail positioning module 540, and the car fitting module 550 are electrically connected in sequence.
During travel of the unmanned carrier, the image acquisition module 510 can monitor the distance between the unmanned carrier and surrounding objects in real time through a sensing element. When the distance between the unmanned carrier and the head point of the first length side of the flat car to be detected is detected to be greater than or equal to the first preset threshold, the vision task on the unmanned carrier is started: the camera module is started while the unmanned carrier is controlled to move from the head point towards the tail point of the flat car at a fixed distance from the car body, and the trigger mode of the camera module is set to continuous triggering so as to acquire the RGB data stream.
The image retrieval module 520 may perform feature extraction on the RGB data stream, screen out from the flat car graphic library the end point local pictures whose similarity with the extracted feature vector exceeds a certain threshold, and output them as the corresponding head reference picture and tail reference picture according to the end point type annotated in each local picture.
For the head image and the head reference picture closest to it in the graphic library, the image matching module 530 takes the pixel point in the head image that matches the end point annotated in the reference picture as the head end point, assigns to it the pixel coordinates corresponding to the annotation point in the head reference picture, and outputs the head pixel coordinates.
Similarly, for the tail image and the tail reference picture closest to it in the graphic library, the pixel point in the tail image that matches the end point annotated in the tail reference picture is taken as the tail end point, the pixel coordinates corresponding to the annotation point in the tail reference picture are assigned to it, and the tail pixel coordinates are output.
The head-tail positioning module 540 indexes the carriage head-tail point cloud data using the head pixel coordinates and the tail pixel coordinates respectively, and converts the indexed three-dimensional point cloud coordinates from the camera coordinate system to the absolute space coordinate system, obtaining the head space coordinate and the tail space coordinate of the first length side of the flat carriage to be detected in the absolute space coordinate system.
The car fitting module 550 constructs a linear equation using the head space coordinate and the tail space coordinate in the absolute space coordinate system, thereby determining the position information of the first length side of the flat car to be detected in the absolute space coordinate system and fitting the length of the flat car and the height of the carriage. Then, according to the size of the pallet cargo, corresponding cargo positions are divided along the first length side position information in the absolute space coordinate system, so as to guide the forklift loading navigation to the operation position from the first length side.
Optionally, the image matching module 530 includes a first feature extraction unit, a first detection keypoint screening unit, a first matching unit, a second feature extraction unit, a second detection keypoint screening unit, and a second matching unit, wherein:
and the first feature extraction unit is used for extracting feature points and descriptors from the head image and the head reference image to obtain a first feature point set and a second feature point set.
And the first detection key point screening unit is used for judging the characteristic points, of which the distance between the pixel coordinates in the second characteristic point set and the first endpoint pixel coordinates carried by the first reference image is smaller than a second preset threshold value, as the first detection key points.
And the first matching unit is used for screening out a point pair containing the first detection key point from a first point pair set obtained by carrying out characteristic point matching on the first characteristic point set and the second characteristic point set, and setting the pixel coordinate of the first detection key point as the head pixel coordinate corresponding to a head point in the head image.
And the second feature extraction unit is used for extracting feature points and descriptors from the vehicle tail image and the vehicle tail reference image to obtain a third feature point set and a fourth feature point set.
And the second detection key point screening unit is used for judging the characteristic points, of which the distance between the pixel coordinates in the third characteristic point set and the pixel coordinates of the tail end point carried by the tail reference image is smaller than the second preset threshold value, as second detection key points.
And the second matching unit is used for screening out a point pair containing the second detection key point from a second point pair set obtained by carrying out characteristic point matching on the third characteristic point set and the fourth characteristic point set, and setting the pixel coordinate of the second detection key point as the tail pixel coordinate corresponding to a tail end point in the tail image.
Optionally, the head-to-tail positioning module 540 includes a head-to-tail positioning unit and a coordinate system conversion unit, where:
and the head-tail positioning unit is used for indexing in the carriage head-tail point cloud data by utilizing the head pixel coordinates and the tail pixel coordinates to obtain the head coordinates and the tail coordinates of the to-be-detected flat carriage under a camera coordinate system.
And the coordinate system conversion unit is used for carrying out coordinate system conversion on the head coordinate and the tail coordinate to obtain the head space coordinate and the tail space coordinate of the flat car to be detected under the absolute space coordinate system.
The absolute space coordinate system is a geodetic coordinate system or an absolute coordinate system with an origin fixed at any corner point in the storage space.
Optionally, the apparatus further comprises a distortion correction module and a point cloud mapping module, wherein:
and the distortion correction module is used for aligning the depth data stream and the RGB data stream after distortion correction respectively based on the depth data stream and the camera internal and external parameters of the camera shooting module.
And the point cloud mapping module is used for carrying out filtering treatment on the depth data stream after distortion correction and mapping the depth data stream to a point cloud image to obtain the carriage head-tail point cloud data aligned with the RGB data stream.
Optionally, the device further comprises a side fitting module and a loading and unloading planning module, wherein:
and the side fitting module is used for constructing a linear equation based on the carriage width, the head space coordinate and the tail space coordinate of the flat carriage to be detected and determining the position information of the second length side of the flat carriage to be detected under an absolute space coordinate system.
And the loading and unloading planning module is used for controlling the unmanned carrier to carry out loading operation on the corresponding side by utilizing the position information of the first length side and/or the second length side of the platform truck to be detected under the absolute space coordinate system.
Optionally, the loading and unloading planning module comprises an operation planning unit and a control unit, wherein:
and the operation planning unit is used for determining the cargo operation position based on the position information of the first length side edge and/or the second length side edge of the flat carriage to be detected under the absolute space coordinate system and the cargo attribute information.
And the control unit is used for issuing the goods operation position to the unmanned carrier so that the unmanned carrier can control the fork arms to carry the goods borne on the fork arms to the goods operation position from the corresponding length side edges to finish the loading operation.
The embodiment of the invention provides a flat carriage detection device based on an unmanned carrier for executing the above flat carriage detection method based on the unmanned carrier. Its implementation is consistent with that of the method, achieves the same beneficial effects, and is not described again here.
According to the embodiment of the invention, an image retrieval network is used to retrieve, from a flat car graphic library updated and maintained in real time, the head reference picture and the tail reference picture closest to the RGB data stream monitored in real time; feature point pairs are then matched against the head image and the tail image in the RGB data stream to locate the head pixel coordinates and the tail pixel coordinates; these pixel coordinates are used to index the carriage head-tail point cloud data and obtain the head space coordinate and the tail space coordinate in the absolute space coordinate system; and a linear equation fitted to the first length side is constructed so that loading operation positions can be guided according to the positions along the first length side. The position of the flat carriage can thus be accurately identified in complex scenarios such as trucks with unfixed parking positions and cargo of various types and specifications, providing a basis for dividing cargo positions in combination with cargo specifications, facilitating accurate navigation to designated cargo positions, and achieving flexible loading.
Fig. 6 is a schematic structural diagram of the unmanned carrier according to the present invention. On the basis of any of the above embodiments, as shown in fig. 6, the unmanned carrier includes an unmanned carrier body 610 and a camera module 620 mounted along the longitudinal axis direction of the unmanned carrier body 610, and further includes a processor 630 disposed on the unmanned carrier body 610, where the processor 630 implements the flat carriage detection method based on the unmanned carrier described above when executing the program.
Specifically, the camera module 620 is mounted on the longitudinal axis of the unmanned carrier body 610, and its optical center, denoted O, is the origin of the camera coordinate system. The main antenna is installed 1 m above the camera module 620 along the longitudinal axis of the body 610, and its center, denoted O1, is the origin of the antenna coordinate system. In the actual warehouse space, an absolute space coordinate system is established with the Beidou base station as the origin, denoted Ow, the north direction as the Z axis and the east direction as the X axis (a sketch of this coordinate chain is given after the module descriptions below). Wherein:
The camera module 620 is configured to continuously capture images from the head end to the tail end of the flat car, simultaneously acquiring an RGB data stream and a depth data stream.
The processor 630 is configured to invoke logic instructions in the memory to execute the flat carriage detection method based on the unmanned carrier, sequentially performing the steps of flat car graphic library building, TOF camera image acquisition, image retrieval, feature extraction, feature matching, ranging, coordinate transformation and straight line construction, so as to identify the position of the flat carriage and provide a solution for cargo space division and flexible loading.
The unmanned carrier may be an unmanned intelligent forklift, or a conventional forklift retrofitted for unmanned operation.
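The relationship between the three coordinate systems described above (camera origin O, antenna origin O1 one metre above it, and the absolute system with origin Ow at the Beidou base station) can be illustrated as a chain of rigid transforms. In the sketch below, the AGV heading angle and the antenna position in the absolute system are hypothetical inputs that the navigation system would supply, and the 1 m vertical offset along the Y axis is an assumption of the illustration.

```python
import numpy as np

CAM_TO_ANT = np.array([0.0, 1.0, 0.0])        # assumed: antenna 1 m above the optical centre O

def camera_point_to_absolute(p_cam, antenna_pos_abs, heading_rad):
    """Chain camera -> antenna -> absolute: lift the point into the antenna frame,
    rotate it by the AGV heading about the vertical axis, then translate by the
    antenna position reported in the absolute (Beidou-based) coordinate system."""
    p_ant = np.asarray(p_cam, float) + CAM_TO_ANT
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    # Rotation about the vertical (Y) axis; X is east and Z is north in the absolute frame.
    R = np.array([[ c, 0.0,   s],
                  [0.0, 1.0, 0.0],
                  [ -s, 0.0,   c]])
    return R @ p_ant + np.asarray(antenna_pos_abs, float)
```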
According to the embodiment of the invention, an image retrieval network is used to retrieve, from a flat car graphic library updated and maintained in real time, the head reference picture and the tail reference picture closest to the RGB data stream monitored in real time; feature point pairs are then matched against the head image and the tail image in the RGB data stream to locate the head pixel coordinates and the tail pixel coordinates; these pixel coordinates are used to index the carriage head-tail point cloud data and obtain the head space coordinate and the tail space coordinate in the absolute space coordinate system; and a linear equation fitted to the first length side is constructed so that loading operation positions can be guided according to the positions along the first length side. The position of the flat carriage can thus be accurately identified in complex scenarios such as trucks with unfixed parking positions and cargo of various types and specifications, providing a basis for dividing cargo positions in combination with cargo specifications, facilitating accurate navigation to designated cargo positions, and achieving flexible loading.
Further, the logic instructions in the above memory may be implemented in the form of software functional units and, when sold or used as a stand-alone product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
In another aspect, the present invention also provides a computer program product, the computer program product including a computer program, the computer program being storable on a non-transitory computer readable storage medium, the computer program, when executed by a processor, being capable of executing the method for detecting a flat car based on an unmanned carrier, provided by the above methods, the method comprising: under the condition that the interval distance between the unmanned carrier and the head point in the first length side edge of the platform carriage to be detected is larger than or equal to a first preset threshold value, starting a camera shooting module on the unmanned carrier so as to continuously receive RGB data streams acquired by the camera shooting module; wherein, the RGB data stream takes the head image and the tail image of the flat car to be detected as a start frame and a stop frame respectively; matching a head reference picture and a tail reference picture corresponding to the RGB data stream according to a flat car graphic library; the flat car graphic library stores partial images of head and tail endpoints of a reserved flat car and endpoint pixel coordinates marked on the partial images; performing feature point matching on the head image and the tail image in the RGB data stream and the head reference image and the tail reference image corresponding to the RGB data stream respectively, and determining a head pixel coordinate and a tail pixel coordinate; acquiring a head space coordinate and a tail space coordinate of the flat car to be detected under an absolute space coordinate system based on the head pixel coordinate, the tail pixel coordinate and car head and tail point cloud data; the carriage head-tail point cloud data are determined according to the depth data stream acquired by the camera module while the RGB data stream is acquired and the camera internal parameters of the camera module; constructing a linear equation by utilizing the head space coordinate and the tail space coordinate, and determining the position information of the first length side edge of the flat carriage to be detected under an absolute space coordinate system; the position information of the first length side edge of the flat carriage to be detected under the absolute space coordinate system is used for controlling the unmanned carrier to carry out loading operation from the first length side edge.
In yet another aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform the method for automated guided vehicle-based flatbed detection provided by the above methods, the method comprising: under the condition that the interval distance between the unmanned carrier and the head point in the first length side edge of the platform carriage to be detected is larger than or equal to a first preset threshold value, starting a camera shooting module on the unmanned carrier so as to continuously receive RGB data streams acquired by the camera shooting module; wherein, the RGB data stream takes the head image and the tail image of the flat car to be detected as a start frame and a stop frame respectively; matching a head reference picture and a tail reference picture corresponding to the RGB data stream according to a flat car graphic library; the flat car graphic library stores partial images of head and tail endpoints of a reserved flat car and endpoint pixel coordinates marked on the partial images; performing feature point matching on the head image and the tail image in the RGB data stream and the head reference image and the tail reference image corresponding to the RGB data stream respectively, and determining a head pixel coordinate and a tail pixel coordinate; acquiring a head space coordinate and a tail space coordinate of the flat car to be detected under an absolute space coordinate system based on the head pixel coordinate, the tail pixel coordinate and car head and tail point cloud data; the carriage head-tail point cloud data are determined according to the depth data stream acquired by the camera module while the RGB data stream is acquired and the camera internal parameters of the camera module; constructing a linear equation by utilizing the head space coordinate and the tail space coordinate, and determining the position information of the first length side edge of the flat carriage to be detected under an absolute space coordinate system; the position information of the first length side edge of the flat carriage to be detected under the absolute space coordinate system is used for controlling the unmanned carrier to carry out loading operation from the first length side edge.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for detecting a flat carriage based on an unmanned carrier, characterized by comprising the following steps:
under the condition that the interval distance between the unmanned carrier and the head point in the first length side edge of the platform carriage to be detected is larger than or equal to a first preset threshold value, starting a camera shooting module on the unmanned carrier so as to continuously receive RGB data streams acquired by the camera shooting module; wherein, the RGB data stream takes the head image and the tail image of the flat car to be detected as a start frame and a stop frame respectively;
matching a head reference picture and a tail reference picture corresponding to the RGB data stream according to a flat car graphic library; the flat car graphic library stores partial images of head and tail endpoints of a reserved flat car and endpoint pixel coordinates marked on the partial images;
Performing feature point matching on the head image and the tail image in the RGB data stream and the head reference image and the tail reference image corresponding to the RGB data stream respectively, and determining a head pixel coordinate and a tail pixel coordinate;
acquiring a head space coordinate and a tail space coordinate of the flat car to be detected under an absolute space coordinate system based on the head pixel coordinate, the tail pixel coordinate and car head and tail point cloud data; the carriage head-tail point cloud data are determined according to the depth data stream acquired by the camera module while the RGB data stream is acquired and the camera internal parameters of the camera module;
constructing a linear equation by utilizing the head space coordinate and the tail space coordinate, and determining the position information of the first length side edge of the flat carriage to be detected under an absolute space coordinate system; the position information of the first length side edge of the flat carriage to be detected under the absolute space coordinate system is used for controlling the unmanned carrier to carry out loading operation from the first length side edge.
2. The method for detecting a flat car based on an unmanned carrier according to claim 1, wherein the feature point matching is performed on the head image and the tail image in the RGB data stream and the head reference image and the tail reference image corresponding to the RGB data stream, respectively, and determining the head pixel coordinates and the tail pixel coordinates includes:
Extracting feature points and descriptors from the head image and the head reference image to obtain a first feature point set and a second feature point set;
determining a feature point, of which the distance between the pixel coordinates in the second feature point set and the first endpoint pixel coordinates carried by the first reference image is smaller than a second preset threshold, as a first detection key point;
screening out a point pair containing the first detection key point from a first point pair set obtained by carrying out feature point matching on the first feature point set and the second feature point set, and setting the pixel coordinate of the first detection key point as the head pixel coordinate corresponding to a head end point in the head image;
correspondingly, extracting feature points and descriptors from the vehicle tail image and the vehicle tail reference image to obtain a third feature point set and a fourth feature point set;
determining a feature point with a distance between a pixel coordinate in the third feature point set and a pixel coordinate of a tail end point carried by the tail reference image smaller than the second preset threshold value as a second detection key point;
and screening out a point pair containing the second detection key point from a second point pair set obtained by carrying out characteristic point matching on the third characteristic point set and the fourth characteristic point set, and setting the pixel coordinate of the second detection key point as the tail pixel coordinate corresponding to a tail end point in the tail image.
3. The method for detecting a flat car based on an unmanned carrier according to claim 1, wherein the acquiring the head space coordinate and the tail space coordinate of the flat car to be detected in the absolute space coordinate system based on the head pixel coordinate, the tail pixel coordinate and the car head-tail point cloud data comprises:
indexing in the carriage head-tail point cloud data by utilizing the head pixel coordinates and the tail pixel coordinates to obtain the head coordinates and the tail coordinates of the flat carriage to be detected under a camera coordinate system;
performing coordinate system conversion on the head coordinate and the tail coordinate to obtain a head space coordinate and a tail space coordinate of the flat car to be detected under an absolute space coordinate system;
the absolute space coordinate system is a geodetic coordinate system or an absolute coordinate system with an origin fixed at any corner point in the storage space.
4. The method for detecting a flat car based on an unmanned carrier according to claim 1, wherein before the acquiring of the head space coordinate and the tail space coordinate of the flat car to be detected in an absolute space coordinate system based on the head pixel coordinate, the tail pixel coordinate and the car head-tail point cloud data, further comprises:
Based on the depth data stream and the camera internal and external parameters of the camera module, respectively carrying out distortion correction on the depth data stream and the RGB data stream and then aligning;
and filtering the depth data stream after distortion correction, mapping the depth data stream to a point cloud image, and obtaining the head-tail point cloud data of the carriage aligned with the RGB data stream.
5. The automated guided vehicle-based flat bed detection method of claim 1, wherein after performing feature point matching on the head image and the tail image in the RGB data stream and the head reference image and the tail reference image corresponding to the RGB data stream, respectively, determining a head pixel coordinate and a tail pixel coordinate, further comprises:
constructing a linear equation based on the carriage width, the head space coordinate and the tail space coordinate of the flat carriage to be detected, and determining the position information of the second length side edge of the flat carriage to be detected under an absolute space coordinate system;
and controlling the unmanned carrier to carry out loading operation on the corresponding side by utilizing the position information of the first length side and/or the second length side of the to-be-detected flat car under the absolute space coordinate system.
6. The method for detecting a flat car based on an unmanned carrier according to claim 5, wherein the controlling the unmanned carrier to carry out the loading operation on the corresponding side using the position information of the first length side and/or the second length side of the flat car to be detected in the absolute space coordinate system comprises:
determining a cargo operation position based on the position information of the first length side edge and/or the second length side edge of the flat carriage to be detected under the absolute space coordinate system and the cargo attribute information;
and the goods operation position is issued to the unmanned carrier, so that the unmanned carrier controls the fork arms to carry the goods borne on the fork arms to the goods operation position from the corresponding length side edges, and the goods loading operation is completed.
7. A flat car detection device based on an unmanned carrier, characterized by comprising:
the image acquisition module is used for starting the camera module on the unmanned carrier to continuously receive the RGB data stream acquired by the camera module under the condition that the interval distance between the unmanned carrier and the head point in the first length side edge of the platform car to be detected is larger than or equal to a first preset threshold value; wherein, the RGB data stream takes the head image and the tail image of the flat car to be detected as a start frame and a stop frame respectively;
The image retrieval module is used for matching the head reference image and the tail reference image corresponding to the RGB data stream according to the flatbed graphic library; the flat car graphic library stores partial images of head and tail endpoints of a reserved flat car and endpoint pixel coordinates marked on the partial images;
the image matching module is used for matching the head image and the tail image in the RGB data stream with the head reference image and the tail reference image corresponding to the RGB data stream respectively to determine the head pixel coordinate and the tail pixel coordinate;
the head-tail positioning module is used for acquiring the head space coordinate and the tail space coordinate of the flat car to be detected under an absolute space coordinate system based on the head pixel coordinate, the tail pixel coordinate and the car head-tail point cloud data; the carriage head-tail point cloud data are determined according to the depth data stream acquired by the camera module while the RGB data stream is acquired;
the carriage fitting module is used for constructing a linear equation by utilizing the head space coordinate and the tail space coordinate and determining the position information of the first length side edge of the flat carriage to be detected under an absolute space coordinate system; the position information of the first length side edge of the flat carriage to be detected under the absolute space coordinate system is used for controlling the unmanned carrier to carry out loading operation from the first length side edge.
8. An unmanned carrier, comprising an unmanned carrier body and a camera module arranged in the longitudinal axis direction of the unmanned carrier body, characterized by further comprising a processor disposed on the unmanned carrier body, wherein the processor, when executing a program, implements the flat carriage detection method based on the unmanned carrier according to any one of claims 1 to 6.
9. A non-transitory computer readable storage medium having a computer program stored thereon, wherein the computer program when executed by a processor implements the automated guided vehicle-based flatbed detection method of any one of claims 1 to 6.
10. A computer program product comprising a computer program which, when executed by a processor, implements the automated guided vehicle-based flatbed detection method of any one of claims 1 to 6.
CN202310257404.8A 2023-03-13 2023-03-13 Flat carriage detection method and device based on unmanned carrier and unmanned carrier Pending CN116486119A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310257404.8A CN116486119A (en) 2023-03-13 2023-03-13 Flat carriage detection method and device based on unmanned carrier and unmanned carrier

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310257404.8A CN116486119A (en) 2023-03-13 2023-03-13 Flat carriage detection method and device based on unmanned carrier and unmanned carrier

Publications (1)

Publication Number Publication Date
CN116486119A true CN116486119A (en) 2023-07-25

Family

ID=87214568

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310257404.8A Pending CN116486119A (en) 2023-03-13 2023-03-13 Flat carriage detection method and device based on unmanned carrier and unmanned carrier

Country Status (1)

Country Link
CN (1) CN116486119A (en)

Similar Documents

Publication Publication Date Title
CN109074668B (en) Path navigation method, related device and computer readable storage medium
CN110807350B (en) System and method for scan-matching oriented visual SLAM
CN113657224B (en) Method, device and equipment for determining object state in vehicle-road coordination
US11120280B2 (en) Geometry-aware instance segmentation in stereo image capture processes
EP3510562A1 (en) Method and system for calibrating multiple cameras
CN111260289A (en) Micro unmanned aerial vehicle warehouse checking system and method based on visual navigation
Fiala et al. Visual odometry using 3-dimensional video input
CN109871739B (en) Automatic target detection and space positioning method for mobile station based on YOLO-SIOCTL
CN109447902B (en) Image stitching method, device, storage medium and equipment
CN114037762A (en) Real-time high-precision positioning method based on image and high-precision map registration
KR102490521B1 (en) Automatic calibration through vector matching of the LiDAR coordinate system and the camera coordinate system
US20210272289A1 (en) Sky determination in environment detection for mobile platforms, and associated systems and methods
CN107767366B (en) A kind of transmission line of electricity approximating method and device
WO2023036212A1 (en) Shelf locating method, shelf docking method and apparatus, device, and medium
CN116309882A (en) Tray detection and positioning method and system for unmanned forklift application
Duan et al. Image digital zoom based single target apriltag recognition algorithm in large scale changes on the distance
CN116486119A (en) Flat carriage detection method and device based on unmanned carrier and unmanned carrier
CN111656404A (en) Image processing method and system and movable platform
CN115077563A (en) Vehicle positioning accuracy evaluation method and device and electronic equipment
CN114862953A (en) Mobile robot repositioning method and device based on visual features and 3D laser
CN113994382A (en) Depth map generation method, electronic device, calculation processing device, and storage medium
CN111967290A (en) Object identification method and device and vehicle
CN116468787A (en) Position information extraction method and device of forklift pallet and domain controller
CN116434192A (en) Automatic driving visual perception method and device, electronic equipment and readable storage medium
CN117037138A (en) Three-dimensional target detection method, three-dimensional target detection device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination