CN114186908A - Automatic checking system for three-dimensional warehouse - Google Patents

Automatic checking system for three-dimensional warehouse

Info

Publication number
CN114186908A
CN114186908A (application CN202111304531.6A)
Authority
CN
China
Prior art keywords
goods
remote control terminal
camera
automatic
Prior art date
2021-11-05
Legal status
Pending
Application number
CN202111304531.6A
Other languages
Chinese (zh)
Inventor
蒋舒婵
易桂丰
王中杰
Current Assignee
Hunan Yima Intelligent Technology Co ltd
Original Assignee
Hunan Yima Intelligent Technology Co ltd
Priority date
2021-11-05
Filing date
2021-11-05
Publication date
2022-03-15
Application filed by Hunan Yima Intelligent Technology Co ltd
Priority to CN202111304531.6A
Publication of CN114186908A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/08: Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q10/087: Inventory or stock management, e.g. order filling, procurement or balancing against orders
    • G06Q10/0875: Itemisation or classification of parts, supplies or services, e.g. bill of materials
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/60: Analysis of geometric attributes
    • G06T7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Geometry (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Resources & Organizations (AREA)
  • Finance (AREA)
  • Operations Research (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Quality & Reliability (AREA)
  • General Business, Economics & Management (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Warehouses Or Storage Devices (AREA)

Abstract

The invention relates to an automatic checking method for a three-dimensional warehouse, which comprises the following steps: S1: the remote control terminal is trained to establish a goods specification model; S2: the carrying equipment moves a stack to the stacker platform position and triggers the photoelectric sensors; S3: the remote control terminal receives the trigger signal and controls the 3D cameras to collect first image information of the stacked goods; S4: the remote control terminal controls the 2D cameras to collect second image information of the stacked goods; S5: the remote control terminal completes the goods count based on the image information and the volume information of the goods, and stores the inventory result data. The method provides automatic sensing of the stack, automatic specification identification, automatic image acquisition, automatic box counting and automatic data storage, so that stack data are acquired without human intervention, that is, the inventory is taken automatically.

Description

Automatic checking system for three-dimensional warehouse
Technical Field
The invention belongs to the technical field of warehouse goods checking, and particularly relates to an automatic checking method and system for a three-dimensional warehouse.
Background
The three-dimensional warehouse is widely used in the logistics industry and is characterized by safety, a high degree of automation and a small footprint. However, how to count the goods stored in a three-dimensional warehouse efficiently has long been an unsolved problem. At present a three-dimensional warehouse is generally counted manually, as follows: the goods in each cargo space are taken down by a stacker, counted by hand and then put back into the cargo space. For a large three-dimensional warehouse this process is very inefficient: a complete inventory of a typical large warehouse takes at least one week, and uncontrollable problems such as manual counting errors can occur.
Another approach is to take overall pictures of several sides of the stack with a 2D camera, identify the position of each box of goods in the images from image features, and then infer the number of boxes in the whole stack algorithmically. This approach has two disadvantages: 1. the space inside the three-dimensional warehouse is narrow, so for some large multi-layer stacks images can only be acquired from above and the four side faces cannot be captured completely by cameras; 2. if images can only be acquired from the top and not from the side, a 2D image cannot directly provide the three-dimensional height of the objects, so the number of layers of the stack cannot be judged and the total number of boxes cannot be calculated. How to count the goods in a large three-dimensional warehouse efficiently and automatically is therefore a problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a method and a system for realizing efficient and automatic checking of goods in a large-scale three-dimensional warehouse.
The technical solution adopted by the invention is as follows:
An automatic checking method for a three-dimensional warehouse comprises the following steps:
S1: the remote control terminal is trained to establish a goods specification model;
S2: the carrying equipment moves a stack to the stacker platform position and triggers the photoelectric sensor;
S3: the remote control terminal receives the trigger signal and controls the 3D camera to acquire first image information of the stacked goods;
S4: the remote control terminal controls the 2D camera to acquire second image information of the stacked goods;
S5: the remote control terminal completes the goods count based on the first image information and the second image information of the stacked goods, and stores the inventory result data.
Preferably, steps S3 and S4 may be performed simultaneously, or step S4 may be performed before step S3.
Preferably, in step S5 the goods count is completed as follows:
S51, obtaining the goods specification information: based on the second image information acquired by the 2D camera, the remote control terminal matches the images against the goods specification model, finds the goods specification information and temporarily stores it in the remote control terminal;
S52: based on the first image information acquired by the 3D camera and the calibration parameters, the remote control terminal computes and synthesizes a fused point cloud in the world coordinate system;
S53: the remote control terminal calculates the height and the surface area of the highest layer from the fused point cloud, and uses the goods specification information to determine the number of layers of the stack and the number of boxes on the top layer;
S54: the remote control terminal adds the number of boxes in the full layers to the number of boxes on the top layer to obtain the total number of boxes, completing the goods count; the inventory result data are then stored.
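As an illustrative example with hypothetical numbers (not taken from the patent): if the specification model says a full layer holds 12 boxes and the fused point cloud shows a stack 5 layers high with 7 boxes on the top layer, the total is 4 × 12 + 7 = 55 boxes.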
Preferably, synthesizing the fused point cloud in the world coordinate system in step S52 mainly comprises the following steps:
S521: placing a calibration board under the 3D cameras and taking the position of the cross point at the centre of the calibration board as the origin o of the world coordinate system; determining the coordinate values Pw(i) of all the other cross points, where i is the index of the cross point;
S522: locating the cross points of the calibration board in the first image through the colour information of the first image information, and recording the 3D-camera point coordinates Pc(k, j) at those image positions, where k is the index of the 3D camera and j the image coordinate;
S523: determining the correspondence between the point coordinates in the k-th 3D camera coordinate system and the point coordinates of the calibration board in the world coordinate system; the point coordinates satisfy:
Pw(i) = Tk · Pc(k, j)
where Tk is a 4 × 4 linear transformation matrix; once at least 3 point correspondences are determined, Tk can be determined uniquely;
S524: converting the point cloud coordinates into the world coordinate system, registering the point clouds of the multiple cameras, and synthesizing the fused point cloud in the world coordinate system.
Preferably, determining the number of boxes in step S53 further comprises the following steps:
S533: resampling the fused point cloud and triangulating it with Delaunay triangulation to form a surface model;
S534: projecting the surface model onto the bottom surface of the stacker platform to obtain a depth map based on that bottom surface;
S535: determining the height and the surface area of the highest layer from the depth map, and using the goods specification information to determine the number of layers of the stack and the number of boxes on the top layer.
According to another aspect of the invention, a three-dimensional warehouse automatic checking system is provided for carrying out the above automatic checking method. The system is installed on a stacker platform and mainly comprises a first image acquisition device, a second image acquisition device, a trigger device and a remote control terminal. The remote control terminal is in communication connection with the first image acquisition device, the second image acquisition device and the trigger device; the trigger device and the second image acquisition device are arranged at the bottom of the stacker platform, the first image acquisition device is arranged above the stacker platform, and the remote control terminal is arranged at the side of the stacker platform.
Preferably, the trigger device consists of two photoelectric sensors mounted on the stacker platform, one at the front and one at the rear along the direction in which stacks enter and leave.
Preferably, the first image acquisition device comprises 12 3D cameras arranged above the stacker platform in 3 rows and 4 columns.
Preferably, the second image acquisition device comprises 4 2D cameras, one arranged at each of the 4 corners of the bottom of the stacker platform.
Preferably, each 2D camera is additionally provided with a shadowless ring light source.
Compared with the prior art, the invention has the following advantages:
1. the approach of measuring the volume with a 2D camera alone is abandoned; 3D images are acquired from the top of the three-dimensional warehouse, so the stack volume can be measured directly and accurately;
2. local images are acquired by close-range 2D shooting and the goods specification is determined from the pattern on a local box; the volume of one box of goods is then obtained from a database using that specification, and the total number of boxes is finally obtained by dividing the overall volume by the volume of one box;
3. when a stack is moved onto the stacker platform, the system automatically senses the stack, identifies its specification, acquires images, counts the boxes and stores the data, so stack data are acquired without human intervention, that is, the inventory is taken automatically.
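As a purely illustrative example of the division in advantage 2, with hypothetical numbers: if the measured stack volume is 0.54 m³ and the database gives 0.0135 m³ per box for the identified specification, the total is 0.54 / 0.0135 = 40 boxes.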
Drawings
Fig. 1 is a schematic view of the device installation of the three-dimensional warehouse automatic checking system of the present invention;
fig. 2 is a control flowchart of the three-dimensional warehouse automatic checking system of the present invention.
Detailed Description
The invention is further illustrated by the following figures and examples:
one embodiment of the present invention is shown in fig. 1: a three-dimensional warehouse automatic checking system for the automatic checking method is installed on a stacker platform 1 and is composed of a first image acquisition device 3, a second image acquisition device 2, a trigger device (not shown in detail in the figure) and a remote control terminal 4, the remote control terminal 4 being in communication connection with the first image acquisition device 3, the second image acquisition device 2 and the trigger device;
specifically, the trigger device 5 consists of 2 photoelectric sensors installed at the front and the rear of the stacker platform along the direction in which stacks enter and leave. Because a stack may enter or leave from either direction, it is judged to be in place only when both sensors sense it; with a single sensor it would be impossible to tell whether the stack is fully in place or still being moved. When both sensors are triggered the stack is in place, when neither is triggered there is no stack on the platform, and when both photoelectric sensors return to the un-triggered state the stack has been moved off the platform.
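A minimal sketch of this two-sensor logic (the function and state names are illustrative assumptions, not part of the patent):

```python
def pallet_state(front_triggered: bool, rear_triggered: bool) -> str:
    """Interpret the two photoelectric sensors on the stacker platform.

    Both triggered     -> the stack is fully in place and capture may start.
    Neither triggered  -> no stack on the platform (or it has been moved out).
    Only one triggered -> the stack is still moving in or out; keep waiting.
    """
    if front_triggered and rear_triggered:
        return "in_place"
    if not front_triggered and not rear_triggered:
        return "absent"
    return "moving"
```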
Specifically, the second image acquisition device 2 consists of 2D cameras installed at the 4 corners of the bottom of the stacker platform 1, 30 cm to 35 cm from the bottom plane of the stack, so that the specification can be identified as long as there is at least one box of goods on the stack. Each 2D camera is illuminated by a shadowless ring light source so that the acquired images are sharp rather than blurred; the ring light is centred on the camera and mounted in the plane of the front edge of the lens, and the camera and light source are fixed to the stacker as a unit by a bracket. More specifically, the 2D cameras may be colour cameras.
Specifically, the first image acquisition device 3 consists of 12 3D cameras arranged in 3 rows and 4 columns above the top of the stacker platform 1. The 12 3D cameras cover a space of 1 m × 1.2 m × 2 m, and the distance between the cameras and the surface of the highest layer of the stack is 35 cm to 40 cm. The cameras in the first and fourth columns are tilted by about 30 degrees towards the centre line of the stacker so that the cross bar at the top of the stacker does not block their view. A calibration board consisting of a black-and-white checkerboard is placed below the cameras, and each camera can see part of the calibration board. The 12 3D cameras used here are RGBD cameras, i.e. every frame from a 3D camera contains both colour information and position information.
Specifically, the remote control terminal 4 is arranged at one side of the stacker platform 1, where it does not interfere with goods stacking or image acquisition. The remote control terminal 4 may be an industrial personal computer, a computer or any other electronic device with data processing and computing capability, which is not limited here.
According to an embodiment of the present invention, an automatic checking method for a three-dimensional warehouse mainly comprises the following steps:
S1: the remote control terminal 4 is trained to establish the goods specification model;
the goods specification model obtained by machine-learning training contains, for each goods specification, parameters such as the number of stacking layers, the layer height, the surface area of each box and the placing rule, and is temporarily stored in the system.
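As a minimal sketch, the specification record returned by such a trained model could look like the following (the class name, field names and values are illustrative assumptions, not the patent's data format):

```python
from dataclasses import dataclass

@dataclass
class GoodsSpec:
    spec_id: str             # product specification number
    boxes_per_layer: int     # boxes in one full layer under the placing rule
    layer_height_mm: float   # height of one layer
    box_top_area_mm2: float  # top surface area of a single box

# Hypothetical lookup table temporarily stored in the system after training.
SPEC_DB = {
    "SPEC-001": GoodsSpec("SPEC-001", boxes_per_layer=12,
                          layer_height_mm=250.0, box_top_area_mm2=90000.0),
}
```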
S2: moving a stack to a stacker platform 1 position by the carrying equipment to trigger a photoelectric sensor; when both sensors sense, it is an indication that the pile is in position, and if neither sensor senses, it is an indication that there is no pile on the platform.
S3: the remote control terminal 4 receives the trigger signal to control the 3D camera to collect first image information of the goods;
the first image packet obtainable at this step contains both color information and position information.
S4: the remote control terminal controls the 2D camera to collect second image information of the stacked goods;
this step obtains the pattern information on the sides of the goods in the stack, so that the goods specification can be judged from the patterns on the side faces of the boxes.
S5: the remote control terminal 4 finishes goods checking based on the acquired first image information and second image information of the goods; and stores the inventory result data.
Specifically, the order in which the first image information (step S3) and the second image information (step S4) are captured may be reversed, or the two captures may be performed simultaneously.
In the present embodiment, the goods count in step S5 is completed as follows:
S51, obtaining the goods specification information: based on the second image information captured by the 2D cameras, the remote control terminal 4 matches the images against the goods specification model, finds the goods specification information and temporarily stores it in the remote control terminal (an illustrative matching sketch follows this list);
S52: based on the first image information acquired by the 3D cameras and the calibration parameters, the remote control terminal 4 computes and synthesizes a fused point cloud in the world coordinate system, i.e. a complete virtual three-dimensional stack;
S53: the remote control terminal 4 calculates the height and the surface area of the highest layer from the fused point cloud and uses the goods specification information to determine the number of layers of the stack and the number of boxes on the top layer;
S54: the remote control terminal 4 adds the number of boxes in the full layers to the number of boxes on the top layer to obtain the total number of boxes, completing the goods count; the inventory result data are then stored.
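The patent performs the step S51 matching with a model obtained by machine-learning training; as a stand-in illustration only, the sketch below matches the 2D side image against stored reference patterns with plain OpenCV template matching (the function, the reference_patterns table and the choice of template matching are assumptions, not the patent's method):

```python
import cv2

def identify_spec(side_image, reference_patterns: dict) -> str:
    """Return the spec_id whose stored side pattern best matches the 2D image.

    side_image:          grayscale image of a box side (numpy array).
    reference_patterns:  spec_id -> stored grayscale pattern, smaller than side_image.
    """
    best_id, best_score = None, -1.0
    for spec_id, pattern in reference_patterns.items():
        result = cv2.matchTemplate(side_image, pattern, cv2.TM_CCOEFF_NORMED)
        _, score, _, _ = cv2.minMaxLoc(result)   # highest correlation score
        if score > best_score:
            best_id, best_score = spec_id, score
    return best_id
```

The returned spec_id would then be used to look up a record such as the hypothetical SPEC_DB entry sketched earlier.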
More specifically, synthesizing the fused point cloud in the world coordinate system in step S52 mainly comprises the following steps:
S521: placing a calibration board under the 3D cameras and taking the cross point at the centre of the calibration board as the origin o of the world coordinate system; determining the coordinate values Pw(i) of all the other cross points by manual measurement, where i is the index of the cross point;
S522: locating the cross points of the calibration board in the first image through the colour information of the first image information, and recording the 3D-camera point coordinates Pc(k, j) at those image positions, where k is the index of the 3D camera and j the image coordinate;
S523: determining the correspondence between the point coordinates in the k-th 3D camera coordinate system and the point coordinates of the calibration board in the world coordinate system; the point coordinates satisfy:
Pw(i) = Tk · Pc(k, j)
more specifically, this calibration only needs to be performed once, after the 3D cameras have been installed; no further calibration is needed during subsequent use. The points on the calibration board are precise coordinates and must be measured accurately, and the points Pc measured by the 3D cameras during calibration are all aligned to the measured points Pw on the calibration board. If a camera is moved manually, recalibration is required. Through manual point selection, a set of correspondences between point coordinates in each camera coordinate system and calibration-board point coordinates in the world coordinate system is obtained. Tk is a 4 × 4 linear transformation matrix; once at least 3 point correspondences are determined, Tk can be determined uniquely. To improve the stability of the solution the number of point correspondences should be increased, in which case the above equation can be solved by the least-squares method.
S524: converting the point cloud coordinates into the world coordinate system, registering the point clouds of the multiple cameras, and synthesizing the fused point cloud in the world coordinate system.
After the transformation matrix of each camera has been determined, every frame of point cloud from each camera is multiplied by that camera's transformation matrix; the point cloud coordinates are thereby converted into the world coordinate system and the point clouds of the multiple cameras are registered.
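A minimal sketch of these two steps, estimating Tk by least squares from the correspondences and then mapping each camera's cloud into the world frame, assuming homogeneous coordinates and plain numpy (an illustration only, not the patent's exact solver; in this general linear form four or more well-spread correspondences give a stable solution):

```python
import numpy as np

def estimate_Tk(pc_points: np.ndarray, pw_points: np.ndarray) -> np.ndarray:
    """Solve Pw ≈ Tk · Pc in the least-squares sense.

    pc_points, pw_points: (N, 3) matched calibration-board corners in the
    k-th camera frame and in the world frame respectively.
    Returns a 4 x 4 transformation matrix in homogeneous coordinates.
    """
    n = pc_points.shape[0]
    pc_h = np.hstack([pc_points, np.ones((n, 1))])   # (N, 4) homogeneous
    pw_h = np.hstack([pw_points, np.ones((n, 1))])   # (N, 4) homogeneous
    # Row form: pw_h ≈ pc_h @ Tk.T, solved column-wise by least squares.
    Tk_T, _, _, _ = np.linalg.lstsq(pc_h, pw_h, rcond=None)
    return Tk_T.T

def fuse_point_clouds(clouds: list, transforms: list) -> np.ndarray:
    """Register per-camera point clouds into one world-frame cloud.

    clouds[k]:     (Nk, 3) points in the k-th 3D camera's coordinate system.
    transforms[k]: the 4 x 4 matrix Tk estimated for that camera.
    """
    fused = []
    for pts, Tk in zip(clouds, transforms):
        pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])  # to homogeneous
        world = (Tk @ pts_h.T).T[:, :3]                        # back to 3-D
        fused.append(world)
    return np.vstack(fused)
```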
Specifically, determining the number of boxes in step S53 further comprises the following steps:
S533: resampling the fused point cloud and triangulating it with Delaunay triangulation to form a surface model;
after the fused point cloud in the world coordinate system has been obtained, the density of the points is uneven: some regions are dense and others sparse, so the cloud must be resampled to guarantee the spacing between points. The method is to traverse the points and discard a point if its nearest-neighbour distance is too small.
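A minimal sketch of this step with scipy (the distance threshold is an assumption, and the greedy thinning is one simple reading of "discard the point if its nearest neighbour is too close"):

```python
import numpy as np
from scipy.spatial import cKDTree, Delaunay

def resample(points: np.ndarray, min_dist: float = 5.0) -> np.ndarray:
    """Traverse the points and keep only those at least min_dist (mm) from the kept set."""
    kept = []
    for p in points:
        if not kept or cKDTree(np.asarray(kept)).query(p)[0] >= min_dist:
            kept.append(p)
    return np.asarray(kept)

def triangulate_surface(points: np.ndarray) -> Delaunay:
    """Delaunay-triangulate over (x, y); z acts as a height field on the pile surface."""
    return Delaunay(points[:, :2])
```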
S534: projecting the surface model to the bottom surface of the stacker to obtain a depth map based on the bottom surface;
in the depth map each pixel stores the height of the corresponding point in the vertical direction, so the height of the highest layer and its area can be determined.
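A minimal sketch of building such a bottom-plane depth map by rasterising the fused cloud and reading off the top-layer height and area (grid resolution and height tolerance are assumptions; the Delaunay surface is skipped here and the points are binned directly):

```python
import numpy as np

def depth_map(points: np.ndarray, cell_mm: float = 10.0):
    """Project the world-frame cloud onto the platform bottom plane.

    Each cell stores the maximum height (z) of the points falling into it.
    """
    xy, z = points[:, :2], points[:, 2]
    ij = np.floor((xy - xy.min(axis=0)) / cell_mm).astype(int)
    rows, cols = ij.max(axis=0) + 1
    depth = np.zeros((rows, cols))
    np.maximum.at(depth, (ij[:, 0], ij[:, 1]), z)
    return depth, cell_mm

def top_layer_stats(depth: np.ndarray, cell_mm: float, tol_mm: float = 30.0):
    """Height of the highest layer and its surface area (cells within tol of the maximum)."""
    top_height = depth.max()
    top_area_mm2 = np.count_nonzero(depth >= top_height - tol_mm) * cell_mm ** 2
    return top_height, top_area_mm2
```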
S535: determining the height and the surface area of the highest layer from the depth map, and using the goods specification information to determine the number of layers of the stack and the number of boxes on the top layer.
Here the default picking rule is that one layer is taken completely before the next layer is started, so the number of boxes in the stack can be determined uniquely; once the placing rule has been retrieved from the specification, the number of layers of the stack and the number of boxes on the uppermost layer can be obtained.
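A minimal sketch of the counting itself, combining the measured height and top-layer area with the hypothetical GoodsSpec record sketched earlier (the rounding behaviour is an assumption):

```python
def count_boxes(top_height_mm: float, top_area_mm2: float, spec) -> int:
    """Total boxes = boxes in the full layers below the top + boxes on the top layer."""
    layers = max(1, round(top_height_mm / spec.layer_height_mm))
    top_boxes = round(top_area_mm2 / spec.box_top_area_mm2)
    return (layers - 1) * spec.boxes_per_layer + top_boxes
```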
As shown in fig. 2, the system workflow is as follows:
when the stacker moves a stack onto the stacker platform 1, the trigger device, i.e. the two photoelectric sensors, is triggered. The remote control terminal 4 then automatically controls the first image acquisition device 3, i.e. the 12 3D cameras, to capture images, and once that capture is finished it controls the second image acquisition device 2 to switch on the light sources and capture the 2D camera images. Image acquisition is then complete and the remote control terminal 4 begins data processing. It processes the pictures taken by the 2D cameras and identifies the goods specification of the current images with the specification model obtained by machine-learning training. Using the specification number, the corresponding parameters, such as the number of stacking layers, the layer height and the surface area of each box, are looked up in the database and temporarily stored in the system. The partial three-dimensional structures acquired by the 12 3D cameras are then merged into one complete virtual three-dimensional stack using the calibration parameters. The number of layers of the stack is determined from the highest value in this virtual stack, and the number of boxes on the top layer is determined from the surface area at that height. Because unstacking is restricted to the top of the stack and boxes are never extracted from the bottom, only the boxes on the top layer need to be counted; adding the boxes in the full layers below gives the total number of boxes. Finally the remote control terminal 4 stores the result data on the server and waits for the two photoelectric sensors to return to the un-triggered state, i.e. for the stack to be moved off the platform, then waits for the next calculation, and the cycle repeats.
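Tying the pieces together, the flow in fig. 2 could be sketched as follows (all helper functions are the hypothetical sketches above; the sensor, camera, database and server interfaces are placeholders, not a real API):

```python
def inventory_cycle(sensors, cameras_3d, cameras_2d,
                    spec_db, reference_patterns, transforms, server):
    """One counting cycle: trigger -> capture -> identify spec -> fuse -> count -> store."""
    if pallet_state(*sensors.read()) != "in_place":
        return None                                           # wait for the next stack

    clouds = [cam.capture_cloud() for cam in cameras_3d]      # first image information
    sides = [cam.capture_gray() for cam in cameras_2d]        # second image information

    spec = spec_db[identify_spec(sides[0], reference_patterns)]
    fused = resample(fuse_point_clouds(clouds, transforms))
    depth, cell = depth_map(fused)
    top_h, top_area = top_layer_stats(depth, cell)

    total = count_boxes(top_h, top_area, spec)
    server.store({"spec_id": spec.spec_id, "total_boxes": total})
    return total
```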
The present invention has been described above by way of example, but it is not limited to the specific embodiments described; any modification or variation made on the basis of the present invention falls within the scope of protection claimed.

Claims (10)

1. An automatic checking method for a three-dimensional warehouse, characterized by comprising the following steps:
S1: the remote control terminal is trained to establish a goods specification model;
S2: the carrying equipment moves a stack to the stacker platform position and triggers the photoelectric sensor;
S3: the remote control terminal receives the trigger signal and controls the 3D camera to acquire first image information of the stacked goods;
S4: the remote control terminal controls the 2D camera to acquire second image information of the stacked goods;
S5: the remote control terminal completes the goods count based on the first image information and the second image information of the stacked goods, and stores the inventory result data.
2. The automatic checking method for a three-dimensional warehouse according to claim 1, characterized in that steps S3 and S4 are performed simultaneously, or step S4 is performed before step S3.
3. The automatic checking method for a three-dimensional warehouse according to claim 2, characterized in that
in step S5 the goods count is completed as follows:
S51, obtaining the goods specification information: based on the second image information acquired by the 2D camera, the remote control terminal matches the images against the goods specification model, finds the goods specification information and temporarily stores it in the remote control terminal;
S52: based on the first image information acquired by the 3D camera and the calibration parameters, the remote control terminal computes and synthesizes a fused point cloud in the world coordinate system;
S53: the remote control terminal calculates the height and the surface area of the highest layer from the fused point cloud and uses the goods specification information to determine the number of layers of the stack and the number of boxes on the top layer;
S54: the remote control terminal adds the number of boxes in the full layers to the number of boxes on the top layer to obtain the total number of boxes, completing the goods count; the inventory result data are then stored.
4. The automatic checking method for a three-dimensional warehouse according to claim 3, characterized in that synthesizing the fused point cloud in the world coordinate system in step S52 mainly comprises the following steps:
S521: placing a calibration board under the 3D cameras and taking the position of the cross point at the centre of the calibration board as the origin o of the world coordinate system; determining the coordinate values Pw(i) of all the other cross points, where i is the index of the cross point;
S522: locating the cross points of the calibration board in the first image through the colour information of the first image information, and recording the 3D-camera point coordinates Pc(k, j) at those image positions, where k is the index of the 3D camera and j the image coordinate;
S523: determining the correspondence between the point coordinates in the k-th 3D camera coordinate system and the point coordinates of the calibration board in the world coordinate system; the point coordinates satisfy:
Pw(i) = Tk · Pc(k, j)
where Tk is a 4 × 4 linear transformation matrix; once at least 3 point correspondences are determined, Tk can be determined uniquely;
S524: converting the point cloud coordinates into the world coordinate system, registering the point clouds of the multiple cameras, and synthesizing the fused point cloud in the world coordinate system.
5. The automatic checking method for a three-dimensional warehouse according to claim 4, characterized in that determining the number of boxes in step S53 further comprises the following steps:
S533: resampling the fused point cloud and triangulating it with Delaunay triangulation to form a surface model;
S534: projecting the surface model onto the bottom surface of the stacker platform to obtain a depth map based on that bottom surface;
S535: determining the height and the surface area of the highest layer from the depth map, and using the goods specification information to determine the number of layers of the stack and the number of boxes on the top layer.
6. An automatic checking system for a three-dimensional warehouse, for implementing the automatic checking method according to any one of claims 1 to 5, characterized in that: the automatic checking system is installed on a stacker platform and mainly comprises a first image acquisition device, a second image acquisition device, a trigger device and a remote control terminal; the remote control terminal is in communication connection with the first image acquisition device, the second image acquisition device and the trigger device, the trigger device and the second image acquisition device are arranged at the bottom of the stacker platform, the first image acquisition device is arranged above the stacker platform, and the remote control terminal is arranged at the side of the stacker platform.
7. The automatic checking system for a three-dimensional warehouse according to claim 6, characterized in that: the trigger device consists of two photoelectric sensors arranged on the stacker platform, one at the front and one at the rear along the direction in which stacks enter and leave.
8. The automatic checking system for a three-dimensional warehouse according to claim 7, characterized in that: the first image acquisition device comprises 12 3D cameras arranged above the stacker platform in 3 rows and 4 columns.
9. The automatic checking system for a three-dimensional warehouse according to claim 6, characterized in that: the second image acquisition device comprises 4 2D cameras, one arranged at each of the 4 corners of the bottom of the stacker platform.
10. The automatic checking system for a three-dimensional warehouse according to claim 9, characterized in that: each 2D camera is additionally provided with a shadowless ring light source.
CN202111304531.6A (priority date 2021-11-05, filing date 2021-11-05): Automatic checking system for three-dimensional warehouse; published as CN114186908A; status: Pending.

Priority Applications (1)

Application Number: CN202111304531.6A (published as CN114186908A)
Priority Date: 2021-11-05
Filing Date: 2021-11-05
Title: Automatic checking system for three-dimensional warehouse

Applications Claiming Priority (1)

Application Number: CN202111304531.6A (published as CN114186908A)
Priority Date: 2021-11-05
Filing Date: 2021-11-05
Title: Automatic checking system for three-dimensional warehouse

Publications (1)

Publication Number: CN114186908A
Publication Date: 2022-03-15

Family

ID=80540758

Family Applications (1)

Application Number: CN202111304531.6A
Title: Automatic checking system for three-dimensional warehouse
Priority Date: 2021-11-05
Filing Date: 2021-11-05
Status: Pending

Country Status (1)

Country: CN
Publication: CN114186908A


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination