WO2021190595A1 - Method for detecting packages, device, computing device, logistics system, and storage medium


Info

Publication number
WO2021190595A1
WO2021190595A1 · PCT/CN2021/082964 · CN2021082964W
Authority
WO
WIPO (PCT)
Prior art keywords
package
barcode
image
area
coordinate system
Prior art date
Application number
PCT/CN2021/082964
Other languages
English (en)
Chinese (zh)
Inventor
顾睿
邓志辉
Original Assignee
杭州海康机器人技术有限公司
Priority date
Filing date
Publication date
Application filed by 杭州海康机器人技术有限公司
Publication of WO2021190595A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06K - GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 - Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10544 - Methods or arrangements for sensing record carriers by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K7/10712 - Fixed beam scanning
    • G06K7/10722 - Photodetector array or CCD scanning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06K - GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 - Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/08 - Logistics, e.g. warehousing, loading or distribution; Inventory or stock management

Definitions

  • This application relates to the field of logistics automation technology, and in particular to a method, device, computing device, logistics system, and storage medium for detecting packages.
  • In logistics, a package on the conveyor belt needs to have its attributes detected, for example, the size, volume, face sheet, barcode, and other attributes of the package.
  • Devices that detect different properties of the package can be distributed at different locations on the conveyor belt.
  • This application proposes a method, device, computing device, logistics system, and storage medium for detecting packages, which can automatically associate barcode information with packages and their attributes.
  • a method for detecting packages, including:
  • when a package on the conveyor belt passes through a detection area, recognizing the contour of a designated area on the package and the detection position of the contour in the detection area, the designated area including the barcode of the package;
  • predicting, according to the detection position of the contour in the detection area and the conveying speed of the conveyor belt, the code reading position of the contour in the imaging area of the image coordinate system of the code reading camera and a first image identifier corresponding to the code reading position, where the code reading camera is located downstream of the detection area and the first image identifier is used to identify the package image taken by the code reading camera when the contour reaches the code reading position;
  • when the package image corresponding to the first image identifier is acquired, performing barcode recognition on the package image;
  • matching the code reading position with the recognized barcode; and
  • when the code reading position matches a barcode, associating the matched barcode with the package.
  • In some embodiments, the detection position of the contour in the detection area is the coordinates of the contour in the first world coordinate system, and predicting the code reading position of the contour in the imaging area of the image coordinate system of the code reading camera and the first image identifier corresponding to the code reading position, according to the detection position of the contour in the detection area and the conveying speed of the conveyor belt, includes:
  • determining a second image identifier of the image frame collected by the code reading camera at the current moment, the second image identifier being a frame number or a timestamp;
  • taking the projection position of the contour in the image coordinate system of the code reading camera as the code reading position; and
  • determining the first image identifier corresponding to the code reading position.
  • the method of detecting the package further includes:
  • detecting a target attribute of the package, the target attribute including at least one of the volume of the package, the size of the package, the weight of the package, and the face sheet of the package;
  • the matched barcode is associated with the target attribute.
  • In some embodiments, matching the code reading position with the recognized barcode includes:
  • the method of detecting the package further includes:
  • continuously updating, according to the detection position and the conveying speed, the predicted position of the package in the extended processing area over time, and adding a tracking frame to the panoramic image according to the predicted position;
  • when the package is associated with a barcode, the tracking frame is presented in a first color, and when the package is not associated with a barcode, the tracking frame is presented in a second color.
  • recognizing the contour of the designated area on the package and recognizing the detection position of the contour in the detection area includes:
  • acquiring a depth image of the package, and determining, according to the depth image, the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system, where the upper surface is the designated area; or
  • when the package on the conveyor belt passes through the detection area, acquiring a grayscale image and a depth image of the package; determining the contour of a single surface area of the package in the grayscale image; determining, according to the contour of the single surface area in the grayscale image, a first depth area in the depth image corresponding to the single surface area; and determining, according to the first depth area, the detection position of the contour of the single surface area in the first world coordinate system, where the single surface area is the designated area.
  • the method of detecting the package further includes:
  • acquiring the first world coordinate system established according to the first calibration board, the first calibration board being placed on the conveyor belt and in the field of view of the depth camera;
  • determining, according to the internal parameters of the code reading camera, the fourth mapping relationship between the code reading camera coordinate system and the image coordinate system of the code reading camera.
  • In some embodiments, determining the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system according to the depth image includes:
  • determining a three-dimensional model of the package according to the depth image, and determining, according to the coordinates of the three-dimensional model in the depth camera coordinate system and the first mapping relationship, the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system, the detection position being represented by the coordinates of at least three vertices of the upper surface of the package in the first world coordinate system; or
  • determining the detection position of the upper surface of the package in the first world coordinate system, the detection position being represented by the coordinates of at least three vertices of the second depth region in the first world coordinate system.
  • a device for detecting packages including:
  • the detection unit is configured to identify the contour of a designated area on the package when the package passes through the detection area on the conveyor belt, and identify the detection position of the contour in the detection area, and the designated area includes the barcode of the package;
  • a prediction unit, configured to predict, according to the detection position of the contour in the detection area and the conveying speed of the conveyor belt, the code reading position of the contour in the imaging area of the image coordinate system of the code reading camera and a first image identifier corresponding to the code reading position, where the code reading camera is located downstream of the detection area and the first image identifier is used to identify the package image taken by the code reading camera when the contour reaches the code reading position;
  • the barcode recognition unit is configured to perform barcode recognition on the package image when the package image corresponding to the first image identifier is obtained;
  • the matching unit is used to match the barcode reading position with the recognized barcode
  • the associating unit is used to associate the matched barcode with the package.
  • a computing device, including: a memory; a processor; and a program stored in the memory and configured to be executed by the processor, the program including instructions for executing the method for detecting packages according to the present application.
  • a storage medium storing a program, the program including instructions that, when executed by a computing device, cause the computing device to execute the method for detecting packages according to the present application.
  • a logistics system which includes: a computing device; a conveyor belt; a depth camera; and a code reading camera.
  • the code reading position of the designated area of the package in the imaging area and the first image identification corresponding to the code reading position can be predicted.
  • the barcode reading position can be considered as the projection position of the designated area of the package in the image coordinate system when the package image corresponding to the first image identifier is acquired.
  • According to the package detection solution of the present application, by matching the position of the barcode with the code reading position in the package image, the association relationship between the package and the barcode can be determined, and further, the barcode can be associated with the attributes of the package.
  • Figure 1 shows a schematic diagram of a logistics system according to some embodiments of the present application
  • FIG. 2 shows a flowchart of a method 200 for detecting a package according to some embodiments of the present application
  • FIG. 3 shows a flowchart of a method 300 for detecting a package according to some embodiments of the present application
  • Figure 4 shows a schematic diagram of a coordinate system in a logistics system according to some embodiments of the present application
  • FIG. 5 shows a flowchart of a method 500 for determining a detection position according to some embodiments of the present application
  • FIG. 6 shows a flowchart of a method 600 for determining a detection position according to some embodiments of the present application
  • FIG. 7 shows a flowchart of a method 700 for determining a detection position corresponding to the upper surface of a package according to some embodiments of the present application
  • Fig. 8 shows a flowchart of a method 800 for determining a detection position corresponding to the upper surface of a package according to some embodiments of the present application
  • FIG. 9 shows a flowchart of a method 900 for predicting a barcode reading position according to some embodiments of the present application.
  • FIG. 10A shows a schematic diagram of the conveyor belt carrying a package that has not yet entered the field of view of the depth camera 120;
  • FIG. 10B shows the detection position of the package B3 in the detection area determined by the computing device 140
  • FIG. 10C shows a schematic diagram of a target location of the package B3 predicted by the computing device 140
  • FIG. 10D shows a schematic diagram of the projection of 4 vertices to the image coordinate system
  • FIG. 10E shows the projection area in the image coordinate system when the package is at the target position in FIG. 10C;
  • FIG. 11 shows a flowchart of a method 1100 for position matching a barcode and a barcode reading position according to some embodiments of the present application
  • Figure 12A shows a package image according to some embodiments of the present application
  • Figure 12B shows a package image according to some embodiments of the present application.
  • Figure 13A shows a schematic diagram of a logistics system according to some embodiments of the present application.
  • FIG. 13B shows a schematic diagram of a panoramic image according to some embodiments of the present application.
  • Figure 14 shows a schematic diagram of a logistics system according to some embodiments of the present application.
  • FIG. 15 shows a schematic diagram of an apparatus 1500 for detecting packages according to some embodiments of the present application.
  • FIG. 16 shows a schematic diagram of an apparatus 1600 for detecting a package according to some embodiments of the present application
  • Figure 17 shows a schematic diagram of a computing device according to some embodiments of the present application.
  • Fig. 1 shows a schematic diagram of a logistics system according to some embodiments of the present application.
  • the logistics system may include a conveyor belt 110, a depth camera 120, a code reading camera 130 and a computing device 140.
  • The conveyor belt 110 conveys packages along its conveying direction (for example, from right to left in FIG. 1).
  • the depth camera 120 shown in FIG. 1 is, for example, a structured light camera.
  • the structured light camera may include a laser emission module 121 and an image acquisition module 122.
  • the field of view V1 of the image acquisition module 122 can cover the detection area S1 on the conveyor belt 110.
  • The depth camera 120 may also be a time-of-flight (ToF) camera, a binocular stereo camera, or the like.
  • the depth camera 120 can collect images of the package passing through the field of view V1, and output a sequence of image frames to the computing device 140 in real time.
  • the computing device 140 may be, for example, a server, a notebook computer, a tablet computer, or a handheld business communication device.
  • the computing device 140 can build a three-dimensional model of the package according to the sequence of image frames from the depth camera 120. In this way, the computing device 140 can detect the target attribute of the package, for example, determine the size of the package or the volume of the package.
  • the computing device 140 may determine the detection location of the package on the conveyor belt 110 in the detection area S1 at the moment when the target attribute is detected, that is, determine the actual location of the package at the current moment.
  • The detection position of the package in the detection area S1 can be represented, for example, by the coordinates of the 4 vertices of the upper surface of the package.
  • the code reading camera 130 is downstream of the depth camera 120.
  • the barcode reading field V2 of the barcode reading camera 130 covers the barcode identification area S2 on the conveyor belt 110.
  • the code reading camera 130 may be an industrial camera with an image capturing function, or a smart camera integrated with image capturing and image processing functions.
  • the code reading camera 130 may output image frames to the computing device 140.
  • the computing device 140 can perform barcode recognition on the image frame from the barcode reading camera 130.
  • the computing device 140 can detect one-dimensional barcodes and/or two-dimensional codes.
  • the computing device 140 may establish an association relationship between the barcode information and the target attribute. The manner of establishing an association relationship will be described below with reference to FIG. 2.
  • FIG. 2 shows a flowchart of a method 200 for detecting a package according to some embodiments of the present application.
  • the method 200 may be executed by the computing device 140.
  • step S201 when the package on the conveyor belt passes through the detection area, the contour of the designated area on the package is recognized, and the detection position of the contour in the detection area is recognized.
  • the designated area includes the barcode of the package, for example, the upper surface of the package or the surface area of the package.
  • step S201 for example, the coordinates of at least three vertices in the designated area may be determined, and the coordinates of the at least three vertices may be used to indicate the detection position of the contour in the detection area.
  • the coordinates of multiple vertices of the designated area can be obtained to indicate the detection position of the contour in the detection area.
  • step S202 according to the detection position of the contour in the detection area and the conveying speed of the conveyor belt, the code reading position of the contour in the imaging area of the image coordinate system of the code reading camera and the first image identification corresponding to the code reading position are predicted.
  • the code reading camera 130 is located downstream of the detection area.
  • the first image identification is used to identify the package image taken by the barcode reading camera 130 when the outline reaches the barcode reading position.
  • the first image identifier is, for example, a frame number or a time stamp.
  • the code reading position of the contour refers to the coordinate position of the contour in the image coordinate system of the code reading camera 130 when at least a part of the contour is in the imaging area.
  • the target attribute includes at least one of the following: package volume and package size.
  • the target attribute of the package may be determined according to the sequence of image frames collected by the depth camera 120.
  • the depth camera 120 is, for example, a line structured light camera.
  • step S201 can generate a three-dimensional model of the package according to the scanned image frame sequence. In this way, in step S201, the target attribute of the package can be determined according to the three-dimensional model.
  • the detection position of the upper surface of the package in the detection area on the conveyor belt 110 can be determined.
  • the coordinates of 4 vertices on the upper surface of the package can be determined, and the coordinates of the 4 vertices can be used to indicate the detection position of the package in the detection area.
  • the detection position can be used as a starting point, and at least one code reading position can be determined according to the conveyor speed.
  • the reading position can be indicated by the coordinates of the 4 vertices on the upper surface of the package.
  • the coordinates of the 4 vertices can define a rectangular area.
  • the rectangular area corresponds to the upper surface of the package. At least a part of the rectangular area is in the reading field of view.
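The prediction described above amounts to translating the detected contour along the conveying direction at the belt speed. The following is a minimal illustrative sketch, not the patent's implementation: function and variable names are ours, and it assumes the x axis of the first world coordinate system points along the conveying direction.

```python
import numpy as np

def predict_reading_position(vertices_t0, belt_speed, code_area_start_x):
    """Return (arrival_time_offset, predicted_vertices).

    vertices_t0: (4, 3) array of upper-surface vertex coordinates at detection,
                 in the first world coordinate system (x along the belt).
    belt_speed: conveying speed along +x, in world units per second.
    code_area_start_x: x coordinate where the code reading field of view begins.
    """
    vertices_t0 = np.asarray(vertices_t0, dtype=float)
    leading_x = vertices_t0[:, 0].max()                 # front edge of the package
    dt = (code_area_start_x - leading_x) / belt_speed   # time until the contour arrives
    predicted = vertices_t0.copy()
    predicted[:, 0] += belt_speed * dt                  # translate along the belt
    return dt, predicted
```

The predicted vertices can then be projected into the image coordinate system of the code reading camera to obtain the code reading position, and `dt` fixes which frame (the first image identifier) will contain the package.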
  • step S203 when the package image corresponding to the first image identifier is acquired, barcode recognition is performed on the package image.
  • Then, step S204 may be executed to match the position of the recognized barcode with the code reading position.
  • step S204 may perform position matching on the barcode and the barcode reading position in the same coordinate system (for example, the image coordinate system of the barcode reading camera 130).
  • step S203 may identify multiple barcodes in the package image, such as barcode C1, barcode C2, and barcode C3.
  • In step S204, when at least a part of the barcode area corresponding to barcode C1 falls within the projection of the designated area of package B1 in the package image, a position match between barcode C1 and the code reading position corresponding to package B1 may be determined. That is, in step S204, it can be determined that when the contour of the designated area on package B1 reaches the code reading position, the designated area of package B1 is projected into the package image; further, if at least a part of the barcode area corresponding to barcode C1 belongs to that projection area, it can be determined that barcode C1 matches the code reading position corresponding to package B1.
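One common way to implement this containment test is a point-in-polygon check between a barcode's position and the projected quadrilateral of the package's designated area. The sketch below tests the barcode center only, a simplification of the patent's "at least a part of the barcode area" condition, and all names are illustrative:

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test: is pt (x, y) inside the polygon (list of (x, y) vertices)?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending to the right of pt.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def match_barcodes(barcode_centers, package_quad):
    """Indices of barcodes whose centers fall inside the projected package area."""
    return [i for i, c in enumerate(barcode_centers)
            if point_in_polygon(c, package_quad)]
```

For example, with the projection of package B1 covering pixels (100, 100) to (300, 200), a barcode centered at (150, 150) would match B1 while one at (400, 150) would not.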
  • the first image identifier is a frame number
  • an image with the frame number in the image captured by the code reading camera 130 can be acquired as the package image corresponding to the first image identifier.
  • the first image identifier is a timestamp
  • an image with the timestamp in the image taken by the code reading camera 130 can be acquired as the package image corresponding to the first image identifier.
  • In step S205, when the code reading position matches a barcode, the matched barcode is associated with the package. If one barcode was recognized in step S203, the barcode matched in step S204 is that barcode. If multiple barcodes were recognized in step S203, the barcode matched in step S204 may be one of those barcodes.
  • the package detection method 200 of the present application when the detection position of the package is detected, the code reading position of the designated area of the package in the imaging area and the first image identification corresponding to the code reading position can be predicted.
  • the barcode reading position can be considered as the projection position of the designated area of the package in the image coordinate system of the barcode reading camera when the package image corresponding to the first image identifier is acquired.
  • the package detection method 200 according to the present application can determine the association relationship between the package and the barcode by matching the position of the barcode and the barcode reading position in the package image.
  • the attributes of the package after the association relationship between the package and the barcode is determined, the attributes of the package can also be obtained, and the attributes of the package can be associated with the barcode.
  • the attributes of the package can be detected in advance.
  • An identifier can be generated for each package. After the detection position is determined and the code reading position and the first image identifier are predicted, the correspondence among the package identifier, the detection position, the code reading position, and the first image identifier can be recorded. Subsequently, when the package image corresponding to the first image identifier is acquired, a barcode is recognized, and the code reading position matches the barcode, the package identifier corresponding to the code reading position can be determined, and the matched barcode is associated with the package corresponding to that identifier.
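The bookkeeping described above can be sketched with a small registry keyed by the first image identifier. This is our illustrative structure, not the patent's; the field and class names are assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PackageRecord:
    package_id: int
    detection_position: tuple      # vertices in the first world coordinate system
    reading_position: tuple        # predicted vertices in the image coordinate system
    first_image_id: int            # frame number (or timestamp) of the package image
    barcode: Optional[str] = None  # filled in once a barcode is matched

class PackageRegistry:
    def __init__(self):
        self._by_image_id = {}

    def register(self, record: PackageRecord):
        """Record the correspondence when the reading position is predicted."""
        self._by_image_id.setdefault(record.first_image_id, []).append(record)

    def associate(self, image_id: int, barcode: str):
        """Attach a matched barcode to the next unassociated package in this frame."""
        for record in self._by_image_id.get(image_id, []):
            if record.barcode is None:
                record.barcode = barcode
                return record
        return None
```

Once a record carries a barcode, any previously detected target attributes stored against the same `package_id` can be associated with that barcode as well.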
  • FIG. 3 shows a flowchart of a method 300 for detecting a package according to some embodiments of the present application.
  • the method 300 may be executed by the computing device 140.
  • In step S301, the first world coordinate system established according to the first calibration board is acquired.
  • The first calibration board is placed on the conveyor belt 110 and is in the field of view of the depth camera 120.
  • The first calibration board is, for example, a checkerboard calibration board. As shown in Figure 4, the first world coordinate system can be (X1, Y1, Z1).
  • In step S302, the external parameters of the depth camera are calibrated according to the first world coordinate system and the image of the first calibration board taken by the depth camera, to obtain the first mapping relationship between the depth camera coordinate system and the first world coordinate system.
  • In step S303, the second world coordinate system established according to the second calibration board is acquired.
  • The second calibration board is placed on the conveyor belt 110 and is in the code reading field of view of the code reading camera 130.
  • In step S304, the external parameters of the code reading camera are calibrated according to the second world coordinate system and the image of the second calibration board taken by the code reading camera, to obtain the second mapping relationship between the code reading camera coordinate system and the second world coordinate system.
  • the second world coordinate system can be (X2, Y2, Z2).
  • step S305 a third mapping relationship between the first world coordinate system and the second world coordinate system is determined.
  • step S306 according to the internal parameters of the code reading camera, a fourth mapping relationship between the coordinate system of the code reading camera and the image coordinate system of the code reading camera is determined.
  • Figure 4 shows a schematic diagram of the coordinate system in the logistics system.
  • Figure 4 shows the first world coordinate system R1 (X1, Y1, Z1), the second world coordinate system R2 (X2, Y2, Z2), the depth camera coordinate system R3 (X3, Y3, Z3), the code reading camera coordinate system R4 (X4, Y4, Z4), and the image coordinate system R5 (X5, Y5) of the code reading camera 130.
  • the image coordinate system R5 corresponds to the imaging plane of the code reading camera 130.
  • The external parameters of the code reading camera 130 can be represented by T_CB, that is, the second mapping relationship, which can be expressed in the following matrix form:
  • T_CB = [ R  T ; 0ᵀ  1 ]
  • Here R is, for example, a 3×3 rotation matrix, representing the rotation transformation parameters between the second world coordinate system and the code reading camera coordinate system, and T is, for example, a 3×1 translation vector, representing the translation transformation parameters between the second world coordinate system and the code reading camera coordinate system. R is an orthogonal matrix.
  • The calibration method for the external parameters of the depth camera 120 is similar to that for the external parameters of the code reading camera 130; for the first mapping relationship, refer to the description of the second mapping relationship.
  • the transformation matrix corresponding to the third mapping relationship may be [0,d,0,0] T , where d represents the offset of the second world coordinate system relative to the first world coordinate system in the conveying direction of the conveyor belt.
  • The internal parameters of the code reading camera 130 can be represented by K_C, that is, the fourth mapping relationship, which can be expressed in the following matrix form:
  • K_C = [ f_x  0  c_x ; 0  f_y  c_y ; 0  0  1 ]
  • Here f_x and f_y are the focal length parameters of the code reading camera 130 along the two image axes, and c_x and c_y are the offsets of the principal point relative to the origin of the image coordinate system.
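Chaining the third, second, and fourth mapping relationships projects a point from the first world coordinate system to pixel coordinates. The numerical sketch below is illustrative only: the axis carrying the offset d, and the values of R, T, and K_C, are assumptions, not taken from the patent.

```python
import numpy as np

def project_to_image(p_world1, offset, R, T, K):
    """Project a 3D point from the first world coordinate system to pixels.

    p_world1: point in the first world coordinate system.
    offset:   translation of the second world frame relative to the first
              (the third mapping relationship, e.g. d along the belt axis).
    R, T:     extrinsics of the code reading camera (the second mapping, T_CB).
    K:        intrinsics of the code reading camera (the fourth mapping, K_C).
    """
    p_world2 = np.asarray(p_world1, float) - np.asarray(offset, float)  # third mapping
    p_cam = R @ p_world2 + T                                            # second mapping
    uvw = K @ p_cam                                                     # fourth mapping
    return uvw[:2] / uvw[2]                                             # perspective division
```

Applying this to each vertex of the predicted upper-surface contour yields the projected quadrilateral used as the code reading position in the image coordinate system.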
  • The above steps S301 to S306 calibrate each coordinate system and determine the mapping relationships. Before packages are inspected, each coordinate system can be calibrated and the mapping relationships determined based on steps S301 to S306. Subsequently, when a package needs to be inspected, steps S307 to S315 can be executed. In other words, steps S301 to S306 only need to be performed once; it is not necessary to perform them every time a package is inspected.
  • step S307 when the package on the conveyor belt passes through the detection area, the contour of the designated area on the package is recognized, and the detection position of the contour in the detection area is recognized.
  • the designated area includes the barcode of the package, for example, the upper surface of the package or the surface area of the package.
  • the designated area is the upper surface of the package, and step S307 may be implemented as method 500.
  • step S501 when the package passes through the inspection area on the conveyor belt, a depth image of the package is acquired.
  • the depth camera 120 can take pictures of the packages transferred on the conveyor belt 110, and then can obtain the depth images of the packages passing through the inspection area.
  • step S502 according to the depth image, the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system are determined.
  • the upper surface of the package can be used as a designated area.
  • the method 500 can determine the contour of the upper surface of the package according to the depth image taken by the depth camera 120, so that the upper surface is taken as the designated area, and the detection position of the contour in the detection area is determined.
  • the designated area is a single-sided area on the package.
  • Step S307 may be implemented as method 600.
  • step S601 when the package passes through the inspection area on the conveyor belt, a grayscale image and a depth image of the package are acquired.
  • In step S602, the contour of the single-surface area of the package is determined in the grayscale image.
  • step S603 according to the contour of the single-surface area in the gray image, the first depth area corresponding to the single-surface area in the depth image is determined.
  • step S604 the detection position of the contour of the single area in the first world coordinate system is determined according to the first depth area.
  • the method 600 can use the grayscale image to determine the surface area of the package, and then determine the contour and detection position of the surface area based on the grayscale image and the depth image.
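A much-simplified sketch of this idea follows: locate the surface region in the grayscale image (here by naive thresholding against a darker belt background, where a real system would use proper contour extraction) and index the registered depth image with the same pixel region. Names and the threshold are our assumptions:

```python
import numpy as np

def surface_region_and_depth(gray, depth, threshold=128):
    """Return the surface region's bounding box and its mean depth, or None."""
    mask = gray > threshold                        # surface pixels vs. belt background
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                                # no package surface found
    x0, x1 = int(xs.min()), int(xs.max())
    y0, y1 = int(ys.min()), int(ys.max())
    region_depth = depth[y0:y1 + 1, x0:x1 + 1]     # the first depth area
    return (x0, y0, x1, y1), float(region_depth.mean())
```

This presumes the grayscale and depth images are pixel-registered, which the method relies on when mapping the contour from one image to the other.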
  • step S502 may be implemented as method 700.
  • step S701 a three-dimensional model of the package is determined according to the depth image.
  • step S702 according to the coordinates of the three-dimensional model in the depth camera coordinate system and the first mapping relationship, the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system are determined.
  • the detection position is represented by the coordinates of at least three vertices of the upper surface of the package in the first world coordinate system.
  • the coordinates of the upper surface of the package in the depth camera coordinate system can be obtained according to the coordinates of the 3D model in the depth camera coordinate system. Based on the coordinates of the upper surface of the package in the depth camera coordinate system, the contour of the upper surface of the package can be determined. In addition, after obtaining the coordinates of the upper surface of the package in the depth camera coordinate system, the coordinates of at least three vertices of the upper surface of the package in the depth camera coordinate system can be selected, and combined with the first mapping relationship, the coordinates of those at least three vertices in the first world coordinate system can be obtained.
  • the method 700 can determine the three-dimensional model of the package according to the depth image, and then use the three-dimensional model to determine the contour of the designated area (ie, the upper surface) and the detection position of the contour in the first world coordinate system.
  • step S502 may be implemented as method 800.
  • step S801 a gray image corresponding to the depth image is acquired.
  • step S802 the contour of the upper surface of the package is determined in the grayscale image.
  • the contour of the upper surface of the package is, for example, a rectangular area.
  • step S803 according to the contour of the upper surface of the package in the grayscale image, a second depth region corresponding to the upper surface in the depth image is determined, and at least three vertices of the second depth region are obtained.
  • step S804 the coordinates of at least three vertices of the second depth region in the depth camera coordinate system are determined.
  • step S805 the detection position of the upper surface of the package in the first world coordinate system is determined according to the coordinates of the at least three vertices of the second depth region in the depth camera coordinate system and the first mapping relationship.
  • the detection position is represented by the coordinates of at least three vertices of the second depth region in the first world coordinate system.
  • the method 800 can determine the vertex coordinates of the upper surface of the package according to the grayscale image and the depth image, and then determine the detection position of the upper surface of the package in the first world coordinate system.
  • the method 300 may perform step S308 in addition to step S307.
  • the target attribute of the package on the conveyor belt is detected.
  • the target attribute may include, for example, at least one of the volume of the package, the size of the package, the weight of the package, and the waybill (shipping label) of the package.
  • the depth camera 120 may include a line structured light camera.
  • step S308 may obtain a depth image of the package according to the scanned image frame sequence, and determine the size or volume of the package according to the depth image. For example, based on the depth image, step S308 may determine the three-dimensional model of the package, and determine the size or volume of the package according to the three-dimensional model.
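As a hedged sketch of how step S308 could derive size and volume from a depth image: with a line-structured-light camera, stacking the scanned profiles yields a height map, from which the package's extent and volume follow directly. The pixel pitch, belt distance, and synthetic height map below are illustrative assumptions, not values from the patent.

```python
import numpy as np

PIXEL_PITCH_M = 0.005                 # assumed 5 mm per pixel on the belt
PIXEL_AREA_M2 = PIXEL_PITCH_M ** 2

belt_depth = 1.5                      # camera-to-belt distance (m), assumed
depth = np.full((100, 100), belt_depth)
depth[10:50, 20:80] = 1.2             # a 0.3 m tall package on the belt

height = belt_depth - depth           # height above the belt per pixel
package = height > 0.01               # ignore near-zero sensor noise

volume_m3 = float(np.sum(height[package]) * PIXEL_AREA_M2)
length_m = package.any(axis=1).sum() * PIXEL_PITCH_M   # extent along the belt
width_m = package.any(axis=0).sum() * PIXEL_PITCH_M    # extent across the belt
height_m = float(height.max())
```

For a box-shaped package this integration reduces to length x width x height; integrating per pixel also handles irregular shapes.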
  • a weighing instrument may also be deployed in the detection area.
  • the weighing instrument can detect the weight of the package.
  • step S308 may use the grayscale image taken by the depth camera 120 to determine the waybill (shipping label) of the package.
  • step S309 according to the detection position of the contour in the detection area and the conveying speed of the conveyor belt, the code reading position of the contour in the imaging area of the image coordinate system of the code reading camera and the first image identification corresponding to the code reading position are predicted.
  • the barcode reading camera is located downstream of the detection area.
  • the first image identifier is used to identify the package image taken by the code reading camera when the outline reaches the code reading position.
  • the first image identifier is, for example, a frame number or a time stamp.
  • the reading position of the contour refers to the coordinate position of the contour in the image coordinate system when at least a part of the contour is in the imaging area.
  • step S309 may use the detected position as a starting point, and determine at least one offset position according to the conveyor belt speed.
  • the position of the projection area is the offset position.
  • the offset position is represented by, for example, the coordinates of the projection points of at least three vertices of the designated area in the image coordinate system.
  • step S309 may be implemented as method 900.
  • step S901 when the detection position is identified, the second image identifier of the image frame currently collected by the code reading camera is determined.
  • the second image identifier is, for example, a frame number or a time stamp.
  • the second image identification is generated by the code reading camera 130, for example.
  • the second image identifier is a frame number or a time stamp added to the image frame currently received from the code reading camera 130 when the computing device 140 determines the detection position.
  • step S902 the movement distance of the package in a single collection period of the code reading camera is acquired.
  • the single acquisition period of the code reading camera 130 is T1.
  • the transmission speed is v.
  • the moving distance is s = v*T1.
  • step S903 the offset position of the contour in the first world coordinate system is determined according to the parameters of the code reading camera, the detection position of the contour in the detection area, and the conveying speed of the conveyor belt.
  • the offset position satisfies: when the contour is at the offset position, the projection position of the contour in the image coordinate system of the code reading camera is in the imaging area. For example, when at least a part of the projection area of the contour at the offset position is in the imaging area in the image coordinate system, step S903 determines that the projection position is in the imaging area.
  • the resolution of the code reading camera 130 is x*y.
  • step S903 may determine that the projection position is in the imaging area.
  • step S903 uses the detection position as the starting point and the movement distance as the offset unit to determine the offset position of the package when it is in the code reading field of view.
  • the difference between the offset position and the detection position is equal to an integer number of offset units.
  • the distance between the offset position and the detection position is equal to the sum of N offset units. N is a positive integer.
  • step S903 takes the detection position as a starting point, and uses the moving distance as the offset unit, and uses the target position that meets the target condition as the offset position.
  • the target condition is: the difference between the target position and the detection position is equal to an integer number of offset units, and the projection area of the contour at the target position overlaps the imaging area in the image coordinate system of the code reading camera.
  • each image frame captured by the code reading camera 130 includes at least a part of the package.
  • in step S903, the package shooting positions corresponding to some of the image frames, or the package shooting positions corresponding to all image frames, can be selected as offset positions. Therefore, step S904 can predict one or more code reading positions.
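The offset-position search described above can be sketched as follows: starting from the detected position, step forward by one movement distance s = v*T1 per camera frame and keep every position where the package would fall inside the reading field of view. The 1-D world model along the conveying direction and the field-of-view bounds are simplifying assumptions; the patent checks overlap via the projection into the image coordinate system instead.

```python
def offset_positions(x_detect, package_len, v, T1, fov_start, fov_end,
                     max_frames=1000):
    """Enumerate (frame count n, position) pairs at which at least part of the
    package lies inside the reading field of view [fov_start, fov_end)."""
    s = v * T1                       # distance moved per acquisition period
    hits = []
    for n in range(1, max_frames + 1):
        x = x_detect + n * s         # leading edge after n frames
        if x > fov_start and (x - package_len) < fov_end:
            hits.append((n, x))
        if (x - package_len) >= fov_end:
            break                    # package has fully left the FOV
    return hits

# Belt moves 0.1 m per frame; reading FOV covers [1.0 m, 1.5 m).
hits = offset_positions(x_detect=0.0, package_len=0.3, v=0.5, T1=0.2,
                        fov_start=1.0, fov_end=1.5)
```

Each hit corresponds to one image frame in which the package is visible, which is why the number of offset units directly gives the frame-count offset k1 used later.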
  • step S903 determines, based on the coordinates of the offset position in the first world coordinate system (that is, the coordinates of the 4 vertices on the upper surface when the package is placed at the offset position), and according to the second mapping relationship, the third mapping relationship, and the fourth mapping relationship, the image coordinates of the projection position of the offset position in the image coordinate system of the code reading camera 130.
  • the image coordinates of the projection point of a vertex in the image coordinate system can be calculated according to the following formula: [u_k, v_k, w_k]^T = K_C · T_CB · (P_L - [0, d, 0, 0]^T).
  • [u_k, v_k, w_k]^T is the image coordinate of the projection point of a vertex L.
  • P_L represents the coordinates of the vertex L in the first world coordinate system when the package is in the detection position.
  • [0, d, 0, 0]^T represents the transformation matrix corresponding to the third mapping relationship.
  • the offset of the second world coordinate system relative to the first world coordinate system in the conveying direction of the conveyor belt is d.
  • P_L - [0, d, 0, 0]^T represents the coordinates of the vertex L in the second world coordinate system when the package is at the detection position.
  • T CB is an external parameter matrix, which represents the external parameters of the code reading camera 130, and can represent the second mapping relationship.
  • the external parameters can be expressed as the following matrix form:
  • R is, for example, a 3*3 rotation matrix, which represents the rotation transformation parameter between the second world coordinate system and the barcode reading camera coordinate system.
  • T is, for example, a 3*1 translation matrix, which represents the translation transformation parameter between the second world coordinate system and the barcode reading camera coordinate system.
  • I is an orthogonal matrix.
  • K C is the internal parameter matrix of the code reading camera 130, which is used to represent the internal parameters of the code reading camera 130, and can represent the fourth mapping relationship.
  • f_x and f_y are the focal length parameters of the code reading camera 130, respectively.
  • c_x and c_y are the offsets of the camera coordinate system relative to the image coordinate system.
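The projection chain can be worked through numerically: a vertex P_L in the first world coordinate system is shifted by the belt offset d into the second world coordinate system, transformed by the extrinsic matrix T_CB into the reading camera's coordinate system, then projected by the intrinsic matrix K_C. All numeric values below (focal lengths, principal point, extrinsics, d, and the vertex) are illustrative assumptions, and T_CB is taken in the 3x4 [R | T] form.

```python
import numpy as np

# Assumed intrinsic parameters of the reading camera.
fx, fy, cx, cy = 800.0, 800.0, 640.0, 480.0
K_C = np.array([[fx, 0.0, cx],
                [0.0, fy, cy],
                [0.0, 0.0, 1.0]])

# Assumed extrinsics: identity rotation, camera 2 m above the belt origin.
R = np.eye(3)
T = np.array([0.0, 0.0, 2.0])
T_CB = np.hstack([R, T[:, None]])          # 3x4 matrix [R | T]

d = 1.0                                    # offset between the two world frames
P_L = np.array([0.2, 1.3, 0.0, 1.0])       # vertex in homogeneous coordinates

P_second = P_L - np.array([0.0, d, 0.0, 0.0])   # third mapping relationship
u, v, w = K_C @ T_CB @ P_second                 # [u_k, v_k, w_k]^T
pixel = (u / w, v / w)                          # final image coordinates
```

Dividing by w converts the homogeneous result into pixel coordinates, which is the step that lets the later overlap test against the imaging area (resolution x*y) be done in plain 2-D.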
  • in step S905, the difference between the offset position and the detection position is calculated, and the number of image frames taken by the code reading camera before the package moves from the detection position to the code reading position is determined according to the number of movement distances contained in the difference; the number of movement distances equals the number of image frames.
  • the first image identifier corresponding to the code reading position is determined.
  • the second image identifier is, for example, the frame number I2.
  • the number of offset units included in the difference between the offset position and the detection position is k1.
  • the frame number of the first image identifier is I1 = I2 + k1.
  • the second image identifier is, for example, the timestamp t2.
  • the number of offset units included in the difference between the offset position and the detection position is k1.
  • T1 is the time difference between adjacent frames of the code reading camera (i.e., the acquisition period of the code reading camera), so the timestamp of the first image identifier is t1 = t2 + k1*T1.
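The two identifier predictions reduce to one-line arithmetic, shown here as a minimal sketch (the numeric inputs are arbitrary examples):

```python
def predict_frame_number(I2, k1):
    """First image identifier as a frame number: I1 = I2 + k1."""
    return I2 + k1

def predict_timestamp(t2, k1, T1):
    """First image identifier as a timestamp: t1 = t2 + k1 * T1."""
    return t2 + k1 * T1

I1 = predict_frame_number(I2=100, k1=7)          # frame-number form
t1 = predict_timestamp(t2=12.0, k1=7, T1=0.05)   # timestamp form
```

Both forms encode the same fact: the package needs k1 acquisition periods to travel from the detection position to the offset position.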
  • the method 900 can determine the projection area of the offset position in the package image according to the second mapping relationship, the third mapping relationship, and the fourth mapping relationship, so as to accurately predict at least one code reading position of the designated area of the package in the code reading field of view, and the identifier of the image frame corresponding to each code reading position (i.e., the first image identifier).
  • packages B1, B2 and B3, which have not yet entered the field of view of the depth camera 120, are placed on the conveyor belt.
  • the packages B1 and B3 are placed side by side.
  • B2 is behind packages B1 and B3.
  • FIG. 10B shows the detection position of the package B3 in the detection area determined by the computing device 140.
  • the detection position of the package B3 is represented by the coordinates of the four vertices e1-e4 on the upper surface of the package B3.
  • the detection position of the package B3 is the position where the package B3 has just left the field of view V1.
  • the computing device 140 may determine the coordinates of the vertices e1-e4 in the first world coordinate system according to the first mapping relationship.
  • FIG. 10C shows a schematic diagram of a target location of the package B3 predicted by the computing device 140.
  • FIG. 10C only shows the positions of the four vertices e1-e4 on the upper surface of the package B3, and uses the positions of the four vertices to indicate the target position of the package B3.
  • FIG. 10D shows a schematic diagram of the projection of 4 vertices e1-e4 to the image coordinate system (imaging plane).
  • Fig. 10E shows the projection area in the image coordinate system when the package is at the target position in Fig. 10C.
  • the projection area B3' represents the projection area of the upper surface of the package in the image coordinate system when the package is at the target position of Fig. 10C.
  • V2' represents the imaging area, that is, the range of the image generated by the code reading camera 130 in the image coordinate system. It can be seen from FIG. 10E that when the package B3 is at the target position, the projection area of the package B3 in the image coordinate system (that is, the projection area of the designated area in the image coordinate system) is in the imaging area.
  • the computing device 140 may determine that the difference between the target position of FIG. 10C and the detection position of the package B3 includes an integer number of offset units. Therefore, the computing device 140 may use the target position shown in FIG. 10C as an offset position.
  • the method 300 may further include step S310.
  • step S310 when the package image corresponding to the first image identifier is acquired, barcode recognition is performed on the package image.
  • the method 300 may execute step S311 to position the identified barcode with the barcode reading position.
  • step S311 may be implemented as method 1100.
  • step S1101 the barcode area of the barcode in the package image is determined.
  • step S1101 may determine the image coordinates of each barcode area in the package image.
  • FIG. 12A shows a schematic diagram of a package image according to some embodiments of the present application.
  • the package image P1 in FIG. 12A includes barcode areas C1 and C2.
  • step S1102 it is determined whether the barcode area belongs to the area corresponding to the barcode reading position.
  • step S1102 may determine the projection area of the designated area in the image coordinate system when the package is at the offset position (i.e., the area corresponding to the reading position).
  • step S1102 based on the third mapping relationship (that is, the mapping relationship between the first world coordinate system and the second world coordinate system), the coordinates of the offset position in the second world coordinate system can be determined.
  • based on the second mapping relationship, step S1102 can convert the coordinates of the offset position from the second world coordinate system into the code reading camera coordinate system. Based on the fourth mapping relationship (i.e., the mapping relationship between the code reading camera coordinate system and the image coordinate system of the code reading camera) and the coordinates of the offset position in the code reading camera coordinate system, step S1102 determines the coordinates of the projection area of the designated area of the package at the offset position in the image coordinate system (i.e., the reading position).
  • step S1102 can determine the projection area of package B3 in the package image (ie, the area corresponding to the barcode reading position).
  • Figure 12B shows the projection area B3" of the package B3.
  • step S1102 it can be determined that the barcode area of the barcode C1 is outside the projection area B3", and the barcode area of the barcode C2 belongs to the projection area B3".
  • when it is determined in step S1102 that at least a part of the barcode area belongs to the area corresponding to the barcode reading position, that is, when it is determined that the barcode area belongs to the area corresponding to the barcode reading position, the method 1100 may execute step S1103 to determine that the barcode matches the barcode reading position.
  • in step S1103, it can be determined that the barcode C2 matches the barcode reading position of the package B3.
  • when it is determined in step S1102 that the barcode area is outside the area corresponding to the barcode reading position, that is, when it is determined that the barcode area does not belong to the area corresponding to the barcode reading position, the method 1100 may execute step S1104 to determine that the barcode does not match the barcode reading position.
  • the method 1100 can determine whether the barcode and the barcode reading position match according to the position relationship between the barcode reading position and the barcode.
  • the barcode matching with the reading position can be understood as: the barcode is on the package corresponding to the reading position.
  • the mismatch between the barcode and the barcode reading position can be understood as: the barcode does not belong to the package corresponding to the barcode reading position.
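The matching test of method 1100 can be sketched as a simple region-overlap check. Axis-aligned rectangles (x0, y0, x1, y1) are an assumption made here for brevity; the patent's projection areas could be arbitrary quadrilaterals, which would call for a polygon intersection test instead.

```python
def rects_overlap(a, b):
    """True if rectangles a and b share at least part of their area."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

# Illustrative coordinates modeled on FIG. 12B: C1 lies outside the projection
# area of package B3, C2 lies inside it.
projection_B3 = (100, 50, 400, 300)    # area corresponding to the reading position
barcode_C1 = (450, 60, 520, 120)       # outside -> no match
barcode_C2 = (150, 100, 220, 160)      # inside  -> match

matches = {name: rects_overlap(projection_B3, rect)
           for name, rect in [("C1", barcode_C1), ("C2", barcode_C2)]}
```

A matched barcode is then attributed to the package that produced the projection area, which is exactly the "barcode is on the package corresponding to the reading position" interpretation above.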
  • the method 300 may further include steps S312 and S313.
  • step S312 when the barcode reading position matches the barcode, the matched barcode is associated with the target attribute. Taking FIG. 12B as an example, in step S312, the barcode C2 and the target attribute of the package B3 may be associated and stored.
  • step S313 the matched barcode is associated with the package.
  • according to the package detection method 300 of the present application, when the detection position of the package is detected, the code reading position of the package in the imaging area and the first image identifier corresponding to the code reading position can be predicted. On this basis, by matching the barcode with the barcode reading position in the package image, the association relationship between the package and the barcode can be determined; after the attributes of the package are determined, those attributes can also be associated with the barcode.
  • the method 300 may further include step S314 of obtaining a panoramic image of the extended processing area of the conveyor belt.
  • the extended processing area is located downstream of the barcode reading field.
  • step S315 according to the detected position and the transmission speed, continuously update the predicted position of the package in the extended processing area over time, and add a tracking frame to the panoramic image according to the predicted position.
  • the tracking box can be presented as the first color.
  • step S315 may present the tracking box in the second color. For example, the first color is green and the second color is red.
  • FIG. 13A shows a schematic diagram of a logistics system according to some embodiments of the present application.
  • FIG. 13A further adds a camera 150 on the basis of FIG. 1.
  • the camera 150 may output a sequence of image frames to the computing device 140, that is, a sequence of panoramic image frames.
  • the camera 150 is downstream of the code reading camera 130.
  • the field of view of the camera 150 is V3.
  • the field of view V3 can cover the extended processing area S3 of the conveyor belt.
  • the computing device 140 can update the predicted position of each package in the extended processing area according to the detected position and the transmission speed, and add a tracking frame to the panoramic image. In this way, the computing device 140 can track the location of the package through the tracking box.
  • the computing device 140 can display different states of the package by displaying the tracking frame of the package in different colors.
  • the staff can easily determine that the target attribute of the package corresponding to the tracking frame of the second color is not associated with a barcode, that is, there is no identifiable barcode on the upper surface of the package corresponding to the tracking frame of the second color.
  • the case where there is no identifiable bar code on the upper surface of the package is, for example, the bar code on the upper surface of the package is incomplete, the package does not have a bar code, or the package bar code is on the side or bottom of the package.
  • the staff can perform code supplementation and other processing for packages that do not have an identifiable barcode.
  • FIG. 13B shows a schematic diagram of a panoramic image according to some embodiments of the present application.
  • the tracking frame M1 may be presented in green, for example, and the tracking frame M2 may be presented in red, for example.
  • the staff can quickly find the red package and perform operations such as complementing the code.
  • according to the package detection method 300 of the present application, packages leaving the code reading field of view can be tracked through steps S314 and S315, and presenting different colors to indicate package status makes it much easier to detect abnormal packages (e.g., packages whose target attribute is not associated with a barcode) and handle them.
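Steps S314 and S315 can be sketched as follows: predict each package's position in the extended processing area from its detection position and the belt speed, and color the tracking box by whether a barcode was associated. The coordinates, timestamps, and association flags below are illustrative assumptions.

```python
def tracking_boxes(packages, v, t_now):
    """Predict each package's position and assign a tracking-box color:
    first color (green) if a barcode is associated, second color (red) if not."""
    boxes = []
    for p in packages:
        x = p["x_detect"] + v * (t_now - p["t_detect"])   # predicted position
        color = "green" if p["barcode_associated"] else "red"
        boxes.append({"id": p["id"], "x": x, "color": color})
    return boxes

packages = [
    {"id": "B1", "x_detect": 0.0, "t_detect": 0.0, "barcode_associated": True},
    {"id": "B2", "x_detect": 0.0, "t_detect": 2.0, "barcode_associated": False},
]
boxes = tracking_boxes(packages, v=0.5, t_now=4.0)
```

In the panoramic view of FIG. 13B, the red boxes are exactly the packages the staff should pull for code supplementation.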
  • Figure 14 shows a schematic diagram of a logistics system according to some embodiments of the present application.
  • Figure 14 shows an array of code reading cameras.
  • the array includes, for example, code reading cameras 130, 160, and 170.
  • the field of view of adjacent code reading cameras can be adjacent or partially overlapped.
  • the computing device 140 can predict the barcode reading position of a package in the barcode reading cameras 130, 160, and 170, and associate the target attribute of the package with the barcode according to the package image of each barcode reading camera. In this way, multiple barcode reading cameras are used to associate target attributes with barcodes.
  • the computing device 140 can compare the association results (i.e., the association results between the target attribute and the barcode) corresponding to different barcode reading cameras, so as to improve the accuracy of associating the target attribute with the barcode. For example, if the association results corresponding to the barcode reading cameras 130 and 160 are the same, and the association result corresponding to the barcode reading camera 170 differs from that of the barcode reading camera 130, then the computing device 140 takes the association result of the barcode reading cameras 130 and 160 as authoritative.
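The multi-camera comparison described above amounts to a majority vote over per-camera association results. The camera names and barcode values below are illustrative assumptions:

```python
from collections import Counter

def consensus(results):
    """results: mapping camera -> associated barcode; return the majority value."""
    counts = Counter(results.values())
    barcode, _ = counts.most_common(1)[0]
    return barcode

# Cameras 130 and 160 agree; camera 170 disagrees, so C2 prevails.
results = {"cam130": "C2", "cam160": "C2", "cam170": "C1"}
best = consensus(results)
```

A production system might additionally weight votes by decode confidence, but a plain majority already captures the "result of 130 (160) shall prevail" behavior described here.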
  • FIG. 15 shows a schematic diagram of an apparatus 1500 for detecting packages according to some embodiments of the present application.
  • the apparatus 1500 may be deployed in the computing device 140, for example.
  • the device 1500 for detecting packages may include: a detection unit 1501, a prediction unit 1502, a barcode recognition unit 1503, a matching unit 1504, and an association unit 1505.
  • the detection unit 1501 is used to identify the contour of the designated area on the package when the package passes through the detection area on the conveyor belt, and to identify the detection position of the contour in the detection area.
  • the designated area includes the barcode of the package.
  • the prediction unit 1502 is used for predicting the reading position of the contour in the imaging area of the image coordinate system of the code reading camera and the first image identifier corresponding to the reading position according to the detection position of the contour in the detection area and the conveying speed of the conveyor belt.
  • the code reading camera is located downstream of the detection area, and the first image identifier is used to identify the package image taken by the code reading camera when the contour reaches the code reading position.
  • the barcode recognition unit 1503 is configured to perform barcode recognition on the package image when the package image corresponding to the first image identifier is acquired.
  • the matching unit 1504 is used to match the barcode reading position with the recognized barcode.
  • the associating unit 1505 is used to associate the matched barcode with the package.
  • the device 1500 for detecting packages according to the present application can predict the code reading position of the designated area of the package in the imaging area and the first image identification corresponding to the code reading position when the detection position of the package is detected.
  • the barcode reading position can be considered as the projection position of the designated area of the package in the image coordinate system when the package image corresponding to the first image identifier is acquired.
  • the device 1500 for detecting a package according to the present application can determine the relationship between the package and the barcode by matching the barcode and the barcode reading position in the package image.
  • it can also associate the attributes of the package with the barcode.
  • FIG. 16 shows a schematic diagram of an apparatus 1600 for detecting packages according to some embodiments of the present application.
  • the apparatus 1600 may be deployed in the computing device 140, for example.
  • the device 1600 for detecting packages may include: a detection unit 1601, a prediction unit 1602, a barcode recognition unit 1603, a matching unit 1604, an association unit 1605, a first calibration unit 1606, and a second calibration unit 1607.
  • the first calibration unit 1606 can obtain the first world coordinate system established according to the first calibration disk.
  • the first calibration disc is placed on the conveyor belt and is in the field of view of the depth camera. According to the first world coordinate system and the image of the first calibration disk taken by the depth camera, the first calibration unit 1606 can calibrate the external parameters of the depth camera to obtain the first mapping relationship between the depth camera coordinate system and the first world coordinate system.
  • the second calibration unit 1607 acquires the second world coordinate system established according to the second calibration disk.
  • the second calibration disc is placed on the conveyor belt and is in the reading field of the code reading camera.
  • the second calibration unit 1607 can calibrate the external parameters of the code-reading camera to obtain the second mapping relationship between the code-reading camera coordinate system and the second world coordinate system.
  • the second calibration unit 1607 can also determine the third mapping relationship between the first world coordinate system and the second world coordinate system. According to the internal parameters of the code reading camera, the second calibration unit 1607 can determine the fourth mapping relationship between the code reading camera's coordinate system and the image coordinate system of the code reading camera.
  • the detection unit 1601 is used to identify the contour of the designated area on the package and the detection position of the contour in the detection area when the package passes through the detection area on the conveyor belt.
  • the designated area includes the barcode of the package.
  • the detection unit 1601 may obtain a depth image of the package when the package passes through the detection area on the conveyor belt. According to the depth image, the detection unit 1601 determines the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system. Wherein, the upper surface is the designated area.
  • the detection unit 1601 may determine a three-dimensional model of the package according to the depth image. According to the coordinates of the three-dimensional model in the depth camera coordinate system and the first mapping relationship, the detection unit 1601 can determine the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system. The detection position is represented by the coordinates of at least three vertices of the upper surface of the package in the first world coordinate system.
  • the detection unit 1601 may obtain a grayscale image corresponding to the depth image.
  • the detection unit 1601 can determine the contour of the upper surface of the package in the gray image. According to the contour of the upper surface of the package in the grayscale image, the detection unit 1601 can determine the second depth region corresponding to the upper surface in the depth image, and obtain at least three vertices of the second depth region.
  • the detection unit 1601 may determine the coordinates of at least three vertices of the second depth region in the depth camera coordinate system. According to the coordinates of the at least three vertices of the second depth region in the depth camera coordinate system and the first mapping relationship, the detection unit 1601 can determine the detection position of the upper surface of the package in the first world coordinate system. The detection position is represented by the coordinates of at least three vertices of the second depth region in the first world coordinate system.
  • the detection unit 1601 may obtain a grayscale image and a depth image of the package when the package passes through the detection area on the conveyor belt.
  • the detection unit 1601 can determine the contour of the package's single-surface area in the grayscale image.
  • the detection unit 1601 can determine the first depth area corresponding to the single-surface area in the depth image.
  • the detection unit 1601 can determine the detection position of the contour of the single-surface area in the first world coordinate system.
  • the single-surface area is the designated area.
  • the detection unit 1601 may also detect the target attribute of the package when the package passes through the detection area on the conveyor belt.
  • the target attributes include at least one of the volume of the package, the size of the package, the weight of the package, and the waybill (shipping label) of the package.
  • the prediction unit 1602 is used for predicting the code reading position of the contour in the imaging area of the image coordinate system of the code reading camera and the first image identifier corresponding to the code reading position according to the detection position of the contour in the detection area and the conveying speed of the conveyor belt.
  • the code reading camera is located downstream of the detection area, and the first image identifier is used to identify the package image taken by the code reading camera when the outline reaches the code reading position.
  • the detection position of the contour in the detection area is the coordinate of the contour in the first world coordinate system.
  • the prediction unit 1602 can determine the second image identifier of the image frame collected by the barcode reading camera at the current moment.
  • the second image identifier is a frame number or a time stamp.
  • the prediction unit 1602 can obtain the moving distance of the package in a single collection period of the code reading camera.
  • the prediction unit 1602 can determine the offset position of the contour in the first world coordinate system according to the parameters of the code reading camera, the detection position of the contour in the detection area, and the conveying speed of the conveyor belt.
  • the offset position satisfies: when the contour is at the offset position, the projection position of the contour in the image coordinate system of the code reading camera is in the imaging area. In this way, the prediction unit 1602 can use the projection position of the contour in the image coordinate system of the code reading camera as the code reading position when the contour is at the offset position.
  • the prediction unit 1602 may calculate the difference between the offset position and the detection position, and determine, from the number of movement distances contained in the difference, the number of image frames the code reading camera captures before the package travels from the detection position to the code reading position. According to the second image identifier and this number of image frames, the prediction unit 1602 can determine the first image identifier corresponding to the code reading position.
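The frame-identifier prediction above amounts to dividing the remaining travel distance by the distance the belt moves in one capture period. A hedged sketch, assuming a constant belt speed, a fixed capture period, and frame-number image identifiers (all names and units are illustrative):

```python
def predict_first_image_id(second_image_id: int,
                           detection_pos: float,
                           offset_pos: float,
                           belt_speed: float,
                           capture_period: float) -> int:
    """Predict the frame number of the image the code reading camera will
    capture when the package contour reaches the code reading position.

    Positions are 1-D coordinates along the belt axis of the first world
    coordinate system, in metres; speed in m/s, period in seconds."""
    move_per_frame = belt_speed * capture_period              # distance per capture period
    frames_ahead = round((offset_pos - detection_pos) / move_per_frame)
    return second_image_id + frames_ahead

# Package detected 0.9 m before the code reading position, belt at 1.5 m/s,
# camera capturing every 0.02 s -> 0.03 m per frame -> 30 frames ahead.
print(predict_first_image_id(100, 0.0, 0.9, 1.5, 0.02))  # 130
```

The same arithmetic works with timestamp identifiers by replacing the frame count with a time offset of `frames_ahead * capture_period`.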
  • the barcode recognition unit 1603 is configured to perform barcode recognition on the package image when the package image corresponding to the first image identifier is acquired.
  • the matching unit 1604 is used to match the barcode reading position with the barcode.
  • the matching unit 1604 may determine the barcode area of the barcode in the package image, and then determine whether the barcode area belongs to the area corresponding to the barcode reading position. When at least a part of the barcode area belongs to the area corresponding to the barcode reading position, the matching unit 1604 determines that the barcode matches the barcode reading position. When the barcode area is entirely outside the area corresponding to the barcode reading position, the matching unit 1604 determines that the barcode and the barcode reading position do not match.
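The position-matching rule ("at least a part of the barcode area belongs to the area corresponding to the barcode reading position") can be sketched as an axis-aligned rectangle-overlap test. The `(x_min, y_min, x_max, y_max)` box layout is an assumption for illustration; the patent does not fix a representation:

```python
def barcode_matches_position(barcode_box, reading_box) -> bool:
    """Return True when at least part of the barcode area falls inside
    the area corresponding to the code reading position.
    Boxes are (x_min, y_min, x_max, y_max) in image pixels."""
    bx0, by0, bx1, by1 = barcode_box
    rx0, ry0, rx1, ry1 = reading_box
    # The intersection rectangle is non-empty iff both extents are positive.
    overlap_w = min(bx1, rx1) - max(bx0, rx0)
    overlap_h = min(by1, ry1) - max(by0, ry0)
    return overlap_w > 0 and overlap_h > 0

print(barcode_matches_position((50, 40, 120, 90), (100, 60, 300, 200)))  # True
print(barcode_matches_position((50, 40, 90, 55), (100, 60, 300, 200)))   # False
```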
  • the associating unit 1605 is used to associate the matched barcode with the package.
  • the associating unit 1605 can associate the matched barcode with the target attribute.
  • the apparatus 1600 may further include a tracking unit 1608.
  • the tracking unit 1608 can acquire a panoramic image of the extended processing area of the conveyor belt. The extended processing area is located downstream of the field of view of the code reading camera. Based on the detection position and the conveying speed, the prediction unit 1602 continuously updates the predicted position of the package in the extended processing area over time.
  • the tracking unit 1608 may add a tracking frame to the panoramic image according to the predicted position. When the target attributes of the package are associated with a barcode, the tracking unit 1608 presents the tracking frame in the first color; when they are not, it presents the tracking frame in the second color.
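The tracking behaviour above can be sketched as follows, assuming a one-dimensional belt axis and using green/red as stand-ins for the first and second colors (both are assumptions; the patent does not name the colors):

```python
def tracking_frame(detected_x: float, belt_speed: float,
                   t_detect: float, t_now: float,
                   barcode_associated: bool):
    """Predict the package position in the extended processing area and
    choose the tracking-frame color: the first color when the target
    attributes are associated with a barcode, the second otherwise."""
    # Dead-reckon the position from the detection position and belt speed.
    predicted_x = detected_x + belt_speed * (t_now - t_detect)
    color = "green" if barcode_associated else "red"
    return predicted_x, color

print(tracking_frame(2.0, 1.5, 10.0, 12.0, True))   # (5.0, 'green')
```

A real implementation would repeat this update on every panoramic frame and draw the frame at the projected image coordinates of `predicted_x`.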
  • a more specific implementation manner of the apparatus 1600 is similar to that of the method 300, and will not be repeated here.
  • Figure 17 shows a schematic diagram of a computing device according to some embodiments of the present application.
  • the computing device includes one or more processors (CPU) 1702, a communication module 1704, a memory 1706, a user interface 1710, and a communication bus 1708 for interconnecting these components.
  • the processor 1702 may receive and send data through the communication module 1704 to implement network communication and/or local communication.
  • the user interface 1710 includes one or more output devices 1712, which include one or more speakers and/or one or more visual displays.
  • the user interface 1710 also includes one or more input devices 1714.
  • the user interface 1710 may, for example, receive instructions from a remote controller, but is not limited to this.
  • the memory 1706 may be a high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random-access solid-state storage devices; or a non-volatile memory, such as one or more magnetic disk storage devices, optical disc storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • the memory 1706 stores an instruction set executable by the processor 1702, including:
  • an operating system 1716, including programs for handling various basic system services and performing hardware-related tasks;
  • an application 1718, including various programs for implementing the package detection described above; for example, it may include the package detection apparatus 1500 or 1600. Such a program can implement the processing procedures of the above examples, for example the method of detecting packages.
  • each embodiment of the present application can be implemented by a data processing program executed by a data processing device such as a computer.
  • the data processing program constitutes this application.
  • the data processing program is usually stored in a storage medium and is executed either by reading the program directly out of the storage medium or by installing or copying the program to a storage device (such as a hard disk and/or memory) of the data processing device. Such a storage medium therefore also constitutes the present application.
  • the storage medium can use any type of recording method, for example a paper storage medium (such as paper tape), a magnetic storage medium (such as a floppy disk, hard disk, or flash memory), an optical storage medium (such as a CD-ROM), or a magneto-optical storage medium (such as an MO disc).
  • this application also discloses a non-volatile storage medium in which a program is stored.
  • the program includes instructions that, when executed by the processor, cause the computing device to execute the method of detecting packages according to the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Economics (AREA)
  • Artificial Intelligence (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Resources & Organizations (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Health & Medical Sciences (AREA)
  • Strategic Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Quality & Reliability (AREA)
  • Development Economics (AREA)
  • Operations Research (AREA)
  • Marketing (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

A package detection method, device, computing device, logistics system, and storage medium, capable of automatically associating barcode information with a package. The method comprises: when a package passes through a detection area (S1) on a conveyor belt (110), identifying the contour of a designated area on the package and acquiring a detection position of the contour in the detection area (S1); predicting, according to the detection position of the contour in the detection area (S1) and the conveying speed of the conveyor belt (110), a code reading position of the contour in an imaging area of the image coordinate system of a code reading camera (130) and a first image identifier corresponding to the code reading position, the code reading camera (130) being located downstream of the detection area (S1), and the first image identifier being used to identify a package image captured by the code reading camera (130) when the contour reaches the code reading position; when the package image corresponding to the first image identifier is acquired, performing barcode recognition on the package image; when a barcode is recognized from the package image, performing position matching between the code reading position and the recognized barcode; and if the code reading position matches the barcode, associating the matched barcode with the package.
PCT/CN2021/082964 2020-03-25 2021-03-25 Procédé de détection de colis, dispositif, appareil informatique, système logistique, et support de stockage WO2021190595A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010216758.4A CN113449532B (zh) 2020-03-25 2020-03-25 检测包裹的方法、装置、计算设备、物流系统及存储介质
CN202010216758.4 2020-03-25

Publications (1)

Publication Number Publication Date
WO2021190595A1 true WO2021190595A1 (fr) 2021-09-30

Family

ID=77807583

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/082964 WO2021190595A1 (fr) 2020-03-25 2021-03-25 Procédé de détection de colis, dispositif, appareil informatique, système logistique, et support de stockage

Country Status (2)

Country Link
CN (1) CN113449532B (fr)
WO (1) WO2021190595A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114693735A (zh) * 2022-03-23 2022-07-01 成都智元汇信息技术股份有限公司 一种基于目标识别的视频融合方法及装置
CN115494556A (zh) * 2022-08-18 2022-12-20 成都智元汇信息技术股份有限公司 一种基于段落法模糊匹配的包包关联方法
CN117140558A (zh) * 2023-10-25 2023-12-01 菲特(天津)检测技术有限公司 坐标转换方法、系统及电子设备
CN117765065A (zh) * 2023-11-28 2024-03-26 中科微至科技股份有限公司 一种基于目标检测的单件分离包裹快速定位方法

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113191469A (zh) * 2021-04-30 2021-07-30 南方科技大学 基于二维码的物流管理方法、系统、服务器和存储介质
CN114950977B (zh) * 2022-04-08 2023-11-24 浙江华睿科技股份有限公司 一种包裹追溯方法、装置、系统和计算机可读存储介质
CN114972509B (zh) * 2022-05-26 2023-09-29 北京利君成数字科技有限公司 一种快速识别餐具位置的方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107328364A (zh) * 2017-08-15 2017-11-07 顺丰科技有限公司 一种体积、重量测量系统及其工作方法
CN107832999A (zh) * 2017-11-10 2018-03-23 顺丰科技有限公司 一种货物条码信息采集系统
US20190347455A1 (en) * 2018-05-11 2019-11-14 Optoelectronics Co., Ltd. Optical information reading apparatus and optical information reading method
CN112215022A (zh) * 2019-07-12 2021-01-12 杭州海康机器人技术有限公司 物流读码方法和物流读码装置以及物流系统

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020014533A1 (en) * 1995-12-18 2002-02-07 Xiaxun Zhu Automated object dimensioning system employing contour tracing, vertice detection, and forner point detection and reduction methods on 2-d range data maps
JP5814275B2 (ja) * 2010-03-12 2015-11-17 サンライズ アール アンド ディーホールディングス,エルエルシー 製品識別のためのシステム及び方法
CN108627092A (zh) * 2018-04-17 2018-10-09 南京阿凡达机器人科技有限公司 一种包裹体积的测量方法、系统、储存介质及移动终端
CN109127445B (zh) * 2018-06-04 2021-05-04 顺丰科技有限公司 条码读取方法及条码读取系统
CN109583535B (zh) * 2018-11-29 2023-04-18 中国人民解放军国防科技大学 一种基于视觉的物流条形码检测方法、可读存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107328364A (zh) * 2017-08-15 2017-11-07 顺丰科技有限公司 一种体积、重量测量系统及其工作方法
CN107832999A (zh) * 2017-11-10 2018-03-23 顺丰科技有限公司 一种货物条码信息采集系统
US20190347455A1 (en) * 2018-05-11 2019-11-14 Optoelectronics Co., Ltd. Optical information reading apparatus and optical information reading method
CN112215022A (zh) * 2019-07-12 2021-01-12 杭州海康机器人技术有限公司 物流读码方法和物流读码装置以及物流系统

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114693735A (zh) * 2022-03-23 2022-07-01 成都智元汇信息技术股份有限公司 一种基于目标识别的视频融合方法及装置
CN115494556A (zh) * 2022-08-18 2022-12-20 成都智元汇信息技术股份有限公司 一种基于段落法模糊匹配的包包关联方法
CN115494556B (zh) * 2022-08-18 2023-09-12 成都智元汇信息技术股份有限公司 一种基于段落法模糊匹配的包包关联方法
CN117140558A (zh) * 2023-10-25 2023-12-01 菲特(天津)检测技术有限公司 坐标转换方法、系统及电子设备
CN117140558B (zh) * 2023-10-25 2024-01-16 菲特(天津)检测技术有限公司 坐标转换方法、系统及电子设备
CN117765065A (zh) * 2023-11-28 2024-03-26 中科微至科技股份有限公司 一种基于目标检测的单件分离包裹快速定位方法
CN117765065B (zh) * 2023-11-28 2024-06-04 中科微至科技股份有限公司 一种基于目标检测的单件分离包裹快速定位方法

Also Published As

Publication number Publication date
CN113449532A (zh) 2021-09-28
CN113449532B (zh) 2022-04-19

Similar Documents

Publication Publication Date Title
WO2021190595A1 (fr) Procédé de détection de colis, dispositif, appareil informatique, système logistique, et support de stockage
CN105026997B (zh) 投影系统、半导体集成电路及图像修正方法
CN107525466B (zh) 体积尺寸标注器中的自动模式切换
JP5421624B2 (ja) 三次元計測用画像撮影装置
JP2001194114A (ja) 画像処理装置および画像処理方法、並びにプログラム提供媒体
TW201118791A (en) System and method for obtaining camera parameters from a plurality of images, and computer program products thereof
TW201104508A (en) Stereoscopic form reader
JP2010219825A (ja) 三次元計測用画像撮影装置
WO2021114776A1 (fr) Procédé de détection d'objet, dispositif de détection d'objet, dispositif terminal, et support
US20130170756A1 (en) Edge detection apparatus, program and method for edge detection
CN109934873B (zh) 标注图像获取方法、装置及设备
KR102492821B1 (ko) 감소된 왜곡을 갖는 물체의 3차원 재구성을 생성하기 위한 방법 및 장치
JP2013108933A (ja) 情報端末装置
CN110807431A (zh) 对象定位方法、装置、电子设备及存储介质
CN111295683A (zh) 基于增强现实的包裹查找辅助系统
JP6017343B2 (ja) データベース生成装置、カメラ姿勢推定装置、データベース生成方法、カメラ姿勢推定方法、およびプログラム
JP4554231B2 (ja) 歪みパラメータの生成方法及び映像発生方法並びに歪みパラメータ生成装置及び映像発生装置
CN101180657A (zh) 信息终端
CN104677911A (zh) 用于机器视觉检验的检验设备和方法
CN117253022A (zh) 一种对象识别方法、装置及查验设备
US20230125042A1 (en) System and method of 3d point cloud registration with multiple 2d images
CN112262411B (zh) 图像关联方法、系统和装置
CN117078762A (zh) 一种虚拟现实设备、相机标定装置及方法
KR102217215B1 (ko) 스케일바를 이용한 3차원 모델 제작 서버 및 방법
KR20210084339A (ko) 이미지 연관 방법, 시스템 및 장치

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21776287

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21776287

Country of ref document: EP

Kind code of ref document: A1