WO2021190595A1 - Parcel detection method, device, computing apparatus, logistics system, and storage medium - Google Patents

Parcel detection method, device, computing apparatus, logistics system, and storage medium

Info

Publication number
WO2021190595A1
Authority
WO
WIPO (PCT)
Prior art keywords
package
barcode
image
area
coordinate system
Prior art date
Application number
PCT/CN2021/082964
Other languages
French (fr)
Chinese (zh)
Inventor
顾睿
邓志辉
Original Assignee
杭州海康机器人技术有限公司
Priority date
Filing date
Publication date
Application filed by 杭州海康机器人技术有限公司
Publication of WO2021190595A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10544Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K7/10712Fixed beam scanning
    • G06K7/10722Photodetector array or CCD scanning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/08Logistics, e.g. warehousing, loading or distribution; Inventory or stock management

Definitions

  • This application relates to the field of logistics automation technology, and in particular to a method, device, computing device, logistics system, and storage medium for detecting packages.
  • In logistics scenarios, the packages on the conveyor belt need to be inspected for attributes; for example, the size, volume, face sheet (shipping label), barcode, and other attributes of the packages are detected.
  • Devices that detect different properties of the package can be distributed at different locations on the conveyor belt.
  • This application proposes a method, device, computing device, logistics system, and storage medium for detecting packages, which can automatically associate barcode information with packages and their attributes.
  • a method for detecting packages including:
  • when the package on the conveyor belt passes through the detection area, recognizing the contour of a designated area on the package and the detection position of the contour in the detection area, the designated area including the barcode of the package;
  • according to the detection position of the contour in the detection area and the conveying speed of the conveyor belt, predicting the code reading position of the contour in the imaging area of the image coordinate system of the code reading camera and the first image identifier corresponding to the code reading position, the code reading camera being located downstream of the detection area, and the first image identifier being used to identify a package image taken by the code reading camera when the contour reaches the code reading position;
  • when the package image corresponding to the first image identifier is acquired, performing barcode recognition on the package image;
  • performing position matching between the code reading position and the recognized barcode; and
  • when the code reading position matches a barcode, associating the matched barcode with the package.
  • In some embodiments, the detection position of the contour in the detection area is the coordinates of the contour in the first world coordinate system; and predicting, according to the detection position of the contour in the detection area and the conveying speed of the conveyor belt, the code reading position of the contour in the imaging area of the image coordinate system of the code reading camera and the first image identifier corresponding to the code reading position includes:
  • the second image identifier of the image frame collected by the barcode reading camera at the current moment is determined, and the second image identifier is the frame number or the time stamp;
  • the projection position of the contour in the image coordinate system of the code reading camera is taken as the code reading position
  • the first image identifier corresponding to the code reading position is determined.
  • the method of detecting the package further includes:
  • the target attribute of the package is detected, and the target attribute includes at least one of the volume of the package, the size of the package, the weight of the package, and the face sheet of the package;
  • the matched barcode is associated with the target attribute.
  • performing position matching between the code reading position and the barcode includes: determining the barcode area of the barcode in the package image, and determining whether the barcode area belongs to the area corresponding to the code reading position.
  • the method of detecting the package further includes:
  • acquiring a panoramic image of an extended processing area of the conveyor belt, the extended processing area being located downstream of the code reading field of view; according to the detection position and the conveying speed, continuously updating the predicted position of the package in the extended processing area over time; and adding a tracking frame to the panoramic image according to the predicted position;
  • wherein, when the package is associated with a barcode, the tracking frame is presented in a first color, and when the package is not associated with a barcode, the tracking frame is presented in a second color.
  • recognizing the contour of the designated area on the package and recognizing the detection position of the contour in the detection area includes:
  • when the package on the conveyor belt passes through the detection area, a depth image of the package is acquired; according to the depth image, the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system are determined, wherein the upper surface is the designated area; or
  • when the package on the conveyor belt passes through the detection area, a grayscale image and a depth image of the package are acquired; the contour of a single surface area of the package is determined in the grayscale image; according to the contour of the single surface area in the grayscale image, a first depth area corresponding to the single surface area is determined in the depth image; and according to the first depth area, the detection position of the contour of the single surface area in the first world coordinate system is determined, wherein the single surface area is the designated area.
  • the method of detecting the package further includes:
  • acquiring the first world coordinate system established according to a first calibration board, the first calibration board being placed on the conveyor belt and in the field of view of the depth camera;
  • the fourth mapping relationship between the coordinate system of the code-reading camera and the image coordinate system of the code-reading camera is determined.
  • determining, according to the depth image, the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system includes:
  • determining a three-dimensional model of the package according to the depth image, and determining, according to the coordinates of the three-dimensional model in the depth camera coordinate system and the first mapping relationship, the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system, the detection position being represented by the coordinates of at least three vertices of the upper surface of the package in the first world coordinate system; or
  • acquiring a grayscale image corresponding to the depth image, determining the contour of the upper surface of the package in the grayscale image, determining a second depth region corresponding to the upper surface in the depth image, and determining, according to the coordinates of at least three vertices of the second depth region in the depth camera coordinate system and the first mapping relationship, the detection position of the upper surface of the package in the first world coordinate system, the detection position being represented by the coordinates of the at least three vertices of the second depth region in the first world coordinate system.
  • a device for detecting packages including:
  • the detection unit is configured to identify the contour of a designated area on the package when the package passes through the detection area on the conveyor belt, and identify the detection position of the contour in the detection area, and the designated area includes the barcode of the package;
  • a prediction unit, configured to predict, according to the detection position of the contour in the detection area and the conveying speed of the conveyor belt, the code reading position of the contour in the imaging area of the image coordinate system of the code reading camera and a first image identifier corresponding to the code reading position, where the code reading camera is located downstream of the detection area, and the first image identifier is used to identify a package image taken by the code reading camera when the contour reaches the code reading position;
  • the barcode recognition unit is configured to perform barcode recognition on the package image when the package image corresponding to the first image identifier is obtained;
  • the matching unit is used to match the barcode reading position with the recognized barcode
  • the associating unit is used to associate the matched barcode with the package.
  • a computing device, including: a memory; a processor; and a program stored in the memory and configured to be executed by the processor, the program including instructions for executing the method for detecting packages according to the present application.
  • a storage medium storing a program, the program including instructions that, when executed by a computing device, cause the computing device to execute the method for detecting packages according to the present application.
  • a logistics system which includes: a computing device; a conveyor belt; a depth camera; and a code reading camera.
  • the code reading position of the designated area of the package in the imaging area and the first image identification corresponding to the code reading position can be predicted.
  • the barcode reading position can be considered as the projection position of the designated area of the package in the image coordinate system when the package image corresponding to the first image identifier is acquired.
  • According to the package detection solution of the present application, by performing position matching between the barcode and the code reading position in the package image, the association relationship between the package and the barcode can be determined, and further, the barcode can be associated with the attributes of the package.
  • Figure 1 shows a schematic diagram of a logistics system according to some embodiments of the present application
  • FIG. 2 shows a flowchart of a method 200 for detecting a package according to some embodiments of the present application
  • FIG. 3 shows a flowchart of a method 300 for detecting a package according to some embodiments of the present application
  • Figure 4 shows a schematic diagram of a coordinate system in a logistics system according to some embodiments of the present application
  • FIG. 5 shows a flowchart of a method 500 for determining a detection position according to some embodiments of the present application
  • FIG. 6 shows a flowchart of a method 600 for determining a detection position according to some embodiments of the present application
  • FIG. 7 shows a flowchart of a method 700 for determining a detection position corresponding to the upper surface of a package according to some embodiments of the present application
  • Fig. 8 shows a flowchart of a method 800 for determining a detection position corresponding to the upper surface of a package according to some embodiments of the present application
  • FIG. 9 shows a flowchart of a method 900 for predicting a barcode reading position according to some embodiments of the present application.
  • FIG. 10A shows a schematic diagram of a conveyor belt on which packages that have not yet entered the field of view of the depth camera 120 are placed;
  • FIG. 10B shows the detection position of the package B3 in the detection area determined by the computing device 140
  • FIG. 10C shows a schematic diagram of a target location of the package B3 predicted by the computing device 140
  • FIG. 10D shows a schematic diagram of the projection of 4 vertices to the image coordinate system
  • FIG. 10E shows the projection area in the image coordinate system when the package is at the target position in FIG. 10C;
  • FIG. 11 shows a flowchart of a method 1100 for position matching a barcode and a barcode reading position according to some embodiments of the present application
  • Figure 12A shows a package image according to some embodiments of the present application
  • Figure 12B shows a package image according to some embodiments of the present application.
  • Figure 13A shows a schematic diagram of a logistics system according to some embodiments of the present application.
  • FIG. 13B shows a schematic diagram of a panoramic image according to some embodiments of the present application.
  • Figure 14 shows a schematic diagram of a logistics system according to some embodiments of the present application.
  • FIG. 15 shows a schematic diagram of an apparatus 1500 for detecting packages according to some embodiments of the present application.
  • FIG. 16 shows a schematic diagram of an apparatus 1600 for detecting a package according to some embodiments of the present application
  • Figure 17 shows a schematic diagram of a computing device according to some embodiments of the present application.
  • Fig. 1 shows a schematic diagram of a logistics system according to some embodiments of the present application.
  • the logistics system may include a conveyor belt 110, a depth camera 120, a code reading camera 130 and a computing device 140.
  • the conveyor belt 110 conveys the packages in the conveying direction of the conveyor belt (for example, the direction from right to left in FIG. 1).
  • the depth camera 120 shown in FIG. 1 is, for example, a structured light camera.
  • the structured light camera may include a laser emission module 121 and an image acquisition module 122.
  • the field of view V1 of the image acquisition module 122 can cover the detection area S1 on the conveyor belt 110.
  • the depth camera 120 may also be a Time of Flight (ToF) camera, a binocular vision (stereo) camera, or the like.
  • the depth camera 120 can collect images of the package passing through the field of view V1, and output a sequence of image frames to the computing device 140 in real time.
  • the computing device 140 may be, for example, a server, a notebook computer, a tablet computer, or a handheld business communication device.
  • the computing device 140 can build a three-dimensional model of the package according to the sequence of image frames from the depth camera 120. In this way, the computing device 140 can detect the target attribute of the package, for example, determine the size of the package or the volume of the package.
  • the computing device 140 may determine the detection location of the package on the conveyor belt 110 in the detection area S1 at the moment when the target attribute is detected, that is, determine the actual location of the package at the current moment.
  • the detection position of the package in the detection area S1 can be represented by, for example, the coordinates of the 4 vertices of the upper surface of the package.
  • the code reading camera 130 is downstream of the depth camera 120.
  • the barcode reading field V2 of the barcode reading camera 130 covers the barcode identification area S2 on the conveyor belt 110.
  • the code reading camera 130 may be an industrial camera with an image capturing function, or a smart camera integrated with image capturing and image processing functions.
  • the code reading camera 130 may output image frames to the computing device 140.
  • the computing device 140 can perform barcode recognition on the image frame from the barcode reading camera 130.
  • the computing device 140 can detect one-dimensional barcodes and/or two-dimensional codes.
  • the computing device 140 may establish an association relationship between the barcode information and the target attribute. The manner of establishing an association relationship will be described below with reference to FIG. 2.
  • FIG. 2 shows a flowchart of a method 200 for detecting a package according to some embodiments of the present application.
  • the method 200 may be executed by the computing device 140.
  • step S201 when the package on the conveyor belt passes through the detection area, the contour of the designated area on the package is recognized, and the detection position of the contour in the detection area is recognized.
  • the designated area includes the barcode of the package and is, for example, the upper surface of the package or a single surface area of the package.
  • step S201 for example, the coordinates of at least three vertices in the designated area may be determined, and the coordinates of the at least three vertices may be used to indicate the detection position of the contour in the detection area.
  • the coordinates of multiple vertices of the designated area can be obtained to indicate the detection position of the contour in the detection area.
  • step S202 according to the detection position of the contour in the detection area and the conveying speed of the conveyor belt, the code reading position of the contour in the imaging area of the image coordinate system of the code reading camera and the first image identification corresponding to the code reading position are predicted.
  • the code reading camera 130 is located downstream of the detection area.
  • the first image identification is used to identify the package image taken by the barcode reading camera 130 when the outline reaches the barcode reading position.
  • the first image identifier is, for example, a frame number or a time stamp.
  • the code reading position of the contour refers to the coordinate position of the contour in the image coordinate system of the code reading camera 130 when at least a part of the contour is in the imaging area.
  • the target attribute includes at least one of the following: package volume and package size.
  • the target attribute of the package may be determined according to the sequence of image frames collected by the depth camera 120.
  • the depth camera 120 is, for example, a line structured light camera.
  • step S201 can generate a three-dimensional model of the package according to the scanned image frame sequence. In this way, in step S201, the target attribute of the package can be determined according to the three-dimensional model.
  • the detection position of the upper surface of the package in the detection area on the conveyor belt 110 can be determined.
  • the coordinates of 4 vertices on the upper surface of the package can be determined, and the coordinates of the 4 vertices can be used to indicate the detection position of the package in the detection area.
  • the detection position can be used as a starting point, and at least one code reading position can be determined according to the conveyor speed.
  • the reading position can be indicated by the coordinates of the 4 vertices on the upper surface of the package.
  • the coordinates of the 4 vertices can define a rectangular area.
  • the rectangular area corresponds to the upper surface of the package. At least a part of the rectangular area is in the reading field of view.
  • step S203 when the package image corresponding to the first image identifier is acquired, barcode recognition is performed on the package image.
  • the method 200 may then execute step S204 to perform position matching between the recognized barcode and the code reading position.
  • step S204 may perform position matching on the barcode and the barcode reading position in the same coordinate system (for example, the image coordinate system of the barcode reading camera 130).
  • step S203 may identify multiple barcodes in the package image, such as barcode C1, barcode C2, and barcode C3.
  • In step S204, for example, it can be determined that when the contour of the designated area on the package B1 reaches the code reading position, the designated area of the package B1 is projected into the package image; further, if at least a part of the barcode area corresponding to the barcode C1 belongs to this projection area, it can be determined that the barcode C1 matches the code reading position corresponding to the package B1.
  • the first image identifier is a frame number
  • an image with the frame number in the image captured by the code reading camera 130 can be acquired as the package image corresponding to the first image identifier.
  • the first image identifier is a timestamp
  • an image with the timestamp in the image taken by the code reading camera 130 can be acquired as the package image corresponding to the first image identifier.
  • In step S205, when the code reading position matches a barcode, the matched barcode is associated with the package. If only one barcode is recognized in step S203, the barcode matched in step S204 is that barcode. If multiple barcodes are recognized in step S203, the barcode matched in step S204 may be one of those barcodes.
  • According to the package detection method 200 of the present application, when the detection position of the package is detected, the code reading position of the designated area of the package in the imaging area and the first image identifier corresponding to the code reading position can be predicted.
  • the barcode reading position can be considered as the projection position of the designated area of the package in the image coordinate system of the barcode reading camera when the package image corresponding to the first image identifier is acquired.
  • the package detection method 200 according to the present application can determine the association relationship between the package and the barcode by matching the position of the barcode and the barcode reading position in the package image.
  • the attributes of the package after the association relationship between the package and the barcode is determined, the attributes of the package can also be obtained, and the attributes of the package can be associated with the barcode.
  • the attributes of the package can be detected in advance.
  • In some embodiments, an identifier can be generated for each package. After the detection position is determined and the code reading position and the first image identifier are predicted, the correspondence among the package identifier, the detection position, the code reading position, and the first image identifier can be recorded. Subsequently, when the package image corresponding to the first image identifier is acquired, a barcode is recognized, and the code reading position matches the barcode, the package identifier corresponding to the code reading position can be determined, and the matched barcode can then be associated with the package corresponding to that identifier.
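  • As a rough illustration (not part of the patent), the bookkeeping described above could be kept in a per-package record such as the following sketch; all names and fields are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PackageRecord:
    """One record per package; all field names are illustrative, not from the patent."""
    package_id: int
    detection_position: list                        # e.g. 4 upper-surface vertices in world coords
    reading_positions: list = field(default_factory=list)   # predicted projections in the image
    first_image_ids: list = field(default_factory=list)     # frame numbers or timestamps
    barcode: Optional[str] = None                   # filled in once a barcode is matched
    attributes: dict = field(default_factory=dict)  # volume, size, weight, face sheet, ...

records = {}

def on_detection(package_id, detection_position, reading_positions, first_image_ids):
    records[package_id] = PackageRecord(package_id, detection_position,
                                        list(reading_positions), list(first_image_ids))

def on_barcode_matched(first_image_id, barcode):
    # Find the package whose predicted first image identifier matches the image the
    # barcode was read from, then associate the barcode with that package.
    for rec in records.values():
        if first_image_id in rec.first_image_ids:
            rec.barcode = barcode
            return rec
    return None
```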
  • FIG. 3 shows a flowchart of a method 300 for detecting a package according to some embodiments of the present application.
  • the method 300 may be executed by the computing device 140.
  • step S301 the first world coordinate system established according to the first calibration board is acquired.
  • the first calibration board is placed on the conveyor belt 110 and is in the field of view of the depth camera 120.
  • the first calibration board is, for example, a checkerboard calibration board. As shown in Figure 4, the first world coordinate system can be (X1, Y1, Z1).
  • step S302 the external parameters of the depth camera are calibrated according to the first world coordinate system and the image of the first calibration board taken by the depth camera, to obtain the first mapping relationship between the depth camera coordinate system and the first world coordinate system.
  • step S303 the second world coordinate system established according to the second calibration board is acquired.
  • the second calibration board is placed on the conveyor belt 110 and is in the code reading field of view of the code reading camera 130.
  • step S304 the external parameters of the code reading camera are calibrated according to the second world coordinate system and the image of the second calibration board taken by the code reading camera, to obtain the second mapping relationship between the code reading camera coordinate system and the second world coordinate system.
  • the second world coordinate system can be (X2, Y2, Z2).
  • step S305 a third mapping relationship between the first world coordinate system and the second world coordinate system is determined.
  • step S306 according to the internal parameters of the code reading camera, a fourth mapping relationship between the coordinate system of the code reading camera and the image coordinate system of the code reading camera is determined.
  • Figure 4 shows a schematic diagram of the coordinate system in the logistics system.
  • Figure 4 shows the first world coordinate system R1 (X1, Y1, Z1), the second world coordinate system R2 (X2, Y2, Z2), the depth camera coordinate system R3 (X3, Y3, Z3), the code reading camera coordinate system R4 (X4, Y4, Z4), and the image coordinate system R5 (X5, Y5) of the code reading camera 130.
  • the image coordinate system R5 corresponds to the imaging plane of the code reading camera 130.
  • the external parameters of the code reading camera 130 can be represented by T_CB, that is, the second mapping relationship.
  • T_CB can be expressed in the following matrix form: T_CB = [R, T; 0, 1], i.e., a 4*4 homogeneous transformation matrix whose upper-left block is R, whose upper-right column is T, and whose bottom row is [0, 0, 0, 1].
  • R is, for example, a 3*3 rotation matrix, which represents the rotation transformation parameter between the second world coordinate system and the code reading camera coordinate system.
  • T is, for example, a 3*1 translation matrix, which represents the translation transformation parameter between the second world coordinate system and the code reading camera coordinate system.
  • R is an orthogonal matrix, i.e., R^T * R = I.
  • the calibration method of the external parameters of the depth camera 120 is similar to the calibration method of the external parameters of the code reading camera 130.
  • the first mapping relationship can refer to the related introduction of the second mapping relationship.
  • the transformation matrix corresponding to the third mapping relationship may be [0, d, 0, 0]^T, where d represents the offset of the second world coordinate system relative to the first world coordinate system in the conveying direction of the conveyor belt.
  • the internal parameters of the code reading camera 130 can be represented by K_C, that is, the fourth mapping relationship. Specifically, K_C can be expressed in the following matrix form: K_C = [f_x, 0, c_x; 0, f_y, c_y; 0, 0, 1].
  • f_x and f_y are the focal length parameters of the code reading camera 130, respectively.
  • c_x and c_y are the offsets of the camera coordinate system relative to the image coordinate system.
  • The above steps S301 to S306 are the calibration of each coordinate system and the determination of the mapping relationships. Before packages are inspected, each coordinate system can be calibrated and the mapping relationships determined based on steps S301 to S306. Subsequently, when a package needs to be inspected, steps S307 to S315 can be executed. In other words, steps S301 to S306 only need to be performed once; they do not need to be repeated every time a package is inspected.
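  • The following sketch (an illustration, not the patent's implementation) shows how the first, second, third, and fourth mapping relationships could be chained to project a point from the first world coordinate system into the pixel coordinates of the code reading camera; the numeric calibration values, the choice of the Y axis as the conveying direction, and all function names are assumptions:

```python
import numpy as np

# Placeholder calibration results; in practice these come from steps S301-S306.
d = 1.20                                  # offset of the second world frame along the conveying direction
R = np.eye(3)                             # rotation part of the reading-camera extrinsics T_CB
T = np.array([0.0, 0.0, 1.5])             # translation part of the reading-camera extrinsics T_CB
fx, fy, cx, cy = 1200.0, 1200.0, 640.0, 512.0  # intrinsics K_C of the reading camera

T_W1_to_W2 = np.eye(4)                    # third mapping: first world -> second world
T_W1_to_W2[1, 3] = -d                     # subtract [0, d, 0, 0]^T (conveying direction assumed to be Y)

T_CB = np.eye(4)                          # second mapping: second world -> reading-camera coordinates
T_CB[:3, :3] = R
T_CB[:3, 3] = T

K_C = np.array([[fx, 0.0, cx],            # fourth mapping: camera -> image coordinates
                [0.0, fy, cy],
                [0.0, 0.0, 1.0]])

def world_to_pixel(p_w1):
    """Project a point in the first world coordinate system to reading-camera pixels."""
    p_h = np.append(np.asarray(p_w1, dtype=float), 1.0)   # homogeneous [x, y, z, 1]
    p_cam = T_CB @ (T_W1_to_W2 @ p_h)                      # reading-camera coordinates
    u, v, w = K_C @ p_cam[:3]                              # homogeneous image coordinates
    return u / w, v / w                                    # pixel coordinates
```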
  • step S307 when the package on the conveyor belt passes through the detection area, the contour of the designated area on the package is recognized, and the detection position of the contour in the detection area is recognized.
  • the designated area includes the barcode of the package and is, for example, the upper surface of the package or a single surface area of the package.
  • the designated area is the upper surface of the package, and step S307 may be implemented as method 500.
  • step S501 when the package passes through the inspection area on the conveyor belt, a depth image of the package is acquired.
  • the depth camera 120 can take pictures of the packages transferred on the conveyor belt 110, and then can obtain the depth images of the packages passing through the inspection area.
  • step S502 according to the depth image, the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system are determined.
  • the upper surface of the package can be used as a designated area.
  • the method 500 can determine the contour of the upper surface of the package according to the depth image taken by the depth camera 120, so that the upper surface is taken as the designated area, and the detection position of the contour in the detection area is determined.
  • the designated area is a single-sided area on the package.
  • Step S307 may be implemented as method 600.
  • step S601 when the package passes through the inspection area on the conveyor belt, a grayscale image and a depth image of the package are acquired.
  • step S602 the contour of the single surface area of the package is determined in the grayscale image.
  • step S603 according to the contour of the single-surface area in the gray image, the first depth area corresponding to the single-surface area in the depth image is determined.
  • step S604 the detection position of the contour of the single area in the first world coordinate system is determined according to the first depth area.
  • the method 600 can use the grayscale image to determine the surface area of the package, and then determine the contour and detection position of the surface area based on the grayscale image and the depth image.
  • step S502 may be implemented as method 700.
  • step S701 a three-dimensional model of the package is determined according to the depth image.
  • step S702 according to the coordinates of the three-dimensional model in the depth camera coordinate system and the first mapping relationship, the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system are determined.
  • the detection position is represented by the coordinates of at least three vertices of the upper surface of the package in the first world coordinate system.
  • The coordinates of the upper surface of the package in the depth camera coordinate system can be obtained according to the coordinates of the three-dimensional model in the depth camera coordinate system. Based on these coordinates, the contour of the upper surface of the package can be determined. In addition, after the coordinates of the upper surface of the package in the depth camera coordinate system are obtained, the coordinates of at least three vertices of the upper surface in the depth camera coordinate system can be selected and, combined with the first mapping relationship, converted into the coordinates of the at least three vertices of the upper surface in the first world coordinate system.
  • the method 700 can determine the three-dimensional model of the package according to the depth image, and then use the three-dimensional model to determine the contour of the designated area (ie, the upper surface) and the detection position of the contour in the first world coordinate system.
  • step S502 may be implemented as method 800.
  • step S801 a gray image corresponding to the depth image is acquired.
  • step S802 the contour of the upper surface of the package is determined in the grayscale image.
  • the contour of the upper surface of the package is, for example, a rectangular area.
  • step S803 according to the contour of the upper surface of the package in the grayscale image, a second depth region corresponding to the upper surface in the depth image is determined, and at least three vertices of the second depth region are obtained.
  • step S804 the coordinates of at least three vertices of the second depth region in the depth camera coordinate system are determined.
  • step S805 the detection position of the upper surface of the package in the first world coordinate system is determined according to the coordinates of the at least three vertices of the second depth region in the depth camera coordinate system and the first mapping relationship.
  • the detection position is represented by the coordinates of at least three vertices of the second depth region in the first world coordinate system.
  • the method 800 can determine the vertex coordinates of the upper surface of the package according to the grayscale image and the depth image, and then determine the detection position of the upper surface of the package in the first world coordinate system.
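  • For illustration only, a simplified stand-in for the upper-surface extraction of methods 500 and 800 could look like the sketch below; it assumes the depth image encodes height above the belt in millimetres and uses OpenCV contour fitting:

```python
import cv2
import numpy as np

def upper_surface_vertices(depth_mm, min_height_mm=20.0):
    """Return four (u, v, height) vertices of the package's upper surface.

    depth_mm is assumed to encode height above the conveyor belt in millimetres;
    this is a simplified stand-in for the contour extraction of methods 500/800.
    """
    mask = (depth_mm > min_height_mm).astype(np.uint8) * 255     # pixels raised above the belt
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    top = max(contours, key=cv2.contourArea)                     # largest raised region
    corners = cv2.boxPoints(cv2.minAreaRect(top))                # 4 corners of the fitted rectangle
    h, w = depth_mm.shape
    heights = [float(depth_mm[int(np.clip(v, 0, h - 1)), int(np.clip(u, 0, w - 1))])
               for u, v in corners]
    return np.column_stack([corners, heights])                   # one row per vertex: (u, v, height)
```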
  • the method 300 may perform step S308 in addition to step S307.
  • the target attribute of the package on the conveyor belt is detected.
  • the target attribute may include, for example, at least one of the volume of the package, the size of the package, the weight of the package, and the face sheet of the package.
  • the depth camera 120 may include a line structured light camera.
  • step S308 may obtain a depth image of the package according to the scanned image frame sequence, and determine the size or volume of the package according to the depth image. For example, based on the depth image, step S308 may determine the three-dimensional model of the package, and determine the size or volume of the package according to the three-dimensional model.
  • a weighing instrument may also be deployed in the detection area.
  • the weighing instrument can detect the weight of the package.
  • step S308 may use the grayscale image taken by the depth camera 120 to determine the face sheet of the package.
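  • As a hedged illustration of how size and volume could be derived from the upper-surface vertices (assuming a box-shaped package whose top face is parallel to the belt; the vertex ordering and units are assumptions):

```python
import numpy as np

def box_attributes(upper_vertices_w1):
    """Estimate size and volume from the 4 upper-surface vertices (in metres) in the
    first world coordinate system; assumes a box-shaped package with its top face
    parallel to the belt and the vertices ordered around the rectangle."""
    v = np.asarray(upper_vertices_w1, dtype=float)
    length = float(np.linalg.norm(v[1] - v[0]))
    width = float(np.linalg.norm(v[2] - v[1]))
    height = float(np.mean(v[:, 2]))              # Z is assumed to be height above the belt
    return {"length": length, "width": width, "height": height,
            "volume": length * width * height}
```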
  • step S309 according to the detection position of the contour in the detection area and the conveying speed of the conveyor belt, the code reading position of the contour in the imaging area of the image coordinate system of the code reading camera and the first image identification corresponding to the code reading position are predicted.
  • the barcode reading camera is located downstream of the detection area.
  • the first image identifier is used to identify the package image taken by the code reading camera when the outline reaches the code reading position.
  • the first image identifier is, for example, a frame number or a time stamp.
  • the reading position of the contour refers to the coordinate position of the contour in the image coordinate system when at least a part of the contour is in the imaging area.
  • step S309 may use the detected position as a starting point, and determine at least one offset position according to the conveyor belt speed.
  • the position of the projection area is the offset position.
  • the offset position is represented by, for example, the coordinates of the projection points of at least three vertices of the designated area in the image coordinate system.
  • step S309 may be implemented as method 900.
  • step S901 when the detection position is identified, the second image identifier of the image frame currently collected by the code reading camera is determined.
  • the second image is identified as a frame number or a time stamp.
  • the second image identification is generated by the code reading camera 130, for example.
  • the second image identifier is a frame number or a time stamp added to the image frame currently received from the code reading camera 130 when the computing device 140 determines the detection position.
  • step S902 the movement distance of the package in a single collection period of the code reading camera is acquired.
  • for example, the single acquisition period of the code reading camera 130 is T_1, the conveying speed is v, and the movement distance is s = v * T_1.
  • step S903 the offset position of the contour in the first world coordinate system is determined according to the parameters of the code reading camera, the detection position of the contour in the detection area, and the conveying speed of the conveyor belt.
  • the offset position satisfies: when the contour is at the offset position, the projection position of the contour in the image coordinate system of the code reading camera is in the imaging area. For example, when at least a part of the projection area of the contour at the offset position is in the imaging area in the image coordinate system, step S903 determines that the projection position is in the imaging area.
  • for example, the resolution of the code reading camera 130 is x*y. When the projected coordinates of at least a part of the contour fall within this x*y image range, step S903 may determine that the projection position is in the imaging area.
  • step S903 uses the detection position as the starting point and the movement distance as the offset unit to determine the offset position of the package when it is in the code reading field of view.
  • the difference between the offset position and the detection position is equal to an integer number of offset units.
  • the distance between the offset position and the detection position is equal to the sum of N offset units. N is a positive integer.
  • step S903 takes the detection position as a starting point, and uses the moving distance as the offset unit, and uses the target position that meets the target condition as the offset position.
  • the target condition is: the difference between the target position and the detection position is equal to an integer number of offset units, and the projection of the target position in the image coordinate system of the code reading camera overlaps the imaging area (image).
  • each image frame captured by the code reading camera 130 includes at least a part of the package.
  • step S903 the package shooting positions corresponding to a part of the image frames or the package shooting positions corresponding to all image frames can be selected as the offset position. Therefore, step S904 can predict one or more reading positions.
  • Step S903 is based on the coordinates of the offset position in the first world coordinate system (that is, the coordinates of the 4 vertices of the upper surface when the package is placed at the offset position), and determines, according to the second mapping relationship, the third mapping relationship, and the fourth mapping relationship, the image coordinates of the projection of the offset position in the image coordinate system of the code reading camera 130.
  • the image coordinates of the projection point of a vertex in the image coordinate system can be calculated according to the following formula: [u_k, v_k, w_k]^T = K_C * T_CB * (P_L + k*[0, s, 0, 0]^T - [0, d, 0, 0]^T), where k is the number of offset units between the offset position and the detection position and s is the movement distance in a single acquisition period; the pixel coordinates of the projection point are (u_k/w_k, v_k/w_k).
  • [u_k, v_k, w_k]^T is the homogeneous image coordinate of the projection point of a vertex L.
  • P_L represents the coordinates of the vertex L in the first world coordinate system when the package is at the detection position.
  • [0, d, 0, 0]^T represents the transformation matrix corresponding to the third mapping relationship.
  • the offset of the second world coordinate system relative to the first world coordinate system in the conveying direction of the conveyor belt is d.
  • P_L - [0, d, 0, 0]^T represents the coordinates of the vertex L in the second world coordinate system when the package is at the detection position.
  • T_CB is the external parameter matrix, which represents the external parameters of the code reading camera 130 and can represent the second mapping relationship.
  • the external parameters can be expressed in the following matrix form: T_CB = [R, T; 0, 1].
  • R is, for example, a 3*3 rotation matrix, which represents the rotation transformation parameter between the second world coordinate system and the code reading camera coordinate system.
  • T is, for example, a 3*1 translation matrix, which represents the translation transformation parameter between the second world coordinate system and the code reading camera coordinate system.
  • R is an orthogonal matrix, i.e., R^T * R = I.
  • K_C is the internal parameter matrix of the code reading camera 130, which is used to represent the internal parameters of the code reading camera 130 and can represent the fourth mapping relationship.
  • f_x and f_y are the focal length parameters of the code reading camera 130, respectively.
  • c_x and c_y are the offsets of the camera coordinate system relative to the image coordinate system.
  • step S905 the difference between the offset position and the detection position is calculated, and the number of image frames taken by the code reading camera before the package reaches the code reading position from the detection position is determined according to the number of movement distances contained in the difference. The number of moving distances is consistent with the number of image frames.
  • the first image identifier corresponding to the code reading position is determined.
  • for example, the second image identifier is the frame number I_2, and the number of offset units included in the difference between the offset position and the detection position is k_1; then the frame number identified by the first image identifier is I_1 = I_2 + k_1.
  • for example, the second image identifier is a timestamp t_2, the number of offset units included in the difference between the offset position and the detection position is k_1, and T_1 is the time difference between adjacent frames of the code reading camera (i.e., the acquisition period of the code reading camera); then the timestamp identified by the first image identifier is t_1 = t_2 + k_1 * T_1.
  • In this way, the method 900 can determine the projection area of the offset position in the package image according to the second mapping relationship, the third mapping relationship, and the fourth mapping relationship, so as to accurately predict at least one code reading position of the designated area of the package in the code reading field of view, and the identifier of the image frame corresponding to each code reading position (i.e., the first image identifier).
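  • The prediction of method 900 could be sketched as follows (illustrative only; it reuses a projection chain like the one sketched after step S306, assumes the conveying direction is the Y axis of the first world coordinate system, and uses the frame-number form I_1 = I_2 + k_1 of the first image identifier):

```python
import numpy as np

def predict_reading_positions(vertices_w1, v_belt, T1, I2,
                              world_to_pixel, img_w, img_h, max_steps=500):
    """vertices_w1: 4x3 upper-surface vertices in the first world coordinate system
    (the detection position); v_belt: conveying speed; T1: acquisition period of the
    reading camera; I2: frame number captured when the detection position was determined;
    world_to_pixel: projection chain such as the one sketched after step S306."""
    s = v_belt * T1                              # movement distance per acquisition period
    predictions = []
    for k in range(1, max_steps + 1):
        offset = np.asarray(vertices_w1, dtype=float).copy()
        offset[:, 1] += k * s                    # step along the conveying direction (assumed Y)
        proj = np.array([world_to_pixel(p) for p in offset])
        inside = ((proj[:, 0] >= 0) & (proj[:, 0] < img_w) &
                  (proj[:, 1] >= 0) & (proj[:, 1] < img_h))
        if inside.any():                         # at least part of the contour is imaged
            predictions.append({"first_image_id": I2 + k,      # frame-number case: I1 = I2 + k1
                                "reading_position": proj})
        elif predictions:                        # the contour has left the imaging area again
            break
    return predictions
```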
  • As shown in FIG. 10A, packages B1, B2, and B3 that have not yet entered the field of view of the depth camera 120 are placed on the conveyor belt.
  • the packages B1 and B3 are placed side by side.
  • B2 is behind packages B1 and B3.
  • FIG. 10B shows the detection position of the package B3 in the detection area determined by the computing device 140.
  • the detection position of the package B3 is represented by the coordinates of the four vertices e1-e4 on the upper surface of the package B3.
  • the detection position of the package B3 is the position where the package B3 has just left the field of view V1.
  • the computing device 140 may determine the coordinates of the vertices e1-e4 in the first world coordinate system according to the first mapping relationship.
  • FIG. 10C shows a schematic diagram of a target location of the package B3 predicted by the computing device 140.
  • FIG. 10C only shows the positions of the four vertices e1-e4 on the upper surface of the package B3, and uses the positions of the four vertices to indicate the target position of the package B3.
  • FIG. 10D shows a schematic diagram of the projection of 4 vertices e1-e4 to the image coordinate system (imaging plane).
  • Fig. 10E shows the projection area in the image coordinate system when the package is at the target position in Fig. 10C.
  • the projection area B3' represents the projection area of the upper surface of the package in the image coordinate system when the package is at the target position of Fig. 10C.
  • V2' represents the imaging area, that is, the range of the image generated by the code reading camera 130 in the image coordinate system. It can be seen from FIG. 10E that when the package B3 is at the target position, the projection area of the package B3 in the image coordinate system (that is, the projection area of the designated area in the image coordinate system) is in the imaging area.
  • the computing device 140 may determine that the difference between the target position of FIG. 10C and the detection position of the package B3 includes an integer number of offset units. Therefore, the computing device 140 may use the target position shown in FIG. 10C as an offset position.
  • the method 300 may further include step S310.
  • step S310 when the package image corresponding to the first image identifier is acquired, barcode recognition is performed on the package image.
  • the method 300 may execute step S311 to position the identified barcode with the barcode reading position.
  • step S311 may be implemented as method 1100.
  • step S1101 the barcode area of the barcode in the package image is determined.
  • step S1101 may determine the image coordinates of each barcode area in the package image.
  • FIG. 12A shows a schematic diagram of a package image according to some embodiments of the present application.
  • the package image P1 in FIG. 12A includes barcode areas C1 and C2.
  • step S1102 it is determined whether the barcode area belongs to the area corresponding to the barcode reading position.
  • For example, step S1102 may determine the projection area of the designated area in the image coordinate system when the package is at the offset position (i.e., the area corresponding to the code reading position).
  • step S1102 based on the third mapping relationship (that is, the mapping relationship between the first world coordinate system and the second world coordinate system), the coordinates of the offset position in the second world coordinate system can be determined.
  • Based on the second mapping relationship and the coordinates of the offset position in the second world coordinate system, step S1102 can determine the coordinates of the offset position in the code reading camera coordinate system. Based on the fourth mapping relationship (i.e., the mapping relationship between the code reading camera coordinate system and the image coordinate system of the code reading camera) and the coordinates of the offset position in the code reading camera coordinate system, step S1102 can determine the coordinates of the projection area of the designated area of the package at the offset position in the image coordinate system (i.e., the code reading position).
  • step S1102 can determine the projection area of package B3 in the package image (ie, the area corresponding to the barcode reading position).
  • Figure 12B shows the projection area B3" of the package B3.
  • step S1102 it can be determined that the barcode area of the barcode C1 is outside the projection area B3", and the barcode area of the barcode C2 belongs to the projection area B3".
  • When it is determined in step S1102 that at least a part of the barcode area belongs to the area corresponding to the code reading position, that is, when it is determined that the barcode area belongs to the area corresponding to the code reading position, the method 1100 may execute step S1103 to determine that the barcode and the code reading position match.
  • Taking FIG. 12B as an example, in step S1103 it can be determined that the barcode C2 matches the code reading position corresponding to the package B3.
  • When it is determined in step S1102 that the barcode area is outside the area corresponding to the code reading position, that is, when it is determined that the barcode area does not belong to the area corresponding to the code reading position, the method 1100 may execute step S1104 to determine that the barcode and the code reading position do not match.
  • the method 1100 can determine whether the barcode and the barcode reading position match according to the position relationship between the barcode reading position and the barcode.
  • the barcode matching with the reading position can be understood as: the barcode is on the package corresponding to the reading position.
  • the mismatch between the barcode and the barcode reading position can be understood as: the barcode does not belong to the package corresponding to the barcode reading position.
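  • A minimal sketch of the position matching in method 1100 is shown below; the patent only requires that at least a part of the barcode area falls inside the projected area, so the overlap threshold used here is an illustrative choice:

```python
from shapely.geometry import Polygon

def barcode_matches(barcode_corners, projection_corners, min_overlap=0.5):
    """Both arguments are lists of (u, v) image coordinates describing a quadrilateral.
    The patent only requires that at least part of the barcode area falls inside the
    projected package area; min_overlap is an illustrative tunable threshold."""
    barcode = Polygon(barcode_corners)
    projection = Polygon(projection_corners)
    if barcode.is_empty or barcode.area == 0:
        return False
    overlap = barcode.intersection(projection).area
    return overlap / barcode.area >= min_overlap
```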
  • the method 300 may further include steps S312 and S313.
  • step S312 when the barcode reading position matches the barcode, the matched barcode is associated with the target attribute. Taking FIG. 12B as an example, in step S312, the barcode C2 and the target attribute of the package B3 may be associated and stored.
  • step S313 the matched barcode is associated with the package.
  • According to the package detection method 300 of the present application, when the detection position of the package is detected, the code reading position of the package in the imaging area and the first image identifier corresponding to the code reading position can be predicted. On this basis, by performing position matching between the barcode and the code reading position in the package image, the association relationship between the package and the barcode can be determined, and the target attribute of the package can be associated with the barcode. In other words, after the attributes of the package are determined, the attributes of the package can also be associated with the barcode.
  • the method 300 may further include step S314 of obtaining a panoramic image of the extended processing area of the conveyor belt.
  • the extended processing area is located downstream of the barcode reading field.
  • step S315 according to the detected position and the transmission speed, continuously update the predicted position of the package in the extended processing area over time, and add a tracking frame to the panoramic image according to the predicted position.
  • When the package is associated with a barcode, the tracking frame can be presented in the first color. When the package is not associated with a barcode, step S315 may present the tracking frame in the second color. For example, the first color is green and the second color is red.
  • FIG. 13A shows a schematic diagram of a logistics system according to some embodiments of the present application.
  • FIG. 13A further adds a camera 150 on the basis of FIG. 1.
  • the camera 150 may output a sequence of image frames to the computing device 140, that is, a sequence of panoramic image frames.
  • the camera 150 is downstream of the code reading camera 130.
  • the field of view of the camera 150 is V3.
  • the field of view V3 can cover the extended processing area S3 of the conveyor belt.
  • the computing device 140 can update the predicted position of each package in the extended processing area according to the detected position and the transmission speed, and add a tracking frame to the panoramic image. In this way, the computing device 140 can track the location of the package through the tracking box.
  • the computing device 140 can display different states of the package by displaying the tracking frame of the package in different colors.
  • the staff can easily determine that the target attribute of the package corresponding to the tracking frame of the second color is not associated with a barcode, that is, there is no identifiable barcode on the upper surface of the package corresponding to the tracking frame of the second color.
  • the case where there is no identifiable bar code on the upper surface of the package is, for example, the bar code on the upper surface of the package is incomplete, the package does not have a bar code, or the package bar code is on the side or bottom of the package.
  • the staff can perform code supplementation and other processing for packages that do not have an identifiable barcode.
  • FIG. 13B shows a schematic diagram of a panoramic image according to some embodiments of the present application.
  • the tracking frame M1 may be presented in green, for example, and the tracking frame M2 may be presented in red, for example.
  • the staff can quickly find the red package and perform operations such as complementing the code.
  • According to the package detection method 300 of the present application, packages leaving the code reading field of view can be tracked through steps S314 and S315, and by presenting different colors to indicate the package status, the efficiency of detecting abnormal packages (e.g., packages whose target attribute is not associated with a barcode) can be greatly improved, which facilitates their handling.
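  • The tracking-frame rendering of steps S314 and S315 could be sketched as follows (illustrative; the panorama scale, field names, and the simple linear position update are assumptions):

```python
import cv2

GREEN, RED = (0, 255, 0), (0, 0, 255)            # BGR colors, as used by OpenCV

def draw_tracking_frames(panorama, packages, now, px_per_m=200.0):
    """packages: iterable of dicts with keys 'entry_px' (u, v of the tracking frame's
    top-left corner when the package entered the extended area), 'size_px' (w, h),
    'entry_time', 'speed' (belt speed in m/s) and 'barcode' (None if not associated).
    px_per_m maps conveying distance to panorama pixels and is an assumed scale."""
    for p in packages:
        du = int((now - p["entry_time"]) * p["speed"] * px_per_m)   # predicted travel in pixels
        u, v = p["entry_px"]
        w, h = p["size_px"]
        color = GREEN if p["barcode"] else RED                      # first/second color
        cv2.rectangle(panorama, (u + du, v), (u + du + w, v + h), color, 2)
    return panorama
```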
  • Figure 14 shows a schematic diagram of a logistics system according to some embodiments of the present application.
  • Figure 14 shows an array of code reading cameras.
  • the array includes, for example, code reading cameras 130, 160, and 170.
  • the field of view of adjacent code reading cameras can be adjacent or partially overlapped.
  • the computing device 140 can predict the barcode reading position of a package in the barcode reading cameras 130, 160, and 170, and associate the target attribute of the package with the barcode according to the package image of each barcode reading camera. In this way, multiple barcode reading cameras are used to associate target attributes with barcodes.
  • the computing device 140 can compare the association results (i.e., the association results between the target attribute and the barcode) corresponding to different code reading cameras, so as to improve the accuracy of associating the target attribute with the barcode. For example, if the association results corresponding to the code reading cameras 130 and 160 are the same, and the association result corresponding to the code reading camera 170 is different from that corresponding to the code reading camera 130, then the computing device 140 takes the association result corresponding to the code reading cameras 130 and 160 as the final result.
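  • A simple way to compare the association results of several code reading cameras, as described above, is a majority vote; the sketch below is illustrative and not taken from the patent:

```python
from collections import Counter

def fuse_associations(per_camera_barcodes):
    """per_camera_barcodes: mapping from reading camera to the barcode it associated
    with the same package, e.g. {'cam130': 'A', 'cam160': 'A', 'cam170': 'B'};
    the result that most cameras agree on prevails."""
    votes = Counter(b for b in per_camera_barcodes.values() if b is not None)
    if not votes:
        return None
    barcode, _ = votes.most_common(1)[0]
    return barcode
```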
  • FIG. 15 shows a schematic diagram of an apparatus 1500 for detecting packages according to some embodiments of the present application.
  • the apparatus 1500 may be deployed in the computing device 140, for example.
  • the device 1500 for detecting packages may include: a detection unit 1501, a prediction unit 1502, a barcode recognition unit 1503, a matching unit 1504, and an association unit 1505.
  • the detection unit 1501 is used to identify the contour of the designated area on the package when the package passes through the detection area on the conveyor belt, and to identify the detection position of the contour in the detection area.
  • the designated area includes the barcode of the package.
  • the prediction unit 1502 is used for predicting the reading position of the contour in the imaging area of the image coordinate system of the code reading camera and the first image identifier corresponding to the reading position according to the detection position of the contour in the detection area and the conveying speed of the conveyor belt.
  • the code reading camera is located downstream of the detection area, and the first image identifier is used to identify the package image taken by the code reading camera when the contour reaches the code reading position.
  • the barcode recognition unit 1503 is configured to perform barcode recognition on the package image when the package image corresponding to the first image identifier is acquired.
  • the matching unit 1504 is used to match the barcode reading position with the recognized barcode.
  • the associating unit 1505 is used to associate the matched barcode with the package.
  • the device 1500 for detecting packages according to the present application can predict the code reading position of the designated area of the package in the imaging area and the first image identification corresponding to the code reading position when the detection position of the package is detected.
  • the barcode reading position can be considered as the projection position of the designated area of the package in the image coordinate system when the package image corresponding to the first image identifier is acquired.
  • the device 1500 for detecting a package according to the present application can determine the relationship between the package and the barcode by matching the barcode and the barcode reading position in the package image.
  • furthermore, it can also associate the attributes of the package with the barcode.
  • FIG. 16 shows a schematic diagram of an apparatus 1600 for detecting packages according to some embodiments of the present application.
  • the apparatus 1600 may be deployed in the computing device 140, for example.
  • the device 1600 for detecting packages may include: a detection unit 1601, a prediction unit 1602, a barcode recognition unit 1603, a matching unit 1604, an association unit 1605, a first calibration unit 1606, and a second calibration unit 1607.
  • the first calibration unit 1606 can obtain the first world coordinate system established according to the first calibration disk.
  • the first calibration disc is placed on the conveyor belt and is in the field of view of the depth camera. According to the first world coordinate system and the image of the first calibration disk taken by the depth camera, the first calibration unit 1606 can calibrate the external parameters of the depth camera to obtain the first mapping relationship between the depth camera coordinate system and the first world coordinate system.
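  • A minimal sketch of how the first calibration unit 1606 might estimate the depth camera's external parameters from an image of the first calibration disk, assuming a checkerboard calibration board with known intrinsics K and distortion coefficients dist; OpenCV is used here only for illustration and is not named by the application:

    import cv2
    import numpy as np

    def calibrate_extrinsics(gray, board_size, square_mm, K, dist):
        """Return the 4x4 transform from the first world coordinate system
        (defined by the calibration board on the belt) to the camera frame."""
        ok, corners = cv2.findChessboardCorners(gray, board_size)
        if not ok:
            raise RuntimeError("calibration board not found")
        # 3D corner coordinates in the board (world) frame, Z = 0 on the belt plane.
        objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_mm
        ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
        R, _ = cv2.Rodrigues(rvec)
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, tvec.ravel()
        return T  # the first mapping relationship (world -> camera), as a homogeneous matrix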
  • the second calibration unit 1607 acquires the second world coordinate system established according to the second calibration disk.
  • the second calibration disc is placed on the conveyor belt and is in the reading field of the code reading camera.
  • according to the second world coordinate system and the image of the second calibration disk captured by the code reading camera, the second calibration unit 1607 can calibrate the external parameters of the code reading camera to obtain the second mapping relationship between the code reading camera coordinate system and the second world coordinate system.
  • the second calibration unit 1607 can also determine the third mapping relationship between the first world coordinate system and the second world coordinate system. According to the internal parameters of the code reading camera, the second calibration unit 1607 can determine the fourth mapping relationship between the code reading camera's coordinate system and the image coordinate system of the code reading camera.
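  • A sketch of how the four mapping relationships could be chained to project a point given in the first world coordinate system into the image coordinate system of the code reading camera; the matrix names are assumptions, and the transforms are written as 4x4 homogeneous matrices for illustration:

    import numpy as np

    def project_to_code_reader(p_world1, T_world1_to_world2, T_world2_to_cam, K):
        """p_world1: 3-vector in the first world coordinate system.
        T_world1_to_world2: third mapping (e.g. a pure translation by the
            offset d along the conveying direction), 4x4.
        T_world2_to_cam: second mapping (extrinsics of the code reading camera), 4x4.
        K: fourth mapping (3x3 intrinsics of the code reading camera).
        Returns pixel coordinates (u, v)."""
        p = np.append(np.asarray(p_world1, float), 1.0)        # homogeneous point
        p_cam = T_world2_to_cam @ (T_world1_to_world2 @ p)     # into the camera frame
        x, y, z = p_cam[:3]
        u = K[0, 0] * x / z + K[0, 2]
        v = K[1, 1] * y / z + K[1, 2]
        return u, v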
  • the detection unit 1601 is used to identify the contour of the designated area on the package and the detection position of the contour in the detection area when the package passes through the detection area on the conveyor belt.
  • the designated area includes the barcode of the package.
  • the detection unit 1601 may obtain a depth image of the package when the package passes through the detection area on the conveyor belt. According to the depth image, the detection unit 1601 determines the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system. Wherein, the upper surface is the designated area.
  • the detection unit 1601 may determine a three-dimensional model of the package according to the depth image. According to the coordinates of the three-dimensional model in the depth camera coordinate system and the first mapping relationship, the detection unit 1601 can determine the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system. The detection position is represented by the coordinates of at least three vertices of the upper surface of the package in the first world coordinate system.
  • the detection unit 1601 may obtain a grayscale image corresponding to the depth image.
  • the detection unit 1601 can determine the contour of the upper surface of the package in the gray image. According to the contour of the upper surface of the package in the grayscale image, the detection unit 1601 can determine the second depth region corresponding to the upper surface in the depth image, and obtain at least three vertices of the second depth region.
  • the detection unit 1601 may determine the coordinates of at least three vertices of the second depth region in the depth camera coordinate system. According to the coordinates of the at least three vertices of the second depth region in the depth camera coordinate system and the first mapping relationship, the detection unit 1601 can determine the detection position of the upper surface of the package in the first world coordinate system. The detection position is represented by the coordinates of at least three vertices of the second depth region in the first world coordinate system.
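  • A simplified sketch of how the detection unit 1601 might extract the upper-surface contour from the grayscale image and convert its vertices into the first world coordinate system; the threshold value, the depth-camera intrinsics K_d and the transform T_cam_to_world1 (the first mapping relationship) are assumptions, and OpenCV 4.x is assumed for the findContours signature:

    import cv2
    import numpy as np

    def upper_surface_vertices(gray, depth_mm, K_d, T_cam_to_world1):
        """Return the 4 vertices of the package's upper surface in the first
        world coordinate system (one row per vertex)."""
        _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        top = max(contours, key=cv2.contourArea)              # largest blob taken as the package top
        box = cv2.boxPoints(cv2.minAreaRect(top))             # 4 corner pixels (u, v)
        verts = []
        for u, v in box:
            z = float(depth_mm[int(v), int(u)])               # depth at the corner pixel
            x = (u - K_d[0, 2]) * z / K_d[0, 0]               # back-project into the camera frame
            y = (v - K_d[1, 2]) * z / K_d[1, 1]
            p = T_cam_to_world1 @ np.array([x, y, z, 1.0])    # apply the first mapping relationship
            verts.append(p[:3])
        return np.array(verts)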
  • the detection unit 1601 may obtain a grayscale image and a depth image of the package when the package passes through the detection area on the conveyor belt.
  • the detection unit 1601 can determine the contour of the face sheet area of the package in the grayscale image.
  • the detection unit 1601 can determine the first depth area corresponding to the face sheet area in the depth image.
  • according to the first depth area, the detection unit 1601 can determine the detection position of the contour of the face sheet area in the first world coordinate system.
  • the face sheet area is the designated area.
  • the detection unit 1601 may also detect the target attribute of the package when the package passes through the detection area on the conveyor belt.
  • the target attributes include at least one of the volume of the package, the size of the package, the weight of the package, and the face sheet of the package.
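  • A small sketch of how the size and volume could be derived from the upper-surface vertices, assuming the belt plane is Z = 0 in the first world coordinate system and the vertices are ordered around the rectangle; this is an illustration, not the application's own computation:

    import numpy as np

    def size_and_volume(vertices_world):
        """vertices_world: 4x3 array of upper-surface corners in the first
        world coordinate system, Z measured upward from the conveyor belt."""
        v = np.asarray(vertices_world, float)
        length = np.linalg.norm(v[1] - v[0])
        width = np.linalg.norm(v[2] - v[1])
        height = float(np.mean(v[:, 2]))      # height of the top face above the belt
        return (length, width, height), length * width * height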
  • the prediction unit 1602 is used for predicting the code reading position of the contour in the imaging area of the image coordinate system of the code reading camera and the first image identifier corresponding to the code reading position according to the detection position of the contour in the detection area and the conveying speed of the conveyor belt.
  • the code reading camera is located downstream of the detection area, and the first image identifier is used to identify the package image taken by the code reading camera when the outline reaches the code reading position.
  • the detection position of the contour in the detection area is the coordinate of the contour in the first world coordinate system.
  • the prediction unit 1602 can determine the second image identifier of the image frame collected by the barcode reading camera at the current moment.
  • the second image identifier is a frame number or a timestamp.
  • the prediction unit 1602 can obtain the moving distance of the package in a single collection period of the code reading camera.
  • the prediction unit 1602 can determine the offset position of the contour in the first world coordinate system according to the parameters of the code reading camera, the detection position of the contour in the detection area, and the conveying speed of the conveyor belt.
  • the offset position satisfies: when the contour is at the offset position, the projection position of the contour in the image coordinate system of the code reading camera is in the imaging area. In this way, the prediction unit 1602 can use the projection position of the contour in the image coordinate system of the code reading camera as the code reading position when the contour is at the offset position.
  • the prediction unit 1602 may calculate the difference between the offset position and the detection position, and determine, from the number of per-period movement distances contained in that difference, how many image frames the code reading camera captures before the package travels from the detection position to the code reading position. According to the second image identifier and the number of image frames, the prediction unit 1602 can determine the first image identifier corresponding to the code reading position.
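  • A sketch of the frame-identifier prediction described above, assuming positions are measured along the conveying direction in millimetres; the function and parameter names are illustrative assumptions:

    import math

    def predict_first_image_id(second_image_id, detection_pos_mm, offset_pos_mm,
                               belt_speed_mm_s, capture_period_s):
        """Predict the identifier (frame number) of the package image captured
        when the contour reaches the code reading position."""
        move_per_frame = belt_speed_mm_s * capture_period_s   # distance moved per acquisition period
        gap = offset_pos_mm - detection_pos_mm                # distance still to travel
        frames_ahead = math.ceil(gap / move_per_frame)        # frames captured before arrival
        return second_image_id + frames_ahead

    # Example: frame 1000 now, 600 mm to go, 500 mm/s belt, 20 fps camera.
    print(predict_first_image_id(1000, 0.0, 600.0, 500.0, 0.05))  # -> 1024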
  • the barcode recognition unit 1603 is configured to perform barcode recognition on the package image when the package image corresponding to the first image identifier is acquired.
  • the matching unit 1604 is used to match the barcode reading position with the barcode.
  • the matching unit 1604 may determine the barcode area of the barcode in the package image. The matching unit 1604 determines whether the barcode area belongs to the area corresponding to the barcode reading position. When at least a part of the barcode area belongs to the area corresponding to the barcode reading position, the matching unit 1604 determines the position matching between the barcode and the barcode reading position. When the barcode area is outside the area corresponding to the barcode reading position, the matching unit 1604 determines that the barcode and the barcode reading position do not match.
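  • A simplified sketch of the position matching performed by the matching unit 1604, using axis-aligned bounding boxes (x_min, y_min, x_max, y_max) in the image coordinate system; the application itself does not prescribe this representation, so the box format and values are assumptions:

    def matches_reading_position(barcode_box, reading_box):
        """Return True if at least part of the barcode area lies inside the
        area corresponding to the predicted code reading position."""
        bx0, by0, bx1, by1 = barcode_box
        rx0, ry0, rx1, ry1 = reading_box
        overlap_w = min(bx1, rx1) - max(bx0, rx0)
        overlap_h = min(by1, ry1) - max(by0, ry0)
        return overlap_w > 0 and overlap_h > 0

    # A barcode partly inside the predicted projection of package B1 -> match.
    print(matches_reading_position((90, 40, 140, 70), (100, 0, 400, 300)))  # True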
  • the associating unit 1605 is used to associate the matched barcode with the package.
  • the associating unit 1605 can associate the matched barcode with the target attribute.
  • the apparatus 1600 may further include a tracking unit 1608.
  • the tracking unit 1608 can acquire a panoramic image of the extended processing area of the conveyor belt. Among them, the extended processing area is located downstream of the barcode reading field. Based on the detected position and the transmission speed, the prediction unit 1602 continuously updates the predicted position of the package in the extended processing area over time.
  • the tracking unit 1608 may add a tracking frame to the panoramic image according to the predicted position. Wherein, when the target attribute of the package is associated with the barcode, the tracking unit 1608 presents the tracking frame as the first color. When the target attribute of the package is not associated with the barcode, the tracking unit 1608 presents the tracking frame in the second color.
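  • A sketch of how the tracking unit 1608 could render the tracking frame on the panoramic image, with green for packages whose target attribute is associated with a barcode and red otherwise; the BGR colour values and the use of OpenCV are illustrative assumptions:

    import cv2

    GREEN, RED = (0, 255, 0), (0, 0, 255)   # first colour / second colour (BGR)

    def draw_tracking_frame(panorama, predicted_box, associated):
        """predicted_box: (x, y, w, h) of the package in the panoramic image."""
        x, y, w, h = predicted_box
        colour = GREEN if associated else RED
        cv2.rectangle(panorama, (x, y), (x + w, y + h), colour, thickness=2)
        return panorama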
  • a more specific implementation manner of the apparatus 1600 is similar to that of the method 300, and will not be repeated here.
  • Figure 17 shows a schematic diagram of a computing device according to some embodiments of the present application.
  • the computing device includes one or more processors (CPU) 1702, a communication module 1704, a memory 1706, a user interface 1710, and a communication bus 1708 for interconnecting these components.
  • the processor 1702 may receive and send data through the communication module 1704 to implement network communication and/or local communication.
  • the user interface 1710 includes one or more output devices 1712, which include one or more speakers and/or one or more visual displays.
  • the user interface 1710 also includes one or more input devices 1714.
  • the user interface 1710 may, for example, receive instructions from a remote controller, but is not limited to this.
  • the memory 1706 may be a high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid-state storage devices; or a non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • the memory 1706 stores an instruction set executable by the processor 1702, including:
  • Operating system 1716 including programs for processing various basic system services and performing hardware-related tasks;
  • the application 1718 includes various programs for realizing the aforementioned package detection, for example, it may include a package detection device 1500 or 1600. Such a program can implement the processing procedures in the above-mentioned examples, and may include, for example, a method of detecting packages.
  • each embodiment of the present application can be implemented by a data processing program executed by a data processing device such as a computer.
  • the data processing program constitutes this application.
  • such a data processing program is usually stored in a storage medium and is executed either by reading the program directly out of the storage medium or by installing or copying the program onto a storage device (such as a hard disk and/or a memory) of the data processing device. Therefore, such a storage medium also constitutes the present application.
  • the storage medium can use any type of recording method, such as paper storage medium (such as paper tape, etc.), magnetic storage medium (such as floppy disk, hard disk, flash memory, etc.), optical storage medium (such as CD-ROM, etc.), magneto-optical storage medium (such as MO, etc.) and so on.
  • this application also discloses a non-volatile storage medium in which a program is stored.
  • the program includes instructions that, when executed by the processor, cause the computing device to execute the method of detecting packages according to the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Economics (AREA)
  • Artificial Intelligence (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Resources & Organizations (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Health & Medical Sciences (AREA)
  • Strategic Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Quality & Reliability (AREA)
  • Development Economics (AREA)
  • Operations Research (AREA)
  • Marketing (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

A parcel detection method, a device, a computing apparatus, a logistics system, and a storage medium, capable of automatically associating barcode information with a parcel. The method comprises: when a parcel passes through a detection region (S1) on a conveyor belt (110), identifying an outline of a designated region on the parcel and acquiring a detected position of the outline in the detection region (S1); predicting, according to the detected position of the outline in the detection region (S1) and a conveying speed of the conveyor belt (110), a code reading position of the outline in an imaging region of an image coordinate system of a code reading camera (130) and a first image marker corresponding to the code reading position, wherein the code reading camera (130) is downstream with respect to the detection region (S1), and the first image marker is used to mark a parcel image captured by the code reading camera (130) when the outline reaches the code reading position; upon acquisition of the parcel image corresponding to the first image marker, performing barcode identification on the parcel image; when a barcode is identified from the parcel image, performing position matching of the code reading position and the identified barcode; and if the code reading position matches the barcode, associating the matching barcode with the parcel.

Description

Method, device, computing device, logistics system and storage medium for detecting packages

This application claims priority to the Chinese patent application with application number 202010216758.4, filed with the Chinese Patent Office on March 25, 2020 and entitled "Method, device, computing device, logistics system and storage medium for detecting packages", the entire contents of which are incorporated herein by reference.

Technical field

This application relates to the field of logistics automation technology, and in particular to a method, device, computing device, logistics system and storage medium for detecting packages.

Background

At present, in logistics application scenarios, the packages on a conveyor belt need to undergo attribute detection, for example, detection of the size, volume, face sheet, barcode and other attributes of each package. Devices that detect different attributes of a package (such as code reading cameras and depth cameras) can be distributed at different positions along the conveyor belt.

However, when multiple packages are conveyed on the conveyor belt concurrently, that is, when multiple packages appear side by side or staggered back and forth in the field of view of the code reading camera at the same time, the barcode information of the multiple packages is easily confused, and the barcode information cannot be correctly associated with the other attributes of the packages.

Therefore, how to accurately associate barcode information with a package and its attributes is a technical problem that needs to be solved.

Summary of the invention

This application proposes a method, device, computing device, logistics system and storage medium for detecting packages, which can automatically associate barcode information with packages and their attributes.
According to one aspect of the present application, a method for detecting packages is provided, including:

when a package on a conveyor belt passes through a detection area, identifying the contour of a designated area on the package and identifying the detection position of the contour in the detection area, the designated area including the barcode of the package;

predicting, according to the detection position of the contour in the detection area and the conveying speed of the conveyor belt, the code reading position of the contour in the imaging area of the image coordinate system of a code reading camera and a first image identifier corresponding to the code reading position, where the code reading camera is located downstream of the detection area and the first image identifier is used to identify the package image captured by the code reading camera when the contour reaches the code reading position;

when the package image corresponding to the first image identifier is acquired, performing barcode recognition on the package image;

when a barcode is recognized in the package image, matching the code reading position against the recognized barcode;

when the code reading position matches a barcode, associating the matched barcode with the package.
In some embodiments, the detection position of the contour in the detection area is the coordinates of the contour in a first world coordinate system; and the predicting, according to the detection position of the contour in the detection area and the conveying speed of the conveyor belt, the code reading position of the contour in the imaging area of the image coordinate system of the code reading camera and the first image identifier corresponding to the code reading position includes:

when the detection position is identified, determining a second image identifier of the image frame collected by the code reading camera at the current moment, the second image identifier being a frame number or a timestamp;

obtaining the moving distance of the package within a single acquisition period of the code reading camera;

determining, according to the parameters of the code reading camera, the detection position of the contour in the detection area and the conveying speed of the conveyor belt, an offset position of the contour in the first world coordinate system, the offset position satisfying: when the contour is at the offset position, the projection position of the contour in the image coordinate system of the code reading camera is within the imaging area;

taking, as the code reading position, the projection position of the contour in the image coordinate system of the code reading camera when the contour is at the offset position;

calculating the gap between the offset position and the detection position, and determining, according to the number of the moving distances contained in the gap, the number of image frames captured by the code reading camera before the package travels from the detection position to the code reading position;

determining, according to the second image identifier and the number of image frames, the first image identifier corresponding to the code reading position.
In some embodiments, the method for detecting packages further includes:

when the package on the conveyor belt passes through the detection area, detecting a target attribute of the package, the target attribute including at least one of the volume of the package, the size of the package, the weight of the package and the face sheet of the package;

when the code reading position matches a barcode, associating the matched barcode with the target attribute.

In some embodiments, the matching of the code reading position against the barcode includes:

determining the barcode area of the barcode in the package image;

determining whether the barcode area belongs to the area corresponding to the code reading position;

when at least a part of the barcode area belongs to the area corresponding to the code reading position, determining that the barcode and the code reading position match;

when the barcode area is outside the area corresponding to the code reading position, determining that the barcode and the code reading position do not match.
In some embodiments, the method for detecting packages further includes:

acquiring a panoramic image of an extended processing area of the conveyor belt, where the extended processing area is located downstream of the code reading field of view in the conveying direction of the conveyor belt;

continuously updating, according to the detection position and the conveying speed, the predicted position of the package in the extended processing area as it changes over time, and adding a tracking frame to the panoramic image according to the predicted position;

where, when the package is associated with a barcode, the tracking frame is presented in a first color, and when the package is not associated with a barcode, the tracking frame is presented in a second color.

In some embodiments, the identifying, when the package on the conveyor belt passes through the detection area, the contour of the designated area on the package and the detection position of the contour in the detection area includes:

when the package on the conveyor belt passes through the detection area, acquiring a depth image of the package, and determining, according to the depth image, the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system, where the upper surface is the designated area; or

when the package on the conveyor belt passes through the detection area, acquiring a grayscale image and a depth image of the package; determining the contour of the face sheet area of the package in the grayscale image; determining, according to the contour of the face sheet area of the package in the grayscale image, a first depth area corresponding to the face sheet area in the depth image; and determining, according to the first depth area, the detection position of the contour of the face sheet area in the first world coordinate system, where the face sheet area is the designated area.
In some embodiments, the method for detecting packages further includes:

acquiring a first world coordinate system established according to a first calibration disk, the first calibration disk being placed on the conveyor belt and within the field of view of the depth camera;

calibrating the external parameters of the depth camera according to the first world coordinate system and an image of the first calibration disk captured by the depth camera, to obtain a first mapping relationship between the depth camera coordinate system and the first world coordinate system;

acquiring a second world coordinate system established according to a second calibration disk, the second calibration disk being placed on the conveyor belt and within the code reading field of view of the code reading camera;

calibrating the external parameters of the code reading camera according to the second world coordinate system and an image of the second calibration disk captured by the code reading camera, to obtain a second mapping relationship between the code reading camera coordinate system and the second world coordinate system;

determining a third mapping relationship between the first world coordinate system and the second world coordinate system;

determining, according to the internal parameters of the code reading camera, a fourth mapping relationship between the code reading camera coordinate system and the image coordinate system of the code reading camera.
In some embodiments, the determining, according to the depth image, the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system includes:

determining a three-dimensional model of the package according to the depth image, and determining, according to the coordinates of the three-dimensional model in the depth camera coordinate system and the first mapping relationship, the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system, the detection position being represented by the coordinates of at least three vertices of the upper surface of the package in the first world coordinate system; or

acquiring a grayscale image corresponding to the depth image;

determining the contour of the upper surface of the package in the grayscale image;

determining, according to the contour of the upper surface of the package in the grayscale image, a second depth area corresponding to the upper surface in the depth image, to obtain at least three vertices of the second depth area;

determining the coordinates of the at least three vertices of the second depth area in the depth camera coordinate system;

determining, according to the coordinates of the at least three vertices of the second depth area in the depth camera coordinate system and the first mapping relationship, the detection position of the upper surface of the package in the first world coordinate system, the detection position being represented by the coordinates of the at least three vertices of the second depth area in the first world coordinate system.
According to one aspect of the present application, a device for detecting packages is provided, including:

a detection unit, configured to identify, when a package on a conveyor belt passes through a detection area, the contour of a designated area on the package and the detection position of the contour in the detection area, the designated area including the barcode of the package;

a prediction unit, configured to predict, according to the detection position of the contour in the detection area and the conveying speed of the conveyor belt, the code reading position of the contour in the imaging area of the image coordinate system of a code reading camera and a first image identifier corresponding to the code reading position, where the code reading camera is located downstream of the detection area and the first image identifier is used to identify the package image captured by the code reading camera when the contour reaches the code reading position;

a barcode recognition unit, configured to perform barcode recognition on the package image when the package image corresponding to the first image identifier is acquired;

a matching unit, configured to match, when the barcode recognition unit recognizes a barcode in the package image, the code reading position against the recognized barcode;

an association unit, configured to associate, when the matching unit determines that the code reading position matches a barcode, the matched barcode with the package.
According to one aspect of the present application, a computing device is provided, including: a memory; a processor; and a program stored in the memory and configured to be executed by the processor, the program including instructions for executing the method for detecting packages according to the present application.

According to one aspect of the present application, a storage medium is provided, storing a program, the program including instructions that, when executed by a computing device, cause the computing device to execute the method for detecting packages according to the present application.

According to one aspect of the present application, a logistics system is provided, including: a computing device; a conveyor belt; a depth camera; and a code reading camera.

In summary, according to the package detection solution of the present application, when the detection position of a package is identified, the code reading position of the designated area of the package in the imaging area and the first image identifier corresponding to the code reading position can be predicted. Here, the code reading position can be regarded as the projection position of the designated area of the package in the image coordinate system at the time when the package image corresponding to the first image identifier is acquired. On this basis, according to the package detection solution of the present application, the association relationship between the package and the barcode can be determined by matching the position of the barcode in the package image against the code reading position, and furthermore the barcode can also be associated with the attributes of the package.
Description of the drawings

In order to explain the embodiments of the present application and the technical solutions of the prior art more clearly, the drawings needed in the embodiments and the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and a person of ordinary skill in the art can obtain other drawings based on these drawings.

Figure 1 shows a schematic diagram of a logistics system according to some embodiments of the present application;

Figure 2 shows a flowchart of a method 200 for detecting packages according to some embodiments of the present application;

Figure 3 shows a flowchart of a method 300 for detecting packages according to some embodiments of the present application;

Figure 4 shows a schematic diagram of the coordinate systems in a logistics system according to some embodiments of the present application;

Figure 5 shows a flowchart of a method 500 for determining a detection position according to some embodiments of the present application;

Figure 6 shows a flowchart of a method 600 for determining a detection position according to some embodiments of the present application;

Figure 7 shows a flowchart of a method 700 for determining the detection position corresponding to the upper surface of a package according to some embodiments of the present application;

Figure 8 shows a flowchart of a method 800 for determining the detection position corresponding to the upper surface of a package according to some embodiments of the present application;

Figure 9 shows a flowchart of a method 900 for predicting a code reading position according to some embodiments of the present application;

Figure 10A shows a schematic diagram of the conveyor belt on which a package that has not yet entered the field of view of the depth camera 120 is placed;

Figure 10B shows the detection position, determined by the computing device 140, of the package B3 in the detection area;

Figure 10C shows a schematic diagram of a target position of the package B3 predicted by the computing device 140;

Figure 10D shows a schematic diagram of the projection of 4 vertices into the image coordinate system;

Figure 10E shows the projection area in the image coordinate system when the package is at the target position in Figure 10C;

Figure 11 shows a flowchart of a method 1100 for position matching between a barcode and a code reading position according to some embodiments of the present application;

Figure 12A shows a package image according to some embodiments of the present application;

Figure 12B shows a package image according to some embodiments of the present application;

Figure 13A shows a schematic diagram of a logistics system according to some embodiments of the present application;

Figure 13B shows a schematic diagram of a panoramic image according to some embodiments of the present application;

Figure 14 shows a schematic diagram of a logistics system according to some embodiments of the present application;

Figure 15 shows a schematic diagram of a device 1500 for detecting packages according to some embodiments of the present application;

Figure 16 shows a schematic diagram of a device 1600 for detecting packages according to some embodiments of the present application;

Figure 17 shows a schematic diagram of a computing device according to some embodiments of the present application.
Detailed description of the embodiments

In order to make the purpose, technical solutions and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. Obviously, the described embodiments are only a part of the embodiments of the present application, rather than all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in this application fall within the protection scope of this application.

Figure 1 shows a schematic diagram of a logistics system according to some embodiments of the present application. As shown in Figure 1, the logistics system may include a conveyor belt 110, a depth camera 120, a code reading camera 130 and a computing device 140.

The conveyor belt 110 conveys packages in the conveying direction of the belt (for example, from right to left in Figure 1).

The depth camera 120 shown in Figure 1 is, for example, a structured light camera. The structured light camera may include a laser emission module 121 and an image acquisition module 122. The field of view V1 of the image acquisition module 122 can cover the detection area S1 on the conveyor belt 110. In addition, the depth camera 120 may also be a time-of-flight (ToF) camera, a binocular stereo vision camera, or the like. The depth camera 120 can capture images of the packages passing through the field of view V1 and output a sequence of image frames to the computing device 140 in real time.

The computing device 140 may be, for example, a server, a notebook computer, a tablet computer, a handheld business terminal or a similar device. The computing device 140 can build a three-dimensional model of a package according to the sequence of image frames from the depth camera 120. In this way, the computing device 140 can detect the target attributes of the package, for example determine the size of the package or the volume of the package. In addition, at the moment when the target attributes are detected, the computing device 140 can determine the detection position of the package on the conveyor belt 110 in the detection area S1, that is, determine the actual position of the package at the current moment. Here, the detection position of the package in the detection area S1 can be represented, for example, by the coordinates of the 4 vertices of the upper surface of the package.

In the conveying direction of the conveyor belt 110, the code reading camera 130 is downstream of the depth camera 120. The code reading field of view V2 of the code reading camera 130 covers the barcode identification area S2 on the conveyor belt 110. The code reading camera 130 may be an industrial camera with an image capturing function, or a smart camera integrating image capturing and image processing functions. The code reading camera 130 can output image frames to the computing device 140. The computing device 140 can perform barcode recognition on the image frames from the code reading camera 130. Here, the computing device 140 can detect one-dimensional barcodes and/or two-dimensional codes.

In some embodiments, the computing device 140 can establish an association relationship between the barcode information and the target attributes. The way of establishing this association relationship is described below with reference to Figure 2.
Figure 2 shows a flowchart of a method 200 for detecting packages according to some embodiments of the present application. The method 200 may be executed by the computing device 140.

In step S201, when a package on the conveyor belt passes through the detection area, the contour of a designated area on the package is identified, and the detection position of the contour in the detection area is identified. The designated area includes the barcode of the package, and is, for example, the upper surface of the package or the face sheet area of the package. In step S201, for example, the coordinates of at least 3 vertices of the designated area can be determined, and the coordinates of the at least 3 vertices can be used to represent the detection position of the contour in the detection area.

In one implementation, when the package on the conveyor belt passes through the detection area, after the contour of the designated area is identified, the coordinates of multiple vertices of the designated area can be obtained to represent the detection position of the contour in the detection area.

In step S202, according to the detection position of the contour in the detection area and the conveying speed of the conveyor belt, the code reading position of the contour in the imaging area of the image coordinate system of the code reading camera and the first image identifier corresponding to the code reading position are predicted. The code reading camera 130 is located downstream of the detection area. The first image identifier is used to identify the package image captured by the code reading camera 130 when the contour reaches the code reading position. In some embodiments, the first image identifier is, for example, a frame number or a timestamp. The code reading position of the contour refers to the coordinate position of the contour in the image coordinate system of the code reading camera 130 when at least a part of the contour is within the imaging area.

The target attributes include at least one of the following: the package volume and the package size. For example, in step S201, the target attributes of the package can be determined according to the sequence of image frames collected by the depth camera 120. The depth camera 120 is, for example, a line structured light camera. When the line structured light camera finishes scanning a package, step S201 can generate a three-dimensional model of the package according to the scanned sequence of image frames. In this way, step S201 can determine the target attributes of the package according to the three-dimensional model.

In an optional embodiment, taking the designated area on the package being the upper surface of the package as an example, the detection position of the upper surface of the package in the detection area on the conveyor belt 110 can be determined. For example, the coordinates of the 4 vertices of the upper surface of the package can be determined, and the coordinates of the 4 vertices can be used to represent the detection position of the package in the detection area. Furthermore, taking the detection position as a starting point, at least one code reading position can be determined according to the conveyor speed. The code reading position can be represented by the coordinates of the 4 vertices of the upper surface of the package. The coordinates of the 4 vertices can define a rectangular area. The rectangular area corresponds to the upper surface of the package, and at least a part of the rectangular area lies within the code reading field of view.

In step S203, when the package image corresponding to the first image identifier is acquired, barcode recognition is performed on the package image.

When a barcode in the package image is recognized in step S203, the method 200 may execute step S204 to match the recognized barcode against the code reading position. Here, step S204 can perform the position matching between the barcode and the code reading position in the same coordinate system (for example, the image coordinate system of the code reading camera 130). In a scenario where multiple packages are placed on the conveyor belt side by side or staggered back and forth, step S203 may recognize multiple barcodes in the package image, for example barcode C1, barcode C2 and barcode C3. For the code reading position of package B1, step S204 can determine that barcode C1 matches the code reading position corresponding to package B1 when at least a part of the barcode area corresponding to barcode C1 belongs to the projection area of the designated area of package B1 in the package image. That is to say, step S204 can determine the projection area of the designated area of package B1 in the package image at the moment when the contour of the designated area on package B1 reaches the code reading position; furthermore, if it is determined that at least a part of the barcode area corresponding to barcode C1 belongs to this projection area, it can be determined that barcode C1 matches the code reading position corresponding to package B1. Otherwise, it is determined that barcode C1 does not match the code reading position corresponding to package B1. In the same way, it can be judged whether barcode C2 matches the code reading position corresponding to package B1, and whether barcode C3 matches the code reading position corresponding to package B1. On this basis, the barcode that matches the code reading position corresponding to package B1 can be determined.

In one embodiment, if the first image identifier is a frame number, the image with that frame number among the images captured by the code reading camera 130 can be acquired as the package image corresponding to the first image identifier. If the first image identifier is a timestamp, the image with that timestamp among the images captured by the code reading camera 130 can be acquired as the package image corresponding to the first image identifier.

In step S205, when the code reading position matches a barcode, the matched barcode is associated with the package. If one barcode is recognized in step S204, the barcode matched in step S205 is that barcode. If multiple barcodes are recognized in step S204, the barcode matched in step S205 may be one of the multiple barcodes recognized in step S204.

In summary, according to the method 200 for detecting packages of the present application, when the detection position of a package is identified, the code reading position of the designated area of the package in the imaging area and the first image identifier corresponding to the code reading position can be predicted. Here, the code reading position can be regarded as the projection position of the designated area of the package in the image coordinate system of the code reading camera at the moment when the package image corresponding to the first image identifier is acquired. On this basis, the method 200 for detecting packages according to the present application can determine the association relationship between the package and the barcode by matching the position of the barcode in the package image against the code reading position. In addition, after the association relationship between the package and the barcode is determined, the attributes of the package can also be obtained and associated with that barcode; the attributes of the package may be detected in advance.

In one embodiment, after the computing device 140 acquires the sequence of image frames output by the depth camera 120 and recognizes the packages, an identifier can be generated for each package. Then, after the detection position is determined and the code reading position and the first image identifier are predicted, the correspondence between the identifier of the package, the detection position, the code reading position and the first image identifier can be recorded. Later, when the package image corresponding to the first image identifier is acquired, a barcode is recognized and the code reading position matches the barcode, the identifier of the package corresponding to the code reading position can be determined, and the matched barcode can then be associated with the package corresponding to the determined identifier.
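For illustration only, the bookkeeping described in the preceding paragraph could be sketched as a small per-package record; the field names below are assumptions rather than terms used by the application:

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class PackageRecord:
        package_id: str
        detection_position: tuple          # vertices in the first world coordinate system
        reading_position: tuple            # predicted projection in the image coordinate system
        first_image_id: int                # frame number / timestamp of the expected package image
        barcode: Optional[str] = None      # filled in once the reading position matches a barcode
        target_attributes: dict = field(default_factory=dict)   # size, volume, ...

    records = {}  # keyed by first_image_id so the matching step can look the package up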
Figure 3 shows a flowchart of a method 300 for detecting packages according to some embodiments of the present application. The method 300 may be executed by the computing device 140.

In step S301, a first world coordinate system established according to a first calibration disk is acquired. The first calibration disk is placed on the conveyor belt 110 and is within the field of view of the depth camera 120. The first calibration disk is, for example, a checkerboard calibration board. As shown in Figure 4, the first world coordinate system may be (X1, Y1, Z1).

In step S302, the external parameters of the depth camera are calibrated according to the first world coordinate system and the image of the first calibration disk captured by the depth camera, to obtain a first mapping relationship between the depth camera coordinate system and the first world coordinate system.

In step S303, a second world coordinate system established according to a second calibration disk is acquired. The second calibration disk is placed on the conveyor belt 110 and is within the code reading field of view of the code reading camera 130.

In step S304, the external parameters of the code reading camera are calibrated according to the second world coordinate system and the image of the second calibration disk captured by the code reading camera, to obtain a second mapping relationship between the code reading camera coordinate system and the second world coordinate system. As shown in Figure 4, the second world coordinate system may be (X2, Y2, Z2).

In step S305, a third mapping relationship between the first world coordinate system and the second world coordinate system is determined.

In step S306, a fourth mapping relationship between the code reading camera coordinate system and the image coordinate system of the code reading camera is determined according to the internal parameters of the code reading camera.

For example, Figure 4 shows a schematic diagram of the coordinate systems in the logistics system. Figure 4 shows the first world coordinate system R1 (X1, Y1, Z1), the second world coordinate system R2 (X2, Y2, Z2), the depth camera coordinate system R3 (X3, Y3, Z3), the code reading camera coordinate system R4 (X4, Y4, Z4) and the image coordinate system R5 (X5, Y5) of the code reading camera 130. The image coordinate system R5 corresponds to the imaging plane of the code reading camera 130. There is an offset d between the first world coordinate system R1 and the second world coordinate system R2 in the conveying direction of the conveyor belt 110.
For example, the external parameters of the code-reading camera 130 can be represented by T_CB, which expresses the second mapping relationship. They can be written in the following matrix form:

    T_CB = [ R  T ]
           [ 0  1 ]

where R is, for example, a 3×3 rotation matrix representing the rotation transformation between the second world coordinate system and the code-reading camera coordinate system, T is, for example, a 3×1 translation matrix representing the translation transformation between the second world coordinate system and the code-reading camera coordinate system, and I is an orthogonal (identity) matrix, which appears in the projection expression [I|0] used later.
深度相机120的外参的标定方法,与读码相机130的外参的标定方法类似,相应的,第一映射关系可以参考第二映射关系的相关介绍。The calibration method of the external parameters of the depth camera 120 is similar to the calibration method of the external parameters of the code reading camera 130. Correspondingly, the first mapping relationship can refer to the related introduction of the second mapping relationship.
第三映射关系对应的变换矩阵可以为[0,d,0,0] T,d表示第二世界坐标系相对于第一世界坐标系在传送带的传送方向上的偏移量。 The transformation matrix corresponding to the third mapping relationship may be [0,d,0,0] T , where d represents the offset of the second world coordinate system relative to the first world coordinate system in the conveying direction of the conveyor belt.
The internal parameters of the code-reading camera 130 can be represented by K_C, which expresses the fourth mapping relationship. Specifically, they can be written in the following matrix form:

    K_C = [ f_x   0    c_x ]
          [  0   f_y   c_y ]
          [  0    0     1  ]

where f_x and f_y are the focal length parameters of the code-reading camera 130, and c_x and c_y are the offsets of the camera coordinate system relative to the image coordinate system.
上述步骤301-步骤S306是各坐标系的标定及映射关系的确定过程。在对包裹进行检测之前可以基于步骤301-步骤S306,标定各坐标系,并确定映射关系,后续,当需要检测包裹时,可以执行步骤S307-步骤S315。也就是说,上述步骤301-步骤S306只需要执行一次即可,并不需要在每次对包裹进行检测时都执行上述步骤301-步骤S306。The above steps 301 to S306 are the calibration of each coordinate system and the determination of the mapping relationship. Before the package is inspected, each coordinate system can be calibrated based on step 301-step S306, and the mapping relationship can be determined. Subsequently, when the package needs to be inspected, step S307-step S315 can be executed. In other words, the above steps 301 to S306 only need to be performed once, and it is not necessary to perform the above steps 301 to S306 every time the package is inspected.
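As a rough illustration of this one-time calibration step (not from the original disclosure), the extrinsic parameters of a camera can be estimated from a single image of a checkerboard calibration plate, for example with OpenCV, assuming the camera intrinsics are already known; the board size and square length below are placeholders:

```python
import cv2
import numpy as np

def calibrate_extrinsics(image, camera_matrix, dist_coeffs,
                         board_size=(9, 6), square_size_mm=30.0):
    """Estimate the camera pose relative to a checkerboard placed on the belt.

    Returns a 4x4 homogeneous matrix mapping board (world) coordinates to
    camera coordinates, i.e. the role played by the first/second mapping
    relationships (T_CB) in the text.
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if not found:
        raise RuntimeError("calibration board not found in image")

    # 3D coordinates of the inner corners in the world frame defined by the board
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size_mm

    _, rvec, tvec = cv2.solvePnP(objp, corners, camera_matrix, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)

    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = tvec.ravel()
    return T
```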
In step S307, when a package on the conveyor belt passes through the detection area, the contour of a designated area on the package is recognized, and the detection position of the contour in the detection area is identified. The designated area contains the barcode of the package and is, for example, the upper surface of the package or the face sheet (waybill) area of the package.
在一些实施例中,指定区域为包裹的上表面,步骤S307可以实施为方法500。In some embodiments, the designated area is the upper surface of the package, and step S307 may be implemented as method 500.
如图5所示,在步骤S501中,在传送带上包裹经过检测区域时,获取包裹的深度图像。例如,深度相机120可以对传送带110上传输的包裹进行拍摄,进而能够获取到经过检测区域的包裹的深度图像。As shown in FIG. 5, in step S501, when the package passes through the inspection area on the conveyor belt, a depth image of the package is acquired. For example, the depth camera 120 can take pictures of the packages transferred on the conveyor belt 110, and then can obtain the depth images of the packages passing through the inspection area.
在步骤S502中,根据深度图像,确定包裹的上表面的轮廓和上表面在第一世界坐标系中的检测位置。其中,包裹的上表面可以作为指定区域。In step S502, according to the depth image, the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system are determined. Among them, the upper surface of the package can be used as a designated area.
综上,方法500能够根据深度相机120拍摄的深度图像,确定包裹的上表面的轮廓,从而将上表面作为指定区域,并且确定轮廓在检测区域中的检测位置。In summary, the method 500 can determine the contour of the upper surface of the package according to the depth image taken by the depth camera 120, so that the upper surface is taken as the designated area, and the detection position of the contour in the detection area is determined.
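A minimal sketch of the idea behind method 500, assuming the depth image has already been converted into heights above the belt; the thresholds and helper names here are assumptions, not part of the original disclosure:

```python
import numpy as np
import cv2

def upper_surface_contour(height_map, min_height_mm=20.0, tolerance_mm=10.0):
    """Find the contour of the package's upper surface in a height map.

    height_map: 2D array of heights above the conveyor belt, in millimetres.
    Returns the pixel contour of the region whose height is close to the
    package's top height, or None if no package is present.
    """
    package_mask = (height_map > min_height_mm).astype(np.uint8)
    if package_mask.sum() == 0:
        return None

    # treat the 95th percentile of package heights as the top-face height
    top_height = np.percentile(height_map[package_mask > 0], 95)
    top_mask = (np.abs(height_map - top_height) < tolerance_mm).astype(np.uint8) * 255

    contours, _ = cv2.findContours(top_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)
```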
In some embodiments, the designated area is the face sheet (waybill) area on the package. Step S307 may be implemented as method 600.
如图6所示,在步骤S601中,在传送带上包裹经过检测区域时,获取包裹的灰度图像和深度图像。As shown in FIG. 6, in step S601, when the package passes through the inspection area on the conveyor belt, a grayscale image and a depth image of the package are acquired.
In step S602, the contour of the face sheet area of the package in the grayscale image is determined.
In step S603, according to the contour of the face sheet area in the grayscale image, the first depth region corresponding to the face sheet area in the depth image is determined.
In step S604, the detection position of the contour of the face sheet area in the first world coordinate system is determined according to the first depth region.
In summary, method 600 can use the grayscale image to determine the face sheet area of the package, and then determine the contour and the detection position of the face sheet area based on the grayscale image and the depth image.
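As an illustrative sketch of method 600 only; the brightness-based detection heuristic below is an assumption, and a real system might use a trained detector for the face sheet instead:

```python
import cv2
import numpy as np

def face_sheet_contour(gray_image, min_area_px=5000):
    """Locate the bright, roughly rectangular face sheet (waybill) on the package.

    Returns the 4 corner points of the detected label region in image
    coordinates, or None if no plausible label is found.
    """
    # the paper label is usually much brighter than the parcel surface
    _, bright = cv2.threshold(gray_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    candidates = [c for c in contours if cv2.contourArea(c) > min_area_px]
    if not candidates:
        return None

    label = max(candidates, key=cv2.contourArea)
    rect = cv2.minAreaRect(label)      # rotated rectangle fitted to the label
    return cv2.boxPoints(rect)         # 4 corners, to be looked up in the depth image
```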
在一些实施例中,步骤S502可以实施为方法700。In some embodiments, step S502 may be implemented as method 700.
如图7所示,在步骤S701中,根据深度图像,确定包裹的三维模型。As shown in Fig. 7, in step S701, a three-dimensional model of the package is determined according to the depth image.
在步骤S702中,根据三维模型在深度相机坐标系中的坐标和第一映射关系,确定包裹的上表面的轮廓和上表面在第一世界坐标系中的检测位置。这里,检测位置由包裹上表面的至少三个顶点在第一世界坐标系中的坐标表示。In step S702, according to the coordinates of the three-dimensional model in the depth camera coordinate system and the first mapping relationship, the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system are determined. Here, the detection position is represented by the coordinates of at least three vertices of the upper surface of the package in the first world coordinate system.
在一个可选实施例中,在确定包裹的三维模型在深度相机坐标系中的坐标后,可以根据三维模型在深度相机坐标系中的坐标,得到包裹的上表面在深度相机坐标系中的坐标。基于包裹的上表面在深度相机坐标系中的坐标,可以确定包裹的上表面的轮廓。另外,在得到包裹的上表面在深度相机坐标系中的坐标后,可以选取包裹的上表面的至少三个顶点在深度相机坐标系中的坐标,并结合第一映射关系,得到包裹的上表面的至少三个顶点在第一世界坐标系中的坐标。In an optional embodiment, after determining the coordinates of the 3D model of the package in the depth camera coordinate system, the coordinates of the upper surface of the package in the depth camera coordinate system can be obtained according to the coordinates of the 3D model in the depth camera coordinate system. . Based on the coordinates of the upper surface of the package in the depth camera coordinate system, the contour of the upper surface of the package can be determined. In addition, after obtaining the coordinates of the upper surface of the package in the depth camera coordinate system, the coordinates of at least three vertices of the upper surface of the package in the depth camera coordinate system can be selected, and combined with the first mapping relationship, the upper surface of the package can be obtained. The coordinates of at least three vertices in the first world coordinate system.
综上,方法700可以根据深度图像确定包裹的三维模型,进而利用三维模型确定指定区域(即上表面)的轮廓和轮廓在第一世界坐标系中的检测位置。In summary, the method 700 can determine the three-dimensional model of the package according to the depth image, and then use the three-dimensional model to determine the contour of the designated area (ie, the upper surface) and the detection position of the contour in the first world coordinate system.
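A simplified sketch of the coordinate transformation used in method 700; representing the first mapping relationship as a 4×4 extrinsic matrix that maps world coordinates to depth-camera coordinates is an assumption, as is the hypothetical helper in the usage comment:

```python
import numpy as np

def to_first_world(points_camera, T_world_to_camera):
    """Map 3D points from the depth camera coordinate system into the first
    world coordinate system, given the 4x4 extrinsic matrix obtained during
    calibration (assumed to map world coordinates to camera coordinates)."""
    pts = np.asarray(points_camera, dtype=float)
    T_camera_to_world = np.linalg.inv(T_world_to_camera)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
    world = (T_camera_to_world @ pts_h.T).T
    return world[:, :3]

# Example: the detection position can be represented by the coordinates of at
# least three vertices of the upper surface in the first world coordinate system.
# top_vertices_cam = select_top_surface_vertices(model_points_cam)  # hypothetical helper
# detection_position = to_first_world(top_vertices_cam, T1)
```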
在一些实施例中,步骤S502可以实施为方法800。In some embodiments, step S502 may be implemented as method 800.
如图8所示,在步骤S801中,获取深度图像对应的灰度图像。As shown in FIG. 8, in step S801, a gray image corresponding to the depth image is acquired.
在步骤S802中,在灰度图像中确定包裹的上表面的轮廓。包裹的上表面轮廓例如为矩形区域。In step S802, the contour of the upper surface of the package is determined in the grayscale image. The contour of the upper surface of the package is, for example, a rectangular area.
在步骤S803中,根据灰度图像中的包裹的上表面的轮廓,确定深度图像中与上表面对应的第二深度区域,得到第二深度区域的至少三个顶点。In step S803, according to the contour of the upper surface of the package in the grayscale image, a second depth region corresponding to the upper surface in the depth image is determined, and at least three vertices of the second depth region are obtained.
在步骤S804中,确定第二深度区域的至少三个顶点在深度相机坐标系中的坐标。In step S804, the coordinates of at least three vertices of the second depth region in the depth camera coordinate system are determined.
在步骤S805中,根据第二深度区域的至少三个顶点在深度相机坐标系中的坐标和第一映射关系,确定包裹的上表面在第一世界坐标系中的检测位置。 这里,检测位置由第二深度区域的至少三个顶点在第一世界坐标系中的坐标表示。In step S805, the detection position of the upper surface of the package in the first world coordinate system is determined according to the coordinates of the at least three vertices of the second depth region in the depth camera coordinate system and the first mapping relationship. Here, the detection position is represented by the coordinates of at least three vertices of the second depth region in the first world coordinate system.
综上,方法800可以根据灰度图像和深度图像,确定包裹的上表面的顶点坐标,进而确定包裹的上表面在第一世界坐标系中的检测位置。In summary, the method 800 can determine the vertex coordinates of the upper surface of the package according to the grayscale image and the depth image, and then determine the detection position of the upper surface of the package in the first world coordinate system.
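A rough sketch of method 800, in which the top face is found in the grayscale image and its corner coordinates are read back from the depth image; the pixel registration of the two images and the back-projection helper are assumptions:

```python
import cv2
import numpy as np

def top_surface_vertices(gray_image, depth_image, depth_to_camera, T_world_to_depth_cam):
    """Return the upper-surface corner vertices in the first world coordinate system.

    gray_image and depth_image are assumed to be pixel-registered.
    depth_to_camera(u, v, z) back-projects a pixel with depth z to a 3D point
    in the depth camera coordinate system (a hypothetical helper).
    """
    _, mask = cv2.threshold(gray_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    top = max(contours, key=cv2.contourArea)
    corners = cv2.boxPoints(cv2.minAreaRect(top))       # 4 vertices of the top face

    cam_pts = np.array([depth_to_camera(u, v, depth_image[int(v), int(u)])
                        for u, v in corners])
    pts_h = np.hstack([cam_pts, np.ones((len(cam_pts), 1))])
    world = (np.linalg.inv(T_world_to_depth_cam) @ pts_h.T).T
    return world[:, :3]
```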
When a package on the conveyor belt 110 passes through the detection area, method 300 may perform step S308 in addition to step S307. In step S308, target attributes of the package on the conveyor belt are detected. The target attributes may include, for example, at least one of the volume of the package, the size of the package, the weight of the package, and the face sheet of the package.
在一些实施例中,深度相机120可以包括线结构光相机。深度相机120在线结构光相机完成对一个包裹的扫描时,步骤S308可以根据扫描到的图像帧序列,获取包裹的深度图像,并根据深度图像确定包裹的尺寸或者体积。例如,基于深度图像,步骤S308可以确定包裹的三维模型,并根据三维模型确定包裹的尺寸或者体积。In some embodiments, the depth camera 120 may include a line structured light camera. When the depth camera 120 completes the scanning of a package by the online structured light camera, step S308 may obtain a depth image of the package according to the scanned image frame sequence, and determine the size or volume of the package according to the depth image. For example, based on the depth image, step S308 may determine the three-dimensional model of the package, and determine the size or volume of the package according to the three-dimensional model.
In some embodiments, a weighing instrument may also be deployed in the detection area. The weighing instrument can measure the weight of the package. In some embodiments, step S308 may use the grayscale image captured by the depth camera 120 to determine the face sheet of the package.
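For instance, once a 3D model (point cloud) of the package is available, its size and volume can be approximated with a bounding box. This sketch assumes a roughly box-shaped parcel with points expressed in the first world coordinate system, Z measured up from the belt:

```python
import numpy as np

def size_and_volume(model_points_world):
    """Approximate package dimensions (length, width, height) and volume
    from its 3D points in the first world coordinate system."""
    pts = np.asarray(model_points_world, dtype=float)
    mins = pts.min(axis=0)
    maxs = pts.max(axis=0)
    length, width = np.sort(maxs[:2] - mins[:2])[::-1]   # footprint on the belt
    height = maxs[2]                                      # belt plane assumed at Z = 0
    return (length, width, height), length * width * height
```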
在步骤S309中,根据轮廓在检测区域中的检测位置和传送带的传送速度,预测轮廓在读码相机的图像坐标系的成像区域中的读码位置和读码位置对应的第一图像标识。读码相机处于检测区域的下游。第一图像标识用于标识当轮廓到达读码位置时读码相机拍摄的包裹图像。第一图像标识例如为帧号或者时间戳。在一些实施例中,第一图像标识例如为帧号或者时间戳。轮廓的读码位置是指:在轮廓的至少一部分处于成像区域时轮廓在图像坐标系中的坐标位置。在一些实施例中,步骤S309可以将检测位置作为起点,根据传送带速度,确定至少一个偏移位置。这里,当包裹的指定区域的至少一部分进入读码相机的视野范围时,包裹上指定区域在图像坐标系中的至少一部分投影区域处于成像区域内,即指定区域的轮廓的至少一部分处于成像区域。该投影区域的位置为偏移位置。偏移位置例如由指定区域的至少3个顶点在图像坐标系中的投影点的坐标表示。In step S309, according to the detection position of the contour in the detection area and the conveying speed of the conveyor belt, the code reading position of the contour in the imaging area of the image coordinate system of the code reading camera and the first image identification corresponding to the code reading position are predicted. The barcode reading camera is located downstream of the detection area. The first image identifier is used to identify the package image taken by the code reading camera when the outline reaches the code reading position. The first image identifier is, for example, a frame number or a time stamp. In some embodiments, the first image identifier is, for example, a frame number or a time stamp. The reading position of the contour refers to the coordinate position of the contour in the image coordinate system when at least a part of the contour is in the imaging area. In some embodiments, step S309 may use the detected position as a starting point, and determine at least one offset position according to the conveyor belt speed. Here, when at least part of the designated area of the package enters the field of view of the code reading camera, at least a part of the projection area of the designated area on the package in the image coordinate system is in the imaging area, that is, at least a part of the outline of the designated area is in the imaging area. The position of the projection area is the offset position. The offset position is represented by, for example, the coordinates of the projection points of at least three vertices of the designated area in the image coordinate system.
在一些实施例中,步骤S309可以实施为方法900。In some embodiments, step S309 may be implemented as method 900.
如图9所示,在步骤S901中,当识别出检测位置时,确定读码相机当前 时刻采集的图像帧的第二图像标识。第二图像标识为帧号或者时间戳。第二图像标识例如由读码相机130生成。又例如,第二图像标识为计算设备140在确定检测位置的时刻为当前从读码相机130接收到的图像帧添加的帧号或者时间戳。As shown in Fig. 9, in step S901, when the detection position is identified, the second image identifier of the image frame currently collected by the code reading camera is determined. The second image is identified as a frame number or a time stamp. The second image identification is generated by the code reading camera 130, for example. For another example, the second image identifier is a frame number or a time stamp added to the image frame currently received from the code reading camera 130 when the computing device 140 determines the detection position.
在步骤S902中,获取包裹在读码相机的单个采集周期内的移动距离。例如,读码相机130的单个采集周期为T 1。传送速度为v。移动距离s=v*T 1In step S902, the movement distance of the package in a single collection period of the code reading camera is acquired. For example, the single acquisition period of the code reading camera 130 is T 1 . The transmission speed is v. The moving distance s=v*T 1 .
在步骤S903中,根据读码相机的参数、轮廓在检测区域中的检测位置和传送带的传送速度,确定轮廓在第一世界坐标系中的偏移位置。偏移位置满足:当轮廓处于偏移位置时,轮廓在读码相机的图像坐标系中的投影位置处于成像区域中。例如,处于偏移位置的轮廓在图像坐标系中的至少一部分投影区域处于成像区域时,步骤S903确定投影位置处于成像区域中。在一些实施例中,读码相机130的分辨率为x*y。图像坐标系中成像区域的4个顶点的坐标为(0,0),(x,0),(0,y),(x,y)。在轮廓的至少一个顶点在图像坐标系中投影点处于成像区域时,步骤S903可以确定投影位置处于成像区域中。In step S903, the offset position of the contour in the first world coordinate system is determined according to the parameters of the code reading camera, the detection position of the contour in the detection area, and the conveying speed of the conveyor belt. The offset position satisfies: when the contour is at the offset position, the projection position of the contour in the image coordinate system of the code reading camera is in the imaging area. For example, when at least a part of the projection area of the contour at the offset position is in the imaging area in the image coordinate system, step S903 determines that the projection position is in the imaging area. In some embodiments, the resolution of the code reading camera 130 is x*y. The coordinates of the four vertices of the imaging area in the image coordinate system are (0, 0), (x, 0), (0, y), (x, y). When the projection point of at least one vertex of the contour is in the imaging area in the image coordinate system, step S903 may determine that the projection position is in the imaging area.
在一些实施例中,步骤S903以检测位置为起点,并以移动距离作为偏移单位,确定包裹处于读码视野时的偏移位置。偏移位置与检测位置之差距等于整数个偏移单位。例如,偏移位置与检测位置之间的距离等于N个偏移单位之和。N为正整数。In some embodiments, step S903 uses the detection position as the starting point and the movement distance as the offset unit to determine the offset position of the package when it is in the code reading field of view. The difference between the offset position and the detection position is equal to an integer number of offset units. For example, the distance between the offset position and the detection position is equal to the sum of N offset units. N is a positive integer.
In some embodiments, step S903 takes the detection position as the starting point and the movement distance as the offset unit, and takes a target position that satisfies a target condition as the offset position. The target condition is: the difference between the target position and the detection position equals an integer number of offset units, and the projection region of the target position in the image coordinate system of the code-reading camera overlaps with the imaging of the code-reading field of view in the image coordinate system (i.e., the image captured by the code-reading camera). It should be noted that, from the moment the package enters the field of view of the code-reading camera 130 until it leaves that field of view, every image frame captured by the code-reading camera 130 contains at least a part of the upper surface of the package; step S903 may select the package positions corresponding to some of these image frames, or to all of them, as offset positions. Therefore, step S904 may predict one or more code-reading positions.
In some embodiments, step S903 determines, based on the coordinates of the offset position in the first world coordinate system (that is, the coordinates of the four vertices of the upper surface when the package is at the offset position), and according to the second mapping relationship, the third mapping relationship and the fourth mapping relationship, the image coordinates of the projection of the offset position in the image coordinate system of the code-reading camera 130.
以包裹的一个顶点为例,一个顶点在图像坐标系中投影点的图像坐标可以根据下述公式计算:Taking a vertex of the package as an example, the image coordinates of the projection point of a vertex in the image coordinate system can be calculated according to the following formula:
[u_l, v_l, w_l]^T = K_C [I | 0] T_CB (P_L - [0, d, 0, 0]^T + N * [0, Δ_d, 0, 0]^T)
Here, [u_l, v_l, w_l]^T are the image coordinates of the projection point of a vertex L. P_L denotes the coordinates of the vertex L in the first world coordinate system when the package is at the detection position. [0, d, 0, 0]^T is the transformation matrix corresponding to the third mapping relationship, d being the offset of the second world coordinate system relative to the first world coordinate system in the conveying direction of the conveyor belt. P_L - [0, d, 0, 0]^T therefore gives the coordinates of the vertex L in the second world coordinate system when the package is at the detection position, and (P_L - [0, d, 0, 0]^T + N * [0, Δ_d, 0, 0]^T) gives the coordinates of the vertex L in the second world coordinate system when the package is at the offset position. Δ_d denotes the distance moved within a single acquisition period of the code-reading camera 130. N is the number of offset units contained in the difference between the offset position and the detection position, i.e., the ratio of that difference to the offset unit.
T_CB is the extrinsic parameter matrix of the code-reading camera 130 and represents the second mapping relationship. It can be written, for example, in the following matrix form:

    T_CB = [ R  T ]
           [ 0  1 ]

where R is, for example, a 3×3 rotation matrix representing the rotation transformation between the second world coordinate system and the code-reading camera coordinate system, T is, for example, a 3×1 translation matrix representing the translation transformation between the second world coordinate system and the code-reading camera coordinate system, and I is an orthogonal (identity) matrix.
K_C is the intrinsic parameter matrix of the code-reading camera 130, representing its internal parameters, and can express the fourth mapping relationship:

    K_C = [ f_x   0    c_x ]
          [  0   f_y   c_y ]
          [  0    0     1  ]

where f_x and f_y are the focal length parameters of the code-reading camera 130, and c_x and c_y are the offsets of the camera coordinate system relative to the image coordinate system.
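A small numerical sketch of the projection above (not from the original disclosure; the coordinate conventions follow the formula, and any concrete parameter values would be placeholders):

```python
import numpy as np

def project_vertex(P_L, K_C, T_CB, d, delta_d, N):
    """Project a package vertex, given in homogeneous first-world coordinates at
    the detection position, into the code-reading camera image after N acquisition
    periods, following
    [u, v, w]^T = K_C [I|0] T_CB (P_L - [0,d,0,0]^T + N*[0,delta_d,0,0]^T).
    """
    P_L = np.asarray(P_L, dtype=float)                  # homogeneous: [X, Y, Z, 1]
    shift = np.array([0.0, d, 0.0, 0.0])                # third mapping relationship
    advance = N * np.array([0.0, delta_d, 0.0, 0.0])    # belt motion over N acquisition periods
    P_cam = T_CB @ (P_L - shift + advance)              # 4x4 extrinsics -> camera frame
    I0 = np.hstack([np.eye(3), np.zeros((3, 1))])       # the [I | 0] projection
    u, v, w = K_C @ I0 @ P_cam
    return u / w, v / w                                  # pixel coordinates of the projection point
```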
在步骤S905中,计算偏移位置与检测位置之间的差距,并根据该差距包含的移动距离的个数,确定读码相机在包裹从检测位置到达读码位置之前拍摄的图像帧数量。移动距离的个数与图像帧数量一致。In step S905, the difference between the offset position and the detection position is calculated, and the number of image frames taken by the code reading camera before the package reaches the code reading position from the detection position is determined according to the number of movement distances contained in the difference. The number of moving distances is consistent with the number of image frames.
In step S906, the first image identifier corresponding to the code-reading position is determined according to the second image identifier and the number of image frames. For example, if the second image identifier is the frame number I_2 and the number of offset units contained in the difference between the offset position and the detection position is k_1, then the frame number of the first image identifier is I_1 = I_2 + k_1. As another example, if the second image identifier is the timestamp t_2 and the number of offset units contained in that difference is k_1, then the timestamp of the first image identifier is t_1 = t_2 + k_1 * T_1, where T_1 is the time difference between adjacent frames of the code-reading camera (i.e., its acquisition period).
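A tiny worked example of this bookkeeping; the numbers are made up for illustration:

```python
# Frame-number form: detection happened while the code-reading camera was capturing
# frame I_2 = 1040, and the package needs k_1 = 25 more acquisition periods to reach
# the predicted code-reading position.
I_2, k_1 = 1040, 25
I_1 = I_2 + k_1             # -> 1065: the frame expected to show the package there

# Timestamp form, with an acquisition period T_1 of 20 ms (50 fps).
t_2, T_1 = 12.340, 0.020    # seconds
t_1 = t_2 + k_1 * T_1       # -> 12.840 s
```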
In summary, method 900 can determine the projection area of the offset position in the package image according to the second mapping relationship, the third mapping relationship and the fourth mapping relationship, and can thereby accurately predict at least one code-reading position at which the designated area of the package is within the code-reading field of view, as well as the identifier of the image frame corresponding to each code-reading position (i.e., the first image identifier).
为了更形象说明确定读码位置的过程,下面结合图10A-图10E进行举例说明。In order to illustrate the process of determining the reading position more vividly, an example will be described below with reference to FIGS. 10A-10E.
图10A的传送带放置有未进入深度相机120视野范围的包裹B1、B2及B3。其中,包裹B1与B3并排放置。B2处于包裹B1与B3之后。In the conveyor belt of FIG. 10A, packages B1, B2 and B3 that do not enter the field of view of the depth camera 120 are placed. Among them, the packages B1 and B3 are placed side by side. B2 is behind packages B1 and B3.
图10B示出了计算设备140确定的包裹B3在检测区域中的检测位置。这里,包裹B3的检测位置用包裹B3上表面4个顶点e1-e4的坐标表示。在深度相机120为线结构光相机的场景中,包裹B3的检测位置为包裹B3刚离开视野V1的位置。计算设备140可以根据第一映射关系,确定顶点e1-e4在第一世界坐标系中的坐标。FIG. 10B shows the detection position of the package B3 in the detection area determined by the computing device 140. Here, the detection position of the package B3 is represented by the coordinates of the four vertices e1-e4 on the upper surface of the package B3. In the scene where the depth camera 120 is a line structured light camera, the detection position of the package B3 is the position where the package B3 has just left the field of view V1. The computing device 140 may determine the coordinates of the vertices e1-e4 in the first world coordinate system according to the first mapping relationship.
图10C示出了计算设备140预测的包裹B3的一个目标位置的示意图。图10C仅示出了包裹B3上表面4个顶点e1-e4的位置,并利用4个顶点的位置表示包裹B3的目标位置。图10D示出了4个顶点e1-e4投影到图像坐标系(成像平面)的示意图。图10E示出了当包裹处于图10C中目标位置时在图像坐标系中的投影区域。投影区域B3’表示当包裹处于图10C的目标位置时包裹上表面在图像坐标系中的投影区域。V2’表示成像区域,即读码相机130生成的图像在图像坐标系中的范围。由图10E可知,包裹B3处于目标位置时,包裹B3在图像坐标系中的投影区域(即指定区域的在图像坐标系中的投影区域)处于成像区域中。并且,计算设备140可以确定图10C的目标位置与包裹B3的检测位置之差距包括整数个偏移单位。因此,计算设备140可以将图10C所示的目标位置作为一个偏移位置。FIG. 10C shows a schematic diagram of a target location of the package B3 predicted by the computing device 140. FIG. 10C only shows the positions of the four vertices e1-e4 on the upper surface of the package B3, and uses the positions of the four vertices to indicate the target position of the package B3. FIG. 10D shows a schematic diagram of the projection of 4 vertices e1-e4 to the image coordinate system (imaging plane). Fig. 10E shows the projection area in the image coordinate system when the package is at the target position in Fig. 10C. The projection area B3' represents the projection area of the upper surface of the package in the image coordinate system when the package is at the target position of Fig. 10C. V2' represents the imaging area, that is, the range of the image generated by the code reading camera 130 in the image coordinate system. It can be seen from FIG. 10E that when the package B3 is at the target position, the projection area of the package B3 in the image coordinate system (that is, the projection area of the designated area in the image coordinate system) is in the imaging area. In addition, the computing device 140 may determine that the difference between the target position of FIG. 10C and the detection position of the package B3 includes an integer number of offset units. Therefore, the computing device 140 may use the target position shown in FIG. 10C as an offset position.
在一些实施例中,方法300还可以包括步骤S310。In some embodiments, the method 300 may further include step S310.
在步骤S310中,当获取到第一图像标识对应的包裹图像时,对包裹图像进行条码识别。In step S310, when the package image corresponding to the first image identifier is acquired, barcode recognition is performed on the package image.
当步骤S310识别出包裹图像中的条码时,方法300可以执行步骤S311,将识别出的条码与读码位置进行位置匹配。When the barcode in the package image is identified in step S310, the method 300 may execute step S311 to position the identified barcode with the barcode reading position.
在一些实施例中,步骤S311可以实施为方法1100。In some embodiments, step S311 may be implemented as method 1100.
如图11所示,在步骤S1101中,确定条码在包裹图像中的条码区域。例如,步骤S1101可以确定包裹图像中各条码区域的图像坐标。例如,图12A示出了根据本申请一些实施例的包裹图像的示意图。图12A的包裹图像P1中包括条码区域C1和C2。As shown in FIG. 11, in step S1101, the barcode area of the barcode in the package image is determined. For example, step S1101 may determine the image coordinates of each barcode area in the package image. For example, FIG. 12A shows a schematic diagram of a package image according to some embodiments of the present application. The package image P1 in FIG. 12A includes barcode areas C1 and C2.
在步骤S1102中,确定条码区域是否属于读码位置对应的区域。In step S1102, it is determined whether the barcode area belongs to the area corresponding to the barcode reading position.
In some embodiments, based on the coordinates in the first world coordinate system of the offset position of the contour of the designated area on the package, and according to the second mapping relationship, the third mapping relationship and the fourth mapping relationship, step S1102 can determine the projection region of the designated area in the image coordinate system when the package is at the offset position (i.e., the region corresponding to the code-reading position). Specifically, based on the third mapping relationship (the mapping between the first world coordinate system and the second world coordinate system), step S1102 can determine the coordinates of the offset position in the second world coordinate system. Based on the second mapping relationship (the mapping between the second world coordinate system and the code-reading camera coordinate system), step S1102 can convert those coordinates into the code-reading camera coordinate system. Finally, based on the fourth mapping relationship (the mapping between the code-reading camera coordinate system and the image coordinate system of the code-reading camera) and the coordinates of the offset position in the code-reading camera coordinate system, step S1102 determines the coordinates of the projection region of the designated area of the package at the offset position in the image coordinate system (i.e., the code-reading position).
例如,针对包裹B3,步骤S1102可以确定包裹B3在包裹图像中投影区域(即读码位置对应的区域)。图12B示出了包裹B3的投影区域B3”。步骤S1102可以确定条码C1的条码区域处于投影区域B3”之外,而条码C2的条码区域属于投影区域B3”中。For example, for package B3, step S1102 can determine the projection area of package B3 in the package image (ie, the area corresponding to the barcode reading position). Figure 12B shows the projection area B3" of the package B3. In step S1102, it can be determined that the barcode area of the barcode C1 is outside the projection area B3", and the barcode area of the barcode C2 belongs to the projection area B3".
When step S1102 determines that at least a part of the barcode area belongs to the area corresponding to the code-reading position, that is, when the barcode area is determined to belong to the area corresponding to the code-reading position, method 1100 may execute step S1103 to determine that the barcode matches the code-reading position. Taking Figure 12B as an example, it can be determined that the barcode C2 matches the code-reading position of the package B3.
在步骤S1102确定条码区域处于读码位置对应的区域之外时,即,确定条码区域不属于读码位置对应的区域时,方法1100可以执行步骤S1104,确定条码与读码位置之间位置不匹配。When it is determined in step S1102 that the barcode region is outside the region corresponding to the barcode reading position, that is, when it is determined that the barcode region does not belong to the region corresponding to the barcode reading position, the method 1100 may execute step S1104 to determine that the barcode and the barcode reading position do not match. .
综上,方法1100可以根据读码位置与条码的位置关系,确定条码与读码位置是否匹配。这里,条码与读码位置匹配可以理解为:条码在读码位置对应的包裹上。条码与读码位置不匹配可以理解为:条码不属于读码位置对应的包裹。In summary, the method 1100 can determine whether the barcode and the barcode reading position match according to the position relationship between the barcode reading position and the barcode. Here, the barcode matching with the reading position can be understood as: the barcode is on the package corresponding to the reading position. The mismatch between the barcode and the barcode reading position can be understood as: the barcode does not belong to the package corresponding to the barcode reading position.
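A simplified sketch of this position-matching check; it approximates both regions with axis-aligned bounding boxes, whereas the scheme described above works with the actual projected quadrilateral:

```python
def bbox(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

def barcode_matches_read_position(barcode_region, projection_region):
    """Return True if the barcode region lies (at least partly) inside the
    region corresponding to the code-reading position.

    Both arguments are lists of (u, v) image coordinates. Approximating the
    regions by their bounding boxes keeps the check simple; a production
    implementation could intersect the actual polygons instead.
    """
    bx0, by0, bx1, by1 = bbox(barcode_region)
    px0, py0, px1, py1 = bbox(projection_region)
    overlap_w = min(bx1, px1) - max(bx0, px0)
    overlap_h = min(by1, py1) - max(by0, py0)
    return overlap_w > 0 and overlap_h > 0
```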
在一些实施例中,方法300还可以包括步骤S312和S313。在步骤S312中,当读码位置匹配到条码时,将匹配到的条码与目标属性进行关联。以图12B为例,步骤S312可以将条码C2与包裹B3的目标属性进行关联并存储。In some embodiments, the method 300 may further include steps S312 and S313. In step S312, when the barcode reading position matches the barcode, the matched barcode is associated with the target attribute. Taking FIG. 12B as an example, in step S312, the barcode C2 and the target attribute of the package B3 may be associated and stored.
在步骤S313中,将匹配到的条码与包裹进行关联。In step S313, the matched barcode is associated with the package.
In summary, according to the package detection method 300 of the present application, when the detection position of a package is detected, the code-reading position of the package in the imaging area and the first image identifier corresponding to that code-reading position can be predicted. On this basis, by matching the barcode in the package image against the code-reading position, method 300 can determine the association between the package and the barcode and can associate the target attributes of the package with the barcode; correspondingly, once the attributes of the package have been determined, they can also be associated with the barcode.
在一些实施例中,方法300还可以包括步骤S314,获取传送带的延展处理区的全景图像。其中,延展处理区位于读码视野的下游。In some embodiments, the method 300 may further include step S314 of obtaining a panoramic image of the extended processing area of the conveyor belt. Among them, the extended processing area is located downstream of the barcode reading field.
在步骤S315中,根据检测位置和传输速度,持续更新包裹在延展处理区中随时间变化的预测位置,并根据预测位置在全景图像中添加跟踪框。其中,当包裹的目标属性关联到条码时,可以将跟踪框呈现为第一颜色。当包裹的目标属性未关联到条码时,步骤S315可以将跟踪框呈现为第二颜色。例如,第一颜色为绿色,第二颜色为红色。In step S315, according to the detected position and the transmission speed, continuously update the predicted position of the package in the extended processing area over time, and add a tracking frame to the panoramic image according to the predicted position. Wherein, when the target attribute of the package is associated with the barcode, the tracking box can be presented as the first color. When the target attribute of the package is not associated with the barcode, step S315 may present the tracking box in the second color. For example, the first color is green and the second color is red.
图13A示出了根据本申请一些实施例的物流系统的示意图。图13A在图1的基础上进一步增加有相机150。相机150可以向计算设备140输出图像帧序列,即输出全景图像帧序列。相机150处于读码相机130的下游。相机150的视野为V3。视野V3可以覆盖传送带的延展处理区S3。计算设备140可以根据检测位置和传送速度,更新各包裹在延展处理区中的预测位置,并在全景图像中添加跟踪框。这样,计算设备140可以通过跟踪框跟踪包裹的位置。另外,计算设备140可以通过将包裹的跟踪框显示为不同颜色,显示包裹的不同状态。这样,工作人员可以方便地确定第二颜色的跟踪框对应的包裹的目标属性未关联到条码,即第二颜色的跟踪框对应的包裹的上表面不存在可 识别的条码。例如,上表面不存在可识别的条码的情况例如是包裹上表面的条码不完整、包裹不存在条码或者包裹条码在包裹的侧面或者底面中等。工作人员可以对不存在可识别的条码的包裹进行补码等处理。例如图13B示出了根据本申请一些实施例的全景图像的示意图。图13B中跟踪框M1例如可以呈现为绿色,跟踪框M2例如可以呈现为红色。根据跟踪框的颜色,工作人员可以快速找到红色的包裹,并进行补码等操作。Figure 13A shows a schematic diagram of a logistics system according to some embodiments of the present application. FIG. 13A further adds a camera 150 on the basis of FIG. 1. The camera 150 may output a sequence of image frames to the computing device 140, that is, a sequence of panoramic image frames. The camera 150 is downstream of the code reading camera 130. The field of view of the camera 150 is V3. The field of view V3 can cover the extended processing area S3 of the conveyor belt. The computing device 140 can update the predicted position of each package in the extended processing area according to the detected position and the transmission speed, and add a tracking frame to the panoramic image. In this way, the computing device 140 can track the location of the package through the tracking box. In addition, the computing device 140 can display different states of the package by displaying the tracking frame of the package in different colors. In this way, the staff can easily determine that the target attribute of the package corresponding to the tracking frame of the second color is not associated with a barcode, that is, there is no identifiable barcode on the upper surface of the package corresponding to the tracking frame of the second color. For example, the case where there is no identifiable bar code on the upper surface of the package is, for example, the bar code on the upper surface of the package is incomplete, the package does not have a bar code, or the package bar code is on the side or bottom of the package. The staff can perform code supplementation and other processing for packages that do not have identifiable barcodes. For example, FIG. 13B shows a schematic diagram of a panoramic image according to some embodiments of the present application. In FIG. 13B, the tracking frame M1 may be presented in green, for example, and the tracking frame M2 may be presented in red, for example. According to the color of the tracking box, the staff can quickly find the red package and perform operations such as complementing the code.
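An illustrative sketch of rendering such tracking frames on the panoramic image; the concrete colors and the use of OpenCV (which takes BGR tuples) are assumptions:

```python
import cv2

GREEN = (0, 255, 0)   # first color: target attributes already associated with a barcode
RED = (0, 0, 255)     # second color: no barcode associated yet

def draw_tracking_boxes(panorama, tracked_packages):
    """tracked_packages: iterable of (predicted_box, has_barcode), where
    predicted_box is (x0, y0, x1, y1) in panorama pixel coordinates."""
    for (x0, y0, x1, y1), has_barcode in tracked_packages:
        color = GREEN if has_barcode else RED
        cv2.rectangle(panorama, (int(x0), int(y0)), (int(x1), int(y1)), color, thickness=3)
    return panorama
```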
In summary, through steps S314 and S315, the package detection method 300 of the present application can track packages after they leave the code-reading field of view and indicate their status with different colors, which greatly improves the convenience of handling abnormal packages (such as packages whose target attributes are not associated with a barcode).
Figure 14 shows a schematic diagram of a logistics system according to some embodiments of the present application. Figure 14 shows an array of code-reading cameras, for example comprising code-reading cameras 130, 160 and 170. In the array, the fields of view of adjacent code-reading cameras may be adjacent to each other or partially overlap. The computing device 140 can predict the code-reading positions of a package in each of the code-reading cameras 130, 160 and 170, and associate the target attributes of the package with a barcode according to the package image from each code-reading camera. In this way, multiple code-reading cameras are used to associate target attributes with barcodes, and the computing device 140 can compare the association results corresponding to the different code-reading cameras (i.e., the association results between target attributes and barcodes), thereby improving the accuracy of the association. For example, if the association results corresponding to code-reading cameras 130 and 160 are the same, while the association result corresponding to code-reading camera 170 differs from that of code-reading camera 130, the computing device 140 takes the association result corresponding to code-reading cameras 130 (160) as authoritative.
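A minimal sketch of such a cross-camera consistency check using a majority vote; the tie-breaking policy and the example barcode values are assumptions:

```python
from collections import Counter
from typing import Dict, Optional

def resolve_association(results_by_camera: Dict[str, str]) -> Optional[str]:
    """results_by_camera maps a camera identifier to the barcode it associated
    with the package. The barcode reported by the most cameras wins; None is
    returned when there is no clear majority."""
    counts = Counter(results_by_camera.values())
    (best, best_n), *rest = counts.most_common()
    if rest and rest[0][1] == best_n:
        return None   # ambiguous: equally many cameras disagree
    return best

# Example matching the text: cameras 130 and 160 agree, camera 170 disagrees.
# resolve_association({"130": "SF123", "160": "SF123", "170": "SF999"}) -> "SF123"
```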
图15示出了根据本申请一些实施例的检测包裹的装置1500的示意图。这里,装置1500例如可以部署在计算设备140中。FIG. 15 shows a schematic diagram of an apparatus 1500 for detecting packages according to some embodiments of the present application. Here, the apparatus 1500 may be deployed in the computing device 140, for example.
如图15所示,检测包裹的装置1500可以包括:检测单元1501预测单元1502、条码识别单元1503、匹配单元1504和关联单元1505。As shown in FIG. 15, the device 1500 for detecting packages may include: a detection unit 1501, a prediction unit 1502, a barcode recognition unit 1503, a matching unit 1504, and an association unit 1505.
检测单元1501,用于在传送带上包裹经过检测区域时,识别包裹上指定区域的轮廓,并识别轮廓在检测区域中的检测位置。指定区域包括所述包裹的条码。The detection unit 1501 is used to identify the contour of the designated area on the package when the package passes through the detection area on the conveyor belt, and to identify the detection position of the contour in the detection area. The designated area includes the barcode of the package.
预测单元1502用于根据轮廓在检测区域中的检测位置和所述传送带的传送速度,预测轮廓在读码相机的图像坐标系的成像区域中的读码位置和读码位置对应的第一图像标识。所述读码相机处于所述检测区域的下游,所述第 一图像标识用于标识当轮廓到达读码位置时读码相机拍摄的包裹图像。The prediction unit 1502 is used for predicting the reading position of the contour in the imaging area of the image coordinate system of the code reading camera and the first image identifier corresponding to the reading position according to the detection position of the contour in the detection area and the conveying speed of the conveyor belt. The code reading camera is located downstream of the detection area, and the first image identifier is used to identify the package image taken by the code reading camera when the contour reaches the code reading position.
条码识别单元1503用于当获取到第一图像标识对应的包裹图像时,对包裹图像进行条码识别。The barcode recognition unit 1503 is configured to perform barcode recognition on the package image when the package image corresponding to the first image identifier is acquired.
当条码识别单元1503识别出包裹图像中的条码时,匹配单元1504用于将读码位置与识别出的条码进行位置匹配。When the barcode recognition unit 1503 recognizes the barcode in the package image, the matching unit 1504 is used to match the barcode reading position with the recognized barcode.
当匹配单元1504确定读码位置匹配到条码时,关联单元1505用于将匹配到的条码与包裹进行关联。When the matching unit 1504 determines that the barcode reading position matches the barcode, the associating unit 1505 is used to associate the matched barcode with the package.
综上,根据本申请的检测包裹的装置1500,可以在检测出包裹的检测位置时预测包裹的指定区域在成像区域中的读码位置和读码位置对应的第一图像标识。这里,读码位置可以认为是在获取到第一图像标识对应的包裹图像时包裹的指定区域在图像坐标系中的投影位置。在此基础上,根据本申请的检测包裹的装置1500通过对包裹图像中条码与读码位置进行位置匹配,可以确定包裹与条码的关联关系,相应的,在确定出包裹的属性后,还可以将包裹的属性与条码进行关联。In summary, the device 1500 for detecting packages according to the present application can predict the code reading position of the designated area of the package in the imaging area and the first image identification corresponding to the code reading position when the detection position of the package is detected. Here, the barcode reading position can be considered as the projection position of the designated area of the package in the image coordinate system when the package image corresponding to the first image identifier is acquired. On this basis, the device 1500 for detecting a package according to the present application can determine the relationship between the package and the barcode by matching the barcode and the barcode reading position in the package image. Correspondingly, after determining the attributes of the package, it can also Associate the attributes of the package with the barcode.
图16示出了根据本申请一些实施例的检测包裹的装置1600的示意图。这里,装置1600例如可以部署在计算设备140中。FIG. 16 shows a schematic diagram of an apparatus 1600 for detecting packages according to some embodiments of the present application. Here, the apparatus 1600 may be deployed in the computing device 140, for example.
如图16所示,检测包裹的装置1600可以包括:检测单元1601、预测单元1602、条码识别单元1603、匹配单元1604、关联单元1605、第一标定单元1606和第二标定单元1607。As shown in FIG. 16, the device 1600 for detecting packages may include: a detection unit 1601, a prediction unit 1602, a barcode recognition unit 1603, a matching unit 1604, an association unit 1605, a first calibration unit 1606, and a second calibration unit 1607.
第一标定单元1606可以获取根据第一标定盘建立的第一世界坐标系。第一标定盘放置于传送带上并处于深度相机的视野范围。根据第一世界坐标系和深度相机拍摄的第一标定盘的图像,第一标定单元1606可以标定深度相机的外参,得到深度相机坐标系与第一世界坐标系之间的第一映射关系。The first calibration unit 1606 can obtain the first world coordinate system established according to the first calibration disk. The first calibration disc is placed on the conveyor belt and is in the field of view of the depth camera. According to the first world coordinate system and the image of the first calibration disk taken by the depth camera, the first calibration unit 1606 can calibrate the external parameters of the depth camera to obtain the first mapping relationship between the depth camera coordinate system and the first world coordinate system.
The second calibration unit 1607 acquires the second world coordinate system established according to the second calibration plate. The second calibration plate is placed on the conveyor belt and is within the code-reading field of view of the code-reading camera. According to the second world coordinate system and the image of the second calibration plate captured by the code-reading camera, the second calibration unit 1607 can calibrate the external parameters of the code-reading camera to obtain the second mapping relationship between the code-reading camera coordinate system and the second world coordinate system.
另外,第二标定单元1607还可以确定第一世界坐标系和第二世界坐标系之间的第三映射关系。根据读码相机的内参,第二标定单元1607可以确定读码相机坐标系与读码相机的图像坐标系之间的第四映射关系。In addition, the second calibration unit 1607 can also determine the third mapping relationship between the first world coordinate system and the second world coordinate system. According to the internal parameters of the code reading camera, the second calibration unit 1607 can determine the fourth mapping relationship between the code reading camera's coordinate system and the image coordinate system of the code reading camera.
检测单元1601,用于在传送带上包裹经过检测区域时,识别包裹上指定区域的轮廓和轮廓在检测区域中的检测位置。指定区域包括所述包裹的条码。The detection unit 1601 is used to identify the contour of the designated area on the package and the detection position of the contour in the detection area when the package passes through the detection area on the conveyor belt. The designated area includes the barcode of the package.
在一些实施例中,在传送带上包裹经过检测区域时,检测单元1601可以获取包裹的深度图像。根据所述深度图像,检测单元1601确定包裹的上表面的轮廓和上表面在第一世界坐标系中的检测位置。其中,上表面为所述指定区域。In some embodiments, the detection unit 1601 may obtain a depth image of the package when the package passes through the detection area on the conveyor belt. According to the depth image, the detection unit 1601 determines the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system. Wherein, the upper surface is the designated area.
在一些实施例中,检测单元1601可以根据所述深度图像,确定所述包裹的三维模型。根据三维模型在深度相机坐标系中的坐标和第一映射关系,检测单元1601可以确定包裹的上表面的轮廓和上表面在第一世界坐标系中的检测位置。检测位置由包裹上表面的至少三个顶点在第一世界坐标系中的坐标表示。In some embodiments, the detection unit 1601 may determine a three-dimensional model of the package according to the depth image. According to the coordinates of the three-dimensional model in the depth camera coordinate system and the first mapping relationship, the detection unit 1601 can determine the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system. The detection position is represented by the coordinates of at least three vertices of the upper surface of the package in the first world coordinate system.
在一些实施例中,检测单元1601可以获取所述深度图像对应的灰度图像。检测单元1601可以在灰度图像中确定包裹的上表面的轮廓。根据灰度图像中的包裹的上表面的轮廓,检测单元1601可以确定深度图像中与上表面对应的第二深度区域,得到第二深度区域的至少三个顶点。检测单元1601可以确定第二深度区域的至少三个顶点在深度相机坐标系中的坐标。根据所述第二深度区域的至少三个顶点在深度相机坐标系中的坐标和所述第一映射关系,检测单元1601可以确定包裹的上表面在第一世界坐标系中的检测位置。检测位置由第二深度区域的至少三个顶点在第一世界坐标系中的坐标表示。In some embodiments, the detection unit 1601 may obtain a grayscale image corresponding to the depth image. The detection unit 1601 can determine the contour of the upper surface of the package in the gray image. According to the contour of the upper surface of the package in the grayscale image, the detection unit 1601 can determine the second depth region corresponding to the upper surface in the depth image, and obtain at least three vertices of the second depth region. The detection unit 1601 may determine the coordinates of at least three vertices of the second depth region in the depth camera coordinate system. According to the coordinates of the at least three vertices of the second depth region in the depth camera coordinate system and the first mapping relationship, the detection unit 1601 can determine the detection position of the upper surface of the package in the first world coordinate system. The detection position is represented by the coordinates of at least three vertices of the second depth region in the first world coordinate system.
In some embodiments, when a package on the conveyor belt passes through the detection area, the detection unit 1601 may obtain a grayscale image and a depth image of the package. The detection unit 1601 can determine the contour of the face sheet (waybill) area of the package in the grayscale image. According to that contour, the detection unit 1601 can determine the first depth region corresponding to the face sheet area in the depth image. According to the first depth region, the detection unit 1601 can determine the detection position of the contour of the face sheet area in the first world coordinate system. Here, the face sheet area is the designated area.
In some embodiments, the detection unit 1601 may also detect target attributes of the package when the package on the conveyor belt passes through the detection area. The target attributes include at least one of the volume of the package, the size of the package, the weight of the package, and the face sheet of the package.
预测单元1602用于根据轮廓在检测区域中的检测位置和所述传送带的传送速度,预测轮廓在读码相机的图像坐标系的成像区域中的读码位置和读码 位置对应的第一图像标识。读码相机处于所述检测区域的下游,所述第一图像标识用于标识当轮廓到达读码位置时读码相机拍摄的包裹图像。The prediction unit 1602 is used for predicting the code reading position of the contour in the imaging area of the image coordinate system of the code reading camera and the first image identifier corresponding to the code reading position according to the detection position of the contour in the detection area and the conveying speed of the conveyor belt. The code reading camera is located downstream of the detection area, and the first image identifier is used to identify the package image taken by the code reading camera when the outline reaches the code reading position.
在一些实施例中,轮廓在检测区域中的检测位置为轮廓在第一世界坐标系中的坐标。In some embodiments, the detection position of the contour in the detection area is the coordinate of the contour in the first world coordinate system.
当检测单元1601识别出所述检测位置时,预测单元1602可以确定读码相机当前时刻采集的图像帧的第二图像标识。第二图像标识为帧号或者时间戳。预测单元1602可以获取包裹在读码相机的单个采集周期内的移动距离。在此基础上,预测单元1602可以根据读码相机的参数、轮廓在检测区域中的检测位置和传送带的传送速度,确定轮廓在第一世界坐标系中的偏移位置。偏移位置满足:当轮廓处于所述偏移位置时,轮廓在读码相机的图像坐标系中的投影位置处于成像区域中。这样,预测单元1602可以将轮廓处于偏移位置时,轮廓在读码相机的图像坐标系中的投影位置作为读码位置。When the detection unit 1601 recognizes the detection position, the prediction unit 1602 can determine the second image identifier of the image frame collected by the barcode reading camera at the current moment. The second image is identified as a frame number or a time stamp. The prediction unit 1602 can obtain the moving distance of the package in a single collection period of the code reading camera. On this basis, the prediction unit 1602 can determine the offset position of the contour in the first world coordinate system according to the parameters of the code reading camera, the detection position of the contour in the detection area, and the conveying speed of the conveyor belt. The offset position satisfies: when the contour is at the offset position, the projection position of the contour in the image coordinate system of the code reading camera is in the imaging area. In this way, the prediction unit 1602 can use the projection position of the contour in the image coordinate system of the code reading camera as the code reading position when the contour is at the offset position.
另外,预测单元1602可以计算偏移位置与检测位置之间的差距,并根据该差距包含的移动距离的个数,确定读码相机在包裹从检测位置到达读码位置之前拍摄的图像帧数量。根据第二图像标识和图像帧数量,预测单元1602可以确定读码位置对应的第一图像标识。In addition, the prediction unit 1602 may calculate the difference between the offset position and the detection position, and determine the number of image frames taken by the code reading camera before the package reaches the code reading position from the detection position according to the number of movement distances included in the difference. According to the second image identifier and the number of image frames, the prediction unit 1602 can determine the first image identifier corresponding to the code reading position.
条码识别单元1603用于当获取到第一图像标识对应的包裹图像时,对包裹图像进行条码识别。The barcode recognition unit 1603 is configured to perform barcode recognition on the package image when the package image corresponding to the first image identifier is acquired.
当条码识别单元1603识别出包裹图像中的条码时,匹配单元1604用于将读码位置与条码进行位置匹配。When the barcode recognition unit 1603 recognizes the barcode in the package image, the matching unit 1604 is used to match the barcode reading position with the barcode.
在一些实施例中,匹配单元1604可以确定条码在包裹图像中的条码区域。匹配单元1604确定条码区域是否属于读码位置对应的区域。在条码区域的至少一部分区域属于读码位置对应的区域时,匹配单元1604确定条码与读码位置之间位置匹配。在条码区域处于读码位置对应的区域之外时,匹配单元1604确定条码与读码位置之间位置不匹配。In some embodiments, the matching unit 1604 may determine the barcode area of the barcode in the package image. The matching unit 1604 determines whether the barcode area belongs to the area corresponding to the barcode reading position. When at least a part of the barcode area belongs to the area corresponding to the barcode reading position, the matching unit 1604 determines the position matching between the barcode and the barcode reading position. When the barcode area is outside the area corresponding to the barcode reading position, the matching unit 1604 determines that the barcode and the barcode reading position do not match.
当匹配单元1604确定读码位置匹配到条码时,关联单元1605用于将匹配到的条码与包裹进行关联。另外,当所述读码位置匹配到条码时,关联单元1605可以将将匹配到的条码与目标属性进行关联。在一些实施例中,装置1600还可以包括跟踪单元1608。跟踪单元1608可以获取传送带的延展处理区的全景图像。其中,延展处理区位于读码视野的下游。根据检测位置和传 输速度,预测单元1602持续更新包裹在延展处理区中随时间变化的预测位置。跟踪单元1608可以根据预测位置在全景图像中添加跟踪框。其中,当包裹的目标属性关联到条码时,跟踪单元1608将跟踪框呈现为第一颜色。当包裹的目标属性未关联到条码时,跟踪单元1608将跟踪框呈现为第二颜色。When the matching unit 1604 determines that the barcode reading position matches the barcode, the associating unit 1605 is used to associate the matched barcode with the package. In addition, when the barcode reading position matches the barcode, the associating unit 1605 can associate the matched barcode with the target attribute. In some embodiments, the apparatus 1600 may further include a tracking unit 1608. The tracking unit 1608 can acquire a panoramic image of the extended processing area of the conveyor belt. Among them, the extended processing area is located downstream of the barcode reading field. Based on the detected position and the transmission speed, the prediction unit 1602 continuously updates the predicted position of the package in the extended processing area over time. The tracking unit 1608 may add a tracking frame to the panoramic image according to the predicted position. Wherein, when the target attribute of the package is associated with the barcode, the tracking unit 1608 presents the tracking frame as the first color. When the target attribute of the package is not associated with the barcode, the tracking unit 1608 presents the tracking frame in the second color.
装置1600更具体的实施方式与方法300类似,这里不再赘述。A more specific implementation manner of the apparatus 1600 is similar to that of the method 300, and will not be repeated here.
Figure 17 shows a schematic diagram of a computing device according to some embodiments of the present application. As shown in FIG. 17, the computing device includes one or more processors (CPUs) 1702, a communication module 1704, a memory 1706, a user interface 1710, and a communication bus 1708 for interconnecting these components.
The processor 1702 may receive and send data through the communication module 1704 to implement network communication and/or local communication.
The user interface 1710 includes one or more output devices 1712, including one or more speakers and/or one or more visual displays. The user interface 1710 also includes one or more input devices 1714. The user interface 1710 may, for example, receive instructions from a remote controller, but is not limited thereto.
The memory 1706 may be a high-speed random access memory, such as DRAM, SRAM, DDR RAM, or another random-access solid-state storage device; or a non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
The memory 1706 stores an instruction set executable by the processor 1702, including:
an operating system 1716, including programs for handling various basic system services and for performing hardware-related tasks; and
an application 1718, including various programs for implementing the package detection described above, for example the package detection apparatus 1500 or 1600. Such a program can implement the processing flows in the above examples, and may include, for example, the method of detecting packages.
In addition, each embodiment of the present application may be implemented by a data processing program executed by a data processing device such as a computer. Obviously, the data processing program constitutes the present application. Furthermore, a data processing program usually stored in a storage medium is executed by reading the program directly out of the storage medium, or by installing or copying the program onto a storage device (such as a hard disk and/or memory) of the data processing device. Therefore, such a storage medium also constitutes the present application. The storage medium may use any type of recording method, for example a paper storage medium (such as paper tape), a magnetic storage medium (such as a floppy disk, hard disk, or flash memory), an optical storage medium (such as a CD-ROM), or a magneto-optical storage medium (such as an MO disk).
Therefore, the present application also discloses a non-volatile storage medium storing a program. The program includes instructions that, when executed by a processor, cause a computing device to execute the method of detecting packages according to the present application.
In addition, the method steps described in the present application may be implemented not only by a data processing program but also by hardware, for example by logic gates, switches, application-specific integrated circuits (ASICs), programmable logic controllers, and embedded microcontrollers. Therefore, hardware that can implement the method described in the present application may also constitute the present application.
The above are only preferred embodiments of the present application and are not intended to limit it. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within the scope of protection of the present application.

Claims (12)

  1. A method for detecting a package, comprising:
    when a package on a conveyor belt passes through a detection area, identifying a contour of a designated area on the package, and identifying a detection position of the contour in the detection area, the designated area including a barcode of the package;
    predicting, according to the detection position of the contour in the detection area and a conveying speed of the conveyor belt, a barcode reading position of the contour in an imaging area of an image coordinate system of a code reading camera and a first image identifier corresponding to the barcode reading position, wherein the code reading camera is located downstream of the detection area, and the first image identifier is used to identify a package image captured by the code reading camera when the contour reaches the barcode reading position;
    when the package image corresponding to the first image identifier is acquired, performing barcode recognition on the package image;
    when a barcode in the package image is recognized, matching the barcode reading position against the recognized barcode; and
    when the barcode reading position matches a barcode, associating the matched barcode with the package.
  2. The method according to claim 1, wherein the detection position of the contour in the detection area is coordinates of the contour in a first world coordinate system, and predicting, according to the detection position of the contour in the detection area and the conveying speed of the conveyor belt, the barcode reading position of the contour in the imaging area of the image coordinate system of the code reading camera and the first image identifier corresponding to the barcode reading position comprises:
    when the detection position is identified, determining a second image identifier of an image frame captured by the code reading camera at the current moment, the second image identifier being a frame number or a timestamp;
    obtaining a moving distance of the package within a single capture cycle of the code reading camera;
    determining an offset position of the contour in the first world coordinate system according to parameters of the code reading camera, the detection position of the contour in the detection area, and the conveying speed of the conveyor belt, the offset position satisfying that, when the contour is at the offset position, a projection position of the contour in the image coordinate system of the code reading camera lies within the imaging area;
    taking, as the barcode reading position, the projection position of the contour in the image coordinate system of the code reading camera when the contour is at the offset position;
    calculating a gap between the offset position and the detection position, and determining, according to the number of said moving distances contained in the gap, the number of image frames captured by the code reading camera before the package travels from the detection position to the barcode reading position; and
    determining, according to the second image identifier and the number of image frames, the first image identifier corresponding to the barcode reading position.
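As a rough illustration of the frame-count arithmetic recited in claim 2 (not part of the claim itself), the first image identifier could be derived as follows, assuming the second image identifier is a frame number and all distances are measured along the conveying direction; all names are illustrative.

def first_image_identifier(second_image_id: int,
                           offset_position_mm: float,
                           detection_position_mm: float,
                           belt_speed_mm_s: float,
                           capture_period_s: float) -> int:
    # Distance the package moves during one capture cycle of the code reading camera.
    move_per_cycle_mm = belt_speed_mm_s * capture_period_s
    # How far the contour still has to travel from the detection position to the offset position.
    gap_mm = offset_position_mm - detection_position_mm
    # Number of image frames captured before the package reaches the reading position.
    frames_before_arrival = int(round(gap_mm / move_per_cycle_mm))
    # The frame identified by the first image identifier arrives that many frames later.
    return second_image_id + frames_before_arrival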
  3. The method according to claim 1, further comprising:
    when the package on the conveyor belt passes through the detection area, detecting a target attribute of the package, the target attribute including at least one of a volume of the package, a size of the package, a weight of the package, and a waybill (face sheet) of the package; and
    when the barcode reading position matches a barcode, associating the matched barcode with the target attribute.
  4. The method according to claim 1, wherein matching the barcode reading position against the barcode comprises:
    determining a barcode area of the barcode in the package image;
    determining whether the barcode area belongs to an area corresponding to the barcode reading position;
    when at least a part of the barcode area belongs to the area corresponding to the barcode reading position, determining that the barcode matches the barcode reading position; and
    when the barcode area lies outside the area corresponding to the barcode reading position, determining that the barcode does not match the barcode reading position.
  5. The method according to claim 1, further comprising:
    acquiring a panoramic image of an extended processing area of the conveyor belt, wherein the extended processing area is located downstream of the barcode reading field of view in the conveying direction of the conveyor belt;
    continuously updating, according to the detection position and the conveying speed, a predicted position of the package in the extended processing area over time, and adding a tracking frame to the panoramic image according to the predicted position;
    wherein, when the package is associated with a barcode, the tracking frame is rendered in a first color, and when the package is not associated with a barcode, the tracking frame is rendered in a second color.
  6. The method according to claim 1, wherein identifying the contour of the designated area on the package and identifying the detection position of the contour in the detection area when the package on the conveyor belt passes through the detection area comprises:
    when the package on the conveyor belt passes through the detection area, acquiring a depth image of the package, and determining, according to the depth image, a contour of an upper surface of the package and a detection position of the upper surface in the first world coordinate system, wherein the upper surface is the designated area; or
    when the package on the conveyor belt passes through the detection area, acquiring a grayscale image and a depth image of the package; determining a contour of a waybill area of the package in the grayscale image; determining, according to the contour of the waybill area of the package in the grayscale image, a first depth area in the depth image corresponding to the waybill area; and determining, according to the first depth area, a detection position of the contour of the waybill area in the first world coordinate system, wherein the waybill area is the designated area.
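Purely as an illustration of the grayscale-plus-depth variant recited in claim 6 (not part of the claims), the following sketch assumes that the grayscale and depth images are pixel-aligned, that the waybill is the brightest region on the parcel, and that OpenCV 4 is used; the threshold value and all names are assumptions.

import cv2
import numpy as np

def waybill_depth_region(gray: np.ndarray, depth: np.ndarray):
    # The white waybill is assumed to be markedly brighter than the parcel surface.
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, None
    waybill_contour = max(contours, key=cv2.contourArea)   # largest bright blob
    region_mask = np.zeros_like(gray)
    cv2.drawContours(region_mask, [waybill_contour], -1, 255, thickness=-1)
    # First depth area: depth values inside the waybill contour, zero elsewhere.
    first_depth_region = np.where(region_mask == 255, depth, 0)
    return waybill_contour, first_depth_region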
  7. The method according to claim 6, further comprising:
    acquiring the first world coordinate system established according to a first calibration plate, the first calibration plate being placed on the conveyor belt within a field of view of a depth camera;
    calibrating extrinsic parameters of the depth camera according to the first world coordinate system and an image of the first calibration plate captured by the depth camera, to obtain a first mapping relationship between a depth camera coordinate system and the first world coordinate system;
    acquiring a second world coordinate system established according to a second calibration plate, the second calibration plate being placed on the conveyor belt within the barcode reading field of view of the code reading camera;
    calibrating extrinsic parameters of the code reading camera according to the second world coordinate system and an image of the second calibration plate captured by the code reading camera, to obtain a second mapping relationship between a code reading camera coordinate system and the second world coordinate system;
    determining a third mapping relationship between the first world coordinate system and the second world coordinate system; and
    determining, according to intrinsic parameters of the code reading camera, a fourth mapping relationship between the code reading camera coordinate system and the image coordinate system of the code reading camera.
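One way to picture how the four mapping relationships of claim 7 compose (an illustrative sketch, not the application's implementation): a point in the first world coordinate system is carried through the third mapping into the second world coordinate system, through the second mapping into the code reading camera coordinate system, and through the fourth mapping (the intrinsics) into the image coordinate system. The matrix conventions and names below are assumptions.

import numpy as np

def project_to_reading_image(p_world1, T_w1_to_w2, T_w2_to_cam2, K_cam2):
    # p_world1:     (3,) point in the first world coordinate system.
    # T_w1_to_w2:   4x4 third mapping (first world -> second world).
    # T_w2_to_cam2: 4x4 transform derived from the second mapping (second world -> code reading camera).
    # K_cam2:       3x3 intrinsic matrix of the code reading camera (fourth mapping).
    p_h = np.append(np.asarray(p_world1, dtype=float), 1.0)   # homogeneous coordinates
    p_cam = (T_w2_to_cam2 @ T_w1_to_w2 @ p_h)[:3]              # code reading camera coordinates
    uvw = K_cam2 @ p_cam                                       # pinhole projection
    return uvw[:2] / uvw[2]                                    # pixel coordinates (u, v)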
  8. The method according to claim 7, wherein determining, according to the depth image, the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system comprises:
    determining a three-dimensional model of the package according to the depth image, and determining, according to coordinates of the three-dimensional model in the depth camera coordinate system and the first mapping relationship, the contour of the upper surface of the package and the detection position of the upper surface in the first world coordinate system, the detection position being represented by coordinates of at least three vertices of the upper surface of the package in the first world coordinate system; or
    acquiring a grayscale image corresponding to the depth image;
    determining the contour of the upper surface of the package in the grayscale image;
    determining, according to the contour of the upper surface of the package in the grayscale image, a second depth area in the depth image corresponding to the upper surface, to obtain at least three vertices of the second depth area;
    determining coordinates of the at least three vertices of the second depth area in the depth camera coordinate system; and
    determining, according to the coordinates of the at least three vertices of the second depth area in the depth camera coordinate system and the first mapping relationship, the detection position of the upper surface of the package in the first world coordinate system, the detection position being represented by coordinates of the at least three vertices of the second depth area in the first world coordinate system.
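For illustration only, the final step of claim 8 (mapping the vertices of the second depth area into the first world coordinate system via the first mapping relationship) might look like the following sketch, assuming the first mapping is available as a 4x4 homogeneous transform; names are illustrative.

import numpy as np

def vertices_to_world(vertices_cam, T_cam_to_world1):
    # vertices_cam:    (N, 3) vertex coordinates in the depth camera coordinate system.
    # T_cam_to_world1: 4x4 first mapping relationship (depth camera -> first world).
    v = np.asarray(vertices_cam, dtype=float)
    v_h = np.hstack([v, np.ones((v.shape[0], 1))])   # homogeneous coordinates
    # Resulting rows are the detection-position vertices in the first world coordinate system.
    return (v_h @ T_cam_to_world1.T)[:, :3]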
  9. An apparatus for detecting a package, comprising:
    a detection unit, configured to identify, when a package on a conveyor belt passes through a detection area, a contour of a designated area on the package, and to identify a detection position of the contour in the detection area, the designated area including a barcode of the package;
    a prediction unit, configured to predict, according to the detection position of the contour in the detection area and a conveying speed of the conveyor belt, a barcode reading position of the contour in an imaging area of an image coordinate system of a code reading camera and a first image identifier corresponding to the barcode reading position, wherein the code reading camera is located downstream of the detection area, and the first image identifier is used to identify a package image captured by the code reading camera when the contour reaches the barcode reading position;
    a barcode recognition unit, configured to perform barcode recognition on the package image when the package image corresponding to the first image identifier is acquired;
    a matching unit, configured to match the barcode reading position against the recognized barcode when the barcode recognition unit recognizes a barcode in the package image; and
    an associating unit, configured to associate the matched barcode with the package when the matching unit determines that the barcode reading position matches a barcode.
  10. A computing device, comprising:
    a memory;
    a processor; and
    a program, stored in the memory and configured to be executed by the processor, the program including instructions for executing the method according to any one of claims 1-8.
  11. A storage medium storing a program, the program including instructions that, when executed by a computing device, cause the computing device to execute the method according to any one of claims 1-8.
  12. A logistics system, comprising:
    the computing device according to claim 10;
    a conveyor belt;
    a depth camera; and
    a code reading camera.
PCT/CN2021/082964 2020-03-25 2021-03-25 Parcel detection method, device, computing apparatus, logistics system, and storage medium WO2021190595A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010216758.4A CN113449532B (en) 2020-03-25 2020-03-25 Method, device, computing equipment, logistics system and storage medium for detecting packages
CN202010216758.4 2020-03-25

Publications (1)

Publication Number Publication Date
WO2021190595A1 true WO2021190595A1 (en) 2021-09-30

Family

ID=77807583

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/082964 WO2021190595A1 (en) 2020-03-25 2021-03-25 Parcel detection method, device, computing apparatus, logistics system, and storage medium

Country Status (2)

Country Link
CN (1) CN113449532B (en)
WO (1) WO2021190595A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113191469A (en) * 2021-04-30 2021-07-30 南方科技大学 Logistics management method, system, server and storage medium based on two-dimension code
CN114950977B (en) * 2022-04-08 2023-11-24 浙江华睿科技股份有限公司 Package tracing method, device, system and computer readable storage medium
CN114972509B (en) * 2022-05-26 2023-09-29 北京利君成数字科技有限公司 Method for quickly identifying tableware position

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020014533A1 (en) * 1995-12-18 2002-02-07 Xiaxun Zhu Automated object dimensioning system employing contour tracing, vertice detection, and forner point detection and reduction methods on 2-d range data maps
JP5814275B2 (en) * 2010-03-12 2015-11-17 サンライズ アール アンド ディーホールディングス,エルエルシー System and method for product identification
CN108627092A (en) * 2018-04-17 2018-10-09 南京阿凡达机器人科技有限公司 A kind of measurement method, system, storage medium and the mobile terminal of package volume
CN109127445B (en) * 2018-06-04 2021-05-04 顺丰科技有限公司 Bar code reading method and bar code reading system
CN109583535B (en) * 2018-11-29 2023-04-18 中国人民解放军国防科技大学 Vision-based logistics barcode detection method and readable storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107328364A (en) * 2017-08-15 2017-11-07 顺丰科技有限公司 A kind of volume, weight measuring system and its method of work
CN107832999A (en) * 2017-11-10 2018-03-23 顺丰科技有限公司 A kind of goods bar code information acquisition system
US20190347455A1 (en) * 2018-05-11 2019-11-14 Optoelectronics Co., Ltd. Optical information reading apparatus and optical information reading method
CN112215022A (en) * 2019-07-12 2021-01-12 杭州海康机器人技术有限公司 Logistics code reading method, logistics code reading device and logistics system

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114693735A (en) * 2022-03-23 2022-07-01 成都智元汇信息技术股份有限公司 Video fusion method and device based on target identification
CN115494556A (en) * 2022-08-18 2022-12-20 成都智元汇信息技术股份有限公司 Packet association method based on paragraph fuzzy matching
CN115494556B (en) * 2022-08-18 2023-09-12 成都智元汇信息技术股份有限公司 Packet association method based on paragraph fuzzy matching
CN117140558A (en) * 2023-10-25 2023-12-01 菲特(天津)检测技术有限公司 Coordinate conversion method, system and electronic equipment
CN117140558B (en) * 2023-10-25 2024-01-16 菲特(天津)检测技术有限公司 Coordinate conversion method, system and electronic equipment
CN117765065A (en) * 2023-11-28 2024-03-26 中科微至科技股份有限公司 Target detection-based single-piece separated package rapid positioning method
CN117765065B (en) * 2023-11-28 2024-06-04 中科微至科技股份有限公司 Target detection-based single-piece separated package rapid positioning method

Also Published As

Publication number Publication date
CN113449532A (en) 2021-09-28
CN113449532B (en) 2022-04-19

Similar Documents

Publication Publication Date Title
WO2021190595A1 (en) Parcel detection method, device, computing apparatus, logistics system, and storage medium
CN105026997B (en) Optical projection system, semiconductor integrated circuit and image correcting method
CN107525466B (en) Automatic mode switching in a volumetric size marker
JP5421624B2 (en) 3D measurement image capturing device
JP2001194114A (en) Image processing apparatus and method and program providing medium
TW201118791A (en) System and method for obtaining camera parameters from a plurality of images, and computer program products thereof
TW201104508A (en) Stereoscopic form reader
JP2010219825A (en) Photographing device for three-dimensional measurement
WO2021114776A1 (en) Object detection method, object detection device, terminal device, and medium
US20130170756A1 (en) Edge detection apparatus, program and method for edge detection
CN109934873B (en) Method, device and equipment for acquiring marked image
KR102492821B1 (en) Methods and apparatus for generating a three-dimensional reconstruction of an object with reduced distortion
JP2013108933A (en) Information terminal device
CN110807431A (en) Object positioning method and device, electronic equipment and storage medium
CN111295683A (en) Package searching auxiliary system based on augmented reality
JP6017343B2 (en) Database generation device, camera posture estimation device, database generation method, camera posture estimation method, and program
JP4554231B2 (en) Distortion parameter generation method, video generation method, distortion parameter generation apparatus, and video generation apparatus
CN101180657A (en) Information terminal
CN104677911A (en) Inspection apparatus and method for machine vision inspection
CN117253022A (en) Object identification method, device and inspection equipment
US20230125042A1 (en) System and method of 3d point cloud registration with multiple 2d images
CN112262411B (en) Image association method, system and device
CN117078762A (en) Virtual reality equipment, camera calibration device and method
KR102217215B1 (en) Server and method for 3dimension model production using scale bar
KR20210084339A (en) Image association method, system and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21776287

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21776287

Country of ref document: EP

Kind code of ref document: A1