CN112132523B - Method, system and device for determining quantity of goods

Method, system and device for determining quantity of goods

Info

Publication number
CN112132523B
CN112132523B (application CN202011347297.0A)
Authority
CN
China
Prior art keywords
goods
cargo
stack
position data
stacks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011347297.0A
Other languages
Chinese (zh)
Other versions
CN112132523A (en)
Inventor
何思枫
杨旭东
张晓博
邹雪晴
杨磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202011347297.0A priority Critical patent/CN112132523B/en
Publication of CN112132523A publication Critical patent/CN112132523A/en
Application granted granted Critical
Publication of CN112132523B publication Critical patent/CN112132523B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/08 Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q10/087 Inventory or stock management, e.g. order filling, procurement or balancing against orders
    • G06Q10/0875 Itemisation or classification of parts, supplies or services, e.g. bill of materials
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

The embodiments of this specification disclose a method, a system and a device for determining the quantity of goods. The method comprises the following steps: acquiring a first perspective image and a second perspective image of a goods pile, wherein the pile comprises at least one goods stack and each stack is formed by stacking single goods items layer upon layer; processing the first perspective image and the second perspective image with a target detection model to acquire first position data of the top-layer goods of one or more stacks in the first perspective image and second position data of those top-layer goods in the second perspective image; determining the position coordinates of the top-layer goods of the one or more stacks in a three-dimensional spatial coordinate system based on the first position data and the second position data; determining the height of each stack based at least on the position coordinates of the top-layer goods of the one or more stacks; and determining the quantity of goods in each stack, based on the height of each stack and the height of a single goods item, so as to obtain the quantity of goods in the pile.

Description

Method, system and device for determining quantity of goods
Technical Field
The present disclosure relates to the field of goods quantity inventory, and in particular to a method, a system and a device for determining the quantity of goods.
Background
In a warehousing environment, the quantity of goods is very important information for both the warehousing party and the goods owner. At present, most existing stock checks are performed manually by on-site personnel, and such manual checking suffers from long per-check times, inaccurate counts, high labor costs, and the like. Thus, there is a need for a fast, automated method of counting goods.
Disclosure of Invention
One of the embodiments of the present specification provides a goods quantity determination method. The method comprises the following steps: acquiring a first perspective image and a second perspective image of a goods pile, wherein the pile comprises at least one goods stack and each stack is formed by stacking at least one single goods item layer upon layer; processing the first perspective image and the second perspective image with a target detection model to acquire first position data of the top-layer goods of one or more stacks in the first perspective image and second position data of those top-layer goods in the second perspective image; determining the position coordinates of the top-layer goods of the one or more stacks in a three-dimensional spatial coordinate system based on the first position data and the second position data; determining the height of each stack based at least on those position coordinates; and determining the quantity of goods in each stack, based on the height of each stack and the height of a single goods item, so as to obtain the quantity of goods in the pile.
One of the embodiments of the present specification provides a goods quantity determination system, comprising: an image acquisition module configured to acquire a first perspective image and a second perspective image of a goods pile, wherein the pile comprises at least one goods stack and each stack is formed by stacking at least one single goods item layer upon layer; a first determining module configured to process the first perspective image and the second perspective image with a target detection model, acquire first position data of the top-layer goods of one or more stacks in the first perspective image and second position data in the second perspective image, and determine the position coordinates of the top-layer goods of the one or more stacks in a three-dimensional spatial coordinate system based on the first position data and the second position data; a second determining module configured to determine the height of each stack based at least on the position coordinates of the top-layer goods of the one or more stacks; and a third determining module configured to determine, based on the height of each stack and the height of a single goods item, the quantity of goods in each stack so as to obtain the quantity of goods in the pile.
One of the embodiments of the present specification provides a goods quantity determination device comprising a processor configured to execute the goods quantity determination method.
Drawings
The present description will be further explained by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals are used to indicate like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of a method/system for determining a quantity of goods according to some embodiments herein;
FIG. 2 is an exemplary flow chart of a cargo quantity determination method according to some embodiments described herein;
FIG. 3 is an exemplary illustration of center points of detection boxes in two view images according to some embodiments of the disclosure;
FIG. 4 is an exemplary flow chart of a method for determining position coordinates of a top level cargo of a stack of cargo in a three-dimensional spatial coordinate system according to some embodiments of the present description;
FIG. 5 is a block diagram of a cargo quantity determination system according to some embodiments of the present disclosure.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly described below. It is obvious that the drawings in the following description are only examples or embodiments of the present description, and that for a person skilled in the art, the present description can also be applied to other similar scenarios on the basis of these drawings without inventive effort. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "device", "unit" and/or "module" as used herein is a method for distinguishing different components, elements, parts, portions or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this specification and the appended claims, the terms "a," "an," and/or "the" do not denote the singular and may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprise" and "include" merely indicate that the explicitly identified steps and elements are included; those steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used in this description to illustrate operations performed by systems according to embodiments of the present description. It should be understood that the operations are not necessarily performed exactly in the order shown; rather, steps may be processed in reverse order or simultaneously. Meanwhile, other operations may be added to these processes, or one or more steps may be removed from them.
In an offline scenario such as a warehouse, acquiring the quantity of goods in real time is very important. However, goods checking based on manual counting by on-site personnel suffers from long per-check times, inaccurate counts, high labor costs, and the like, and the inventory data also needs multiple layers of auditing and cross-checking to ensure its credibility. Automated inventory approaches have therefore emerged. In some embodiments, automated inventory may be implemented via smart warehousing. However, smart warehousing requires a large number of robots, depth sensors, and the like, and the original warehouse must be custom retrofitted, so the construction period is long and the cost is high.
Computer vision techniques also offer an approach to goods inventory. In a warehouse, goods are heavily stacked and occluded: goods are generally stacked on top of one another to form a stack, and a pile is formed by one or more stacks placed in the same area. An object detection scheme using a single-view camera device therefore cannot count the goods effectively: it can only obtain the number of stacks within the camera's field of view, not the exact number of goods, so the final count may be off by a factor of several or even tens.
Therefore, in view of the above problems, some embodiments of the present disclosure provide a goods quantity determination method that, based on a binocular (e.g., top-view/side-view) camera vision algorithm, combines top-surface target detection of the goods with three-dimensional height estimation of the goods, and achieves quick and accurate goods counting at low cost. The following is a detailed description.
FIG. 1 is a diagram of an exemplary application scenario of a method/system for determining a quantity of goods, according to some embodiments of the present description. In the application scenario 100, one or more items are stacked in an overlapping manner to form one or more stacks of items. The stacks of goods are placed in the same storage area (e.g., in a bay in a warehouse). The cargo quantity determination method/system disclosed in the present specification may determine the quantity of the cargo based on a binocular recognition algorithm. As shown in fig. 1, a stack of a plurality of goods 110 is placed in the warehousing area 120, and the image capturing device 130 and the image capturing device 140 can acquire two different perspective images of the stack of goods from two different capturing angles. Based on the two perspective images, in combination with a target recognition algorithm and a three-dimensional height estimation algorithm, the number of items constituting the stack of items can be determined.
In some embodiments, any stack of one or more items 110 may be formed from single goods items stacked layer by layer. That is, each layer of the stack consists of one goods item. Fig. 1 shows the stacking of one stack: goods items 110-1, 110-2 and 110-3 are stacked into a stack of goods. Item 110-1 forms the bottom layer of the stack; item 110-2 is placed on top of 110-1, forming the second layer; and item 110-3 is placed on top of 110-2, forming the top layer. For each stack, only the top surface of the topmost item can appear in a perspective image; the top surfaces of middle- and bottom-layer items are obscured by the items above them.
In some embodiments, the camera device 130 and the camera device 140 may acquire perspective images of the stack of goods at different perspectives. For example, the image capturing apparatus 130 may acquire a perspective image of the stack of goods from an upper position of the stack of goods in a top view. The camera device 140 may acquire a perspective image of the stack of goods from a side view of the stack of goods. For another example, the image capturing apparatus 130 and the image capturing apparatus 140 may acquire perspective images of the stack of items at different top/side angles. In some embodiments, the image capture devices 130 and 140 may be any device having image acquisition capabilities, such as a video camera or a still camera. Both of the image capture devices may be the same or different types of image capture devices, for example, image capture device 130 may be a fisheye camera and image capture device 140 may be a gun-type camera. In some embodiments, the two perspective images of the stack of goods acquired by the camera device 130 and the camera device 140 may include a top layer of goods of the stack of goods, for example, a top surface including the top layer of goods.
According to the goods quantity determining method/system disclosed in this specification, after the perspective images of the pile at different viewing angles are acquired, the position data of the top-layer goods of each stack can be determined using a target detection algorithm; for example, the coordinates of the top surface of the top-layer goods in the images at the different viewing angles, such as coordinates in an image coordinate system or a pixel coordinate system. The position coordinates of the top surface of the top-layer goods of the same stack in a three-dimensional spatial coordinate system, such as coordinates in a camera coordinate system or a world coordinate system, are then calculated from its coordinates in the different perspective images. From the position coordinates in the three-dimensional spatial coordinate system, the actual height of the stack can be calculated, and, combined with the height of a single goods item, the number of goods contained in the stack can be obtained. The image coordinate system may be a two-dimensional coordinate system of the imaging plane; its origin is the intersection of the optical axis of the camera device with the imaging plane, and its x and y axes are parallel to the length and width directions of the imaging plane. The pixel coordinate system (also referred to as an image plane coordinate system) may refer to a two-dimensional coordinate system established on the image as finally presented; its origin is the upper-left corner of the image, and its x and y axes are parallel to the length and width directions of the image. The camera coordinate system is a three-dimensional coordinate system with the optical center of the camera device as origin; its x and y axes are parallel to the length and width directions of the imaging plane, and its z axis is perpendicular to the imaging plane. The world coordinate system is a three-dimensional coordinate system describing objects in real three-dimensional space; any point in that space may be designated as the origin, and the x, y and z axes are determined based on it. It will be appreciated that the camera coordinate system can be seen as an example of a world coordinate system. In some embodiments of the present description, the world coordinate system may be established based on the storage location; by way of example only, the xoy plane of the world coordinate system may be established on the ground of the storage location, the origin o located at the center of the ground (e.g., if the ground region is rectangular, the origin o may be at the center of the rectangle), and the z axis perpendicular to the ground. In some embodiments of the present description, the terms world coordinate system and three-dimensional spatial coordinate system are used interchangeably.
FIG. 2 is an exemplary flow chart of a cargo quantity determination method according to some embodiments described herein. In some embodiments, the process 200 may be performed by a processing device (e.g., the quantity of goods determination system 500). For example, the process 200 may be embodied in a storage device (e.g., an onboard storage unit of a processing device or an external storage device) in the form of a program or instructions that, when executed, may implement the process 200. As shown in fig. 2, the process 200 may include the following steps.
Step 202: acquire a first perspective image and a second perspective image of a goods pile, wherein the pile comprises at least one goods stack and each stack is formed by stacking at least one single goods item layer upon layer. This step may be performed by the image acquisition module 510.
In some embodiments, the pile may be a heap of multiple stacked goods and may include at least one stack. Each stack may be formed by stacking at least one single goods item layer upon layer. For a detailed description of the stacking of the goods, reference may be made to the relevant parts of fig. 1, which are not repeated here.
In some embodiments, the first perspective image and the second perspective image may be images acquired by camera devices (e.g., camera device 130 and camera device 140) mounted in different orientations of the cargo pile. For example, the first perspective image may be an image captured by a camera device located above the stack, which may correspond to a top view perspective (relative to the stack). The second perspective image may be an image captured by a camera device located to the side of the pile, which may correspond to a side perspective (with respect to the pile). For another example, the first perspective image may be captured by a camera located at the upper left of the pile, and the second perspective image may be captured by a camera located at the upper right of the pile. For another example, the first perspective image may be captured by a camera device located to the left of the pile, and the second perspective image may be captured by a camera device located to the right of the pile.
In some embodiments, the first perspective image and the second perspective image may each contain the top-layer goods of all the stacks composing the pile. That is, the topmost item of each stack can be captured and imaged by the camera devices. In some embodiments, the image acquisition module 510 may acquire the first perspective image and the second perspective image by communicating with the camera devices (e.g., the camera device 130 and the camera device 140).
In some embodiments, the camera devices used to acquire the first perspective image and the second perspective image may undergo parameter calibration, including intrinsic calibration and extrinsic calibration. After parameter calibration, the coordinate conversion (mapping) relationship between each camera device's pixel coordinate system and the world coordinate system can be determined; the corresponding matrices include an intrinsic parameter matrix and an extrinsic parameter matrix.
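By way of illustration only, a calibration of this kind might be scripted as below with OpenCV's standard checkerboard routine; the board geometry, file paths, and variable names are assumptions rather than anything the specification prescribes.

```python
# A minimal intrinsic/extrinsic calibration sketch with OpenCV's checkerboard
# routine. Board size, square size, and image paths are illustrative
# assumptions, not part of the patent disclosure.
import glob

import cv2
import numpy as np

BOARD = (9, 6)    # inner corners per row / column (assumed)
SQUARE = 0.025    # checkerboard square edge in metres (assumed)

# 3D corner template on the board plane (z = 0), scaled to metres.
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points, size = [], [], None
for path in glob.glob("calib/*.jpg"):          # hypothetical calibration shots
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]                    # (width, height)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K is the intrinsic matrix; dist holds the distortion coefficients used for
# undistortion later; rvecs/tvecs are per-view extrinsics (rotation, translation).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, size, None, None)
```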
Step 204: process the first perspective image and the second perspective image using the target detection model, and acquire first position data of the top-layer goods of one or more stacks in the first perspective image and second position data of the top-layer goods of the one or more stacks in the second perspective image. This step may be performed by the location data determination module 520.
In some embodiments, the object detection model may be a machine learning model, e.g., a model with a neural network structure. Exemplary target detection models include R-CNN, Fast R-CNN, Faster R-CNN, YOLO, SSD, CenterNet, and the like; the target detection model may also be any model or algorithm that can perform the target detection function. In some embodiments, the target detection model may be obtained by training on a plurality of labeled sample images. The sample images may include first sample images whose viewing angle matches that of the first perspective image and second sample images whose viewing angle matches that of the second perspective image, each sample image containing at least one sample pile. The sample images may be labeled prior to training; for example, a bounding box may be annotated for the top-layer goods of each stack contained in each sample pile, e.g., around the top surface of the top-layer goods. During training, the sample images are input into the model being trained to obtain the corresponding detection results, i.e., the detected boxes for the top-layer goods of each stack in the sample pile. By measuring the difference between the detected boxes and the pre-labeled bounding boxes, the model parameters can be updated through back-propagation. This process is iterated until a preset condition is met, for example the difference between the detected boxes and the labeled boxes falls below a set threshold, or the number of training rounds reaches a preset count, at which point training stops and the target detection model is obtained.
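The specification does not fix a particular detector, so the following sketch simply assumes a torchvision Faster R-CNN fine-tuned on top-surface bounding boxes; the checkpoint name, the two-class setup (background plus top surface), and the 0.5 score threshold are all assumptions.

```python
# A hedged inference sketch: a torchvision Faster R-CNN assumed to have been
# fine-tuned on top-surface boxes. Checkpoint name, class count, and the
# score threshold are illustrative assumptions.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=2)
model.load_state_dict(torch.load("top_surface_detector.pt"))  # hypothetical file
model.eval()

def detect_top_surfaces(image_tensor, score_thresh=0.5):
    """Return an [N, 4] tensor of (x1, y1, x2, y2) top-surface boxes."""
    with torch.no_grad():
        out = model([image_tensor])[0]   # image_tensor: [3, H, W] in [0, 1]
    keep = out["scores"] > score_thresh
    return out["boxes"][keep]
```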
In some embodiments, the position data determining module 520 may input the first perspective image and the second perspective image into the target detection model, and obtain a detection result corresponding to the top cargo of each cargo stack in the two perspective images respectively. The detection result may include a target detection frame that may be directly displayed over the first perspective image and the second perspective image and frame the top cargo of the corresponding cargo stack. That is, the top level cargo of each cargo stack may be framed by an object detection frame. In some embodiments, the detection result may further include information related to the target detection frame, including location information, size information, and the like. The position information may be used to indicate the position of the target detection frame in the image, for example, using a coordinate representation in a pixel coordinate system. The size information may be used to indicate the size of the target detection frame in the image, for example, expressed using an area. The position data determination module 520 may specify the position information as position data of the top cargo of the corresponding stack of cargo. The position data may include first position data of a top item of the stack of items in the first perspective image and second position data in the second perspective image. As an example, for a stack of goods, the position information of the first target detection frame of the top goods of the stack of goods, which is obtained by processing the first perspective image based on the target detection model, may be used as the first position data. Similarly, the position information of the second target detection frame of the top cargo of the cargo stack, which is obtained by processing the second perspective image based on the target detection model, may be used as the second position data.
In some embodiments, the target detection frame may frame the top surface of the top-layer goods of a stack. Referring to fig. 3, fig. 3 is a schematic diagram of exemplary first and second perspective images according to some embodiments herein. As shown in fig. 3, after processing by the target detection model, target detection frames may be overlaid on the first perspective image and the second perspective image. For the first perspective image, such as the left image in fig. 3, the three target detection frames (which may be referred to as first target detection frames in this specification) may coincide in size with the top surfaces of the top-layer goods of the three stacks; for example, the three first target detection frames may coincide with the top surfaces 310-1, 320-1 and 330-1, respectively, of the top-layer goods of the three stacks. Alternatively, a first target detection frame may contain the top surface of the top-layer goods of a stack, with that top surface located at the center of the frame. For the second perspective image, such as the right image in fig. 3, the three target detection frames (which may be referred to as second target detection frames in this specification) may contain the top surfaces 310-2, 320-2 and 330-2, respectively, of the top-layer goods of the three stacks. Based on the position information carried by the target detection frames, the position data determination module 520 may determine the coordinates of the center point of each target detection frame (both first and second target detection frames). For example, the first center point coordinate of a first target detection frame may be calculated from the coordinates, in the first pixel coordinate system corresponding to the first perspective image, of the pixels constituting that frame; the second center point coordinate of a second target detection frame may likewise be calculated from the coordinates of its constituent pixels in the second pixel coordinate system corresponding to the second perspective image. The location data determination module 520 may designate the first center point coordinate as the first position data and the second center point coordinate as the second position data. As shown in fig. 3, the black triangles in the figure represent the center points; each center point is located at the center of the detection frame on the top surface of the top-layer goods.
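A minimal sketch of the center-point computation described above, assuming the detector returns axis-aligned boxes in (x1, y1, x2, y2) pixel coordinates:

```python
# Center of a detection box in pixel coordinates; the (x1, y1, x2, y2) box
# layout is an assumption about the detector's output format.
def box_center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
```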
In some embodiments, the target detection model may comprise a first detection model and a second detection model. The first detection model may be used solely for processing the first perspective image, and the second detection model solely for processing the second perspective image. For example, the first and second detection models may be trained on different training samples: the first detection model may be trained using training samples captured from the same viewing angle as the first perspective image, and the second detection model using training samples captured from the same viewing angle as the second perspective image.
In some embodiments, a processing device (e.g., a pre-processing module of the cargo quantity determination system 500, not shown) may pre-process the first and second perspective images before they are processed by the detection model. The pre-processing may include distortion correction and/or affine transformation. It can be understood that distortion correction is required when the camera device acquiring a perspective image is a fisheye camera; for example, the pre-processing module may correct distortion in the first and second perspective images using the distortion parameters determined during calibration of the camera devices. The pre-processing module may also apply affine transformations to the first and second perspective images, such as Translation, Scale, Flip, Rotation and Shear, so that the pre-processed first and/or second perspective image better meets the requirements of target detection. For example, when detecting the top surface of the top-layer goods of a stack, if a perspective image (e.g., the first or second perspective image) was captured by a side-view camera located to the side of the pile, the top surface of the top-layer goods will not face the camera squarely; the pre-processing module can then warp the perspective image so that the top surfaces of the goods face front-on. It can be understood that, when a perspective image has been pre-processed, the position information of the target detection frame (or of its center point) output by the target detection model must be mapped through the inverse of the pre-processing (distortion correction and/or affine transformation) to obtain coordinate data in the original first or second perspective image, and these coordinates are then used as the first position data of the top-layer goods of one or more stacks in the first perspective image and the second position data in the second perspective image.
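The round trip described in this paragraph, correcting distortion, warping for detection, then mapping detected centers back, might look like the following OpenCV sketch. The affine matrix M and the calibration outputs K and dist are assumed inputs; note the sketch inverts only the affine part, since undistortion is not affine.

```python
# Sketch of the pre-process / inverse-map round trip described above. K and
# dist come from calibration; the 2x3 affine matrix M (crop/rotation for
# detection) is an assumed input. Only the affine step is inverted here, so
# the mapped point lands in the undistorted frame, not the raw fisheye frame.
import cv2

def undistort_and_warp(img, K, dist, M, out_size):
    """Correct lens distortion, then apply the affine transform M."""
    undistorted = cv2.undistort(img, K, dist)
    return cv2.warpAffine(undistorted, M, out_size)

def center_to_undistorted(pt, M):
    """Map a center detected in the warped image back through M's inverse."""
    M_inv = cv2.invertAffineTransform(M)
    x, y = pt
    return (M_inv[0, 0] * x + M_inv[0, 1] * y + M_inv[0, 2],
            M_inv[1, 0] * x + M_inv[1, 1] * y + M_inv[1, 2])
```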
Step 206: determine the position coordinates of the top-layer goods of the one or more stacks in the three-dimensional spatial coordinate system based on the first position data and the second position data of the top-layer goods of the one or more stacks. This step may be performed by the location coordinate determination module 530.
In some embodiments, the three-dimensional spatial coordinate system may refer to a world coordinate system. The position coordinates may be coordinates of the top level cargo (e.g., the top surface portion) of the stack of cargo in a world coordinate system. As an example, the location coordinate determination module 530 may first register the first location data and the second location data. For example, the position coordinate determination module 530 may perform constraint matching to determine the center point coordinates corresponding to the top center point of the topmost good of each stack of goods in the two perspective images (the first perspective image and the second perspective image). Subsequently, the position coordinate determining module 530 may determine the position coordinates of the top center point of the topmost goods of the goods stack in the three-dimensional space coordinate system from the two center point coordinates (the first center point coordinate and the second center point coordinate) of the top center point of the topmost goods of the goods stack based on the coordinate transformation method from the image plane coordinate system to the world coordinate system. Further description of the position coordinates of the top goods of one or more stacks of goods in the three-dimensional space coordinate system may refer to the description in connection with fig. 4 of the present description.
Step 208: determine the height of each stack based at least on the position coordinates of the top-layer goods of the one or more stacks. This step may be performed by the height determination module 540.
In some embodiments, the height determining module 540 may determine the height of each stack using the position information of a reference object together with the position coordinates of the top-layer goods of the stack. In some embodiments, the reference object may be a reference plane, for example, the plane carrying the one or more stacks; assuming the stacks are stored in a warehouse, the reference plane may be the floor of the warehouse. The height determination module 540 may obtain a mathematical representation of the reference plane in the three-dimensional spatial coordinate system, for example the plane equation $Ax + By + Cz + D = 0$. If the position coordinate of the top-layer goods of a stack is $(x_p, y_p, z_p)$, the height determination module 540 may determine the height of the stack based on a point-to-plane distance computation, e.g. the height of the stack

$$ h = \frac{\left| A x_p + B y_p + C z_p + D \right|}{\sqrt{A^2 + B^2 + C^2}}. $$

The mathematical representation of the reference plane in the three-dimensional spatial coordinate system may be predetermined and stored in a storage device (e.g., an onboard memory unit or an external storage device of the cargo quantity determination system 500), and the height determination module 540 may communicate with the storage device to obtain it. In some embodiments, when the xoy plane of the world coordinate system is taken to be the floor of the storage location, the height of a stack can be determined directly from the z-coordinate of the position coordinates of its top-layer goods.
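A one-function sketch of the point-to-plane height computation reconstructed above (variable names are assumptions):

```python
# Point-to-plane height from the formula above; plane = (A, B, C, D) for
# Ax + By + Cz + D = 0, top_xyz = (x_p, y_p, z_p).
import math

def stack_height(top_xyz, plane):
    A, B, C, D = plane
    x, y, z = top_xyz
    return abs(A * x + B * y + C * z + D) / math.sqrt(A * A + B * B + C * C)
```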
Step 210: determine, based on the height of each stack and the height of a single goods item, the quantity of goods in each stack so as to obtain the quantity of goods in the pile. This step may be performed by the cargo quantity determination module 550.
In some embodiments, the height of the piece of cargo may be pre-acquired. For example, when goods are first stored, the height of individual goods may be manually measured and stored in a database. As another example, the height of the good may be provided by an inventory party and stored in a database. The cargo quantity determination module 550 may communicate with the database to obtain the height of the cargo.
After the height of a stack and the height of a single goods item are obtained, the quantity determination module 550 may obtain the number of goods forming the stack by dividing the stack height by the height of a single item. After obtaining the quantity of goods in each stack, the cargo quantity determination module 550 may determine the total quantity of goods in the pile from the quantities of its stacks, for example by accumulating the number of goods contained in each stack. It will be appreciated that the goods stored in the same storage location are generally of the same type, i.e., the sizes of the goods making up a stack or pile are all the same, so counting the goods in a stack from the stack height and the single-item height is accurate.
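A minimal counting sketch following this paragraph; rounding to the nearest integer layer is an assumed policy for absorbing small height-estimation noise, not something the specification prescribes:

```python
# Layers per stack, rounded to the nearest integer to absorb small errors in
# the estimated stack heights (the rounding policy is an assumption).
def pile_count(stack_heights, unit_height):
    return sum(round(h / unit_height) for h in stack_heights)
```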
It should be noted that the above description of the various steps in fig. 2 is for illustration and description only and does not limit the scope of applicability of the present description. Various modifications and changes to the various steps in fig. 2 will be apparent to those skilled in the art in light of this description. However, such modifications and variations are intended to be within the scope of the present description.
Fig. 4 is an exemplary flow chart of a method for determining position coordinates of a top level cargo of a stack of cargo in a three-dimensional spatial coordinate system according to some embodiments of the present description. In some embodiments, the process 400 may be performed by a processing device (e.g., the location coordinate determination module 530 of the cargo quantity determination system 500). For example, the process 400 may be embodied in a storage device (e.g., an onboard storage unit of a processing device or an external storage device) in the form of a program or instructions that, when executed, may implement the process 400. As shown in fig. 4, the process 400 may include the following steps.
Step 402: perform a constraint matching operation based on the first position data and the second position data of the top-layer goods of the one or more stacks, and determine the first position data and second position data of the top-layer goods corresponding to the same stack.
It will be appreciated that the position of the top item of the same stack of items in the two perspective images (the first perspective image and the second perspective image) is not the same. In particular, due to the viewing angle, the top cargo of a stack of cargo that appears in the middle of one perspective image may be in an upper position in another perspective image. Therefore, it is necessary to align or register the position data of the top cargo corresponding to the same cargo stack in the images from different perspectives. That is, assuming that there are three stacks of goods, three pairs of position data are obtained by matching three first position data detected from the first perspective image with three second position data detected from the second perspective image. Each position data pair comprises a first position data and a second position data and corresponds to the top goods of the same goods stack.
In some embodiments, the position coordinate determination module 530 may perform a constraint matching operation to determine the position data pair of the top-layer goods corresponding to the same stack, i.e., a first position datum and a second position datum. The constraint matching operation may pair the position data using matching rules generated from constraints and/or dependencies that exist between the position data. In some embodiments, the position coordinate determination module 530 may process the first and second position data of the top-layer goods of one or more stacks with a constrained matching algorithm under constraint conditions to determine the first and second position data corresponding to the same stack. The constrained matching algorithm may comprise the Hungarian matching algorithm or a similar matching algorithm. The constraints may comprise epipolar constraints and/or geometric constraints. The epipolar constraint is a point-to-line constraint: two camera devices image a point p in physical space from different angles, giving one imaging point in each of the two images (say p1 in image 1 and p2 in image 2). If the position of p on one image is known (say the position of p1 is known), the imaging point in the other image must lie on the epipolar line of p1: the plane O1O2p formed by the optical centers O1 and O2 of the two camera devices together with the point p intersects the plane of image 2 in a line, which may be called the epipolar line of p1. The epipolar constraint thus narrows the search for a match from point-to-plane to point-to-line, and it can be obtained when the camera devices acquiring the first and second perspective images are calibrated and their relative positions confirmed. The geometric constraint is a constraint on the spatial ordering of the stacks. Referring to fig. 3, the pile is formed of three stacks: the stack formed of one item sits on one side of the stack formed of three items, and the stack formed of two items sits on the other side. Hence, in the acquired perspective images (both the first and second perspective images), the position data of the top-layer goods of the three stacks (including the first and second position data) follow the spatial arrangement in which the stacks are placed. Once the top-layer goods of the three stacks have been detected in one perspective image, the spatial order of the stacks is determined, and it can in turn be used as a constraint when searching for the corresponding imaging points in the other perspective image.
The constraint matching operation proceeds briefly as follows. Suppose there are three first position data and three second position data for the top-layer goods of the one or more stacks, namely x1, x2, x3 and y1, y2, y3. First, find for x1 a corresponding position datum satisfying the constraints (e.g., the epipolar constraint, the geometric constraint, or both), say y1. Then find a match for x2 satisfying the constraints, say y2. Then find a match for x3; suppose it is again y1, so the matches conflict. A new match satisfying the constraints is then sought for x1; if it is y2, it conflicts with the match of x2. A new match is then sought for x2; if it matches y3 successfully, a maximum matching is reached: every first position datum has a corresponding second position datum satisfying the constraints, and the constraint matching operation is complete.
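A hedged sketch of this matching step: scipy's linear_sum_assignment implements the Hungarian algorithm over a cost matrix that here encodes only the epipolar constraint (distance of a second-view center to the epipolar line of a first-view center). The fundamental matrix F is assumed known from calibration, and the geometric-ordering constraint is omitted for brevity.

```python
# Hungarian matching over an epipolar cost. F is the fundamental matrix
# relating the two calibrated views (assumed available).
import numpy as np
from scipy.optimize import linear_sum_assignment

def epipolar_distance(pt1, pt2, F):
    """Distance of pt2 (view 2) to the epipolar line of pt1 (view 1)."""
    p1 = np.array([pt1[0], pt1[1], 1.0])
    p2 = np.array([pt2[0], pt2[1], 1.0])
    line = F @ p1                                  # (a, b, c): ax + by + c = 0
    return abs(p2 @ line) / np.hypot(line[0], line[1])

def match_centers(pts1, pts2, F):
    """Pair first-view and second-view center points stack by stack."""
    cost = np.array([[epipolar_distance(p, q, F) for q in pts2] for p in pts1])
    rows, cols = linear_sum_assignment(cost)       # minimum-cost assignment
    return list(zip(rows.tolist(), cols.tolist()))
```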
Step 404: determine the position coordinates of the top-layer goods of a stack in three-dimensional space based on the first position data and second position data of the top-layer goods corresponding to the same stack.
In some embodiments, the position coordinate determination module 530 may determine the position coordinates of the top-layer goods of a stack in three-dimensional space (e.g., real space, as three-dimensional coordinates in the world coordinate system) from the first position data (e.g., the first center point coordinate) and the second position data (e.g., the second center point coordinate) of the top-layer goods corresponding to the same stack, using the conversion equation from image plane coordinates to world coordinates. Denote the first center point coordinate of the top-layer goods point P (such as the center point of the target detection frame) of a stack as $(u_1, v_1)$, the second center point coordinate as $(u_2, v_2)$, and the position coordinates in three-dimensional space as $(X, Y, Z)$. Taking the first camera device, which acquires the first perspective image, as an example, the conversion process is briefly described as follows. From the camera extrinsic parameters and camera intrinsic parameters of the first camera device, the conversion relationship between $(X, Y, Z)$ and $(u_1, v_1)$ can be obtained:

$$ z_{c1} \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ \mathbf{0}^{\top} & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} $$

where $z_{c1}$ is the z-coordinate value of point P in the first camera coordinate system, which may be unknown. The first matrix on the right-hand side of the equation is the camera intrinsic matrix: $f_x = f k_x$ and $f_y = f k_y$, where $f$ represents the focal length of the lens, $k_x$ and $k_y$ respectively represent the scale factors of the camera along the horizontal and vertical axes of the image plane coordinate system, and $(u_0, v_0)$ represents the coordinates of the camera optical axis in the pixel coordinate system. The second matrix on the right-hand side is the camera extrinsic matrix, where $R$ represents the rotation matrix, $t$ represents the translation vector, and $\mathbf{0}^{\top}$ represents a vector consisting of 0 elements. The equation above can be written as 3 scalar equations containing the 4 unknowns $(X, Y, Z)$ and $z_{c1}$. Based on the pixel-to-world conversion relationship of the second camera device applied to $(u_2, v_2)$, 3 further equations containing the unknowns $(X, Y, Z)$ and $z_{c2}$ can be obtained. In total there are 6 equations containing 5 unknowns; solving them simultaneously yields the position coordinates $(X, Y, Z)$ of point P in three-dimensional space.

Based on the process described above, the position coordinates in three-dimensional space of the center point of the top surface of the top-layer goods of each stack can be determined.
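In practice the joint solve above is a standard two-view triangulation; a sketch using OpenCV's DLT triangulation is shown below, with the 3x4 projection matrices P1 = K1[R1|t1] and P2 = K2[R2|t2] assumed available from the calibration step.

```python
# Two-view triangulation of one matched pair of center points.
# cv2.triangulatePoints solves the joint 6-equation system by DLT.
import cv2
import numpy as np

def top_point_3d(center1, center2, P1, P2):
    pts1 = np.asarray(center1, dtype=np.float64).reshape(2, 1)
    pts2 = np.asarray(center2, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous 4x1
    return (X_h[:3] / X_h[3]).ravel()                # (X, Y, Z) world coords
```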
It should be noted that the above description regarding the steps in fig. 4 is for illustration and explanation only, and does not limit the applicable scope of the present specification. Various modifications and changes to the various steps in fig. 4 will be apparent to those skilled in the art in light of this description. However, such modifications and variations are intended to be within the scope of the present description.
FIG. 5 is a block diagram of a cargo quantity determination system according to some embodiments of the present disclosure. As shown in fig. 5, the cargo quantity determination system 500 may include an image acquisition module 510, a position data determination module 520, a position coordinate determination module 530, a height determination module 540, and a quantity determination module 550.
The image acquisition module 510 may acquire a first perspective image and a second perspective image of a stack of goods. The stack may be a stack of a plurality of stacked goods, and may include at least one stack of goods. Each goods stack can be formed by overlapping and stacking at least one goods single piece layer by layer. The first perspective image and the second perspective image may be images acquired by image pickup apparatuses (e.g., the image pickup apparatus 130 and the image pickup apparatus 140) installed in different orientations of the cargo pile. The image acquisition module 510 may acquire the first perspective image and the second perspective image by communicating with image capturing apparatuses (e.g., the image capturing apparatus 130 and the image capturing apparatus 140).
The position data determining module 520 may process the first perspective image and the second perspective image with the target detection model and obtain first position data of the top-layer goods of the one or more stacks in the first perspective image and second position data in the second perspective image. The target detection model may be a machine learning model, e.g., a model with a neural network structure; exemplary target detection models include R-CNN, Fast R-CNN, Faster R-CNN, YOLO, SSD, CenterNet, and the like, and the model may also be any model or algorithm that can perform the target detection function. In some embodiments, the position data determining module 520 may input the first and second perspective images into the target detection model and obtain, for each stack, the detection result corresponding to its top-layer goods in each of the two perspective images. The detection result may include a target detection frame that can be displayed directly over the first and second perspective images, framing the top-layer goods of the corresponding stack. In some embodiments, the position data determining module 520 may process the two images with the model, determine first center point coordinates of the one or more first target detection frames corresponding to the top surfaces of the top-layer goods of the one or more stacks in the first perspective image and second center point coordinates of the one or more second target detection frames corresponding to those top surfaces in the second perspective image, and designate the first center point coordinates as the first position data and the second center point coordinates as the second position data. In some embodiments, the target detection model may comprise a first detection model used solely for processing the first perspective image and a second detection model used solely for processing the second perspective image; for example, the two detection models may be trained on different training samples, the first detection model on samples from the same viewing angle as the first perspective image and the second detection model on samples from the same viewing angle as the second perspective image.
The position coordinate determination module 530 may determine position coordinates of the top cargo of the one or more stacks of cargo in a three-dimensional space coordinate system based on the first position data and the second position data of the top cargo of the one or more stacks of cargo. The three-dimensional spatial coordinate system may be referred to as a world coordinate system. The position coordinates may be coordinates of the top level cargo (e.g., the top surface portion) of the stack of cargo in a world coordinate system. As an example, the position coordinate determination module 530 may first register the first position data and the second position data, and then determine the position coordinates of the top center point of the top cargo stack in the three-dimensional space coordinate system from two center point coordinates (the first center point coordinate and the second center point coordinate) of the top center point of the top cargo stack according to a coordinate transformation method that may be based on an image plane coordinate system to a world coordinate system. In some embodiments, the position coordinate determination module 530 may perform a constraint matching operation based on the first position data and the second position data of the top cargo of one or more stacks of cargo, determining the first position data and the second position data of the top cargo corresponding to the same stack of cargo. In some embodiments, the position coordinate determination module 530 may utilize a constraint matching algorithm to process the first position data and the second position data of the top cargo of one or more cargo stacks under constraint conditions to determine the first position data and the second position data of the top cargo corresponding to the same cargo stack. The constrained matching algorithm may comprise a hungarian matching algorithm or a similar matching algorithm. The constraints may comprise epipolar constraints and/or geometrical constraints. In some embodiments, the position coordinate determination module 530 may determine the position coordinates of the top item of the stack of items in three-dimensional space based on the first position data and the second position data corresponding to the top item of the same stack of items. The position coordinate determination module 530 may determine the position coordinates (e.g., three-dimensional coordinates in a world coordinate system) of the top cargo of the cargo stack in a three-dimensional space (e.g., a real space) based on the first position data (e.g., a first center point coordinate) and the second position data (e.g., a second center point coordinate) of the top cargo corresponding to the same cargo stack using the image plane coordinate to world coordinate conversion equation.
The height determination module 540 may determine the height of each stack of items based at least on the position coordinates of the top level item of one or more stacks of items. The height determining module 540 may determine the height of each stack of goods using the position information of the reference object and the position coordinates of the top level goods of the stack of goods. In some embodiments, the reference object may be a reference plane, for example, a plane carrying the one or more stacks of goods. The height determining module 540 may obtain a mathematical representation of the reference plane in a three-dimensional space coordinate system, for example, a planar representation, and determine the height of the stack of goods according to a point-to-plane distance algorithm.
The quantity determination module 550 may determine the quantity of the goods per stack of goods based on the height of each stack of goods and the height of the individual goods to obtain the quantity of the goods of the stack. In some embodiments, the height of the piece of cargo may be pre-acquired. For example, when goods are first stored, the height of individual goods may be manually measured and stored in a database. As another example, the height of the good may be provided by an inventory party and stored in a database. The cargo quantity determination module 550 may communicate with the database to obtain the height of the cargo. After the height of the stack of goods and the height of a single good are obtained, the number of goods forming the stack of goods may be obtained by dividing the height of the stack of goods by the height of a single good by the number of goods determining module 550. After obtaining the quantity of the cargo for each cargo stack, the cargo quantity determination module 550 may determine the total quantity of the cargo in the stack according to the quantity of the cargo stack in the stack. For example, the number of items contained in each stack of items is accumulated.
In some embodiments, the cargo quantity determination system 500 may further include a preprocessing module. The preprocessing module may preprocess the first perspective image and the second perspective image before they are fed to the target detection model. The preprocessing may include distortion correction and/or an affine transformation.
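A hedged OpenCV sketch of such preprocessing is shown below; the camera matrix K, distortion coefficients dist, and the optional 2x3 affine matrix M are placeholders that would come from camera calibration, not values given in this specification.

```python
import cv2

def preprocess(image, K, dist, M=None):
    """Undistort an input view and optionally apply an affine transform."""
    # Distortion correction using the camera's intrinsic calibration.
    undistorted = cv2.undistort(image, K, dist)
    if M is None:
        return undistorted
    # Optional affine transform, e.g. to normalize the framing of a view.
    h, w = undistorted.shape[:2]
    return cv2.warpAffine(undistorted, M, (w, h))
```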
For further description of the modules, refer to the flowcharts in this specification, e.g., FIG. 2 and/or FIG. 4.
It should be understood that the system and its modules shown in FIG. 5 may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of the two. The hardware portion may be implemented using dedicated logic; the software portion may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer-executable instructions and/or processor control code, provided, for example, on a carrier medium such as a disk, CD- or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system and its modules in this specification may be implemented not only by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, but also by software executed by various types of processors, or by a combination of such hardware circuits and software (e.g., firmware).
It should be noted that the above description of the cargo quantity determination system and its modules is provided only for convenience of description and does not limit this specification to the scope of the illustrated embodiments. It will be appreciated by those skilled in the art that, given the principles of the system, modules may be combined arbitrarily or connected to other modules as subsystems without departing from those principles. For example, the modules disclosed in FIG. 5 may be different modules in one system, or a single module may implement the functions of two or more of the modules described above. As another example, the position data determination module 520, the position coordinate determination module 530, the height determination module 540, and the quantity determination module 550 may be four separate modules, or one module that implements all of these determination functions. As yet another example, the modules may share one storage module, or each module may have its own storage module. All such variations are within the scope of the present disclosure.
The beneficial effects that may be brought by the embodiments of this specification include, but are not limited to: (1) a vision algorithm based on binocular (e.g., top-view/side-view) camera equipment combines top-surface target detection of the goods with three-dimensional height estimation, so that goods stacked in a warehouse can be counted quickly and accurately; (2) checking warehouse goods with a vision-sensor-based counting scheme requires no customized modification of the warehouse yet still counts the goods accurately; the corresponding project duration and cost are both reduced, the overall workflow is shortened, and the accuracy and efficiency of quantity checking are improved. It should be noted that different embodiments may produce different advantages; in different embodiments, any one or a combination of the above advantages, or any other advantage, may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present specification and thus fall within the spirit and scope of the exemplary embodiments of the present specification.
Also, this specification uses specific words to describe its embodiments. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of this specification. Therefore, it should be emphasized and appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of this specification may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of this specification may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereof. Accordingly, aspects of this specification may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of this specification may be represented as a computer product, including computer readable program code, embodied in one or more computer readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of this specification may be written in any one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python, and the like, a conventional procedural programming language such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, or ABAP, a dynamic programming language such as Python, Ruby, or Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service such as software as a service (SaaS).
Additionally, the order in which the elements and sequences of the process are recited in the specification, the use of alphanumeric characters, or other designations, is not intended to limit the order in which the processes and methods of the specification occur, unless otherwise specified in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of this specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are expressly recited in each claim. Rather, claimed subject matter may lie in less than all features of a single foregoing disclosed embodiment.
Some embodiments use numerals to describe quantities of components and attributes. It should be understood that such numerals used in the description of embodiments are qualified in some instances by the terms "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the stated number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending on the desired properties of individual embodiments. In some embodiments, numerical parameters should take into account the specified significant digits and employ a general digit-preserving method. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments are approximations, in specific examples such numerical values are set forth as precisely as practicable.
For each patent, patent application, patent application publication, and other material cited in this specification, such as articles, books, specifications, publications, and documents, the entire contents are hereby incorporated by reference into this specification. Application history documents that are inconsistent with or conflict with the contents of this specification are excluded, as are documents (currently or later appended to this specification) that would limit the broadest scope of the claims of this specification. It should be noted that if there is any inconsistency or conflict between the descriptions, definitions, and/or use of terms in the materials accompanying this specification and the content of this specification, the descriptions, definitions, and/or use of terms in this specification shall prevail.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.

Claims (15)

1. A cargo quantity determination method, wherein the method comprises:
acquiring a first perspective image and a second perspective image of a cargo pile, wherein the cargo pile comprises at least one cargo stack, and each cargo stack is formed of a single item of goods, or of two or more items of goods stacked one on top of another, layer by layer, in height;
processing the first perspective image and the second perspective image by using a target detection model, and acquiring first position data of the top goods of one or more goods stacks in the first perspective image and second position data of the top goods in the second perspective image;
determining position coordinates of the top goods of the one or more goods stacks in a three-dimensional space coordinate system based on the first position data and the second position data of the top goods of the one or more goods stacks;
determining a height of each stack of goods based at least on the position coordinates of top-level goods of one or more stacks of goods;
determining the quantity of goods in each goods stack based on the height of each goods stack and the height of a single item of goods, so as to obtain the quantity of goods in the cargo pile.
2. The method of claim 1, wherein before processing the first perspective image and the second perspective image using the object detection model, respectively, further comprising preprocessing the first perspective image and the second perspective image, the preprocessing comprising:
distortion correction and/or affine transformation.
3. The method of claim 1, wherein the processing the first perspective image and the second perspective image using the object detection model to obtain first position data of the top cargo of the one or more cargo stacks in the first perspective image and second position data in the second perspective image comprises:
processing the first perspective image and the second perspective image by using a target detection model, and determining first central point coordinates of one or more first target detection frames respectively corresponding to the top surfaces of the top goods of one or more goods stacks in the first perspective image, and second central point coordinates of one or more second target detection frames respectively corresponding to the top surfaces of the top goods in the second perspective image; the target detection model comprises a machine learning model;
and designating the first central point coordinate as first position data and the second central point coordinate as second position data.
4. The method of claim 3, wherein the object detection model comprises a first detection model for processing a first perspective image and a second detection model for processing a second perspective image.
5. The method of claim 1, wherein determining position coordinates of the top cargo of the one or more cargo stacks in a three-dimensional space coordinate system based on the first and second position data of the top cargo of the one or more cargo stacks comprises:
performing a constraint matching operation based on the first position data and the second position data of the top cargo of one or more cargo stacks, and determining the first position data and the second position data of the top cargo corresponding to the same cargo stack;
the position coordinates of the top cargo of the stack of cargo in three-dimensional space are determined based on the first position data and the second position data of the top cargo corresponding to the same stack of cargo.
6. The method of claim 5, wherein the constraint matching operation comprises:
processing the first position data and the second position data of the top goods of one or more goods stacks under constraint conditions by using a constraint matching algorithm, and determining the first position data and the second position data of the top goods corresponding to the same goods stack;
the constraint matching algorithm comprises a Hungarian matching algorithm, and the constraint condition comprises an epipolar constraint and/or a geometric constraint.
7. The method of claim 1, wherein said determining a height for each stack of goods based at least on the position coordinates of the top level goods of one or more stacks of goods comprises, for any stack of goods:
acquiring a mathematical expression of a reference plane in the three-dimensional space coordinate system;
and determining the height of the goods stack by using a point-to-plane distance determination algorithm, based on the mathematical expression and the position coordinates of the top-layer goods of the goods stack.
8. A cargo quantity determination system, wherein the system comprises:
an image acquisition module, used for acquiring a first perspective image and a second perspective image of a cargo pile, wherein the cargo pile comprises at least one cargo stack, and each cargo stack is formed of a single item of goods, or of two or more items of goods stacked one on top of another, layer by layer, in height;
the position data determining module is used for processing the first visual angle image and the second visual angle image by using the target detection model, and acquiring first position data of the top goods of one or more goods stacks in the first visual angle image and second position data of the top goods in the second visual angle image;
the position coordinate determination module is used for determining the position coordinates of the top goods of the one or more goods stacks in a three-dimensional space coordinate system based on the first position data and the second position data of the top goods of the one or more goods stacks;
a height determination module for determining a height of each stack of goods based at least on the position coordinates of top-level goods of one or more stacks of goods;
the quantity determining module is used for determining the quantity of goods in each goods stack based on the height of each goods stack and the height of a single item of goods, so as to obtain the quantity of goods in the cargo pile.
9. The system of claim 8, wherein the system further comprises:
the preprocessing module is used for preprocessing the first perspective image and the second perspective image before the first perspective image and the second perspective image are respectively processed by using the target detection model, and the preprocessing comprises the following steps:
distortion correction and/or affine transformation.
10. The system of claim 8, wherein, to process the first perspective image and the second perspective image using the target detection model and obtain first position data of the top goods of the one or more goods stacks in the first perspective image and second position data in the second perspective image, the position data determination module is configured to:
process the first perspective image and the second perspective image by using a target detection model, and determine first central point coordinates of one or more first target detection frames respectively corresponding to the top surfaces of the top goods of one or more goods stacks in the first perspective image, and second central point coordinates of one or more second target detection frames respectively corresponding to the top surfaces of the top goods in the second perspective image; the target detection model comprises a machine learning model;
and designating the first central point coordinate as first position data and the second central point coordinate as second position data.
11. The system of claim 10, wherein the object detection model comprises a first detection model for processing a first perspective image and a second detection model for processing a second perspective image.
12. The system of claim 8, wherein, to determine the position coordinates of the top goods of the one or more goods stacks in the three-dimensional space coordinate system based on the first position data and the second position data of the top goods of the one or more goods stacks, the position coordinate determination module is configured to:
performing a constraint matching operation based on the first position data and the second position data of the top cargo of one or more cargo stacks, and determining the first position data and the second position data of the top cargo corresponding to the same cargo stack;
the position coordinates of the top cargo of the stack of cargo in three-dimensional space are determined based on the first position data and the second position data of the top cargo corresponding to the same stack of cargo.
13. The system of claim 12, wherein the location coordinate determination module is further to:
processing the first position data and the second position data of the top goods of one or more goods stacks under constraint conditions by using a constraint matching algorithm, and determining the first position data and the second position data of the top goods corresponding to the same goods stack;
the constraint matching algorithm comprises a Hungarian matching algorithm, and the constraint condition comprises an epipolar constraint and/or a geometric constraint.
14. The system of claim 8, wherein to determine the height of each stack of goods based at least on the position coordinates of the top level goods of one or more stacks of goods, the height determination module is to:
acquiring a mathematical expression of a reference plane in the three-dimensional space coordinate system;
and determining the height of the goods stack by using a point-to-plane distance determination algorithm, based on the mathematical expression and the position coordinates of the top-layer goods of the goods stack.
15. A cargo quantity determination apparatus, wherein the apparatus comprises a processor for performing the cargo quantity determination method according to any one of claims 1 to 7.
CN202011347297.0A 2020-11-26 2020-11-26 Method, system and device for determining quantity of goods Active CN112132523B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011347297.0A CN112132523B (en) 2020-11-26 2020-11-26 Method, system and device for determining quantity of goods


Publications (2)

Publication Number Publication Date
CN112132523A (en) 2020-12-25
CN112132523B (en) 2021-07-13

Family

ID=73852296

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011347297.0A Active CN112132523B (en) 2020-11-26 2020-11-26 Method, system and device for determining quantity of goods

Country Status (1)

Country Link
CN (1) CN112132523B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4202858A1 (en) * 2021-12-21 2023-06-28 VisionNav Robotics (Shenzhen) Co., Ltd. Method and device for cargo counting, computer equipment, and storage medium

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113160414A (en) * 2021-01-25 2021-07-23 北京豆牛网络科技有限公司 Automatic identification method and device for remaining amount of goods, electronic equipment and computer readable medium
CN113610465B (en) * 2021-08-03 2023-12-19 宁波极望信息科技有限公司 Production manufacturing operation management system based on internet of things technology
CN113748427A (en) * 2021-09-13 2021-12-03 商汤国际私人有限公司 Data processing method, device and system, medium and computer equipment
CN115439065A (en) * 2022-05-26 2022-12-06 未来机器人(深圳)有限公司 Goods inventory processing method and device, computer equipment and storage medium
CN115511875A (en) * 2022-10-28 2022-12-23 上海东普信息科技有限公司 Cargo accumulation detection method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107578052A (en) * 2017-09-15 2018-01-12 北京京东尚科信息技术有限公司 Kinds of goods processing method and system
CN110634145A (en) * 2018-06-22 2019-12-31 青岛日日顺物流有限公司 Warehouse checking method based on image processing
CN111311630A (en) * 2020-01-19 2020-06-19 上海智勘科技有限公司 Method and system for intelligently counting quantity of goods through videos in warehousing management
CN111626983A (en) * 2020-04-13 2020-09-04 中国外运股份有限公司 Method and device for identifying quantity of goods to be detected

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110120010B (en) * 2019-04-12 2023-02-07 嘉兴恒创电力集团有限公司博创物资分公司 Camera image stitching-based visual checking method and system for three-dimensional goods shelf
CN111553914B (en) * 2020-05-08 2021-11-12 深圳前海微众银行股份有限公司 Vision-based goods detection method and device, terminal and readable storage medium


Also Published As

Publication number Publication date
CN112132523A (en) 2020-12-25

Similar Documents

Publication Publication Date Title
CN112132523B (en) Method, system and device for determining quantity of goods
US11276194B2 (en) Learning dataset creation method and device
US10872439B2 (en) Method and device for verification
EP2843590B1 (en) System and method for package dimensioning
US9529945B2 (en) Robot simulation system which simulates takeout process of workpieces
US7574045B2 (en) Model-based recognition of objects using a calibrated image system
CN111127422A (en) Image annotation method, device, system and host
CN112378333B (en) Method and device for measuring warehoused goods
CN113191174B (en) Article positioning method and device, robot and computer readable storage medium
CN112254633B (en) Object size measuring method, device and equipment
CN111626665A (en) Intelligent logistics system and method based on binocular vision
CN112348890B (en) Space positioning method, device and computer readable storage medium
CN110807431A (en) Object positioning method and device, electronic equipment and storage medium
CN114972421A (en) Workshop material identification tracking and positioning method and system
CN112802114A (en) Multi-vision sensor fusion device and method and electronic equipment
WO2019098901A1 (en) Method and image processing system for facilitating estimation of volumes of load of a truck
CN116309882A (en) Tray detection and positioning method and system for unmanned forklift application
CN113345023B (en) Box positioning method and device, medium and electronic equipment
US20220128347A1 (en) System and method to measure object dimension using stereo vision
CN111860035A (en) Book cover detection method and device, storage medium and electronic equipment
CN115100271A (en) Method and device for detecting goods taking height, computer equipment and storage medium
CN113450335B (en) Road edge detection method, road edge detection device and road surface construction vehicle
JP2023092446A (en) Cargo counting method and apparatus, computer apparatus, and storage medium
WO2021114775A1 (en) Object detection method, object detection device, terminal device, and medium
Pohudina et al. Possibilities of position determination

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant