CN113379826A - Method and device for measuring volume of logistics piece - Google Patents

Method and device for measuring volume of logistics piece

Info

Publication number
CN113379826A
CN113379826A (application CN202010159693.4A)
Authority
CN
China
Prior art keywords
depth image
target
piece
logistics
logistics piece
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010159693.4A
Other languages
Chinese (zh)
Inventor
陈紫荣
彭显明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SF Technology Co Ltd
Shenzhen SF Taisen Holding Group Co Ltd
SF Tech Co Ltd
Original Assignee
SF Technology Co Ltd
Shenzhen SF Taisen Holding Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SF Technology Co Ltd, Shenzhen SF Taisen Holding Group Co Ltd filed Critical SF Technology Co Ltd
Priority to CN202010159693.4A priority Critical patent/CN113379826A/en
Publication of CN113379826A publication Critical patent/CN113379826A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Abstract

The embodiment of the application provides a method and a device for measuring the volume of a logistics piece, which are used for acquiring accurate volume data of the logistics piece and providing effective data support for logistics transportation. The volume measurement method of the logistics piece provided by the embodiment of the application comprises the following steps: acquiring a depth image, wherein the depth image is obtained by shooting a target logistics piece through a depth camera; segmenting a target surface in the depth image, wherein the target surface is a bearing surface for bearing a target logistics piece; taking the target surface as a background area, carrying out foreground and background segmentation on the depth image, and taking an image corresponding to the obtained foreground area as a depth image of the logistics piece; calculating a minimum cuboid surrounding the target logistics piece according to the depth image of the logistics piece; and calculating the volume of the minimum cuboid as the volume measurement result of the target logistics piece.

Description

Method and device for measuring volume of logistics piece
Technical Field
The application relates to the field of logistics, in particular to a method and a device for measuring the volume of a logistics piece.
Background
Compared with a traditional camera, a depth camera (also called a 3D camera) can detect the distance between each point in an image and the camera, that is, the depth of the shooting space, so that a real scene can be restored more conveniently, enabling applications such as scene modeling. Typically, a depth camera adopts one of three schemes: structured light, binocular vision, or Time of Flight (TOF).
In recent years, with the rapid development of related technologies such as machine vision, automatic driving and face recognition, application scenarios of the depth camera have become increasingly wide, and demand on mobile terminals is particularly strong. For example, in the field of logistics, a Personal Digital Assistant (PDA) carrying a depth camera can photograph a logistics piece and measure its volume from the obtained depth image, thereby facilitating digital management of logistics work.
However, in the prior art, measuring the volume of a logistics piece from its depth image still lacks precision.
Disclosure of Invention
The embodiment of the application provides a method and a device for measuring the volume of a logistics piece, which are used for acquiring accurate volume data of the logistics piece and providing effective data support for logistics transportation.
In a first aspect, an embodiment of the present application provides a method for measuring the volume of a logistics piece, where the method includes:
acquiring a depth image, wherein the depth image is obtained by shooting a target logistics piece through a depth camera;
segmenting a target surface in the depth image, wherein the target surface is a bearing surface for bearing a target logistics piece;
taking the target surface as a background area, carrying out foreground and background segmentation on the depth image, and taking an image corresponding to the obtained foreground area as a depth image of the logistics piece;
calculating a minimum cuboid surrounding the target logistics piece according to the depth image of the logistics piece;
and calculating the volume of the minimum cuboid as the volume measurement result of the target logistics piece.
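The five steps of the first aspect can be sketched end to end on a toy depth map. The following is a minimal, hypothetical Python illustration, not the patent's actual implementation: it assumes the bearing surface is simply the farthest (largest-depth) value, uses an axis-aligned bounding rectangle in pixel units, and skips real camera calibration.

```python
# Hypothetical end-to-end sketch of the claimed pipeline on a toy depth map.
# Depth values are distances from the camera; the bearing surface is the
# farthest plane, and parcel pixels sit closer to the camera. The function
# name and the pixel-unit scale are illustrative assumptions.

def measure_volume(depth, tol=1e-6):
    # Steps 1-2: take the largest depth as the bearing-surface depth
    # (in the patent, the surface farthest from the origin is chosen).
    surface_depth = max(max(row) for row in depth)

    # Step 3: foreground = pixels strictly closer than the surface.
    fg = [(r, c)
          for r, row in enumerate(depth)
          for c, d in enumerate(row)
          if d < surface_depth - tol]
    if not fg:
        return 0.0

    # Step 4: minimum axis-aligned rectangle around the foreground,
    # and parcel height as the largest offset from the surface.
    rows = [r for r, _ in fg]
    cols = [c for _, c in fg]
    length = max(rows) - min(rows) + 1
    width = max(cols) - min(cols) + 1
    height = max(surface_depth - depth[r][c] for r, c in fg)

    # Step 5: volume of the minimum cuboid (pixel-unit volume; a real
    # system would convert pixels to metres via camera intrinsics).
    return length * width * height

depth_map = [
    [2.0, 2.0, 2.0, 2.0],
    [2.0, 1.5, 1.5, 2.0],
    [2.0, 1.5, 1.0, 2.0],
    [2.0, 2.0, 2.0, 2.0],
]
print(measure_volume(depth_map))  # 2 x 2 footprint, height 1.0 -> 4.0
```

A production system would replace each step with the refinements claimed below (plane detection, clustering, projection along the surface normal).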
With reference to the first aspect of the embodiment of the present application, in a first possible implementation manner of the first aspect of the embodiment of the present application, segmenting a target surface in a depth image includes:
carrying out plane detection on the depth image to obtain a plurality of planes contained in the depth image;
and sequentially calculating the distances from a plurality of surfaces contained in the depth image to the coordinate origin, and selecting the surface with the largest distance as a target surface.
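The farthest-plane selection above can be sketched directly. In this hypothetical snippet each detected plane is represented in the standard form ax + by + cz + d = 0, and the point-to-origin distance formula |d| / sqrt(a² + b² + c²) is used; the helper names are illustrative, not from the patent.

```python
# Hypothetical sketch of target-surface selection: among detected planes,
# pick the one farthest from the coordinate origin, on the reasoning that
# the bearing surface lies behind the parcel as seen from the camera.

def plane_distance_to_origin(plane):
    a, b, c, d = plane
    return abs(d) / (a * a + b * b + c * c) ** 0.5

def pick_target_surface(planes):
    return max(planes, key=plane_distance_to_origin)

planes = [(0.0, 0.0, 1.0, -2.0),   # z = 2.0 (distance 2.0)
          (0.0, 0.0, 1.0, -1.5),   # z = 1.5 (distance 1.5)
          (1.0, 0.0, 0.0, -0.5)]   # x = 0.5 (distance 0.5)
print(pick_target_surface(planes))  # the z = 2.0 plane wins
```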
With reference to the first possible implementation manner of the first aspect of the embodiment of the present application, in a second possible implementation manner of the first aspect of the embodiment of the present application, performing plane detection on a depth image, and obtaining several planes included in the depth image includes:
gridding the depth image;
converting two-dimensional pixel points in each grid formed after the depth image is gridded into three-dimensional coordinate points to form grid point clouds;
screening out target grid point clouds forming a point cloud plane from three-dimensional coordinate points in the grid point clouds;
clustering the target grid point cloud to obtain a clustering plane;
and clustering the three-dimensional coordinate points which are not clustered with the clustering planes to obtain a plurality of planes contained in the depth image.
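The gridding and back-projection steps above can be sketched with a pinhole camera model. The intrinsics fx, fy, cx, cy below are illustrative assumptions (the patent does not specify them), and the grid size is arbitrary.

```python
# Hypothetical sketch of the gridding step: the depth image is split into
# fixed-size grid cells, and each pixel (u, v, depth) is back-projected
# into a 3D point with a standard pinhole model, yielding one small point
# cloud per grid cell.

def pixel_to_point(u, v, d, fx=500.0, fy=500.0, cx=2.0, cy=2.0):
    # Pinhole back-projection: x = (u - cx) * d / fx, y = (v - cy) * d / fy.
    return ((u - cx) * d / fx, (v - cy) * d / fy, d)

def grid_point_clouds(depth, cell=2):
    """Return {(grid_row, grid_col): [3D points]} for each grid cell."""
    clouds = {}
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            key = (v // cell, u // cell)
            clouds.setdefault(key, []).append(pixel_to_point(u, v, d))
    return clouds

depth_map = [[2.0, 2.0], [2.0, 1.5]]
clouds = grid_point_clouds(depth_map, cell=2)
print(len(clouds), len(clouds[(0, 0)]))  # one 2x2 cell holding 4 points
```

Per-cell point clouds can then be tested for planarity (e.g. by fitting) and clustered across cells, as the claim describes.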
With reference to the first aspect of the embodiment of the present application, in a third possible implementation manner of the first aspect of the embodiment of the present application, taking a target surface as a background region, performing foreground and background segmentation on a depth image, and taking an image corresponding to the obtained foreground region as a depth image of a logistics piece includes:
extracting the depth value of the target surface;
extracting, as target pixel points, the pixel points in the depth image whose depth values are lower than the depth value of the target surface;
and taking a connected region formed by the target pixel points as a foreground region, and taking an image corresponding to the foreground region as a depth image of the logistics piece.
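The threshold-then-connect idea above can be sketched as a flood fill. This is a simplified, hypothetical illustration: target pixels are those closer than the bearing surface, and a 4-connected region around a seed (in the patent, the position of the logistics piece identifier) is taken as the foreground.

```python
# Minimal sketch (hypothetical names) of the foreground segmentation:
# pixels with depth smaller than the bearing surface's depth are target
# pixels, and one 4-connected region of them forms the foreground.

from collections import deque

def foreground_region(depth, surface_depth, seed):
    """Flood-fill the connected region of target pixels around seed."""
    h, w = len(depth), len(depth[0])
    target = {(r, c) for r in range(h) for c in range(w)
              if depth[r][c] < surface_depth}
    if seed not in target:
        return set()
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (nr, nc) in target and (nr, nc) not in region:
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

depth_map = [
    [2.0, 2.0, 2.0],
    [2.0, 1.2, 2.0],
    [2.0, 1.1, 2.0],
]
print(sorted(foreground_region(depth_map, 2.0, (1, 1))))
# [(1, 1), (2, 1)]
```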
With reference to the third possible implementation manner of the first aspect of the embodiment of the present application, in the fourth possible implementation manner of the first aspect of the embodiment of the present application, taking a connected region formed by target pixel points as a foreground region, and taking an image corresponding to the foreground region as a depth image of a logistics piece includes:
detecting a logistics piece identifier of the depth image, wherein the logistics piece identifier is used for identifying a target logistics piece;
clustering is carried out on the target pixel points by taking the positions of the logistics piece identifications as starting points to obtain connected areas;
and taking the connected region as a foreground region, and taking an image corresponding to the foreground region as a depth image of the logistics piece.
With reference to the first aspect of the embodiment of the present application, in a fifth possible implementation manner of the first aspect of the embodiment of the present application, if a plurality of unknown pixel points not included in the connected region exist among the target pixel points, before the connected region is used as the foreground region and the image corresponding to the foreground region is used as the depth image of the logistics piece, the method further includes:
calculating a first distance from each unknown pixel point to the target surface, a second distance from each unknown pixel point to the foreground region, and a third distance between each pair of unknown pixel points;
and clustering according to the magnitudes of the first distance, the second distance and the third distance, thereby classifying each of the unknown pixel points into the target surface or the connected region.
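The distance-based classification can be sketched as follows. This hypothetical snippet simplifies the claim: it omits the pairwise third distance and simply assigns each unknown 3D point to whichever of the two classes (surface or foreground) is nearer in Euclidean distance; all names are illustrative.

```python
# Simplified, hypothetical sketch of the distance-based classification:
# each unknown 3D point is assigned to the target surface or to the
# connected (foreground) region, whichever it is closer to.

import math

def classify_unknown(unknown, surface_points, foreground_points):
    labels = {}
    for p in unknown:
        d_surface = min(math.dist(p, s) for s in surface_points)  # 1st distance
        d_fg = min(math.dist(p, f) for f in foreground_points)    # 2nd distance
        labels[p] = "foreground" if d_fg < d_surface else "surface"
    return labels

surface = [(0.0, 0.0, 2.0), (1.0, 0.0, 2.0)]
fg = [(0.5, 0.5, 1.0)]
unknown = [(0.5, 0.5, 1.2), (0.9, 0.1, 1.95)]
labels = classify_unknown(unknown, surface, fg)
print(labels)
```

The claim's third (pairwise) distance would additionally let mutually close unknown points be moved to the same class as a group.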
With reference to the first aspect of the embodiment of the present application, in a sixth possible implementation manner of the first aspect of the embodiment of the present application, calculating a minimum rectangular parallelepiped surrounding a target logistics piece according to a depth image of the logistics piece includes:
projecting the depth image of the logistics piece to a target surface to obtain a projected image;
extracting edge points of the projected image;
acquiring a minimum rectangle surrounding the projected image according to the position of the edge point;
extracting the maximum length of the depth image of the logistics piece along the normal vector direction of the target surface;
and constructing a minimum cuboid surrounding the target logistics piece by taking the minimum rectangle as a bottom surface and taking the maximum length as a height.
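The projection-rectangle-height construction above can be sketched under a strong simplifying assumption: the bearing surface is the plane z = z0 with normal (0, 0, 1), so projecting amounts to dropping z, the minimum rectangle is taken axis-aligned (the patent's minimum rectangle may be rotated), and the height is the largest offset |z0 - z| over the parcel's points. All names are illustrative.

```python
# Hypothetical sketch of the minimum-cuboid construction, assuming the
# bearing surface is z = z0: project points by dropping z, bound them
# with an axis-aligned rectangle, and take the largest normal offset
# from the surface as the height.

def min_bounding_cuboid(points, z0):
    xs = [x for x, _, _ in points]
    ys = [y for _, y, _ in points]
    length = max(xs) - min(xs)
    width = max(ys) - min(ys)
    height = max(abs(z0 - z) for _, _, z in points)
    return length, width, height

parcel = [(0.0, 0.0, 1.5), (0.4, 0.0, 1.5), (0.4, 0.3, 1.2), (0.0, 0.3, 1.2)]
l, w, h = min_bounding_cuboid(parcel, z0=2.0)
print(l, w, h, l * w * h)
```

A tighter fit would use a rotating-calipers minimum-area rectangle on the projected hull instead of the axis-aligned bound.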
In a second aspect, an embodiment of the present application provides a volume measurement device for a logistics piece, the device including:
an acquisition unit, configured to acquire a depth image, where the depth image is obtained by shooting a target logistics piece with a depth camera;
the segmentation unit is used for segmenting a target surface in the depth image, wherein the target surface is a bearing surface for bearing a target logistics piece;
the segmentation unit is also used for taking the target surface as a background area, carrying out foreground and background segmentation on the depth image, and taking an image corresponding to the obtained foreground area as a depth image of the logistics piece;
the calculating unit is used for calculating the minimum cuboid surrounding the target logistics piece according to the depth image of the logistics piece;
and the calculating unit is also used for calculating the volume of the minimum cuboid as the volume measurement result of the target logistics piece.
With reference to the second aspect of the embodiment of the present application, in a first possible implementation manner of the second aspect of the embodiment of the present application, the dividing unit is specifically configured to:
carrying out plane detection on the depth image to obtain a plurality of planes contained in the depth image;
and sequentially calculating the distances from a plurality of surfaces contained in the depth image to the coordinate origin, and selecting the surface with the largest distance as a target surface.
With reference to the first possible implementation manner of the second aspect of the embodiment of the present application, in the second possible implementation manner of the second aspect of the embodiment of the present application, the dividing unit is specifically configured to:
gridding the depth image;
converting two-dimensional pixel points in each grid formed after the depth image is gridded into three-dimensional coordinate points to form grid point clouds;
screening out target grid point clouds forming a point cloud plane from three-dimensional coordinate points in the grid point clouds;
clustering the target grid point cloud to obtain a clustering plane;
and clustering the three-dimensional coordinate points which are not clustered with the clustering planes to obtain a plurality of planes contained in the depth image.
With reference to the second aspect of the embodiment of the present application, in a third possible implementation manner of the second aspect of the embodiment of the present application, the dividing unit is specifically configured to:
extracting the depth value of the target surface;
extracting target pixel points of which the depth values of the pixel points in the depth image are lower than that of the target surface;
and taking a connected region formed by the target pixel points as a foreground region, and taking an image corresponding to the foreground region as a depth image of the logistics piece.
With reference to the third possible implementation manner of the second aspect of the embodiment of the present application, in a fourth possible implementation manner of the second aspect of the embodiment of the present application, the segmentation unit is specifically configured to:
detecting a logistics piece identifier of the depth image, wherein the logistics piece identifier is used for identifying a target logistics piece;
clustering is carried out on the target pixel points by taking the positions of the logistics piece identifications as starting points to obtain connected areas;
and taking the connected region as a foreground region, and taking an image corresponding to the foreground region as a depth image of the logistics piece.
With reference to the third possible implementation manner of the second aspect of the embodiment of the present application, in a fifth possible implementation manner of the second aspect of the embodiment of the present application, if there are multiple unknown pixel points that are not included in the connected region in the target pixel point, the segmentation unit is further configured to:
calculating a first distance from each unknown pixel point to the target surface, a second distance from each unknown pixel point to the foreground region, and a third distance between each pair of unknown pixel points;
and clustering according to the magnitudes of the first distance, the second distance and the third distance, thereby classifying each of the unknown pixel points into the target surface or the connected region.
With reference to the second aspect of the embodiment of the present application, in a sixth possible implementation manner of the second aspect of the embodiment of the present application, the calculating unit is specifically configured to:
projecting the depth image of the logistics piece to a target surface to obtain a projected image;
extracting edge points of the projected image;
acquiring a minimum rectangle surrounding the projected image according to the position of the edge point;
extracting the maximum length of the depth image of the logistics piece along the normal vector direction of the target surface;
and constructing a minimum cuboid surrounding the target logistics piece by taking the minimum rectangle as a bottom surface and taking the maximum length as a height.
In a third aspect, an embodiment of the present application further provides a volume measurement device for a logistics piece, which includes a processor and a memory, where the memory stores a computer program, and the processor executes the steps in any one of the methods provided in the embodiments of the present application when calling the computer program in the memory.
In a fourth aspect, this application further provides a computer-readable storage medium, where a plurality of instructions are stored, and the instructions are adapted to be loaded by a processor to perform the steps in any one of the methods provided by this application.
As can be seen from the above, the embodiments of the present application have the following beneficial effects:
for an irregularly-shaped or non-cuboid logistics piece, the bearing surface carrying the logistics piece is segmented from the depth image, and the depth image is then subjected to foreground-background segmentation according to that bearing surface. In this way the depth image of the logistics piece can be effectively separated and a minimum cuboid surrounding the logistics piece calculated. The minimum cuboid can serve as the equivalent shape of the logistics piece in logistics transportation, and its volume can serve as the volume measurement result of the logistics piece, facilitating operations such as packing, placement and freight transport.
In addition, because the data processing is relatively simple, and because only one depth image is needed to determine the volume measurement result of the logistics piece, on-site shooting and processing are facilitated, the convenience of automatically detecting the volume of logistics pieces in logistics work is significantly improved, and effective data support is provided for logistics work simply and conveniently.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic flow chart of a method for measuring the volume of a logistics piece according to an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of step S102 in the embodiment of FIG. 1;
FIG. 3 is a schematic flowchart of step S201 in the embodiment of FIG. 2;
FIG. 4 is a schematic flowchart of step S103 in the embodiment of FIG. 1;
FIG. 5 is a schematic view of a scenario of the method for measuring the volume of a logistics piece according to an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a volume measurement device for logistics pieces according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a volume measurement apparatus for logistics pieces according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description that follows, specific embodiments of the present application will be described with reference to steps and symbols executed by one or more computers, unless otherwise indicated. Accordingly, these steps and operations will at times be referred to as being performed by a computer, where the computer performs operations involving its processing unit acting on electronic signals that represent data in a structured form. These operations transform the data or maintain it at locations in the computer's memory system, which may be reconfigured or otherwise altered in a manner well known to those skilled in the art. The data maintains a data structure, that is, a physical location in memory with particular characteristics defined by the data format. However, while the principles of the application are described in these terms, this is not meant to be limiting, and those of ordinary skill in the art will recognize that various steps and operations described below may also be implemented in hardware.
The principles of the present application may be employed in numerous other general-purpose or special-purpose computing, communication environments or configurations. Examples of well known computing systems, environments, and configurations that may be suitable for use with the application include, but are not limited to, hand-held telephones, personal computers, servers, multiprocessor systems, microcomputer-based systems, mainframe-based computers, and distributed computing environments that include any of the above systems or devices.
The terms "first", "second", and "third", etc. in this application are used to distinguish between different objects and not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions.
First, before describing the embodiments of the present application, the application context relevant to the embodiments will be described.
In the prior art, when measuring the volume of a logistics piece, the volume is usually calculated by combining depth images of the logistics piece in different postures. Limited by the precision with which volume can be obtained from depth images, multiple depth images often need to be acquired to improve calculation accuracy, so on-site logistics workers often need to repeatedly capture multiple depth images, which is inconvenient for on-site work.
Based on the above defects of the prior art, the embodiments of the present application provide a new method for measuring the volume of a logistics piece, which overcomes these defects to at least some extent.
In the method for measuring the volume of a logistics piece according to the embodiment of the present application, the execution subject of the method may be a volume measurement device for logistics pieces, or a device of a different type integrating such a volume measurement device, such as server equipment, a physical host, or User Equipment (UE). The volume measurement device may be implemented in hardware or software, and the UE may specifically be a terminal device such as a smart phone, tablet computer, laptop computer, palm computer, desktop computer, or PDA.
The volume measurement device may also be divided into multiple devices that jointly execute the method provided by the embodiment of the present application. For example, a worker at an express delivery outlet may acquire a depth image of a received logistics piece through a camera device, upload the depth image to a server through a network, and have the server extract the volume of the logistics piece.
The camera device may be a depth camera, or may also be a device including a depth camera, such as the above mentioned PDA, smart phone, or tablet computer; the network may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others; the server may be a server cluster composed of a plurality of servers, or the like, and the server may also be a cloud server. There may be any number of cameras, networks, and servers, as desired.
Of course, in the embodiment of the present application, because the data processing required by the volume measurement method is relatively simple, in practical application it is more convenient to directly acquire a depth image on site with a device such as a PDA and extract the volume of the logistics piece based on the acquired depth image, so that subsequent logistics work can be conveniently arranged on site.
In addition, the logistics item referred to in the embodiment of the present application may be specifically a courier item of an express service in the logistics industry, and correspondingly, the method for measuring the volume of the logistics item provided in the embodiment of the present application may be used to extract the volume of the courier item based on a depth image of the courier item.
Fig. 1 shows a schematic flow chart of a method for measuring the volume of a logistics piece according to an embodiment of the present application. As shown in fig. 1, the method may specifically include the following steps:
step S101, obtaining a depth image, wherein the depth image is obtained by shooting a target logistics piece through a depth camera;
step S102, a target surface is segmented from the depth image, wherein the target surface is a bearing surface for bearing a target logistics piece;
step S103, taking the target surface as a background area, carrying out foreground and background segmentation on the depth image, and taking an image corresponding to the obtained foreground area as a depth image of the logistics piece;
step S104, calculating a minimum cuboid surrounding the target logistics piece according to the depth image of the logistics piece;
and step S105, calculating the volume of the minimum cuboid as the volume measurement result of the target logistics piece.
In the technical solution proposed in the embodiment shown in fig. 1, for an irregularly-shaped or non-cuboid logistics piece, the bearing surface carrying the logistics piece is segmented from the depth image, and the depth image is then subjected to foreground-background segmentation according to that bearing surface. In this way the depth image of the logistics piece can be effectively separated and a minimum cuboid surrounding the logistics piece calculated. The minimum cuboid can serve as the equivalent shape of the logistics piece in logistics transportation, and its volume can serve as the volume measurement result of the logistics piece, facilitating operations such as packing, placement and freight transport.
In addition, because the data processing is relatively simple, and because only one depth image is needed to determine the volume measurement result of the logistics piece, on-site shooting and processing are facilitated, the convenience of automatically detecting the volume of logistics pieces in logistics work is significantly improved, and effective data support is provided for logistics work simply and conveniently.
It should be understood that needing only one depth image to determine the volume measurement result does not mean that only one depth image may be captured when measuring the volume; multiple depth images can be captured for the logistics piece. In practical applications, if multiple depth images are captured, one or more of them can be selected for calculating the volume, and if multiple volume measurement results are obtained, the final output can be chosen by screening for the minimum, the maximum or the average of those results.
In addition, the method for measuring the volume of a logistics piece provided by the embodiment of the present application can be applied to logistics pieces of irregular shape as well as of regular shape. In this embodiment, a regular shape refers to a cuboid shape (for example, many commodities are sold packed in cuboid cartons), while an irregular shape refers to a non-cuboid shape, typically a polyhedron, a sphere-like object, an unpacked commodity body, easily deformable clothing, and the like.
The following proceeds to a detailed description of the various steps of the embodiment shown in fig. 1:
in the embodiment of the application, the depth image may be obtained by shooting the physical distribution through different types of depth map shooting modes such as structured light, binocular vision or TOF, and may be determined according to actual shooting equipment or shooting conditions.
In step S101, the depth image may be obtained by the execution device shooting the logistics piece in real time through its own depth camera; or by an external depth camera, or an external device carrying a depth camera, shooting the logistics piece in real time; or by extracting a depth image of the logistics piece stored by the device itself or by an external device.
In practical application, even though the logistics piece may be irregular in shape, a plane can be extracted from its depth image combined with the point cloud carrying depth information. The depth images corresponding to the logistics piece and to the bearing surface carrying it differ: the bearing surface generally appears as a single surface with a large area, while the logistics piece generally appears as many dense surfaces with small areas.
Considering that the depth image formed by an irregularly-shaped logistics piece is complex, so that segmenting it directly is difficult and computationally heavy, while the depth image formed by the bearing surface is simple, its image characteristics are essentially unchanged, and segmenting it is easy and light, the embodiment of the present application first recognizes the bearing surface in the image and then indirectly recognizes the logistics piece. After the bearing surface is segmented from the depth image, the bearing surface and the logistics piece are separated by foreground-background segmentation to obtain the depth image of the logistics piece, avoiding the higher difficulty and data processing load of directly identifying and segmenting the logistics piece from the depth image.
Further, as an exemplary specific implementation of the above data processing, referring to fig. 2, which shows a flow diagram of step S102 in the embodiment corresponding to fig. 1, the segmentation of the bearing surface in the embodiment of the present application may specifically include the following steps:
step S201, carrying out plane detection on the depth image to obtain a plurality of planes contained in the depth image;
the depth image is used for plane detection, a clustering mode can be generally adopted to cluster dense point clouds in the depth image, and adjacent point clouds belonging to the same plane can be clustered through similar clustering characteristics to obtain different planes.
For example, a clustering segmentation algorithm may divide the point cloud by detecting different types of features, such as spatial, geometric, texture, and color features, and cluster points with similar features into distinct surfaces.
Alternatively, Random Sample Consensus (RANSAC) may be used: randomly select several points in the point cloud of the depth image as an initial inlier set (the points assumed to form one surface), fit a model to these inliers, test the remaining points against the fitted model, and record how many of them are also inliers. These steps are iterated multiple times, and the iteration yielding the largest inlier count gives the required solution; the surface formed by the corresponding point set is a surface detected from the depth image.
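The RANSAC iteration described above can be sketched as follows. This is a minimal illustrative implementation, not the patent's own code; the function name, iteration count, and inlier threshold are assumptions. It fits a plane to an (N, 3) point cloud and keeps the model with the most inliers:

```python
import numpy as np

def ransac_plane(points, n_iters=200, inlier_thresh=0.01, rng=None):
    """Fit a plane n.x + d = 0 to an (N, 3) point cloud with RANSAC.

    Returns ((normal, d), inlier_mask) for the model with the largest
    inlier set found across the iterations.
    """
    rng = np.random.default_rng(rng)
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        # 1. Randomly sample a minimal set of 3 points (initial inlier set).
        sample = points[rng.choice(len(points), size=3, replace=False)]
        # 2. Fit a plane model to the sample.
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                 # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(sample[0])
        # 3. Test all points against the model and count inliers.
        dist = np.abs(points @ normal + d)
        inliers = dist < inlier_thresh
        # 4. Keep the model with the most inliers across iterations.
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers
```

In practice such a fit is run repeatedly, removing each detected plane's inliers, to extract all the planes in the scene.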
Of course, other plane detection methods known in the related art may also be adopted; details are not described herein.
Step S202, sequentially calculating the distances from a plurality of surfaces contained in the depth image to the origin of coordinates, and selecting the surface with the largest distance as a target surface.
In the embodiment of the present application, for the scene of shooting the logistics piece on site, it can be assumed by default that the shot image contains only the logistics piece and the bearing surface carrying it.
In this case, the bearing surface is the surface with the lowest height; equivalently, in the depth image, if the distance from each extracted surface to the depth camera (the coordinate origin) along the picture depth Z axis is calculated, the surface with the largest distance is the bearing surface, that is, the target surface in the embodiment of the present application.
In this way, the bearing surface in the depth image can be accurately identified based on height, facilitating the subsequent segmentation of the depth image of the logistics piece.
As a specific implementation of the clustering approach, referring to fig. 3, which shows a flow diagram of step S201 in the embodiment of fig. 2 of the present application, the process may include:
step S301, gridding the depth image;
step S302, converting two-dimensional pixel points in each grid formed after the depth image is meshed into three-dimensional coordinate points to form grid point clouds;
step S303, screening out target grid point clouds forming a point cloud plane from three-dimensional coordinate points in the grid point clouds;
step S304, clustering the target grid point cloud to obtain a clustering plane;
step S305, clustering the three-dimensional coordinate points which are not clustered with the clustering planes to obtain several planes contained in the depth image.
Specifically, the obtained original depth image is gridded into a plurality of grids, whose size and distribution are adjusted according to actual needs. Then, for each two-dimensional pixel point, its three-dimensional coordinates in the three-dimensional scene are determined from the corresponding depth data, and the corresponding three-dimensional coordinate point is determined from those coordinates, thereby converting each two-dimensional pixel point into a corresponding three-dimensional coordinate point.
For each two-dimensional pixel point, the corresponding three-dimensional coordinate point is recovered by the following formula:
x_c = (u - u_0) * z_c / f_x
y_c = (v - v_0) * z_c / f_y

wherein (u, v) is the point coordinate in the two-dimensional coordinate system, (x_c, y_c, z_c) is the point coordinate in the three-dimensional coordinate system, f_x and f_y are the focal lengths, (u_0, v_0) is the pixel reference point in the two-dimensional coordinate system, and z_c, the distance from the target point to the depth camera along the picture depth Z axis, can be read directly from the depth map.
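The back-projection just described can be sketched as follows; a minimal illustration assuming the camera intrinsics f_x, f_y, u_0, v_0 are known from calibration (the function name is hypothetical):

```python
import numpy as np

def backproject(u, v, z_c, fx, fy, u0, v0):
    """Recover the 3-D camera-frame point (x_c, y_c, z_c) for pixel (u, v)
    whose depth z_c is read from the depth map (pinhole camera model)."""
    x_c = (u - u0) * z_c / fx
    y_c = (v - v0) * z_c / fy
    return np.array([x_c, y_c, z_c])
```

Applying it to every pixel of a grid yields that grid's point cloud in the three-dimensional scene.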
Then, from the three-dimensional coordinate points corresponding to the two-dimensional pixel points in each grid, the grid point cloud corresponding to that grid in the three-dimensional scene is obtained, yielding a plurality of grid point clouds for the depth image. For each grid point cloud, it is judged from its three-dimensional coordinate points whether those points lie on the same point cloud plane in the three-dimensional scene, i.e., whether the grid point cloud forms a point cloud plane, and the grid point clouds forming point cloud planes are screened out.
The screened grid point clouds forming point cloud planes are then clustered according to their three-dimensional coordinate points to obtain a plurality of clustering planes. The terminal determines the three-dimensional coordinate points that have not been clustered from the three-dimensional point cloud data and clusters them with the obtained clustering planes, so that every three-dimensional coordinate point in the point cloud data is assigned to its corresponding clustering plane, thereby obtaining the several surfaces contained in the depth image.
In another exemplary implementation, referring to fig. 4, which shows a flowchart of step S103 in fig. 1 of the present application, the foregoing foreground and background segmentation may specifically include:
step S401, extracting the depth value of the target surface;
step S402, extracting target pixel points whose depth values in the depth image are lower than the depth value of the target surface;
step S403, using a connected region formed by the target pixel points as a foreground region, and using the image corresponding to the foreground region as the depth image of the logistics piece.
For foreground and background segmentation, the depth dimension can serve as the main criterion distinguishing foreground from background. Once the bearing surface, i.e., the target surface, is determined, the depth image can provisionally be divided into two parts: the image of the bearing surface and the image of the non-bearing surface. The depth values of the bearing surface are then extracted; since the surface contains many points, these values form a depth value interval. If a pixel point of the non-bearing-surface image has a depth value lower than those of the bearing surface, it can be taken as a target pixel point. A region can then be grown from the target pixel points, and the resulting connected region can be used as the foreground region, realizing separation from the bearing surface and serving as the depth image of the logistics piece.
Specifically, some of the target pixel points can be randomly selected as growth seeds for region growing. Combining coordinate information, including depth information, the distance between each seed and its adjacent target pixel points is calculated; when the distance is smaller than a set threshold, the points are connected and added to the growing region. This search-and-compare process is repeated until all target pixel points have been processed, and the resulting point set can be regarded as a foreground point cloud with high confidence.
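The seed-based growing described above can be sketched as a breadth-first search over the target points' 3-D coordinates; a minimal illustration in which the function name and distance threshold are assumptions:

```python
from collections import deque
import numpy as np

def grow_region(points, seed_idx, dist_thresh=0.02):
    """Grow a connected region from a seed through the target points.

    A point joins the region when it lies within dist_thresh of a point
    already in the region (breadth-first search over squared distances).
    Returns a boolean mask over the input points.
    """
    points = np.asarray(points, dtype=float)
    in_region = np.zeros(len(points), dtype=bool)
    in_region[seed_idx] = True
    queue = deque([seed_idx])
    while queue:
        i = queue.popleft()
        d2 = np.sum((points - points[i]) ** 2, axis=1)
        # Unvisited points close enough to the current point join the region.
        newcomers = np.flatnonzero(~in_region & (d2 < dist_thresh ** 2))
        in_region[newcomers] = True
        queue.extend(newcomers.tolist())
    return in_region
```

The resulting mask selects the connected region, i.e., the candidate foreground point cloud.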
Alternatively, the position of the logistics piece in the image can be identified by position recognition, and the region grown from there.
For example, the step S403 may specifically include:
detecting a logistics piece identifier of the depth image, wherein the logistics piece identifier is used for identifying a target logistics piece;
clustering is carried out on the target pixel points by taking the positions of the logistics piece identifications as starting points to obtain connected areas;
and taking the connected region as a foreground region, and taking an image corresponding to the foreground region as a depth image of the logistics piece.
The logistics piece identifier can be generated during the shooting of the depth image. Taking a PDA as an example, when a worker photographs a logistics piece with the PDA, the shooting angle can be corrected under the guidance of the logistics piece identifier displayed on the PDA's shooting preview interface, so that the identifier falls within the image of the logistics piece.
Of course, besides being located within the image of the logistics piece, the identifier may indicate the corresponding logistics piece in other ways, for example by treating the image in the direction pointed to by a pointer identifier as the logistics piece.
The logistics piece identifier can be fixed at a set position of the preview interface, determined from experience with historical logistics pieces, or it can be manually placed at a suitable position of the preview interface by the worker. After shooting, the identifier can be displayed in the depth image at its position in the original preview interface, or its position in the depth image can be indicated by identifier position information carried with the depth image.
In this way, the initial growth pixel points are determined directly from the logistics piece identifier, and growing and clustering are carried out among the target pixel points to obtain the connected region.
In the above growing process, in rare cases there may also be points that do not belong to the connected region, i.e., do not belong to the foreground point cloud; these can be regarded as an unknown point cloud.
For these unknown points (unknown pixel points), a corresponding classification process can be carried out to assign them to the point cloud of either the bearing surface or the logistics piece, further improving the accuracy of the foreground and background segmentation.
Correspondingly, as another exemplary embodiment, before performing step S403, the following steps may be further included:
calculating a first distance from each unknown pixel point to the target surface, a second distance from each unknown pixel point to the foreground region, and a third distance between pairs of unknown pixel points;
and clustering according to the magnitudes of the first distance, the second distance and the third distance, classifying the unknown pixel points to the target surface or the connected region.
For example, assume the point cloud of the bearing surface (the background point cloud) is Bkg_r, the point cloud of the logistics piece (the foreground point cloud) is Frg_r, and the unknown point cloud is Ukn_r. For the three-dimensional coordinate point pt of any unknown pixel point to be recognized in the unknown point cloud Ukn_r:
Distance to the background region: the distance between pt and any point ptb in the background point cloud Bkg_r is defined as

d(pt, ptb) = sqrt((pt_x - ptb_x)^2 + (pt_y - ptb_y)^2 + (pt_z - ptb_z)^2)
Distance to the foreground region: the distance between pt and any point ptf in the foreground point cloud Frg_r is defined as

d(pt, ptf) = sqrt((pt_x - ptf_x)^2 + (pt_y - ptf_y)^2 + (pt_z - ptf_z)^2)
Distance to neighbouring points: for the unknown point cloud Ukn_r, the point cloud formed by all three-dimensional coordinate points in the set is partitioned in three-dimensional space using the Kd-tree method in the Point Cloud Library (PCL), each point serving as a node of the resulting k-dimensional tree structure. For the point pt, this tree structure can be used to search the point cloud for its k nearest neighbours, i.e., the neighbourhood set N(pt) of pt. The distance between pt and any point ptn in the unknown point cloud Ukn_r is defined as

d(pt, ptn) = sqrt((pt_x - ptn_x)^2 + (pt_y - ptn_y)^2 + (pt_z - ptn_z)^2)
In this way, the foreground and background can be divided according to the calculated distance data by the principle of proximity, classifying each unknown pixel point to the target surface or the connected region, that is, to the bearing surface or the logistics piece.
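The principle-of-proximity classification can be sketched as follows. This is a brute-force minimal illustration, comparing each unknown point's minimum Euclidean distance to Bkg_r and to Frg_r; in practice a Kd-tree (as with PCL above) would replace the brute-force search, and the function name is hypothetical:

```python
import numpy as np

def classify_unknown(unknown, bkg, frg):
    """Assign each unknown point to the bearing surface (background) or the
    logistics piece (foreground) by comparing its minimum Euclidean
    distance to the point clouds Bkg_r and Frg_r."""
    unknown, bkg, frg = (np.asarray(a, dtype=float) for a in (unknown, bkg, frg))
    # Minimum distance from each unknown point to each reference cloud.
    d_bkg = np.min(np.linalg.norm(unknown[:, None, :] - bkg[None], axis=2), axis=1)
    d_frg = np.min(np.linalg.norm(unknown[:, None, :] - frg[None], axis=2), axis=1)
    return np.where(d_frg < d_bkg, "foreground", "background")
```

The neighbour distances within Ukn_r would additionally let nearby unknown points be classified jointly rather than one by one.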
Further, in step S104 of the embodiment corresponding to fig. 1, after the depth image of the logistics piece is obtained in step S103, the lengths of the logistics piece in three dimensions, i.e., the lengths of three edges, can be extracted from the image as the length, width and height data of the logistics piece, and the volume can then be calculated by the formula V = X × Y × Z. The assignment of edges to length, width and height is not fixed; the three edges corresponding to length, width and height are mutually perpendicular in space.
As an exemplary specific implementation, part of the edge length data may be extracted by projection; correspondingly, step S104 may include:
projecting the depth image of the logistics piece to a target surface to obtain a projected image;
extracting edge points of the projected image;
acquiring a minimum rectangle surrounding the projected image according to the position of the edge point;
extracting the maximum length of the depth image of the logistics piece along the normal vector direction of the target surface;
and constructing a minimum cuboid surrounding the target logistics piece by taking the minimum rectangle as a bottom surface and taking the maximum length as a height.
Specifically, after the logistics piece is projected onto the bearing surface, a two-dimensional image is obtained. Edge points of this image, which may be the corner points or all points of its contour, can then be extracted, and the maximum lengths of the image along the coordinate axes of the projection plane, e.g., the X and Y axes, are determined. From these two lengths, the minimum rectangle surrounding the image can be drawn.
By analogy, the logistics piece can be projected again onto the plane formed by the normal vector of the target surface and the X axis, or onto the plane formed by the normal vector and the Y axis; edge points are extracted and the maximum length of the projected image along the normal vector direction of the target surface is determined as the height of the logistics piece. Alternatively, the pixel point in the depth image of the logistics piece farthest from the bearing surface along the normal vector direction of the target surface can be detected, and its distance extracted as the maximum length along that direction, i.e., the height of the logistics piece.
Thus, after the maximum lengths of the three edges of the logistics piece are obtained, the minimum cuboid surrounding the logistics piece can be constructed and its volume calculated.
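The projection-based construction can be sketched as follows, assuming the point cloud has already been rotated so that the bearing-surface normal is aligned with the Z axis (the function name and the axis-aligned simplification are assumptions beyond the text):

```python
import numpy as np

def min_cuboid_volume(piece_points, bearing_z):
    """Build the minimum axis-aligned cuboid around the logistics piece.

    Base rectangle: X/Y extents of the points projected onto the bearing
    surface. Height: largest distance from the bearing surface along its
    normal (here the Z axis). Returns the cuboid volume V = X * Y * Z.
    """
    pts = np.asarray(piece_points, dtype=float)
    x_len = pts[:, 0].max() - pts[:, 0].min()       # base rectangle, X extent
    y_len = pts[:, 1].max() - pts[:, 1].min()       # base rectangle, Y extent
    height = np.max(np.abs(pts[:, 2] - bearing_z))  # farthest point from surface
    return x_len * y_len * height
```

For a tilted piece, a minimum-area (rotated) rectangle of the projection would give a tighter cuboid; the axis-aligned version above matches the X/Y maximum-length description in the text.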
It should be noted that the above processing may also involve coordinate transformation. It is easy to understand that, limited by practical shooting conditions such as the shooting angle and shooting space, the logistics piece in the depth image often appears in different placement postures with different inclinations. For convenience of calculation, a coordinate transformation can be applied to the shot depth image of the logistics piece to bring its placement posture into an ideal posture.
That is, the bearing surface is brought into a posture perpendicular to the picture depth Z axis, so that the normal vector of the bearing surface in the image is parallel to the picture depth Z axis of the depth camera. The specific rotation process is as follows:
The picture depth Z-axis direction is Z = [0, 0, 1], and the bearing surface normal vector is N_top = [n_x, n_y, n_z]. Calculate x = [x_x, x_y, x_z] = N_top × Z and y = [y_x, y_y, y_z] = N_top × x; then

    R = | x_x  x_y  x_z |
        | y_x  y_y  y_z |
        | n_x  n_y  n_z |
wherein the operation symbol × is the vector cross product. For any three-dimensional point P = [p_x, p_y, p_z], the rotated point is R × P.
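The rotation construction above can be sketched as follows; a minimal illustration that additionally normalises the intermediate axes and guards the degenerate case where N_top is already parallel to Z (details that go beyond the text and are assumptions):

```python
import numpy as np

def bearing_surface_rotation(n_top):
    """Build the rotation R that turns the bearing-surface normal N_top to
    the picture depth Z axis [0, 0, 1]: x = N_top x Z (normalised),
    y = N_top x x, and the rows of R are x, y, N_top."""
    z = np.array([0.0, 0.0, 1.0])
    n = np.array(n_top, dtype=float)
    n /= np.linalg.norm(n)
    x = np.cross(n, z)
    nx = np.linalg.norm(x)
    if nx < 1e-9:                        # normal already (anti)parallel to Z
        x = np.array([1.0, 0.0, 0.0])
    else:
        x /= nx
    y = np.cross(n, x)
    return np.vstack([x, y, n])          # R @ n == [0, 0, 1]
```

Applying R to every point of the piece's point cloud (P' = R @ P) yields the ideal posture with the bearing surface perpendicular to the depth axis.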
For the effect of the rotation, reference may be made to fig. 5, a scene schematic diagram of the volume measurement method for a logistics piece according to the embodiment of the present application.
In order to better implement the method for measuring the volume of the logistics piece provided by the embodiment of the application, the embodiment of the application also provides a device for measuring the volume of the logistics piece.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a volume measurement device for a logistics piece according to an embodiment of the present disclosure. In the embodiment of the present disclosure, the volume measurement device 600 for a logistics piece may specifically include the following structure:
the acquiring unit 601 is configured to acquire a depth image, where the depth image is obtained by shooting a target logistics piece through a depth camera;
a segmentation unit 602, configured to segment a target surface from the depth image, where the target surface is a bearing surface for bearing the target logistics piece;
the segmentation unit 602 is further configured to perform foreground and background segmentation on the depth image by using the target surface as a background region, and use the image corresponding to the obtained foreground region as the depth image of the logistics piece;
a calculating unit 603, configured to calculate a minimum cuboid surrounding the target logistics piece according to the depth image of the logistics piece;
the calculating unit 603 is further configured to calculate the volume of the minimum cuboid as the volume measurement result of the target logistics piece.
In an exemplary embodiment, the segmentation unit 602 is specifically configured to:
carrying out plane detection on the depth image to obtain a plurality of planes contained in the depth image;
and sequentially calculating the distances from a plurality of surfaces contained in the depth image to the coordinate origin, and selecting the surface with the largest distance as a target surface.
In another exemplary embodiment, the segmentation unit 602 is specifically configured to:
gridding the depth image;
converting two-dimensional pixel points in each grid formed after the depth image is gridded into three-dimensional coordinate points to form grid point clouds;
screening out target grid point clouds forming a point cloud plane from three-dimensional coordinate points in the grid point clouds;
clustering the target grid point cloud to obtain a clustering plane;
and clustering the three-dimensional coordinate points which are not clustered with the clustering planes to obtain a plurality of planes contained in the depth image.
In another exemplary embodiment, the segmentation unit 602 is specifically configured to:
extracting the depth value of the target surface;
extracting target pixel points of which the depth values of the pixel points in the depth image are lower than that of the target surface;
and taking a connected region formed by the target pixel points as a foreground region, and taking an image corresponding to the foreground region as a depth image of the logistics piece.
In another exemplary embodiment, the segmentation unit 602 is specifically configured to:
detecting a logistics piece identifier of the depth image, wherein the logistics piece identifier is used for identifying a target logistics piece;
clustering is carried out on the target pixel points by taking the positions of the logistics piece identifications as starting points to obtain connected areas;
and taking the connected region as a foreground region, and taking an image corresponding to the foreground region as a depth image of the logistics piece.
In another exemplary specific implementation manner, if there are multiple unknown pixel points that are not included in the connected region in the target pixel point, the segmenting unit 602 is further configured to:
calculating a first distance from each unknown pixel point to a target surface, a second distance from each unknown pixel point to a foreground area and a third distance between two points of the unknown pixel point;
and clustering according to the first distance, the second distance and the third distance and the distance size, and classifying a plurality of unknown pixel points to a target surface or a connected region.
In another exemplary embodiment, the calculating unit 603 is specifically configured to:
projecting the depth image of the logistics piece to a target surface to obtain a projected image;
extracting edge points of the projected image;
acquiring a minimum rectangle surrounding the projected image according to the position of the edge point;
extracting the maximum length of the depth image of the logistics piece along the normal vector direction of the target surface;
and constructing a minimum cuboid surrounding the target logistics piece by taking the minimum rectangle as a bottom surface and taking the maximum length as a height.
An embodiment of the present application further provides a volume measurement device for a logistics piece. Referring to fig. 7, fig. 7 shows a schematic structural diagram of the volume measurement device for a logistics piece according to an embodiment of the present application. Specifically, the device includes a processor 701 which, when executing a computer program stored in a memory 702, implements the steps of the volume measurement method for a logistics piece in any embodiment corresponding to fig. 1 to 5; alternatively, the processor 701 implements the functions of the units in the embodiment corresponding to fig. 6 when executing the computer program stored in the memory 702.
Illustratively, a computer program may be partitioned into one or more modules/units, which are stored in the memory 702 and executed by the processor 701 to accomplish the present application. One or more modules/units may be a series of computer program instruction segments capable of performing certain functions, the instruction segments being used to describe the execution of a computer program in a computer device.
The volume measurement device for a logistics piece may include, but is not limited to, the processor 701 and the memory 702. Those skilled in the art will understand that the illustration is merely an example and does not constitute a limitation of the device, which may include more or fewer components than illustrated, combine some components, or use different components; for example, the device may further include an input/output device, a network access device, a bus, etc., with the processor 701, the memory 702, the input/output device, and the network access device connected through the bus.
The processor 701 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general purpose processor may be a microprocessor or any conventional processor. The processor is the control center of the volume measurement device for the logistics piece, connecting the various parts of the entire device through various interfaces and lines.
The memory 702 may be used to store computer programs and/or modules, and the processor 701 implements various functions of the computer apparatus by running or executing the computer programs and/or modules stored in the memory 702 and invoking data stored in the memory 702. The memory 702 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created through use of the volume measurement device (such as audio and video data). In addition, the memory may include high-speed random access memory as well as non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid state storage device.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above volume measurement device and apparatus for a logistics piece and their corresponding units may refer to the description of the volume measurement method for a logistics piece in any embodiment corresponding to fig. 1 to 5, and are not repeated here.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute steps in the method in any embodiment corresponding to fig. 1 to 5 in the present application, and specific operations may refer to descriptions of the method in any embodiment corresponding to fig. 1 to 5, which are not described herein again.
Wherein the computer-readable storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the computer-readable storage medium can execute the steps in the volume measurement method for a logistics piece in any embodiment corresponding to fig. 1 to 5 of the present application, they can achieve the beneficial effects achievable by that method, as detailed in the foregoing description and not repeated here.
The method, device, equipment and computer-readable storage medium for measuring the volume of a logistics piece provided by the present application are described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A method of measuring a volume of a piece of material, the method comprising:
acquiring a depth image, wherein the depth image is obtained by shooting a target logistics piece through a depth camera;
segmenting a target surface in the depth image, wherein the target surface is a bearing surface for bearing the target logistics piece;
taking the target surface as a background area, performing foreground and background segmentation on the depth image, and taking an image corresponding to the obtained foreground area as a depth image of the logistics piece;
calculating a minimum cuboid surrounding the target logistics piece according to the depth image of the logistics piece;
and calculating the volume of the minimum cuboid as the volume measurement result of the target logistics piece.
2. The method of claim 1, wherein said segmenting a target surface in the depth image comprises:
carrying out plane detection on the depth image to obtain a plurality of planes contained in the depth image;
and sequentially calculating the distances from a plurality of surfaces contained in the depth image to the origin of coordinates, and selecting the surface with the largest distance as the target surface.
3. The method of claim 2, wherein performing plane detection on the depth image to obtain several planes included in the depth image comprises:
gridding the depth image;
converting two-dimensional pixel points in each grid formed after the depth image is gridded into three-dimensional coordinate points to form grid point clouds;
screening out target grid point clouds forming a point cloud plane from three-dimensional coordinate points in the grid point clouds;
clustering the target grid point cloud to obtain a clustering plane;
and clustering the three-dimensional coordinate points which are not clustered with the clustering plane to obtain several surfaces contained in the depth image.
4. The method according to claim 1, wherein the taking the target surface as a background region, performing foreground and background segmentation on the depth image, and taking an image corresponding to the obtained foreground region as a depth image of the logistics piece comprises:
extracting the depth value of the target surface;
extracting target pixel points of which the depth values of the pixel points in the depth image are lower than the depth value of the target surface;
and taking a connected region formed by the target pixel points as the foreground region, and taking an image corresponding to the foreground region as a depth image of the logistics piece.
5. The method according to claim 4, wherein the taking a connected region formed by the target pixel points as the foreground region and taking an image corresponding to the foreground region as a depth image of the logistics piece comprises:
detecting a logistics piece identifier in the depth image, wherein the logistics piece identifier is used for identifying the target logistics piece;
clustering the target pixel points by taking the position of the logistics piece identifier as a starting point to obtain the connected region;
and taking the connected region as the foreground region, and taking an image corresponding to the foreground region as a depth image of the logistics piece.
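Claim 5's seeded variant can be sketched as a flood fill started at the identifier's position (for example, a waybill barcode detected on the parcel), which picks out the connected region belonging to the target parcel even when other objects pass the depth threshold. The seed coordinate convention is an assumption of this sketch.

```python
import numpy as np
from collections import deque

def region_from_identifier(mask, seed):
    """Grow the connected foreground region by 4-connected flood fill,
    seeded at the detected logistics-piece identifier; `seed` is (row, col)
    and `mask` is the boolean target-pixel mask."""
    h, w = mask.shape
    region = np.zeros_like(mask, dtype=bool)
    if not mask[seed]:
        return region
    q = deque([seed])
    region[seed] = True
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not region[ny, nx]:
                region[ny, nx] = True
                q.append((ny, nx))
    return region
```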
6. The method according to claim 4, wherein, if the target pixel points include a plurality of unknown pixel points not contained in the connected region, before taking the connected region as the foreground region and taking an image corresponding to the foreground region as the depth image of the logistics piece, the method further comprises:
calculating a first distance from each unknown pixel point to the target surface, a second distance from each unknown pixel point to the foreground area and a third distance between every two unknown pixel points;
and performing distance-based clustering according to the first distance, the second distance and the third distance, so as to classify the plurality of unknown pixel points into the target surface or the connected region.
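A simplified sketch of claim 6's classification: each unknown pixel (taken as a 3-D point) is assigned to the background surface or to the foreground region, whichever is nearer. The pairwise distances between unknown points (the "third distance") are omitted here for brevity, so this shows only the first two distance criteria.

```python
import numpy as np

def classify_unknown(point, surface_depth, foreground_points):
    """Assign an unknown 3-D point to 'foreground' (the connected region)
    or 'background' (the target surface) by nearest distance."""
    d_surface = abs(point[2] - surface_depth)                  # first distance
    d_foreground = min(float(np.linalg.norm(np.subtract(point, p)))
                       for p in foreground_points)             # second distance
    return "foreground" if d_foreground < d_surface else "background"
```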
7. The method of claim 1, wherein calculating the minimum cuboid surrounding the target logistics piece according to the depth image of the logistics piece comprises:
projecting the depth image of the logistics piece to the target surface to obtain a projected image;
extracting edge points of the projected image;
acquiring a minimum rectangle surrounding the projected image according to the position of the edge point;
extracting the maximum length of the depth image of the logistics piece along the normal vector direction of the target surface;
and constructing the minimum cuboid surrounding the target logistics piece by taking the minimum rectangle as a bottom surface and the maximum length as a height.
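The minimum-rectangle and cuboid-volume steps of claim 7 can be sketched as below. This is an illustrative implementation, not the patent's: the edge points of the projected image are reduced to their convex hull, and the minimum-area enclosing rectangle is found by aligning the hull with each hull edge in turn (a rotating-calipers argument guarantees the optimum touches a hull edge). Per-point heights along the surface normal are assumed to be supplied separately.

```python
import numpy as np

def _cross(o, a, b):
    # 2-D cross product of (a - o) and (b - o); > 0 means a left turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain on 2-D points; returns hull vertices CCW."""
    pts = sorted(map(tuple, points))
    if len(pts) <= 2:
        return np.array(pts, dtype=float)
    def build(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and _cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out
    lower, upper = build(pts), build(pts[::-1])
    return np.array(lower[:-1] + upper[:-1], dtype=float)

def min_area_rect(points):
    """Width and height of the minimum-area rectangle enclosing `points`."""
    hull = convex_hull(points)
    best_area, best_wh = np.inf, None
    for i in range(len(hull)):
        ex, ey = hull[(i + 1) % len(hull)] - hull[i]
        theta = np.arctan2(ey, ex)
        c, s = np.cos(-theta), np.sin(-theta)
        rot = hull @ np.array([[c, -s], [s, c]]).T  # align edge with x-axis
        w, h = np.ptp(rot[:, 0]), np.ptp(rot[:, 1])
        if w * h < best_area:
            best_area, best_wh = w * h, (w, h)
    return best_wh

def cuboid_volume(projected_xy, normal_heights):
    """Minimum rectangle of the projection as the base, maximum length
    along the target surface's normal vector as the height."""
    w, h = min_area_rect(projected_xy)
    return w * h * max(normal_heights)
```

For a parcel whose projection is a unit square and whose tallest point is 0.5 m above the bearing surface, the measured volume is 0.5 m³.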
8. A device for measuring the volume of a logistics piece, the device comprising:
an acquisition unit, wherein the acquisition unit is used for acquiring a depth image, and the depth image is obtained by shooting a target logistics piece through a depth camera;
the segmentation unit is used for segmenting a target surface in the depth image, wherein the target surface is a bearing surface for bearing the target logistics piece;
the segmentation unit is further configured to perform foreground and background segmentation on the depth image by taking the target surface as a background region, and take an image corresponding to the obtained foreground region as a depth image of the logistics piece;
the calculating unit is used for calculating the minimum cuboid surrounding the target logistics piece according to the depth image of the logistics piece;
the calculating unit is further configured to calculate the volume of the minimum cuboid as the volume measurement result of the target logistics piece.
9. A device for measuring the volume of a logistics piece, comprising a processor and a memory, wherein a computer program is stored in the memory, and the processor, when invoking the computer program, performs the method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the method of any one of claims 1 to 7.
CN202010159693.4A 2020-03-10 2020-03-10 Method and device for measuring volume of logistics piece Pending CN113379826A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010159693.4A CN113379826A (en) 2020-03-10 2020-03-10 Method and device for measuring volume of logistics piece

Publications (1)

Publication Number Publication Date
CN113379826A true CN113379826A (en) 2021-09-10

Family

ID=77568597

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010159693.4A Pending CN113379826A (en) 2020-03-10 2020-03-10 Method and device for measuring volume of logistics piece

Country Status (1)

Country Link
CN (1) CN113379826A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114648640A (en) * 2022-05-23 2022-06-21 深圳市其域创新科技有限公司 Target object monomer method, device, equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108416804A (en) * 2018-02-11 2018-08-17 深圳市优博讯科技股份有限公司 Obtain method, apparatus, terminal device and the storage medium of target object volume
CN110095062A (en) * 2019-04-17 2019-08-06 北京华捷艾米科技有限公司 A kind of object volume measurement method of parameters, device and equipment
WO2019169941A1 (en) * 2018-03-05 2019-09-12 华为技术有限公司 Distance measurement method and apparatus

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
孙晓兰; 赵慧洁: "Depth image surface feature extraction algorithm based on grid sampling", Journal of Image and Graphics (中国图象图形学报), no. 06, 15 June 2007 (2007-06-15) *
杜廷伟; 刘波: "Kinect depth data segmentation based on Gaussian mixture model clustering", Computer Applications and Software (计算机应用与软件), no. 12, 15 December 2014 (2014-12-15) *
高庆吉; 位园园: "Research on aviation baggage classification based on the mean square error of surface depth values", Computer Engineering & Science (计算机工程与科学), no. 01, 15 January 2017 (2017-01-15) *

Similar Documents

Publication Publication Date Title
US10268917B2 (en) Pre-segment point cloud data to run real-time shape extraction faster
US10510159B2 (en) Information processing apparatus, control method for information processing apparatus, and non-transitory computer-readable storage medium
US9292961B1 (en) System and method for detecting a structural opening in a three dimensional point cloud
EP3502621B1 (en) Visual localisation
JP6760957B2 (en) 3D modeling method and equipment
CN112581629A (en) Augmented reality display method and device, electronic equipment and storage medium
US20210256763A1 (en) Method and device for simplifying three-dimensional mesh model
CN112489099B (en) Point cloud registration method and device, storage medium and electronic equipment
CN110648363A (en) Camera posture determining method and device, storage medium and electronic equipment
CN115439607A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN108492284B (en) Method and apparatus for determining perspective shape of image
CN114396875B (en) Rectangular package volume measurement method based on vertical shooting of depth camera
CN109344750B (en) Complex structure three-dimensional object identification method based on structure descriptor
CN112197708B (en) Measuring method and device, electronic device and storage medium
CN114066999A (en) Target positioning system and method based on three-dimensional modeling
CN113628343A (en) Three-dimensional grid combination processing method and device and storage medium
CN113379826A (en) Method and device for measuring volume of logistics piece
CN112102391A (en) Measuring method and device, electronic device and storage medium
US10861174B2 (en) Selective 3D registration
CN116266365A (en) Point cloud data extraction method and device, electronic equipment and storage medium
CN114998433A (en) Pose calculation method and device, storage medium and electronic equipment
CN113592994A (en) Method, apparatus and storage medium for texture mapping
CN113592976A (en) Map data processing method and device, household appliance and readable storage medium
CN112150527A (en) Measuring method and device, electronic device and storage medium
CN113048950A (en) Base station antenna inclination angle measuring method and device, storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination