CN106813568A - Object measuring method and device - Google Patents


Info

Publication number
CN106813568A
CN106813568A
Authority
CN
China
Prior art keywords
target object
equation
surface equation
top surface
dimensional coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510847486.7A
Other languages
Chinese (zh)
Other versions
CN106813568B (en)
Inventor
何勇 (He Yong)
崔晶 (Cui Jing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cainiao Smart Logistics Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201510847486.7A
Publication of CN106813568A
Application granted
Publication of CN106813568B
Legal status: Active
Anticipated expiration


Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the present application disclose an object measuring method and device. The method includes: acquiring a depth image of a target object; converting, according to the depth information, the two-dimensional coordinates of each pixel in the image coordinate system into three-dimensional coordinates in the world coordinate system, the three-dimensional coordinates of all pixels in the depth map under the world coordinate system forming a three-dimensional point cloud; obtaining a bottom surface equation and a top surface equation of the target object using the three-dimensional point cloud and a preset algorithm; and calculating the length, width, and height of the target object according to the bottom surface equation and the top surface equation. With the method and device of the present application, the computational efficiency of object volume measurement can be improved, the processing of objects in the distribution and storage links can be accelerated, and the normal operation of the whole logistics process can be ensured.

Description

Object measuring method and device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an object measurement method and apparatus.
Background
With the rapid growth of online shopping, the logistics industry is rapidly developing. Logistics refers to the process of moving objects from a supply location to a receiving location, wherein the process includes the steps of packaging, storing, and delivering the objects. Generally, in the distribution link of objects, a vehicle combination strategy needs to be generated according to the volume of the objects, so that the cargo space of a distribution vehicle is utilized to the maximum extent; in the storage link of the objects, the proper goods positions need to be allocated to the objects according to the volumes of the objects, so that the utilization rate of the warehouse is maximized. Therefore, the volume of the object needs to be measured in the distribution and storage links of the logistics.
In the prior art, the volume of an object is generally calculated manually, as follows: firstly, a worker measures the length, width and height information of an object by using a measuring tool (such as a tape measure); then, the volume of the object is calculated by using a volume calculation formula.
In a large logistics distribution and storage center, the number of objects to be processed is large. If the volume of the objects is calculated manually, the processing time of the objects in the distribution and storage links is inevitably long, which affects the normal operation of the whole logistics process.
Summary of the Application
The embodiment of the application provides an object measuring method and device, so that the calculation efficiency of the volume of an object is improved, the processing of the object in the distribution and storage links is accelerated, and the normal operation of the whole logistics process is guaranteed.
In order to solve the technical problem, the embodiment of the application discloses the following technical scheme:
the application discloses an object measurement method, which comprises the following steps: acquiring a depth image of a target object, wherein the depth image is composed of a plurality of pixels, and each pixel comprises depth information of the target object;
converting two-dimensional coordinates of each pixel under an image coordinate system into three-dimensional coordinates under a world coordinate system according to the depth information, wherein the three-dimensional coordinates of all pixels in the depth map under the world coordinate system form a three-dimensional point cloud;
obtaining a bottom surface equation and a top surface equation of the target object by using the three-dimensional point cloud and a preset algorithm, wherein the bottom surface equation is an equation formed by the bottom surface of the target object in a world coordinate system, and the top surface equation is an equation formed by the top surface of the target object in the world coordinate system;
and calculating the length, width and height of the target object according to the bottom surface equation and the top surface equation.
Optionally, converting the two-dimensional coordinates of each pixel in the image coordinate system into three-dimensional coordinates in the world coordinate system according to the depth information, including:
converting two-dimensional coordinates of each pixel in the depth map under an image coordinate system into three-dimensional coordinates under a camera coordinate system according to the depth information;
and converting the three-dimensional coordinates of each pixel in the depth map in the camera coordinate system into the three-dimensional coordinates in the world coordinate system.
Optionally, the preset algorithm is a RANSAC algorithm, and the obtaining of the bottom surface equation and the top surface equation of the target object by using the three-dimensional point cloud and the preset algorithm includes:
step a: calling RANSAC algorithm, and calculating the three-dimensional point cloud to obtain a bottom surface equation of the target object;
step b: deleting the three-dimensional coordinates associated with the bottom surface equation in the three-dimensional point cloud;
step c: calling RANSAC algorithm, and calculating the three-dimensional point cloud with the three-dimensional coordinates deleted to obtain a plane equation of the target object;
step d: judging whether an included angle between the plane equation and the bottom surface equation is within a preset threshold value interval or not;
step e 1: if the included angle is located in a preset threshold interval, determining that the plane equation is a top surface equation;
step e2: and if the included angle is not within the preset threshold interval, deleting the three-dimensional coordinates associated with the plane equation from the three-dimensional point cloud, and returning to step c to continue the loop.
Optionally, calculating the length, width, and height of the target object according to the bottom surface equation and the top surface equation includes:
calculating the distance from the plane represented by the top surface equation to the plane represented by the bottom surface equation as the height of the target object;
judging whether at least two side equations of the target object can be obtained by using the three-dimensional point cloud;
if so, acquiring the length and the width of the target object by using a linear equation of intersection of the top surface equation and the side surface equation;
if not, the length and width of the plane represented by the top surface equation are obtained as the length and width of the target object.
Optionally, the obtaining the length and the width of the target object by using a linear equation in which a top surface equation and a side surface equation intersect includes:
obtaining a first linear equation of intersection of the top surface equation and the first side surface equation, calculating distances among a plurality of three-dimensional coordinates on the first linear equation, and taking the maximum value in the distances as the length of the target object; and
and obtaining a second linear equation of which the top surface equation and the second side surface equation are intersected, calculating the distance between a plurality of three-dimensional coordinates on the second linear equation, and taking the maximum value in the distance as the width of the target object.
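The computation above — taking the maximum distance among the three-dimensional coordinates lying on an intersection line as the object's length or width — can be sketched as follows. This is an illustrative Python sketch only; the function name and the sample edge points are hypothetical and not part of the disclosure:

```python
import numpy as np

def max_pairwise_distance(points):
    """Return the maximum distance between any two 3-D points.

    `points` is an (N, 3) array of coordinates assumed to lie on the
    line where the top-surface plane meets a side-surface plane; the
    largest separation approximates that edge's length."""
    pts = np.asarray(points, dtype=float)
    diffs = pts[:, None, :] - pts[None, :, :]   # (N, N, 3) pairwise difference vectors
    dists = np.linalg.norm(diffs, axis=-1)      # (N, N) pairwise distances
    return float(dists.max())

# Hypothetical points sampled along the first intersection line:
edge_points = np.array([[0.0, 0.0, 1.0],
                        [0.2, 0.0, 1.0],
                        [0.5, 0.0, 1.0],
                        [0.8, 0.0, 1.0]])
length = max_pairwise_distance(edge_points)     # maximum separation -> 0.8
```

The same routine would be applied to points on the second intersection line to obtain the width.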
Optionally, the obtaining, as the length and the width of the target object, the length and the width of the plane represented by the top surface equation includes:
obtaining a first boundary linear equation of a plane represented by the top surface equation, calculating the distance between a plurality of three-dimensional coordinates on the first boundary linear equation, and taking the maximum value in the distance as the length of the target object; and
and obtaining a second boundary linear equation of the plane represented by the top surface equation, calculating the distance between a plurality of three-dimensional coordinates on the second boundary linear equation, and taking the maximum value in the distance as the width of the target object.
Optionally, the method further includes:
for each coordinate point in the three-dimensional point cloud, calculating the neighbor threshold of the coordinate point at which its number of neighbors equals a preset neighbor count, wherein the neighbor threshold is the radius of the region around the coordinate point that contains those neighbors;
averaging the neighbor thresholds calculated for all coordinate points to obtain a neighbor threshold average;
sequentially judging whether the neighbor threshold calculated for each coordinate point in the three-dimensional point cloud is larger than the neighbor threshold average;
and determining each coordinate point whose neighbor threshold is larger than the neighbor threshold average as a noise point, and deleting the noise points from the three-dimensional point cloud.
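The noise-removal steps above can be sketched as follows. This is an illustrative Python sketch of one interpretation of the neighbor threshold (the distance to a point's k-th nearest neighbor); the function name and test data are hypothetical, not from the disclosure:

```python
import numpy as np

def denoise_point_cloud(points, k=10):
    """Remove noise points: a point's neighbor threshold is taken as the
    radius needed to enclose its k nearest neighbors; points whose threshold
    exceeds the average threshold are treated as noise and deleted.
    Brute-force pairwise distances for clarity."""
    pts = np.asarray(points, dtype=float)
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    dists_sorted = np.sort(dists, axis=1)
    # Column 0 is the point itself (distance 0); column k is the k-th neighbor.
    thresholds = dists_sorted[:, k]
    keep = thresholds <= thresholds.mean()
    return pts[keep]

# Dense cluster plus one distant outlier (hypothetical data):
rng = np.random.default_rng(0)
cloud = rng.normal(0.0, 0.01, size=(50, 3))
cloud = np.vstack([cloud, [5.0, 5.0, 5.0]])   # obvious noise point
cleaned = denoise_point_cloud(cloud, k=10)    # the outlier is removed
```

The outlier's distance to its 10th neighbor is far above the average threshold, so only it is discarded.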
Optionally, the preset number of neighbors is one of: 10, 20, 30, 40, 50, 60; and
averaging the neighbor thresholds calculated for all coordinate points includes: taking the arithmetic mean of the neighbor thresholds calculated for all coordinate points to obtain the neighbor threshold average.
Optionally, the method further includes:
calculating a volume of the target object from the length, width and height of the target object, the volume being calculated as: volume = length × width × height.
The application also discloses an object measuring device includes:
the acquisition module, configured to acquire a depth image of a target object, wherein the depth image is composed of a plurality of pixels, and each pixel comprises depth information of the target object;
the conversion module is used for converting the two-dimensional coordinates of each pixel under the image coordinate system into three-dimensional coordinates under a world coordinate system according to the depth information, and the three-dimensional coordinates of all pixels in the depth map under the world coordinate system form a three-dimensional point cloud;
the obtaining module is used for obtaining a bottom surface equation and a top surface equation of the target object by using the three-dimensional point cloud and a preset algorithm, wherein the bottom surface equation is an equation formed by the bottom surface of the target object in a world coordinate system, and the top surface equation is an equation formed by the top surface of the target object in the world coordinate system;
and the length, width and height calculation module is used for calculating the length, width and height of the target object according to the bottom surface equation and the top surface equation.
Optionally, the conversion module includes:
the first conversion unit is used for converting the two-dimensional coordinates of each pixel in the depth map under the image coordinate system into three-dimensional coordinates under the camera coordinate system according to the depth information;
and the second conversion unit is used for converting the three-dimensional coordinates of each pixel in the depth map under the camera coordinate system into the three-dimensional coordinates under the world coordinate system.
Optionally, the obtaining module includes:
the first operation unit is used for calling RANSAC algorithm to operate the three-dimensional point cloud to obtain a bottom surface equation of the target object;
the first deleting unit is used for deleting the three-dimensional coordinates associated with the bottom surface equation in the three-dimensional point cloud;
the second operation unit is used for calling RANSAC algorithm, operating the three-dimensional point cloud with the three-dimensional coordinates deleted and obtaining a plane equation of the target object;
the first judgment unit is used for judging whether an included angle between the plane equation and the bottom surface equation is within a preset threshold value interval or not;
the determining unit is used for determining the plane equation as a top surface equation when the included angle is located in the preset threshold interval;
and the second deleting unit is used for deleting the three-dimensional coordinates associated with the plane equation in the three-dimensional point cloud.
Optionally, the length, width, and height calculating module includes:
a calculation unit configured to calculate a distance from a plane represented by the top surface equation to a plane represented by the bottom surface equation as a height of the target object;
the second judgment unit is used for judging whether at least two side equations of the target object can be obtained by utilizing the three-dimensional point cloud;
the first obtaining unit is used for obtaining the length and the width of the target object by utilizing a linear equation of intersection of the top surface equation and the side surface equation when the side surface equation of the target object can be obtained;
a second obtaining unit configured to obtain, as the length and width of the target object, the length and width of the plane represented by the top surface equation when the side surface equation of the target object cannot be obtained.
Optionally, the first obtaining unit includes:
the first obtaining subunit is used for obtaining a first linear equation of intersection of the top surface equation and the first side surface equation;
a first calculation subunit configured to calculate distances between a plurality of three-dimensional coordinates on the first linear equation, and take a maximum value of the distances as a length of the target object;
the second obtaining subunit is used for obtaining a second linear equation of intersection of the top surface equation and the second side surface equation;
and a second calculating subunit, configured to calculate distances between the plurality of three-dimensional coordinates on the second line equation, and use a maximum value of the distances as the width of the target object.
Optionally, the second obtaining unit includes:
a third obtaining subunit, configured to obtain a first boundary line equation of the plane represented by the top surface equation;
a third calculation subunit configured to calculate distances between the plurality of three-dimensional coordinates on the first boundary line equation, and use a maximum value of the distances as a length of the target object;
a fourth obtaining subunit, configured to obtain a second boundary linear equation of the plane represented by the top surface equation;
and a fourth calculating subunit, configured to calculate distances between the plurality of three-dimensional coordinates on the second boundary line equation, and use a maximum value of the distances as the width of the target object.
Optionally, the apparatus further comprises:
the neighbor threshold calculation module is used for calculating, for each coordinate point in the three-dimensional point cloud, the neighbor threshold at which the coordinate point's number of neighbors equals the preset neighbor count, wherein the neighbor threshold is the radius of the region around the coordinate point that contains those neighbors;
the average calculation module is used for averaging the neighbor thresholds calculated for all coordinate points to obtain a neighbor threshold average;
the judging module is used for sequentially judging whether the neighbor threshold calculated for each coordinate point in the three-dimensional point cloud is larger than the neighbor threshold average;
and the deleting module is used for determining each coordinate point whose neighbor threshold is larger than the neighbor threshold average as a noise point and deleting the noise points from the three-dimensional point cloud.
Optionally, the apparatus further comprises:
a volume calculation module for calculating a volume of the target object according to the length, width and height of the target object, the volume being calculated as: volume = length × width × height.
Optionally, the object measurement device is further coupled to a target object image acquisition device to obtain a depth image of the target object, the target object image acquisition device includes:
a camera for capturing an image of the target object,
a camera mount, and
a base; the target object is placed on the base,
wherein the camera is placed higher than the target object so that the camera can photograph the top surface of the target object.
According to the technical scheme, in the embodiment of the application, the depth image of the target object is obtained firstly; then converting the two-dimensional coordinates of each pixel in the depth image under an image plane coordinate system into three-dimensional coordinates under a world coordinate system by using the depth information of the depth image, and forming three-dimensional point cloud of the depth image by using the three-dimensional coordinates of all pixels in the depth image under the world coordinate system; and then, acquiring a bottom surface equation and a top surface equation of the target object by using the three-dimensional point cloud and a preset algorithm, calculating the length, the width and the height of the target object according to the bottom surface equation and the top surface equation, and acquiring the volume of the target object by using the length, the width and the height of the target object. Therefore, compared with a mode of manually calculating the volume of the target object, the method and the device improve the calculation efficiency of the volume of the target object, so that the processing of the target object in the distribution and storage links is accelerated, and the normal operation of the whole logistics process is ensured.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art will be briefly described below. The described drawings are only some embodiments of the present application; those skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of an object measurement method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of an object measurement method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a mapping relationship among an image coordinate system, a camera coordinate system, and a world coordinate system according to an embodiment of the present disclosure;
FIG. 4 is another schematic flow chart of a method for measuring an object according to an embodiment of the present disclosure;
FIG. 5 is a schematic flow chart of an object measurement method disclosed in an embodiment of the present application;
FIG. 6 is a schematic illustration of a target object disclosed in an embodiment of the present application;
FIG. 7 is another schematic flow chart of a method for measuring an object according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of an apparatus for obtaining an image of a target object according to an embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of an object measuring device according to an embodiment of the present disclosure;
FIG. 10 is a schematic flow chart of an object measurement method disclosed in an embodiment of the present application;
FIG. 11 is a schematic diagram of a three-dimensional point cloud disclosed in an embodiment of the present application.
Detailed Description
The application provides an object measuring method and device, which are used for improving the calculation efficiency of the volume of a target object, accelerating the processing of the target object in the delivery and storage links and ensuring the normal operation of the whole logistics process.
First, a method for measuring an object provided in an embodiment of the present application is described, as shown in fig. 1, the method at least includes:
step S11: acquiring a depth image of a target object, wherein the depth image is composed of a plurality of pixels, and each pixel comprises depth information of the target object;
in the embodiment of the application, the depth camera can be used for directly acquiring the depth image of the target object; the conventional image of the target object can also be acquired by using a common camera; and then processing the conventional image (such as stereo vision) to obtain a depth image of the target object.
In an embodiment of the present application, the pixel values of the depth image may represent the depth information, and the depth information may be specifically a distance between the target object and the capturing camera.
Step S12: converting two-dimensional coordinates of each pixel under an image coordinate system into three-dimensional coordinates under a world coordinate system according to the depth information, wherein the three-dimensional coordinates of all pixels in the depth map under the world coordinate system form a three-dimensional point cloud;
in the embodiment of the present application, the image coordinate system is a two-dimensional rectangular plane coordinate system established with an optical center of a camera as an origin; the world coordinate system is a three-dimensional coordinate system established by taking the intersection point of the optical axis of the camera and the image plane as an origin; the world coordinate system is an absolute coordinate system and is used for describing the absolute position of the target object.
In the embodiment of the present application, the three-dimensional point cloud may be specifically as shown in fig. 11, and the target object (i.e., the convex portion in fig. 11) and the ground (i.e., the large-area plane in fig. 11) may be clearly represented by using the coordinates in the three-dimensional point cloud.
Step S13: obtaining a bottom surface equation and a top surface equation of the target object by using the three-dimensional point cloud and a preset algorithm, wherein the bottom surface equation is an equation formed by the bottom surface of the target object in a world coordinate system, and the top surface equation is an equation formed by the top surface of the target object in the world coordinate system;
step S14: and calculating the length, width and height of the target object according to the bottom surface equation and the top surface equation.
As can be seen from the above, in the embodiment of the present application, a depth image of a target object is first obtained; then converting the two-dimensional coordinates of each pixel in the depth image under an image plane coordinate system into three-dimensional coordinates under a world coordinate system by using the depth information of the depth image, and forming three-dimensional point cloud of the depth image by using the three-dimensional coordinates of all pixels in the depth image under the world coordinate system; and then, acquiring a bottom surface equation and a top surface equation of the target object by using the three-dimensional point cloud and a preset algorithm, calculating the length, the width and the height of the target object according to the bottom surface equation and the top surface equation, and acquiring the volume of the target object by using the length, the width and the height of the target object. Therefore, compared with a mode of manually calculating the volume of the target object, the method and the device improve the calculation efficiency of the volume of the target object, so that the processing of the target object in the distribution and storage links is accelerated, and the normal operation of the whole logistics process is ensured.
In another possible embodiment of the present application, as shown in fig. 2, step S12 in all the above embodiments may include:
step S21: converting two-dimensional coordinates of each pixel in the depth map under an image coordinate system into three-dimensional coordinates under a camera coordinate system according to the depth information;
in the embodiment of the application, the camera coordinate system is a three-dimensional coordinate system established by taking the optical center of the camera as an origin, the X axis and the Y axis of the camera coordinate system are respectively parallel to the U axis and the V axis of the image coordinate system, and the Z axis is the optical axis of the camera. In the embodiment of the present application, the correspondence between the image coordinate system, the camera coordinate system, and the world coordinate system can be seen in fig. 3.
In the present embodiment, assume that the origin O1 of the image physical coordinate system (X-Y coordinate system) has coordinates (u0, v0) in the image pixel coordinate system (U-V coordinate system), and that dx and dy denote the physical size of a pixel along the X and Y axes. Then, for any pixel in the depth image, the conversion relationship between the U-V coordinate system and the X-Y coordinate system is:

u = x/dx + u0, v = y/dy + v0    (1)

By expressing equation (1) above with a homogeneous matrix, one can obtain:

[u, v, 1]^T = [[1/dx, 0, u0], [0, 1/dy, v0], [0, 0, 1]] [x, y, 1]^T    (2)

Inverse transformation of matrix equation (2) yields:

[x, y, 1]^T = [[dx, 0, -u0·dx], [0, dy, -v0·dy], [0, 0, 1]] [u, v, 1]^T    (3)

In the embodiment of the present application, it is assumed that a pixel in the depth image has coordinates P(x, y) in the image coordinate system, P(X_C, Y_C, Z_C) in the camera coordinate system, and P(X_W, Y_W, Z_W) in the world coordinate system. The process of converting P(x, y) into P(X_C, Y_C, Z_C) is as follows:

a1: According to the corresponding relation between the image coordinate system and the camera coordinate system (the pinhole projection model), the following can be obtained:

x = f·X_C/Z_C, y = f·Y_C/Z_C    (4)

wherein f represents the focal length of the camera, and Z_C is the depth of the pixel corresponding to the P(x, y) coordinate, read from the depth image;

b1: Expressing equation (4) above with a homogeneous equation yields:

Z_C [x, y, 1]^T = [[f, 0, 0, 0], [0, f, 0, 0], [0, 0, 1, 0]] [X_C, Y_C, Z_C, 1]^T    (5)

c1: Substituting the above equation (3) into equation (5) yields:

Z_C [u, v, 1]^T = [[f/dx, 0, u0, 0], [0, f/dy, v0, 0], [0, 0, 1, 0]] [X_C, Y_C, Z_C, 1]^T    (6)

d1: Transforming equation (6) yields:

X_C = (u - u0)·Z_C·dx/f, Y_C = (v - v0)·Z_C·dy/f    (7)

Using equation (7) above, the two-dimensional coordinates P(x, y) of the pixel in the image coordinate system can be converted into the three-dimensional coordinates P(X_C, Y_C, Z_C) of the pixel in the camera coordinate system.
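The back-projection of equation (7) — recovering camera coordinates from a pixel with known depth — can be sketched as follows. This is an illustrative Python sketch; the intrinsic parameter values are hypothetical, and fx = f/dx, fy = f/dy denote the focal length in pixel units:

```python
import numpy as np

def pixel_to_camera(u, v, depth, fx, fy, u0, v0):
    """Back-project pixel (u, v) with depth Z_C into camera coordinates.

    fx and fy are the focal length divided by the pixel sizes dx and dy;
    (u0, v0) is the principal point. Standard pinhole-model inversion."""
    x_c = (u - u0) * depth / fx
    y_c = (v - v0) * depth / fy
    return np.array([x_c, y_c, depth])

# Hypothetical intrinsics; a pixel at the principal point, 2 m from the camera:
p = pixel_to_camera(u=320, v=240, depth=2.0, fx=525.0, fy=525.0, u0=320, v0=240)
# The principal-point pixel maps onto the optical axis: (0, 0, 2).
```

Applying this to every pixel of the depth image produces the point cloud in camera coordinates.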
Step S22: and converting the three-dimensional coordinates of each pixel in the depth map in the camera coordinate system into the three-dimensional coordinates in the world coordinate system.
In the embodiment of the present application, the process of converting P(X_C, Y_C, Z_C) into P(X_W, Y_W, Z_W) is as follows:

a2: The corresponding relation between the camera coordinate system and the world coordinate system is expressed with a translation vector t and a rotation matrix R:

[X_C, Y_C, Z_C]^T = R [X_W, Y_W, Z_W]^T + t    (8)

wherein R is a 3 × 3 orthogonal unit matrix, and t is a three-dimensional translation vector;

b2: Substituting equation (8) into equation (6) yields:

Z_C [u, v, 1]^T = M1·M2 [X_W, Y_W, Z_W, 1]^T    (9)

wherein M1 = [[f/dx, 0, u0, 0], [0, f/dy, v0, 0], [0, 0, 1, 0]] is the intrinsic matrix of the camera, and M2 = [[R, t], [0, 1]] is the extrinsic matrix formed by R and t.

Using equation (9), the coordinates P(X_C, Y_C, Z_C) of the pixel in the camera coordinate system can be converted into the three-dimensional coordinates P(X_W, Y_W, Z_W) of the pixel in the world coordinate system.
As can be seen from the above, in the embodiment of the present application, the two-dimensional coordinates of each pixel in the depth map in the image coordinate system may be first converted into the three-dimensional coordinates in the camera coordinate system, and then the three-dimensional coordinates of each pixel in the camera coordinate system may be converted into the three-dimensional coordinates in the world coordinate system.
In the embodiment of the present application, the preset algorithm in all the embodiments may specifically be the RANSAC (Random Sample Consensus) algorithm; as shown in fig. 4, step S13 in all the above embodiments may include:
step S41: calling RANSAC algorithm, and calculating the three-dimensional point cloud to obtain a bottom surface equation of the target object;
In the embodiment of the application, the RANSAC algorithm operates on the three-dimensional point cloud as follows:
firstly, three non-collinear three-dimensional coordinates in the three-dimensional point cloud are taken, and a plane equation is established from them; then, it is judged whether other three-dimensional coordinates in the three-dimensional point cloud are associated with the plane equation; if so, the plane equation is refined using those three-dimensional coordinates; finally, the plane represented by the resulting plane equation is the plane in the depth image with the largest area, covering the most three-dimensional coordinates. Since the target object is generally placed on a plane during the actual acquisition of the depth image, the plane with the largest area covering the most three-dimensional coordinates can be determined to be the bottom surface of the target object.
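The plane-fitting procedure described above can be sketched with a minimal RANSAC implementation. An illustrative sketch, not the patent's exact algorithm: a plane hypothesis is built from three sampled points and scored by how many points of the cloud it covers; the synthetic test scene is hypothetical:

```python
import numpy as np

def ransac_plane(points, n_iters=200, inlier_tol=0.01, seed=0):
    """Fit the dominant plane in a point cloud with RANSAC.

    Returns (normal, d) for the plane n·p + d = 0 that covers the most
    points; a plane hypothesis needs three non-collinear samples."""
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    best_normal, best_d, best_count = None, None, -1
    for _ in range(n_iters):
        i, j, k = rng.choice(len(pts), size=3, replace=False)
        normal = np.cross(pts[j] - pts[i], pts[k] - pts[i])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                 # degenerate (collinear) sample
            continue
        normal = normal / norm
        d = -normal @ pts[i]
        count = int((np.abs(pts @ normal + d) < inlier_tol).sum())
        if count > best_count:           # keep the plane covering the most points
            best_normal, best_d, best_count = normal, d, count
    return best_normal, best_d

# Synthetic scene: a large ground plane z = 0 plus a few off-plane points.
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(-1, 1, 300),
                          rng.uniform(-1, 1, 300),
                          np.zeros(300)])
stray = rng.uniform(0.5, 1.0, size=(20, 3))
normal, d = ransac_plane(np.vstack([ground, stray]))
# The dominant plane is the ground, so the normal is (0, 0, ±1) and d = 0.
```

With the inliers of this bottom-surface plane deleted, the same routine finds the next-largest plane, as in steps S42 and S43.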
Step S42: removing points belonging to the ground in the three-dimensional point cloud, namely deleting three-dimensional coordinates associated with the bottom surface equation;
in this embodiment, the three-dimensional coordinates covered by the plane formed by the bottom surface equation may be deleted from the three-dimensional point cloud.
Step S43: calling RANSAC algorithm, and calculating the three-dimensional point cloud with the three-dimensional coordinates deleted to obtain a plane equation of the target object;
In the embodiment of the application, the RANSAC algorithm is called to operate on the three-dimensional point cloud from which the three-dimensional coordinates have been deleted, and the plane obtained is the plane of the target object that, apart from the bottom surface, covers the most three-dimensional coordinates; since it cannot be determined in practical applications whether this plane is the top surface of the target object, step S44 is used to judge whether the plane represented by the current plane equation is the top surface of the target object.
Step S44: judging whether an included angle between the plane equation and the bottom surface equation is located in a first preset threshold interval or not; if so, executing step S45, otherwise, executing step S46;
in the embodiment of the present application, the first preset threshold interval is [-5 degrees, +5 degrees]. Theoretically, the bottom surface is parallel to the top surface and the included angle between the bottom surface equation and the top surface equation is 0 degrees; in practical calculation, however, fitting errors in the plane equations cause the included angle to deviate slightly from 0 degrees, so the first preset threshold interval is set to [-5 degrees, +5 degrees].
In the embodiment of the present application, the normals of the above plane equation and the bottom surface equation may be determined, and the included angle between the two normals may then be computed; this angle is the included angle between the plane equation and the bottom surface equation.
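The normal-angle test just described can be sketched in a few lines of Python. This is an illustrative fragment, not taken from the patent: each plane is assumed to be stored as its coefficients (a, b, c, d) of a*x + b*y + c*z + d = 0, and the function name is a placeholder.

```python
import numpy as np

def plane_angle_deg(plane_a, plane_b):
    """Angle in degrees between two planes given as (a, b, c, d)
    coefficients of a*x + b*y + c*z + d = 0, computed from their normals."""
    n1 = np.asarray(plane_a[:3], dtype=float)
    n2 = np.asarray(plane_b[:3], dtype=float)
    # abs() folds the result into [0, 90] degrees, so the [-5, +5] degree
    # interval test reduces to "angle <= 5 degrees"
    cos_angle = abs(np.dot(n1, n2)) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# A plane tilted 5 degrees from horizontal vs. a horizontal plane:
tilted = (0.0, np.sin(np.radians(5)), np.cos(np.radians(5)), 0.0)
horizontal = (0.0, 0.0, 1.0, 0.0)
angle = plane_angle_deg(tilted, horizontal)
```

With this helper, the step S44 test becomes simply `plane_angle_deg(plane_eq, bottom_eq) <= 5.0`.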
Step S45: determining the plane equation as a top surface equation;
step S46: and removing the points belonging to the plane in the three-dimensional point cloud, namely deleting the three-dimensional coordinates associated with the plane equation, and then circularly executing the step S43.
In the embodiment of the application, if the obtained plane equation is not the top surface equation of the target object, the points may be removed from the three-dimensional point cloud, that is, the three-dimensional coordinates associated with the plane equation are deleted, and then the RANSAC algorithm is continuously called to perform the operation on the three-dimensional point cloud until the top surface equation of the target object is obtained.
From the above, the bottom surface equation and the top surface equation of the target object can be determined by the above method.
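The loop of steps S41 to S46 can be illustrated with a minimal RANSAC plane-fitting sketch in Python. The function names, distance threshold, and iteration count below are assumptions for illustration, not values specified by the patent; repeated calls, each followed by deleting the returned inliers, yield first the bottom surface and then the remaining dominant planes.

```python
import numpy as np

def fit_plane(p1, p2, p3):
    """Plane (a, b, c, d) through three points, with a*x + b*y + c*z + d = 0."""
    normal = np.cross(p2 - p1, p3 - p1)
    norm = np.linalg.norm(normal)
    if norm < 1e-12:
        return None  # collinear sample, no unique plane
    normal = normal / norm
    return np.append(normal, -normal.dot(p1))

def ransac_plane(points, threshold=0.01, iterations=500, rng=None):
    """Return (plane, inlier_mask) for the plane covering the most points."""
    rng = np.random.default_rng(rng)
    best_plane, best_mask, best_count = None, None, 0
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        plane = fit_plane(*sample)
        if plane is None:
            continue
        # distance of every point to the candidate plane (unit normal)
        dist = np.abs(points @ plane[:3] + plane[3])
        mask = dist < threshold
        count = int(mask.sum())
        if count > best_count:
            best_plane, best_mask, best_count = plane, mask, count
    return best_plane, best_mask
```

The bottom surface is the first plane returned; deleting `points[mask]` and calling `ransac_plane` again implements steps S42 and S43.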
In another possible embodiment of the present application, as shown in fig. 5, step S14 in all the above embodiments may include:
step S51: calculating the distance from the plane represented by the top surface equation to the plane represented by the bottom surface equation as the height of the target object;
in practical applications, the bottom surface of the target object is a rugged plane constructed from scattered three-dimensional coordinate points. To increase accuracy, the distances from all three-dimensional coordinate points on the top surface of the target object to the plane represented by the bottom surface equation can therefore first be obtained, and these distances averaged to give the height of the target object. Calculating the height over every point in this manner involves a large amount of computation, however, so to improve efficiency the three-dimensional coordinate points on the top surface of the target object can be sampled, and only the distances from the sampled points to the bottom surface plane averaged.
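The height computation just described, averaging point-to-plane distances over all top-surface points or over a sample of them, might be sketched as follows. This is an illustrative Python fragment; the function and parameter names are assumptions.

```python
import numpy as np

def object_height(top_points, bottom_plane, sample_size=None, rng=None):
    """Mean distance from (optionally sampled) top-surface points to the
    bottom plane given as coefficients (a, b, c, d) of a*x + b*y + c*z + d = 0."""
    pts = np.asarray(top_points, dtype=float)
    if sample_size is not None and sample_size < len(pts):
        # subsample the top surface to cut the computation cost
        idx = np.random.default_rng(rng).choice(len(pts), sample_size, replace=False)
        pts = pts[idx]
    normal = np.asarray(bottom_plane[:3], dtype=float)
    d = float(bottom_plane[3])
    # point-to-plane distance: |a*x + b*y + c*z + d| / ||(a, b, c)||
    return float(np.mean(np.abs(pts @ normal + d) / np.linalg.norm(normal)))
```

Passing `sample_size=None` averages over every top-surface point (the accurate variant); a small `sample_size` gives the faster variant.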
Step S52: judging whether at least two side equations of the target object can be obtained by using the three-dimensional point cloud; if so, executing step S53, otherwise, executing step S54;
in the embodiment of the application, specifically, a RANSAC algorithm may first be called to process the three-dimensional point cloud to obtain a plane equation; then, it is judged whether the plane equation and the bottom surface equation are orthogonal, namely, whether the included angle between the plane equation and the bottom surface equation is 90 degrees (which may in practice be realized as a preset threshold interval such as [85 degrees, 95 degrees], analogous to the first preset threshold interval described above); if so, the plane equation is determined to be a side surface equation. In either case, the three-dimensional coordinates associated with the current plane equation are then deleted from the three-dimensional point cloud, and the RANSAC algorithm continues to be called on the three-dimensional point cloud with those coordinates deleted, until at least two side surface equations are obtained or no three-dimensional coordinates remain in the point cloud.
Step S53: obtaining the length and the width of the target object by using a linear equation of intersection of the top surface equation and the side surface equation;
in order to obtain the length and width of the target object, at least two side surfaces of the target object (shown by hatching in fig. 6) need to be captured during the depth image acquisition of the target object; the top surface of the target object then intersects these two side surfaces in exactly two straight lines, namely straight line 1 and straight line 2 shown in fig. 6. In the embodiment of the present application, after obtaining straight lines 1 and 2, the length and width of the target object can be obtained in the following manner:
obtaining a first linear equation (which may be an equation corresponding to a straight line 1) in which the top surface equation and the first side surface equation intersect, calculating distances between a plurality of three-dimensional coordinates on the first linear equation, and taking a maximum value of the distances as a length (or as a width) of the target object; and
and obtaining a second linear equation (which can be an equation corresponding to the straight line 2) in which the top surface equation and the second side surface equation intersect, calculating the distances between a plurality of three-dimensional coordinates on the second linear equation, and taking the maximum value in the distances as the width (or the length) of the target object.
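The "maximum distance between three-dimensional coordinates on a line" computation used in both steps above can be sketched as follows. This is illustrative Python, not the patent's implementation; a brute-force pairwise comparison is assumed, which is adequate for the small number of points lying on one edge.

```python
import numpy as np

def edge_length(points_on_line):
    """Longest distance between any two of the given 3-D points,
    taken as the extent of the object along that edge line."""
    pts = np.asarray(points_on_line, dtype=float)
    # pairwise difference tensor via broadcasting: shape (n, n, 3)
    diff = pts[:, None, :] - pts[None, :, :]
    return float(np.sqrt((diff ** 2).sum(axis=-1)).max())
```

Applying `edge_length` to the points on straight line 1 and on straight line 2 yields the two values taken as the length and width of the target object.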
Step S54: and obtaining the length and the width of the plane represented by the top surface equation as the length and the width of the target object.
In the present example, four boundary line equations can theoretically be obtained from the top surface equation, but in one embodiment, for simplicity, only two intersecting boundary line equations of the top surface are needed, since these are sufficient to define the length and width of the plane represented by the top surface equation. In one embodiment, the specific steps are as follows:
obtaining a first boundary line equation of a plane represented by the top surface equation, calculating the distance between a plurality of three-dimensional coordinates on the first boundary line equation, and taking the maximum value in the distance as the length (or the width) of the target object; and
and obtaining a second boundary line equation of the plane represented by the top surface equation, calculating the distance between a plurality of three-dimensional coordinates on the second boundary line equation, and taking the maximum value of the distances as the width (or the length) of the target object.
From the above, in the embodiment of the present application, the length and width of the target object can be determined.
Those skilled in the art will recognize that noise is inevitably introduced into the three-dimensional point cloud obtained from the depth image due to factors such as the accuracy of the acquisition camera itself and environmental interference. Therefore, in another possible embodiment of the present application, the noise in the three-dimensional point cloud can be further removed by processing. There are various methods in the art for denoising three-dimensional point clouds.
In a preferred embodiment, the denoising processing is performed according to the adjacency relation of the coordinate points in the three-dimensional point cloud. In the three-dimensional point cloud formed by the depth camera, the coordinate points around a certain coordinate point a may be regarded as the "neighbors" of coordinate point a, and a "neighbor threshold" is the distance around a coordinate point within which its neighbors lie.
Each coordinate point in the three-dimensional point cloud lies at some distance from every other coordinate point, so any other point becomes a neighbor at a sufficiently large threshold. Therefore, for a certain coordinate point a, the value of the neighbor threshold at which it has exactly N neighbors (N being a positive integer) can be calculated. As an example, when a coordinate point a has 100 nearest neighbors, and the farthest of those 100 neighbors is 20 coordinate units away from coordinate point a, it may be considered that these 100 neighbors all lie within a neighbor threshold of 20 for coordinate point a.
Based on this situation, as shown in fig. 7, the method in all embodiments disclosed in the present application may include:
step S71: for each coordinate point in the three-dimensional point cloud, calculating a neighbor threshold value of the coordinate point when the neighbor number of the coordinate point is equal to a preset neighbor number, where the preset neighbor number may be set according to an actual situation, for example, according to a research, the preset neighbor number may be set to 10, 20, 30, 40, 50, or 60, where 30, 40, or 50 are taken as a preferred value;
step S72: averaging the calculated adjacent threshold values of all the coordinate points to obtain an adjacent threshold average value;
in this embodiment of the present application, the arithmetic mean of the calculated neighbor thresholds of all the coordinate points may specifically be taken; for example, if the point cloud contains three coordinate points whose neighbor thresholds are 10, 20, and 30, respectively, then the neighbor threshold mean value is calculated as: (10 + 20 + 30) / 3 = 20.
Step S73: sequentially judging whether the calculated adjacent threshold value of each coordinate point in the three-dimensional point cloud is larger than the average value of the adjacent threshold values; if so, go to step S74; otherwise, ending the flow;
step S74: and determining the coordinate point as a noise point, and deleting the noise point from the three-dimensional point cloud.
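Steps S71 to S74 amount to a statistical outlier-removal pass over the point cloud. The following compact Python sketch is illustrative only (the O(n²) distance matrix is assumed acceptable for modest cloud sizes, and the function name is a placeholder):

```python
import numpy as np

def remove_noise(points, k=30):
    """Steps S71-S74: for each point, take the distance to its k-th nearest
    neighbour as its 'neighbor threshold', average the thresholds over all
    points, and drop every point whose threshold exceeds the average."""
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))  # full pairwise distance matrix
    # sorted row: column 0 is the point itself (distance 0), so column k
    # is the distance to the k-th nearest neighbour
    kth = np.sort(dist, axis=1)[:, k]
    keep = kth <= kth.mean()
    return pts[keep]
```

An isolated point sits far from its k-th nearest neighbour, so its threshold is well above the average and it is deleted, while points inside the object's surface survive.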
In another possible embodiment of the present application, as shown in fig. 10, the method may further include:
step S15: calculating a volume of the target object from the length, width and height of the target object, the volume being calculated as: volume = length × width × height;
from the above, by adopting the method disclosed in the embodiment of the present application, the volume of the target object can also be calculated. Through the above description of the method embodiments, those skilled in the art can clearly understand that the present application can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation manner in many cases. Based on such understanding, the technical solutions of the present application may be essentially or partially implemented in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media that can store program codes, such as Read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and so on.
The application also provides a device for acquiring the image of the target object, as shown in fig. 8, the device comprises a camera, a camera bracket and a base; the target object can be placed on the base, and the placement position of the camera is higher than the target object, so that the camera can shoot the top surface of the target object.
In practical application, when the camera is a depth camera, the device can be used for directly acquiring a depth image of a target object; when the camera is a common camera, the depth image of the target object can be obtained by further processing the image obtained by the device.
In the embodiment of the application, the base of the device can be replaced by a conveyor belt.
Corresponding to the embodiments of the object measurement method provided by the present application, the present application further provides an object measurement apparatus, which may be implemented in hardware, such as a processor of a camera, a processor of a computer, a processor of a mobile terminal, and the like, or may be implemented on a remote server or a client deployed remotely and communicating with a network. The object measuring device is connected to the device for acquiring an image of a target object shown in fig. 8, or remotely communicates with the device for acquiring an image of a target object shown in fig. 8 through a network or the like, so as to receive a captured depth image of a target object. As shown in fig. 9, the object measuring apparatus includes at least:
a first obtaining module 91, configured to obtain a depth image of a target object, where the depth image is composed of multiple pixels, and each pixel includes depth information of the target object;
the conversion module 92 is configured to convert the two-dimensional coordinates of each pixel in the image coordinate system into three-dimensional coordinates in the world coordinate system according to the depth information, where the three-dimensional coordinates of all pixels in the depth map in the world coordinate system form a three-dimensional point cloud;
an obtaining module 93, configured to obtain a bottom surface equation and a top surface equation of the target object by using the three-dimensional point cloud and a preset algorithm, where the bottom surface equation is an equation formed by the bottom surface of the target object in a world coordinate system, and the top surface equation is an equation formed by the top surface of the target object in the world coordinate system;
and a length, width and height calculation module 94 for calculating the length, width and height of the target object according to the bottom surface equation and the top surface equation.
As can be seen from the above, in the embodiment of the present application, a depth image of a target object is first obtained; then converting the two-dimensional coordinates of each pixel in the depth image under an image plane coordinate system into three-dimensional coordinates under a world coordinate system by using the depth information of the depth image, and forming three-dimensional point cloud of the depth image by using the three-dimensional coordinates of all pixels in the depth image under the world coordinate system; then, acquiring a bottom surface equation and a top surface equation of the target object by using the three-dimensional point cloud and a preset algorithm, and calculating the length, the width and the height of the target object according to the bottom surface equation and the top surface equation; and the volume of the target object can be calculated. Therefore, compared with a mode of manually calculating the object volume, the method and the device improve the calculation efficiency of the object volume, so that the processing of the objects in the distribution and storage links is accelerated, and the normal operation of the whole logistics process is ensured.
In another possible embodiment of the present application, the conversion module 92 in all the above embodiments includes:
the first conversion unit is used for converting the two-dimensional coordinates of each pixel in the depth map under the image coordinate system into three-dimensional coordinates under the camera coordinate system according to the depth information;
and the second conversion unit is used for converting the three-dimensional coordinates of each pixel in the depth map under the camera coordinate system into the three-dimensional coordinates under the world coordinate system.
From the above, in the embodiment of the present application, the two-dimensional coordinates of each pixel in the depth map in the image coordinate system can be converted into the three-dimensional coordinates in the world coordinate system.
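The two-stage conversion performed by the first and second conversion units can be illustrated with the standard pinhole camera model. This is a sketch under assumed intrinsics (fx, fy, cx, cy) and extrinsics (rotation R, translation t); the patent does not specify the camera parameters, so all names here are illustrative.

```python
import numpy as np

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """First conversion unit: back-project pixel (u, v) with depth z into
    camera coordinates via the pinhole model:
    x = (u - cx) * z / fx,  y = (v - cy) * z / fy."""
    z = float(depth)
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def camera_to_world(p_cam, R, t):
    """Second conversion unit: apply the camera extrinsics to obtain world
    coordinates: p_world = R @ p_cam + t."""
    return R @ p_cam + t
```

Applying these two functions to every pixel of the depth map yields the three-dimensional point cloud used by the obtaining module.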
In another possible embodiment of the present application, the obtaining module 93 in all the above embodiments may include:
the first operation unit is used for calling RANSAC algorithm to operate the three-dimensional point cloud to obtain a bottom surface equation of the target object;
the first deleting unit is used for deleting the three-dimensional coordinates associated with the bottom surface equation in the three-dimensional point cloud;
the second operation unit is used for calling RANSAC algorithm, operating the three-dimensional point cloud with the three-dimensional coordinates deleted and obtaining a plane equation of the target object;
the first judgment unit is used for judging whether an included angle between the plane equation and the bottom surface equation is within a preset threshold value interval or not;
the determining unit is used for determining the plane equation as a top surface equation when the included angle is located in the preset threshold interval;
and the second deleting unit is used for deleting the three-dimensional coordinates associated with the plane equation in the three-dimensional point cloud.
From the above, it can be seen that the bottom surface equation and the top surface equation of the target object can be determined by using the above apparatus.
In another possible embodiment of the present application, the length, width and height calculating module 94 in all the above embodiments may include:
a calculation unit configured to calculate a distance from a plane represented by the top surface equation to a plane represented by the bottom surface equation as a height of the target object;
the second judgment unit is used for judging whether at least two side equations of the target object can be obtained by utilizing the three-dimensional point cloud;
the first obtaining unit is used for obtaining the length and the width of the target object by utilizing a linear equation of intersection of the top surface equation and the side surface equation when the side surface equation of the target object can be obtained;
in an embodiment of the present application, the first obtaining unit includes: the first obtaining subunit is used for obtaining a first linear equation of intersection of the top surface equation and the first side surface equation; a first calculation subunit configured to calculate distances between a plurality of three-dimensional coordinates on the first linear equation, and take a maximum value of the distances as a length of the target object; the second obtaining subunit is used for obtaining a second linear equation of intersection of the top surface equation and the second side surface equation; and a second calculating subunit, configured to calculate distances between the plurality of three-dimensional coordinates on the second line equation, and use a maximum value of the distances as the width of the target object.
A second obtaining unit configured to obtain, as the length and width of the target object, the length and width of the plane represented by the top surface equation when the side surface equation of the target object cannot be obtained.
In an embodiment of the present application, the second obtaining unit includes: a third obtaining subunit, configured to obtain a first boundary line equation of the plane represented by the top surface equation; a third calculation subunit configured to calculate distances between the plurality of three-dimensional coordinates on the first boundary line equation, and use a maximum value of the distances as a length of the target object; a fourth obtaining subunit, configured to obtain a second boundary linear equation of the plane represented by the top surface equation; and a fourth calculating subunit, configured to calculate distances between the plurality of three-dimensional coordinates on the second boundary line equation, and use a maximum value of the distances as the width of the target object.
Therefore, by adopting the device, the length, the width and the height of the target object can be calculated.
In another possible embodiment of the present application, the apparatus in all the above embodiments may further include:
the neighbor threshold calculation module is used for calculating a neighbor threshold of each coordinate point in the three-dimensional point cloud when the neighbor number of the coordinate point is equal to the preset neighbor number, wherein the neighbor threshold is an area within a certain distance near the coordinate point;
the average calculation module is used for averaging the adjacent threshold values calculated by all the coordinate points to obtain an adjacent threshold average value;
the judging module is used for sequentially judging whether the neighbor threshold calculated by each coordinate point in the three-dimensional point cloud is larger than the neighbor threshold average value;
and the deleting module is used for determining the coordinate point with the adjacent threshold value larger than the average value of the adjacent threshold value as a noise point and deleting the noise point in the three-dimensional point cloud.
Therefore, by adopting the device, the noise points in the three-dimensional point cloud can be deleted.
In another possible embodiment of the present application, the apparatus in all the above embodiments may further include:
a volume calculation module for calculating a volume of the target object according to the length, width and height of the target object, the volume being calculated as: volume = length × width × height.
From the above, with the device of the present application, the volume of the target object can also be calculated.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is merely exemplary of the present application and is presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing is directed to embodiments of the present application and it is noted that numerous modifications and adaptations may be made by those skilled in the art without departing from the principles of the present application and are intended to be within the scope of the present application.

Claims (18)

1. An object measuring method, comprising:
acquiring a depth image of a target object, wherein the depth image is composed of a plurality of pixels, and each pixel comprises depth information of the target object;
converting two-dimensional coordinates of each pixel under an image coordinate system into three-dimensional coordinates under a world coordinate system according to the depth information, wherein the three-dimensional coordinates of all pixels in the depth map under the world coordinate system form a three-dimensional point cloud;
obtaining a bottom surface equation and a top surface equation of the target object by using the three-dimensional point cloud and a preset algorithm, wherein the bottom surface equation is an equation formed by the bottom surface of the target object in a world coordinate system, and the top surface equation is an equation formed by the top surface of the target object in the world coordinate system;
and calculating the length, width and height of the target object according to the bottom surface equation and the top surface equation.
2. The method of claim 1, wherein converting two-dimensional coordinates of each pixel in an image coordinate system to three-dimensional coordinates in a world coordinate system according to the depth information comprises:
converting two-dimensional coordinates of each pixel in the depth map under an image coordinate system into three-dimensional coordinates under a camera coordinate system according to the depth information;
and converting the three-dimensional coordinates of each pixel in the depth map in the camera coordinate system into the three-dimensional coordinates in the world coordinate system.
3. The method of claim 1, wherein the preset algorithm is RANSAC algorithm, and the obtaining of the bottom surface equation and the top surface equation of the target object by using the three-dimensional point cloud and the preset algorithm comprises:
step a: calling RANSAC algorithm, and calculating the three-dimensional point cloud to obtain a bottom surface equation of the target object;
step b: deleting the three-dimensional coordinates associated with the bottom surface equation in the three-dimensional point cloud;
step c: calling RANSAC algorithm, and calculating the three-dimensional point cloud with the three-dimensional coordinates deleted to obtain a plane equation of the target object;
step d: judging whether an included angle between the plane equation and the bottom surface equation is within a preset threshold value interval or not;
step e 1: if the included angle is located in a preset threshold interval, determining that the plane equation is a top surface equation;
step e 2: and if the included angle is not within the preset threshold interval, deleting the three-dimensional coordinates associated with the plane equation from the three-dimensional point cloud, and returning to step c to continue the loop.
4. The method of claim 1, wherein calculating the length, width, and height of the target object from the bottom surface equation and the top surface equation comprises:
calculating the distance from the plane represented by the top surface equation to the plane represented by the bottom surface equation as the height of the target object;
judging whether at least two side equations of the target object can be obtained by using the three-dimensional point cloud;
if so, acquiring the length and the width of the target object by using a linear equation of intersection of the top surface equation and the side surface equation;
if not, the length and width of the plane represented by the top surface equation are obtained as the length and width of the target object.
5. The method of claim 4, wherein obtaining the length and width of the target object using a line equation that intersects a top surface equation with a side surface equation comprises:
obtaining a first linear equation of intersection of the top surface equation and the first side surface equation, calculating distances among a plurality of three-dimensional coordinates on the first linear equation, and taking the maximum value in the distances as the length of the target object; and
and obtaining a second linear equation of which the top surface equation and the second side surface equation are intersected, calculating the distance between a plurality of three-dimensional coordinates on the second linear equation, and taking the maximum value in the distance as the width of the target object.
6. The method of claim 4, wherein obtaining the length and width of the plane represented by the top surface equation as the length and width of the target object comprises:
obtaining a first boundary linear equation of a plane represented by the top surface equation, calculating the distance between a plurality of three-dimensional coordinates on the first boundary linear equation, and taking the maximum value in the distance as the length of the target object; and
and obtaining a second boundary linear equation of the plane represented by the top surface equation, calculating the distance between a plurality of three-dimensional coordinates on the second boundary linear equation, and taking the maximum value in the distance as the width of the target object.
7. The method according to any one of claims 1-6, further comprising:
for each coordinate point in the three-dimensional point cloud, calculating an adjacent threshold value of the coordinate point when the number of neighbors is equal to a preset number of neighbors, wherein the adjacent threshold value is an area within a certain distance near the coordinate point;
averaging the adjacent threshold values calculated by all the coordinate points to obtain an adjacent threshold average value;
sequentially judging whether the neighbor threshold calculated by each coordinate point in the three-dimensional point cloud is larger than the neighbor threshold average value;
and determining the coordinate point with the adjacent threshold value larger than the average value of the adjacent threshold values as a noise point, and deleting the noise point in the three-dimensional point cloud.
8. The method of claim 7, wherein the preset number of neighbors is one of: 10, 20, 30, 40, 50, 60; and
averaging the calculated neighbor thresholds for all coordinate points includes: and carrying out arithmetic mean on the adjacent threshold values calculated by all the coordinate points to obtain the average value of the adjacent threshold values.
9. The method of claim 1, further comprising:
calculating a volume of the target object from the length, width and height of the target object, the volume being calculated as: volume = length × width × height.
10. An object measuring device, comprising:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a depth image of a target object, the depth image is composed of a plurality of pixels, and each pixel comprises depth information of the target object;
the conversion module is used for converting the two-dimensional coordinates of each pixel under the image coordinate system into three-dimensional coordinates under a world coordinate system according to the depth information, and the three-dimensional coordinates of all pixels in the depth map under the world coordinate system form a three-dimensional point cloud;
the obtaining module is used for obtaining a bottom surface equation and a top surface equation of the target object by using the three-dimensional point cloud and a preset algorithm, wherein the bottom surface equation is an equation formed by the bottom surface of the target object in a world coordinate system, and the top surface equation is an equation formed by the top surface of the target object in the world coordinate system;
and the length, width and height calculation module is used for calculating the length, width and height of the target object according to the bottom surface equation and the top surface equation.
11. The apparatus of claim 10, wherein the conversion module comprises:
a first conversion unit configured to convert, according to the depth information, the two-dimensional coordinates of each pixel in the depth map under the image coordinate system into three-dimensional coordinates under the camera coordinate system; and
a second conversion unit configured to convert the three-dimensional coordinates of each pixel in the depth map under the camera coordinate system into three-dimensional coordinates under the world coordinate system.
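The two-stage conversion of claim 11 is the standard pinhole back-projection: a pixel (u, v) with depth Z maps into camera coordinates via the intrinsics, then into world coordinates via a rigid transform. A sketch, with the intrinsic parameters (fx, fy, cx, cy) and extrinsics (R, t) assumed to come from camera calibration, which the patent does not detail:

```python
import numpy as np

def pixel_to_world(u, v, depth, fx, fy, cx, cy, R, t):
    """Back-project a depth pixel: image -> camera -> world.
    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    World point: p_w = R @ p_c + t, with R, t from calibration."""
    p_cam = np.array([(u - cx) * depth / fx,
                      (v - cy) * depth / fy,
                      depth])
    return R @ p_cam + t
```

With an identity extrinsic, the pixel at the principal point back-projects straight onto the optical axis at the measured depth.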
12. The apparatus of claim 10, wherein the obtaining module comprises:
a first operation unit configured to invoke a RANSAC algorithm to operate on the three-dimensional point cloud to obtain a bottom surface equation of the target object;
a first deleting unit configured to delete, from the three-dimensional point cloud, the three-dimensional coordinates associated with the bottom surface equation;
a second operation unit configured to invoke the RANSAC algorithm to operate on the three-dimensional point cloud with those three-dimensional coordinates deleted, obtaining a plane equation of the target object;
a first judging unit configured to judge whether an included angle between the plane equation and the bottom surface equation is within a preset threshold interval;
a determining unit configured to determine the plane equation as the top surface equation when the included angle is within the preset threshold interval; and
a second deleting unit configured to delete, from the three-dimensional point cloud, the three-dimensional coordinates associated with the plane equation.
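Claim 12's pipeline — fit the dominant plane by RANSAC, remove its inliers, fit again, and accept the new plane as the top surface only if its angle to the base lies in a threshold interval — can be sketched with a minimal RANSAC plane fit. The iteration count, tolerance, and helper names below are illustrative choices, not values from the patent:

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.01, rng=None):
    """Fit a plane n·x + d = 0 by RANSAC.
    Returns (unit normal n, offset d, boolean inlier mask)."""
    if rng is None:
        rng = np.random.default_rng(0)
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:   # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        d = -n @ p0
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best

def plane_angle(n1, n2):
    """Included angle between two planes via their unit normals, degrees."""
    c = abs(np.clip(n1 @ n2, -1.0, 1.0))
    return np.degrees(np.arccos(c))
```

After the first fit, one would keep `points[~mask]` and fit again; `plane_angle` then implements the judging unit's check (e.g. accept the plane as the top surface when the angle is near 0°).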
13. The apparatus of claim 10, wherein the length, width and height calculation module comprises:
a calculation unit configured to calculate a distance from the plane represented by the top surface equation to the plane represented by the bottom surface equation as the height of the target object;
a second judging unit configured to judge whether at least two side surface equations of the target object can be obtained using the three-dimensional point cloud;
a first obtaining unit configured to obtain, when the side surface equations of the target object can be obtained, the length and width of the target object using the linear equations of the lines where the top surface equation intersects the side surface equations; and
a second obtaining unit configured to take, when the side surface equations of the target object cannot be obtained, the length and width of the plane represented by the top surface equation as the length and width of the target object.
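The height in claim 13 is a plane-to-plane distance. For a (near-)parallel pair of fitted planes this reduces to the point-to-plane distance from any point on the top plane to the bottom plane n·x + d = 0 with unit normal n. A one-function sketch (the function name is illustrative):

```python
import numpy as np

def plane_to_plane_height(n_bottom, d_bottom, point_on_top):
    """Height of the object: distance from a point on the fitted top
    plane to the bottom plane n·x + d = 0 (n must be unit length)."""
    return abs(n_bottom @ point_on_top + d_bottom)
```

For exactly parallel planes n·x + d1 = 0 and n·x + d2 = 0 this equals |d1 − d2| regardless of which top-plane point is chosen.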
14. The apparatus of claim 13, wherein the first obtaining unit comprises:
a first obtaining subunit configured to obtain a first linear equation of the line where the top surface equation intersects the first side surface equation;
a first calculating subunit configured to calculate the distances between a plurality of three-dimensional coordinates on the first linear equation and take the maximum distance as the length of the target object;
a second obtaining subunit configured to obtain a second linear equation of the line where the top surface equation intersects the second side surface equation; and
a second calculating subunit configured to calculate the distances between a plurality of three-dimensional coordinates on the second linear equation and take the maximum distance as the width of the target object.
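The edge lines of claim 14 come from intersecting two fitted planes: the line direction is the cross product of the normals, and the maximum separation of cloud points lying on (near) that line gives the edge length. A sketch, with the on-line tolerance `tol` and both function names as illustrative choices:

```python
import numpy as np

def plane_intersection_line(n1, d1, n2, d2):
    """Line of intersection of planes n1·x + d1 = 0 and n2·x + d2 = 0.
    Returns (a point on the line, the unit direction of the line)."""
    direction = np.cross(n1, n2)
    direction = direction / np.linalg.norm(direction)
    # Solve for one point satisfying both plane equations plus a
    # third constraint (zero component along the line direction).
    A = np.vstack([n1, n2, direction])
    p = np.linalg.solve(A, np.array([-d1, -d2, 0.0]))
    return p, direction

def edge_length(points, p, direction, tol=0.02):
    """Extent of the cloud along an edge: keep points within `tol` of
    the line, project them onto the direction, take max - min."""
    rel = points - p
    along = rel @ direction
    dist = np.linalg.norm(rel - np.outer(along, direction), axis=1)
    proj = along[dist < tol]
    return proj.max() - proj.min()
```

Repeating this for a second side plane yields the width; the claim's "maximum distance between three-dimensional coordinates on the line" is exactly the max-minus-min projection above.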
15. The apparatus of claim 13, wherein the second obtaining unit comprises:
a third obtaining subunit, configured to obtain a first boundary line equation of the plane represented by the top surface equation;
a third calculation subunit configured to calculate distances between the plurality of three-dimensional coordinates on the first boundary line equation, and use a maximum value of the distances as a length of the target object;
a fourth obtaining subunit, configured to obtain a second boundary line equation of the plane represented by the top surface equation;
and a fourth calculating subunit, configured to calculate distances between the plurality of three-dimensional coordinates on the second boundary line equation, and use a maximum value of the distances as the width of the target object.
16. The apparatus according to any one of claims 10-15, wherein the apparatus further comprises:
a neighbor threshold calculation module configured to calculate a neighbor threshold for each coordinate point in the three-dimensional point cloud, the neighbor threshold being the extent of the region around the coordinate point within which the number of neighbors equals the preset number of neighbors;
an average calculation module configured to average the neighbor thresholds calculated for all the coordinate points to obtain a neighbor threshold average;
a judging module configured to judge, in turn, whether the neighbor threshold calculated for each coordinate point in the three-dimensional point cloud is larger than the neighbor threshold average; and
a deleting module configured to determine a coordinate point whose neighbor threshold is larger than the neighbor threshold average as a noise point and delete the noise point from the three-dimensional point cloud.
17. The apparatus of claim 10, further comprising:
a volume calculation module configured to calculate a volume of the target object according to the length, width and height of the target object, the volume being calculated as: volume = length × width × height.
18. The apparatus of claim 10, wherein the object measuring device is further coupled to a target object image acquisition device to obtain the depth image of the target object, the target object image acquisition device comprising:
a camera for capturing an image of the target object,
a camera support, and
a base on which the target object is placed,
wherein the camera is positioned higher than the target object so that the camera can photograph the top surface of the target object.
CN201510847486.7A 2015-11-27 2015-11-27 Object measuring method and device Active CN106813568B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510847486.7A CN106813568B (en) 2015-11-27 2015-11-27 Object measuring method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510847486.7A CN106813568B (en) 2015-11-27 2015-11-27 Object measuring method and device

Publications (2)

Publication Number Publication Date
CN106813568A true CN106813568A (en) 2017-06-09
CN106813568B CN106813568B (en) 2019-10-29

Family

ID=59102096

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510847486.7A Active CN106813568B (en) 2015-11-27 2015-11-27 Object measuring method and device

Country Status (1)

Country Link
CN (1) CN106813568B (en)

Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107610176A (en) * 2017-09-15 2018-01-19 斯坦德机器人(深圳)有限公司 A kind of pallet Dynamic Recognition based on Kinect and localization method, system and medium
CN108335325A (en) * 2018-01-30 2018-07-27 上海数迹智能科技有限公司 A kind of cube method for fast measuring based on depth camera data
CN108416804A (en) * 2018-02-11 2018-08-17 深圳市优博讯科技股份有限公司 Obtain method, apparatus, terminal device and the storage medium of target object volume
CN108537834A (en) * 2018-03-19 2018-09-14 杭州艾芯智能科技有限公司 A kind of volume measuring method, system and depth camera based on depth image
CN108648230A (en) * 2018-05-14 2018-10-12 南京阿凡达机器人科技有限公司 A kind of package dimensions measurement method, system, storage medium and mobile terminal
CN108885791A (en) * 2018-07-06 2018-11-23 深圳前海达闼云端智能科技有限公司 ground detection method, related device and computer readable storage medium
CN109029253A (en) * 2018-06-29 2018-12-18 南京阿凡达机器人科技有限公司 A kind of package volume measuring method, system, storage medium and mobile terminal
CN109357637A (en) * 2018-12-11 2019-02-19 长治学院 A kind of veneer reeling machine roll bending radius of curvature and method for measuring thickness based on depth camera
CN109632825A (en) * 2019-01-18 2019-04-16 创新奇智(重庆)科技有限公司 A kind of automatic testing method of coil of strip surface abnormalities protrusion
CN109737874A (en) * 2019-01-17 2019-05-10 广东省智能制造研究所 Dimension of object measurement method and device based on 3D vision technology
CN109785444A (en) * 2019-01-07 2019-05-21 深圳增强现实技术有限公司 Recognition methods, device and the mobile terminal of real plane in image
CN109870126A (en) * 2017-12-05 2019-06-11 宁波盈芯信息科技有限公司 A kind of area computation method and a kind of mobile phone for being able to carry out areal calculation
CN110006343A (en) * 2019-04-15 2019-07-12 Oppo广东移动通信有限公司 Measurement method, device and the terminal of object geometric parameter
CN110276801A (en) * 2019-06-24 2019-09-24 深圳前海达闼云端智能科技有限公司 Object positioning method and device and storage medium
CN110449364A (en) * 2019-08-22 2019-11-15 广州市龙图智能科技有限公司 A kind of articles sorting delivery method, device and storage medium
CN110570511A (en) * 2018-06-06 2019-12-13 阿里巴巴集团控股有限公司 point cloud data processing method, device and system and storage medium
CN110726996A (en) * 2019-11-25 2020-01-24 歌尔股份有限公司 Depth module ranging method, depth camera and mobile terminal
CN110766744A (en) * 2019-11-05 2020-02-07 北京华捷艾米科技有限公司 MR volume measurement method and device based on 3D depth camera
CN111308484A (en) * 2019-11-26 2020-06-19 歌尔股份有限公司 Depth module ranging method and device, depth camera and mobile terminal
CN111707198A (en) * 2020-06-29 2020-09-25 中车青岛四方车辆研究所有限公司 3D vision-based key parameter measurement method for rail vehicle coupler and draft gear
CN111784765A (en) * 2020-06-03 2020-10-16 Oppo广东移动通信有限公司 Object measurement method, virtual object processing method, object measurement device, virtual object processing device, medium, and electronic apparatus
CN111862185A (en) * 2020-07-24 2020-10-30 唯羲科技有限公司 Method for extracting plane from image
CN111915666A (en) * 2019-05-07 2020-11-10 顺丰科技有限公司 Volume measurement method and device based on mobile terminal
CN112037236A (en) * 2020-08-28 2020-12-04 深圳开立生物医疗科技股份有限公司 Ultrasonic three-dimensional image measuring method, system, equipment and computer medium
CN112102391A (en) * 2020-08-31 2020-12-18 北京市商汤科技开发有限公司 Measuring method and device, electronic device and storage medium
CN112101209A (en) * 2020-09-15 2020-12-18 北京百度网讯科技有限公司 Method and apparatus for determining a world coordinate point cloud for roadside computing devices
CN112991427A (en) * 2019-12-02 2021-06-18 顺丰科技有限公司 Object volume measuring method, device, computer equipment and storage medium
CN112991429A (en) * 2019-12-13 2021-06-18 顺丰科技有限公司 Box volume measuring method and device, computer equipment and storage medium
CN112991428A (en) * 2019-12-02 2021-06-18 顺丰科技有限公司 Box volume measuring method and device, computer equipment and storage medium
CN113066117A (en) * 2019-12-13 2021-07-02 顺丰科技有限公司 Box volume measuring method and device, computer equipment and storage medium
CN113532266A (en) * 2020-04-15 2021-10-22 深圳市光鉴科技有限公司 Box volume measuring method, system, equipment and storage medium based on three-dimensional vision
CN113532265A (en) * 2020-04-15 2021-10-22 深圳市光鉴科技有限公司 Box volume measuring device based on three-dimensional vision
CN113643414A (en) * 2020-05-11 2021-11-12 北京达佳互联信息技术有限公司 Three-dimensional image generation method and device, electronic equipment and storage medium
CN114003819A (en) * 2021-11-27 2022-02-01 上海迪塔班克数据科技有限公司 Internet data acquisition method and system based on chemical plastic industry
CN114593681A (en) * 2020-12-07 2022-06-07 北京格灵深瞳信息技术有限公司 Thickness measuring method, thickness measuring apparatus, electronic device, and storage medium
CN114777702A (en) * 2022-04-22 2022-07-22 成都市绿色快线环保科技有限公司 Stacked plate volume identification method, device and system
CN115830031A (en) * 2023-02-22 2023-03-21 深圳市兆兴博拓科技股份有限公司 Method and system for detecting circuit board patch and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2298111A (en) * 1995-01-31 1996-08-21 Videologic Ltd Improvements relating to computer 3d rendering systems
CN101266131A (en) * 2008-04-08 2008-09-17 长安大学 Volume measurement device based on image and its measurement method
CN101846503A (en) * 2010-04-21 2010-09-29 中国科学院自动化研究所 Luggage information on-line obtaining system based on stereoscopic vision and method thereof
CN103983334A (en) * 2014-05-20 2014-08-13 联想(北京)有限公司 Information processing method and electronic equipment
WO2015021473A1 (en) * 2013-08-09 2015-02-12 Postea, Inc. Apparatus, systems and methods for enrollment of irregular shaped objects
CN104422391A (en) * 2013-09-10 2015-03-18 上海瑷皑电子科技有限公司 Sensing type object volume identifying technology
CN104933704A (en) * 2015-05-28 2015-09-23 西安算筹信息科技有限公司 Three-dimensional scanning method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
曹毓 et al.: "Application of the RANSAC plane estimation algorithm in road-surface object volume measurement", Chinese Journal of Sensors and Actuators *

Cited By (59)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107610176A (en) * 2017-09-15 2018-01-19 斯坦德机器人(深圳)有限公司 A kind of pallet Dynamic Recognition based on Kinect and localization method, system and medium
CN107610176B (en) * 2017-09-15 2020-06-26 斯坦德机器人(深圳)有限公司 Pallet dynamic identification and positioning method, system and medium based on Kinect
CN109870126A (en) * 2017-12-05 2019-06-11 宁波盈芯信息科技有限公司 A kind of area computation method and a kind of mobile phone for being able to carry out areal calculation
CN108335325A (en) * 2018-01-30 2018-07-27 上海数迹智能科技有限公司 A kind of cube method for fast measuring based on depth camera data
CN108416804A (en) * 2018-02-11 2018-08-17 深圳市优博讯科技股份有限公司 Obtain method, apparatus, terminal device and the storage medium of target object volume
CN108537834A (en) * 2018-03-19 2018-09-14 杭州艾芯智能科技有限公司 A kind of volume measuring method, system and depth camera based on depth image
CN108537834B (en) * 2018-03-19 2020-05-01 杭州艾芯智能科技有限公司 Volume measurement method and system based on depth image and depth camera
CN108648230A (en) * 2018-05-14 2018-10-12 南京阿凡达机器人科技有限公司 A kind of package dimensions measurement method, system, storage medium and mobile terminal
CN110570511A (en) * 2018-06-06 2019-12-13 阿里巴巴集团控股有限公司 point cloud data processing method, device and system and storage medium
CN109029253A (en) * 2018-06-29 2018-12-18 南京阿凡达机器人科技有限公司 A kind of package volume measuring method, system, storage medium and mobile terminal
CN108885791A (en) * 2018-07-06 2018-11-23 深圳前海达闼云端智能科技有限公司 ground detection method, related device and computer readable storage medium
CN108885791B (en) * 2018-07-06 2022-04-08 达闼机器人有限公司 Ground detection method, related device and computer readable storage medium
WO2020006765A1 (en) * 2018-07-06 2020-01-09 深圳前海达闼云端智能科技有限公司 Ground detection method, related device, and computer readable storage medium
CN109357637A (en) * 2018-12-11 2019-02-19 长治学院 A kind of veneer reeling machine roll bending radius of curvature and method for measuring thickness based on depth camera
CN109357637B (en) * 2018-12-11 2021-12-10 长治学院 Method for measuring curvature radius and thickness of plate rolling machine plate rolling based on depth camera
CN109785444A (en) * 2019-01-07 2019-05-21 深圳增强现实技术有限公司 Recognition methods, device and the mobile terminal of real plane in image
CN109737874B (en) * 2019-01-17 2021-12-03 广东省智能制造研究所 Object size measuring method and device based on three-dimensional vision technology
CN109737874A (en) * 2019-01-17 2019-05-10 广东省智能制造研究所 Dimension of object measurement method and device based on 3D vision technology
CN109632825A (en) * 2019-01-18 2019-04-16 创新奇智(重庆)科技有限公司 A kind of automatic testing method of coil of strip surface abnormalities protrusion
CN110006343A (en) * 2019-04-15 2019-07-12 Oppo广东移动通信有限公司 Measurement method, device and the terminal of object geometric parameter
US12117284B2 (en) 2019-04-15 2024-10-15 Guangdong Oppo Mobile Telecommunications Corp. Ltd. Method and apparatus for measuring geometric parameter of object, and terminal
CN111915666A (en) * 2019-05-07 2020-11-10 顺丰科技有限公司 Volume measurement method and device based on mobile terminal
CN110276801A (en) * 2019-06-24 2019-09-24 深圳前海达闼云端智能科技有限公司 Object positioning method and device and storage medium
CN110449364B (en) * 2019-08-22 2021-11-23 广州市龙图智能科技有限公司 Object sorting and conveying method, device and storage medium
CN110449364A (en) * 2019-08-22 2019-11-15 广州市龙图智能科技有限公司 A kind of articles sorting delivery method, device and storage medium
CN110766744A (en) * 2019-11-05 2020-02-07 北京华捷艾米科技有限公司 MR volume measurement method and device based on 3D depth camera
CN110726996A (en) * 2019-11-25 2020-01-24 歌尔股份有限公司 Depth module ranging method, depth camera and mobile terminal
CN110726996B (en) * 2019-11-25 2021-11-26 歌尔光学科技有限公司 Depth module ranging method, depth camera and mobile terminal
CN111308484B (en) * 2019-11-26 2022-03-22 歌尔光学科技有限公司 Depth module ranging method and device, depth camera and mobile terminal
CN111308484A (en) * 2019-11-26 2020-06-19 歌尔股份有限公司 Depth module ranging method and device, depth camera and mobile terminal
CN112991428A (en) * 2019-12-02 2021-06-18 顺丰科技有限公司 Box volume measuring method and device, computer equipment and storage medium
CN112991428B (en) * 2019-12-02 2024-09-27 顺丰科技有限公司 Box volume measuring method, device, computer equipment and storage medium
CN112991427A (en) * 2019-12-02 2021-06-18 顺丰科技有限公司 Object volume measuring method, device, computer equipment and storage medium
CN112991429A (en) * 2019-12-13 2021-06-18 顺丰科技有限公司 Box volume measuring method and device, computer equipment and storage medium
CN113066117A (en) * 2019-12-13 2021-07-02 顺丰科技有限公司 Box volume measuring method and device, computer equipment and storage medium
CN112991429B (en) * 2019-12-13 2024-08-20 顺丰科技有限公司 Box volume measuring method, device, computer equipment and storage medium
CN113066117B (en) * 2019-12-13 2024-05-17 顺丰科技有限公司 Box volume measuring method, device, computer equipment and storage medium
CN113532266B (en) * 2020-04-15 2023-08-08 深圳市光鉴科技有限公司 Box volume measuring method, system, equipment and storage medium based on three-dimensional vision
CN113532266A (en) * 2020-04-15 2021-10-22 深圳市光鉴科技有限公司 Box volume measuring method, system, equipment and storage medium based on three-dimensional vision
CN113532265A (en) * 2020-04-15 2021-10-22 深圳市光鉴科技有限公司 Box volume measuring device based on three-dimensional vision
CN113532265B (en) * 2020-04-15 2023-08-08 深圳市光鉴科技有限公司 Box volume measuring device based on three-dimensional vision
CN113643414B (en) * 2020-05-11 2024-02-06 北京达佳互联信息技术有限公司 Three-dimensional image generation method and device, electronic equipment and storage medium
CN113643414A (en) * 2020-05-11 2021-11-12 北京达佳互联信息技术有限公司 Three-dimensional image generation method and device, electronic equipment and storage medium
CN111784765A (en) * 2020-06-03 2020-10-16 Oppo广东移动通信有限公司 Object measurement method, virtual object processing method, object measurement device, virtual object processing device, medium, and electronic apparatus
CN111784765B (en) * 2020-06-03 2024-04-26 Oppo广东移动通信有限公司 Object measurement method, virtual object processing method, virtual object measurement device, virtual object processing device, medium and electronic equipment
WO2021244140A1 (en) * 2020-06-03 2021-12-09 Oppo广东移动通信有限公司 Object measurement method and apparatus, virtual object processing method and apparatus, medium and electronic device
CN111707198A (en) * 2020-06-29 2020-09-25 中车青岛四方车辆研究所有限公司 3D vision-based key parameter measurement method for rail vehicle coupler and draft gear
CN111707198B (en) * 2020-06-29 2021-08-03 中车青岛四方车辆研究所有限公司 3D vision-based key parameter measurement method for rail vehicle coupler and draft gear
CN111862185A (en) * 2020-07-24 2020-10-30 唯羲科技有限公司 Method for extracting plane from image
CN112037236A (en) * 2020-08-28 2020-12-04 深圳开立生物医疗科技股份有限公司 Ultrasonic three-dimensional image measuring method, system, equipment and computer medium
CN112102391A (en) * 2020-08-31 2020-12-18 北京市商汤科技开发有限公司 Measuring method and device, electronic device and storage medium
CN112101209B (en) * 2020-09-15 2024-04-09 阿波罗智联(北京)科技有限公司 Method and apparatus for determining world coordinate point cloud for roadside computing device
CN112101209A (en) * 2020-09-15 2020-12-18 北京百度网讯科技有限公司 Method and apparatus for determining a world coordinate point cloud for roadside computing devices
CN114593681A (en) * 2020-12-07 2022-06-07 北京格灵深瞳信息技术有限公司 Thickness measuring method, thickness measuring apparatus, electronic device, and storage medium
CN114593681B (en) * 2020-12-07 2024-10-18 北京格灵深瞳信息技术有限公司 Thickness measuring method, thickness measuring device, electronic equipment and storage medium
CN114003819A (en) * 2021-11-27 2022-02-01 上海迪塔班克数据科技有限公司 Internet data acquisition method and system based on chemical plastic industry
CN114777702B (en) * 2022-04-22 2024-03-12 成都市绿色快线环保科技有限公司 Stacked plate volume identification method, device and system thereof
CN114777702A (en) * 2022-04-22 2022-07-22 成都市绿色快线环保科技有限公司 Stacked plate volume identification method, device and system
CN115830031A (en) * 2023-02-22 2023-03-21 深圳市兆兴博拓科技股份有限公司 Method and system for detecting circuit board patch and storage medium

Also Published As

Publication number Publication date
CN106813568B (en) 2019-10-29

Similar Documents

Publication Publication Date Title
CN106813568B (en) Object measuring method and device
CN110084236B (en) Image correction method and device
US9659217B2 (en) Systems and methods for scale invariant 3D object detection leveraging processor architecture
KR102326097B1 (en) Pallet detection using units of physical length
JP7433609B2 (en) Method and computational system for object identification
KR102095626B1 (en) Image processing method and apparatus
US8903161B2 (en) Apparatus for estimating robot position and method thereof
CN104816306A (en) Robot, robot system, control device and control method
CN107452028B (en) Method and device for determining position information of target image
JP2019505868A (en) Motion detection in images
JP2016091053A (en) Information processing apparatus, container shape estimation method, work-piece picking system, and program
CN109033920B (en) Recognition method and device for grabbed target and computer readable storage medium
JP2019192022A (en) Image processing apparatus, image processing method, and program
CN113420735B (en) Contour extraction method, device, equipment and storage medium
CN112378333B (en) Method and device for measuring warehoused goods
CN114435828A (en) Goods storage method and device, carrying equipment and storage medium
CN111666935B (en) Article center positioning method and device, logistics system and storage medium
CN114638891A (en) Target detection positioning method and system based on image and point cloud fusion
US8712167B2 (en) Pattern identification apparatus, control method and program thereof
JP5703705B2 (en) Image feature detection system, image recognition system, image feature detection method, and program
CN110955797B (en) User position determining method, device, electronic equipment and storage medium
CN116038715B (en) Box taking method, device, robot and storage medium
CN112197708A (en) Measuring method and device, electronic device and storage medium
US20230196719A1 (en) Method for cargo counting, computer equipment, and storage medium
US20220128347A1 (en) System and method to measure object dimension using stereo vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1237405

Country of ref document: HK

TA01 Transfer of patent application right

Effective date of registration: 20180418

Address after: Fourth Floor, Capital House, P.O. Box 847, Grand Cayman, Cayman Islands

Applicant after: CAINIAO SMART LOGISTICS HOLDING Ltd.

Address before: Grand Cayman, Cayman Islands

Applicant before: ALIBABA GROUP HOLDING Ltd.

GR01 Patent grant