CN108537834B - Volume measurement method and system based on depth image and depth camera - Google Patents


Info

Publication number
CN108537834B
CN108537834B (application CN201810225912.7A)
Authority
CN
China
Prior art keywords
coordinate
point cloud
depth
depth camera
axis
Prior art date
Legal status
Active
Application number
CN201810225912.7A
Other languages
Chinese (zh)
Other versions
CN108537834A (en)
Inventor
侯方超
Current Assignee
Hangzhou Aixin Intelligent Technology Co ltd
Original Assignee
Hangzhou Aixin Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Aixin Intelligent Technology Co ltd filed Critical Hangzhou Aixin Intelligent Technology Co ltd
Priority to CN201810225912.7A
Publication of CN108537834A
Application granted
Publication of CN108537834B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/60: Analysis of geometric attributes
    • G06T7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention belongs to the technical field of logistics and volume measurement, and particularly relates to a volume measurement method and system based on depth images, and to a depth camera. The volume measurement method comprises the following steps: S1, acquiring a scene depth map containing the object to be measured to obtain scene point cloud coordinates; S2, transforming the scene point cloud coordinates to obtain scene point cloud coordinates in the depth camera coordinate system; S3, processing the scene point cloud coordinates in the depth camera coordinate system to obtain a coordinate set of the object to be measured; and S4, calculating the length, width and height of the object to be measured from its coordinate set, and multiplying them to obtain its volume. Compared with existing logistics volume measurement schemes, the method can be implemented in hardware with an ordinary off-the-shelf depth camera at low cost, and it can accurately measure the volume of the object to be measured in real time even when the camera is tilted.

Description

Volume measurement method and system based on depth image and depth camera
Technical Field
The invention belongs to the technical field of logistics and volume measurement, and particularly relates to a depth image-based volume measurement method and system and a depth camera.
Background
In recent years, with the rapid development of economic globalization, large quantities of goods need to move frequently between regions. In particular, with the rise of electronic commerce brought about by the information technology revolution, the logistics industry has developed rapidly and competition among logistics enterprises has intensified; reducing labor costs and delivering express parcels to their destinations efficiently are the keys to gaining a competitive advantage.
In logistics and warehouse management, the volume of an article is of great importance for optimizing receiving, warehousing, picking, packaging and shipping at a logistics center. Automatic, accurate measurement of article size and volume can therefore greatly improve the efficiency of warehouse logistics and the intelligence and automation level of a logistics system.
Most existing volume measuring devices are based on light-curtain or line-array laser scanning and can calculate volume only in combination with a conveyor-belt encoder. Although mature, this technology is expensive and the system is complex.
Disclosure of Invention
To address the defects of the prior art, the invention provides a volume measurement method and system based on depth images, and a depth camera. Compared with existing logistics volume measurement schemes, the method can be implemented in hardware with an ordinary off-the-shelf depth camera at low cost, and it can accurately measure the volume of the object to be measured in real time even when the camera is tilted.
In a first aspect, the present invention provides a depth image-based volume measurement method, including the following steps:
s1, acquiring a scene depth map containing the object to be detected to obtain a scene point cloud coordinate;
s2, transforming the scene point cloud coordinates to obtain scene point cloud coordinates under a depth camera coordinate system;
s3, processing scene point cloud coordinates under a depth camera coordinate system to obtain a coordinate set of the object to be measured;
and S4, calculating the length, width and height of the object to be measured according to the coordinate set of the object to be measured, and multiplying the length, width and height to obtain the volume of the object to be measured.
Preferably, step S2 specifically comprises:
S21, setting a reference plane in the scene depth map;
S22, calculating the tilt attitude data of the depth camera according to the reference plane;
S23, transforming the scene point cloud coordinates according to the tilt attitude data to obtain the scene point cloud coordinates in the depth camera coordinate system.
Preferably, S22 specifically comprises:
S221, setting a range of angles between the X axis and the Y axis of the depth camera and the normal of the reference plane, the range comprising a plurality of X-axis angles θ_x and Y-axis angles θ_y;
S222, traversing each X-axis angle and each Y-axis angle, and transforming the Z coordinates of the reference plane with the coordinate transformation formulas to obtain a plurality of transformed Z_CK coordinates, the transformation formulas being:
Z' = Y_0·sin θ_x + Z_0·cos θ_x
Z_CK = Z'·cos θ_y - X_0·sin θ_y
where (X_0, Y_0, Z_0) is an original coordinate point of the reference plane and Z_CK is the transformed Z coordinate;
S223, calculating the mean Zmean and the variance Zsigma of the transformed Z_CK coordinates for each angle pair;
S224, taking the X-axis angle θ_x corresponding to the minimum variance Zsigma as the X-axis tilt angle α_x of the depth camera and the corresponding Y-axis angle θ_y as the Y-axis tilt angle α_y, thereby obtaining the tilt attitude data: the Z_CK mean Zmean, the minimum variance Zsigma, the X-axis tilt angle α_x and the Y-axis tilt angle α_y.
Preferably, S23 specifically comprises:
transforming the scene point cloud coordinates according to the X-axis tilt angle α_x and the Y-axis tilt angle α_y, using the transformation formulas, to obtain the scene point cloud coordinates in the depth camera coordinate system, the transformation formulas being:
Z'_i = Y_io·sin α_x + Z_io·cos α_x
X_i = Z'_i·sin α_y + X_io·cos α_y
Y_i = Y_io·cos α_y - Z_io·sin α_y
Z_i = Z'_i·cos α_y - X_io·sin α_y
where (X_io, Y_io, Z_io) are the original scene point cloud coordinates and (X_i, Y_i, Z_i) are the scene point cloud coordinates in the depth camera coordinate system.
Preferably, S3 specifically comprises:
screening, according to a screening formula, the (X_i, Y_i, Z_i) coordinate point set of the object to be measured that satisfies the condition from the scene point cloud coordinates in the depth camera coordinate system, the screening formula being:
Z_i - Zmean > N·Zsigma, where N is a positive number.
Preferably, S4 specifically comprises:
S41, projecting the (X_i, Y_i) coordinate points of the object to be measured onto the corresponding grid cells in the reference plane according to a preset grid precision, labelling the connected regions of the grid, and counting the size of each connected region;
S42, selecting the (X_i, Y_i) coordinate point set corresponding to the connected region with the largest area, and calculating the minimum circumscribed rectangle of the selected (X_i, Y_i) coordinate points by principal component analysis to obtain the length and width of the projection of the object to be measured in the reference plane;
S43, calculating the maximum difference between Z_i and Zmean to obtain the height of the object to be measured, and multiplying the length, width and height to obtain the volume of the object to be measured.
In a second aspect, the present invention provides a depth image-based volume measurement system, which is suitable for the depth image-based volume measurement method in the first aspect, and includes:
the scene acquisition unit is used for acquiring a scene depth map containing the object to be detected to obtain a scene point cloud coordinate;
the coordinate transformation unit is used for transforming the scene point cloud coordinates to obtain the scene point cloud coordinates under the depth camera coordinate system;
the object extraction unit is used for processing the scene point cloud coordinates in the depth camera coordinate system to obtain a coordinate set of the object to be measured;
and the volume calculation unit is used for calculating the length, the width and the height of the object to be measured according to the coordinate set of the object to be measured, and multiplying the length, the width and the height to obtain the volume of the object to be measured.
In a third aspect, the present invention provides a depth camera comprising a processor, an input device, an output device and a memory, the processor, the input device, the output device and the memory being interconnected, the memory being configured to store a computer program comprising program instructions, the processor being configured to invoke the program instructions to perform the method according to the first aspect.
The invention has the beneficial effects that: compared with the existing logistics volume measurement scheme, the method can be realized by using a common depth camera on the market in the aspect of hardware, and the cost is lower; under the camera inclination state, the volume of the object to be measured can be accurately measured in real time.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings that are needed in the detailed description of the invention or the prior art will be briefly described below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
FIG. 1 is a flow chart of a depth image-based volume measurement method according to the present embodiment;
fig. 2 is a structural diagram of the depth image-based volume measurement system according to the present embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
The first embodiment is as follows:
the present embodiment provides a depth image-based volume measurement method, as shown in fig. 1, including the following four steps S1, S2, S3, and S4:
S1, acquiring a scene depth map containing the object to be measured to obtain scene point cloud coordinates. The scene depth map may be acquired using the optical time-of-flight principle, the structured-light principle, the binocular ranging principle, and the like. A depth map, also called a range image, is a coordinate set of the depth Z axis: an image in which the distance (depth) from the image collector to each point in the scene is taken as the pixel value. It directly reflects the geometry of the visible surfaces of the objects in the scene, and can be converted into scene point cloud data through coordinate conversion. The image collector of this embodiment is a depth camera.
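The conversion from depth map to point cloud is not spelled out in the text; a minimal sketch using the standard pinhole back-projection is shown below. The function name `depth_to_points` and the intrinsic parameters `fx`, `fy`, `cx`, `cy` are illustrative assumptions, not taken from the patent.

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (e.g. in metres) into camera-frame 3-D
    points with the pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # invalid / missing depth reading
                continue
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# A toy 2x2 depth map with hypothetical unit intrinsics and origin at (0, 0):
pts = depth_to_points([[1.0, 2.0], [0.0, 4.0]], fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```

The zero reading is skipped as invalid, so only three points survive.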
And S2, transforming the scene point cloud coordinates to obtain the scene point cloud coordinates under the depth camera coordinate system.
The step S2 specifically includes three steps S21, S22, and S23:
and S21, setting a reference plane in the scene depth map.
S22, calculating the tilt attitude data of the depth camera according to the reference plane. S22 specifically includes four steps S221, S222, S223, and S224:
S221, setting a range of angles between the X axis and the Y axis of the depth camera and the normal of the reference plane, the range comprising a plurality of X-axis angles θ_x and Y-axis angles θ_y.
S222, traversing each X-axis angle and each Y-axis angle, and transforming the Z coordinates of the reference plane with the coordinate transformation formulas to obtain a plurality of transformed Z_CK coordinates, the transformation formulas being:
Z' = Y_0·sin θ_x + Z_0·cos θ_x
Z_CK = Z'·cos θ_y - X_0·sin θ_y
where (X_0, Y_0, Z_0) is an original coordinate point of the reference plane and Z_CK is the transformed Z coordinate.
S223, calculating the mean Zmean and the variance Zsigma of the transformed Z_CK coordinates for each angle pair.
S224, taking the X-axis angle θ_x corresponding to the minimum variance Zsigma as the X-axis tilt angle α_x of the depth camera and the corresponding Y-axis angle θ_y as the Y-axis tilt angle α_y, thereby obtaining the tilt attitude data: the Z_CK mean Zmean, the minimum variance Zsigma, the X-axis tilt angle α_x and the Y-axis tilt angle α_y.
In this embodiment, the X-axis angle, the Y-axis angle, and the distance between the depth camera and the reference plane are obtained according to step S22.
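The exhaustive search of S221-S224 can be sketched as follows. This is a toy implementation under stated assumptions: `estimate_tilt` and its variable names are illustrative, the candidate angle lists stand in for the patent's angle range, and the variance computed per angle pair is the quantity whose minimum selects the tilt.

```python
import math

def estimate_tilt(plane_pts, angles_x, angles_y):
    """Grid-search the tilt angles (radians): for each candidate pair
    (tx, ty), transform the reference-plane points' Z with
        Z'   = Y0*sin(tx) + Z0*cos(tx)
        Z_CK = Z'*cos(ty) - X0*sin(ty)
    and keep the pair with the smallest variance of Z_CK (a level
    reference plane maps to a near-constant Z)."""
    best = None
    for tx in angles_x:
        for ty in angles_y:
            zck = [(y * math.sin(tx) + z * math.cos(tx)) * math.cos(ty)
                   - x * math.sin(ty) for x, y, z in plane_pts]
            mean = sum(zck) / len(zck)
            var = sum((v - mean) ** 2 for v in zck) / len(zck)
            if best is None or var < best[0]:
                best = (var, mean, tx, ty)
    zsigma, zmean, ax, ay = best
    return zmean, zsigma, ax, ay

# Synthetic reference plane seen by a camera tilted 0.2 rad about X:
# the points satisfy y*sin(0.2) + z*cos(0.2) = 1.0, so the search
# should select tx = 0.2 with (near-)zero variance.
plane = [(x * 0.1, y * 0.1, (1.0 - y * 0.1 * math.sin(0.2)) / math.cos(0.2))
         for x in range(3) for y in range(3)]
zmean, zsigma, ax, ay = estimate_tilt(plane, [0.0, 0.1, 0.2, 0.3], [0.0])
```

In practice the angle range would be sampled finely enough for the required accuracy; the coarse grid here only keeps the example short.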
And S23, transforming the scene point cloud coordinates according to the tilt attitude data to obtain the scene point cloud coordinates under the depth camera coordinate system. The method comprises the following specific steps:
Transforming the scene point cloud coordinates according to the X-axis tilt angle α_x and the Y-axis tilt angle α_y, using the transformation formulas, to obtain the scene point cloud coordinates in the depth camera coordinate system, the transformation formulas being:
Z'_i = Y_io·sin α_x + Z_io·cos α_x
X_i = Z'_i·sin α_y + X_io·cos α_y
Y_i = Y_io·cos α_y - Z_io·sin α_y
Z_i = Z'_i·cos α_y - X_io·sin α_y
where (X_io, Y_io, Z_io) are the original scene point cloud coordinates and (X_i, Y_i, Z_i) are the scene point cloud coordinates in the depth camera coordinate system.
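Step S23 is a direct per-point evaluation of the formulas above; a minimal sketch (the function name `transform_points` is illustrative, and the formulas are reproduced as printed in the text):

```python
import math

def transform_points(points, ax, ay):
    """Rotate raw scene points (X_io, Y_io, Z_io) into the depth camera
    coordinate system using the calibrated tilt angles ax and ay
    (radians), following the S23 formulas."""
    out = []
    for xo, yo, zo in points:
        zp = yo * math.sin(ax) + zo * math.cos(ax)   # Z'_i
        xi = zp * math.sin(ay) + xo * math.cos(ay)   # X_i
        yi = yo * math.cos(ay) - zo * math.sin(ay)   # Y_i
        zi = zp * math.cos(ay) - xo * math.sin(ay)   # Z_i
        out.append((xi, yi, zi))
    return out

# With zero tilt the transform reduces to the identity:
same = transform_points([(1.0, 2.0, 3.0)], 0.0, 0.0)
```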
And S3, processing the scene point cloud coordinates under the depth camera coordinate system to obtain a coordinate set of the object to be measured. The method comprises the following specific steps:
Screening, according to a screening formula, the (X_i, Y_i, Z_i) coordinate point set of the object to be measured that satisfies the condition from the scene point cloud coordinates in the depth camera coordinate system, the screening formula being:
Z_i - Zmean > N·Zsigma, where N is a positive number.
In this embodiment, in step S3, the coordinate data of the object to be measured is extracted to remove the relevant information of other objects in the scene.
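The screening step is a one-line filter. In the sketch below, the function name and the default N = 3 are illustrative assumptions (the patent only requires N to be a positive number), and the sign convention follows the formula as printed; depending on camera orientation, a deployment might instead threshold |Z_i - Zmean|.

```python
def extract_object(points, zmean, zsigma, n=3.0):
    """Keep the points that stand out from the reference-plane noise band:
    Z_i - Zmean > N * Zsigma."""
    return [p for p in points if p[2] - zmean > n * zsigma]

# One point well off the plane, one within the noise band (Zmean=1.0, Zsigma=0.1):
obj = extract_object([(0.0, 0.0, 1.5), (0.0, 0.0, 1.01)], zmean=1.0, zsigma=0.1)
```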
And S4, calculating the length, width and height of the object to be measured according to the coordinate set of the object to be measured, and multiplying them to obtain the volume of the object to be measured. S4 specifically includes three steps S41, S42 and S43:
S41, projecting the (X_i, Y_i) coordinate points of the object to be measured onto the corresponding grid cells in the reference plane according to a preset grid precision, labelling the connected regions of the grid, and counting the size of each connected region.
S42, selecting the (X_i, Y_i) coordinate point set corresponding to the connected region with the largest area, and calculating the minimum circumscribed rectangle of the selected (X_i, Y_i) coordinate points by principal component analysis to obtain the length and width of the projection of the object to be measured in the reference plane.
S43, calculating the maximum difference between Z_i and Zmean to obtain the height of the object to be measured, and multiplying the length, width and height to obtain the volume of the object to be measured.
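Steps S42-S43 can be sketched as follows, under simplifying assumptions stated here and in the code: the connected-region labelling of S41 is skipped (all points are assumed to belong to the largest region), the PCA-axis extents stand in for the minimum circumscribed rectangle, and the height is taken as the largest |Z_i - Zmean|. The function name `measure_box` is illustrative.

```python
import math

def measure_box(obj_pts, zmean):
    """Length/width from PCA of the X/Y projection, height from the
    largest |Z_i - Zmean|, volume as their product (simplified sketch
    of S42-S43; the S41 connected-region filter is assumed done)."""
    xs = [p[0] for p in obj_pts]
    ys = [p[1] for p in obj_pts]
    n = len(obj_pts)
    mx, my = sum(xs) / n, sum(ys) / n
    # 2x2 covariance of the projected points
    cxx = sum((x - mx) ** 2 for x in xs) / n
    cyy = sum((y - my) ** 2 for y in ys) / n
    cxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    # orientation of the first principal axis
    theta = 0.5 * math.atan2(2.0 * cxy, cxx - cyy)
    c, s = math.cos(theta), math.sin(theta)
    u = [(x - mx) * c + (y - my) * s for x, y in zip(xs, ys)]
    v = [-(x - mx) * s + (y - my) * c for x, y in zip(xs, ys)]
    length = max(u) - min(u)
    width = max(v) - min(v)
    height = max(abs(p[2] - zmean) for p in obj_pts)
    return length, width, height, length * width * height

# Four corners of a 2 x 1 box top sitting 0.5 above the plane (Zmean = 1.0):
dims = measure_box([(0, 0, 1.5), (2, 0, 1.5), (0, 1, 1.5), (2, 1, 1.5)], 1.0)
```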
In summary, compared with existing logistics volume measurement schemes, the method can be implemented in hardware with an ordinary off-the-shelf depth camera at low cost, and it can accurately measure the volume of the object to be measured in real time even when the camera is tilted. The camera tilt attitude is calibrated in advance, so at run time only a few key steps such as multiplication, connected-region labelling and principal component analysis are needed to calculate the length, width and height, and hence the volume, of the object to be measured, giving very good real-time performance. The embodiment tolerates the tilt present when the camera is installed, which makes installation easy and extends the measuring range. The point clouds output by many high-precision depth cameras on the market are unstructured; since this method does not require the point cloud data (point cloud coordinates) to be structured, selection of the measuring equipment is easier.
Example two:
the present embodiment provides a depth image-based volume measurement system, as shown in fig. 2, including:
the scene acquisition unit is used for acquiring a scene depth map containing the object to be detected to obtain a scene point cloud coordinate;
the coordinate transformation unit is used for transforming the scene point cloud coordinates to obtain the scene point cloud coordinates under the depth camera coordinate system;
the object extraction unit is used for processing the scene point cloud coordinates in the depth camera coordinate system to obtain a coordinate set of the object to be measured;
and the volume calculation unit is used for calculating the length, the width and the height of the object to be measured according to the coordinate set of the object to be measured, and multiplying the length, the width and the height to obtain the volume of the object to be measured.
The system is suitable for the depth image-based volume measurement method described in the first embodiment and shown in fig. 1; steps S1 to S4 are performed as set forth there and are not repeated here.
Those of ordinary skill in the art will appreciate that the units and method steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above in general functional terms. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed method and system may be implemented in other ways. For example, the above division of elements is merely a logical division, and other divisions may be realized, for example, multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not executed. The units may or may not be physically separate, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
Example three:
the embodiment provides a depth camera, which comprises a processor, an input device, an output device and a memory, wherein the processor, the input device, the output device and the memory are connected with each other, the memory is used for storing a computer program, the computer program comprises program instructions, and the processor is configured to call the program instructions and execute the method of the first embodiment.
It should be understood that in the present embodiment, the processor may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device may include an image capture device, and the output device may include a display (such as an LCD), speakers, and the like.
The memory may include both read-only memory and random access memory, and provides instructions and data to the processor. A portion of the memory may also include non-volatile random access memory; for example, the memory may also store device type information.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention and shall fall within the scope of the claims and description.

Claims (7)

1. A depth image-based volume measurement method is characterized by comprising the following steps:
s1, acquiring a scene depth map containing the object to be detected to obtain a scene point cloud coordinate;
s2, transforming the scene point cloud coordinates to obtain scene point cloud coordinates under a depth camera coordinate system;
s3, processing scene point cloud coordinates under a depth camera coordinate system to obtain a coordinate set of the object to be measured;
s4, calculating the length, width and height of the object to be measured according to the coordinate set of the object to be measured, and multiplying the length, width and height to obtain the volume of the object to be measured;
the step S2 specifically includes:
s21, setting a reference plane in the scene depth map;
s22, calculating the tilt attitude data of the depth camera according to the reference plane;
s23, transforming the scene point cloud coordinate according to the tilt attitude data to obtain the scene point cloud coordinate under the depth camera coordinate system;
the S22 specifically includes:
S221, setting a range of included angles between the X axis and the Y axis of the depth camera and the normal of the reference plane, the range comprising a plurality of X-axis included angles θx and Y-axis included angles θy;
S222, traversing each X-axis included angle and each Y-axis included angle, and transforming the Zck coordinates of the reference plane by using a coordinate transformation formula, to obtain a plurality of transformed Zck coordinates;
S223, calculating the mean value Zmean and the variance Zsigma of all the transformed Zck coordinates;
S224, taking the X-axis included angle θx corresponding to the minimum variance Zsigma as the X-axis tilt angle αx of the depth camera and the corresponding Y-axis included angle θy as the Y-axis tilt angle αy of the depth camera, thereby obtaining the tilt attitude data: the Zck coordinate mean value Zmean, the minimum variance Zsigma, the X-axis tilt angle αx and the Y-axis tilt angle αy.
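The calibration of steps S221 to S224 is in effect a grid search: for every candidate angle pair the reference-plane points are transformed and the pair that minimizes the variance of the transformed Zck coordinates is kept. A minimal NumPy sketch under that reading; the function name, angle range and step size are illustrative assumptions, since the patent does not fix them:

```python
import numpy as np

def calibrate_tilt(plane_pts, max_angle_deg=15.0, step_deg=0.5):
    """Grid-search the camera tilt (S221-S224): transform the reference-plane
    points for every candidate (theta_x, theta_y) pair and keep the pair that
    minimizes the variance of the transformed Zck coordinates.

    plane_pts: (N, 3) array of reference-plane points (X0, Y0, Z0).
    Returns (Zmean, Zsigma, alpha_x, alpha_y), angles in radians.
    """
    x0, y0, z0 = plane_pts[:, 0], plane_pts[:, 1], plane_pts[:, 2]
    angles = np.deg2rad(np.arange(-max_angle_deg, max_angle_deg + step_deg, step_deg))
    best = None
    for tx in angles:                              # S222: traverse X-axis angles
        zp = y0 * np.sin(tx) + z0 * np.cos(tx)     # intermediate Z'
        for ty in angles:                          # ... and Y-axis angles
            zck = zp * np.cos(ty) - x0 * np.sin(ty)
            zsigma = float(zck.var())              # S223: variance of Zck
            if best is None or zsigma < best[1]:   # S224: keep the minimum
                best = (float(zck.mean()), zsigma, float(tx), float(ty))
    return best
```

For a perfectly flat, untilted reference plane the search should return near-zero angles and near-zero variance; in practice the angle range would be chosen to bracket the expected mounting tilt.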
2. The depth image-based volume measurement method according to claim 1, wherein the transformation formula is:
Z' = Y0*sinθx + Z0*cosθx
Zck = Z'*cosθy - X0*sinθy
where X0, Y0, Z0 are the original coordinates of a reference-plane point and Zck is the transformed Zck coordinate.
3. The depth-image-based volume measurement method according to claim 2, wherein the step S23 specifically is:
transforming the scene point cloud coordinates by using a transformation formula according to the X-axis tilt angle αx and the Y-axis tilt angle αy, to obtain the scene point cloud coordinates under the depth camera coordinate system, the transformation formula being:
Z'i = Yio*sinαx + Zio*cosαx
Xi = Z'i*sinαy + Xio*cosαy
Yi = Yio*cosαx - Zio*sinαx
Zi = Z'i*cosαy - Xio*sinαy
where Xio, Yio, Zio are the original scene point cloud coordinates and Xi, Yi, Zi are the scene point cloud coordinates under the depth camera coordinate system.
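The per-point transformation of claim 3 vectorizes naturally over a whole point cloud. A minimal NumPy sketch; the function name and the (N, 3) array layout are illustrative, and the Yi line is read as the rotation about the X axis of the two-rotation decomposition:

```python
import numpy as np

def transform_cloud(cloud, alpha_x, alpha_y):
    """Apply the claim-3 transformation to an (N, 3) cloud of raw coordinates
    (Xio, Yio, Zio), with calibrated tilt angles in radians: first a rotation
    about the X axis, then a rotation about the Y axis."""
    xo, yo, zo = cloud[:, 0], cloud[:, 1], cloud[:, 2]
    zp = yo * np.sin(alpha_x) + zo * np.cos(alpha_x)    # intermediate Z'i
    xi = zp * np.sin(alpha_y) + xo * np.cos(alpha_y)
    yi = yo * np.cos(alpha_x) - zo * np.sin(alpha_x)    # Y changes only under the X rotation
    zi = zp * np.cos(alpha_y) - xo * np.sin(alpha_y)
    return np.stack([xi, yi, zi], axis=1)
```

With both angles zero the transformation is the identity, which gives a quick sanity check on the sign conventions.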
4. The depth-image-based volume measurement method according to claim 3, wherein the step S3 specifically is:
screening, from the scene point cloud coordinates under the depth camera coordinate system, the set of (Xi, Yi, Zi) coordinate points belonging to the object to be measured that satisfy the screening formula:
Zi - Zmean > N*Zsigma, where N is a positive number.
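Claim 4's screening is a one-line threshold on the transformed Z coordinates. A hedged sketch: the inequality below follows the claim as written, but which side of the reference plane counts as "object" depends on the camera's Z-axis orientation, and the claim thresholds on Zsigma (a variance) directly rather than on its square root:

```python
import numpy as np

def extract_object(cloud_cam, zmean, zsigma, n=3.0):
    """Claim-4 screening: keep the points whose Zi deviates from the
    reference-plane mean Zmean by more than N * Zsigma (N > 0).
    Sign convention follows the claim; flip it if the camera's Z axis
    points the other way."""
    mask = cloud_cam[:, 2] - zmean > n * zsigma
    return cloud_cam[mask]
```

Points lying on (or statistically indistinguishable from) the reference plane are rejected, leaving the coordinate set of the object to be measured.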
5. The depth-image-based volume measurement method according to claim 4, wherein the step S4 specifically is:
S41, projecting the (Xi, Yi) coordinate points of the object to be measured into the corresponding grid cells in the reference plane according to a preset grid precision, labeling the connected regions of the grid cells, and counting the size of each connected region;
S42, selecting the (Xi, Yi) coordinate point set corresponding to the connected region with the largest area, and calculating the minimum circumscribed rectangle of the selected (Xi, Yi) coordinate points by principal component analysis, to obtain the length and width of the projection of the object to be measured onto the reference plane;
S43, calculating the maximum difference between Zi and Zmean to obtain the height of the object to be measured, and multiplying the length, the width and the height to obtain the volume of the object to be measured.
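Steps S41 to S43 can be sketched as rasterization, connected-region labeling, a PCA-aligned bounding rectangle, and a height taken from the maximum Z deviation. A self-contained NumPy sketch with a plain BFS for the labeling; the grid precision, 4-connectivity and function names are assumptions not fixed by the claim:

```python
import numpy as np
from collections import deque

def largest_region_mask(occ):
    """Label 4-connected occupied grid cells with a plain BFS and return a
    boolean mask selecting the largest connected region."""
    labels = np.zeros(occ.shape, dtype=int)
    sizes = [0]                                    # sizes[0] = background
    for i, j in zip(*np.nonzero(occ)):
        if labels[i, j]:
            continue
        lab = len(sizes)
        sizes.append(0)
        queue = deque([(i, j)])
        labels[i, j] = lab
        while queue:
            a, b = queue.popleft()
            sizes[lab] += 1
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = a + da, b + db
                if (0 <= na < occ.shape[0] and 0 <= nb < occ.shape[1]
                        and occ[na, nb] and not labels[na, nb]):
                    labels[na, nb] = lab
                    queue.append((na, nb))
    return labels == int(np.argmax(sizes))

def measure_volume(obj_pts, zmean, grid=0.05):
    """S41-S43: project (Xi, Yi) onto a grid, keep the points of the largest
    connected region, fit a PCA-aligned bounding rectangle for length/width,
    and take the height as max |Zi - Zmean|."""
    xy = obj_pts[:, :2]
    ij = np.floor((xy - xy.min(axis=0)) / grid).astype(int)      # S41: grid cells
    occ = np.zeros(tuple(ij.max(axis=0) + 1), dtype=bool)
    occ[ij[:, 0], ij[:, 1]] = True
    keep = largest_region_mask(occ)[ij[:, 0], ij[:, 1]]          # S42: biggest blob
    pts = xy[keep] - xy[keep].mean(axis=0)
    _, _, vt = np.linalg.svd(pts, full_matrices=False)           # PCA axes
    proj = pts @ vt.T
    extent = proj.max(axis=0) - proj.min(axis=0)                 # rectangle sides
    height = np.max(np.abs(obj_pts[:, 2] - zmean))               # S43
    return float(extent[0] * extent[1] * height)
```

For an axis-aligned box of points the principal axes coincide with the box edges, so the product of the two extents and the height recovers the box volume.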
6. A depth image-based volume measurement system adapted to the depth image-based volume measurement method of any one of claims 1 to 5, comprising:
the scene acquisition unit is used for acquiring a scene depth map containing the object to be detected to obtain a scene point cloud coordinate;
the coordinate transformation unit is used for transforming the scene point cloud coordinates to obtain the scene point cloud coordinates under the depth camera coordinate system;
the device comprises a to-be-detected object extracting unit, a depth camera coordinate system acquiring unit and a depth camera coordinate system acquiring unit, wherein the to-be-detected object extracting unit is used for processing scene point cloud coordinates under the depth camera coordinate system to obtain a coordinate set of the to-be-detected object;
and the volume calculation unit is used for calculating the length, the width and the height of the object to be measured according to the coordinate set of the object to be measured and multiplying the length, the width and the height to obtain the volume of the object to be measured.
7. A depth camera comprising a processor, an input device, an output device and a memory, the processor, the input device, the output device and the memory being interconnected, the memory being for storing a computer program comprising program instructions, characterized in that the processor is configured for invoking the program instructions for performing the method of any one of claims 1-5.
CN201810225912.7A 2018-03-19 2018-03-19 Volume measurement method and system based on depth image and depth camera Active CN108537834B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810225912.7A CN108537834B (en) 2018-03-19 2018-03-19 Volume measurement method and system based on depth image and depth camera

Publications (2)

Publication Number Publication Date
CN108537834A CN108537834A (en) 2018-09-14
CN108537834B true CN108537834B (en) 2020-05-01

Family

ID=63484983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810225912.7A Active CN108537834B (en) 2018-03-19 2018-03-19 Volume measurement method and system based on depth image and depth camera

Country Status (1)

Country Link
CN (1) CN108537834B (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111095024A (en) * 2018-09-18 2020-05-01 深圳市大疆创新科技有限公司 Height determination method, height determination device, electronic equipment and computer-readable storage medium
CN109448045B (en) * 2018-10-23 2021-02-12 南京华捷艾米软件科技有限公司 SLAM-based planar polygon measurement method and machine-readable storage medium
CN109376791B (en) * 2018-11-05 2020-11-24 北京旷视科技有限公司 Depth algorithm precision calculation method and device, electronic equipment and readable storage medium
CN109587875A (en) * 2018-11-16 2019-04-05 厦门盈趣科技股份有限公司 A kind of intelligent desk lamp and its adjusting method
CN109631764B (en) * 2018-11-22 2020-12-04 南京理工大学 Dimension measuring system and method based on RealSense camera
CN109886961B (en) * 2019-03-27 2023-04-11 重庆交通大学 Medium and large cargo volume measuring method based on depth image
CN109916302B (en) * 2019-03-27 2020-11-20 青岛小鸟看看科技有限公司 Volume measurement method and system for cargo carrying box
CN109993785B (en) * 2019-03-27 2020-11-17 青岛小鸟看看科技有限公司 Method for measuring volume of goods loaded in container and depth camera module
CN110310459A (en) * 2019-04-04 2019-10-08 桑尼环保(江苏)有限公司 Multi-parameter extract real-time system
CN112797897B (en) * 2019-04-15 2022-12-06 Oppo广东移动通信有限公司 Method and device for measuring geometric parameters of object and terminal
CN111986250A (en) * 2019-05-22 2020-11-24 顺丰科技有限公司 Object volume measuring method, device, measuring equipment and storage medium
CN110309561A (en) * 2019-06-14 2019-10-08 吉旗物联科技(上海)有限公司 Goods space volume measuring method and device
CN110349205B (en) * 2019-07-22 2021-05-28 浙江光珀智能科技有限公司 Method and device for measuring volume of object
CN110425980A (en) * 2019-08-12 2019-11-08 深圳市知维智能科技有限公司 The measurement method and system of the volume of storage facilities content
CN110296747A (en) * 2019-08-12 2019-10-01 深圳市知维智能科技有限公司 The measurement method and system of the volume of storage content
CN110766744B (en) * 2019-11-05 2022-06-10 北京华捷艾米科技有限公司 MR volume measurement method and device based on 3D depth camera
CN111561872B (en) * 2020-05-25 2022-05-13 中科微至智能制造科技江苏股份有限公司 Method, device and system for measuring package volume based on speckle coding structured light
CN111696152B (en) * 2020-06-12 2023-05-12 杭州海康机器人股份有限公司 Method, device, computing equipment, system and storage medium for detecting package stack
CN112254635B (en) * 2020-09-23 2022-06-28 洛伦兹(北京)科技有限公司 Volume measurement method, device and system
CN113418467A (en) * 2021-06-16 2021-09-21 厦门硅谷动能信息技术有限公司 Method for detecting general and black luggage size based on ToF point cloud data
CN114264277A (en) * 2021-12-31 2022-04-01 英特尔产品(成都)有限公司 Method and device for detecting flatness abnormality of chip substrate
CN114494404A (en) * 2022-02-14 2022-05-13 云从科技集团股份有限公司 Object volume measurement method, system, device and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7037263B2 (en) * 2003-08-20 2006-05-02 Siemens Medical Solutions Usa, Inc. Computing spatial derivatives for medical diagnostic imaging methods and systems
CN101266131A (en) * 2008-04-08 2008-09-17 长安大学 Volume measurement device based on image and its measurement method
CN106225678A (en) * 2016-09-27 2016-12-14 北京正安维视科技股份有限公司 Dynamic object based on 3D camera location and volume measuring method
CN106813568A (en) * 2015-11-27 2017-06-09 阿里巴巴集团控股有限公司 object measuring method and device
CN106839975A (en) * 2015-12-03 2017-06-13 杭州海康威视数字技术股份有限公司 Volume measuring method and its system based on depth camera
CN107067394A (en) * 2017-04-18 2017-08-18 中国电子科技集团公司电子科学研究院 A kind of oblique photograph obtains the method and device of point cloud coordinate

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
"Accuracy of scaling and DLT reconstruction techniques for planar motion analyses"; Brewin, M.A. et al.; Journal of Applied Biomechanics; 2003-02-28; vol. 19, no. 19, pp. 79-88 *
"Optical flow background estimation for real-time pan/tilt camera object tracking"; D. Doyle et al.; Measurement; 2014-02-28; vol. 48, no. 1, pp. 195-207 *
"Attitude-induced image motion calculation for frame-type remote sensing cameras based on image plane rotation"; Ding Yalin; Optics and Precision Engineering; 2007-05-29; vol. 9, no. 15, pp. 1432-1438 *
"Research on surface texture reconstruction of cultural relics based on photographic principles"; Ding Lijun; China Master's Theses Full-text Database, Information Science and Technology; 2007-12-15; no. 6; I138-763 *
"Calculating the image plane rotation of aerial cameras by mathematical coordinate transformation"; Ding Yalin; Optical Instruments; 2012-09-25; vol. 29, no. 1, pp. 22-26 *
"Calculation of image motion velocity for tilted aerial cameras considering aircraft attitude angles"; Zhai Linpei; Optics and Precision Engineering; 2006-07-31; vol. 14, no. 3, pp. 490-494 *

Also Published As

Publication number Publication date
CN108537834A (en) 2018-09-14

Similar Documents

Publication Publication Date Title
CN108537834B (en) Volume measurement method and system based on depth image and depth camera
CN110174056A (en) A kind of object volume measurement method, device and mobile terminal
WO2020168685A1 (en) Three-dimensional scanning viewpoint planning method, device, and computer readable storage medium
KR20220025028A (en) Method and device for building beacon map based on visual beacon
CN115100299B (en) Calibration method, device, equipment and storage medium
CN111080682A (en) Point cloud data registration method and device
CN112991459A (en) Camera calibration method, device, equipment and storage medium
CN111311671B (en) Workpiece measuring method and device, electronic equipment and storage medium
CN115060162A (en) Chamfer dimension measuring method and device, electronic equipment and storage medium
CN114037987A (en) Intelligent identification method, device, medium and equipment for scrap steel
CN108332662B (en) Object measuring method and device
CN114299242A (en) Method, device and equipment for processing images in high-precision map and storage medium
CN111985266B (en) Scale map determining method, device, equipment and storage medium
JP2020512536A (en) System and method for 3D profile determination using model-based peak selection
CN116203976A (en) Indoor inspection method and device for transformer substation, unmanned aerial vehicle and storage medium
CN109000560B (en) Method, device and equipment for detecting package size based on three-dimensional camera
CN113379826A (en) Method and device for measuring volume of logistics piece
CN111915666A (en) Volume measurement method and device based on mobile terminal
CN115861443A (en) Multi-camera internal reference calibration method and device, electronic equipment and storage medium
CN113628284B (en) Pose calibration data set generation method, device and system, electronic equipment and medium
CN105078404A (en) Fully automatic eye movement tracking distance measuring calibration instrument based on laser algorithm and use method of calibration instrument
CN113759346B (en) Laser radar calibration method and device, electronic equipment and storage medium
CN112150527B (en) Measurement method and device, electronic equipment and storage medium
CN115100296A (en) Photovoltaic module fault positioning method, device, equipment and storage medium
CN109712547B (en) Display screen plane brightness measuring method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A volume measurement method, system and depth camera based on depth image

Effective date of registration: 20220412

Granted publication date: 20200501

Pledgee: Zhejiang Mintai Commercial Bank Co.,Ltd. Hangzhou Binjiang small and micro enterprise franchise sub branch

Pledgor: HANGZHOU AIXIN INTELLIGENT TECHNOLOGY CO.,LTD.

Registration number: Y2022330000495
