CN112435291A - Multi-target volume measurement method and device and storage medium - Google Patents


Info

Publication number
CN112435291A
CN112435291A (application CN202011473401.0A)
Authority
CN
China
Prior art keywords
information
point cloud
target
image information
matching
Prior art date
Legal status
Pending
Application number
CN202011473401.0A
Other languages
Chinese (zh)
Inventor
毛巨洪
胡攀攀
李康
Current Assignee
Wuhan Wanji Information Technology Co Ltd
Original Assignee
Wuhan Wanji Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Wanji Information Technology Co Ltd
Priority to CN202011473401.0A
Publication of CN112435291A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/35 Determination of transform parameters for the alignment of images, i.e. image registration using statistical methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a multi-target volume measurement method, a multi-target volume measurement device, and a storage medium. The method first obtains attitude information of the multi-target volume measurement device at a plurality of positions and acquires point cloud information and image information in the scenes corresponding to those positions; secondly, based on the acquired attitude information and point cloud information, it identifies a plurality of targets and calculates their volumes and positioning information; then, based on the acquired attitude information and image information, it determines the matching marks on the image information and the positioning information of the matching marks; finally, based on the volumes and positioning information of the plurality of targets and the matching marks on the image information and their positioning information, it matches each of the targets with its corresponding volume. The method can measure a plurality of target volumes simultaneously, so the efficiency of measuring cargo volumes is effectively improved when there are many cargoes.

Description

Multi-target volume measurement method and device and storage medium
Technical Field
The invention belongs to the technical field of measurement, and particularly relates to a multi-target volume measurement method, a multi-target volume measurement device and a storage medium.
Background
With the development of society, measuring the volume of objects has become increasingly common in daily life; industries such as warehousing, cargo transportation and loading, airports, and express delivery all need to measure object volumes. At present, volume measurement is performed either manually or automatically. Manual measurement is not only inefficient but also ill-suited to irregular objects. Automatic measuring equipment mainly comes in fixed and handheld forms, and the main technical means is to reconstruct the three-dimensional contour of an object with a video or laser radar sensor. However, fixed equipment has blind areas during scanning, which degrades volume measurement accuracy, and handheld devices are likewise limited in accuracy; moreover, the common technical means measures the volume of a single target at a time, so measurement efficiency suffers when there are many cargoes.
Therefore, how to measure the volumes of a plurality of objects has become an urgent technical problem to be solved.
Disclosure of Invention
In view of the above problems, the present invention provides a method, an apparatus and a storage medium for multi-target volume measurement, which can achieve volume measurement of multiple targets, and has high measurement efficiency and high accuracy.
The invention aims to provide a multi-target volume measurement method, which comprises the following steps,
acquiring attitude information of the multi-target measuring device at a plurality of positions, and acquiring point cloud information and image information in a scene corresponding to the plurality of positions;
identifying a plurality of targets and calculating the volume and positioning information of each target based on the acquired attitude information and point cloud information;
determining a matching mark on the image information and positioning information of the matching mark based on the acquired attitude information and the image information;
each of the plurality of objects is matched to the corresponding volume based on the volume and location information of the plurality of objects and the matching mark and the location information of the matching mark on the image information.
Optionally, the acquiring the attitude information of the multi-target measuring device at a plurality of positions and the acquiring point cloud information and image information in the scene corresponding to the plurality of positions includes,
determining an initial position of the multi-target measurement device;
acquiring attitude information at an initial position, recording the attitude information as first position information and a first direction angle, and acquiring first point cloud information and first image information in a scene corresponding to the initial position;
moving the multi-target measuring device and, at each of the second position, the third position, ..., the Nth position, acquiring its attitude information, recorded as the second through Nth position information and the second through Nth direction angles, and acquiring the point cloud information and image information in the scene corresponding to each position, recorded as the second through Nth point cloud information and the second through Nth image information.
Optionally, the calculating, based on the obtained pose information and the point cloud information, volume and location information of the plurality of targets includes,
registering each of the second through Nth point cloud information with the first point cloud information in turn to form dense 3D point cloud information;
carrying out point cloud segmentation and target detection on dense 3D point cloud information to obtain a target point cloud;
and acquiring a plurality of point cloud targets based on the target point cloud, and calculating to obtain the volume and positioning information of any point cloud target.
Optionally, the registering of each of the second through Nth point cloud information with the first point cloud information in turn comprises pre-registering and then finely registering each of them with the first point cloud information, specifically including the following steps:
calculating to obtain a pre-registration translation matrix between the second point cloud information and the first point cloud information based on the first position information and the second position information;
calculating to obtain a pre-registration rotation matrix between the second point cloud information and the first point cloud information based on the first direction angle and the second direction angle;
pre-registering the second point cloud information and the first point cloud information based on the pre-registration translation matrix and the pre-registration rotation matrix;
based on the feature points, performing fine registration on the second point cloud information and the first point cloud information after the pre-registration, and calculating to obtain a fine registration rotation matrix and a fine registration translation matrix;
and repeating the above steps to complete, in turn, the pre-registration and fine registration of the third through Nth point cloud information with the first point cloud information.
Optionally, after the point cloud segmentation is performed on the dense 3D point cloud information, manual correction is performed on over-segmentation and under-segmentation.
Optionally, the calculating to obtain the volume of any point cloud target comprises,
judging whether the point cloud target is a regular object, wherein,
if the point cloud target is a regular object, calculating the volume of the point cloud target based on the length, the width and the height;
if the point cloud target is an irregular object, firstly determining a minimum circumscribed rectangle containing the point cloud target, and then taking the volume of the circumscribed rectangle as the volume of the point cloud target.
Optionally, the determining of the matching marks of the multiple targets and the positioning information of the matching marks based on the acquired attitude information and image information comprises identifying and positioning the matching marks on the image information to obtain the positioning information of the matching marks, which specifically includes the following steps:
selecting any two pieces of image information from the first image information to the Nth image information;
acquiring external parameter data of the selected two pieces of image information respectively based on the attitude information corresponding to the selected two pieces of image information and the attitude information corresponding to the first image information;
identifying matching marks on the two pieces of selected image information based on the external reference data, and matching the same matching mark;
respectively determining the pixel position of the matching mark which is successfully matched in the two pieces of selected image information and the camera optical center which respectively corresponds to the two pieces of selected image information;
determining the positions of camera optical centers corresponding to the two pieces of selected image information and the distance between the camera optical centers corresponding to the two pieces of selected image information based on the posture information corresponding to the two pieces of selected image information;
and determining the position of the matching mark according to a binocular vision ranging method based on the positions of the optical centers of the cameras respectively corresponding to the two pieces of selected image information and the pixel positions of the matching marks successfully matched in the two pieces of selected image information, and acquiring the positioning information of the matching marks.
Optionally, the matching each of the plurality of objects with the corresponding volume based on the volume and location information of the plurality of objects and the matching marks on the image information and the location information of the matching marks comprises,
and matching and binding the volumes of the matching mark and the point cloud target according to the positioning information of the matching mark and the positioning information of the point cloud target.
It is another object of the present invention to provide a multi-target volume measuring device, comprising,
the acquisition module is used for acquiring attitude information of the multi-target measuring device at a plurality of positions and acquiring point cloud information and image information in a scene corresponding to the plurality of positions;
the calculation module is used for calculating the volume and the positioning information of the multiple targets based on the acquired attitude information and the point cloud information;
the determining module is used for determining a matching mark on the image information and positioning information of the matching mark based on the acquired posture information and the image information;
and the matching module is used for matching each target in the plurality of targets with the corresponding volume based on the volume and the positioning information of the plurality of targets and the matching marks on the image information and the positioning information of the matching marks.
The invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the multi-target volume measurement method of the invention as described above.
The multi-target volume measurement method provided by the invention acquires the attitude information, point cloud information, and image information of the multi-target measurement device at different positions, so the acquired measurement data are more comprehensive and the reliability of the target volume calculation is improved. Furthermore, the multi-target volume measurement method can measure a plurality of target volumes simultaneously, so the efficiency of measuring cargo volumes is effectively improved when there are many cargoes; the multi-target measurement device is highly versatile and accurate, and can effectively improve the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow chart of a multi-target volume measurement method according to an embodiment of the invention;
FIG. 2 is a flow chart illustrating the steps of a multi-target volume measurement method according to an embodiment of the present invention;
FIG. 3a is a schematic diagram illustrating an under-segmentation structure in point cloud segmentation of dense 3D point cloud information according to an embodiment of the present invention;
FIG. 3b is a schematic diagram illustrating an excessive segmentation structure in point cloud segmentation of dense 3D point cloud information according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating positioning of a barcode based on the barcodes in the Nth image information and the Mth image information according to an embodiment of the present invention;
fig. 5 shows a schematic structural diagram of a multi-target volume measuring device according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms first, second and the like in the description and in the claims of the present invention are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that embodiments of the invention may be practiced otherwise than as specifically illustrated and described herein. In addition, "and/or" in the specification and claims means at least one of connected objects, a character "/" generally means that a preceding and succeeding related objects are in an "or" relationship.
The multi-target volume measurement method provided by the embodiment of the invention is described in detail below through specific embodiments and application scenarios with reference to the accompanying drawings.
Referring to fig. 1, an embodiment of the present invention introduces a multi-target volume measurement method comprising: obtaining attitude information of a multi-target measurement device at multiple positions and acquiring point cloud information and image information in the scenes corresponding to those positions; secondly, based on the acquired attitude information and point cloud information, identifying a plurality of targets and calculating their volumes and positioning information; then, based on the acquired attitude information and image information, determining the matching marks on the image information and the positioning information of the matching marks; finally, based on the volumes and positioning information of the plurality of targets and the matching marks on the image information and their positioning information, matching each of the targets with its corresponding volume. In the embodiment of the present invention, the attitude information includes position information and a direction angle. By obtaining attitude information and point cloud information at multiple positions, the volume information of multiple objects can be obtained effectively, improving the efficiency of measuring multiple target objects. In addition, after the target volumes and positions are calculated, the matching marks and their positioning information are identified from the image information, and each of the targets is matched with its corresponding volume, so it can be stated precisely which volume belongs to which object, making the target volume measurement more accurate and efficient.
It should be noted that, in the embodiment of the present invention, the matching mark is illustrated by taking a barcode as an example. Referring to fig. 2, the multi-target volume measurement method includes the following detailed steps:
step S1: firstly, determining an initial position of the multi-target measuring device, acquiring attitude information, namely first position information and a first direction angle, at the initial position, and acquiring first point cloud information and first image information in a scene corresponding to the initial position; illustratively, the first position information is an initial position (X1, Y1, Z1), and the first direction angle is
Figure BDA0002836723760000061
Step S2: moving the multi-target measuring device, obtaining the attitude information at each position after movement, and acquiring the point cloud information and image information in the scene corresponding to that position. Specifically, after the first movement the multi-target measuring device is at the second position, where the second position information, second direction angle, second point cloud information, and second image information are obtained; the movement continues, yielding the third position information, third direction angle, third point cloud information, and third image information, the fourth position information, fourth direction angle, fourth point cloud information, and fourth image information, and so on, until the device reaches the Nth position, yielding the Nth position information, Nth direction angle, Nth point cloud information, and Nth image information, where N is an integer and N > 1.
Alternatively, taking the first movement as an example: if the multi-target measuring device is at the second position after the first movement, the second position information in the attitude information at the second position is (X2, Y2, Z2) and the second direction angle is (φx2, φy2, φz2); the displacement of the second position relative to the first position is S21 = (X2-X1, Y2-Y1, Z2-Z1). Obtaining the attitude information, point cloud information, and image information of the multi-target measuring device at different positions makes the acquired measurement data more comprehensive and improves the reliability of the target volume calculation.
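As a minimal sketch (with illustrative coordinates and names, not values from the patent), the displacement S21 and the direction-angle change between two poses can be computed as:

```python
import numpy as np

# Hypothetical poses of the measuring device at the first and second positions.
p1 = np.array([1.0, 2.0, 0.5])         # first position information (X1, Y1, Z1)
p2 = np.array([2.5, 2.0, 0.5])         # second position information (X2, Y2, Z2)
ang1 = np.array([0.0, 0.0, 0.0])       # first direction angle (phi_x1, phi_y1, phi_z1), radians
ang2 = np.array([0.0, 0.0, np.pi / 6]) # second direction angle

s21 = p2 - p1        # displacement S21 = (X2-X1, Y2-Y1, Z2-Z1)
dang = ang2 - ang1   # direction-angle change between the two poses

print(s21, dang)
```

These two differences are exactly the quantities the later registration step turns into a translation matrix and a rotation matrix.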
In this embodiment, when the point cloud information and image information in a scene are collected, they are collected in at least four directions of the scene: front, back, left, and right. Collecting the point cloud information and image information in multiple directions of the same scene makes the obtained information more comprehensive and rich, improving the measurement accuracy.
Step S3: registering each of the second through Nth point cloud information with the first point cloud information in turn to form dense 3D point cloud information. Specifically, each registration comprises pre-registration followed by fine registration. Combining pre-registration with fine registration makes the point cloud information more reliable, reduces the complexity of the registration algorithm, and improves the overlap ratio of the registered point clouds.
Specifically, registering the Nth point cloud information with the first point cloud information of the initial position comprises: first, obtaining pre-registration parameters from the Nth position information, the Nth direction angle, the first position information, and the first direction angle, the pre-registration parameters comprising a pre-registration rotation matrix YRN and a pre-registration translation matrix YTN; second, pre-registering the Nth point cloud information with the first point cloud information based on the pre-registration parameters; then, finely registering the pre-registered Nth point cloud information with the first point cloud information according to feature points to obtain fine registration parameters, comprising a fine registration rotation matrix JRN and a fine registration translation matrix JTN. More specifically, the registration of the second point cloud information with the first point cloud information is taken as an example.
Pre-registering the second point cloud information with the first point cloud information: first, a pre-registration translation matrix YT2 = (t1, t2, t3) is calculated from the initial position (X1, Y1, Z1) and the second position (X2, Y2, Z2);
Secondly, calculating a pre-registration rotation matrix according to the initial direction angle (φ x1, φ y1, φ z1) and the second direction angle (φ x2, φ y2, φ z 2):
Figure BDA0002836723760000071
then, the second point cloud information is pre-registered with the first point cloud information based on the pre-registration translation matrix and the pre-registration rotation matrix;
finally, on the basis of the pre-registration, fine registration of the second point cloud information with the first point cloud information is performed: feature points are extracted, and the fine registration parameters are calculated from them.
The feature points include, but are not limited to, intersections, vertices, or high points of the target object, and may also be parameters such as length, width, and inflection point of the rectangle. Further, the fine registration parameters include a fine registration rotation matrix and a fine registration translation matrix, wherein,
the fine registration rotation matrix is:
Figure BDA0002836723760000081
the fine registration translation matrix is: JT2(t ' 1, t ' 2, t ' 3). Both pre-registration and fine registrationThe matrix is used as the registration parameter, so that the registration accuracy is effectively improved, and the accuracy of multi-target detection is further improved.
Step S4: carrying out point cloud segmentation and target detection on dense 3D point cloud information to obtain a target point cloud; specifically, the method further comprises preprocessing the dense 3D point cloud information before performing point cloud segmentation and target detection on the dense 3D point cloud information, wherein the preprocessing steps include but are not limited to filtering, region growing, background point cloud elimination and the like.
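The patent does not specify a segmentation algorithm; as one hedged illustration, after preprocessing, a simple Euclidean distance-based clustering (a common way to separate point cloud targets) might look like:

```python
import numpy as np

def euclidean_cluster(points, radius=0.05, min_size=10):
    """Greedy flood-fill clustering of a point cloud.

    Brute force, O(n^2); a KD-tree would be used in practice. Illustrative
    stand-in for the patent's unspecified segmentation method; `radius` and
    `min_size` are assumed parameters.
    """
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, members = [seed], [seed]
        while queue:
            i = queue.pop()
            d = np.linalg.norm(points - points[i], axis=1)
            near = [j for j in unvisited if d[j] <= radius]
            for j in near:
                unvisited.discard(j)
            queue.extend(near)
            members.extend(near)
        if len(members) >= min_size:   # drop tiny fragments as noise
            clusters.append(sorted(members))
    return clusters
```

Each returned cluster is a candidate point cloud target; clusters that are too large or too small would then surface as the under- and over-segmentation cases handled next.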
In this embodiment, after the point cloud segmentation of the dense 3D point cloud information, manual correction is further performed for over-segmentation and under-segmentation. Under-segmentation means that several objects have not been fully separated (see fig. 3a); over-segmentation means that a single target has been wrongly split into several targets (see fig. 3b). When a point cloud target is over-segmented, the two over-segmented point cloud targets are manually selected on the terminal device by touch or mouse and merged; when the point cloud is under-segmented, an artificial segmentation line is drawn on the terminal device by touch or mouse to split the point cloud target manually. This embodiment fully considers the over-segmentation and under-segmentation problems in the target segmentation process and sets different processing methods for them, so the multi-target detection accuracy is high and the user experience is effectively improved.
Step S5: based on the target point cloud, obtaining a plurality of point cloud targets (also referred to as a plurality of targets), i.e., all the targets, and calculating the volume and positioning information of each point cloud target. Specifically, from the first through Nth point cloud information, the first through Nth positions, and the first through Nth direction angles in steps S1-S4, the plurality of point cloud targets are obtained through registration and target segmentation, and the volume and position (i.e., positioning information) of the first point cloud target, of the second point cloud target, ..., of the Kth point cloud target are calculated, where K is an integer and K <= N. Further, calculating the volume of any point cloud target comprises judging whether the point cloud target is a regular object: if it is a regular object, its volume is calculated from its length, width, and height; if it is an irregular object, the minimum circumscribed rectangular box containing the point cloud target is first determined, and the volume of that circumscribed box is taken as the volume of the point cloud target. Using different volume calculations for objects of different natures makes the measured multi-target volumes more accurate.
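A minimal sketch of the volume calculation using an axis-aligned bounding box: for a regular, axis-aligned box the extents are its length, width, and height, and for an irregular object the enclosing box volume is used, so both cases reduce to the same computation here. A true minimum circumscribed box would need an oriented-bounding-box fit, which this sketch omits:

```python
import numpy as np

def target_volume(points):
    """Volume of a segmented point cloud target via its axis-aligned
    bounding box: length x width x height of the box enclosing all points."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    length, width, height = maxs - mins
    return float(length * width * height)
```

For a tilted regular object, the cloud would first be rotated into its principal axes so the box extents match the true length, width, and height.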
Step S6: identifying and positioning the barcodes on the image information to acquire the positioning information of the barcodes. Pairs of images among the first through Nth image information are subjected in turn to barcode identification and barcode positioning until all barcodes have been identified and positioned. The method specifically comprises the following steps:
selecting any two pieces of image information from the first image information to the Nth image information;
acquiring a pre-registration translation matrix and a pre-registration rotation matrix corresponding to the two pieces of selected image information respectively based on the attitude information corresponding to the two pieces of selected image information and the attitude information corresponding to the first image information;
identifying the barcodes on the two pieces of selected image information based on the pre-registration translation matrix and the pre-registration rotation matrix corresponding to the two pieces of selected image information, and matching the same barcode;
respectively determining the pixel position of the successfully matched bar code in the two pieces of selected image information and the camera optical center corresponding to the two pieces of selected image information;
determining the positions of the camera optical centers corresponding to the two pieces of selected image information, and the distance between those camera optical centers, based on the attitude information corresponding to the two pieces of selected image information;
and determining the position of the bar code according to a binocular vision ranging method based on the positions of the camera optical centers respectively corresponding to the two pieces of selected image information and the pixel positions of the successfully matched bar codes in the two pieces of selected image information.
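The second step above (deriving pre-registration translation and rotation matrices from attitude information) can be sketched as follows, under the simplifying assumption that each pose consists of a 3D position plus a single direction angle (yaw about the vertical axis) with the convention world_point = Rz(yaw) @ local_point + position; the full attitude from gyroscope, accelerometer, and magnetometer would give a general 3-DOF rotation. Function names are illustrative:

```python
import numpy as np

def rz(angle: float) -> np.ndarray:
    # Rotation about the vertical (z) axis by `angle` radians.
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def pre_registration(pos_a, yaw_a, pos_b, yaw_b):
    # Rotation R and translation t mapping a point expressed in frame B
    # into frame A:  p_a = R @ p_b + t,
    # derived from  world = Rz(yaw) @ p_local + pos  for each pose.
    R = rz(yaw_b - yaw_a)
    t = rz(-yaw_a) @ (np.asarray(pos_b, float) - np.asarray(pos_a, float))
    return R, t
```

In the patent's pipeline such a transform would only coarsely align the data; a feature-based fine registration (e.g. ICP) would refine it.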
In some embodiments, referring to fig. 4, positioning of the barcode is exemplified with the two pieces of selected image information being the Nth image information and the Mth image information. The same barcode (KP12345) is identified on the Nth image information and on the Mth image information, its pixel positions KN and KM on the two images are determined, and the camera optical centers corresponding to the Nth and Mth image information are denoted ON and OM, respectively. The positions of ON and OM and the relative distance L between them are then determined from the attitude information corresponding to the Nth and Mth image information, namely from the Nth position and Nth direction angle and the Mth position and Mth direction angle.
The spatial position coordinates of the barcode KP12345, i.e., the position of the barcode and thus its positioning information, are then determined by the binocular vision ranging method from the positions of ON and OM and the pixel positions KN and KM.
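A minimal sketch of the binocular ranging step, assuming the pixel positions KN and KM have already been back-projected (using known camera intrinsics) into unit viewing rays from the optical centers ON and OM; the midpoint of the two rays' closest approach is taken as the barcode position:

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    # Midpoint of the closest approach of two viewing rays.
    # o1, o2: camera optical centers; d1, d2: unit ray directions toward
    # the matched barcode pixel. Solves for ray parameters (s, t)
    # minimizing |o1 + s*d1 - (o2 + t*d2)|.
    o1, d1 = np.asarray(o1, float), np.asarray(d1, float)
    o2, d2 = np.asarray(o2, float), np.asarray(d2, float)
    b = o2 - o1
    a11, a12, a22 = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a11 * a22 - a12 * a12          # ~0 when the rays are parallel
    s = (a22 * (d1 @ b) - a12 * (d2 @ b)) / denom
    t = (a12 * (d1 @ b) - a11 * (d2 @ b)) / denom
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))
```

When the two rays actually intersect (noise-free matches), the midpoint is exactly the barcode's spatial position; with pixel noise it is the least-squares compromise between the two rays.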
Step S7: the barcode information is matched and bound to the volume of the corresponding point cloud target, according to the positioning information of the barcode and the positioning information of the point cloud target. When the positioning information of a barcode coincides with that of a point cloud target, the two are matched and bound; the volume of each target can then be clearly displayed to the user by means of its barcode, improving the user experience.
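One plausible realization of the matching-and-binding step is a nearest-neighbor rule between barcode positions and target centroids; the distance tolerance and the greedy nearest-centroid choice are assumptions, not taken from the patent:

```python
import numpy as np

def bind_barcodes(barcodes, targets, max_dist=0.5):
    # Bind each barcode to the volume of the nearest point cloud target.
    # barcodes: {code: (x, y, z) barcode position}
    # targets:  {target_id: (centroid_xyz, volume)}
    # max_dist: assumed tolerance (metres) rejecting implausible matches.
    bound = {}
    for code, pos in barcodes.items():
        pos = np.asarray(pos, float)
        best_volume, best_d = None, max_dist
        for tid, (centroid, volume) in targets.items():
            d = np.linalg.norm(pos - np.asarray(centroid, float))
            if d < best_d:
                best_volume, best_d = volume, d
        if best_volume is not None:
            bound[code] = best_volume
    return bound
```

A barcode whose position is farther than the tolerance from every target centroid is left unbound rather than attached to the wrong cargo item.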
In addition, referring to fig. 5, an embodiment of the present invention further provides a multi-target volume measurement apparatus capable of executing the method, comprising an obtaining module, a calculating module, a determining module, and a matching module. The obtaining module is configured to obtain attitude information of the multi-target volume measurement apparatus at multiple positions and to acquire point cloud information and image information in the scenes corresponding to those positions. The calculating module is configured to calculate the volumes and positioning information of the multiple targets based on the acquired attitude information and point cloud information. The determining module is configured to determine the matching marks on the image information and the positioning information of the matching marks based on the acquired attitude information and image information. The matching module is configured to match each of the multiple targets with its corresponding volume based on the volumes and positioning information of the multiple targets and the matching marks on the image information and their positioning information. In this embodiment, after the target volumes and positions are calculated, each point cloud target must be matched with its corresponding volume and displayed to the user; associating each target volume with a matching mark makes it unambiguous which object a given volume belongs to, so that the target volumes are measured more accurately and efficiently.
Specifically, the obtaining module at least comprises an attitude detection unit and an information acquisition unit, and the calculating module at least comprises a processing unit. The attitude detection unit is configured to obtain attitude information of the multi-target measurement device at multiple positions; the information acquisition unit is configured to acquire point cloud information and image information in the scenes corresponding to those positions; and the processing unit is configured to determine the volumes of the targets based on the acquired attitude information and point cloud information.
In this embodiment, the attitude detection unit and the information acquisition unit can execute steps S1 and S2, and the processing unit can execute steps S3 to S7, which are not described again here.
In this embodiment, the attitude detection unit at least comprises a gyroscope, an accelerometer, and a magnetometer. The information acquisition unit may comprise an image acquisition unit and a point cloud acquisition unit; a common point cloud acquisition unit comprises a TOF (time-of-flight) depth camera or a structured-light camera.
In this embodiment, the processing unit is an APP that connects the attitude detection unit and the information acquisition unit, processes their data, and provides a human-computer interaction interface. The multi-target volume measurement device may be an intelligent device, such as a mobile phone or tablet, equipped with the attitude detection unit and the information acquisition unit; it may also be a pluggable device integrating the attitude detection unit and the information acquisition unit, which is connected to a handheld terminal device via a serial port, Bluetooth, or WiFi, with control and data processing realized by data processing software running on the terminal device.
The multi-target measurement device in the embodiment of the present invention can measure the volumes of multiple targets simultaneously, which effectively improves the efficiency of measuring cargo volumes when the quantity of goods is large. It has the characteristics of strong universality and high measurement accuracy, and can effectively improve the user experience.
In addition, an embodiment of the present invention further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the multi-target volume measurement method in the method embodiment of fig. 1, and details are not repeated here in order to avoid repetition.
The processor is the processor in the electronic device in the above embodiment. Readable storage media, including computer-readable storage media, such as Read-Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of embodiments of the present invention is not limited to performing functions in the order illustrated or discussed, but may include performing functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A multi-target volume measurement method is characterized by comprising the following steps,
acquiring attitude information of the multi-target measuring device at a plurality of positions, and acquiring point cloud information and image information in a scene corresponding to the plurality of positions;
calculating volumes and positioning information of a plurality of targets based on the acquired attitude information and point cloud information;
determining a matching mark on the image information and positioning information of the matching mark based on the acquired attitude information and the image information;
each of the plurality of objects is matched to a corresponding volume based on the volume and location information of the plurality of objects and the matching marker and location information of the matching marker on the image information.
2. The multi-target volumetric measurement method of claim 1, wherein obtaining pose information for the multi-target measurement device at a plurality of locations and acquiring point cloud information and image information within a scene corresponding to the plurality of locations comprises,
determining an initial position of the multi-target measurement device;
acquiring attitude information at an initial position, recording the attitude information as first position information and a first direction angle, and acquiring first point cloud information and first image information in a scene corresponding to the initial position;
moving the position of the multi-target measurement device, respectively obtaining attitude information of the multi-target measurement device after moving to a second position, a third position, ..., an Nth position, recording the attitude information as second position information, third position information, ..., Nth position information and a second direction angle, a third direction angle, ..., an Nth direction angle, and acquiring point cloud information and image information in the scene corresponding to each position, recorded as second point cloud information, third point cloud information, ..., Nth point cloud information and second image information, third image information, ..., Nth image information.
3. The multi-target volume measurement method according to claim 2, wherein the calculating of the volume and location information of the plurality of targets based on the acquired attitude information and point cloud information includes,
sequentially registering the second point cloud information, the third point cloud information … …, the Nth point cloud information and the first point cloud information respectively to form dense 3D point cloud information;
carrying out point cloud segmentation and target detection on dense 3D point cloud information to obtain a target point cloud;
and acquiring a plurality of point cloud targets based on the target point cloud, and calculating to obtain the volume and positioning information of any point cloud target.
4. The multi-target volume measurement method according to claim 3, wherein sequentially registering the second point cloud information, the third point cloud information, ..., and the Nth point cloud information with the first point cloud information respectively to form dense 3D point cloud information comprises sequentially performing pre-registration and fine registration of the second point cloud information, the third point cloud information, ..., and the Nth point cloud information with the first point cloud information respectively, and specifically comprises the following steps:
calculating to obtain a pre-registration translation matrix between the second point cloud information and the first point cloud information based on the first position information and the second position information;
calculating to obtain a pre-registration rotation matrix between the second point cloud information and the first point cloud information based on the first direction angle and the second direction angle;
pre-registering the second point cloud information and the first point cloud information based on the pre-registration translation matrix and the pre-registration rotation matrix;
based on the feature points, performing fine registration on the second point cloud information and the first point cloud information after the pre-registration, and calculating to obtain a fine registration rotation matrix and a fine registration translation matrix;
and repeating the above steps to sequentially complete pre-registration and fine registration between the third point cloud information, the fourth point cloud information, ..., the Nth point cloud information and the first point cloud information.
5. The multi-target volume measurement method of claim 4, further comprising manually correcting over-segmentation and under-segmentation after point cloud segmentation of dense 3D point cloud information.
6. The multi-target volumetric measurement method of claim 5, wherein the calculating a volume of any point cloud target includes,
judging whether the point cloud target is a regular object, wherein,
if the point cloud target is a regular object, calculating the volume of the point cloud target based on the length, the width and the height;
if the point cloud target is an irregular object, firstly determining a minimum circumscribed cuboid containing the point cloud target, and then taking the volume of the circumscribed cuboid as the volume of the point cloud target.
7. The multi-target volume measurement method according to claim 6, wherein determining the matching marks on the image information and the positioning information of the matching marks based on the obtained attitude information and image information comprises identifying and positioning the matching marks on the image information and obtaining the positioning information of the matching marks, and specifically comprises the following steps:
selecting any two pieces of image information from the first image information to the Nth image information;
acquiring external parameter data of the selected two pieces of image information respectively based on the attitude information corresponding to the selected two pieces of image information and the attitude information corresponding to the first image information;
identifying matching marks on the two pieces of selected image information based on the external reference data, and matching the same matching mark;
respectively determining the pixel position of the matching mark which is successfully matched in the two pieces of selected image information and the camera optical center which respectively corresponds to the two pieces of selected image information;
determining the positions of the camera optical centers corresponding to the two pieces of selected image information, and the distance between those camera optical centers, based on the attitude information corresponding to the two pieces of selected image information;
and determining the position of the matching mark according to a binocular vision ranging method based on the positions of the optical centers of the cameras respectively corresponding to the two pieces of selected image information and the pixel positions of the matching marks successfully matched in the two pieces of selected image information, and acquiring the positioning information of the matching marks.
8. The multi-target volume measurement method of claim 7, wherein matching each of the plurality of targets with a corresponding volume based on the volumes and positioning information of the plurality of targets and the matching marks on the image information and the positioning information of the matching marks comprises,
and matching and binding the volumes of the matching mark and the point cloud target according to the positioning information of the matching mark and the positioning information of the point cloud target.
9. A multi-target volume measuring device, comprising,
the acquisition module is used for acquiring attitude information of the multi-target measuring device at a plurality of positions and acquiring point cloud information and image information in a scene corresponding to the plurality of positions;
the calculation module is used for calculating the volume and the positioning information of the multiple targets based on the acquired attitude information and the point cloud information;
the determining module is used for determining the matching marks on the image information and the positioning information of the matching marks on the image information based on the acquired posture information and the acquired image information;
and the matching module is used for matching each target in the plurality of targets with the corresponding volume based on the volume and the positioning information of the plurality of targets and the matching marks on the image information and the positioning information of the matching marks.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a multi-target volume measurement method according to any one of claims 1 to 8.
CN202011473401.0A 2020-12-15 2020-12-15 Multi-target volume measurement method and device and storage medium Pending CN112435291A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011473401.0A CN112435291A (en) 2020-12-15 2020-12-15 Multi-target volume measurement method and device and storage medium


Publications (1)

Publication Number Publication Date
CN112435291A true CN112435291A (en) 2021-03-02

Family

ID=74691220


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113496488A (en) * 2021-07-16 2021-10-12 深圳市乐福衡器有限公司 Method and system for acquiring nutrition information, shooting terminal and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105571489A (en) * 2016-01-04 2016-05-11 广州市汶鑫自控工程有限公司 Object weight and volume measurement and identification system and method
US20160163067A1 (en) * 2014-12-05 2016-06-09 Symbol Technologies, Inc. Apparatus for and method of estimating dimensions of an object associated with a code in automatic response to reading the code
DE102018006765A1 (en) * 2018-08-27 2020-02-27 Daimler Ag METHOD AND SYSTEM (S) FOR MANAGING FREIGHT VEHICLES
CN111310740A (en) * 2020-04-14 2020-06-19 深圳市异方科技有限公司 Pedestrian luggage volume measuring device under motion condition
CN111626665A (en) * 2020-05-09 2020-09-04 武汉中岩科技股份有限公司 Intelligent logistics system and method based on binocular vision




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination