CN113295089B - Carriage volume rate measuring method based on visual inertia SLAM - Google Patents

Carriage volume rate measuring method based on visual inertia SLAM

Info

Publication number
CN113295089B
Authority
CN
China
Prior art keywords: carriage, mobile terminal, point cloud, frame, goods
Prior art date
Legal status (assumption, not a legal conclusion): Active
Application number
CN202110370458.6A
Other languages
Chinese (zh)
Other versions
CN113295089A (en)
Inventor
张箫
邱鹏
Current Assignee: Shenzhen Yifang Technology Co., Ltd.
Original Assignee: Shenzhen Yifang Technology Co., Ltd.
Priority date (assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Shenzhen Yifang Technology Co., Ltd.
Priority application: CN202110370458.6A
Publication of application CN113295089A
Application granted; publication of CN113295089B
Legal status: Active

Classifications

    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G06T 17/20: Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/44: Analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/90: Determination of colour characteristics
    • G06T 2207/10028: Range image; depth image; 3D point clouds

Abstract

The embodiment of the invention discloses a carriage volume rate measuring method based on visual inertia SLAM. A mobile terminal acquires color images, depth images and inertial data of a carriage and its cargo from a preset distance; the system is initialized; the pose of the mobile terminal at the moment each frame is captured is obtained by tracking and calculation, and three-dimensional point cloud reconstruction is performed on the color and depth images to obtain a three-dimensional point cloud of the carriage and cargo; and the volumes of the carriage and cargo point clouds are obtained from the three-dimensional point cloud, from which the volume rate of the carriage is obtained. The invention measures the point-cloud volumes of the carriage and cargo quickly, simply, efficiently, accurately and at long range, and thereby obtains the volume rate of the carriage.

Description

Carriage volume rate measuring method based on visual inertia SLAM
Technical Field
The invention relates to the technical field of surveying and mapping, and in particular to a carriage volume rate measuring method based on visual inertia SLAM.
Background
In the logistics industry, when goods are transferred or consolidated onto one vehicle, the volume of the cargo currently loaded in a carriage must be known so that the truck can be loaded appropriately. Industry personnel currently estimate this by eye from experience, but the error is very large and each person gives a different result. Alternatively, sensors based on the triangulation principle are used, but their depth error grows large once the carriage length exceeds 2 meters, so they cannot meet the volume measurement requirement for carriages longer than 2 meters.
Disclosure of Invention
In view of the above technical problems, embodiments of the invention provide a carriage volume rate measuring method based on visual inertia SLAM.
A first aspect of the embodiments of the present invention provides a carriage volume rate measuring method based on visual inertia SLAM, including:
a mobile terminal acquires color images, depth images and inertial data of a carriage and its cargo from a preset distance from the cargo;
a VIO system performs initialization;
the pose of the mobile terminal at the moment each frame is captured is obtained by tracking and calculation, and three-dimensional point cloud reconstruction is performed on the color and depth images of the carriage and cargo to obtain a three-dimensional point cloud of the carriage and cargo;
and the volumes of the carriage and cargo point clouds are obtained from the three-dimensional point cloud, from which the volume rate of the carriage is obtained.
Optionally, the depth image stores the depth distance from the carriage or cargo to the mobile terminal as pixel values in the image, and the inertial data are the accelerations along the x, y and z axes and the angular velocities about the x, y and z axes measured by the mobile terminal's inertial sensor.
Optionally, the step in which the mobile terminal acquires color images, depth images and inertial data of the carriage and cargo from a preset distance from the cargo includes:
the mobile terminal acquires color images and depth images to form a video stream;
and the images in the video stream are tracked, and the motion information of the mobile terminal is calculated from the information of adjacent images.
Optionally, tracking the images in the video stream and calculating the motion information of the mobile terminal from the information of adjacent images includes:
tracking Shi-Tomasi feature points on the color images with a KLT sparse optical flow algorithm;
and computing an essential matrix or a homography matrix from the Shi-Tomasi feature points, and recovering from it the pose of the mobile terminal between the current frame and the previous frame, thereby obtaining the motion information of the mobile terminal.
Optionally, the system initialization includes:
judging whether the system initialization has succeeded;
and if not, performing VIO system initialization, based on the pose of the mobile terminal, once the parallax between two non-adjacent frames of depth and color images is larger than a preset value.
Optionally, the two non-adjacent frames of depth images are registered and aligned with the color images, the two non-adjacent frames being a first preset frame and a second preset frame, the second preset frame coming after the first preset frame;
the depths corresponding to the feature points of the color images are looked up to obtain the three-dimensional coordinates of the second-frame feature points and the corresponding two-dimensional key coordinates in the first frame;
and the absolute-scale pose of the mobile terminal between the two frames is solved by a preset algorithm from the three-dimensional coordinates of the second-frame feature points and the corresponding two-dimensional key coordinates in the first frame.
Optionally, the step of obtaining the pose of the mobile terminal for each frame by tracking and calculation and performing three-dimensional point cloud reconstruction on the color and depth images of the carriage and cargo includes the following steps:
the pose of the mobile terminal at the moment each frame is captured is obtained by tracking and calculation;
and the point cloud corresponding to each key-frame depth image is transformed by that pose and added into the coordinate system of the initialization frame; the carriage and cargo are scanned in this way, and once scanning is finished the three-dimensional point cloud reconstruction of the whole carriage interior is complete.
Optionally, after the pose of the mobile terminal for each frame is obtained by tracking and calculation and the three-dimensional point cloud of the color and depth images of the carriage and cargo is reconstructed, the method includes:
obtaining the gravity direction from the inertial data, and computing a histogram of the projections of the point cloud onto the gravity direction; the histogram is scanned from small to large and from large to small to find the first maxima, giving a preliminary carriage floor plane and a preliminary carriage ceiling plane;
performing plane fitting on the preliminarily selected point cloud with the random sample consensus (RANSAC) method to obtain accurate plane information;
fitting the two side planes of the carriage respectively with the same method;
and after the four planes of the carriage are obtained, obtaining the edge of the carriage door opening; once the point cloud of the door-opening part is completed, the next step of volume measurement can proceed.
Optionally, after the pose of the mobile terminal for each frame is obtained by tracking and calculation and the three-dimensional point cloud of the color and depth images of the carriage and cargo is reconstructed, the method includes:
judging whether the three-dimensional point cloud reconstruction is complete; if not, rescanning, and re-executing the step in which the mobile terminal acquires color images, depth images and inertial data of the carriage and cargo from a preset distance from the cargo.
Optionally, the step of obtaining the volumes of the carriage and cargo point clouds from their three-dimensional point cloud, and hence the volume rate of the carriage, includes:
triangulating the three-dimensional point cloud with the vtkTriangleFilter class of the VTK library;
and obtaining the volumes of the carriage and cargo point clouds with the GetVolume method provided by vtkMassProperties, thereby obtaining the volume rate of the carriage.
The embodiment of the invention provides a carriage volume rate measuring method based on visual inertia SLAM. A mobile terminal acquires color images, depth images and inertial data of a carriage and its cargo from a preset distance; the system is initialized; the pose of the mobile terminal at the moment each frame is captured is obtained by tracking and calculation, and three-dimensional point cloud reconstruction is performed on the color and depth images to obtain a three-dimensional point cloud of the carriage and cargo; and the volumes of the carriage and cargo point clouds are obtained from the three-dimensional point cloud, from which the volume rate of the carriage is obtained. Compared with the prior art, the embodiment of the invention measures the point-cloud volumes of the carriage and cargo quickly, simply, efficiently, accurately and at long range, and thereby obtains the volume rate of the carriage.
Drawings
FIG. 1 is a flow chart of a method for measuring a car volume fraction based on visual inertia SLAM according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for measuring a car volume fraction based on visual inertia SLAM according to another embodiment of the present invention;
FIG. 3 is a flow chart of a method for measuring a car volume fraction based on visual inertia SLAM according to still another embodiment of the present invention;
Fig. 4 is a flowchart of a method for measuring a car volume rate based on the visual inertia SLAM according to still another embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
Referring to fig. 1, an embodiment of the present invention provides a method for measuring a carriage volume rate based on visual inertia SLAM, comprising:
S100: the mobile terminal acquires color images, depth images and inertial data of a carriage and goods from a preset distance of the goods.
Specifically, SLAM is an abbreviation of Simultaneous Localization and Mapping. The mobile terminal is equipped with a depth camera and an inertial sensor. The depth camera is also known as an RGB-D camera; the inertial sensor is an inertial measurement unit (IMU). Starting at the preset distance from the cargo, the mobile terminal moves toward the cargo and scans the surfaces of the carriage and cargo to collect their color images, depth images and inertial data. The preset distance may be 1 to 2 m, for example 1 m, 1.1 m, 1.2 m, 1.4 m, 1.6 m, 1.8 m or 2 m. The depth image stores the depth distance from the carriage or cargo to the mobile terminal as pixel values in the image. The inertial data are the accelerations along the x, y and z axes and the angular velocities about the x, y and z axes measured by the IMU.
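Given the depth-image convention above (one depth value stored per pixel), back-projection to camera-frame 3D points is a direct application of the pinhole model. The sketch below is a minimal illustration; the intrinsics fx, fy, cx, cy are made-up values, since the patent does not specify a camera:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (one depth value per pixel, in metres)
    into camera-frame 3D points under a pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx                 # pixel column -> lateral offset
    y = (v - cy) * z / fy                 # pixel row -> vertical offset
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy 4x4 depth image of a flat surface 2 m away, principal point at (2, 2).
pts = depth_to_points(np.full((4, 4), 2.0), fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```

Running this over every depth frame, after registration with the color frame, yields the per-frame point clouds that the reconstruction step below fuses together.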
S110: vio the system performs the initialization process.
Specifically, the initialization of the VIO system includes:
judging whether the VIO system initialization has succeeded.
If yes, the VIO system has completed initialization. VIO is an abbreviation of Visual-Inertial Odometry. The VIO system is initialized to obtain the parameters that purely visual SLAM cannot recover, such as scale information, gravity direction information and velocity information.
If not, VIO system initialization is performed, based on the pose of the mobile terminal, once the parallax between two non-adjacent frames of depth and color images exceeds a preset value; the preset value is a parallax threshold between those two frames. The two non-adjacent frames of depth images are registered and aligned with the color images; the two frames are a first preset frame and a second preset frame, the second preset frame coming after the first.
The depths corresponding to the feature points of the color images are then looked up to obtain the three-dimensional coordinates of the second-frame feature points and the corresponding two-dimensional key coordinates in the first frame.
Finally, the absolute-scale pose of the mobile terminal between the two frames is solved from these 3D-2D correspondences by a preset algorithm. The preset algorithm is the PnP method, i.e. the absolute-scale pose of the mobile terminal between the two frames is solved with PnP. PnP is an abbreviation of Perspective-n-Point, a method for solving 3D-to-2D point-pair motion.
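For orientation, PnP inverts the pinhole projection model: given 3D points and their 2D images, it recovers the pose (R, t). The numpy sketch below shows only the forward model being inverted, with invented intrinsics, pose and points; a real system would call an existing PnP solver (e.g. OpenCV's solvePnP) rather than reimplement one:

```python
import numpy as np

def project(points_3d, R, t, K):
    """Pinhole projection of world points under camera pose (R, t): the
    forward model that the PnP step inverts to recover the absolute-scale
    pose between the two initialization frames."""
    cam = points_3d @ R.T + t            # world frame -> camera frame
    uvw = cam @ K.T                      # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]      # perspective divide -> pixel coords

K = np.array([[500.0, 0.0, 320.0],       # illustrative intrinsics
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                            # toy pose: no rotation,
t = np.array([0.0, 0.0, 1.0])            # camera shifted 1 m along z
pts3d = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 1.0]])
uv = project(pts3d, R, t, K)
```

PnP searches for the (R, t) that minimizes the reprojection error between `project(pts3d, R, t, K)` and the observed 2D key coordinates.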
S120: and tracking and calculating to obtain the pose of the mobile terminal and the color images and depth images of the carriage and the goods when each frame of image is shot, and reconstructing the three-dimensional point cloud to obtain the three-dimensional point cloud of the carriage and the goods.
Specifically, the mobile terminal scans the color images and depth images of the carriage and the goods, and the mobile terminal tracks the color images and depth images of the carriage and the goods to calculate the pose of the camera of the mobile terminal when each frame of image is shot. And vio, tracking and calculating the pose of a depth camera of the mobile terminal when each frame of image is shot, and carrying out three-dimensional point cloud reconstruction on color images and depth images of the carriage and the goods so as to obtain the three-dimensional point cloud of the carriage and the goods.
The step of tracking and calculating to obtain the pose of the mobile terminal when each frame of image is shot and reconstructing the color images and depth images of the carriage and goods in a three-dimensional point cloud comprises the step of tracking and calculating to obtain the pose of the mobile terminal when each frame of image is shot. Further, vio systems track and calculate the pose of the depth camera of the mobile terminal when each frame of image is shot. And adding the point cloud transformation corresponding to the key frame depth image into a coordinate system of an initialization frame according to the pose, scanning the carriage and cargoes in the mode, and completing three-dimensional point cloud reconstruction of the environment in the whole carriage after the scanning is finished.
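Per key frame, the fusion described above is a rigid transform of the camera-frame points by the tracked pose, followed by concatenation into the growing map. A minimal numpy sketch with invented poses and points:

```python
import numpy as np

def fuse_keyframe(points_cam, R_wc, t_wc, cloud):
    """Transform one keyframe's camera-frame points into the initialization
    (world) frame using its tracked pose (R_wc, t_wc) and append them to
    the growing map cloud."""
    cloud.append(points_cam @ R_wc.T + t_wc)
    return cloud

cloud = []
# Keyframe 1: camera at the origin; keyframe 2: camera moved 1 m along x.
fuse_keyframe(np.array([[0.0, 0.0, 2.0]]), np.eye(3), np.zeros(3), cloud)
fuse_keyframe(np.array([[0.0, 0.0, 2.0]]), np.eye(3), np.array([1.0, 0.0, 0.0]), cloud)
merged = np.vstack(cloud)
```

The same surface point observed from both keyframes lands at consistent world coordinates, which is what makes the accumulated cloud a usable reconstruction of the carriage interior.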
S130: and obtaining the volumes of the point clouds of the carriage and the cargoes according to the three-dimensional point clouds, and further obtaining the volume rate of the carriage.
Specifically, vio system obtains the volume of the point cloud of the carriage and the goods according to the three-dimensional point cloud of the carriage and the goods, and further obtains the volume rate of the carriage. The step of obtaining the volume of the point cloud of the carriage and the goods according to the three-dimensional point cloud, and further obtaining the volume rate of the carriage comprises triangulating the three-dimensional point cloud by using vtk library VTKTRIANGLEFILTER. The volume of the point cloud of the carriage and the goods is obtained by using the GetVolume method provided by vtkMassProperties, so that the volume rate of the carriage is obtained. The volume ratio of the carriage is the ratio of the volume of the point cloud of the cargo to the volume of the point cloud of the carriage, i.e. the ratio of the volume of the cargo to the volume of the carriage.
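The quantity vtkMassProperties reports for a closed triangle mesh can be understood via the divergence theorem: the volume is the sum of signed tetrahedron volumes formed by each triangle and the origin. The numpy sketch below computes this directly so the arithmetic is visible (the mesh is a unit tetrahedron, not a carriage scan, and this is an illustration of the principle rather than the VTK implementation):

```python
import numpy as np

def mesh_volume(vertices, triangles):
    """Volume of a closed, consistently oriented triangle mesh via the
    divergence theorem: sum of signed tetrahedron volumes against the origin."""
    v = vertices[triangles]                               # (n_tri, 3, 3)
    signed = np.einsum('ij,ij->i', v[:, 0], np.cross(v[:, 1], v[:, 2]))
    return abs(signed.sum()) / 6.0

# Unit tetrahedron: origin plus the three unit axis points; volume is 1/6.
verts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
tris = np.array([[1, 2, 3], [0, 3, 2], [0, 1, 3], [0, 2, 1]])
vol = mesh_volume(verts, tris)
# The carriage volume rate would then be cargo_volume / carriage_volume.
```

With meshes built from the carriage point cloud and the cargo point cloud, the ratio of the two volumes gives the volume rate defined above.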
Referring to fig. 2, in an embodiment of the present invention, after the step in which the mobile terminal acquires color images, depth images and inertial data of the carriage and cargo from a preset distance from the cargo, the method includes:
S200: The mobile terminal collects color images and depth images to form a video stream.
Specifically, the mobile terminal scans the carriage and cargo with a depth camera (RGB-D camera) to acquire their color and depth images, and stitches the collected images together to form a video stream.
S210: and tracking the images in the video stream, and calculating the motion information of the mobile terminal according to the information of the adjacent images.
Specifically, the mobile terminal tracks the images in the video stream and calculates its own motion information from the information of adjacent images. The motion information of the mobile terminal includes its moving speed, angular velocity, acceleration and so on. Tracking the images in the video stream and calculating the motion information of the mobile terminal from adjacent images includes:
tracking Shi-Tomasi feature points on the color images with a KLT sparse optical flow algorithm; that is, the mobile terminal tracks the Shi-Tomasi feature points of the carriage and cargo in the acquired color images with this algorithm.
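A single-point, single-level Lucas-Kanade step can be sketched in a few lines of numpy. This is a toy version of the KLT tracking named above (no pyramid, no iteration); in practice one would track Shi-Tomasi corners from cv2.goodFeaturesToTrack with cv2.calcOpticalFlowPyrLK. The synthetic frame pair is invented for illustration:

```python
import numpy as np

def lk_step(prev, curr, pt, win=7):
    """One Lucas-Kanade step for a single feature point: solve the local
    least-squares system Ix*dx + Iy*dy = -It over a small window."""
    Iy, Ix = np.gradient(prev)                   # spatial gradients (axis 0 = rows)
    It = curr - prev                             # temporal derivative
    r, c = pt
    h = win // 2
    sl = (slice(r - h, r + h + 1), slice(c - h, c + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d                                     # (dx, dy) displacement

# Synthetic pair: a bilinear intensity ramp shifted one pixel to the right.
y, x = np.mgrid[0:32, 0:32].astype(float)
frame1 = x * y
frame2 = (x - 1.0) * y
dx, dy = lk_step(frame1, frame2, (16, 16))
```

For this smooth synthetic motion the recovered displacement is (1, 0): the tracked point moved one pixel in x, which is exactly the per-feature output the pose-recovery step consumes.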
An essential matrix or a homography matrix is then computed from the tracked Shi-Tomasi feature points, and the pose of the mobile terminal between the current frame and the previous frame is recovered from it, thereby obtaining the motion information of the mobile terminal.
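The homography branch can be illustrated with the Direct Linear Transform (DLT): stack two linear constraints per correspondence and take the SVD null vector. This numpy sketch omits the coordinate normalization and RANSAC that production estimators (e.g. cv2.findHomography) add, and the point data are synthetic:

```python
import numpy as np

def fit_homography(src, dst):
    """DLT estimate of the 3x3 homography H mapping src -> dst from >= 4
    point correspondences (the planar-scene case; the essential-matrix
    branch is analogous but enforces the epipolar constraint instead)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    H = vt[-1].reshape(3, 3)                 # null vector = flattened H
    return H / H[2, 2]                       # fix the arbitrary scale

# Ground truth: rotate 90 degrees and translate by (2, 3), a special homography.
H_true = np.array([[0.0, -1.0, 2.0], [1.0, 0.0, 3.0], [0.0, 0.0, 1.0]])
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [2, 1]], float)
dst_h = np.c_[src, np.ones(len(src))] @ H_true.T
dst = dst_h[:, :2] / dst_h[:, 2:3]
H_est = fit_homography(src, dst)
```

Decomposing the estimated matrix (or the essential matrix, for general scenes) then yields the relative rotation and translation between the two frames.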
Referring to fig. 3, in an embodiment of the present invention, the step of tracking and calculating the pose of the mobile terminal for each frame and performing three-dimensional point cloud reconstruction on the color and depth images of the carriage and cargo includes:
S300: The gravity direction is obtained from the inertial data, and a histogram of the projections of the point cloud onto the gravity direction is computed; the histogram is scanned from small to large and from large to small to find the first maxima, giving a preliminary carriage floor plane and a preliminary carriage ceiling plane.
Specifically, the VIO system obtains the gravity direction from the inertial sensor's data. It computes the histogram of the point cloud's projections onto the gravity direction and finds the maxima scanning from small to large and from large to small, obtaining the preliminary carriage floor and ceiling planes.
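The preliminary floor/ceiling selection can be sketched as: project every point onto the gravity axis, histogram the heights, and take the strong bin nearest each end. The bin width and peak threshold below are invented parameters, and the box-shaped point cloud is synthetic:

```python
import numpy as np

def floor_ceiling_heights(points, gravity_dir, bin_width=0.05, frac=0.5):
    """Histogram the projections of the cloud onto the gravity direction and
    return the centres of the first and last strong bins: preliminary
    floor and ceiling heights."""
    g = gravity_dir / np.linalg.norm(gravity_dir)
    h = points @ g                                        # signed height per point
    counts, edges = np.histogram(h, bins=np.arange(h.min(), h.max() + bin_width, bin_width))
    strong = np.flatnonzero(counts >= frac * counts.max())
    centers = (edges[:-1] + edges[1:]) / 2.0
    return centers[strong[0]], centers[strong[-1]]

# Synthetic carriage: dense floor at z = 0, dense ceiling at z = 2, sparse walls.
rng = np.random.default_rng(0)
floor = np.c_[rng.uniform(0, 3, (500, 2)), np.zeros(500)]
ceiling = np.c_[rng.uniform(0, 3, (500, 2)), np.full(500, 2.0)]
walls = np.c_[rng.uniform(0, 3, (50, 2)), rng.uniform(0, 2, 50)]
pts = np.vstack([floor, ceiling, walls])
z_floor, z_ceiling = floor_ceiling_heights(pts, np.array([0.0, 0.0, 1.0]))
```

Points near the two returned heights are then handed to the RANSAC refinement in the next step.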
S310: and carrying out plane fitting according to the point cloud obtained through preliminary selection and the random consistency sampling method to obtain accurate plane information.
Specifically, the vio system performs plane fitting according to the point cloud of the initially selected carriage bottom plane and the point cloud of the initially selected top plane according to the random consistency sampling method to obtain accurate plane information, and further obtains accurate carriage bottom plane and accurate carriage top plane.
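A minimal RANSAC plane fit in the spirit of the step above: sample three points, count inliers within a distance tolerance, keep the best hypothesis, then refine on its inliers by least squares. The tolerance, iteration count and the noisy synthetic plane are all invented for this sketch:

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.02, seed=0):
    """RANSAC plane fit followed by a least-squares refinement on the
    winning inlier set; returns (unit normal, point on plane)."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                               # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        inliers = np.abs((points - p0) @ n) < tol
        if best is None or inliers.sum() > best.sum():
            best = inliers
    pin = points[best]
    centroid = pin.mean(axis=0)
    _, _, vt = np.linalg.svd(pin - centroid)       # smallest singular vector
    return vt[-1], centroid                        # = refined plane normal

# Synthetic floor: noisy plane z = 1 plus uniform outliers.
rng = np.random.default_rng(1)
plane = np.c_[rng.uniform(-1, 1, (300, 2)), 1.0 + rng.normal(0, 0.005, 300)]
outliers = rng.uniform(-1, 1, (30, 3))
normal, point = ransac_plane(np.vstack([plane, outliers]))
```

The outliers (stray cargo points, wall points leaking into the slab) are rejected by the consensus count, so the refined normal and centroid describe only the dominant plane.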
S320: the two planes of the car side are fitted separately using the same method.
Specifically, the method of obtaining the accurate bottom plane and top plane of the car with reference to steps S300-310 fits the two planes of the side of the car, respectively, and will not be repeated here.
S330: and obtaining the edge part of the door opening of the carriage after obtaining four planes of the carriage, and preparing for the next volume measurement after completing the point cloud of the door opening part of the carriage.
Specifically, after four planes of the carriage are acquired by the vio system, the bottom plane and the top plane of the carriage are spliced with the planes on two sides of the carriage to obtain the edge part of the carriage gate, and after the point cloud of the carriage gate part is completed, preparation is made for the next volume measurement.
Referring to fig. 4, in an embodiment of the present invention, after the step of tracking and calculating the pose of the mobile terminal for each frame and reconstructing the color and depth images of the carriage and cargo, the method includes:
S400: It is judged whether the three-dimensional point cloud reconstruction is complete; if not, rescanning is performed and the step in which the mobile terminal acquires color images, depth images and inertial data of the carriage and cargo from a preset distance from the cargo is executed again.
Specifically, it is judged whether the three-dimensional point cloud reconstruction is complete. If not, the scene is rescanned and the acquisition step is executed again, i.e. the method returns to step S100, which is not repeated here.
The technical scheme of the invention provides a carriage volume rate measuring method based on visual inertia SLAM. A mobile terminal acquires color images, depth images and inertial data of a carriage and its cargo from a preset distance; the system is initialized; the pose of the mobile terminal at the moment each frame is captured is obtained by tracking and calculation, and three-dimensional point cloud reconstruction is performed on the color and depth images to obtain a three-dimensional point cloud of the carriage and cargo; and the volumes of the carriage and cargo point clouds are obtained from the three-dimensional point cloud, from which the volume rate of the carriage is obtained. Compared with the prior art, the embodiment of the invention measures the point-cloud volumes of the carriage and cargo quickly, simply, efficiently, accurately and at long range, and thereby obtains the volume rate of the carriage.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (5)

1. A carriage volume rate measuring method based on visual inertia SLAM, comprising:
a mobile terminal acquires color images, depth images and inertial data of a carriage and its cargo from a preset distance from the cargo;
a VIO system performs initialization;
the pose of the mobile terminal at the moment each frame is captured is obtained by tracking and calculation, and three-dimensional point cloud reconstruction is performed on the color and depth images of the carriage and cargo to obtain their three-dimensional point cloud;
the volumes of the carriage and cargo point clouds are obtained from the three-dimensional point cloud, and the volume rate of the carriage is obtained from them;
the step in which the mobile terminal acquires color images, depth images and inertial data of the carriage and cargo from a preset distance from the cargo comprises:
the mobile terminal acquires color images and depth images to form a video stream;
the images in the video stream are tracked, and the motion information of the mobile terminal is calculated from the information of adjacent images;
tracking the images in the video stream and calculating the motion information of the mobile terminal from the information of adjacent images comprises:
tracking Shi-Tomasi feature points on the color images with a KLT sparse optical flow algorithm;
computing an essential matrix or a homography matrix from the Shi-Tomasi feature points, and recovering from it the pose of the mobile terminal between the current frame and the previous frame, thereby obtaining the motion information of the mobile terminal;
the initialization performed by the VIO system comprises:
judging whether the VIO system initialization has succeeded;
if not, performing VIO system initialization, based on the pose of the mobile terminal, once the parallax between two non-adjacent frames of depth and color images is larger than a preset value;
the step of obtaining the pose of the mobile terminal for each frame by tracking and calculation and performing three-dimensional point cloud reconstruction on the color and depth images of the carriage and cargo comprises:
tracking and calculating the pose of the mobile terminal at the moment each frame is captured;
transforming the point cloud corresponding to each key-frame depth image by that pose and adding it into the coordinate system of the initialization frame, scanning the carriage and cargo in this way, and completing the three-dimensional point cloud reconstruction of the whole carriage interior once scanning is finished;
after the pose of the mobile terminal for each frame is obtained by tracking and calculation and the three-dimensional point cloud of the color and depth images of the carriage and cargo is reconstructed, the method comprises:
obtaining the gravity direction from the inertial data, and computing a histogram of the projections of the point cloud onto the gravity direction; scanning the histogram from small to large and from large to small to find the first maxima, giving a preliminary carriage floor plane and a preliminary carriage ceiling plane;
performing plane fitting on the preliminarily selected point cloud with the random sample consensus (RANSAC) method to obtain accurate plane information;
fitting the two side planes of the carriage respectively with the same method;
and after the four planes of the carriage are obtained, obtaining the edge of the carriage door opening, and preparing for the next step of volume measurement once the point cloud of the door-opening part is completed.
2. The carriage volume rate measuring method based on visual inertia SLAM according to claim 1, wherein the depth image stores the depth distance from the carriage or cargo to the mobile terminal as pixel values in the image, and the inertial data are the accelerations along the x, y and z axes and the angular velocities about the x, y and z axes.
3. The method for measuring the carriage volume rate based on visual-inertial SLAM according to claim 1, wherein:
registering and aligning two non-adjacent depth-image frames with their color images, the two frames being a first preset frame and a second preset frame, the second preset frame following the first preset frame;
looking up the depth corresponding to the color-image feature points to obtain the three-dimensional coordinates of the feature points in the second preset frame and the corresponding two-dimensional coordinates in the first preset frame;
and solving, by a preset algorithm, the absolute-scale pose of the mobile terminal between the two frames from the three-dimensional coordinates of the feature points in the second preset frame and the corresponding two-dimensional coordinates in the first preset frame.
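Claim 3 does not name the "preset algorithm"; recovering an absolute-scale pose from 3D-2D correspondences is the classic perspective-n-point (PnP) problem. A minimal direct-linear-transform (DLT) sketch, assuming normalized image coordinates (intrinsics already removed) and at least six correspondences:

```python
import numpy as np

def pnp_dlt(pts3d, pts2d):
    """DLT PnP: recover R, t with [u, v, 1]^T ~ [R | t] X_h from >= 6
    3D-2D correspondences in normalized image coordinates."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        Xh = np.array([X, Y, Z, 1.0])
        A.append([*Xh, 0, 0, 0, 0, *(-u * Xh)])   # u * (p3.Xh) = p1.Xh
        A.append([0, 0, 0, 0, *Xh, *(-v * Xh)])   # v * (p3.Xh) = p2.Xh
    _, _, vt = np.linalg.svd(np.asarray(A))
    P = vt[-1].reshape(3, 4)                      # null vector -> projection matrix
    if np.linalg.det(P[:, :3]) < 0:               # fix the projective sign ambiguity
        P = -P
    U, S, Vt = np.linalg.svd(P[:, :3])            # project onto the rotation group
    R = U @ Vt
    t = P[:, 3] / S.mean()                        # absolute scale from the 3D points
    return R, t
```

In practice a robust solver (e.g. RANSAC-wrapped PnP) would be used on noisy feature matches; the sketch above handles only the clean geometric core.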
4. The method for measuring the carriage volume rate based on visual-inertial SLAM according to claim 1, wherein the step of tracking and calculating the pose of the mobile terminal at the time each frame is captured, and performing three-dimensional point cloud reconstruction from the color images and depth images of the carriage and the cargo, comprises:
judging whether the three-dimensional point cloud reconstruction is complete; if not, rescanning by re-executing the step in which the mobile terminal acquires color images, depth images, and inertial data of the carriage and the cargo from the preset distance.
5. The method for measuring the carriage volume rate based on visual-inertial SLAM according to claim 1, wherein the step of obtaining the point-cloud volumes of the carriage and the cargo from the three-dimensional point cloud, and thereby obtaining the carriage volume rate, comprises:
triangulating the three-dimensional point cloud using the vtkTriangleFilter class of the VTK library;
and obtaining the point-cloud volumes of the carriage and the cargo using the GetVolume method provided by vtkMassProperties, thereby obtaining the carriage volume rate.
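vtkMassProperties computes the volume of a closed triangulated surface by summing signed tetrahedra (an application of the divergence theorem). A numpy sketch of the same computation, shown only to illustrate what the claimed GetVolume step produces for a watertight, consistently oriented mesh:

```python
import numpy as np

def mesh_volume(vertices, triangles):
    """Volume of a closed, consistently oriented triangle mesh as the sum
    of signed tetrahedra spanned by each face and the origin."""
    v0, v1, v2 = (vertices[triangles[:, i]] for i in range(3))
    # signed volume of each tetrahedron is v0 . (v1 x v2) / 6
    return abs(np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum()) / 6.0
```

The carriage volume rate then follows as the cargo point-cloud volume divided by the carriage point-cloud volume.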
CN202110370458.6A 2021-04-07 2021-04-07 Carriage volume rate measuring method based on visual inertia SLAM Active CN113295089B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110370458.6A CN113295089B (en) 2021-04-07 2021-04-07 Carriage volume rate measuring method based on visual inertia SLAM

Publications (2)

Publication Number Publication Date
CN113295089A CN113295089A (en) 2021-08-24
CN113295089B true CN113295089B (en) 2024-04-26

Family

ID=77319607

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110370458.6A Active CN113295089B (en) 2021-04-07 2021-04-07 Carriage volume rate measuring method based on visual inertia SLAM

Country Status (1)

Country Link
CN (1) CN113295089B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115375754A (en) * 2022-10-21 2022-11-22 中信梧桐港供应链管理有限公司 Storage yard volume detection method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107504917A (en) * 2017-08-17 2017-12-22 深圳市异方科技有限公司 A kind of three-dimensional dimension measuring method and device
CN108830925A (en) * 2018-05-08 2018-11-16 中德(珠海)人工智能研究院有限公司 A kind of three-dimensional digital modeling method based on ball curtain video flowing
CN109764880A (en) * 2019-02-19 2019-05-17 中国科学院自动化研究所 The vision inertia ranging method and system of close coupling vehicle wheel encoder data
CN110310325A (en) * 2019-06-28 2019-10-08 Oppo广东移动通信有限公司 A kind of virtual measurement method, electronic equipment and computer readable storage medium
CN110617813A (en) * 2019-09-26 2019-12-27 中国科学院电子学研究所 Monocular visual information and IMU (inertial measurement Unit) information fused scale estimation system and method
CN111028294A (en) * 2019-10-20 2020-04-17 深圳奥比中光科技有限公司 Multi-distance calibration method and system based on depth camera
CN112270702A (en) * 2020-11-12 2021-01-26 Oppo广东移动通信有限公司 Volume measurement method and device, computer readable medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant