CN112348941A - Real-time fusion method and device based on point cloud and image data

Info

Publication number
CN112348941A
CN112348941A (application CN202010980058.2A)
Authority
CN
China
Prior art keywords
point cloud
image data
data
image
frame
Prior art date
Legal status
Pending
Application number
CN202010980058.2A
Other languages
Chinese (zh)
Inventor
陈方圆
余崇圣
董子乐
李梦
Current Assignee
Chongqing Zhizhi Technology Co., Ltd.
Original Assignee
Chongqing Zhizhi Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Chongqing Zhizhi Technology Co., Ltd.
Priority to CN202010980058.2A
Publication of CN112348941A
Current legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The invention discloses a real-time fusion method and device based on point cloud and image data. Point cloud data and image data of a calibration plate are acquired with a point cloud device and an image device, a fitting algorithm fits the point cloud coordinates and pixel coordinates corresponding one-to-one to the intersection points of the calibration plate, and a least squares optimization under the constraint that the rotation matrix is an orthogonal matrix finally yields the relation J between the point cloud data and the image data. Each frame of point cloud and image data is then fused in real time: the relation J accurately colors each frame of point cloud, and every frame is placed into the same real physical space coordinate system. Because the point cloud data and the image data are fused at frame level in the time dimension, no post-processing is required; the traditional fusion processing time for point cloud and image data is greatly shortened, real-time fusion of point cloud and image data is realized, and fusion processing efficiency is improved.

Description

Real-time fusion method and device based on point cloud and image data
Technical Field
The invention relates to the field of live-action three-dimensional reconstruction, in particular to a real-time fusion method and device based on point cloud and image data.
Background
In recent years, with the introduction of the smart city concept and intelligent manufacturing development plans, many fields have actively promoted technical innovation. Central to smart cities and intelligent manufacturing is the demand for digital 3D construction of the surrounding environment, and the basic data sources of digital 3D construction are point clouds and images.
The point cloud and the image can be regarded as two basic and complementary data sources for sensing the surrounding environment: the image captures external color information, while the point cloud captures external distance information. Fusing the two therefore yields rich information about the surroundings, such as distance, size and color.
In application fields of point cloud and image fusion processing, such as surveying and mapping, the fusion of point cloud and image data usually takes place in a post-processing stage: the point cloud data and the image data are stored separately during field work, the images are stitched during post-processing, and the 2D stitched images are then registered with the 3D point cloud data to complete the fusion of the point cloud and image data. This whole workflow suffers from long processing time and low efficiency.
As the demand for digital 3D construction of the surrounding environment grows, real-time fusion processing of point clouds and images is the trend. The technology is widely applicable to indoor and outdoor 3D modeling, robot SLAM, smart city 3D construction, forestry surveys, power line inspection and other fields, and enables rapid extraction of three-dimensional information from surrounding scenes.
Disclosure of Invention
The invention provides a real-time fusion method and device based on point cloud and image data, aiming at the problems of long processing time and low efficiency in extracting three-dimensional information of surrounding scenes.
In order to solve the technical problems, the technical scheme of the invention is as follows:
a real-time fusion method based on point cloud and image data comprises the following steps:
S1, fixing two devices for collecting point cloud and image data by using a fixing device, wherein the relative positions of the two devices are kept still;
S2, collecting point cloud and image data of the calibration plate;
S3, calculating the point cloud coordinates of the intersection points by using a fitting method for the point cloud data of the calibration plate, and calculating the pixel coordinates of the intersection points by using a measurement fitting method for the image data of the calibration plate;
S4, obtaining the relation J between the point cloud and the image data;
S5, performing real-time fusion of point cloud data and image data, and acquiring a frame of point cloud and image data simultaneously with the point cloud and image devices;
S6, passing the first frame of point cloud data P1 through the relation J between point cloud and image data to obtain the corresponding image coordinates PI1;
S7, performing a condition judgment on the image coordinates PI1 corresponding to the first frame of point cloud data P1: if PI1 lies in the image pixel space, the RGB values corresponding to the point cloud data P1 can be obtained, and this part of the point cloud is recorded as P1C; if PI1 does not lie in the image pixel space, the corresponding point cloud data P1 is given a distinguishing color, and this part of the point cloud is recorded as P1O;
S8, unifying the point cloud data to the same real physical space coordinate system;
and S9, moving the fixing device, continuously collecting the next frame of data, and repeating the steps S5 to S8 until the three-dimensional information extraction of the surrounding scene is completed.
Preferably, in step S1: two devices for collecting point cloud and image data are a laser radar and a camera.
Preferably, in step S2: the calibration plate is a square frame with four square hollowed-out parts inside, and the whole shape is vertically and horizontally symmetrical.
Preferably, in step S3: the intersection points are solved from the calibration plate point cloud by removing noise points from the point cloud data, fitting the point cloud data to a plane with a RANSAC algorithm, projecting the original point cloud onto the plane, fitting the six frame lines of the calibration plate, and solving the intersections of the six frame lines to obtain the intersection point cloud coordinates; the intersection points are obtained from the calibration plate image data by obtaining the pixel coordinates of two points on each frame line with a measurement fitting method, deriving the pixel equation of each frame line, and finally solving the intersections of the six frame lines to obtain the intersection pixel coordinates.
Preferably, the point cloud of the intersection points corresponds to the image coordinates one by one, and the number of the intersection points is more than or equal to 4.
Preferably, step S4 includes the steps of:
In the camera imaging model of the image, the mathematical relationship among the pixel plane coordinate system, the camera coordinate system and the world coordinate system is

Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_u & 0 & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}

where u and v are the pixel coordinates; Z_c is the distance along the optical axis in the camera coordinate system; f_u and f_v are the effective focal lengths in the horizontal and vertical directions; u_0 and v_0 are the coordinates of the principal point; R and T describe the relative position between the point cloud device and the image device, R being the rotation matrix and T the translation matrix; and x_w, y_w, z_w are the coordinates in the world coordinate system;
if the coordinate system of the point cloud acquisition device is taken as the world coordinate system, the relation between the point cloud and the image data satisfies
PI = J(R, T, P)
where PI is the pixel coordinate in the image data and P is the point cloud coordinate;
and the relation J between the point cloud data and the image data is solved by the least squares method from the intersection point cloud coordinates and image coordinates of step S3, under the constraint that the rotation matrix R is an orthogonal matrix.
The invention also discloses a real-time fusion device based on point cloud and image data, which implements the above real-time fusion method and comprises a fixing device and two devices for collecting point cloud and image data.
Compared with the prior art, the invention has the beneficial effects that:
A calibration plate is adopted: the point cloud and image devices acquire point cloud data and image data of the calibration plate, a fitting algorithm fits the point cloud coordinates and pixel coordinates corresponding one-to-one to the intersection points of the calibration plate, and a least squares optimization under the constraint that the rotation matrix is an orthogonal matrix finally yields the relation J between the point cloud data and the image data, completing an accurate calibration between the point cloud data and the image data with a calibration precision within 3 pixels;
in the process of real-time fusion of the point cloud and the image, each frame of point cloud and image data is fused as it is acquired, and the relation J between the point cloud data and the image data accurately colors each frame of point cloud, giving good real-time performance;
each frame of point cloud is unified into the same real physical space coordinate system, so the whole three-dimensional space data is available as soon as data acquisition is completed, with no post-processing coordinate conversion of the original point cloud data required.
Drawings
FIG. 1 is a flow chart of a method for real-time fusion based on point cloud and image data according to the present invention;
FIG. 2 is a structural diagram of a real-time fusion device based on point cloud and image data according to the present invention;
FIG. 3 is a physical diagram of a calibration plate in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram of point cloud data of a calibration plate according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating image data of a calibration plate according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating the effect of the point cloud and image data fusion in real time according to an embodiment of the present invention;
fig. 7 is a partially enlarged view of fig. 6.
Detailed Description
The following further describes embodiments of the present invention with reference to the drawings. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1
Referring to fig. 1, a real-time fusion method based on point cloud and image data includes the following steps:
S1, fixing two devices for collecting point cloud and image data by using a fixing device 1, wherein the relative positions of the two devices are kept still, as shown in fig. 2;
The two devices for collecting point cloud and image data are a laser radar 3 and a camera 2. The invention is not limited to these two devices; any devices capable of respectively collecting point cloud and image data, such as a line laser or a panoramic camera, may be used, and the fixing device can be adjusted according to the actual installation requirements.
S2, collecting point cloud and image data of the calibration plate;
The calibration plate 4 is a square frame with four square hollowed-out parts inside, and the whole shape is symmetrical up-down and left-right, as shown in fig. 3. The size of the calibration plate 4 is determined by the image and point cloud acquisition devices, the basic requirement being that distinct calibration plate image and point cloud data can be acquired; the plate thickness d satisfies 1 mm < d < 10 mm. During collection, if the calibration plate can be fitted well from a single frame of point cloud data, only one frame needs to be collected; if not, multiple frames of point cloud data are acquired and the other frames are converted into the coordinate system at the time of the first frame, so as to obtain point cloud data from which the calibration plate can be fitted. The point cloud data and the image data of the calibration plate are shown in fig. 4 and fig. 5, respectively.
S3, calculating the point cloud coordinate of the intersection point by using a fitting method for the point cloud data of the calibration plate, and calculating the pixel coordinate of the intersection point by using a measuring and fitting method for the image data of the calibration plate;
the plate intersections are marked as the frame line intersections 6 shown in fig. 3. The method for solving the intersection point of the point cloud of the calibration plate comprises the steps of removing noise points from the point cloud data, fitting the point cloud data to a plane by using an RANSAC algorithm, projecting the original point cloud to the plane, fitting six frame lines 5 of the calibration plate, and solving the intersection point of the six frame lines 5 to obtain an intersection point cloud coordinate; the method for obtaining the intersection point by the calibration plate image data comprises the steps of obtaining two-point pixel coordinates of each frame line by using a measurement fitting method, further obtaining a pixel equation of each frame line, and finally obtaining the intersection point of six frame lines, so that the pixel coordinates of the intersection point can be obtained. The point cloud of the intersection points corresponds to the image coordinates one by one, and the number of the intersection points is more than or equal to 4.
S4, obtaining the relation J between the point cloud and the image data;
In the camera imaging model of an image there are mainly four coordinate systems: the pixel plane coordinate system, the image plane coordinate system, the camera coordinate system and the world coordinate system. The mathematical relationship among the pixel plane coordinate system, the camera coordinate system and the world coordinate system is

Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_u & 0 & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}

where u and v are the pixel coordinates; Z_c is the distance along the optical axis in the camera coordinate system; f_u and f_v are the effective focal lengths in the horizontal and vertical directions; u_0 and v_0 are the coordinates of the principal point; R and T describe the relative position between the point cloud device and the image device, R being the rotation matrix and T the translation matrix; and x_w, y_w, z_w are the coordinates in the world coordinate system;
if the coordinate system of the point cloud acquisition device is taken as the world coordinate system, the relation between the point cloud and the image data satisfies
PI = J(R, T, P)
where PI is the pixel coordinate in the image data and P is the point cloud coordinate;
the relation J between the point cloud data and the image data is then solved by the least squares method from the intersection point cloud coordinates and image coordinates of step S3, under the constraint that the rotation matrix R is an orthogonal matrix.
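To make the optimization concrete, here is a hedged Python sketch (assuming scipy is available) of solving R and T by least squares over the reprojection error of the intersection correspondences from step S3. Parameterizing R as a rotation vector keeps it orthogonal by construction, which is one way to impose the stated constraint; the intrinsics f_u, f_v, u_0, v_0 are assumed known here, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, pts_world, pix, fu, fv, u0, v0):
    # params = [rx, ry, rz, tx, ty, tz]; the rotation-vector form keeps
    # R orthogonal by construction, enforcing the constraint on R.
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    cam = pts_world @ R.T + t               # point cloud frame -> camera frame
    u = fu * cam[:, 0] / cam[:, 2] + u0     # pinhole projection
    v = fv * cam[:, 1] / cam[:, 2] + v0
    return np.concatenate([u - pix[:, 0], v - pix[:, 1]])

def calibrate(pts_world, pix, fu, fv, u0, v0):
    """Fit (R, T) from the >= 4 intersection correspondences."""
    res = least_squares(reprojection_residuals, np.zeros(6),
                        args=(pts_world, pix, fu, fv, u0, v0))
    return Rotation.from_rotvec(res.x[:3]).as_matrix(), res.x[3:]
```

In this sketch each correspondence contributes two residuals (u and v) against six unknowns, so the four or more intersection points required in step S3 leave the system overdetermined, which is what the least squares solution needs.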
S5, performing real-time fusion of point cloud data and image data, and acquiring a frame of point cloud and image data by using a point cloud and image device at the same time;
S6, the first frame of point cloud data P1 is passed through the relation J between point cloud and image data to obtain the corresponding image coordinates PI1;
S7, a condition judgment is performed on the image coordinates PI1 corresponding to the first frame of point cloud data P1: if PI1 lies in the image pixel space, the RGB values corresponding to the point cloud data P1 can be obtained, realizing the coloring of the point cloud data P1, and this part of the point cloud is recorded as P1C; if PI1 does not lie in the image pixel space, the corresponding point cloud data P1 is given a distinguishing color (e.g. red, black or white), and this part of the point cloud is recorded as P1O;
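A minimal sketch of this per-frame coloring, assuming the relation J is applied as the (R, T, intrinsics) projection above and the image is an HxWx3 RGB array; the red fallback color and the function name are illustrative:

```python
import numpy as np

def colorize_frame(points, image, R, T, fu, fv, u0, v0,
                   fallback=(255, 0, 0)):
    """Color one frame of points by projecting them into the image.

    Points whose projection falls outside the image pixel space (the
    P1O set) keep the distinguishing `fallback` color; the rest (the
    P1C set) take the RGB of the pixel they project to.
    """
    cam = points @ R.T + T                    # point cloud -> camera frame
    in_front = cam[:, 2] > 0                  # behind the camera cannot project
    z = np.where(in_front, cam[:, 2], 1.0)    # avoid division by zero
    u = (fu * cam[:, 0] / z + u0).astype(int)
    v = (fv * cam[:, 1] / z + v0).astype(int)
    h, w = image.shape[:2]
    inside = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.tile(np.array(fallback, dtype=np.uint8), (len(points), 1))
    colors[inside] = image[v[inside], u[inside]]  # sample RGB at (row, col)
    return colors, inside
```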
And S8, unifying the point cloud data into the same real physical space coordinate system. The pose of the point cloud device at the moment the first frame of point cloud data is collected can serve as the real physical space coordinate system, so the first frame requires no rotation or translation; each subsequent frame of data is unified into this coordinate system according to the position and attitude information of the fixing device. That position and attitude information can be supplied by pose sensors such as GNSS with integrated inertial navigation, IMU plus vision, or mechanical pulse signals.
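The unification step then reduces to one rigid transform per frame. A short sketch under the same assumptions, where (R_pose, t_pose) is the fixture pose reported for that frame by one of the sensors above (identity rotation and zero translation for the first frame):

```python
import numpy as np

def to_world(points, R_pose, t_pose):
    """Move one frame's points into the shared real physical space frame."""
    return points @ R_pose.T + t_pose

def accumulate(frames):
    """Stack a stream of (points, R_pose, t_pose) frames into one cloud.

    After acquisition ends the result is already the whole 3D space
    data, with no post-processing coordinate conversion required.
    """
    return np.vstack([to_world(p, R, t) for p, R, t in frames])
```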
S9, moving the fixing device, continuously collecting the next frame of data, and repeating the steps S5 to S8 until the three-dimensional information extraction of the surrounding scene is completed;
the real-time fusion effect diagram of the point cloud and the image data is shown in fig. 6 and 7.
Example 2
A real-time fusion device based on point cloud and image data implements the real-time fusion method of embodiment 1 and comprises a fixing device and two devices for collecting point cloud and image data.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the described embodiments. It will be apparent to those skilled in the art that various changes, modifications, substitutions and alterations can be made to these embodiments without departing from the principle and spirit of the invention, and such changes still fall within the scope of protection of the invention.

Claims (7)

1. A real-time fusion method based on point cloud and image data is characterized by comprising the following steps:
S1, fixing two devices for collecting point cloud and image data by using a fixing device, wherein the relative positions of the two devices are kept still;
S2, collecting point cloud and image data of the calibration plate;
S3, calculating the point cloud coordinates of the intersection points by using a fitting method for the point cloud data of the calibration plate, and calculating the pixel coordinates of the intersection points by using a measurement fitting method for the image data of the calibration plate;
S4, obtaining the relation J between the point cloud and the image data;
S5, performing real-time fusion of point cloud data and image data, and acquiring a frame of point cloud and image data simultaneously with the point cloud and image devices;
S6, passing the first frame of point cloud data P1 through the relation J between point cloud and image data to obtain the corresponding image coordinates PI1;
S7, performing a condition judgment on the image coordinates PI1 corresponding to the first frame of point cloud data P1: if PI1 lies in the image pixel space, the RGB values corresponding to the point cloud data P1 can be obtained, and this part of the point cloud is recorded as P1C; if PI1 does not lie in the image pixel space, the corresponding point cloud data P1 is given a distinguishing color, and this part of the point cloud is recorded as P1O;
S8, unifying the point cloud data to the same real physical space coordinate system;
and S9, moving the fixing device, continuously collecting the next frame of data, and repeating the steps S5 to S8 until the three-dimensional information extraction of the surrounding scene is completed.
2. The method for fusing point cloud and image data in real time according to claim 1, wherein in step S1: two devices for collecting point cloud and image data are a laser radar and a camera.
3. The method for fusing point cloud and image data in real time according to claim 1, wherein in step S2: the calibration plate is a square frame with four square hollowed-out parts inside, and the whole shape is vertically and horizontally symmetrical.
4. The method for fusing point cloud and image data in real time according to claim 1, wherein in step S3: the intersection points are solved from the calibration plate point cloud by removing noise points from the point cloud data, fitting the point cloud data to a plane with a RANSAC algorithm, projecting the original point cloud onto the plane, fitting the six frame lines of the calibration plate, and solving the intersections of the six frame lines to obtain the intersection point cloud coordinates; and the intersection points are obtained from the calibration plate image data by obtaining the pixel coordinates of two points on each frame line with a measurement fitting method, deriving the pixel equation of each frame line, and finally solving the intersections of the six frame lines to obtain the intersection pixel coordinates.
5. The method as claimed in claim 4, wherein the point cloud of intersection points corresponds to the image coordinates one by one, and the number of intersection points is greater than or equal to 4.
6. The method of claim 1, wherein the step S4 comprises the following steps:
in the camera imaging model of the image, the mathematical relationship among the pixel plane coordinate system, the camera coordinate system and the world coordinate system is

Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_u & 0 & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}

where u and v are the pixel coordinates; Z_c is the distance along the optical axis in the camera coordinate system; f_u and f_v are the effective focal lengths in the horizontal and vertical directions; u_0 and v_0 are the coordinates of the principal point; R and T describe the relative position between the point cloud device and the image device, R being the rotation matrix and T the translation matrix; and x_w, y_w, z_w are the coordinates in the world coordinate system;
if the coordinate system of the point cloud acquisition device is taken as the world coordinate system, the relation between the point cloud and the image data satisfies
PI = J(R, T, P)
where PI is the pixel coordinate in the image data and P is the point cloud coordinate;
and the relation J between the point cloud data and the image data is solved by the least squares method from the intersection point cloud coordinates and image coordinates of step S3, under the constraint that the rotation matrix R is an orthogonal matrix.
7. A real-time fusion device based on point cloud and image data, implementing the real-time fusion method of any one of claims 1 to 6, wherein: the device comprises a fixing device and two devices for collecting point cloud and image data.
CN202010980058.2A 2020-09-17 2020-09-17 Real-time fusion method and device based on point cloud and image data Pending CN112348941A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010980058.2A CN112348941A (en) 2020-09-17 2020-09-17 Real-time fusion method and device based on point cloud and image data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010980058.2A CN112348941A (en) 2020-09-17 2020-09-17 Real-time fusion method and device based on point cloud and image data

Publications (1)

Publication Number Publication Date
CN112348941A 2021-02-09

Family

ID=74358169

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010980058.2A Pending CN112348941A (en) 2020-09-17 2020-09-17 Real-time fusion method and device based on point cloud and image data

Country Status (1)

Country Link
CN (1) CN112348941A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106097348A (en) * 2016-06-13 2016-11-09 大连理工大学 A kind of three-dimensional laser point cloud and the fusion method of two dimensional image
CN109352646A (en) * 2018-09-30 2019-02-19 泰安康平纳机械有限公司 Yarn automatic loading and unloading method and system
CN111045000A (en) * 2018-10-11 2020-04-21 阿里巴巴集团控股有限公司 Monitoring system and method
CN110349221A (en) * 2019-07-16 2019-10-18 北京航空航天大学 A kind of three-dimensional laser radar merges scaling method with binocular visible light sensor
CN110764070A (en) * 2019-10-29 2020-02-07 北科天绘(合肥)激光技术有限公司 Data real-time fusion processing method and device based on three-dimensional data and image data
CN111369630A (en) * 2020-02-27 2020-07-03 河海大学常州校区 Method for calibrating multi-line laser radar and camera

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU QIANQIAN: "Three-dimensional scene reconstruction based on fusion of laser point cloud and image information", China Master's Theses Full-text Database, Information Science and Technology Series *
SHI QING et al.: "Design and Control of Miniature Bionic Robotic Rats", 31 December 2019, Beijing Institute of Technology Press *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113358134A (en) * 2021-04-20 2021-09-07 重庆知至科技有限公司 Visual inertial odometer system
CN115359333A (en) * 2022-10-24 2022-11-18 山东矩阵软件工程股份有限公司 Multi-dimensional information fusion method based on multiple types of data acquisition equipment

Similar Documents

Publication Publication Date Title
CN109615652B (en) Depth information acquisition method and device
CN112894832B (en) Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium
CN104330074B (en) Intelligent surveying and mapping platform and realizing method thereof
CN105243637B (en) One kind carrying out full-view image joining method based on three-dimensional laser point cloud
CN113362247B (en) Semantic real scene three-dimensional reconstruction method and system for laser fusion multi-view camera
CN111028155B (en) Parallax image splicing method based on multiple pairs of binocular cameras
CN109446973B (en) Vehicle positioning method based on deep neural network image recognition
CN105627992A (en) Method for quickly surveying and mapping historic building in high-accuracy non-contact mode
CN103868460A (en) Parallax optimization algorithm-based binocular stereo vision automatic measurement method
CN111275750A (en) Indoor space panoramic image generation method based on multi-sensor fusion
CN105469389B (en) A kind of grid ball target for vision sensor calibration and corresponding scaling method
CN103971404A (en) 3D real-scene copying device having high cost performance
CN109712232B (en) Object surface contour three-dimensional imaging method based on light field
CN102509304A (en) Intelligent optimization-based camera calibration method
CN110879080A (en) High-precision intelligent measuring instrument and measuring method for high-temperature forge piece
CN107767424A (en) Scaling method, multicamera system and the terminal device of multicamera system
CN115937288A (en) Three-dimensional scene model construction method for transformer substation
CN112348941A (en) Real-time fusion method and device based on point cloud and image data
CN109087339A (en) A kind of laser scanning point and Image registration method
CN114638909A (en) Substation semantic map construction method based on laser SLAM and visual fusion
CN110736472A (en) indoor high-precision map representation method based on fusion of vehicle-mounted all-around images and millimeter wave radar
CN116309813A (en) Solid-state laser radar-camera tight coupling pose estimation method
CN117036300A (en) Road surface crack identification method based on point cloud-RGB heterogeneous image multistage registration mapping
CN116778288A (en) Multi-mode fusion target detection system and method
CN112017259B (en) Indoor positioning and image building method based on depth camera and thermal imager

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20210209