CN116228870B - Mapping method and system based on two-dimensional code SLAM precision control - Google Patents


Info

Publication number
CN116228870B
CN116228870B (application CN202310494501.9A)
Authority
CN
China
Prior art keywords
slam
camera
laser
dimensional code
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310494501.9A
Other languages
Chinese (zh)
Other versions
CN116228870A (en)
Inventor
朱磊
凌晓春
魏占营
魏国忠
赵航
邢耀文
宋禄楷
鲁一慧
朱伟
赵明金
Current Assignee
Shandong Provincial Institute of Land Surveying and Mapping
Original Assignee
Shandong Provincial Institute of Land Surveying and Mapping
Priority date
Filing date
Publication date
Application filed by Shandong Provincial Institute of Land Surveying and Mapping filed Critical Shandong Provincial Institute of Land Surveying and Mapping
Priority to CN202310494501.9A priority Critical patent/CN116228870B/en
Publication of CN116228870A publication Critical patent/CN116228870A/en
Application granted granted Critical
Publication of CN116228870B publication Critical patent/CN116228870B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/206: Instruments for performing navigational calculations specially adapted for indoor navigation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K17/00: Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00: Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10: Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14: Methods or arrangements for sensing record carriers using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404: Methods for optical code recognition
    • G06K7/1408: Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K7/1417: 2D bar codes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00: Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10: Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14: Methods or arrangements for sensing record carriers using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404: Methods for optical code recognition
    • G06K7/1439: Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K7/1452: Methods for optical code recognition including a method step for retrieval of the optical code detecting bar code edges
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05: Geographic models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/13: Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20112: Image segmentation details
    • G06T2207/20164: Salient point detection; Corner detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30244: Camera pose
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30248: Vehicle exterior or interior
    • G06T2207/30252: Vehicle exterior; Vicinity of vehicle
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • Automation & Control Theory (AREA)
  • Computer Graphics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention belongs to the technical field of SLAM systems and provides a mapping method and system based on two-dimensional code SLAM precision control. When the SLAM device passes a two-dimensional code attached to a building, the two-dimensional code is captured by the camera and the laser and the time is recorded; the four corner points of the two-dimensional code are acquired and their pixel coordinates under the camera coordinate system are calculated. The pose of the camera is calculated from the pixel coordinates of the four corner points and their known world coordinates; the laser pose at the corresponding moment is then calculated from the camera pose combined with the camera-to-laser pose relation. An error term is constructed from the laser pose, the SLAM initial track and the transformation from the SLAM coordinate system to the world coordinate system, and the error term is optimized to obtain the final SLAM track, from which the SLAM point cloud map is constructed.

Description

Mapping method and system based on two-dimensional code SLAM precision control
Technical Field
The invention belongs to the field of simultaneous localization and mapping (SLAM), and particularly relates to a mapping method and system based on two-dimensional code SLAM precision control.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
A SLAM system can conveniently collect high-precision point clouds of indoor environments, but even with loop-closure detection, back-end optimization, and IMU pre-integration-based compensation for odometry degradation, it cannot provide stable accuracy output in complex scenes: SLAM results do not always meet the high-precision requirements of surveying and mapping, and trajectories drift to varying degrees, which prevents many downstream applications from being carried out. Correcting drifted point clouds is therefore a necessary and important step in realizing mapping service applications.
The main method for post-hoc accuracy control of a SLAM system is touching anchor points, which are generally attached to the ground or a wall surface. During operation, a fixed part of the SLAM device is brought into contact with a pre-surveyed anchor point; because the transformation between that fixed position and the SLAM trajectory reference can be calibrated, this is equivalent to assigning the SLAM system a known coordinate at a certain moment, and accuracy control can thus be realized. However, anchor point layout is costly: in heavily occluded indoor environments in particular, multiple survey stations must be traversed, and the time and manpower consumed by anchor layout clearly exceed the scanning time. In some indoor scene scans, operators even revert to station-mounted laser scanners in order to reduce the anchor-touching workload.
The current accuracy-control scheme for SLAM point clouds is limited to a single approach, namely adding control points (also called anchor points), but surveying these control points is expensive. To reduce the number of control points, most imported SLAM systems require that a single scan not exceed 20 minutes, which ensures that the internal accuracy of each scan remains reliable; large-scale operation then requires multiple scans followed by post-registration, making the post-processing workload very large. How to reduce the number of anchor points is therefore very important.
Disclosure of Invention
In order to solve the problems, the invention provides a mapping method and a mapping system based on two-dimensional code SLAM precision control.
According to some embodiments, the present invention employs the following technical solutions:
in a first aspect, the invention provides a mapping method based on two-dimensional code SLAM precision control.
A mapping method based on two-dimensional code SLAM precision control comprises the following steps:
acquiring the four corner points of the two-dimensional code, and calculating the pixel coordinates of the four corner points of the two-dimensional code under the camera coordinate system; when the SLAM device passes the two-dimensional code attached to a building, the two-dimensional code is captured by the camera and the laser, and the time is recorded;
calculating the pose of the camera according to the pixel coordinates of the four corner points of the two-dimensional code under the camera coordinate system and the known world coordinates of the four corner points measured by total station;
according to the pose of the camera, calculating the laser pose at the corresponding moment by combining the pose relation from the camera to the laser;
and constructing an error term according to the laser pose, the SLAM initial track and the transformation from the SLAM coordinate system to the world coordinate system, and carrying out an optimization solution on the error term to obtain a final SLAM track so as to construct an SLAM point cloud map.
Further, the camera to laser pose relationship is determined by:
capturing a graduated checkerboard by adopting a camera and a laser at the same time, and obtaining coordinates of four corner points of the checkerboard under a laser coordinate system and pixel coordinates of the four corner points of the checkerboard under the camera coordinate system;
and calculating the pose of the camera under the laser coordinate system according to the coordinates of the four corner points of the checkerboard under the laser coordinate system and the pixel coordinates of the four corner points of the checkerboard under the camera coordinate system, namely the pose relation between the camera and the laser.
Further, after obtaining the pose relationship of the camera to the laser, the method further comprises:
and verifying the pose relationship from the camera to the laser by placing the checkerboard in different poses.
Further, after obtaining the pose relationship of the camera to the laser, the method further comprises:
and acquiring the intensity value of the grid points of the checkerboard in laser, and verifying the pose relation from the camera to the laser according to the three-dimensional coordinates of the grid points in the laser coordinate system and the pixel coordinates of the grid points in the camera coordinate system.
Further, the error term is:

$$e_i = T^{g}_{c_i} \ominus \left( T^{g}_{s}\, T^{s}_{l_i}\, T^{l}_{c} \right)$$

wherein $e_i$ is the error term and $\ominus$ denotes the difference between the observed and predicted poses; g represents the ground truth (Groundtruth), i.e. total station coordinates; s represents the SLAM coordinate system, c represents the camera, and l represents the laser (LiDAR). $T^{g}_{c_i}$ is the camera pose in the total station coordinate system obtained by PnP calculation at time i, $T^{g}_{s}$ is the transformation between the SLAM and total station coordinate systems, and $T^{s}_{c_i} = T^{s}_{l_i}\, T^{l}_{c}$ is the pose of the camera at time i in the SLAM coordinate system, composed from $T^{s}_{l_i}$, the SLAM pose at time i, and $T^{l}_{c}$, the camera-to-laser spatial transform. $T^{g}_{s}$ and $T^{s}_{l_i}$ are the quantities to be solved; $T^{g}_{c_i}$ and $T^{l}_{c}$ are known quantities.
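The composition of this error term can be sketched with plain 4×4 homogeneous transforms. A minimal sketch, assuming hypothetical numeric poses (the helper `se3` and all values below are illustrative, not from the patent): the residual is the deviation of the predicted camera pose $T^{g}_{s} T^{s}_{l_i} T^{l}_{c}$ from the PnP-observed pose $T^{g}_{c_i}$, and it collapses to the identity when the transforms are mutually consistent.

```python
import numpy as np

def se3(rz_deg, t):
    """4x4 rigid transform: rotation about z by rz_deg degrees, then translation t."""
    a = np.deg2rad(rz_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]]
    T[:3, 3] = t
    return T

def error_term(T_g_ci, T_g_s, T_s_li, T_l_c):
    """Residual of one anchor constraint: deviation of the predicted camera pose
    T_g_s @ T_s_li @ T_l_c from the PnP-observed pose T_g_ci (identity if consistent)."""
    return np.linalg.inv(T_g_ci) @ (T_g_s @ T_s_li @ T_l_c)

# Consistent toy values: the residual collapses to the identity matrix.
T_g_s = se3(10.0, [5.0, -2.0, 0.3])      # SLAM -> total station (unknown in practice)
T_s_li = se3(25.0, [1.0, 0.5, 0.0])      # SLAM pose of the laser at time i (unknown)
T_l_c = se3(-90.0, [0.05, 0.0, 0.02])    # camera -> laser extrinsic (known)
T_g_ci = T_g_s @ T_s_li @ T_l_c          # PnP observation (known)
print(np.allclose(error_term(T_g_ci, T_g_s, T_s_li, T_l_c), np.eye(4)))  # True
```

In an actual solver the residual would be mapped to a minimal 6-vector (e.g. via the SE(3) logarithm) before being fed to the optimizer.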
Further, the process of constructing the SLAM point cloud map includes:
and expanding the frame-by-frame laser point clouds of the SLAM system under the final SLAM trajectory to construct the SLAM point cloud map.
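The expansion step can be sketched as follows, assuming each frame's pose is a 4×4 matrix and each frame's points form an N×3 array (the names and values are illustrative, not from the patent):

```python
import numpy as np

def build_map(trajectory, frames):
    """Expand per-frame laser points under the SLAM trajectory:
    each frame's points are transformed by that frame's 4x4 pose and stacked."""
    chunks = []
    for T, pts in zip(trajectory, frames):
        h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coordinates
        chunks.append((T @ h.T).T[:, :3])             # into the map frame
    return np.vstack(chunks)

# Hypothetical two-frame example: the second pose translates by (1, 0, 0).
T0 = np.eye(4)
T1 = np.eye(4)
T1[:3, 3] = [1.0, 0.0, 0.0]
f0 = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
f1 = np.array([[0.0, 0.0, 0.0]])
cloud = build_map([T0, T1], [f0, f1])
print(cloud)  # the third point lands at [1, 0, 0]
```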
Further, the SLAM track is a track obtained under the constraint of a two-dimensional code anchor point and the constraint of an SLAM initial track.
Further, the camera and the laser are fixed together.
In a second aspect, the invention provides a mapping system based on two-dimensional code SLAM precision control.
A mapping system based on two-dimensional code SLAM precision control comprises: a SLAM device and a camera in communication with each other,
the camera and the laser are used for acquiring four corner points of the two-dimensional code when the SLAM equipment passes through the two-dimensional code attached to the building, recording time and sending the four corner points of the two-dimensional code to the SLAM equipment;
the SLAM equipment is used for calculating the pixel coordinates of the four corner points of the two-dimensional code under the camera coordinate system according to the four corner points of the two-dimensional code; calculating the pose of the camera according to the pixel coordinates of the four corner points and their known world coordinates; calculating, according to the pose of the camera, the laser pose at the corresponding moment by combining the pose relation from the camera to the laser; and constructing an error term according to the laser pose, the SLAM initial track and the transformation from the SLAM coordinate system to the world coordinate system, and carrying out an optimization solution on the error term to obtain a final SLAM track so as to construct the SLAM point cloud map.
Further, the camera and the laser are fixed together.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides a two-dimensional code anchor point mode, which can effectively reduce the number of touch anchor points, namely, the precision of the traditional touch anchor points can be achieved in a smaller number, and the efficiency of SLAM point cloud mapping is improved.
The invention can utilize fewer two-dimensional code anchors to achieve the precision control effect of the traditional touch anchor, thereby reducing field workload, submitting efficiency and being more flexible than the arrangement of the traditional anchor.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention.
FIG. 1 is a flow chart of a mapping method based on two-dimensional code SLAM precision control shown in the invention;
fig. 2 is a schematic diagram of two-dimensional code anchor point observation shown in the present invention;
FIG. 3 is a schematic diagram of PnP (Perspective-n-Point) algorithm positioning according to the present invention;
FIG. 4 is a schematic diagram of the spatial calibration between the laser and the camera according to the present invention;
fig. 5 is a comparison diagram before and after the two-dimensional code anchor point precision enhancement.
Detailed Description
The invention will be further described with reference to the drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the invention. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present invention. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
In the present invention, terms such as "fixedly" and "coupled" are to be construed broadly: a connection may be fixed, integral, or detachable, and may be direct or indirect through an intermediate medium. The specific meaning of these terms can be determined by a person skilled in the relevant art according to circumstances and is not to be construed as limiting the present invention.
Term interpretation:
the Apriltag is used for identifying the position and the direction of the two-dimensional code by analyzing black pixels and white pixels in the image, so that the two-dimensional code is identified.
The iSAM is an optimized library of sparse non-linearity problems encountered in the instant localization and mapping (SLAM).
GTSAM, GTSAM is a C++ library used for smoothing (smoothing) and mapping (mapping) in the robot domain and the computer vision domain.
G2O, least squares optimization method.
Ceres is a widely used least squares problem solving library.
Example 1
The embodiment provides a mapping method based on two-dimensional code SLAM precision control.
As shown in fig. 1, a mapping method based on two-dimensional code SLAM precision control includes:
acquiring the four corner points of the two-dimensional code, and calculating the pixel coordinates of the four corner points of the two-dimensional code under the camera coordinate system; when the SLAM device passes the two-dimensional code attached to a building, the two-dimensional code is captured by the camera and the laser, and the time is recorded;
calculating the pose of the camera according to the pixel coordinates of the four corner points of the two-dimensional code under the camera coordinate system and the known world coordinates of the four corner points measured by total station;
according to the pose of the camera, calculating the laser pose at the corresponding moment by combining the pose relation from the camera to the laser;
and constructing an error term according to the laser pose, the SLAM initial track and the transformation from the SLAM coordinate system to the world coordinate system, and carrying out an optimization solution on the error term to obtain a final SLAM track so as to construct an SLAM point cloud map.
In specific implementation, AprilTag two-dimensional codes are attached to wall surfaces or columns, and their four corner points are measured with a total station, so that each code can serve as a two-dimensional code anchor point. The code may, for example, be 15 × 15 cm in size, although the size is not limited here. Meanwhile, the hardware of the SLAM system is modified: one (or several) cameras are fixed to the scanner head (the laser-emitting unit) and connected to the SLAM system to ensure synchronization. While the SLAM system operates, the camera automatically captures the two-dimensional code and records the time. Generally, as the device passes a code, the camera has 3-5 seconds of shooting time and obtains several to tens of two-dimensional code images, as shown in fig. 2. Compared with a single touch anchor, this embodiment therefore provides more observations, i.e. wider constraints, and hence more stable control. Moreover, when the SLAM device passes the two-dimensional code again, it is still captured automatically, adding absolute pose control once more.
The two-dimensional code anchors provide the SLAM system with true values of multiple instantaneous poses, while the SLAM system also computes poses on its own; a fusion (optimal solution) is carried out between the two, yielding a more accurate SLAM trajectory and hence a more accurate SLAM point cloud map. In other words, the SLAM algorithm can already compute the SLAM trajectory, but its accuracy may contain large errors; the two-dimensional code provides an accurate pose (true value) at that moment and thus only helps the SLAM system achieve accuracy enhancement.
The implementation flow is as follows:
(1) Obtain the pixel coordinates of the four corner points of the AprilTag two-dimensional code in the camera image using a pattern recognition algorithm. For the specific implementation, refer to the detection algorithm of the AprilTag library, which is an existing method.
(2) With the world coordinates of the four AprilTag corner points known (measured directly by total station), obtain the camera pose by single-view geometric resection. Specifically, the camera pose can be solved with the PnP algorithm, i.e. from n known world coordinates and their corresponding pixel coordinates in the image; the principle is the same as solving the exterior orientation elements in photogrammetry. As shown in fig. 3, assuming A, B, C are known spatial points in the world coordinate system and a, b, c are their projected pixel coordinates in the camera window O-W1W2W3W4, the spatial pose of the camera can be obtained by the PnP algorithm.
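The measurement model that PnP inverts can be sketched as a pinhole projection of the four tag corners. In the sketch below, the intrinsics, tag size, and camera placement are assumed values for illustration only:

```python
import numpy as np

def project(K, T_cw, pts_w):
    """Pinhole projection of world points: x = K [R|t] X (the model PnP inverts).
    K: 3x3 intrinsics, T_cw: 4x4 world-to-camera transform, pts_w: Nx3 points."""
    h = np.hstack([pts_w, np.ones((len(pts_w), 1))])
    pc = (T_cw @ h.T).T[:, :3]        # world -> camera frame
    uv = (K @ pc.T).T
    return uv[:, :2] / uv[:, 2:3]     # perspective division

# Hypothetical 15 cm tag on the z = 0 plane, camera 2 m in front, axes aligned.
tag = 0.15
corners_w = np.array([[0, 0, 0], [tag, 0, 0], [tag, tag, 0], [0, tag, 0]])
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
T_cw = np.eye(4)
T_cw[:3, 3] = [-tag / 2, -tag / 2, 2.0]   # camera centred on the tag, 2 m away
print(project(K, T_cw, corners_w))
# corners project to (290, 210), (350, 210), (350, 270), (290, 270)
```

A PnP solver recovers `T_cw` from exactly these pixel/world correspondences.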
(3) From the spatial transformation between the camera and the laser and the camera pose, obtain the laser pose at the moment the camera captures the two-dimensional code. The transformation between camera and laser is obtained by spatial calibration, as shown in fig. 4.
The camera and the laser are fixed together, and a checkerboard with an accurate scale on one face is hung indoors, about 2-3 meters from the device, so that the camera and the laser can photograph/scan the checkerboard simultaneously. The three-dimensional coordinates of the four checkerboard corners in the laser frame and their pixel coordinates in the camera are obtained, and the camera pose in the laser coordinate system, i.e. the required transformation, is solved with the PnP method. To avoid accidental errors, the checkerboard is placed in several different poses; and if the checkerboard grid points can be distinguished in the laser intensity values, the input quantities (three-dimensional coordinates in the laser coordinate system and pixel coordinates in the camera) can be increased to verify, via PnP, the pose relation between the laser and the camera.
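As a sanity check on the calibration logic, the camera-to-laser transform can equivalently be recovered by composing the two board poses, $T^{l}_{c} = T^{l}_{b} (T^{c}_{b})^{-1}$. The sketch below uses made-up poses; the patent itself obtains the transform via PnP on the laser-frame corner coordinates:

```python
import numpy as np

def rot_z(deg, t):
    """4x4 rigid transform: rotation about z by deg degrees, then translation t."""
    a = np.deg2rad(deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]]
    T[:3, 3] = t
    return T

# Assumed ground-truth extrinsic (camera frame -> laser frame), to be recovered.
T_lc_true = rot_z(5.0, [0.10, 0.00, -0.05])
# Board pose seen by the camera (e.g. from PnP on the checkerboard corners)...
T_cb = rot_z(30.0, [0.2, -0.1, 2.5])
# ...and the same board pose seen by the laser (from the scanned corners).
T_lb = T_lc_true @ T_cb
# Composing the two board poses recovers the camera pose in the laser frame.
T_lc = T_lb @ np.linalg.inv(T_cb)
print(np.allclose(T_lc, T_lc_true))  # True
```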
(4) The algorithm is implemented as follows; fig. 5 shows a comparison before and after two-dimensional code anchor precision enhancement.
Let the camera pose in the total station coordinate system at the moment a two-dimensional code anchor is observed be $T^{g}_{c_i}$; this pose is obtained directly by the PnP algorithm (see step (2)) and serves as a true value. Let the transformation from the SLAM coordinate system to the total station coordinate system be $T^{g}_{s}$, and the camera pose in the SLAM coordinate system at time i be $T^{s}_{c_i}$. The error function can be expressed as

$$e_i = T^{g}_{c_i} \ominus \left( T^{g}_{s}\, T^{s}_{l_i}\, T^{l}_{c} \right)$$

where $\ominus$ denotes the difference between the observed and predicted poses; g represents the ground truth (Groundtruth), i.e. total station coordinates; s represents the SLAM coordinate system, c represents the camera, and l represents the laser (LiDAR). $T^{g}_{c_i}$ is the camera pose in the total station coordinate system obtained by PnP calculation at time i, $T^{g}_{s}$ is the transformation between the SLAM and total station coordinate systems, and $T^{s}_{c_i} = T^{s}_{l_i}\, T^{l}_{c}$ is the camera pose at time i in the SLAM coordinate system, composed from $T^{s}_{l_i}$, the SLAM pose at time i, and $T^{l}_{c}$, the camera-to-laser spatial transform. $T^{g}_{s}$ and $T^{s}_{l_i}$ are the quantities to be solved; $T^{g}_{c_i}$ and $T^{l}_{c}$ are known quantities.
Solving the above equation with optimization theory (many excellent open-source libraries, such as iSAM, GTSAM, g2o, or Ceres, can handle this problem) yields $T^{g}_{s}$ and $T^{s}_{l_i}$; their product $T^{g}_{s}\, T^{s}_{l_i}$ is the SLAM trajectory in the total station coordinate system, and expanding the point clouds under this trajectory gives the full precision-enhanced result point cloud. The result point cloud accounts for its transformation relation to the two-dimensional code anchors while still satisfying the SLAM constraints, so it is internally consistent, more accurate, and meets the requirements.
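As a simplified illustration of the solve, the rigid SLAM-to-total-station alignment $T^{g}_{s}$ can be estimated in closed form from corresponding anchor positions (Kabsch/SVD). This is only a stand-in sketch with made-up data; the full method optimizes whole poses with iSAM/GTSAM/g2o/Ceres, which this toy example does not replicate:

```python
import numpy as np

def fit_rigid(P_slam, Q_world):
    """Closed-form least-squares rigid fit Q = R @ P + t (Kabsch/SVD),
    a simplified stand-in for the full pose-graph optimization."""
    p_bar, q_bar = P_slam.mean(0), Q_world.mean(0)
    H = (P_slam - p_bar).T @ (Q_world - q_bar)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # proper rotation
    R = Vt.T @ D @ U.T
    return R, q_bar - R @ p_bar

# Hypothetical anchor positions: SLAM-frame points and their total-station truths.
a = np.deg2rad(40.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
t_true = np.array([10.0, -4.0, 1.5])
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
Q = P @ R_true.T + t_true
R_est, t_est = fit_rigid(P, Q)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
```

With noisy observations the same fit minimizes the sum of squared residuals, which is why it serves as a reasonable miniature of the anchor-constrained trajectory solve.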
Example 2
The embodiment provides a mapping system based on two-dimensional code SLAM precision control.
A mapping system based on two-dimensional code SLAM precision control comprises: a SLAM device and a camera in communication with each other,
the camera and the laser are used for acquiring four corner points of the two-dimensional code when the SLAM equipment passes through the two-dimensional code attached to the building, recording time and sending the four corner points of the two-dimensional code to the SLAM equipment;
the SLAM equipment is used for calculating the pixel coordinates of the four corner points of the two-dimensional code under the camera coordinate system according to the four corner points of the two-dimensional code; calculating the pose of the camera according to the pixel coordinates of the four corner points and their known world coordinates; calculating, according to the pose of the camera, the laser pose at the corresponding moment by combining the pose relation from the camera to the laser; and constructing an error term according to the laser pose, the SLAM initial track and the transformation from the SLAM coordinate system to the world coordinate system, and carrying out an optimization solution on the error term to obtain a final SLAM track so as to construct the SLAM point cloud map.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A mapping method based on two-dimensional code SLAM precision control is characterized by comprising the following steps:
acquiring the four corner points of the two-dimensional code, and calculating the pixel coordinates of the four corner points of the two-dimensional code under the camera coordinate system; the four corner points are captured by a camera and a laser when the SLAM device passes the two-dimensional code attached to a building, and the time is recorded;
calculating the pose of the camera according to the pixel coordinates of the four corner points of the two-dimensional code under the camera coordinate system and the known world coordinates of the four corner points measured by total station;
according to the pose of the camera, calculating the laser pose at the corresponding moment by combining the pose relation from the camera to the laser;
constructing an error term according to the laser pose, the SLAM initial track and the transformation from the SLAM coordinate system to the world coordinate system, and carrying out an optimization solution on the error term to obtain a final SLAM track so as to construct an SLAM point cloud map;
the error term is:
wherein,,g represents total station coordinates; s denotes a SLAM coordinate system, c denotes a camera, l denotes a laser,is i time onCamera pose in total station coordinate system obtained by Pnp calculation>For the transformation between SLAM to total station coordinate system,/>For pose from camera at moment i to SLAM coordinate system,/for camera at moment i>For i time SLAM pose, +.>For spatial conversion of camera to laser.
2. The mapping method based on two-dimensional code SLAM precision control of claim 1, wherein the camera-to-laser pose relationship is determined by:
capturing a graduated checkerboard by adopting a camera and a laser at the same time, and obtaining coordinates of four corner points of the checkerboard under a laser coordinate system and pixel coordinates of the four corner points of the checkerboard under the camera coordinate system;
and calculating the camera pose in the laser coordinate system from the coordinates of the four checkerboard corner points in the laser coordinate system and their pixel coordinates in the camera coordinate system; this pose is the camera-to-laser pose relationship.
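The pose-from-four-coplanar-points step in claims 1 and 2 can be sketched as a plane-to-image homography (DLT) followed by its decomposition under a known intrinsic matrix K. This is a minimal stand-in for a full PnP solver such as OpenCV's solvePnP; all names are hypothetical:

```python
import numpy as np

def pose_from_planar_points(obj_xy, img_uv, K):
    """Recover the camera pose from >= 4 coplanar (Z = 0) points.

    obj_xy : (N, 2) planar object coordinates (e.g. the four corner points),
    img_uv : (N, 2) measured pixel coordinates,
    K      : (3, 3) camera intrinsic matrix.
    Returns R, t such that an object point X maps to K (R X + t)."""
    # Direct linear transform for the plane-to-image homography H.
    A = []
    for (X, Y), (u, v) in zip(obj_xy, img_uv):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    H = np.linalg.svd(np.asarray(A, float))[2][-1].reshape(3, 3)

    # Decompose K^-1 H into two rotation columns and the translation.
    M = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(M[:, 0])
    if lam * M[2, 2] < 0:          # keep the plane in front of the camera
        lam = -lam
    r1, r2, t = lam * M[:, 0], lam * M[:, 1], lam * M[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    # Re-orthonormalise R (noise makes it only approximately a rotation).
    U, _, Vt = np.linalg.svd(R)
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    return R, t
```

With exact, noise-free correspondences the four points determine the homography (and hence the pose) uniquely up to scale.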
3. The mapping method based on two-dimensional code SLAM precision control of claim 2, further comprising, after obtaining the camera-to-laser pose relationship:
and verifying the camera-to-laser pose relationship using a checkerboard placed at arbitrary, unconstrained poses.
4. The mapping method based on two-dimensional code SLAM precision control of claim 2, further comprising, after obtaining the camera-to-laser pose relationship:
and acquiring the intensity values of the checkerboard corner points in the laser data, and verifying the camera-to-laser pose relationship from the three-dimensional coordinates of the corner points in the laser coordinate system and their pixel coordinates in the camera coordinate system.
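The verification in claim 4 amounts to projecting the laser-frame corner points into the image through the calibrated extrinsic and comparing against the measured pixels. A hedged sketch, assuming T_cl maps laser-frame coordinates into the camera frame (names hypothetical):

```python
import numpy as np

def reprojection_rms(pts_laser, px, T_cl, K):
    """RMS reprojection error of laser-frame 3D corner points.

    pts_laser : (N, 3) corner coordinates in the laser coordinate system,
    px        : (N, 2) the same corners measured in the image,
    T_cl      : (4, 4) extrinsic taking laser-frame points into the camera frame,
    K         : (3, 3) camera intrinsic matrix."""
    pts_h = np.hstack([pts_laser, np.ones((len(pts_laser), 1))])
    cam = (T_cl @ pts_h.T)[:3]           # corners in the camera frame
    proj = K @ cam
    uv = (proj[:2] / proj[2]).T          # perspective division
    return float(np.sqrt(np.mean(np.sum((uv - px) ** 2, axis=1))))
```

A near-zero RMS confirms the calibration; a large RMS flags an inconsistent extrinsic.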
5. The mapping method based on two-dimensional code SLAM precision control of claim 1, wherein the process of constructing a SLAM point cloud map comprises:
and expanding the frame-by-frame laser point clouds of the SLAM system under the final trajectory, i.e. transforming each frame by its pose and accumulating the frames, to construct the SLAM point cloud map.
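The map construction of claim 5 can be sketched as rigidly transforming each laser frame by its optimized trajectory pose and concatenating the results (names hypothetical):

```python
import numpy as np

def accumulate_map(frames, poses):
    """Concatenate per-frame laser point clouds in the map frame.

    frames : list of (N_i, 3) point arrays in the laser frame at time i,
    poses  : list of (4, 4) optimized laser poses from the final SLAM
             trajectory (laser frame -> map/world frame)."""
    out = []
    for pts, T in zip(frames, poses):
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])
        out.append((T @ pts_h.T)[:3].T)   # rigidly transform this frame
    return np.vstack(out)
```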
6. The mapping method based on two-dimensional code SLAM precision control of claim 1, wherein the SLAM trajectory is the trajectory obtained under both the two-dimensional code anchor point constraint and the initial SLAM trajectory constraint.
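As a toy illustration of claim 6, a one-dimensional linear least squares can fuse relative SLAM increments (the initial trajectory constraint) with absolute anchor observations (the two-dimensional code constraint); the weight and all names are hypothetical:

```python
import numpy as np

def fuse_trajectory(odom, anchors, w_anchor=10.0):
    """Solve for 1-D poses x_0..x_n from relative increments and anchor fixes.

    odom    : list of measured increments x_{i+1} - x_i (SLAM odometry),
    anchors : dict {index: absolute position from an anchor observation},
    w_anchor: relative weight of the anchor constraints."""
    n = len(odom) + 1
    rows, rhs = [], []
    for i, d in enumerate(odom):            # relative (trajectory) constraints
        r = np.zeros(n)
        r[i], r[i + 1] = -1.0, 1.0
        rows.append(r)
        rhs.append(d)
    for k, p in anchors.items():            # absolute (anchor) constraints
        r = np.zeros(n)
        r[k] = w_anchor
        rows.append(r)
        rhs.append(w_anchor * p)
    A, b = np.vstack(rows), np.asarray(rhs)
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

With drifting increments (e.g. 1.1 where the truth is 1.0), anchors at both ends pull the recovered trajectory back toward the surveyed positions; the full method does the same in SE(3).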
7. The mapping method based on two-dimensional code SLAM precision control according to any one of claims 1-6, wherein the camera and the laser are fixed together.
8. A mapping system based on two-dimensional code SLAM precision control, characterized by comprising: a SLAM device, and a camera and a laser in communication with the SLAM device;
the camera and the laser are used for acquiring the four corner points of the two-dimensional code when the SLAM device passes the two-dimensional code attached to a building, recording the time, and sending the four corner points to the SLAM device;
the SLAM device is used for calculating the pixel coordinates of the four corner points of the two-dimensional code in the camera coordinate system; calculating the camera pose by PnP from the pixel coordinates and the three-dimensional coordinates of the four corner points in the world (total station) coordinate system; calculating the laser pose at the corresponding moment from the camera pose and the camera-to-laser pose relationship; and constructing an error term from the laser pose, the initial SLAM trajectory, and the transformation from the SLAM coordinate system to the world coordinate system, then optimizing the error term to obtain the final SLAM trajectory and construct the SLAM point cloud map;
the error term is:

e_i = (T_s^g · T_{c_i}^s)^(-1) · T_{c_i}^g, with T_{c_i}^s = T_{s_i} · (T_l^c)^(-1),

wherein the deviation of e_i from the identity transform is minimized; g denotes the total station coordinate system, s the SLAM coordinate system, c the camera, and l the laser; T_{c_i}^g is the camera pose in the total station coordinate system obtained by PnP at time i; T_s^g is the transformation from the SLAM coordinate system to the total station coordinate system; T_{c_i}^s is the pose of the camera at time i in the SLAM coordinate system; T_{s_i} is the SLAM pose at time i; T_l^c is the spatial transformation from the camera to the laser.
9. The mapping system based on two-dimensional code SLAM precision control of claim 8, wherein the camera and the laser are fixed together.
CN202310494501.9A 2023-05-05 2023-05-05 Mapping method and system based on two-dimensional code SLAM precision control Active CN116228870B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310494501.9A CN116228870B (en) 2023-05-05 2023-05-05 Mapping method and system based on two-dimensional code SLAM precision control

Publications (2)

Publication Number Publication Date
CN116228870A CN116228870A (en) 2023-06-06
CN116228870B true CN116228870B (en) 2023-07-28

Family

ID=86585885

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310494501.9A Active CN116228870B (en) 2023-05-05 2023-05-05 Mapping method and system based on two-dimensional code SLAM precision control

Country Status (1)

Country Link
CN (1) CN116228870B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117824667B (en) * 2024-03-06 2024-05-10 成都睿芯行科技有限公司 Fusion positioning method and medium based on two-dimensional code and laser

Citations (1)

Publication number Priority date Publication date Assignee Title
CN115774265A (en) * 2023-02-15 2023-03-10 江苏集萃清联智控科技有限公司 Two-dimensional code and laser radar fusion positioning method and device for industrial robot

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
CN107180215B (en) * 2017-05-31 2020-01-31 同济大学 Parking lot automatic mapping and high-precision positioning method based on library position and two-dimensional code
CN107422735A (en) * 2017-07-29 2017-12-01 深圳力子机器人有限公司 A kind of trackless navigation AGV laser and visual signature hybrid navigation method
CN108921947B (en) * 2018-07-23 2022-06-21 百度在线网络技术(北京)有限公司 Method, device, equipment, storage medium and acquisition entity for generating electronic map
CN111179427A (en) * 2019-12-24 2020-05-19 深圳市优必选科技股份有限公司 Autonomous mobile device, control method thereof, and computer-readable storage medium
CN113706626B (en) * 2021-07-30 2022-12-09 西安交通大学 Positioning and mapping method based on multi-sensor fusion and two-dimensional code correction
CN113792564B (en) * 2021-09-29 2023-11-10 北京航空航天大学 Indoor positioning method based on invisible projection two-dimensional code
CN114332360A (en) * 2021-12-10 2022-04-12 深圳先进技术研究院 Collaborative three-dimensional mapping method and system
CN115936029B (en) * 2022-12-13 2024-02-09 湖南大学无锡智能控制研究院 SLAM positioning method and device based on two-dimensional code



Similar Documents

Publication Publication Date Title
US8699005B2 (en) Indoor surveying apparatus
US10578426B2 (en) Object measurement apparatus and object measurement method
CN109801358A (en) A kind of substation's three-dimensional investigation method scanning and put cloud visual fusion based on SLAM
US20130096873A1 (en) Acquisition of Information for a Construction Site
CN109541630A (en) A method of it is surveyed and drawn suitable for Indoor environment plane 2D SLAM
Mi et al. A vision-based displacement measurement system for foundation pit
CN116228870B (en) Mapping method and system based on two-dimensional code SLAM precision control
CN110243375A (en) Method that is a kind of while constructing two-dimensional map and three-dimensional map
JP2023546739A (en) Methods, apparatus, and systems for generating three-dimensional models of scenes
Jiang et al. Determination of construction site elevations using drone technology
Previtali et al. Rigorous procedure for mapping thermal infrared images on three-dimensional models of building façades
US11867818B2 (en) Capturing environmental scans using landmarks based on semantic features
WO2022126339A1 (en) Method for monitoring deformation of civil structure, and related device
Muffert et al. The estimation of spatial positions by using an omnidirectional camera system
EP4332631A1 (en) Global optimization methods for mobile coordinate scanners
Zheng et al. Rail sensor: A mobile lidar system for 3d archiving the bas-reliefs in angkor wat
CN111156983B (en) Target equipment positioning method, device, storage medium and computer equipment
US20230290055A1 (en) Surveying assistance system, information display terminal, surveying assistance method, and storage medium storing surveying assistance program
KR101996226B1 (en) Apparatus for measuring three-dimensional position of subject and method thereof
Xiang et al. Towards mobile projective AR for construction co-robots
CN108592789A (en) A kind of steel construction factory pre-assembly method based on BIM and machine vision technique
EP4068218A1 (en) Automated update of object-models in geometrical digital representation
Wang et al. Distance measurement using single non-metric CCD camera
CN112284351A (en) Method for measuring cross spanning line
US20230324558A1 (en) Sensor field-of-view manipulation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant