CN111105497A - Three-dimensional mapping intelligent location identification method - Google Patents

Three-dimensional mapping intelligent location identification method

Info

Publication number
CN111105497A
CN111105497A (application number CN201911327677.5A)
Authority
CN
China
Prior art keywords
dimensional
map
axis
coordinate
depth camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911327677.5A
Other languages
Chinese (zh)
Inventor
田立刚 (Tian Ligang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cashway Technology Co Ltd
Original Assignee
Cashway Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cashway Technology Co Ltd filed Critical Cashway Technology Co Ltd
Priority to CN201911327677.5A
Publication of CN111105497A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/42 Simultaneous measurement of distance and other co-ordinates
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/20 Drawing from basic elements, e.g. lines or circles
    • G06T11/206 Drawing of charts or graphs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/003 Navigation within 3D models or images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10044 Radar image

Abstract

The invention discloses a three-dimensional mapping intelligent location identification method comprising the following steps: scanning a plane with a laser radar to collect two-dimensional coordinates for localization and mapping, forming a two-dimensional map; acquiring three-dimensional coordinates with a depth camera to form a three-dimensional image, and fitting the three-dimensional image to the two-dimensional map created by the laser radar to form a three-dimensional map; when the navigation device first enters an unfamiliar environment, recognizing the place-name text and/or QR codes in the environment with the depth camera to obtain place names, and marking the recognized place names in the three-dimensional map, completing the intelligent identification of the three-dimensional map. The method uses the laser radar to localize and map, establishing accurate coordinates, and images objects with the depth camera to assist mapping, synthesizing the two-dimensional map into a three-dimensional map. This avoids blind spots and reduces the influence of illumination brightness. Text and/or QR codes are recognized by the depth camera so that places are identified intelligently, achieving true self-positioning, self-recognition, and self-navigation.

Description

Three-dimensional mapping intelligent location identification method
Technical Field
The invention relates to the technical field of intelligent navigation equipment, in particular to a three-dimensional mapping intelligent location identification method.
Background
Traditional laser-based simultaneous localization and mapping (SLAM) is strongly affected by intense light, and a single-line laser radar can only build a two-dimensional planar map. Real environments are complex, containing steps, hollowed-out shelving, and similar obstacles, so a two-dimensional map leaves spatial blind spots that can be dangerous. Two-dimensional or multi-line radars are expensive and hard to popularize, while visual SLAM (VSLAM) suffers from large positioning errors. In addition, when a device first enters an unfamiliar environment, it cannot identify the specific place names of its surroundings.
Disclosure of Invention
The invention aims to provide a three-dimensional mapping intelligent location identification method that addresses the above technical defects in the prior art.
The technical solution adopted to achieve this aim is as follows:
A three-dimensional mapping intelligent location identification method comprises the following steps:
scanning a plane with a laser radar to collect two-dimensional coordinates for localization and mapping, forming a two-dimensional map;
acquiring three-dimensional coordinates with a depth camera to form a three-dimensional image, and fitting the three-dimensional image to the two-dimensional map created by the laser radar to form a three-dimensional map;
when the navigation device first enters an unfamiliar environment, recognizing the place-name text and/or QR codes in the environment with the depth camera to obtain place names, and marking the recognized place names in the three-dimensional map, completing the intelligent identification of the three-dimensional map.
Fitting the three-dimensional image into the two-dimensional map created by the laser radar means: taking the laser radar coordinates as the reference for the X-axis and Y-axis coordinates of the map, correcting the X-axis and Y-axis coordinates with the A-axis and B-axis coordinates of the depth camera to obtain new X-axis and Y-axis coordinates, and then fitting the C-axis data corresponding to the depth camera's A-axis and B-axis coordinates into the laser radar coordinates to obtain the mapping coordinates and form the three-dimensional map.
The invention uses a laser radar for localization and mapping, establishing accurate coordinates, and images objects with a depth camera to assist mapping, synthesizing the two-dimensional map into a three-dimensional map; this avoids blind spots and reduces the influence of illumination brightness. In addition, the invention recognizes text and/or QR codes with the depth camera and identifies places intelligently, achieving true self-positioning, self-recognition, and self-navigation.
Drawings
Fig. 1 is a flow chart of a three-dimensional mapping intelligent location identification method.
Detailed Description
The invention is described in further detail below with reference to the figures and specific examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in FIG. 1, the three-dimensional mapping intelligent location identification method of the invention comprises the following steps:
scanning a plane with a laser radar to collect two-dimensional coordinates for localization and mapping, forming a two-dimensional map;
acquiring three-dimensional coordinates with a depth camera to form a three-dimensional image, and fitting the three-dimensional image to the two-dimensional map created by the laser radar to form a three-dimensional map;
when the navigation device first enters an unfamiliar environment, recognizing the place-name text and/or QR codes in the environment with the depth camera to obtain place names, and marking the recognized place names in the three-dimensional map, completing the intelligent identification of the three-dimensional map.
To fit the three-dimensional image into the two-dimensional map created by the laser radar, the X-axis and Y-axis coordinates of the map take the laser radar coordinates as the reference and are corrected with the A-axis and B-axis coordinates of the depth camera to obtain new X-axis and Y-axis coordinates; the C-axis data corresponding to the depth camera's A-axis and B-axis coordinates are then fitted into the laser radar coordinates to obtain the mapping coordinates, forming the three-dimensional map.
Specifically, the laser radar's mapping coordinates form a relative coordinate system (X, Y, Z) with initial coordinates (0, 0, 0) and an initial angle of 0 degrees. The depth camera builds its map in a relative coordinate system (A, B, C), likewise with initial coordinates (0, 0, 0) and an initial angle of 0 degrees.
In the invention, during mapping, the laser radar scans the plane and collects two-dimensional coordinates to establish accurate coordinates, while the depth camera collects three-dimensional coordinates and images objects to assist mapping; the three-dimensional image is fitted into the two-dimensional map built by the laser radar to create an accurate three-dimensional map. Synthesizing the two-dimensional map into a three-dimensional map avoids blind spots and reduces the influence of illumination brightness.
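The laser radar mapping step described above can be illustrated with a minimal sketch that projects one range sweep into a planar occupancy grid. This is an illustrative simplification, not the patent's implementation: the scan is assumed to be ideal, the device pose is assumed known, and scan matching and loop closure are omitted; all function and parameter names here are hypothetical.

```python
import math

def scan_to_grid(ranges, angle_min, angle_step, pose, grid, resolution=0.05):
    """Project one lidar sweep into a planar occupancy grid.

    ranges     -- range readings in metres, one per beam
    angle_min  -- angle of the first beam relative to the device heading
    angle_step -- angular spacing between consecutive beams
    pose       -- (x, y, theta): device position and heading in the map frame
    grid       -- dict mapping (ix, iy) grid cells to hit counts
    resolution -- cell size in metres (illustrative value)
    """
    x0, y0, theta = pose
    for i, r in enumerate(ranges):
        if r is None or math.isinf(r):
            continue  # beam returned nothing usable
        a = theta + angle_min + i * angle_step
        # Beam end point in the lidar's X/Y map plane
        x = x0 + r * math.cos(a)
        y = y0 + r * math.sin(a)
        cell = (round(x / resolution), round(y / resolution))
        grid[cell] = grid.get(cell, 0) + 1  # count a hit in this cell
    return grid
```

Repeating this over successive sweeps, with the pose updated by the SLAM front end, accumulates the two-dimensional map into which the depth-camera data is later fitted.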
The X-axis and Y-axis coordinates of the map take the laser radar coordinates as the reference and are corrected with the A-axis and B-axis coordinates of the depth camera; the C-axis data corresponding to the camera's A-axis and B-axis coordinates are then fitted into the laser radar coordinates to obtain the mapping coordinates (X1, Y1, C2).
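The fusion above can be sketched per point. The patent does not specify the correction formula, so the sketch below assumes a simple weighted average of the lidar X/Y and the camera A/B (already expressed in the lidar frame), with the camera's C axis adopted directly as the height coordinate; the function name and the weight are hypothetical.

```python
def fuse_point(lidar_xy, camera_abc, weight=0.8):
    """Fuse a lidar 2-D point with the matching depth-camera 3-D point.

    lidar_xy   -- (X, Y) from the lidar map, used as the reference
    camera_abc -- (A, B, C) from the depth camera, assumed already
                  aligned to the lidar frame; A/B horizontal, C height
    weight     -- trust placed in the lidar (illustrative; the patent
                  does not specify the correction rule)
    Returns the mapping coordinate (X1, Y1, C2).
    """
    x, y = lidar_xy
    a, b, c = camera_abc
    # Correct the lidar X/Y with the camera A/B to get new X/Y ...
    x1 = weight * x + (1.0 - weight) * a
    y1 = weight * y + (1.0 - weight) * b
    # ... then adopt the camera's C-axis data as the height coordinate.
    return (x1, y1, c)
```

Applying this to every matched point pair yields the (X1, Y1, C2) mapping coordinates that populate the three-dimensional map.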
When the navigation device first enters an unfamiliar environment, the depth camera recognizes the place-name text and QR codes in the environment to obtain the place names and identify the places intelligently.
Specifically, the depth camera identifies a place by recognizing text, determines the name of its current position, and marks that position on the map, completing the intelligent identification of the three-dimensional map.
Alternatively, the depth camera identifies a place by recognizing a QR code, determines the name of its current position, and marks that position on the map, likewise completing the intelligent identification of the three-dimensional map.
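In both cases, the marking step reduces to recording a recognized name at the device's current mapping coordinate. A minimal sketch, assuming the text or QR payload has already been decoded (the decoding itself would use an OCR or QR library, such as OpenCV's QRCodeDetector, and is omitted here); the names below are hypothetical:

```python
def label_place(site_map, decoded_text, device_pose):
    """Record a recognised place name in the three-dimensional map.

    site_map     -- dict mapping place name -> (X1, Y1, C2) coordinate
    decoded_text -- text read from a sign or QR code by the depth camera
    device_pose  -- current (X1, Y1, C2) mapping coordinate of the device
    """
    name = decoded_text.strip()
    if name and name not in site_map:
        # Mark the place at the device's current position; keep the
        # first observation if the same name is seen again later.
        site_map[name] = device_pose
    return site_map
```

A production system would likely place the label at the sign's estimated position rather than the device's, but the device pose suffices to illustrate the marking step.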
The invention uses a laser radar for localization and mapping, establishing accurate coordinates, and images objects with a depth camera to assist mapping, synthesizing the two-dimensional map into a three-dimensional map; this avoids blind spots and reduces the influence of illumination brightness. The depth camera recognizes text and QR codes and identifies places intelligently, achieving true self-positioning, self-recognition, and self-navigation.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the invention, and such modifications and improvements should also fall within the protection scope of the invention.

Claims (2)

1. A three-dimensional mapping intelligent location identification method, characterized in that a plane is scanned by a laser radar to collect two-dimensional coordinates for localization and mapping, forming a two-dimensional map;
three-dimensional coordinates are acquired by a depth camera to form a three-dimensional image, and the three-dimensional image is fitted to the two-dimensional map created by the laser radar to form a three-dimensional map;
when the navigation device first enters an unfamiliar environment, place-name text and/or QR codes in the environment are recognized by the depth camera to obtain place names, and the recognized place names are marked in the three-dimensional map, completing the intelligent identification of the three-dimensional map.
2. The three-dimensional mapping intelligent location identification method according to claim 1, wherein fitting the three-dimensional image to the two-dimensional map created by the laser radar comprises: taking the laser radar coordinates as the reference for the X-axis and Y-axis coordinates of the map, correcting them with the A-axis and B-axis coordinates of the depth camera to obtain new X-axis and Y-axis coordinates, and then fitting the C-axis data corresponding to the depth camera's A-axis and B-axis coordinates into the laser radar coordinates to obtain the mapping coordinates, thereby forming the three-dimensional map.
CN201911327677.5A 2019-12-20 2019-12-20 Three-dimensional mapping intelligent location identification method Pending CN111105497A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911327677.5A CN111105497A (en) 2019-12-20 2019-12-20 Three-dimensional mapping intelligent location identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911327677.5A CN111105497A (en) 2019-12-20 2019-12-20 Three-dimensional mapping intelligent location identification method

Publications (1)

Publication Number Publication Date
CN111105497A (en) 2020-05-05

Family

ID=70422688

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911327677.5A Pending CN111105497A (en) 2019-12-20 2019-12-20 Three-dimensional mapping intelligent location identification method

Country Status (1)

Country Link
CN (1) CN111105497A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI770919B (en) * 2021-03-31 2022-07-11 串雲科技有限公司 System for recognizing the location of an object and method thereof


Similar Documents

Publication Publication Date Title
CN108297115B (en) Autonomous repositioning method for robot
CN108388641B (en) Traffic facility map generation method and system based on deep learning
CN108868268A (en) Based on point to identity distance from the unmanned vehicle position and orientation estimation method being registrated with cross-correlation entropy
CN104503449A (en) Positioning method based on environment line features
CN106908064B (en) Indoor night vision navigation method based on Kinect2 sensor
CN110568451A (en) Method and device for generating road traffic marking in high-precision map
CN110827358A (en) Camera calibration method applied to automatic driving automobile
CN105243663A (en) automatic PCB (Printed Circuit Board) scan image matching method and system
CN111721259A (en) Underwater robot recovery positioning method based on binocular vision
CN104552341A (en) Single-point multi-view meter-hanging posture error detecting method of mobile industrial robot
CN106352871A (en) Indoor visual positioning system and method based on artificial ceiling beacon
CN103559704A (en) Method for visually positioning tank mouth of railway oil tank truck
CN114721001A (en) Mobile robot positioning method based on multi-sensor fusion
CN114485654A (en) Multi-sensor fusion positioning method and device based on high-precision map
CN111105497A (en) Three-dimensional mapping intelligent location identification method
WO2022036478A1 (en) Machine vision-based augmented reality blind area assembly guidance method
CN111738971B (en) Circuit board stereoscopic scanning detection method based on line laser binocular stereoscopic vision
CN110281271A (en) The method that robotic arm corrects the outer camera of arm
CN113971697A (en) Air-ground cooperative vehicle positioning and orienting method
CN110322508A (en) A kind of assisted location method based on computer vision
CN106271235A (en) Welding bead localization method based on machine vision and device
CN114296097A (en) SLAM navigation method and system based on GNSS and LiDAR
CN115131360A (en) Road pit detection method and device based on 3D point cloud semantic segmentation
CN114370871A (en) Close coupling optimization method for visible light positioning and laser radar inertial odometer
CN111445578B (en) Map three-dimensional road feature identification method and system

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200505
