CN114659512A - Geographic information acquisition system - Google Patents

Geographic information acquisition system

Info

Publication number
CN114659512A
Authority
CN
China
Prior art keywords
precision camera
precision
camera
laser
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210194992.0A
Other languages
Chinese (zh)
Inventor
周洁
方永火
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Heading Data Intelligence Co Ltd
Original Assignee
Heading Data Intelligence Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Heading Data Intelligence Co Ltd filed Critical Heading Data Intelligence Co Ltd
Priority to CN202210194992.0A
Publication of CN114659512A
Legal status: Pending

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • G01C21/3833Creation or updating of map data characterised by the source of data
    • G01C21/3841Data obtained from two or more sources, e.g. probe vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/865Combination of radar systems with lidar systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867Combination of radar systems with cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a geographic information acquisition system, which comprises a vision acquisition system for acquiring road elements and performing perception composition; the vision acquisition system comprises a high-precision camera, a medium-precision camera and a low-precision camera; the high-precision camera is used for collecting reference ground feature elements; the medium-precision camera is used for collecting specific ground feature elements; the low-precision camera is used for carrying out all-element crowdsourcing data acquisition; the high-precision camera, the medium-precision camera and the low-precision camera each independently perform perception composition after acquiring data; taking the output result of the high-precision camera as a reference, the output results of the high-precision camera and the low-precision camera are rectified and fused, the rectified and fused result is checked against the output result of the medium-precision camera, and map information is output. The combination of the three ADAS cameras lets them complement one another and compensates for the sensing deficiency of any single sensor.

Description

Geographic information acquisition system
Technical Field
The invention relates to the technical field of high-precision map surveying and mapping, in particular to a geographic information acquisition system.
Background
Mobile measurement collection vehicles play an increasingly important role in mapping and map updating. A system that acquires image and laser point cloud data from a mobile carrier obtains accurately geocoded images and laser point clouds while the carrier is moving; it is safe, efficient and highly accurate. In such a system an industrial camera (including a panoramic or aerial camera) acquires visual images of the working environment, a laser sensor acquires point cloud coordinates and intensity values, a POS system composed of a GNSS and an IMU provides accurate position and attitude, and processing software synchronously fuses all sensor data. Because the collection vehicle integrates a large number of very expensive core components, such as the laser system, the inertial navigation system and the imaging system, and because high-precision equipment, all of which relies on imports, is usually chosen to guarantee accuracy, integrating a single high-precision device often costs millions. This places great pressure on map makers.
Therefore, it is particularly necessary to use domestically produced equipment, which greatly reduces cost and improves production efficiency, and to integrate the requirements of the current special package updating vehicle, data quality inspection platform, dynamic and static acquisition vehicle, research and development test vehicle and high-precision mapping vehicle into one complete, minimal-scale test vehicle with diversified functions.
Disclosure of Invention
In view of the technical problems in the prior art, the invention provides a geographic information acquisition system. The system has low cost and high efficiency, its quality meets the requirement of mass-production acquisition capacity, it is suitable for different application scenarios such as expressways, urban roads and underground parking lots, and it takes into account the feasibility of guaranteeing accuracy in long tunnels without GNSS signals and in scenarios where the combined inertial navigation fails.
The technical scheme for solving the technical problems is as follows: a geographic information acquisition system comprises a vision acquisition system for acquiring road elements and performing perception composition; the vision acquisition system comprises a high-precision camera, a medium-precision camera and a low-precision camera;
the high-precision camera is used for collecting reference ground feature elements;
the medium-precision camera is used for collecting specific ground feature elements;
the low-precision camera is used for carrying out all-element crowdsourcing data acquisition;
the high-precision camera, the medium-precision camera and the low-precision camera each independently perform perception composition after acquiring data; taking the output result of the high-precision camera as a reference, the output results of the high-precision camera and the low-precision camera are rectified and fused, the rectified and fused result is checked against the output result of the medium-precision camera, and map information is output.
Furthermore, the geographic information acquisition system also comprises a laser system, wherein the laser system comprises a main laser radar and an auxiliary laser radar; the main laser radar is horizontally mounted on the roof of the vehicle and is used for collecting point cloud data of key ground feature elements of the road and carrying out laser SLAM; the auxiliary laser radar is obliquely mounted at the tail of the vehicle and is used for collecting the special packet data for mapping;
and the laser system fuses the point cloud data acquired by the main laser radar and the special packet data acquired by the auxiliary laser radar to acquire the road full-element laser point cloud data.
Further, the vision system estimates pose transformation at high frequency using a visual odometer, optimizes motion estimation at low frequency using a laser odometer, and corrects drift.
Furthermore, the geographic information acquisition system further comprises a data processing unit for fusing the output data of the vision system and the laser system.
Further, the data processing unit includes: the system comprises a motion estimation module, a feature tracking module and a depth map registration module;
the motion estimation module is used for estimating the motion of the system by utilizing the images and the laser point cloud according to the frame rate of the images;
the characteristic tracking module is used for detecting and matching visual characteristic points in continuous image frames;
the depth map registration module is used for aligning the local depth map and the point cloud and obtaining the depth of the visual feature point;
and the motion estimation module is used for calculating the body motion by utilizing the visual feature points.
Further, the reference feature element includes a lane line and an arrow.
Further, the specific ground feature elements comprise lane lines, arrows, traffic lights and rod pieces.
Furthermore, the system also comprises a combined inertial navigation module, a CAN gateway and a data conversion and transmission module.
The invention has the beneficial effects that: taking a dynamic target as an example, because vehicle-end perception serves an automatic driving system, detecting an object requires an accurate three-dimensional spatial position, the length, width and height of the object and its orientation, and in addition speed information must be provided. Therefore, a way of fusing a sensor with 3D characteristics and a 2D sensor is required. Perception of the surrounding environment is mainly completed through the fusion of a camera, a laser radar and a millimeter-wave radar.
This patent creatively proposes a combination scheme of three ADAS cameras that complement one another and compensate for the sensing deficiency of any single sensor.
Low-cost sensors are used in place of expensive high-precision sensors to integrate the vehicle-mounted mobile measurement system, which reduces its cost; using multiple low-cost sensors overcomes problems such as the low scanning frequency and sparse data of a single low-cost sensor, so that the assembly cost of the whole system is reduced while measurement accuracy is guaranteed.
Drawings
Fig. 1 is a block diagram of a geographic information collection system according to an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Fig. 1 is a block diagram of a geographic information collection system according to an embodiment of the present invention. As shown in fig. 1, the geographic information collecting system includes a vision collecting system for collecting road elements and performing a perception composition; the vision acquisition system comprises a high-precision camera, a medium-precision camera and a low-precision camera.
The high-precision camera is used for collecting reference ground feature elements so as to meet the requirement that collected videos are used for reverse projection, and the output of the high-precision camera is mainly lane markings such as lane lines, arrows and the like.
The medium-precision camera is used for collecting specific ground feature elements and performing visual perception; its output is crowdsourced map data, which comprises lane lines, arrows, traffic lights and rod-shaped ground features.
The low-precision camera is used for collecting all-element crowdsourcing data, namely, identifying all-view dynamic scene targets, and mainly relates to traffic signboard identification, traffic identification in a camera view coverage range and traffic light identification.
The perception mapping precision of the high-precision camera is high, but it can only recognize a small number of ground feature elements such as lane lines and arrows. The medium-precision camera perceives medium-precision images and can identify some ground feature elements such as lane lines, arrows, traffic lights and rod pieces. The perception mapping precision of the low-precision camera is low, but it can identify a large number of ground feature elements and perform all-element crowdsourcing map acquisition. Therefore the high-precision camera, the medium-precision camera and the low-precision camera each independently perform perception mapping after acquiring data; taking the output result of the high-precision camera as a reference, the output results of the high-precision camera and the low-precision camera are rectified and fused, the rectified and fused result is checked against the output result of the medium-precision camera, and map information is output, so that the precision of the whole visual system is guaranteed.
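The patent does not spell out an algorithm for this rectification fusion and check, so the following is only a minimal sketch of one plausible realization, assuming each camera's perception output has already been reduced to per-element sets of 2D map coordinates. The function name, the centroid-based offset estimation and the check threshold are illustrative assumptions, not the patented method.

```python
import numpy as np

def rectify_fuse_and_check(high, low, medium, max_check_err_m=0.5):
    """Fuse low-precision output onto the high-precision reference, then
    validate the fused elements against the medium-precision output.

    high, low, medium: dicts mapping an element id (e.g. a lane-line id)
    to an (N, 2) array of map coordinates from the respective camera.
    """
    fused = {}
    for elem, hi_pts in high.items():
        lo_pts = low.get(elem)
        if lo_pts is None:
            fused[elem] = hi_pts
            continue
        # Rectification: shift the low-precision observation so that its
        # centroid matches the high-precision reference, then merge.
        offset = hi_pts.mean(axis=0) - lo_pts.mean(axis=0)
        fused[elem] = np.vstack([hi_pts, lo_pts + offset])
    # Elements seen only by the low-precision (full-element) camera are kept.
    for elem, lo_pts in low.items():
        fused.setdefault(elem, lo_pts)
    # Check: discard fused elements that disagree with the medium-precision output.
    checked = {}
    for elem, pts in fused.items():
        med_pts = medium.get(elem)
        if med_pts is None:
            checked[elem] = pts                    # nothing to check against
            continue
        err = np.linalg.norm(pts.mean(axis=0) - med_pts.mean(axis=0))
        if err <= max_check_err_m:                 # illustrative threshold (metres)
            checked[elem] = pts
    return checked
```

In this sketch an element observed by both the high- and low-precision cameras is rectified onto the high-precision reference before merging, while elements seen only by the low-precision camera survive unchanged and are validated against the medium-precision output wherever it is available.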
Further, on the basis of the above embodiment, the geographic information acquisition system further includes a laser system, and the laser system includes a main laser radar and an auxiliary laser radar; the main laser radar is horizontally mounted on the roof of the vehicle and is used for collecting point cloud data of key ground feature elements of the road and carrying out laser SLAM; the auxiliary laser radar is mounted at the tail of the vehicle at an inclination of 45 degrees and is used for collecting the special packet data for mapping. The key ground feature elements here refer to rod-shaped features.
On this basis, the geographic information acquisition system further comprises a combined inertial navigation module, a CAN gateway and a data conversion and transmission module. The combined inertial navigation module comprises a vehicle-grade GNSS and an IMU; it outputs longitude, latitude, elevation, roll, pitch and yaw and is used for calculating the pose of the sensors. The CAN gateway is used for collecting and outputting the steering angle, wheel speed and driver behavior data of the vehicle; the system collects and outputs these data through the CAN gateway to improve the precision of the combined inertial navigation. The data conversion and transmission module comprises a DTU, a switch, a router and the like, and is used for 4G network access and data transmission; the output data mainly comprise differential signals, vector data and the like.
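The patent does not describe how the steering-angle and wheel-speed data from the CAN gateway improve the combined inertial navigation; one common use is wheel-speed dead reckoning when GNSS is unavailable, e.g. in the long-tunnel scenario mentioned earlier. The sketch below illustrates only that generic idea under a planar-motion assumption; the function and parameter names are hypothetical.

```python
import math

def dead_reckon(pose, wheel_speed, yaw_rate, dt):
    """Advance a planar pose (x [m], y [m], yaw [rad]) by one time step using
    the wheel speed reported over CAN and the IMU yaw rate. This is a
    GNSS-denied fallback, not the patent's fusion algorithm."""
    x, y, yaw = pose
    yaw += yaw_rate * dt                      # heading from the IMU gyro
    x += wheel_speed * math.cos(yaw) * dt     # position from the wheel odometry
    y += wheel_speed * math.sin(yaw) * dt
    return (x, y, yaw)

# Example: 10 m/s straight ahead over one 10 ms CAN cycle.
pose = dead_reckon((0.0, 0.0, 0.0), wheel_speed=10.0, yaw_rate=0.0, dt=0.01)
```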
The main laser radar collects rod-shaped ground objects, and simultaneously carries out laser SLAM, so that the navigation precision of the combined inertial navigation can be improved.
The laser system fuses the point cloud data acquired by the main laser radar with the special packet data acquired by the auxiliary laser radar, improving precision and accuracy and obtaining the full-element laser point cloud data of the road.
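A minimal sketch of this fusion step, assuming the extrinsic transform from the rear-mounted auxiliary lidar to the main lidar frame is known from calibration; the simple concatenation and the function name are assumptions for illustration only.

```python
import numpy as np

def fuse_lidar_clouds(main_cloud, aux_cloud, T_main_from_aux):
    """Merge the auxiliary lidar cloud into the main lidar frame.

    main_cloud: (N, 3) XYZ points from the roof-mounted main lidar.
    aux_cloud:  (M, 3) XYZ points from the inclined auxiliary lidar.
    T_main_from_aux: 4x4 homogeneous extrinsic transform (from calibration).
    Returns an (N + M, 3) full-element cloud expressed in the main lidar frame.
    """
    ones = np.ones((aux_cloud.shape[0], 1))
    aux_h = np.hstack([aux_cloud, ones])                  # to homogeneous coordinates
    aux_in_main = (T_main_from_aux @ aux_h.T).T[:, :3]    # re-express in the main frame
    return np.vstack([main_cloud, aux_in_main])
```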
Further, the vision system estimates pose transformation at high frequency using a visual odometer, optimizes motion estimation at low frequency using a laser odometer, and corrects drift.
A V-LOAM scheme is adopted, in which vision is combined with a 3D laser radar for real-time mapping. Pose transformation is estimated at high frequency using a visual odometer, motion estimation is optimized at low frequency using a laser odometer, and drift is corrected. On the public KITTI data set the accuracy of the V-LOAM algorithm ranks first, and the method remains robust when the sensor moves at high speed or undergoes obvious illumination changes.
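A toy illustration of this two-rate structure: high-frequency visual increments integrate the pose between laser updates, and the slower but less drift-prone laser odometry result re-anchors it. It assumes the laser pose arrives without latency, which is a simplification of the real V-LOAM pipeline; the class and method names are invented for the sketch.

```python
import numpy as np

class TwoRateOdometry:
    """Visual odometry integrates motion at camera frame rate; laser
    odometry periodically replaces the accumulated pose to remove drift."""

    def __init__(self):
        self.pose = np.eye(4)            # world-from-body transform

    def on_visual_increment(self, dT):
        # High-frequency incremental motion (4x4) estimated from images.
        self.pose = self.pose @ dT

    def on_laser_pose(self, T_laser):
        # Low-frequency drift-corrected pose (4x4) from lidar scan matching,
        # assumed to correspond to the current time (no latency handling).
        self.pose = T_laser
```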
Further, the geographic information acquisition system further comprises a data processing unit for fusing the output data of the vision system and the laser system.
The data processing unit includes: the system comprises a motion estimation module, a feature tracking module and a depth map registration module;
the motion estimation module is used for estimating the motion of the system by utilizing the images and the laser point cloud according to the frame rate of the images;
the characteristic tracking module is used for detecting and matching visual characteristic points in continuous image frames;
the depth map registration module is used for aligning the local depth map and the point cloud and obtaining the depth of the visual feature point;
and the motion estimation module is used for calculating the body motion by utilizing the visual feature points.
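A minimal sketch of the depth map registration step above, assuming the lidar points have already been expressed in the camera frame and the camera intrinsic matrix K is known; the nearest-projected-point lookup and the pixel threshold are illustrative choices, not the patented implementation.

```python
import numpy as np

def assign_feature_depths(features_uv, cloud_xyz_cam, K, max_pix_dist=3.0):
    """Give each tracked 2D feature a depth by projecting the lidar cloud
    into the image and taking the nearest projected point.

    features_uv:    (M, 2) pixel coordinates of tracked visual features.
    cloud_xyz_cam:  (N, 3) lidar points already in the camera frame.
    K:              3x3 camera intrinsic matrix.
    Returns an (M,) array of depths in metres (NaN where no point is close).
    """
    depths = np.full(len(features_uv), np.nan)
    in_front = cloud_xyz_cam[cloud_xyz_cam[:, 2] > 0.1]   # keep points ahead of the camera
    if len(in_front) == 0:
        return depths
    proj = (K @ in_front.T).T
    uv = proj[:, :2] / proj[:, 2:3]                        # perspective division
    for i, f in enumerate(features_uv):
        d2 = np.sum((uv - f) ** 2, axis=1)
        j = int(np.argmin(d2))
        if d2[j] <= max_pix_dist ** 2:
            depths[i] = in_front[j, 2]                     # depth along the optical axis
    return depths
```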
With this multi-sensor fusion, vision provides a high-precision odometer and information-rich map data, while the laser radar provides accurate depth information for the visual features. The robustness and real-time performance of the SLAM algorithm still need further improvement. To improve robustness, data processing steps such as odometer calibration, calibration of the laser radar extrinsics and timestamps, and removal of laser radar motion distortion need to be considered; at the same time, problems such as degraded environments, global positioning and dynamic-environment positioning are improved to a certain extent.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (8)

1. A geographic information acquisition system, characterized by comprising a vision acquisition system for acquiring road elements and performing perception composition; the vision acquisition system comprises a high-precision camera, a medium-precision camera and a low-precision camera;
the high-precision camera is used for collecting reference ground feature elements;
the medium-precision camera is used for collecting specific ground feature elements;
the low-precision camera is used for carrying out all-element crowdsourcing data acquisition;
the high-precision camera, the medium-precision camera and the low-precision camera each independently perform perception composition after acquiring data; and taking the output result of the high-precision camera as a reference, the output results of the high-precision camera and the low-precision camera are rectified and fused, the rectified and fused result is checked against the output result of the medium-precision camera, and map information is output.
2. The system of claim 1, further comprising a laser system comprising a primary lidar and a secondary lidar; the primary lidar is horizontally mounted on the roof of the vehicle and is used for collecting point cloud data of key ground feature elements of the road and carrying out laser SLAM; the secondary lidar is obliquely mounted at the tail of the vehicle and is used for collecting the special packet data for mapping;
and the laser system fuses the point cloud data acquired by the main laser radar and the special packet data acquired by the auxiliary laser radar to acquire the full-element laser point cloud data of the road.
3. The system of claim 2, wherein the vision system estimates pose transformation at high frequency using a visual odometer, and optimizes motion estimation and corrects drift at low frequency using a laser odometer.
4. The system of claim 2, further comprising a data processing unit for fusing the output data of the vision system and the laser system.
5. The system of claim 4, wherein the data processing unit comprises: the system comprises a motion estimation module, a feature tracking module and a depth map registration module;
the motion estimation module is used for estimating the motion of the system by utilizing the images and the laser point cloud according to the frame rate of the images;
the characteristic tracking module is used for detecting and matching visual characteristic points in continuous image frames;
the depth map registration module is used for aligning the local depth map and the point cloud and obtaining the depth of the visual feature point;
and the motion estimation module is used for calculating the body motion by utilizing the visual feature points.
6. The system of claim 1, wherein the reference ground feature elements comprise lane lines and arrows.
7. The system of claim 1, wherein the specific ground feature elements comprise lane lines, arrows, traffic lights and rod pieces.
8. The system of claim 1, further comprising a combined inertial navigation module, a CAN gateway, and a data conversion and transmission module.
CN202210194992.0A 2022-03-01 2022-03-01 Geographic information acquisition system Pending CN114659512A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210194992.0A CN114659512A (en) 2022-03-01 2022-03-01 Geographic information acquisition system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210194992.0A CN114659512A (en) 2022-03-01 2022-03-01 Geographic information acquisition system

Publications (1)

Publication Number Publication Date
CN114659512A (en) 2022-06-24

Family

ID=82026915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210194992.0A Pending CN114659512A (en) 2022-03-01 2022-03-01 Geographic information acquisition system

Country Status (1)

Country Link
CN (1) CN114659512A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination