CN109887057A - Method and apparatus for generating a high-precision map - Google Patents

Method and apparatus for generating a high-precision map

Info

Publication number
CN109887057A
CN109887057A (application CN201910156262.XA)
Authority
CN
China
Prior art keywords
information
attitude estimation
point cloud
camera
radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910156262.XA
Other languages
Chinese (zh)
Other versions
CN109887057B (en)
Inventor
沈栋
李昱辰
钱炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Feibao Technology Co Ltd
Original Assignee
Hangzhou Feibao Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Feibao Technology Co Ltd filed Critical Hangzhou Feibao Technology Co Ltd
Publication of CN109887057A
Application granted
Publication of CN109887057B
Legal status: Active
Anticipated expiration

Landscapes

  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a method and apparatus for generating a high-precision map without relying on RTK (real-time kinematic positioning). The method comprises: obtaining image data from a camera; obtaining point cloud data from a lidar; obtaining pose information from a Global Navigation Satellite System (GNSS)/Inertial Measurement Unit (IMU); processing the image data to extract visual feature information, and performing pose estimation from the visual feature information extracted for the current frame, the previously fused overall pose information, and the previously maintained map visual feature information to obtain a camera pose estimate; processing the point cloud data to obtain point cloud information, and performing pose estimation from the point cloud information and other information to obtain a radar pose estimate; fusing the pose information, the camera pose estimate, and the radar pose estimate to obtain a more accurate and stable pose estimation result; and, according to the more accurate and stable pose estimation result, fusing the image data and the point cloud data to construct a high-precision map.

Description

Method and apparatus for generating a high-precision map
Technical field
In general, the present invention relates to the field of autonomous driving, and in particular to methods and apparatus for generating high-precision maps and performing localization based on a camera, a lidar, and GNSS.
Background art
With the development of technology, autonomous driving of robotic vehicles (for example, "UAVs", drones, or unmanned vehicles) has become an increasingly hot topic. Robotic vehicles under development are intended for a wide range of applications. A key requirement in autonomous driving is knowing where the vehicle is and where it is going; high-precision maps play an important role in this, so how to generate a high-precision map is a key problem in the field of autonomous driving.
A common high-precision mapping scheme relies on an integrated navigation system combining RTK (Real-Time Kinematic) high-precision GPS with an inertial measurement unit (IMU). Such systems are prohibitively expensive, and in weak-signal or no-signal environments such as among tall buildings, in tunnels or underground garages, or when the vehicle turns sharply or shakes, or in bad weather, the error grows far beyond the centimeter (cm) level accuracy that a high-precision map requires.
Robotic vehicles are usually equipped with cameras that can capture images, image sequences, or video; radar devices that can acquire radar point clouds; and GNSS receivers that can receive and process navigation signals. A robotic vehicle can use the captured images, radar point cloud data, and GNSS signals to generate accurate maps and perform vision-based navigation and localization, providing a flexible, scalable, and low-cost solution for navigating robotic vehicles in various environments.
Summary of the invention
In view of this, the present disclosure provides methods, apparatus, devices, and computer storage media for generating high-precision maps and performing localization based on a camera, a lidar, and GNSS, so as to obtain cm-level high-precision maps in various driving scenarios and thereby improve the safety of autonomous driving of robotic vehicles.
In one aspect, embodiments of the present invention provide a method for generating a high-precision map. The method includes: obtaining image data from a camera; obtaining point cloud data from a lidar; obtaining pose information from a Global Navigation Satellite System (GNSS)/Inertial Measurement Unit (IMU); processing the image data to extract visual feature information, and performing pose estimation from the visual feature information extracted for the current frame, the previously fused overall pose information, and the previously maintained map visual feature information to obtain a camera pose estimate; processing the point cloud data to obtain point cloud information, and performing pose estimation from the point cloud information and other information to obtain a radar pose estimate; fusing the pose information, the camera pose estimate, and the radar pose estimate to obtain a more accurate and stable pose estimation result; and fusing the image data and the point cloud data according to the more accurate and stable pose estimation result to construct a high-precision map. Here, the other information may be the previously fused pose information and previously saved point cloud information.
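As a rough illustration of the data flow only (not the claimed implementation), the following Python sketch wires these steps together; all of the helper names (extract_visual_features, estimate_camera_pose, fuse_poses, and so on) are hypothetical placeholders for the modules described above:

    def build_hd_map_step(camera, lidar, gnss_imu, map_state):
        """One iteration of the mapping pipeline described above (hypothetical API)."""
        image = camera.get_frame()
        cloud = lidar.get_scan()
        pose_meas = gnss_imu.get_pose()   # acceleration, angular velocity, coordinates

        # Camera branch: current-frame features + previously fused pose + map features
        features = extract_visual_features(image)
        cam_pose = estimate_camera_pose(features, map_state.fused_pose, map_state.map_features)

        # Lidar branch: point cloud info + previously fused pose + previously saved cloud
        cloud_info = process_point_cloud(cloud)
        lidar_pose = estimate_lidar_pose(cloud_info, map_state.fused_pose, map_state.prev_cloud)

        # Fusion (e.g. Kalman filtering), then fuse the data into the HD map
        fused_pose = fuse_poses(pose_meas, cam_pose, lidar_pose)
        map_state.update(fused_pose, image, cloud)
        return fused_pose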
In one embodiment of the present disclosure, the image data, the point cloud data, and the pose information are preprocessed before being stored locally.
In one embodiment of the present disclosure, the preprocessing includes parsing, time-synchronizing, and screening the image data, the point cloud data, and the pose information.
In one embodiment of the present disclosure, the pose information includes the acceleration, angular velocity, coordinates, and the like provided by the GNSS/IMU.
In one embodiment of the present disclosure, processing the image data obtained by the camera to extract the visual feature information, and performing pose estimation from the visual feature information extracted for the current frame, the previously fused overall pose information, and the previously maintained map visual feature information to obtain the camera pose estimate, comprises: identifying dynamic obstacles using deep-learning techniques; filtering out the dynamic obstacles to obtain final image data; processing the final image data to obtain the visual feature information; obtaining a rough estimate of the current camera pose based on the visual feature information and the image-data pose information maintained for the previous frame, and then performing frame-to-frame matching to preliminarily optimize the camera pose; and performing feature matching against the previously maintained map information, constructing an optimization problem, and optimizing the camera pose once more to obtain the camera pose estimate.
In one embodiment of the present disclosure, processing the point cloud data to obtain point cloud information, and performing pose estimation from the point cloud information and other information to obtain the radar pose estimate, comprises: extracting feature points of the lidar point cloud data using structural information; according to the motion data in the other information, performing linear interpolation using the angle of each point to compute the time offset of that point relative to the start of the frame; performing velocity compensation using that time offset to obtain distortion-corrected point cloud data; matching against the structural features of the previous frame's points and preliminarily optimizing the pose using the matches between the two adjacent frames; and then matching feature points against the maintained global map for further optimization, to obtain more accurate pose information.
In one embodiment of the present disclosure, fusing the pose information, the camera pose estimate, and the radar pose estimate to obtain the more accurate and stable pose estimation result comprises: calibrating the pose information, the camera pose estimate, and the radar pose estimate.
In one embodiment of the present disclosure, fusing the image data and the point cloud data according to the more accurate and stable pose estimation result to construct the high-precision map comprises: projecting the radar point cloud data into a prescribed coordinate system; projecting the image data into the prescribed coordinate system; and, based on the more accurate and stable pose estimation result, fusing the radar point cloud data and the image data projected into the prescribed coordinate system to construct the high-precision map.
In one embodiment of the present disclosure, the prescribed coordinate system is the world coordinate system.
In one embodiment of the present disclosure, the calibration uses Kalman filtering.
In another aspect, embodiments of the present invention provide an apparatus for generating a high-precision map. The apparatus may include an acquisition module for obtaining image data from a camera, obtaining point cloud data from a lidar, and obtaining pose information from a GNSS/IMU. The apparatus may also include a preprocessing module for preprocessing the image data, the point cloud data, and the pose information for local storage. The apparatus may also include a camera data processing module for processing the image data to extract visual feature information, and performing pose estimation from the visual feature information extracted for the current frame, the previously fused overall pose information, and the previously maintained map visual feature information to obtain a camera pose estimate. The apparatus may also include a lidar data processing module for processing the point cloud data to obtain point cloud information, and performing pose estimation from the point cloud information and other information to obtain a radar pose estimate. The apparatus may also include a fusion module for fusing the pose information, the camera pose estimate, and the radar pose estimate to obtain a more accurate and stable pose estimation result. The apparatus may also include a construction module for fusing the image data and the point cloud data according to the more accurate and stable pose estimation result to construct a high-precision map. Here, the other information may be the previously fused pose information and previously saved point cloud information.
Embodiments may also include a robotic vehicle with a high-precision map generating apparatus, the apparatus including a transceiver, a memory, and a processor configured with processor-executable instructions to perform the operations of the methods outlined above. Embodiments include a processing device for use in a robotic vehicle, configured to perform the operations of the methods outlined above. Embodiments include a non-transitory processor-readable medium storing processor-executable instructions configured to cause a processor of a robotic vehicle to perform the operations of the methods outlined above.
Brief description of the drawings
The accompanying drawings, which are incorporated herein and constitute a part of this specification, depict exemplary embodiments and, together with the general description given above and the detailed description given below, serve to explain the features of the various embodiments.
Fig. 1 shows an environment or system suitable for implementing embodiments of the present invention;
Fig. 2 is a block diagram showing components of a high-precision map generating device for use in a robotic vehicle, according to an embodiment of the present invention;
Fig. 3 is a schematic diagram illustrating the point cloud distortion problem in acquired lidar data and its solution, according to an embodiment of the present invention;
Fig. 4 is a schematic diagram showing a method for fusing the acquired processed data to obtain higher-precision pose information, according to an embodiment of the present invention;
Fig. 5 is a schematic diagram showing the generation of a high-precision map by projecting radar point cloud data and camera data into a unified coordinate system using the final pose information, according to an embodiment of the present invention;
Fig. 6 shows a schematic flowchart of a method for generating a high-precision map, according to an embodiment of the present invention; and
Fig. 7 shows a schematic block diagram of an apparatus for generating a high-precision map, according to an embodiment of the present invention.
In the accompanying drawings, the same or similar reference numerals denote the same or similar elements.
Detailed description of embodiments
Preferred embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show preferred embodiments of the present disclosure, it should be appreciated that the present invention can be implemented in various other forms and should not be limited to the specific embodiments described below. These specific embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
" illustrative " word used herein means " being used as example, illustration or explanation ".Here depicted as " showing Any aspect of example property " is not necessarily to be construed as or more advantage more more preferable than other aspects.
As used herein, the terms "robotic vehicle", "drone", and "unmanned vehicle" refer to one of various types of vehicles that include an onboard computing device configured to provide some autonomous or semi-autonomous capability. Examples of robotic vehicles include, but are not limited to: aircraft such as unmanned aerial vehicles (UAVs); ground vehicles (for example, fully or semi-autonomous cars); water-based vehicles (that is, vehicles configured to operate on or under the water); space-based vehicles (for example, spacecraft or space probes); and/or some combination thereof. In some embodiments, the robotic vehicle may be manned. In other embodiments, the robotic vehicle may be unmanned. In some implementations, the robotic vehicle may be an aircraft (unmanned or manned), which may be a rotorcraft or a winged aircraft.
The various embodiments can be implemented in a variety of robotic vehicles that can communicate with one or more communication networks; an example suitable for use with the various embodiments is shown in Fig. 1.
Referring to Fig. 1, a system or environment 1 may include one or more robotic vehicles 10, a GNSS 20, and a communication network 50. Although the robotic vehicle 10 is shown in Fig. 1 communicating with the communication network 50, the robotic vehicle 10 may or may not communicate with any communication network in connection with any of the methods described herein.
In various embodiments, the robotic vehicle 10 may include one or more cameras 140 configured to obtain images and supply the image data to a processing device 110 of the robotic vehicle 10.
In various embodiments, the robotic vehicle 10 may include one or more lidars 150 configured to obtain radar point cloud data and supply the acquired radar point cloud data to the processing device 110 of the robotic vehicle 10.
The robotic vehicle 10 may navigate or determine its position using a navigation system such as a Global Navigation Satellite System (GNSS) or the Global Positioning System (GPS), and may use a GNSS/IMU to obtain the pose information of the robotic vehicle. In some embodiments, the robotic vehicle 10 may use an alternative source of positioning signals (that is, other than GNSS, GPS, etc.).
The robotic vehicle 10 may include a processing device 110, which may be configured to monitor and control various functions, subsystems, and/or other components of the robotic vehicle 10. For example, the processing device 110 may be configured to monitor and control various functions of the robotic vehicle 10, such as the modules, software, instructions, circuitry, and hardware related to propulsion, power management, sensor management, navigation, communication, actuation, steering, braking, and/or vehicle operating mode management.
The processing device 110 may house various circuits and devices used to control the operation of the robotic vehicle 10. For example, the processing device 110 may include a processor 120 that directs the control of the robotic vehicle 10. The processor 120 may include one or more processors configured to execute processor-executable instructions (for example, applications, routines, scripts, instruction sets, etc.) to control the operation of the robotic vehicle 10, including the operations of the various embodiments herein. In some embodiments, the processing device 110 may include a memory 122 coupled to the processor 120 and configured to store data (for example, image data, acquired GNSS/IMU sensor data, radar point cloud data, received messages, applications, etc.). The processor 120 and the memory 122, together with other elements, may be configured as, or may include, a system on chip (SOC) 115. The processing device 110 may include more than one SOC 115, thereby increasing the number of processors 120 and processor cores. The processing device 110 may also include processors 120 that are not associated with an SOC 115. Each processor 120 may be a multi-core processor.
As used herein, the term "system on chip" or "SOC" refers to a set of interconnected electronic circuits, typically (but not exclusively) including one or more processors (e.g., 120), memory (e.g., 122), and a communication interface. The SOC 115 may include various types of processors 120 and processor cores, such as general-purpose processors, central processing units (CPUs), digital signal processors (DSPs), graphics processing units (GPUs), accelerated processing units (APUs), subsystem processors for specific components of the processing device (for example, an image processor for the high-precision map generating apparatus (e.g., 130) or a display processor for a display), auxiliary processors, single-core processors, and multi-core processors. The SOC 115 may also include other hardware and hardware combinations, such as field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), other programmable logic devices, discrete gate logic, transistor logic, performance monitoring hardware, watchdog hardware, and time references. An integrated circuit may be configured such that its components reside on a single piece of semiconductor material (for example, silicon).
The processing device 110 may also include or be connected to one or more sensors 136, which the processor 120 can use to determine information associated with vehicle operation and/or information associated with the external environment of the robotic vehicle 10, in order to control various processes on the robotic vehicle 10. Examples of such sensors 136 include accelerometers, gyroscopes, and electronic compasses, which are configured to provide the processor 120 with data about changes in the direction and motion of the robotic vehicle 10. For example, in some embodiments, the processor 120 may use the data from the sensors 136 as input for determining or predicting the motion data of the robotic vehicle 10. The various components in the processing device 110 and/or the SOC 115 may be coupled by various circuits (for example, a bus or other similar circuitry).
The processing device 110 may also include a high-precision map generating apparatus 130, which can preprocess the image data obtained from the camera 140, the point cloud data obtained from the lidar 150, and the pose information obtained from the GNSS/IMU for local storage; process the locally stored image data to extract visual feature information, and perform pose estimation from the visual feature information extracted for the current frame, the previously fused overall pose information, and the previously maintained map visual feature information to obtain a camera pose estimate; and process the locally stored point cloud data to obtain point cloud information and then perform pose estimation from the point cloud information and other information to obtain a radar pose estimate. Here, the other information may be the previously fused pose information and previously saved point cloud information. The high-precision map generating apparatus 130 can also fuse the obtained pose information, camera pose estimate, and radar pose estimate to obtain a more accurate and stable pose estimation result, and, based on that result, fuse the image data and the point cloud data to construct a high-precision map.
Although the various components of the processing device 110 are illustrated as separate components, some or all of them (for example, the processor 120, the memory 122, and other units) may be integrated together in a single device or module (for example, a system-on-chip module).
The various embodiments can be implemented in a high-precision map generating device 200 of a robotic vehicle, an example of which is shown in Fig. 2. Referring to Figs. 1-2, the high-precision map generating device 200 suitable for the various embodiments may include a camera 140, a processor 208, a memory 210, a lidar unit 212, and a map generation unit 214. In addition, the high-precision map generating device 200 may include an inertial measurement unit (IMU) 216 and an EMS 218.
The camera 140 may include at least one image sensor 204 and at least one optical system 206 (for example, one or more lenses). The camera 140 can obtain one or more digital images (sometimes referred to herein as image frames). The camera 140 may include a single monocular camera, a stereo camera, and/or an omnidirectional camera. In some embodiments, the camera 140 may be physically separate from the high-precision map generating device 200, for example, located on the outside of the robotic vehicle and connected to the processor 208 via a data cable (not shown). In some embodiments, the camera 140 may include another processor (not shown) that may be configured with processor-executable instructions to perform one or more of the operations of the various embodiment methods.
In some embodiments, the memory 210 may be implemented within the camera 140, or as another memory (not shown) such as a frame buffer. For example, the camera 140 may include a buffer configured to cache (that is, temporarily store) the image data from the image sensor 204 before that data is processed (for example, by the processor 208). In some embodiments, the high-precision map generating device 200 may include an image data buffer configured to cache (that is, temporarily store) the image data from the camera 140. Such cached image data may be provided to the processor 208, or may be accessed by the processor 208 or by other processors configured to perform some or all of the operations of the various embodiments.
The lidar unit 212 may be configured to capture one or more lidar point clouds. The captured lidar point cloud data may be stored in the memory 210.
The high-precision map generating device 200 may include an inertial measurement unit (IMU) 216 configured to measure various parameters of the robotic vehicle 10. The IMU 216 may include one or more of a gyroscope, an accelerometer, and a magnetometer. The IMU 216 may be configured to detect changes about the pitch, roll, and yaw axes associated with the robotic vehicle 10. The measurements output by the IMU 216 may be used to determine the altitude, angular velocity, linear velocity, and/or position of the robotic vehicle.
In some embodiments, the map generation unit 214 may be configured to use information extracted from images captured by the camera 140, the one or more lidar point clouds captured by the lidar unit 212, and the pose information obtained from the IMU 216 to generate accurate maps and determine various parameters for navigation, so as to navigate in the environment of the robotic vehicle 10.
In addition, the high-precision map generating device 200 optionally includes an EMS 218. The EMS 218 may be configured to detect various parameters associated with the environment around the robotic vehicle 10. The EMS 218 may include one or more of an ambient light detector, a thermal imager, an ultrasonic detector, a radar system, an ultrasonic system, a piezoelectric sensor, a microphone, and the like. In some embodiments, the parameters detected by the EMS 218 can be used to detect ambient light levels, detect various objects in the environment, identify the positions of those objects, identify object materials, and so on. In some embodiments, pose estimation may be performed based on the measurements output by the EMS 218.
In various embodiments, timestamps may be added to one or more of: the images captured by the one or more cameras 140, the measurements obtained by the IMU 216, the one or more lidar point clouds captured by the lidar unit 212, and/or the measurements obtained by the EMS 218. The map generation unit 214 can use this timestamp information to extract information from the one or more images captured by the camera 140 and/or the one or more lidar point clouds captured by the lidar unit 212, and/or to navigate in the environment of the robotic vehicle 10.
The processor 208 may be coupled to (for example, in communication with) the camera 140, the one or more image sensors 204, the one or more optical systems 206, the memory 210, the lidar unit 212, the map generation unit 214, the IMU 216, and the optional EMS 218. The processor 208 may be a general-purpose single-chip or multi-chip microprocessor (for example, an ARM processor), a special-purpose microprocessor (for example, a digital signal processor (DSP)), a microcontroller, a programmable gate array, or the like. The processor 208 may be referred to as a central processing unit (CPU). Although a single processor 208 is shown in Fig. 2, the high-precision map generating device 200 may include multiple processors (for example, a multi-core processor) or processors of different types (for example, an ARM and DSP combination).
The processor 208 may be configured to implement the methods of the various embodiments to generate a high-precision map and/or to navigate the robotic vehicle 10 in an environment.
The memory 210 can store data (for example, image data, radar point cloud data, GNSS/IMU measurements, timestamps, data associated with the map generation unit 214, etc.) and instructions executable by the processor 208. In various embodiments, examples of instructions and/or data that can be stored in the memory 210 include image data, gyroscope measurement data, radar point cloud data, camera auto-calibration instructions, and the like. The memory 210 may be any electronic component capable of storing electronic information, including, for example, random access memory (RAM), read-only memory (ROM), magnetic disk storage media, optical storage media, flash memory devices in RAM, onboard memory attached to a processor, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, and so on (including combinations thereof).
Of course, those skilled in the art will understand that the high-precision map generating device 200 may be, for example, a server or a computer, or may be an intelligent terminal such as an electronic lock, a smartphone, a smart tablet, etc.; the present invention is not limited in this respect.
The mechanisms and principles of embodiments of the present invention are described in detail below. Unless otherwise stated, the term "based on" used below and in the claims means "based at least in part on". The term "comprise" denotes open-ended inclusion, i.e., "including but not limited to". The term "plurality" means "two or more". The term "one embodiment" means "at least one embodiment". The term "another embodiment" means "at least one other embodiment". Definitions of other terms are given in the description below.
Fig. 3 is a schematic diagram illustrating the point cloud distortion problem in acquired lidar data and its solution, according to an embodiment of the present invention.
A lidar obtains the distances of points in the lidar coordinate system from the time difference between emission and reflection. The point cloud is given conventional preprocessing; the important part is solving the point cloud distortion problem.
The motion of the radar itself causes point cloud distortion. If the frame rate of the radar is fast compared with the external motion, the distortion caused by motion within a single scan has little influence; but if the scanning speed is slow, especially for a 2-axis radar in which one axis is considerably slower than the other, the distortion problem becomes obvious. Usually, other sensors are used to obtain the velocity for compensation.
The point cloud distortion problem can be explained with Fig. 3, in which the laser rotates through 360 degrees in a scan plane. In the lidar coordinate system, object distance is the time interval between laser emission and reception multiplied by the speed of light; but because the carrier is moving forward, the lidar is not at the same position at the emission and reception instants, so the computed distance is incorrect, which distorts the point cloud.
When the motion data in the radar coordinate system has been obtained (computed from other sensors), the point cloud can be corrected. First, the time offset of each point relative to the start of the current frame's scan must be computed, and the measurement is then updated according to this offset. Since the lidar rotates at a uniform angular velocity, the angle of the current point is obtained from the point cloud, and linear interpolation gives the point's time offset relative to the start of the frame, as in the following formula, where T is the time needed for one full scan, α is the horizontal angle of the point relative to the scan start, and Δt is the point's time offset relative to the start of the frame:

Δt = (α / 360°) · T
Velocity compensation is performed according to this time offset Δt to obtain distortion-corrected point cloud data. Then, feature points of the lidar point cloud data are extracted using structural information and matched against the structural features of the previous frame's points; the pose is preliminarily optimized using the matches between the two adjacent frames, and then feature points are matched against the previously maintained global map for further optimization, yielding more accurate pose information.
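A minimal sketch of this de-skewing step, assuming the scan spins at uniform angular velocity, a constant carrier velocity over the scan obtained from the other sensors, and points stored in acquisition order (the function and variable names are illustrative, not from the patent):

    import numpy as np

    def deskew_scan(points, scan_period, velocity):
        """Correct motion distortion in one lidar scan.

        points:      (N, 3) array in the lidar frame, in acquisition order
        scan_period: T, seconds for one full 360-degree sweep
        velocity:    (3,) carrier velocity in the lidar frame, from IMU/fusion
        """
        # Horizontal angle of each point relative to the scan start, in [0, 2*pi)
        angles = np.arctan2(points[:, 1], points[:, 0])
        angles = np.mod(angles - angles[0], 2.0 * np.pi)

        # Linear interpolation over angle: dt = (alpha / 2*pi) * T
        dt = (angles / (2.0 * np.pi)) * scan_period

        # Express every point in the lidar pose at frame start by adding the
        # carrier displacement accumulated up to each point's capture time
        return points + dt[:, None] * velocity[None, :]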
As for pose estimation from the camera image data: deep-learning techniques are used to identify dynamic obstacles, which are filtered out to obtain final image data; the final image data is then processed to extract visual feature information. Based on this visual feature information and the image-data pose maintained for the previous frame, a rough estimate of the current camera pose is obtained; frame-to-frame matching is then performed to preliminarily optimize the camera pose; finally, feature matching is performed against the previously maintained map information, an optimization problem is constructed, and the camera pose is optimized once more to obtain the camera pose estimate.
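As one concrete reading of this two-stage front end, the sketch below uses OpenCV ORB features and PnP with RANSAC; OpenCV is an illustrative stand-in, and the layout of prev_frame and the map features (descriptors stored parallel to their 3-D points) is an assumption, not part of the patent:

    import cv2
    import numpy as np

    def estimate_camera_pose(image, static_mask, prev_frame, map_pts3d, map_desc, K):
        """Two-stage pose estimation: frame-to-frame first, then against the map.

        static_mask: uint8 mask with dynamic-obstacle pixels zeroed (from a detector)
        prev_frame:  dict with keys 'desc' and 'pts3d' (3-D points parallel to desc)
        map_pts3d, map_desc: previously maintained map feature points and descriptors
        K:           3x3 camera intrinsic matrix
        """
        orb = cv2.ORB_create(2000)
        kp, desc = orb.detectAndCompute(image, static_mask)  # skip dynamic objects
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

        # Stage 1: coarse pose from matches against the previous frame
        m1 = matcher.match(desc, prev_frame["desc"])
        obj = np.float32([prev_frame["pts3d"][m.trainIdx] for m in m1])
        img = np.float32([kp[m.queryIdx].pt for m in m1])
        _, rvec, tvec, _ = cv2.solvePnPRansac(obj, img, K, None)

        # Stage 2: refine by matching against the maintained map features
        m2 = matcher.match(desc, map_desc)
        obj2 = np.float32([map_pts3d[m.trainIdx] for m in m2])
        img2 = np.float32([kp[m.queryIdx].pt for m in m2])
        _, rvec, tvec, _ = cv2.solvePnPRansac(obj2, img2, K, None,
                                              rvec=rvec, tvec=tvec,
                                              useExtrinsicGuess=True)
        return rvec, tvec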
Fig. 4 is a schematic diagram showing a method for fusing the acquired processed data to obtain higher-precision pose information, according to an embodiment of the present invention.
As shown in Fig. 4, the data processed from the camera, the radar, and the GPS/IMU are further computed on, and the final results are fused, yielding pose information with higher precision and better robustness.
From the images provided by the camera data processing module, a motion model is constructed using SLAM techniques and a preliminary pose is solved; bundle adjustment (BA) then further optimizes the pose. The optimization objective is to minimize the projection error, which is obtained by projecting the commonly observed points onto the corresponding frames according to the already-computed poses and computing the distances to the matched points. Matching is then performed to recover the corresponding pose. At the same time, the result is calibrated against the data provided by the other sensors. Calibration is needed because the multiple sensors differ in frame rate and processing time, so each pose estimate obtained must be individually fused with the final pose result.
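In the pose notation introduced in the coordinate-system section below (rotation R, translation T), with π denoting projection of a camera-frame point to pixel coordinates, this projection error is the standard bundle-adjustment objective over the commonly observed points Xj and their matched pixel observations uj:

min over (R, T) of Σj ‖ uj − π(R·Xj + T) ‖²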
Since the lidar points somewhat downward, when the carrier moves on an open road a portion of the acquired point cloud will be ground points. If pose estimation were performed directly, the many ground points would degrade the final precision, so the ground point cloud needs to be identified and its influence eliminated. At the same time, machine-learning techniques are used to identify the points belonging to dynamic obstacles and filter them out. Using the point cloud information provided by the lidar, matching is performed with laser SLAM techniques, an optimization problem is constructed and optimized, and the pose is solved. The result is likewise calibrated against the data provided by the other sensors, because the multiple sensors differ in frame rate and processing time, so each pose estimate obtained must be individually fused with the final pose result.
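The patent does not fix a particular ground-segmentation algorithm; one common choice, sketched here as an assumption, is a RANSAC plane fit that discards points near the dominant (ground) plane:

    import numpy as np

    def remove_ground(points, n_iters=100, threshold=0.2, rng=None):
        """Drop points within `threshold` meters of the dominant (ground) plane."""
        rng = rng or np.random.default_rng(0)
        best_inliers = np.zeros(len(points), dtype=bool)
        for _ in range(n_iters):
            # Fit a plane through 3 random points
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(normal)
            if norm < 1e-9:
                continue                      # degenerate (collinear) sample
            normal /= norm
            dist = np.abs((points - p0) @ normal)
            inliers = dist < threshold
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return points[~best_inliers]          # keep the non-ground points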
Finally, the results of the GPS/IMU, camera, and radar sensors are fused to obtain a pose estimation result that is more accurate and more robust.
Coordinate system definitions
The camera coordinate system and the radar coordinate system are defined as follows: the camera coordinate system takes the camera optical center as its origin, and the radar coordinate system takes the radar's emission center as its origin. The coordinates of a point are defined as X = (x, y, z), and a pose is defined as P = [R | T], where R is a 3×3 rotation matrix and T is a 3×1 translation vector. Suppose the pose of the current camera frame (defined as the i-th frame) with respect to the camera start frame (the 0-th frame) is P; then the current camera has rotated by R and translated by T relative to the camera start frame. P thus describes the position of the current camera, and for any point Xi = (xi, yi, zi) of the current camera frame, the following equation holds:
X0=RXi+T
where X0 is the point corresponding to Xi in the start-frame coordinate system. Clearly, poses are transitive:
Pa→c=Pa→bPb→c
where Pa→c is the pose change of frame c relative to frame a, Pa→b is the pose change of frame b relative to frame a, and Pb→c is the pose change of frame c relative to frame b.
Unifying the coordinate system conversions
For multi-sensor fusion, the results of the different sensors, each expressed in its own coordinate system, must be unified. Take the unification of the camera sensor and the radar sensor as an example. The visual odometry outputs Pcamera-i, the pose of the current camera frame (defined as the i-th frame) with respect to the camera start frame (the 0-th frame); what we want is Plidar-i, the pose of the current radar frame (defined as the i-th frame) with respect to the radar start frame (the 0-th frame). From calibration, the camera-to-radar calibration relationship is known and is denoted Pcalib; Pcalib transfers points from the radar coordinate system to the camera coordinate system. For any point Xlidar-i of the current radar frame (defined as the i-th frame), left-multiplying by Pcalib gives the corresponding coordinates in the camera frame. That is, Pcalib represents the pose transformation from the camera to the radar.
As shown in Fig. 5, there are two paths from the camera start frame to the i-th radar frame: from the camera start frame to the radar start frame and then to the i-th radar frame, and from the camera start frame to the i-th camera frame and then to the i-th radar frame. This yields the equation
Pcamera-i · Pcalib = Pcalib · Plidar-i
from which we obtain

Plidar-i = Pcalib⁻¹ · Pcamera-i · Pcalib
After obtaining the pose of the i-th radar frame relative to the radar start frame, and knowing the timestamp of the i-th frame, the data and times of the two adjacent frames can be used to obtain IMU data such as the velocity at the i-th radar frame in the radar start frame coordinate system.
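With 4×4 homogeneous pose matrices this conversion is a similarity transform; a small numpy sketch (the matrix names mirror the text, everything else is illustrative):

    import numpy as np

    def lidar_pose_from_camera(P_camera_i, P_calib):
        """Plidar-i = Pcalib^-1 · Pcamera-i · Pcalib, all 4x4 homogeneous matrices."""
        return np.linalg.inv(P_calib) @ P_camera_i @ P_calib

    def chain(P_ab, P_bc):
        """Pose transitivity from the text: P(a->c) = P(a->b) · P(b->c)."""
        return P_ab @ P_bc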
Fig. 6 shows a schematic flowchart of a method 600 for generating a high-precision map according to an exemplary embodiment of the present invention. The method 600 may be performed by the high-precision map generating apparatus 130 described with reference to Fig. 1 or the high-precision map generating device 200 described with reference to Fig. 2. The steps included in the method 600 are described in detail below with reference to Fig. 6.
The method 600 starts at step 602, where image data is obtained from a camera. Those skilled in the art will understand that obtaining image data here may mean, for example, obtaining acquired image data, obtaining acquired image data after processing, or obtaining it by other means; the present invention is not limited in this respect.
In step 604, point cloud data is obtained from the lidar.
In step 606, pose information is obtained from the GNSS/IMU. In one aspect, the pose information includes the acceleration, angular velocity, coordinates, and the like provided by the GNSS/IMU.
In step 608, the obtained image data is processed to extract visual feature information, and pose estimation is performed from the extracted visual feature information, the previously fused overall pose information, and the previously maintained map visual feature information to obtain a camera pose estimate.
In one aspect, the obtained image data, point cloud data, and pose information are preprocessed before being stored locally. In one aspect, the preprocessing may include: parsing, time-synchronizing, and screening the image data, point cloud data, and pose information.
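The patent does not detail the time synchronization; one simple realization, assumed here, is nearest-timestamp association with a tolerance, screening out measurements that cannot be paired:

    import bisect

    def associate(timestamps_a, timestamps_b, tolerance=0.05):
        """Pair each timestamp in `a` with the nearest timestamp in `b` (seconds).

        `timestamps_b` must be sorted ascending. Returns (i, j) index pairs whose
        time difference is within `tolerance`; unmatched measurements are dropped.
        """
        pairs = []
        for i, t in enumerate(timestamps_a):
            j = bisect.bisect_left(timestamps_b, t)
            candidates = [k for k in (j - 1, j) if 0 <= k < len(timestamps_b)]
            if not candidates:
                continue
            best = min(candidates, key=lambda k: abs(timestamps_b[k] - t))
            if abs(timestamps_b[best] - t) <= tolerance:
                pairs.append((i, best))
        return pairs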
In one aspect, processing the image data obtained by the camera to extract visual feature information, and performing pose estimation from that information, the previously fused overall pose information, and the previously maintained map visual feature information to obtain the camera pose estimate, includes: identifying dynamic obstacles using deep-learning techniques; filtering out the dynamic obstacles to obtain final image data; processing the final image data to extract the visual feature information; obtaining a rough estimate of the current camera pose based on the visual feature information and the image-data pose maintained for the previous frame, and then performing frame-to-frame matching to preliminarily optimize the camera pose; and finally performing feature matching against the previously maintained map information, constructing an optimization problem, and optimizing the camera pose once more to obtain the camera pose estimate.
In step 610, the obtained point cloud data is processed to obtain point cloud information, and pose estimation is performed from the point cloud information and other information to obtain a radar pose estimate. Here, the other information may be the previously fused pose information and previously saved point cloud information. In one aspect, this includes: extracting feature points of the lidar point cloud data using structural information; according to the motion data in the other information, performing linear interpolation using the angle of each point to compute the point's time offset relative to the start of the frame; performing velocity compensation using that time offset to obtain distortion-corrected point cloud data; matching against the structural features of the previous frame's points, and preliminarily optimizing the pose using the matches between the two adjacent frames; and then matching feature points against the maintained global map for further optimization, to obtain a more accurate pose.
In step 612, the pose information, the camera pose estimate, and the radar pose estimate are fused to obtain a more accurate and stable pose estimation result. In one aspect, this includes calibrating the pose information, the camera pose estimate, and the radar pose estimate. This is because the multiple sensors differ in frame rate and processing time, so each pose estimate obtained must be individually fused with the final pose result. In one aspect, the calibration uses Kalman filtering.
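The patent names Kalman filtering but does not spell out the filter; as one plausible reading, each incoming pose estimate updates a shared state with a standard Kalman update, sketched here with identity state and measurement models (an assumption made purely for illustration):

    import numpy as np

    def kalman_update(x, P, z, R):
        """Fuse one pose measurement z (covariance R) into state x (covariance P).

        x, z: (n,) pose vectors; P, R: (n, n) covariance matrices.
        """
        S = P + R                          # innovation covariance
        K = P @ np.linalg.inv(S)           # Kalman gain
        x_new = x + K @ (z - x)
        P_new = (np.eye(len(x)) - K) @ P
        return x_new, P_new

    # Each sensor's estimate is fused as it arrives, at its own rate:
    # x, P = kalman_update(x, P, pose_from_camera,   R_cam)
    # x, P = kalman_update(x, P, pose_from_lidar,    R_lidar)
    # x, P = kalman_update(x, P, pose_from_gnss_imu, R_ins)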
In step 614, the image data and the point cloud data are fused according to the more accurate and stable pose estimation result to construct a high-precision map. In one aspect, this includes: projecting the radar point cloud data into a prescribed coordinate system; projecting the image data into the prescribed coordinate system; and, based on the obtained more accurate and stable pose estimation result, fusing the radar point cloud data and the image data projected into the prescribed coordinate system to construct the high-precision map. In one aspect, the prescribed coordinate system may be the world coordinate system.
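A sketch of this projection step, assuming the world frame as the prescribed coordinate system and a pinhole model for associating image pixels with map points; K (the camera intrinsic matrix) and the coloring scheme are assumptions, not from the patent:

    import numpy as np

    def accumulate_map(points_lidar, image, pose_world, P_calib, K):
        """Project one de-skewed scan into the world frame, colored from the image.

        points_lidar: (N, 3) points in the lidar frame
        pose_world:   4x4 lidar pose in the world frame (fused estimate)
        P_calib:      4x4 lidar-to-camera transform
        K:            3x3 camera intrinsic matrix
        """
        homog = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
        world = (pose_world @ homog.T).T[:, :3]          # map points, world frame

        cam = (P_calib @ homog.T).T[:, :3]               # same points, camera frame
        in_front = cam[:, 2] > 0
        pix = (K @ cam[in_front].T).T
        pix = (pix[:, :2] / pix[:, 2:3]).astype(int)     # pixel coordinates

        h, w = image.shape[:2]
        ok = (0 <= pix[:, 0]) & (pix[:, 0] < w) & (0 <= pix[:, 1]) & (pix[:, 1] < h)
        colors = image[pix[ok, 1], pix[ok, 0]]
        return world[in_front][ok], colors               # colored map points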
Fig. 7 is a schematic block diagram of an apparatus 700 for generating a high-precision map, according to an embodiment of the present invention.
The apparatus 700 includes an acquisition module 702 configured to obtain image data from a camera, obtain point cloud data from a lidar, and obtain pose information from a GNSS/IMU. Optionally, the apparatus 700 may also include a preprocessing module 704 configured to preprocess the image data, the point cloud data, and the pose information for local storage. The apparatus 700 may also include a camera data processing module 706 configured to process the locally stored image data to extract visual feature information, and to perform pose estimation from the visual feature information extracted for the current frame, the previously fused overall pose information, and the previously maintained map visual feature information to obtain a camera pose estimate. The apparatus 700 may also include a lidar data processing module 708 configured to process the locally stored point cloud data to obtain point cloud information, and to perform pose estimation from the point cloud information and other information to obtain a radar pose estimate. The apparatus 700 may also include a fusion module 710 configured to fuse the pose information, the camera pose estimate, and the radar pose estimate to obtain a more accurate and stable pose estimation result. Here, the other information may be the previously fused pose information and previously saved point cloud information. The apparatus 700 may also include a construction module 712 configured to fuse the image data and the point cloud data according to the obtained more accurate and stable pose estimation result to construct a high-precision map.
For the specific implementation of the apparatus 700 provided in this embodiment, refer to the corresponding method embodiment; details are not repeated here.
For clarity, Fig. 7 does not show all of the optional units or sub-units included in the apparatus 700; optional modules are shown with dashed lines. All the features and operations described in the method embodiments above, and the combinations obtainable by reference to them, apply equally to the apparatus 700 and are therefore not repeated here.
Those skilled in the art will understand that the division of the apparatus 700 into units or sub-units is exemplary rather than restrictive, and merely describes its main functions or operations in a logically convenient way. In the apparatus 700, the function of one unit may be realized by multiple units; conversely, multiple units may be realized by one unit. The present invention is not limited in this respect.
Likewise, those skilled in the art will understand that the units included in the apparatus 700 can be realized in various ways, including but not limited to software, hardware, firmware, or any combination thereof; the present invention is not limited in this respect.
The present invention may be a system, a method, a computer-readable storage medium, and/or a computer program product. The computer-readable storage medium may be, for example, a tangible device capable of holding and storing instructions for use by an instruction-execution device.
Computer-readable/executable program instructions may be downloaded from a computer-readable storage medium to respective computing/processing devices, or downloaded to an external computer or an external storage device by various communication means. The present invention does not restrict the specific programming language used to implement the computer-readable/executable program instructions.
Aspects of the present invention are described herein with reference to flowcharts and/or block diagrams of methods and apparatus (systems) according to embodiments of the invention. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable/executable program instructions.
The foregoing method descriptions and process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the operations of the various embodiments must be performed in the order presented. As those of ordinary skill in the art will appreciate, the operations in the above embodiments may be performed in any order.
The various illustrative logical blocks, modules, circuits, and algorithm operations described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and operations have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or as software depends upon the particular application and the design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of this disclosure.
A general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein may be used to implement or perform the hardware for realizing the various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein. A general-purpose processor may be a multiprocessor; in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, several microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some operations or methods may be performed by circuitry specific to a given function.
The above description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present disclosure is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the appended claims and the principles and novel features disclosed herein.

Claims (10)

1. A method for generating a high-precision map, the method comprising:
obtaining image data from a camera;
obtaining point cloud data from a lidar;
obtaining pose information from a Global Navigation Satellite System (GNSS)/Inertial Measurement Unit (IMU);
processing the image data to extract visual feature information, and performing pose estimation from the visual feature information extracted for the current frame, the previously fused overall pose information, and the previously maintained map visual feature information to obtain a camera pose estimate;
processing the point cloud data to obtain point cloud information, and performing pose estimation from the point cloud information and other information to obtain a radar pose estimate;
fusing the pose information, the camera pose estimate, and the radar pose estimate to obtain a more accurate and stable pose estimation result; and
fusing the image data and the point cloud data according to the more accurate and stable pose estimation result to construct a high-precision map.
2. The method according to claim 1, further comprising:
preprocessing the image data, the point cloud data, and the pose information before they are stored locally.
3. The method according to claim 2, wherein the preprocessing includes parsing, time-synchronizing, and screening the image data, the point cloud data, and the pose information.
4. The method according to claim 1, wherein processing the image data obtained by the camera to extract the visual feature information, and performing pose estimation from the visual feature information extracted for the current frame, the previously fused overall pose information, and the previously maintained map visual feature information to obtain the camera pose estimate, comprises:
identifying dynamic obstacles using deep-learning techniques;
filtering out the dynamic obstacles to obtain final image data;
processing the final image data to obtain the visual feature information;
obtaining a rough estimate of the current camera pose based on the visual feature information and the image-data pose information maintained for the previous frame, and then performing frame-to-frame matching to preliminarily optimize the camera pose; and
performing feature matching against the previously maintained map information, constructing an optimization problem, and optimizing the camera pose once more to obtain the camera pose estimate.
5. The method according to claim 1, wherein processing the point cloud data to obtain point cloud information, and performing pose estimation from the point cloud information and other information to obtain the radar pose estimate, comprises:
extracting feature points of the lidar point cloud data using structural information;
according to the motion data in the other information, performing linear interpolation using the angle of each point to compute the time offset of that point relative to the start of the frame;
performing velocity compensation using the time offset to obtain distortion-corrected point cloud data;
matching against the structural features of the previous frame's points, and preliminarily optimizing the pose information using the matches between the two adjacent frames; and
matching feature points against the maintained global map for further optimization, to obtain more accurate pose information.
6. The method according to claim 1, wherein fusing the pose information, the camera pose estimate, and the radar pose estimate to obtain the more accurate and stable pose estimation result comprises:
calibrating the pose information, the camera pose estimate, and the radar pose estimate.
7. The method according to claim 6, wherein the calibration uses Kalman filtering.
8. The method according to claim 1, wherein fusing the image data and the point cloud data according to the more accurate and stable pose estimation result to construct a high-precision map comprises:
projecting the radar point cloud data into a prescribed coordinate system;
projecting the image data into the prescribed coordinate system; and
based on the more accurate and stable pose estimation result, fusing the radar point cloud data and the image data projected into the prescribed coordinate system to construct the high-precision map.
9. An apparatus for generating a high-precision map, configured to perform the method of any one of claims 1 to 8.
10. A computer-readable storage medium for generating a high-precision map, the computer-readable storage medium storing executable computer program instructions, the computer program instructions comprising instructions for performing each step of the method of any one of claims 1 to 8.
CN201910156262.XA 2019-01-30 2019-03-01 Method and device for generating high-precision map Active CN109887057B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2019100903472 2019-01-30
CN201910090347 2019-01-30

Publications (2)

Publication Number Publication Date
CN109887057A true CN109887057A (en) 2019-06-14
CN109887057B CN109887057B (en) 2023-03-24

Family

ID=66930235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910156262.XA Active CN109887057B (en) 2019-01-30 2019-03-01 Method and device for generating high-precision map

Country Status (1)

Country Link
CN (1) CN109887057B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160266256A1 (en) * 2015-03-11 2016-09-15 The Boeing Company Real Time Multi Dimensional Image Fusing
EP3252714A1 (en) * 2016-06-03 2017-12-06 Univrses AB Camera selection in positional tracking
CN108401461A (en) * 2017-12-29 2018-08-14 深圳前海达闼云端智能科技有限公司 Three-dimensional mapping method, device and system, cloud platform, electronic equipment and computer program product
CN109084732A (en) * 2018-06-29 2018-12-25 北京旷视科技有限公司 Positioning and air navigation aid, device and processing equipment

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132888A (en) * 2019-06-25 2020-12-25 黑芝麻智能科技(重庆)有限公司 Monocular camera localization within large-scale indoor sparse lidar point clouds
CN112132888B (en) * 2019-06-25 2024-04-26 黑芝麻智能科技(重庆)有限公司 Monocular camera positioning in large-scale indoor sparse laser radar point clouds
CN112396662A (en) * 2019-08-13 2021-02-23 杭州海康威视数字技术股份有限公司 Method and device for correcting conversion matrix
CN112396662B (en) * 2019-08-13 2024-05-24 杭州海康威视数字技术股份有限公司 Conversion matrix correction method and device
CN112445210B (en) * 2019-08-15 2023-10-27 纳恩博(北京)科技有限公司 Method and device for determining motion trail, storage medium and electronic device
CN112445210A (en) * 2019-08-15 2021-03-05 纳恩博(北京)科技有限公司 Method and device for determining motion trail, storage medium and electronic device
CN110645998A (en) * 2019-09-10 2020-01-03 上海交通大学 Dynamic object-free map segmentation establishing method based on laser point cloud
CN113378867A (en) * 2020-02-25 2021-09-10 北京轻舟智航智能技术有限公司 Asynchronous data fusion method and device, storage medium and electronic equipment
CN113378867B (en) * 2020-02-25 2023-08-22 北京轻舟智航智能技术有限公司 Asynchronous data fusion method and device, storage medium and electronic equipment
CN111461980A (en) * 2020-03-30 2020-07-28 北京百度网讯科技有限公司 Performance estimation method and device of point cloud splicing algorithm
CN111461980B (en) * 2020-03-30 2023-08-29 北京百度网讯科技有限公司 Performance estimation method and device of point cloud stitching algorithm
CN111538032B (en) * 2020-05-19 2021-04-13 北京数字绿土科技有限公司 Time synchronization method and device based on independent drawing tracks of camera and laser radar
CN111561923B (en) * 2020-05-19 2022-04-15 北京数字绿土科技股份有限公司 SLAM (simultaneous localization and mapping) mapping method and system based on multi-sensor fusion
CN111538032A (en) * 2020-05-19 2020-08-14 北京数字绿土科技有限公司 Time synchronization method and device based on independent drawing tracks of camera and laser radar
CN111561923A (en) * 2020-05-19 2020-08-21 北京数字绿土科技有限公司 SLAM (simultaneous localization and mapping) mapping method and system based on multi-sensor fusion
CN111709990A (en) * 2020-05-22 2020-09-25 贵州民族大学 Camera repositioning method and system
CN111709990B (en) * 2020-05-22 2023-06-20 贵州民族大学 Camera repositioning method and system
CN111912417A (en) * 2020-07-10 2020-11-10 上海商汤临港智能科技有限公司 Map construction method, map construction device, map construction equipment and storage medium
CN112731450B (en) * 2020-08-19 2023-06-30 深圳市速腾聚创科技有限公司 Point cloud motion compensation method, device and system
CN112731450A (en) * 2020-08-19 2021-04-30 深圳市速腾聚创科技有限公司 Method, device and system for motion compensation of point cloud
WO2022048193A1 (en) * 2020-09-01 2022-03-10 华为技术有限公司 Map drawing method and apparatus
CN112388635A (en) * 2020-10-30 2021-02-23 中国科学院自动化研究所 Method, system and device for fusing sensing and space positioning of multiple sensors of robot
CN112388635B (en) * 2020-10-30 2022-03-25 中国科学院自动化研究所 Method, system and device for fusing sensing and space positioning of multiple sensors of robot
CN112710318B (en) * 2020-12-14 2024-05-17 深圳市商汤科技有限公司 Map generation method, path planning method, electronic device, and storage medium
CN112710318A (en) * 2020-12-14 2021-04-27 深圳市商汤科技有限公司 Map generation method, route planning method, electronic device, and storage medium
CN112985416B (en) * 2021-04-19 2021-07-30 湖南大学 Robust positioning and mapping method and system based on laser and visual information fusion
CN112985416A (en) * 2021-04-19 2021-06-18 湖南大学 Robust positioning and mapping method and system based on laser and visual information fusion
CN113298941A (en) * 2021-05-27 2021-08-24 广州市工贸技师学院(广州市工贸高级技工学校) Map construction method, device and system based on laser radar aided vision
CN113298941B (en) * 2021-05-27 2024-01-30 广州市工贸技师学院(广州市工贸高级技工学校) Map construction method, device and system based on laser radar aided vision
CN113495281B (en) * 2021-06-21 2023-08-22 杭州飞步科技有限公司 Real-time positioning method and device for movable platform
CN113495281A (en) * 2021-06-21 2021-10-12 杭州飞步科技有限公司 Real-time positioning method and device for movable platform
CN113724382A (en) * 2021-07-23 2021-11-30 北京搜狗科技发展有限公司 Map generation method and device and electronic equipment
CN113777635A (en) * 2021-08-06 2021-12-10 香港理工大学深圳研究院 Global navigation satellite data calibration method, device, terminal and storage medium
CN113777635B (en) * 2021-08-06 2023-11-03 香港理工大学深圳研究院 Global navigation satellite data calibration method, device, terminal and storage medium
CN113865580B (en) * 2021-09-15 2024-03-22 北京易航远智科技有限公司 Method and device for constructing map, electronic equipment and computer readable storage medium
CN113865580A (en) * 2021-09-15 2021-12-31 北京易航远智科技有限公司 Map construction method and device, electronic equipment and computer readable storage medium
CN114413898B (en) * 2022-03-29 2022-07-29 深圳市边界智控科技有限公司 Multi-sensor data fusion method and device, computer equipment and storage medium
CN114413898A (en) * 2022-03-29 2022-04-29 深圳市边界智控科技有限公司 Multi-sensor data fusion method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN109887057B (en) 2023-03-24

Similar Documents

Publication Publication Date Title
CN109887057A (en) The method and apparatus for generating high-precision map
US11802769B2 (en) Lane line positioning method and apparatus, and storage medium thereof
CN109885080B (en) Autonomous control system and autonomous control method
Loianno et al. Cooperative localization and mapping of MAVs using RGB-D sensors
US11747144B2 (en) Real time robust localization via visual inertial odometry
US9183638B2 (en) Image based position determination
WO2019152149A1 (en) Actively complementing exposure settings for autonomous navigation
CN114088087B (en) High-reliability high-precision navigation positioning method and system under unmanned aerial vehicle GPS-DENIED
US20180075614A1 (en) Method of Depth Estimation Using a Camera and Inertial Sensor
US20200364883A1 (en) Localization of a mobile unit by means of a multi-hypothesis kalman filter method
CN109978954A (en) The method and apparatus of radar and camera combined calibrating based on cabinet
CN110887486B (en) Unmanned aerial vehicle visual navigation positioning method based on laser line assistance
CN112116651B (en) Ground target positioning method and system based on monocular vision of unmanned aerial vehicle
CN111623773B (en) Target positioning method and device based on fisheye vision and inertial measurement
Caballero et al. Improving vision-based planar motion estimation for unmanned aerial vehicles through online mosaicing
CN112136137A (en) Parameter optimization method and device, control equipment and aircraft
CN115902930A (en) Unmanned aerial vehicle room built-in map and positioning method for ship detection
Andert et al. On the safe navigation problem for unmanned aircraft: Visual odometry and alignment optimizations for UAV positioning
Fink et al. Visual inertial SLAM: Application to unmanned aerial vehicles
US20210229810A1 (en) Information processing device, flight control method, and flight control system
KR20200109116A (en) Method and system for position estimation of unmanned aerial vehicle using graph structure based on multi module
CN116952229A (en) Unmanned aerial vehicle positioning method, device, system and storage medium
Stowers et al. Optical flow for heading estimation of a quadrotor helicopter
CN115307646A (en) Multi-sensor fusion robot positioning method, system and device
CN115344033A (en) Monocular camera/IMU/DVL tight coupling-based unmanned ship navigation and positioning method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant