CN111721283B - Precision detection method and device for positioning algorithm, computer equipment and storage medium


Info

Publication number
CN111721283B
Authority
CN
China
Prior art keywords
point cloud
real
cloud image
time
image
Prior art date
Legal status
Active
Application number
CN201910204805.0A
Other languages
Chinese (zh)
Other versions
CN111721283A
Inventor
徐棨森
Current Assignee
Suteng Innovation Technology Co Ltd
Original Assignee
Suteng Innovation Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suteng Innovation Technology Co Ltd filed Critical Suteng Innovation Technology Co Ltd
Priority to CN201910204805.0A priority Critical patent/CN111721283B/en
Priority to CN202310890176.8A priority patent/CN116972880A/en
Publication of CN111721283A publication Critical patent/CN111721283A/en
Application granted granted Critical
Publication of CN111721283B publication Critical patent/CN111721283B/en

Classifications

    • G01C 25/00: Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G01S 17/06: Systems determining position data of a target, using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 19/23: Testing, monitoring, correcting or calibrating of receiver elements in satellite radio beacon positioning systems, e.g. GPS, GLONASS or GALILEO
    • G01S 19/37: Hardware or software details of the signal processing chain in satellite positioning receivers
    • G01S 7/497: Means for monitoring or calibrating, in systems using reflection of electromagnetic waves other than radio waves
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; context analysis; selection of dictionaries
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 2201/07: Target detection (indexing scheme relating to image or video recognition or understanding)

Abstract

The application relates to a precision detection method and apparatus for a positioning algorithm, a computer device, and a storage medium. The method comprises the following steps: acquiring a real-time point cloud image of a moving object; matching the real-time point cloud image against the source point cloud images in a constructed point cloud map with a real-time positioning algorithm to obtain real-time positioning data of the moving object; matching the real-time point cloud image against the source point cloud images in the point cloud map with an offline positioning algorithm, whose positioning precision is higher than that of the real-time positioning algorithm, to obtain offline positioning data of the moving object; and taking the offline positioning data as the reference value for the real-time positioning data and determining the precision of the real-time positioning data from the reference value and the real-time positioning data. Using the offline positioning data as the reference value for the real-time positioning data allows the accuracy of the real-time positioning algorithm to be detected effectively.

Description

Precision detection method and device for positioning algorithm, computer equipment and storage medium
Technical Field
The present application relates to the field of unmanned technologies, and in particular, to a method and apparatus for detecting accuracy of a positioning algorithm, a computer device, and a storage medium.
Background
With the development of the unmanned driving field, positioning modules are used to obtain the position information of an unmanned vehicle on a map, where a positioning algorithm, combined with other auxiliary algorithms, implements the main functions of the positioning module.
However, existing positioning algorithms cannot obtain a position reference value for the unmanned vehicle, and when the accuracy of a positioning algorithm is evaluated, the resulting error mixes the error of the constructed map with the error of the positioning algorithm itself, so the accuracy of the vehicle's positioning algorithm cannot be detected.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a method, an apparatus, a computer device, and a storage medium for detecting accuracy of a positioning algorithm.
A method for detecting accuracy of a positioning algorithm, the method comprising:
acquiring a real-time point cloud image of a moving object;
matching the real-time point cloud image with a source point cloud image in the constructed point cloud map by adopting a real-time positioning algorithm to obtain real-time positioning data of the moving object;
matching the real-time point cloud image with a source point cloud image in the point cloud map by adopting an offline positioning algorithm to obtain offline positioning data of the moving object; the positioning precision of the off-line positioning algorithm is greater than that of the real-time positioning algorithm;
And taking the off-line positioning data as a reference value of the real-time positioning data, and determining the precision of the real-time positioning data according to the reference value and the real-time positioning data.
In one embodiment, the method for constructing the point cloud map includes:
acquiring a source point cloud image, and performing target detection to obtain a source obstacle point cloud image;
and filtering the source obstacle point cloud image in the source point cloud image, and constructing a point cloud map according to the filtered source point cloud image.
In one embodiment, the obtaining the source point cloud image, performing target detection, and obtaining the source obstacle point cloud image includes:
acquiring a source point cloud image, and inputting the source point cloud image into a trained target detection model to obtain a source obstacle point cloud image; the target detection model is obtained through training according to a point cloud sample image containing the obstacle.
In one embodiment, the matching the real-time point cloud image with the source point cloud image in the constructed point cloud map by using a real-time positioning algorithm to obtain real-time positioning data of the moving object includes:
matching the real-time point cloud image with a source point cloud image in the constructed point cloud map by adopting a real-time positioning algorithm to obtain a source point cloud image corresponding to the real-time point cloud image;
And taking the position information of the source point cloud image corresponding to the real-time point cloud image as real-time positioning data of the moving object in the point cloud map.
In one embodiment, the matching the real-time point cloud image with the source point cloud image in the point cloud map by using an offline positioning algorithm to obtain offline positioning data of the moving object includes:
acquiring a real-time point cloud image, and inputting the real-time point cloud image into a trained target detection model in an offline positioning algorithm to obtain a target obstacle point cloud image;
filtering the target obstacle point cloud image from the real-time point cloud image to obtain a filtered real-time point cloud image;
inputting the filtered real-time point cloud image into a well trained matching model in an offline positioning algorithm to obtain a source point cloud image corresponding to the filtered real-time point cloud image; the matching model is obtained through training according to the real-time point cloud image and the corresponding source point cloud image;
and taking the position information of the source point cloud image corresponding to the real-time point cloud image as offline positioning data of the moving object.
In one embodiment, the inputting the filtered real-time point cloud image into a trained matching model, calculating a matching value of the filtered real-time point cloud image, and obtaining a source point cloud image corresponding to the filtered real-time point cloud image includes:
And screening out the real-time point cloud image with the largest matching value according to the size of the matching value of each real-time point cloud image after filtering, and taking the source point cloud image corresponding to the real-time point cloud image with the largest matching value as the output of a matching model.
In one embodiment, the weight corresponding to the matching value of the static object point cloud in the matching model is higher than the weight corresponding to the matching value of the dynamic object point cloud.
In one embodiment, the determining the accuracy of the real-time positioning data according to the reference value and the real-time positioning data by using the offline positioning data as the reference value of the real-time positioning data includes:
acquiring the real-time positioning data and the corresponding offline positioning data;
calculating the difference value of each real-time positioning data and the corresponding offline positioning data;
and calculating an average value or a weighted average value or a median value of the difference values, and taking the average value or the weighted average value or the median value as the accuracy of the positioning algorithm.
A precision detection apparatus for a positioning algorithm, the apparatus comprising:
the image acquisition module is used for acquiring a real-time point cloud image of the moving object;
The real-time positioning module is used for matching the real-time point cloud image with a source point cloud image in the constructed point cloud map by adopting a real-time positioning algorithm to obtain real-time positioning data of the moving object;
the off-line positioning module is used for matching the real-time point cloud image with the source point cloud image in the point cloud map by adopting an off-line positioning algorithm to obtain off-line positioning data of the moving object; the positioning precision of the off-line positioning algorithm is greater than that of the real-time positioning algorithm;
and the precision calculation module is used for taking the off-line positioning data as a reference value of the real-time positioning data, and determining the precision of the real-time positioning data according to the reference value and the real-time positioning data.
In one embodiment, the apparatus further comprises a map building module; the map construction module comprises:
the first target detection unit is used for acquiring a source point cloud image, and carrying out target detection to obtain a source obstacle point cloud image;
the first obstacle filtering unit is used for filtering the source obstacle point cloud images in the source point cloud images;
and the construction unit is used for constructing a point cloud map according to the source point cloud image after the filtering processing.
In one embodiment, the target detection unit is further configured to acquire a source point cloud image, and input the source point cloud image to a trained target detection model to obtain a source obstacle point cloud image; the target detection model is obtained through training according to a point cloud sample image containing the obstacle.
In one embodiment, the real-time positioning module includes:
the image matching unit is used for matching the real-time point cloud image with a source point cloud image in the constructed point cloud map by adopting a real-time positioning algorithm to obtain a source point cloud image corresponding to the real-time point cloud image;
the data acquisition unit is used for taking the position information of the source point cloud image corresponding to the real-time point cloud image as real-time positioning data of the moving object in the point cloud map.
In one embodiment, the offline positioning module includes:
the second target detection unit is used for acquiring a real-time point cloud image, inputting the real-time point cloud image into a trained target detection model in an offline positioning algorithm, and obtaining a target obstacle point cloud image;
the second obstacle filtering unit is used for filtering the target obstacle point cloud image from the real-time point cloud image to obtain a filtered real-time point cloud image;
The image matching unit is used for inputting the filtered real-time point cloud image into a well-trained matching model in an offline positioning algorithm to obtain a source point cloud image corresponding to the filtered real-time point cloud image, wherein the matching model is obtained by training according to the real-time point cloud image and the corresponding source point cloud image;
and the data acquisition unit is used for taking the position information of the source point cloud image corresponding to the real-time point cloud image as the offline positioning data of the moving object.
In one embodiment, the image matching unit is further configured to screen out a real-time point cloud image with the largest matching value according to the size of the matching value of each real-time point cloud image after the filtering processing, and take a source point cloud image corresponding to the real-time point cloud image with the largest matching value as the output of the matching model.
In one embodiment, the image matching unit is further configured to set a weight corresponding to a matching value of the static object point cloud in the matching model, which is higher than a weight corresponding to a matching value of the dynamic object point cloud.
In one embodiment, the precision calculation module includes:
the data acquisition unit is used for acquiring the real-time positioning data and the corresponding offline positioning data;
The difference value calculation unit is used for calculating the difference value of each real-time positioning data and the corresponding offline positioning data;
and the precision determining unit is used for calculating an average value or a weighted average value or a median value of the difference values, and taking the average value or the weighted average value or the median value as the precision of the positioning algorithm.
A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the accuracy detection steps of the above positioning algorithm.
a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the accuracy detection step of the above-described positioning algorithm:
According to the precision detection method and apparatus for a positioning algorithm, the computer device, and the storage medium above, real-time positioning and offline positioning are both performed; the offline positioning data serve as reference values for the real-time positioning data, and the precision is calculated from the reference values and the corresponding real-time positioning data, realizing precision detection for the positioning algorithm of the moving object. In addition, because the offline positioning data of the moving object are acquired with a higher-precision offline positioning algorithm, the precision detection of the positioning algorithm is more accurate.
Drawings
FIG. 1 is an application environment diagram of a method for accuracy detection of a positioning algorithm in one embodiment;
FIG. 2 is a flow chart of a method for detecting accuracy of a positioning algorithm according to an embodiment;
FIG. 3 is a flow chart of a method for constructing a point cloud map in one embodiment;
FIG. 4 is a flowchart illustrating a step of acquiring real-time positioning data according to one embodiment;
FIG. 5 is a flowchart illustrating steps for acquiring offline positioning data in one embodiment;
FIG. 6 is a flow chart of a positioning algorithm accuracy calculation step in one embodiment;
FIG. 7 is a flow chart of a method for detecting accuracy of a positioning algorithm according to another embodiment;
FIG. 8 is a block diagram of a positioning algorithm accuracy detection device in one embodiment;
FIG. 9 is an internal structure diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
FIG. 1 is a diagram of an application environment for accuracy detection of a positioning algorithm in one embodiment. The accuracy detection method of the positioning algorithm provided by the embodiment of the application can be applied to an application environment shown in fig. 1. The computer device 100 may be a desktop terminal or a mobile terminal, and the mobile terminal may be a mobile phone, a tablet computer, a notebook computer, a wearable device, a personal digital assistant, or the like. The computer device 100 may also be implemented as a stand-alone server or as a server cluster of multiple servers.
Fig. 2 is a flow chart of a method for detecting accuracy of a positioning algorithm according to an embodiment. As shown in fig. 2, a method for detecting accuracy of a positioning algorithm is provided, and the method is applied to the computer device 100 in fig. 1 for illustration, and includes the following steps:
step 202, acquiring a real-time point cloud image of a moving object.
The moving object may be any moving device that needs to acquire its own positioning data. The real-time point cloud image is the current frame image of the photographed scene, acquired by the moving object for the real-time positioning algorithm. A point cloud image is an image containing depth information of the photographed scene; it may be generated directly from point cloud data, or obtained by coordinate conversion of a depth image acquired by a depth camera. Point cloud data are the laser point information acquired by scanning the photographed scene with a lidar. A depth image is an image whose pixel values are the depths from the depth camera to the points in the photographed scene.
Specifically, the computer device 100 acquires a current frame image of the photographed scene according to a real-time positioning algorithm of the moving object, and uses the current frame image as a real-time point cloud image.
Alternatively, the moving object may be an unmanned vehicle, an unmanned aerial vehicle, a mobile robot, a mobile video monitoring device, or the like.
Optionally, the real-time point cloud image may be a three-dimensional image generated according to the point cloud data, or may be a three-dimensional image obtained by performing coordinate conversion on a depth image acquired by the depth camera.
And 204, matching the real-time point cloud image with a source point cloud image in the constructed point cloud map by adopting a real-time positioning algorithm to obtain real-time positioning data of the moving object.
The real-time positioning algorithm is an algorithm for acquiring positioning data of the moving object in real time, and in this embodiment, the algorithm is an algorithm requiring detection accuracy. The real-time positioning data may be data including real-time positioning information of the moving object. The source point cloud image may be a real-time point cloud image obtained in advance by a moving object, and the point cloud map may refer to a map formed by overlapping the source point cloud image frame by frame.
The real-time positioning data may include GPS (Global Positioning System) data and GNSS (Global Navigation Satellite System) data, and may also include IMU (inertial measurement unit) data; the IMU measures the angular velocity and acceleration of an object in three-dimensional space. Specifically, the computer device 100 obtains the source point cloud images in the constructed point cloud map, matches the real-time point cloud image against them with the real-time positioning algorithm to find the source point cloud image corresponding to the real-time point cloud image, obtains the positioning information of that source point cloud image, and uses it as the real-time positioning data of the moving object. Here, matching means finding the source point cloud image in the constructed point cloud map that corresponds to the real-time point cloud image through comparative analysis of image feature points.
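For illustration only, since the patent does not fix a concrete similarity measure for this frame-to-map matching, the following Python sketch uses a voxel-overlap ratio as a stand-in similarity; the source_frames structure (a list of per-frame points with a stored pose) and all names are hypothetical assumptions:

    import numpy as np

    def match_frame_to_map(frame_points, source_frames, voxel=0.5):
        """Pick the source frame whose occupied-voxel set best overlaps the
        real-time frame. Voxel hashing stands in for the patent's
        unspecified feature-point comparison."""
        def voxel_set(points):
            return set(map(tuple, np.floor(points / voxel).astype(int)))

        frame_vox = voxel_set(frame_points)
        scores = []
        for src in source_frames:
            src_vox = voxel_set(src["points"])
            # overlap ratio of occupied voxels as a crude similarity measure
            scores.append(len(frame_vox & src_vox) / max(len(frame_vox), 1))
        best = int(np.argmax(scores))
        # the matched frame's stored pose serves as the real-time positioning data
        return source_frames[best]["pose"], scores[best]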
Step 206, matching the real-time point cloud image with the source point cloud image in the point cloud map by adopting an offline positioning algorithm to obtain offline positioning data of the moving object; the positioning accuracy of the off-line positioning algorithm is greater than that of the real-time positioning algorithm.
The offline positioning algorithm may be any algorithm whose positioning accuracy is higher than the real-time positioning accuracy. The offline positioning data are the positioning information of the moving object obtained with the offline positioning algorithm, and may include GPS data, GNSS data, and IMU data.
The IMU measures the angular velocity and acceleration of an object in three-dimensional space. The real-time positioning data and the offline positioning data obtained from the same frame of real-time point cloud image correspond one to one: for example, for real-time positioning data and offline positioning data obtained from the same real-time frame, the measured object's angular velocity in the real-time positioning data corresponds to its angular velocity in the offline positioning data, and its acceleration in the real-time positioning data corresponds to its acceleration in the offline positioning data.
Specifically, the computer device 100 obtains a real-time point cloud image, performs target detection on the real-time point cloud image to filter an obstacle in the real-time image, matches the filtered real-time point cloud image with a source point cloud image in a point cloud map, obtains a source point cloud image corresponding to the real-time point cloud image in the constructed point cloud map, obtains positioning information of the source point cloud image, and uses the positioning information as offline positioning data of a moving object.
In an embodiment, because many obstacle point cloud images exist in the acquired real-time point cloud images, large matching errors occur when the real-time point cloud images are compared with the source point cloud images in the point cloud map; the real-time point cloud images can therefore be processed offline to filter out the obstacle point cloud images and obtain filtered real-time point cloud images. Specifically, the target detection may consist of inputting the real-time point cloud image into a trained target detection model, which detects and filters the obstacles and yields the filtered real-time point cloud image, as sketched below.
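The patent leaves the detector's output format open. Assuming, purely for illustration, that the trained target detection model yields axis-aligned 3D boxes around obstacles, removing the obstacle points from a frame might look like this sketch (the box format and all names are assumptions):

    import numpy as np

    def filter_obstacles(points, boxes):
        """points: (N, 3) frame points; boxes: assumed axis-aligned obstacle
        boxes [(min_xyz, max_xyz), ...] from the detection model.
        Returns the frame with obstacle points removed."""
        keep = np.ones(len(points), dtype=bool)
        for lo, hi in boxes:
            inside = np.all((points >= lo) & (points <= hi), axis=1)
            keep &= ~inside                  # drop points inside any obstacle box
        return points[keep]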
In another embodiment, because a collected real-time point cloud image may be only a local point cloud image, with an incomplete shooting angle or with rotational or translational misalignment, the collected real-time point cloud images need to be registered offline: the point cloud images from each angle are transformed into the same coordinate system and stitched into one complete point cloud image. Registration may use the ICP (Iterative Closest Point) algorithm, the NDT (Normal Distributions Transform) algorithm, or similar. Specifically, the computer device 100 obtains several real-time point cloud images, applies a registration algorithm to obtain a registered real-time point cloud image, matches it against the source point cloud images in the point cloud map to find the corresponding source point cloud image in the constructed map, obtains that image's positioning information, and uses it as the offline positioning data of the moving object.
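As a concrete illustration of such pairwise registration, the following sketch uses the open-source Open3D library's point-to-point ICP; the patent names ICP and NDT but prescribes no library, and the correspondence-distance value is a placeholder assumption:

    import numpy as np
    import open3d as o3d  # assumed available; the patent prescribes no library

    def register_frames(src_xyz, dst_xyz, max_corr_dist=1.0):
        """Estimate the 4x4 rigid transform aligning src onto dst with
        point-to-point ICP, so frames can be stitched in one coordinate system."""
        src = o3d.geometry.PointCloud()
        src.points = o3d.utility.Vector3dVector(src_xyz)
        dst = o3d.geometry.PointCloud()
        dst.points = o3d.utility.Vector3dVector(dst_xyz)
        result = o3d.pipelines.registration.registration_icp(
            src, dst, max_corr_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        return result.transformation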
And step 208, taking the offline positioning data as the reference value of the real-time positioning data, and determining the accuracy of the real-time positioning data according to the reference value and the real-time positioning data.
A reference value stands in for the true value of a quantity that cannot be obtained directly; by convention, the reference value is treated as that true value. Correspondingly, the offline GPS data serve as the reference value for the real-time GPS data; the offline GNSS data serve as the reference value for the real-time GNSS data; the offline IMU angular velocity serves as the reference value for the real-time IMU angular velocity; and the offline IMU acceleration serves as the reference value for the real-time IMU acceleration.
The accuracy of the positioning algorithm refers to the accuracy of the positioning data measured by that algorithm; it may be obtained by computing the differences between the offline positioning data and the real-time positioning data acquired by the computer device 100. In this precision detection method, real-time positioning and offline positioning are both performed, the offline positioning data are taken as reference values for the real-time positioning data, and the precision is calculated from the offline and real-time positioning data, realizing precision detection for the moving object's positioning algorithm.
In an embodiment, FIG. 3 is a schematic flowchart of a method for constructing a point cloud map. As shown in FIG. 3, a method for constructing a point cloud map is provided, comprising the following steps:
step 302, acquiring a source point cloud image, and performing target detection to obtain a source obstacle point cloud image.
The source point cloud image can be a real-time point cloud image obtained in advance by the test vehicle, the source obstacle point cloud image can be a point cloud image of an obstacle in the source point cloud image, and the source obstacle point cloud image can be obtained by comparing and analyzing obstacle characteristic points of the source point cloud image.
In an embodiment, the obtaining the source point cloud image, performing target detection, and obtaining the source obstacle point cloud image includes: acquiring a source point cloud image, and inputting the source point cloud image into a trained target detection model to obtain a source obstacle point cloud image; the target detection model is obtained through training according to a point cloud sample image containing the obstacle. The point cloud sample image may be a pre-constructed sample image including an obstacle point cloud image.
And step 304, filtering the source obstacle point cloud image in the source point cloud image, and constructing a point cloud map according to the filtered source point cloud image.
The point cloud map may be a map obtained by stitching point clouds from different positions; stitching joins adjacent scanned point cloud data together, which in this embodiment means stitching the source point cloud images frame by frame. The stitching may use the ICP algorithm, a global matching algorithm, or a local matching algorithm.
Optionally, the point cloud map can be constructed by adopting a multi-view three-dimensional reconstruction technology, errors caused by different angle observations are considered, source point cloud images shot at different angles are optimized, and the point cloud map is obtained according to the optimized source point cloud images.
In an embodiment, the method for constructing the point cloud map includes: acquiring source point cloud images, screening them, and overlaying the screened source point cloud images frame by frame to obtain the point cloud map. Before the screening, obstacles in the source point cloud images can be detected with the target detection model. The screening can be performed according to the number of obstacles in each source point cloud image, keeping the source point cloud images whose obstacle count is below a threshold. The point cloud images can be filtered with bilateral filtering, Gaussian filtering, conditional filtering, pass-through filtering, random sample consensus filtering, and the like.
In the above point cloud map construction step, target detection on the source point cloud images yields the source obstacle point cloud images, which are then filtered out of the source point cloud images; the filtered source point cloud images are overlaid frame by frame to obtain the point cloud map. Because the constructed point cloud map contains fewer obstacle point cloud images, real-time point cloud images can be matched against its source point cloud images more conveniently. A sketch of this step follows.
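A minimal sketch of this construction step, assuming each frame arrives as an N x 3 point array with a per-point obstacle mask from the detection model and a known 4 x 4 pose (the frame structure and names are hypothetical):

    import numpy as np

    def build_point_cloud_map(frames):
        """frames: list of dicts with 'points' (N, 3), 'obstacle_mask' (N,) bool,
        and 'pose' (4, 4). Obstacle points are dropped, the rest are
        transformed into the map frame and accumulated frame by frame."""
        map_points = []
        for f in frames:
            pts = f["points"][~f["obstacle_mask"]]            # filter source obstacles
            homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
            map_points.append((f["pose"] @ homog.T).T[:, :3])  # into map coordinates
        return np.vstack(map_points)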
In one embodiment, FIG. 4 is a flowchart illustrating the step of acquiring real-time positioning data. As shown in FIG. 4, step 204 includes:
step 402, matching the real-time point cloud image with a source point cloud image in the constructed point cloud map by adopting a real-time positioning algorithm to obtain a source point cloud image corresponding to the real-time point cloud image.
The matching performs a comparative analysis of the feature points of the real-time point cloud image against the feature points of all the source point cloud images in the point cloud map, and takes the source point cloud image with the highest similarity to the real-time point cloud image, or the highest feature-point overlap, as the source point cloud image corresponding to the real-time point cloud image.
And step 404, using the position information of the source point cloud image corresponding to the real-time point cloud image as the real-time positioning data of the moving object in the point cloud map.
The real-time positioning data may include GPS data, GNSS data, and IMU data.
In this step of acquiring the real-time positioning data, the real-time positioning algorithm matches the real-time point cloud image with the source point cloud images in the constructed point cloud map to obtain the source point cloud image corresponding to the real-time point cloud image, and the positioning information of that source point cloud image is used as the real-time positioning data of the moving object, thereby realizing the acquisition of the moving object's real-time positioning data.
In one embodiment, FIG. 5 is a flowchart illustrating the step of acquiring offline positioning data. As shown in FIG. 5, step 206 includes:
step 502, acquiring a real-time point cloud image, and inputting the real-time point cloud image into a trained target detection model in an offline positioning algorithm to obtain a target obstacle point cloud image.
The target detection model may be a trained model for detecting a target obstacle, and may be a neural network model trained according to feature points of a source obstacle point cloud image.
And step 504, filtering the target obstacle point cloud image from the real-time point cloud image to obtain a filtered real-time point cloud image.
The target obstacle point cloud image can be a point cloud image of an obstacle in the real-time point cloud image, and can be obtained by comparing and analyzing obstacle characteristic points of the real-time point cloud image.
Step 506, inputting the filtered real-time point cloud image into a trained matching model in an offline positioning algorithm to obtain a source point cloud image corresponding to the filtered real-time point cloud image; the matching model is trained according to the real-time point cloud image and the corresponding source point cloud image.
The matching model can be trained by combining characteristic points of the sample point cloud image.
And step 508, taking the position information of the source point cloud image corresponding to the real-time point cloud image as the offline positioning data of the moving object.
In this step of acquiring the offline positioning data, target detection is performed on the real-time point cloud image to filter out the obstacles in it; the filtered real-time point cloud image is matched with the source point cloud images in the point cloud map to obtain the corresponding source point cloud image in the constructed map, and that image's positioning information is used as the offline positioning data of the moving object, thereby realizing the acquisition of the offline positioning data.
In another embodiment, inputting the filtered real-time point cloud image into the trained matching model in the offline positioning algorithm to obtain the source point cloud image corresponding to the filtered real-time point cloud image includes: ranking the filtered real-time point cloud images by their matching values, selecting the filtered real-time point cloud image with the largest matching value, and taking the source point cloud image corresponding to it as the output of the matching model. The matching value reflects the similarity between a filtered real-time point cloud image and its corresponding source point cloud image.
Alternatively, the matching value may be a matching score, an image similarity percentage, or a coincidence probability of the image feature points.
In another embodiment, the weight corresponding to the matching value of the static object point cloud in the matching model is higher than the weight corresponding to the matching value of the dynamic object point cloud. A static object point cloud is the point cloud of an object that is stationary in the world coordinate system, and a dynamic object point cloud is the point cloud of an object that moves in the world coordinate system.
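The patent constrains this weighting only qualitatively. One hedged way such weights could enter a matching value is sketched below; the per-point score form and the weight values are assumptions, not the patent's formula:

    import numpy as np

    def weighted_matching_value(residuals, is_static, w_static=1.0, w_dynamic=0.2):
        """residuals: per-point match residuals (distance to the nearest map
        point); is_static: bool mask from the detection model. Static points
        get the higher weight, so moving objects influence the score less."""
        weights = np.where(is_static, w_static, w_dynamic)
        # convert residuals to per-point scores in (0, 1]; the scale is an assumption
        scores = np.exp(-residuals**2 / (2 * 0.5**2))
        return float(np.sum(weights * scores) / np.sum(weights))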
In one embodiment, FIG. 6 is a flowchart illustrating the positioning algorithm accuracy calculation step. As shown in FIG. 6, step 208 includes:
Step 602, obtaining the real-time positioning data and the corresponding offline positioning data.
Each item of real-time positioning data and its corresponding offline positioning data are obtained by processing the same frame of real-time point cloud image.
Step 604, calculating a difference value between each of the real-time positioning data and the corresponding offline positioning data.
At least one group of real-time positioning data and the corresponding offline positioning data are taken as calculation samples, and difference value calculation is carried out to obtain at least one difference value.
Step 606, calculating an average value or a weighted average value or a median value of the respective differences, and taking the average value or the weighted average value or the median value as the accuracy of the positioning algorithm.
Alternatively, the average of the differences may be computed and taken as the accuracy of the positioning algorithm; or their weighted average may be computed and taken as the accuracy; or their median may be computed and taken as the accuracy; or one or more of the average, weighted average, and median may be used together as the accuracy of the positioning algorithm.
In the calculating precision step, the precision of the real-time positioning algorithm is determined according to the difference value of the off-line positioning data and the real-time positioning data, so that the precision calculation of the positioning algorithm is realized.
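A minimal sketch of this accuracy computation over paired samples, assuming the positioning data have been reduced to position vectors (the function name and inputs are hypothetical):

    import numpy as np

    def positioning_accuracy(realtime_xyz, offline_xyz, weights=None, stat="mean"):
        """realtime_xyz, offline_xyz: (N, 3) paired samples; the offline values
        act as reference values. Returns the mean, weighted mean, or median
        of the per-frame position errors."""
        diffs = np.linalg.norm(realtime_xyz - offline_xyz, axis=1)  # per-frame error
        if stat == "mean":
            return float(diffs.mean())
        if stat == "weighted":
            w = np.ones_like(diffs) if weights is None else np.asarray(weights)
            return float(np.average(diffs, weights=w))
        if stat == "median":
            return float(np.median(diffs))
        raise ValueError(f"unknown stat: {stat}")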
In another embodiment, FIG. 7 is a flowchart of a method for detecting the accuracy of a positioning algorithm. As shown in FIG. 7, the method includes the following steps:
step 702, a point cloud map is constructed, and a source obstacle point cloud image in the point cloud map is filtered out through a target detection module.
The point cloud map may be built from laser point cloud data and additionally contains the speed data, GPS data, GNSS data, and IMU data of the moving object. In this embodiment, the moving object is a test vehicle, and the point cloud map is constructed from acquired source point cloud images. A source obstacle point cloud image is the point cloud image of an obstacle in a source point cloud image, an obstacle being an object that does not belong to the map structure.
Specifically, the computer device 100 detects obstacles with the target detection model, obtains the size and position of each source obstacle point cloud image, removes those point clouds from the source point cloud images, and performs map construction with the filtered point cloud images.
And step 704, obtaining real-time positioning data by adopting a real-time positioning algorithm, and storing the real-time positioning data.
The real-time positioning algorithm can be an algorithm for acquiring the position information of the moving object in the map, and the real-time positioning data is stored in a hard disk or other storage media in the form of a document file.
And step 706, filtering the target obstacle from the real-time point cloud image to obtain a filtered current frame image.
The target obstacle is a non-structural object that tends to cause image matching errors and therefore needs to be filtered out of the point cloud map. A target obstacle point cloud image is the point cloud image of an obstacle in the real-time point cloud image.
Specifically, the computer device 100 detects target obstacles with the target detection model, obtains the size and position of each target obstacle point cloud image, and removes those point clouds from the real-time point cloud image, obtaining the filtered current frame image.
Step 708, adopting the modified NDT matching algorithm as an offline positioning algorithm, and performing offline positioning on the filtered current frame image to obtain offline positioning data.
The NDT (Normal Distributions Transform) matching algorithm performs a coordinate transformation on the input point cloud image in order to match it. The NDT matching algorithm is modified as follows: weights are configured for the real-time point cloud images of each angle in the NDT matching algorithm, so that input point cloud images whose angle deviation exceeds a threshold receive a score weight below a preset value in the matching value, as sketched below.
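Since the patent constrains this modification only qualitatively, the following sketch merely illustrates the idea: candidate frames whose shooting angle deviates from a reference by more than a threshold receive a score weight below a preset value (the threshold and weight values are assumptions):

    import numpy as np

    def angle_weighted_scores(frame_scores, frame_angles, ref_angle,
                              angle_thresh_deg=30.0, low_weight=0.1):
        """frame_scores: NDT matching score per candidate real-time frame;
        frame_angles: each frame's shooting angle in degrees. Frames whose
        deviation from ref_angle exceeds the threshold get a score weight
        below the preset value, as the modified NDT matching requires."""
        dev = np.abs(np.asarray(frame_angles) - ref_angle)
        weights = np.where(dev > angle_thresh_deg, low_weight, 1.0)
        return np.asarray(frame_scores) * weights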
And 710, performing filtering processing on the offline positioning data obtained by the NDT matching algorithm by adopting an RTS smoothing algorithm to obtain the filtered offline positioning data.
The RTS (Rauch-Tung-Striebel) smoother is a Kalman-filter-based algorithm; filtering the offline positioning data with it yields offline positioning data with higher positioning accuracy.
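For concreteness, a textbook linear Rauch-Tung-Striebel smoother is sketched below: a forward Kalman filter pass followed by a backward correction pass. The patent does not disclose its state model, so F, H, Q, and R (for example, a constant-velocity model over the NDT position fixes) are assumptions:

    import numpy as np

    def rts_smooth(zs, F, H, Q, R, x0, P0):
        """Linear RTS smoother. zs: (N, m) measurements; F, H, Q, R: assumed
        transition, observation, process-noise, and measurement-noise
        matrices; x0, P0: prior state and covariance."""
        n, N = len(x0), len(zs)
        xp = np.zeros((N, n)); Pp = np.zeros((N, n, n))  # predicted
        xf = np.zeros((N, n)); Pf = np.zeros((N, n, n))  # filtered
        x, P = x0, P0
        for k, z in enumerate(zs):
            xp[k], Pp[k] = F @ x, F @ P @ F.T + Q                  # predict
            K = Pp[k] @ H.T @ np.linalg.inv(H @ Pp[k] @ H.T + R)   # Kalman gain
            x = xp[k] + K @ (z - H @ xp[k])                        # measurement update
            P = (np.eye(n) - K @ H) @ Pp[k]
            xf[k], Pf[k] = x, P
        xs = xf.copy()
        for k in range(N - 2, -1, -1):                             # backward pass
            C = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])             # smoother gain
            xs[k] = xf[k] + C @ (xs[k + 1] - xp[k + 1])
        return xs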
Step 712, screening the offline positioning data after the filtering processing according to the matching value of the real-time positioning image of each frame, taking the obtained result as a reference value, calculating the difference value between the reference value and the corresponding real-time positioning data, and obtaining the accuracy of the positioning algorithm according to the difference value.
The matching value of each frame of real-time positioning image reflects its similarity to the corresponding source point cloud image. This embodiment selects the real-time positioning image with the highest similarity, that is, the largest matching value, and takes the position data of its corresponding source point cloud image as the reference value for that image's real-time positioning data. The error of the positioning algorithm can be obtained directly from the difference, or the average, weighted average, or median of one or more differences can be computed and taken as the accuracy of the positioning algorithm.
In this precision detection method for a positioning algorithm, the modified NDT matching algorithm serves as the offline positioning algorithm to obtain offline positioning data; the RTS smoothing algorithm filters them to obtain higher-precision offline positioning data, which serve as reference values for the real-time positioning data; finally, the reference values are compared with the real-time positioning data to obtain the accuracy of the positioning algorithm, realizing precision detection of the positioning algorithm.
It should be understood that, although the steps in the flowcharts of FIGS. 1-7 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in FIGS. 1-7 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments; these sub-steps or stages need not be performed sequentially and may be performed in turn or alternately with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 8, there is provided a precision detection apparatus of a positioning algorithm, including: an image acquisition module 802, a real-time location module 804, an offline location module 806, and a precision calculation module 808, wherein:
an image acquisition module 802 is configured to acquire a real-time point cloud image of a moving object.
The real-time positioning module 804 is configured to match the real-time point cloud image with a source point cloud image in the constructed point cloud map by using a real-time positioning algorithm, so as to obtain real-time positioning data of the moving object.
The offline positioning module 806 is configured to match the real-time point cloud image with a source point cloud image in the point cloud map by using an offline positioning algorithm, so as to obtain offline positioning data of the moving object; the positioning accuracy of the off-line positioning algorithm is greater than that of the real-time positioning algorithm.
And the precision calculation module 808 is configured to determine the precision of the real-time positioning data according to the reference value and the real-time positioning data by using the offline positioning data as the reference value of the real-time positioning data.
The apparatus further comprises: and the map construction module is used for acquiring a source point cloud image and constructing a point cloud map according to the source point cloud image.
Wherein, the map construction module includes: the first target detection unit is used for acquiring a source point cloud image, and carrying out target detection to obtain a source obstacle point cloud image; the first obstacle filtering unit is used for filtering the source obstacle point cloud images in the source point cloud images; and the construction unit is used for constructing a point cloud map according to the source point cloud image after the filtering processing. The target detection unit further comprises a model application unit, the model application unit is used for inputting the acquired source point cloud image into a trained target detection model to obtain a source obstacle point cloud image, and the target detection model is obtained through training according to the point cloud sample image containing the obstacle.
Wherein, the real-time positioning module 804 includes:
the image matching unit is used for matching the real-time point cloud image with a source point cloud image in the constructed point cloud map by adopting a real-time positioning algorithm to obtain a source point cloud image corresponding to the real-time point cloud image;
the data acquisition unit is used for taking the position information of the source point cloud image corresponding to the real-time point cloud image as real-time positioning data of the moving object in the point cloud map.
Wherein, the offline positioning module 806 includes:
The second target detection unit is used for acquiring a real-time point cloud image, inputting the real-time point cloud image into a trained target detection model in an offline positioning algorithm, and obtaining a target obstacle point cloud image;
the second obstacle filtering unit is used for filtering the target obstacle point cloud image from the real-time point cloud image to obtain a filtered real-time point cloud image;
the image matching unit is used for inputting the filtered real-time point cloud image into a well-trained matching model in an offline positioning algorithm to obtain a source point cloud image corresponding to the filtered real-time point cloud image, wherein the matching model is obtained by training according to the real-time point cloud image and the corresponding source point cloud image;
and the data acquisition unit is used for taking the position information of the source point cloud image corresponding to the real-time point cloud image as the offline positioning data of the moving object.
The image matching unit is also used for screening out the real-time point cloud image with the largest matching value according to the size of the matching value of each real-time point cloud image after filtering processing, and taking the source point cloud image corresponding to the real-time point cloud image with the largest matching value as the output of the matching model.
The image matching unit is also used for setting the weight corresponding to the matching value of the static object point cloud in the matching model to be higher than the weight corresponding to the matching value of the dynamic object point cloud.
The precision calculation module 808 includes:
the data acquisition unit is used for acquiring the real-time positioning data and the corresponding offline positioning data;
the difference value calculation unit is used for calculating the difference value of each real-time positioning data and the corresponding offline positioning data;
and the precision determining unit is used for calculating an average value or a weighted average value or a median value of the difference values, and taking the average value or the weighted average value or the median value as the precision of the positioning algorithm.
For specific limitations of the precision detection apparatus for a positioning algorithm, reference may be made to the above limitations of the precision detection method for a positioning algorithm, which are not repeated here. Each module in the precision detection apparatus may be implemented wholly or partly in software, hardware, or a combination of the two. The modules may be embedded in hardware in, or independent of, a processor in the computer device, or stored in software in a memory of the computer device, so that the processor can invoke and perform the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a terminal, and whose internal structure may be as shown in FIG. 9. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used to communicate with an external terminal over a network connection. The computer program, when executed by the processor, implements a method for detecting the accuracy of a positioning algorithm. The display screen of the computer device may be a liquid crystal display or an electronic ink display; the input device may be a touch layer covering the display screen, or keys, a trackball, or a touchpad on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art will appreciate that the structure shown in FIG. 9 is merely a block diagram of part of the structure relevant to the present application and does not limit the computer device to which the present application applies; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the accuracy detection method steps of the above-described positioning algorithm when executing the computer program.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon which, when executed by a processor, implements the accuracy detection method steps of the positioning algorithm described above.
Those skilled in the art will appreciate that all or part of the processes in the above method embodiments may be implemented by a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features have been described; nevertheless, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above examples express only a few embodiments of the application; their description is specific and detailed, but is not therefore to be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the application, all of which fall within the scope of protection of the application. Accordingly, the scope of protection of the present application shall be determined by the appended claims.

Claims (9)

1. A method for detecting accuracy of a positioning algorithm, the method comprising:
acquiring a real-time point cloud image of a moving object;
matching the real-time point cloud image with a source point cloud image in the constructed point cloud map by adopting a real-time positioning algorithm to obtain real-time positioning data of the moving object;
matching the real-time point cloud image with a source point cloud image in the point cloud map by adopting an offline positioning algorithm to obtain offline positioning data of the moving object; the positioning precision of the off-line positioning algorithm is greater than that of the real-time positioning algorithm;
Taking the off-line positioning data as a reference value of real-time positioning data, and determining the precision of the real-time positioning data according to the reference value and the real-time positioning data;
the step of matching the real-time point cloud image with the source point cloud image in the point cloud map by using an offline positioning algorithm to obtain offline positioning data of the moving object comprises the following steps:
acquiring a real-time point cloud image, and inputting the real-time point cloud image into a trained target detection model in an offline positioning algorithm to obtain a target obstacle point cloud image;
filtering the target obstacle point cloud image from the real-time point cloud image to obtain a filtered real-time point cloud image;
inputting the filtered real-time point cloud image into a trained matching model in an offline positioning algorithm to obtain a source point cloud image corresponding to the filtered real-time point cloud image; the matching model is obtained through training according to the real-time point cloud image and the corresponding source point cloud image;
and taking the position information of the source point cloud image corresponding to the real-time point cloud image as offline positioning data of the moving object.
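By way of illustration and not limitation, the offline branch of claim 1 can be organized as the following Python sketch. The names (detect_obstacles, match_to_map, offline_positioning) and the toy scoring rules are assumptions of this illustration only; in practice the trained target detection model and the trained matching model recited in the claim would replace the placeholder heuristics.

import numpy as np

# Hypothetical stand-ins for the trained models of claim 1; the simple
# heuristics below are placeholders, not the models themselves.
def detect_obstacles(cloud: np.ndarray) -> np.ndarray:
    """Return a boolean mask over the points belonging to obstacles."""
    # Toy rule: low points close to the sensor are treated as obstacles.
    return (np.linalg.norm(cloud[:, :2], axis=1) < 5.0) & (cloud[:, 2] < 1.0)

def match_to_map(cloud: np.ndarray, map_clouds: list) -> int:
    """Return the index of the best-matching source point cloud image."""
    # Toy score: centroid proximity; a trained matching model goes here.
    centroids = np.stack([src.mean(axis=0) for src in map_clouds])
    scores = -np.linalg.norm(centroids - cloud.mean(axis=0), axis=1)
    return int(np.argmax(scores))

def offline_positioning(realtime_cloud, map_clouds, map_positions):
    """Claim 1, offline branch: detect, filter, match, read off the position."""
    obstacle_mask = detect_obstacles(realtime_cloud)    # target obstacle cloud
    filtered = realtime_cloud[~obstacle_mask]           # filtered real-time cloud
    idx = match_to_map(filtered, map_clouds)            # matching model step
    return map_positions[idx]                           # offline positioning data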
2. The method of claim 1, wherein the method for constructing the point cloud map comprises:
acquiring a source point cloud image, and performing target detection to obtain a source obstacle point cloud image;
and filtering the source obstacle point cloud image out of the source point cloud image, and constructing a point cloud map according to the filtered source point cloud image.
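Under the same assumptions, the map construction of claim 2 reduces to filtering each source scan before it enters the map; detect_obstacles is again a hypothetical stand-in for the trained target detection model.

def build_point_cloud_map(source_clouds, source_positions, detect_obstacles):
    """Claim 2: filter the source obstacle points out of every source scan and
    keep the filtered scans, paired with their positions, as the map."""
    map_clouds, map_positions = [], []
    for cloud, position in zip(source_clouds, source_positions):
        static_points = cloud[~detect_obstacles(cloud)]  # drop obstacle points
        map_clouds.append(static_points)
        map_positions.append(position)
    return map_clouds, map_positions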
3. The method of claim 2, wherein the acquiring the source point cloud image for target detection to obtain the source obstacle point cloud image comprises:
acquiring a source point cloud image, and inputting the source point cloud image into a trained target detection model to obtain a source obstacle point cloud image; the target detection model is obtained through training according to a point cloud sample image containing the obstacle.
4. The method of claim 1, wherein the matching the real-time point cloud image with the source point cloud image in the constructed point cloud map using the real-time positioning algorithm to obtain real-time positioning data of the moving object comprises:
matching the real-time point cloud image with a source point cloud image in the constructed point cloud map by adopting a real-time positioning algorithm to obtain a source point cloud image corresponding to the real-time point cloud image;
and taking the position information of the source point cloud image corresponding to the real-time point cloud image as real-time positioning data of the moving object in the point cloud map.
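For contrast with the offline branch, the real-time branch of claim 4 matches the raw, unfiltered cloud directly; match_to_map is the same hypothetical matcher sketched under claim 1.

def realtime_positioning(realtime_cloud, map_clouds, map_positions, match_to_map):
    """Claim 4: match the raw real-time cloud against the map and report the
    matched source cloud's position as the real-time positioning data."""
    idx = match_to_map(realtime_cloud, map_clouds)
    return map_positions[idx]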
5. The method of claim 1, wherein inputting the filtered real-time point cloud image into a trained matching model in an offline positioning algorithm to obtain a source point cloud image corresponding to the filtered real-time point cloud image comprises:
screening out, according to the matching value of each filtered real-time point cloud image, the real-time point cloud image with the largest matching value, and taking the source point cloud image corresponding to the real-time point cloud image with the largest matching value as the output of the matching model.
6. The method of claim 1, wherein the weight corresponding to the matching value of the static object point cloud is higher than the weight corresponding to the matching value of the dynamic object point cloud in the matching model.
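Claims 5 and 6 together describe a weighted score followed by an arg-max selection. One possible realization is sketched below; the boolean is_static mask, the weights w_static and w_dynamic, and the per-point distance term are assumptions of this illustration, the last standing in for whatever score the trained matching model produces.

import numpy as np

def weighted_match_value(realtime_cloud, source_cloud, is_static,
                         w_static=1.0, w_dynamic=0.2):
    """Claim 6: static-object points weigh more than dynamic-object points."""
    d = np.linalg.norm(realtime_cloud - source_cloud.mean(axis=0), axis=1)
    weights = np.where(is_static, w_static, w_dynamic)
    return float(-(weights * d).sum() / weights.sum())

def select_best_candidate(realtime_cloud, is_static, candidate_sources):
    """Claim 5: keep the candidate source cloud with the largest matching value."""
    values = [weighted_match_value(realtime_cloud, src, is_static)
              for src in candidate_sources]
    return int(np.argmax(values))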
7. The method of claim 1, wherein the determining the accuracy of the real-time positioning data based on the reference value and the real-time positioning data using the offline positioning data as the reference value of the real-time positioning data comprises:
acquiring the real-time positioning data and corresponding offline positioning data;
calculating the difference value of each real-time positioning data and the corresponding offline positioning data;
and calculating an average value, a weighted average value, or a median value of the difference values, and taking the calculated value as the accuracy of the positioning algorithm.
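The accuracy figure of claim 7 reduces to elementary statistics over the per-frame differences. In the sketch below, the reduce parameter names the three alternatives the claim permits; the function and parameter names are assumptions of this illustration.

import numpy as np

def positioning_accuracy(realtime_data, offline_data, reduce="mean", weights=None):
    """Claim 7: the offline data is the reference value; the accuracy is the
    mean, weighted mean, or median of the per-frame position differences."""
    diffs = np.linalg.norm(np.asarray(realtime_data)
                           - np.asarray(offline_data), axis=1)
    if reduce == "mean":
        return float(diffs.mean())
    if reduce == "weighted":
        return float(np.average(diffs, weights=weights))
    if reduce == "median":
        return float(np.median(diffs))
    raise ValueError(f"unsupported reduction: {reduce}")

For example, positioning_accuracy(rt, off, reduce="median") over two (N, 3) trajectories yields a single error figure in the units of the point cloud map.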
8. An accuracy detection device for a positioning algorithm, the device comprising:
the image acquisition module is used for acquiring a real-time point cloud image of the moving object;
the real-time positioning module is used for matching the real-time point cloud image with a source point cloud image in the constructed point cloud map by adopting a real-time positioning algorithm to obtain real-time positioning data of the moving object;
the offline positioning module is used for matching the real-time point cloud image with the source point cloud image in the point cloud map by adopting an offline positioning algorithm to obtain offline positioning data of the moving object; the positioning precision of the offline positioning algorithm is greater than that of the real-time positioning algorithm;
the precision calculation module is used for taking the offline positioning data as a reference value of the real-time positioning data and determining the precision of the real-time positioning data according to the reference value and the real-time positioning data;
wherein the offline positioning module is specifically used for:
acquiring a real-time point cloud image, inputting the real-time point cloud image into a trained target detection model in an offline positioning algorithm to obtain a target obstacle point cloud image,
filtering the target obstacle point cloud image from the real-time point cloud image to obtain a filtered real-time point cloud image,
inputting the filtered real-time point cloud image into a trained matching model in an offline positioning algorithm to obtain a source point cloud image corresponding to the filtered real-time point cloud image, wherein the matching model is obtained by training according to the real-time point cloud image and the corresponding source point cloud image,
and taking the position information of the source point cloud image corresponding to the real-time point cloud image as offline positioning data of the moving object.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
CN201910204805.0A 2019-03-18 2019-03-18 Precision detection method and device for positioning algorithm, computer equipment and storage medium Active CN111721283B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910204805.0A CN111721283B (en) 2019-03-18 2019-03-18 Precision detection method and device for positioning algorithm, computer equipment and storage medium
CN202310890176.8A CN116972880A (en) 2019-03-18 2019-03-18 Precision detection device of positioning algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910204805.0A CN111721283B (en) 2019-03-18 2019-03-18 Precision detection method and device for positioning algorithm, computer equipment and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202310890176.8A Division CN116972880A (en) 2019-03-18 2019-03-18 Precision detection device of positioning algorithm

Publications (2)

Publication Number Publication Date
CN111721283A CN111721283A (en) 2020-09-29
CN111721283B true CN111721283B (en) 2023-08-15

Family

ID=72563252

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201910204805.0A Active CN111721283B (en) 2019-03-18 2019-03-18 Precision detection method and device for positioning algorithm, computer equipment and storage medium
CN202310890176.8A Pending CN116972880A (en) 2019-03-18 2019-03-18 Precision detection device of positioning algorithm

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202310890176.8A Pending CN116972880A (en) 2019-03-18 2019-03-18 Precision detection device of positioning algorithm

Country Status (1)

Country Link
CN (2) CN111721283B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112700479B (en) * 2020-12-23 2024-02-23 北京超星未来科技有限公司 Registration method based on CNN point cloud target detection
CN114442605B (en) * 2021-12-16 2023-08-18 中国科学院深圳先进技术研究院 Positioning detection method, device, autonomous mobile equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105424026A (en) * 2015-11-04 2016-03-23 中国人民解放军国防科学技术大学 Indoor navigation and localization method and system based on point cloud tracks
CN106168805A (en) * 2016-09-26 2016-11-30 湖南晖龙股份有限公司 The method of robot autonomous walking based on cloud computing
CN106225790A (en) * 2016-07-13 2016-12-14 百度在线网络技术(北京)有限公司 A kind of determination method and device of unmanned vehicle positioning precision
CN106846308A (en) * 2017-01-20 2017-06-13 广州市城市规划勘测设计研究院 The detection method and device of the topographic map precision based on a cloud
CN107451593A (en) * 2017-07-07 2017-12-08 西安交通大学 A kind of high-precision GPS localization method based on image characteristic point
CN108389264A (en) * 2018-02-07 2018-08-10 网易(杭州)网络有限公司 Coordinate system determines method, apparatus, storage medium and electronic equipment
CN108876852A (en) * 2017-05-09 2018-11-23 中国科学院沈阳自动化研究所 A kind of online real-time object identification localization method based on 3D vision
CN108871353A (en) * 2018-07-02 2018-11-23 上海西井信息科技有限公司 Road network map generation method, system, equipment and storage medium
CN109270545A (en) * 2018-10-23 2019-01-25 百度在线网络技术(北京)有限公司 A kind of positioning true value method of calibration, device, equipment and storage medium
CN109459734A (en) * 2018-10-30 2019-03-12 百度在线网络技术(北京)有限公司 A kind of laser radar locating effect appraisal procedure, device, equipment and storage medium

Also Published As

Publication number Publication date
CN116972880A (en) 2023-10-31
CN111721283A (en) 2020-09-29

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant