CN108534782B - Binocular vision system-based landmark map vehicle instant positioning method - Google Patents
- Publication number: CN108534782B (application CN201810337903.7A)
- Authority: CN (China)
- Prior art keywords: landmark, vehicle, dimensional, camera, image
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS; G01—MEASURING, TESTING; G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/20—Instruments for performing navigational calculations
- G01C21/005—Navigation with correlation of navigation data from several sources, e.g. map or contour matching
Abstract
The invention discloses a landmark-map-based instant vehicle positioning method using a binocular vision system. A binocular visual odometer first computes the relative positioning of the vehicle, and a deep learning detector then finds landmarks in the image. The detected landmarks are retrieved from a landmark map database, and a nonlinear optimization algorithm uses the retrieved landmarks to position the vehicle accurately in the terrestrial coordinate system. Finally, the transformation between the camera's relative coordinate system and the terrestrial coordinate system is established, and the landmark-based vehicle position is used to correct the position given by the binocular visual odometer.
Description
Technical Field
The invention belongs to the technical field of vehicle positioning, and particularly relates to a binocular vision system-based landmark map vehicle instant positioning method.
Background
A landmark map is a lightweight high-precision map from which redundant information has been removed; it can provide an intelligent vehicle with perception of some static targets, i.e. landmark perception. High-tech companies in Israel, Japan, and the USA are currently developing and collecting landmark maps, such as Mobileye's EyeQ-based map, Toyota's landmark image map, and lvl5's crowdsourced purely visual map. How to position an intelligent vehicle with a landmark map is under continuous study by researchers in many countries.
A search of prior patents finds: "Binocular camera-based high-precision visual positioning map generation system and method" (application no. 2016100288342), "Visual positioning method and visual positioning device" (application no. 201611069552.3), "Visual positioning method and device" (application no. 201110371807.2), and "Intelligent vehicle fusion positioning system and method introducing a panoramic map" (application no. 201710150551.X). The first records image feature points and embeds them into the map to form a feature-point map, which differs from a landmark map carrying real-world meaning. The second places several identification points on an identified object, establishes a system of linear equations in the identification points' three-dimensional coordinates, and solves for those coordinates to obtain the object's attitude; its drawback is that identification points must be physically arranged on the object, so it cannot be used to position an intelligent vehicle. The third synthesizes a three-dimensional image of a target object from a camera's two-dimensional images via an image acquisition module and determines the object's three-dimensional coordinates and attitude in a spatial coordinate system.
Its drawback is that reconstruction from a monocular camera lacks true metric scale, so it cannot be used for intelligent vehicle positioning. The fourth obtains the vehicle's position by fusing vehicle-mounted satellite positioning equipment, inertial navigation equipment, a camera sensor, and a panoramic map; its drawback is reliance on multiple sensors, which makes ideal precision expensive to achieve.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a landmark map vehicle instant positioning method based on a binocular vision system.
In order to achieve the aim, the invention discloses a landmark map vehicle instant positioning method based on a binocular vision system, which is characterized by comprising the following steps:
(1) carrying out internal reference calibration on the binocular vision system, acquiring internal parameters of a left camera and a right camera in the binocular vision system, and constructing an internal reference matrix K;
(2) the binocular vision system acquires a video stream; the position of the left camera when it acquires the first frame is taken as the origin, denoted C_0, and the pose at this time is taken as the initial reference pose;
(3) continuously perform camera visual tracking on the acquired video stream, reconstructing the camera's instantaneous relative pose and obtaining the vehicle's relative coordinate position C_j;
(3.1) at the current time j, extract the features X_R and X_L from the right and left video frames, then rapidly match X_R against X_L to obtain the matched features X_k, where k = 1, 2, … indexes the matched features and j = 1, 2, … indexes the frames of the video stream;
reconstruct, by multi-view geometry, the coordinates of the three-dimensional feature points P_k corresponding to the two-dimensional feature points X_k;
(3.2) remove outliers from the reconstructed three-dimensional feature points, then build a local sparse map M from the remaining points;
(3.3) using an assumed camera motion model, rapidly search the likely range of the two-dimensional feature points in the left (or right) video frame at time j+1, and establish the image feature pairs (X_s^j(x, y), X_s^{j+1}(x, y)) between times j and j+1, where X_s is X_R or X_L and (x, y) is the coordinate position of the two-dimensional feature point;
establish the correspondence between the three-dimensional features P_k^j and P_k^{j+1} from the image feature pairs of the two consecutive frames, and compute their relative transformation T_j^{j+1}; then add the three-dimensional feature points appearing only at time j+1 to the local sparse map M, and optimize the points in M with the nonlinear sparse bundle adjustment (SBA) algorithm;
(3.4) establish the correspondence between the local sparse map M and the three-dimensional feature points at time j+1, then compute the relative pose C_{j+1} of the left (or right) camera at time j+1, obtaining the vehicle's relative coordinate position;
(4) detecting whether the images shot by the left camera or the right camera contain landmarks or not, and positioning the absolute coordinates of the vehicle through the detected landmarks;
(4.1) using a deep learning detector, detect the landmark blocks, denoted B_1, B_2, …, B_n, in each image captured by the left (or right) camera, where n is the number of landmark blocks in the image; record the capture time of each image in which landmark blocks are detected as i, i = 1, 2, …, with i < j;
then match the landmark blocks against the landmark map database to find the landmark blocks' coordinates in the global coordinate system, i.e. longitude, latitude, and elevation;
(4.2) for the retrieved landmark blocks B_1, B_2, …, B_n, compute the two-dimensional image coordinates of each block's centre in the video frame and correct them with the camera's internal parameters; then establish the correspondence between the landmark blocks' two-dimensional image coordinates and the landmarks' three-dimensional coordinates in the global coordinate system;
(4.3) establish an optimized objective function F:

F(R, t) = Σ_{m=1}^{n} d(x̄_m, x̂_m)²

where R is the rotation matrix, t the translation vector, d the Euclidean distance, x̄_m the two-dimensional image coordinates of landmark block B_m, and x̂_m the two-dimensional projection vector computed in the homogeneous coordinate system by projecting the landmark's global three-dimensional coordinates through K(R·P_m + t);
by solving for the optimal R and t, obtain the vehicle's absolute coordinate position W_i in the global coordinate system at the current time i;
(5) establish the relative transformation T_i between the vehicle's absolute coordinate position W_i and the vehicle's relative coordinate position C_i at the corresponding time, where C_i is the element of the relative coordinate sequence corresponding to the time i at which the landmark block was detected; then use T_i to correct the vehicle's relative coordinate positions to absolute coordinate positions in the global coordinate system, converting all vehicle positions at and before time i to absolute positions, thereby completing the instant absolute positioning of the vehicle.
The above object of the invention is achieved as follows:
the invention relates to a landmark map vehicle instant positioning method based on a binocular vision system, which is characterized in that a binocular vision system is utilized to realize a vision odometer, the relative positioning of a vehicle is calculated, and then a deep learning technology is utilized to detect landmarks in an image; and then, carrying out landmark retrieval in a landmark map database, carrying out accurate positioning on the vehicle under a terrestrial coordinate system by utilizing the retrieved landmarks and a nonlinear optimization algorithm, then converting a camera relative coordinate system and the terrestrial coordinate system, and correcting the vehicle positioning of the binocular vision odometer by utilizing the vehicle positioning calculated by the landmarks.
Meanwhile, the method for instantly positioning the landmark map vehicle based on the binocular vision system also has the following beneficial effects:
(1) the binocular vision system is the only sensor needed to realize landmark-map-based instant positioning, which keeps cost low; moreover, it can be shared with the intelligent vehicle's visual perception module, reducing cost further.
(2) compared with the prior art, the invention offers improved performance and reliability, reduced cost, a simplified process, energy savings, and environmental friendliness.
Drawings
FIG. 1 is a binocular vision system based landmark map vehicle instant positioning system architecture diagram of the present invention;
FIG. 2 is a flow chart of the method for instantly positioning a landmark map vehicle based on a binocular vision system according to the present invention;
FIG. 3 is a schematic of a three-dimensional reconstruction;
FIG. 4 is an effect diagram of instant vehicle positioning using the invention.
Detailed Description
The following describes embodiments of the invention with reference to the accompanying drawings so that those skilled in the art can better understand it. Note that in the following description, detailed explanations of known functions and designs are omitted where they might obscure the subject matter of the invention.
Examples
Fig. 1 is a binocular vision system based landmark map vehicle instant positioning system architecture diagram of the present invention.
In this embodiment, as shown in fig. 1, the landmark map vehicle instant positioning system based on the binocular vision system of the present invention mainly includes:
Binocular vision acquisition module: acquires the binocular vision signal, rectifies the images of the binocular camera, and establishes a unified terrestrial coordinate system.
Landmark detection module: a deep neural network model is first trained by deep learning, then used to detect landmarks in real time and return their coordinates in the two-dimensional image;
Landmark map database: used to download the landmark map, which contains the image block of each landmark and its longitude/latitude/elevation coordinates in the terrestrial coordinate system.
Vehicle instant positioning module: takes the outputs of the binocular vision acquisition module, the landmark map database, and the landmark detection module as inputs to realize the instant positioning of the vehicle.
The binocular vision system-based landmark map vehicle instant positioning method is now described in detail with reference to FIG. 2; it comprises the following steps:
S1, perform intrinsic calibration of the binocular vision acquisition module to obtain the internal parameters of the left and right cameras, chiefly the cameras' distortion parameters, focal lengths, and centre (principal-point) offsets, and then construct the intrinsic matrix K from these parameters.
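As an illustration of step S1, a minimal sketch of assembling K from calibrated intrinsics follows. The numeric values and the helper name `intrinsic_matrix` are illustrative assumptions, not taken from the patent; zero skew is also assumed.

```python
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy):
    """Assemble the 3x3 intrinsic matrix K from the focal lengths and
    principal-point offsets obtained by calibration (skew assumed zero)."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

# Example values for one camera of a stereo rig (illustrative only).
K = intrinsic_matrix(fx=720.0, fy=720.0, cx=640.0, cy=360.0)
```

Distortion parameters are handled separately (undistorting image points before use), so K itself carries only the linear projection model.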
S2, the binocular vision acquisition module acquires a video stream; the position of the left camera when it acquires the first frame is taken as the origin, denoted C_0, with the pose at this time taken as the initial reference pose; C_0 and this pose serve as the reference in subsequent coordinate rectification.
S3, continuously perform camera visual tracking on the acquired video stream, reconstructing the camera's instantaneous relative pose and obtaining the vehicle's relative coordinate position C_j.
S3.1, at the current time j, extract the features X_R and X_L from the right and left video frames with an algorithm such as ORB, SURF, or SIFT, then rapidly match X_R against X_L by epipolar-line search to obtain the matched features X_k, where k = 1, 2, … indexes the matched features and j = 1, 2, … indexes the frames of the video stream;
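For binary descriptors such as ORB, the matching in S3.1 can be sketched as brute-force Hamming matching with a ratio test. This is a simplified stand-in for the epipolar-line search the patent describes (no epipolar constraint is applied here); the function name and the ratio value are assumptions.

```python
import numpy as np

def match_hamming(desc_left, desc_right, ratio=0.8):
    """Brute-force Hamming matching of binary descriptors (e.g. ORB),
    one row per descriptor (dtype uint8); a match is kept only if its
    best distance beats ratio * second-best (Lowe's ratio test).
    Assumes desc_right has at least two rows."""
    # Pairwise Hamming distances via XOR and bit counting.
    xor = desc_left[:, None, :] ^ desc_right[None, :, :]
    dist = np.unpackbits(xor, axis=2).sum(axis=2)
    matches = []
    for i, row in enumerate(dist):
        order = np.argsort(row)
        best, second = order[0], order[1]
        if row[best] < ratio * row[second]:
            matches.append((i, int(best)))
    return matches

# Two right descriptors; only the first matches the single left descriptor.
dl = np.array([[255, 0]], dtype=np.uint8)
dr = np.array([[255, 0], [0, 255]], dtype=np.uint8)
print(match_hamming(dl, dr))   # -> [(0, 0)]
```

In a real stereo pipeline the search range per feature would first be restricted to the corresponding epipolar line, which both speeds matching and removes outliers.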
reconstruct, by multi-view geometry, the coordinates of the three-dimensional feature points P_k corresponding to the two-dimensional feature points X_k;
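The multi-view reconstruction of P_k from a matched stereo pair can be sketched with linear (DLT) triangulation. This is one standard way to realize the step, not necessarily the exact method of the patent; the projection matrices below assume canonical intrinsics (K = I) and a 1 m baseline for illustration.

```python
import numpy as np

def triangulate(P_left, P_right, x_left, x_right):
    """Linear (DLT) triangulation of one 3-D point from its matched 2-D
    observations under 3x4 projection matrices P_left and P_right."""
    A = np.vstack([
        x_left[0]  * P_left[2]  - P_left[0],
        x_left[1]  * P_left[2]  - P_left[1],
        x_right[0] * P_right[2] - P_right[0],
        x_right[1] * P_right[2] - P_right[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null vector of A = homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]              # dehomogenise

# Left camera at the origin, right camera 1 m to its right (canonical K).
P_L = np.hstack([np.eye(3), np.zeros((3, 1))])
P_R = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = triangulate(P_L, P_R, np.array([0.0, 0.0]), np.array([-0.2, 0.0]))
print(X)   # -> [0. 0. 5.]
```

The disparity between the two observations fixes the true metric depth, which is exactly the scale information a monocular reconstruction lacks.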
S3.2, remove outliers from the reconstructed three-dimensional feature points, i.e. discard points whose reconstruction error exceeds a threshold, ensuring the accuracy of the reconstructed points (the three-dimensional reconstruction is sketched in FIG. 3); then build a local sparse map M from the remaining points;
S3.3, using an assumed camera motion model, rapidly search the likely range of the two-dimensional feature points in the left (or right) video frame at time j+1, and establish the image feature pairs (X_s^j(x, y), X_s^{j+1}(x, y)) between times j and j+1, where X_s is X_R or X_L and (x, y) is the coordinate position of the two-dimensional feature point;
establish the correspondence between the three-dimensional features P_k^j and P_k^{j+1} from the image feature pairs of the two consecutive frames, and compute their relative transformation T_j^{j+1}; then add the three-dimensional feature points appearing only at time j+1 to the local sparse map M, and optimize the points in M with the nonlinear sparse bundle adjustment (SBA) algorithm;
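The relative transformation between the two sets of corresponding 3-D points can be sketched with the closed-form least-squares rigid alignment (Kabsch algorithm). This is a common building block for such a step, given here under the assumption of already-filtered correspondences; a full system would follow it with SBA refinement as the text says.

```python
import numpy as np

def rigid_transform(P_prev, P_next):
    """Least-squares rigid transform (R, t) mapping the matched 3-D points
    of frame j (rows of P_prev) onto frame j+1 (rows of P_next)."""
    c_prev, c_next = P_prev.mean(axis=0), P_next.mean(axis=0)
    H = (P_prev - c_prev).T @ (P_next - c_next)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guarantees a proper rotation (det R = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_next - R @ c_prev
    return R, t

# Synthetic check: 90-degree rotation about z plus a translation.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
P_j = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
P_j1 = P_j @ Rz.T + t_true
R, t = rigid_transform(P_j, P_j1)
```

In practice this estimate would be wrapped in RANSAC to reject residual mismatches before the bundle adjustment.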
S3.4, establish the correspondence between the local sparse map M and the three-dimensional feature points at time j+1, then compute the relative pose C_{j+1} of the left (or right) camera at time j+1, obtaining the vehicle's relative coordinate position;
s4, detecting whether the images shot by the left camera or the right camera contain the landmark or not, and positioning the absolute coordinate of the vehicle through the detected landmark;
S4.1, using a deep learning detector, detect the landmark blocks, denoted B_1, B_2, …, B_n, in each image captured by the left (or right) camera, where n is the number of landmark blocks in the image; record the capture time of each image in which landmark blocks are detected as i, i = 1, 2, …, with i < j;
then match the landmark blocks against the landmark map database to find the landmark blocks' coordinates in the global coordinate system, i.e. longitude, latitude, and elevation;
In this embodiment, to speed up feature matching on landmark blocks, the matching may be limited to a restricted range of the landmark map database; this range can be provided by a non-high-precision satellite positioning system to assist landmark retrieval and matching;
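One way to realize the range-limited retrieval just described might be a haversine-distance prefilter around the coarse satellite fix. The function names, the tuple layout of database rows, and the 200 m radius are illustrative assumptions.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points (degrees)."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def candidate_landmarks(db, lat, lon, radius_m=200.0):
    """Restrict landmark feature matching to database entries within
    radius_m of a coarse satellite fix; db rows are (landmark_id, lat, lon)."""
    return [r for r in db if haversine_m(lat, lon, r[1], r[2]) <= radius_m]

db = [("a", 30.001, 104.0),   # about 111 m from the fix
      ("b", 30.1, 104.0)]     # about 11 km from the fix
near = candidate_landmarks(db, 30.0, 104.0)
print([r[0] for r in near])   # -> ['a']
```

Only the surviving candidates then go through the (much more expensive) image-feature comparison.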
S4.2, for the retrieved landmark blocks B_1, B_2, …, B_n, compute the two-dimensional image coordinates of each block's centre in the video frame and correct them with the camera's internal parameters; then establish the correspondence between the landmark blocks' two-dimensional image coordinates and the landmarks' three-dimensional coordinates in the global coordinate system;
S4.3, establish an optimized objective function F:

F(R, t) = Σ_{m=1}^{n} d(x̄_m, x̂_m)²

where R is the rotation matrix, t the translation vector, d the Euclidean distance, x̄_m the two-dimensional image coordinates of landmark block B_m, and x̂_m the two-dimensional projection vector computed in the homogeneous coordinate system by projecting the landmark's global three-dimensional coordinates through K(R·P_m + t);
by solving for the optimal R and t, obtain the vehicle's absolute coordinate position W_i in the global coordinate system at the current time i.
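A minimal sketch of evaluating the objective F(R, t) follows, assuming the reconstruction of F above (sum of squared reprojection distances); the function name and test values are illustrative. Minimising this value over R and t, e.g. with a nonlinear least-squares solver, yields the absolute pose.

```python
import numpy as np

def objective_F(R, t, K, landmarks_3d, centres_2d):
    """Value of F(R, t): sum of squared Euclidean distances between each
    landmark block's observed 2-D centre and the projection of its global
    3-D coordinates through K(R P + t) in homogeneous coordinates."""
    F = 0.0
    for P, x_obs in zip(landmarks_3d, centres_2d):
        p = K @ (R @ np.asarray(P) + t)   # homogeneous projection
        x_proj = p[:2] / p[2]             # dehomogenise to pixel coordinates
        F += float(np.sum((np.asarray(x_obs) - x_proj) ** 2))
    return F

K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
pts3d = [np.array([0.0, 0.0, 5.0]), np.array([1.0, 0.0, 5.0])]
obs = [np.array([320.0, 240.0]), np.array([460.0, 240.0])]
F0 = objective_F(np.eye(3), np.zeros(3), K, pts3d, obs)
print(F0)   # -> 0.0 at the true pose
```

At the true pose the residual is zero; any perturbation of R or t increases F, which is what the optimizer exploits.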
S5, establish the relative transformation T_i between the vehicle's absolute coordinate position W_i and the vehicle's relative coordinate position C_i at the corresponding time, where C_i is the element of the relative coordinate sequence corresponding to the time i at which the landmark block was detected; then use T_i to correct the vehicle's relative coordinate positions to absolute coordinate positions in the global coordinate system, converting all vehicle positions at and before time i to absolute positions, thereby completing the instant absolute positioning of the vehicle.
In this embodiment, i denotes a time at which a landmark is detected and j the running time of the video stream, so i << j. The absolute coordinate position W_i at time i is used to correct the corresponding relative coordinate position C_i, and also to correct the vehicle's relative coordinate positions at the times before j at which no landmark was detected in the video stream. FIG. 4 shows the experimental results of the vehicle instant positioning system: the left camera image is shown on the left and the vehicle's instant positioning result on the right.
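The correction in S5 can be sketched with 4x4 homogeneous transforms: T_i is the transform taking the odometry pose C_i onto the landmark-derived absolute pose W_i, and it is then applied to the whole earlier relative trajectory. Poses are simplified to pure translations here for illustration; the helper names are assumptions.

```python
import numpy as np

def translation(x, y, z):
    """4x4 homogeneous transform for a pure translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def correction_transform(W_i, C_i):
    """T_i mapping the visual-odometry pose C_i onto the absolute pose W_i
    computed from landmarks (both 4x4 homogeneous matrices)."""
    return W_i @ np.linalg.inv(C_i)

def correct_trajectory(T_i, relative_poses):
    """Convert every relative pose at or before time i to the global frame."""
    return [T_i @ C for C in relative_poses]

# Odometry says the vehicle is at x = 2; landmarks place it at x = 5.
C_seq = [translation(1, 0, 0), translation(2, 0, 0)]
W_i = translation(5, 0, 0)
T_i = correction_transform(W_i, C_seq[-1])
W_seq = correct_trajectory(T_i, C_seq)
```

After the correction, the latest pose coincides with the landmark fix and all earlier poses shift consistently, removing the accumulated odometry drift up to time i.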
Although illustrative embodiments of the invention have been described above to help those skilled in the art understand it, the invention is not limited to the scope of these embodiments. Various changes will be apparent to those skilled in the art within the spirit and scope of the invention as defined by the appended claims, and all matter utilizing the inventive concept is protected.
Claims (2)
1. A landmark map vehicle instant positioning method based on a binocular vision system is characterized by comprising the following steps:
(1) carrying out internal reference calibration on the binocular vision system, acquiring internal parameters of a left camera and a right camera in the binocular vision system, and constructing an internal reference matrix K;
(2) the binocular vision system acquires a video stream; the position of the centre of the left camera when it captures the first image is taken as the origin, denoted C_0, with the pose at this time taken as the initial reference pose;
(3) continuously perform camera visual tracking on the acquired video stream, reconstructing the camera's instantaneous relative pose and obtaining the vehicle's relative coordinate position C_j;
(3.1) at the current time j, extract the features X_R and X_L from the right and left video frames, then rapidly match X_R against X_L to obtain the matched features X_k, where k = 1, 2, … indexes the matched features;
reconstruct, by multi-view geometry, the coordinates of the three-dimensional feature points P_k corresponding to the two-dimensional feature points X_k;
(3.2) remove outliers from the reconstructed three-dimensional feature points, then build a local sparse map M from the remaining points;
(3.3) using an assumed camera motion model, rapidly search the likely range of the two-dimensional feature points in the left (or right) video frame at time j+1, and establish the image feature pairs (X_s^j(x, y), X_s^{j+1}(x, y)) between times j and j+1, where X_s is X_R or X_L and (x, y) is the coordinate position of the two-dimensional feature point;
establish the correspondence between the three-dimensional features P_k^j and P_k^{j+1} from the image feature pairs of the two consecutive frames, and compute their relative transformation T_j^{j+1}; then add the three-dimensional feature points appearing only at time j+1 to the local sparse map M, and optimize the points in M with the nonlinear sparse bundle adjustment (SBA) algorithm;
(3.4) establish the correspondence between the local sparse map M and the three-dimensional feature points at time j+1, then compute the relative pose C_{j+1} of the left (or right) camera at time j+1, obtaining the vehicle's relative coordinate position;
(4) detecting whether the images shot by the left camera or the right camera contain landmarks or not, and positioning the absolute coordinates of the vehicle through the detected landmarks;
(4.1) using a deep learning detector, detect the landmark blocks, denoted B_1, B_2, …, B_n, in each image captured by the left (or right) camera, where n is the number of landmark blocks in the image; record the capture time of each image in which landmark blocks are detected as i, i = 1, 2, …, with i < j;
then match the landmark blocks against the landmark map database to find the landmark blocks' coordinates in the global coordinate system, i.e. longitude, latitude, and elevation;
(4.2) for the retrieved landmark blocks B_1, B_2, …, B_n, compute the two-dimensional image coordinates of each block's centre in the video frame and correct them with the camera's internal parameters; then establish the correspondence between the landmark blocks' two-dimensional image coordinates and the landmarks' three-dimensional coordinates in the global coordinate system;
(4.3) establish an optimization objective function F for the error between the landmark blocks' two-dimensional image coordinates and the landmarks' three-dimensional coordinates in the global coordinate system:

F(R, t) = Σ_{m=1}^{n} d(x̄_m, x̂_m)²

where R is the rotation matrix, t the translation vector, d the Euclidean distance, x̄_m the two-dimensional image coordinates of landmark block B_m, and x̂_m the two-dimensional projection vector computed in the homogeneous coordinate system by projecting the landmark's global three-dimensional coordinates through K(R·P_m + t);
by solving for the optimal R and t, obtain the vehicle's absolute coordinate position W_i in the global coordinate system at the current time i;
(5) establish the relative transformation T_i between the vehicle's absolute coordinate position W_i and the vehicle's relative coordinate position C_i at the corresponding time, where C_i is the element of the relative coordinate sequence corresponding to the time i at which the landmark block was detected; then use T_i to correct the vehicle's relative coordinate positions to absolute coordinate positions in the global coordinate system, converting all vehicle positions at and before time i to absolute positions, thereby completing the instant absolute positioning of the vehicle.
2. The binocular vision system-based landmark map vehicle instant positioning method of claim 1, wherein the left and right camera intrinsic parameters comprise distortion parameters, focal length and center offset of the cameras.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810337903.7A CN108534782B (en) | 2018-04-16 | 2018-04-16 | Binocular vision system-based landmark map vehicle instant positioning method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108534782A CN108534782A (en) | 2018-09-14 |
CN108534782B true CN108534782B (en) | 2021-08-17 |
Legal Events
- PB01 — Publication
- SE01 — Entry into force of request for substantive examination
- GR01 — Patent grant