CN110514212A - Intelligent-vehicle map landmark localization method fusing monocular vision and differential GNSS - Google Patents
Intelligent-vehicle map landmark localization method fusing monocular vision and differential GNSS
- Publication number
- CN110514212A CN110514212A CN201910684352.6A CN201910684352A CN110514212A CN 110514212 A CN110514212 A CN 110514212A CN 201910684352 A CN201910684352 A CN 201910684352A CN 110514212 A CN110514212 A CN 110514212A
- Authority
- CN
- China
- Prior art keywords
- landmark
- image
- intelligent vehicle
- point
- monocular vision
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 26
- 230000004807 localization Effects 0.000 title claims abstract description 17
- 238000001514 detection method Methods 0.000 claims abstract description 11
- 230000003287 optical effect Effects 0.000 claims abstract description 6
- 238000000605 extraction Methods 0.000 claims abstract 4
- 239000000284 extract Substances 0.000 claims abstract 2
- 230000009466 transformation Effects 0.000 claims description 5
- 238000003384 imaging method Methods 0.000 claims description 4
- 238000013135 deep learning Methods 0.000 claims description 3
- 230000001360 synchronised effect Effects 0.000 claims description 3
- 238000004364 calculation method Methods 0.000 claims 1
- 238000010276 construction Methods 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000007689 inspection Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/01—Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/03—Cooperating elements; Interaction or communication between different cooperating elements or between cooperating elements and receivers
- G01S19/10—Cooperating elements; Interaction or communication between different cooperating elements or between cooperating elements and receivers providing dedicated supplementary positioning signals
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
- G01S19/48—Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
Abstract
The invention discloses an intelligent-vehicle map landmark localization method fusing monocular vision and differential GNSS, comprising five steps: sensor synchronization, data preprocessing, landmark detection and landmark feature-point extraction, landmark tracking and positioning, and landmark descriptor computation. The method synchronizes the sensors to obtain the spatial coordinates of the vehicle and camera corresponding to each image; detects landmarks in the image and extracts landmark feature points through data processing; tracks the image feature points with an optical-flow method; and, by extracting and tracking multiple feature points of the same landmark, averages their estimates to compute the landmark's position. After a landmark is successfully positioned, features are extracted from the detected landmark to build a unique descriptor, and the landmark's position, type, and descriptor are stored in a database.
Description
Technical field
The present invention relates to the field of map localization for autonomous driving, and in particular to an intelligent-vehicle map landmark localization method fusing monocular vision and differential GNSS.
Background technique
With the rapid development of location-based services and large-scale construction, people's demand for location services keeps growing, and fast, accurate positioning has become an urgent need.
Currently, intelligent-vehicle mapping falls into two main categories: lidar-based mapping and vision-based mapping. Lidar-based mapping obtains landmark spatial positions from point-cloud coordinates; this method is computationally intensive, places demands on the hardware, and has poor real-time performance. Vision-based landmark positioning uses a binocular camera and obtains landmark spatial positions directly by stereo ranging, but this method has low robustness and large error.
Summary of the invention
The object of the present invention is to address the above problems by proposing an intelligent-vehicle map landmark localization method fusing monocular vision and differential GNSS.
An intelligent-vehicle map landmark localization method fusing monocular vision and differential GNSS comprises the following steps:
S1: synchronize the sensors to obtain the vehicle and camera spatial coordinates corresponding to each image;
S2: process the data to detect landmarks in the image and extract landmark feature points;
S3: track the image feature points with an optical-flow method;
S4: compute the landmark positions and compute the landmark descriptors.
In the method, S1 comprises the following sub-steps:
S11: Build the acquisition system: mount a camera and the dual differential-GNSS positioning antennas on the vehicle roof so that the camera and the GNSS antennas lie in the same plane, the camera faces the same direction as the dual-antenna baseline, and the distance between the camera and the positioning antenna is d;
S12: Synchronize by nearest neighbour in time using the timestamps of the different sensors' data, so that each image frame obtains one group of standardized data satisfying:

e_i = (P_i, V_i)

where P_i denotes the vehicle position and attitude corresponding to the frame, i.e. (x_i, y_i, z_i, α_i, β_i, γ_i), in which (x_i, y_i, z_i) are the camera coordinates on the vehicle and (α_i, β_i, γ_i) are the three attitude angles of the camera on the vehicle, and V_i denotes the image corresponding to the current pose;
S13: Correct each image with the camera distortion parameters, and convert the position obtained from GNSS into the camera position using the relative positions between the sensors.
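The nearest-neighbour synchronisation of S12 can be sketched as follows; the array layout of the GNSS records is an illustrative assumption, not the patent's data format.

```python
import numpy as np

def sync_nearest(image_ts, gnss_ts, gnss_poses):
    """Pair each image with the GNSS pose whose timestamp is nearest (S12).

    image_ts   : iterable of image timestamps, seconds
    gnss_ts    : (M,) array of GNSS timestamps, seconds
    gnss_poses : (M, 6) array of (x, y, z, alpha, beta, gamma) records
    Returns a list of e_i = (P_i, i) pairs, P_i being the matched pose and
    i the image index standing in for the image V_i.
    """
    gnss_ts = np.asarray(gnss_ts, dtype=float)
    pairs = []
    for i, t in enumerate(image_ts):
        j = int(np.argmin(np.abs(gnss_ts - t)))  # nearest neighbour in time
        pairs.append((gnss_poses[j], i))
    return pairs

img_t = [0.10, 0.21, 0.33]                        # image timestamps
gps_t = [0.00, 0.12, 0.22, 0.30]                  # GNSS timestamps
poses = np.arange(24, dtype=float).reshape(4, 6)  # dummy pose records
synced = sync_nearest(img_t, gps_t, poses)
print([int(p[0] // 6) for p, _ in synced])        # indices of the chosen records
```

With these timestamps, records 1, 2 and 3 are the nearest in time to the three images.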
S2 comprises the following sub-steps:
S21: Detect the landmarks in the image with a deep-learning algorithm; the detection result is the landmark type (ID) and its two-dimensional position in the image;
S22: Extract ORB features from the image region in which a landmark was detected.
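ORB (S22) pairs a FAST corner detector with a rotation-aware BRIEF binary descriptor matched by Hamming distance. The toy sketch below shows only the binary-descriptor idea; the made-up random sampling pattern is not ORB's actual pattern, and a real system would use a library implementation such as OpenCV's.

```python
import numpy as np

def binary_descriptor(patch, pairs):
    """BRIEF-style descriptor: bit k is 1 if intensity at p_k < intensity at q_k."""
    return np.array([patch[p] < patch[q] for p, q in pairs], dtype=np.uint8)

def hamming(d1, d2):
    """Descriptor distance used for matching binary features."""
    return int(np.sum(d1 != d2))

rng = np.random.default_rng(0)
# A fixed (here randomly chosen) set of 32 pixel-pair tests inside a 7x7 patch
pairs = [((int(a), int(b)), (int(c), int(d)))
         for a, b, c, d in rng.integers(0, 7, size=(32, 4))]

patch = rng.integers(0, 200, size=(7, 7))
shifted = binary_descriptor(patch + 20, pairs)  # uniform brightness change
print(hamming(binary_descriptor(patch, pairs), shifted))  # 0: comparisons invariant
```

The zero Hamming distance illustrates why such descriptors tolerate uniform lighting changes: each bit depends only on the order of two intensities, not their absolute values.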
Step S3 comprises the following sub-steps:
S31: To judge whether a local region appearing in two consecutive frames I and J is the same target, it must satisfy:

I(x, y, t) = J(x′, y′, t + Δt)

where every point (x, y) moves in the same direction by (d_x, d_y) to give (x′, y′).
S32: A point (x, y) at time t is at (x + d_x, y + d_y) at time t + τ, so the matching problem reduces to minimizing:

ε(d) = Σ_{x = u_x − w_x}^{u_x + w_x} Σ_{y = u_y − w_y}^{u_y + w_y} ( I(x, y) − J(x + d_x, y + d_y) )²

where w_x and w_y are half the window size W and (u_x, u_y) are the image coordinates of the point to be matched. To obtain the best match, ε is minimized by setting its derivative to zero; the d that solves this is the tracked offset.
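Setting the derivative of ε to zero yields the classical Lucas-Kanade 2x2 linear system. A minimal single-window sketch follows (no image pyramid and no iterative refinement, both of which a practical tracker would add):

```python
import numpy as np

def lk_offset(I, J, ux, uy, w):
    """Solve for d = (dx, dy) minimizing the windowed SSD between I and the
    shifted J, via the Lucas-Kanade normal equations G d = b."""
    ys, xs = slice(uy - w, uy + w + 1), slice(ux - w, ux + w + 1)
    Ix = (np.roll(I, -1, axis=1) - np.roll(I, 1, axis=1))[ys, xs] / 2.0  # x gradient
    Iy = (np.roll(I, -1, axis=0) - np.roll(I, 1, axis=0))[ys, xs] / 2.0  # y gradient
    dI = (I - J)[ys, xs]                                                 # temporal difference
    G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = np.array([np.sum(Ix * dI), np.sum(Iy * dI)])
    return np.linalg.solve(G, b)

# Synthetic check: a Gaussian blob that moves one pixel to the right
yy, xx = np.mgrid[0:32, 0:32].astype(float)
I = np.exp(-((xx - 15) ** 2 + (yy - 16) ** 2) / 18.0)
J = np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / 18.0)  # same blob, shifted in x
d = lk_offset(I, J, ux=15, uy=16, w=10)
print(d)  # dx close to 1, dy close to 0
```

Because the linearization of J around d = 0 is only first-order accurate, the recovered dx is close to, but not exactly, the true one-pixel shift; iterating the solve (warping J by the current estimate) would refine it.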
Step S4 further comprises the following sub-steps:
S41: During landmark positioning, obtain the (N + 1) imaging points of the landmark in the consecutive frames from time t to time t + N;
S42: Obtain the landmark position from the image observations using the coordinate-system transformation relationship, which satisfies:

z_0 = Z·cos θ_1

where (x_0, y_0, z_0) is the position of the landmark point, which can be computed from the same landmark point observed in multiple image frames whose camera positions are known;
S43: Extract and track multiple feature points of the same landmark, and finally average their estimates to compute the landmark's position.
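Steps S41 to S43 amount to intersecting, for each tracked feature point, the viewing rays cast from several known camera positions, then averaging over the landmark's feature points. The least-squares ray intersection below is a standard construction used to illustrate the idea; the patent itself states only the relation z_0 = Z·cos θ_1.

```python
import numpy as np

def intersect_rays(origins, dirs):
    """Least-squares point nearest to all rays (o_k, v_k): solves
    sum_k (I - v_k v_k^T)(p - o_k) = 0, minimizing squared ray distances."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for o, v in zip(origins, dirs):
        v = np.asarray(v, dtype=float)
        v = v / np.linalg.norm(v)
        P = np.eye(3) - np.outer(v, v)  # projector onto the plane normal to the ray
        A += P
        b += P @ np.asarray(o, dtype=float)
    return np.linalg.solve(A, b)

# Two camera positions (known from GNSS) observing the same landmark point
target = np.array([0.0, 0.0, 10.0])
origins = [np.array([-2.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])]
dirs = [target - o for o in origins]  # bearing rays toward the landmark
point = intersect_rays(origins, dirs)

# S43: average the positions recovered from several feature points of one landmark
estimates = np.stack([point, point + [0.01, 0.0, 0.0], point - [0.01, 0.0, 0.0]])
print(estimates.mean(axis=0))  # landmark position, numerically close to (0, 0, 10)
```

With noisy real rays the system stays well conditioned as long as the viewing directions are not all parallel, which is why tracking the landmark across a moving vehicle's frames (S41) is what makes the depth observable from a monocular camera.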
Brief description of the drawings
Fig. 1: framework of the landmark positioning system;
Fig. 2: image points of a landmark in images at different moments;
Fig. 3: landmark-positioning schematic.
Specific embodiment
To give a clearer understanding of the technical features, objects, and effects of the present invention, specific embodiments of the invention are now described with reference to the drawings.
In the present embodiment, an intelligent-vehicle map landmark localization method fusing monocular vision and differential GNSS comprises the following steps:
S1: synchronize the sensors to obtain the vehicle and camera spatial coordinates corresponding to each image;
S2: process the data to detect landmarks in the image and extract landmark feature points;
S3: track the image feature points with an optical-flow method;
S4: compute the landmark positions and compute the landmark descriptors.
In the method, S1 comprises the following sub-steps:
S11: Build the acquisition system: mount a camera and the dual differential-GNSS positioning antennas on the vehicle roof so that the camera and the GNSS antennas lie in the same plane, the camera faces the same direction as the dual-antenna baseline, and the distance between the camera and the positioning antenna is d;
S12: Synchronize by nearest neighbour in time using the timestamps of the different sensors' data, so that each image frame obtains one group of standardized data satisfying:

e_i = (P_i, V_i)

where P_i denotes the vehicle position and attitude corresponding to the frame, i.e. (x_i, y_i, z_i, α_i, β_i, γ_i), in which (x_i, y_i, z_i) are the camera coordinates on the vehicle and (α_i, β_i, γ_i) are the three attitude angles of the camera on the vehicle, and V_i denotes the image corresponding to the current pose;
S13: Correct each image with the camera distortion parameters, and convert the position obtained from GNSS into the camera position using the relative positions between the sensors.
In the method, S2 comprises the following sub-steps:
S21: Detect the landmarks in the image (e.g. traffic lights, traffic signs, road-surface markings) with a deep-learning algorithm; the detection result is the landmark type (ID) and its two-dimensional position in the image, the position of a landmark in the image being expressed by two pixel points as Rect(T1, T2, B1, B2).
S22: Extract ORB features from the image region in which a landmark was detected.
In the method, step S3 comprises the following sub-steps:
S31: Track feature points between frames with an optical-flow method; if a feature point appears outside the landmark detection box in some frame, that feature point is deleted. The optical-flow method rests on the assumptions that a target in the video stream produces only small, consistent displacements, that brightness is constant, and that adjacent frames undergo similar motion. To judge whether a local region appearing in two consecutive frames I and J is the same target, it must satisfy:

I(x, y, t) = J(x′, y′, t + Δt)

where every point (x, y) moves in the same direction by (d_x, d_y) to give (x′, y′).
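The pruning rule in S31, discarding tracked points that leave the detection box, can be sketched as below; interpreting the two pixel points of S21's Rect(T1, T2, B1, B2) as top-left and bottom-right corners is an assumption, since the patent does not define the layout.

```python
import numpy as np

def inside_box(points, top_left, bottom_right):
    """Boolean mask over (x, y) points: True where the point lies inside the
    landmark detection box given by its two corner pixels."""
    pts = np.asarray(points, dtype=float)
    (x1, y1), (x2, y2) = top_left, bottom_right
    return ((pts[:, 0] >= x1) & (pts[:, 0] <= x2) &
            (pts[:, 1] >= y1) & (pts[:, 1] <= y2))

tracked = [(10, 12), (50, 40), (3, 3)]        # feature points after optical flow
keep = inside_box(tracked, (5, 5), (60, 30))  # box corners: top-left, bottom-right
print(keep.tolist())  # [True, False, False]: the last two points are deleted
```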
S32: A point (x, y) at time t is at (x + d_x, y + d_y) at time t + τ, so the matching problem reduces to minimizing:

ε(d) = Σ_{x = u_x − w_x}^{u_x + w_x} Σ_{y = u_y − w_y}^{u_y + w_y} ( I(x, y) − J(x + d_x, y + d_y) )²

where w_x and w_y are half the window size W and (u_x, u_y) are the image coordinates of the point to be matched. To obtain the best match, ε is minimized by setting its derivative to zero; the d that solves this is the tracked offset.
Step S4 further comprises the following sub-steps:
S41: As shown in Fig. 2, during landmark positioning the (N + 1) consecutive images from time t to time t + N contain the imaging points of the same three-dimensional landmark point in different images; O is the camera optical centre at each moment, and Z denotes the (unknown) distance between the three-dimensional landmark point and the camera at each moment.
S42: Fig. 3 is the landmark-positioning schematic. From the camera pinhole imaging model, the two angles of the vector from the image landmark point to the camera centre are obtained; by the coordinate-system transformation relationship:

z_0 = Z·cos θ_1

where (x_0, y_0, z_0) is the position of the landmark point, which can be computed from the same landmark point observed in multiple image frames whose camera positions are known;
S43: Extract and track multiple feature points of the same landmark, and finally average their estimates to compute the landmark's position.
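The two angles of S42 can be read off the back-projected pixel ray under the pinhole model. The intrinsics and the specific angle convention below (θ1 measured off the optical axis, so that z0 = Z·cos θ1, and θ2 the in-plane azimuth) are illustrative assumptions, as the patent does not spell them out.

```python
import numpy as np

def bearing_angles(u, v, fx, fy, cx, cy):
    """Back-project pixel (u, v) through a pinhole camera with intrinsics
    (fx, fy, cx, cy) and return (theta1, theta2): theta1 is the angle
    between the viewing ray and the optical axis, so the depth component
    satisfies z0 = Z * cos(theta1); theta2 is the ray's azimuth in the
    image plane (one possible convention)."""
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    ray /= np.linalg.norm(ray)
    theta1 = np.arccos(ray[2])
    theta2 = np.arctan2(ray[1], ray[0])
    return theta1, theta2

# A pixel at the principal point looks straight down the optical axis
t1, t2 = bearing_angles(u=400, v=300, fx=800.0, fy=800.0, cx=400.0, cy=300.0)
print(t1)  # 0.0: on-axis ray, so z0 = Z
```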
The invention proposes an intelligent-vehicle map landmark localization method fusing monocular vision and differential GNSS that achieves fast, high-precision landmark positioning. The basic principles, main features, and advantages of the invention have been shown and described above. Those skilled in the art will understand that the invention is not limited to the above embodiments, which, together with the description, merely illustrate its principles; various changes and improvements may be made without departing from the spirit and scope of the invention, and all such changes and improvements fall within the claimed scope of protection. The claimed scope of the invention is defined by the appended claims and their equivalents.
Claims (5)
1. An intelligent-vehicle map landmark localization method fusing monocular vision and differential GNSS, characterized by comprising the following steps:
S1: preprocessing: acquire data with the constructed acquisition system;
S2: image landmark detection: detect landmarks with a deep-learning algorithm to obtain landmark detection results;
S3: landmark feature-point extraction: extract ORB feature points from the above image landmark detection results;
S4: landmark tracking: track between frames with an optical-flow method to obtain, in the current frame, the feature points corresponding to those of the previous frame;
S5: landmark positioning: triangulate the multiple feature points extracted from one landmark and average the results to represent the current landmark position;
S6: landmark descriptor computation: extract a descriptor from the landmark image to obtain a unique description of the landmark;
S7: landmark storage: repeat steps S2-S6 to position different landmarks, and store the positions of the different landmarks in a landmark database for constructing the intelligent-vehicle map.
2. The intelligent-vehicle map landmark localization method fusing monocular vision and differential GNSS according to claim 1, characterized in that S1 comprises the following sub-steps:
S11: build the acquisition system: mount a camera and the dual differential-GNSS positioning antennas on the vehicle roof;
S12: synchronize by nearest neighbour in time using the timestamps of the different sensors' data, so that each image frame obtains one group of standardized data satisfying:

e_i = (P_i, V_i)

where P_i denotes the vehicle position and attitude corresponding to the frame, i.e. (x_i, y_i, z_i, α_i, β_i, γ_i), in which (x_i, y_i, z_i) are the camera coordinates on the vehicle and (α_i, β_i, γ_i) are the three attitude angles of the camera on the vehicle, and V_i denotes the image corresponding to the current pose;
S13: convert to the camera position using the relative positions between the sensors.
3. The intelligent-vehicle map landmark localization method fusing monocular vision and differential GNSS according to claim 1, characterized in that the landmark detection result is the landmark type and the two-dimensional position in the detection image, the two-dimensional position in the detection image being expressed by two pixel points.
4. The intelligent-vehicle map landmark localization method fusing monocular vision and differential GNSS according to claim 1, characterized in that step S4 comprises the following sub-steps:
S41: judge whether a local region appearing in two consecutive frames I and J is the same target; if so, it must satisfy:

I(x, y, t) = J(x′, y′, t + Δt)

where every point (x, y) moves in the same direction by (d_x, d_y) to give (x′, y′);
S42: a point (x, y) at time t is at (x + d_x, y + d_y) at time t + τ, so the matching problem reduces to minimizing:

ε(d) = Σ_{x = u_x − w_x}^{u_x + w_x} Σ_{y = u_y − w_y}^{u_y + w_y} ( I(x, y) − J(x + d_x, y + d_y) )²

where w_x and w_y are half the window size W and (u_x, u_y) are the image coordinates of the point to be matched. To obtain the best match, ε is minimized by setting its derivative to zero; the d that solves this is the tracked offset.
5. The intelligent-vehicle map landmark localization method fusing monocular vision and differential GNSS according to claim 1, characterized in that step S5 further comprises the following sub-steps:
S51: during landmark positioning, obtain the (N + 1) imaging points of the landmark in the consecutive frames from time t to time t + N;
S52: obtain the landmark position from the image observations using the coordinate-system transformation relationship, which satisfies:

z_0 = Z·cos θ_1

where (x_0, y_0, z_0) is the position of the landmark point, computed from the same landmark point observed in multiple image frames;
S53: extract and track multiple feature points of the same landmark, and finally average their estimates to compute the landmark's position.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910684352.6A CN110514212A (en) | 2019-07-26 | 2019-07-26 | Intelligent-vehicle map landmark localization method fusing monocular vision and differential GNSS |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910684352.6A CN110514212A (en) | 2019-07-26 | 2019-07-26 | Intelligent-vehicle map landmark localization method fusing monocular vision and differential GNSS |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110514212A true CN110514212A (en) | 2019-11-29 |
Family
ID=68624160
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910684352.6A Pending CN110514212A (en) | 2019-07-26 | 2019-07-26 | A kind of intelligent vehicle map terrestrial reference localization method merging monocular vision and difference GNSS |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110514212A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111273674A (en) * | 2020-03-12 | 2020-06-12 | 深圳冰河导航科技有限公司 | Distance measurement method, vehicle operation control method and control system |
CN111337950A (en) * | 2020-05-21 | 2020-06-26 | 深圳市西博泰科电子有限公司 | Data processing method, device, equipment and medium for improving landmark positioning precision |
CN111611913A (en) * | 2020-05-20 | 2020-09-01 | 北京海月水母科技有限公司 | Human-shaped positioning technology of monocular face recognition probe |
CN111856499A (en) * | 2020-07-30 | 2020-10-30 | 浙江大华技术股份有限公司 | Map construction method and device based on laser radar |
CN113358125A (en) * | 2021-04-30 | 2021-09-07 | 西安交通大学 | Navigation method and system based on environmental target detection and environmental target map |
CN114742885A (en) * | 2022-06-13 | 2022-07-12 | 山东省科学院海洋仪器仪表研究所 | Target consistency judgment method in binocular vision system |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080167814A1 (en) * | 2006-12-01 | 2008-07-10 | Supun Samarasekera | Unified framework for precise vision-aided navigation |
CN105674993A (en) * | 2016-01-15 | 2016-06-15 | 武汉光庭科技有限公司 | Binocular camera-based high-precision visual sense positioning map generation system and method |
CN108534782A (en) * | 2018-04-16 | 2018-09-14 | 电子科技大学 | A kind of instant localization method of terrestrial reference map vehicle based on binocular vision system |
CN108801274A (en) * | 2018-04-16 | 2018-11-13 | 电子科技大学 | A kind of terrestrial reference ground drawing generating method of fusion binocular vision and differential satellite positioning |
CN108986037A (en) * | 2018-05-25 | 2018-12-11 | 重庆大学 | Monocular vision odometer localization method and positioning system based on semi-direct method |
CN109544636A (en) * | 2018-10-10 | 2019-03-29 | 广州大学 | A kind of quick monocular vision odometer navigation locating method of fusion feature point method and direct method |
CN109583409A (en) * | 2018-12-07 | 2019-04-05 | 电子科技大学 | A kind of intelligent vehicle localization method and system towards cognitive map |
- 2019-07-26: CN application CN201910684352.6A filed, patent CN110514212A (en), status: Pending
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080167814A1 (en) * | 2006-12-01 | 2008-07-10 | Supun Samarasekera | Unified framework for precise vision-aided navigation |
CN105674993A (en) * | 2016-01-15 | 2016-06-15 | 武汉光庭科技有限公司 | Binocular camera-based high-precision visual sense positioning map generation system and method |
CN108534782A (en) * | 2018-04-16 | 2018-09-14 | 电子科技大学 | A kind of instant localization method of terrestrial reference map vehicle based on binocular vision system |
CN108801274A (en) * | 2018-04-16 | 2018-11-13 | 电子科技大学 | A kind of terrestrial reference ground drawing generating method of fusion binocular vision and differential satellite positioning |
CN108986037A (en) * | 2018-05-25 | 2018-12-11 | 重庆大学 | Monocular vision odometer localization method and positioning system based on semi-direct method |
CN109544636A (en) * | 2018-10-10 | 2019-03-29 | 广州大学 | A kind of quick monocular vision odometer navigation locating method of fusion feature point method and direct method |
CN109583409A (en) * | 2018-12-07 | 2019-04-05 | 电子科技大学 | A kind of intelligent vehicle localization method and system towards cognitive map |
Non-Patent Citations (3)
Title |
---|
LI Cheng et al.: "High-precision positioning algorithm for intelligent vehicles based on GPS and image fusion", Journal of Transportation Systems Engineering and Information Technology * |
LI Cheng et al.: "Visual map construction of road environments for intelligent-vehicle localization", China Journal of Highway and Transport * |
LUO Peipei: "Intelligent-vehicle localization system oriented to cognitive maps and its application", China Master's Theses Full-text Database, Engineering Science and Technology II * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111273674A (en) * | 2020-03-12 | 2020-06-12 | 深圳冰河导航科技有限公司 | Distance measurement method, vehicle operation control method and control system |
CN111611913A (en) * | 2020-05-20 | 2020-09-01 | 北京海月水母科技有限公司 | Human-shaped positioning technology of monocular face recognition probe |
CN111337950A (en) * | 2020-05-21 | 2020-06-26 | 深圳市西博泰科电子有限公司 | Data processing method, device, equipment and medium for improving landmark positioning precision |
CN111337950B (en) * | 2020-05-21 | 2020-10-30 | 深圳市西博泰科电子有限公司 | Data processing method, device, equipment and medium for improving landmark positioning precision |
CN111999745A (en) * | 2020-05-21 | 2020-11-27 | 深圳市西博泰科电子有限公司 | Data processing method, device and equipment for improving landmark positioning precision |
CN111856499A (en) * | 2020-07-30 | 2020-10-30 | 浙江大华技术股份有限公司 | Map construction method and device based on laser radar |
CN113358125A (en) * | 2021-04-30 | 2021-09-07 | 西安交通大学 | Navigation method and system based on environmental target detection and environmental target map |
CN113358125B (en) * | 2021-04-30 | 2023-04-28 | 西安交通大学 | Navigation method and system based on environment target detection and environment target map |
CN114742885A (en) * | 2022-06-13 | 2022-07-12 | 山东省科学院海洋仪器仪表研究所 | Target consistency judgment method in binocular vision system |
CN114742885B (en) * | 2022-06-13 | 2022-08-26 | 山东省科学院海洋仪器仪表研究所 | Target consistency judgment method in binocular vision system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110514212A (en) | Intelligent-vehicle map landmark localization method fusing monocular vision and differential GNSS | |
US10386188B2 (en) | Geo-location or navigation camera, and aircraft and navigation method therefor | |
AU2018282302B2 (en) | Integrated sensor calibration in natural scenes | |
CN106548173B (en) | A kind of improvement no-manned plane three-dimensional information acquisition method based on classification matching strategy | |
US9542600B2 (en) | Cloud feature detection | |
WO2017080108A1 (en) | Flying device, flying control system and method | |
JP6002126B2 (en) | Method and apparatus for image-based positioning | |
CN109583409A (en) | A kind of intelligent vehicle localization method and system towards cognitive map | |
US20160238394A1 (en) | Device for Estimating Position of Moving Body and Method for Estimating Position of Moving Body | |
US10909395B2 (en) | Object detection apparatus | |
CN105352509B (en) | Unmanned plane motion target tracking and localization method under geography information space-time restriction | |
CN109471096B (en) | Multi-sensor target matching method and device and automobile | |
CN104197928A (en) | Multi-camera collaboration-based method for detecting, positioning and tracking unmanned aerial vehicle | |
McManus et al. | Towards appearance-based methods for lidar sensors | |
US20170372120A1 (en) | Cloud feature detection | |
CN108549376A (en) | A kind of navigation locating method and system based on beacon | |
KR101255461B1 (en) | Position Measuring Method for street facility | |
JP2019056629A (en) | Distance estimation device and method | |
US20180012060A1 (en) | Detecting and ranging cloud features | |
KR101803340B1 (en) | Visual odometry system and method | |
CN103456027B (en) | Time sensitivity target detection positioning method under airport space relation constraint | |
KR20170058612A (en) | Indoor positioning method based on images and system thereof | |
Chenchen et al. | A camera calibration method for obstacle distance measurement based on monocular vision | |
JPH11250252A (en) | Three-dimensional object recognizing device and method therefor | |
CN113850864B (en) | GNSS/LIDAR loop detection method for outdoor mobile robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20191129 |