CN109784189A - Scene matching method and device for video satellite remote sensing images based on deep learning - Google Patents

Scene matching method and device for video satellite remote sensing images based on deep learning

Info

Publication number
CN109784189A
CN109784189A CN201811554410.5A
Authority
CN
China
Prior art keywords
remote sensing
real-time
matched
feature vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811554410.5A
Other languages
Chinese (zh)
Inventor
张学阳
杨雪榕
杨雅君
方宇强
殷智勇
潘升东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peoples Liberation Army Strategic Support Force Aerospace Engineering University
Original Assignee
Peoples Liberation Army Strategic Support Force Aerospace Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peoples Liberation Army Strategic Support Force Aerospace Engineering University filed Critical Peoples Liberation Army Strategic Support Force Aerospace Engineering University
Priority to CN201811554410.5A priority Critical patent/CN109784189A/en
Publication of CN109784189A publication Critical patent/CN109784189A/en
Pending legal-status Critical Current


Abstract

This application discloses a scene matching method and device for video satellite remote sensing images based on deep learning. The method comprises the following steps. Step S100: obtain a reference remote sensing image, obtain a real-time image to be matched from a video satellite, and scale the real-time image to be matched. Step S200: compute the feature vector of the real-time image to be matched with a deep convolutional neural network. Step S300: generate multiple candidate regions in the reference remote sensing image with a sliding window, compute the Euclidean distance between the feature vector of the real-time image to be matched and the feature vector of each candidate region, and take the candidate region with the smallest Euclidean distance as the region of the reference remote sensing image that matches the real-time image. The method extracts image features with a trained deep convolutional neural network, improving the accuracy and robustness of feature matching. Another aspect of the application also provides a corresponding scene matching device for video satellite remote sensing images based on deep learning.

Description

Scene matching method and device for video satellite remote sensing images based on deep learning
Technical field
This application relates to a scene matching method and device for video satellite remote sensing images based on deep learning, belonging to the field of remote sensing image processing.
Background technique
When the satellite's orbit and attitude information are known, targets in video satellite remote sensing images can be located by finding the intersection of the satellite-to-target line of sight with the Earth's surface. However, because on-board storage is limited, a video satellite cannot record the complete attitude information over its working period. This is particularly problematic for offline data analysis.
For target images whose background contains rich texture information, this problem can be solved by scene matching, which determines the coordinates of the scenery in the video.
In scene matching, the real-time image of the scene around the target at the observation moment is matched against an existing reference image, which roughly determines the target's position on the reference image. The target's position in space can then be determined by converting between reference image coordinates and the world coordinate system.
In practice, however, the precision of matching the real-time image to the reference image is low. Scene matching of video satellite remote sensing images must handle large differences in illumination, scale, and viewing angle, and with conventional hand-crafted features the matching result is strongly affected by these differences.
Summary of the invention
According to one aspect of the application, a scene matching method for video satellite remote sensing images based on deep learning is provided. The method extracts image features with a trained deep convolutional neural network, improving the accuracy and robustness of feature matching.
The scene matching method for video satellite remote sensing images based on deep learning is characterized by comprising the following steps:
Step S100: obtain a reference remote sensing image and a real-time image to be matched;
Step S200: compute the feature vector of the real-time image to be matched;
Step S300: generate multiple candidate regions in the reference remote sensing image, compute the Euclidean distance between the feature vector of the real-time image to be matched and the feature vector of each candidate region, and take the candidate region with the smallest Euclidean distance as the region of the reference remote sensing image matching the real-time image.
Optionally, the spatial resolution κ1 of the real-time image to be matched is:
κ1 = h·d / f,
wherein h is the orbit altitude, f is the focal length of the on-board camera, and d is the pixel size.
Optionally, step S100 further includes a step of scaling the real-time image to be matched, performed according to the formula:
scale factor = κ1 / κ2,
wherein κ1 is the spatial resolution of the real-time image to be matched and κ2 is the spatial resolution of the reference remote sensing image.
Optionally, in step S200 the feature vector of the real-time image to be matched is computed with a deep convolutional neural network; the deep convolutional neural network is a deep convolutional neural network obtained by training on a sample database.
Optionally, the step of "generating multiple candidate regions" comprises sliding a window over the reference remote sensing image; the size of the window is the size of the scaled real-time image to be matched.
Optionally, the sliding step of the window is m/4 in the longitudinal direction and n/4 in the lateral direction, wherein m is the longitudinal size and n is the lateral size of the scaled real-time image to be matched.
Optionally, step S100 includes a step of determining a pre-matching region on the reference remote sensing image according to prior information.
Optionally, in step S300 the multiple candidate regions are generated with the sliding window inside the pre-matching region.
Optionally, the feature vector is α = (x1, x2, …, xl)^T, wherein l is the feature vector dimension, computed by the deep convolutional neural network.
Optionally, step S300 comprises the following steps:
The feature vector of each candidate region is computed with a deep convolutional neural network, and an index database A = {β1, β2, …} is established, wherein β1, β2, … are the feature vectors of the candidate regions. The Euclidean distance between the feature vector α of the real-time image to be matched and each candidate-region feature vector βj is computed according to:
d(α, βj) = sqrt( Σ_{i=1}^{l} (xi − yij)² ),
wherein α = (x1, x2, …, xl)^T, βj = (y1j, y2j, …, ylj)^T, j = 1, 2, …; x1, x2, …, xl are the l coordinates of the feature vector α of the real-time image to be matched, and y1j, y2j, …, ylj are the l coordinates of the feature vector βj of the j-th candidate region;
The feature vector with the smallest Euclidean distance is taken:
d(α, βk) = min_j d(α, βj),
and the candidate region corresponding to βk is taken as the matching region.
According to another aspect of the application, a scene matching device for video satellite remote sensing images based on deep learning is provided, comprising:
an image acquisition module for obtaining the reference remote sensing image, obtaining the real-time image to be matched from the video satellite, and scaling the real-time image to be matched;
a neural network module for computing the feature vector of the real-time image to be matched with a deep convolutional neural network;
a matching module for generating multiple candidate regions in the reference remote sensing image with a sliding window, computing the Euclidean distance between the feature vector of the real-time image to be matched and the feature vector of each candidate region, and taking the candidate region with the smallest Euclidean distance as the region of the reference remote sensing image matching the real-time image.
The beneficial effects of the application include:
1) The scene matching method and device for video satellite remote sensing images based on deep learning use a deep convolutional neural network trained on big data to extract image features; matching with features extracted by the network improves the robustness of matching.
2) Extracting image features with a deep convolutional neural network lets scene matching tolerate large differences in illumination, scale, and viewing angle. Generating candidate regions with a small sliding-window step improves matching accuracy.
3) The algorithm is simple and fast, making it convenient for engineering implementation.
Detailed description of the invention
Fig. 1 is a flow diagram of the scene matching method for video satellite remote sensing images based on deep learning in one embodiment of the application;
Fig. 2 is the real-time image to be matched in one embodiment of the application, specifically a real-time image taken in orbit by the Tiantuo-2 video satellite;
Fig. 3 is the reference remote sensing image in one embodiment of the application, specifically a remote sensing image of the region shown in Fig. 2;
Fig. 4 is a schematic diagram of the matching result obtained with the method provided by the application in one embodiment;
Fig. 5 is a schematic diagram of the scene matching device for video satellite remote sensing images based on deep learning in one embodiment of the application.
Specific embodiment
The application is described in detail below with reference to embodiments, but the application is not limited to these embodiments.
Referring to Fig. 1, the scene matching method for video satellite remote sensing images based on deep learning provided by the application comprises the following steps:
Step S100: obtain a reference remote sensing image, obtain a real-time image to be matched from the video satellite, and scale the real-time image to be matched;
Step S200: compute the feature vector of the real-time image to be matched with a deep convolutional neural network;
Step S300: generate multiple candidate regions in the reference remote sensing image with a sliding window, compute the Euclidean distance between the feature vector of the real-time image to be matched and the feature vector of each candidate region, and take the candidate region with the smallest Euclidean distance as the region of the reference remote sensing image matching the real-time image.
The real-time image obtained by the satellite is matched against the reference remote sensing image, using the Euclidean distance between feature vectors as the screening index, which improves the matching accuracy for the real-time image.
Preferably, the spatial resolution κ1 of the real-time image to be matched is:
κ1 = h·d / f,
wherein h is the orbit altitude, f is the focal length of the on-board camera, and d is the pixel size.
Preferably, the scaling step in step S100 is performed according to the formula:
scale factor = κ1 / κ2,
wherein κ1 is the spatial resolution of the real-time image to be matched and κ2 is the spatial resolution of the reference remote sensing image.
Preferably, the deep convolutional neural network is a deep convolutional neural network obtained by training on a sample database. Here, the sample database used for training is the ImageNet dataset.
Preferably, the size of the window equals the size of the scaled real-time image to be matched.
To improve matching accuracy, the sliding step of the window is preferably m/4 in the longitudinal direction and n/4 in the lateral direction, wherein m is the longitudinal size and n the lateral size of the scaled real-time image to be matched.
Preferably, step S100 includes a step of determining a pre-matching region on the reference remote sensing image according to prior information, and in step S300 the multiple candidate regions are generated with the sliding window inside this pre-matching region.
Preferably, the feature vector is α = (x1, x2, …, xl)^T, wherein l is the feature vector dimension, computed by the deep convolutional neural network.
Preferably, the Euclidean distance is computed according to:
d(α, βj) = sqrt( Σ_{i=1}^{l} (xi − yij)² ),
wherein α = (x1, x2, …, xl)^T, βj = (y1j, y2j, …, ylj)^T, j = 1, 2, …; x1, x2, …, xl are the l coordinates of the feature vector α of the real-time image to be matched, and y1j, y2j, …, ylj are the l coordinates of the feature vector βj of the j-th candidate region.
Preferably, the feature vector with the smallest Euclidean distance is taken:
d(α, βk) = min_j d(α, βj),
and the candidate region corresponding to βk is taken as the matching region.
Specifically, step S300 comprises the following steps:
The feature vector of each candidate region is computed with the deep convolutional neural network, and an index database A = {β1, β2, …} is established, wherein β1, β2, … are the feature vectors of the candidate regions. The Euclidean distance between the feature vector α of the real-time image to be matched and each candidate-region feature vector βj is computed according to:
d(α, βj) = sqrt( Σ_{i=1}^{l} (xi − yij)² ),
wherein α = (x1, x2, …, xl)^T, βj = (y1j, y2j, …, ylj)^T, j = 1, 2, …; x1, x2, …, xl are the l coordinates of the feature vector α of the real-time image to be matched, and y1j, y2j, …, ylj are the l coordinates of the feature vector βj of the j-th candidate region;
The feature vector with the smallest Euclidean distance is taken:
d(α, βk) = min_j d(α, βj),
and the candidate region corresponding to βk is taken as the matching region.
Specifically, the method includes the following steps:
1. The video satellite image to be matched (the real-time image) is given, and the spatial resolution of the video satellite image is determined. Let the orbit altitude be h, the focal length of the on-board camera f, and the pixel size d; then the spatial resolution is κ1 = h·d / f. Given a reference remote sensing image with spatial resolution κ2, the image to be matched is scaled by the factor κ1/κ2.
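Step 1 can be sketched as follows. This is a minimal illustration, not part of the patent; the function names are chosen for clarity, and the formulas are the ground-sample-distance relation κ1 = h·d/f and scale factor κ1/κ2 stated above.

```python
def spatial_resolution(h_m: float, f_m: float, d_m: float) -> float:
    """Ground resolution of the on-board camera: kappa = h * d / f.

    h_m: orbit altitude in meters, f_m: focal length in meters,
    d_m: pixel pitch in meters. Result is in meters per pixel.
    """
    return h_m * d_m / f_m


def scale_factor(kappa1: float, kappa2: float) -> float:
    """Factor by which the real-time image is resized so its ground
    resolution matches that of the reference image: kappa1 / kappa2."""
    return kappa1 / kappa2


# With the numbers from the embodiment below: a 5 m/pixel real-time
# image against a 10 m/pixel reference map is scaled by a factor 0.5.
print(scale_factor(5.0, 10.0))  # 0.5
```

The actual resizing of the pixel array (e.g. with bilinear interpolation) would then apply this factor to both image dimensions.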
2. According to known prior information, such as the sub-satellite point, a pre-matching region is determined on the reference remote sensing image.
3. Given a trained deep convolutional neural network, the feature vector α = (x1, x2, …, xl)^T of the image to be matched is computed with it, wherein l is the feature vector dimension produced by the known deep convolutional neural network.
4. The index database is established.
If the scaled image to be matched has size m × n, candidate regions are generated in the pre-matching region of the reference remote sensing image with an m × n sliding window.
To improve matching accuracy, the step lengths are chosen as m/4 (longitudinal) and n/4 (lateral).
A feature vector is computed for each candidate region with the deep convolutional neural network, and the index database A = {β1, β2, …} is established.
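The sliding-window generation in step 4 can be sketched as below. This is an illustrative sketch, not the patent's implementation: `extract_features` stands in for the Inception v4 network and is not implemented here, and the exact number of windows depends on how border positions are rounded.

```python
def candidate_windows(region_w: int, region_h: int, win_w: int, win_h: int):
    """Top-left corners of win_w x win_h sliding windows over the
    pre-matching region, using the quarter-window steps n/4 (lateral)
    and m/4 (longitudinal) described above."""
    step_x = max(1, win_w // 4)  # lateral step n/4
    step_y = max(1, win_h // 4)  # longitudinal step m/4
    corners = []
    for y in range(0, region_h - win_h + 1, step_y):
        for x in range(0, region_w - win_w + 1, step_x):
            corners.append((x, y))
    return corners


# The index database A = {beta_1, beta_2, ...} would then be built by
# running each window crop through the feature network, e.g.:
# A = [extract_features(image[y:y+win_h, x:x+win_w])
#      for (x, y) in candidate_windows(W, H, n, m)]
```

A quarter-window step means adjacent candidate regions overlap by 75%, which is what lets the method localize the match more finely than a non-overlapping grid would.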
5. The Euclidean distances d(α, βj), j = 1, 2, …, between α and β1, β2, … are computed, and the minimum is taken:
d(α, βk) = min_j d(α, βj).
The candidate region corresponding to βk is then the matching region.
Referring to Fig. 5, another aspect of the application also provides a scene matching device for video satellite remote sensing images based on deep learning, comprising:
an image acquisition module 100 for obtaining the reference remote sensing image, obtaining the real-time image to be matched from the video satellite, and scaling the real-time image to be matched;
a neural network module 200 for computing the feature vector of the real-time image to be matched with a deep convolutional neural network;
a matching module 300 for generating multiple candidate regions in the reference remote sensing image with a sliding window, computing the Euclidean distance between the feature vector of the real-time image to be matched and the feature vector of each candidate region, and taking the candidate region with the smallest Euclidean distance as the region of the reference remote sensing image matching the real-time image.
With this method, the influence of factors such as scale change, angle change, and brightness change on the matching result can be overcome, improving matching accuracy.
To further illustrate the method provided by the application, it is described in detail below with reference to a specific embodiment.
The specific steps are as follows.
Step 1: A real-time image taken in orbit by the Tiantuo-2 video satellite is given, as shown in Fig. 2, with size 640 × 500; the spatial resolution of Tiantuo-2 images is known to be 5 m/pixel.
Step 2: A reference remote sensing image provided by Amap is used, with a spatial resolution of 10 m/pixel; the real-time image is scaled by 0.5, giving a size of 380 × 250.
Step 3: From prior information it is known that the real-time image lies in the Pudong area of Shanghai, so the Pudong region is selected in Amap, as shown in Fig. 3.
Step 4: Given a pre-trained Inception v4 deep convolutional neural network, the feature vector α of the image to be matched is computed with it.
Step 5: After scaling, the real-time image is 380 × 250, and this is taken as the sliding-window size. The window is moved over Fig. 3 with a lateral step of 380/4 = 95 and a longitudinal step of 250/4 ≈ 62. Since Fig. 3 is 2072 × 1201, 304 candidate regions are obtained as index database images. The feature vectors of the index database images are computed with the pre-trained Inception v4 deep convolutional neural network, giving an index database of 304 feature vectors, A = {β1, β2, …, β304};
Step 6: The Euclidean distances between α and β1, β2, …, β304 are computed, and the minimum is taken; the matching region for Fig. 2 is obtained as shown in Fig. 4, namely Pudong Airport, whose position is marked by the white box in Fig. 3.
Therefore, the method provided by the application can effectively improve the recognition accuracy of scene matching for video satellite remote sensing images, with high robustness.
Even when there are large scale and angle differences between the real-time image (Fig. 2) and the reference remote sensing image (Fig. 3), an accurate matching result can still be obtained.
The above are only several embodiments of the application and do not limit it in any way. Although the application is disclosed above with preferred embodiments, they are not intended to limit it; any person skilled in the art may, without departing from the scope of the technical solution, make slight variations or modifications using the technical content disclosed above, and such equivalent embodiments all fall within the scope of the technical solution.

Claims (10)

  1. A scene matching method for video satellite remote sensing images based on deep learning, characterized by comprising the following steps:
    Step S100: obtaining a reference remote sensing image and a real-time image to be matched;
    Step S200: computing the feature vector of the real-time image to be matched;
    Step S300: generating multiple candidate regions in the reference remote sensing image, computing the Euclidean distance between the feature vector of the real-time image to be matched and the feature vector of each candidate region, and taking the candidate region with the smallest Euclidean distance as the region of the reference remote sensing image matching the real-time image.
  2. The scene matching method for video satellite remote sensing images based on deep learning according to claim 1, characterized in that the spatial resolution κ1 of the real-time image to be matched is:
    κ1 = h·d / f,
    wherein h is the orbit altitude, f is the focal length of the on-board camera, and d is the pixel size.
  3. The scene matching method for video satellite remote sensing images based on deep learning according to claim 1, characterized in that step S100 further includes a step of scaling the real-time image to be matched, performed according to:
    scale factor = κ1 / κ2,
    wherein κ1 is the spatial resolution of the real-time image to be matched and κ2 is the spatial resolution of the reference remote sensing image.
  4. The scene matching method for video satellite remote sensing images based on deep learning according to claim 1, characterized in that in step S200 the feature vector of the real-time image to be matched is computed with a deep convolutional neural network;
    the deep convolutional neural network is a deep convolutional neural network obtained by training on a sample database.
  5. The scene matching method for video satellite remote sensing images based on deep learning according to claim 3, characterized in that the step of "generating multiple candidate regions" comprises sliding a window over the reference remote sensing image; the size of the window is the size of the scaled real-time image to be matched.
  6. The scene matching method for video satellite remote sensing images based on deep learning according to claim 5, characterized in that the sliding step of the window is m/4 in the longitudinal direction and n/4 in the lateral direction, wherein m is the longitudinal size and n the lateral size of the scaled real-time image to be matched.
  7. The scene matching method for video satellite remote sensing images based on deep learning according to claim 5, characterized in that step S100 includes a step of determining a pre-matching region on the reference remote sensing image according to prior information.
  8. The scene matching method for video satellite remote sensing images based on deep learning according to claim 7, characterized in that in step S300 the multiple candidate regions are generated with the sliding window inside the pre-matching region.
  9. The scene matching method for video satellite remote sensing images based on deep learning according to claim 1, characterized in that step S300 comprises the following steps:
    computing the feature vector of each candidate region with a deep convolutional neural network and establishing an index database A = {β1, β2, …}, wherein β1, β2, … are the feature vectors of the candidate regions; computing the Euclidean distance between the feature vector α of the real-time image to be matched and each candidate-region feature vector βj according to:
    d(α, βj) = sqrt( Σ_{i=1}^{l} (xi − yij)² ),
    wherein α = (x1, x2, …, xl)^T, βj = (y1j, y2j, …, ylj)^T, j = 1, 2, …; x1, x2, …, xl are the l coordinates of the feature vector α of the real-time image to be matched, and y1j, y2j, …, ylj are the l coordinates of the feature vector βj of the j-th candidate region;
    taking the feature vector with the smallest Euclidean distance:
    d(α, βk) = min_j d(α, βj),
    and taking the candidate region corresponding to βk as the matching region.
  10. A scene matching device for video satellite remote sensing images based on deep learning, characterized by comprising:
    an image acquisition module for obtaining a reference remote sensing image, obtaining a real-time image to be matched from the video satellite, and scaling the real-time image to be matched;
    a neural network module for computing the feature vector of the real-time image to be matched with a deep convolutional neural network;
    a matching module for generating multiple candidate regions in the reference remote sensing image with a sliding window, computing the Euclidean distance between the feature vector of the real-time image to be matched and the feature vector of each candidate region, and taking the candidate region with the smallest Euclidean distance as the region of the reference remote sensing image matching the real-time image.
CN201811554410.5A 2018-12-19 2018-12-19 Scene matching method and device for video satellite remote sensing images based on deep learning Pending CN109784189A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811554410.5A CN109784189A (en) 2018-12-19 2018-12-19 Scene matching method and device for video satellite remote sensing images based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811554410.5A CN109784189A (en) 2018-12-19 2018-12-19 Scene matching method and device for video satellite remote sensing images based on deep learning

Publications (1)

Publication Number Publication Date
CN109784189A true CN109784189A (en) 2019-05-21

Family

ID=66497299

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811554410.5A Pending CN109784189A (en) 2018-12-19 2018-12-19 Scene matching method and device for video satellite remote sensing images based on deep learning

Country Status (1)

Country Link
CN (1) CN109784189A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112927294A (en) * 2021-01-27 2021-06-08 浙江大学 Satellite orbit and attitude determination method based on single sensor
CN116668435A (en) * 2023-08-01 2023-08-29 中国科学院空天信息创新研究院 Interactive real-time remote sensing product generation method, device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106197408A (en) * 2016-06-23 2016-12-07 南京航空航天大学 A kind of multi-source navigation data fusion method based on factor graph
CN106709515A (en) * 2016-12-16 2017-05-24 北京华航无线电测量研究所 Downward-looking scene matching area selection criteria intervention method
US20180225799A1 (en) * 2017-02-03 2018-08-09 Cognex Corporation System and method for scoring color candidate poses against a color image in a vision system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106197408A (en) * 2016-06-23 2016-12-07 南京航空航天大学 A kind of multi-source navigation data fusion method based on factor graph
CN106709515A (en) * 2016-12-16 2017-05-24 北京华航无线电测量研究所 Downward-looking scene matching area selection criteria intervention method
US20180225799A1 (en) * 2017-02-03 2018-08-09 Cognex Corporation System and method for scoring color candidate poses against a color image in a vision system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Wan Shining: "Research and Implementation of Face Recognition Based on Convolutional Neural Networks", China Master's Theses Full-text Database, Information Science and Technology Series *
Liu Jiafeng et al.: "Pattern Recognition", 31 August 2014 *
Liu Xiaochun: "Research on Scene Matching Navigation Technology Based on Real-time Images and Satellite Photographs", China Master's Theses Full-text Database, Engineering Science and Technology Series II *
Zhao Fengwei: "Scene Matching Algorithms, Performance Evaluation and Applications", China Doctoral and Master's Dissertations Full-text Database (Master's), Information Science and Technology Series *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112927294A (en) * 2021-01-27 2021-06-08 浙江大学 Satellite orbit and attitude determination method based on single sensor
CN112927294B (en) * 2021-01-27 2022-06-10 浙江大学 Satellite orbit and attitude determination method based on single sensor
CN116668435A (en) * 2023-08-01 2023-08-29 中国科学院空天信息创新研究院 Interactive real-time remote sensing product generation method, device and storage medium
CN116668435B (en) * 2023-08-01 2023-11-10 中国科学院空天信息创新研究院 Interactive real-time remote sensing product generation method, device and storage medium

Similar Documents

Publication Publication Date Title
CN111486855B (en) Indoor two-dimensional semantic grid map construction method with object navigation points
CN103703758B (en) mobile augmented reality system
CN109520500B (en) Accurate positioning and street view library acquisition method based on terminal shooting image matching
US11003956B2 (en) System and method for training a neural network for visual localization based upon learning objects-of-interest dense match regression
TWI483215B (en) Augmenting image data based on related 3d point cloud data
CN103632626B (en) A kind of intelligent guide implementation method based on mobile Internet, device and mobile client
CN103377476B (en) Use the image registration of the multimodal data of three-dimensional geographical arc
CN109029444B (en) Indoor navigation system and method based on image matching and space positioning
JP2019087229A (en) Information processing device, control method of information processing device and program
CN103119611B (en) The method and apparatus of the location based on image
CN107690840B (en) Unmanned plane vision auxiliary navigation method and system
CN111126304A (en) Augmented reality navigation method based on indoor natural scene image deep learning
US20060195858A1 (en) Video object recognition device and recognition method, video annotation giving device and giving method, and program
CN111028358B (en) Indoor environment augmented reality display method and device and terminal equipment
CN111985376A (en) Remote sensing image ship contour extraction method based on deep learning
WO2004095374A1 (en) Video object recognition device and recognition method, video annotation giving device and giving method, and program
KR102321998B1 (en) Method and system for estimating position and direction of image
CN115388902B (en) Indoor positioning method and system, AR indoor positioning navigation method and system
CN114241464A (en) Cross-view image real-time matching geographic positioning method and system based on deep learning
Aeschliman et al. Tracking vehicles through shadows and occlusions in wide-area aerial video
CN112435338A (en) Method and device for acquiring position of interest point of electronic map and electronic equipment
CN109784189A (en) Video satellite remote sensing images scape based on deep learning matches method and device thereof
US11568642B2 (en) Large-scale outdoor augmented reality scenes using camera pose based on learned descriptors
Mithun et al. Cross-View Visual Geo-Localization for Outdoor Augmented Reality
Ayadi et al. A skyline-based approach for mobile augmented reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190521