CN111666959A - Vector image matching method and device - Google Patents
Vector image matching method and device
- Publication number
- CN111666959A (application number CN201910166892.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- vector
- reference object
- matching
- characteristic value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
Abstract
The invention provides a vector image matching method and device, relating to the technical field of electronic information, which can solve the problem that a flying device cannot be positioned in a flight area with poor electromagnetic wave signals. The technical scheme is as follows: acquiring at least one basic image shot by the flying device, wherein the basic image is a top-down image of the ground; performing vector extraction on the at least one basic image to obtain at least one vector image; and matching the at least one vector image with a prestored reference image to determine the position area of the flying device in the reference image. The present disclosure is for use in flying device positioning.
Description
Technical Field
The present disclosure relates to the field of electronic information technologies, and in particular, to a vector image matching method and apparatus.
Background
With the development of flying device technology, flying devices have been applied in many fields, such as aerial photography, transportation, and monitoring. During flight, a flying device usually needs to be positioned, for example by GPS (Global Positioning System) or by network. These positioning methods, however, require that electromagnetic wave signals can be received and transmitted, so the flying device cannot be positioned in flight areas where electromagnetic wave signals are poor.
Disclosure of Invention
The embodiments of the present disclosure provide a vector image matching method and device, which can solve the problem that a flying device cannot be positioned in a flight area with poor electromagnetic wave signals. The technical scheme is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a vector image matching method, including:
acquiring at least one basic image shot by the flying device, wherein the basic image is a top-down image of the ground;
carrying out vector extraction on at least one basic image to obtain at least one vector image;
and matching at least one vector image with a prestored reference image to determine the position area of the flying device in the reference image.
Vector images are obtained from the basic images by vector extraction, and matching is performed on the vector images. Matching is therefore more accurate, the computation required for vector image matching is small, and the computation involved in matching complex images is greatly reduced, so the flying device can be positioned quickly and accurately in regions with poor electromagnetic wave signals.
In one embodiment, matching the at least one vector image with a prestored reference image to determine the position area of the flying device comprises:
acquiring a characteristic value of a first reference object in the target vector image according to the target vector image;
comparing the characteristic value of the first reference object with the characteristic value of at least one second reference object in the reference image;
determining, among the at least one second reference object, a second reference object whose characteristic value is the same as that of the first reference object as a target reference object;
and determining the position area of the flying device in the reference image according to the position of the target reference object in the reference image.
In one embodiment, the first reference object includes an intersection in the target vector image, and the characteristic value of the first reference object includes at least one of the number of roads, the orientation of the adjacent intersection, and a preset reference object number.
In one embodiment, performing vector extraction on the at least one basic image to obtain at least one vector image comprises:
extracting a characteristic value in each basic image according to preset characteristics;
and generating a corresponding vector image according to the characteristic value of each basic image.
In one embodiment, matching the at least one vector image with a prestored reference image comprises:
splicing at least one vector image according to the shooting time sequence to obtain a spliced image;
and matching the spliced image with the reference image.
According to a second aspect of the embodiments of the present disclosure, there is provided a vector image matching apparatus including: the device comprises an acquisition module, an extraction module and a matching module;
the system comprises an acquisition module, a display module and a control module, wherein the acquisition module is used for acquiring at least one basic image shot by a flying device, and the basic image is an image of the ground shot by the flying device from top to bottom in a top view;
the extraction module is used for carrying out vector extraction on at least one basic image to obtain at least one vector image;
and the matching module is used for matching at least one vector image with a prestored reference image and determining the position area of the flying device in the reference image.
In one embodiment, the matching module includes: the device comprises a first characteristic value unit, a comparison unit, a determination unit and a position unit;
the first characteristic value unit is used for acquiring a characteristic value of a first reference object in the target vector image according to the target vector image;
the comparison unit is used for comparing the characteristic value of the first reference object with the characteristic value of at least one second reference object in the reference image;
the determining unit is used for determining, among the at least one second reference object, a second reference object whose characteristic value is the same as that of the first reference object as a target reference object;
and the position unit is used for determining the position area of the flying device in the reference image according to the position of the target reference object in the reference image.
In one embodiment, the first reference object includes an intersection in the target vector image, and the characteristic value of the first reference object includes at least one of the number of roads, the orientation of the adjacent intersection, and a preset reference object number.
In one embodiment, the extraction module comprises a second feature value unit and a vector image unit;
the second characteristic value unit is used for extracting a characteristic value in each basic image according to preset characteristics;
and the vector image unit is used for generating a corresponding vector image according to the characteristic value of each basic image.
In one embodiment, the matching module comprises a splicing unit and a matching unit;
the splicing unit is used for splicing at least one vector image according to the shooting time sequence to obtain a spliced image;
and the matching unit is used for matching the spliced image with the reference image.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of a vector image matching method provided by an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a vector extraction effect provided by an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a road vector extraction network logic provided in an embodiment of the present disclosure;
FIG. 4 is a characteristic diagram of a first reference object provided by the embodiments of the present disclosure;
fig. 5 is a block diagram of a vector image matching apparatus provided in an embodiment of the present disclosure;
fig. 6 is a block diagram of a vector image matching apparatus provided in an embodiment of the present disclosure;
fig. 7 is a block diagram of a vector image matching apparatus provided in an embodiment of the present disclosure;
fig. 8 is a structural diagram of a vector image matching apparatus according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
An embodiment of the present disclosure provides a vector image matching method, which is applied to a vector image matching device. As shown in Fig. 1, a flowchart of the vector image matching method provided by the embodiment of the present disclosure, the method includes the following steps:
101. At least one basic image shot by the flying device is acquired.
The basic image is a top-down image of the ground shot by the flying device. The flying device may be a drone.
102. Vector extraction is performed on the at least one basic image to obtain at least one vector image.
In one embodiment, performing vector extraction on the at least one basic image to obtain at least one vector image comprises:
extracting a characteristic value in each basic image according to preset characteristics; and generating a corresponding vector image according to the characteristic value of each basic image.
For example, a fully convolutional network architecture may be adopted to classify the target image pixel by pixel, as shown in Fig. 2, a schematic view of the vector extraction effect provided by an embodiment of the present disclosure. The network determines whether a target pixel in the basic image belongs to a road, where the target pixel is any pixel in the basic image: if the target pixel belongs to a road, its feature value is marked as 1; if not, its feature value is marked as 0. The feature values extracted from the basic image in this way are used to generate a vector image in which only the shape of the road is displayed. The feature values may also be used directly as the vector image. Of course, although Fig. 2 illustrates roads as the extracted features, the extracted features may also include rivers, buildings, and the like; the present disclosure does not limit the features.
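As a minimal sketch of this pixel-labelling step (assuming the network outputs a per-pixel road probability map; the 0.5 threshold, the array contents, and the function name are illustrative assumptions, not values given by the disclosure):

```python
import numpy as np

def probability_map_to_vector_image(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Mark each pixel as 1 (road) or 0 (not road), as described above.

    prob_map is an HxW array of per-pixel road probabilities produced by the
    segmentation network; the 0.5 threshold is an illustrative assumption.
    """
    return (prob_map >= threshold).astype(np.uint8)

# Usage: a made-up 4x4 probability map stands in for real network output.
prob = np.array([[0.9, 0.8, 0.1, 0.0],
                 [0.7, 0.9, 0.2, 0.1],
                 [0.1, 0.6, 0.8, 0.7],
                 [0.0, 0.1, 0.6, 0.9]])
vector_image = probability_map_to_vector_image(prob)
print(vector_image)  # the binary road mask, i.e. the feature values of the basic image
```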
Further, here, taking the vector extraction of the road as an example, the vector extraction process will be described in detail by the following three steps.
In the first step, pixel-level segmentation of the basic image is realized with a fully convolutional network. Within the fully convolutional framework, skip connections add different low-level features to the high-level features, and the network structure is adjusted adaptively for different imaging heights and different imaging sensors, so that the designed network can cover air-based platforms at different heights.
The image segmentation algorithm adopted in this step may be a height-adaptive segmentation algorithm: on the basis of the fully convolutional network, a model converting height information into region candidate box sizes is embedded into the segmentation network. Meanwhile, the conv5_3 feature map of the VGG16 feature extraction network is deconvolved and fused with the conv4_3 feature map by channel concatenation, and the fused map replaces the original conv4_3 feature map; it is then channel-concatenated with fc7 and conv7_2, which further enriches the semantic information of the feature map and improves detection precision. The specific network structure is shown in Fig. 3. In Fig. 3, the left side is the input image and the height information; taking an image of size 1024 × 768 as an example, the conv5_3 feature layer of the VGG16 feature extraction network is passed through a 2 × 2 deconvolution and channel-concatenated with conv4_3 to obtain a fused feature map. The fused map is then convolved together with fc7 and conv7_2, the three different feature maps are concatenated along the channel dimension, and a 1 × 1 convolution followed by normalization yields a feature map with 512 channels and size 38 × 38. In this model, the detector part still adopts the basic structure of FSSD, and the resulting feature map is convolved several times to generate six feature maps with different receptive fields and sizes.
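A hedged PyTorch sketch of the fusion pattern just described, not the patented network itself: conv5_3 is deconvolved and channel-concatenated with conv4_3, the result is concatenated with fc7 and conv7_2, and a 1 × 1 convolution with normalization reduces the result to 512 channels. The channel counts follow common SSD-style VGG16 usage, the per-branch convolutions are omitted, and BatchNorm stands in for the unspecified normalization; all of these are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusion(nn.Module):
    """Sketch of the Fig. 3 fusion. Assumed channel counts: conv4_3 and
    conv5_3 have 512 channels, fc7 has 1024, conv7_2 has 512."""
    def __init__(self):
        super().__init__()
        self.deconv5_3 = nn.ConvTranspose2d(512, 512, kernel_size=2, stride=2)
        self.reduce = nn.Conv2d(512 + 512 + 1024 + 512, 512, kernel_size=1)
        self.norm = nn.BatchNorm2d(512)  # stand-in for the unspecified normalization

    def forward(self, conv4_3, conv5_3, fc7, conv7_2):
        up5_3 = self.deconv5_3(conv5_3)                       # 2x2 deconvolution
        up5_3 = F.interpolate(up5_3, size=conv4_3.shape[2:])  # align spatial sizes
        fused = torch.cat([conv4_3, up5_3], dim=1)            # replaces the original conv4_3
        fc7 = F.interpolate(fc7, size=fused.shape[2:])
        conv7_2 = F.interpolate(conv7_2, size=fused.shape[2:])
        fused = torch.cat([fused, fc7, conv7_2], dim=1)       # three-way channel concat
        return self.norm(self.reduce(fused))                  # 1x1 conv -> 512 channels

# Usage with made-up VGG16-style feature maps for a 1024x768 input.
f = FeatureFusion()
out = f(torch.randn(1, 512, 96, 128), torch.randn(1, 512, 48, 64),
        torch.randn(1, 1024, 48, 64), torch.randn(1, 512, 24, 32))  # -> (1, 512, 96, 128)
```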
The network structure is characterized by convolutional encoding followed by decoding through deconvolution and upsampling, and is a deep encoder-decoder network adaptive to the imaging height. By adjusting the network depth, higher road-detail segmentation precision can be obtained.
For example, the air-to-ground image may be normalized (assume a size of 512 × 512), and a fully convolutional network completes the encoding after several convolution layers and several max-pooling operations. Because of the convolution and pooling, the resulting feature map is typically much smaller than the original (say 1/4 of its size). To realize pixel-level segmentation of the image, the feature map is used as the input layer of a decoding network and upsampled several times to obtain a feature map of the same size as the original image. Finally, a 1 × 1 convolution is applied to this full-size feature map, and the output is a per-pixel classification probability map of size 512 × 512 × 1. This completes the design of the road pixel-level segmentation network.
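A minimal sketch of such an encoder-decoder, assuming a 512 × 512 RGB input; the layer counts and channel widths are illustrative and far shallower than a practical network:

```python
import torch
import torch.nn as nn

class RoadSegmentationFCN(nn.Module):
    """Convolution + max-pooling encode the image, upsampling decodes it back
    to full size, and a final 1x1 convolution yields a 512x512x1 road
    probability map, as described above."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 512 -> 256
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 256 -> 128
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),           # 128 -> 256
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),           # 256 -> 512
        )
        self.head = nn.Conv2d(16, 1, kernel_size=1)  # 1x1 conv -> per-pixel probability

    def forward(self, x):
        return torch.sigmoid(self.head(self.decoder(self.encoder(x))))

# Usage: one normalized 512x512 air-to-ground image.
model = RoadSegmentationFCN()
prob_map = model(torch.randn(1, 3, 512, 512))  # shape (1, 1, 512, 512)
```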
In the second step, a road segmentation data set is made and the network parameters are trained. An air-to-ground road segmentation data set is assembled, mainly comprising satellite imaging data sets, unmanned aerial vehicle aerial photography data sets, and the like; by annotating the roads in the raw data, an air-to-ground road segmentation data set of a certain scale covering different imaging heights is formed. Then, on the basis of this data set, the designed fully convolutional network is trained to obtain high-precision network weights, and the weight matrix is sent to the airborne computing platform.
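A minimal training sketch for the network above, under stated assumptions: per-pixel binary cross-entropy against 0/1 road masks, an Adam optimizer, and the file name road_weights.pt are illustrative choices, not details given by the disclosure.

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=10, lr=1e-3):
    """Train the road segmentation network on (image, mask) pairs drawn from
    the air-to-ground road data set; masks are the 0/1 road annotations."""
    criterion = nn.BCELoss()  # per-pixel road / non-road loss
    optim = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, masks in loader:
            optim.zero_grad()
            loss = criterion(model(images), masks.float())
            loss.backward()
            optim.step()
    # the trained weights are what gets sent to the airborne computing platform
    torch.save(model.state_dict(), "road_weights.pt")
```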
In the third step, roads are automatically extracted from actual flight images based on the road extraction network. After the road extraction network structure and its weight file are sent to the airborne processing platform, the air-based flight platform shoots during flight and automatically extracts roads, obtaining in real time the road topology corresponding to the position of the unmanned aerial vehicle at the current moment; the topology is described and stored in the form of a vector diagram.
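A sketch of this onboard step, reusing the RoadSegmentationFCN and weight file from the sketches above; the random stand-in frame, the 0.5 threshold, and the .npy storage format are assumptions.

```python
import numpy as np
import torch

model = RoadSegmentationFCN()
model.load_state_dict(torch.load("road_weights.pt"))  # weights trained offline
model.eval()
with torch.no_grad():
    frame = torch.randn(1, 3, 512, 512)  # stands in for a normalized camera frame
    road_mask = (model(frame) >= 0.5).squeeze().numpy().astype(np.uint8)
np.save("road_topology_t0.npy", road_mask)  # stored road structure, ready for matching
```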
103. The at least one vector image is matched with a prestored reference image to determine the position area of the flying device in the reference image.
In one embodiment, the reference image is a vector map of the region where the flying device flies; that is, the flight region is photographed and vector-extracted according to steps 101 and 102, and the resulting vector images are stitched to obtain a vector image of the whole region.
In one embodiment, matching the at least one vector image with a prestored reference image to determine the position area of the flying device comprises:
acquiring a characteristic value of a first reference object in the target vector image according to the target vector image; comparing the characteristic value of the first reference object with the characteristic value of at least one second reference object in the reference image; determining, among the at least one second reference object, a second reference object whose characteristic value is the same as that of the first reference object as a target reference object; and determining the position area of the flying device in the reference image according to the position of the target reference object in the reference image.
The target vector image is any one of the at least one vector image, and the first reference object is any reference object in the target vector image; both are used only as examples and do not represent any limitation. Further, in one embodiment, the first reference object includes an intersection in the target vector image, and the characteristic value of the first reference object includes at least one of the number of roads, the orientation of the adjacent intersection, and a preset reference object number.
For example, take both the first reference object and the second reference object to be intersections. As shown in Fig. 4, a characteristic diagram of the first reference object provided by an embodiment of the present disclosure, the characteristic value of the first reference object (a certain intersection in the target vector image) includes the number of roads and the orientation of the adjacent intersection. The number of roads is the number of roads connected to the intersection; the adjacent intersection orientation is the orientation, relative to this intersection, of the nearest other intersection. The orientations are numbered clockwise starting from due east: due east is marked 1, southeast 2, due south 3, southwest 4, due west 5, northwest 6, due north 7, and northeast 8. In Fig. 4, four roads meet at the intersection and the nearest intersection lies due east (to the right), marked 1, so the characteristic value of the intersection is [4, 1]. A second reference object with the same characteristic value is then found among the at least one second reference object of the reference image and taken as the target reference object, which determines the position area of the target vector image in the reference image.
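A small sketch of this matching rule: the clockwise-from-east numbering follows the example above, while the 45° sector binning in orientation_code and all helper names are assumptions introduced for illustration.

```python
import math

def orientation_code(dx: float, dy: float) -> int:
    """Map the displacement to the nearest adjacent intersection (dx to the
    east, dy to the north) onto the 1..8 clockwise-from-east numbering."""
    angle = math.degrees(math.atan2(-dy, dx)) % 360  # clockwise angle from due east
    return int((angle + 22.5) // 45) % 8 + 1

def characteristic_value(num_roads: int, dx: float, dy: float) -> list:
    return [num_roads, orientation_code(dx, dy)]

def find_target_reference(first_cv, second_cvs):
    """Indices of second reference objects whose characteristic value equals
    that of the first reference object -- the matching step of the method."""
    return [i for i, cv in enumerate(second_cvs) if cv == first_cv]

# Usage: a 4-road intersection whose nearest neighbour lies due east -> [4, 1].
cv = characteristic_value(4, dx=120.0, dy=0.0)
reference = [[3, 5], [4, 1], [4, 7]]  # hypothetical intersections in the reference image
print(cv, find_target_reference(cv, reference))  # [4, 1] [1]
```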
In one embodiment, matching the at least one vector image with a prestored reference image comprises:
splicing at least one vector image according to the shooting time sequence to obtain a spliced image; and matching the spliced image with the reference image.
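A minimal splicing sketch, assuming each tile's offset on the common canvas is already known; the disclosure does not specify how overlaps are aligned or merged, so the logical-OR merge here is an assumption.

```python
import numpy as np

def splice_vector_images(tiles, offsets, canvas_shape):
    """Paste time-ordered binary vector images onto one canvas.

    tiles:   list of HxW 0/1 road masks in shooting-time order.
    offsets: (row, col) of each tile's top-left corner on the canvas.
    """
    canvas = np.zeros(canvas_shape, dtype=np.uint8)
    for tile, (r, c) in zip(tiles, offsets):
        h, w = tile.shape
        canvas[r:r + h, c:c + w] |= tile  # OR-merge overlapping road pixels
    return canvas

# Usage: two 4x4 tiles shot in sequence, overlapping by one column.
t1 = np.ones((4, 4), dtype=np.uint8)
t2 = np.ones((4, 4), dtype=np.uint8)
spliced = splice_vector_images([t1, t2], [(0, 0), (0, 3)], (4, 7))
```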
The vector image matching method provided by the embodiments of the present disclosure extracts vectors from the basic images to obtain vector images and performs matching on the vector images. Matching is therefore more accurate, the computation required for vector image matching is small, and the computation involved in matching complex images is greatly reduced, so the flying device can be positioned quickly and accurately in areas with poor electromagnetic wave signals.
Based on the vector image matching method described in the above embodiment corresponding to fig. 1, an embodiment of the present disclosure provides a vector image matching apparatus for performing the vector image matching method described in the above embodiment corresponding to fig. 1, and as shown in fig. 5, the vector image matching apparatus 50 includes: an acquisition module 501, an extraction module 502 and a matching module 503;
the acquiring module 501 is configured to acquire at least one basic image captured by the flying device, where the basic image is a top-down image of the ground;
an extracting module 502, configured to perform vector extraction on at least one basic image to obtain at least one vector image;
and a matching module 503, configured to match the at least one vector image with a pre-stored reference image, and determine a position area of the flying apparatus in the reference image.
In one embodiment, as shown in FIG. 6, the matching module 503 includes: a first feature value unit 5031, a comparison unit 5032, a determination unit 5033 and a location unit 5034;
the first feature value unit 5031 is configured to obtain a feature value of a first reference object in the target vector image according to the target vector image;
a comparing unit 5032, configured to compare the feature value of the first reference object with the feature value of at least one second reference object in the reference image;
a determining unit 5033, configured to determine, among the at least one second reference object, a second reference object whose feature value is the same as that of the first reference object as the target reference object;
a location unit 5034 for determining a location area of the flying device in the reference image according to the location of the target reference object in the reference image.
In one embodiment, the first reference object includes an intersection in the target vector image, and the feature value of the first reference object includes at least one of the number of roads, the orientation of the adjacent intersection, and a preset reference object number.
In one embodiment, as shown in fig. 7, the extraction module 502 includes a second feature value unit 5021 and a vector image unit 5022;
a second feature value unit 5021, configured to extract a feature value from each basic image according to a preset feature;
and a vector image unit 5022, configured to generate a corresponding vector image according to the feature value of each basic image.
In one embodiment, as shown in fig. 8, the matching module 503 includes a splicing unit 5035 and a matching unit 5036;
a splicing unit 5035, configured to splice at least one vector image according to the shooting time sequence to obtain a spliced image;
a matching unit 5036 for matching the stitched image with the reference image.
The vector image matching device provided by the embodiments of the present disclosure extracts vectors from the basic images to obtain vector images and performs matching on the vector images. Matching is therefore more accurate and requires far less computation than matching complex images, so the flying device can be positioned quickly and accurately in areas with poor electromagnetic wave signals.
Based on the vector image matching method described in the embodiment corresponding to fig. 1, an embodiment of the present disclosure further provides a computer-readable storage medium, for example, the non-transitory computer-readable storage medium may be a Read Only Memory (ROM), a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like. The storage medium stores computer instructions for executing the vector image matching method described in the embodiment corresponding to fig. 1, which is not described herein again.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
Claims (10)
1. A vector image matching method, the method comprising:
acquiring at least one basic image shot by a flying device, wherein the basic image is a top-down image of the ground;
performing vector extraction on the at least one basic image to obtain at least one vector image;
and matching the at least one vector image with a prestored reference image to determine the position area of the flying device in the reference image.
2. The method of claim 1, wherein matching the at least one vector image with a pre-stored reference image to determine the position area of the flying device comprises:
acquiring a characteristic value of a first reference object in a target vector image according to the target vector image;
comparing the characteristic value of the first reference object with the characteristic value of at least one second reference object in the reference image;
determining, among the at least one second reference object, a second reference object whose characteristic value is the same as that of the first reference object as a target reference object;
and determining the position area of the flying device in the reference image according to the position of the target reference object in the reference image.
3. The method of claim 2,
the first reference object comprises an intersection in the target vector image, and the characteristic value of the first reference object comprises at least one of the number of roads, the orientation of the adjacent intersection and a preset reference object number.
4. The method of claim 1, wherein performing vector extraction on the at least one basic image to obtain the at least one vector image comprises:
extracting a characteristic value in each basic image according to preset characteristics;
and generating a corresponding vector image according to the characteristic value of each basic image.
5. The method according to any of claims 1-4, wherein matching the at least one vector image with a pre-stored reference image comprises:
splicing the at least one vector image according to the shooting time sequence to obtain a spliced image;
and matching the spliced image with the reference image.
6. A vector image matching apparatus, characterized in that the vector image matching apparatus comprises: the device comprises an acquisition module, an extraction module and a matching module;
the acquisition module is used for acquiring at least one basic image shot by a flying device, wherein the basic image is a top-down image of the ground;
the extraction module is used for carrying out vector extraction on the at least one basic image to obtain at least one vector image;
the matching module is used for matching the at least one vector image with a prestored reference image and determining the position area of the flying device in the reference image.
7. The apparatus of claim 6, wherein the matching module comprises: the device comprises a first characteristic value unit, a comparison unit, a determination unit and a position unit;
the first characteristic value unit is used for acquiring a characteristic value of a first reference object in a target vector image according to the target vector image;
the comparison unit is used for comparing the characteristic value of the first reference object with the characteristic value of at least one second reference object in the reference image;
the determining unit is used for determining, among the at least one second reference object, a second reference object whose characteristic value is the same as that of the first reference object as a target reference object;
the position unit is used for determining the position area of the flying device in the reference image according to the position of the target reference object in the reference image.
8. The apparatus of claim 7,
the first reference object comprises an intersection in the target vector image, and the characteristic value of the first reference object comprises at least one of the number of roads, the orientation of the adjacent intersection and a preset reference object number.
9. The apparatus of claim 6, wherein the extraction module comprises a second feature value unit and a vector image unit;
the second characteristic value unit is used for extracting a characteristic value in each basic image according to preset characteristics;
and the vector image unit is used for generating a corresponding vector image according to the characteristic value of each basic image.
10. The apparatus according to any one of claims 6-9, wherein the matching module comprises a splicing unit and a matching unit;
the splicing unit is used for splicing the at least one vector image according to a shooting time sequence to obtain a spliced image;
the matching unit is used for matching the spliced image with the reference image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910166892.5A CN111666959A (en) | 2019-03-06 | 2019-03-06 | Vector image matching method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111666959A (en) | 2020-09-15 |
Family
ID=72381850
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910166892.5A Pending CN111666959A (en) | 2019-03-06 | 2019-03-06 | Vector image matching method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111666959A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102706352A (en) * | 2012-05-21 | 2012-10-03 | 南京航空航天大学 | Vector map matching navigation method for linear target in aviation |
CN104075710A (en) * | 2014-04-28 | 2014-10-01 | 中国科学院光电技术研究所 | Maneuvering extension target axial attitude real-time estimation method based on track prediction |
JP2016134136A (en) * | 2015-01-22 | 2016-07-25 | キャンバスマップル株式会社 | Image processing apparatus, and image processing program |
CN105868772A (en) * | 2016-03-23 | 2016-08-17 | 百度在线网络技术(北京)有限公司 | Image identification method and apparatus |
CN108168522A (en) * | 2017-12-11 | 2018-06-15 | 宁波亿拍客网络科技有限公司 | A kind of unmanned plane observed object method for searching and correlation technique again |
CN108513642A (en) * | 2017-07-31 | 2018-09-07 | 深圳市大疆创新科技有限公司 | A kind of image processing method, unmanned plane, ground control cabinet and its image processing system |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113535996A (en) * | 2021-05-27 | 2021-10-22 | 中国人民解放军火箭军工程大学 | Road image data set preparation method and device based on aerial image |
CN113535996B (en) * | 2021-05-27 | 2023-08-04 | 中国人民解放军火箭军工程大学 | Road image dataset preparation method and device based on aerial image |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109509230B (en) | SLAM method applied to multi-lens combined panoramic camera | |
CN109931939B (en) | Vehicle positioning method, device, equipment and computer readable storage medium | |
KR101105795B1 (en) | Automatic processing of aerial images | |
CN106529538A (en) | Method and device for positioning aircraft | |
KR102349946B1 (en) | Learning method and learning device for learning automatic labeling device capable of auto-labeling image of base vehicle using images of nearby vehicles, and testing method and testing device using the same | |
CN114004977B (en) | Method and system for positioning aerial data target based on deep learning | |
CN113012215B (en) | Space positioning method, system and equipment | |
CN114252884A (en) | Method and device for positioning and monitoring roadside radar, computer equipment and storage medium | |
JP2022039188A (en) | Position attitude calculation method and position attitude calculation program | |
CN114252883B (en) | Target detection method, apparatus, computer device and medium | |
CN111666959A (en) | Vector image matching method and device | |
CN113096016A (en) | Low-altitude aerial image splicing method and system | |
CN114252859A (en) | Target area determination method and device, computer equipment and storage medium | |
CN114252868A (en) | Laser radar calibration method and device, computer equipment and storage medium | |
CN114550016B (en) | Unmanned aerial vehicle positioning method and system based on context information perception | |
CN111328099B (en) | Mobile network signal testing method, device, storage medium and signal testing system | |
CN112232132A (en) | Target identification and positioning method fusing navigation information | |
CN111667531B (en) | Positioning method and device | |
Tang et al. | Automatic geo‐localization framework without GNSS data | |
CN118196214B (en) | Outdoor camera distribution control method and equipment based on three-dimensional scene simulation | |
CN112308904A (en) | Vision-based drawing construction method and device and vehicle-mounted terminal | |
CN113361552B (en) | Positioning method and device | |
CN106650724A (en) | Method and device for building traffic sign database | |
CN114255264B (en) | Multi-base-station registration method and device, computer equipment and storage medium | |
US12000703B2 (en) | Method, software product, and system for determining a position and orientation in a 3D reconstruction of the earth's surface |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200915 |