CN111523392B - Deep learning sample preparation method and recognition method based on satellite orthographic image full attitude - Google Patents

Deep learning sample preparation method and recognition method based on satellite orthographic image full attitude

Info

Publication number
CN111523392B
CN111523392B CN202010224330.4A CN202010224330A CN111523392B CN 111523392 B CN111523392 B CN 111523392B CN 202010224330 A CN202010224330 A CN 202010224330A CN 111523392 B CN111523392 B CN 111523392B
Authority
CN
China
Prior art keywords
deep learning
sample
target
image
target area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010224330.4A
Other languages
Chinese (zh)
Other versions
CN111523392A (en)
Inventor
郑文娟
靳松直
刘严羊硕
张辉
王亚辉
周斌
郝梦茜
丛龙剑
康旭冰
傅绍文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Aerospace Automatic Control Research Institute
Original Assignee
Beijing Aerospace Automatic Control Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Aerospace Automatic Control Research Institute filed Critical Beijing Aerospace Automatic Control Research Institute
Priority to CN202010224330.4A priority Critical patent/CN111523392B/en
Publication of CN111523392A publication Critical patent/CN111523392A/en
Application granted granted Critical
Publication of CN111523392B publication Critical patent/CN111523392B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Abstract

The invention relates to a method for preparing deep learning samples based on the full attitude of a satellite orthographic image and to a target recognition method, comprising the following steps: (1) using satellite orthographic image data of the target, generate target area images of the aircraft and the target at different distances, different azimuth angles and different elevation angles by a ray tracing method (these images are also samples to be used in deep learning training); (2) perform gray inversion processing on the target area images to obtain samples; (3) perform Gaussian blur processing on the target area images to obtain samples; (4) perform logarithmic transformation on the target area images to obtain samples; (5) adjust the brightness of the target area images to obtain samples; (6) perform histogram equalization on the target area images to obtain samples; (7) form the samples from steps (1)-(6) into the final sample set to be used in deep learning training. The method helps improve the accuracy of recognizing the target image actually captured by the aircraft.

Description

Deep learning sample preparation method and recognition method based on satellite orthographic image full attitude
Technical Field
The invention relates to a preparation method and a recognition method for deep learning samples based on the full attitude of a satellite orthographic image, as well as a preparation system and a recognition system, and belongs to the technical field of image processing.
Background
One of the factors determining the quality of deep learning detection and recognition is the number of samples: the more samples available, the better the detection and recognition effect. For common classifiable targets such as vehicles, people, aircraft, ships, cats and dogs, existing samples are abundant enough that detection achieves satisfactory results. Conventionally, specific buildings are recognized by template matching, but factors such as inertial measurement unit errors and large maneuvering requirements make recognizing a specific building by template matching extremely difficult or infeasible. A deep learning method is expected to solve this problem, but the sample-number problem must be solved first: the target to be struck is usually a specific building, and detection and recognition by deep learning would require a camera to acquire images of that target under different time periods, weather conditions, distances, azimuth angles and elevation angles, which is difficult to realize in actual operation and costly. Using the image data of only one satellite, the probability of recognizing the target is less than 10 percent.
Disclosure of Invention
The technical problem solved by the invention is as follows: a method for preparing deep learning samples based on the full attitude of a satellite orthographic image and a recognition method using them are provided; without changing the inertial measurement unit precision of the existing aircraft, mathematical transformations are used to simulate the target-area samples that would be acquired in actual captive flight, and with these samples the probability of recognizing the target reaches more than 80 percent.
The technical scheme of the invention is as follows: a method for preparing deep learning samples based on the full attitude of a satellite orthographic image comprises the following steps:
(1) Using satellite orthographic image data of the target, generate target area images of the aircraft and the target at different distances, different azimuth angles and different elevation angles by a ray tracing method;
(2) Perform gray inversion processing on the target area images to obtain samples to be used in deep learning training;
(3) Perform Gaussian blur processing on the target area images to obtain samples to be used in deep learning training;
(4) Perform logarithmic transformation on the target area images to obtain samples to be used in deep learning training;
(5) Adjust the brightness of the target area images to obtain samples to be used in deep learning training;
(6) Perform histogram equalization on the target area images to obtain samples to be used in deep learning training;
(7) Form the samples to be used in deep learning training from steps (1)-(6) into the final sample set to be used in deep learning training.
Preferably, the target is a ground-fixed target.
Preferably, the target area is a square area centered on the target, and its size is preferably 300×300 pixels.
Preferably, the requirements on the aircraft and the target are: the deviation of the distance between the aircraft and the target is greater than 500 meters, the azimuth angle deviation is greater than 20 degrees, and the elevation angle deviation is greater than 10 degrees.
Preferably, the size of the object to be identified is required to be greater than 10×10 pixels.
Preferably, the target recognition method based on preparing deep learning samples from the full attitude of a satellite orthographic image comprises the following steps:
(1) Obtain the final sample set to be used in deep learning training according to the deep learning sample preparation method based on the full attitude of the satellite orthographic image;
(2) Use the samples in the final sample set to be used in deep learning training to recognize the target image actually captured by the aircraft and accurately identify the target.
Preferably, a deep learning sample preparation system based on the full attitude of a satellite orthographic image comprises: an image generation module, an image processing module and a sample set storage module;
the image generation module is used for generating, from satellite orthographic image data of the target and by a ray tracing method, target area images of the aircraft and the target at different distances, different azimuth angles and different elevation angles, these images serving as samples to be used in deep learning training;
the image processing module is used for performing gray inversion processing on the target area images to obtain samples to be used in deep learning training; performing Gaussian blur processing on the target area images to obtain samples to be used in deep learning training; performing logarithmic transformation on the target area images to obtain samples to be used in deep learning training; adjusting the brightness of the target area images to obtain samples to be used in deep learning training; and performing histogram equalization on the target area images to obtain samples to be used in deep learning training;
the sample set storage module is used for forming all of the above samples to be used in deep learning training into the final sample set to be used in deep learning training and storing that final sample set.
Preferably, the target is a ground fixed target.
Preferably, the target area is a square area centered on the target, and its size is preferably 300×300 pixels.
Preferably, a target recognition system based on preparing deep learning samples from the full attitude of a satellite orthographic image comprises: the above deep learning sample preparation system based on the full attitude of the satellite orthographic image, and a recognition module;
the deep learning sample preparation system based on the full attitude of the satellite orthographic image obtains the final sample set to be used in deep learning training;
the recognition module is used for recognizing the target image actually captured by the aircraft by using the samples in the final sample set to be used in deep learning training, and accurately identifying the target.
Compared with the prior art, the invention has the advantages that:
(1) For a specific building target with only one piece of orthographic image data, a sufficient number of samples for deep learning training can be generated using the method of the present invention, effectively saving the cost of sample collection.
(2) When the target to be struck is a specific building and the number of samples is limited, the invention provides a method for expanding the number of samples that realistically simulates the target-area samples acquired in actual captive flight.
(3) When the accuracy of the aircraft's inertial measurement unit is poor (preferably, the deviation of the aircraft-to-target distance is more than 500 meters, the azimuth angle deviation is more than 15 degrees, and the elevation angle deviation is more than 10 degrees) and under large maneuvering conditions (preferably, the aircraft deviates from the standard plane so that the angle between the standard plane and the line from the target to the current aircraft position is more than 15 degrees), template matching cannot be carried out by the traditional method, whereas a non-classifiable specific-building target can still be struck by using the deep learning method.
(4) The invention studies how to prepare full-attitude deep learning samples from the orthographic image data, which makes it possible to strike a non-classifiable specific-building target using a deep learning method and raises the recognition probability to more than 80%.
Drawings
FIG. 1 is the satellite orthographic image of the target used by the present invention;
FIG. 2 is a schematic diagram of target area images of the aircraft and the target at different distances, different azimuth angles and different elevation angles, generated from the orthographic image of FIG. 1, wherein (a), (b), (c) and (d) are schematic diagrams of the first, second, third and fourth sets of target area images at different distances, azimuth angles and elevation angles, respectively.
FIG. 3 is a schematic diagram of the present invention after gray inversion of the image of FIG. 2;
FIG. 4 is a schematic diagram of the present invention after Gaussian blur of the image of FIG. 2;
FIG. 5 is a schematic diagram of the present invention after logarithmic transformation of the image of FIG. 2;
FIG. 6 is a schematic diagram of the image in FIG. 2 after brightness adjustment according to the present invention, wherein (a) is a schematic diagram after the brightness is increased and (b) is a schematic diagram after the brightness is decreased;
FIG. 7 is a schematic diagram of the present invention after histogram equalization of the image of FIG. 2;
fig. 8 is a flow chart of the method of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and the specific embodiments.
The invention relates to a method for preparing deep learning samples and recognizing a target based on the full attitude of a satellite orthographic image, comprising the following steps: (1) using satellite orthographic image data of the target, generate target area images (samples to be used in deep learning training) of the aircraft and the target at different distances, different azimuth angles and different elevation angles by a ray tracing method; (2) perform gray inversion processing on the target area images to obtain samples; (3) perform Gaussian blur processing on the target area images to obtain samples; (4) perform logarithmic transformation on the target area images to obtain samples; (5) adjust the brightness of the target area images to obtain samples; (6) perform histogram equalization on the target area images to obtain samples; (7) form the samples from steps (1)-(6) into the final sample set to be used in deep learning training, which helps improve the accuracy of recognizing the target image actually captured by the aircraft.
Preferably, the method further comprises step (8): recognizing the target image actually captured by the aircraft by using the samples in the final sample set to be used in deep learning training, so as to accurately identify the target.
When the measurement accuracy of the aircraft's inertial measurement unit is limited (the measurement deviation of the aircraft-to-target distance is more than 500 meters, the azimuth angle deviation is more than 15 degrees, and the elevation angle deviation is more than 10 degrees), the method can solve the problem that conventional recognition methods cannot effectively identify the building under these conditions. It also avoids the need, when recognizing the target by deep learning, to use a camera to acquire images of the target in different time periods, different weather conditions, different distances, different azimuth angles and different elevation angles as training samples, which is difficult and costly in actual operation.
In the present invention, the target is preferably a ground-fixed building. The target area is preferably a square area centered on the target, and its size is preferably 300×300 pixels. The aircraft preferably carries a visible-light camera, and the size of the target to be identified is greater than 10×10 pixels.
As shown in fig. 8, the method for preparing deep learning samples based on the full attitude of a satellite orthographic image according to the invention preferably comprises the following steps:
(1) On the ground, using satellite orthographic image data of the target, generate target area images of the aircraft and the target at different distances, different azimuth angles and different elevation angles by a ray tracing method, obtaining samples to be used in deep learning training. The preferred scheme comprises the following specific steps:
Assuming that the distance deviation of the aircraft from the target is x (x is preferably greater than 500 meters) and the aircraft is required to recognize the target from distance dis, distance values are generated starting from dis+x: when the distance is greater than or equal to 5000 meters the segment interval is 500 meters; when it is less than 5000 meters and greater than or equal to 2900 meters the interval is 300 meters; when it is less than 2900 meters and greater than or equal to 1100 meters the interval is 200 meters; and when it is less than 1100 meters the interval is 100 meters.
Because the azimuth angle at which the aircraft flies towards the target is uncertain in actual flight, azimuth values are taken every 30° over 0°-360°. If the azimuth at which the aircraft flies towards the target can be determined and the measurement error of this azimuth is known, the specific azimuth values to generate can be chosen at 30° intervals on the principle that the measurement error is covered.
Likewise, because the elevation angle at which the aircraft flies towards the target cannot be determined in actual flight, elevation values are taken every 5° over -5° to -90°. If the elevation angle at which the aircraft flies towards the target can be determined and its measurement error is known, the specific elevation values to generate can be chosen at 5° intervals on the principle that the measurement error is covered.
Combining these three sets of values, the ray tracing method is applied to FIG. 1 to generate the target area images (also samples to be used in deep learning training) of the aircraft and the target at different distances, different azimuth angles and different elevation angles shown in fig. 2 (a), (b), (c) and (d), as in the sketch below.
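The following is a minimal Python sketch of how the viewpoint grid described above could be enumerated. The function names, the minimum distance d_min and the example values are assumptions made for illustration only, and the ray tracing renderer itself is represented by nothing more than a comment.

def distance_samples(dis, x, d_min=400):
    """Distance values starting from dis + x, stepped down with the segment
    intervals given above (500 m above 5000 m, 300 m down to 2900 m,
    200 m down to 1100 m, 100 m below 1100 m).  d_min is an assumed cutoff;
    the boundary handling (strict '>') is chosen so that the sequence matches
    the embodiment's distance list further below (... 5500, 5000, 4700, ...)."""
    d = dis + x
    samples = []
    while d >= d_min:
        samples.append(d)
        if d > 5000:
            d -= 500
        elif d > 2900:
            d -= 300
        elif d > 1100:
            d -= 200
        else:
            d -= 100
    return samples

def azimuth_samples():
    # azimuth unknown in actual flight: one value every 30 degrees over 0-360 degrees
    return list(range(0, 360, 30))

def elevation_samples():
    # elevation unknown in actual flight: one value every 5 degrees over -5 to -90 degrees
    return list(range(-5, -91, -5))

def viewpoint_grid(dis, x):
    """All (distance, azimuth, elevation) combinations; each combination would be
    handed to a ray tracing renderer of the satellite orthographic image to
    produce one target area image."""
    return [(d, az, el)
            for d in distance_samples(dis, x)
            for az in azimuth_samples()
            for el in elevation_samples()]

# Illustrative call with assumed values: viewpoint_grid(dis=7500, x=500)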
(2) Perform gray inversion processing on the target area image to obtain samples to be used in deep learning training; the preferred scheme is as follows:
Gray inversion processing is carried out on the target area images from step (1) to obtain images as shown in fig. 3.
(3) Perform Gaussian blur processing on the target area image to obtain samples to be used in deep learning training; the preferred scheme comprises the following specific steps:
Gaussian blur processing is carried out on the target area images from step (1) with the following three groups of parameters to obtain images as shown in fig. 4:
1) The support domain is preferably 3, and sigma is preferably 10
2) The support domain is preferably 5, and sigma is preferably 20
3) The support domain is preferably 7, and sigma is preferably 30
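A minimal sketch of the three blur settings, interpreting the "support domain" as the Gaussian kernel size; this interpretation, and the use of OpenCV, are assumptions:

import cv2

BLUR_PARAMS = [(3, 10), (5, 20), (7, 30)]  # (kernel size, sigma) for the three groups

def gaussian_blur_samples(img):
    # one blurred copy of the target area image per parameter group
    return [cv2.GaussianBlur(img, (k, k), sigma) for k, sigma in BLUR_PARAMS]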
(4) Perform logarithmic transformation on the target area image to obtain samples to be used in deep learning training; the preferred scheme comprises the following specific steps:
Logarithmic transformation is carried out on the target area images from step (1) to obtain images as shown in fig. 5; the logarithmic transformation coefficients are preferably 10, 20 and 30, respectively.
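A minimal sketch of this step, assuming the common form s = c·log(1 + r) for the transformation with the result clipped to [0, 255]; the patent text gives only the coefficients 10, 20 and 30, so the exact normalization is an assumption:

import numpy as np

LOG_COEFFS = [10, 20, 30]

def log_transform_samples(img):
    r = img.astype(np.float64)
    # s = c * log(1 + r), one sample per coefficient, clipped back to the 8-bit range
    return [np.clip(c * np.log1p(r), 0, 255).astype(np.uint8) for c in LOG_COEFFS]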
(5) Adjust the brightness of the target area image to obtain samples to be used in deep learning training; the preferred scheme is as follows:
The gray level of each pixel of the target area images from step (1) is increased by 10, 20, 30, 40, 50, 60, 70 and 80 gray levels and decreased by 10, 20, 30, 40, 50, 60, 70 and 80 gray levels, respectively; if an adjusted pixel gray level is smaller than 0 it is set to 0, and if it is larger than 255 it is set to 255, giving images as shown in fig. 6 (a) and (b).
(6) Perform histogram equalization on the target area image to obtain samples to be used in deep learning training; the preferred scheme comprises the following specific steps:
Histogram equalization is carried out on the target area images from step (1) to obtain images as shown in fig. 7.
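A minimal sketch of this step using OpenCV's standard global histogram equalization for an 8-bit single-channel image (the library choice is an assumption):

import cv2

def equalized_sample(img):
    # global histogram equalization of an 8-bit grayscale target area image
    return cv2.equalizeHist(img)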
(7) Form the samples to be used in deep learning training from steps (1)-(6) into the final sample set to be used in deep learning training, specifically:
All of the images generated in steps (1)-(6) are taken as samples to be used in deep learning training and together form the final sample set to be used in deep learning training, as in the sketch below.
The invention discloses a target recognition method based on satellite orthographic image full-attitude deep learning sample preparation, which comprises the following steps:
Step (8) is performed after step (7) above.
(8) Use the samples in the final sample set to be used in deep learning training to recognize the target image actually captured by the aircraft and accurately identify the target; the preferred scheme is as follows:
The targets in all images of the sample set are labeled with their minimum bounding rectangles, and a standard SSD300 convolutional neural network is used to train on all labeled targets of the sample set. Preferably, with the number of training samples N, the number of GPUs i, and the batch size supported by each GPU's memory j, the number of iterations iter of each training epoch equals N/(i×j), rounded up; the initial learning rate is set to 0.02 and is stepped down every K epochs with a decay coefficient of 0.1; training finishes after 3K epochs, and the preferred value of K is 100.
A further preferred scheme is as follows. The invention discloses a method for preparing deep learning samples based on the full attitude of a satellite orthographic image, comprising the following steps:
(1) Using the satellite orthographic image data of the target shown in fig. 1, generate target area images at different aircraft-to-target distances, different azimuth angles and different elevation angles, as shown in fig. 2 (a), (b), (c) and (d), by the ray tracing method;
Specifically, the different aircraft-to-target distances are 8000 meters, 7500 meters, 7000 meters, 6500 meters, 6000 meters, 5500 meters, 5000 meters, 4700 meters, 4400 meters, 4100 meters, 3800 meters, 3500 meters, 3200 meters, 2900 meters, 2700 meters, 2500 meters, 2300 meters, 2100 meters, 1900 meters, 1700 meters, 1500 meters, 1300 meters, 1100 meters, 1000 meters, 900 meters, 800 meters, 700 meters, 600 meters, 500 meters and 400 meters (30 values).
Specifically, the different azimuth angles are: 0°, 30°, 60°, 90°, 120°, 150°, 180°, 210°, 240°, 270°, 300° and 330° (12 values).
Specifically, the different elevation angles are: -85°, -80°, -75°, -70°, -65°, -60° and -55° (7 values).
Combining the different aircraft-to-target distances, azimuth angles and elevation angles (30 × 12 × 7 combinations) generates the 2520 target area images shown in fig. 2 (a), (b), (c) and (d).
(2) Carrying out gray inversion processing on all the 2520 target area images to generate an image shown in fig. 3;
(3) Carrying out Gaussian blur processing on the 2520 target area images;
the gaussian blur parameters are 3 groups, respectively:
1) Support domain 3, sigma 10
2) Support domain 5, sigma 20
3) Support domain 7, sigma 30
An image as shown in fig. 4 is generated.
(4) Carrying out logarithmic transformation on the 2520 target area images;
specifically, the coefficients of the logarithmic transformation are 10,20,30, respectively, producing an image as shown in fig. 5.
(5) Adjusting the brightness of the 2520 target area images;
the gray scale of each pixel point of the image is respectively adjusted by 10,20,30,40,50,60,70,80 gray scales and adjusted by 10,20,30,40,50,60,70,80 gray scales, so that the image shown in fig. 6 is generated.
(6) Carrying out histogram equalization on the 2520 target area images generates the images shown in fig. 7.
Eventually, the 2520 target area images shown in fig. 2 (a), (b), (c) and (d) are expanded to 63000 samples: each base image yields 25 samples (the image itself, 1 gray-inverted, 3 Gaussian-blurred, 3 log-transformed, 16 brightness-adjusted and 1 histogram-equalized copy), and 2520 × 25 = 63000.
The 63000 samples are used to recognize the target image actually captured by the aircraft and accurately identify the target; the preferred scheme is as follows:
the method comprises the steps that targets in all images in a sample set are marked by using a minimum circumscribed rectangle, a standard SSD300 convolutional neural network (the number of training samples is 63000, the number of GPUs is 3, the number of BatchSize supported by each GPU is 16, the number of iteration number iter of each training epoch is 1313, the initial learning rate is 0.02, each 100 epochs undergo step descent, the attenuation coefficient is 0.1, and 300 epochs are trained) is used for training the marked targets of all the sample sets, an identification model for the targets is generated, the positions of the targets actually shot by an aircraft are accurately identified by using the trained model, and the data identification probability of the group reaches 100%.
With the above preferred scheme, only the height and angle information provided by the aircraft, which contains deviations, is used. Under these conditions, identifying the specific position of the target with the traditional template matching method would be an impossible task. The invention uses satellite orthographic image data of only one target to prepare full-attitude deep learning samples of that target, trains a standard SSD300 convolutional neural network model on the labeled sample data, and then recognizes the target image actually captured by the aircraft with a recognition probability of 100%.

Claims (10)

1. A method for preparing deep learning samples based on the full attitude of a satellite orthographic image, characterized by comprising the following steps:
(1) Using satellite orthographic image data of the target, generate target area images of the aircraft and the target at different distances, different azimuth angles and different elevation angles by a ray tracing method;
(2) Perform gray inversion processing on the target area images to obtain samples to be used in deep learning training;
(3) Perform Gaussian blur processing on the target area images to obtain samples to be used in deep learning training;
(4) Perform logarithmic transformation on the target area images to obtain samples to be used in deep learning training;
(5) Adjust the brightness of the target area images to obtain samples to be used in deep learning training;
(6) Perform histogram equalization on the target area images to obtain samples to be used in deep learning training;
(7) Form the samples to be used in deep learning training from steps (2)-(6) into the final sample set to be used in deep learning training.
2. The method for preparing deep learning samples based on the full attitude of a satellite orthographic image according to claim 1, characterized in that: the target is a ground-fixed target.
3. The method for preparing deep learning samples based on the full attitude of a satellite orthographic image according to claim 1, characterized in that: the target area is a square area centered on the target, and its size is preferably 300×300 pixels.
4. The method for preparing deep learning samples based on the full attitude of a satellite orthographic image according to claim 1, characterized in that: the requirements on the aircraft and the target are as follows: the deviation of the distance between the aircraft and the target is greater than 500 meters, the azimuth angle deviation is greater than 20 degrees, and the elevation angle deviation is greater than 10 degrees.
5. The method for preparing deep learning samples based on the full attitude of a satellite orthographic image according to claim 1, characterized in that: the size of the target to be identified is required to be greater than 10×10 pixels.
6. A target recognition method based on satellite orthographic image full-attitude deep learning sample preparation is characterized by comprising the following steps:
(1) Obtain the final sample set to be used in deep learning training according to the method for preparing deep learning samples based on the full attitude of a satellite orthographic image of claim 1;
(2) Use the samples in the final sample set to be used in deep learning training to recognize the target image actually captured by the aircraft and accurately identify the target.
7. A deep learning sample preparation system based on the full attitude of a satellite orthographic image, characterized by comprising: an image generation module, an image processing module and a sample set storage module;
the image generation module is used for generating, from satellite orthographic image data of the target and by a ray tracing method, target area images of the aircraft and the target at different distances, different azimuth angles and different elevation angles, these images serving as samples to be used in deep learning training;
the image processing module is used for performing gray inversion processing on the target area images to obtain samples to be used in deep learning training; performing Gaussian blur processing on the target area images to obtain samples to be used in deep learning training; performing logarithmic transformation on the target area images to obtain samples to be used in deep learning training; adjusting the brightness of the target area images to obtain samples to be used in deep learning training; and performing histogram equalization on the target area images to obtain samples to be used in deep learning training;
and the sample set storage module is used for forming all of the samples to be used in deep learning training into the final sample set to be used in deep learning training and storing that final sample set in the sample set storage module.
8. The deep learning sample preparation system based on the full attitude of a satellite orthographic image according to claim 7, characterized in that: the target is a ground-fixed target.
9. The deep learning sample preparation system based on the full attitude of a satellite orthographic image according to claim 7, characterized in that: the target area is a square area centered on the target, and its size is preferably 300×300 pixels.
10. A target recognition system based on preparing deep learning samples from the full attitude of a satellite orthographic image, characterized by comprising: the deep learning sample preparation system based on the full attitude of a satellite orthographic image of claim 7, and a recognition module;
the deep learning sample preparation system based on the full attitude of a satellite orthographic image of claim 7 obtains the final sample set to be used in deep learning training;
the recognition module is used for recognizing the target image actually captured by the aircraft by using the samples in the final sample set to be used in deep learning training, and accurately identifying the target.
CN202010224330.4A 2020-03-26 2020-03-26 Deep learning sample preparation method and recognition method based on satellite orthographic image full attitude Active CN111523392B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010224330.4A CN111523392B (en) 2020-03-26 2020-03-26 Deep learning sample preparation method and recognition method based on satellite orthographic image full attitude

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010224330.4A CN111523392B (en) 2020-03-26 2020-03-26 Deep learning sample preparation method and recognition method based on satellite orthographic image full attitude

Publications (2)

Publication Number Publication Date
CN111523392A CN111523392A (en) 2020-08-11
CN111523392B true CN111523392B (en) 2023-06-06

Family

ID=71910580

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010224330.4A Active CN111523392B (en) 2020-03-26 2020-03-26 Deep learning sample preparation method and recognition method based on satellite orthographic image full attitude

Country Status (1)

Country Link
CN (1) CN111523392B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112070151B (en) * 2020-09-07 2023-12-29 北京环境特性研究所 Target classification and identification method for MSTAR data image

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108200334A (en) * 2017-12-28 2018-06-22 广东欧珀移动通信有限公司 Image capturing method, device, storage medium and electronic equipment
CN108346133A (en) * 2018-03-15 2018-07-31 武汉大学 A kind of deep learning network training method towards video satellite super-resolution rebuilding
CN109711348A (en) * 2018-12-28 2019-05-03 湖南航天远望科技有限公司 Intelligent monitoring method and system based on the long-term real-time architecture against regulations in hollow panel
CN110084093A (en) * 2019-02-20 2019-08-02 北京航空航天大学 The method and device of object detection and recognition in remote sensing images based on deep learning
CN110555352A (en) * 2018-06-04 2019-12-10 百度在线网络技术(北京)有限公司 interest point identification method, device, server and storage medium
CN110826612A (en) * 2019-10-31 2020-02-21 上海法路源医疗器械有限公司 Training and identifying method for deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9589210B1 (en) * 2015-08-26 2017-03-07 Digitalglobe, Inc. Broad area geospatial object detection using autogenerated deep learning models

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108200334A (en) * 2017-12-28 2018-06-22 广东欧珀移动通信有限公司 Image capturing method, device, storage medium and electronic equipment
CN108346133A (en) * 2018-03-15 2018-07-31 武汉大学 A kind of deep learning network training method towards video satellite super-resolution rebuilding
CN110555352A (en) * 2018-06-04 2019-12-10 百度在线网络技术(北京)有限公司 interest point identification method, device, server and storage medium
CN109711348A (en) * 2018-12-28 2019-05-03 湖南航天远望科技有限公司 Intelligent monitoring method and system based on the long-term real-time architecture against regulations in hollow panel
CN110084093A (en) * 2019-02-20 2019-08-02 北京航空航天大学 The method and device of object detection and recognition in remote sensing images based on deep learning
CN110826612A (en) * 2019-10-31 2020-02-21 上海法路源医疗器械有限公司 Training and identifying method for deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王俊强; 李建胜. Road network extraction method for large-area remote sensing images based on deep learning. 工程勘察, No. 12, full text. *
王柳. Research on space multi-target recognition method based on deep learning. 无人系统技术, 2019, pp. 49-55. *

Also Published As

Publication number Publication date
CN111523392A (en) 2020-08-11

Similar Documents

Publication Publication Date Title
CN110889324A (en) Thermal infrared image target identification method based on YOLO V3 terminal-oriented guidance
CN105678689B (en) High-precision map data registration relation determining method and device
CN104536009B (en) Above ground structure identification that a kind of laser infrared is compound and air navigation aid
CN110569796A (en) Method for dynamically detecting lane line and fitting lane boundary
CN113808174B (en) Radar small target tracking method based on full convolution network and Kalman filtering
CN109631912A (en) A kind of deep space spherical object passive ranging method
CN111598942A (en) Method and system for automatically positioning electric power facility instrument
CN110400330A (en) Photoelectric nacelle image tracking method and tracking system based on fusion IMU
CN111046756A (en) Convolutional neural network detection method for high-resolution remote sensing image target scale features
CN111523392B (en) Deep learning sample preparation method and recognition method based on satellite orthographic image full gesture
CN110866472A (en) Unmanned aerial vehicle ground moving target identification and image enhancement system and method
CN113295142B (en) Terrain scanning analysis method and device based on FARO scanner and point cloud
CN115220007A (en) Radar point cloud data enhancement method aiming at attitude identification
CN111031258B (en) Lunar vehicle navigation camera exposure parameter determination method and device
CN109657679B (en) Application satellite function type identification method
JP2000275338A (en) Apparatus and method for discrimination of target
CN116778357A (en) Power line unmanned aerial vehicle inspection method and system utilizing visible light defect identification
CN104484647B (en) A kind of high-resolution remote sensing image cloud height detection method
CN111735447B (en) Star-sensitive-simulated indoor relative pose measurement system and working method thereof
Fikriansyah et al. Low Cloud Type Classification System Using Convolutional Neural Network Algorithm
CN113283326A (en) Video SAR target intelligent detection method based on simulation target bright line characteristics
CN111753887A (en) Point source target image control point detection model training method and device
CN111523564A (en) SAR time-sensitive target sample augmentation method for deep learning training
CN112101441B (en) Coronal mass ejection detection method based on fast R-CNN
CN114513746B (en) Indoor positioning method integrating triple vision matching model and multi-base station regression model

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant