CN111753887A - Point source target image control point detection model training method and device - Google Patents

Point source target image control point detection model training method and device

Info

Publication number
CN111753887A
Authority
CN
China
Prior art keywords
point
source target
target image
point source
image control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010520015.6A
Other languages
Chinese (zh)
Inventor
李凯
张永生
李峰
童晓冲
杨伟铭
赖广陵
纪松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute Of Logistics Science And Technology Institute Of Systems Engineering Academy Of Military Sciences
Original Assignee
Institute Of Logistics Science And Technology Institute Of Systems Engineering Academy Of Military Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute Of Logistics Science And Technology Institute Of Systems Engineering Academy Of Military Sciences filed Critical Institute Of Logistics Science And Technology Institute Of Systems Engineering Academy Of Military Sciences
Priority to CN202010520015.6A priority Critical patent/CN111753887A/en
Publication of CN111753887A publication Critical patent/CN111753887A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Abstract

The invention relates to a point source target image control point detection model training method and device, belonging to the technical field of point source target detection. A plurality of point source target image control point simulation images are randomly generated, each of which takes a point source target image control point as its center and is bright at the center and dark at the periphery. A position is randomly selected in a satellite image, the point source target image control point simulation image is placed at that position, replacing the image at that position, to obtain an image sample containing a point source target image control point. These steps are repeated to obtain a set number of image samples, and a corresponding point source target image control point detection training set is generated from the image samples. A detection model is built with a neural network, and the point source target image control point detection training set is input into the detection model for training to obtain the point source target image control point detection model.

Description

Point source target image control point detection model training method and device
Technical Field
The invention relates to a point source target image control point detection model training method and device, and belongs to the technical field of point source target detection.
Background
Point target detection and identification has attracted increasing attention and is one of the key technologies for extending the operating range of guidance systems and strengthening defense capabilities. Because the target is far from the detector, it usually occupies only one or a few pixels in the acquired image and carries no shape information, so point targets cannot be identified quickly and accurately with traditional detection methods.
Currently, existing point target detection techniques generally take one of the following approaches. 1) Methods based on the characteristic parameters of the point source target image can detect point source target image control points effectively, but they usually require some prior information about the image; when no prior information is available, only empirical values can be used, which may lead to misjudged image control points. 2) The initial position and attitude information of the sensor can be used to roughly locate the point source target image control points and narrow the search range, but the candidate area must then be tested pixel by pixel during detection, which reduces the efficiency of the algorithm. An ideal detection algorithm should detect point source target image control points quickly and accurately without constraints or prior information.
In recent years, deep learning algorithms have attracted great attention and show excellent performance in detecting specific targets in images; deep convolutional neural networks can, for example, detect athletics fields and buildings in satellite images or lesion locations in medical images. The point source target image can therefore be treated as a special kind of ground object: by learning from a large number of samples, a stable neural network model can be obtained, enabling fast, intelligent detection of point source target image control points.
However, unlike natural ground features such as fields and buildings, the point source target image has its own particularities. First, ideal point source targets do not exist in nature, and manually deployed point source targets are limited in both frequency and number, so enough point source target image samples for training a neural network cannot be obtained now or even in the next few years. Second, the point source target image has a single, simple feature, whereas deep learning algorithms are generally designed to detect and classify feature-rich objects such as faces, vehicles, animals and plants, and buildings, which are widely used in deep detection scenarios. Finally, the point source target image is very small: targets of this kind are called tiny targets, and the imaging result usually fits within an image block only 3 to 7 pixels on a side, which is also rarely encountered in deep learning applications. Because of these problems, currently popular deep learning algorithms cannot be applied directly to point source target image control point detection, and their advantages cannot be brought to bear on point target detection.
Disclosure of Invention
The invention aims to provide a point source target image control point detection model training method and device, to address the low efficiency and poor precision of existing point source target image control point detection methods.
To achieve this aim, the technical scheme of the invention is as follows. The invention provides a point source target image control point detection model training method, comprising the following steps:
1) randomly generating a plurality of point source target image control point simulation images, each of which takes a point source target image control point as its center and is bright at the center and dark at the periphery;
2) randomly selecting a position in a satellite image, placing a point source target image control point simulation image at that position and replacing the image at that position, to obtain an image sample containing a point source target image control point;
3) repeating step 2) to obtain a set number of image samples, and generating a corresponding point source target image control point detection training set from the image samples;
4) building a detection model with a neural network and inputting the point source target image control point detection training set into the detection model for training, to obtain the point source target image control point detection model.
In the method, a point source target image control point simulation image, which takes a point source target image control point as its center and is bright at the center and dark at the periphery, is generated and placed into a satellite image to produce an image sample containing a point source target image control point; a corresponding training sample set is then generated and used to train a detection model built with a neural network, yielding the point source target image control point detection model. In this way a large number of samples can be obtained and a rich training set generated for training the detection model, so that the deep learning algorithm becomes effective in a tiny-target scene with a single detection feature, and the efficiency and precision of point source target detection are effectively improved.
The invention also provides a point source target image control point detection model training device, comprising a processor and a memory, the memory storing a computer program; the processor executes the computer program stored in the memory to implement the following steps:
1) randomly generating a plurality of point source target image control point simulation images, each of which takes a point source target image control point as its center and is bright at the center and dark at the periphery;
2) randomly selecting a position in a satellite image, placing a point source target image control point simulation image at that position and replacing the image at that position, to obtain an image sample containing a point source target image control point;
3) repeating step 2) to obtain a set number of image samples, and generating a corresponding point source target image control point detection training set from the image samples;
4) building a detection model with a neural network and inputting the point source target image control point detection training set into the detection model for training, to obtain the point source target image control point detection model.
In the device, the processor executes the computer program to generate a point source target image control point simulation image, which takes a point source target image control point as its center and is bright at the center and dark at the periphery; the simulation image is placed into a satellite image to produce an image sample containing a point source target image control point, a corresponding training sample set is generated, and the detection model is trained to obtain the point source target image control point detection model. In this way a large number of samples can be obtained and a rich training set generated for training the detection model built with the neural network, so that the deep learning algorithm becomes effective in a tiny-target scene with a single detection feature, and the efficiency and precision of point source target detection are effectively improved.
Further, for either of the above point source target image control point detection model training method and device schemes, the point source target image control point simulation image is an image that takes the point source target image control point as its center and whose brightness gradually darkens from the center to the periphery.
Further, for either of the above schemes, the point source target image control point simulation image is generated according to a point spread function, and the characteristics of the generated image are related to the peak range of the point spread function and to the contrast between the image center point and the edge position.
Further, for either of the above schemes, in step 4) the point source target image control point detection training set is used to train a plurality of detection models, the trained models are tested, and finally the models whose recall rate, accuracy and mAP reach 99% are retained.
Further, for either of the above schemes, the satellite image is an image block of a set size, the size of which is determined by the magnitude of the positioning error caused by the initial position and attitude parameters of the sensor.
Further, for either of the above schemes, the point source target image control point simulation image is calculated in step 1) according to the following formula:
f(i, j) = A · exp(−((i − x_0)² / (2σ²) + (j − y_0)² / (2ξ²))) + b + N_w(i, j),   i = 1, …, M;  j = 1, …, N
m, N is the number of rows and columns of the degraded images generated according to the point source target image control points; n is a radical ofw(i, j) is discretized white gaussian noise; x is the number of0And y0The peak position of the point spread function is respectively shown, sigma and ξ are respectively the standard deviation of the point spread function on the X, Y axis, A is the brightness of the central pixel of the point source target image control point, and b is the brightness of the edge position of the simulated image of the point source target image control point.
Further, for either of the above schemes, the peak position of the point spread function ranges over 0 to 1 pixel.
Further, for either of the above schemes, the pixel value of the central pixel of the point source target image control point ranges over (0.5-1) × N_max, where N_max is the maximum pixel value of the satellite image.
Drawings
FIG. 1 is a flow diagram of the method in an embodiment of the detection model training method of the present invention;
FIG. 2 shows some of the point source target image control point simulation images in an embodiment of the detection model training method of the present invention;
FIG. 3a is an image of image sample 1, containing point source target image control points, in an embodiment of the detection model training method of the present invention;
FIG. 3b is an image of image sample 2, containing point source target image control points, in an embodiment of the detection model training method of the present invention;
FIG. 4a is a diagram illustrating the detection result of the trained Faster R-CNN model on a first image containing a simulated point source image in an embodiment of the detection model training method of the present invention;
FIG. 4b is a diagram illustrating the detection result of the trained CenterNet model on the first image containing a simulated point source image in an embodiment of the detection model training method of the present invention;
FIG. 5a is a diagram illustrating the detection result of the trained Faster R-CNN model on a second image containing a simulated point source image in an embodiment of the detection model training method of the present invention;
FIG. 5b is a diagram illustrating the detection result of the trained CenterNet model on the second image containing a simulated point source image in an embodiment of the detection model training method of the present invention;
FIG. 6a is a diagram illustrating the detection result of the trained Faster R-CNN model on a satellite image containing a real point source image in an embodiment of the detection model training method of the present invention;
FIG. 6b is a diagram illustrating the detection result of the trained CenterNet model on a satellite image containing a real point source image in an embodiment of the detection model training method of the present invention;
FIG. 7 is a block diagram of the detection model training apparatus according to an embodiment of the present invention.
Detailed Description
The following describes an embodiment of the detection model training method and a corresponding embodiment of the detection model training apparatus according to the present invention.
Detecting point source target image control points is a point target detection problem. Because point source target image control points have a single image feature, a small size (the imaging result usually covers only an image block of a few pixels), and limited deployment frequency and number, a large set of point source target image samples cannot be obtained as training data for a neural network model, so deep learning algorithms have rarely been applied to point source target image control point detection.
In order to fully exploit the superior performance of deep learning algorithms in specific target detection and to improve the precision and efficiency of point target detection, the key of this embodiment is to obtain a large number of training samples and thus generate a training set large enough to train a detection model built on a neural network.
As shown in fig. 1, the principle of the detection model training method in this embodiment is as follows: based on the imaging characteristics of point source target image control points, a large amount of image sample data simulating point source target image control points is generated and used to train the neural network; the trained model is then used to detect satellite images containing real point source target images, and the detection effect is tested.
First, in this embodiment, a plurality of point source target image control point simulation images are randomly generated, according to the characteristics that point source target image control points exhibit in satellite images.
In satellite images containing point source target image control points, the control point image is essentially bright at the center and dark around it. In this embodiment, for example, 14000 point source target image control point simulation images with this characteristic are generated; each is centered on the point source target image control point and radiates outward, its brightness gradually darkening from the center to the periphery.
In this embodiment, the point source target image control point simulation image corresponding to the point source target image is generated using a point spread function, and the characteristics of the generated image are related to the peak range of the point spread function and the contrast between the center point and the edge of the image. In the generated image, the central point is the point source target image control point, i.e. the peak position of the point spread function; the closer a point in the image is to this peak position, the higher its brightness, and the farther away, the lower its brightness. In addition, a certain contrast relation is set between the central point and the edge of the image. In this way, a more accurate point source target image control point simulation image is generated.
In order for the trained model to have better detection capability, the internal structure of the imaging sensors and external environmental factors are taken into account as far as possible when generating the point source target image control point simulation images. These factors include the following:
1) Center imaging position (x_0, y_0) of the point source target image control point. When the imaging position (i.e. the phase) of the control point center within a pixel changes, the imaging result of the control point also changes. Therefore, when generating the simulation image, the phase of the point source target image control point image should be selected randomly between 0 and 1 pixel, to simulate the imaging characteristics of the control point at different imaging positions.
2) Image PSF parameters (σ and ξ). The PSF is the degradation function of the imaging process; it spreads the imaging range of a point source target image control point to a region about 3-7 pixels on a side. The PSF parameters determine the size of the imaging range and also affect the value of each pixel in the control point image. The range of the degradation function parameters during imaging should therefore be fully considered when generating the simulation images.
3) Random noise N_w(i, j) during imaging. Noise in real images is unavoidable: design defects in the sensor's photosensitive elements, channel noise during image transmission and other factors all introduce noise into the imaging result. Gaussian white noise with different signal-to-noise ratios (SNR) is therefore added when generating the simulation images, to simulate the influence of random noise.
4) Imaging brightness A. The point source target reflects sunlight with a mirror to obtain a large luminous flux and thus forms a bright spot on the satellite image. The luminous flux received by the sensor is largest when the light path between the sun, the target mirror and the satellite entrance pupil is strictly aligned; in practice, however, orbit prediction deviations generally introduce errors into the light path, and the imaging brightness of the control point image changes accordingly. In addition, illumination conditions also directly affect imaging brightness; although experiments are usually performed in good weather, this factor is fully considered during algorithm design, providing a reference for the feasibility of experiments under weak illumination.
5) Imaging contrast (A + b)/b. This is the contrast between the center point and the edge of the point source target control point image; it is determined by the ratio of the reflectivity of the target mirror to that of the background of the area where the target is located. Different imaging contrasts are considered when generating the simulation images, so that the trained neural network model can detect point source target image control point images under a variety of contrast conditions.
Integrating the key influencing factors above, the corresponding point source target image control point simulation image is obtained mainly from the following calculation formula:
f(i, j) = A · exp(−((i − x_0)² / (2σ²) + (j − y_0)² / (2ξ²))) + b + N_w(i, j),   i = 1, …, M;  j = 1, …, N
By setting different imaging parameters, simulated point source target image control point images under different imaging conditions can be obtained. Here M and N are the numbers of rows and columns of the generated degraded image, and N_w(i, j) is discretized Gaussian white noise. In the formula above, x_0 and y_0 are the coordinates of the peak position of the point spread function, σ and ξ are the standard deviations of the point spread function on the X and Y axes, A is the brightness of the central pixel of the point source target image control point, and b is the brightness at the edge of the simulation image. (x_0, y_0) is thus the exact image coordinate of the point source image, and in this embodiment the fractional part of (x_0, y_0) is defined as the phase of the point source image. The value ranges of these parameters are listed in Table 1. Some of the point source images obtained by randomly selecting parameters according to the formula above are shown in fig. 2.
Table 1:
Parameter | Value range
Phase | 0-1 pixel on the x axis, 0-1 pixel on the y axis
Image PSF parameters | σ: 0.5-1 pixels, ξ: 0.5-1 pixels
SNR | 20-40 dB
Point source target center pixel brightness A | (0.5-1) × N_max
Point source target to background reflectivity ratio (A + b)/b | 2-20
In this embodiment, the above parameters are randomly selected according to the method described, and images with a reserved central area of 7 × 7 pixels are generated; the central position is the central imaging position (x_0, y_0) of the point source target, so the closer a point is to this position, the higher its brightness, and the farther away, the lower its brightness.
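The following is a minimal NumPy sketch of this simulation step, assuming the Gaussian point spread function model given by the formula above and the parameter ranges in Table 1. The function name, the mapping from SNR in dB to the noise standard deviation, and the clipping to [0, N_max] are illustrative assumptions rather than details taken from the patent.

```python
import numpy as np

def simulate_control_point_patch(size=7, n_max=255, rng=None):
    """Generate one simulated point source target image control point patch:
    bright at the centre, darkening towards the edges, with additive Gaussian
    white noise. Parameter ranges follow Table 1; the SNR-to-noise mapping and
    the clipping to [0, n_max] are illustrative assumptions."""
    rng = np.random.default_rng() if rng is None else rng
    # Peak (phase) position: a random sub-pixel offset inside the central pixel.
    x0 = size // 2 + rng.uniform(0.0, 1.0)
    y0 = size // 2 + rng.uniform(0.0, 1.0)
    sigma, xi = rng.uniform(0.5, 1.0, size=2)      # PSF standard deviations on the X and Y axes
    a = rng.uniform(0.5, 1.0) * n_max              # centre-pixel brightness A
    ratio = rng.uniform(2.0, 20.0)                 # target-to-background ratio (A + b) / b
    b = a / (ratio - 1.0)                          # edge (background) brightness b
    snr_db = rng.uniform(20.0, 40.0)               # signal-to-noise ratio in dB

    i, j = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    patch = a * np.exp(-((i - x0) ** 2 / (2 * sigma ** 2) +
                         (j - y0) ** 2 / (2 * xi ** 2))) + b

    # Discretised Gaussian white noise N_w(i, j) at the chosen SNR.
    noise_std = patch.std() / (10.0 ** (snr_db / 20.0))
    patch = patch + rng.normal(0.0, noise_std, patch.shape)
    return np.clip(patch, 0.0, n_max), (x0, y0)
```

Calling this function repeatedly with fresh random parameters yields bright-centre, dark-edge patches analogous to those shown in fig. 2.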
Then, in order to obtain a large number of training samples and generate a corresponding training set, this embodiment generates image samples by replacing the image at an arbitrary position of an existing satellite image with a point source target image control point simulation image.
Specifically, a position is randomly selected in an existing satellite image, a point source target image control point simulation image is placed at that position, and the pixel values within a certain size range at that position are replaced, generating a corresponding sample.
In a conventional satellite image, the search range for a point source target image can be limited to a block of W × W pixels using the initial position and attitude parameters of the sensor. Conventional target detection is usually based on a sliding-window algorithm; to improve the search efficiency of the sliding window, in this embodiment the search range is set to the size of the region to be detected, that is, an image block with a side length of W pixels is randomly selected from the satellite image. One of the 14000 simulation images is then placed at an arbitrary position in the image block, replacing the pixel values at that position, to obtain an image sample containing a point source target image control point. The point source target image generated by the formula is bright in the middle and dark at the periphery and extends indefinitely; a 7 × 7 image slice is cut out around its brightest point, and this slice forms the point source target simulation image. In this embodiment, placing such a 7 × 7 slice into an image block of W pixels generates a sample containing a simulated point source image.
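A short sketch of this embedding step is given below, assuming a single-channel (grayscale) W × W image block stored as a NumPy array; the patch argument can be the output of the hypothetical simulate_control_point_patch above, and the returned bounding box is just one convenient way to record where the patch was placed.

```python
import numpy as np

def embed_patch(image_block, patch, rng=None):
    """Paste a simulated control point patch at a random position inside a
    W x W satellite image block (single-channel array assumed), replacing the
    original pixel values, and return the sample plus the patch bounding box."""
    rng = np.random.default_rng() if rng is None else rng
    sample = image_block.astype(float)
    p = patch.shape[0]
    h, w = sample.shape[:2]
    top = int(rng.integers(0, h - p + 1))       # random row of the patch's top-left corner
    left = int(rng.integers(0, w - p + 1))      # random column of the patch's top-left corner
    sample[top:top + p, left:left + p] = patch  # replace the pixels at that position
    box = (left, top, left + p, top + p)        # (x_min, y_min, x_max, y_max)
    return sample, box
```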
Using this sample generation method, a sufficient number of samples are obtained, forming a point source target image simulation data set. Fig. 3a and 3b show two samples of point source target simulation images, where the square frame marks the detection area in which the point source target image control point is located.
In order to train the neural network model effectively, in this embodiment the obtained samples need to be processed to generate a corresponding training set, which is obtained by cropping and labeling the image samples. For example, taking fig. 4a as an example, an image sample containing a point source target image control point image is cropped appropriately, for instance into small images that contain almost all of the control point simulation image and images that contain no control point. The cropped images are then labeled: a small image with a point source target image control point is labeled 1, and the remaining images are labeled 0. The satellite image is taken as the input of the detection model, and whether a cropped image contains a point source target image control point is taken as the output, thereby generating the corresponding training set.
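The cropping and labeling described above can be sketched as follows; the crop size, stride and overlap threshold used here are assumptions for illustration, since the patent does not specify them.

```python
import numpy as np

def crop_and_label(sample, box, crop_size=32, stride=32, min_overlap=0.9):
    """Tile a sample into fixed-size crops and label each crop 1 if it covers
    (almost all of) the embedded control point patch, otherwise 0.
    crop_size, stride and min_overlap are assumed values for illustration."""
    x_min, y_min, x_max, y_max = box
    target_area = float((x_max - x_min) * (y_max - y_min))
    h, w = sample.shape[:2]
    crops, labels = [], []
    for top in range(0, h - crop_size + 1, stride):
        for left in range(0, w - crop_size + 1, stride):
            crop = sample[top:top + crop_size, left:left + crop_size]
            # Fraction of the control point's bounding box covered by this crop.
            ix = max(0, min(left + crop_size, x_max) - max(left, x_min))
            iy = max(0, min(top + crop_size, y_max) - max(top, y_min))
            covered = (ix * iy) / target_area
            crops.append(crop)
            labels.append(1 if covered >= min_overlap else 0)
    return np.stack(crops), np.array(labels)
```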
Finally, after the training set is generated, the neural network model is trained on it. Specifically, a detection model is built with a neural network, and the point source target image control point detection training set obtained above is input into the detection model for training, yielding the point source target image control point detection model.
In this embodiment, when generating the training set, the obtained sample data may be divided into a training set, a validation set and a test set. For example, with the Faster R-CNN network model, the training set contains 10000 simulated point source target image control point image samples, and the test set and validation set each contain 2000 samples. After the model is trained on the training set, the test set and validation set are input into the trained model to obtain detection results, and the performance index parameters are evaluated.
As can be seen from Table 2, the Faster R-CNN network model performs very well on the simulation data set: the recall rate, accuracy and mAP value of the model are all close to 100%.
Table 2:
Network model | Number of samples | Detected | False detections | Missed detections | Recall/% | Accuracy/% | mAP
Faster R-CNN | 2000 | 1999 | 7 | 1 | 99.95 | 99.65 | 0.999
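For reference, the recall and accuracy figures in Tables 2 and 3 can be reproduced from the raw counts if "Detected" is read as the number of correct detections; under that reading the tables' accuracy column corresponds to precision. This is an interpretation of the tables rather than a formula stated in the patent, and mAP cannot be recomputed from these counts alone.

```python
def detection_metrics(detected, false_detections, missed):
    """Recall and accuracy (precision) in percent from raw detection counts,
    reading 'detected' as the number of correct detections.

    Table 2: detection_metrics(1999, 7, 1)  -> (99.95, 99.65)
    Table 3: detection_metrics(951, 12, 49) -> (95.1, 98.75)
             detection_metrics(977, 1, 23)  -> (97.7, 99.9)
    """
    recall = detected / (detected + missed)
    precision = detected / (detected + false_detections)
    return round(recall * 100.0, 2), round(precision * 100.0, 2)
```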
Fig. 4a and 4b show the detection results of the two models, Faster R-CNN and CenterNet, on one image, and fig. 5a and 5b show their detection results on another image. Together with Table 2, these results show that the two deep learning algorithms have great application potential for detecting tiny targets with a single feature.
Specifically, in this embodiment, satellite image data containing real point source target images is input into the trained models to obtain detection results and analyze their performance parameters; the performance index parameters of the two models on real data are shown in Table 3.
Table 3:
Network model | Number of samples | Detected | False detections | Missed detections | Recall/% | Accuracy/% | mAP
Faster R-CNN | 1000 | 951 | 12 | 49 | 95.10 | 98.75 | 0.997
CenterNet | 1000 | 977 | 1 | 23 | 97.70 | 99.90 | 0.990
As can be seen from Table 3, the performance of the two network models on real data is somewhat lower than on the simulation data, but both complete the detection task for real point source target images well: the recall rate of both models is above 95%, and the accuracy and mAP values are close to 100%. Fig. 6a and 6b show the detection results of the two algorithms on the same real data. Table 3 shows that the point source target image control point detection model obtained by the method in this embodiment can detect point source target images accurately, allowing deep-learning-based neural network algorithms to play a greater role in point target detection.
In this embodiment, generating the point source target image control point simulation images is not limited to calculation with a point spread function; the essential point is that the generated simulation image is bright at the center and dark at the periphery, centered on the point source target image control point, and preferably darkens gradually from the center to the periphery. Other calculation models or algorithms in the prior art may therefore also be used: as long as they produce images that are bright at the center and dark around, they can implement the method of generating the plurality of point source target image control point simulation images in this embodiment and fall within the scope of the present invention.
In this embodiment, a corresponding detection model training apparatus is also designed for the above process. As shown in fig. 7, the apparatus comprises a processor and a memory; the memory stores a computer program that can run on the processor, and the processor implements the point source target image control point detection model training method when executing the computer program.
That is, the method in the above method embodiment should be understood as meaning that the flow of the point source target image control point detection model training method can be implemented by computer program instructions. These computer program instructions may be provided to a processor, such that execution of the instructions by the processor implements the functions specified in the method flow described above.
The processor referred to in this embodiment means a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA).
The memory referred to in this embodiment includes any physical device for storing information; generally, information is digitized and then stored in a medium electrically, magnetically, optically or by other means. Examples include: memories that store information electrically, such as RAM and ROM; memories that store information magnetically, such as hard disks, floppy disks, magnetic tapes, magnetic core memories, bubble memories and USB flash drives; and memories that store information optically, such as CDs and DVDs. Of course, there are other types of memory, such as quantum memories and graphene memories.
The apparatus comprising the memory, the processor and the computer program is realized by the processor executing the corresponding program instructions; the processor can run various operating systems, such as Windows, Linux, Android and iOS.
In other embodiments, the device may further comprise a display for displaying the results for reference by the operating staff.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention. The scope of the present invention is defined by the appended claims, and all structural changes made using the contents of the description and drawings of the present invention are intended to be embraced therein.

Claims (9)

1. A point source target image control point detection model training method is characterized by comprising the following steps:
1) randomly generating a plurality of point source target image control point simulation images, wherein the point source target image control point simulation images are as follows: an image with bright center and dark periphery by taking the point source target image control point as a center;
2) randomly selecting a position in the satellite image, putting the point source target image control point simulation image into the position, and replacing the image at the position to obtain an image sample containing the point source target image control point;
3) repeating the step 2) to obtain a set number of image samples, and generating a corresponding point source target image control point detection training set according to the image samples;
4) and establishing a detection model by using a neural network, inputting the point source target image control point detection training set into the detection model for training, and obtaining the point source target image control point detection model.
2. The training method for the point source target image-controlled point detection model according to claim 1, wherein the point source target image-controlled point simulation image is: and the image which takes the point source target image control point as the center and gradually darkens in brightness from the center to the periphery.
3. The training method of the point source target image control point detection model according to claim 1 or 2, wherein the point source target image control point simulation image is generated according to a point spread function, and the characteristics of the generated image are related to the peak value range of the point spread function and the contrast between the image center point and the edge position.
4. The point source target image control point detection model training method according to claim 1, wherein in step 4), a plurality of detection models are trained with the point source target image control point detection training set and the trained models are tested, and finally the models whose recall rate, accuracy and mAP reach 99% are retained.
5. The training method of the point source target image control point detection model according to claim 1, wherein the satellite image is an image block with a set size, and the size of the image block is determined according to the size of the positioning error caused by the initial position and posture parameters of the sensor.
6. The training method for the point source target image control point detection model according to claim 3, wherein the corresponding point source target image control point simulation image is calculated in step 1) according to the following formula:
f(i, j) = A · exp(−((i − x_0)² / (2σ²) + (j − y_0)² / (2ξ²))) + b + N_w(i, j),   i = 1, …, M;  j = 1, …, N
m, N is the number of rows and columns of the degraded images generated according to the point source target image control points; n is a radical ofw(i, j) is discretized white gaussian noise; x is the number of0And y0The peak position of the point spread function is respectively shown, sigma and ξ are respectively the standard deviation of the point spread function on the X, Y axis, A is the brightness of the central pixel of the point source target image control point, and b is the brightness of the edge position of the simulated image of the point source target image control point.
7. The training method of the point source target image-controlled point detection model according to claim 6, wherein the peak position of the point spread function has a value ranging from 0 to 1 pixel.
8. The point source target image control point detection model training method according to claim 6, wherein the pixel value of the central pixel of the point source target image control point ranges over (0.5-1) × N_max, where N_max is the maximum pixel value of the satellite image.
9. A training device for a point source target image control point detection model, comprising a processor and a memory, wherein the memory stores a computer program, and the processor executes the computer program stored in the memory to implement the training method for the point source target image control point detection model according to any one of claims 1 to 8.
CN202010520015.6A 2020-06-09 2020-06-09 Point source target image control point detection model training method and device Pending CN111753887A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010520015.6A CN111753887A (en) 2020-06-09 2020-06-09 Point source target image control point detection model training method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010520015.6A CN111753887A (en) 2020-06-09 2020-06-09 Point source target image control point detection model training method and device

Publications (1)

Publication Number Publication Date
CN111753887A true CN111753887A (en) 2020-10-09

Family

ID=72674998

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010520015.6A Pending CN111753887A (en) 2020-06-09 2020-06-09 Point source target image control point detection model training method and device

Country Status (1)

Country Link
CN (1) CN111753887A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313684A (en) * 2021-05-28 2021-08-27 北京航空航天大学 Video-based industrial defect detection system under dim light condition

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108470185A (en) * 2018-02-12 2018-08-31 北京佳格天地科技有限公司 The atural object annotation equipment and method of satellite image
CN110020635A (en) * 2019-04-15 2019-07-16 中国农业科学院农业资源与农业区划研究所 Growing area crops sophisticated category method and system based on unmanned plane image and satellite image
WO2020037960A1 (en) * 2018-08-21 2020-02-27 深圳大学 Sar target recognition method and apparatus, computer device, and storage medium
CN111142137A (en) * 2018-11-05 2020-05-12 中国人民解放军战略支援部队信息工程大学 Method and device for positioning point source target image control points



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination