CN115841048B - Multi-mode simulation data set preparation method based on target mechanism model - Google Patents


Info

Publication number
CN115841048B
CN115841048B (Application CN202310101478.2A)
Authority
CN
China
Prior art keywords
target
representing
simulation
image
scene
Prior art date
Legal status
Active
Application number
CN202310101478.2A
Other languages
Chinese (zh)
Other versions
CN115841048A
Inventor
杨小冈
王思宇
申通
卢瑞涛
席建祥
李清格
蔡光斌
范继伟
Current Assignee
Rocket Force University of Engineering of PLA
Original Assignee
Rocket Force University of Engineering of PLA
Priority date
Filing date
Publication date
Application filed by Rocket Force University of Engineering of PLA
Priority to CN202310101478.2A
Publication of CN115841048A
Application granted
Publication of CN115841048B

Abstract

The invention provides a method for preparing a multimode simulation data set based on a target mechanism model, which comprises the following steps: step 1, constructing a target environment scene based on a time-sensitive target model; step 2, multiband imaging simulation based on sensor parameters; step 3, collecting video and image data sets; and step 4, semi-automatic data annotation based on image saliency detection. On the basis of analyzing a large quantity of publicly disclosed information on active weapon equipment, the method constructs a military time-sensitive target model library and various simulated background environments, then realizes visible-light, low-light night-vision, medium-wave infrared and long-wave infrared simulation through virtual imaging simulation modeling of a multimode sensor, and finally performs semi-automatic data labeling on the collected data to complete preparation of the data set.

Description

Multi-mode simulation data set preparation method based on target mechanism model
Technical Field
The invention belongs to the technical field of image dataset preparation, relates to multimode simulation dataset preparation, and in particular relates to a multimode simulation dataset preparation method based on a target mechanism model.
Background
With the continuous development of computer vision technology, battlefield forms keep changing and combat modes are evolving toward informatization and intelligence; the primary goal of intelligent new-type equipment is to accomplish advanced battlefield tasks such as situation awareness and target detection and recognition. As military science and technology develop and transform, combining artificial intelligence with the precision guidance of missile weapons will become an important means, and a necessary path, for improving their combat effectiveness. For a precision-guided weapon to adaptively detect and intercept targets in complex environments, automatically identify and track them, and finally hit them accurately, the seeker must adopt advanced information processing technology and an advanced information processing system. Air-to-ground target image data are widely used in scene matching, cognitive navigation and other fields of intelligent precision-guided weapons; time-sensitive target data sets are a basic key factor in realizing such weapons, and the rapidity, reliability and effectiveness of data set preparation directly determine their operational effectiveness. For air-to-ground target image cognition data sets, the coverage and fidelity of the data are especially important to consider. Compared with the target detection data sets required by civil artificial intelligence algorithms, military target image data are difficult and costly to acquire.
Therefore, when the training target sample data set covers too few target types, states and other attributes, the data set samples need to be generated by virtual simulation so as to enhance the sample data.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a multi-mode simulation data set preparation method based on a target mechanism model, which addresses the technical problem that the prior art's capability to prepare time-sensitive target data sets needs further improvement.
In order to solve the technical problems, the invention adopts the following technical scheme:
a multimode simulation data set preparation method based on a target mechanism model comprises the following steps:
step 1, constructing a target environment scene based on a time-sensitive target model;
step 101, constructing a time-sensitive target model;
step 102, modeling an environment scene:
after the time-sensitive target model is obtained in the step 101, generating a corresponding terrain data model by using original data, and forming a corresponding database, wherein the original data comprises elevation data, vector data and an image; performing complete modeling on an environmental thermal system;
step 2, multiband imaging simulation based on sensor parameters:
the sensor parameters comprise sensor optical parameters, sensor detection parameters and sensor electrical parameters;
the imaging simulation can designate any band to be simulated within the supported simulation band range, supports simultaneous simulation of multiple bands, allows simulation parameters to be modified while the simulation is running, and outputs gray images with a pixel bit width of no less than 16 bits;
step 3, collecting video and image data sets:
step 301, a scene planning file corresponding to the environment scene obtained by modeling according to step 102 is opened, the file data format of the scene planning file is XML, the scene planning file comprises a terrain scene path, a target model path, a material file path and various module parameters, and software is operated to obtain a visual simulation scene after the scene planning file is loaded;
step 302, in the visual simulation scene, selection, addition, deletion, position and attitude adjustment, and scale adjustment are performed on the time-sensitive target models constructed in step 101, and the scene view is composed by visual editing;
step 303, recording and previewing a scene under the conditions of a multi-dimensional view angle and a multi-scale view field by using the multi-band imaging simulation method based on the sensor parameters in step 2, and completing acquisition of video and image data sets;
step 4, semi-automatic data labeling for image saliency detection:
Coarse labeling is performed on the video and images acquired in step 3 using the KCF algorithm, and fine labeling is then performed on the coarsely labeled video and images using the FT algorithm, thereby realizing semi-automatic data labeling based on image saliency detection.
Compared with the prior art, the invention has the following technical effects:
the method comprises the steps of constructing a military time-sensitive target model library and various simulation background environments on the basis of analyzing a large number of various disclosed information data of active weapon equipment, realizing visible light, night vision, medium wave infrared and long wave infrared simulation functions through multimode sensor virtual imaging simulation modeling, and finally carrying out semi-automatic data labeling on the basis of data acquisition so as to realize data set preparation.
Compared with the traditional method for enhancing data by using the existing data, the method provided by the invention can fundamentally solve the problems of insufficient training data, fewer target types and the like, can expand a data set at two layers of breadth and depth, and is beneficial to the smooth development of tasks such as target identification and the like.
The method constructs a military time-sensitive target multimode simulation model library and various sea, land and air simulation background environments, simulates and generates typical target and scene data, and lays a solid foundation for subsequent data set preparation.
According to the method disclosed by the invention, the simulation modeling of the virtual imaging of the multimode sensor is adopted, the simulation functions of visible light, night vision, medium-wave infrared and long-wave infrared are realized, the influence of a meteorological environment on the simulation imaging is considered, and the imaging of a network dynamic driving software platform and the real-time transmission of video images can be realized.
Aiming at the problem that the KCF target tracking algorithm tracks scale-varying target images inaccurately, the method extracts a target saliency map according to an image saliency strategy on the basis of the KCF coarse labeling, then performs threshold segmentation to obtain the corresponding binary map, further determines the target position, and performs fine labeling.
The method effectively solves the problems of insufficient training data set, single target type and the like in a time-sensitive target detection task, can utilize virtual simulation to generate the required data and simultaneously carry out target marking, thereby realizing complete preparation of the data set and laying a foundation for improving the accuracy of target detection, and has a certain practical value.
Drawings
Fig. 1 is a graph of thermal infrared effects.
FIG. 2 is a partial object model diagram.
FIG. 3 is a diagram of a terrain modeling process.
FIG. 4 is a partial terrain scene model map.
Fig. 5 is a graph of simulation results of visible light and long-wave infrared.
Fig. 6 is a graph of labeling effect based on a conventional KCF algorithm.
FIG. 7 is a schematic diagram of a semi-automatic labeling result based on the method of the present invention.
The following examples illustrate the invention in further detail.
Detailed Description
All the devices according to the present invention, unless otherwise specified, are known in the art.
The KCF algorithm refers to a kernel-related filtering algorithm.
The FT algorithm refers to a frequency tuning saliency detection algorithm.
The MOSSE model refers to the Minimum Output Sum of Squared Error model.
The CSK model refers to a tracking algorithm model based on a cyclic matrix structure.
In order to further improve the intelligence of guided weapons and meet the requirements of intelligent detection and identification tasks, how to prepare high-quality image target data sets with many target types, wide target-state coverage and high target sample fidelity has become a pressing problem.
Aiming at the problems of insufficient data sets and low time-sensitive target detection precision of a depth-based target detection algorithm, the invention designs a multi-mode simulation data set preparation method based on a target mechanism model.
The preparation method of the multimode simulation data set based on the target mechanism model mainly comprises loading target and terrain data, generating the designed scene, simulating the environment and sensors, and simulating and generating image and video data sets. First, a virtual-simulation data generation system is constructed through time-sensitive target and target environment scene modeling; then multiband sensor imaging simulation is performed according to the environmental and sensor parameters to provide basic data support; video and image data sets are then collected from multi-dimensional view angles and multi-scale fields of view; and finally the complete preparation of the data set is realized with a semi-automatic data labeling method based on KCF-algorithm coarse labeling and FT-algorithm image saliency detection. Research on this multi-mode simulation data set preparation technique based on a target mechanism model can enhance the practical applicability of intelligent target detection algorithms and greatly improve the reliability of deep-learning-based target detection.
The following specific embodiments of the present invention are provided, and it should be noted that the present invention is not limited to the following specific embodiments, and all equivalent changes made on the basis of the technical solutions of the present application fall within the protection scope of the present invention.
Examples:
the embodiment provides a multi-mode simulation data set preparation method based on a target mechanism model, which comprises the following steps:
step 1, constructing a target environment scene based on a time-sensitive target model:
In this step, a model library of time-sensitive targets and background scenes is constructed. It contains time-sensitive targets in sea, land and air scenes (target models of active weapons at home and abroad, such as tanks, vehicles, airplanes, aircraft carriers, destroyers and cruisers); land scenes such as airports, roads, buildings, bridges, deserts, Gobi and launch positions; ocean scenes such as harbors, sea surface and islands; and sky backgrounds such as the near-ground sky, deep space and different cloud layers. Scene modeling mainly loads data such as three-dimensional terrain models, surface feature models and ocean models together with the corresponding texture maps and material files, and generates the desired scene environment. The method supports generating the corresponding terrain database by importing raw data such as elevation data, vector data and orthographic images.
The step 1 specifically comprises the following sub-steps:
step 101, constructing a time-sensitive target model:
Step 10101, a model library is built by analyzing various publicly disclosed information on active weapon equipment. Dynamic hot-zone simulation of the target is then performed: by determining the relative positions of the heat sources inside the target body and on its surface, and setting the heat-related parameters, the heat sources inside the target body are simulated in the simulation.
Preferably, in step 10101, the heat-related parameters include a material layer number, a material parameter, and a thickness.
In this embodiment, the thermal infrared effect is shown in fig. 1.
In step 10102, the infrared radiation of the target is calculated using the following formula to obtain a time-sensitive target model:

$$L = \tau\left(\varepsilon L_{b} + L_{r}\right) + L_{p}$$

wherein:
$L$ represents the total radiance reaching the observation point;
$L_{b}$ represents the blackbody radiance at the same temperature as the scene object;
$L_{r}$ represents the surrounding radiance reflected by the object surface;
$\varepsilon$ represents the emissivity of the object material;
$\tau$ represents the atmospheric transmittance;
$L_{p}$ represents the atmospheric path radiance.
Preferably, in step 10102, the infrared radiation includes thermal radiation emitted by the object surface itself, reflected radiation from solar incident thermal radiation and thermal radiation scattered by the ambient background, and thermal radiation emitted by the upward path atmosphere.
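The additive radiance model of step 10102 can be sketched in a few lines. This is a minimal illustration only: the function name and the sample values are hypothetical, and real simulators evaluate each term per band and per pixel.

```python
def radiance_at_sensor(L_blackbody, L_reflected, emissivity, tau_atm, L_path):
    """Total radiance reaching the observation point (illustrative helper).

    Implements L = tau * (eps * L_bb + L_refl) + L_path: surface emission
    plus reflected surroundings, attenuated by the atmosphere, plus the
    atmospheric path radiance, as described in step 10102.
    """
    return tau_atm * (emissivity * L_blackbody + L_reflected) + L_path

# Example: a warm surface seen through a moderately transmissive atmosphere
# (all numbers are made-up sample values in consistent radiance units).
L = radiance_at_sensor(L_blackbody=12.0, L_reflected=1.5,
                       emissivity=0.92, tau_atm=0.7, L_path=0.8)
```

Each input would in practice come from the thermal model (blackbody radiance from the computed surface temperature) and the atmosphere model (transmittance and path radiance for the chosen band).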
A partial object model diagram constructed in this embodiment is shown in FIG. 2.
Step 102, modeling an environment scene:
After the time-sensitive target model is obtained in step 101, a corresponding terrain data model is generated from the original data and the corresponding database is formed, wherein the original data comprise elevation data, vector data and images; the terrain modeling process is shown in fig. 3, and the environmental thermal system is modeled completely.
Preferably, in step 102, the complete modeling of the environmental thermal system includes the temperature parameters of the specific scene under the geographic conditions of a designated date, longitude and latitude, the elevation-angle irradiation of the sun, moon and stars at a specific moment, and the hot zone, thermal boundary and thermal balance of the target, so that object temperatures can be updated in real time.
In this embodiment, the calculation and updating of the object surface temperature takes complex physical factors into account, including: the material of the object's surface layer; the type, number of layers and thickness of the materials below the surface layer; and the heat conduction of each material layer to the boundary. The partial terrain scene model is shown in fig. 4.
Step 2, multiband imaging simulation based on sensor parameters:
In this step, the sensor effect mainly applies selective changes to certain parameters in the image display process, such as noise.
The sensor parameters include sensor optical parameters, sensor detection parameters and sensor electrical parameters.
Preferably, in step 2, the optical parameters of the sensor include aperture shape, aperture size, aspect ratio and focal length; the sensor detection parameters comprise residence time, equivalent temperature difference, detector size resolution and blind pixel characteristics; the sensor electrical parameters include manual gain and manual level.
The imaging simulation can designate any band to be simulated within the supported simulation band range, uses an internally integrated efficient algorithm and material characteristics to provide simultaneous simulation of multiple bands, allows simulation parameters to be modified while the simulation is running, and outputs gray images with a pixel bit width of no less than 16 bits.
Preferably, in step 2, the simulation band range covers 0.35 μm to 16 μm, and covers visible light, low-light night vision, mid-wave infrared and long-wave infrared.
In this embodiment, fig. 5 shows a scene of simulation in visible light and long-wave infrared at different times. Simultaneously, the infrared thermal characteristic synthesis of targets, background scenes, cloud and fog, atmospheric transmission and the like can be performed.
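The requirement that simulated gray images carry a pixel bit width of at least 16 bits amounts to quantizing the floating-point radiance map onto the full 16-bit range. A minimal sketch, assuming a simple linear mapping (the helper name and default scaling are illustrative, not the patent's implementation):

```python
import numpy as np

def to_uint16_gray(radiance, lo=None, hi=None):
    """Quantize a floating-point radiance map to a 16-bit gray image.

    lo/hi default to the image min/max; values are linearly mapped onto
    the full 0..65535 range so the stated >=16-bit pixel width is used.
    """
    r = np.asarray(radiance, dtype=np.float64)
    lo = r.min() if lo is None else lo
    hi = r.max() if hi is None else hi
    scaled = np.clip((r - lo) / max(hi - lo, 1e-12), 0.0, 1.0)
    return np.round(scaled * 65535).astype(np.uint16)

# A synthetic radiance ramp quantized to 16-bit gray levels.
img = to_uint16_gray(np.linspace(0.0, 1.0, 256).reshape(16, 16))
```

Fixing `lo`/`hi` across frames (rather than per-frame min/max) would keep gray levels comparable over a video sequence.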
Step 3, collecting video and image data sets:
step 301, a scene planning file corresponding to the environment scene obtained by modeling in step 102 is opened, the file data format of the scene planning file is XML, the scene planning file comprises a terrain scene path, a target model path, a material file path and various module parameters, and software is operated after the scene planning file is loaded to obtain a visual simulation scene.
Preferably, in step 301, the module parameters include longitude and latitude height, target azimuth and attitude angle, sensor azimuth and parameters, and atmospheric environment parameters.
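Step 301 loads an XML scene planning file carrying the terrain path, model paths and module parameters. The patent does not disclose the actual schema, so the element and attribute names below are illustrative assumptions only; the sketch shows how such a file could be parsed with the standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical scene-planning file: all tag and attribute names are
# illustrative stand-ins for the undisclosed real schema.
SCENE_XML = """\
<scene>
  <terrain path="terrain/gobi_launch_site.ive"/>
  <target model="models/tel_vehicle.flt" material="materials/tel.mtl"/>
  <modules>
    <geo lon="110.25" lat="34.60" alt="420.0"/>
    <sensor band="lwir" azimuth="45.0" elevation="-10.0"/>
    <atmosphere visibility_km="12.0" humidity="0.35"/>
  </modules>
</scene>
"""

root = ET.fromstring(SCENE_XML)
terrain_path = root.find("terrain").get("path")      # terrain scene path
band = root.find("modules/sensor").get("band")       # sensor band module
```

The loader would hand these paths and parameters to the visual simulation engine to assemble the scene.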
Step 302, in the visual simulation scene, selection, addition, deletion, position and attitude adjustment, and scale adjustment are performed on the time-sensitive target models constructed in step 101; editing can be done visually to obtain the scene view.
The step 302 specifically includes:
accessing the parameters to be set, including time, position, atmospheric environment, speed and attitude; defining the position, attitude and track of the sensor viewpoint; defining the variable scale operated by mouse or keyboard; and realizing the selection, addition, position and attitude adjustment, and deletion of the terrain to be loaded and of the target models in the scene.
The visual simulation scene comprises a scene processing module, wherein the scene processing module integrates the functions of operating the view point of the sensor, recording and loading the track of the sensor, loading the track of the target, storing the video image and the like, and realizes the view point control and the simulation data storage of the simulation scene.
During simulation of the visual simulation scene, the sensor viewpoint position is modified manually according to the experimental design. The position can be set in the different coordinate systems of the simulation scene: both the geodetic coordinate system and the space rectangular coordinate system are supported, and the coordinate values can also be updated dynamically in real time from the sensor track information loaded by the software.
Meanwhile, the current position and attitude of the sensor can be set as the initial simulation viewpoint by clicking the viewpoint-setting button; after the sensor has moved, clicking the viewpoint-return button restores the initial viewpoint. Clicking the "-" or "+" buttons manually adjusts the position and attitude of the sensor viewpoint, and the "number step" and "angle step" in the interface represent the increment or decrement used for these adjustments.
Step 303, performing scene recording and previewing under the conditions of a multi-dimensional view angle and a multi-scale view field by using the multi-band imaging simulation method based on the sensor parameters in step 2, and completing video and image dataset acquisition.
Step 4, semi-automatic data labeling for image saliency detection:
Coarse labeling is performed on the video and images acquired in step 3 using the KCF algorithm, and fine labeling is then performed on the coarsely labeled video and images using the FT algorithm, thereby realizing semi-automatic data labeling based on image saliency detection.
In this step, when the KCF algorithm labels time-series images, the first image is labeled manually to select the target frame and determine the target type. Because the algorithm expands negative samples with a circulant matrix, the size of the target frames produced by subsequent labeling remains essentially unchanged once the initial circulant matrix is fixed. The target frames obtained by KCF coarse labeling are then finely labeled a second time using image saliency, and the two labeling results are finally combined to solve for the target position, so that semi-automatic labeling achieves complete preparation of the target data set.
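The refinement step, combining a coarse tracker box with a saliency map, can be sketched as follows. This is a simplified stand-in: the thresholding rule (a fraction of the local maximum) is an illustrative choice, not the patent's exact segmentation, and it assumes the target is actually salient inside the coarse box.

```python
import numpy as np

def refine_box(saliency, coarse_box, thresh_ratio=0.5):
    """Refine a coarse tracker box using a saliency map.

    saliency: 2-D float map; coarse_box: (x, y, w, h).
    The map is thresholded inside the coarse box and the tight
    bounding box of the resulting binary mask is returned.
    """
    x, y, w, h = coarse_box
    roi = saliency[y:y + h, x:x + w]
    mask = roi >= thresh_ratio * roi.max()   # binary map by thresholding
    ys, xs = np.nonzero(mask)
    return (x + xs.min(), y + ys.min(),
            xs.max() - xs.min() + 1, ys.max() - ys.min() + 1)

# Bright 4x6 target inside a loose coarse box from the tracker.
sal = np.zeros((32, 32))
sal[12:16, 10:16] = 1.0
fine = refine_box(sal, (8, 10, 12, 10))
```

The refined box snaps to the salient blob, which is what lets the pipeline correct the fixed-size KCF box when the target's scale changes.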
In step 4, the KCF algorithm uses a circulant matrix to construct the negative sample library. Let the target sample framed in the first frame be the one-dimensional vector $x = [x_1, x_2, \ldots, x_n]^T$; the matrix of all cyclically shifted target samples is then the circulant matrix

$$X = C(x) = [x, Ax, A^{2}x, \ldots, A^{n-1}x]^{T}$$

wherein:
$X$ represents the circulant matrix;
$A$ represents the cyclic shift (permutation) matrix;
$T$ represents the matrix transpose;
$x$ represents the target sample;
$n$ represents the number of target samples.
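The circulant sample expansion is easy to visualize in one dimension. The sketch below (1-D samples only, not the 2-D image patches used in practice) builds all cyclic shifts of a base sample and checks the key property KCF exploits: a circulant matrix is diagonalized by the DFT, so its eigenvalues are the DFT coefficients of the generating vector.

```python
import numpy as np

def circulant_samples(x):
    """All cyclic shifts of the base sample x, one per row,
    i.e. the virtual negative-sample set X = C(x)."""
    n = x.size
    return np.stack([np.roll(x, i) for i in range(n)])

x = np.array([1.0, 2.0, 3.0, 4.0])
X = circulant_samples(x)

# Eigenvalues of a circulant matrix equal the DFT of its generator
# (up to ordering), which is why KCF can work entirely in the
# Fourier domain instead of forming X explicitly.
eig = np.linalg.eigvals(X)
```

No shifted sample is ever stored in the real algorithm; the diagonalization replaces every matrix product with element-wise operations on FFTs.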
For the full sample set obtained from the circulant matrix, the corresponding ridge regression function can be expressed as:

$$f(x_i) = w^{T} x_i$$

wherein:
$x_i$ represents the $i$-th target sample;
$f(x_i)$ represents the ridge regression function;
$w$ represents the weight coefficient vector.
The classifier is trained for the ridge regression objective by minimizing the squared error between the regression output and the labels:

$$\min_{w} \sum_{i=1}^{n} \left( f(x_i) - y_i \right)^2 + \lambda \lVert w \rVert^2$$

whose closed-form solution is

$$w = \left( X^{H} X + \lambda I \right)^{-1} X^{H} y$$

where the circulant matrix is diagonalized by the Fourier transform:

$$X = F \,\mathrm{diag}(\hat{x})\, F^{H}$$

wherein:
$H$ represents the conjugate transpose of a matrix;
$i$ represents the $i$-th sample;
$n$ represents the total number of samples;
$\lambda$ represents the regularization parameter;
$\lVert \cdot \rVert$ represents the 2-norm;
$y$ represents the label vector of the cyclically shifted samples;
$\mathrm{diag}(\hat{x})$ represents the diagonal matrix formed from $\hat{x}$;
$\hat{x}$ represents the Fourier transformed form of the target sample $x$;
$F$ represents the Fourier transform matrix.
The weight coefficient $w$ obtained from the trained classifier has the simplified Fourier-domain form:

$$\hat{w} = \frac{\hat{x}^{*} \odot \hat{y}}{\hat{x}^{*} \odot \hat{x} + \lambda}$$

wherein:
$\hat{x}^{*}$ represents the complex conjugate of the vector $\hat{x}$;
$\hat{w}$ represents the Fourier transformed form of $w$;
$\odot$ represents the element-wise (dot product) operation.

In this embodiment, computing the weight coefficient $w$ by element-wise products reduces the complexity and the computational load of the algorithm and increases its tracking speed.
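The Fourier-domain shortcut for the ridge regression weights can be verified numerically. A minimal sketch, assuming 1-D samples and numpy's FFT convention, with the circulant sample matrix built in the shift orientation for which the conjugated formula holds:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = rng.standard_normal(n)          # base target sample
y = rng.standard_normal(n)          # regression labels
lam = 0.1                           # regularization parameter

# Direct ridge regression on the explicit circulant sample matrix
# (entries X[i, j] = x[(i - j) mod n]).
X = np.array([[x[(i - j) % n] for j in range(n)] for i in range(n)])
w_direct = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

# Fourier-domain solution: w_hat = conj(x_hat) * y_hat / (|x_hat|^2 + lam),
# O(n log n) instead of the O(n^3) matrix solve above.
xh, yh = np.fft.fft(x), np.fft.fft(y)
w_fft = np.real(np.fft.ifft(np.conj(xh) * yh / (np.abs(xh) ** 2 + lam)))
```

Both paths produce the same weight vector, which is the speed-up the embodiment attributes to the element-wise computation.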
A Gaussian kernel function $\kappa$ is then used to convert the nonlinear problem into a linear problem in a high-dimensional space, and the weight coefficient $w$ is expressed as a linear combination of the mapped samples:

$$w = \sum_{i} \alpha_i \,\varphi(x_i)$$

wherein:
$\alpha$ represents the correlation (dual) coefficient vector of $w$;
$\varphi$ represents the mapping function that maps the target from the nonlinear space to the linear space.
The correlation of samples before and after the mapping can be expressed as:

$$\kappa(x, x') = \varphi^{T}(x)\, \varphi(x')$$

wherein:
$\varphi(x)$ represents the mapped sample;
$\kappa$ represents the Gaussian kernel function.
The kernel function matrix $K$ is expressed as:

$$K_{ij} = \kappa(x_i, x_j)$$

wherein:
$i$ represents the $i$-th sample;
$j$ represents the $j$-th sample.
Substituting the kernel matrix $K$ for the weight coefficient $w$, solving for $w$ becomes solving for its correlation coefficient vector $\alpha$:

$$\alpha = \left( K + \lambda I \right)^{-1} y$$

wherein:
$I$ represents the identity matrix;
$y$ represents the ridge regression label vector.
The correlation coefficient $\alpha$ is obtained in the Fourier domain as:

$$\hat{\alpha} = \frac{\hat{y}}{\hat{k}^{xx} + \lambda}$$

wherein:
$\hat{\cdot}$ represents the Fourier transform;
$\hat{\alpha}$ represents the Fourier transformed form of $\alpha$;
$\hat{y}$ represents the Fourier transformed form of the label vector $y$;
$\hat{k}^{xx}$ represents the Fourier transform of the first row of the (circulant) kernel matrix $K$, arranged as a vector.

In this embodiment, the above formula turns the problem of solving the weight coefficient $w$ into solving the optimal correlation coefficient $\alpha$ in the Fourier domain, which greatly simplifies the algorithm.
Assume that in the current frame image, $z$ is the predicted candidate position of the current-frame target $x$. The response function $f(z)$ of the trained classifier is expressed as:

$$f(z) = \left( K^{z} \right)^{T} \alpha$$

The correlation between the current-frame target $x$ and the predicted candidate position is solved, and the tracking result is the position where the correlation is maximal; using the inverse discrete Fourier transform, the response is obtained as:

$$f(z) = \mathcal{F}^{-1}\!\left( \hat{k}^{xz} \odot \hat{\alpha} \right)$$

wherein:
$\hat{k}^{xz}$ represents the Fourier transformed form of the kernel correlation vector between $x$ and $z$;
$\hat{\alpha}$ represents the Fourier transformed form of $\alpha$;
$f(z)$ represents the correlation of the current-frame target with the predicted candidate position.
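The kernelized training and response steps can be exercised end-to-end in one dimension. This is a sketch under simplifying assumptions: 1-D signals instead of windowed 2-D image patches, an illustrative `sigma` and label shape, and the common normalized form of the Gaussian kernel correlation.

```python
import numpy as np

def gauss_correlation(x, z, sigma=0.5):
    """Gaussian kernel correlation of x with every cyclic shift of z,
    computed in the Fourier domain (1-D sketch of the KCF kernel)."""
    c = np.real(np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(z)))
    d = (x @ x + z @ z - 2.0 * c) / x.size
    return np.exp(-np.maximum(d, 0.0) / sigma ** 2)

rng = np.random.default_rng(1)
n = 32
x = rng.standard_normal(n)                      # training sample
y = np.exp(-0.5 * (np.arange(n) / 2.0) ** 2)    # label peaked at index 0
lam = 1e-4

# Training: alpha_hat = y_hat / (k_hat_xx + lambda)
k_xx = gauss_correlation(x, x)
alpha_hat = np.fft.fft(y) / (np.fft.fft(k_xx) + lam)

# Response on the training sample itself: f(x) should reproduce the
# label, with its peak at the labeled target position (index 0).
resp = np.real(np.fft.ifft(np.fft.fft(k_xx) * alpha_hat))
```

In tracking, `gauss_correlation(x, z)` with `z` taken from the next frame yields a response map whose peak gives the target's displacement.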
In the step 4, the FT algorithm divides the image into a high-frequency part and a low-frequency part in the frequency domain; the high-frequency part contains detailed information of the background and texture of the image, and the low-frequency part contains overall layout information of the image and outline information of the target.
The image saliency of the FT algorithm is computed as:

$$S(x, y) = \left\lVert I_{\mu} - I_{\omega hc}(x, y) \right\rVert$$

wherein:
$S(x, y)$ represents the image saliency value;
$I_{\mu}$ represents the average feature (mean Lab vector) of the image;
$I_{\omega hc}(x, y)$ represents the Lab color feature of pixel $(x, y)$ after Gaussian filtering.

According to this formula, the Euclidean distance between every pixel and the image mean is computed and normalized to obtain the final saliency map.
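The FT saliency computation can be sketched on a single channel. Note the simplifications: the original FT algorithm works on the three Lab channels, while this grayscale version keeps only the structure (mild Gaussian-style blur, distance from the image mean, normalization); the 5-tap binomial filter is an illustrative stand-in for the Gaussian.

```python
import numpy as np

def ft_saliency(gray):
    """FT-style saliency on a grayscale image (single-channel sketch).

    Saliency is the squared distance between the image mean and a
    slightly blurred image, normalized to [0, 1].
    """
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
    k /= k.sum()
    # Separable 5-tap binomial (approximately Gaussian) blur.
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="same"), 1, gray)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    s = (blurred - gray.mean()) ** 2
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

img = np.zeros((40, 40))
img[15:25, 15:25] = 1.0           # bright target on a dark background
sal = ft_saliency(img)
```

Thresholding `sal` then yields the binary map from which the fine labeling box is derived.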
Simulation example:
the effect of the invention is further illustrated by the following simulations:
1. simulation conditions:
To verify the effectiveness of the invention, various time-sensitive targets and environment models were constructed and military time-sensitive target data set preparation was realized. Experimental environment: a notebook computer running Windows 10 with a 2.9 GHz Intel Xeon E5-2667 processor.
2. Simulation experiment:
Compared with the traditional KCF algorithm, the method effectively solves the scale-variation problem during target motion: the fine labeling result after image saliency analysis fits the size and contour of the main body of the target more closely, follows the target's scale changes better, and marks the target position accurately.
According to the KCF tracking algorithm, the first frame image at the time of t=0 is manually marked to give a target frame, and the subsequent frames are automatically marked. The effect of KCF-based labeling is shown in FIG. 6.
Single-target infrared sequence images are labeled with the different labeling methods, and the results are finally presented in the same frame image. Fig. 7 shows representative results for part of the labeled sequence: the first frame is labeled manually, and a labeling result is then collected every 10 frames, including the KCF coarse labeling frame and the fine labeling frame obtained from KCF coarse labeling plus saliency analysis.

Claims (10)

1. The preparation method of the multimode simulation data set based on the target mechanism model is characterized by comprising the following steps of:
step 1, constructing a target environment scene based on a time-sensitive target model:
step 101, constructing a time-sensitive target model;
step 102, modeling an environment scene:
after the time-sensitive target model is obtained in the step 101, generating a corresponding terrain data model by using original data, and forming a corresponding database, wherein the original data comprises elevation data, vector data and an image; performing complete modeling on an environmental thermal system;
step 2, multiband imaging simulation based on sensor parameters:
the sensor parameters comprise sensor optical parameters, sensor detection parameters and sensor electrical parameters;
the imaging simulation can designate any band to be simulated within the supported simulation band range, supports simultaneous simulation of multiple bands, allows simulation parameters to be modified while the simulation is running, and outputs gray images with a pixel bit width of no less than 16 bits;
step 3, collecting video and image data sets:
step 301, opening the scene planning file corresponding to the environment scene obtained by the modeling in step 102, wherein the data format of the scene planning file is XML and the scene planning file comprises a terrain scene path, a target model path, a material file path and the parameters of each module; after the scene planning file is loaded, running the software to obtain a visual simulation scene;
step 302, in the visual simulation scene, performing selection, addition, deletion, position and attitude adjustment, and scale adjustment on the time-sensitive target model constructed in step 101, and editing the scene visually in a what-you-see-is-what-you-get manner;
step 303, using the multiband imaging simulation based on sensor parameters of step 2, recording and previewing the scene under multi-dimensional viewing angles and multi-scale fields of view, thereby completing the acquisition of the video and image data sets;
step 4, semi-automatic data labeling for image saliency detection:
and (3) performing coarse labeling on the video and the image acquired in the step (3) by adopting a KCF algorithm, and performing accurate labeling on the video and the image subjected to coarse labeling by adopting an FT algorithm, so as to realize semi-automatic data labeling for detecting the image significance.
2. The method for preparing a multi-mode simulation data set based on a target mechanism model as claimed in claim 1, wherein the specific process of step 101 comprises:
step 10101, establishing a model library by analyzing various publicly available information on active weapon equipment; then carrying out dynamic hot-zone simulation of the target, and setting the heat-related parameters by determining the relative positions of the heat source inside the target body and on the target body surface, so as to simulate the heat source inside the target body;
step 10102, calculating the infrared radiation of the target by using the following formula to obtain the time-sensitive target model:

$$L = \tau_a \left[ \varepsilon L_{bb} + (1 - \varepsilon) L_{env} \right] + L_{path}$$

wherein:
$L$ represents the total radiance reaching the observation point;
$L_{bb}$ represents the blackbody radiance at the same temperature as the scene object;
$L_{env}$ represents the ambient radiance reflected by the object surface;
$\varepsilon$ represents the emissivity of the object material;
$\tau_a$ represents the atmospheric transmittance;
$L_{path}$ represents the atmospheric path radiance.
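As an illustrative sketch, the radiative-transfer relation of claim 2 can be evaluated directly. The grouping of terms follows the standard apparent-radiance equation reconstructed above, and all numeric inputs are assumed values:

```python
def apparent_radiance(eps, L_bb, L_env, tau, L_path):
    """Total radiance reaching the observer:
    L = tau * (eps * L_bb + (1 - eps) * L_env) + L_path

    eps    : surface emissivity of the target material
    L_bb   : blackbody radiance at the object's temperature
    L_env  : ambient radiance reflected by the object surface
    tau    : atmospheric transmittance along the path
    L_path : atmospheric path radiance added along the path
    """
    return tau * (eps * L_bb + (1 - eps) * L_env) + L_path
```

For a perfect emitter (eps = 1) in a transparent atmosphere (tau = 1, L_path = 0), the observed radiance reduces to the blackbody term, which is a quick sanity check on the formula.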
3. The method for preparing a multi-mode simulation data set based on a target mechanism model according to claim 2, wherein in step 10101, the heat-related parameters include a material layer number, a material parameter, and a thickness.
4. The method for preparing a multi-mode simulation dataset based on a target mechanism model as claimed in claim 2, wherein in step 10102, the infrared radiation comprises the thermal radiation emitted by the object surface itself, the reflected radiation of incident solar thermal radiation and of thermal radiation scattered by the ambient background, and the thermal radiation emitted by the atmosphere along the upward path.
5. The method for preparing a multi-mode simulation data set based on a target mechanism model according to claim 1, wherein in step 102, the complete modeling of the environmental thermal system includes the air temperature parameters, the irradiation and elevation angles of the sun, the moon and the stars at specific moments of a specific scene under the geographic conditions of a specified date and longitude and latitude, and the thermal zone, thermal boundary and thermal balance of the target, so that the object temperature can be updated in real time.
6. The method for preparing a multimode simulation data set based on a target mechanism model according to claim 1, wherein in the step 2, the optical parameters of the sensor include aperture shape, aperture size, aspect ratio and focal length; the sensor detection parameters comprise residence time, equivalent temperature difference, detector size resolution and blind pixel characteristics; the sensor electrical parameters include manual gain and manual level.
7. The method for preparing a multimode simulation data set based on a target mechanism model as set forth in claim 1, wherein in the step 2, the simulation band range covers 0.35 μm to 16 μm.
8. The method for preparing a multi-mode simulation data set based on a target mechanism model according to claim 1, wherein in step 301, the parameters of each module include longitude and latitude height, target azimuth and attitude angle, sensor azimuth and parameter and atmospheric environment parameter.
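The XML scene planning file of step 301 can be sketched and parsed as follows. The element and attribute names (`terrain`, `target`, `sensor`, `atmosphere`, and their fields) are assumptions chosen to mirror the module parameters listed in claim 8, not the actual file schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical scene planning file covering the parameters of claim 8:
# terrain path, target model path and pose, sensor setup, atmosphere.
SCENE_XML = """<scene>
  <terrain path="terrain/plateau.ive"/>
  <target model="models/vehicle.flt" lon="108.9" lat="34.2" alt="450.0"
          yaw="90.0" pitch="0.0" roll="0.0"/>
  <sensor band="mwir" fov="3.0" focal="120.0"/>
  <atmosphere visibility="23.0" humidity="0.4"/>
</scene>"""

def load_scene(xml_text):
    """Parse a scene planning file into a plain dictionary."""
    root = ET.fromstring(xml_text)
    target = root.find("target")
    return {
        "terrain": root.find("terrain").get("path"),
        "target_model": target.get("model"),
        "position": tuple(float(target.get(k)) for k in ("lon", "lat", "alt")),
        "attitude": tuple(float(target.get(k)) for k in ("yaw", "pitch", "roll")),
        "band": root.find("sensor").get("band"),
    }
```

Keeping the scene description in one XML file is what allows the simulation software of step 301 to rebuild the whole visual scene from a single load operation.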
9. The method for preparing a multimode simulation data set based on a target mechanism model as claimed in claim 1, wherein in step 4, the KCF algorithm uses a circulant matrix to construct the negative sample library; the target sample framed in the first frame is set as the one-dimensional vector data $x = [x_1, x_2, \ldots, x_n]^T$, and the matrix of all target samples is defined as the circulant matrix $X = C(x)$, which can be expressed as:

$$X = C(x) = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \\ x_n & x_1 & \cdots & x_{n-1} \\ \vdots & \vdots & \ddots & \vdots \\ x_2 & x_3 & \cdots & x_1 \end{bmatrix}$$

wherein:
$X$ represents the circulant matrix;
$T$ represents the transpose;
$x$ represents the target sample;
$n$ represents the number of target samples;

for the full sample set $\{(x_i, y_i)\}_{i=1}^{n}$ obtained from the circulant matrix, the corresponding ridge regression objective function can be expressed as $f(x_i) = w^T x_i$; a training classifier is set for the ridge regression objective function, the goal of the training classifier being to minimize the squared error between the objective function and the labels:

$$\min_{w} \sum_{i=1}^{n} \bigl(f(x_i) - y_i\bigr)^2 + \lambda \lVert w \rVert^2$$

$$w = \left(X^H X + \lambda I\right)^{-1} X^H y$$

$$X = F \,\mathrm{diag}(\hat{x})\, F^H$$

wherein:
$H$ represents the conjugate transpose of a matrix;
$i$ represents the $i$-th sample;
$n$ represents the total number of samples;
$\lambda$ represents the regularization parameter;
$\lVert \cdot \rVert$ represents the 2-norm;
$w$ represents the weight coefficient;
$y$ represents the regression labels of the circulant samples;
$\mathrm{diag}(\hat{x})$ represents the diagonal matrix formed from $\hat{x}$, the Fourier-transformed form of the target sample $x$;
$F$ represents the Fourier transform matrix;

the weight coefficient $w$ obtained from the training classifier has the simplified Fourier-domain form:

$$\hat{w} = \frac{\hat{x}^{*} \odot \hat{y}}{\hat{x}^{*} \odot \hat{x} + \lambda}$$

wherein:
$\hat{x}^{*}$ represents the conjugate of the vector $\hat{x}$;
$\hat{w}$ represents the Fourier-transformed form of $w$;
$\odot$ represents the element-wise (dot-product) correlation operation;

then the Gaussian kernel function $k$ is used to convert the nonlinear problem into a linear problem in a high-dimensional space, and the weight coefficient $w$ is expressed as a linear combination:

$$w = \sum_{i} \alpha_i \,\varphi(x_i)$$

wherein:
$\alpha$ represents the correlation coefficient vector of $w$;
$\varphi(\cdot)$ represents the mapping function that maps the target from nonlinear to linear;

the sample correlation before and after the mapping can be expressed as:

$$k(x, x') = \varphi(x)^T \varphi(x')$$

wherein:
$\varphi(x)$ represents the mapped sample;
$k$ represents the Gaussian kernel function;

the kernel function matrix $K$ is expressed as:

$$K_{ij} = k(x_i, x_j)$$

wherein:
$i$ represents the $i$-th sample;
$j$ represents the $j$-th sample;

substituting the kernel function matrix $K$ for the weight coefficient $w$ and solving for the correlation coefficient $\alpha$ of $w$:

$$\alpha = \left(K + \lambda I\right)^{-1} Y$$

wherein:
$I$ represents the identity matrix;
$Y$ represents the regression label vector of the ridge regression objective;

the correlation coefficient $\alpha$ in the Fourier domain is:

$$\hat{\alpha} = \frac{\hat{Y}}{\hat{k}^{xx} + \lambda}$$

wherein:
$\mathcal{F}$ represents the Fourier transform;
$\hat{\alpha}$ represents the Fourier-transformed form of $\alpha$;
$\hat{Y}$ represents the Fourier-transformed form of $Y$;
$\hat{k}^{xx}$ represents the Fourier-transformed form of $k^{xx}$, the vector formed by the corresponding row elements of the kernel function matrix $K$;

it is assumed that, in the image of the current frame, $z$ is the predicted candidate position of the current frame target $x$; then the response function $f(z)$ of the trained classifier is expressed as:

$$f(z) = \sum_{i} \alpha_i \, k(z, x_i)$$

solving the correlation between the current frame target $x$ and the predicted candidate position, the tracking result is the position where the correlation is maximal, and by using the inverse discrete Fourier transform the following formula can be obtained:

$$f(z) = \mathcal{F}^{-1}\bigl(\hat{k}^{xz} \odot \hat{\alpha}\bigr)$$

wherein:
$\hat{k}^{xz}$ represents the Fourier-transformed form of $k^{xz}$, the kernel correlation between $x$ and $z$;
$\hat{\alpha}$ represents the Fourier-transformed form of $\alpha$;
$f(z)$ represents the correlation of the current frame target with the predicted candidate position.
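As an illustrative sketch (not part of the claims), the kernelized training and detection steps of claim 9 can be written directly in the Fourier domain for a one-dimensional signal. The kernel width, regularization value, and synthetic test signal below are assumptions for demonstration:

```python
import numpy as np

def gaussian_correlation(x, z, sigma=0.2):
    """Gaussian kernel correlation k^{xz} over all cyclic shifts,
    evaluated efficiently via the FFT (circular cross-correlation)."""
    xf, zf = np.fft.fft(x), np.fft.fft(z)
    cross = np.real(np.fft.ifft(np.conj(xf) * zf))   # <x, shift_t(z)> for all t
    d = np.dot(x, x) + np.dot(z, z) - 2.0 * cross    # squared distance per shift
    return np.exp(-np.maximum(d, 0.0) / (sigma ** 2 * x.size))

def train(x, y, lam=1e-4, sigma=0.2):
    """alpha_hat = Y_hat / (k_hat^{xx} + lambda), as in the claim."""
    kxx = gaussian_correlation(x, x, sigma)
    return np.fft.fft(y) / (np.fft.fft(kxx) + lam)

def detect(alpha_hat, x, z, sigma=0.2):
    """Response f(z) = F^{-1}(k_hat^{xz} . alpha_hat); the index of the
    peak is the estimated cyclic shift of the target."""
    kxz = gaussian_correlation(x, z, sigma)
    return np.real(np.fft.ifft(np.fft.fft(kxz) * alpha_hat))
```

Because every cyclic shift of the base sample is handled at once by the FFT, training and detection cost O(n log n) instead of the O(n^3) of a dense ridge regression, which is the practical point of the circulant-matrix construction.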
10. The method for preparing a multimode simulation data set based on a target mechanism model as set forth in claim 1, wherein in step 4, the FT algorithm divides the image into a high frequency part and a low frequency part in a frequency domain; the high-frequency part comprises detail information of the background and the texture of the image, and the low-frequency part comprises overall layout information of the image and outline information of the target;
the image saliency calculation formula of the FT algorithm is expressed as:

$$S(x, y) = \left\lVert I_{\mu} - I_{\omega hc}(x, y) \right\rVert$$

wherein:
$S(x, y)$ represents the image saliency value at pixel $(x, y)$;
$I_{\mu}$ represents the average feature value of the image;
$I_{\omega hc}(x, y)$ represents the Lab color feature of the pixel point $(x, y)$ after Gaussian filtering;
and calculating Euclidean distances between all pixel points in the image and the average value of the image according to the image saliency calculation formula, and normalizing the Euclidean distances to obtain a final saliency map.
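The FT computation of claim 10 can be sketched for a single-channel image as follows. The original algorithm operates on Gaussian-filtered Lab color features, so the grayscale input and the blur width here are simplifying assumptions:

```python
import numpy as np

def _blur1d(a, k):
    """1-D convolution with reflect padding, output same length as input."""
    r = (len(k) - 1) // 2
    return np.convolve(np.pad(a, r, mode="reflect"), k, mode="valid")

def ft_saliency(img, sigma=1.0):
    """FT-style saliency for a single-channel image: the distance between
    the image mean and the Gaussian-blurred image, normalized to [0, 1]."""
    r = max(1, int(3 * sigma))
    t = np.arange(-r, r + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    k /= k.sum()
    blurred = np.apply_along_axis(_blur1d, 0, img, k)   # blur columns
    blurred = np.apply_along_axis(_blur1d, 1, blurred, k)  # blur rows
    s = np.abs(img.mean() - blurred)   # |I_mu - I_whc|; for one channel the
    rng = s.max() - s.min()            # Euclidean norm reduces to abs()
    return (s - s.min()) / rng if rng > 0 else s
```

The Gaussian blur discards high-frequency texture while the subtraction from the mean discards the uniform background, leaving the target contour, which is exactly why the map is usable as a fine-annotation mask.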
CN202310101478.2A 2023-02-13 2023-02-13 Multi-mode simulation data set preparation method based on target mechanism model Active CN115841048B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310101478.2A CN115841048B (en) 2023-02-13 2023-02-13 Multi-mode simulation data set preparation method based on target mechanism model


Publications (2)

Publication Number Publication Date
CN115841048A CN115841048A (en) 2023-03-24
CN115841048B true CN115841048B (en) 2023-05-12

Family

ID=85579612


Country Status (1)

Country Link
CN (1) CN115841048B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110162812A (en) * 2018-05-24 2019-08-23 北京机电工程研究所 Target sample generation method based on infrared simulation

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7768527B2 (en) * 2006-05-31 2010-08-03 Beihang University Hardware-in-the-loop simulation system and method for computer vision
CN104200237B (en) * 2014-08-22 2019-01-11 浙江生辉照明有限公司 One kind being based on the High-Speed Automatic multi-object tracking method of coring correlation filtering
CN106772682B (en) * 2016-12-31 2017-10-31 华中科技大学 A kind of infrared radiation spectrum Simulation Analysis method of moving-target
US11334762B1 (en) * 2017-09-07 2022-05-17 Aurora Operations, Inc. Method for image analysis
US11144065B2 (en) * 2018-03-20 2021-10-12 Phantom AI, Inc. Data augmentation using computer simulated objects for autonomous control systems
US11257272B2 (en) * 2019-04-25 2022-02-22 Lucid VR, Inc. Generating synthetic image data for machine learning
CN111223191A (en) * 2020-01-02 2020-06-02 中国航空工业集团公司西安航空计算技术研究所 Large-scale scene infrared imaging real-time simulation method for airborne enhanced synthetic vision system
CN111368935B (en) * 2020-03-17 2023-06-09 北京航天自动控制研究所 SAR time-sensitive target sample amplification method based on generation countermeasure network
CN112150575B (en) * 2020-10-30 2023-09-01 深圳市优必选科技股份有限公司 Scene data acquisition method, model training method and device and computer equipment
CN112613397B (en) * 2020-12-21 2022-11-29 中国人民解放军战略支援部队航天工程大学 Method for constructing target recognition training sample set of multi-view optical satellite remote sensing image
CN113362341B (en) * 2021-06-10 2024-02-27 中国人民解放军火箭军工程大学 Air-ground infrared target tracking data set labeling method based on super-pixel structure constraint
CN115392009A (en) * 2022-08-16 2022-11-25 哈尔滨新光光电科技股份有限公司 Full-function complex scene generation software architecture
CN115507959A (en) * 2022-10-19 2022-12-23 电子科技大学 Infrared radiation characteristic analysis method for target detection
CN115661251A (en) * 2022-11-08 2023-01-31 中国科学院国家空间科学中心 Imaging simulation-based space target identification sample generation system and method




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant