CN108537278B - A kind of Multi-source Information Fusion single goal location determining method and system - Google Patents
A kind of Multi-source Information Fusion single goal location determining method and system Download PDFInfo
- Publication number
- CN108537278B CN108537278B CN201810316946.7A CN201810316946A CN108537278B CN 108537278 B CN108537278 B CN 108537278B CN 201810316946 A CN201810316946 A CN 201810316946A CN 108537278 B CN108537278 B CN 108537278B
- Authority
- CN
- China
- Prior art keywords
- target position
- moment
- distance
- image
- comparison result
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a kind of Multi-source Information Fusion single goal location determining method and systems, which comprises determines infrared image, television image, the current time alternative target position of representation of laser facula and the last moment alternative target position of multiple sensor acquisitions;Respectively by the distance between the distance between current time alternative target position of calculating, last moment alternative target position and the first set distance threshold value comparison, and judge whether determining the first comparison result and the second comparison result meet information blending constraint, if then determining last moment target position and current target position, and the distance of the two is greater than the second set distance threshold value, current target position is determined as monocular cursor position at this time, if the image of the subsequent time of multiple sensor acquisitions is otherwise obtained, until determining monocular cursor position.Method or system provided by the invention can be improved timeliness, accuracy and the automatization level that single target position determines under complex environment.
Description
Technical field
The present invention relates to targeting techniques field, in particular to a kind of Multi-source Information Fusion single goal location determining method
And system.
Background technique
During single goal search, capture and tracking under complex environment, target position determination is that the key of core is asked
Topic.Target position under complex environment determines, generally includes artificial and automation two ways, wherein the target position of automation
It sets the target signature being applied in determining method and relates generally to Infrared Image Features, television image feature, laser reflection hot spot spy
Sign etc..Based on infrared, television image feature correlation tracking method, using template matching principle, to two images (figure and ginseng in real time
Examine figure) similarity be compared, determine target position;Based on the location determining method of representation of laser facula center of gravity detection, first
It is the gray level image with 256 grades by laser image processing, then finds out laser facula center of gravity, the center of gravity acquired is target position
It sets.But in the above technical solution based on infrared, television image feature correlation tracking method, it is desirable that target is big target, spy
Sign is obvious;Based on representation of laser facula center of gravity detection location determining method, then require representation of laser facula distribution it is relatively uniform,
Image symmetrical characteristic is good, and the light interference of target is weak etc..
Generally speaking, the different characteristic of target is applied in prior art, and is used not in characteristic extraction procedure
Same feature extraction algorithm, but target local environment, target signature etc. are for more special requirement.And in complicated ring
Under the conditions of border, such as confrontation on the battlefield, congestion road conditions, strong light interference be almost difficult to avoid that in the case where, using current target position
The validity for setting determining method definitive result is difficult to ensure, is then highly dependent on manual intervention or artificial under such a condition
It judges.Therefore, the shortcomings that the prior art concentrated reflection be target position definitive result accuracy be difficult to ensure, the degree of automation
The problems such as low.
Summary of the invention
The object of the present invention is to provide a kind of Multi-source Information Fusion single goal location determining method and systems, can be improved
Timeliness, accuracy and the automatization level that single target position determines under complex environment.
To achieve the above object, the present invention provides following schemes:
A kind of Multi-source Information Fusion single goal location determining method, the Multi-source Information Fusion single goal location determining method
Include:
The last moment of multiple sensor acquisitions and the different types of image at current time are obtained, and to all figures
As carrying out image analysis, the current time alternative target position and last moment alternative target position of each type described image are determined
It sets;Described image includes infrared image, television image, representation of laser facula;
It calculates separately the distance between each current time alternative target position and each last moment is standby
Select the distance between target position;
Each distance between last moment alternative target position is compared with the first set distance threshold value respectively
Compared with determining the first comparison result;
Each distance between current time alternative target position is compared with the first set distance threshold value respectively
Compared with determining the second comparison result;
Judge whether first comparison result and second comparison result meet information blending constraint, obtains
First judging result;
If first judging result indicates that first comparison result or second comparison result do not meet the letter
Blending constraint is ceased, then obtains the different types of image of the subsequent time of multiple sensor acquisitions, and is returned to all institutes
It states image and carries out image analysis step;
If first judging result indicates that first comparison result and second comparison result meet the letter
Blending constraint is ceased, then according to all last moment alternative target positions, all current time alternative target positions
It sets and the corresponding information blending constraint of first comparison result, the corresponding letter of second comparison result
Blending constraint is ceased, determines last moment target position and current target position;
Judge whether the current target position is greater than the second setting at a distance from the last moment target position
Distance threshold obtains the second judging result;
If second judging result indicate the current target position and the subsequent time target position away from
From the second set distance threshold value is greater than, then the different types of image of the subsequent time of multiple sensor acquisitions is obtained, and
It returns and image analysis step is carried out to all described images;
If second judging result indicate the current target position and the subsequent time target position away from
From the second set distance threshold value is less than or equal to, then the current target position is determined as monocular cursor position.
Optionally, before obtaining the different types of image of last moment and current time of multiple sensor acquisitions,
The Multi-source Information Fusion single goal location determining method further includes obtaining setup parameter;The setup parameter includes upper a period of time
Quarter, current time, Fixed Time Interval, the first set distance threshold value, the second set distance threshold value and information fusion constraint item
Part.
Optionally, if first judging result indicates that first comparison result or second comparison result are not met
The information blending constraint then obtains the different types of image of the subsequent time of multiple sensor acquisitions, and return pair
All described images carry out image analysis step, specifically include:
If it is described that first judging result indicates that first comparison result and second comparison result are not met
Information blending constraint or second comparison result do not meet the information blending constraint, then obtain multiple sensings
The subsequent time and the different types of image at lower lower moment of device acquisition, return and carry out image analysis step to all described images
Suddenly, the last moment and by the subsequent time is replaced, the lower lower moment is replaced into the current time;
If first judging result indicates that first comparison result does not meet the information blending constraint and institute
It states the second comparison result and meets the information blending constraint, then obtain the subsequent time different type of multiple sensor acquisitions
Image, return and image analysis step carried out to all described images, and the current time is replaced into last moment, will
The subsequent time replaces the current time;Wherein, the interval at the last moment and the current time, it is described current when
It carves and interval, the subsequent time and the interval at the lower lower moment of the subsequent time is between the same set time
Every.
Optionally, the second set distance threshold value is the half of the first set distance threshold value.
Optionally, described the distance between each current time alternative target position and each described of calculating separately
The distance between last moment alternative target position, specifically includes:
First distance is calculated according to the following formula;The first distance is the i-th moment alternative target of the infrared image
The distance between the i-th moment alternative target position of position and the television image;The formula are as follows:
Wherein, the i-th moment alternative target position of the infrared imageI-th moment alternative target position of the television image
The second distance is calculated according to the following formula;The second distance is that the i-th moment of the infrared image is alternative
The distance between the i-th moment alternative target position of target position and the representation of laser facula;The formula are as follows:
Wherein, the i-th moment alternative target position of the representation of laser facula
It sets
The third distance is calculated according to the following formula;The third distance is that the i-th moment of the television image is alternative
Formula described in the distance between target position and the i-th moment alternative target position of the representation of laser facula are as follows:
Optionally, the information blending constraint includes: seven constraint conditions, respectivelyAndAndAndAndAndAndAnd
AndAndAndAndAnd AndAndWherein, ε indicates the first set distance threshold value.
Optionally, described according to all last moment alternative target positions, all current time alternative targets
Position and the corresponding information blending constraint of first comparison result, second comparison result are corresponding described
Information blending constraint determines current target position and last moment target position, specifically includes:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
Optionally, the judgment formula of second judging result are as follows:
Wherein,(xq,yq) indicate the current target position;(yq-1,xq-1) indicate the last moment target position;Table
Show the second set distance threshold value.
The present invention also provides a kind of Multi-source Information Fusion single goal position determination system, the Multi-source Information Fusion monocular
Cursor position determines that system includes:
Image collection module, for obtaining the last moment of multiple sensor acquisitions and the different types of figure at current time
Picture, and to all described images carry out image analysis, determine each type described image current time alternative target position and
Last moment alternative target position;Described image includes infrared image, television image, representation of laser facula;
Distance calculation module, for calculating separately the distance between each current time alternative target position and each
The distance between a last moment alternative target position;
First comparison result determining module, for distinguishing each distance between last moment alternative target position
It is compared with the first set distance threshold value, determines the first comparison result;
Second comparison result determining module, for distinguishing each distance between current time alternative target position
It is compared with the first set distance threshold value, determines the second comparison result;
First judging result obtains module, for judging whether first comparison result and second comparison result are equal
Meet information blending constraint, obtains the first judging result;
Last moment target position and current target position determination module, for being indicated when first judging result
When first comparison result and second comparison result meet the information blending constraint, according to it is all it is described on
One moment alternative target position, all current time alternative target positions and first comparison result are corresponding described
The corresponding information blending constraint of information blending constraint, second comparison result, determines last moment target
Position and current target position;
Second judging result obtains module, for judging the current target position and last moment target position
Whether the distance set is greater than the second set distance threshold value, obtains the second judging result;
Subsequent time image collection module, for indicating first comparison result or described when first judging result
Second comparison result, which does not meet the information blending constraint or second judging result, indicates the current time mesh
When cursor position is greater than the second set distance threshold value at a distance from the subsequent time target position, obtains multiple sensors and adopt
The different types of image of the subsequent time of collection, and return and image analysis step is carried out to all described images;
Single goal position determination module, for indicating the current target position and institute when second judging result
When stating the distance of subsequent time target position less than or equal to the second set distance threshold value, by the current target
Position is determined as monocular cursor position.
Optionally, the Multi-source Information Fusion single goal location determining method further includes that setup parameter obtains module;
Setup parameter obtains module, for obtaining setup parameter;The setup parameter include last moment, current time,
Fixed Time Interval, the first set distance threshold value, the second set distance threshold value and information blending constraint.
The specific embodiment provided according to the present invention, the invention discloses following technical effects:
The present invention provides a kind of Multi-source Information Fusion single goal location determining method and systems, which comprises obtains
Last moment and the infrared image at current time, television image, the representation of laser facula of multiple sensor acquisitions are taken, and to all
Image carries out image analysis, determines the current time alternative target position and last moment alternative target position of each type image
It sets;The distance between each current time alternative target position is calculated, is calculated between each last moment alternative target position
Distance;And be compared calculated distance with the first set distance threshold value respectively, determine the first comparison result and the second ratio
Relatively result;Judge whether the first comparison result and the second comparison result meet information blending constraint, if otherwise obtaining more
The image of the subsequent time of a sensor acquisition, and return and image analysis step is carried out to all images;If then according to all
Last moment alternative target position, all current time alternative targets position and the corresponding information fusion of the first comparison result are about
The corresponding information blending constraint of beam condition, the second comparison result, determines last moment target position and current target
Position, and judge whether current target position is greater than the second set distance threshold value at a distance from last moment target position;
If otherwise obtaining the image of the subsequent time of multiple sensor acquisitions, and returns and image analysis step is carried out to all images;If
It is that current target position is determined as monocular cursor position.System and method provided by the invention, the more biographies of integrated use
The target position information that sensor obtains avoids the location information that single-sensor obtains and is easy by environmental condition, target spy
Inaccurate problem caused by the limitation of the factors such as sign, and obtained in monocular cursor position determination process based on adjacent moment twice
Target possible position spacing determines goal end position, improves the stability and accuracy of position definitive result.
In addition, the monocular cursor position that system and method provided by the invention realizes computer assisted automation determines,
Reduce manual intervention degree, improves the timeliness that single target position determines under complex environment.
Detailed description of the invention
It in order to more clearly explain the embodiment of the invention or the technical proposal in the existing technology, below will be to institute in embodiment
Attached drawing to be used is needed to be briefly described, it should be apparent that, the accompanying drawings in the following description is only some implementations of the invention
Example, for those of ordinary skill in the art, without any creative labor, can also be according to these attached drawings
Obtain other attached drawings.
Fig. 1 is the flow diagram of Multi-source Information Fusion of embodiment of the present invention single goal location determining method;
Fig. 2 is the structural schematic diagram of Multi-source Information Fusion of embodiment of the present invention single goal position determination system.
Specific embodiment
Following will be combined with the drawings in the embodiments of the present invention, and technical solution in the embodiment of the present invention carries out clear, complete
Site preparation description, it is clear that described embodiments are only a part of the embodiments of the present invention, instead of all the embodiments.It is based on
Embodiment in the present invention, it is obtained by those of ordinary skill in the art without making creative efforts every other
Embodiment shall fall within the protection scope of the present invention.
The purpose of the present invention is to provide it is a kind of suitable for complex environment, merged obtained based on multisensor it is multiple
Alternative target location information, computer aided calculation monocular cursor position Multi-source Information Fusion single goal location determining method and
System improves single target position determines under complex environment timeliness, accuracy and automatization level.
In order to make the foregoing objectives, features and advantages of the present invention clearer and more comprehensible, with reference to the accompanying drawing and specific real
Applying mode, the present invention is described in further detail.
Main thought of the present invention is distance carrying out position to the multiple alternative target positions obtained based on multisensor
Feature extraction determines that the information blending constraint of corresponding multiple alternative target positions, integrated use multisensor obtain
Target position information determines final monocular cursor position.
Embodiment one
Fig. 1 is the flow diagram of Multi-source Information Fusion of embodiment of the present invention single goal location determining method, such as Fig. 1 institute
Show, Multi-source Information Fusion single goal location determining method provided in an embodiment of the present invention specifically includes following steps:
Step 101: obtaining the last moment of multiple sensor acquisitions and the different types of image at current time, and to institute
There is described image to carry out image analysis, determines that the current time alternative target position of each type described image and last moment are standby
Select target position;Described image includes infrared image, television image, representation of laser facula.
Step 102: calculate separately the distance between each current time alternative target position and it is each it is described on
The distance between one moment alternative target position.The distance between each current time alternative target position is calculated, is counted
Calculate the distance between each last moment alternative target position.
Step 103: by each distance between last moment alternative target position respectively with the first set distance threshold
Value is compared, and determines the first comparison result.
Step 104: by each distance between current time alternative target position respectively with the first set distance threshold
Value is compared, and determines the second comparison result.
Step 105: judging whether first comparison result and second comparison result meet information fusion constraint
Condition obtains the first judging result.
If first judging result indicates that first comparison result or second comparison result do not meet the letter
Blending constraint is ceased, thens follow the steps 106;If first judging result indicates first comparison result and described second
Comparison result meets the information blending constraint, then step executes 107.
Step 106: obtaining the different types of image of the subsequent time of multiple sensor acquisitions, and return step 101.
Specifically: the first situation: if first judging result indicates first comparison result and second ratio
It does not meet the information blending constraint compared with result or second comparison result does not meet the information fusion constraint
Condition, then obtain the subsequent time and the different types of image at lower lower moment of the acquisition of multiple sensors, return step 101, and
The subsequent time is replaced into the last moment, the lower lower moment is replaced into the current time.
Second situation: if first judging result indicates that first comparison result does not meet the information fusion about
Beam condition and second comparison result meets the information blending constraint then obtains lower a period of time of multiple sensors acquisitions
Different types of image, return step 101 are carved, and the current time is replaced into the last moment, by the subsequent time
Replace the current time;Wherein, the interval at the last moment and the current time, the current time with it is described next
The interval at the interval at moment, the subsequent time and the lower lower moment is the same Fixed Time Interval.
Step 107: according to all last moment alternative target positions, all current time alternative target positions
And the corresponding information blending constraint of first comparison result, the corresponding information of second comparison result
Blending constraint determines last moment target position and current target position.
Step 108: judging whether the current target position is greater than at a distance from the last moment target position
Second set distance threshold value, obtains the second judging result.The judgment formula of second judging result are as follows:Wherein, (xq,yq) indicate the current target position;(yq-1,xq -1) indicate the last moment target position;Indicate the second set distance threshold value.
If second judging result indicate the current target position and the subsequent time target position away from
From the second set distance threshold value is greater than, 106 second situation is thened follow the steps;If second judging result indicates institute
It states current target position and is less than or equal to the second set distance threshold at a distance from the subsequent time target position
Value, thens follow the steps 109.
Step 109: the current target position is determined as monocular cursor position.
Before executing step 101, the Multi-source Information Fusion single goal location determining method further includes obtaining setting ginseng
Number;The setup parameter include last moment, current time, Fixed Time Interval, the first set distance threshold value, second setting away from
From threshold value and information blending constraint.The second set distance threshold value be the first set distance threshold value two/
One.
Step 102 specifically includes:
First distance is calculated according to the following formula;The first distance is the i-th moment alternative target of the infrared image
The distance between the i-th moment alternative target position of position and the television image;The formula are as follows:
Wherein, the i-th moment alternative target of the infrared image
PositionI-th moment alternative target position of the television image
The second distance is calculated according to the following formula;The second distance is that the i-th moment of the infrared image is alternative
The distance between the i-th moment alternative target position of target position and the representation of laser facula;The formula are as follows:
Wherein, the i-th moment of the representation of laser facula is alternative
Target position
The third distance is calculated according to the following formula;The third distance is that the i-th moment of the television image is alternative
Formula described in the distance between target position and the i-th moment alternative target position of the representation of laser facula are as follows:
The distance between each current time alternative target position is calculated according to formula (1) (2) (3);According to public affairs
Formula (1) (2) (3) calculates the distance between each last moment alternative target position.
The information blending constraint includes: seven constraint conditions, respectivelyAndAndAndAndAndAndAnd
AndAndAndAndAndAndAndWherein, ε indicates the first set distance threshold value.
Step 107 specifically includes: seven kinds of situations are respectively as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
I.e. according to all last moment alternative target positions and the corresponding information of first comparison result
Blending constraint, selects the i-th moment of the correspondence target position for meeting any of the above-described kind of situation, and by this i-th moment target position
Set determining last moment target position.
Melted according to all current time alternative target positions and the corresponding information of second comparison result
Constraint condition is closed, selects the i-th moment of the correspondence target position for meeting any of the above-described kind of situation, and by this i-th moment target position
Determine current target position.
Embodiment two
Single goal position determination system under a kind of complex environment that the embodiment of the present invention proposes, the system is using the present invention
The Multi-source Information Fusion single goal location determining method that embodiment proposes.System composition includes parameter setting subsystem, target
Alternate location information receiving subsystem, target position determine computing subsystem, process control subsystem.Wherein, parameter setting
System be used to determine the initial time of system, Facility location time interval (Fixed Time Interval), distance threshold (the first setting away from
From threshold value, the second set distance threshold value);Target alternative location information receiving subsystem is based respectively on infrared image spy for receiving
The target alternative location information that sign, television image feature, laser reflection beam pattern obtain;Target position determines computing subsystem
For calculate target possible position and final monocular cursor position;Process control subsystem is used for the stream of control system algorithm operation
Cheng Shunxu.
A kind of Multi-source Information Fusion single goal location determining method that the embodiment of the present invention proposes, specifically includes following step
Suddenly.
Step 1: determining initial time t0, Facility location time interval Δ t.
Step 2: in moment t0Reception is based respectively on Infrared Image Features, television image feature, laser reflection beam pattern
Obtained target alternative location information, is denoted as
Step 3: calculating separately the distance between 3 alternate locations received in step 2, wherein special based on infrared image
The target position obtainedWith the target position obtained based on television image featureBetween distance be
Similarly, the target position obtained based on Infrared Image FeaturesIt is obtained with based on laser reflection beam pattern
The target position arrivedBetween distance beThe target position obtained based on television image featureWith based on swash
The target position that light flare feature obtainsBetween distance be
Step 4: setting the first set distance threshold value as ε, calculate moment t by following algorithm0The possible position of target
IfAndAndThen moment t0The possible position of target is
IfAndAndThen moment t0The possible position of target is
IfAndAndThen moment t0The possible position of target is
IfAndAndThen moment t0The possible position of target is
IfAndAndThen moment t0The possible position of target is
IfAndAndThen moment t0The possible position of target is
IfAndAndThen moment t0The possible position of target is
Otherwise it is assumed that moment t0The possible position of target can not determine.
Step 5: after interval of delta t, in t1=t0+ time Δt repeats step 2~4, obtains moment t1The possible position of target
(x1,y1);
IfThen target position is determined as (x1,y1)。
Otherwise it is assumed that moment t1Target position can not determine, after interval of delta t, t2=t1+ time Δt repeats step 2~5,
Until final determine monocular cursor position.
To achieve the above object, the present invention also provides a kind of Multi-source Information Fusion single goal position determination systems.
Fig. 2 is the structural schematic diagram of Multi-source Information Fusion of embodiment of the present invention single goal position determination system, such as Fig. 2 institute
Show, Multi-source Information Fusion single goal position determination system provided in an embodiment of the present invention specifically includes:
Setup parameter obtains module 100, for obtaining setup parameter;When the setup parameter includes last moment, is current
Quarter, Fixed Time Interval, the first set distance threshold value, the second set distance threshold value and information blending constraint.
Image collection module 200, for obtaining the last moment of multiple sensor acquisitions and the different type at current time
Image, and to all described images carry out image analysis, determine the current time alternative target position of each type described image
It sets and last moment alternative target position;Described image includes infrared image, television image, representation of laser facula.
Distance calculation module 300, for calculate separately the distance between each current time alternative target position with
And the distance between each last moment alternative target position.
First comparison result determining module 400, for by each distance between last moment alternative target position
It is compared respectively with the first set distance threshold value, determines the first comparison result.
Second comparison result determining module 500, for by each distance between current time alternative target position
It is compared respectively with the first set distance threshold value, determines the second comparison result.
First judging result obtains module 600, for judging first comparison result and second comparison result is
It is no to meet information blending constraint, obtain the first judging result.
Last moment target position and current target position determination module 700, for working as first judging result
When indicating that first comparison result and second comparison result meet the information blending constraint, according to all institutes
It is corresponding to state last moment alternative target position, all current time alternative target positions and first comparison result
The corresponding information blending constraint of the information blending constraint, second comparison result, determines last moment
Target position and current target position.
Second judging result obtains module 800, for judging the current target position and the last moment mesh
Whether the distance of cursor position is greater than the second set distance threshold value, obtains the second judging result.
Subsequent time image collection module 900, for when first judging result indicate first comparison result or
When second comparison result does not meet the information blending constraint or described current second judging result expression
Carve target position with the subsequent time target position at a distance from greater than the second set distance threshold value when, obtain multiple sensings
The different types of image of the subsequent time of device acquisition, and return and image analysis step is carried out to all described images.
Single goal position determination module 1000, for indicating the current target position when second judging result
When at a distance from the subsequent time target position less than or equal to the second set distance threshold value, by the current time
Target position is determined as monocular cursor position.
Key innovations of the invention include:
(1) target position information that integrated use multisensor obtains in monocular cursor position determination process, avoids list
One source-information is easy the problem of being restricted and interfering.What sensor collected is different type image, by image
Analyze available corresponding alternate location.
(2) it in the determination of particular moment target possible position, based on feature between target alternative position, is added and is based on weighted
The information fusion algorithm of center algorithm improves the accuracy that position determines.
(3) it based on the difference characteristic between adjacent moment target possible position, determines goal end position, improves complicated ring
The stability of the border target position Xia Dan definitive result.
Compared with prior art, advantages of the present invention is mainly reflected in:
(1) system and method proposed by the invention, the monocular cursor position for realizing computer assisted automation is determining,
Reduce manual intervention degree, improves the timeliness that single target position determines under complex environment.
(2) system and method proposed by the invention, integrated use multisensor obtains in monocular cursor position determination process
The target position information arrived, avoid location information that single-sensor obtains be easy by environmental condition, target signature etc. because
Inaccurate problem caused by element limitation.
(3) system and method proposed by the invention is obtained based on adjacent moment twice in monocular cursor position determination process
Target possible position spacing determine goal end position, improve the stability and accuracy of position definitive result.
Each embodiment in this specification is described in a progressive manner, the highlights of each of the examples are with other
The difference of embodiment, the same or similar parts in each embodiment may refer to each other.For system disclosed in embodiment
For, since it is corresponded to the methods disclosed in the examples, so being described relatively simple, related place is said referring to method part
It is bright.
Used herein a specific example illustrates the principle and implementation of the invention, and above embodiments are said
It is bright to be merely used to help understand method and its core concept of the invention;At the same time, for those skilled in the art, foundation
Thought of the invention, there will be changes in the specific implementation manner and application range.In conclusion the content of the present specification is not
It is interpreted as limitation of the present invention.
Claims (5)
1. a kind of Multi-source Information Fusion single goal location determining method, which is characterized in that the Multi-source Information Fusion monocular mark
Setting determining method includes:
Obtain setup parameter;The setup parameter includes last moment, current time, Fixed Time Interval, the first set distance
Threshold value, the second set distance threshold value and information blending constraint;
Obtain the last moment of multiple sensors acquisition and the different types of image at current time, and to all described images into
Row image analysis determines the current time alternative target position and last moment alternative target position of each type described image;
Described image includes infrared image, television image, representation of laser facula;
Calculate separately the distance between each current time alternative target position and each last moment alternative mesh
The distance between cursor position;
Each distance between last moment alternative target position is compared with the first set distance threshold value respectively, really
Fixed first comparison result;
Each distance between current time alternative target position is compared with the first set distance threshold value respectively, really
Fixed second comparison result;
Judge whether first comparison result and second comparison result meet information blending constraint, obtains first
Judging result;The information blending constraint includes: seven constraint conditions, respectivelyAndAnd AndAnd AndAnd AndAnd AndAnd AndAnd AndAndWherein, ε indicates the first set distance threshold value;Indicate that first distance, the first distance are described infrared
I-th moment alternative target position of image and the distance between the i-th moment alternative target position of the television image;Table
Show second distance, i-th moment alternative target position and the representation of laser facula of the second distance for the infrared image
The distance between the i-th moment alternative target position;Indicate third distance, the third distance is the television image
The distance between the i-th moment alternative target position of i-th moment alternative target position and the representation of laser facula;
If first judging result indicates that first comparison result or second comparison result do not meet the information and melt
Constraint condition is closed, then obtains the different types of image of the subsequent time of multiple sensor acquisitions, and is returned to all figures
As carrying out image analysis step;It specifically includes:
If first judging result indicates that first comparison result and second comparison result do not meet the information
Blending constraint or second comparison result do not meet the information blending constraint, then obtain multiple sensors and adopt
The different types of image of the subsequent time of collection and lower lower moment returns and carries out image analysis step to all described images, and
The subsequent time is replaced into the last moment, the lower lower moment is replaced into the current time;
If first judging result indicates that first comparison result does not meet the information blending constraint and described the
Two comparison results meet the information blending constraint, then obtain the different types of figure of subsequent time of multiple sensor acquisitions
Picture returns and carries out image analysis step to all described images, and the current time is replaced the last moment, will be described
Subsequent time replaces the current time;Wherein, the interval at the last moment and the current time, the current time with
The interval at the interval of the subsequent time, the subsequent time and the lower lower moment is the same Fixed Time Interval;
If first judging result indicates that first comparison result and second comparison result meet the information and melt
Close constraint condition, then according to all last moment alternative target positions, all current time alternative target positions with
And the corresponding information blending constraint of first comparison result, the corresponding information of second comparison result are melted
Constraint condition is closed, determines last moment target position and current target position;It specifically includes:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
Wherein, the i-th moment alternative target position of the infrared imageThe i-th moment alternative mesh of the television image
Cursor positionI-th moment alternative target position of the representation of laser facula
Judge whether the current target position is greater than the second set distance at a distance from the last moment target position
Threshold value obtains the second judging result;
If second judging result indicates that the current target position is big at a distance from the subsequent time target position
In the second set distance threshold value, then the different types of image of the subsequent time of multiple sensor acquisitions is obtained, and returned
Image analysis step is carried out to all described images;
If second judging result indicates that the current target position is small at a distance from the subsequent time target position
In or equal to the second set distance threshold value, then the current target position is determined as monocular cursor position.
2. Multi-source Information Fusion single goal location determining method according to claim 1, which is characterized in that described second sets
Set a distance threshold value is the half of the first set distance threshold value.
3. Multi-source Information Fusion single goal location determining method according to claim 1, which is characterized in that described to count respectively
It calculates between the distance between each current time alternative target position and each last moment alternative target position
Distance, specifically include:
First distance is calculated according to the following formula;The first distance is the i-th moment alternative target position of the infrared image
The distance between i-th moment alternative target position of the television image;The formula are as follows:
Wherein, the i-th moment alternative target position of the infrared imageI-th moment alternative target position of the television image
The second distance is calculated according to the following formula;The second distance is the i-th moment alternative target of the infrared image
The distance between the i-th moment alternative target position of position and the representation of laser facula;The formula are as follows:
Wherein, the i-th moment alternative target position of the representation of laser facula
The third distance is calculated according to the following formula;The third distance is the i-th moment alternative target of the television image
Formula described in the distance between position and the i-th moment alternative target position of the representation of laser facula are as follows:
4. Multi-source Information Fusion single goal location determining method according to claim 1, which is characterized in that described second sentences
The judgment formula of disconnected result are as follows:Wherein, (xq,yq) indicate it is described current when
Carve target position;(yq-1,xq-1) indicate the last moment target position;Indicate the second set distance threshold value.
5. a kind of Multi-source Information Fusion single goal position determination system, which is characterized in that the Multi-source Information Fusion monocular mark
Setting determining system includes:
Setup parameter obtains module, for obtaining setup parameter;The setup parameter includes last moment, current time, fixation
Time interval, the first set distance threshold value, the second set distance threshold value and information blending constraint;
Image collection module, for obtaining the last moment of multiple sensor acquisitions and the different types of image at current time,
And image analysis is carried out to all described images, determine the current time alternative target position and upper one of each type described image
Moment alternative target position;Described image includes infrared image, television image, representation of laser facula;
Distance calculation module, for calculating separately the distance between each current time alternative target position and each institute
State the distance between last moment alternative target position;
First comparison result determining module, for by each distance between last moment alternative target position respectively with
One set distance threshold value is compared, and determines the first comparison result;
Second comparison result determining module, for by each distance between current time alternative target position respectively with
One set distance threshold value is compared, and determines the second comparison result;
First judging result obtains module, for judging whether first comparison result and second comparison result meet
Information blending constraint obtains the first judging result;The information blending constraint includes: seven constraint conditions, respectively
ForAndAnd AndAnd AndAnd AndAnd AndAnd AndAnd AndAndWherein, ε indicates the first set distance threshold value;Indicate first distance, it is described
First distance is the i-th moment alternative target position of the infrared image and the i-th moment alternative target position of the television image
The distance between set;Indicate that second distance, the second distance are the i-th moment alternative target position of the infrared image
The distance between i-th moment alternative target position of the representation of laser facula;Indicate third distance, the third away from
The i-th moment alternative target position from the i-th moment alternative target position and the representation of laser facula for the television image
The distance between;
Last moment target position and current target position determination module, for described in first judging result expression
When first comparison result and second comparison result meet the information blending constraint, according to all described upper a period of time
Carve alternative target position, all current time alternative target positions and the corresponding information of first comparison result
The corresponding information blending constraint of blending constraint, second comparison result, determines last moment target position
With current target position;The last moment target position and current target position determination module, specifically include:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
WhenAndAndWhen, the i-th moment target position are as follows:
Wherein, the i-th moment alternative target position of the infrared imageThe i-th moment alternative mesh of the television image
Cursor positionI-th moment alternative target position of the representation of laser facula
Second judging result obtains module, for judging the current target position and the last moment target position
Whether distance is greater than the second set distance threshold value, obtains the second judging result;
Subsequent time image collection module, for indicating first comparison result or described second when first judging result
Comparison result, which does not meet the information blending constraint or second judging result, indicates the current target position
When setting at a distance from the subsequent time target position greater than the second set distance threshold value, multiple sensors acquisitions are obtained
The different types of image of subsequent time, and return and image analysis step is carried out to all described images;The subsequent time figure
As acquisition module, specifically include:
If first judging result indicates that first comparison result and second comparison result do not meet the information
Blending constraint or second comparison result do not meet the information blending constraint, then obtain multiple sensors and adopt
The different types of image of the subsequent time of collection and lower lower moment returns and carries out image analysis step to all described images, and
The subsequent time is replaced into the last moment, the lower lower moment is replaced into the current time;
If first judging result indicates that first comparison result does not meet the information blending constraint and described the
Two comparison results meet the information blending constraint, then obtain the different types of figure of subsequent time of multiple sensor acquisitions
Picture returns and carries out image analysis step to all described images, and the current time is replaced the last moment, will be described
Subsequent time replaces the current time;Wherein, the interval at the last moment and the current time, the current time with
The interval at the interval of the subsequent time, the subsequent time and the lower lower moment is the same Fixed Time Interval;
If second judging result indicates that the current target position is big at a distance from the subsequent time target position
In the second set distance threshold value, then the different types of image of the subsequent time of multiple sensor acquisitions is obtained, and returned
Image analysis step is carried out to all described images;
Single goal position determination module, for when second judging result indicate the current target position and it is described under
When the distance of one moment target position is less than or equal to the second set distance threshold value, by the current target position
It is determined as monocular cursor position.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810316946.7A CN108537278B (en) | 2018-04-10 | 2018-04-10 | A kind of Multi-source Information Fusion single goal location determining method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810316946.7A CN108537278B (en) | 2018-04-10 | 2018-04-10 | A kind of Multi-source Information Fusion single goal location determining method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108537278A CN108537278A (en) | 2018-09-14 |
CN108537278B true CN108537278B (en) | 2019-07-16 |
Family
ID=63479763
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810316946.7A Active CN108537278B (en) | 2018-04-10 | 2018-04-10 | A kind of Multi-source Information Fusion single goal location determining method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108537278B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1389710A (en) * | 2002-07-18 | 2003-01-08 | 上海交通大学 | Multiple-sensor and multiple-object information fusing method |
CN106778574A (en) * | 2016-12-06 | 2017-05-31 | 广州视源电子科技股份有限公司 | Detection method and device for face image |
CN107492113A (en) * | 2017-06-01 | 2017-12-19 | 南京行者易智能交通科技有限公司 | A kind of moving object in video sequences position prediction model training method, position predicting method and trajectory predictions method |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103646397B (en) * | 2013-12-02 | 2016-10-19 | 西北工业大学 | Real-time synthetic aperture perspective imaging method based on multisource data fusion |
CN107273530B (en) * | 2017-06-28 | 2021-02-12 | 南京理工大学 | Internet information-based important ship target dynamic monitoring method |
CN107886523A (en) * | 2017-11-01 | 2018-04-06 | 武汉大学 | Vehicle target movement velocity detection method based on unmanned plane multi-source image |
-
2018
- 2018-04-10 CN CN201810316946.7A patent/CN108537278B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1389710A (en) * | 2002-07-18 | 2003-01-08 | 上海交通大学 | Multiple-sensor and multiple-object information fusing method |
CN106778574A (en) * | 2016-12-06 | 2017-05-31 | 广州视源电子科技股份有限公司 | Detection method and device for face image |
CN107492113A (en) * | 2017-06-01 | 2017-12-19 | 南京行者易智能交通科技有限公司 | A kind of moving object in video sequences position prediction model training method, position predicting method and trajectory predictions method |
Non-Patent Citations (2)
Title |
---|
一种基于多重距离聚类的多源侦察结果融合算法;徐英;《兵器装备工程学报》;20160930;第37卷(第9期);83-86 |
基于整体加权的多源数据融合研究;赖成瑜;《萍乡高等专科学校学报》;20140630;第31卷(第3期);17-20 |
Also Published As
Publication number | Publication date |
---|---|
CN108537278A (en) | 2018-09-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109471096B (en) | Multi-sensor target matching method and device and automobile | |
CN114299417A (en) | Multi-target tracking method based on radar-vision fusion | |
EP2858008A2 (en) | Target detecting method and system | |
CN110458055A (en) | A kind of obstacle detection method and system | |
CN102789578B (en) | Infrared remote sensing image change detection method based on multi-source target characteristic support | |
CN106373143A (en) | Adaptive method and system | |
CN110033473A (en) | Motion target tracking method based on template matching and depth sorting network | |
CN108764167A (en) | A kind of target of space time correlation recognition methods and system again | |
CN102567994B (en) | Infrared small target detection method based on angular point gaussian characteristic analysis | |
CN106355604A (en) | Target image tracking method and system | |
CN110598590A (en) | Close interaction human body posture estimation method and device based on multi-view camera | |
KR20140114741A (en) | Apparatus and method for human pose estimation | |
CN101281648A (en) | Method for tracking dimension self-adaption video target with low complex degree | |
CN106251362B (en) | A kind of sliding window method for tracking target and system based on fast correlation neighborhood characteristics point | |
US11361534B2 (en) | Method for glass detection in real scenes | |
CN104599291B (en) | Infrared motion target detection method based on structural similarity and significance analysis | |
CN109858526A (en) | Sensor-based multi-target track fusion method in a kind of target following | |
CN106530407A (en) | Three-dimensional panoramic splicing method, device and system for virtual reality | |
CN109978919A (en) | A kind of vehicle positioning method and system based on monocular camera | |
CN109086803A (en) | A kind of haze visibility detection system and method based on deep learning and the personalized factor | |
CN112347817B (en) | Video target detection and tracking method and device | |
CN108537278B (en) | A kind of Multi-source Information Fusion single goal location determining method and system | |
CN114169425A (en) | Training target tracking model and target tracking method and device | |
CN115841497B (en) | Boundary detection method and escalator area intrusion detection method and system | |
CN117315547A (en) | Visual SLAM method for solving large duty ratio of dynamic object |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |