CN115170663A - Cross-space-time authenticity target multi-mode associated ultra-long-range passive ranging method - Google Patents


Info

Publication number: CN115170663A (application CN202210798319.8A)
Other versions: CN115170663B (granted)
Authority: CN (China)
Prior art keywords: target; point; space; modal; point group
Legal status: Active (granted)
Other languages: Chinese (zh)
Inventors: 李宁, 刘海波, 李焱, 韩玺钰, 吴迪
Current and original assignee: Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority and filing date: 2022-07-08
Publication dates: 2022-10-11 (CN115170663A); 2023-03-14 (CN115170663B)

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/70 — Determining position or orientation of objects or cameras
    • G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods

Abstract

A cross-space-time true and false target multi-modal associated ultra-long-range passive ranging method, relating to the field of photoelectric countermeasure and early-warning detection, comprises the following steps: performing space-time consistency processing on the multi-modal source data; establishing a multi-modal source data feature decision vector diagram, designing an adaptive fuzzy fusion decision criterion, and coarsely screening point-group targets; projecting the information acquired by the multi-station detection equipment into the same global coordinate system using the projection imaging principle; establishing a star-map module based on plane geometry to construct a point-group target plane figure; performing nonlinear optimization with the L-M algorithm, and carrying out matching-angle compensation calculation and coarse matching of multi-station point-group targets; determining the true point target based on the confidence of the feedback link; and performing target positioning calculation on the true point target with the secondary target point as a reference to obtain the actual spatial position of the target object. The invention confirms the ranging target with an accuracy above 85%, provides accurate intersection ranging at ultra-long range (beyond 150 km), and achieves a measurement accuracy better than 50 m.

Description

Cross-space-time authenticity target multi-mode associated ultra-long-range passive ranging method
Technical Field
The invention relates to the technical field of photoelectric countermeasure and early-warning detection, and in particular to a cross-space-time true and false target multi-modal associated ultra-long-range passive ranging method.
Background
Multi-station passive intersection measurement can accurately locate a target and return precise three-dimensional information. It is an important means of precise target positioning and a precondition for air defense and anti-missile guidance in which infrared sensing works independently or assists radar. The currently mature single-station active positioning offers high measurement accuracy, but it is easily affected by physical factors such as the atmosphere, obstacles and power limits, especially for ultra-long-range measurement, so its application scenarios are limited in practice. For example, Chinese patent CN111521061A, "Vertical target parameter testing device based on linear array CCD intersection measurement and a debugging method thereof", addresses the poor environmental adaptability, low debugging efficiency, and difficult equipment debugging and calibration of existing linear-array CCD intersection testing devices used in the field. In that patent, manual coarse aiming is realized with a laser lighting device, a linear-array CCD camera, a levelling component and so on; the miss distance of the imaging light spot is then extracted automatically through the mutually aimed cameras so that the spot is driven to the horizontal center of the camera target surface, and automatic fine aiming is completed when the imaging spots of the two testing devices' mutually aimed light sources fall on the horizontal centers of the camera target surfaces. The device measures accurately, but it is easily disturbed by obstacles, cannot measure over ultra-long distances, and its practical application scenarios are therefore limited.
Multi-station passive intersection measurement is an effective solution that has been proposed for these situations, but two difficult problems still need to be solved: determining the true target and making detection decisions when facing mixed true and false point-group targets, and the intersection positioning accuracy deviation introduced by ultra-long-range measurement.
Disclosure of Invention
The invention aims to provide a cross-space-time true and false target multi-modal associated ultra-long-range passive ranging method, so as to solve the problems of true-target determination and detection decision, and of intersection positioning accuracy deviation caused by ultra-long-range measurement, in existing multi-station passive intersection measurement.
The technical scheme adopted by the invention to solve the technical problem is as follows:
The invention provides a cross-space-time true and false target multi-modal associated ultra-long-range passive ranging method, which comprises the following steps:
step S1: performing space-time consistency processing on the multi-modal source data;
step S2: establishing a multi-modal source data feature decision vector diagram, designing an adaptive fuzzy fusion decision criterion, and coarsely screening point-group targets;
step S3: projecting the information acquired by the multi-station detection equipment into the same global coordinate system using the projection imaging principle to obtain a plurality of point-group target mapping maps to be determined;
step S4: establishing a star-map module using plane geometry to construct a point-group target plane figure;
step S5: performing nonlinear optimization with the L-M algorithm, and carrying out matching-angle compensation calculation and coarse matching of multi-station point-group targets;
step S6: determining the true point target and a secondary target point based on the confidence of the feedback link;
step S7: performing target positioning calculation on the true point target with the secondary target point as a reference, to obtain the actual spatial position of the target object.
Further, the specific operation of step S1 is as follows:
time and space consistency processing is performed on the multi-modal source data using a time-period interpolation-extrapolation method and a spatial conversion strategy based on the UT transform.
Further, in the UT transform, the Sigma points are selected according to:

$$\min_{\{\chi_i\}}\; c\big[\{\chi_i\},\, p_x(x)\big] \quad \text{s.t.}\quad g\big[\{\chi_i\},\, p_x(x)\big] = 0 \tag{1}$$

where $\{\chi_i\}$ denotes the Sigma point set, $x$ denotes a Sigma point, $p_x(x)$ is the density function of $x$, $c[\{\chi_i\}, p_x(x)]$ is the cost function, min denotes taking the minimum of the expression over the iteration, and s.t. denotes "subject to".
Further, the specific operation of step S2 is as follows:
the knowledge-modal data acquired from the measured multi-modal source data are processed to form a multi-modal source data feature decision vector diagram; an adaptive fuzzy fusion decision criterion is set using the l1 regular optimization criterion and the D-S criterion, and the first n point-group targets that best satisfy the adaptive fuzzy fusion decision criterion are coarsely screened as point-group targets to be determined; a true-target attribute discrimination strategy based on the D-S criterion is established, fusion judgment is performed on the point-group targets to be determined using the strategy, and the attribute type and confidence of each point-group target to be determined are given.
Further, the l1 regular optimization criterion is expressed as:

$$\min \left\| I_1 - I_2 \right\|_{*} \tag{2}$$

where min denotes taking the minimum of the expression over the iteration, $I_1$ denotes point-group target 1, and $I_2$ denotes point-group target 2.
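As an illustration of how the criterion of formula (2) can be evaluated, the following sketch computes the nuclear norm of the difference between two point-group matrices with NumPy. It assumes each point-group target is represented as a coordinate (or feature) matrix of identical shape; the patent does not specify the exact matrix form, so the function name and representation are hypothetical.

```python
import numpy as np

def nuclear_norm_distance(group1, group2):
    """Nuclear-norm distance ||I1 - I2||_* between two point-group matrices.

    Minimal sketch of the quantity minimised in formula (2); representing a
    point-group target as a matrix is an assumption, not taken from the patent.
    """
    diff = np.asarray(group1, dtype=float) - np.asarray(group2, dtype=float)
    return np.linalg.norm(diff, ord="nuc")  # sum of singular values
```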
Further, the true-target attribute discrimination strategy is:

$$P = \omega \, P\big(T_n \mid (\mathrm{Point}, \mathrm{Size})\big) + (1-\omega)\, P'\big(T_n \mid (\mathrm{Point}, \mathrm{word})\big) \tag{3}$$

$$\omega = \mathrm{Sigmoid}\big(\mathrm{Size}/(H \cdot W) - 5\big) \tag{4}$$

Formula (3) is the true-target attribute discrimination strategy, and formula (4) is the additional attribute-information weight based on target size; $P$ denotes discrimination operator 1, $\omega$ the weight parameter, $T_n$ the maximum-likelihood selection condition, Point the selected target point, Size the abstract size of the selected target point, $P'$ discrimination operator 2, word the abstract feature description of the selected target point, Sigmoid the l1 regular optimization operation, $H$ the target height information, and $W$ the target width information.
Further, the specific operation of step S4 is as follows:
according to the plurality of point-group target mapping maps to be determined, the target points are connected in sequence to form a polygon using plane geometry, and a star-map module is constructed to form the point-group target plane figure.
Further, the specific operation of step S5 is as follows:
initial parameters are set according to the plurality of point-group target mapping maps to be determined and input into the star-map module for star-map spatial surface selection and polygon-diagonal simple-ratio optimization; using the invariance of the polygon-diagonal simple ratio, a point-trace basic recognition decision feature matrix is set up among the point-group target mapping maps to be determined; the feature matrix is evaluated with a singular value decomposition algorithm to obtain the singular values of the polygon coordinate matrix, which are substituted into formula (7); nonlinear optimization is performed with the L-M algorithm, and the point-group target mapping matrix is calculated from the optimization result; matching-angle compensation is calculated using the point-group target mapping matrix, and coarse matching of multi-station point-group targets is performed using singular-value invariance; when formula (5) is satisfied, the remaining point-group targets are further matched, the points already matched on the main diagonal are used to search for target points matching the point-group targets to be determined, and a searched point satisfying formula (6) is determined to be the main target point; finally, the internal and external parameters obtained from formula (5) are taken as initial values, all matching points obtained from formula (6) are taken as samples, and formula (7) is solved by nonlinear optimization with the L-M algorithm to obtain the optimal parameters, i.e. the matching-angle compensation error measurement value;
[Formulas (5), (6) and (7) are presented as images in the original publication (image references BDA0003736520770000041–43) and are not reproduced here.]
In these formulas, $\min F_1(x)$ denotes that the expression $F_1(x)$ is iterated to a minimum; $P_1$ and $W_1$ denote two points on the main diagonal; $|\mathrm{cross}_i(P)-\mathrm{cross}_i(W)|$ denotes the simple-ratio operation on points $P$ and $W$; $\lambda_1$ and $\lambda_2$ are weighting parameters; $\sigma_{ui}$ and $\sigma_{vi}$ are the corresponding standard deviations; $|dx(P)-dx(W)|$ denotes the length of the main diagonal; $W_0$ is a weight factor; $VC$ is the point coordinate matrix; $N$ is the number of identifiable target points; $X'_i$ and $X_i$ are point coordinate values; and $C$ is the rotation matrix.
Further, the specific operation of step S6 is as follows:
based on the adaptive fuzzy fusion decision criterion and the matching-angle compensation error measurement value, a confidence is set via the feedback link, and false point targets are eliminated continuously until a minimal point-group target plane figure can no longer be formed, thereby determining the true point target and the secondary target point.
Further, the specific operation of step S7 is as follows:
taking the secondary target point as a reference, a two-dimensional operator is used to reduce the error between the true three-dimensional position of the true point target and the system, and the spatial position of the true point target is determined using three-dimensional reconstruction gridding and ray intersection measurement.
The invention has the following beneficial effects:
According to the cross-space-time true and false target multi-modal associated ultra-long-range passive ranging method, the acquired multi-modal source data are first subjected to space-time consistency processing to improve the accuracy of the basic equipment used for intersection measurement. A feature decision vector diagram is then formed from the multi-modal source data, an adaptive fuzzy fusion decision criterion is designed from it, and point-group targets are coarsely screened. The information acquired by the multi-station detection equipment is projected into a global coordinate system using the projection imaging principle, yielding point-group target mapping maps to be determined that are unified in the same global coordinate system. A star-map module is established using plane geometry, and a point-group target plane figure is constructed. Nonlinear optimization is then performed with the L-M algorithm, matching-angle compensation is calculated, and multi-station point-group targets are coarsely matched, which improves the screening and measurement accuracy for complex point-group targets. The true point target is determined based on the confidence of the feedback link, completing the positioning calculation of the precisely measured target and the determination of the compensation-angle function parameters. Finally, target positioning calculation is performed on the true point target with the secondary target point as a reference to obtain the actual spatial position of the target object, completing the ultra-long-range space-time ranging of the complex point-group target.
Compared with the prior art, the invention has the following advantages:
1. The invention enables multi-station detection equipment to accurately distinguish complex true and false point-group targets, with a ranging-target confirmation accuracy above 85%.
2. The invention provides accurate intersection ranging at ultra-long range (beyond 150 km), with a measurement accuracy better than 50 m.
3. The invention realizes multi-modal associated ultra-long-range passive ranging of cross-space-time true and false point-group targets, greatly improves the efficiency of the optoelectronic system, and provides accurate three-dimensional information and timely, effective guidance information for a weapon system.
Drawings
FIG. 1 is a flow chart of a cross-space-time authenticity target multi-mode correlation ultra-long-range passive ranging method.
Fig. 2 is a schematic diagram of the specific implementation of coarse matching of multi-station point-group targets and matching-angle compensation calculation in the cross-space-time true and false target multi-modal associated ultra-long-range passive ranging method.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
As shown in Fig. 1, the cross-space-time true and false target multi-modal associated ultra-long-range passive ranging method mainly comprises the following steps:
step S1: performing space-time consistency processing on the multi-modal source data;
step S2: establishing a multi-modal source data feature decision vector diagram, designing an adaptive fuzzy fusion decision criterion, and coarsely screening point-group targets;
step S3: projecting the information acquired by the multi-station detection equipment into the same global coordinate system using the projection imaging principle to obtain a plurality of point-group target mapping maps to be determined;
step S4: establishing a star-map module using plane geometry to construct a point-group target plane figure;
step S5: performing nonlinear optimization with the L-M algorithm, and carrying out matching-angle compensation calculation and coarse matching of multi-station point-group targets;
step S6: determining the true point target and a secondary target point based on the confidence of the feedback link;
step S7: performing target positioning calculation on the true point target with the secondary target point as a reference, to obtain the actual spatial position of the target object.
The invention relates to a multi-mode correlation ultra-long-range passive ranging method for cross-space-time authenticity targets, which specifically comprises the following steps of:
step S1: and performing space-time consistency processing on the multi-modal source data.
The specific operation steps are as follows:
Common-aperture staring multi-modal sensing multi-station detection equipment is used to acquire the multi-modal source data. The multi-modal source data are mostly knowledge attributes of targets, including visible image sequences, long-wave image sequences, medium-wave image sequences, short-wave image sequences, time codes, radiation characteristic data, angle-measurement information, and other target statistics.
Time and space consistency processing is performed on the multi-modal source data acquired by the multi-station detection equipment using a time-period interpolation-extrapolation method and a spatial conversion strategy based on the UT (unscented transform).
The specific operation steps are as follows:
The multi-modal source data acquired by the multi-station detection equipment are target tracks. According to the properties of the acquired target tracks, time consistency processing is performed with the time-period interpolation-extrapolation method, following the principles of using the latest target track and extrapolating the associated target track first; spatial consistency processing is then performed with the UT-transform-based spatial conversion strategy, unifying the target tracks in period and space.
Formula (1) gives the selection of the Sigma points in the UT transform:

$$\min_{\{\chi_i\}}\; c\big[\{\chi_i\},\, p_x(x)\big] \quad \text{s.t.}\quad g\big[\{\chi_i\},\, p_x(x)\big] = 0 \tag{1}$$

where $\{\chi_i\}$ denotes the Sigma point set, $x$ denotes a Sigma point, $p_x(x)$ is the density function of $x$, $c[\{\chi_i\}, p_x(x)]$ is the cost function, min denotes taking the minimum of the expression over the iteration, and s.t. denotes "subject to".
Space-time consistency processing with the time-period interpolation-extrapolation method and the UT-transform-based spatial conversion strategy improves the accuracy of the basic equipment used for intersection measurement.
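For reference, the sketch below shows the standard symmetric sigma-point construction of the unscented transform (UT) in Python. It is only an illustration of the kind of Sigma-point selection that formula (1) constrains; the patent's own cost function $c[\{\chi_i\}, p_x(x)]$ and constraint $g$ are not given in closed form, so the classic Julier-Uhlmann scaled form is used instead.

```python
import numpy as np

def ut_sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Classic scaled symmetric sigma-point set for the unscented transform.

    Illustrative only: the constrained selection of formula (1) is replaced by
    the standard construction, since the patent does not give its cost
    function explicitly.
    """
    n = mean.shape[0]
    lam = alpha**2 * (n + kappa) - n
    # Columns of the matrix square root of (n + lambda) * cov.
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    points = np.zeros((2 * n + 1, n))
    points[0] = mean
    for i in range(n):
        points[i + 1] = mean + sqrt_cov[:, i]
        points[n + i + 1] = mean - sqrt_cov[:, i]
    # Recombination weights for the mean (w_m) and covariance (w_c).
    w_m = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    w_c = w_m.copy()
    w_m[0] = lam / (n + lam)
    w_c[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return points, w_m, w_c
```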
Step S2: a multi-modal source data feature decision vector diagram is established according to the association relations of the space-time-consistent multi-modal source data; an adaptive fuzzy fusion decision criterion is then designed from the feature decision vector diagram, and point-group targets are coarsely screened.
The specific operation steps are as follows:
The knowledge-modal data acquired from the measured multi-modal source data are processed to form a multi-modal source data feature decision vector diagram; an adaptive fuzzy fusion decision criterion is established using the l1 regular optimization criterion and the D-S criterion, and the first n point-group targets that best satisfy the adaptive fuzzy fusion decision criterion are coarsely screened as point-group targets to be determined.
Formula (2) is the l1 regular optimization criterion:

$$\min \left\| I_1 - I_2 \right\|_{*} \tag{2}$$

where min denotes taking the minimum of the expression over the iteration, $I_1$ denotes point-group target 1, and $I_2$ denotes point-group target 2.
To address problems such as data redundancy and mutually exclusive identification attributes that arise when forming the multi-modal source data feature decision vector diagram, and to accurately identify the attributes of the point-group targets to be determined, a true-target attribute discrimination strategy based on the D-S criterion is established: additional attribute information in the target feature matrix, such as target size and detection distance (attribute data specific to the particular scene), is selected to formulate the strategy. Fusion judgment is then performed on the obtained point-group targets to be determined using the true-target attribute discrimination strategy, and the attribute type and confidence of each point-group target to be determined are given. The established true-target attribute discrimination strategy is:
$$P = \omega \, P\big(T_n \mid (\mathrm{Point}, \mathrm{Size})\big) + (1-\omega)\, P'\big(T_n \mid (\mathrm{Point}, \mathrm{word})\big) \tag{3}$$

$$\omega = \mathrm{Sigmoid}\big(\mathrm{Size}/(H \cdot W) - 5\big) \tag{4}$$

Formula (3) is the true-target attribute discrimination strategy, and formula (4) is the additional attribute-information weight based on target size. Here $P$ denotes discrimination operator 1, $\omega$ the weight parameter, $T_n$ the maximum-likelihood selection condition, Point the selected target point, Size the abstract size of the selected target point, $P'$ discrimination operator 2, word the abstract feature description of the selected target point, Sigmoid the l1 regular optimization operation, $H$ the target height information, and $W$ the target width information.
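Formulas (3) and (4) can be read as a size-gated blend of two attribute likelihoods. The sketch below is a minimal Python rendering under that reading; the two likelihoods p_size and p_word are assumed to come from upstream modality-specific classifiers that the patent does not detail, and the ordinary logistic function stands in for Sigmoid.

```python
import numpy as np

def sigmoid(x):
    """Logistic function, used here for the Sigmoid of formula (4)."""
    return 1.0 / (1.0 + np.exp(-x))

def fused_attribute_score(p_size, p_word, size, height, width):
    """Fused discrimination score of formulas (3)-(4) (illustrative sketch).

    p_size : assumed likelihood P(T_n | (Point, Size)) from the size attribute.
    p_word : assumed likelihood P'(T_n | (Point, word)) from the abstract
             feature description.
    """
    omega = sigmoid(size / (height * width) - 5.0)   # formula (4): size-dependent weight
    return omega * p_size + (1.0 - omega) * p_word   # formula (3): fused decision operator
```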
Step S3: using the projection imaging principle, the information acquired by the multi-station detection equipment is projected into the same global coordinate system to obtain a plurality of point-group target mapping maps to be determined, unified in the same global coordinate system.
The specific operation steps are as follows:
Using the projection imaging principle, the information acquired by the multi-station detection equipment is converted, according to the characteristics of the equipment, into relative angles within the view of the target object, and all of it is unified into the same global coordinate system, forming a plurality of point-group target mapping maps to be determined in that coordinate system.
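A minimal sketch of this projection step follows, assuming each station reports an azimuth (measured clockwise from north) and an elevation angle, and that the station positions are already expressed in a common local east-north-up (ENU) frame; the patent does not state its exact projection model, so these conventions are illustrative.

```python
import numpy as np

def los_to_global(azimuth_deg, elevation_deg, station_enu):
    """Convert one station's line-of-sight angles into a ray in a shared frame.

    Returns the ray origin (the station position) and a unit direction vector
    in the common ENU frame.  The angle conventions are assumptions, not taken
    from the patent.
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    direction = np.array([
        np.cos(el) * np.sin(az),   # east component
        np.cos(el) * np.cos(az),   # north component
        np.sin(el),                # up component
    ])
    return np.asarray(station_enu, dtype=float), direction
```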
Step S4: a star-map module is established using plane geometry, and a point-group target plane figure is constructed.
The specific operation steps are as follows:
According to the plurality of point-group target mapping maps to be determined in the same global coordinate system, the target points are connected in sequence to form a polygon using plane geometry, and a star-map module is constructed, forming the point-group target plane figure.
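One simple way to "connect the target points in sequence to form a polygon" is to order the projected points counter-clockwise around their centroid, as sketched below. The ordering rule is an illustrative choice: the patent does not specify how the sequence is chosen, and strongly non-convex point groups may need a different rule.

```python
import numpy as np

def points_to_polygon(points_2d):
    """Order projected point-group targets into a polygon (illustrative).

    Points are sorted counter-clockwise by angle around their centroid, which
    yields a simple polygon for roughly convex point groups.
    """
    pts = np.asarray(points_2d, dtype=float)
    centroid = pts.mean(axis=0)
    angles = np.arctan2(pts[:, 1] - centroid[1], pts[:, 0] - centroid[0])
    return pts[np.argsort(angles)]
```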
Step S5: nonlinear optimization is performed with the L-M algorithm, and matching-angle compensation calculation and coarse matching of multi-station point-group targets are carried out.
The specific operation steps are as follows:
As shown in Fig. 2, initial parameters are set according to the plurality of point-group target mapping maps to be determined in the same global coordinate system; the initial parameters include the information obtained by the multi-station detection equipment, the parameters of the true-target attribute discrimination strategy in formulas (3) and (4), and the parameters of the adaptive fuzzy fusion decision criterion in formula (2). The initial parameters are input into the star-map module for star-map spatial surface selection and polygon-diagonal simple-ratio optimization. Using the invariance of the polygon-diagonal simple ratio, a point-trace basic recognition decision feature matrix is set up among the point-group target mapping maps to be determined; the feature matrix is evaluated with a singular value decomposition algorithm to obtain the singular values of the polygon coordinate matrix, which are substituted into formula (7); nonlinear optimization is then performed with the L-M algorithm, and the point-group target mapping matrix is calculated from the optimization result. Matching-angle compensation is calculated using the point-group target mapping matrix, and coarse matching of multi-station point-group targets is performed using singular-value invariance. When formula (5) is satisfied, the remaining point-group targets are further matched: the points already matched on the main diagonal are used to search for target points matching the point-group targets to be determined, and a searched point satisfying formula (6) is determined to be the main target point. Finally, the internal and external parameters obtained from formula (5) are taken as initial values, all matching points obtained from formula (6) are taken as samples, and formula (7) is solved by nonlinear optimization with the L-M algorithm to obtain the optimal parameters, i.e. the matching-angle compensation error measurement value.
[Formulas (5), (6) and (7) are presented as images in the original publication (image references BDA0003736520770000101–103) and are not reproduced here. Formula (7) is the objective function used for the nonlinear optimization with the L-M algorithm.]
In these formulas, $\min F_1(x)$ denotes that the expression $F_1(x)$ is iterated to a minimum; $P_1$ and $W_1$ denote two points on the main diagonal; $|\mathrm{cross}_i(P)-\mathrm{cross}_i(W)|$ denotes the simple-ratio operation on points $P$ and $W$; $\lambda_1$ and $\lambda_2$ are weighting parameters; $\sigma_{ui}$ and $\sigma_{vi}$ are the corresponding standard deviations; $|dx(P)-dx(W)|$ denotes the length of the main diagonal; $W_0$ is a weight factor; $VC$ is the point coordinate matrix; $N$ is the number of identifiable target points; $X'_i$ and $X_i$ are point coordinate values; and $C$ is the rotation matrix.
The optimal parameter solutions for the internal and external parameters and for the point-trace basic recognition decision feature matrix are obtained through the nonlinear optimization and used as accurate values for calibration and attitude measurement. With this method, parameters such as the focal length and point coordinates of a sensor that has not been strictly calibrated can be extracted and the attitude angle obtained; these values can also serve as initial parameter values for other sensors, accelerating the convergence of the L-M algorithm.
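Two pieces of this step translate naturally into code: the singular values of a zero-centred polygon coordinate matrix, whose invariance drives the coarse matching, and a Levenberg-Marquardt refinement of the compensation parameters. The sketch below uses NumPy and SciPy; the residual function corresponding to formula (7) is left as an input, since that formula is only available as an image in the publication.

```python
import numpy as np
from scipy.optimize import least_squares

def polygon_singular_values(coords):
    """Singular values of a zero-centred polygon coordinate matrix.

    Centring removes the dependence on translation; the resulting singular
    values are the invariants used for coarse multi-station matching.
    """
    centred = np.asarray(coords, dtype=float)
    centred = centred - centred.mean(axis=0)
    return np.linalg.svd(centred, compute_uv=False)

def refine_compensation_parameters(residual_fn, x0):
    """Levenberg-Marquardt refinement (illustrative wrapper).

    residual_fn is assumed to return the stacked residual vector of the
    objective in formula (7); its exact form is not reproduced in the text.
    """
    result = least_squares(residual_fn, np.asarray(x0, dtype=float), method="lm")
    return result.x
```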
Step S6: the true point target and the secondary target point are determined based on the confidence of the feedback link.
The specific operation steps are as follows:
Based on the adaptive fuzzy fusion decision criterion and the matching-angle compensation error measurement value, a confidence is set via the feedback link, and false point targets are eliminated continuously until a minimal point-group target plane figure can no longer be formed, thereby determining the true point target and the secondary target point. Because the selection of the true point target is an iterative process, after the true point target is determined, the penultimate point obtained in the iteration (the point closest to the true point target) is taken as the secondary target point.
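The feedback loop of step S6 can be pictured as repeatedly discarding the lowest-confidence point, as in the hedged sketch below. The confidence function is assumed to combine the adaptive fuzzy fusion decision score with the matching-angle compensation error (the patent gives no closed form), and the stopping rule (a polygon needs at least three vertices) is an illustrative reading of "until a minimal point-group target plane figure can no longer be formed".

```python
def eliminate_false_points(points, confidence_fn):
    """Iterative false-point elimination (illustrative sketch).

    points        : candidate point targets of one point group.
    confidence_fn : assumed scoring function; higher means more likely true.
    Returns the surviving highest-confidence point as the true point target
    and the last point removed as the secondary target point.
    """
    remaining = list(points)
    last_removed = None
    while len(remaining) >= 3:          # three points = smallest plane polygon
        worst = min(remaining, key=confidence_fn)
        remaining.remove(worst)
        last_removed = worst
    true_target = max(remaining, key=confidence_fn)
    return true_target, last_removed
```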
Step S7: target positioning calculation is performed on the true point target with the secondary target point as a reference, to obtain the actual spatial position of the target object.
The specific operation steps are as follows:
Taking the secondary target point as a reference, a two-dimensional operator is used to reduce the error between the true three-dimensional position of the true point target and the system, and the spatial position of the true point target is determined using three-dimensional reconstruction gridding and ray intersection measurement. More specifically, the compensation-angle function parameters and the initial ray positions are superimposed, a polar coordinate system is established and gridded, and a three-dimensional point-cloud reconstruction operation is performed on the true point target; the final intersection point is the actual spatial position of the true point target in the real three-dimensional scene, which completes the ultra-long-range space-time ranging of the complex point-group target.
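The final intersection can be illustrated with a least-squares multi-ray triangulation: each station contributes a ray toward the true point target, and the point minimising the summed squared distance to all rays is taken as its spatial position. This is a minimal sketch; the patent's gridded polar coordinate system and compensation-angle superposition are omitted here.

```python
import numpy as np

def intersect_rays(origins, directions):
    """Least-squares intersection point of several station rays.

    origins    : iterable of 3-vectors (station positions in the global frame).
    directions : iterable of 3-vectors (line-of-sight directions, any length).
    Solves sum_i P_i x = sum_i P_i o_i, where P_i projects onto the plane
    orthogonal to ray i.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ np.asarray(o, dtype=float)
    return np.linalg.solve(A, b)
```

With rays built as in the step S3 sketch, a call such as `intersect_rays([o1, o2], [d1, d2])` (hypothetical variables) would return the estimated target position in the same global frame.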
In conclusion, when processing complex true and false point-group targets, the method achieves cross-space-time passive intersection ranging beyond 150 km with a ranging accuracy better than 50 m; it can provide accurate three-dimensional information for a weapon system and the necessary technical support for precision guidance.
The above description is only a preferred embodiment of the present invention, and does not limit the present invention in any way. It will be understood by those skilled in the art that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A cross-space-time true and false target multi-modal associated ultra-long-range passive ranging method, characterized by comprising the following steps:
step S1: performing space-time consistency processing on the multi-modal source data;
step S2: establishing a multi-modal source data feature decision vector diagram, designing an adaptive fuzzy fusion decision criterion, and coarsely screening point-group targets;
step S3: projecting the information acquired by the multi-station detection equipment into the same global coordinate system using the projection imaging principle to obtain a plurality of point-group target mapping maps to be determined;
step S4: establishing a star-map module using plane geometry to construct a point-group target plane figure;
step S5: performing nonlinear optimization with the L-M algorithm, and carrying out matching-angle compensation calculation and coarse matching of multi-station point-group targets;
step S6: determining the true point target and a secondary target point based on the confidence of the feedback link;
step S7: performing target positioning calculation on the true point target with the secondary target point as a reference, to obtain the actual spatial position of the target object.
2. The cross-space-time true and false target multi-modal associated ultra-long-range passive ranging method according to claim 1, characterized in that the specific operation of step S1 is as follows:
time and space consistency processing is performed on the multi-modal source data using a time-period interpolation-extrapolation method and a spatial conversion strategy based on the UT transform.
3. The cross-space-time true and false target multi-modal associated ultra-long-range passive ranging method according to claim 2, characterized in that, in the UT transform, the Sigma points are selected according to:

$$\min_{\{\chi_i\}}\; c\big[\{\chi_i\},\, p_x(x)\big] \quad \text{s.t.}\quad g\big[\{\chi_i\},\, p_x(x)\big] = 0 \tag{1}$$

where $\{\chi_i\}$ denotes the Sigma point set, $x$ denotes a Sigma point, $p_x(x)$ is the density function of $x$, $c[\{\chi_i\}, p_x(x)]$ is the cost function, min denotes taking the minimum of the expression over the iteration, and s.t. denotes "subject to".
4. The cross-space-time true and false target multi-modal associated ultra-long-range passive ranging method according to claim 2, characterized in that the specific operation of step S2 is as follows:
the knowledge-modal data acquired from the measured multi-modal source data are processed to form a multi-modal source data feature decision vector diagram; an adaptive fuzzy fusion decision criterion is set using the l1 regular optimization criterion and the D-S criterion, and the first n point-group targets that best satisfy the adaptive fuzzy fusion decision criterion are coarsely screened as point-group targets to be determined; a true-target attribute discrimination strategy based on the D-S criterion is established, fusion judgment is performed on the point-group targets to be determined using the strategy, and the attribute type and confidence of each point-group target to be determined are given.
5. The cross-space-time true and false target multi-modal associated ultra-long-range passive ranging method according to claim 4, characterized in that the l1 regular optimization criterion is expressed as:

$$\min \left\| I_1 - I_2 \right\|_{*} \tag{2}$$

where min denotes taking the minimum of the expression over the iteration, $I_1$ denotes point-group target 1, and $I_2$ denotes point-group target 2.
6. The cross-space-time true and false target multi-modal associated ultra-long-range passive ranging method according to claim 4, characterized in that the true-target attribute discrimination strategy is:

$$P = \omega \, P\big(T_n \mid (\mathrm{Point}, \mathrm{Size})\big) + (1-\omega)\, P'\big(T_n \mid (\mathrm{Point}, \mathrm{word})\big) \tag{3}$$

$$\omega = \mathrm{Sigmoid}\big(\mathrm{Size}/(H \cdot W) - 5\big) \tag{4}$$

where formula (3) is the true-target attribute discrimination strategy and formula (4) is the additional attribute-information weight based on target size; $P$ denotes discrimination operator 1, $\omega$ the weight parameter, $T_n$ the maximum-likelihood selection condition, Point the selected target point, Size the abstract size of the selected target point, $P'$ discrimination operator 2, word the abstract feature description of the selected target point, Sigmoid the l1 regular optimization operation, $H$ the target height information, and $W$ the target width information.
7. The cross-space-time true and false target multi-modal associated ultra-long-range passive ranging method according to claim 4, characterized in that the specific operation of step S4 is as follows:
according to the plurality of point-group target mapping maps to be determined, the target points are connected in sequence to form a polygon using plane geometry, and a star-map module is constructed to form the point-group target plane figure.
8. The cross-space-time true and false target multi-modal associated ultra-long-range passive ranging method according to claim 7, characterized in that the specific operation of step S5 is as follows:
initial parameters are set according to the plurality of point-group target mapping maps to be determined and input into the star-map module for star-map spatial surface selection and polygon-diagonal simple-ratio optimization; using the invariance of the polygon-diagonal simple ratio, a point-trace basic recognition decision feature matrix is set up among the point-group target mapping maps to be determined; the feature matrix is evaluated with a singular value decomposition algorithm to obtain the singular values of the polygon coordinate matrix, which are substituted into formula (7); nonlinear optimization is performed with the L-M algorithm, and the point-group target mapping matrix is calculated from the optimization result; matching-angle compensation is calculated using the point-group target mapping matrix, and coarse matching of multi-station point-group targets is performed using singular-value invariance; when formula (5) is satisfied, the remaining point-group targets are further matched, the points already matched on the main diagonal are used to search for target points matching the point-group targets to be determined, and a searched point satisfying formula (6) is determined to be the main target point; finally, the internal and external parameters obtained from formula (5) are taken as initial values, all matching points obtained from formula (6) are taken as samples, and formula (7) is solved by nonlinear optimization with the L-M algorithm to obtain the optimal parameters, i.e. the matching-angle compensation error measurement value;
[Formulas (5), (6) and (7) are presented as images in the original publication (image references FDA0003736520760000031–33) and are not reproduced here.]
In these formulas, $\min F_1(x)$ denotes that the expression $F_1(x)$ is iterated to a minimum; $P_1$ and $W_1$ denote two points on the main diagonal; $|\mathrm{cross}_i(P)-\mathrm{cross}_i(W)|$ denotes the simple-ratio operation on points $P$ and $W$; $\lambda_1$ and $\lambda_2$ are weighting parameters; $\sigma_{ui}$ and $\sigma_{vi}$ are the corresponding standard deviations; $|dx(P)-dx(W)|$ denotes the length of the main diagonal; $W_0$ is a weight factor; $VC$ is the point coordinate matrix; $N$ is the number of identifiable target points; $X'_i$ and $X_i$ are point coordinate values; and $C$ is the rotation matrix.
9. The cross-space-time true and false target multi-modal associated ultra-long-range passive ranging method according to claim 8, characterized in that the specific operation of step S6 is as follows:
based on the adaptive fuzzy fusion decision criterion and the matching-angle compensation error measurement value, a confidence is set via the feedback link, and false point targets are eliminated continuously until a minimal point-group target plane figure can no longer be formed, thereby determining the true point target and the secondary target point.
10. The cross-space-time true and false target multi-modal associated ultra-long-range passive ranging method according to claim 9, characterized in that the specific operation of step S7 is as follows:
taking the secondary target point as a reference, a two-dimensional operator is used to reduce the error between the true three-dimensional position of the true point target and the system, and the spatial position of the true point target is determined using three-dimensional reconstruction gridding and ray intersection measurement.




Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106595702A (en) * 2016-09-22 2017-04-26 中国人民解放军装备学院 Astronomical-calibration-based spatial registration method for multiple sensors
CN108089148A (en) * 2017-12-14 2018-05-29 电子科技大学 A kind of passive track-corelation direction cross positioning method based on time difference information
CN110866887A (en) * 2019-11-04 2020-03-06 深圳市唯特视科技有限公司 Target situation fusion sensing method and system based on multiple sensors
CN114694011A (en) * 2022-03-25 2022-07-01 中国电子科技南湖研究院 Fog penetrating target detection method and device based on multi-sensor fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Richard Linares et al., "Space Object Mass-Specific Inertia Matrix Estimation from Photometric Data", ResearchGate.
王英健 et al., "基于矩阵分析和D-S证据理论的时空数据融合及目标识别" (Spatio-temporal data fusion and target recognition based on matrix analysis and D-S evidence theory), 《长沙交通学院学报》 (Journal of Changsha Communications University).


