CN111612710B - Geometric imaging pixel number calculation method for target rectangular projection image - Google Patents

Geometric imaging pixel number calculation method for target rectangular projection image

Info

Publication number
CN111612710B
CN111612710B
Authority
CN
China
Prior art keywords
target
imaging
horizontal direction
function
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010405587.XA
Other languages
Chinese (zh)
Other versions
CN111612710A (en)
Inventor
王伟超
程军练
黄海军
司文涛
吴统邦
张浩元
袁光福
甘世奇
兰淋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unit 95859 Of People's Liberation Army Of China
Original Assignee
Unit 95859 Of People's Liberation Army Of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unit 95859 Of People's Liberation Army Of China
Priority to CN202010405587.XA
Publication of CN111612710A
Application granted
Publication of CN111612710B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G06T5/70
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/10 — Segmentation; Edge detection
    • G06T7/13 — Edge detection
    • G06T7/60 — Analysis of geometric attributes
    • G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume

Abstract

The invention relates to a method for calculating the number of geometric imaging pixels of a target rectangular projection image, which comprises the following steps: acquiring a target image; differentiating the image row by row and column by column and computing the autocorrelation functions of the differentiated rows and columns; constructing conjugate peak characteristic curves; extracting the negative peak point offsets; calculating the number of geometric imaging pixels of each target side length; and calculating the number of geometric imaging pixels of the target rectangular projection. The invention establishes an imaging model based on the point spread function, calculates the number of geometric imaging pixels of the target side length in the horizontal and vertical directions from the negative peak point offsets of the conjugate peak characteristic curves, and multiplies the two together to obtain the number of geometric imaging pixels of the target rectangular projection. Compared with the traditional gray-scale comparison method, the edge features are distinguished more accurately, the number of geometric imaging pixels of the rectangular projection image can be calculated accurately from a single target image, the method adapts well to different degrees of image dispersion, and it has the characteristics of strong operability, wide applicability and high extraction precision.

Description

Geometric imaging pixel number calculation method for target rectangular projection image
Technical Field
The invention relates to a method for calculating the number of geometric imaging pixels of a target rectangular projection image. It is a digital image processing method, and more specifically a method for calculating the projection area of a target, or its distance, from an optical image.
Background
Calculating the actual size of a target, or estimating the distance between the target and the sensor, from an optical image has wide application in fields such as industrial optical inspection, infrared radiation characteristic measurement and automatic driving. The key to high-precision estimation of the actual size or distance of a target is obtaining the number of geometric imaging pixels of the target image. However, owing to factors such as the aperture limitation of the optical imaging system, aberration, sensor sampling and atmospheric disturbance, the energy of the imaged target is dispersed over many pixels, the dispersed image energy distribution area is larger than the geometric imaging area, and the edge of the target image is blurred.
Traditional target detection algorithms based on threshold segmentation have difficulty accurately extracting the geometric imaging pixel region from a target image with blurred edges. For example, the 3σ principle extracts the target imaging region by taking the pixels whose gray level exceeds the background gray-level mean plus three times the background gray-level standard deviation; because of the dispersion effect of the imaging system, the region obtained in this way is usually larger than the geometric imaging region of the target. The effect is especially pronounced in infrared optical imaging systems, whose long working wavelengths and large sensor pixel sizes make the dispersion more obvious; there the target imaging region extracted with the 3σ principle is much larger than the ideal geometric imaging region of the target. There is also a class of target size estimation methods based on image sequences. Such methods typically require a model of the relative motion between the target and the sensor, and they estimate the target size or distance by processing a sequence of successive images of the target. However, for dim, small infrared targets the target is usually far from the sensor, the number of geometric imaging pixels is small, and the image formed is essentially a dispersed blur spot; the change of the target's position and velocity relative to the sensor is difficult to recover from such a blurred image, so these algorithms fail.
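For reference, a minimal Python sketch of the 3σ extraction criterion described above might look as follows; the array names `img` and `bg`, the function name and the synthetic test data are illustrative assumptions and not part of the patent.

```python
import numpy as np

def extract_target_region_3sigma(img, bg):
    """Label pixels whose gray level exceeds mean(bg) + 3*std(bg).

    img: 2-D array of gray levels; bg: array sampled from a background-only
    area. Returns a boolean mask of the extracted imaging region.
    """
    threshold = bg.mean() + 3.0 * bg.std()
    return img > threshold

# With a dispersed (blurred) target, the extracted mask over-estimates the
# geometric imaging area, which is the problem the invention addresses.
rng = np.random.default_rng(0)
frame = rng.normal(100.0, 2.0, size=(64, 64))   # synthetic background
frame[28:36, 30:38] += 50.0                     # synthetic target patch
mask = extract_target_region_3sigma(frame, frame[:10, :10])
print(mask.sum(), "pixels extracted")
```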
Disclosure of Invention
In order to overcome the problems in the prior art, the invention provides a method for calculating the number of geometric imaging pixels of a target rectangular projection image. The method utilizes the negative peak point offset of the conjugate peak characteristic curve to calculate the number of the geometric imaging pixels of the target, and improves the calculation precision of the projection area of the target on the imaging plane.
The object of the invention is achieved as follows. A geometric imaging pixel number calculation method for a target rectangular projection image comprises the following steps (an illustrative code sketch of the complete procedure is given after the step listing):
step 1, obtaining a target image: acquiring a target image, and applying filtering preprocessing to the target image to suppress high-frequency noise components;
step 2, constructing one-dimensional differential autocorrelation functions: differentiating the image row by row and column by column, and calculating the autocorrelation function of each differentiated row and column;
the one-dimensional differential autocorrelation function in the x direction, obtained after differentiation from the one-dimensional imaging model, is:
C_i(t) = g′(x) ⊗ g′(x)
[formula image: full expansion of C_i(t); not reproduced]
The expansion of the above formula contains 32 terms in total, among which:
[formula image: selected terms of the expansion; not reproduced]
where: C_i(t) is the differential autocorrelation function of a row of the image; g′(x) is the derivative, along the horizontal direction, of a row of the target image actually output by the imaging system; h is the target surface gray-level distribution function; p is the system point spread function; δ is the impulse function; d is the number of imaging pixels of the horizontal side length of the rectangular region during geometric imaging; t is the offset; i is the row index; x is the horizontal pixel index; * denotes convolution; the superscript * denotes the complex conjugate; ⊗ denotes the autocorrelation operation; K(x), HP1(x), HP2(x) and N(x) are defined as follows:
[formula image: definition of K(x); not reproduced]
[formula image: definition of HP1(x); not reproduced]
[formula image: definition of HP2(x); not reproduced]
N(x) = n′(x)
where BP′(x) = (BP(x) × p(x))′; BP(x) is the background gray-level distribution in the horizontal direction; n′(x) is the noise differential term in the horizontal direction;
the one-dimensional differential autocorrelation function in the y direction, obtained after differentiation from the one-dimensional imaging model, is:
D_j(t) = g′(y) ⊗ g′(y)
[formula image: full expansion of D_j(t); not reproduced]
The expansion of the above formula contains 32 terms in total, among which:
[formula image: selected terms of the expansion; not reproduced]
where: D_j(t) is the differential autocorrelation function of a column of the image; g′(y) is the derivative, along the vertical direction, of a column of the target image actually output by the imaging system; l is the number of imaging pixels of the vertical side length of the rectangular region during geometric imaging; j is the column index; y is the vertical pixel index; K(y), HP1(y), HP2(y) and N(y) are defined as follows:
[formula image: definition of K(y); not reproduced]
[formula image: definition of HP1(y); not reproduced]
[formula image: definition of HP2(y); not reproduced]
N(y) = n′(y)
where BP′(y) = (BP(y) × p(y))′; BP(y) is the background gray-level distribution in the vertical direction; n′(y) is the noise differential term in the vertical direction;
the one-dimensional imaging models are as follows:
one-dimensional horizontal imaging model:
g(x) = [B(x) + rect_d(x)·h(x)] * p(x) + n(x)
where: g(x) is the target image actually output by the imaging system in the horizontal direction, p(x) is the horizontal point spread function of the imaging system, and n(x) is the noise in the horizontal direction; B(x) is the background gray-level distribution in the horizontal direction, h(x) is the gray-level distribution function of the target surface in the horizontal direction, and rect_d(x) is the rectangular function corresponding to the rectangular region occupied by the target in the horizontal direction during geometric imaging;
one-dimensional vertical imaging model:
g(y) = [B(y) + rect_l(y)·h(y)] * p(y) + n(y)
where: g(y) is the target image actually output by the imaging system in the vertical direction, p(y) is the vertical point spread function of the imaging system, and n(y) is the noise in the vertical direction; B(y) is the background gray-level distribution in the vertical direction, h(y) is the gray-level distribution function of the target surface in the vertical direction, and rect_l(y) is the rectangular function corresponding to the rectangular region occupied by the target in the vertical direction during geometric imaging.
Step 3, constructing a conjugate peak characteristic curve:
accumulating the one-dimensional differential autocorrelation functions of all rows to obtain the conjugate peak characteristic curve in the horizontal direction:
H(t) = Σ_{i=1}^{m} C_i(t)
where m is the number of rows of the target image;
accumulating the one-dimensional differential autocorrelation functions of all columns to obtain the conjugate peak characteristic curve in the vertical direction:
V(t) = Σ_{j=1}^{n} D_j(t)
where n is the number of columns of the target image and D_j(t) denotes the differential autocorrelation function of a column of the image;
step 4, extracting the negative peak point offsets:
calculating the offset H1 corresponding to the minimum value of the H(t) curve over the offset interval 1 to n−1;
calculating the offset V1 corresponding to the minimum value of the V(t) curve over the offset interval 1 to m−1;
calculating the offset H2 corresponding to the minimum value of the H(t) curve over the offset interval n+3 to 2n+1;
calculating the offset V2 corresponding to the minimum value of the V(t) curve over the offset interval m+3 to 2m+1;
step 5, calculating the number of the target side length geometric imaging pixels:
the theoretical imaging pixel number HL of the side length of the target projection in the horizontal direction is as follows:
HL=(H2-H1)/2
the theoretical imaging pixel number VL of the side length in the vertical direction of the target projection is as follows:
VL=(V2-V1)/2;
step 6, calculating the number S of the geometric imaging pixels of the target rectangular projection as follows:
S = HL × VL = (H2 − H1)(V2 − V1)/4
Further, the filtering preprocessing method is a median filtering method.
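As a hedged illustration only (not the patented implementation), the six steps above could be organized as the following Python sketch. The function names, the 3×3 median window, and the use of NumPy's 0-based indexing in place of the 1-based offset intervals of the text are assumptions introduced here.

```python
import numpy as np
from scipy.ndimage import median_filter

def diff_autocorr(line):
    """Differential autocorrelation of one row or column."""
    d = np.diff(line.astype(float))          # first-order difference g'(x)
    return np.correlate(d, d, mode="full")   # autocorrelation of the difference

def conjugate_peak_curve(img, axis):
    """Accumulate the differential autocorrelation over all rows (axis=1) or columns (axis=0)."""
    lines = img if axis == 1 else img.T
    return sum(diff_autocorr(line) for line in lines)

def side_length(curve):
    """Half the distance between the two negative peaks flanking the zero-offset peak."""
    c = (len(curve) - 1) // 2                 # index of the zero-offset peak
    left = int(np.argmin(curve[:c - 1]))      # negative peak left of the central lobe
    right = int(np.argmin(curve[c + 2:])) + c + 2
    return (right - left) / 2.0

def rect_projection_pixels(img):
    img = median_filter(img, size=3)          # step 1: suppress high-frequency noise
    H = conjugate_peak_curve(img, axis=1)     # steps 2-3: horizontal curve H(t)
    V = conjugate_peak_curve(img, axis=0)     # steps 2-3: vertical curve V(t)
    HL = side_length(H)                       # steps 4-5: horizontal side length in pixels
    VL = side_length(V)                       # steps 4-5: vertical side length in pixels
    return HL * VL                            # step 6: projected pixel count S
```

For a synthetic image containing a d × l rectangular target on a flat background, `rect_projection_pixels` returns a value close to d·l.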
The advantages and beneficial effects of the invention are as follows. For a target whose projection on the image plane is rectangular, the invention establishes an imaging model based on the point spread function and simplifies it into one-dimensional imaging models in the horizontal and vertical directions. Based on the one-dimensional imaging models, differential autocorrelation operations in the horizontal and vertical directions are performed on the target image to obtain conjugate peak characteristic curves in the two directions. The number of geometric imaging pixels of the target side length in the horizontal and vertical directions is calculated from the negative peak point offsets of the conjugate peak characteristic curves, and the two values are multiplied to obtain the number of geometric imaging pixels of the target rectangular projection. The conjugate peak characteristic curves are constructed by accumulating the differential autocorrelation functions of all rows and all columns, and a peak point offset extraction method is designed according to the characteristics of the curves. Compared with the traditional gray-scale comparison method, the identified edge feature parameters are more accurate, the number of geometric imaging pixels of the rectangular projection image can be calculated accurately from a single target image, the method adapts well to different degrees of image dispersion, and it has the characteristics of strong operability, wide applicability and high extraction precision.
Drawings
FIG. 1 is a flow chart of a method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the offsets used in the method according to the embodiment of the present invention.
Detailed Description
The first embodiment is as follows:
the embodiment is a geometric imaging pixel number calculating method of a target rectangular projection image. The method is based on a point spread function to establish a target imaging model with a rectangular projection shape on an image plane, and the imaging model is respectively simplified into one-dimensional imaging models in the horizontal direction and the vertical direction; respectively executing differential autocorrelation operation in the horizontal direction and the vertical direction on the target image based on a one-dimensional imaging model to obtain conjugate peak characteristic curves in the horizontal direction and the vertical direction; respectively calculating the number of geometric imaging pixels of the side length of the target in the horizontal direction and the vertical direction according to the offset of the negative peak point of the conjugate peak characteristic curve to obtain the number of the geometric imaging pixels of the projection rectangle of the target, wherein the derivation process of the principle and the formula is as follows:
(1) Establishing imaging model of rectangular projection target
For an object whose projection on the image plane is rectangular, under ideal imaging the gray-level distribution of the image formed on the imaging focal plane can be expressed as the background gray level plus the product of a rectangular function and the gray-level distribution function of the object surface:
f(x,y) = B(x,y) + rect_{d,l}(x,y)·h(x,y) (1)
where f(x,y) is the two-dimensional target geometric imaging image function, B(x,y) is the two-dimensional background gray-level distribution function, h(x,y) is the two-dimensional target surface gray-level distribution function, rect_{d,l}(x,y) is the two-dimensional rectangular function that equals 1 inside the rectangular geometric-imaging region of the target and 0 elsewhere, d and l are the numbers of imaging pixels of the horizontal and vertical side lengths of the rectangular region during geometric imaging, respectively, and x and y denote the pixel indices in the horizontal and vertical directions, respectively.
In the actual imaging process, the target energy corresponding to a single pixel in geometric imaging is dispersed to a plurality of pixels under the influence of factors such as aperture limitation of an optical system, sensor sampling, aberration, atmospheric disturbance and the like, so that the edge of a target image is blurred, and details are lost. This process can be expressed as a convolution of the imaging system point spread function with the geometric imaging image:
g(x,y)=f(x,y)*p(x,y)+n(x,y) (2)
where g(x,y) is the two-dimensional image function actually output by the imaging system for the target, p(x,y) is the two-dimensional point spread function of the system, n(x,y) is a two-dimensional noise function, and * denotes convolution.
Combining equations (1) and (2) gives the imaging model of the rectangular projection target:
g(x,y) = [B(x,y) + rect_{d,l}(x,y)·h(x,y)] * p(x,y) + n(x,y) (3)
the imaging model is simplified from two dimensions to one dimension. Taking the horizontal direction as an example, the one-dimensional imaging model in the horizontal direction is:
g(x) = [B(x) + rect_d(x)·h(x)] * p(x) + n(x) (4)
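A small numerical sketch of this model is given below (a hedged illustration: the Gaussian point spread function, the image size and the gray levels are assumptions made only for demonstration). It builds the ideal image of equation (1), applies equations (2)-(3) by convolving with a point spread function and adding noise, and then takes one row as an instance of the one-dimensional model of equation (4).

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)
m, n, d, l = 64, 64, 9, 6                      # rows, columns, horizontal and vertical side lengths

# Equation (1): background plus rectangular function times target gray distribution
B = np.full((m, n), 80.0)                      # B(x, y): flat background (assumption)
rect = np.zeros((m, n))
rect[30:30 + l, 20:20 + d] = 1.0               # rectangular geometric-imaging region, l rows by d columns
h = np.full((m, n), 60.0)                      # h(x, y): uniform target gray level (assumption)
f = B + rect * h

# Equations (2)-(3): convolve with a point spread function and add noise
ax = np.arange(-3, 4)
psf = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / 2.0)
psf /= psf.sum()                               # normalized Gaussian PSF (assumption)
g = convolve2d(f, psf, mode="same", boundary="symm") + rng.normal(0.0, 1.0, (m, n))

g_row = g[33, :]                               # one row: an instance of the 1-D horizontal model g(x)
```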
(2) Constructing a conjugate peak characteristic curve
1) Image pre-processing
Differential operations amplify image noise and affect subsequent processing, so the target image must be filtered to suppress high-frequency noise components before the differential operation is performed. Common linear low-pass filtering algorithms suppress high-frequency noise but further blur the image edges and thereby degrade the subsequent processing results; a filtering method that preserves the edge information of the target image well, such as a median filtering algorithm, should therefore be chosen.
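For instance, a minimal preprocessing step using SciPy's median filter could look as follows; the 3×3 window size is an assumption, and `g` stands in for the raw target image (for example the synthetic image from the sketch above).

```python
import numpy as np
from scipy.ndimage import median_filter

g = np.random.default_rng(2).normal(100.0, 2.0, size=(64, 64))   # stand-in for the raw target image
g_filtered = median_filter(g, size=3)   # 3x3 median window: suppresses impulsive noise, keeps edges
```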
2) Establishing a differential autocorrelation function of a one-dimensional image
In equation (4), let BP(x) = B(x) * p(x), and expand equation (4) by the distributive property of convolution to obtain:
g(x) = BP(x) + [rect_d(x)·h(x)] * p(x) + n(x) (5)
Performing first-order differentiation on the above formula gives the differentiated one-dimensional image:
g′(x) = BP′(x) + [rect_d(x)·h(x)]′ * p(x) + n′(x) (6)
Let:
[formula image (7): definitions of K(x), HP1(x), HP2(x) and N(x); not reproduced]
The autocorrelation function of the differentiated one-dimensional image is then calculated:
C_i(t) = g′(x) ⊗ g′(x) (8)
wherein:
where ⊗ denotes the autocorrelation operation, t is the offset, and i is the row index.
The above formula contains a total of 32 terms when expanded, among which:
[formula image (9): the cross term analysed below; not reproduced]
where the superscript * denotes the complex conjugate.
According to the properties of the delta function, this term has a negative peak at offset t = d:
[formula image (10): the negative peak value of equation (9) at t = d; not reproduced]
Similarly:
[formula image (11): the conjugate cross term; not reproduced]
According to the properties of the delta function, this term has a negative peak at t = −d, and its value equals that of equation (10).
The two terms of equations (9) and (11) thus form a pair of conjugate peaks; the peak values are negative and are symmetrically distributed on either side of the zero-offset peak, and the offset of each negative peak point from the zero-offset peak equals the number of geometric imaging pixels of the horizontal side length of the target projection rectangle.
Similarly, a one-dimensional imaging model is constructed in the vertical direction and the autocorrelation function of the image after first-order differentiation is calculated. This function also has a pair of conjugate peaks whose values are negative and symmetrically distributed on either side of the zero-offset peak, and the offset of each negative peak point from the zero-offset peak is the number of geometric imaging pixels of the vertical side length of the target projection rectangle.
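The conjugate-peak property can be checked numerically on a single row. The following sketch assumes a flat background, a uniform target of horizontal side length d = 9 pixels and a crude 3-pixel box blur standing in for the point spread function; all of these choices are illustrative.

```python
import numpy as np

d = 9
row = np.full(64, 80.0)
row[20:20 + d] += 60.0                                   # rectangular target profile of width d
row = np.convolve(row, np.ones(3) / 3.0, mode="same")    # crude 1-D point spread

dg = np.diff(row)                                        # g'(x): first-order difference
C = np.correlate(dg, dg, mode="full")                    # differential autocorrelation C_i(t)
center = len(C) // 2                                     # zero-offset peak position
offsets = sorted(np.argsort(C)[:2] - center)             # offsets of the two most negative values
print(offsets)                                           # expected close to [-d, +d]
```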
3) Constructing a conjugate peak characteristic curve
Continuing with the horizontal one-dimensional imaging model as an example, C_i(t) in equation (8) denotes the differential autocorrelation function of one row of the image. To further suppress noise and improve estimation accuracy, the differential autocorrelation operation is performed on every row of the original image and the differential autocorrelation functions of all rows are accumulated to obtain the conjugate peak characteristic curve in the horizontal direction:
H(t) = Σ_{i=1}^{m} C_i(t)
where m is the number of rows in the target image.
The conjugate peak characteristic curve in the vertical direction is obtained in the same way:
V(t) = Σ_{j=1}^{n} D_j(t)
where n is the number of columns in the target image and D_j(t) denotes the differential autocorrelation function of one column of the image.
(3) Calculating the number of target projection rectangular geometric imaging pixels
From the definition of the autocorrelation function, the data length of H(t) is 2n+1 and the offset corresponding to the zero-offset peak is n+1; the data length of V(t) is 2m+1 and the offset corresponding to the zero-offset peak is m+1. To reduce the influence of noise, the number of geometric imaging pixels of the rectangular target is calculated from the conjugate peak characteristic curves as follows:
1) Calculate the offset H1 corresponding to the minimum value of the H(t) curve over the offset interval 1 to n−1, and the offset V1 corresponding to the minimum value of the V(t) curve over the offset interval 1 to m−1;
2) Calculate the offset H2 corresponding to the minimum value of the H(t) curve over the offset interval n+3 to 2n+1, and the offset V2 corresponding to the minimum value of the V(t) curve over the offset interval m+3 to 2m+1;
3) The theoretical number of imaging pixels of the horizontal side length of the target projection is HL = (H2 − H1)/2, and the theoretical number of imaging pixels of the vertical side length is VL = (V2 − V1)/2. The number of geometric imaging pixels of the target rectangular projection is therefore:
S = HL × VL = (H2 − H1)(V2 − V1)/4
Thus the number of geometric imaging pixels of the target projection is obtained.
Example two:
the present embodiment is an improvement of the first embodiment, and is an improvement of the method for filter preprocessing in the first embodiment, where the method for filter preprocessing described in the present embodiment is a median filter method.
The median filtering algorithm described in this embodiment preprocesses an image. The median filtering algorithm is a nonlinear filtering algorithm, can better retain the edge information of the target image, and has better filtering effect on common salt and pepper noise in an infrared system.
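To illustrate this point, the following sketch (with an artificial 2% salt and 2% pepper corruption rate, an assumption) compares a 3×3 median filter with a 3×3 linear mean filter on a synthetic image with sharp target edges; the median filter removes the impulses while keeping the edges, whereas the linear filter smears both.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

rng = np.random.default_rng(4)
img = np.full((64, 64), 100.0)
img[28:34, 24:33] += 60.0                          # target patch with sharp edges
noisy = img.copy()
noisy[rng.random(img.shape) < 0.02] = 255.0        # salt impulses
noisy[rng.random(img.shape) < 0.02] = 0.0          # pepper impulses

med = median_filter(noisy, size=3)                 # nonlinear: removes impulses, preserves edges
avg = uniform_filter(noisy, size=3)                # linear: spreads impulses and blurs edges
print(np.abs(med - img).mean(), np.abs(avg - img).mean())   # median error is clearly smaller
```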
Finally, it should be noted that the above description is intended only to illustrate the technical solution of the present invention and not to limit it. Although the present invention has been described in detail with reference to a preferred arrangement, those skilled in the art should understand that the technical solution of the present invention (for example the format and quality of the images, the application of the various formulas, the order of the steps, and so on) may be modified or replaced with equivalents without departing from the spirit and scope of the technical solution of the present invention.

Claims (2)

1. A geometric imaging pixel number calculation method for a target rectangular projection image is characterized by comprising the following steps:
step 1, obtaining a target image: acquiring a target image, and applying filtering preprocessing to the target image to suppress high-frequency noise components;
step 2, constructing one-dimensional differential autocorrelation functions: differentiating the image row by row and column by column, and calculating the autocorrelation function of each differentiated row and column;
the one-dimensional differential autocorrelation function in the x direction, obtained after differentiation from the one-dimensional imaging model, is:
C_i(t) = g′(x) ⊗ g′(x)
[formula image: full expansion of C_i(t); not reproduced]
The expansion of the above formula contains 32 terms in total, among which:
[formula image: selected terms of the expansion; not reproduced]
where: C_i(t) is the differential autocorrelation function of a row of the image; g′(x) is the derivative, along the horizontal direction, of a row of the target image actually output by the imaging system; h is the target surface gray-level distribution function; p is the system point spread function; δ is the impulse function; d is the number of imaging pixels of the horizontal side length of the rectangular region during geometric imaging; t is the offset; i is the row index; x is the horizontal pixel index; * denotes convolution; the superscript * denotes the complex conjugate; ⊗ denotes the autocorrelation operation; K(x), HP1(x), HP2(x) and N(x) are defined as follows:
[formula image: definition of K(x); not reproduced]
[formula image: definition of HP1(x); not reproduced]
[formula image: definition of HP2(x); not reproduced]
N(x) = n′(x)
where BP′(x) = (BP(x) × p(x))′; BP(x) is the background gray-level distribution in the horizontal direction; n′(x) is the noise differential term in the horizontal direction;
the one-dimensional differential autocorrelation function in the y direction, obtained after differentiation from the one-dimensional imaging model, is:
D_j(t) = g′(y) ⊗ g′(y)
[formula image: full expansion of D_j(t); not reproduced]
The expansion of the above formula contains 32 terms in total, among which:
[formula image: selected terms of the expansion; not reproduced]
where: D_j(t) is the differential autocorrelation function of a column of the image; g′(y) is the derivative, along the vertical direction, of a column of the target image actually output by the imaging system; l is the number of imaging pixels of the vertical side length of the rectangular region during geometric imaging; j is the column index; y is the vertical pixel index; K(y), HP1(y), HP2(y) and N(y) are defined as follows:
[formula image: definition of K(y); not reproduced]
[formula image: definition of HP1(y); not reproduced]
[formula image: definition of HP2(y); not reproduced]
N(y) = n′(y)
where BP′(y) = (BP(y) × p(y))′; BP(y) is the background gray-level distribution in the vertical direction; n′(y) is the noise differential term in the vertical direction; the one-dimensional imaging models are as follows:
one-dimensional horizontal imaging model:
g(x) = [B(x) + rect_d(x)·h(x)] * p(x) + n(x)
where: g(x) is the target image actually output by the imaging system in the horizontal direction, p(x) is the horizontal point spread function of the imaging system, and n(x) is the noise in the horizontal direction; B(x) is the background gray-level distribution in the horizontal direction, h(x) is the gray-level distribution function of the target surface in the horizontal direction, and rect_d(x) is the rectangular function corresponding to the rectangular region occupied by the target in the horizontal direction during geometric imaging;
one-dimensional vertical imaging model:
g(y) = [B(y) + rect_l(y)·h(y)] * p(y) + n(y)
where: g(y) is the target image actually output by the imaging system in the vertical direction, p(y) is the vertical point spread function of the imaging system, and n(y) is the noise in the vertical direction; B(y) is the background gray-level distribution in the vertical direction, h(y) is the gray-level distribution function of the target surface in the vertical direction, and rect_l(y) is the rectangular function corresponding to the rectangular region occupied by the target in the vertical direction during geometric imaging;
step 3, constructing a conjugate peak characteristic curve:
accumulating the one-dimensional differential autocorrelation functions of all rows to obtain the conjugate peak characteristic curve in the horizontal direction:
H(t) = Σ_{i=1}^{m} C_i(t)
where m is the number of rows of the target image;
accumulating the one-dimensional differential autocorrelation functions of all columns to obtain the conjugate peak characteristic curve in the vertical direction:
V(t) = Σ_{j=1}^{n} D_j(t)
where n is the number of columns of the target image and D_j(t) denotes the differential autocorrelation function of a column of the image;
step 4, extracting the negative peak point offsets:
calculating the offset H1 corresponding to the minimum value of the H(t) curve over the offset interval 1 to n−1;
calculating the offset V1 corresponding to the minimum value of the V(t) curve over the offset interval 1 to m−1;
calculating the offset H2 corresponding to the minimum value of the H(t) curve over the offset interval n+3 to 2n+1;
calculating the offset V2 corresponding to the minimum value of the V(t) curve over the offset interval m+3 to 2m+1;
step 5, calculating the number of the target side length geometric imaging pixels:
the theoretical imaging pixel number HL of the side length of the target projection in the horizontal direction is as follows:
HL=(H2-H1)/2
the theoretical imaging pixel number VL of the side length in the vertical direction of the projection of the target is as follows:
VL=(V2-V1)/2;
step 6, calculating the number S of the geometric imaging pixels of the target rectangular projection as follows:
S = HL × VL = (H2 − H1)(V2 − V1)/4.
2. the method for calculating the number of pixels for geometric imaging of a target rectangular projection image according to claim 1, wherein said filtering preprocessing is a median filtering method.
CN202010405587.XA 2020-05-14 2020-05-14 Geometric imaging pixel number calculation method for target rectangular projection image Active CN111612710B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010405587.XA CN111612710B (en) 2020-05-14 2020-05-14 Geometric imaging pixel number calculation method for target rectangular projection image

Publications (2)

Publication Number Publication Date
CN111612710A CN111612710A (en) 2020-09-01
CN111612710B (en) 2022-10-04

Family

ID=72204501

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010405587.XA Active CN111612710B (en) 2020-05-14 2020-05-14 Geometric imaging pixel number calculation method for target rectangular projection image

Country Status (1)

Country Link
CN (1) CN111612710B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004092826A1 (en) * 2003-04-18 2004-10-28 Appro Technology Inc. Method and system for obtaining optical parameters of camera
CN110765631A (en) * 2019-10-31 2020-02-07 中国人民解放军95859部队 Effective imaging pixel-based small target judgment method for infrared radiation characteristic measurement

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8598502B2 (en) * 2011-03-28 2013-12-03 Raytheon Company Motionless focus evaluation test station for electro-optic (EO) sensors
CN102413283B (en) * 2011-10-25 2013-08-14 广州飒特红外股份有限公司 Infrared chart digital signal processing system and method

Also Published As

Publication number Publication date
CN111612710A (en) 2020-09-01

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant