A gun sight with automatic deviation correction and an automatic correction method therefor
Technical field
The invention belongs to the technical field of sighting devices, and specifically relates to a gun sight with automatic deviation correction and an automatic correction method therefor.
Background technique
Sighting devices in the prior art fall into mechanical sights and optical sights. A mechanical sight achieves aiming mechanically through metal components such as the rear sight, front sight and sighting notch; an optical sight forms an image through optical lenses and achieves aiming by making the target image and the aiming reticle coincide on the same focal plane. Existing sighting devices have the following disadvantages and inconveniences: (1) after a sight is installed and used for aimed fire, accurate shooting is achievable only by combining an accurate aiming position with long-term shooting experience; a beginner without rich shooting experience will aim imprecisely, which affects shooting accuracy; (2) during shooting, the reticle and the point of impact must be adjusted repeatedly so that the point of impact coincides with the reticle center, and this calibration requires turning adjustment knobs many times or performing other mechanical adjustments; (3) calibrating the shooting deviation requires a large number of adjustment shots, and precision can be approached only under the adjustment of a professional, experienced shooter; for an ordinary shooter, or one lacking shooting experience, deviation adjustment is troublesome and consumes considerable time, labor and material; moreover, once the sighting system has been adjusted, whenever the gun sight is disassembled or replaced the above calibration procedure must be executed again, which brings great inconvenience to the user.
Summary of the invention
In view of the above problems, the present invention starts from the sighting system of the gun itself and, in combination with research in imaging science and image processing, provides a photoelectric gun sight system with an automatic correction method requiring no manual intervention, and the automatic correction method itself.
The present invention is achieved by the following technical solutions:
An automatic correction method, in which the optical image obtained by the gun sight is converted into an electronic image; a target sheet region is extracted from the electronic image; the target sheet region and an electronic reference target sheet are subjected to pixel-level subtraction to detect points of impact; the central point of each point of impact is calculated, and the deviation of each point-of-impact central point from the central point of the target sheet region is obtained; the deviation is input into the gun sight to correct subsequent shooting automatically.
Further, after the target sheet region is extracted, perspective correction is performed on it so that the outer contour of the target sheet region is corrected to a circle, and point-of-impact detection is carried out on the perspective-corrected target sheet region.
Further, extracting the target sheet region from the electronic image specifically comprises: applying a large-scale mean filter to the electronic image to eliminate the grid interference on the target sheet; dividing the electronic image into background and foreground according to its grayscale characteristics using the adaptive Otsu threshold segmentation method; and, on the image divided into foreground and background, determining the minimal contour using the Freeman chain-code vector tracking method and geometric features to obtain the target sheet region.
Further, performing pixel-level subtraction between the target sheet region and the electronic reference target sheet to detect points of impact specifically comprises: subtracting the target sheet region and the electronic reference target sheet pixel by pixel to obtain a pixel difference image of the two;
setting a pixel difference threshold for the two frames: when the pixel difference exceeds the threshold, the result is set to 255; when the pixel difference is below the threshold, the result is set to 0;
performing contour tracking on the pixel difference image to obtain the point-of-impact contours, and calculating the contour centers to obtain the point-of-impact central points.
Further, the perspective correction specifically comprises: obtaining the edges of the target sheet region with the Canny operator; fitting a maximum elliptical contour to the edges using the Hough transform to obtain the maximum ellipse equation; fitting the crosshair lines to the edges using the Hough transform to obtain their intersections with the topmost, bottommost, rightmost and leftmost points of the maximum elliptical contour; combining the topmost, bottommost, rightmost and leftmost points of the maximum elliptical contour with the four points at the same positions in the perspective transform template to calculate the perspective transformation matrix; and applying the perspective transform to the target sheet region using the perspective transformation matrix.
Further, the electronic reference target sheet is the electronic image of a blank target sheet, or the target sheet region extracted during a previous analysis.
Further, the deviation includes a longitudinal deviation and a lateral deviation.
A gun sight with automatic deviation correction, comprising a visual field acquiring unit, a display unit, a photoelectric conversion circuit board and a CPU core board;
the visual field acquiring unit acquires an optical image of the target sheet, and the photoelectric conversion circuit board converts the optical image into an electronic image;
the CPU core board includes an automatic deviation correction module; the automatic deviation correction module extracts the target sheet region from the electronic image, performs pixel-level subtraction between the target sheet region and the electronic reference target sheet to detect points of impact, calculates the central point of each point of impact and the deviation of each point-of-impact central point from the central point of the target sheet region, and inputs the deviation into the gun sight to correct subsequent shooting automatically; preferably, the automatic deviation correction module uses the above automatic correction method;
the display unit displays the electronic image and the deviation.
Further, the CPU core board is connected to a memory card through an interface board, and the memory card stores the extracted target sheet region and the shooting accuracy.
Further, the CPU core board also includes a video stabilization processing unit, which eliminates shake from the electronic image before it is shown on the display unit.
Advantageous effects of the invention: the present invention provides an automatic correction method that can be applied in a photoelectric sighting system. The method calculates the shooting deviation from historical firing data and uses the historical shooting deviation to correct subsequent shooting automatically, without excessive experience-based manual intervention, thereby realizing rapid calibration and significantly improving shooting accuracy.
Detailed description of the drawings
Fig. 1 is a flow diagram of the analysis method of the present invention;
Fig. 2 shows the 8-connected chain code in embodiment 1 of the present invention;
Fig. 3 is the dot chart in embodiment 1 of the present invention;
Fig. 4 is a flow diagram of the target sheet region extraction of the present invention;
Fig. 5 is a schematic diagram of non-maximum suppression in embodiment 2 of the present invention;
Fig. 6 is a schematic diagram of a transformation origin point under the rectangular coordinate system in embodiment 2 of the present invention;
Fig. 7 is a schematic diagram of any four straight lines through the origin point under the rectangular coordinate system in embodiment 2 of the present invention;
Fig. 8 is a schematic diagram of the representation, under the polar coordinate system, of any four straight lines through the origin point of the rectangular coordinate system in embodiment 2 of the present invention;
Fig. 9 is a schematic diagram of determining the intersections of crosshair lines L1 and L2 with the ellipse in embodiment 2 of the present invention;
Fig. 10 is a schematic diagram of the perspective transform in embodiment 2 of the present invention;
Fig. 11 is a flow diagram of the target sheet region correction of the present invention;
Fig. 12 is a flow diagram of the point-of-impact detection method of the present invention;
Fig. 13 is a perspective view of the gun sight in embodiment 1 of the present invention;
Fig. 14 is a left view of the gun sight in embodiment 1 of the present invention;
Fig. 15 is a right view of the gun sight in embodiment 1 of the present invention;
Fig. 16 is a flow diagram of the video anti-shake method in embodiment 4 of the present invention.
In the figures: 1. visual field acquiring unit; 2. display unit; 3. battery compartment; 4. rotary encoder; 5. focusing knob; 6. external Picatinny rail; 7. key control panel; 8. Picatinny mount; 9. photoelectric conversion plate; 10. aiming circuit processing unit; 11. display conversion plate; 81. adjusting nut one; 82. adjusting nut two; 101. CPU core board; 102. interface board.
Specific embodiments
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is explained in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the present invention and are not intended to limit it.
On the contrary, the present invention covers any substitution, modification, equivalent method and scheme made within the spirit and scope of the present invention as defined by the claims. Further, in order to give the public a better understanding of the present invention, some specific details are described in depth below; a person skilled in the art can also fully understand the present invention from a description without these details.
Embodiment 1
The present invention provides a gun sight with automatic deviation correction. The gun sight has an automatic deviation correction module, which uses the automatic correction method to correct subsequent shooting automatically according to the historical shooting accuracy.
The sighting system can be conveniently mounted on all kinds of firearms. The photoelectric sighting system includes a housing, which is generally a detachable structure; the interior of the housing is an accommodation space containing a visual field acquiring unit, a video processing unit, a display unit, a power supply and an aiming circuit unit.
The structure of the gun sight is shown in Figures 13-15.
The visual field acquiring unit 1 includes an objective lens assembly or other optical observation device, which is mounted at the front end of the visual field acquiring unit 1 to obtain field-of-view information.
The photoelectric sighting system as a whole is a digital device that can communicate with a smartphone, intelligent terminal, sighting device or circuit: the video information collected by the visual field acquiring unit 1 is sent to the smartphone, intelligent terminal, sighting device or circuit, and is displayed by devices such as the smartphone or intelligent terminal.
The visual field acquiring unit 1 contains a photoelectric conversion circuit that includes the photoelectric conversion plate 9, which converts the optical signal of the field of view into an electric signal. While converting the optical signal, the photoelectric conversion plate 9 also performs automatic exposure, automatic white balance, noise reduction and sharpening on the signal, improving signal quality and providing high-quality data for imaging.
The aiming circuit processing unit 10, which connects the photoelectric conversion plate 9 and the display conversion plate 11, includes a CPU core board 101 and an interface board 102. The interface board 102 is connected with the CPU core board 101 through a serial port. The CPU core board 101 is placed between the interface board 102 and the photoelectric conversion plate 9; the three boards are parallel, with their faces perpendicular to the visual field acquiring unit 1. The photoelectric conversion plate 9 transmits the converted video signal to the CPU core board 101 through parallel data acquisition for further processing, and the interface board 102 communicates with the CPU core board 101 through the serial port, transmitting peripheral operation information such as battery level, attitude information, time, button operations and knob operations to the CPU core board 101 for further processing.
In an embodiment of the present invention, the CPU core board 101 can be connected to a memory card through the interface board 102. Taking the visual field acquiring unit 1 as the observation direction, a memory card slot is provided at the left side of the CPU core board 101, and the memory card is plugged into this slot. Information can be stored on the memory card, and the memory card can also be used to upgrade the software program built into the system.
Taking the visual field acquiring unit 1 as the observation direction, a USB interface is also provided beside the memory card slot on the left side of the CPU core board 101; through this USB interface, the system can be powered by an external power supply, and information from the CPU core board 101 can be output.
The photoelectric sighting system further includes multiple sensors, specifically several or all of an acceleration sensor, wind speed and direction sensor, geomagnetic sensor, temperature sensor, barometric pressure sensor and humidity sensor.
A battery compartment 3 is also provided in the housing, and a battery assembly is installed in the battery compartment 3. An elastic contact strip is provided in the battery compartment 3 for fastening the battery assembly. The battery compartment 3 is arranged in the middle of the housing, and the battery assembly can be replaced by opening the battery compartment cover on the side of the housing.
The bottom side of the battery compartment 3 is provided with welded circuit contacts connected with the elastic strip inside the compartment; through wires with connecting terminals, these contacts connect to the interface board 102, which supplies power to the interface board 102, the CPU core board 101, the photoelectric conversion plate 9, the display conversion plate 11 and the display unit 2.
The display unit 2 is a display screen connected with the interface board 102 through the display conversion plate 11 so as to communicate with the CPU core board 101; the CPU core board transmits display data to the display unit 2.
The crosshair reticle displayed on the display screen is superimposed on the video information acquired by the visual field acquiring unit, and aimed fire is performed through the crosshair reticle. At the same time, the display screen also shows auxiliary shooting information transmitted by the various sensors mentioned above, together with working state indications;
part of the auxiliary shooting information is applied to ballistic trajectory calculation, and part is used to display alerts to the user.
The top of the housing is provided with external keys, which are connected to the interface board 102 through the key control panel 7 inside the housing; by pressing the external keys, functions such as switching the device on and off, taking photos and recording video can be realized.
Taking the visual field acquiring unit 1 as the observation direction, the right side of the housing near the display unit 2 is provided with a rotary encoder 4 with a push-button function. The rotary encoder 4 is connected to an encoder circuit board 41 inside the housing, and the encoder circuit board 41 is connected with the interface board through a flat cable with connecting terminals to transmit the operation data. The rotary encoder controls function switching, adjustment of distance and magnification data, setting of information, entry of deviation data, and the like.
Taking the visual field acquiring unit 1 as the observation direction, the right side of the housing near the visual field acquiring unit 1 is provided with a focusing knob 5. The focusing knob 5 adjusts the focus of the visual field acquiring unit 1 through a spring mechanism, achieving clear observation of objects at different distances.
The bottom of the housing is provided with a Picatinny mount 8 for fixing the sight to a firearm; the Picatinny mount includes adjusting nuts 81 and 82, located on the left or right side of the mount.
The top of the housing above the visual field acquiring unit 1 is provided with an external Picatinny rail 6, which is aligned on the same optical axis as the visual field acquiring unit 1 and fixed by screws. The external Picatinny rail 6 uses a standard-size design and can mount accessories with standard Picatinny connectors, such as a laser rangefinder, fill light or laser pointer.
This embodiment also provides an automatic correction method, comprising the following steps:
(1) Photoelectric conversion: the optical image obtained by the gun sight is converted into an electronic image.
(2) Target sheet region extraction: the target sheet region is extracted from the electronic image.
The target sheet region of interest is extracted from the global image while eliminating the interference of complex background information. The target sheet region extraction method is an object detection method based on adaptive threshold segmentation, which determines the threshold quickly, performs well in various complex situations, and guarantees segmentation quality. The detection method uses the idea of maximizing the between-class variance. Let t be the segmentation threshold between foreground and background; let the foreground pixels account for a proportion w0 of the image with average gray level u0, and the background pixels account for a proportion w1 with average gray level u1. With u the overall average gray level of the image:
u = w0*u0 + w1*u1;
t is traversed from the minimum gray value to the maximum gray value; the value of t that maximizes
g = w0*(u0-u)^2 + w1*(u1-u)^2
is the optimal segmentation threshold.
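The traversal described above admits a direct sketch. The following Python is illustrative only and not part of the original disclosure; the function name and the flat pixel-list input are assumptions:

```python
def otsu_threshold(pixels):
    """Traverse t over the gray range and keep the t that maximizes
    the between-class variance (equivalent form: w0*w1*(u0-u1)^2)."""
    n = len(pixels)
    best_t, best_g = 0, -1.0
    for t in range(min(pixels), max(pixels) + 1):
        fg = [p for p in pixels if p > t]   # foreground: gray above t
        bg = [p for p in pixels if p <= t]  # background: gray at or below t
        if not fg or not bg:
            continue
        w0, w1 = len(fg) / n, len(bg) / n
        u0, u1 = sum(fg) / len(fg), sum(bg) / len(bg)
        g = w0 * w1 * (u0 - u1) ** 2
        if g > best_g:
            best_g, best_t = g, t
    return best_t
```

For a bimodal histogram, for example half the pixels at gray 10 and half at 200, any threshold between the two modes maximizes g; this sketch returns the first such t.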
The execution flow of the target sheet region extraction method is shown in Fig. 4. The method comprises four steps: image mean filtering; determining the segmentation threshold by the Otsu method; determining candidate regions by threshold segmentation; and determining and intercepting the minimal contour by a contour tracking algorithm.
21) Image mean filtering
A large-scale mean filter is applied to the image to eliminate the grid interference on the target sheet and make the circular target sheet region prominent. Taking a window of size 41*41 as an example, the calculation is:
g(x, y) = (1/(41*41)) * sum over i and j of f(x+i, y+j);
where g(x, y) is the filtered image, f is the source image, x and y are the coordinates of the window's central point in the image, i is the pixel abscissa index relative to x, ranging from -20 to 20, and j is the pixel ordinate index relative to y, ranging from -20 to 20.
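A minimal sketch of the windowed average above (illustrative Python; border clamping is an assumption, since the text does not state how image borders are handled, and the half-size parameter k stands in for the fixed 20):

```python
def mean_filter(img, k):
    """Replace each pixel with the mean of the (2k+1)x(2k+1) window
    centered on it; coordinates outside the image are clamped to the edge."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + j, 0), h - 1)][min(max(x + i, 0), w - 1)]
                    for j in range(-k, k + 1) for i in range(-k, k + 1)]
            out[y][x] = sum(vals) / len(vals)
    return out
```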
22) Determining the segmentation threshold by the Otsu method
Threshold segmentation divides the image into background and foreground according to its grayscale characteristics using the adaptive Otsu threshold segmentation method (OTSU). The larger the variance between background and foreground, the larger the difference between the two parts of the image. For an image I(x, y), let the segmentation threshold between foreground and background be Th; let the foreground pixels account for a proportion w2 of the whole image with average gray level G1, and the background pixels account for a proportion w3 with average gray level G2; let the total average gray level of the image be G_Ave and the between-class variance be g. The size of the image is M*N; the number of pixels with gray value less than the threshold is N1, and the number of pixels with gray value greater than the threshold is N2. Then:
M*N = N1 + N2;
w2 + w3 = 1;
G_Ave = w2*G1 + w3*G2;
g = w2*(G_Ave-G1)^2 + w3*(G_Ave-G2)^2;
which yields the equivalent formula:
g = w2*w3*(G1-G2)^2;
The segmentation threshold Th is obtained by traversal as the value that maximizes the between-class variance g.
23) Segmenting the filtered image with the determined threshold Th
The filtered image is segmented with the determined segmentation threshold Th to obtain a binary image divided into foreground and background.
24) Determining and intercepting the minimal contour by the contour tracking algorithm
Contour tracking uses the Freeman chain-code vector tracking method, which describes a curve or boundary by the coordinates of its starting point and a sequence of edge direction codes. It is a coded representation of a boundary that uses boundary directions as the coding basis; to simplify the description of the boundary, it adopts a boundary point-set description.
According to the number of adjacent directions of the central pixel, common chain codes are divided into 4-connected and 8-connected chain codes. A 4-connected chain code has 4 adjacent points, above, below, left and right of the central point. An 8-connected chain code adds 4 oblique 45-degree directions to the 4-connected chain code. Since any pixel has 8 adjacent points around it, the 8-connected chain code matches the actual arrangement of pixels and can accurately describe the central pixel and the information of its neighbors. Therefore, this algorithm uses the 8-connected chain code, as shown in Fig. 2.
The 8-connected chain code direction assignments are shown in Table 1:
Table 1. 8-connected chain code direction table
As shown in Fig. 3, a 9x9 dot chart contains a line segment with starting point S and end point E; this line segment can be represented as L = 43322100000066.
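The encoding of such a segment can be sketched as follows (illustrative Python; the assignment 0 = east, counting counter-clockwise, follows the common Freeman convention and is an assumption here, since Table 1 is not reproduced):

```python
# 8-connected direction codes, counter-clockwise from east, in image
# coordinates (y grows downward): 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
DIRS = {0: (1, 0), 1: (1, -1), 2: (0, -1), 3: (-1, -1),
        4: (-1, 0), 5: (-1, 1), 6: (0, 1), 7: (1, 1)}

def chain_code(points):
    """Encode a sequence of 8-connected boundary points as Freeman direction codes."""
    inv = {v: k for k, v in DIRS.items()}
    return [inv[(x1 - x0, y1 - y0)]
            for (x0, y0), (x1, y1) in zip(points, points[1:])]
```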
In combination with a self-defined structure (a custom FreemanList structure), the algorithm judges whether the chain closes end to end, that is, whether it is a complete contour, and thereby obtains and stores the target sheet region image.
(3) Point-of-impact detection:
The point-of-impact detection method is based on background subtraction. It detects points of impact in the target sheet region image and determines their central positions. The method saves the previous target surface image and performs pixel-level subtraction between the current and previous target surface images. Because the two frames may have pixel deviations after the perspective correction calculation, a down-sampling method with a step of 2 pixels is used: within each 2*2 pixel region, the minimum gray value is taken as the pixel gray value, yielding a down-sampled grayscale image. The regions with gray value greater than 0 are then obtained, contour detection is performed on them, and the newly generated point-of-impact graphic information is obtained.
The point-of-impact detection method compares frames by pixel-level subtraction; it is fast and reliably returns the newly generated point-of-impact positions.
The point-of-impact detection method is executed as follows:
31) Storing the original target sheet image
The original target sheet image data is stored and read into the cache as the reference target sheet image. If the shot is a repeated shot at a target for which accuracy has already been computed, the target sheet region stored during the last accuracy computation is used as the reference target sheet image.
32) The image processed by steps 1)-2) above and the original target sheet image are subjected to pixel-level subtraction to obtain the difference positions.
A pixel difference threshold is set for the two frames: when the pixel difference exceeds the threshold, the result is set to 255; when the pixel difference is below the threshold, the result is set to 0.
The specific threshold can be obtained by debugging; under normal circumstances the setting range is 100~160.
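Steps 31)-32) amount to a per-pixel absolute difference against the reference image followed by thresholding; a minimal sketch (illustrative Python, names assumed):

```python
def diff_mask(cur, ref, threshold=128):
    """Pixel-level subtraction of two gray images: 255 where the absolute
    difference exceeds the threshold, 0 elsewhere (threshold typically 100~160)."""
    return [[255 if abs(c - r) > threshold else 0 for c, r in zip(crow, rrow)]
            for crow, rrow in zip(cur, ref)]
```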
33) Contour tracking is performed on the image generated in step 32) to obtain the point-of-impact contours, and the point-of-impact central points are calculated.
Freeman chain-code contour tracking is performed and the mean of each contour is calculated to obtain the point-of-impact central point:
Center_xi = (1/n_i) * sum of the x coordinates of the points in Freemanlist_i;
Center_yi = (1/n_i) * sum of the y coordinates of the points in Freemanlist_i;
where Center_xi denotes the center x-axis coordinate of the i-th point of impact, Center_yi denotes the center y-axis coordinate of the i-th point of impact, Freemanlist_i denotes the contour of the i-th point of impact, and n_i is the number of points in that contour.
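The center formulas above reduce to the mean of the contour point coordinates; an illustrative Python sketch (names assumed):

```python
def contour_center(contour):
    """Center of a point-of-impact contour: the mean of its point coordinates,
    Center_x = (1/n) * sum(X_i), Center_y = (1/n) * sum(Y_i)."""
    n = len(contour)
    return (sum(x for x, _ in contour) / n,
            sum(y for _, y in contour) / n)
```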
The execution flow of the point-of-impact detection method is shown in Fig. 12.
(4) Deviation calculation:
The lateral and longitudinal deviations between the points of impact and the target sheet center are detected to obtain the deviation set.
The target sheet region and the electronic reference target sheet are subjected to pixel-level subtraction to detect the points of impact; the central point of each point of impact is calculated, and the deviation of each point-of-impact central point from the central point of the target sheet region is further calculated; the deviation is input into the gun sight to correct subsequent shooting automatically.
Embodiment 2
This embodiment is substantially the same as embodiment 1, the difference being that a target sheet region correction step is included after the target sheet region is extracted.
Target sheet region correction:
Because of deviations in how the target sheet is pasted and in the angle between the sight and the target sheet when the image is acquired, the extracted effective target sheet region may be tilted, making the obtained image non-circular. To ensure that the calculated point-of-impact deviations have high precision, perspective correction is applied to the target sheet image so that its outer contour is corrected to a regular circle. The target sheet region correction method is a target sheet image correction method based on ellipse endpoints. The method obtains the edges of the image with the Canny operator. Since the target sheet image almost occupies the entire image, the parameter variation range is small, and the maximum elliptical contour can be fitted using the Hough transform to obtain the maximum ellipse equation. The target sheet image contains crosshair lines, which have several intersections with the ellipse; in the standard figure these intersections correspond respectively to the topmost, bottommost, rightmost and leftmost points of the maximum elliptical contour. The straight-line fitting of the crosshair lines is performed using the Hough transform. From the input subimage, the set of intersections of the crosshair lines with the ellipse is obtained and, together with the point set at the same positions in the template, is used to calculate the perspective transformation matrix.
The target sheet region correction method can quickly obtain the outermost ellipse contour parameters using the Hough transform. At the same time, the Hough line detection algorithm under polar coordinates can also quickly obtain the straight-line parameters; therefore, this method can quickly correct the target sheet region.
The target sheet region correction method is executed as follows:
51) Edge detection with the Canny operator
This step comprises five parts: RGB-to-grayscale conversion, Gaussian filtering to suppress noise, gradient calculation by first-order partial derivatives, non-maximum suppression, and edge detection and linking by the double-threshold method.
RGB-to-grayscale conversion
Grayscale conversion is performed with the standard RGB-to-gray proportions, converting the RGB image to a grayscale image (the three primaries R, G, B are converted to the gray value Gray):
Gray = 0.299R + 0.587G + 0.114B;
Gaussian filtering of the image
The converted grayscale image is passed through a Gaussian filter to suppress noise. Let sigma be the standard deviation; according to the principle of minimal Gaussian loss, the template size is set to (3*sigma+1)*(3*sigma+1). Let x be the lateral coordinate offset from the template center, y the longitudinal coordinate offset from the template center, and K the weight of the Gaussian filter template:
K(x, y) = (1/(2*pi*sigma^2)) * exp(-(x^2+y^2)/(2*sigma^2));
Calculating the amplitude and direction of the gradient with first-order finite differences
Convolution operators are applied to compute the finite differences; the gradient is calculated as:
P[i, j] = (f[i, j+1] - f[i, j] + f[i+1, j+1] - f[i+1, j]) / 2;
Q[i, j] = (f[i, j] - f[i+1, j] + f[i, j+1] - f[i+1, j+1]) / 2;
M[i, j] = sqrt(P[i, j]^2 + Q[i, j]^2);
theta[i, j] = arctan(Q[i, j] / P[i, j]).
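The 2x2 finite-difference formulas above can be sketched directly (illustrative Python; taking the amplitude as the Euclidean norm of P and Q is the standard choice and is stated explicitly here):

```python
import math

def gradient(f, i, j):
    """First-order finite differences averaged over the 2x2 cell at (i, j),
    returning gradient amplitude and direction."""
    p = (f[i][j + 1] - f[i][j] + f[i + 1][j + 1] - f[i + 1][j]) / 2.0
    q = (f[i][j] - f[i + 1][j] + f[i][j + 1] - f[i + 1][j + 1]) / 2.0
    return math.hypot(p, q), math.atan2(q, p)
```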
Non-maximum suppression
Non-maximum suppression finds the local maxima among pixels and sets the gray values of non-maximum points to 0, thereby eliminating most non-edge points.
As can be seen in Fig. 5, to perform non-maximum suppression it must first be determined whether the gray value of pixel C is the maximum in its 8-neighborhood. The line in Fig. 5 is the gradient direction of point C, so its local maximum must lie on that line; that is, besides point C itself, only the values at the two points dTmp1 and dTmp2, where the gradient direction intersects the neighborhood, may also be local maxima. Therefore, comparing the gray value of C with those of these two points determines whether C is the local-maximum gray point in its neighborhood. If the gray value of C is smaller than either of the two, then C is not a local maximum and can be excluded as an edge point.
Edge detection and linking by the double-threshold algorithm
The double-threshold method is used to further reduce the number of non-edge points. A low threshold parameter Lthreshold and a high threshold parameter Hthreshold are set and form the comparison conditions: values at or above the high threshold are uniformly transformed to 255 and saved; values between the low and high thresholds are stored as 128; all other values are regarded as non-edge data and replaced by 0.
Edge tracking is then performed again with the Freeman chain code, and edge segments of small length are filtered out.
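The three-way mapping of the double-threshold step can be sketched as (illustrative Python, names assumed):

```python
def double_threshold(mag, low, high):
    """Strong edges (>= high) -> 255, weak edges (between low and high) -> 128,
    everything else -> 0."""
    return [[255 if m >= high else 128 if m >= low else 0 for m in row]
            for row in mag]
```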
52) Fit the cross wire with the Hough transform in polar coordinates to obtain the linear equations
The Hough transform is an image-processing method for detecting simple geometric shapes such as straight lines and circles. In rectangular coordinates a straight line can be expressed as y = kx + b; the line then corresponds to a single point in the k-b parameter space, and, conversely, every non-zero pixel (x, y) on the line in image space transforms into a line in the k-b space, all of which intersect at that point. Therefore, a local peak in the parameter space corresponds to a straight line in the original image space. Since the slope k may become infinitely large or infinitely small, line detection is instead carried out in the polar-coordinate space. In the polar coordinate system, a straight line can be stated in the following form:
ρ = x*cos θ + y*sin θ;
From the above formula, in conjunction with Fig. 7, the parameter ρ is the distance from the coordinate origin to the line, and each pair of parameters (ρ, θ) uniquely determines a line. It is therefore only necessary to search the parameter space for local maxima; each local maximum yields the corresponding line parameter set. After the line parameter sets are obtained, non-maxima suppression is applied and the maximum-valued parameters are retained.
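A minimal polar-coordinate Hough accumulator along these lines might look as follows (our own sketch for illustration; a production system would typically use an optimized routine such as OpenCV's cv2.HoughLines):

```python
import numpy as np

def hough_peak(binary, n_theta=180):
    """Vote every non-zero pixel into (rho, theta) space and return the peak."""
    h, w = binary.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))       # theta sampled at 0..179 degrees
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(binary)
    for x, y in zip(xs, ys):
        # rho = x*cos(theta) + y*sin(theta), offset by diag so the index is >= 0
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    r, t = np.unravel_index(np.argmax(acc), acc.shape)
    return r - diag, thetas[t]
```

For the cross wire, the two strongest accumulator peaks would give the (ρ, θ) parameters of lines L1 and L2.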
53) Calculate the 4 intersection points of the cross wire and the ellipse
With the equations of lines L1 and L2 known, it is only necessary to search along the line directions for the intersections with the outer elliptical contour to obtain the 4 intersection coordinates (a, b), (c, d), (e, f) and (g, h), as shown in Fig. 9.
54) Calculate the perspective transformation matrix parameters and perform image rectification
Four point pairs are formed from the coordinates of the 4 intersection points and the 4 points defined by the template, and perspective correction is applied to the target sheet region.
The perspective transform projects the image onto a new view plane; the general transformation formula is:
u and v are the coordinates of the original image, corresponding to the coordinates x′, y′ of the transformed image; to constitute a three-dimensional matrix, the cofactors w and w′ are added, where w is taken as 1 and w′ is the transformed value of w. Wherein
x′ = x/w′;
y′ = y/w′;
The above formula is equivalent to:
Therefore, given the four corresponding coordinate pairs of the perspective transform, the perspective transformation matrix can be obtained. Once the perspective transformation matrix is obtained, the perspective transform can be applied to the image or to individual pixels, as shown in Fig. 10:
To facilitate calculation, the above formula is simplified: let (a1, a2, a3, a4, a5, a6, a7, a8) be the 8 parameters of the perspective transform; the above formula is then equivalent to:
where (x, y) is the image coordinate to be calibrated and (x′, y′) is the image coordinate after calibration, i.e. the template image coordinate. The above formula is equivalent to:
a1*x + a2*y + a3 - a7*x*x′ - a8*y*x′ - x′ = 0;
a4*x + a5*y + a6 - a7*x*y′ - a8*y*y′ - y′ = 0;
The above formulas are converted into matrix form:
Since there are 8 parameters and each point pair provides 2 equations, only 4 point pairs are needed to solve for the 8 parameters. Let (xi, yi) be the pixel coordinates of the image to be calibrated and (x′i, y′i) be the pixel coordinates of the template image, i = {1, 2, 3, 4}. The matrix form can therefore be converted into:
Letting
the above formula becomes:
AX = b;
Solving this non-homogeneous equation gives the solution:
X = A⁻¹b;
The corrected target sheet region is thereby obtained and stored, and the corrected target sheet image is applied in the subsequent trajectory point detection.
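Assembling and solving AX = b from the two equations per point pair can be sketched with NumPy as follows (an illustrative sketch; the point values in the test below are arbitrary):

```python
import numpy as np

def perspective_params(src, dst):
    """Solve a1..a8 from 4 point pairs (x, y) -> (x', y') via AX = b."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        # a1*x + a2*y + a3 - a7*x*x' - a8*y*x' = x'
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        # a4*x + a5*y + a6 - a7*x*y' - a8*y*y' = y'
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b += [xp, yp]
    return np.linalg.solve(np.array(A, float), np.array(b, float))

def apply_perspective(a, x, y):
    """x' = (a1x + a2y + a3)/w', y' = (a4x + a5y + a6)/w', w' = a7x + a8y + 1."""
    w = a[6] * x + a[7] * y + 1.0
    return ((a[0] * x + a[1] * y + a[2]) / w,
            (a[3] * x + a[4] * y + a[5]) / w)
```

Once the 8 parameters are solved, every pixel of the target sheet region can be mapped through apply_perspective to obtain the rectified image.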
Embodiment 3
The gun sight of this embodiment is substantially the same as that of Embodiment 1, the difference being that, in order to improve the video display quality, this embodiment adds a video stabilization processing unit to the CPU core. The video anti-shake processing method of this unit performs preprocessing, feature point detection, feature point tracking, homography matrix calculation, image filtering and affine transformation on the acquired image data; after this series of processing the image can be displayed smoothly. The flow chart of the video anti-shake processing method is shown in Fig. 15.
The video anti-shake processing method comprises previous-frame feature point detection, current-frame feature point tracking, homography matrix calculation, image filtering and affine transformation. Previous-frame feature point detection uses the FAST corner detection method to extract feature points as the template for feature point tracking of the following frame; the current frame performs feature point tracking against the previous frame using the pyramid Lucas-Kanade optical flow method, and the RANSAC algorithm selects the 2 best feature points from the tracked feature points. Assuming these feature points undergo only rotation and translation, the affine transformation of the homography matrix is a rigid body transformation; the translation distance and rotation angle are calculated from the two groups of points to obtain the homography matrix of the affine transformation. The transformation matrix is then filtered with a Kalman filter to eliminate the random motion component. Finally, multiplying the original image coordinates by the filtered transformation matrix gives the coordinates of the original points in the new image, thereby realizing the affine transformation and eliminating video jitter.
In addition, for gun sights of certain models whose acquired images are not in RGB format, the image information must be converted to RGB format by a preprocessing operation before previous-frame feature point detection; this simplifies the image information supplied to the subsequent image processing module. The specific method is as follows:
(1) Image preprocessing
Since the image acquired by the gun sight of certain models is not in RGB format, the image information needs to be converted to RGB format by a preprocessing operation, simplifying the image information supplied to the subsequent image processing module. Image preprocessing performs format conversion on the input YUV image, computing the RGB image and the grayscale image required by the algorithm.
The conversion formulas are as follows:
R = Y + 1.140*V;
G = Y - 0.394*U - 0.581*V;
B = Y + 2.032*U.
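A sketch of this conversion (assuming U and V are already zero-centered, i.e. 128 has been subtracted for 8-bit data; the blue channel is paired with U per the standard formula):

```python
def yuv_to_rgb(y, u, v):
    """YUV -> RGB with U, V centered on zero (subtract 128 for 8-bit data)."""
    r = y + 1.140 * v
    g = y - 0.394 * u - 0.581 * v
    b = y + 2.032 * u   # blue channel uses U in the standard formula
    return r, g, b
```

For display, the results would additionally be clamped to the [0, 255] range.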
(2) Previous-frame feature point detection
Previous-frame feature point detection provides the template with which the following frame performs feature point tracking; the detected feature points are recorded in the form of a coordinate set. Feature point detection uses Shi-Tomasi corner detection: the autocorrelation matrix of the second derivatives of the image gray intensity is calculated and its eigenvalues are then computed; if the smaller of the two eigenvalues is greater than a threshold, a strong corner is obtained. The calculation of the second-derivative matrix is accelerated by the Sobel operator.
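The min-eigenvalue corner response can be sketched in plain NumPy as below (our own illustration, without the Sobel acceleration mentioned above; a simple windowed sum stands in for it, and OpenCV's cv2.goodFeaturesToTrack would be the production equivalent):

```python
import numpy as np

def box_sum(a, win=3):
    """Sum of a over a win x win window centered on each pixel."""
    p = win // 2
    ap = np.pad(a, p)
    out = np.zeros_like(a)
    for dy in range(win):
        for dx in range(win):
            out += ap[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out

def shi_tomasi_response(img, win=3):
    """Smaller eigenvalue of the windowed gradient autocorrelation matrix."""
    Iy, Ix = np.gradient(img.astype(float))
    Sxx = box_sum(Ix * Ix, win)
    Syy = box_sum(Iy * Iy, win)
    Sxy = box_sum(Ix * Iy, win)
    # min eigenvalue of [[Sxx, Sxy], [Sxy, Syy]]
    return (Sxx + Syy) / 2 - np.sqrt(((Sxx - Syy) / 2) ** 2 + Sxy ** 2)
```

Thresholding this response and keeping local maxima yields the strong-corner coordinate set.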
(3) Current-frame feature point tracking
With the feature points of the previous frame as the template, tracking detects the feature point data of the current frame using the pyramid Lucas-Kanade optical flow method. Assume a pixel (x, y) on the image has brightness E(x, y, t) at time t, and let u(x, y) and v(x, y) denote the horizontal and vertical components of the optical flow at this point. After a time interval Δt the brightness of the corresponding point is E(x + Δx, y + Δy, t + Δt); when Δt approaches 0 the brightness can be considered constant, so the brightness at time t satisfies:
E(x, y, t) = E(x + Δx, y + Δy, t + Δt);
Expanding the brightness of the moved point by the Taylor formula gives:
E(x + Δx, y + Δy, t + Δt) = E(x, y, t) + Ex*Δx + Ey*Δy + Et*Δt + ε;
Ignoring the second-order infinitesimal ε and letting Δt tend to 0, with w = (u, v) in the above formula and with Ex, Ey, Et denoting the gradients of the pixel gray level along the x, y and t directions of the image, the above formula can be converted to:
Ex*u + Ey*v + Et = 0;
For large and incoherent motion the tracking effect of optical flow is not ideal; in this case an image pyramid is used: the optical flow is first calculated at the top of the image pyramid, the resulting motion estimate serves as the starting point for the next pyramid level, and this process is repeated until the bottom of the pyramid is reached.
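A single pyramid level of the Lucas-Kanade step, solving Ex*u + Ey*v + Et = 0 over a window by least squares, can be sketched as follows (a toy illustration with our own names; the pyramid loop itself is omitted):

```python
import numpy as np

def lk_flow_at(E0, E1, x, y, win=9):
    """Least-squares solution of Ex*u + Ey*v + Et = 0 in a window around (x, y)."""
    Ey, Ex = np.gradient(E0.astype(float))   # spatial gray-level gradients
    Et = E1.astype(float) - E0               # temporal gradient between frames
    p = win // 2
    sl = (slice(y - p, y + p + 1), slice(x - p, x + p + 1))
    A = np.stack([Ex[sl].ravel(), Ey[sl].ravel()], axis=1)
    b = -Et[sl].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

In the pyramidal version this estimate at a coarse level would warp the window before re-solving at the next finer level.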
(4) Homography matrix calculation
To calculate the homography matrix of the affine transformation, the RANSAC algorithm selects the 2 best feature points from the tracked feature points. Assuming these feature points undergo only rotation and translation, the affine transformation of the homography matrix is a rigid body transformation, and the translation distance and rotation angle can be calculated from the two groups of points.
The input of the RANSAC algorithm often contains large noise or invalid points. RANSAC reaches its target by choosing a random subset of the data; the selected subset is assumed to consist of inliers and is verified with the following method:
1) A model is fitted to the assumed inliers, i.e. all unknown parameters can be calculated from the assumed inliers.
2) If enough points are classified as assumed inliers, the estimated model is reasonable enough.
3) The model is re-estimated from all of the assumed inliers.
4) The model is evaluated by estimating the error rate of the inliers with respect to the model.
This process is executed a fixed number of times; each generated model is compared with the existing best model, and if it has more inliers than the existing model it is adopted, otherwise it is rejected.
For the rigid body transformation, the rotation angle θ and the translations tx, ty in the x and y directions must be obtained; for a point (x, y) before the transformation and the point (u, v) after the transformation, only two groups of corresponding points are needed. The correspondence is:
u = cos θ*x - sin θ*y + tx;
v = sin θ*x + cos θ*y + ty;
(5) Image filtering
The transformation matrix is filtered with a Kalman filter to eliminate the random motion component.
The shake of the video can be approximately regarded as obeying a Gaussian distribution; the 2*3 transformation matrix is reduced to a 1*3 matrix whose input parameters are the displacements in the x and y directions and the rotation angle.
The equation group of the Kalman filter is as follows:
X(k|k-1) = A*X(k-1|k-1) + B*U(k);
P(k|k-1) = A*P(k-1|k-1)*A′ + Q;
Kg(k) = P(k|k-1)*H′/(H*P(k|k-1)*H′ + R);
X(k|k) = X(k|k-1) + Kg(k)*(Z(k) - H*X(k|k-1));
P(k|k) = (I - Kg(k)*H)*P(k|k-1);
For the above equation group, in order to improve computation speed without affecting precision, it is simplified as follows:
X_ = X;
The current state estimate X_ is calculated; it is simply equal to the previous state estimate X.
P_ = P + Q;
The current estimation error covariance P_ is calculated as the previous estimation error covariance P plus the process noise covariance Q.
K = P_/(P_ + R);
The Kalman gain K is calculated, where R is the measurement error covariance.
X = X_ + K*(z - X_);
The state estimate X is updated, where z is the measured value; it is used in the next iteration.
P = (1 - K)*P_;
The estimation error covariance P is updated and used in the next iteration.
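The simplified scalar recursion above can be run directly on each of the three parameters (x shift, y shift, rotation angle); a sketch with illustrative defaults for Q, R and the initial state (our own values):

```python
def kalman_smooth(zs, Q=1e-3, R=0.25, x0=0.0, p0=1.0):
    """Run the simplified equations X_=X, P_=P+Q, K=P_/(P_+R), ... over a signal."""
    x, p, out = x0, p0, []
    for z in zs:
        x_ = x                    # current state estimate = previous estimate
        p_ = p + Q                # current estimation error covariance
        k = p_ / (p_ + R)         # Kalman gain
        x = x_ + k * (z - x_)     # update state estimate with measurement z
        p = (1.0 - k) * p_        # update estimation error covariance
        out.append(x)
    return out
```

Larger R relative to Q yields heavier smoothing, i.e. stronger suppression of the random motion component.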
(6) Affine transformation
Since the feature points include only rotation and translation, the affine transformation is a rigid body transformation. Multiplying the original image coordinates by the filtered transformation matrix gives the coordinates of the original points in the new image; here the transformation matrix T corresponding to the rigid body transformation is:
Assume the original image coordinates are I and the image coordinates after the affine transformation are I′; then
I′ = I*T;
After the affine transformation the video is free of jitter and appears noticeably stable.
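Building the 2*3 rigid matrix T from the filtered (θ, tx, ty) and applying I′ = I*T to a set of points can be sketched as (a minimal illustration; warping a full frame would use an equivalent per-pixel mapping):

```python
import numpy as np

def rigid_matrix(theta, tx, ty):
    """2x3 rigid-body transformation matrix T."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty]])

def transform_points(T, pts):
    """I' = T * I for Nx2 points, written in homogeneous form."""
    pts_h = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])
    return pts_h @ T.T
```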