Gun sight with automatic deviation correction and automatic correction method thereof
Technical field
The invention belongs to the technical field of sighting, and is embodied as a gun sight with automatic deviation correction and an automatic correction method therefor.
Background technology
Sighting devices in the prior art are divided into mechanical sights and optical sights. A mechanical sight aims mechanically through metal sighting elements such as the rear sight, front sight and sighting notch; an optical sight forms an image through optical lenses and achieves aiming by making the target image and the line of sight coincide on the same focal plane. Existing sighting devices have the following disadvantages and inconveniences: (1) after a sight is installed and used for aimed fire, accurate shooting can only be achieved by combining a precise aiming position with long-term shooting experience; a beginner, lacking such experience and prone to aiming errors, will see the accuracy of his shooting suffer; (2) during shooting, the graduation and the point of impact must be adjusted repeatedly so that the point of impact coincides with the graduation centre, and bringing the two into coincidence requires turning adjustment knobs many times or performing other mechanical adjustments; (3) calibrating the shooting deviation requires a large number of shots and can only approach precision under the adjustment of a professionally experienced shooter; for an ordinary shooter, or one lacking shooting experience, the deviation adjustment is a troublesome task that consumes considerable time and material resources, and once the sighting system has been adjusted, any dismounting or replacement of the gun sight requires the above calibration procedure to be carried out anew, which brings great inconvenience to the user.
Summary of the invention
In view of the above problems, the present invention starts from the gun itself and, drawing on research in imaging science and image processing, provides a photoelectric sight system with an automatic correction method requiring no manual intervention, together with the automatic correction method itself.
The present invention is achieved by the following technical solutions:
An automatic correction method, in which the optical image obtained by a shooting telescopic sight is converted into an electronic image; a target paper region is extracted from the electronic image; the target paper region and an electronic reference target paper are subjected to pixel-level subtraction to detect the points of impact; the centre point of each point of impact is calculated; the deviation between each point-of-impact centre point and the centre point of the target paper region is obtained; and the deviation is input into the shooting telescopic sight to correct subsequent shooting automatically.
Further, after the target paper region is extracted, perspective correction is performed on it so that its outer contour is corrected to a circle, and point-of-impact detection is carried out on the perspective-corrected target paper region.
Further, extracting the target paper region from the electronic image is specifically: performing large-scale mean filtering on the electronic image to eliminate grid interference on the target paper; dividing the electronic image into background and foreground with an adaptive Otsu threshold segmentation method according to the grayscale characteristics of the electronic image; and, on the image thus divided into foreground and background, determining the minimal contour with the Freeman chain-code vector tracking method and geometric features, thereby obtaining the target paper region.
Further, performing pixel-level subtraction between the target paper region and the electronic reference target paper to detect the points of impact is specifically: subjecting the target paper region and the electronic reference target paper to pixel-level subtraction to obtain their pixel difference image; setting a pixel difference threshold between the two frames and, in the difference image, setting the result to 255 where the pixel difference exceeds the threshold and to 0 where it does not; and performing contour tracking on the pixel difference image to obtain the bullet-hole contours, whose centres give the point-of-impact centre points.
Further, the perspective correction is specifically: obtaining the edge of the target paper region with the Canny operator; fitting the largest elliptic contour to the edge using the Hough transform to obtain the equation of the largest ellipse; fitting the cross-hair lines to the edge using the Hough transform to obtain their intersection points with the topmost, bottommost, rightmost and leftmost points of the largest elliptic contour; combining the topmost, bottommost, rightmost and leftmost points of the largest elliptic contour with the four points at the same positions in the perspective transform template to calculate the perspective transformation matrix; and applying the perspective transformation matrix to the target paper region.
Further, the electronic reference target paper is the electronic image of a blank target paper, or the target paper region extracted during a previous analysis.
Further, the deviation includes a longitudinal deviation and a lateral deviation.
A shooting telescopic sight with automatic deviation correction, the gun sight comprising a visual field acquiring unit, a display unit, a photoelectric conversion circuit board and a CPU core board;
the visual field acquiring unit collects the optical image of the target paper, and the photoelectric conversion circuit board converts the optical image into an electronic image;
the CPU core board comprises an automatic deviation correction module, which extracts the target paper region from the electronic image, performs pixel-level subtraction between the target paper region and the electronic reference target paper to detect the points of impact, calculates the centre point of each point of impact and the deviation between each point-of-impact centre point and the centre point of the target paper region, and inputs the deviation into the shooting telescopic sight to correct subsequent shooting automatically; preferably, the automatic deviation correction module uses the automatic correction method described above;
the display unit displays the electronic image and the deviation.
Further, the CPU core board is connected to a memory card through an interface board, the memory card storing the extracted target paper regions and the shooting accuracy.
Further, the CPU core board further comprises a video stabilization processing unit, which processes the electronic image to eliminate shake before it is shown on the display unit.
Advantageous effects of the present invention: the present invention provides an automatic correction method applicable to a photoelectric sighting system. The method calculates the shooting deviation from historical firing data and uses this historical deviation to correct subsequent shooting automatically, without excessive reliance on human experience; it makes the equipment quick to re-zero and significantly improves shooting accuracy.
Brief description of the drawings
Fig. 1 is a flow block diagram of the analysis method of the present invention;
Fig. 2 shows the 8-connected chain code in Embodiment 1 of the present invention;
Fig. 3 is the dot chart in Embodiment 1 of the present invention;
Fig. 4 is a flow block diagram of target paper region extraction of the present invention;
Fig. 5 is a non-maximum suppression schematic diagram of Embodiment 2 of the present invention;
Fig. 6 is a schematic diagram of the point to be transformed under the rectangular coordinate system in Embodiment 2 of the present invention;
Fig. 7 is a schematic diagram of four arbitrary straight lines through the point under the rectangular coordinate system in Embodiment 2 of the present invention;
Fig. 8 is a schematic diagram, under the polar coordinate system, of the representation of the four arbitrary straight lines through the point of the rectangular coordinate system in Embodiment 2 of the present invention;
Fig. 9 is a schematic diagram of determining the intersection points of the cross-hair lines L1 and L2 with the ellipse in Embodiment 2 of the present invention;
Fig. 10 is a perspective transform schematic diagram of Embodiment 2 of the present invention;
Fig. 11 is a flow block diagram of target paper region correction of the present invention;
Fig. 12 is a flow block diagram of the point-of-impact detection method of the present invention;
Fig. 13 is a perspective view of the gun sight of Embodiment 1 of the present invention;
Fig. 14 is a left view of the gun sight of Embodiment 1 of the present invention;
Fig. 15 is a right view of the gun sight of Embodiment 1 of the present invention;
Fig. 16 is a flow diagram of the video anti-shake method of Embodiment 4 of the present invention.
In the figures: 1. visual field acquiring unit; 2. display unit; 3. battery compartment; 4. rotary encoder; 5. focusing knob; 6. external Picatinny rail; 7. key control panel; 8. Picatinny mount; 9. photoelectric conversion plate; 10. aiming circuit processing unit; 11. display conversion plate; 81. adjusting nut one; 82. adjusting nut two; 101. CPU core board; 102. interface board.
Detailed description of the embodiments
In order to make the purpose, technical scheme and advantages of the present invention clearer, the present invention is explained in further detail below with reference to the accompanying drawings and embodiments. It should be appreciated that the specific embodiments described herein are used only to explain the present invention and not to limit it.
On the contrary, the present invention covers any replacement, modification, equivalent method and scheme made within the spirit and scope of the present invention as defined by the claims. Further, in order to give the public a better understanding of the present invention, some specific details are described in detail below; those skilled in the art can also understand the present invention completely without these details.
Embodiment 1
The present invention also provides a shooting telescopic sight with automatic deviation correction, which carries an automatic deviation correction module; the module uses the automatic correction method to correct subsequent shooting automatically according to historical shooting accuracy.
The sighting system can be conveniently mounted on all kinds of firearms. The photoelectric sighting system comprises a housing, generally a detachable structure, whose interior is an accommodation space; the accommodation space contains the visual field acquiring unit, a video processing unit, the display unit, the power supply and the aiming circuit unit.
The structure of the gun sight is as shown in Figure 13-Figure 15.
The visual field acquiring unit 1 comprises an objective lens assembly or other optical viewing equipment, installed at the front end of the visual field acquiring unit 1 to obtain field-of-view information.
The photoelectric sighting system as a whole is a digital device and can communicate with a smartphone, intelligent terminal, sighting device or circuit; the video information collected by the visual field acquiring unit 1 can be sent to the smartphone, intelligent terminal, sighting device or circuit and displayed by devices such as the smartphone and intelligent terminal.
The visual field acquiring unit 1 contains a photoelectric conversion circuit comprising the photoelectric conversion plate 9, which converts the optical signal of the visual field into an electric signal. While converting, the photoelectric conversion plate 9 also performs automatic exposure, automatic white balance, noise reduction and sharpening on the signal, improving signal quality and providing high-quality data for imaging.
The aiming circuit processing unit 10, which connects the photoelectric conversion plate 9 and the display conversion plate 11, comprises the CPU core board 101 and the interface board 102. The interface board 102 is connected to the CPU core board 101, specifically through a serial port. The CPU core board 101 is placed between the interface board 102 and the photoelectric conversion plate 9; the three boards are parallel to one another, their faces perpendicular to the visual field acquiring unit 1. The photoelectric conversion plate 9 transmits the converted video signal to the CPU core board 101 for further processing through a parallel data interface; the interface board 102 communicates with the CPU core board 101 through the serial port and passes peripheral information such as battery level, attitude information, time, button operations and knob operations to the CPU core board 101 for further processing.
The CPU core board 101 can be connected to a memory card through the interface board 102. In the embodiment of the present invention, viewed along the observation direction of the visual field acquiring unit 1, a memory card slot is arranged at the left position of the CPU core board 101; the memory card plugged into this slot stores information and can automatically upgrade the software program built into the system.
Viewed along the same observation direction, a USB interface is also provided beside the memory card slot on the left side of the CPU core board 101; through this USB interface the system can be powered by an external supply, or information from the CPU core board 101 can be output.
The photoelectric sighting system further comprises multiple sensors, specifically several or all of: an acceleration sensor, a wind speed and direction sensor, a geomagnetic sensor, a temperature sensor, a barometric pressure sensor and a humidity sensor.
A battery compartment 3 is also provided in the housing and holds a battery assembly; a spring contact in the battery compartment 3 keeps the battery assembly fastened. The battery compartment 3 is arranged in the middle of the housing, and the battery assembly can be replaced by opening the battery compartment cover on the side of the housing.
The bottom side of the battery compartment 3 is provided with circuit solder contacts connected to the spring contact inside the compartment; wires soldered to these contacts run to the wiring terminals of the interface board 102, supplying power to the interface board 102, the CPU core board 101, the photoelectric conversion plate 9, the display conversion plate 11 and the display unit 2.
The display unit 2 is a display screen connected to the interface board 102 through the display conversion plate 11, and thus communicates with the CPU core board 101, which transmits the display data to the display unit 2 for display.
The cross division line shown on the display screen is superimposed on the video information collected by the visual field acquiring unit and is used for aimed fire; the screen also displays auxiliary shooting information and working indications derived from the various sensors mentioned above. Part of the auxiliary shooting information is applied to trajectory calculation, and part is used to display warnings to the user.
The top of the housing is provided with external buttons connected to the interface board 102 through the key control panel 7 inside the housing; by pressing the external buttons, the device can be switched on and off and the photographing and video recording functions operated.
Viewed along the observation direction of the visual field acquiring unit 1, the right side of the housing, near the display unit 2, is provided with a rotary encoder 4 with a push-button function. Inside the housing the rotary encoder 4 is connected in series to an encoder circuit board 41, which is connected to the interface board through a flat cable with wiring terminals, completing the transmission of operation data. The rotary encoder controls function switching, adjustment of distance and magnification data, configuration information, entry of deviation data, and the like.
Viewed along the observation direction of the visual field acquiring unit 1, the right side of the housing, near the visual field acquiring unit 1, is provided with a focusing knob 5; through a spring mechanism, the focusing knob 5 adjusts the focus of the visual field acquiring unit 1 so that objects at different distances can be observed clearly.
The bottom of the housing is provided with a Picatinny mount 8 for fixing the sight on the firearm; the Picatinny mount comprises adjusting nuts 81 and 82, located on the left or right side of the mount.
Above the visual field acquiring unit 1 the housing carries an external Picatinny rail 6, coaxial with the visual field acquiring unit 1 and fixed by screws. The external Picatinny rail 6 is of standard dimensions and can carry any object with a standard Picatinny connector, such as a laser range finder, a fill light or a laser pointer.
The present embodiment simultaneously provides an automatic correction method comprising the following steps:
(1) Photoelectric conversion: the optical image obtained by the shooting telescopic sight is converted into an electronic image;
(2) Target paper region extraction: the target paper region is extracted from the electronic image;
The target paper region of interest is extracted from the global image while interference from complex background information is eliminated. The target paper region extraction method is an object detection method based on adaptive threshold segmentation; it determines the threshold quickly, behaves well in various complex situations, and its segmentation quality is assured. The detection method uses the idea of maximizing the between-class variance. Let t be the segmentation threshold between foreground and background, let the foreground occupy a fraction w0 of the image with average gray level u0, and let the background occupy a fraction w1 with average gray level u1; with u the overall average gray level of the image, then

u = w0*u0 + w1*u1;

t is traversed from the minimum gray value to the maximum gray value; the value of t that maximizes

g = w0*(u0-u)^2 + w1*(u1-u)^2

is the optimal segmentation threshold.
The execution flow of the target paper region extraction method is shown in Fig. 4; the method comprises four steps: image mean filtering; determination of the segmentation threshold by the Otsu method; determination of candidate regions by threshold segmentation; and determination and interception of the minimal contour by the contour tracking algorithm.
21) Image mean filtering
Large-scale mean filtering is applied to the image to eliminate the grid interference on the target paper and bring out the circular target paper region. Taking a 41*41 sample as an example, the computation is:

g(x, y) = (1/(41*41)) * Σ_{i=-20..20} Σ_{j=-20..20} f(x+i, y+j);

where f is the input image, g(x, y) is the filtered image, x and y are the abscissa and ordinate on the image of the point corresponding to the sample centre, i is the pixel abscissa index between -20 and 20 relative to x, and j is the pixel ordinate index between -20 and 20 relative to y.
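The averaging formula above can be sketched directly. The following is a minimal illustration, not the patented implementation; the function name `mean_filter`, the edge-replication border handling and the small window size used for testing are our assumptions.

```python
import numpy as np

def mean_filter(image, size=41):
    """Large-scale mean filter: each output pixel is the average of a
    size x size window centred on it (borders handled by edge replication,
    an assumption; the patent does not specify border behaviour)."""
    half = size // 2
    padded = np.pad(image.astype(np.float64), half, mode="edge")
    out = np.empty(image.shape, dtype=np.float64)
    for x in range(image.shape[0]):
        for y in range(image.shape[1]):
            # Window of side `size` centred on (x, y) in the padded image.
            out[x, y] = padded[x:x + size, y:y + size].mean()
    return out
```

In practice a separable or box-filter implementation would be used for a 41*41 window; the double loop here only keeps the sketch close to the summation formula.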
22) Determination of the segmentation threshold by the Otsu method
Threshold segmentation uses the adaptive Otsu threshold segmentation method (OTSU), which divides the image into background and foreground according to its grayscale characteristics. The larger the variance between background and foreground, the larger the difference between the two parts of the image. Therefore, for an image I(x, y), let the segmentation threshold between foreground and background be Th; let the fraction of pixels belonging to the foreground be w2, with average gray level G1, and the fraction belonging to the background be w3, with average gray level G2; let the total average gray level of the image be G_Ave and the between-class variance be g; let the size of the image be M*N, the number of pixels with gray value less than the threshold be N1 and the number with gray value greater than the threshold be N2; then:

M*N = N1 + N2;
w2 + w3 = 1;
G_Ave = w2*G1 + w3*G2;
g = w2*(G_Ave-G1)^2 + w3*(G_Ave-G2)^2;

from which the equivalent formula is obtained:

g = w2*w3*(G1-G2)^2;

By traversal, the segmentation threshold Th for which the between-class variance g is maximal is obtained.
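The traversal described above can be sketched with the equivalent formula g = w2*w3*(G1-G2)^2. This is a plain brute-force sketch under the stated definitions, not the patented code; the function name `otsu_threshold` and the convention "gray >= Th is foreground" are our assumptions.

```python
import numpy as np

def otsu_threshold(gray):
    """Traverse Th over the gray range and return the Th maximising the
    between-class variance g = w2*w3*(G1-G2)^2."""
    gray = np.asarray(gray, dtype=np.float64).ravel()
    best_g, best_th = -1.0, 0
    for th in range(1, 256):
        fg = gray[gray >= th]   # foreground pixels (assumed convention)
        bg = gray[gray < th]    # background pixels
        if fg.size == 0 or bg.size == 0:
            continue
        w2 = fg.size / gray.size
        w3 = bg.size / gray.size
        g = w2 * w3 * (fg.mean() - bg.mean()) ** 2
        if g > best_g:
            best_g, best_th = g, th
    return best_th
```

For real images the per-threshold means would be computed incrementally from the histogram rather than by rescanning all pixels.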
23) Segmentation of the filtered image with the determined threshold Th
The filtered image is segmented with the threshold Th determined above, yielding a binary image divided into foreground and background.
24) Determination and interception of the minimal contour by the contour tracking algorithm
Contour tracking uses the Freeman chain-code vector tracking method, which describes a curve or border by the coordinates of its starting point together with direction codes along the edge. It is a coded representation of the boundary that uses boundary directions as the coding basis and, to simplify the description of the border, represents the boundary as a point set.
According to the number of directions adjacent to the central pixel, common chain codes are divided into 4-connected and 8-connected chain codes. A 4-connected chain code has 4 adjacent points, above, below, left and right of the central point. An 8-connected chain code adds the four oblique 45° directions; since any pixel has 8 neighbours, the 8-connected chain code matches the actual situation of the pixels and can describe the central pixel and its neighbourhood exactly. This algorithm therefore uses the 8-connected chain code, as shown in Fig. 2.
The 8-connected chain code distribution is shown in Table 1:
Table 1: 8-connected chain code distribution table
As shown in Fig. 3, a 9×9 dot chart is given containing a line segment with starting point S and end point E; this line segment can be represented as L = 43322100000066.
Combined with a self-defined structure (a self-defined FreemanList structure), it is judged whether the chain is joined end to end, so as to determine whether the contour is complete; the target paper region image is thereby obtained and stored.
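The end-to-end check above can be illustrated with a minimal chain-code decoder. The direction numbering below (0 = east, proceeding counter-clockwise in 45° steps) is the common Freeman convention and is an assumption, since Fig. 2 and Table 1 are not reproduced here; `decode_chain` and `is_closed` are our names, not the patent's FreemanList structure.

```python
# 8-connected Freeman directions as (row, col) steps, assumed convention:
# 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE (image rows grow downward).
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def decode_chain(start, codes):
    """Follow a chain code from `start` and return every visited point."""
    pts = [start]
    r, c = start
    for code in codes:
        dr, dc = DIRS[code]
        r, c = r + dr, c + dc
        pts.append((r, c))
    return pts

def is_closed(start, codes):
    """A contour is complete when the chain is joined end to end."""
    return decode_chain(start, codes)[-1] == start
```

For example, the code sequence 0,0,2,2,4,4,6,6 traces a 2×2 square back to its start, so it is a complete contour.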
(3) Point-of-impact detection:
The point-of-impact detection method is based on background subtraction. It detects the points of impact in the target paper region image and determines their centre positions. The method preserves the previous target surface image and performs pixel-level subtraction between the current target surface image and the previous one. Because the perspective-correction calculation may introduce pixel deviations between the two frames, the images are down-sampled with a step of 2 pixels, the minimum gray value within each 2*2 pixel region being taken as the pixel's gray value; the difference image is then calculated, the regions with gray value greater than 0 are extracted, and contour detection is performed on these regions to obtain the information of the newly generated bullet holes.
Because it compares successive frames by pixel-level subtraction, the point-of-impact detection method is fast and reliably returns the newly generated point-of-impact positions.
The point-of-impact detection method is executed as follows:
31) Storing the previous target paper image
The previous target paper image data is stored and read into a cache as the reference target paper image. If the shooting is a renewed firing at a target for which accuracy has already been computed, the target paper region stored at the last accuracy computation is used as the reference target paper image.
32) The image processed by steps 1)-2) above is subjected to pixel-level subtraction with the previous target paper image to obtain the positions of difference.
A pixel difference threshold between the two frames is set; where the pixel difference exceeds the threshold the result is set to 255, and where it is below the threshold the result is set to 0.
The specific threshold can be obtained by debugging; under normal circumstances its range is 100~160.
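The subtraction-and-threshold step can be sketched in a few lines. This is a minimal illustration under the stated 100~160 range, not the patented code; the function name `frame_difference`, the default threshold of 130 and the use of the absolute difference are our assumptions.

```python
import numpy as np

def frame_difference(current, reference, threshold=130):
    """Pixel-level subtraction between the current and reference target
    paper images: differences above `threshold` become 255, the rest 0.
    The default 130 sits in the middle of the 100~160 debugging range."""
    diff = np.abs(current.astype(np.int32) - reference.astype(np.int32))
    return np.where(diff > threshold, 255, 0).astype(np.uint8)
```

A fresh bullet hole then appears as an isolated 255-valued blob, ready for contour tracking in step 33).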
33) Contour tracking is performed on the image produced by step 32) to obtain the bullet-hole contours, and the point-of-impact centre points are calculated.
Freeman chain-code contour tracking yields the point-of-impact centre points by averaging: Centerx_i, the centre x-axis coordinate of the i-th point of impact, and Centery_i, its centre y-axis coordinate, are the means of the x and y coordinates of the points in Freemanlist_i, the profile of the i-th point of impact.
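The Centerx_i / Centery_i computation can be sketched as the mean of the contour point coordinates. The original formulas are lost in translation, so averaging the contour points is our reading, stated here as an assumption; `contour_center` is our name.

```python
def contour_center(points):
    """Centre of a bullet-hole contour: the mean of the x and y coordinates
    of the points in its Freeman contour list (assumed reading of the
    Centerx_i / Centery_i formulas, which the translation omits)."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    return cx, cy
```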
The execution flow of the point-of-impact detection method is shown in Fig. 12:
(4) Deviation calculation:
The lateral and longitudinal deviations between the points of impact and the target paper centre are detected, giving the deviation set.
The target paper region and the electronic reference target paper are subjected to pixel-level subtraction to detect the points of impact; the centre point of each point of impact is calculated, then the deviation between each point-of-impact centre point and the centre point of the target paper region; the deviations are input into the shooting telescopic sight to correct subsequent shooting automatically.
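The deviation step reduces to per-hole offsets from the target centre. The sketch below also averages the offsets, since a single correction must be fed back to the sight; that averaging, the function name `aim_deviation` and the (x, y) sign convention are our assumptions, not the patent's specification.

```python
def aim_deviation(hole_centers, target_center):
    """Lateral (x) and longitudinal (y) offsets of each bullet-hole centre
    from the target paper centre, plus their mean as a single correction."""
    devs = [(hx - target_center[0], hy - target_center[1])
            for hx, hy in hole_centers]
    n = len(devs)
    mean_dx = sum(d[0] for d in devs) / n
    mean_dy = sum(d[1] for d in devs) / n
    return devs, (mean_dx, mean_dy)
```

A symmetric shot group around the centre thus yields a zero mean correction, while a consistently offset group yields the bias to dial out.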
Embodiment 2
This embodiment is substantially the same as Embodiment 1, except that a target paper region correction step is included after the extraction of the target paper region.
Target paper region correction:
Because of how the target paper is pasted, and because of the angle between the target observer and the target paper when the image is acquired, the extracted effective target paper region may be tilted so that the acquired image is non-circular. To ensure that the computed point-of-impact deviations are of high precision, perspective correction is applied to the target paper image so that its outer contour is corrected to a regular circle. The target paper region correction method is a target paper image correction method based on the ellipse end points; it obtains the edge of the image with the Canny operator. Since the target paper image almost fills the whole image and the parameter variation range is small, the largest elliptic contour is fitted with the Hough transform, yielding the equation of the largest ellipse. The target paper image carries cross-hair lines, which have several intersection points with the ellipse; these intersection points correspond respectively to the topmost, bottommost, rightmost and leftmost points of the largest elliptic contour in the standard figure. The straight lines of the cross-hairs are fitted with the Hough transform. From the input sub-image, the set of intersection points of the cross-hairs with the ellipse is obtained and, together with the point set at the same positions in the template, used to calculate the perspective transformation matrix.
With the Hough transform, the target paper region correction method can quickly obtain the parameters of the outermost elliptic contour; at the same time, the Hough line detection algorithm under polar coordinates also obtains the line parameters quickly, so the method can correct the target paper region rapidly.
The target paper region correction method is executed as follows:
51) Edge detection with the Canny operator
This comprises five parts: RGB-to-grayscale conversion; Gaussian filtering to suppress noise; gradient calculation with first-order partial derivatives; non-maximum suppression; and double-threshold detection with edge connection.
RGB-to-grayscale conversion
Grayscale conversion is performed with the standard RGB-to-gray proportions, converting the RGB image to a grayscale image (the three primaries R, G, B are converted to the gray value Gray), executed as follows:

Gray = 0.299R + 0.587G + 0.114B;
Gaussian filtering of the image
The converted grayscale image is passed through a Gaussian filter to suppress its noise. With σ as the standard deviation and, following the principle of minimal Gaussian loss, the template size set to (3σ+1)*(3σ+1), let x be the lateral coordinate and y the longitudinal coordinate relative to the template centre, and K the weight of the Gaussian filtering template:

K(x, y) = (1/(2πσ^2)) * exp(-(x^2 + y^2)/(2σ^2));
The amplitude and direction of the gradient are calculated with finite differences of the first-order partial derivatives, using 2×2 convolution operators:

P[i,j] = (f[i,j+1] - f[i,j] + f[i+1,j+1] - f[i+1,j]) / 2;
Q[i,j] = (f[i,j] - f[i+1,j] + f[i,j+1] - f[i+1,j+1]) / 2;
M[i,j] = sqrt(P[i,j]^2 + Q[i,j]^2);
θ[i,j] = arctan(Q[i,j] / P[i,j]).
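The P and Q formulas above vectorise directly. The sketch below follows them term for term; the amplitude M is the usual Euclidean magnitude (the translation omits its formula, so it is stated here as the standard form), and `gradient` is our function name.

```python
import numpy as np

def gradient(f):
    """First-order finite differences over 2x2 neighbourhoods, matching
    the P[i,j] and Q[i,j] formulas, plus amplitude M and direction theta."""
    f = np.asarray(f, dtype=np.float64)
    # P[i,j] = (f[i,j+1] - f[i,j] + f[i+1,j+1] - f[i+1,j]) / 2
    P = (f[:-1, 1:] - f[:-1, :-1] + f[1:, 1:] - f[1:, :-1]) / 2.0
    # Q[i,j] = (f[i,j] - f[i+1,j] + f[i,j+1] - f[i+1,j+1]) / 2
    Q = (f[:-1, :-1] - f[1:, :-1] + f[:-1, 1:] - f[1:, 1:]) / 2.0
    M = np.hypot(P, Q)            # gradient amplitude
    theta = np.arctan2(Q, P)      # gradient direction
    return P, Q, M, theta
```

On a pure horizontal ramp f[i,j] = j this yields P = 1, Q = 0 and M = 1 everywhere, as expected.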
Non-maximum suppression
This means searching for pixels that are local maxima and setting the gray value of non-maximum points to 0, thereby removing most non-edge points.
As can be seen from Fig. 5, non-maximum suppression first requires determining whether the gray value of pixel C is the maximum within its 8-neighbourhood. The line in Fig. 5 indicates the gradient direction at C, so the local maximum must lie on this line; that is, besides C itself, the values of the two points dTmp1 and dTmp2 where the gradient direction crosses the neighbourhood may also be local maxima. Comparing the gray value of C with those of these two points therefore determines whether C is the local-maximum gray point of its neighbourhood; if the gray value of C is less than either of them, C is not a local maximum and can be excluded as an edge point.
Double-threshold detection and edge connection
The number of non-edge points is further reduced with the double-threshold method. A low threshold parameter Lthreshold and a high threshold parameter Hthreshold are set to form the comparison conditions: values greater than or equal to the high threshold are uniformly transformed to 255 and kept; values between the low and high thresholds are transformed to 128 and kept; other values are regarded as non-edge data and replaced by 0.
Freeman chain codes are then used again for edge tracking, and edge points on edges of small length are filtered out.
52) Hough transform fitting of the cross wire in polar coordinates to obtain the line equations
The Hough transform is a method in image processing for detecting straight lines, circles and other simple geometric shapes. For a straight line, the rectangular-coordinate expression y = kx + b can be used: each point (x, y) on the line transforms into a line in the k-b space, so all the non-zero pixels lying on one line in image space transform into lines passing through a single point in the k-b parameter space. A local peak point in the parameter space therefore corresponds to a straight line in the original image space. Because the slope k can be infinitely large or infinitely small, the line detection is instead carried out in polar-coordinate space. In the polar coordinate system, a straight line can be stated in the following form:
ρ = x*cosθ + y*sinθ;
In the above formula, according to Fig. 7, the parameter ρ is the distance from the coordinate origin to the line, and each group of parameters (ρ, θ) uniquely determines a line. It is only necessary to search the parameter space using local maxima as the search condition to obtain the line parameter sets corresponding to the local maxima. After the corresponding line parameter sets are obtained, non-maxima suppression is applied and the maximum parameters are retained.
53) Calculating the 4 intersection points of the cross wire and the ellipse
With the equations of the lines L1 and L2 known, it is only necessary to search along the line directions for the intersections with the elliptical outer contour to obtain the 4 intersection coordinates (a, b), (c, d), (e, f), (g, h), as shown in Fig. 9.
54) Calculating the perspective transformation matrix parameters and correcting the image
The coordinates of the 4 intersection points and the 4 points defined by the template form 4 point pairs, with which perspective correction is applied to the target sheet region.
A perspective transform projects the image onto a new view plane; the general transformation formula is:
[x, y, w'] = [u, v, w] * T;
where u, v are the coordinates of the original image, corresponding to the coordinates x', y' of the transformed image; the cofactors w and w' are added to form the three-dimensional matrix, with w taken as 1 and w' being the value after transformation, wherein
x' = x/w';
y' = y/w';
Therefore, given the four corresponding point coordinates of the perspective transform, the perspective transformation matrix can be solved. Once the perspective transformation matrix is obtained, the perspective transform can be applied to the image or to individual pixels, as shown in Fig. 10.
To facilitate calculation, we simplify the above formula. Let (a1, a2, a3, a4, a5, a6, a7, a8) be the 8 parameters of the perspective transform; the above formula is equivalent to:
x' = (a1*x + a2*y + a3)/(a7*x + a8*y + 1);
y' = (a4*x + a5*y + a6)/(a7*x + a8*y + 1);
where (x, y) is the figure coordinate to be calibrated and (x', y') represents the figure coordinate after calibration, i.e. the template figure coordinate. The above formulas are equivalent to:
a1*x + a2*y + a3 - a7*x*x' - a8*y*x' - x' = 0;
a4*x + a5*y + a6 - a7*x*y' - a8*y*y' - y' = 0;
The above formulas are converted into matrix form. Since there are 8 parameters and each point pair supplies 2 equations, only 4 point pairs are needed to solve the corresponding 8 parameters. Let (xi, yi) be the pixel coordinates of the image to be calibrated and (x'i, y'i) be the pixel coordinates of the template figure, i = {1, 2, 3, 4}. The matrix form is therefore, for each i:
[xi, yi, 1, 0, 0, 0, -xi*x'i, -yi*x'i] * (a1, ..., a8)ᵀ = x'i;
[0, 0, 0, xi, yi, 1, -xi*y'i, -yi*y'i] * (a1, ..., a8)ᵀ = y'i;
Let A be the 8×8 matrix formed by stacking these rows for i = 1...4, X = (a1, a2, a3, a4, a5, a6, a7, a8)ᵀ, and b = (x'1, y'1, ..., x'4, y'4)ᵀ. The above formula is:
A*X = b;
Solving this non-homogeneous equation gives the solution:
X = A⁻¹b;
The corrected target sheet region is thereby obtained and stored; subsequent trajectory-point detection then uses the corrected target sheet region image.
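The 8-parameter solve above can be sketched directly; this is an illustrative example (the function name `perspective_from_points` is hypothetical), not the device's implementation:

```python
import numpy as np

def perspective_from_points(src, dst):
    """Solve the 8 parameters (a1..a8) from 4 point pairs by stacking
    the two linear equations per pair into A*X = b and solving.
    src: 4 points on the photographed target; dst: the 4 template points."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    X = np.linalg.solve(np.array(A, float), np.array(b, float))
    a1, a2, a3, a4, a5, a6, a7, a8 = X

    def warp(x, y):
        """Apply the recovered perspective transform to one point."""
        w = a7 * x + a8 * y + 1.0
        return (a1 * x + a2 * y + a3) / w, (a4 * x + a5 * y + a6) / w
    return X, warp
```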
Embodiment 3
The gun sight of this embodiment is substantially the same as that of Embodiment 1, the difference being that, in order to improve video display quality, this embodiment adds a video stabilization processing unit. The video anti-jitter processing method carried by this unit runs on the CPU core and subjects the acquired image data to pre-processing, feature point detection, feature point tracking, homography matrix calculation, image filtering and affine transformation; after this series of processing the images can be displayed smoothly. The flow chart of the video anti-jitter processing method is shown in Fig. 15.
The video anti-jitter processing method includes previous-frame feature point detection, current-frame feature point tracking, homography matrix calculation, image filtering and affine transformation. Previous-frame feature point detection extracts feature points with the FAST corner detection method as the template for feature point tracking of the next frame of data; the current frame performs feature point tracking against the previous frame with the pyramid Lucas-Kanade optical flow method; the RANSAC algorithm then selects the feature points with the best properties from the tracked feature points. Assuming these feature points only rotate and translate, the affine transformation of the homography matrix is a rigid-body transformation, and the homography matrix of the affine transformation is calculated from the translation distance and rotation angle obtained from the two groups of points. The transformation matrix is then filtered with a Kalman filter to eliminate the random motion component; finally, multiplying the original image coordinates by the filtered transformation matrix gives the coordinates of the original points in the new image, realizing the affine transformation and eliminating video jitter.
In addition, the image acquired by a certain model of gun sight is in a non-RGB format, so before previous-frame feature point detection a pre-processing operation needs to convert the image information to RGB format, simplifying the image information and providing it to the subsequent image processing modules.
The specific method is as follows:
(1) Image pre-processing
The image acquired by a certain model of gun sight is in a non-RGB format, so a pre-processing operation needs to convert the image information to RGB format, simplifying the image information and providing it to the subsequent image processing modules. Image pre-processing performs image format conversion on the input YUV image, calculating the RGB image and the gray-scale image required by the algorithm processing.
Its conversion formula is as follows:
R = Y + 1.140*V;
G = Y - 0.394*U - 0.581*V;
B = Y + 2.032*U.
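A per-pixel sketch of the conversion, with U and V centred on zero and the results clamped to the 8-bit range (the clamping is an illustrative assumption, not stated in the text):

```python
def yuv_to_rgb(y, u, v):
    """Convert one YUV sample to RGB with the coefficients above;
    U and V are centred on 0, results clamped to [0, 255]."""
    r = y + 1.140 * v
    g = y - 0.394 * u - 0.581 * v
    b = y + 2.032 * u
    clamp = lambda c: max(0, min(255, int(round(c))))
    return clamp(r), clamp(g), clamp(b)
```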
(2) Previous-frame feature point detection
Previous-frame feature point detection provides the template for feature point tracking of the next frame of data; the detected feature points are recorded in the form of a coordinate set. Feature point detection uses Shi-Tomasi corner detection: the autocorrelation matrix of the second derivatives of the image gray intensity is calculated and its eigenvalues are obtained; if the smaller of the two eigenvalues exceeds a threshold, a strong corner is obtained. The calculation of the second-derivative matrix is accelerated by the Sobel operator.
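A sketch of the Shi-Tomasi response follows. As stated assumptions, central differences stand in for the Sobel acceleration mentioned above, and the window size is illustrative:

```python
import numpy as np

def shi_tomasi_response(img, win=1):
    """Minimum eigenvalue of the 2x2 gradient autocorrelation matrix
    summed over a (2*win+1)^2 window; pixels where this exceeds a
    threshold are strong corners."""
    f = img.astype(float)
    Ix = np.zeros_like(f); Iy = np.zeros_like(f)
    Ix[:, 1:-1] = (f[:, 2:] - f[:, :-2]) / 2.0   # central x-difference
    Iy[1:-1, :] = (f[2:, :] - f[:-2, :]) / 2.0   # central y-difference
    H, W = f.shape
    resp = np.zeros_like(f)
    for i in range(win, H - win):
        for j in range(win, W - win):
            sl = (slice(i - win, i + win + 1), slice(j - win, j + win + 1))
            a = np.sum(Ix[sl] ** 2)
            b = np.sum(Ix[sl] * Iy[sl])
            c = np.sum(Iy[sl] ** 2)
            # smaller eigenvalue of [[a, b], [b, c]]
            resp[i, j] = (a + c) / 2.0 - np.sqrt(((a - c) / 2.0) ** 2 + b ** 2)
    return resp
```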
(3) Current-frame feature point tracking
With the feature points of the previous frame of data as the template, tracking detection yields the feature point data of the current frame, using the pyramid Lucas-Kanade optical flow method for feature point tracking. Assume a pixel (x, y) on the image has brightness E(x, y, t) at time t, and let u(x, y) and v(x, y) denote the horizontal and vertical moving components of this optical flow. After a time interval Δt, the brightness of the corresponding point is E(x+Δx, y+Δy, t+Δt); as Δt approaches 0, the brightness can be considered constant, so the brightness at time t satisfies:
E(x, y, t) = E(x+Δx, y+Δy, t+Δt);
When the brightness of the point changes, expanding the brightness of the moved point by the Taylor formula gives:
E(x+Δx, y+Δy, t+Δt) = E(x, y, t) + Ex*Δx + Ey*Δy + Et*Δt + higher-order terms;
Ignoring the second-order infinitesimal and letting Δt approach 0, and writing w = (u, v), where Ex, Ey and Et represent the gradients of the pixel gray level along the x, y and t directions in the image, the above formula can be converted into:
Ex*u + Ey*v + Et = 0;
For large and incoherent motion, the tracking effect of optical flow alone is not ideal. An image pyramid is therefore used: the optical flow is computed at the top level of the image pyramid, the resulting motion estimate is used as the starting point for the next pyramid level, and the process is repeated until the bottom of the pyramid is reached.
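A single-level sketch of the constraint Ex*u + Ey*v + Et = 0, solved in least squares over one window (the pyramid described above repeats this coarse-to-fine; window placement and size are illustrative assumptions):

```python
import numpy as np

def lucas_kanade_flow(prev, curr, win=2):
    """Estimate the flow (u, v) of the window centred on the image by
    least-squares solution of Ex*u + Ey*v + Et = 0 over the window."""
    f0 = prev.astype(float); f1 = curr.astype(float)
    Ex = np.zeros_like(f0); Ey = np.zeros_like(f0)
    Ex[:, 1:-1] = (f0[:, 2:] - f0[:, :-2]) / 2.0   # spatial gradients
    Ey[1:-1, :] = (f0[2:, :] - f0[:-2, :]) / 2.0
    Et = f1 - f0                                    # temporal gradient
    i, j = f0.shape[0] // 2, f0.shape[1] // 2
    sl = (slice(i - win, i + win + 1), slice(j - win, j + win + 1))
    A = np.stack([Ex[sl].ravel(), Ey[sl].ravel()], axis=1)
    b = -Et[sl].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```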
(4) Homography matrix calculation
The homography matrix of the affine transformation is calculated. The RANSAC algorithm selects the feature points with the best properties from the tracked feature points; assuming the feature points only rotate and translate, the affine transformation of the homography matrix is a rigid-body transformation, and the translation distance and rotation angle can be calculated from the two groups of points.
The input of the RANSAC algorithm often contains considerable noise or invalid points. RANSAC reaches its goal by selecting a random subset of the data; the selected subset is assumed to consist of inliers and is verified with the following method:
1) A model is fitted to the assumed inliers, i.e. all unknown parameters of the model are calculated from the assumed inliers.
2) If enough points are classified as inliers of the hypothesis, the estimated model is considered reasonable enough.
3) The model is re-estimated with all of the assumed inliers.
4) The model is assessed by estimating the error rate of the inliers with respect to the model.
This process is executed a fixed number of times; each newly produced model is compared with the existing model, and if it has more inliers than the existing model it is adopted, otherwise it is rejected.
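The steps 1)-4) above can be sketched as a generic loop; the function names and the line-fitting demonstration model are illustrative assumptions, not the patent's own code:

```python
import random

def ransac(data, fit, error, n_sample, threshold, n_iters=100, seed=0):
    """Generic RANSAC: fit a model to a random subset (assumed inliers),
    count points that fit it, re-estimate on all inliers, and keep the
    model with the most inliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        sample = rng.sample(data, n_sample)
        model = fit(sample)                              # step 1: hypothesis
        if model is None:
            continue                                     # degenerate sample
        inliers = [d for d in data if error(model, d) < threshold]
        if len(inliers) > len(best_inliers):             # step 2: enough inliers?
            best_model = fit(inliers)                    # step 3: re-estimate
            best_inliers = inliers                       # step 4: keep the better model
    return best_model, best_inliers

def fit_line(pts):
    """Least-squares line y = a*x + b through pts; None if degenerate."""
    n = len(pts)
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sxx = sum(p[0] * p[0] for p in pts); sxy = sum(p[0] * p[1] for p in pts)
    d = n * sxx - sx * sx
    if d == 0:
        return None
    a = (n * sxy - sx * sy) / d
    return (a, (sy - a * sx) / n)

def line_error(model, p):
    a, b = model
    return abs(a * p[0] + b - p[1])
```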
For a rigid-body transformation, only the rotation angle θ and the translations tx, ty in the x and y directions need to be obtained. For a point (x, y) before transformation and the point (u, v) after transformation, only two pairs of corresponding points are needed; the correspondence is:
u = cosθ*x - sinθ*y + tx;
v = sinθ*x + cosθ*y + ty;
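As a sketch, θ, tx and ty can be recovered from two correspondences as described; `rigid_from_two_pairs` is an illustrative helper name:

```python
import math

def rigid_from_two_pairs(p1, p2, q1, q2):
    """Recover (theta, tx, ty) of the rigid map (x, y) -> (u, v) with
    u = cos(t)*x - sin(t)*y + tx, v = sin(t)*x + cos(t)*y + ty,
    from two correspondences p1 -> q1 and p2 -> q2."""
    ax, ay = p2[0] - p1[0], p2[1] - p1[1]   # vector between source points
    bx, by = q2[0] - q1[0], q2[1] - q1[1]   # vector between target points
    theta = math.atan2(by, bx) - math.atan2(ay, ax)
    c, s = math.cos(theta), math.sin(theta)
    tx = q1[0] - (c * p1[0] - s * p1[1])    # translation from the first pair
    ty = q1[1] - (s * p1[0] + c * p1[1])
    return theta, tx, ty
```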
(5) Image filtering
A Kalman filter is used to filter the transformation matrix and eliminate the random motion component.
The jitter of the video can be approximately regarded as satisfying a Gaussian distribution; the 2*3 transformation matrix is reduced to a 1*3 matrix whose input parameters are the displacements in the x and y directions and the rotation angle.
The equation group of the Kalman filter is as follows:
X(k|k-1) = A*X(k-1|k-1) + B*U(k);
P(k|k-1) = A*P(k-1|k-1)*A' + Q;
K(k) = P(k|k-1)*H'/(H*P(k|k-1)*H' + R);
X(k|k) = X(k|k-1) + K(k)*(Z(k) - H*X(k|k-1));
P(k|k) = (I - K(k)*H)*P(k|k-1);
For the above equation group, in order to improve the computation speed without affecting precision, the equations are simplified as follows:
X_ = X;
The current state estimate X_ is calculated; it reduces to being identical to the previous state estimate X.
P_ = P + Q;
The current estimation error covariance P_ is calculated as the previous estimation error covariance P plus the process noise covariance Q.
K = P_/(P_ + R);
The Kalman gain K is calculated, where R is the measurement error covariance.
X = X_ + K*(z - X_);
The state estimate X is updated, where z is the measured value, for the next iteration.
P = (1 - K)*P_;
The estimation error covariance P is updated for the next iteration.
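The simplified scalar equations above can be applied per parameter (x displacement, y displacement, rotation angle); the default Q, R and initial values below are illustrative assumptions:

```python
def kalman_1d(measurements, Q=1e-3, R=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter following the simplified equations above:
    predict X_ = X and P_ = P + Q, then correct with K = P_/(P_+R),
    X = X_ + K*(z - X_), P = (1 - K)*P_."""
    x, p = x0, p0
    out = []
    for z in measurements:
        x_, p_ = x, p + Q            # prediction step
        k = p_ / (p_ + R)            # Kalman gain
        x = x_ + k * (z - x_)        # correct with measurement z
        p = (1.0 - k) * p_           # update error covariance
        out.append(x)
    return out
```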
(6) Affine transformation
Since the feature points only include rotation and translation, the affine transformation is a rigid-body transformation. Multiplying the original image coordinates by the filtered transformation matrix gives the coordinates of the original points in the new image. Here the transformation matrix corresponding to the rigid-body transformation is:
T = [cosθ, -sinθ, tx; sinθ, cosθ, ty];
Assuming the original image coordinates are I and the image coordinates after the affine transformation are I', then:
I' = I*T;
The video after the affine transformation eliminates the jitter and looks noticeably more stable.