CN107907006A - A gun sight with automatic deviation correction and its automatic correction method - Google Patents

A gun sight with automatic deviation correction and its automatic correction method

Info

Publication number
CN107907006A
CN107907006A (application CN201711048552.XA)
Authority
CN
China
Prior art keywords
point
target sheet
image
impact
deviation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711048552.XA
Other languages
Chinese (zh)
Other versions
CN107907006B (en)
Inventor
李丹阳
陈明
龚亚云
粟桑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GUIZHOU JINGHAO TECHNOLOGY Co.,Ltd.
Original Assignee
Beijing Aikelite Optoelectronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Aikelite Optoelectronic Technology Co Ltd filed Critical Beijing Aikelite Optoelectronic Technology Co Ltd
Priority to CN201711048552.XA priority Critical patent/CN107907006B/en
Publication of CN107907006A publication Critical patent/CN107907006A/en
Application granted granted Critical
Publication of CN107907006B publication Critical patent/CN107907006B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F41 WEAPONS
    • F41G WEAPON SIGHTS; AIMING
    • F41G1/00 Sighting devices
    • F41G1/06 Rearsights
    • F41G1/16 Adjusting mechanisms therefor; Mountings therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T7/0006 Industrial image inspection using a design-rule based approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30164 Workpiece; Machine component
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30212 Military

Abstract

The invention belongs to the field of sighting technology and specifically discloses a gun sight with automatic deviation correction and its automatic correction method, applied to a shooting telescopic sight. In the automatic correction method, the optical image obtained by the shooting telescopic sight is converted into an electronic image, the target sheet region is extracted from the electronic image, pixel-level subtraction between the target sheet region and an electronic reference target sheet detects the points of impact, the center point of each point of impact is calculated, the deviation between each point-of-impact center and the center of the target sheet region is obtained, and the deviation is input into the shooting telescopic sight to correct subsequent shots automatically.

Description

A gun sight with automatic deviation correction and its automatic correction method
Technical field
The invention belongs to the field of sighting technology and specifically provides a gun sight with automatic deviation correction and its automatic correction method.
Background technology
Sighting devices in the prior art are divided into mechanical sights and optical sights. A mechanical sight aims mechanically through metal sighting elements such as the rear sight, front sight and sighting notch; an optical sight forms an image with optical lenses and achieves aiming by bringing the target image and the sighting line onto the same focal plane. Existing sighting devices have the following shortcomings and inconveniences: (1) after a sighting device is installed and used for aimed shooting, accurate shots require both a precise aiming position and long shooting experience; for a shooting beginner, aiming faults and a lack of shooting experience reduce shooting accuracy; (2) during shooting, the reticle and the point of impact must be adjusted repeatedly so that the point of impact coincides with the reticle center, and this calibration requires turning knobs or making other mechanical adjustments many times; (3) calibrating the shooting deviation requires a large number of shots and can only be brought close to accurate under the adjustment of a professionally trained shooter; for an ordinary shooter or one lacking shooting experience, deviation adjustment is troublesome and costs considerable time and material, and once the sighting system is well adjusted, dismounting or replacing the gun sight forces the whole calibration procedure to be repeated, which greatly inconveniences the user.
Summary of the invention
In view of the above problems, the present invention starts from the sighting system of the gun itself and, combined with academic research in imaging science and image processing, provides a photoelectric sighting system with an automatic correction method requiring no manual intervention, and the automatic correction method itself.
The present invention is achieved by the following technical solutions:
An automatic correction method, in which the optical image obtained by a shooting telescopic sight is converted into an electronic image, the target sheet region is extracted from the electronic image, pixel-level subtraction between the target sheet region and an electronic reference target sheet detects the points of impact, the center point of each point of impact is calculated, the deviation between each point-of-impact center and the center of the target sheet region is obtained, and the deviation is input into the shooting telescopic sight to correct subsequent shots automatically.
Further, after the target sheet region is extracted, perspective correction is applied to the target sheet region so that its outer contour is corrected to a circle, and point-of-impact detection is performed on the perspective-corrected target sheet region.
Further, extracting the target sheet region from the electronic image specifically comprises: applying a large-scale mean filter to the electronic image to eliminate the grid interference on the target sheet; using the adaptive Otsu threshold segmentation method to divide the electronic image into background and foreground according to its grayscale characteristics; and, from the image divided into foreground and background, determining the minimum enclosing contour with the Freeman chain-code vector tracking method and geometric features to obtain the target sheet region.
Further, performing pixel-level subtraction between the target sheet region and the electronic reference target sheet to detect the points of impact specifically comprises: subtracting the electronic reference target sheet from the target sheet region pixel by pixel to obtain the pixel difference image of the two;
setting a pixel difference threshold for the two frames in the pixel difference image, setting the result to 255 when the pixel difference exceeds the threshold and to 0 when it does not;
tracking the contours of the pixel difference image to obtain the point-of-impact contours, and computing the contour centers to obtain the point-of-impact center points.
Further, the perspective correction specifically comprises: obtaining the edges of the target sheet region with the Canny operator; fitting the largest elliptical contour to the edges with the Hough transform to obtain the equation of the largest ellipse; fitting straight lines to the cross-hair with the Hough transform to obtain their intersections with the topmost, bottommost, rightmost and leftmost points of the largest elliptical contour; combining the topmost, bottommost, rightmost and leftmost points of the largest elliptical contour with the four points at the same positions in the perspective-transform template to compute the perspective transformation matrix; and applying the perspective transform to the target sheet region with this matrix.
Further, the electronic reference target sheet is either the electronic image of a blank target sheet or the target sheet region extracted during a previous (historical) analysis.
Further, the deviation includes a longitudinal deviation and a lateral deviation.
A shooting telescopic sight with automatic deviation correction, the gun sight comprising a field-of-view acquisition unit, a display unit, a photoelectric conversion circuit board and a CPU core board;
the field-of-view acquisition unit captures the optical image of the target sheet, and the photoelectric conversion circuit board converts the optical image into an electronic image;
the CPU core board includes an automatic deviation correction module; the automatic deviation correction module extracts the target sheet region from the electronic image, performs pixel-level subtraction between the target sheet region and the electronic reference target sheet to detect the points of impact, calculates the center point of each point of impact, calculates the deviation between each point-of-impact center and the center of the target sheet region, and inputs the deviation into the shooting telescopic sight to correct subsequent shots automatically; preferably, the automatic deviation correction module uses the automatic correction method described above;
The display unit shows the electronic image and deviation.
Further, the CPU core board is connected to a memory card through an interface board, and the memory card stores the extracted target sheet region and the shooting accuracy.
Further, the CPU core board also includes a video stabilization processing unit, which processes the electronic image to eliminate jitter before it is shown on the display unit.
Advantageous effects of the present invention: the present invention provides an automatic correction method that can be applied in a photoelectric sighting system. The automatic correction method calculates the shooting deviation from historical shooting data and uses this historical deviation to correct subsequent shots automatically, without excessive intervention based on personal experience, enabling quick calibration and significantly improving shooting accuracy.
Brief description of the drawings
Fig. 1 is a flow block diagram of the analysis method of the present invention;
Fig. 2 shows the 8-connected chain code in embodiment 1 of the present invention;
Fig. 3 shows the dot matrix in embodiment 1 of the present invention;
Fig. 4 is a flow block diagram of target sheet region extraction of the present invention;
Fig. 5 is a schematic diagram of non-maximum suppression in embodiment 2 of the present invention;
Fig. 6 is a schematic diagram of the point to be transformed in the rectangular coordinate system in embodiment 2 of the present invention;
Fig. 7 is a schematic diagram of four arbitrary straight lines through the original point in the rectangular coordinate system in embodiment 2 of the present invention;
Fig. 8 is a schematic diagram of the four straight lines through the original point expressed in the polar coordinate system in embodiment 2 of the present invention;
Fig. 9 is a schematic diagram of determining the intersection points of the cross-hair lines L1 and L2 with the ellipse in embodiment 2 of the present invention;
Fig. 10 is a schematic diagram of the perspective transform in embodiment 2 of the present invention;
Fig. 11 is a flow block diagram of target sheet region correction of the present invention;
Fig. 12 is a flow block diagram of the point-of-impact detection method of the present invention;
Fig. 13 is a perspective view of the gun sight in embodiment 1 of the present invention;
Fig. 14 is a left view of the gun sight in embodiment 1 of the present invention;
Fig. 15 is a right view of the gun sight in embodiment 1 of the present invention;
Fig. 16 is a flow chart of the video anti-shake method in embodiment 4 of the present invention.
In the figures: 1. field-of-view acquisition unit, 2. display unit, 3. battery compartment, 4. rotary encoder, 5. focusing knob, 6. external Picatinny rail, 7. key control panel, 8. Picatinny mount, 9. photoelectric conversion board, 10. aiming circuit processing unit, 11. display conversion board, 81. adjusting nut one, 82. adjusting nut two, 101. CPU core board, 102. interface board.
Detailed description of the embodiments
In order to make the purpose, technical scheme and advantages of the present invention clearer, the present invention is explained in further detail below with reference to the accompanying drawings and embodiments. It should be appreciated that the specific embodiments described here are only used to explain the present invention and are not intended to limit it.
On the contrary, the present invention covers any replacement, modification and equivalent method or scheme made within the spirit and scope of the present invention as defined by the claims. Further, in order to give the public a better understanding of the present invention, some specific details are described below; a person skilled in the art can also understand the present invention completely without these details.
Embodiment 1
The present invention also provides a shooting telescopic sight with automatic deviation correction. The gun sight has an automatic deviation correction module, which uses the automatic correction method to correct subsequent shots automatically according to the historical shooting accuracy.
The sighting system can be conveniently mounted on various firearms. The photoelectric sighting system comprises a housing, generally of detachable structure; the interior of the housing is an accommodation space containing a field-of-view acquisition unit, a video processing unit, a display unit, a power supply and an aiming circuit unit;
The structure of the gun sight is shown in Figures 13 to 15.
The field-of-view acquisition unit 1 includes an objective lens assembly or other optical viewing equipment; the objective lens or optical viewing equipment is installed at the front end of the field-of-view acquisition unit 1 and obtains the field-of-view information.
The photoelectric sighting system as a whole is a digital device that can communicate with a smartphone, intelligent terminal, sighting device or circuit; the video information collected by the field-of-view acquisition unit 1 can be sent to the smartphone, intelligent terminal, sighting device or circuit, and displayed by devices such as the smartphone and intelligent terminal.
The field-of-view acquisition unit 1 includes a photoelectric conversion circuit containing a photoelectric conversion board that converts the optical field-of-view signal into an electrical signal. The photoelectric conversion board 9 is the photoelectric conversion circuit board in the field-of-view acquisition unit 1; it converts the optical signal into an electrical signal and at the same time performs automatic exposure, automatic white balance, noise reduction and sharpening on the signal, improving the signal quality and providing high-quality data for imaging.
The aiming circuit processing unit 10, which connects the photoelectric conversion board 9 and the display conversion board 11, includes a CPU core board 101 and an interface board 102. The interface board 102 is connected with the CPU core board 101, specifically through a serial port. The CPU core board 101 is placed between the interface board 102 and the photoelectric conversion board 9; the three boards are parallel to each other, with their faces perpendicular to the field-of-view acquisition unit 1. The photoelectric conversion board 9 transmits the converted video signal to the CPU core board 101 through a parallel data interface for further processing; the interface board 102 communicates with the CPU core board 101 through the serial port and transmits peripheral information such as battery level, attitude information, time, key operations and knob operations to the CPU core board 101 for further processing.
The CPU core board 101 can be connected to a memory card through the interface board 102. In the embodiment of the present invention, looking along the observation direction of the field-of-view acquisition unit 1, a memory card slot is provided on the left side of the CPU core board 101 and the memory card is plugged into this slot; information can be stored on the memory card, and the memory card can automatically upgrade the software program built into the system.
Looking along the observation direction of the field-of-view acquisition unit 1, a USB interface is also provided beside the memory card slot on the left side of the CPU core board 101; through this USB interface the system can be powered from an external supply, or information from the CPU core board 101 can be output.
The photoelectric sighting system further includes multiple sensors, specifically several or all of an acceleration sensor, wind speed and direction sensor, geomagnetic sensor, temperature sensor, barometric pressure sensor and humidity sensor.
A battery compartment 3 is also provided in the housing; a battery assembly is installed in the battery compartment 3, and a spring plate is provided in the compartment to fasten the battery assembly. The battery compartment 3 is arranged in the middle of the housing, and the battery assembly can be replaced by opening the battery compartment cover on the side of the housing.
Circuit solder contacts are provided on the bottom side of the battery compartment 3 and connect with the spring plate inside the compartment. The contacts of the battery compartment 3 are welded to wires with connection terminals and connect to the interface board 102, supplying power to the interface board 102, CPU core board 101, photoelectric conversion board 9, display conversion board 11 and display unit 2.
The display unit 2 is a display screen; it is connected to the interface board 102 through the display conversion board 11 so as to communicate with the CPU core board 101, and the CPU core board transmits the display data to the display unit 2 for display.
The cross division line displayed on the display screen is superimposed on the video information collected by the field-of-view acquisition unit; the cross division line is used for aimed shooting, and the display screen also shows auxiliary shooting information transmitted by the above-mentioned sensors and working indication information;
part of the auxiliary shooting information is used for shooting trajectory calculation, and part of it is used to display alerts to the user.
External buttons are provided on the top of the housing and are connected to the interface board 102 through the key control panel 7 inside the housing; by pressing the external buttons, the device can be switched on and off and the photographing and video recording functions can be triggered.
Looking along the observation direction of the field-of-view acquisition unit 1, a rotary encoder 4 with a push-button function is provided on the right side of the housing near the display unit 2. Inside the housing the rotary encoder 4 is connected in series with an encoder circuit board 41, which is connected to the interface board by a flat cable with connection terminals to transmit the operation data. The rotary encoder controls function switching, adjustment of distance and magnification data, configuration information, entry of deviation data, and so on.
Looking along the observation direction of the field-of-view acquisition unit 1, a focusing knob 5 is provided on the right side of the housing near the field-of-view acquisition unit 1; the focusing knob 5 adjusts the focus of the field-of-view acquisition unit 1 through a spring mechanism so that objects at different distances can be observed clearly.
A Picatinny mount 8 is provided at the bottom of the housing for fixing on the firearm; the Picatinny mount includes adjusting nuts 81 and 82, which are located on the left or right side of the mount.
An external Picatinny rail 6 is provided above the field-of-view acquisition unit 1 on the housing; the external Picatinny rail 6 shares the same optical axis with the field-of-view acquisition unit 1 and is fixed by screws. The external Picatinny rail 6 uses a standard-size design and can mount objects with standard Picatinny connectors, including laser rangefinders, fill lights, laser pointers and the like.
This embodiment also provides an automatic correction method, which comprises the following steps:
(1) Photoelectric conversion: the optical image obtained by the shooting telescopic sight is converted into an electronic image;
(2) Target sheet region extraction: the target sheet region is extracted from the electronic image;
The target sheet region of interest is extracted from the global image while the interference of complex background information is eliminated. The target sheet region extraction method is an object detection method based on adaptive threshold segmentation; it determines the threshold quickly, performs well in various complex situations, and guarantees segmentation quality. The detection method uses the idea of maximizing the between-class variance. Let t be the segmentation threshold between foreground and background, let the proportion of foreground points in the image be w0 with mean gray level u0, and the proportion of background points be w1 with mean gray level u1; let u be the overall mean gray level of the image. Then
u = w0*u0 + w1*u1;
t is traversed from the minimum gray value to the maximum gray value; the value of t for which
g = w0*(u0 - u)² + w1*(u1 - u)²
is maximal is the optimal segmentation threshold.
The target sheet region extraction method follows the flow of Fig. 4 and comprises four steps: image mean filtering, determination of the segmentation threshold with the Otsu method, threshold segmentation to determine the candidate region, and contour tracking to determine and crop the minimum contour.
21) Image mean filtering
A large-scale mean filter is applied to the image to eliminate the grid interference on the target sheet and make the circular target sheet region stand out. Taking a window of size 41*41 as an example, the computation is
g(x, y) = (1/(41*41)) * Σ_{i=-20..20} Σ_{j=-20..20} f(x+i, y+j),
where g(x, y) is the filtered image, f is the input image, x and y are the horizontal and vertical coordinates of the window center point in the image, and i and j are the pixel index offsets relative to x and y, each ranging from -20 to 20.
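As an illustrative sketch only (not the patented implementation), the large-scale mean filter can be reproduced with OpenCV; the 41*41 window size and the file name are taken as assumptions from the example above:

```python
import cv2

# Load the electronic target-sheet image as grayscale (path is illustrative).
img = cv2.imread("target_sheet.png", cv2.IMREAD_GRAYSCALE)

# Large-scale 41x41 mean (box) filter: each output pixel is the average of the
# 41x41 window centred on it, which suppresses the fine grid printed on the
# target sheet and leaves the large circular target region dominant.
smoothed = cv2.blur(img, (41, 41))
```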
22) Determining the segmentation threshold with the Otsu method
Threshold segmentation uses the adaptive Otsu threshold segmentation method (OTSU), which divides the image into background and foreground according to its grayscale characteristics. The larger the between-class variance between background and foreground, the larger the difference between the two parts of the image. For an image I(x, y), let the segmentation threshold between foreground and background be Th, the proportion of foreground pixels in the whole image be w2 with mean gray level G1, the proportion of background pixels be w3 with mean gray level G2, the overall mean gray level of the image be G_Ave, the between-class variance be g, the image size be M*N, the number of pixels below the threshold be N1 and the number of pixels above the threshold be N2. Then:
M*N = N1 + N2;
w2 + w3 = 1;
G_Ave = w2*G1 + w3*G2;
g = w2*(G_Ave - G1)² + w3*(G_Ave - G2)²;
which is equivalent to:
g = w2*w3*(G1 - G2)²;
The segmentation threshold Th that maximizes the between-class variance g is obtained by traversal.
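A minimal sketch of the between-class-variance search written directly from the formulas above, not from the patent's source code; in practice cv2.threshold with THRESH_OTSU would return the same threshold:

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Return the threshold Th that maximises the between-class variance
    g = w2*w3*(G1 - G2)^2, exactly as in the equivalent formula above."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    best_t, best_g = 0, -1.0
    for t in range(1, 256):
        w3 = prob[:t].sum()            # background proportion (pixels below Th)
        w2 = 1.0 - w3                  # foreground proportion
        if w2 == 0 or w3 == 0:
            continue
        g2 = (np.arange(t) * prob[:t]).sum() / w3          # background mean G2
        g1 = (np.arange(t, 256) * prob[t:]).sum() / w2     # foreground mean G1
        g = w2 * w3 * (g1 - g2) ** 2                       # between-class variance
        if g > best_g:
            best_g, best_t = g, t
    return best_t
```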
23) Segmenting the filtered image with the determined segmentation threshold Th
yields the binary image divided into foreground and background.
24) Contour tracking to determine and crop the minimum contour. Contour tracking uses the Freeman chain-code vector tracking method, which describes a curve or boundary by the coordinates of its starting point together with a sequence of edge direction codes. It is a coded representation of the boundary that uses the boundary directions as the coding basis; to simplify the description of the boundary, a boundary point set description is used.
Common chain codes are divided into 4-connected and 8-connected chain codes according to the number of adjacent directions of the central pixel. A 4-connected chain code has 4 adjacent points, above, below, left and right of the central point. An 8-connected chain code adds four diagonal 45° directions; since any pixel has 8 neighbors, the 8-connected chain code matches the actual situation of the pixels and can accurately describe the central pixel and the information of its neighbors. Therefore, this algorithm uses 8-connected chain codes, as shown in Fig. 2.
The 8-connected chain code distribution is shown in Table 1.
Table 1: 8-connected chain code distribution
As shown in Fig. 3, a 9×9 dot matrix containing a line segment is given, with S as the starting point and E as the end point; this line segment can be represented by L = 43322100000066.
With a self-defined FreemanList structure, the method judges whether the chain is connected head to tail, and therefore whether the contour is complete, so as to obtain the target sheet region image and store it.
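For illustration only, the contour-following step can be sketched with OpenCV's boundary-following routine, which traces borders pixel by pixel much like the Freeman chain-code tracking described here; the closed-contour check and the FreemanList structure of the patent are replaced by cv2.findContours, and picking the largest closed contour as the target sheet is an assumption of this sketch:

```python
import cv2
import numpy as np

def extract_target_region(gray: np.ndarray, binary: np.ndarray) -> np.ndarray:
    """Follow the contours of the thresholded image and crop the region
    enclosed by the largest closed contour, assumed here to be the target sheet."""
    # OpenCV 4.x signature: returns (contours, hierarchy).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    return gray[y:y + h, x:x + w]
```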
(3) Point-of-impact detection:
The point-of-impact detection method is a method based on background subtraction. It detects the points of impact in the target sheet region image and determines their center positions. The method saves the previous target surface image and performs a pixel-level subtraction between the current target surface image and the previous one. Because pixel deviations may exist between the two frames after the perspective correction is computed, the images are down-sampled with a step of 2 pixels, taking the minimum gray value within each 2*2 pixel region as the pixel gray value. The pixel difference is then computed, the region where the difference is greater than 0 is obtained, and contour detection is performed on this region to obtain the graphical information of the newly generated points of impact.
The point-of-impact detection method compares the two frames with a pixel-level subtraction; processing is fast, and the positions of the newly generated points of impact can be reliably returned.
The point-of-impact detection method is performed as follows:
31) Storing the original target sheet image
The original target sheet image data is stored and read into the cache as the reference target sheet image. If the shot is a repeat shot at a target for which the accuracy has already been computed, the target sheet region stored during the last accuracy computation is used as the reference target sheet image.
32) The image processed by steps 1)-2) above is subtracted pixel by pixel from the original target sheet image to obtain the positions of the differences.
A pixel difference threshold is set for the two frames: when the pixel difference exceeds the threshold the result is set to 255, otherwise it is set to 0.
The specific threshold can be obtained by tuning; under normal circumstances it lies in the range 100 to 160.
33) Contour tracking is performed on the image produced in step 32) to obtain the point-of-impact contours, and the point-of-impact center points are computed.
The Freeman chain-code contour of each point of impact is averaged to obtain its center point:
Centerx_i = (1/N_i) * Σ x over all contour points (x, y) in Freemanlist_i;
Centery_i = (1/N_i) * Σ y over all contour points (x, y) in Freemanlist_i;
where Centerx_i is the x-axis coordinate of the center of the i-th point of impact, Centery_i is the y-axis coordinate of the center of the i-th point of impact, N_i is the number of contour points, and Freemanlist_i is the contour of the i-th point of impact.
The point-of-impact detection method follows the flow of Fig. 12.
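A compact sketch of steps 31)-33), assuming both frames have already been extracted and perspective-corrected; the default threshold of 130 is simply a value inside the 100-160 range mentioned above, and the function name is illustrative:

```python
import cv2
import numpy as np

def detect_impact_points(current: np.ndarray, reference: np.ndarray,
                         threshold: int = 130):
    """Pixel-level subtraction against the reference target sheet, binarisation
    with the pixel-difference threshold, then contour centres as impact points."""
    diff = cv2.absdiff(current, reference)
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    centers = []
    for c in contours:
        pts = c.reshape(-1, 2)
        # Average of contour point coordinates, as in the Centerx/Centery formulas.
        centers.append((float(pts[:, 0].mean()), float(pts[:, 1].mean())))
    return centers
```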
(4) Deviation calculation:
The lateral and longitudinal deviations between each point of impact and the target sheet center are detected to obtain the deviation set.
The target sheet region and the electronic reference target sheet are subtracted pixel by pixel to detect the points of impact, the center point of each point of impact is calculated, the deviation between each point-of-impact center and the center of the target sheet region is further calculated, and the deviation is input into the shooting telescopic sight to correct subsequent shots automatically.
Embodiment 2
This embodiment is substantially the same as embodiment 1; the difference is that a target sheet region correction step is included after the target sheet region is extracted.
Target sheet region correction:
Because of the way the target sheet is pasted and the angle deviation between the sight and the target sheet when the image is acquired, the extracted effective target sheet region may be tilted, so that the acquired image is not circular. To ensure that the computed point-of-impact deviation has high precision, perspective correction is applied to the target sheet image so that its outer contour is corrected to a regular circle. The target sheet region correction method is a target sheet image correction method based on ellipse endpoints; it obtains the edges of the image with the Canny operator. Because the target sheet image almost occupies the whole frame and the parameter variation range is small, the largest elliptical contour is fitted with the Hough transform to obtain the equation of the largest ellipse. The target sheet image contains a cross-hair with several intersection points with the ellipse; these intersection points correspond to the topmost, bottommost, rightmost and leftmost points of the largest elliptical contour in the standard template. The straight lines of the cross-hair are fitted with the Hough transform. In the input sub-image, the set of intersection points between the cross-hair and the ellipse is obtained and, together with the point set at the same positions in the template, is used to compute the perspective transformation matrix.
With this target sheet region correction method, the parameters of the outermost elliptical contour can be obtained quickly with the Hough transform; at the same time, the Hough line detection algorithm in polar coordinates can also obtain the line parameters quickly, so the method can correct the target sheet region quickly.
The target sheet region correction method is performed as follows:
51) Edge detection with the Canny operator
This step comprises five parts: RGB-to-grayscale conversion, Gaussian filtering to suppress noise, gradient calculation with first-order partial derivatives, non-maximum suppression, and double-threshold detection with edge connection.
RGB to grayscale
Grayscale conversion is performed with the standard RGB-to-gray proportions, converting the RGB image to a grayscale image (the three primaries R, G, B are converted to the gray value Gray),
performed as follows:
Gray = 0.299R + 0.587G + 0.114B;
Gaussian filtering of the image
The converted grayscale image is Gaussian filtered to suppress the noise of the converted image. Let σ be the standard deviation; according to the minimal-Gaussian-loss principle, the template size is set to (3σ+1)*(3σ+1). Let x be the horizontal offset from the template center, y the vertical offset from the template center, and K the weight of the Gaussian filter template:
K(x, y) = 1/(2πσ²) * exp(-(x² + y²)/(2σ²)).
The gradient magnitude and direction are calculated with finite differences of the first-order partial derivatives, using 2*2 convolution operators:
P[i, j] = (f[i, j+1] - f[i, j] + f[i+1, j+1] - f[i+1, j]) / 2;
Q[i, j] = (f[i, j] - f[i+1, j] + f[i, j+1] - f[i+1, j+1]) / 2;
M[i, j] = sqrt(P[i, j]² + Q[i, j]²);
θ[i, j] = arctan(Q[i, j] / P[i, j]).
Non-maximum suppression
Non-maximum suppression means finding the local maxima of the pixel gradient magnitude and setting the gray values of non-maximum points to 0, thereby rejecting most non-edge points.
As can be seen from Fig. 5, to perform non-maximum suppression it must first be determined whether the gradient magnitude at pixel C is the maximum within its 8-neighborhood. The line in Fig. 5 is the gradient direction of point C, so the local maximum must lie on this line; that is, besides point C, the values at the two intersection points dTmp1 and dTmp2 of the gradient direction with the neighborhood may also be local maxima. Therefore, comparing the value at C with the values at these two points determines whether C is the local maximum point in its neighborhood. If the value at C is smaller than either of these two points, C is not a local maximum and can be excluded as an edge point.
Double-threshold detection and edge connection
The double-threshold method further reduces the number of non-edge points. A low threshold parameter Lthreshold and a high threshold parameter Hthreshold are set to form the comparison conditions: values at or above the high threshold are uniformly set to 255 and kept, values between the low and high thresholds are set to 128 and kept, and the remaining values are regarded as non-edge data and replaced by 0.
Freeman chain codes are then used again for edge tracking, and edge segments that are too short are filtered out.
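A sketch of the five-part edge-detection chain above; OpenCV's Canny bundles the gradient, non-maximum-suppression and double-threshold stages internally, so only the grayscale conversion and Gaussian smoothing are written out, and the kernel size, σ and threshold values are placeholders rather than the patent's parameters:

```python
import cv2

def target_edges(bgr):
    """Canny edge map of the target-sheet region (illustrative parameters)."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)           # RGB/BGR -> gray
    blurred = cv2.GaussianBlur(gray, (5, 5), sigmaX=1.4)   # suppress noise
    # The low/high thresholds play the role of Lthreshold/Hthreshold above.
    return cv2.Canny(blurred, 60, 150)
```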
52) Fitting the cross-hair lines with the Hough transform in polar coordinates to obtain the line equations
The Hough transform is a method for detecting simple geometric shapes such as straight lines and circles in image processing. In the rectangular coordinate system a straight line can be expressed as y = kx + b, so every point (x, y) on the line maps to a line in the k-b space, and the curves of all non-zero pixels lying on one straight line in image space intersect at a single point in the k-b parameter space. A local peak in the parameter space therefore corresponds to a straight line in the original image space. Because the slope can be infinitely large or infinitely small, the line detection is carried out in polar coordinate space instead. In the polar coordinate system a straight line can be expressed as:
ρ = x*cosθ + y*sinθ;
With the above formula, and as shown in Fig. 7, the parameter ρ is the distance from the coordinate origin to the line, and each pair of parameters ρ and θ uniquely determines a line. It is only necessary to search the parameter space for local maxima to obtain the line parameter set corresponding to those maxima. After the corresponding line parameter set is obtained, non-maximum suppression is applied and the maximum parameters are retained.
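As an illustrative sketch, cv2.HoughLines returns (ρ, θ) pairs in exactly the ρ = x·cosθ + y·sinθ parameterisation used above; keeping the two strongest accumulator peaks stands in for the non-maximum suppression described in the text, and the vote threshold of 120 is an assumption:

```python
import cv2
import numpy as np

def cross_hair_lines(edges: np.ndarray):
    """Return the (rho, theta) parameters of the two strongest straight lines,
    taken here as the cross-hair lines L1 and L2."""
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=120)
    if lines is None:
        return []
    # cv2.HoughLines returns candidates sorted by accumulator votes.
    return [tuple(l[0]) for l in lines[:2]]
```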
53) Calculating the four intersection points of the cross-hair with the ellipse
With the equations of the lines L1 and L2 known, it is only necessary to search along the line directions for the intersections with the elliptical outer contour to obtain the four intersection coordinates (a, b), (c, d), (e, f), (g, h), as shown in Fig. 9.
54) Calculating the perspective transformation matrix parameters and correcting the image
The coordinates of the four intersection points and the four points defined in the template form four point pairs, with which perspective correction is applied to the target sheet region.
A perspective transform projects the image onto a new view plane. Written with homogeneous coordinates, the general transformation formula is
[x, y, w] = [u, v, w0] * T,
where T is the 3*3 perspective transformation matrix, u and v are the coordinates of the original image, x' and y' are the corresponding coordinates of the transformed image, and the cofactors w0 and w are added to form the three-dimensional matrix product, with w0 taken as 1 and w being the value after the transform. Then
x' = x/w;
y' = y/w.
Therefore, given the four corresponding point pairs of the perspective transform, the perspective transformation matrix can be solved; once it is obtained, the perspective transform can be applied to the image or to individual pixels, as shown in Fig. 10:
To simplify the calculation, let (a1, a2, a3, a4, a5, a6, a7, a8) be the eight parameters of the perspective transform; the above formula is then equivalent to
x' = (a1*x + a2*y + a3) / (a7*x + a8*y + 1);
y' = (a4*x + a5*y + a6) / (a7*x + a8*y + 1);
where (x, y) is the image coordinate to be calibrated and (x', y') is the image coordinate after calibration, i.e. the template coordinate. This is equivalent to:
a1*x + a2*y + a3 - a7*x*x' - a8*y*x' - x' = 0;
a4*x + a5*y + a6 - a7*x*y' - a8*y*y' - y' = 0;
The above formulas are converted into matrix form. Since there are 8 parameters and each point pair yields 2 equations, only 4 point pairs are needed to solve the 8 parameters. Let (xi, yi) be the pixel coordinates of the image to be calibrated and (x'i, y'i) the pixel coordinates of the template image, i = 1, 2, 3, 4. Collecting the eight equations, with A the 8*8 coefficient matrix built from the four point pairs, X = (a1, ..., a8) the parameter vector and b the vector of template coordinates, the system reads
AX = b;
and solving this non-homogeneous linear system gives
X = A⁻¹b;
The corrected target sheet region is obtained and stored; the corrected target sheet region image is used in subsequent point-of-impact detection.
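A hedged sketch of step 54): given the four cross-hair/ellipse intersections and the four matching template points, the eight parameters a1..a8 can be solved exactly as in AX = b; OpenCV's getPerspectiveTransform performs that solve internally and is used here as a stand-in, with the output size as an assumption:

```python
import cv2
import numpy as np

def correct_target_sheet(img, intersections, template_pts, size=(800, 800)):
    """intersections: 4 cross-hair/ellipse intersection points in the input image.
    template_pts: the 4 points at the same positions in the standard template.
    Returns the perspective-corrected target-sheet image."""
    src = np.float32(intersections)
    dst = np.float32(template_pts)
    T = cv2.getPerspectiveTransform(src, dst)   # solves the 8 parameters (AX = b)
    return cv2.warpPerspective(img, T, size)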
Embodiment 3
The gun sight of this embodiment is substantially the same as that of embodiment 1; the difference is that, to improve the video display quality, this embodiment adds a video stabilization processing unit. The video anti-shake processing method contained in this unit preprocesses the image data obtained on the CPU core board and performs feature point detection, feature point tracking, homography matrix calculation, image filtering and affine transformation, so that the processed image sequence can be displayed smoothly. The flow chart of the video anti-shake processing method is shown in Fig. 16.
The video anti-shake processing method includes previous-frame feature point detection, current-frame feature point tracking, homography matrix calculation, image filtering and affine transformation. Previous-frame feature point detection extracts feature points with the FAST corner detection method as the template for feature point tracking of the next frame of data; the current frame tracks the feature points of the previous frame with the pyramid Lucas-Kanade optical flow method, and the two best feature points are chosen from the tracked feature points with the RANSAC algorithm. Assuming these feature points undergo only rotation and translation, the affine transform of the homography matrix is a rigid transform; the translation distance and rotation angle are calculated from the two groups of points to obtain the homography matrix of the affine transform. The transformation matrix is then filtered with a Kalman filter to eliminate the random motion component. Finally, multiplying the original image coordinates by the filtered transformation matrix gives the coordinates of the original points in the new image, realizing the affine transform and eliminating the video jitter.
In addition, for certain gun sight models the obtained image is not in RGB format; before previous-frame feature point detection, the image information must be preprocessed and converted to RGB format, simplifying the image information provided to the subsequent image processing modules.
The specific method is as follows:
(1) Image preprocessing
For certain gun sight models the obtained image is not in RGB format; the image information must be preprocessed and converted to RGB format, simplifying the image information provided to the subsequent image processing modules. Image preprocessing converts the format of the input YUV image and computes the RGB image and grayscale image required by the algorithm.
The conversion formulas are as follows:
R = Y + 1.140*V;
G = Y - 0.394*U - 0.581*V;
B = Y + 2.032*U.
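The conversion formulas above translate directly into a per-pixel NumPy sketch; it assumes a full-resolution YUV frame whose U and V planes are already centred on zero (real sights often deliver a subsampled format such as NV12 with a 128 offset, which would need extra steps), so treat it as illustrative only:

```python
import numpy as np

def yuv_to_rgb(y, u, v):
    """Apply R = Y + 1.140V, G = Y - 0.394U - 0.581V, B = Y + 2.032U channel-wise."""
    y = y.astype(np.float32)
    u = u.astype(np.float32)
    v = v.astype(np.float32)
    r = y + 1.140 * v
    g = y - 0.394 * u - 0.581 * v
    b = y + 2.032 * u
    return np.clip(np.dstack([r, g, b]), 0, 255).astype(np.uint8)
```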
(2) Previous-frame feature point detection
Previous-frame feature point detection provides the template for feature point tracking of the next frame of data; the detected feature points are recorded as a coordinate set. Feature point detection uses Shi-Tomasi corner detection: the autocorrelation matrix of second derivatives of the image intensity is computed and its eigenvalues are calculated; if the smaller of the two eigenvalues is greater than a threshold, a strong corner is obtained. The computation of the second-derivative matrix is accelerated with the Sobel operator.
(3) Current-frame feature point tracking
The feature points of the previous frame are the template, and tracking detection obtains the feature point data of the current frame using the pyramid Lucas-Kanade optical flow method. Assume a pixel (x, y) on the image has brightness E(x, y, t) at time t, and let u(x, y) and v(x, y) denote the horizontal and vertical components of the optical flow at this point.
After a time interval Δt, the brightness of the corresponding point is E(x+Δx, y+Δy, t+Δt); when Δt tends to 0, the brightness can be considered constant, so the brightness at time t satisfies:
E(x, y, t) = E(x+Δx, y+Δy, t+Δt);
When the brightness of the point changes, expanding the brightness of the moved point by the Taylor formula,
ignoring the second-order infinitesimal terms and letting Δt tend to 0 gives:
Ex*u + Ey*v + Et = 0;
where w = (u, v), and
Ex, Ey, Et denote the gradients of the pixel gray level along the x, y and t directions of the image.
For large and incoherent motion the tracking effect of optical flow is not ideal; in that case an image pyramid is used: the optical flow is computed at the top level of the pyramid, the resulting motion estimate is used as the starting point for the next pyramid level, and this process is repeated until the bottom of the pyramid is reached.
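An illustrative pairing of the previous-frame corner detection with pyramid Lucas-Kanade tracking of the current frame; cv2.goodFeaturesToTrack implements the Shi-Tomasi criterion mentioned above and cv2.calcOpticalFlowPyrLK the pyramidal optical flow, with the corner count, quality level and minimum distance chosen only for this sketch:

```python
import cv2
import numpy as np

def track_features(prev_gray: np.ndarray, curr_gray: np.ndarray):
    """Detect Shi-Tomasi corners in the previous frame and track them into the
    current frame with pyramid Lucas-Kanade optical flow."""
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=10)
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None)
    ok = status.ravel() == 1          # keep only successfully tracked points
    return prev_pts[ok], curr_pts[ok]
```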
(4) Homography matrix calculation
The homography matrix of the affine transform is calculated. The two best feature points are chosen from the tracked feature points with the RANSAC algorithm; assuming the feature points undergo only rotation and translation, the affine transform of the homography matrix is a rigid transform, and the translation distance and rotation angle can be calculated from the two groups of points.
The input of the RANSAC algorithm often contains considerable noise or invalid points. RANSAC reaches its goal by choosing a random subset of the data; the selected subset is assumed to consist of inliers, which is verified as follows:
1) There is a model adapted to the assumed inliers, i.e. all unknown parameters can be computed from the assumed inliers.
2) If enough points are classified as inliers of the hypothesis, the estimated model is reasonable enough.
3) The model is re-estimated with all assumed inliers.
4) The model is evaluated by estimating the error rate of the inliers with respect to the model.
This process is repeated a fixed number of times; each produced model is compared with the existing best model, and if it has more inliers it is adopted, otherwise it is rejected.
For a rigid transform, only the rotation angle θ and the translations tx, ty in the x and y directions need to be found. For a point (x, y) before the transform and the point (u, v) after the transform, only two pairs of corresponding points are needed, and the correspondence is:
u = cosθ*x - sinθ*y + tx;
v = sinθ*x + cosθ*y + ty;
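A sketch of the rigid-motion estimate between the two tracked point sets; cv2.estimateAffinePartial2D restricts the model to rotation, translation (and uniform scale) and uses RANSAC internally to reject outliers, standing in for the inlier/outlier procedure described above, with an assumed reprojection threshold:

```python
import cv2
import numpy as np

def rigid_motion(prev_pts: np.ndarray, curr_pts: np.ndarray):
    """Return (dx, dy, dtheta) of the dominant rigid motion between two frames."""
    M, inliers = cv2.estimateAffinePartial2D(prev_pts, curr_pts,
                                             method=cv2.RANSAC,
                                             ransacReprojThreshold=3.0)
    dx, dy = M[0, 2], M[1, 2]
    dtheta = np.arctan2(M[1, 0], M[0, 0])   # rotation angle from the cos/sin terms
    return dx, dy, dtheta
```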
(5) Image filtering
The transformation matrix is filtered with a Kalman filter to eliminate the random motion component.
The jitter of the video can be approximately regarded as Gaussian distributed, so the 2*3 transformation matrix is reduced to a 1*3 vector,
whose input parameters are the x and y displacements and the rotation angle.
The Kalman filter equation system is as follows:
X(k|k-1) = A*X(k-1|k-1) + B*U(k);
P(k|k-1) = A*P(k-1|k-1)*A' + Q;
K(k) = P(k|k-1)*H' / (H*P(k|k-1)*H' + R);
X(k|k) = X(k|k-1) + K(k)*(Z(k) - H*X(k|k-1));
P(k|k) = (I - K(k)*H)*P(k|k-1);
To improve computation speed without affecting precision, the above equation system is simplified as follows:
X_ = X;
the current state estimate X_ is computed and reduces to the previous state estimate X;
P_ = P + Q;
the current estimation error covariance P_ is the previous estimation error covariance P plus the process noise covariance Q;
K = P_ / (P_ + R);
the Kalman gain K is computed, where R is the measurement error covariance;
X = X_ + K*(z - X_);
the state estimate X is updated, where z is the measured value, for the next iteration;
P = (1 - K)*P_;
the estimation error covariance P is updated for the next iteration.
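The simplified scalar filter above, applied independently to the x displacement, y displacement and rotation angle; the class name and the values of the process and measurement noise covariances Q and R are placeholders for this sketch:

```python
class ScalarKalman:
    """Simplified 1-D Kalman filter: X_ = X, P_ = P + Q, K = P_/(P_+R),
    X = X_ + K*(z - X_), P = (1 - K)*P_."""
    def __init__(self, q=4e-3, r=0.25):
        self.x, self.p, self.q, self.r = 0.0, 1.0, q, r

    def update(self, z: float) -> float:
        x_pred = self.x                 # state prediction (identity model)
        p_pred = self.p + self.q        # predicted estimation error covariance
        k = p_pred / (p_pred + self.r)  # Kalman gain
        self.x = x_pred + k * (z - x_pred)
        self.p = (1.0 - k) * p_pred
        return self.x

# One filter per trajectory component (dx, dy, dtheta), matching the 1*3 vector above.
filters = [ScalarKalman() for _ in range(3)]
```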
(6) Affine transformation
Since the feature points involve only rotation and translation, the affine transform is a rigid transform. Multiplying the original image coordinates by the filtered transformation matrix gives the coordinates of the original points in the new image. Here the transformation matrix corresponding to the rigid transform is
T = [cosθ, -sinθ, tx; sinθ, cosθ, ty];
Assuming the original image coordinate is I and the image coordinate after the affine transform is I', then
I' = I*T;
The video after the affine transform has the jitter removed and looks noticeably more stable.
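Finally, an illustrative application of the smoothed rigid motion to the current frame; cv2.warpAffine multiplies every pixel coordinate by the 2*3 rigid-transform matrix built from the filtered (dx, dy, dtheta), and the function name is an assumption of this sketch:

```python
import cv2
import numpy as np

def stabilize_frame(frame, dx, dy, dtheta):
    """Warp the frame with the filtered rigid transform to cancel the jitter."""
    h, w = frame.shape[:2]
    T = np.float32([[np.cos(dtheta), -np.sin(dtheta), dx],
                    [np.sin(dtheta),  np.cos(dtheta), dy]])
    return cv2.warpAffine(frame, T, (w, h))
```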

Claims (10)

1. An automatic correction method, applied to a shooting telescopic sight which aims at a target and acquires the optical image of the target, characterised in that the automatic correction method converts the optical image obtained by the shooting telescopic sight into an electronic image, extracts the target sheet region from the electronic image, performs pixel-level subtraction between the target sheet region and an electronic reference target sheet to detect the points of impact, calculates the center point of each point of impact, obtains the deviation between each point-of-impact center and the center of the target sheet region, and inputs the deviation into the shooting telescopic sight to correct subsequent shots automatically.
2. The automatic correction method of claim 1, characterised in that after the target sheet region is extracted, perspective correction is applied to the target sheet region so that its outer contour is corrected to a circle, and point-of-impact detection is performed on the perspective-corrected target sheet region.
3. The automatic correction method of claim 1, characterised in that extracting the target sheet region from the electronic image specifically comprises: applying a large-scale mean filter to the electronic image to eliminate the grid interference on the target sheet; using the adaptive Otsu threshold segmentation method to divide the electronic image into background and foreground according to its grayscale characteristics; and, from the image divided into foreground and background, determining the minimum enclosing contour with the Freeman chain-code vector tracking method and geometric features to obtain the target sheet region.
4. The automatic correction method of claim 1, characterised in that performing pixel-level subtraction between the target sheet region and the electronic reference target sheet to detect the points of impact specifically comprises: subtracting the electronic reference target sheet from the target sheet region pixel by pixel to obtain the pixel difference image of the two;
setting a pixel difference threshold for the two frames in the pixel difference image, setting the result to 255 when the pixel difference exceeds the threshold and to 0 when it does not;
tracking the contours of the pixel difference image to obtain the point-of-impact contours, and computing the contour centers to obtain the point-of-impact center points.
5. The automatic correction method of claim 2, characterised in that the perspective correction specifically comprises: obtaining the edges of the target sheet region with the Canny operator; fitting the largest elliptical contour to the edges with the Hough transform to obtain the equation of the largest ellipse; fitting straight lines to the cross-hair with the Hough transform to obtain their intersections with the topmost, bottommost, rightmost and leftmost points of the largest elliptical contour; combining the topmost, bottommost, rightmost and leftmost points of the largest elliptical contour with the four points at the same positions in the perspective-transform template to compute the perspective transformation matrix; and applying the perspective transform to the target sheet region with this matrix.
6. The automatic correction method of claim 1, characterised in that the electronic reference target sheet is the electronic image of a blank target sheet or the target sheet region extracted during a previous (historical) analysis.
7. The automatic correction method of claim 1, characterised in that the deviation includes a longitudinal deviation and a lateral deviation.
8. A shooting telescopic sight with automatic deviation correction, characterised in that the gun sight comprises a field-of-view acquisition unit, a display unit, a photoelectric conversion circuit board and a CPU core board;
the field-of-view acquisition unit captures the optical image of the target sheet, and the photoelectric conversion circuit board converts the optical image into an electronic image;
the CPU core board includes an automatic deviation correction module; the automatic deviation correction module extracts the target sheet region from the electronic image, performs pixel-level subtraction between the target sheet region and the electronic reference target sheet to detect the points of impact, calculates the center point of each point of impact, calculates the deviation between each point-of-impact center and the center of the target sheet region, and inputs the deviation into the shooting telescopic sight to correct subsequent shots automatically;
the display unit displays the electronic image and the deviation.
9. The gun sight of claim 8, characterised in that the CPU core board is connected to a memory card through an interface board, and the memory card stores the extracted target sheet region and the shooting accuracy.
10. The gun sight of claim 8, characterised in that the CPU core board further includes a video stabilization processing unit, which processes the electronic image to eliminate jitter before it is shown on the display unit.
CN201711048552.XA 2017-10-31 2017-10-31 A kind of gun sight and its automatic correction method of automatic deviation correction Active CN107907006B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711048552.XA CN107907006B (en) 2017-10-31 2017-10-31 A kind of gun sight and its automatic correction method of automatic deviation correction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711048552.XA CN107907006B (en) 2017-10-31 2017-10-31 A kind of gun sight and its automatic correction method of automatic deviation correction

Publications (2)

Publication Number Publication Date
CN107907006A true CN107907006A (en) 2018-04-13
CN107907006B CN107907006B (en) 2019-09-06

Family

ID=61842165

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711048552.XA Active CN107907006B (en) 2017-10-31 2017-10-31 A kind of gun sight and its automatic correction method of automatic deviation correction

Country Status (1)

Country Link
CN (1) CN107907006B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110081778A (en) * 2019-05-07 2019-08-02 武汉高德红外股份有限公司 It is a kind of based on image procossing without target school rifle method
CN117146739A (en) * 2023-10-31 2023-12-01 南通蓬盛机械有限公司 Angle measurement verification method and system for optical sighting telescope

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105300181A (en) * 2015-10-30 2016-02-03 北京艾克利特光电科技有限公司 Accurate photoelectric sighting device capable of prompting shooting in advance
CN105300186A (en) * 2015-10-30 2016-02-03 北京艾克利特光电科技有限公司 Integrated and accurate photoelectric sighting system convenient to calibrate
US20160216070A1 (en) * 2014-12-13 2016-07-28 Jack Hancosky Supplementary sight aid adaptable to existing and new sight aid

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160216070A1 (en) * 2014-12-13 2016-07-28 Jack Hancosky Supplementary sight aid adaptable to existing and new sight aid
CN105300181A (en) * 2015-10-30 2016-02-03 北京艾克利特光电科技有限公司 Accurate photoelectric sighting device capable of prompting shooting in advance
CN105300186A (en) * 2015-10-30 2016-02-03 北京艾克利特光电科技有限公司 Integrated and accurate photoelectric sighting system convenient to calibrate

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG QIANG ET AL.: "Design of a laser target-scoring instrument based on image processing", Modern Electronics Technique *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110081778A (en) * 2019-05-07 2019-08-02 武汉高德红外股份有限公司 It is a kind of based on image procossing without target school rifle method
CN110081778B (en) * 2019-05-07 2021-07-20 武汉高德红外股份有限公司 Image processing-based target-free gun calibration method
CN117146739A (en) * 2023-10-31 2023-12-01 南通蓬盛机械有限公司 Angle measurement verification method and system for optical sighting telescope
CN117146739B (en) * 2023-10-31 2024-01-23 南通蓬盛机械有限公司 Angle measurement verification method and system for optical sighting telescope

Also Published As

Publication number Publication date
CN107907006B (en) 2019-09-06

Similar Documents

Publication Publication Date Title
US10378857B2 (en) Automatic deviation correction method
CN111179334B (en) Sea surface small-area oil spill area detection system and detection method based on multi-sensor fusion
CN107894189B (en) A kind of photoelectric sighting system and its method for automatic tracking of target point automatic tracing
CN109559355B (en) Multi-camera global calibration device and method without public view field based on camera set
CN108171787A (en) A kind of three-dimensional rebuilding method based on the detection of ORB features
CN111255636A (en) Method and device for determining tower clearance of wind generating set
CN111274959B (en) Oil filling taper sleeve pose accurate measurement method based on variable field angle
CN105279771B (en) A kind of moving target detecting method based on the modeling of online dynamic background in video
CN103886107A (en) Robot locating and map building system based on ceiling image information
CN206611521U (en) A kind of vehicle environment identifying system and omni-directional visual module based on multisensor
CN110634138A (en) Bridge deformation monitoring method, device and equipment based on visual perception
CN108469254A (en) A kind of more visual measuring system overall calibration methods of big visual field being suitable for looking up and overlooking pose
Cvišić et al. Recalibrating the KITTI dataset camera setup for improved odometry accuracy
CN107291088A (en) A kind of underwater robot image recognition and Target Tracking System
CN105513074B (en) A kind of scaling method of shuttlecock robot camera and vehicle body to world coordinate system
CN107907006B (en) A kind of gun sight and its automatic correction method of automatic deviation correction
CN109883400B (en) Automatic target detection and space positioning method for fixed station based on YOLO-SITCOL
CN111047636A (en) Obstacle avoidance system and method based on active infrared binocular vision
CN114972423A (en) Aerial video moving target detection method and system
CN110889874A (en) Error evaluation method for calibration result of binocular camera
CN113610896B (en) Method and system for measuring target advance quantity in simple fire control sighting device
CN107121262A (en) Background schlieren transient flow field shows system and the flow field measurement method based on the system
CN113916136A (en) High-rise structure dynamic displacement measurement method based on unmanned aerial vehicle aerial photography
CN107703619A (en) Automatically analyze the electronics Target observator and its analysis method of fire accuracy
CN107943083A (en) A kind of flight system of precise control

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210708

Address after: 550002 aluminum and aluminum processing park, Baiyun District, Guiyang City, Guizhou Province

Patentee after: GUIZHOU JINGHAO TECHNOLOGY Co.,Ltd.

Address before: 100080 3rd floor, building 1, 66 Zhongguancun East Road, Haidian District, Beijing

Patentee before: BEIJING AIKELITE OPTOELECTRONIC TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Kind of Automatic Correction Sight and Its Automatic Correction Method

Effective date of registration: 20230803

Granted publication date: 20190906

Pledgee: Guiyang Rural Commercial Bank Co.,Ltd. science and technology sub branch

Pledgor: GUIZHOU JINGHAO TECHNOLOGY Co.,Ltd.

Registration number: Y2023520000039

PE01 Entry into force of the registration of the contract for pledge of patent right