CN107894189A - Photoelectric sighting system with automatic target point tracking and automatic tracking method therefor - Google Patents

Photoelectric sighting system with automatic target point tracking and automatic tracking method therefor

Info

Publication number
CN107894189A
Authority
CN
China
Prior art keywords: target point, feature, image, point, target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711047978.3A
Other languages
Chinese (zh)
Other versions
CN107894189B (en)
Inventor
李丹阳
陈明
龚亚云
粟桑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GUIZHOU JINGHAO TECHNOLOGY Co.,Ltd.
Original Assignee
Beijing Aikelite Optoelectronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Aikelite Optoelectronic Technology Co Ltd
Priority to CN201711047978.3A
Publication of CN107894189A
Application granted
Publication of CN107894189B
Legal status: Active
Anticipated expiration

Classifications

    • F - MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F41 - WEAPONS
    • F41G - WEAPON SIGHTS; AIMING
    • F41G 1/00 - Sighting devices
    • F41G 1/06 - Rearsights
    • F41G 1/14 - Rearsights with lens
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06K - GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00 - Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10 - Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/10544 - ... by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K 7/10821 - ... further details of bar or optical code scanning devices
    • G06K 7/10881 - ... constructional details of hand-held scanners
    • G06K 2207/00 - Other aspects
    • G06K 2207/1011 - Aiming

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Optics & Photonics (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention belongs to the field of photoelectric sighting technology, and in particular to an automatic target point tracking system and method applied to a photoelectric sighting system. In the tracking method, the target object in the image collected by the sighting system is marked to obtain a target point; feature extraction and feature description are performed on the image containing the target point, while a ridge regression classifier is established and trained on the extracted features. After training, the regression score of the ridge regression classifier is computed for the features of each image acquired in real time; when the regression score meets the requirement, the target point is judged to be present, automatic recognition of the target point is achieved, and the bearing of the target point relative to the display unit region is shown. If the regression score does not meet the requirement, the target is considered to have disappeared; taking the position where the target disappeared as the reference point, a sliding-window search is performed near that position and regression scores are computed until a score meets the requirement, at which point the target is judged to have returned to the field of view.

Description

Photoelectric sighting system with automatic target point tracking and automatic tracking method therefor
Technical field
The present invention belongs to the field of photoelectric sighting technology, and in particular relates to a photoelectric sighting system with automatic target point tracking and to its automatic tracking method.
Background art
Sights in the prior art fall into mechanical sights and optical sights. A mechanical sight aims mechanically through iron sights such as the rear sight, the front sight and the sighting notch; an optical sight forms an image through optical lenses and achieves aiming by bringing the target image and the line of sight into coincidence on the same focal plane. Existing sights have the following shortcomings and inconveniences: (1) after a sight is mounted and used for aimed shooting, an accurate shot requires both a precise aiming position and long-term shooting experience; for a shooting beginner, aiming faults and the lack of experience degrade shooting accuracy; (2) during shooting, the reticle and the point of impact must be adjusted repeatedly to bring the point of impact into coincidence with the reticle center, which requires turning knobs or making other mechanical adjustments many times; (3) shooting requires keeping the reticle on the target, and when the shooter's hand shakes or the target moves, the target must be found again; in most cases the direction of the target is simply unknown, and because the field of view of a gun sight is small, finding the target again is extremely difficult, which brings great inconvenience to shooting.
Summary of the invention
In view of the above problems, the present invention starts from the sighting system of a gun and, combining academic research in image processing, provides an integrated, accurate photoelectric sighting system with automatic target point tracking, and an automatic target point tracking method for a photoelectric sighting system.
The present invention is achieved by the following technical solutions:
An automatic target point tracking method, applied to a photoelectric sighting system that aims at a target object and collects an optical image of it. The automatic target point tracking method comprises target point marking and target point position tracking. Target point marking marks the target object in the first frame image collected by the photoelectric sighting system to obtain the target point; target point position tracking tracks the position of the target point in the second and later frame images collected by the photoelectric sighting system. Position tracking comprises two steps, in-view tracking and re-detection: in-view tracking handles the target point while it is inside the field-of-view unit, and re-detection searches for the target point again when it is no longer inside the field-of-view unit.
Further, the target point is marked after a target point marking instruction is started: matrix analysis is performed on the first frame of image data collected by the photoelectric sighting system; according to the coordinate position, the corresponding pixel region in the image matrix is found and the color values of that region are changed, thereby marking the target point.
Further, in-view tracking performs feature extraction and feature description on the image containing the target point, while a ridge regression classifier is established and trained on the extracted features;
the regression score of the trained ridge regression classifier is computed for the features of each image acquired in real time; when the regression score meets the requirement, the target point is judged present, automatic recognition of the target point is achieved, and the bearing of the target point relative to the display unit region is shown.
Further, in re-detection the target is considered to have disappeared if the regression score does not meet the requirement; taking the position where the target object disappeared as the reference point, a sliding-window search is performed near the disappearance position and regression scores are computed until a score meets the requirement, at which point the target object is judged to have returned to the field of view.
Further, the fast hog feature point detection method performs feature extraction and feature description on the image containing the target point, specifically: the gradient values in the x and y directions of every pixel in the image are computed, along with the gradient direction of every pixel; the gradient magnitude is computed separately on the R, G and B channels of the image and the gradient direction on the channel with the largest magnitude is taken; combining unsigned and signed gradients, the image is divided into units of 4*4 pixels each, giving n units; the signed and unsigned gradient directions within each 4*4 unit are accumulated to obtain the gradient orientation histogram D of that unit; the histograms are normalized, and the four gradient features are concatenated to obtain the feature statement of each unit.
Further, color-naming feature point detection transforms the RGB space of the image into a color attribute space, an 11-dimensional color representation; color-naming feature point detection and fast hog feature point detection have no overlap and complement each other;
the automatic target point tracking method fuses the feature points extracted by the fast hog feature point detection method with those extracted by the color-naming feature point detection method, improving tracking performance.
Further, establishing the ridge regression classifier and training it on the extracted features proceeds as follows: the extracted features are cyclically shifted to construct the training samples of the classifier, so that the data matrix becomes a circulant matrix; based on the properties of the circulant matrix, the features are transformed to the Fourier domain to obtain the base sample, whose label sizes follow a Gaussian distribution; the ridge regression classifier is trained on these training samples, yielding the model parameters of the regression classifier.
Further, the learning training is specifically: a circulant matrix is constructed from the features and used as the training samples; the circulant matrix is

X = C(x), the matrix whose rows are all cyclic shifts of the base feature vector x = [x1, x2, ..., xn]^T,

where xi, i ∈ (1, n) are the extracted feature values;

the circulant matrix is diagonalized in Fourier space by the discrete Fourier transform,

X = F diag(x̂) F^H,

where F is the discrete Fourier matrix, a constant, and diag(x̂) denotes the diagonal matrix formed from the discrete Fourier transform x̂ of x;

the ridge regression classifier is trained on the diagonalized samples, yielding the model parameters of the regression classifier.
Further, computing the regression score of the trained ridge regression classifier for the features of an image acquired in real time is specifically:

f(z) = F^{-1}( k̂_xz ⊙ α̂ ),  with  α̂ = ŷ / ( k̂_xx + λ ),

where (x, y) is the training set of the ridge regression classifier; k_xx is the first row of the kernel correlation matrix; λ is the regularization coefficient that prevents the model fit from overflowing; z is the feature sample of the image acquired in real time; k_xz is the kernel correlation of the shifted sample z with the base sample x; ⊙ is the dot-product symbol and denotes the element-wise product; and F^{-1} denotes the inverse discrete Fourier transform.

Further, the regression score meeting the requirement means that the regression score satisfies the following:

F_max > threshold_1;  d > threshold_2,

where F_max = max F(s, w) is the top score of all samples in the current search region, F_min is the minimum score of all samples in the current search region, F_{w,h} denotes the score of the sample with serial number (w, h), mean denotes averaging the values in the brackets, w is the trained model, s is a sample drawn near the target, d = | F_max - F_min |^2 / mean( Σ_{w,h} ( F_{w,h} - F_min )^2 ) is the peak correlation energy, and threshold_1 and threshold_2 are judgment thresholds.
A photoelectric sighting system with automatic target point tracking, comprising a field-of-view acquiring unit, a display unit, a photoelectric conversion circuit board and a CPU core board;
the field-of-view acquiring unit collects an optical image of the target object, and the photoelectric conversion circuit board converts the optical image into an electronic image;
the CPU core board includes an automatic tracking module that tracks the target point in real time and calculates the bearing of the target point relative to the cross hairs;
the display unit shows the electronic image and the bearing of the target point relative to the cross hairs;
preferably, the photoelectric sighting system includes an anti-shake processing unit that performs anti-shake processing on the images collected by the photoelectric sighting system before they are sent to the display unit for display.
Further, the automatic tracking module in the photoelectric sighting system with automatic target point tracking performs real-time tracking using the automatic tracking method described above.
Beneficial effects of the present invention: the invention proposes automatic target point tracking that achieves automatic tracking of the target object; applying this automatic tracking to a photoelectric sighting system significantly improves aiming speed and accuracy.
Brief description of the drawings
Fig. 1 is a schematic flow chart of target point position tracking;
Fig. 2 is a schematic diagram of the marking effect in Embodiment 1 of the present invention;
Fig. 3 is a schematic diagram of the target point bearing prompt inside the display unit in Embodiment 1 of the present invention;
Fig. 4 is a schematic diagram of the target point bearing prompt outside the display unit in Embodiment 1 of the present invention;
Fig. 5 is a perspective view of the gun sight of Embodiment 1 of the present invention;
Fig. 6 is a left view of the gun sight of Embodiment 1 of the present invention;
Fig. 7 is a right view of the gun sight of Embodiment 1 of the present invention;
Fig. 8 is a schematic flow chart of the video anti-shake method of Embodiment 2 of the present invention.
In the figures: 1. field-of-view acquiring unit; 2. display unit; 3. battery compartment; 4. rotary encoder; 5. focusing knob; 6. external accessory rail; 7. key control panel; 8. Picatinny mount; 9. photoelectric conversion board; 10. aiming circuit processing unit; 11. display conversion board; 81. adjusting clamp nut one; 82. adjusting clamp nut two; 101. CPU core board; 102. interface board.
Detailed description of the embodiments
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is explained in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the present invention and are not intended to limit it.
On the contrary, the present invention covers any replacement, modification, equivalent method and scheme made within the spirit and scope of the present invention as defined by the claims. Further, in order to give the public a better understanding of the present invention, some specific details are described in depth below; a person skilled in the art can also fully understand the present invention without these details.
Embodiment 1
The present invention provides a photoelectric sighting system with automatic target point tracking; the sighting system has an automatic tracking module that tracks the target object in real time using the automatic tracking method.
The sighting system can be conveniently mounted on all kinds of firearms. The photoelectric sighting system includes a housing, generally of demountable structure, whose interior is a receiving space; the receiving space houses the field-of-view acquiring unit, a video processing unit, the display unit, a power supply and the aiming circuit unit, as shown in Figs. 5-7.
The field-of-view acquiring unit 1 includes an objective lens assembly or other optical viewing device, arranged at the front end of the field-of-view acquiring unit 1 to obtain field-of-view information.
The photoelectric sighting system as a whole is a digital device that can communicate with a smartphone, smart terminal, sighting device or circuit; the video information gathered by the field-of-view acquiring unit 1 is sent to the smartphone, smart terminal, sighting device or circuit and displayed by devices such as the smartphone or smart terminal.
The field-of-view acquiring unit 1 includes a photoelectric conversion circuit comprising the photoelectric conversion board 9; the photoelectric conversion circuit converts the field-of-view optical signal into an electric signal, and the photoelectric conversion board 9 is the photoelectric conversion circuit board in the field-of-view acquiring unit 1. Besides converting optical signals into electric signals, the board applies automatic exposure, automatic white balance, noise reduction and sharpening to the signal, improving signal quality and providing high-quality data for imaging.
The aiming circuit processing unit 10, which connects the photoelectric conversion board 9 and the display conversion board 11, includes the CPU core board 101 and the interface board 102. The interface board 102 is connected to the CPU core board 101, specifically through the serial ports of the CPU core board 101 and the interface board 102. The CPU core board 101 is placed between the interface board 102 and the photoelectric conversion board 9; the three boards are parallel, with their faces perpendicular to the field-of-view acquiring unit 1. The photoelectric conversion board 9 transmits the converted video signal through a parallel data interface to the CPU core board 101 for further processing; the interface board 102 communicates with the CPU core board 101 through the serial port and passes on peripheral operation information such as battery level, attitude information, time, button operations and knob operations for further processing.
The CPU core board 101 can connect a memory card through the interface board 102. In the embodiment of the present invention, taking the field-of-view acquiring unit 1 as the observation direction, a memory card slot is arranged on the left side of the CPU core board 101; the memory card is inserted into the slot, can store information, and can automatically upgrade the software programs built into the system.
Likewise taking the field-of-view acquiring unit 1 as the observation direction, a USB interface is provided beside the memory card slot on the left side of the CPU core board 101; through this USB interface the system can be powered from an external supply, or the information of the CPU core board 101 can be output.
The photoelectric sighting system also includes multiple sensors, specifically several or all of an acceleration sensor, a wind speed and direction sensor, a geomagnetic sensor, a temperature sensor, a barometric sensor and a humidity sensor.
In a special case, the sensors used by the photoelectric sighting system include only an acceleration sensor.
A battery compartment 3 is also provided in the housing, holding a battery assembly 31; a spring plate inside the battery compartment 3 keeps the battery assembly fastened. The battery compartment 3 is arranged in the middle of the housing, and the battery assembly can be replaced by opening the battery compartment cover on the side of the housing.
Circuit solder contacts are provided at the bottom of the battery compartment 3 and connect with the spring plate inside the compartment; wires with terminals soldered to the contacts of the battery compartment 3 connect to the interface board 102 and power the interface board 102, the CPU core board 101, the photoelectric conversion board 9, the display conversion board 11 and the display unit 2.
The display unit 2 is a display screen connected to the interface board 102 through the display conversion board 11, thereby communicating with the CPU core board 101, which transmits the display data to the display unit 2 for display.
The cross division line (reticle) shown on the display screen is superimposed on the video information gathered by the field-of-view acquiring unit, and aimed fire is performed through the cross division line; the screen also shows the auxiliary shooting information transmitted by the above sensors and the working indication information;
part of the auxiliary shooting information is applied to trajectory calculation, and part is used to alert the user.
External buttons are provided on the top of the housing and connected to the interface board 102 through the key control panel 7 inside the housing; pressing the external buttons switches the device on and off and triggers the photo and video functions.
Taking the field-of-view acquiring unit 1 as the observation direction, a rotary encoder 4 with a button function is provided on the right side of the housing close to the display unit 2; inside the housing the rotary encoder 4 is connected in series with the encoder circuit board 41, which is connected to the interface board through a flat cable with terminals, completing the transmission of operation data. The rotary encoder controls function switching, adjusts distance and magnification data, configures information, enters deviation data, and so on.
Taking the field-of-view acquiring unit 1 as the observation direction, a focusing knob 5 is provided on the right side of the housing close to the field-of-view acquiring unit 1; the focusing knob 5 adjusts the focus of the field-of-view acquiring unit 1 through a spring mechanism, so that the target object can be observed clearly at different distances.
The bottom of the housing is provided with a Picatinny mount 8 for fixing onto the firearm; the Picatinny mount includes adjusting clamp nuts 81 and 82, located on the left or right side of the mount.
The housing is provided with an external accessory rail 6 on top of the field-of-view acquiring unit 1; the rail 6 and the field-of-view acquiring unit 1 share the same optical axis and are fixed by screws. The rail 6 uses a standard-size design and can mount devices with standard Picatinny connectors, including laser range finders, fill lights, laser pointers and the like.
The present invention provides an automatic target point tracking method applied to the above photoelectric sighting system: the target object to be shot is marked, and the bearing of the currently marked target point relative to the display unit region is determined automatically. When, due to movement, the marked point is no longer in the display unit region but is still within the field-of-view unit, the position of the target point is tracked automatically and the bearing of the current target point relative to the display unit region is prompted, making it easy for the user to find the marked target point again. When, due to movement, the marked point disappears not only from the display unit region but also from the whole field-of-view unit area, the automatic tracking method keeps computing; when the target object re-enters the field-of-view unit, the tracking method locks onto the specific bearing of the target and prompts it. The method suits any environmental conditions and can mark and track any target object.
The automatic target point tracking method includes target point marking and target point position tracking.
(1) Target point marking
After the target point marking instruction is started, matrix analysis is performed on the first frame of collected data; according to the marking coordinate position, the corresponding pixel region in the image matrix is obtained and its color values are changed, thereby marking the target point.
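As an illustration, a minimal sketch of this marking step, assuming the frame is an H x W x 3 NumPy array; the mark radius and color are hypothetical parameter choices:

```python
import numpy as np

def mark_target(image: np.ndarray, cx: int, cy: int, r: int = 5,
                color=(255, 0, 0)) -> np.ndarray:
    """Mark the target point by recoloring the pixel region around (cx, cy).

    image: H x W x 3 RGB array (the first collected frame).
    (cx, cy): marking coordinate position; r: radius of the marked region.
    """
    h, w = image.shape[:2]
    ys, xs = np.ogrid[:h, :w]
    region = (xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2  # circular pixel region
    image[region] = color   # change the color values of the region
    return image
```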
(2) Target point position tracking
Starting from the second frame of the image sequence, feature point extraction and learning training of the classifier parameters are performed automatically according to the target marked in the first frame; the target is recognized automatically in the scene, and online learning updates the classifier parameters to achieve stable tracking.
Target point position tracking comprises two parts, in-view tracking and re-detection. In-view tracking handles the tracking of the target point while the target object is inside the field-of-view unit; re-detection searches for the target point again when the target object is not inside the field-of-view unit.
1) In-view tracking of the target point
In-view tracking uses an analysis method based on fast hog + color-naming + ridge regression, where fast hog is abbreviated fhog and color-naming is abbreviated cn. The method establishes a ridge regression classifier; the sampled video serves as training samples for the regression classifier, with the highest-scoring tracked patch of each frame used as the positive sample and other randomly sampled image patches as negative samples. The classifier learns online: after each frame is tracked, the model parameters of the classifier are updated, so the classifier iterates once per tracking step. While the target is being tracked, the regression score is used to judge whether the target object has disappeared; once it disappears, the classifier pauses its model updates and the re-detection method starts searching for the target object.
2) Target point re-detection
When the target object moves, or the gun sight is moved by the user, and the target object disappears from the field of view, the in-view tracking method above records the reference position just before the target point disappeared.
Taking the position where the target object disappeared as the reference point, the neighborhood of the disappearance position is searched with a sliding window; the candidate patches are classified by the model, the highest-scoring candidate is selected, and its score is compared with the set matching-degree threshold. If the score is above the threshold, the candidate is judged to be the aimed target from before the disappearance, and the target object has re-entered the field of view.
Re-detection uses the fhog + cn + sliding window + ridge regression analysis method; compared with in-view tracking, it adds the sliding-window method used during re-detection.
The specific execution steps of target point position tracking are as follows:
a) mark the position of the target in the first frame image;
b) compute fhog features and cn features for the region near the target in the previous frame and form the feature description;
c) sample accordingly and train the regression classifier;
d) regress over the search region to obtain the corresponding scores, derive the position of the target in the current frame, and judge whether the target has disappeared;
e) if the target has not disappeared, extract fhog and cn features from the current frame and go to step c; otherwise go to step f;
f) search for the target with a sliding window, extract fhog and cn features, and find the target using the regression classifier trained before the target disappeared.
The execution flow of target point position tracking is shown in Fig. 1, and a skeleton of the loop is sketched below.
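The skeleton below summarizes steps a) to f) as a control loop; the callables passed in (extract, train, update, search, redetect, ok) are placeholders for the fhog/cn extraction, ridge regression and sliding-window routines detailed in the following subsections, not part of the original disclosure:

```python
def track(frames, init_box, extract, train, update, search, redetect, ok):
    """Skeleton of steps a)-f): train on frame 1, then track or re-detect."""
    box = init_box                               # a) position marked in frame 1
    clf = train(extract(frames[0], box))         # b) + c) initial training
    lost = False
    for frame in frames[1:]:
        if not lost:
            box, score = search(clf, frame, box)        # d) regress search region
            lost = not ok(score)                        # d) disappearance test
            if not lost:
                clf = update(clf, extract(frame, box))  # e) online update
        else:
            box, score = redetect(clf, frame, box)      # f) sliding-window search
            lost = not ok(score)
        yield box, lost
```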
Fhog feature point extraction
The in-view tracking method includes the fast hog (fhog) feature point detection method. The method computes the gradient values in the x and y directions of every pixel in the image and the gradient direction alf(x, y) of every pixel, calculated as follows:
Gx(x, y) = H(x+1, y) - H(x-1, y)
Gy(x, y) = H(x, y+1) - H(x, y-1)
G(x, y) = sqrt( Gx(x, y)^2 + Gy(x, y)^2 )
alf(x, y) = arctan( Gy(x, y) / Gx(x, y) )
where Gx(x, y) denotes the gradient value in the x direction of the pixel at coordinate (x, y); Gy(x, y) denotes its gradient value in the y direction; G(x, y) denotes its gradient magnitude; alf(x, y) denotes its gradient direction; and H(x-1, y) denotes the brightness value H of the pixel at coordinate (x-1, y).
The gradient magnitude is computed separately on the R, G and B channels of the image, and the gradient direction on the channel with the largest magnitude is taken. Combining unsigned gradients (0-180) and signed gradients (0-360), the unsigned range (0-180) is divided into 9 sections of 20 degrees per statistical bin, and the signed range into 18 sections of 20 degrees per bin. The image is divided into units of 4*4 pixels each, giving n units, n a positive integer; the signed and unsigned gradient directions within each 4*4 unit are accumulated to obtain the gradient orientation histogram D of the unit, a vector of 9+18 = 27 dimensions.
After the gradient orientation histogram D is obtained, it is normalized by normalizing the current unit D(i, j) against the regions it forms with three surrounding units. There are four ways of grouping D(i, j) with three of its neighbors, one per diagonal neighborhood, so normalizing D(i, j) yields four normalization factors I1(i, j), I2(i, j), I3(i, j), I4(i, j), each the energy of one 2*2 block of units, for example
I1(i, j) = sqrt( ||D(i, j)||^2 + ||D(i+1, j)||^2 + ||D(i, j+1)||^2 + ||D(i+1, j+1)||^2 ),
and correspondingly for the other three diagonal neighborhoods.
The new feature gradients of each unit after normalization are:
N1(i, j) = D(i, j) / I1(i, j)
N2(i, j) = D(i, j) / I2(i, j)
N3(i, j) = D(i, j) / I3(i, j)
N4(i, j) = D(i, j) / I4(i, j)
The four newly computed gradient features N1(i, j), N2(i, j), N3(i, j), N4(i, j) are concatenated to obtain the new feature statement of each unit, so each unit finally yields a 4*27 = 108-dimensional vector V. Because this dimension is too high, the new feature vector is reduced to a 31-dimensional vector by principal component dimensionality reduction. From the statement above the image has n units, so the feature dimension finally obtained is n*31.
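A compact sketch of the gradient-histogram stage described above; the 2x2-block normalization and the PCA reduction to 31 dimensions are omitted, and per-pixel loops are kept for clarity rather than speed:

```python
import numpy as np

def fhog_cells(img: np.ndarray, cell: int = 4) -> np.ndarray:
    """Gradient-histogram part of the fhog feature (normalization omitted).

    img: H x W x 3 RGB image with H and W divisible by `cell`.
    Returns an (H//cell) x (W//cell) x 27 array:
    9 unsigned bins (0-180, 20 deg each) + 18 signed bins (0-360).
    """
    f = img.astype(np.float64)
    gx = np.zeros_like(f); gy = np.zeros_like(f)
    gx[:, 1:-1] = f[:, 2:] - f[:, :-2]        # Gx(x,y) = H(x+1,y) - H(x-1,y)
    gy[1:-1, :] = f[2:, :] - f[:-2, :]        # Gy(x,y) = H(x,y+1) - H(x,y-1)
    mag = np.hypot(gx, gy)
    best = mag.argmax(axis=2)                 # channel with largest magnitude
    ii, jj = np.indices(best.shape)
    m = mag[ii, jj, best]
    ang = np.degrees(np.arctan2(gy[ii, jj, best], gx[ii, jj, best])) % 360.0
    signed = (ang // 20).astype(int)              # 18 signed bins
    unsigned = ((ang % 180.0) // 20).astype(int)  # 9 unsigned bins
    H, W = best.shape
    hist = np.zeros((H // cell, W // cell, 27))
    for y in range(H):
        for x in range(W):
            cy, cx = y // cell, x // cell
            hist[cy, cx, unsigned[y, x]] += m[y, x]
            hist[cy, cx, 9 + signed[y, x]] += m[y, x]
    return hist
```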
Color-naming feature extraction
Color-naming feature extraction is feature extraction based on color attributes; combined with the gradient-based fhog feature extraction, it improves target detection precision. First, according to the size of the fhog feature matrix, for example H*W*31, the region from which features are extracted is scaled to H*W by a resize operation on the image. For each pixel of the adjusted image, the index of the feature point in the feature matrix is computed from its RGB value according to the color feature computation; letting RR, GG and BB denote the R, G and B values of the pixel and Index the index, the computation is:
Index = floor(RR / 8) + 32 * floor(GG / 8) + 32 * 32 * floor(BB / 8)
The obtained index value is the row number in the corresponding color feature matrix, whose size is 32768*10; the 10-dimensional feature in the matrix is taken out by Index. After the extracted color features are appended to the fhog features, the fhog + cn feature extraction yields 41-dimensional features, and the corresponding training detection is finally carried out through the correlated convolution filtering algorithm.
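A minimal sketch of the lookup; the precomputed 32768 x 10 color-attribute table w2c and the 32x32x32 quantization follow the standard color-attribute layout and are assumptions where the original formula image is not preserved:

```python
import numpy as np

def cn_features(img: np.ndarray, w2c: np.ndarray) -> np.ndarray:
    """Look up the color-attribute feature of every pixel.

    img: H x W x 3 uint8 RGB image, already resized to the fhog grid H*W.
    w2c: 32768 x 10 lookup table mapping quantized RGB to color attributes.
    """
    rr = img[..., 0] // 8
    gg = img[..., 1] // 8
    bb = img[..., 2] // 8
    index = rr.astype(np.int64) + 32 * gg + 32 * 32 * bb  # Index in [0, 32767]
    return w2c[index]   # H x W x 10 color-attribute features
```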
Ridge regression analysis
After fhog feature extraction and cn feature extraction, the image is used to train the classifier by the ridge regression analysis method: the training samples of the classifier are constructed by cyclic shifts, so that the data matrix becomes a circulant matrix; the problem is then transformed to the Fourier domain based on the properties of the circulant matrix, and the model parameters are computed in the Fourier domain.
The ridge regression analysis method mainly comprises the following steps:
Linear regression
The linear regression function of the classifier over the training set (xi, yi) is f(x) = w^T x; let λ be the regularization coefficient that prevents the model fit from overflowing. The training objective is
min_w Σ_i ( f(xi) - yi )^2 + λ ||w||^2,
whose least-squares solution is:
w = (X^H X + λ I)^{-1} X^H y
where
X^H = (X*)^T
is the conjugate transpose of X. Let the n*1 vector
x = [x1, x2, ..., xn]^T
represent a base sample, and let P be the cyclic shift operator of the sample x,
P x = [xn, x1, x2, ..., xn-1]^T,
so that x moves one element in the vertical direction. P^u x then moves x by u elements, with negative u denoting movement in the opposite direction. The image boundaries obtained directly by cyclic shifting are not very smooth, so the base sample image is multiplied by a Hamming window to reduce the weight of the boundary pixels and eliminate this roughness.
According to the cyclic property, the set of all shifted signals is expressed as:
{ P^u x | u = 0, ..., n-1 };
the cyclic shifts of the sample can then be collected into the circulant matrix
X = C(x), whose rows are P^u x for u = 0, ..., n-1.
The circulant matrix is diagonalized in Fourier space by the discrete Fourier transform (DFT),
X = F diag(x̂) F^H,
where F is the discrete Fourier matrix, a constant (the DFT is a linear operation), and diag(x̂) denotes the diagonal matrix formed from the DFT x̂ of x.
Speed-up of linear regression training
Substituting the diagonalized form
X = F diag(x̂) F^H
into the ridge regression formula
w = (X^H X + λ I)^{-1} X^H y
and using the unitary property of F,
F^H F = I,
gives
X^H X = F diag( x̂* ⊙ x̂ ) F^H,
where ⊙, the dot-product symbol, denotes the element-wise product and x̂* the complex conjugate of x̂. Using the property that inverting a diagonalized circulant matrix inverts its spectrum element-wise, it follows that
w = F diag( x̂* / ( x̂* ⊙ x̂ + λ ) ) F^H y,
which, transformed to the Fourier domain, yields:
ŵ = ( x̂* ⊙ ŷ ) / ( x̂* ⊙ x̂ + λ ).
Training therefore reduces to element-wise operations on DFTs instead of a matrix inversion.
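The closed-form training step can be written in a few lines for a 1-D base sample; this is a sketch of the Fourier-domain formula just derived, not production code:

```python
import numpy as np

def train_linear_fft(x: np.ndarray, y: np.ndarray, lam: float) -> np.ndarray:
    """Ridge regression over all cyclic shifts of x, solved in the Fourier domain.

    Implements w_hat = (conj(x_hat) * y_hat) / (conj(x_hat) * x_hat + lambda).
    """
    xf = np.fft.fft(x)
    yf = np.fft.fft(y)
    wf = (np.conj(xf) * yf) / (np.conj(xf) * xf + lam)
    return np.real(np.fft.ifft(wf))   # filter w back in the spatial domain
```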
Regression training based on the kernel method
To improve the precision of regression training, the regression training based on the kernel method turns the linear problem into a nonlinear one. The linear function is rewritten through the mapping
f(x) = w^T φ(x),  with  w = Σ_i αi φ(xi),
where the mapping function φ maps the features to a higher-dimensional space. Let:
k(x, x') = φ(x)^T φ(x');
k(x, x') is stored in the matrix K, the kernel correlation matrix of all training samples, with
Kij = k(xi, xj);
to improve precision, the objective function is optimized as:
min_α Σ_i ( f(xi) - yi )^2 + λ ||w||^2.
From the linear regression formula above, the nonlinear regression function under the kernel method is:
f(z) = w^T φ(z),
which is equivalent to
f(z) = Σ_i αi k(z, xi);
and the solution of the ridge regression in kernel space is:
α = (K + λ I)^{-1} y;
From the settings and derivation above, K is the kernel correlation matrix of all training samples. If the kernel function is chosen properly, so that reordering the elements inside x does not change the value of the function, then the kernel matrix K is guaranteed to also be circulant.
The radial basis kernel meets the circulant condition, so the derivation proceeds as follows:
K = C(k^{xx}),
where k^{xx} is the first row of the kernel correlation matrix; substituting into α = (K + λI)^{-1} y and diagonalizing with the DFT as before gives, in the Fourier domain,
α̂ = ŷ / ( k̂^{xx} + λ ).
Let the elements of the vector k^{xx'} be
k_i^{xx'} = k(x', P^{i-1} x);
when computing the Gaussian kernel correlation this is equivalent to
k_i^{xx'} = h( || x' - P^{i-1} x ||^2 ),
which expands to
k_i^{xx'} = h( ||x||^2 + ||x'||^2 - 2 x'^T P^{i-1} x ).
Using the property of cyclic convolution, the cross terms for all shifts are obtained at once:
k^{xx'} = h( ||x||^2 + ||x'||^2 - 2 F^{-1}( x̂* ⊙ x̂' ) ),
which gives the kernel correlation of the base sample with the sample offset by i, for every i, with a single inverse DFT.
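A 1-D sketch of this fast kernel correlation; the bandwidth sigma is a free parameter not fixed by the text:

```python
import numpy as np

def gaussian_correlation(x: np.ndarray, xp: np.ndarray, sigma: float) -> np.ndarray:
    """Kernel correlation k^{xx'} for all cyclic shifts at once (1-D sketch).

    Uses the cyclic-convolution property derived above: the cross term
    x'^T P^{i-1} x for every shift i is one inverse FFT of conj(x_hat) * x'_hat.
    """
    cross = np.real(np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(xp)))
    d2 = (x @ x) + (xp @ xp) - 2.0 * cross    # squared distance per shift
    return np.exp(-np.maximum(d2, 0) / (sigma ** 2))
```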
Regression detection based on the kernel method
The kernel correlation matrix of all training samples with the samples to be detected is:
K^z = C(k^{xz});
where k^{xz} is the kernel correlation of x and z. It follows that:
f(z) = (K^z)^T α,
which, since K^z is circulant, can be diagonalized by the DFT to derive the detection response in the Fourier domain:
f̂(z) = k̂^{xz} ⊙ α̂.
The regression responses of all cyclic shifts of z are thus obtained with element-wise products and one inverse DFT.
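Training and detection then reduce to element-wise spectral operations; this sketch reuses the gaussian_correlation function from the sketch above:

```python
import numpy as np

def train_alpha(x, y, lam, sigma):
    """Kernel ridge regression: alpha_hat = y_hat / (k_hat^{xx} + lambda)."""
    kxx = gaussian_correlation(x, x, sigma)
    return np.fft.fft(y) / (np.fft.fft(kxx) + lam)

def detect(alphaf, x, z, sigma):
    """Response over all cyclic shifts of z: ifft(k_hat^{xz} * alpha_hat)."""
    kxz = gaussian_correlation(x, z, sigma)
    return np.real(np.fft.ifft(np.fft.fft(kxz) * alphaf))
```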
Judging the target state
Within the search region, the regression scores are computed with the trained ridge regression model: the target area is searched, the regression score of every sample is calculated, and whether the target has been recovered is judged by the following two conditions.
a) First condition: find the top score among all samples in the current search region and compare it with the set judgment threshold threshold_1. The top score is
F_max = max F(s, w),
where w is the trained model and s is a sample drawn near the target. The condition is met if F_max > threshold_1.
b) Second condition: compute the scores of all samples in the current search region and the peak correlation energy of the region; this index reflects how sharply the top score stands out relative to the other sample scores in the search region. With judgment threshold threshold_2, the specific computation is:
d = | F_max - F_min |^2 / mean( Σ_{w,h} ( F_{w,h} - F_min )^2 ),
where F_max denotes the highest score, F_min the minimum score, F_{w,h} the score of the sample with serial number (w, h), and mean denotes averaging the values in the brackets. The condition is met if the computed peak correlation energy d > threshold_2.
c) If both conditions are met at the same time, the target is judged to have been recovered.
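Both conditions can be checked directly on the response map; the threshold values are left as parameters since the text does not fix them:

```python
import numpy as np

def target_found(response: np.ndarray, thr1: float, thr2: float) -> bool:
    """Apply the two conditions above to a map of regression scores."""
    f_max = response.max()
    f_min = response.min()
    # peak correlation energy d: how sharply the peak stands out
    d = (f_max - f_min) ** 2 / np.mean((response - f_min) ** 2)
    return f_max > thr1 and d > thr2
```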
After the target point marking method above is applied, the effect is as shown in Fig. 2: the current coordinates are obtained relative to the reticle center, and a circle of radius r is marked at the coordinate point; the circle serves as the prompt display for the target point to be shot.
When an unknown offset of the target object or the gun sight occurs, the in-view tracking method above keeps the position of the marked target point on the target object unchanged. As shown in Fig. 3, the target point is now at the lower left of the reticle, and the screen prompts the specific direction of the target point relative to the reticle center, as in the position prompt of Fig. 3. If the target point and the reticle appear on the same screen, the user can easily learn the specific position of the target point; if the target point is in the field of view but not on the screen, the user can quickly find it from the position prompt.
When the target point disappears from the field of view, the re-detection method above is used to search for the target; when the target enters the field of view again, the re-detection method locks onto it automatically. If the target point is at the edge of the field of view and is not displayed on the screen together with the reticle, then, as shown in Fig. 4, the system prompts the direction of the target object relative to the reticle center; if the target point is in the field of view and displayed on the screen together with the reticle, the prompt is as shown in Fig. 4.
Embodiment 2
This embodiment is substantially the same as Embodiment 1; the difference is that, to improve video display quality, the photoelectric sighting system of this embodiment adds a video anti-shake processing unit on the CPU core board. The unit includes a video anti-shake processing method that preprocesses the acquired image data and performs feature point detection, feature point tracking, homography matrix computation, image filtering and affine transformation, so that the processed image sequence can be displayed smoothly. The flow chart of the video anti-shake processing method is shown in Fig. 8.
The video anti-shake processing method includes previous-frame feature point detection, current-frame feature point tracking, homography matrix computation, image filtering and affine transformation. Previous-frame feature point detection extracts feature points with the FAST corner detection method as the template for feature point tracking in the next frame's data; the current frame is tracked against the previous frame with the pyramidal Lucas-Kanade optical flow method; the RANSAC algorithm selects the two best-quality groups of feature points from the tracked points. Assuming these feature points undergo only rotation and translation, the affine transform of the homography matrix is a rigid-body transform; the translation distance and rotation angle are computed from two groups of points to obtain the homography matrix of the affine transform. The transformation matrix is then filtered with a Kalman filter to eliminate the random motion component. Finally, multiplying the original image coordinates by the filtered transformation matrix gives the coordinates of the original points in the new image, realizing the affine transform and eliminating the video jitter.
In addition, for the images of non-RGB format obtained by certain models of gun sight, the image information must be converted to RGB format by a preprocessing operation before previous-frame feature point detection; this simplifies the image information provided to the subsequent image processing modules.
The specific method is as follows:
(1) Image preprocessing
For the images of non-RGB format obtained by certain models of gun sight, the image information must be converted to RGB format by a preprocessing operation, simplifying the image information provided to the subsequent image processing modules. Image preprocessing converts the format of the input YUV image, computing the RGB image and the grayscale image required by the algorithm processing. The conversion formulas are as follows:
R = Y + 1.140 * V;
G = Y - 0.394 * U - 0.581 * V;
B = Y + 2.032 * U.
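A vectorized sketch of this conversion, assuming Y, U and V are float arrays of equal shape with U and V already centered on zero:

```python
import numpy as np

def yuv_to_rgb(y: np.ndarray, u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Per-pixel YUV -> RGB conversion with the coefficients above."""
    r = y + 1.140 * v
    g = y - 0.394 * u - 0.581 * v
    b = y + 2.032 * u
    # stack to H x W x 3 and clamp to the valid 8-bit range
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)
```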
(2) Previous-frame feature point detection
Previous-frame feature point detection provides the template for feature point tracking in the next frame's data; the detected feature points are recorded in the form of a coordinate set. Feature point detection uses FAST corner detection: the FAST method extracts feature points by comparing the grey level of a candidate pixel with those of the pixels in its neighborhood. After a candidate point is obtained, the autocorrelation matrix of the second derivatives of image intensity is computed and its eigenvalues are calculated; if the smaller of the two eigenvalues exceeds a threshold, a strong corner is obtained. The computation of the second-derivative matrix is accelerated with the Sobel operator.
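An OpenCV-based sketch of this detection stage; goodFeaturesToTrack implements the eigenvalue criterion described above, and a FAST detector (cv2.FastFeatureDetector_create) can be substituted. The parameter values are illustrative assumptions:

```python
import cv2

def detect_features(gray_prev):
    """Corner detection on the previous (grayscale) frame.

    Returns an N x 1 x 2 float32 coordinate set, the tracking template
    for the next frame.
    """
    corners = cv2.goodFeaturesToTrack(gray_prev, maxCorners=200,
                                      qualityLevel=0.01, minDistance=8)
    return corners
```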
(3) Current-frame feature point tracking
With the feature points of the previous frame as the template, tracking detection yields the feature point data of the current frame, using the pyramidal Lucas-Kanade optical flow method. Assume a pixel (x, y) on the image has brightness E(x, y, t) at time t, and let u(x, y) and v(x, y) denote the horizontal and vertical components of the optical flow at that point. After a time interval Δt, the brightness of the corresponding point is E(x + Δx, y + Δy, t + Δt); as Δt approaches 0 the brightness can be considered constant, so the brightness at time t satisfies:
E(x, y, t) = E(x + Δx, y + Δy, t + Δt);
When the brightness of the point changes, expanding the brightness of the moved point by the Taylor formula gives:
E(x + Δx, y + Δy, t + Δt) = E(x, y, t) + Δx ∂E/∂x + Δy ∂E/∂y + Δt ∂E/∂t + ...;
ignoring the second-order infinitesimals and letting Δt approach 0 yields the optical flow constraint:
Ex u + Ey v + Et = 0,
with w = (u, v), where Ex, Ey and Et denote the gradients of the image pixel grey level along the x, y and t directions.
For large and incoherent motion, the tracking effect of plain optical flow is not ideal, so an image pyramid is used: the optical flow is computed at the top level of the pyramid, and the resulting motion estimate serves as the starting point for the next level, repeating the process until the bottom of the pyramid is reached.
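A sketch of the tracking step with OpenCV's pyramidal Lucas-Kanade implementation; the window size and the 3 pyramid levels are illustrative choices:

```python
import cv2

def track_features(gray_prev, gray_cur, prev_pts):
    """Pyramidal Lucas-Kanade tracking of the previous frame's feature points.

    prev_pts: N x 1 x 2 float32 array from the detection stage.
    Returns the matched point pairs (previous, current).
    """
    cur_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        gray_prev, gray_cur, prev_pts, None,
        winSize=(21, 21), maxLevel=3)      # 3 pyramid levels
    ok = status.ravel() == 1               # keep successfully tracked points
    return prev_pts[ok], cur_pts[ok]
```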
(4) Homography matrix computation
The homography matrix of the affine transform is computed. The motion of the image is approximated as a rigid-body transform of rotation and translation only; for the transformation matrix of a rigid-body transform, at least two groups of points are needed to compute the translation distance and rotation angle, and the RANSAC algorithm chooses the two best-quality groups of feature points from all the feature points for the computation of the transformation matrix.
For input containing much noise or invalid points, RANSAC reaches its goal by choosing a random subset of the data; the chosen subset is hypothesized to be inliers and verified as follows:
1) a model is fitted to the hypothesized inliers, i.e. all unknown parameters can be computed from the hypothesized inliers;
2) if enough points are classified as hypothesized inliers, the estimated model is reasonable enough;
3) the model is re-estimated from all hypothesized inliers;
4) the model is assessed by estimating the error rate of the inliers with respect to the model.
This process is executed a fixed number of times; each newly produced model is compared with the existing model, and is adopted if it has more inliers than the existing model, otherwise rejected.
For a rigid-body transform, only the rotation angle θ and the translations tx, ty in the x and y directions need to be found. For a point (x, y) before the transform and the point (u, v) after it, only two groups of corresponding points are needed; the correspondence is:
u = cos θ * x - sin θ * y + tx;  v = sin θ * x + cos θ * y + ty.
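A sketch of the RANSAC rigid-motion estimate using OpenCV; estimateAffinePartial2D restricts the model to rotation, uniform scale and translation, which approximates the rotation-plus-translation model above:

```python
import cv2
import numpy as np

def rigid_motion(prev_pts, cur_pts):
    """RANSAC estimate of the frame-to-frame rotation and translation."""
    m, _inliers = cv2.estimateAffinePartial2D(
        prev_pts, cur_pts, method=cv2.RANSAC, ransacReprojThreshold=3.0)
    if m is None:                          # not enough consistent point pairs
        return 0.0, 0.0, 0.0
    dx, dy = m[0, 2], m[1, 2]              # translations tx, ty
    dtheta = np.arctan2(m[1, 0], m[0, 0])  # rotation angle theta
    return dx, dy, dtheta
```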
(5) Image filtering
The transformation matrix is filtered with a Kalman filter to eliminate the random motion component.
The shake of the video can be regarded approximately as following a Gaussian distribution. The 2*3 transformation matrix is reduced to a 1*3 state whose parameters are the displacements in the x and y directions and the rotation angle.
The equation group of the Kalman filter is as follows:
X(k | k-1) = A * X(k-1 | k-1) + B * U(k);
P(k | k-1) = A * P(k-1 | k-1) * A' + Q;
K(k) = P(k | k-1) * H' / ( H * P(k | k-1) * H' + R );
X(k | k) = X(k | k-1) + K(k) * ( Z(k) - H * X(k | k-1) );
P(k | k) = ( I - K(k) * H ) * P(k | k-1);
For the above equation group, in order to improve computation speed without affecting precision, the equations are simplified as:
X_ = X;
the current state estimate X_ is computed, reduced to being identical to the last state estimate X;
P_ = P + Q;
the current estimation error covariance P_ is computed as the last estimation error covariance P plus the process noise covariance Q;
K = P_ / ( P_ + R );
the Kalman gain K is computed, where R is the measurement error covariance;
X = X_ + K * ( z - X_ );
the state estimate X is updated, where z is the measured value, for the next iteration;
P = ( 1 - K ) * P_;
the estimation error covariance P is updated for the next iteration.
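The simplified scalar filter above maps directly to code; one instance each smooths dx, dy and the rotation angle, and the Q and R values shown are illustrative assumptions:

```python
class ScalarKalman:
    """Simplified scalar Kalman filter from the equations above."""

    def __init__(self, q: float = 4e-3, r: float = 0.25):
        self.x = 0.0   # state estimate X
        self.p = 1.0   # estimation error covariance P
        self.q = q     # process noise covariance Q
        self.r = r     # measurement noise covariance R

    def step(self, z: float) -> float:
        x_ = self.x                  # X_ = X        (prediction)
        p_ = self.p + self.q         # P_ = P + Q
        k = p_ / (p_ + self.r)       # K = P_ / (P_ + R)  (Kalman gain)
        self.x = x_ + k * (z - x_)   # X = X_ + K * (z - X_)
        self.p = (1.0 - k) * p_      # P = (1 - K) * P_
        return self.x
```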
(6) Affine transformation
The shake of the image is regarded as a rigid-body transform containing only rotation and translation. Multiplying the original image coordinates by the filtered transformation matrix gives the coordinate position of each original pixel in the new image; the transformation matrix corresponding to the rigid-body transform here is:
T = [ cos θ, -sin θ, tx ; sin θ, cos θ, ty ].
Assuming the original image coordinates are I (in homogeneous form [x, y, 1]^T) and the image coordinates after the affine transform are I', then
I' = T * I.
The video after the affine transform is free of jitter and looks clearly more stable.
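A sketch of the final warp, building the rigid-body matrix T from the smoothed parameters and applying it with OpenCV:

```python
import cv2
import numpy as np

def stabilize_frame(frame, dx, dy, dtheta):
    """Warp the frame by the smoothed rigid-body transform T to cancel jitter."""
    t = np.array([[np.cos(dtheta), -np.sin(dtheta), dx],
                  [np.sin(dtheta),  np.cos(dtheta), dy]], dtype=np.float32)
    h, w = frame.shape[:2]
    return cv2.warpAffine(frame, t, (w, h))
```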

Claims (10)

1. An automatic target point tracking method, applied to a photoelectric sighting system that aims at a target object and collects an optical image of the target object, characterized in that the automatic target point tracking method comprises target point marking and target point position tracking; the target point marking marks the target object in the first frame image collected by the photoelectric sighting system to obtain the target point; the target point position tracking tracks the position of the target point in the second and later frame images collected by the photoelectric sighting system; wherein the target point position tracking comprises two steps, in-view tracking and re-detection of the target point: the in-view tracking is used to track the target point while it is inside the field-of-view unit, and the re-detection is used to find the target point again when it is not inside the field-of-view unit.
2. The automatic target point tracking method as claimed in claim 1, characterized in that the target point is marked after a target point marking instruction is started: matrix analysis is performed on the first frame of image data collected by the photoelectric sighting system; according to the coordinate position of the reticle center of the photoelectric sighting system, the corresponding pixel region in the image matrix is found and the color values of that region are changed, thereby marking the target point.
3. The automatic target point tracking method as claimed in claim 1, characterized in that the in-view tracking performs feature extraction and feature description on the image containing the target point, while a ridge regression classifier is established and trained on the extracted features;
the regression score of the trained ridge regression classifier is computed for the features of each image acquired in real time; when the regression score meets the requirement, the target point is judged present, automatic recognition of the target point is achieved, and the bearing of the target point relative to the display unit region is shown;
in the re-detection, the target is considered to have disappeared if the regression score does not meet the requirement; taking the position where the target object disappeared as the reference point, a sliding-window search is performed near the disappearance position and regression scores are computed until a score meets the requirement, at which point the target object is judged to have returned to the field of view.
4. The automatic target point tracking method as claimed in claim 3, characterized in that the fast hog feature point detection method performs feature extraction and feature description on the image containing the target point; fast hog feature point detection specifically: computes the gradient values in the x and y directions of every pixel in the image and the gradient direction of every pixel; computes the gradient magnitude separately on the R, G and B channels of the image and takes the gradient direction on the channel with the largest magnitude; combining unsigned and signed gradients, divides the image into units of 4*4 pixels each, giving n units; accumulates the signed and unsigned gradient directions within each 4*4 unit to obtain the gradient orientation histogram D of the unit; and normalizes the histograms to obtain the feature statement of each unit; n is a positive integer.
5. The automatic target point tracking method as claimed in claim 4, characterized in that the method simultaneously uses the color-naming feature point detection method to perform feature extraction and feature description on the image containing the target point;
color-naming feature point detection transforms the RGB space of the image into a color attribute space, an 11-dimensional color representation; color-naming feature point detection and fast hog feature point detection have no overlap and complement each other;
the automatic target point tracking method fuses the feature points extracted by the fast hog feature point detection method with those extracted by the color-naming feature point detection method, improving tracking performance.
6. The automatic target point tracking method as claimed in claim 3, characterized in that establishing the ridge regression classifier and training it on the extracted features comprises: cyclically shifting the extracted features to construct the training samples of the classifier, so that the data matrix becomes a circulant matrix; transforming the features to the Fourier domain based on the properties of the circulant matrix to obtain the base sample, whose label sizes follow a Gaussian distribution; and training the ridge regression classifier on the training samples to obtain the model parameters of the regression classifier.
7. The automatic target point tracking method as claimed in claim 3, characterized in that the learning training is specifically: a circulant matrix is constructed from the features and used as the training samples, the circulant matrix being
X = C(x), the matrix whose rows are all cyclic shifts of the base feature vector x = [x1, x2, ..., xn]^T,
where xi, i ∈ (1, n) are the extracted feature values;
the circulant matrix is diagonalized in Fourier space by the discrete Fourier transform, X = F diag(x̂) F^H, where F is the discrete Fourier matrix, a constant;
diag(x̂) denotes the diagonal matrix after the discrete Fourier transform;
the ridge regression classifier is trained on the diagonalized samples, yielding the model parameters of the regression classifier.
8. The automatic target point tracking method as claimed in claim 3, characterized in that computing the regression score of the trained ridge regression classifier for the features of an image acquired in real time is specifically:
f(z) = F^{-1}( k̂_xz ⊙ α̂ ),  with  α̂ = ŷ / ( k̂_xx + λ ),
where (x, y) is the training set of the ridge regression classifier;
k_xx is the first row of the kernel correlation matrix, and λ is the regularization coefficient that prevents the model fit from overflowing;
z is the feature sample of the image acquired in real time, k_xz is the kernel correlation of the shifted sample z with the base sample x, and ⊙ is the dot-product symbol, denoting the element-wise product.
9. The automatic target point tracking method as claimed in claim 4, characterized in that
the regression score meeting the requirement means that the regression score satisfies the following:
F_max > threshold_1;  d > threshold_2,
where F_max = max F(s, w),
F_max is the top score of all samples in the current search region, F_min is the minimum score of all samples in the current search region, F_{w,h} denotes the score of the sample with serial number (w, h), mean denotes averaging the values in the brackets, w is the trained model, s is a sample drawn near the target, and d = | F_max - F_min |^2 / mean( Σ_{w,h} ( F_{w,h} - F_min )^2 ) is the peak correlation energy; threshold_1 and threshold_2 are judgment thresholds.
10. An EOTS with automatic target point tracing, characterised in that the EOTS comprises a field-of-view acquisition unit, a display unit, a photoelectric conversion circuit board and a CPU core;
the field-of-view acquisition unit acquires an optical image of the object, and the photoelectric conversion circuit board converts the optical image into an electronic image;
the CPU core comprises an automatic tracking module, the automatic tracking module tracking the target point in real time and computing the orientation of the target point relative to the crosshair;
the display unit displays the electronic image and the orientation of the target point relative to the crosshair;
preferably, the EOTS comprises an image stabilization processing unit, the image stabilization processing unit performing stabilization processing on the image captured by the EOTS before it is sent to the display unit for display.
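A hypothetical sketch of the automatic tracking module's output in claim 10: the orientation of the target point relative to the crosshair, assuming the crosshair sits at the frame centre and the tracker reports an axis-aligned bounding box (both assumptions, not stated in the claim):

    import numpy as np

    def target_offset(bbox, frame_shape):
        """Orientation and pixel offset of the tracked target point relative
        to the crosshair, taken here as the frame centre (assumed convention)."""
        cy, cx = frame_shape[0] / 2.0, frame_shape[1] / 2.0
        x, y, w, h = bbox                     # tracker output: top-left + size
        tx, ty = x + w / 2.0, y + h / 2.0     # target point = box centre
        dx, dy = tx - cx, cy - ty             # image y axis points downwards
        angle = np.degrees(np.arctan2(dy, dx))  # 0 deg = right, 90 deg = up
        return angle, np.hypot(dx, dy)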
CN201711047978.3A 2017-10-31 2017-10-31 Photoelectric sighting system with automatic target point tracking and automatic tracking method therefor Active CN107894189B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711047978.3A CN107894189B (en) 2017-10-31 2017-10-31 Photoelectric sighting system with automatic target point tracking and automatic tracking method therefor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711047978.3A CN107894189B (en) 2017-10-31 2017-10-31 Photoelectric sighting system with automatic target point tracking and automatic tracking method therefor

Publications (2)

Publication Number Publication Date
CN107894189A true CN107894189A (en) 2018-04-10
CN107894189B CN107894189B (en) 2019-08-23

Family

ID=61802928

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711047978.3A Active CN107894189B (en) 2017-10-31 2017-10-31 Photoelectric sighting system with automatic target point tracking and automatic tracking method therefor

Country Status (1)

Country Link
CN (1) CN107894189B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170016699A1 (en) * 2010-07-19 2017-01-19 Cubic Corporation Integrated multifunction scope for optical combat identification and other uses
CN103955951A (en) * 2014-05-09 2014-07-30 合肥工业大学 Fast target tracking method based on regularization templates and reconstruction error decomposition
CN105300183A (en) * 2015-10-30 2016-02-03 北京艾克利特光电科技有限公司 Aiming device assembly capable of achieving automatic tracking and aiming
CN107084644A (en) * 2017-04-06 2017-08-22 江苏科技大学海洋装备研究院 A kind of firearms automatic aiming tracking system and method

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034193A (en) * 2018-06-20 2018-12-18 上海理工大学 Multiple features fusion and dimension self-adaption nuclear phase close filter tracking method
CN109978161A (en) * 2019-03-08 2019-07-05 吉林大学 A kind of general convolution-pond synchronization process convolution kernel system
CN109978161B (en) * 2019-03-08 2022-03-04 吉林大学 Universal convolution-pooling synchronous processing convolution kernel system
CN111563438A (en) * 2020-04-28 2020-08-21 厦门市美亚柏科信息股份有限公司 Target duplication eliminating method and device for video structuring
CN111563438B (en) * 2020-04-28 2022-08-12 厦门市美亚柏科信息股份有限公司 Target duplication eliminating method and device for video structuring
CN111624203B (en) * 2020-06-16 2023-07-04 河北工业大学 Relay contact point alignment non-contact measurement method based on machine vision
CN111624203A (en) * 2020-06-16 2020-09-04 河北工业大学 Relay contact alignment non-contact measurement method based on machine vision
CN114035186A (en) * 2021-10-18 2022-02-11 北京航天华腾科技有限公司 Target position tracking and indicating system and method
CN114035186B (en) * 2021-10-18 2022-06-28 北京航天华腾科技有限公司 Target position tracking and indicating system and method
CN114184086A (en) * 2021-12-13 2022-03-15 绵阳久强智能装备有限公司 Photoelectric tracking image alignment method for anti-sniper robot
CN114184086B (en) * 2021-12-13 2023-10-03 绵阳久强智能装备有限公司 Photoelectric tracking image alignment method for anti-sniper robot
CN115631359A (en) * 2022-11-17 2023-01-20 诡谷子人工智能科技(深圳)有限公司 Image data processing method and device for machine vision recognition
CN117146739A (en) * 2023-10-31 2023-12-01 南通蓬盛机械有限公司 Angle measurement verification method and system for optical sighting telescope
CN117146739B (en) * 2023-10-31 2024-01-23 南通蓬盛机械有限公司 Angle measurement verification method and system for optical sighting telescope

Also Published As

Publication number Publication date
CN107894189B (en) 2019-08-23

Similar Documents

Publication Publication Date Title
CN107894189B (en) Photoelectric sighting system with automatic target point tracking and automatic tracking method therefor
US10782095B2 (en) Automatic target point tracing method for electro-optical sighting system
CA3100569C (en) Ship identity recognition method based on fusion of AIS data and video data
Boltes et al. Automatic extraction of pedestrian trajectories from video recordings
CN111476827B (en) Target tracking method, system, electronic device and storage medium
CN105373135B (en) Machine-vision-based aircraft docking guidance and aircraft type recognition method and system
CN108731587A (en) Vision-based dynamic target tracking and localization method for unmanned aerial vehicles
CN111860352B (en) Multi-lens vehicle track full tracking system and method
CN110929567B (en) Monocular camera monitoring scene-based target position and speed measuring method and system
CN105306892B (en) Ship video generation and display method in form of a chain of evidence
CN112613568B (en) Target identification method and device based on visible light and infrared multispectral image sequence
CN111274992A (en) Cross-camera pedestrian re-identification method and system
CN103090796B (en) Measuring system and method for rocket swing and settlement
CN114399675A (en) Target detection method and device based on machine vision and laser radar fusion
CN108090922A (en) Intelligent target tracking trajectory recording method
CN109657717A (en) Heterologous image matching method based on multi-scale dense structure feature extraction
CN112197705A (en) Fruit positioning method based on vision and laser ranging
CN114511592B (en) Personnel track tracking method and system based on RGBD camera and BIM system
CN115717867A (en) Bridge deformation measurement method based on airborne double cameras and target tracking
CN209530065U (en) Image-based coordinate positioning device
CN110910379A (en) Incomplete detection method and device
CN106897730A (en) SAR target model recognition method based on fused classification information and locality preserving projections
CN107907006A (en) Automatic deviation-correcting gun sight and automatic correction method thereof
Wu et al. Multimodal Collaboration Networks for Geospatial Vehicle Detection in Dense, Occluded, and Large-Scale Events
CN111964681B (en) Real-time positioning system of inspection robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210709

Address after: 550002 aluminum and aluminum processing park, Baiyun District, Guiyang City, Guizhou Province

Patentee after: GUIZHOU JINGHAO TECHNOLOGY Co.,Ltd.

Address before: 100080 3rd floor, building 1, 66 Zhongguancun East Road, Haidian District, Beijing

Patentee before: BEIJING AIKELITE OPTOELECTRONIC TECHNOLOGY Co.,Ltd.