CN105654085A - Image technology-based bullet hole recognition method - Google Patents

Image technology-based bullet hole recognition method

Info

Publication number
CN105654085A
Authority
CN
China
Prior art keywords
area
bullet hole
confidence level
confidence
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201511023771.3A
Other languages
Chinese (zh)
Inventor
周斯忠
蒋荣欣
岳猛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Transinfo Tech Co Ltd
Original Assignee
Hangzhou Transinfo Tech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Transinfo Tech Co Ltd filed Critical Hangzhou Transinfo Tech Co Ltd
Priority to CN201511023771.3A priority Critical patent/CN105654085A/en
Publication of CN105654085A publication Critical patent/CN105654085A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention discloses a bullet hole recognition method based on image technology. The method includes a preprocessing stage, a recognition stage and a confidence calculation stage. In the preprocessing stage, two adjacent frames are acquired as the pictures to be recognized, the region of interest that just contains the target face is extracted from each of the two frames, and frame differencing is performed on the regions of interest of the two frames using different thresholds for regions of different colors, yielding a post-frame-difference picture. In the recognition stage, an edge detection algorithm is used to find the contour information in the post-frame-difference picture. In the confidence calculation stage, the absolute-area confidence, aspect-ratio confidence and area-duty-ratio confidence of each contour are calculated, and the contour with the highest sum of these three confidences is the recognized bullet hole. By processing the two frames captured before and after a bullet hits the target, the method recognizes a new bullet hole accurately and quickly, with an accuracy rate of more than 98%.

Description

Bullet hole recognition method based on image technology
Technical field
The present invention relates to the technical field of automatic target scoring, and in particular to a bullet hole recognition method based on image technology.
Background technology
In the prior art, bullet hole recognition methods mostly perform image processing on a single image to identify bullet holes. For example, in a multi-bullet-hole recognition method based on projection theory, the image containing bullet holes is divided into several regions by a rectangular splitting method based on projection theory, the bullet holes are then identified by a cross-template method based on region filling, and the centers of the bullet holes are calculated.
The prior art also includes methods that process the two frames captured before and after a shot to obtain the bullet hole. For example, in a bullet hole recognition method based on color similarity measurement, color images are used as the data source; on the basis of imaging analysis, the concept of color similarity is adopted to describe the bullet hole features, a 3*3 square region is used as the basic comparison unit, the changes between the two target-face images before and after the shot are quantified, the regions exceeding a human-eye threshold are extracted, and the result is finally obtained from an analysis of the bullet hole features. However, the result of this method depends on a large amount of bullet hole data and easily leads to misjudgment.
Accordingly, there is a need for a bullet hole recognition method that guarantees accuracy while meeting real-time requirements.
Summary of the invention
The invention provides a bullet hole recognition method based on image technology. By processing the two frames captured before and after a bullet hits the target, a new bullet hole is recognized accurately and at high speed, with an accuracy rate of more than 98%.
A bullet hole recognition method based on image technology, including:
Preprocessing stage: acquire two adjacent frames as the pictures to be recognized, extract from each of the two frames to be recognized the region of interest that just contains the target face, and perform frame differencing on the regions of interest of the two frames to be recognized using different thresholds for regions of different colors, obtaining a post-frame-difference picture;
Recognition stage: use an edge detection algorithm to find the contour information in the post-frame-difference picture;
Confidence stage: calculate the absolute-area confidence, aspect-ratio confidence and area-duty-ratio confidence of each contour; the contour with the highest sum of absolute-area confidence, aspect-ratio confidence and area-duty-ratio confidence is the recognized bullet hole.
When performing bullet hole recognition, the present invention captures two consecutive frames and processes them to obtain the position information of the bullet hole.
When extracting the region of interest, the prior art may be used, or a target face recognition method based on Haar features may be adopted.
For the differently colored regions of the target face, the present invention uses different thresholds for frame differencing, which improves the accuracy of bullet hole recognition.
For example, for a chest-ring target, the target face is divided into a white region and a green region; a larger threshold is used for the white region and a smaller threshold is used for the green region during frame differencing.
Preferably, in the preprocessing stage: perform R, G, B three-channel histogram statistics on the regions of interest of the two frames to be recognized, and obtain the upper and lower limit values of the differently colored regions of the target face and the brightness change value L between the regions of interest of the two frames;
According to the upper and lower limit values of the differently colored regions, the corresponding adaptive threshold T is obtained by the following formula:
T = β + L, where β is a reference value.
The color of a same-colored region on the target face is not completely uniform in the image. After the upper and lower limit values of each colored region of the target face are obtained, colors lying between the upper and lower limit values are regarded as the same color; for example, after the upper and lower limit values of the white region are obtained, colors lying between these limits are regarded as white.
In order to reduce the difficulty of contour finding, preferably, in the recognition stage: a closing operation and then an opening operation are performed on the post-frame-difference picture, and contour finding is performed on the processed picture.
The closing operation eliminates isolated points next to neighboring points and makes the contour shape fuller; the opening operation is then performed to eliminate noise and remove interference.
The simple Canny edge detection algorithm is used to find the contours; each contour is fitted to a rectangle, and information such as the contour's center point coordinates, length and width is obtained and stored.
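By way of illustration, this step might look as follows in Python with OpenCV; the Canny thresholds and the assumption that the input is a single-channel post-frame-difference picture are not specified by the patent and are only a sketch.

```python
# Sketch: find contours in the post-frame-difference picture with Canny,
# fit each to an upright rectangle, and store center, width and height.
# The Canny thresholds (50, 150) are illustrative assumptions.
import cv2


def extract_contour_rects(frame_diff_img):
    """frame_diff_img: single-channel post-frame-difference picture."""
    edges = cv2.Canny(frame_diff_img, 50, 150)
    # [-2:] keeps this compatible with both OpenCV 3.x and 4.x signatures.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)[-2:]
    rects = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)  # fitted rectangle
        rects.append({"center": (x + w / 2.0, y + h / 2.0),
                      "width": w, "height": h,
                      "area": cv2.contourArea(c)})
    return rects
```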
In the present invention, the absolute-area confidence is the ratio of the contour's absolute area to the absolute area of an ideal bullet hole; the aspect-ratio confidence is the ratio of the contour's aspect ratio to the aspect ratio of an ideal bullet hole; and the area-duty-ratio confidence is the ratio of the contour's area duty ratio to the area duty ratio of an ideal bullet hole.
In the bullet hole recognition method based on image technology provided by the invention, R, G, B histogram statistics are performed on the image to set an adaptive threshold for each colored region, and the two images before and after the shot are then processed by color according to the corresponding thresholds. Compared with the direct frame-difference method, this processing is optimized. In addition, the confidence calculation eliminates interference caused by geometric distortion and by swaying of the target, which greatly improves recognition accuracy and speed, with an accuracy rate of more than 98%; the method is unaffected in environments with frequent brightness changes, strong wind or rain.
Brief description of the drawings
Fig. 1 is a schematic flow chart of the preprocessing stage in the bullet hole recognition method based on image technology of the present invention;
Fig. 2 is a schematic flow chart of the recognition stage in the bullet hole recognition method based on image technology of the present invention;
Fig. 3 is a schematic flow chart of the confidence stage in the bullet hole recognition method based on image technology of the present invention.
Detailed description of the invention
The bullet hole recognition method based on image technology of the present invention is described in detail below with reference to the accompanying drawings.
The preprocessing stage, shown in Fig. 1, comprises the following steps:
(1) Take the two frames captured before and after the shot as the pictures to be recognized; the two frames are the pre-hit image I_update and the post-hit image I_score.
Using the prior art or the Haar-feature-based target face recognition method to identify the target face, crop the pre-hit image I_update to obtain the region-of-interest image I_lastROI that just contains the target face; crop the post-hit image I_score in the same way to obtain the region-of-interest image I_ROI that just contains the target face (a simple cropping sketch is given below).
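As a minimal illustration of this step, the sketch below crops the same region of interest from the two frames once the bounding box of the target face is known; the detector that produces target_rect (the prior art or the Haar-based method described next) is assumed to be available.

```python
# Sketch of step (1): crop the region of interest that just contains the
# target face from the pre-hit frame I_update and the post-hit frame I_score.
# target_rect = (x, y, w, h) is assumed to come from a target face detector.
def crop_rois(i_update, i_score, target_rect):
    x, y, w, h = target_rect
    i_last_roi = i_update[y:y + h, x:x + w]  # I_lastROI
    i_roi = i_score[y:y + h, x:x + w]        # I_ROI
    return i_last_roi, i_roi
```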
The Haar-feature-based target face recognition method includes an initialization phase, a training phase and a recognition-and-cropping phase. The initialization phase comprises the following steps:
1-a. Collect n1 pictures containing the target face as positive samples and n2 pictures not containing the target face as negative samples; the positive and negative samples contain essentially similar background elements.
1-b. Normalize the collected positive and negative samples, scaling them all to size X_s*Y_s.
1-c. Convert the n1 positive samples to grayscale images and extract Haar features from each in turn to form the sample space X_1.
1-d. Convert the n2 negative samples to grayscale images and extract Haar features from each in turn to form the sample space X_2.
1-e. Merge sample space X_1 and sample space X_2, and use the integral image to compute the feature values; all feature values form the feature matrix.
The integral image at coordinate (x, y) is the sum of all pixels above and to the left of that coordinate, defined as
ff(x, y) = Σ_{x' < x, y' < y} f(x', y'),
where ff denotes the integral image and f denotes the original image.
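For reference, the integral image defined above can be computed with cumulative sums, and any rectangle sum then needs only four lookups; the sketch below assumes a single-channel numpy array (OpenCV's cv2.integral would serve equally well).

```python
# Sketch: integral image ff(x, y) = sum of f over the region above and to
# the left of (x, y), computed with cumulative sums.
import numpy as np


def integral_image(f):
    """f: 2-D grayscale array. ff[y, x] is the sum of f over rows 0..y
    and columns 0..x."""
    return np.cumsum(np.cumsum(f.astype(np.int64), axis=0), axis=1)


def rect_sum(ff, y0, x0, y1, x1):
    """Sum of f over the inclusive rectangle [y0..y1] x [x0..x1],
    obtained from the integral image in four lookups."""
    total = ff[y1, x1]
    if y0 > 0:
        total -= ff[y0 - 1, x1]
    if x0 > 0:
        total -= ff[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ff[y0 - 1, x0 - 1]
    return total
```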
The training phase comprises the following steps:
2-a. Use the feature values of the n training samples obtained in the initialization phase, where n = n1 + n2, to train an AdaBoost cascade classifier;
2-b. Initially, the weights of all training samples are set equal; in this embodiment the initial weight is 1/n, and a weak classifier is trained under this condition.
For each feature value ff', the weak classifier h(x, ff', p, θ) is:
h(x, ff', p, θ) = 1 if p·ff'(x) < p·θ, and 0 otherwise,
where ff' is the feature value, θ is the threshold, and p indicates the direction of the inequality sign.
2-c. In the t-th iteration (t = 1, 2, 3, ..., T, where T is the number of iterations), the weights of the training samples are determined by the result of the (t-1)-th iteration; the weights are adjusted at each iteration (if a sample is classified correctly by the current classifier, its weight is reduced; if it is misclassified, its weight is increased), giving a new sample distribution.
2-d. After T rounds, T weak classifiers are obtained; the T weak classifiers are combined according to their weights to obtain the final strong classifier H(x), where α_t is the weight of weak classifier h_t(x) and t is a natural number from 1 to T.
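A minimal sketch of the decision rules only is given below; it uses the standard Viola-Jones/AdaBoost formulation (a weighted vote thresholded at half the total weight), and the function names and the half-sum threshold are assumptions rather than details given in the patent.

```python
# Sketch of the weak and strong classifier decisions (training of theta,
# p and alpha_t is omitted). h = 1 if p * ff' < p * theta else 0; H(x)
# combines the T weak classifiers with their weights alpha_t.
def weak_classify(feature_value, theta, p):
    """p is +1 or -1 and fixes the direction of the inequality sign."""
    return 1 if p * feature_value < p * theta else 0


def strong_classify(feature_values, thetas, ps, alphas):
    """feature_values[t] is the feature value used by the t-th weak
    classifier for the window being tested."""
    votes = sum(a * weak_classify(f, th, p)
                for f, th, p, a in zip(feature_values, thetas, ps, alphas))
    # Standard Viola-Jones acceptance threshold: half the total weight.
    return 1 if votes >= 0.5 * sum(alphas) else 0
```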
The recognition-and-cropping phase comprises the following steps:
3-a. Take the captured RGB image as the sample image to be recognized, I_RGB; in the sample image to be recognized, markers with a pronounced color difference are provided at the four corners of the target face outline. Convert the sample image I_RGB to grayscale to obtain the grayscale image I_GRAY.
3-b. To cope with the effect of different illumination on the method of the invention, extract histogram information from the gray-level distribution of the grayscale image I_GRAY, then apply a nonlinear stretch so that the numbers of pixels within certain gray ranges are equalized and the contrast is increased.
3-c. Extract Haar features from the grayscale image I_GRAY adjusted in the preceding steps, obtaining the Haar features of the grayscale image.
3-d. Use the trained AdaBoost cascade classifier to classify the grayscale image I_GRAY, and take the matching area with the largest area, denoted region S.
3-e. Crop the image of region S from the grayscale image I_GRAY and project it along the x-axis to obtain the x-axis histogram projection Hist_x; traverse the histogram to find the position of the maximum, and then locate, according to the maximum, the start and end positions (x_hist_start, x_hist_end) of the target face's histogram projection in the x-axis direction.
3-f. Project the image of region S along the y-axis to obtain the y-axis histogram projection Hist_y; traverse the histogram to find the position of the maximum, and then locate, according to the maximum, the start and end positions (y_hist_start, y_hist_end) of the target face's histogram projection in the y-axis direction.
3-g. According to the start and end positions (x_hist_start, x_hist_end) and (y_hist_start, y_hist_end) of the x-axis and y-axis histogram projections, crop region S more precisely to obtain the target face region S_final, as sketched below.
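A sketch of steps 3-e to 3-g follows. The patent only says the start and end positions are located "according to the maximum"; the cut-off ratio of 0.5 used below to delimit the span around the maximum is therefore an assumption for illustration.

```python
# Sketch: refine region S by projecting it onto the x- and y-axes and
# keeping the span around each projection maximum. The 0.5 cut-off ratio
# is an illustrative assumption.
import numpy as np


def projection_span(profile, ratio=0.5):
    """(start, end) indices of the contiguous span around the profile
    maximum whose values stay above ratio * max."""
    peak = int(np.argmax(profile))
    cut = ratio * profile[peak]
    start = peak
    while start > 0 and profile[start - 1] >= cut:
        start -= 1
    end = peak
    while end < len(profile) - 1 and profile[end + 1] >= cut:
        end += 1
    return start, end


def refine_target_region(s_region):
    """s_region: 2-D grayscale crop of the matching area S."""
    hist_x = s_region.sum(axis=0)          # Hist_x, projection onto x-axis
    hist_y = s_region.sum(axis=1)          # Hist_y, projection onto y-axis
    x0, x1 = projection_span(hist_x)
    y0, y1 = projection_span(hist_y)
    return s_region[y0:y1 + 1, x0:x1 + 1]  # S_final
```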
(2) Perform R, G, B three-channel histogram statistics on the region-of-interest images I_lastROI and I_ROI respectively.
For the R channel, first traverse the histogram to obtain the peak value H_max; then, according to the peak value, find the X coordinates of the points whose value is λ*H_max, where 0 < λ < 0.15 and λ is set as required.
(3) According to the X coordinates obtained in step (2), obtain the upper and lower limit values of the target face's white region and green region in the region-of-interest images I_lastROI and I_ROI respectively, as well as the brightness change value L_offset between the two region-of-interest images.
The histogram usually presents two peaks: the peak near 0 represents the gray-level statistics of the green region, and the peak near 255 represents the gray-level statistics of the white region. The two troughs adjacent to the peak near 0 correspond to the lower and upper limit values of the target face's green region, and the two troughs adjacent to the peak near 255 correspond to the lower and upper limit values of the target face's white region. The brightness change value is determined from the change of the upper limit value between the two frames.
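A sketch of this per-channel analysis is given below. Detecting the peaks by splitting the histogram at 128 and walking outwards to the nearest troughs is an implementation choice, not something prescribed by the patent, and deriving L_offset from the white region's upper limit is one reading of the description.

```python
# Sketch: per-channel histogram analysis of a region of interest. The peak
# near 0 is taken as the green region, the peak near 255 as the white
# region; the troughs adjacent to each peak give its lower and upper limits.
import numpy as np


def channel_limits(roi_channel):
    hist, _ = np.histogram(roi_channel, bins=256, range=(0, 256))
    green_peak = int(np.argmax(hist[:128]))        # peak near 0
    white_peak = 128 + int(np.argmax(hist[128:]))  # peak near 255

    def nearest_trough(peak, step):
        i = peak
        while 0 < i < 255 and hist[i + step] <= hist[i]:
            i += step
        return i

    green_limits = (nearest_trough(green_peak, -1), nearest_trough(green_peak, 1))
    white_limits = (nearest_trough(white_peak, -1), nearest_trough(white_peak, 1))
    return green_limits, white_limits


def brightness_change(white_limits_last, white_limits_current):
    """L_offset taken from the change of the white region's upper limit."""
    return int(white_limits_current[1]) - int(white_limits_last[1])
```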
(4) According to the upper and lower limit values of the white region and of the green region obtained in step (3), generate a pair of adaptive thresholds T_white and T_green respectively, where T_white = β + L_offset and T_green = β + L_offset, β being the reference value of the corresponding color region.
Under different brightness the upper and lower limit values differ, and the thresholds change accordingly: if the upper and lower limit values are high, the threshold is raised; if they are low, the threshold is lowered.
The value of β is a given reference value at initialization: the RGB difference between each bullet hole and its surrounding background is accumulated to generate the initial value of β. Each time a bullet hole is recognized, the RGB difference between the bullet hole and its surrounding background is accumulated again, and the reference value β is adjusted on this basis.
(5) Divide the chest-ring target face into a green region and a white region and perform frame differencing between the two frames:
For the white region, if I_lastROI[x, y] - I_ROI[x, y] > T_white, set this point to white in the black image I_holediff;
Similarly, for the green region, if I_lastROI[x, y] - I_ROI[x, y] > T_green, set this point to white in the black image I_holediff;
After traversing all points of the target face image, the post-frame-difference picture I_holediff is obtained (see the sketch below).
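A sketch of steps (4) and (5) together is given below. The masks separating the white and green regions are reconstructed here from the limit values found above, and the default reference values beta_white and beta_green are purely illustrative stand-ins for the region-specific β.

```python
# Sketch of steps (4)-(5): build the adaptive thresholds T_white and
# T_green and produce the binary post-frame-difference picture I_holediff.
# beta_white / beta_green stand for the reference value beta of each color
# region; their defaults and the limit-based masks are illustrative.
import numpy as np


def post_frame_difference(gray_last_roi, gray_roi, white_limits, green_limits,
                          l_offset, beta_white=40, beta_green=20):
    t_white = beta_white + l_offset  # T_white = beta + L_offset
    t_green = beta_green + l_offset  # T_green = beta + L_offset

    diff = gray_last_roi.astype(np.int16) - gray_roi.astype(np.int16)

    white_mask = ((gray_last_roi >= white_limits[0]) &
                  (gray_last_roi <= white_limits[1]))
    green_mask = ((gray_last_roi >= green_limits[0]) &
                  (gray_last_roi <= green_limits[1]))

    holediff = np.zeros_like(gray_roi, dtype=np.uint8)  # black image
    holediff[white_mask & (diff > t_white)] = 255       # point set to white
    holediff[green_mask & (diff > t_green)] = 255
    return holediff                                     # I_holediff
```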
The recognition stage, shown in Fig. 2, comprises the following steps:
(1) Process the post-frame-difference picture I_holediff with morphological image operations.
First perform a closing operation on the post-frame-difference picture I_holediff to eliminate isolated points next to neighboring points and make the contour shape fuller, then perform an opening operation to filter noise and remove interference, obtaining the picture I_holediff_new (see the sketch below).
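In OpenCV terms, the closing-then-opening of this step could look like the sketch below; the 3x3 elliptical structuring element is an assumption, since the patent does not specify a kernel.

```python
# Sketch: closing (fill small gaps, smooth the contour) followed by opening
# (remove isolated noise) on the post-frame-difference picture.
# The 3x3 elliptical structuring element is an illustrative choice.
import cv2


def clean_frame_difference(holediff):
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    closed = cv2.morphologyEx(holediff, cv2.MORPH_CLOSE, kernel)
    opened = cv2.morphologyEx(closed, cv2.MORPH_OPEN, kernel)
    return opened  # I_holediff_new
```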
(2) Scan the picture I_holediff_new line by line, recording the start point, end point and row number of each continuous white run in every row.
(3) Label each white run, except those in the first row, as follows:
If it is not adjacent to any white run of the previous row, give it a new label;
If it is adjacent to some white run of the previous row, give it the label of that adjacent white run. After traversing all white runs, the connected regions in the picture I_holediff_new are obtained (a sketch of this run-based labelling follows step (4)).
(4) Extract the contour of each connected region.
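The run-based labelling of steps (2) and (3) could be sketched as follows; resolving label collisions with a small union-find structure, and treating runs whose column ranges overlap as adjacent (4-connectivity), are implementation choices rather than details given in the patent.

```python
# Sketch of steps (2)-(3): record the white runs of each row, label a run
# with the label of a touching run in the previous row, otherwise assign a
# new label; label equivalences are merged with union-find.
import numpy as np


def find_runs(row):
    """(start, end) column pairs of the white runs in one binary row."""
    cols = np.flatnonzero(row > 0)
    runs, start, prev = [], None, None
    for c in cols:
        if start is None:
            start = prev = c
        elif c == prev + 1:
            prev = c
        else:
            runs.append((start, prev))
            start = prev = c
    if start is not None:
        runs.append((start, prev))
    return runs


def label_regions(binary):
    """Return {label: list of (row, col) pixels} of the connected regions."""
    parent = {}

    def find(a):                        # union-find root lookup
        while parent[a] != a:
            a = parent[a]
        return a

    labelled_runs, prev_runs, next_label = [], [], 1
    for y in range(binary.shape[0]):
        cur_runs = []
        for (s, e) in find_runs(binary[y]):
            # previous-row runs whose column range overlaps this run
            touching = [lab for (ps, pe, lab) in prev_runs
                        if not (pe < s or ps > e)]
            if touching:
                lab = find(touching[0])
                for other in touching[1:]:   # record label equivalence
                    parent[find(other)] = lab
            else:
                lab = next_label
                parent[lab] = lab
                next_label += 1
            cur_runs.append((s, e, lab))
            labelled_runs.append((y, s, e, lab))
        prev_runs = cur_runs

    regions = {}
    for (y, s, e, lab) in labelled_runs:     # second pass: resolve labels
        regions.setdefault(find(lab), []).extend((y, x) for x in range(s, e + 1))
    return regions
```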
The confidence stage, shown in Fig. 3, comprises the following steps:
(1) Calculate the center point coordinates (X, Y) of each contour.
(2) Calculate the absolute area S_area of each contour and compare it with the absolute area of an ideal bullet hole to obtain the absolute-area confidence S_abs.
(3) Calculate the aspect ratio of each contour and compare it with the aspect ratio of an ideal bullet hole to obtain the aspect-ratio confidence S_xy.
(4) Calculate the area duty ratio of each contour and compare it with the area duty ratio of an ideal bullet hole to obtain the duty-ratio confidence S_duty.
Find the bounding rectangle of the contour, with length x and width y; fit a circle of radius R = (x + y)/2 and calculate the area of this fitted circle; the ratio of the contour's absolute area to the area of the bounding rectangle is the area duty ratio.
(5) Add the three confidence values to obtain the confidence S = S_abs + S_xy + S_duty; the contour with the highest confidence S is regarded as the recognized bullet hole (see the sketch below).
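The confidence stage might be sketched as follows. The "ideal bullet hole" constants are illustrative placeholders, and the area duty ratio is taken here as contour area over bounding-rectangle area, which is one reading of the description above.

```python
# Sketch of the confidence stage: each confidence is the ratio of the
# measured quantity to its ideal value, and the contour with the highest
# sum S = S_abs + S_xy + S_duty is taken as the bullet hole.
import cv2

IDEAL_AREA = 80.0    # placeholder: ideal bullet-hole absolute area (pixels)
IDEAL_ASPECT = 1.0   # placeholder: ideal aspect ratio (roughly circular)
IDEAL_DUTY = 0.785   # placeholder: circle inscribed in its bounding box


def contour_confidence(contour):
    x, y, w, h = cv2.boundingRect(contour)
    area = cv2.contourArea(contour)
    s_abs = area / IDEAL_AREA                    # absolute-area confidence
    s_xy = (w / float(h)) / IDEAL_ASPECT         # aspect-ratio confidence
    s_duty = (area / float(w * h)) / IDEAL_DUTY  # duty-ratio confidence
    return s_abs + s_xy + s_duty, (x + w / 2.0, y + h / 2.0)


def pick_bullet_hole(contours):
    """Return (center, confidence S) of the contour with the highest S."""
    confidence, center = max(contour_confidence(c) for c in contours)
    return center, confidence
```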

Claims (6)

1. A bullet hole recognition method based on image technology, characterized by including:
a preprocessing stage: acquiring two adjacent frames as pictures to be recognized, extracting from each of the two frames to be recognized the region of interest that just contains the target face, and performing frame differencing on the regions of interest of the two frames to be recognized using different thresholds for regions of different colors, to obtain a post-frame-difference picture;
a recognition stage: using an edge detection algorithm to find the contour information in the post-frame-difference picture;
a confidence stage: calculating the absolute-area confidence, aspect-ratio confidence and area-duty-ratio confidence of each contour, wherein the contour with the highest sum of absolute-area confidence, aspect-ratio confidence and area-duty-ratio confidence is the recognized bullet hole.
2. The bullet hole recognition method based on image technology according to claim 1, characterized in that, in the preprocessing stage: R, G, B three-channel histogram statistics are performed on the regions of interest of the two frames to be recognized to obtain the upper and lower limit values of the differently colored regions of the target face and the brightness change value L between the regions of interest of the two frames;
according to the upper and lower limit values of the differently colored regions, the adaptive threshold T is obtained by the following formula:
T = β + L, where β is a reference value.
3. The bullet hole recognition method based on image technology according to claim 1, characterized in that, in the recognition stage: a closing operation and then an opening operation are performed on the post-frame-difference picture, and contour finding is performed on the processed picture.
4. The bullet hole recognition method based on image technology according to claim 1, characterized in that the absolute-area confidence is the ratio of the contour's absolute area to the absolute area of an ideal bullet hole.
5. The bullet hole recognition method based on image technology according to claim 1, characterized in that the aspect-ratio confidence is the ratio of the contour's aspect ratio to the aspect ratio of an ideal bullet hole.
6. The bullet hole recognition method based on image technology according to claim 1, characterized in that the area-duty-ratio confidence is the ratio of the contour's area duty ratio to the area duty ratio of an ideal bullet hole.
CN201511023771.3A 2015-12-31 2015-12-31 Image technology-based bullet hole recognition method Pending CN105654085A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511023771.3A CN105654085A (en) 2015-12-31 2015-12-31 Image technology-based bullet hole recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201511023771.3A CN105654085A (en) 2015-12-31 2015-12-31 Image technology-based bullet hole recognition method

Publications (1)

Publication Number Publication Date
CN105654085A true CN105654085A (en) 2016-06-08

Family

ID=56490758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511023771.3A Pending CN105654085A (en) 2015-12-31 2015-12-31 Image technology-based bullet hole recognition method

Country Status (1)

Country Link
CN (1) CN105654085A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101826228A (en) * 2010-05-14 2010-09-08 上海理工大学 Detection method of bus passenger moving objects based on background estimation
US20130266197A1 (en) * 2010-12-17 2013-10-10 Region Midtjylland Method for delineation of tissue lesions

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
谢水根 (Xie Shuigen): "Research on Dynamic Human Contour Detection, Recognition and Simple Action Analysis Based on AVI Video", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106485258B (en) * 2016-10-21 2019-03-05 中北大学 One kind being based on line array CCD bullet location drawing picture rapidly extracting processing method
CN106485258A (en) * 2016-10-21 2017-03-08 中北大学 A kind of line array CCD bullet location drawing that is based on is as rapid extraction processing method
CN106802113A (en) * 2016-12-23 2017-06-06 西安交通大学 Intelligent hit telling system and method based on many shell hole algorithm for pattern recognitions
CN106802113B (en) * 2016-12-23 2017-10-20 西安交通大学 Intelligent hit telling system and method based on many shell hole algorithm for pattern recognitions
CN107958205A (en) * 2017-10-31 2018-04-24 北京艾克利特光电科技有限公司 Gunnery training intelligent management system
CN108805144A (en) * 2018-06-01 2018-11-13 杭州晨鹰军泰科技有限公司 Shell hole recognition methods based on morphology correction and system, indication of shots equipment
CN108805210B (en) * 2018-06-14 2022-03-04 深圳深知未来智能有限公司 Bullet hole identification method based on deep learning
CN108805210A (en) * 2018-06-14 2018-11-13 深圳深知未来智能有限公司 A kind of shell hole recognition methods based on deep learning
CN109948630A (en) * 2019-03-19 2019-06-28 深圳初影科技有限公司 Recognition methods, device, system and the storage medium of target sheet image
CN109948630B (en) * 2019-03-19 2020-03-31 深圳初影科技有限公司 Target paper image identification method, device and system and storage medium
CN109990662A (en) * 2019-04-23 2019-07-09 西人马帝言(北京)科技有限公司 Automatic target-indicating method, apparatus, equipment and computer readable storage medium
CN109990662B (en) * 2019-04-23 2022-04-12 西人马帝言(北京)科技有限公司 Automatic target scoring method, device, equipment and computer readable storage medium
CN111222504A (en) * 2019-11-18 2020-06-02 杭州晨鹰军泰科技有限公司 Bullet hole target scoring method, device, equipment and medium
CN111507987A (en) * 2020-04-10 2020-08-07 刘盛杰 Method and device for acquiring and processing firing practice target image
CN112507827A (en) * 2020-11-30 2021-03-16 之江实验室 Intelligent video target shooting real-time detection method
CN112507827B (en) * 2020-11-30 2022-05-13 之江实验室 Intelligent video target shooting real-time detection method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160608