CN110472608A - Image recognition tracking processing method and system - Google Patents

Image recognition tracking processing method and system

Info

Publication number
CN110472608A
CN110472608A (Application CN201910774328.1A)
Authority
CN
China
Prior art keywords
template
image
frame
picture
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910774328.1A
Other languages
Chinese (zh)
Inventor
石翊鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201910774328.1A
Publication of CN110472608A
Legal status: Pending

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Abstract

The present invention provides an image recognition tracking processing method and system. The method comprises the steps of: obtaining a current video and splitting it into frames; selecting a target box on the first frame image to set it as a feature template, and performing target feature extraction on the picture of each frame of the current video; performing template matching on the extracted pictures, and updating a template library according to the matching results; and controlling the template library to output a matching result and, when the matching result is determined to meet a preset index condition, outputting the video frame image corresponding to the matching result. By performing target feature extraction, performing template matching, updating the template library and applying the preset index condition, the present invention improves the precision of target detection, shortens the detection time, improves the accuracy and efficiency of recognition, and further ensures the accuracy and robustness of tracking the given target.

Description

Image recognition tracking processing method and system
Technical field
The present invention relates to the fields of electronic information and computer vision, and more particularly to an image recognition tracking processing method and system.
Background art
Image processing is currently a very popular research field; it combines computer vision with artificial intelligence well and is an important platform for the industrialization of AI and signal processing. Within image processing, the recognition and tracking of targets has always been a direction that people research and explore.
Specifically, there are two main classes of algorithms for target recognition (and tracking): the first processes, analyzes, transforms and extracts from images using methods such as computer vision, computer graphics and digital image processing to accomplish the relevant functions; the second uses advanced AI algorithms such as machine learning and deep learning (neural networks) to build a statistical model and continuously train and evaluate it on received image samples until a model is produced that can efficiently and accurately recognize the given target. Of course, the two approaches are now often fused and used together to improve recognition performance and efficiency.
In traditional computer vision or digital image processing methods, the algorithms and procedures are mostly isolated and cannot easily be combined; traditional algorithms cannot speed up processing or improve processing precision, and are more a realization of the original theory. For images or targets with demanding recognition requirements, their recognition precision and efficiency are low.
Summary of the invention
The technical problem to be solved by the present invention is to provide an image recognition tracking processing method and system with high recognition precision.
To solve the above technical problem, the image recognition tracking processing method provided by the present invention comprises the steps of:
obtaining current face data, and performing account identification according to the current face data to obtain a target account;
obtaining a current video, and splitting the current video into frames;
selecting a target box on the first frame image to set it as a feature template, and performing target feature extraction on the picture of each frame of the current video;
performing template matching on the extracted pictures, and updating a template library according to the matching results;
controlling the template library to output a matching result and, when the matching result is determined to meet a preset index condition, outputting the video frame image corresponding to the matching result.
Preferably, the step of performing target feature extraction on the picture of each frame of the current video comprises:
converting the picture into a grayscale image, and extracting gradient features in the horizontal and vertical directions respectively;
taking the absolute value of the gradient value at each pixel of the picture to convert the grayscale image into a picture gradient map, and applying Gaussian filtering and normalization to the picture gradient map in sequence;
extracting the gradient features of the template image in the horizontal and vertical directions respectively, and taking the absolute value of the gradient value at each pixel of the template image;
converting the template image into a template gradient map, and applying Gaussian filtering to the template gradient map;
applying a dilation operation to the resulting template gradient map, followed by normalization and thresholding.
Preferably, for the first frame image, the step of performing template matching on the extracted picture comprises:
controlling the target box to perform target selection, and segmenting the picture according to the bounding box;
generating a monitoring standard template, and dynamically adjusting the size of the search box according to the size of the target.
Preferably, for the Nth frame image of the current video, the step of performing template matching on the extracted picture comprises:
comparing the Nth frame image with the (N-1)th frame image to obtain an averaged image for the current frame;
computing a matching degree from the averaged image to obtain a matching value, and comparing the matching value with a matching threshold;
matching the processed averaged image with the (N-1)th frame image;
when the match succeeds, outputting the matching result; when the match fails, continuing to match with the template library.
Preferably, for the Nth frame image of the current video, the step of performing template matching on the extracted picture comprises:
matching the Nth frame image with the template library to obtain the detection and tracking result image of the current frame;
updating the template library according to the detection and tracking result image, and matching with the updated template library;
when the match succeeds, judging the running state from the time state of each template saved in the template library; when the match fails, expanding the search range linearly.
Preferably, the step of performing template matching on the extracted picture comprises:
transforming the tracking box into a search box, and calling an API to perform the matching;
correcting the minimum and maximum positions, updating the tracking box, and passing the tracking box into the template library as a template so that it serves as a correlation template for the next match;
matching with the template library, judging the motion state of the current target, and changing the size of the search box according to the motion state.
Preferably, the step of performing template matching on the extracted picture comprises:
when the current target is determined to be lost, judging whether the capture device has zoomed;
when the capture device is determined to have zoomed, passing in the magnification parameters and judging the cause to be image blur;
when the capture device is determined not to have zoomed, judging the cause to be a decoy.
Preferably, after the step of judging the cause to be a decoy, the method further comprises:
converting the area target into a point target, entering the point target tracking function, and predicting the trajectory so as to locate the target position.
Compared with the related art, the image recognition tracking processing method provided by the present invention has the following beneficial effects: by performing target feature extraction, performing template matching, updating the template library and applying the judgment of the preset index condition, the precision of target detection is improved, the detection time is shortened, the accuracy and efficiency of recognition are improved, and the accuracy and robustness of tracking the given target are further ensured.
Another object of the embodiments of the present invention is to provide an image recognition tracking processing system, comprising:
a frame splitting module, configured to obtain a current video and split the current video into frames;
a feature extraction module, configured to select a target box on the first frame image to set it as a feature template, and to perform target feature extraction on the picture of each frame of the current video;
a template updating module, configured to perform template matching on the extracted pictures and update a template library according to the matching results;
an image output module, configured to control the template library to output a matching result and, when the matching result is determined to meet a preset index condition, output the video frame image corresponding to the matching result.
Preferably, the feature extraction module is further configured to:
convert the picture into a grayscale image, and extract gradient features in the horizontal and vertical directions respectively;
take the absolute value of the gradient value at each pixel of the picture to convert the grayscale image into a picture gradient map, and apply Gaussian filtering and normalization to the picture gradient map in sequence;
extract the gradient features of the template image in the horizontal and vertical directions respectively, and take the absolute value of the gradient value at each pixel of the template image;
convert the template image into a template gradient map, and apply Gaussian filtering to the template gradient map;
apply a dilation operation to the resulting template gradient map, followed by normalization and thresholding.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the image recognition tracking processing method provided by the first embodiment of the present invention;
Fig. 2 is a schematic flowchart of the image recognition tracking processing method provided by the second embodiment of the present invention;
Fig. 3 is a schematic structural diagram of the image recognition tracking processing system provided by the third embodiment of the present invention.
Specific embodiment
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to illustrate the present invention and are not intended to limit it.
Embodiment one
Referring to Fig. 1, which is a schematic flowchart of the image recognition tracking processing method provided by the first embodiment of the present invention, the method comprises the steps of:
Step S10: obtaining a current video, and splitting the current video into frames;
Here, after the video stream is input from the camera device, it is first split into frames to obtain multiple video frames.
Step S20: selecting a target box on the first frame image to set it as a feature template, and performing target feature extraction on the picture of each frame of the current video;
Here, selecting the target box facilitates setting the template. The target feature extraction on each frame's picture can be performed in two ways, manually or automatically; template matching is then performed on the processed pictures.
Step S30: performing template matching on the extracted pictures, and updating the template library according to the matching results;
Here, the template library is maintained over the same database connection, and a new template is generated after each completed match.
Step S40: controlling the template library to output a matching result and, when the matching result is determined to meet a preset index condition, outputting the video frame image corresponding to the matching result;
Here, if the result meets the index, this frame of the video is output; if not, the monitoring template is invoked to perform target detection. If the index is subsequently judged to be met, this frame can likewise be output; if not, the target is asserted to have been lost and the program stops.
Preferably, the program also stops once the video transmission finishes. In this embodiment, "video" is a generalized concept: it may refer to continuously input static images, or to the video pictures captured by a video file or a camera. A sketch of this overall flow is given below.
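By way of illustration and not limitation, the following is a minimal sketch of the Step S10-S40 flow, assuming OpenCV (cv2) as the underlying library. The 0.8 index threshold, the TM_CCOEFF_NORMED measure and the single-list template library are illustrative assumptions, not values given in this description.

```python
# Minimal sketch of the Step S10-S40 flow, assuming OpenCV (cv2).
# The 0.8 index threshold and the list-based template library are illustrative
# simplifications, not values taken from the description.
import cv2

cap = cv2.VideoCapture("input.mp4")                    # S10: obtain current video
ok, first = cap.read()
x, y, w, h = cv2.selectROI("select target", first)     # S20: target box on first frame (interactive)
template_library = [cv2.cvtColor(first[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)]

while True:
    ok, frame = cap.read()                             # frame-by-frame processing
    if not ok:
        break                                          # video finished -> program stops
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # S30: match against every template in the library, keep the best score
    best_val, best_loc, best_tpl = -1.0, None, None
    for tpl in template_library:
        res = cv2.matchTemplate(gray, tpl, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(res)
        if max_val > best_val:
            best_val, best_loc, best_tpl = max_val, max_loc, tpl

    if best_val >= 0.8:                                # S40: preset index condition met
        th, tw = best_tpl.shape
        template_library.append(gray[best_loc[1]:best_loc[1]+th,
                                     best_loc[0]:best_loc[0]+tw])  # update library
        cv2.imwrite("matched_frame.png", frame)        # output the matching frame
```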
In this embodiment, by performing target feature extraction, performing template matching, updating the template library and applying the judgment of the preset index condition, the precision of target detection is improved, the detection time is shortened, the accuracy and efficiency of recognition are improved, and the accuracy and robustness of tracking the given target are further ensured.
Embodiment two
Referring to Fig. 2, which is a schematic flowchart of the image recognition tracking processing method provided by the second embodiment of the present invention, the method comprises the steps of:
Step S11: obtaining a current video, and splitting the current video into frames;
Here, after the video stream is input from the camera device, it is first split into frames to obtain multiple video frames.
Step S21: selecting a target box on the first frame image to set it as a feature template, converting the picture of each frame of the current video into a grayscale image, and extracting gradient features in the horizontal and vertical directions respectively;
Here, the gradient features are extracted with a small kernel; in this step, the gradient features of the image are extracted with the Sobel operator.
Step S31: taking the absolute value of the gradient value at each pixel of the picture to convert the grayscale image into a picture gradient map, and applying Gaussian filtering and normalization to the picture gradient map in sequence;
Here, applying Gaussian filtering and normalization effectively removes the influence of noise in the picture gradient map.
Step S41: extracting the gradient features of the template image in the horizontal and vertical directions respectively, and taking the absolute value of the gradient value at each pixel of the template image;
Step S51: converting the template image into a template gradient map, applying Gaussian filtering to the template gradient map, applying a dilation operation to the resulting template gradient map, and then performing normalization and thresholding;
Here, the threshold value needs to be adjusted manually. The purpose of this step is to extract the features of the template more thoroughly and completely, strengthening the template's features while moderately weakening the features of the full image, so as to better locate the target's position in the picture. A sketch of this gradient-feature pipeline follows.
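By way of example, the following non-limiting sketch implements the gradient-feature pipeline of steps S21-S51, assuming OpenCV. The kernel sizes and the 0.3 threshold are illustrative assumptions, since the description only states that the threshold is adjusted manually.

```python
# Minimal sketch of the gradient-feature pipeline of steps S21-S51, assuming OpenCV.
# Kernel sizes and the 0.3 threshold are illustrative; the description only says
# the threshold is adjusted manually.
import cv2
import numpy as np

def picture_gradient_map(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)    # horizontal gradient (Sobel)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)    # vertical gradient (Sobel)
    grad = np.abs(gx) + np.abs(gy)                     # absolute gradient value per pixel
    grad = cv2.GaussianBlur(grad, (5, 5), 0)           # Gaussian filtering
    return cv2.normalize(grad, None, 0.0, 1.0, cv2.NORM_MINMAX)  # normalization

def template_gradient_map(template_bgr, thresh=0.3):
    grad = picture_gradient_map(template_bgr)
    grad = cv2.dilate(grad, np.ones((3, 3), np.uint8))             # dilation
    grad = cv2.normalize(grad, None, 0.0, 1.0, cv2.NORM_MINMAX)    # normalization
    _, grad = cv2.threshold(grad, thresh, 1.0, cv2.THRESH_BINARY)  # thresholding
    return grad
```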
Step S61: performing template matching on the extracted pictures, and updating the template library according to the matching results;
Here, for the first frame image, the step of performing template matching on the extracted picture comprises:
controlling the target box to perform target selection, and segmenting the picture according to the bounding box;
generating a monitoring standard template, and dynamically adjusting the size of the search box according to the size of the target, as sketched below.
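The following is a minimal sketch of this first-frame initialization, assuming OpenCV. The 1.5x search-box margin is an illustrative choice; the description only states that the search box is adjusted dynamically with the target size.

```python
# Minimal sketch of the first-frame initialization described above, assuming OpenCV.
# The 1.5x search-box margin is an illustrative assumption.
import cv2

def init_from_first_frame(first_bgr):
    x, y, w, h = cv2.selectROI("select target", first_bgr)   # target box selection
    gray = cv2.cvtColor(first_bgr, cv2.COLOR_BGR2GRAY)
    monitor_template = gray[y:y+h, x:x+w].copy()              # segment by bounding box
    search_w, search_h = int(1.5 * w), int(1.5 * h)           # search box scaled to target size
    return monitor_template, (x, y, w, h), (search_w, search_h)
```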
Preferably, for the Nth frame image of the current video, the step of performing template matching on the extracted picture comprises:
comparing the Nth frame image with the (N-1)th frame image to obtain an averaged image for the current frame;
computing a matching degree from the averaged image to obtain a matching value, and comparing the matching value with a matching threshold. In this step, the method used is thresholding, dilation, erosion and a bitwise AND of the image data, followed by dynamic correction and comparison of the matching degree against the threshold; the matching degree is measured from the intensity of the generated contour map;
matching the processed averaged image with the (N-1)th frame image;
when the match succeeds, outputting the matching result; when the match fails, continuing to match with the template library.
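As a non-limiting illustration of the frame-difference matching just described, the following sketch, assuming OpenCV, applies thresholding, dilation, erosion and a bitwise AND, and takes the matching value from the intensity of the resulting contour map. The 25 difference threshold and the mean-intensity formula are assumptions; the description does not give exact values.

```python
# Minimal sketch of the frame-difference matching step: threshold -> dilate -> erode
# -> bitwise AND, with the matching degree taken from the intensity of the contour map.
# The 25 threshold and the mean-intensity measure are illustrative assumptions.
import cv2
import numpy as np

def frame_difference_match(frame_n_gray, frame_n_minus_1_gray):
    avg = cv2.addWeighted(frame_n_gray, 0.5, frame_n_minus_1_gray, 0.5, 0)  # averaged image
    diff = cv2.absdiff(frame_n_gray, frame_n_minus_1_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)   # thresholding
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8))          # dilation
    mask = cv2.erode(mask, np.ones((3, 3), np.uint8))           # erosion
    contour_map = cv2.bitwise_and(avg, avg, mask=mask)          # bitwise AND
    count = cv2.countNonZero(mask)
    matching_value = float(contour_map.sum()) / max(count, 1)   # intensity-based matching degree
    return avg, matching_value
```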
In addition, in this embodiment, for the Nth frame image of the current video, the step of performing template matching on the extracted picture comprises:
matching the Nth frame image with the template library to obtain the detection and tracking result image of the current frame;
updating the template library according to the detection and tracking result image, and matching with the updated template library;
when the match succeeds, judging the running state from the time state of each template saved in the template library; when the match fails, expanding the search range linearly.
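The following is a minimal sketch of the linear search-range expansion applied on a failed match. The 0.2x growth per failed attempt and the frame-boundary clamping are illustrative assumptions; the description only states that the range grows linearly.

```python
# Minimal sketch of linear search-range expansion on a failed match.
# The 0.2x growth step per failure and the boundary clamp are illustrative assumptions.
def expand_search_box(search_box, target_size, failed_attempts, frame_shape):
    x, y, w, h = search_box
    tw, th = target_size
    grow_w = int(0.2 * tw * failed_attempts)      # linear growth with each failure
    grow_h = int(0.2 * th * failed_attempts)
    fh, fw = frame_shape[:2]
    x, y = max(0, x - grow_w), max(0, y - grow_h)
    w, h = min(fw - x, w + 2 * grow_w), min(fh - y, h + 2 * grow_h)
    return (x, y, w, h)
```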
Further, in this embodiment, the step of performing template matching on the extracted picture comprises:
transforming the tracking box into a search box, and calling an API to perform the matching;
correcting the minimum and maximum positions, updating the tracking box, and passing the tracking box into the template library as a template so that it serves as a correlation template for the next match;
matching with the template library, judging the motion state of the current target, and changing the size of the search box according to the motion state.
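As a non-limiting sketch of this step, the following assumes the "API" is OpenCV's cv2.matchTemplate. Restricting the match to the search window, correcting the positions and appending the new crop to the template library follow the text above; the TM_CCOEFF_NORMED measure is an assumption.

```python
# Minimal sketch of matching inside the search box and updating the tracking box,
# assuming cv2.matchTemplate as the matching API. TM_CCOEFF_NORMED is an assumption.
import cv2

def match_in_search_box(frame_gray, template, search_box, template_library):
    sx, sy, sw, sh = search_box
    window = frame_gray[sy:sy+sh, sx:sx+sw]                     # tracking box -> search box
    res = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)     # minimum / maximum positions
    th, tw = template.shape
    # correct positions back into full-frame coordinates and update the tracking box
    tracking_box = (sx + max_loc[0], sy + max_loc[1], tw, th)
    x, y, w, h = tracking_box
    template_library.append(frame_gray[y:y+h, x:x+w].copy())    # correlate template for next match
    return tracking_box, max_val
```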
Further, the step of performing template matching on the extracted picture comprises:
when the current target is determined to be lost, judging whether the capture device has zoomed;
when the capture device is determined to have zoomed, passing in the magnification parameters and judging the cause to be image blur;
when the capture device is determined not to have zoomed, judging the cause to be a decoy.
In this embodiment, after the step of judging the cause to be a decoy, the method further comprises:
converting the area target into a point target, entering the point target tracking function, and predicting the trajectory so as to locate the target position. That is, in this embodiment, when the target suddenly disappears in the image of a certain frame, the following algorithm is executed:
1. Loss judgment: the processed image fails to match exactly under both matching procedures, the tracking box with the template and the search box; the matching principle follows step S51.
2. After the loss is confirmed, a decision mechanism is entered. The method used in the present invention is a hardware prior: the device parameters are read, and if the camera device has zoomed, the magnification parameters are passed in and the cause is directly judged to be image blur; otherwise the cause is judged to be a decoy.
3. Second decision mechanism and solution: if the cause is determined to be a zoom problem, a delay is applied and the algorithm of the following block diagram is executed; if the cause is determined to be a decoy, the area target is converted into a point target, the point target tracking function point_detection(argv) is entered, and the trajectory is predicted to locate the target position.
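The following is a minimal sketch of the point-target conversion and trajectory prediction in item 3. The constant-velocity extrapolation over the last two centres is an illustrative stand-in for the point_detection(argv) routine, whose internals are not given in this description.

```python
# Minimal sketch of area-target -> point-target conversion and trajectory prediction.
# The constant-velocity extrapolation is an illustrative assumption standing in for
# the point_detection(argv) routine, which is not detailed here.
def to_point_target(tracking_box):
    x, y, w, h = tracking_box
    return (x + w / 2.0, y + h / 2.0)          # area target -> centre point target

def predict_next_position(recent_points):
    # recent_points: list of (cx, cy) centres from the most recent frames
    if len(recent_points) < 2:
        return recent_points[-1]
    (x1, y1), (x2, y2) = recent_points[-2], recent_points[-1]
    return (2 * x2 - x1, 2 * y2 - y1)          # constant-velocity extrapolation
```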
Step S71: controlling the template library to output a matching result and, when the matching result is determined to meet a preset index condition, outputting the video frame image corresponding to the matching result;
In addition, in this embodiment, the template database mainly involves three entity operations (a data-structure sketch follows):
(1) Creating templates: when the target is box-selected on the first frame, a template automatically enters the template library; this template is designed as several template Mat variables covering multiple gray values and multiple sizes. A new template can also be created by manual box selection at the Nth frame; the name is unified as Box.
(2) Updating templates: templates are divided into the monitoring template supBox and the tracking template trackBox. The tracking template is generated automatically at each match; the monitoring template has two sources: the copy template generated by manual box selection, and a template retrieved from the template library.
(3) Saving the search box size.
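By way of illustration, the following sketch arranges the three entities above in one record. The field names mirror the Box / supBox / trackBox names used in the text, but the layout itself is an assumption rather than the structure defined by this description.

```python
# Minimal sketch of a template-library record for the three entities described above.
# The dataclass layout is an illustrative assumption.
from dataclasses import dataclass, field
from typing import List, Tuple
import numpy as np

@dataclass
class TemplateLibrary:
    boxes: List[np.ndarray] = field(default_factory=list)   # created templates ("Box"), grayscale crops
    sup_box: np.ndarray = None                               # monitoring template (supBox)
    track_box: np.ndarray = None                             # tracking template (trackBox), refreshed each match
    search_box_size: Tuple[int, int] = (0, 0)                # saved search box size

    def update_after_match(self, new_crop: np.ndarray, search_size: Tuple[int, int]):
        self.track_box = new_crop                            # tracking template auto-generated per match
        self.boxes.append(new_crop)
        self.search_box_size = search_size
```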
This embodiment has the following advantages:
1. Highly faithful, precise features. This embodiment exploits more efficiently the ability of gradient operators and grayscale images to extract features; through multi-stage filtering, multi-stage edge extraction, and differentiated processing precision between template and original image, it ensures that the extracted features are more accurate and precise. This is the first innovative point of the present invention.
2. Localized, detailed processing. This embodiment abandons the detection methods of traditional image processing based on statistical laws. Using an idea similar to the SURF algorithm, it transforms the image data into other dimensions (gradient plus grayscale) for hierarchical processing, analyzes the details, scale and edges of the template, and constrains the image detection region with the template features. Such processing obtains better results than traditional template matching methods, which rely on the image adapting to the template. This is the second innovative point of the present invention.
3. A convergent, recoverable algorithm. In actual matching, this embodiment combines frame differencing with thresholding and makes full use of the correlation between image frames, which not only alleviates the problem that the feature extraction step may deviate from the target centre, but also, by detecting and drawing the target contour, allows the search range to keep converging to the target's true position. This algorithm also removes the influence of many unimportant noises, and its noise resistance is stronger than that of training directly with a convolutional neural network, so it naturally collects the target's precise position more accurately. This is the third innovative point of the present invention.
4. A simple processing flow with good program readability. On the actual test platform, this embodiment can quickly process each frame and collect the target position. The program is developed directly on OpenCV, which simplifies the data structures and enhances readability.
5. The problems of target loss and confusion are significantly reduced, and the anti-interference ability of the algorithm and system is improved. Through dynamically correcting the template library, adjusting the search range, and the algorithmic improvements above, in actual tests this embodiment reduces the probability of three situations: matching a decoy, being unable to relocate the original target after encountering an obstacle, and being unable to find the target under lens blur. This improves the quality and efficiency of image processing and reduces the losses caused by misrecognition.
6. Simple configuration. The program is designed with reading image data as the main function and feature extraction, matching, template generation and loss judgment as submodules, with parameters passed by pointer or reference, so the program structure is simple, natural and highly readable. The method can also be used as a preprocessing module before the deep learning test data module, improving the accuracy of network training.
Embodiment three
Referring to Fig. 3, which is a schematic structural diagram of the image recognition tracking processing system 100 provided by the third embodiment of the present invention, the system comprises a frame splitting module 10, a feature extraction module 11, a template updating module 12 and an image output module 13, in which:
the frame splitting module 10 is configured to obtain a current video and split the current video into frames;
the feature extraction module 11 is configured to select a target box on the first frame image to set it as a feature template, and to perform target feature extraction on the picture of each frame of the current video.
Here, the feature extraction module 11 is further configured to: convert the picture into a grayscale image, and extract gradient features in the horizontal and vertical directions respectively; take the absolute value of the gradient value at each pixel of the picture to convert the grayscale image into a picture gradient map, and apply Gaussian filtering and normalization to the picture gradient map in sequence; extract the gradient features of the template image in the horizontal and vertical directions respectively, and take the absolute value of the gradient value at each pixel of the template image; convert the template image into a template gradient map, and apply Gaussian filtering to the template gradient map; apply a dilation operation to the resulting template gradient map, followed by normalization and thresholding.
The template updating module 12 is configured to perform template matching on the extracted pictures and update the template library according to the matching results.
Here, the template updating module 12 is further configured to: control the target box to perform target selection, and segment the picture according to the bounding box; and generate a monitoring standard template and dynamically adjust the size of the search box according to the size of the target.
Preferably, the template updating module 12 is further configured to: compare the Nth frame image with the (N-1)th frame image to obtain an averaged image for the current frame; compute a matching degree from the averaged image to obtain a matching value, and compare the matching value with a matching threshold; match the processed averaged image with the (N-1)th frame image; and, when the match succeeds, output the matching result, and when the match fails, continue to match with the template library.
Further, the template updating module 12 is further configured to: match the Nth frame image with the template library to obtain the detection and tracking result image of the current frame; update the template library according to the detection and tracking result image and match with the updated template library; and, when the match succeeds, judge the running state from the time state of each template saved in the template library, and when the match fails, expand the search range linearly.
In addition, the template updating module 12 is further configured to: transform the tracking box into a search box and call an API to perform the matching; correct the minimum and maximum positions, update the tracking box, and pass the tracking box into the template library as a template so that it serves as a correlation template for the next match; and match with the template library, judge the motion state of the current target, and change the size of the search box according to the motion state.
Further, the template updating module 12 is further configured to: when the current target is determined to be lost, judge whether the capture device has zoomed; when the capture device is determined to have zoomed, pass in the magnification parameters and judge the cause to be image blur; and when the capture device is determined not to have zoomed, judge the cause to be a decoy.
In this embodiment, the template updating module 12 is further configured to convert the area target into a point target, enter the point target tracking function, and predict the trajectory so as to locate the target position.
The image output module 13 is configured to control the template library to output a matching result and, when the matching result is determined to meet a preset index condition, output the video frame image corresponding to the matching result.
In this embodiment, the advantages are the same as those described for the second embodiment: (1) highly faithful, precise features, obtained by exploiting gradient operators and grayscale images for feature extraction together with multi-stage filtering, multi-stage edge extraction, and differentiated processing precision between template and original image; (2) localized, detailed processing, which abandons detection based on statistical laws, transforms the image data into gradient and grayscale dimensions for hierarchical processing in the spirit of the SURF algorithm, analyzes the template's details, scale and edges, and constrains the detection region with the template features, outperforming traditional template matching that relies on the image adapting to the template; (3) a convergent, recoverable algorithm that combines frame differencing with thresholding, exploits inter-frame correlation, alleviates deviation from the target centre, keeps the search range converging to the target's true position, and suppresses unimportant noise better than training a convolutional neural network directly; (4) a simple processing flow with good program readability, developed directly on OpenCV with simplified data structures; (5) significantly fewer target losses and confusions and stronger anti-interference, since dynamically correcting the template library and adjusting the search range reduce the probability of matching a decoy, failing to relocate the original target after an obstacle, or losing the target under lens blur, thereby improving the quality and efficiency of image processing and reducing losses caused by misrecognition; and (6) simple configuration, with reading image data as the main function, feature extraction, matching, template generation and loss judgment as submodules, and parameters passed by pointer or reference, so the program structure is simple and highly readable; the method can also serve as a preprocessing module before the deep learning test data module to improve the accuracy of network training.
This embodiment also provides an image recognition tracking processing apparatus, comprising a storage device and a processor, wherein the storage device is configured to store a computer program, and the processor runs the computer program so that the image recognition tracking processing apparatus executes the image recognition tracking processing method described above.
This embodiment further provides a storage medium on which the computer program used in the above image recognition tracking processing apparatus is stored; when executed, the program comprises the following steps:
obtaining a current video, and splitting the current video into frames;
selecting a target box on the first frame image to set it as a feature template, and performing target feature extraction on the picture of each frame of the current video;
performing template matching on the extracted pictures, and updating a template library according to the matching results;
controlling the template library to output a matching result and, when the matching result is determined to meet a preset index condition, outputting the video frame image corresponding to the matching result. The storage medium may be, for example, a ROM/RAM, a magnetic disk, or an optical disc.
It is clear to those skilled in the art that, for convenience and brevity of description, only the division into the above functional units and modules is used as an example; in practical applications, the above functions may be allocated to different functional units or modules as required, that is, the internal structure of the storage device may be divided into different functional units or modules to complete all or part of the functions described above. The functional units in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from one another and are not intended to limit the protection scope of the present application.
It will be understood by those skilled in the art that the structure shown in Fig. 3 does not constitute a limitation of the image recognition tracking processing system of the present invention, which may include more or fewer components than illustrated, combine certain components, or use a different arrangement of components; likewise, the image recognition tracking processing method in Figs. 1-2 may be realized with more or fewer components than shown in Fig. 3, with certain components combined, or with a different arrangement of components. The units and modules referred to in the present invention are series of computer programs that can be executed by a processor (not shown) in the image recognition tracking processing system and can complete specific functions; they may be stored in the storage device (not shown) of the image recognition tracking processing system.
The above descriptions are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modifications, equivalent replacements and improvements made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. An image recognition tracking processing method, characterized in that the method comprises the steps of:
obtaining a current video, and splitting the current video into frames;
selecting a target box on the first frame image to set it as a feature template, and performing target feature extraction on the picture of each frame of the current video;
performing template matching on the extracted pictures, and updating a template library according to the matching results;
controlling the template library to output a matching result and, when the matching result is determined to meet a preset index condition, outputting the video frame image corresponding to the matching result.
2. The image recognition tracking processing method according to claim 1, characterized in that the step of performing target feature extraction on the picture of each frame of the current video comprises:
converting the picture into a grayscale image, and extracting gradient features in the horizontal and vertical directions respectively;
taking the absolute value of the gradient value at each pixel of the picture to convert the grayscale image into a picture gradient map, and applying Gaussian filtering and normalization to the picture gradient map in sequence;
extracting the gradient features of the template image in the horizontal and vertical directions respectively, and taking the absolute value of the gradient value at each pixel of the template image;
converting the template image into a template gradient map, and applying Gaussian filtering to the template gradient map;
applying a dilation operation to the resulting template gradient map, followed by normalization and thresholding.
3. The image recognition tracking processing method according to claim 1, characterized in that, for the first frame image, the step of performing template matching on the extracted picture comprises:
controlling the target box to perform target selection, and segmenting the picture according to the bounding box;
generating a monitoring standard template, and dynamically adjusting the size of the search box according to the size of the target.
4. The image recognition tracking processing method according to claim 1, characterized in that, for the Nth frame image of the current video, the step of performing template matching on the extracted picture comprises:
comparing the Nth frame image with the (N-1)th frame image to obtain an averaged image for the current frame;
computing a matching degree from the averaged image to obtain a matching value, and comparing the matching value with a matching threshold;
matching the processed averaged image with the (N-1)th frame image;
when the match succeeds, outputting the matching result; when the match fails, continuing to match with the template library.
5. The image recognition tracking processing method according to claim 1, characterized in that, for the Nth frame image of the current video, the step of performing template matching on the extracted picture comprises:
matching the Nth frame image with the template library to obtain the detection and tracking result image of the current frame;
updating the template library according to the detection and tracking result image, and matching with the updated template library;
when the match succeeds, judging the running state from the time state of each template saved in the template library; when the match fails, expanding the search range linearly.
6. The image recognition tracking processing method according to claim 1, characterized in that the step of performing template matching on the extracted picture comprises:
transforming the tracking box into a search box, and calling an API to perform the matching;
correcting the minimum and maximum positions, updating the tracking box, and passing the tracking box into the template library as a template so that it serves as a correlation template for the next match;
matching with the template library, judging the motion state of the current target, and changing the size of the search box according to the motion state.
7. The image recognition tracking processing method according to claim 1, characterized in that the step of performing template matching on the extracted picture comprises:
when the current target is determined to be lost, judging whether the capture device has zoomed;
when the capture device is determined to have zoomed, passing in the magnification parameters and judging the cause to be image blur;
when the capture device is determined not to have zoomed, judging the cause to be a decoy.
8. The image recognition tracking processing method according to claim 7, characterized in that, after the step of judging the cause to be a decoy, the method further comprises:
converting the area target into a point target, entering the point target tracking function, and predicting the trajectory so as to locate the target position.
9. An image recognition tracking processing system, characterized in that it comprises:
a frame splitting module, configured to obtain a current video and split the current video into frames;
a feature extraction module, configured to select a target box on the first frame image to set it as a feature template, and to perform target feature extraction on the picture of each frame of the current video;
a template updating module, configured to perform template matching on the extracted pictures and update a template library according to the matching results;
an image output module, configured to control the template library to output a matching result and, when the matching result is determined to meet a preset index condition, output the video frame image corresponding to the matching result.
10. The image recognition tracking processing system according to claim 9, characterized in that the feature extraction module is further configured to:
convert the picture into a grayscale image, and extract gradient features in the horizontal and vertical directions respectively;
take the absolute value of the gradient value at each pixel of the picture to convert the grayscale image into a picture gradient map, and apply Gaussian filtering and normalization to the picture gradient map in sequence;
extract the gradient features of the template image in the horizontal and vertical directions respectively, and take the absolute value of the gradient value at each pixel of the template image;
convert the template image into a template gradient map, and apply Gaussian filtering to the template gradient map;
apply a dilation operation to the resulting template gradient map, followed by normalization and thresholding.
CN201910774328.1A 2019-08-21 2019-08-21 Image recognition tracking processing method and system Pending CN110472608A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910774328.1A CN110472608A (en) 2019-08-21 2019-08-21 Image recognition tracking processing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910774328.1A CN110472608A (en) 2019-08-21 2019-08-21 Image recognition tracking processing method and system

Publications (1)

Publication Number Publication Date
CN110472608A true CN110472608A (en) 2019-11-19

Family

ID=68513262

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910774328.1A Pending CN110472608A (en) 2019-08-21 2019-08-21 Image recognition tracking processing method and system

Country Status (1)

Country Link
CN (1) CN110472608A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6795567B1 (en) * 1999-09-16 2004-09-21 Hewlett-Packard Development Company, L.P. Method for efficiently tracking object models in video sequences via dynamic ordering of features
US8477998B1 (en) * 2008-06-20 2013-07-02 Google Inc. Object tracking in video with visual constraints
CN103324920A (en) * 2013-06-27 2013-09-25 华南理工大学 Method for automatically identifying vehicle type based on vehicle frontal image and template matching
CN104658011A (en) * 2015-01-31 2015-05-27 北京理工大学 Intelligent transportation moving object detection tracking method
CN107798329A (en) * 2017-10-29 2018-03-13 北京工业大学 Adaptive particle filter method for tracking target based on CNN
CN108846854A (en) * 2018-05-07 2018-11-20 中国科学院声学研究所 A kind of wireless vehicle tracking based on motion prediction and multiple features fusion
CN110084829A (en) * 2019-03-12 2019-08-02 上海阅面网络科技有限公司 Method for tracking target, device, electronic equipment and computer readable storage medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111738053A (en) * 2020-04-15 2020-10-02 上海摩象网络科技有限公司 Tracking object determination method and device and handheld camera
WO2021208253A1 (en) * 2020-04-15 2021-10-21 上海摩象网络科技有限公司 Tracking object determination method and device, and handheld camera
CN111738053B (en) * 2020-04-15 2022-04-01 上海摩象网络科技有限公司 Tracking object determination method and device and handheld camera
CN111611854A (en) * 2020-04-16 2020-09-01 杭州电子科技大学 Classroom condition evaluation method based on pattern recognition
CN111611854B (en) * 2020-04-16 2023-09-01 杭州电子科技大学 Classroom condition evaluation method based on pattern recognition
CN112287906A (en) * 2020-12-18 2021-01-29 中汽创智科技有限公司 Template matching tracking method and system based on depth feature fusion
CN112287906B (en) * 2020-12-18 2021-04-09 中汽创智科技有限公司 Template matching tracking method and system based on depth feature fusion

Similar Documents

Publication Publication Date Title
CN111709311B (en) Pedestrian re-identification method based on multi-scale convolution feature fusion
CN111652317B (en) Super-parameter image segmentation method based on Bayes deep learning
CN112668483B (en) Single-target person tracking method integrating pedestrian re-identification and face detection
CN113763424B (en) Real-time intelligent target detection method and system based on embedded platform
CN110472608A (en) Image recognition tracking processing method and system
CN105893946A (en) Front face image detection method
CN107909081A (en) The quick obtaining and quick calibrating method of image data set in a kind of deep learning
CN114758288A (en) Power distribution network engineering safety control detection method and device
CN111931654A (en) Intelligent monitoring method, system and device for personnel tracking
CN107316029A (en) A kind of live body verification method and equipment
CN110991397B (en) Travel direction determining method and related equipment
CN112434599A (en) Pedestrian re-identification method based on random shielding recovery of noise channel
CN114049581A (en) Weak supervision behavior positioning method and device based on action fragment sequencing
CN112686122B (en) Human body and shadow detection method and device, electronic equipment and storage medium
CN111881775B (en) Real-time face recognition method and device
CN109493370A (en) A kind of method for tracking target based on spatial offset study
CN114708645A (en) Object identification device and object identification method
Harish et al. New features for webcam proctoring using python and opencv
CN116229052A (en) Method for detecting state change of substation equipment based on twin network
CN110197123A (en) A kind of human posture recognition method based on Mask R-CNN
CN112348011B (en) Vehicle damage assessment method and device and storage medium
CN113920168A (en) Image tracking method in audio and video control equipment
CN111310679A (en) Goose walking gait feature recognition method based on machine vision
CN113255549A (en) Intelligent recognition method and system for pennisseum hunting behavior state
CN117011335B (en) Multi-target tracking method and system based on self-adaptive double decoders

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20191119)