CN109492537A - Object identification method and device - Google Patents

Object identification method and device Download PDF

Info

Publication number
CN109492537A
CN109492537A (application CN201811206301.4A)
Authority
CN
China
Prior art keywords
sample
target
tracking
tracker
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811206301.4A
Other languages
Chinese (zh)
Other versions
CN109492537B (en
Inventor
魏承赟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin Feiyu Polytron Technologies Inc
Original Assignee
Guilin Feiyu Polytron Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin Feiyu Polytron Technologies Inc filed Critical Guilin Feiyu Polytron Technologies Inc
Priority to CN201811206301.4A priority Critical patent/CN109492537B/en
Publication of CN109492537A publication Critical patent/CN109492537A/en
Application granted granted Critical
Publication of CN109492537B publication Critical patent/CN109492537B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The present invention provides an object identification method and device. The method comprises: step S1, extracting samples of the tracking target from the initial frame picture, training the tracker, and storing the target features in a sample space; step S2, reading the current frame picture and judging whether the tracking target was lost in the previous frame; step S3, if lost, using the last-trained tracker to process samples at the previous-frame target position and several surrounding positions to obtain score maps; step S4, if not lost, extracting a current-frame sample at the position of the previous-frame tracking target and evaluating it with the last-trained tracker to obtain a score map; step S5, performing score evaluation on the score map of each sampled position and judging whether the score map is satisfactory; step S6, if satisfactory, updating the sample weights, updating the target position, performing target scale prediction, and updating the scale ratio; step S7, weighting the new sample by its sample weight, updating it into the sample space, and training the tracker at a set frame-number interval.

Description

Object identification method and device
Technical field
The present invention relates to the technical field of computer vision, and more particularly to an optimized object identification method and device.
Background art
Since the beginning of the 21st century, with the rapid development of Internet technology and the popularization of mobile phones, cameras and PCs, image data has grown explosively. On the other hand, driven by the needs of safe-city construction, the number of surveillance cameras keeps increasing: according to incomplete statistics, Guangzhou alone has more than 300,000 surveillance cameras, the nationwide count exceeds 20 million, and it is still growing by 20% per year. Data on such a scale far exceeds human analysis and processing capacity, so intelligent processing of these images and videos has become an urgent need. Against this background, how to use computer vision technology to analyze and understand image data automatically and intelligently has attracted more and more attention.
Object identification is a classical problem of computer vision tasks and at the same time the core of many higher-level visual tasks; research on object identification lays the foundation for solving higher-level visual tasks (such as activity recognition and scene understanding). It is widely applied in daily life and industrial production, for example in intelligent video surveillance, driver assistance, intelligent transportation, Internet image retrieval, virtual reality and human-computer interaction.
In recent decades, with the success of large numbers of statistical machine learning algorithms in artificial intelligence and computer vision, computer vision technology has advanced by leaps and bounds. In recent years in particular, the arrival of the big-data era has supplied visual tasks with abundant image data, the development of high-performance computing devices has provided hardware support for big-data computation, and large numbers of computer vision algorithms keep emerging. However, although many techniques and algorithms have appeared, and the robustness, correctness, efficiency and applicability of object identification methods have improved greatly compared with before, some difficulties and obstacles remain. Existing object recognition algorithms mainly have the following defects:
1. scale tracking is too slow;
2. there is no loss-recovery function: once the target is lost, it cannot be tracked again;
3. only short-term tracking is possible, which does not satisfy all application scenarios.
Summary of the invention
In order to overcome the deficiencies of the above existing technologies, one object of the present invention is to provide an object identification method and device that continue tracking after the target is lost.
Another object of the present invention is to provide an object identification method and device that accelerate scale tracking.
A further object of the present invention is to provide an object identification method and device that achieve long-term tracking.
In view of the above and other objects, the present invention proposes an object identification method comprising the following steps:
Step S1: extracting samples of the tracking target from the initial frame picture, training the tracker, and storing the target features in a sample space;
Step S2: reading the current frame picture, and judging whether the tracking target was lost in the previous frame;
Step S3: if the judging result is that the tracking target was lost, extracting picture samples at the position where the target was lost in the previous frame and at several surrounding positions using the last-trained tracker, obtaining the score map of each position, and proceeding to step S5;
Step S4: if the judging result is that the tracking target was not lost, extracting a current-frame picture sample at the position of the previous-frame target, and evaluating the sample with the last-trained tracker to obtain a score map;
Step S5: performing score evaluation on the sample of each position and judging whether the score map is satisfactory; according to the judging result, proceeding to step S6, or entering the next frame and returning to step S2;
Step S6: updating the sample weights, updating the target position, performing target scale prediction, and updating the scale ratio;
Step S7: weighting the new sample by its sample weight, updating it into the sample space, and training the tracker at a set frame-number interval.
Preferably, step S1 further comprises:
Step S100: obtaining the position and size information of the tracking target in the initial frame picture;
Step S101: extracting the HOG features and CN features of the tracking target area, and preprocessing the extracted target features;
Step S102: training the tracker and the dimension-reduction matrix according to the preprocessed target features, and performing dimension reduction on the target features;
Step S103: storing the dimension-reduced target features in the sample space.
Preferably, in steps S3 and S4, the operation of extracting a sample comprises extracting the HOG features and CN features of the tracking target area, and preprocessing the extraction result.
Preferably, step S5 further comprises:
Step S500: evaluating the score map using the average peak-to-correlation energy, and obtaining an energy value;
Step S501: if the previous-frame target was not lost, the evaluation judges whether the energy value and the peak of the score map meet preset conditions, and whether the change of the energy value and the score-map peak relative to the previous frame meets preset conditions;
Step S502: if the previous-frame target was lost, the evaluation judges whether the energy value and the peak of the score map meet preset conditions;
Step S503: the ideal level of the evaluation result is divided into excellent, good, poor and very poor; when the final result is poor or better, step S6 is entered; when the final result is very poor, the tracking target of this frame is considered lost, the next frame is entered, and the method returns to step S2.
Preferably, in step S3, samples are extracted in order at the position of the tracking target in the previous-frame picture and at the eight surrounding positions: up, down, left, right, upper-left, lower-left, upper-right and lower-right.
Preferably, in step S3, each sample is compared with the previous-frame tracker to obtain the score map of each position's sample.
Preferably, step S6 further comprises:
Step S600: distributing sample weights according to the evaluation result in step S5;
Step S601: iteratively optimizing the score map using Newton's method to obtain the optimal score map, the position of the maximum value in the score map being the target position;
Step S603: performing target scale prediction using PCA dimension reduction.
Preferably, step S7 further comprises the following steps:
Step S700: weighting the sample of this frame;
Step S701: judging whether the sample space is filled with samples;
Step S702: if the sample space is full, storing the new sample into the sample space either by merging two old samples and inserting the new sample, or by fusing the new sample with an old sample;
Step S703: if the sample space is not full, directly placing the new sample after the old samples;
Step S704: training the tracker with the sample space according to a preset training interval.
Preferably, in step S702, if the sample space is full, the similarity between the new sample and every old sample in the sample space is calculated; if the similarity between the new sample and an old sample exceeds a certain threshold, the new sample is fused with that old sample; otherwise, the similarities among all old samples in the sample space are calculated, the two most similar old samples are fused, and the new sample is inserted at the freed position.
In order to achieve the above objects, the present invention also provides an object identification device, comprising:
an initial frame processing unit, for extracting samples of the tracking target from the initial frame picture, training the tracker, and storing the target features in a sample space;
a loss judging unit, for reading the current frame picture and judging whether the tracking target was lost in the previous frame;
a loss recovery unit, for, when the judging result of the loss judging unit is that the tracking target was lost, evaluating samples at the position of the previous-frame tracking target and at several surrounding positions using the last-trained tracker, and obtaining the score map of each position;
a current-frame target position acquiring unit, for, when the judging result of the loss judging unit is that the tracking target was not lost, extracting a current-frame picture sample according to the position of the tracking target in the previous-frame picture, and evaluating the sample with the last-trained tracker to obtain a score map;
a tracking result evaluation unit, for evaluating the score map and judging whether the target is lost;
a tracking result updating unit, for updating the sample weights, updating the target position, performing target scale prediction, and updating the target scale ratio;
a tracker training unit, for weighting the new sample by its sample weight, updating it into the sample space, and training the tracker with the sample space at a preset interval.
Preferably, the initial frame processing unit further comprises:
a tracking target acquiring unit, for obtaining the information of the initial frame picture, i.e. the position and size information of the tracking target in the initial frame picture;
a feature extraction unit, for extracting the HOG features and CN features of the tracking target, and preprocessing the extracted target features;
a training and dimension-reduction unit, for training the tracker and the dimension-reduction matrix according to the preprocessed target features, and performing dimension reduction on the target features;
a storage unit, for storing the dimension-reduced target features in the sample space.
Compared with the prior art, the object identification method and device of the present invention judge for each current frame picture whether the tracking target is lost, and perform loss-recovery processing when the tracking target is lost, so that tracking can continue when a lost target reappears; at the same time, the invention adds scale tracking, enabling tracking under zooming, and accelerates scale tracking.
Description of the drawings
Fig. 1 is a flow chart of the steps of the object identification method of the present invention;
Fig. 2 is a detailed flow chart of step S1 in the specific embodiment of the invention;
Fig. 3 is a detailed flow chart of step S3 in the specific embodiment of the invention;
Fig. 4 is a detailed flow chart of step S301 in the specific embodiment of the invention;
Fig. 5 is a system architecture diagram of the object identification device of the present invention;
Fig. 6 is a flow chart of the processing of a new frame picture in the specific embodiment of the invention.
Specific embodiment
The embodiments of the present invention are described below through specific examples with reference to the drawings; those skilled in the art can easily understand further advantages and effects of the invention from the contents disclosed in this specification. The present invention can also be implemented or applied through other different specific examples, and the details in this specification can be modified and changed in various ways based on different viewpoints and applications without departing from the spirit of the invention.
Fig. 1 is a flow chart of the steps of the object identification method of the present invention. As shown in Fig. 1, the object identification method of the present invention comprises the following steps:
Step S1: extracting samples of the tracking target from the initial frame picture, training the tracker, and storing the target features in a sample space. Here the tracking target is the moving object to be identified in the video image.
Specifically, as shown in Fig. 2, step S1 further comprises:
Step S100: obtaining the information of the initial frame picture, i.e. the position and size information of the tracking target in the initial frame picture, and initializing the tracker parameters;
Step S101: extracting the HOG (Histogram of Oriented Gradients) features and CN (Color Names) features of the tracking target, and preprocessing the extracted target features. Specifically, the preprocessing of this step comprises feature dimension reduction, applying a cosine window, DFT, interpolation, etc. It should be noted that the dimension-reduction matrix here is initialized by PCA and updated in step S102, i.e. another dimension reduction is performed before storing into the sample space.
Step S102: training the tracker and the dimension-reduction matrix according to the preprocessed target features, and performing dimension reduction on the target features;
Step S103: storing the dimension-reduced target features in the sample space.
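As an illustration only (not part of the patent text), the windowing-plus-DFT preprocessing of step S101 can be sketched as follows; the function name, the toy feature map, and the use of a Hann window as the cosine window are assumptions, and real HOG/CN features would come from a feature extractor:

```python
import numpy as np

def preprocess_features(feat_map):
    """Apply a 2-D cosine (Hann) window to a multi-channel feature map and
    take its per-channel DFT, mirroring the windowing + DFT preprocessing."""
    h, w, _ = feat_map.shape
    win = np.outer(np.hanning(h), np.hanning(w))   # 2-D cosine window
    windowed = feat_map * win[:, :, None]          # suppress boundary effects
    return np.fft.fft2(windowed, axes=(0, 1))      # frequency-domain features

# toy stand-in for the HOG/CN channels of the tracked region
feat = np.random.rand(32, 32, 10)
F = preprocess_features(feat)                      # complex, same shape
```

The cosine window tapers the patch to zero at its borders, which reduces the spectral leakage that the circular correlation in later steps would otherwise suffer from.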
Step S2: reading the current frame picture, and judging whether the tracking target was lost in the previous frame. In the specific embodiment of the invention, a target-loss flag update_flag is set for each frame to judge whether the tracking target is lost; specifically, the target-loss flag update_flag is initially true, and it is then updated in step S5 according to the result of the score evaluation.
Step S3: if the judging result is that the tracking target was lost, extracting picture samples at the position where the target was lost in the previous frame and at several surrounding positions using the last-trained tracker, obtaining the score map of each position, and going to step S5.
Specifically, as shown in Fig. 3, step S3 further comprises:
Step S300: extracting samples in order at the position of the tracking target in the previous-frame picture and at the eight surrounding positions (up, down, left, right, upper-left, lower-left, upper-right, lower-right). The operation of extracting a sample here comprises extracting the HOG features and CN features of the tracking target area and preprocessing the extraction result, the preprocessing again comprising feature dimension reduction, applying a cosine window, DFT, interpolation and similar operations;
Step S301: comparing each sample with the previous-frame tracker to obtain the score map of each position's sample. That is, the score map comes from the comparison (i.e. frequency-domain correlation) of the sample with the previous-frame tracker; there are as many score maps as there are samples, and the actual scores in the score map are degrees of correlation.
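A minimal sketch of the frequency-domain correlation of step S301 (illustrative only; representing the tracker by the DFT of a template is a simplification of a trained correlation filter):

```python
import numpy as np

def score_map(tracker_f, sample_f):
    """Correlate a sample with the tracker in the frequency domain: the
    inverse DFT of the channel-summed conjugate product is the score map,
    whose peak marks the most correlated position."""
    resp_f = np.sum(np.conj(tracker_f) * sample_f, axis=2)
    return np.real(np.fft.ifft2(resp_f))

patch = np.random.rand(16, 16, 3)
patch_f = np.fft.fft2(patch, axes=(0, 1))
# a sample identical to the template correlates best at zero shift
resp = score_map(patch_f, patch_f)
peak = np.unravel_index(np.argmax(resp), resp.shape)
```

Computing the correlation in the frequency domain turns a sliding-window comparison into an element-wise product, which is what makes per-frame evaluation cheap.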
Step S4: if the judging result is that the tracking target was not lost, extracting a current-frame picture sample at the position of the previous-frame target, and evaluating the sample with the last-trained tracker to obtain a score map. In the specific embodiment of the invention, the operation of extracting a sample in this step also comprises extracting the HOG features and CN features of the tracking target area and preprocessing the extraction result, the preprocessing again comprising feature dimension reduction, applying a cosine window, DFT, interpolation and similar operations. The score map likewise comes from the comparison of the sample with the previous-frame tracker, which is not repeated here.
Step S5: performing score evaluation on the score map of each position's sample, and judging whether the score map is satisfactory.
Specifically, step S5 further comprises:
Step S500: evaluating the score map using the average peak-to-correlation energy (APCE), and obtaining an energy value;
Step S501: if the previous-frame target was not lost, the evaluation judges whether the energy value and the peak of the score map meet preset conditions, and whether the change of the energy value and the score-map peak relative to the previous frame meets preset conditions;
Step S502: if the previous-frame target was lost, the evaluation judges whether the energy value and the peak of the score map meet preset conditions;
Step S503: entering step S6, or entering the next frame and returning to step S2, according to the ideal level of the judging result. In the specific embodiment of the invention, the ideal level of the evaluation result is divided into excellent, good, poor and very poor. When the final judging result is poor or better, step S6 is entered; when the final result is very poor, the tracking target of this frame is considered lost, the next frame is entered, the method returns to step S2, and the target-loss flag update_flag is updated to false, as shown in Fig. 4, where the higher, medium and lower confidence levels in Fig. 4 correspond to the ideal degrees excellent, good and poor.
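The APCE measure of step S500 has a standard form; the sketch below pairs it with a hypothetical four-level grading. The patent does not disclose its preset conditions, so `apce_thr` and `peak_thr` are illustrative assumptions:

```python
import numpy as np

def apce(score_map):
    """Average peak-to-correlation energy of a response map: high values
    indicate a sharp, confident peak; low values suggest the target is lost."""
    fmax, fmin = score_map.max(), score_map.min()
    return (fmax - fmin) ** 2 / np.mean((score_map - fmin) ** 2)

def ideal_level(apce_val, peak, apce_thr=20.0, peak_thr=0.25):
    """Map confidence to the patent's four grades (thresholds hypothetical)."""
    if apce_val > 2 * apce_thr and peak > 2 * peak_thr:
        return "excellent"
    if apce_val > apce_thr and peak > peak_thr:
        return "good"
    if apce_val > 0.5 * apce_thr:
        return "poor"
    return "very poor"   # treat as lost, re-detect in the next frame

sharp = np.zeros((8, 8)); sharp[4, 4] = 1.0       # one clean peak
noisy = np.random.default_rng(0).random((8, 8))   # no dominant peak
```

A sharp single-peak map concentrates its energy and scores far higher than a noisy multi-modal one, which is exactly the property used to decide whether the target was found.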
Step S6: updating the sample weights, updating the target position, performing target scale prediction, and updating the scale ratio. That is, if the judging result of step S5 is that the score is satisfactory (i.e. the ideal level is excellent, good or poor), the sample weights are updated, the target position is updated, target scale prediction is performed, and the scale ratio is updated.
Specifically, step S6 further comprises:
Step S600: distributing sample weights according to the evaluation result in step S5;
Step S601: iteratively optimizing the score map using Newton's method to obtain the optimal score map, the position of the maximum value in the score map being the target position;
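Step S601's Newton iteration can be illustrated as sub-pixel refinement of a score-map peak. This sketch (an illustration, not the patent's actual implementation) takes one Newton step on a local quadratic model built from central differences around the discrete peak:

```python
import numpy as np

def newton_refine(resp, py, px):
    """One Newton step x <- x - H^{-1} g at the discrete peak (py, px),
    with gradient g and Hessian H from central differences; for a locally
    quadratic score map this lands exactly on the continuous maximum."""
    f = lambda dy, dx: resp[py + dy, px + dx]
    g = np.array([(f(1, 0) - f(-1, 0)) / 2.0,
                  (f(0, 1) - f(0, -1)) / 2.0])
    hyy = f(1, 0) - 2 * f(0, 0) + f(-1, 0)
    hxx = f(0, 1) - 2 * f(0, 0) + f(0, -1)
    hxy = (f(1, 1) - f(1, -1) - f(-1, 1) + f(-1, -1)) / 4.0
    H = np.array([[hyy, hxy], [hxy, hxx]])
    dy, dx = -np.linalg.solve(H, g)
    return py + dy, px + dx

# synthetic quadratic score map with its true maximum at (4.3, 5.7)
ys, xs = np.mgrid[0:10, 0:12].astype(float)
resp = -((ys - 4.3) ** 2 + (xs - 5.7) ** 2)
py, px = np.unravel_index(np.argmax(resp), resp.shape)   # discrete peak
ry, rx = newton_refine(resp, py, px)
```

Because the score map is sampled on the feature grid, refining the peak this way recovers target positions between grid cells without evaluating extra samples.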
Step S603: performing target scale prediction using PCA dimension reduction. The present invention uses PCA dimension reduction, which greatly reduces the amount of data to be computed; at the same time, the scaling uses a frequency-domain interpolation method, which reduces the number of scale ratios that need to be computed and greatly improves the computing speed.
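A minimal sketch of the PCA dimension reduction used for scale prediction (illustrative only; the shapes and the SVD-based implementation are assumptions):

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto their top-k principal components, shrinking
    the feature dimension before scale estimation.
    X: (n_samples, n_features) -> (n_samples, k), plus the basis used."""
    Xc = X - X.mean(axis=0)                        # centre the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:k].T                               # (n_features, k), orthonormal
    return Xc @ basis, basis

X = np.random.rand(30, 100)   # e.g. 30 scale samples, 100-dim features each
Z, B = pca_reduce(X, k=8)
```

Keeping only the top components shrinks every scale sample from 100 values to 8 here, which is the "reduced data size" the text refers to.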
Step S7: weighting the new sample by its sample weight, updating it into the sample space, training the tracker at a set frame-number interval, and returning to step S2.
Specifically, step S7 further comprises:
Step S700: weighting the sample of this frame.
Step S701: judging whether the sample space is filled with samples; in the specific embodiment of the invention, whether the sample space is filled can be judged according to a preset sample space size.
Step S702: if the sample space is full, storing the new sample into the sample space either by merging two old samples and inserting the new sample, or by fusing the new sample with an old sample. Specifically, first, the similarity between the new sample and every old sample in the sample space is calculated; if the similarity with an old sample exceeds a certain threshold, the new sample is fused with that old sample; otherwise, the similarities among all old samples in the sample space are calculated, the two most similar old samples are fused, and the new sample is then inserted at the freed position.
Step S703: if the sample space is not full, directly placing the new sample after the old samples.
Step S704: training the tracker with the sample space according to a preset training interval.
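The sample-space update of steps S700 to S703 can be sketched as below. This is a simplification: cosine similarity and a weighted-average merge stand in for whatever similarity and fusion the implementation actually uses, and `capacity`/`merge_thr` are hypothetical parameters:

```python
import numpy as np

def update_sample_space(space, weights, new_sample, new_weight,
                        capacity=50, merge_thr=0.9):
    """Insert a weighted sample; when full, fuse it with a similar old
    sample, or fuse the two most similar old samples to free a slot."""
    def cos_sim(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    if len(space) < capacity:                       # room left: just append
        space.append(new_sample); weights.append(new_weight)
        return space, weights

    sims = [cos_sim(new_sample, s) for s in space]
    j = int(np.argmax(sims))
    if sims[j] > merge_thr:                         # close to an old sample
        w = weights[j] + new_weight                 # weighted-average fusion
        space[j] = (weights[j] * space[j] + new_weight * new_sample) / w
        weights[j] = w
    else:                                           # fuse two closest olds
        best, pair = -1.0, (0, 1)
        for a in range(len(space)):
            for b in range(a + 1, len(space)):
                s = cos_sim(space[a], space[b])
                if s > best:
                    best, pair = s, (a, b)
        a, b = pair
        w = weights[a] + weights[b]
        space[a] = (weights[a] * space[a] + weights[b] * space[b]) / w
        weights[a] = w
        space[b], weights[b] = new_sample, new_weight
    return space, weights

space = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
weights = [1.0, 1.0]
new = np.array([1.0, 0.1])     # very similar to the first old sample
space, weights = update_sample_space(space, weights, new, 1.0, capacity=2)
```

Fusing rather than evicting keeps the sample space bounded while preserving appearance history, which is what makes long-term tracking feasible with fixed memory.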
Fig. 5 is a system architecture diagram of the object identification device of the present invention. As shown in Fig. 5, the object identification device of the present invention comprises:
an initial frame processing unit 50, for extracting samples of the tracking target from the initial frame picture, training the tracker, and storing the target features in a sample space. Here the tracking target is the moving object to be identified in the video image.
Specifically, the initial frame processing unit 50 further comprises:
a tracking target acquiring unit, for obtaining the information of the initial frame picture, i.e. the position and size information of the tracking target in the initial frame picture;
a feature extraction unit, for extracting the HOG (Histogram of Oriented Gradients) features and CN features of the tracking target, and preprocessing the extracted target features, the preprocessing here comprising feature dimension reduction, applying a cosine window, DFT, interpolation, etc.;
a training and dimension-reduction unit, for training the tracker and the dimension-reduction matrix according to the preprocessed target features, and performing dimension reduction on the target features;
a storage unit, for storing the dimension-reduced target features in the sample space.
A loss judging unit 51, for reading the current frame picture and judging whether the tracking target was lost in the previous frame;
a loss recovery unit 52, for, when the judging result of the loss judging unit 51 is that the tracking target was lost, extracting picture samples at the position of the previous-frame tracking target and at several surrounding positions using the last-trained tracker, and obtaining the score map of each position.
Specifically, the loss recovery unit 52 further comprises:
an adjacent position sample extraction unit, for extracting samples in order at the position of the tracking target in the previous-frame picture and at the eight surrounding positions (up, down, left, right, upper-left, lower-left, upper-right, lower-right), and preprocessing the samples; the operation of extracting a sample here comprises extracting the HOG features and CN features of the tracking target area and preprocessing the extraction result, the preprocessing again comprising feature dimension reduction, applying a cosine window, DFT, interpolation and similar operations;
a score map acquiring unit, for comparing each sample with the previous-frame tracker to obtain the score map of each position's sample. That is, the score map comes from the comparison (i.e. frequency-domain correlation) of the sample with the previous-frame tracker; there are as many score maps as there are samples, and the actual scores in the score map are degrees of correlation.
A current-frame target position acquiring unit 53, for, when the judging result of the loss judging unit is that the tracking target was not lost, extracting a current-frame picture sample according to the position of the tracking target in the previous-frame picture, and evaluating the sample with the last-trained tracker to obtain a score map.
A tracking result evaluation unit 54, for performing score evaluation on the score map of each position's sample, and judging whether the score map is satisfactory.
The tracking result evaluation unit 54 is specifically used for:
evaluating the score map using the average peak-to-correlation energy (APCE), and obtaining an energy value;
if the previous-frame target was not lost, judging whether the energy value and the peak of the score map meet preset conditions, and whether the change of the energy value and the score-map peak relative to the previous frame meets preset conditions;
if the previous-frame target was lost, judging whether the energy value and the peak of the score map meet preset conditions;
starting the tracking result updating unit 55, or entering the next frame and returning to the loss judging unit 51, according to the ideal level of the judging result. In the specific embodiment of the invention, the ideal level of the evaluation result is divided into excellent, good, poor and very poor. When the final judging result is poor or better, the tracking result updating unit 55 is started; when the final result is very poor, the tracking target of this frame is considered lost, the next frame is entered, the method returns to the loss judging unit 51, and the target-loss flag update_flag is updated to false.
Tracking result updating unit 55 updates tracking position of object, carries out target scale ratio for updating sample weights Prediction, and update ratio.That is, if the judging result of tracking result assessment unit 54 is that score is ideal (i.e. ideal Degree is fabulous preferable or poor), then sample weights, more new target location are updated, carry out target scale scale prediction, and more New ratio.
Specifically, tracking result updating unit 55 further comprises:
Sample weights updating unit, for distributing sample weights according to the assessment result of tracking result assessment unit 54;
Tracking position of object updating unit, it is optimal to obtain for being iterated optimization to score chart using Newton method Score chart, the maximum value position in score chart are target position;
Scale prediction updating unit, for carrying out target scale scale prediction using PCA dimensionality reduction.The present invention is dropped using PCA Skill is tieed up, calculating size of data can be greatly reduced;Meanwhile the method that scaling uses frequency domain interpolation, it reduces calculative Ratio greatly improves calculating speed.
The tracker training unit 56 is for updating new samples, weighted by their sample weights, into the sample space, and training the tracker at a set frame interval.
The tracker training unit 56 is specifically configured to:
weight the sample of the current frame;
judge whether the sample space is full of samples; in the specific embodiment of the invention, this can be judged according to a preset sample space size;
if the sample space is full, decide whether the new sample is to be stored by merging old samples and inserting the new sample, or by fusing with an old sample. Specifically, first the similarity between the new sample and all old samples in the sample space is computed; if the similarity with an old sample exceeds a certain threshold, the new sample is fused with that old sample; otherwise, the similarity between all pairs of old samples in the sample space is computed, the two most similar samples are merged, and the new sample is then inserted into the freed position;
if the sample space is not full, simply append the new sample after the old samples;
train the tracker with the sample space according to the preset training interval.
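The sample-space maintenance described above can be sketched as below. The patent does not specify the similarity measure or the fusion rule, so negative Euclidean distance and weight-proportional averaging are assumed here; all names are illustrative.

```python
import numpy as np

def update_sample_space(samples, weights, new_sample, new_weight,
                        capacity, fuse_threshold):
    """Append while there is room; once full, either fuse the new
    sample with a sufficiently similar old one, or merge the two most
    similar old samples to free a slot for the new sample."""
    def similarity(a, b):
        # assumed measure: higher (less negative) = more similar
        return -float(np.linalg.norm(a - b))

    if len(samples) < capacity:              # space not full: just append
        samples.append(new_sample)
        weights.append(new_weight)
        return samples, weights

    # similarity of the new sample to every stored sample
    sims = [similarity(new_sample, s) for s in samples]
    best = int(np.argmax(sims))
    if sims[best] > fuse_threshold:          # close enough: fuse with old
        w = weights[best] + new_weight
        samples[best] = (weights[best] * samples[best]
                         + new_weight * new_sample) / w
        weights[best] = w
    else:                                    # merge the two closest old samples
        pairs = [(similarity(samples[i], samples[j]), i, j)
                 for i in range(len(samples))
                 for j in range(i + 1, len(samples))]
        _, i, j = max(pairs)
        w = weights[i] + weights[j]
        samples[i] = (weights[i] * samples[i] + weights[j] * samples[j]) / w
        weights[i] = w
        samples[j] = new_sample              # freed slot takes the new sample
        weights[j] = new_weight
    return samples, weights
```

This mirrors the two branches of the text: direct insertion while below capacity, fusion or merge once the preset capacity is reached.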
Fig. 6 is the processing flowchart for a new frame picture in the specific embodiment of the invention. The processing is as follows:
1. Read in the new frame picture.
2. Judge whether the tracked target was lost in the previous frame. In the specific embodiment of the invention, a target state flag, Update Flag, records whether the tracked target is lost: an Update Flag of 0 indicates that the tracked target is lost, and 1 indicates that it is not lost.
3. If Update Flag is 0, indicating that the tracked target is lost, extract samples at the position of the tracked target in the previous frame picture and at 8 positions around it, in order (up, down, left, right, upper-left, lower-left, upper-right, lower-right), preprocess the samples (including dimensionality reduction, applying a cosine window, FFT, and interpolation), and obtain the score map of each position.
4. Assess the 9 samples with the last-trained tracker, obtain the score of each sample position, and perform score assessment.
5. Use APCE to judge whether any sample satisfies the recovery condition.
6. If a sample satisfies the recovery condition, optimize the scores, take the position of the largest score as the position of the recovered tracked target, and go to step 10.
7. If Update Flag is 1, indicating that the tracked target is not lost, extract a sample at the previous frame's position, extract features from the sample, and preprocess it (including dimensionality reduction, applying a cosine window, FFT, and interpolation).
8. Assess the new sample with the last-trained tracker, obtain the score of each position, and then perform score assessment.
9. Optimize the scores and find the position of the largest score.
10. Update the position of the tracked target, perform target scale prediction, and update the scale ratio.
11. Update the new sample, weighted by its sample weight, into the sample space.
12. At regular intervals, train the tracker with the samples in the sample space.
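The APCE check of steps 4-6 can be sketched as follows. APCE (average peak-to-correlation energy) is commonly defined as |F_max − F_min|² divided by the mean squared deviation of the response from its minimum; the recovery threshold `apce_threshold` is an assumed parameter, since its value is not given in the text, and `try_recover` is a hypothetical name.

```python
import numpy as np

def apce(score_map: np.ndarray) -> float:
    """Average peak-to-correlation energy: high for a sharp, unimodal
    response (confident detection), lower for a multimodal one."""
    f_max = float(score_map.max())
    f_min = float(score_map.min())
    return (f_max - f_min) ** 2 / float(np.mean((score_map - f_min) ** 2))

def try_recover(score_maps, apce_threshold):
    """Steps 5-6: among the candidate-position score maps, pick the one
    with the highest peak among those passing the APCE condition."""
    best = None
    for i, smap in enumerate(score_maps):
        if apce(smap) >= apce_threshold:
            peak = float(smap.max())
            if best is None or peak > best[1]:
                best = (i, peak)
    return best  # (candidate index, peak score) or None
```

A single-peak map scores higher than a two-peak map of the same amplitude, which is why APCE can separate a genuine re-detection from background clutter.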
In conclusion a kind of object identification method of the present invention and device are by judging whether track target to present frame picture It loses, and lose when tracking target and losing and given processing for change, realize with losing, energy occurs again in target Enough continue the purpose of tracking, meanwhile, invention increases scale tracking, can zoom in and out tracking, and accelerate ratio tracking Speed, the experiment proved that, the ratio tracking velocity of existing object identification method is about 140ms/ frame, and pass through the present invention The ratio tracking velocity of object identification after optimization is 40ms/ frame, hence it is evident that greatly accelerates the speed of ratio tracking, and this hair The bright sample training tracker that can be then utilized in sample space at regular intervals, so that the present invention may be implemented to track for a long time Purpose.
The above-described embodiments merely illustrate the principles and effects of the present invention, and is not intended to limit the present invention.Any Without departing from the spirit and scope of the present invention, modifications and changes are made to the above embodiments by field technical staff.Therefore, The scope of the present invention, should be as listed in the claims.

Claims (10)

1. An object identification method, comprising the following steps:
step S1: extracting a sample of the tracked target from an initial frame picture and training a tracker, and storing the target features in a sample space;
step S2: reading the current frame picture, and judging whether the tracked target was lost in the previous frame;
step S3: if the judging result is that the tracked target is lost, using the last-trained tracker to extract picture samples at the position where the target was lost in the previous frame and at several positions around it, obtaining the score map of each position, and proceeding to step S5;
step S4: if the judging result is that the tracked target is not lost, extracting a picture sample of the current frame at the position of the previous frame's target, and assessing the sample with the last-trained tracker to obtain a score map;
step S5: performing score assessment on the sample of each position and judging whether the score map is acceptable; according to the judging result, either proceeding to step S6, or entering the next frame and returning to step S2;
step S6: updating the sample weights, updating the target position, performing target scale prediction, and updating the scale ratio;
step S7: updating the new sample, weighted by its sample weight, into the sample space, and training the tracker at a set frame interval.
2. The object identification method according to claim 1, wherein step S1 further comprises:
step S100: obtaining the position and size information of the tracked target in the initial frame picture;
step S101: extracting the HOG feature and CN feature of the tracked target region, and preprocessing the extracted target features;
step S102: training the tracker and a dimensionality reduction matrix from the preprocessed target features, and performing dimensionality reduction on the target features;
step S103: storing the dimensionality-reduced target features in the sample space.
3. The object identification method according to claim 1, wherein in steps S3 and S4, the sample extraction operation comprises extracting the HOG feature and CN feature of the tracked target region, and preprocessing the extraction result.
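The cosine-window-and-FFT preprocessing named in these claims can be sketched in NumPy as below. This is an illustrative sketch only: the dimensionality reduction and interpolation steps are omitted, and a Hann window stands in for the "cosine window" of the text.

```python
import numpy as np

def preprocess_features(feat: np.ndarray) -> np.ndarray:
    """Apply a cosine (Hann) window to suppress boundary effects,
    then transform the feature map to the frequency domain with a
    2-D FFT, as in correlation-filter trackers."""
    h, w = feat.shape[:2]
    win = np.outer(np.hanning(h), np.hanning(w))
    if feat.ndim == 3:              # broadcast one window over all channels
        win = win[:, :, None]
    return np.fft.fft2(feat * win, axes=(0, 1))
```

The windowed sample goes to zero at the patch border, so the circular correlation implied by the FFT does not wrap spurious edges into the response.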
4. The object identification method according to claim 1, wherein step S5 further comprises:
step S500: assessing the score map using the average peak-to-correlation energy (APCE) to obtain an energy value;
step S501: if the target was not lost in the previous frame, this assessment judges whether the change of the energy value and the score map peak relative to the previous frame meets a preset condition, and whether the energy value and the score map peak themselves meet a preset condition;
step S502: if the target was lost in the previous frame, this assessment judges whether the energy value and the score map peak meet a preset condition;
step S503: the quality level of the assessment result is divided into excellent, good, poor, and very poor; when the final result is poor or better, step S6 is entered; when the final result is very poor, the tracked target of this frame is considered lost, the next frame is entered, and the method returns to step S2.
5. The object identification method according to claim 1, wherein in step S3, samples are extracted at the position of the tracked target in the previous frame picture and at the up, down, left, right, upper-left, lower-left, upper-right, and lower-right positions around it, in order.
6. The object identification method according to claim 5, wherein in step S3, each sample is compared with the previous frame's tracker to obtain the score map of each position's sample.
7. The object identification method according to claim 1, wherein step S6 further comprises:
step S600: assigning sample weights according to the assessment result in step S5;
step S601: iteratively optimizing the score map using Newton's method to obtain the optimal score map, the position of whose maximum value is the target position;
step S603: performing target scale prediction using PCA dimensionality reduction.
8. The object identification method according to claim 1, wherein step S7 further comprises the following steps:
step S700: weighting the sample of the current frame;
step S701: judging whether the sample space is full of samples;
step S702: if the sample space is full, deciding whether the new sample is to be stored by merging old samples and inserting the new sample, or by fusing with an old sample;
step S703: if the sample space is not full, simply appending the new sample after the old samples;
step S704: training the tracker with the sample space according to the preset training interval.
9. The object identification method according to claim 8, wherein in step S702, if the sample space is full, the similarity between the new sample and all old samples in the sample space is computed; if the similarity between the new sample and an old sample exceeds a certain threshold, the new sample is fused with that old sample; otherwise, the similarity between all pairs of old samples in the sample space is computed, the two most similar samples are merged, and the new sample is then inserted into the freed position.
10. An object identification device, comprising:
an initial frame processing unit, for extracting a sample of the tracked target from an initial frame picture and training a tracker, and storing the target features in a sample space;
a loss judging unit, for reading the current frame picture and judging whether the tracked target was lost in the previous frame;
a loss recovery unit, for, when the judging result of the loss judging unit is that the tracked target is lost, using the last-trained tracker to assess samples at the position where the previous frame picture tracked the target and at several positions around it, and obtaining the score map of each position;
a current-frame target position acquiring unit, for, when the judging result of the loss judging unit is that the tracked target is not lost, extracting a sample of the current frame picture according to the position of the tracked target in the previous frame picture, and assessing the sample with the last-trained tracker to obtain a score map;
a tracking result assessment unit, for assessing the score map and judging whether the target is lost;
a tracking result updating unit, for updating the sample weights, updating the target tracking position, performing target scale prediction, and updating the target scale ratio;
a tracker training unit, for updating new samples, weighted by their sample weights, into the sample space, and training the tracker with the sample space at a preset interval.
CN201811206301.4A 2018-10-17 2018-10-17 Object identification method and device Active CN109492537B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811206301.4A CN109492537B (en) 2018-10-17 2018-10-17 Object identification method and device


Publications (2)

Publication Number Publication Date
CN109492537A true CN109492537A (en) 2019-03-19
CN109492537B CN109492537B (en) 2023-03-14

Family

ID=65691341


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110853076A (en) * 2019-11-08 2020-02-28 重庆市亿飞智联科技有限公司 Target tracking method, device, equipment and storage medium
CN110910422A (en) * 2019-11-13 2020-03-24 北京环境特性研究所 Target tracking method and device, electronic equipment and readable storage medium
CN112150460A (en) * 2020-10-16 2020-12-29 上海智臻智能网络科技股份有限公司 Detection method, detection system, device, and medium
CN112150460B (en) * 2020-10-16 2024-03-15 上海智臻智能网络科技股份有限公司 Detection method, detection system, device and medium
CN112200790A (en) * 2020-10-16 2021-01-08 鲸斛(上海)智能科技有限公司 Cloth defect detection method, device and medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080187173A1 (en) * 2007-02-02 2008-08-07 Samsung Electronics Co., Ltd. Method and apparatus for tracking video image
CN102930296A (en) * 2012-11-01 2013-02-13 长沙纳特微视网络科技有限公司 Image identifying method and device
CN104574445A (en) * 2015-01-23 2015-04-29 北京航空航天大学 Target tracking method and device
CN104899561A (en) * 2015-05-27 2015-09-09 华南理工大学 Parallelized human body behavior identification method
CN106372666A (en) * 2016-08-31 2017-02-01 同观科技(深圳)有限公司 Target identification method and device
CN106920248A (en) * 2017-01-19 2017-07-04 博康智能信息技术有限公司上海分公司 A kind of method for tracking target and device
CN107992791A (en) * 2017-10-13 2018-05-04 西安天和防务技术股份有限公司 Target following failure weight detecting method and device, storage medium, electronic equipment
CN108510521A (en) * 2018-02-27 2018-09-07 南京邮电大学 A kind of dimension self-adaption method for tracking target of multiple features fusion
CN108564008A (en) * 2018-03-28 2018-09-21 厦门瑞为信息技术有限公司 A kind of real-time pedestrian and method for detecting human face based on ZYNQ
CN108664930A (en) * 2018-05-11 2018-10-16 西安天和防务技术股份有限公司 A kind of intelligent multi-target detection tracking





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant