Summary of the invention
In view of this, the embodiments of the present application provide a target object recognition method and apparatus for a surveillance scene, which can find suspicious target objects in surveillance video through image processing techniques. This solves the prior-art problems of low processing efficiency and low accuracy when suspects are screened by manually reviewing surveillance video, thereby improving the processing efficiency and accuracy of surveillance video review.
In a first aspect, an embodiment of the present application provides a target object recognition method for a surveillance scene, comprising:
obtaining first feature information of a first object, the first feature information comprising a gait feature and/or a facial feature of the first object;
determining the first object in a surveillance video according to the first feature information;
determining, from the surveillance video, at least one second object within a preset distance range of the first object, and extracting second feature information of each second object respectively, the second feature information comprising a gait feature and/or a facial feature of the second object;
obtaining, according to the second feature information of each second object, the number of appearances or the appearance duration of each second object in the surveillance video within a set time range;
screening out second objects whose number of appearances is not less than a preset number, or whose appearance duration is not less than a preset duration threshold, and determining them as selected second objects;
matching the second feature information of the selected second objects with third feature information of each sample object included in a database, and determining a selected second object that matches successfully as the target object; wherein the third feature information comprises a gait feature and/or a facial feature of the sample object.
With reference to the first aspect, an embodiment of the present application provides a first possible implementation of the first aspect, wherein determining the first object in the surveillance video according to the first feature information comprises:
obtaining a facial feature of each object in the surveillance video;
for the facial feature of each object, detecting whether the facial feature of that object satisfies a face recognition condition;
matching the facial features of objects whose facial features satisfy the face recognition condition with the facial feature of the first object, and determining an object whose facial feature matches successfully as the first object;
extracting the gait features of objects whose facial features do not satisfy the face recognition condition, matching the extracted gait features with the gait feature of the first object, and determining an object whose gait feature matches successfully as the first object.
With reference to the first aspect, an embodiment of the present application provides a second possible implementation of the first aspect, wherein determining the first object in the surveillance video according to the first feature information comprises:
obtaining a facial feature of each object in the surveillance video;
matching the facial feature of each object with the facial feature of the first object, and determining an object whose facial feature matches successfully as the first object.
With reference to the first aspect, an embodiment of the present application provides a third possible implementation of the first aspect, wherein determining the first object in the surveillance video according to the first feature information comprises:
obtaining a gait feature of each object in the surveillance video;
matching the gait feature of each object with the gait feature of the first object, and determining an object whose gait feature matches successfully as the first object.
With reference to the first aspect, an embodiment of the present application provides a fourth possible implementation of the first aspect, wherein obtaining, according to the second feature information of each second object, the number of appearances of each second object in the surveillance video within the set time range comprises:
performing the following operations for each second object:
obtaining fourth feature information of each object in the surveillance video within the set time range;
matching the fourth feature information with the second feature information of the second object;
determining an object that matches successfully as the second object, and tracking the second object;
when the second object can no longer be tracked in the surveillance video, counting one appearance of the second object, so as to determine the number of appearances of the second object in the surveillance video.
With reference to the first aspect, an embodiment of the present application provides a fifth possible implementation of the first aspect, wherein obtaining, according to the second feature information of each second object, the number of appearances of each second object in the surveillance video within the set time range further comprises:
performing the following operations for each second object:
obtaining, in each of multiple video segments within the set time range of the surveillance video, fourth feature information of each object in that video segment;
matching the fourth feature information with the second feature information of the second object, and determining an object that matches successfully as the second object;
determining the number of appearances of the second object in the surveillance video by counting the number of video segments in which the second object appears.
With reference to the first aspect, an embodiment of the present application provides a sixth possible implementation of the first aspect, wherein obtaining, according to the second feature information of each second object, the appearance duration of each second object in the surveillance video within the set time range comprises:
performing the following operations for each second object:
obtaining fourth feature information of each object in the surveillance video within the set time range;
matching the fourth feature information with the second feature information of the second object;
determining an object that matches successfully as the second object, tracking the second object, and counting the number of image frames in which the second object continuously appears in the surveillance video;
determining, according to the number of image frames, the appearance duration of the second object in the surveillance video.
With reference to the first aspect, an embodiment of the present application provides a seventh possible implementation of the first aspect, further comprising:
inputting the gait feature of each selected second object into a pre-trained gait-based identity recognition model to predict a first identity of each selected second object, and obtaining a prediction probability of the first identity of each selected second object; and
inputting the facial feature of each selected second object into a pre-trained face-based identity recognition model to predict a second identity of each selected second object, and obtaining a prediction probability of the second identity of each selected second object;
calculating, according to a preset weight of the first identity and a preset weight of the second identity, a target prediction probability of each candidate identity for each selected second object;
screening out the target object from the selected second objects according to the target prediction probabilities of the candidate identities of each selected second object.
In a second aspect, an embodiment of the present application provides a target object recognition apparatus for a surveillance scene, comprising:
a first object information obtaining module, configured to obtain first feature information of a first object, the first feature information comprising a gait feature and/or a facial feature of the first object;
a first object determining module, configured to determine the first object in a surveillance video according to the first feature information;
a second object determining module, configured to determine, from the surveillance video, at least one second object within a preset distance range of the first object, and to extract second feature information of each second object respectively, the second feature information comprising a gait feature and/or a facial feature of the second object;
a second object appearance information obtaining module, configured to obtain, according to the second feature information of each second object, the number of appearances or the appearance duration of each second object in the surveillance video within a set time range;
a selected second object determining module, configured to screen out second objects whose number of appearances is not less than a preset number, or whose appearance duration is not less than a preset duration threshold, and to determine them as selected second objects;
a target object determining module, configured to match the second feature information of the selected second objects with third feature information of each sample object included in a database, and to determine a selected second object that matches successfully as the target object.
In conjunction with the second aspect, an embodiment of the present application provides a first possible implementation of the second aspect, further comprising another target object determining module;
the other target object determining module is specifically configured to determine the target object using the following steps:
inputting the gait feature of each selected second object into a pre-trained gait-based identity recognition model to predict a first identity of each selected second object, and obtaining a prediction probability of the first identity of each selected second object; and
inputting the facial feature of each selected second object into a pre-trained face-based identity recognition model to predict a second identity of each selected second object, and obtaining a prediction probability of the second identity of each selected second object;
calculating, according to a preset weight of the first identity and a preset weight of the second identity, a target prediction probability of each candidate identity for each selected second object;
screening out the target object from the selected second objects according to the target prediction probabilities of the candidate identities of each selected second object.
In the target object recognition method and apparatus for a surveillance scene provided by the embodiments of the present application, first feature information of a first object is obtained, and the first object is determined in a surveillance video according to the first feature information; at least one second object within a preset distance range of the first object is determined from the surveillance video, and second feature information of each second object is extracted respectively; according to the second feature information of each second object, the number of appearances or the appearance duration of each second object in the surveillance video within a set time range is obtained; second objects whose number of appearances is not less than a preset number, or whose appearance duration is not less than a preset duration threshold, are screened out and determined as selected second objects; finally, the second feature information of the selected second objects is matched with third feature information of each sample object included in a database, and a selected second object that matches successfully is determined as the target object. The embodiments of the present application can find suspicious target objects in surveillance video through image processing techniques, solving the prior-art problems of low processing efficiency and low accuracy when suspects are screened by manually reviewing surveillance video, thereby improving the processing efficiency and accuracy of surveillance video review.
To make the above objects, features, and advantages of the present application clearer and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. The components of the embodiments of the present application generally described and illustrated in the accompanying drawings may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present application provided in the accompanying drawings is not intended to limit the claimed scope of the present application, but merely represents selected embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those skilled in the art without creative work shall fall within the protection scope of the present application.
At present, screening suspects by manually reviewing surveillance video not only has low processing efficiency, but also often fails to determine suspects accurately. Based on this, the target object recognition method and apparatus for a surveillance scene provided by the present application can find suspicious target objects in surveillance video through image processing techniques, solving the prior-art problems of low processing efficiency and low accuracy when suspects are screened by manually reviewing surveillance video, thereby improving the processing efficiency and accuracy of surveillance video review.
To facilitate understanding of this embodiment, the target object recognition method for a surveillance scene disclosed in the embodiments of the present application is first described in detail.
As shown in Figure 1, the target object recognition method for a surveillance scene provided by an embodiment of the present application includes steps S101 to S106:
S101: obtain first feature information of a first object.
Here, the first feature information includes a gait feature and/or a facial feature of the first object.
In one possible application scenario, after a theft occurs in a certain area, an object in the surveillance video of that area may be selected as the first object; for example, the victim may be selected as the first object.
S102: determine the first object in the surveillance video according to the first feature information.
Here, since the goal is to find suspicious target objects in the surveillance video, that is, target objects that may have committed theft or other crimes against the first object, it is first necessary to judge whether each frame of the surveillance video contains the first object, and to determine the first object in each frame of the surveillance video.
Optionally, the surveillance video is the surveillance video of the place where the first object may have been stolen from, such as the surveillance video of some location in a subway station or at a bus stop.
In specific implementation, the embodiments of the present application determine the first object based on the following three modes:
Mode one:
As shown in Figure 2, an embodiment of the present application determines the first object in the following manner:
S201: obtain the facial feature of each object in the surveillance video.
In specific implementation, the facial feature of each object in each frame of the surveillance video is obtained.
Optionally, the human body region of each object in each frame to be processed is first determined; then the face region within the human body region is determined based on the geometric relationship between the face and the human body; finally, the image of the face region is obtained as the face image. In this way, facial feature recognition is not performed on the whole image; instead, the human body region is identified first, and the face region is then determined from the human body region, which shortens the time of facial feature recognition and improves recognition efficiency.
S202: for the facial feature of each object, detect whether the facial feature of that object satisfies the face recognition condition.
Here, because the face image of an object captured by the surveillance video may be blurry and thus unsuitable for facial feature recognition, it is necessary to detect whether the facial feature of each object satisfies the face recognition condition.
For example, the facial features include: left eye, right eye, nose, left mouth corner, and right mouth corner.
In specific implementation, optionally, the scores of these five facial features are obtained based on a fully convolutional network, and the sum of the scores is compared with a preset score threshold; if the sum of the scores is greater than or equal to the preset score threshold, the face recognition condition is satisfied; if the sum of the scores is less than the preset score threshold, the face recognition condition is not satisfied.
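As a rough sketch of this scoring check (the landmark names and the per-landmark confidence scores are assumed to come from a detector such as the fully convolutional network mentioned above; the names used here are illustrative, not prescribed by the embodiment):

```python
# Illustrative names for the five facial features named above.
LANDMARKS = ["left_eye", "right_eye", "nose", "left_mouth_corner", "right_mouth_corner"]

def meets_face_recognition_condition(scores: dict, score_threshold: float) -> bool:
    """Return True if the summed landmark scores reach the preset threshold."""
    total = sum(scores[name] for name in LANDMARKS)
    return total >= score_threshold
```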
S203: match the facial features of objects whose facial features satisfy the face recognition condition with the facial feature of the first object, and determine an object whose facial feature matches successfully as the first object.
In specific implementation, the similarity between the feature vector of the facial feature of each object that satisfies the face recognition condition and the feature vector of the facial feature of the first object is calculated; if the maximum similarity among the similarities obtained for all objects is greater than a preset similarity threshold, the match is considered successful, and the object corresponding to the maximum similarity is determined as the first object.
Optionally, when extracting the feature vector of a facial feature, a convolutional neural network, the scale-invariant feature transform (SIFT) algorithm, or the like may be used.
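A minimal sketch of this matching step, assuming the feature vectors have already been extracted and using cosine similarity as one possible metric (the embodiment does not mandate a particular similarity measure):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_first_object(candidates, first_vec, sim_threshold):
    """candidates: {object_id: feature_vector}. Return the id of the most
    similar candidate if its similarity exceeds the threshold, else None."""
    best_id, best_sim = None, -1.0
    for obj_id, vec in candidates.items():
        sim = cosine_similarity(vec, first_vec)
        if sim > best_sim:
            best_id, best_sim = obj_id, sim
    return best_id if best_sim > sim_threshold else None
```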
S204: extract the gait features of objects whose facial features do not satisfy the face recognition condition, match the extracted gait features with the gait feature of the first object, and determine an object whose gait feature matches successfully as the first object.
Here, the facial feature of an object in the surveillance video sometimes cannot be acquired, or the acquired facial feature does not satisfy the face recognition condition; in that case, the gait feature of the object can be acquired instead, and the first object can be determined in the surveillance video by matching the acquired gait feature with the gait feature of the first object. When matching gait features, multiple frames are needed to determine a gait feature. The embodiments of the present application determine the gait feature of an object in the following manner:
First, for the Nth frame, the gait feature of an object whose facial feature does not satisfy the face recognition condition is extracted from at least one associated image corresponding to the Nth frame.
Here, the associated images include, but are not limited to: the X consecutive frames immediately preceding the Nth frame in the surveillance video; the Y consecutive frames immediately following the Nth frame; or both the X consecutive frames immediately preceding and the Y consecutive frames immediately following the Nth frame.
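The set of associated frame indices can be sketched as follows; clipping to the valid frame range is an assumption, since the embodiment does not specify how boundaries of the video are handled:

```python
def associated_frames(n, x, y, total_frames):
    """Indices of the X frames before and Y frames after frame n,
    clipped to the valid range [0, total_frames)."""
    before = list(range(max(0, n - x), n))
    after = list(range(n + 1, min(total_frames, n + y + 1)))
    return before + after
```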
Optionally, when extracting gait features, a gait feature extraction algorithm based on the Hough transform, a gait feature extraction algorithm based on particle filter tracking, or the like may be used.
Then, the extracted gait feature is matched with the gait feature of the first object. For example, the similarity between the extracted gait feature and the gait feature of the first object is calculated; if the maximum similarity among the similarities obtained for all objects is greater than a preset similarity threshold, the match is considered successful, and the object corresponding to the maximum similarity is determined as the first object.
Mode two:
As shown in Figure 3, an embodiment of the present application determines the first object in the following manner:
S301: obtain the facial feature of each object in the surveillance video.
S302: match the facial feature of each object with the facial feature of the first object, and determine an object whose facial feature matches successfully as the first object.
Mode three:
As shown in Figure 4, an embodiment of the present application determines the first object in the following manner:
S401: obtain the gait feature of each object in the surveillance video.
S402: match the gait feature of each object with the gait feature of the first object, and determine an object whose gait feature matches successfully as the first object.
It should be noted that, in specific implementation, mode two and mode three only match the facial feature or gait feature of each object directly with the facial feature or gait feature of the first object, whereas mode one first matches the facial features of objects that satisfy the face recognition condition with the facial feature of the first object, and, when an object's facial feature does not satisfy the face recognition condition, matches that object's gait feature with the gait feature of the first object. It can be seen that mode one is a preferred implementation with higher precision, while the precision of mode two and mode three is lower.
After the first object has been determined in the surveillance video through the above steps, the following steps S103 to S106 are performed.
S103: determine, from the surveillance video, at least one second object within the preset distance range of the first object, and extract the second feature information of each second object respectively.
Specifically, the second feature information includes a gait feature and/or a facial feature of the second object.
In one possible implementation, since a suspect needs to appear near the victim when committing theft or another crime against the victim, at least one second object within the preset distance range of the first object needs to be determined from the surveillance video.
In specific implementation, this step is performed on each frame of the surveillance video in which the first object appears: at least one second object within the preset distance range of the first object is determined from the surveillance video, and the second feature information of each second object is extracted respectively.
S104: obtain, according to the second feature information of each second object, the number of appearances or the appearance duration of each second object in the surveillance video within the set time range.
If a person frequently appears in a place with relatively heavy foot traffic within a certain period of time, that person's behavior is more suspicious, and the person is likely to be a suspect. Therefore, by counting the number of appearances or the appearance duration of each second object in the surveillance video within the set time range, the second objects that appear more often, or for longer, can be screened out, narrowing the range in which the target object is sought.
It should be noted that, optionally, the surveillance video used here to determine the number of appearances or the appearance duration of the second object may be the surveillance video of the same place as in steps S102 and S103, or the surveillance video of a different place. For example, the surveillance video in steps S102 and S103 is mainly used to determine the first object and the second object, and is therefore determined by the location where the first object appears, whereas the surveillance video in step S104 is used to determine the number of appearances or the appearance duration of the second object in some place; this place may be, for example, a subway station entrance or some location in another subway station.
In specific implementation, the embodiments of the present application determine the number of appearances of the second object in the surveillance video in the following two modes:
Mode one:
As shown in Figure 5, the following operations are performed for each second object:
S501: obtain the fourth feature information of each object in the surveillance video within the set time range.
Specifically, the fourth feature information includes a gait feature and/or a facial feature of the object.
S502: match the fourth feature information with the second feature information of the second object.
S503: determine an object that matches successfully as the second object, and track the second object.
S504: when the second object can no longer be tracked in the surveillance video, count one appearance of the second object, so as to determine the number of appearances of the second object in the surveillance video.
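Counting by track interruption can be sketched as follows; this is a simplified model in which the per-frame matching and tracking results of S502 and S503 are reduced to a boolean presence sequence:

```python
def count_appearances(presence):
    """presence: per-frame booleans, True while the second object is tracked.
    Each maximal run of True frames (ended when the track is lost) counts
    as one appearance."""
    count = 0
    tracked = False
    for present in presence:
        if present and not tracked:
            count += 1  # a new appearance begins
        tracked = present
    return count
```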
Mode two:
As shown in Figure 6, the following operations are performed for each second object:
S601: in each of multiple video segments within the set time range of the surveillance video, obtain the fourth feature information of each object in that video segment.
Specifically, the fourth feature information includes a gait feature and/or a facial feature of the object.
S602: match the fourth feature information with the second feature information of the second object, and determine an object that matches successfully as the second object.
S603: determine the number of appearances of the second object in the surveillance video by counting the number of video segments in which the second object appears.
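A minimal sketch of the segment-based count, assuming each video segment has already been reduced (via the matching in S602) to the set of object identities found in it:

```python
def count_segment_appearances(segments, object_id):
    """segments: list of sets of object ids matched per video segment.
    The appearance count is the number of segments containing the object."""
    return sum(1 for seg in segments if object_id in seg)
```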
In specific implementation, as shown in Figure 7, the embodiments of the present application determine the appearance duration of the second object in the surveillance video in the following manner:
S701: obtain the fourth feature information of each object in the surveillance video within the set time range.
Specifically, the fourth feature information includes a gait feature and/or a facial feature of the object.
S702: match the fourth feature information with the second feature information of the second object.
S703: determine an object that matches successfully as the second object, track the second object, and count the number of image frames in which the second object continuously appears in the surveillance video.
S704: determine, according to the number of image frames, the appearance duration of the second object in the surveillance video.
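Converting the frame count of S703 into a duration can be sketched as follows, assuming a constant frame rate (the embodiment does not state how the conversion is performed):

```python
def appearance_duration_seconds(frame_count, fps):
    """Duration in seconds implied by a run of consecutive frames."""
    if fps <= 0:
        raise ValueError("fps must be positive")
    return frame_count / fps
```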
After the number of appearances or the appearance duration of the second object in the surveillance video has been obtained through the above steps, the target object recognition method for a surveillance scene provided by the embodiments of the present application further includes the following steps S105 and S106.
S105: screen out second objects whose number of appearances is not less than the preset number, or whose appearance duration is not less than the preset duration threshold, and determine them as selected second objects.
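The screening rule of S105 can be sketched as follows; the `stats` mapping is an assumed input that pairs each second object with the count and duration obtained in S104:

```python
def select_second_objects(stats, min_count, min_duration):
    """stats: {object_id: (appearance_count, appearance_duration)}.
    Keep objects meeting either threshold ("not less than")."""
    return {
        obj_id
        for obj_id, (count, duration) in stats.items()
        if count >= min_count or duration >= min_duration
    }
```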
S106: match the second feature information of the selected second objects with the third feature information of each sample object included in the database, and determine a selected second object that matches successfully as the target object.
Here, the third feature information includes a gait feature and/or a facial feature of the sample object.
Optionally, the sample objects included in the database include people with criminal records.
Optionally, as shown in Figure 8, the embodiments of the present application also provide another mode for determining the target object:
S801: input the gait feature of each selected second object into a pre-trained gait-based identity recognition model to predict the first identity of each selected second object, and obtain the prediction probability of the first identity of each selected second object.
Here, the training samples of the gait-based identity recognition model are the identity labels and gait features of multiple sample objects with criminal records. Through the gait-based identity recognition model, the prediction probability that a selected second object is each sample object with a criminal record can be obtained; optionally, the identity labels of the top several sample objects, ordered by prediction probability from largest to smallest, are chosen as the first identities of the selected second object.
For example, the first identities of a selected second object and their prediction probabilities are (identity A, prediction probability 1), (identity B, prediction probability 2), (identity C, prediction probability 3), and (identity D, prediction probability 4).
S802: input the facial feature of each selected second object into a pre-trained face-based identity recognition model to predict the second identity of each selected second object, and obtain the prediction probability of the second identity of each selected second object.
Here, the training samples of the face-based identity recognition model are the identity labels and facial features of multiple sample objects with criminal records. Through the face-based identity recognition model, the prediction probability that a selected second object is each sample object with a criminal record can be obtained; optionally, the identity labels of the top several sample objects, ordered by prediction probability from largest to smallest, are chosen as the second identities of the selected second object.
For example, the second identities of a selected second object and their prediction probabilities are (identity A, prediction probability 5), (identity B, prediction probability 6), (identity D, prediction probability 7), and (identity E, prediction probability 8).
S803: calculate, according to the preset weight of the first identity and the preset weight of the second identity, the target prediction probability of each candidate identity for each selected second object.
For example, if the weight of the first identity is a and the weight of the second identity is b, the target prediction probability that the selected second object is identity A is a × prediction probability 1 + b × prediction probability 5; by analogy, the target prediction probabilities of the candidate identities of each selected second object are obtained.
S804: according to the target prediction probabilities of the identities corresponding to each selected second object, screen out the target object from the selected second objects.
In a specific implementation, for each selected second object, the identity corresponding to the highest target prediction probability among the identities corresponding to that selected second object is determined as the target prediction identity of the selected second object; among all the selected second objects, those whose target prediction probability for the target prediction identity is greater than a preset target prediction probability threshold are determined as target objects.
For example, if the target prediction probability of identity A is the highest for a selected second object, the identity of the selected second object is identity A; if the target prediction probability of identity A is greater than the preset target prediction probability threshold, the selected second object is determined as a target object.
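Steps S803 and S804 can be sketched as follows. This is a minimal illustration, not the embodiment's implementation: the function name, the dictionary representation of per-identity probabilities, and the concrete numbers are all assumptions introduced for the example.

```python
def fuse_and_screen(gait_probs, face_probs, a, b, threshold):
    """Fuse gait (first identity) and face (second identity) prediction
    probabilities with weights a and b (S803), pick the target prediction
    identity, and apply the target prediction probability threshold (S804).
    Returns the predicted identity, or None if the object is not a target."""
    identities = set(gait_probs) | set(face_probs)
    # S803: target prediction probability per identity, weighted by a and b.
    fused = {i: a * gait_probs.get(i, 0.0) + b * face_probs.get(i, 0.0)
             for i in identities}
    # S804: the identity with the highest fused probability is the target
    # prediction identity; keep the object only if it clears the threshold.
    best = max(fused, key=fused.get)
    return best if fused[best] > threshold else None

# Example with assumed values: identity A fuses to 0.5*0.8 + 0.5*0.7 = 0.75,
# which exceeds the threshold 0.6, so this selected second object is a target.
result = fuse_and_screen(
    gait_probs={"A": 0.8, "B": 0.4},
    face_probs={"A": 0.7, "D": 0.5},
    a=0.5, b=0.5, threshold=0.6,
)
```

The same fusion applies unchanged when only one model produces a probability for some identity, since a missing entry contributes zero.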
Optionally, the facial recognition model and the gait recognition model provided by the embodiments of the present application may be obtained by training a neural network model or a support vector machine.
Optionally, after the target object has been determined, the embodiments of the present application may mark the target object in the monitoring video, so that when the monitoring video is checked manually, attention is focused on the marked target object, which narrows the range within which a suspect is searched for and improves processing efficiency and accuracy.
In the target object recognition method under a monitoring scene provided by the embodiments of the present application, the first feature information of a first object is obtained, and the first object is determined in the monitoring video according to the first feature information; at least one second object within a preset distance range of the first object is determined from the monitoring video, and the second feature information of each second object is extracted respectively; according to the second feature information of each second object, the number of appearances, or the appearance duration, of each second object in the monitoring video within a set time range is obtained; the second objects whose number of appearances is not less than a preset number, or whose appearance duration is not less than a preset duration threshold, are screened out and defined as selected second objects; finally, the second feature information of each selected second object is matched with the third feature information of each sample object included in a database, and a successfully matched selected second object is determined as a target object. The embodiments of the present application can find suspicious target objects in a monitoring video by image processing techniques, solving the prior-art problems of low processing efficiency and low accuracy when suspects are screened by manually checking the monitoring video, thereby improving the processing efficiency and accuracy when checking the monitoring video.
Based on the same inventive concept, an embodiment of the present application further provides a target object recognition apparatus under a monitoring scene corresponding to the target object recognition method under a monitoring scene. Since the principle by which the apparatus in the embodiments of the present application solves the problem is similar to that of the above-described target object recognition method under a monitoring scene, the implementation of the apparatus may refer to the implementation of the method, and repeated descriptions are omitted.
As shown in Figure 9, the target object recognition apparatus under a monitoring scene provided by the embodiment of the present application comprises:
a first object information obtaining module 91, configured to obtain first feature information of a first object, the first feature information including the gait characteristics and/or facial characteristics of the first object;
a first object determining module 92, configured to determine the first object in a monitoring video according to the first feature information;
a second object determining module 93, configured to determine, from the monitoring video, at least one second object within a preset distance range of the first object, and to extract the second feature information of each second object respectively, the second feature information including the gait characteristics and/or facial characteristics of the second object;
a second object appearance information obtaining module 94, configured to obtain, according to the second feature information of each second object, the number of appearances, or the appearance duration, of each second object in the monitoring video within a set time range;
a selected second object determining module 95, configured to screen out second objects whose number of appearances is not less than a preset number, or whose appearance duration is not less than a preset duration threshold, and define them as selected second objects; and
a target object determining module 96, configured to match the second feature information of each selected second object with the third feature information of each sample object included in a database, and determine a successfully matched selected second object as a target object.
Optionally, the target object recognition apparatus under a monitoring scene provided by the embodiment of the present application further includes: another target object determining module 97.
Optionally, the other target object determining module 97 is specifically configured to determine the target object using the following steps:
inputting the gait characteristics of each selected second object into a pre-trained gait recognition model to predict the first identity of each selected second object, and obtaining the prediction probability of the first identity of each selected second object; and
inputting the facial characteristics of each selected second object into a pre-trained facial recognition model to predict the second identity of each selected second object, and obtaining the prediction probability of the second identity of each selected second object;
calculating, according to the preset weight of the first identity and the preset weight of the second identity, the target prediction probability of each identity corresponding to each selected second object; and
screening out the target object from the selected second objects according to the target prediction probabilities of the identities corresponding to each selected second object.
Optionally, the first object determining module 92 is specifically configured to determine the first object in the following manner:
obtaining the facial characteristics of each object in the monitoring video;
for each object, detecting whether the facial characteristics of the object satisfy a face recognition condition;
matching the facial characteristics of the objects whose facial characteristics satisfy the face recognition condition with the facial characteristics of the first object, and determining an object whose facial characteristics are successfully matched as the first object; and
extracting the gait characteristics of the objects whose facial characteristics do not satisfy the face recognition condition, matching the extracted gait characteristics with the gait characteristics of the first object, and determining an object whose gait characteristics are successfully matched as the first object.
Optionally, the first object determining module 92 is further configured to determine the first object in the following manner:
obtaining the facial characteristics of each object in the monitoring video; and
matching the facial characteristics of each object with the facial characteristics of the first object, and determining an object whose facial characteristics are successfully matched as the first object.
Optionally, the first object determining module 92 is further configured to determine the first object in the following manner:
obtaining the gait characteristics of each object in the monitoring video; and
matching the gait characteristics of each object with the gait characteristics of the first object, and determining an object whose gait characteristics are successfully matched as the first object.
Optionally, the second object appearance information obtaining module 94 is specifically configured to obtain the number of appearances of each second object in the monitoring video within the set time range in the following manner:
for each second object, performing the following operations:
obtaining the fourth feature information of each object in the monitoring video within the set time range;
matching the fourth feature information with the second feature information of the second object;
determining a successfully matched object as the second object, and tracking the second object; and
when the second object can no longer be tracked in the monitoring video, counting one appearance of the second object, so as to determine the number of appearances of the second object in the monitoring video.
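The tracking-based counting above can be sketched as follows. This is a simplified illustration in which frame matching is reduced to a membership test; in the embodiment, matching would be done on fourth feature information by a real tracker.

```python
def count_appearances(frames, second_feature):
    """Count appearances of one second object: each time its track is lost
    from the monitoring video, one appearance is recorded."""
    count = 0
    tracking = False
    for frame_objects in frames:  # features visible in each frame, in order
        present = second_feature in frame_objects
        if present and not tracking:
            tracking = True       # the second object (re-)enters the video
        elif not present and tracking:
            tracking = False
            count += 1            # track lost: count one appearance
    if tracking:
        count += 1                # still visible at the end of the time range
    return count
```

So an object that enters, leaves, and re-enters the set time range is counted twice, matching the "count one appearance when tracking is lost" rule.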
Optionally, the second object appearance information obtaining module 94 is further configured to obtain the number of appearances of each second object in the monitoring video within the set time range in the following manner:
for each second object, performing the following operations:
for multiple video segments within the set time range of the monitoring video, obtaining the fourth feature information of each object in each video segment;
matching the fourth feature information with the second feature information of the second object, and determining a successfully matched object as the second object; and
determining the number of appearances of the second object in the monitoring video by counting the number of video segments in which the second object appears.
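The per-segment variant is simpler: the number of appearances is the number of video segments containing at least one successful match. A minimal sketch, where `matches` is an assumed feature-matching helper:

```python
def count_by_segments(segments, second_feature, matches):
    """Count the video segments (within the set time range) in which the
    second object's features match some object's fourth feature information."""
    return sum(
        1 for segment in segments
        if any(matches(obj_feature, second_feature) for obj_feature in segment)
    )
```

A segment in which the object appears in many frames still counts once, which is the key difference from the tracking-based variant.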
Optionally, the second object appearance information obtaining module 94 is specifically configured to obtain the appearance duration of each second object in the monitoring video within the set time range in the following manner:
for each second object, performing the following operations:
obtaining the fourth feature information of each object in the monitoring video within the set time range;
matching the fourth feature information with the second feature information of the second object;
determining a successfully matched object as the second object, tracking the second object, and counting the number of consecutive image frames in the monitoring video that contain the second object; and
determining the appearance duration of the second object in the monitoring video according to the number of image frames.
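One way to convert the counted image frames into a duration is to divide by the video frame rate. The sketch below assumes this conversion and a membership-test stand-in for feature matching; the 25 fps default is an assumption, not a value from the embodiment.

```python
def appearance_duration(frames, second_feature, fps=25.0):
    """Estimate the appearance duration (in seconds) of one second object:
    count the image frames containing it and divide by the frame rate."""
    total_frames = sum(1 for frame_objects in frames
                       if second_feature in frame_objects)
    return total_frames / fps
```

For a 25 fps monitoring video, 250 matched frames would correspond to an appearance duration of 10 seconds, which can then be compared against the preset duration threshold.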
When recognizing a target object, the target object recognition apparatus under a monitoring scene provided by the embodiments of the present application obtains the first feature information of a first object and determines the first object in the monitoring video according to the first feature information; determines, from the monitoring video, at least one second object within a preset distance range of the first object, and extracts the second feature information of each second object respectively; obtains, according to the second feature information of each second object, the number of appearances, or the appearance duration, of each second object in the monitoring video within a set time range; screens out the second objects whose number of appearances is not less than a preset number, or whose appearance duration is not less than a preset duration threshold, and defines them as selected second objects; and finally matches the second feature information of each selected second object with the third feature information of each sample object included in a database, determining a successfully matched selected second object as a target object. The embodiments of the present application can find suspicious target objects in a monitoring video by image processing techniques, solving the prior-art problems of low processing efficiency and low accuracy when suspects are screened by manually checking the monitoring video, thereby improving the processing efficiency and accuracy when checking the monitoring video.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the above-described target object recognition method under a monitoring scene are executed.
Specifically, the storage medium may be a general-purpose storage medium, such as a removable disk or a hard disk. When the computer program on the storage medium is run, the above-described target object recognition method under a monitoring scene can be executed, so that suspicious target objects in a monitoring video can be found by image processing techniques, solving the prior-art problems of low processing efficiency and low accuracy when suspects are screened by manually checking the monitoring video, thereby improving the processing efficiency and accuracy when checking the monitoring video.
Corresponding to the target object recognition method under a monitoring scene in Fig. 1, an embodiment of the present application further provides a computer device. As shown in Figure 10, the device includes a memory 1000, a processor 2000, and a computer program stored on the memory 1000 and runnable on the processor 2000, wherein the processor 2000, when executing the computer program, implements the steps of the above-described target object recognition method under a monitoring scene.
Specifically, the memory 1000 and the processor 2000 may be a general-purpose memory and processor, which are not specifically limited here. When the processor 2000 runs the computer program stored in the memory 1000, the above-described target object recognition method under a monitoring scene can be executed, so that suspicious target objects in a monitoring video can be found by image processing techniques, solving the prior-art problems of low processing efficiency and low accuracy when suspects are screened by manually checking the monitoring video, thereby improving the processing efficiency and accuracy when checking the monitoring video.
The computer program product of the target object recognition method and apparatus under a monitoring scene provided by the embodiments of the present application includes a computer-readable storage medium storing program code; the instructions included in the program code can be used to execute the method described in the foregoing method embodiments. For the specific implementation, reference may be made to the method embodiments, and details are not repeated here.
In all examples illustrated and described herein, any specific value should be construed as merely illustrative rather than limiting; therefore, other examples of the exemplary embodiments may have different values.
It is apparent to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems and apparatuses described above may refer to the corresponding processes in the foregoing method embodiments, and details are not repeated here. In the several embodiments provided in the present application, it should be understood that the disclosed apparatuses and methods may be implemented in other ways. The apparatus embodiments described above are merely schematic.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the embodiments described above are merely specific embodiments of the present application, intended to illustrate the technical solutions of the present application rather than to limit them, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art may, within the technical scope disclosed in the present application, still modify the technical solutions recorded in the foregoing embodiments, readily conceive of variations, or make equivalent replacements of some of the technical features; and such modifications, variations, or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be covered within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.