CN109508645A - Identity recognition method and device in a monitoring scene - Google Patents
- Publication number
- CN109508645A (application number CN201811223326.5A)
- Authority
- CN
- China
- Prior art keywords
- monitor video
- biological characteristic
- facial characteristics
- determined
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
- G06V40/25—Recognition of walking or running movements, e.g. gait recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/70—Multimodal biometrics, e.g. combining information from different biometric modalities
Abstract
The present application provides an identity recognition method and device in a monitoring scene, relating to the field of computer recognition technology. A first biological feature of a first object is acquired and used to locate the first object in a monitoring video; a target area is set in the monitoring video around the position of the first object, and a second biological feature of at least one second object is found within the target area. According to the second biological feature of each second object, the number of times, or the duration for which, each second object appears in the monitoring video within a set time range is obtained. Second objects whose appearance count is not less than a preset count, or whose appearance duration is not less than a preset duration threshold, are selected and determined to be target objects appearing in the monitoring video. By preliminarily selecting suspects according to how often second objects appear in the monitoring video, the step of preliminarily screening suspects by manually analysing massive video files is eliminated, which improves processing efficiency while allowing suspects to be determined accurately.
Description
Technical field
The present application relates to the field of computer recognition technology, and in particular to an identity recognition method and device in a monitoring scene.
Background art
At present, monitoring systems have achieved 24-hour surveillance of every street in a city's jurisdiction, and joint analysis of video footage and case files has become one of the public security system's most powerful means of finding evidence and solving cases. Today, searching for a specific target through a monitoring system is mainly done by manually analysing massive video files. However, the volume of monitoring video data is enormous: manual analysis is labour-intensive and inefficient, and may also fail to identify a suspect accurately.
Summary of the invention
In view of this, the purpose of the present application is to provide an identity recognition method and device in a monitoring scene, so as to improve the efficiency of processing monitoring video data while accurately determining suspects.
In a first aspect, an embodiment of the present application provides an identity recognition method in a monitoring scene, comprising:

acquiring a first biological feature of a first object, the first biological feature comprising a gait feature and/or a facial feature of the first object; determining the first object in a monitoring video according to the first biological feature;

determining a target area within the monitoring video according to the position of the first object; acquiring, from the monitoring video, a second biological feature of at least one second object appearing in the target area, the second biological feature comprising a gait feature and/or a facial feature of the second object; obtaining, according to the second biological feature of each second object, the number of times or the duration for which each second object appears in the monitoring video within a set time range; selecting, from the second objects, those whose appearance count is not less than a preset count or whose appearance duration is not less than a preset duration threshold; and determining the selected second objects to be target objects appearing in the monitoring video.
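The patent itself contains no code; as a rough illustration, the screening step of the first aspect — keep second objects whose appearance count or cumulative appearance duration reaches a preset threshold — might be sketched in Python as follows. The `Candidate` record, its field names and the threshold values are all hypothetical, chosen only for the example:

```python
from dataclasses import dataclass

# Hypothetical record of a second object seen in the target area.
@dataclass
class Candidate:
    obj_id: str
    appearance_count: int       # times seen in the set time range
    appearance_seconds: float   # cumulative appearance duration

def screen_targets(candidates, min_count=3, min_seconds=60.0):
    """Keep candidates whose appearance count is not less than the preset
    count, or whose appearance duration is not less than the preset
    duration threshold."""
    return [c.obj_id for c in candidates
            if c.appearance_count >= min_count
            or c.appearance_seconds >= min_seconds]

candidates = [
    Candidate("A", appearance_count=5, appearance_seconds=12.0),
    Candidate("B", appearance_count=1, appearance_seconds=90.0),
    Candidate("C", appearance_count=1, appearance_seconds=5.0),
]
targets = screen_targets(candidates)
```

Here "A" passes on count, "B" passes on duration, and "C" is screened out.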
With reference to the first aspect, an embodiment of the present application provides a first possible implementation of the first aspect, wherein determining the first object in the monitoring video according to the first biological feature comprises: acquiring the biological feature of each object in the monitoring video; and matching the biological feature of each object against the first biological feature of the first object to determine the position of the first object in the monitoring video.
With reference to the first possible implementation of the first aspect, an embodiment of the present application provides a second possible implementation of the first aspect. When the biological feature of each object includes both the facial feature and the gait feature of that object, matching the biological feature of each object against the first biological feature of the first object to determine the position of the first object in the monitoring video comprises: detecting, for each object, whether its facial feature meets a face recognition condition;

for objects whose facial feature meets the face recognition condition, matching that facial feature against the facial feature of the first object, and determining the position at which an object with a successfully matched facial feature appears in the monitoring video to be the position of the first object;

for objects whose facial feature does not meet the face recognition condition, extracting the gait feature, matching the extracted gait feature against the gait feature of the first object, and determining the position at which an object with a successfully matched gait feature appears in the monitoring video to be the position of the first object.
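The face-first matching with gait fallback just described can be sketched as follows. This is a minimal toy illustration: the scalar "features", the `face_quality` field and both thresholds stand in for real face/gait descriptors and learned similarity measures, none of which the patent specifies as code:

```python
def locate_first_object(objects, first_face, first_gait,
                        quality_threshold=0.8, match_tol=0.1):
    """Objects whose face image quality meets the recognition condition
    are matched by face; the rest fall back to gait matching. Returns
    the positions where the first object appears."""
    positions = []
    for obj in objects:
        if obj["face_quality"] >= quality_threshold:
            if abs(obj["face"] - first_face) < match_tol:
                positions.append(obj["pos"])
        elif abs(obj["gait"] - first_gait) < match_tol:
            positions.append(obj["pos"])
    return positions

observed = [
    {"pos": (10, 20), "face_quality": 0.9, "face": 0.52, "gait": 0.30},
    {"pos": (40, 25), "face_quality": 0.3, "face": 0.00, "gait": 0.71},
    {"pos": (70, 30), "face_quality": 0.9, "face": 0.91, "gait": 0.70},
]
hits = locate_first_object(observed, first_face=0.50, first_gait=0.70)
```

The first object is found by face at (10, 20) and, where the face is too poor to use, by gait at (40, 25); the third detection matches on neither modality.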
With reference to the first possible implementation of the first aspect, an embodiment of the present application provides a third possible implementation of the first aspect. When the biological feature of each object includes the facial feature of that object, matching the biological feature of each object against the first biological feature of the first object to determine the position of the first object in the monitoring video comprises: matching the facial feature of each object against the facial feature of the first object, and determining the position at which an object with a successfully matched facial feature appears in the monitoring video to be the position of the first object.
With reference to the first possible implementation of the first aspect, an embodiment of the present application provides a fourth possible implementation of the first aspect. When the biological feature of each object includes the gait feature of that object, matching the biological feature of each object against the first biological feature of the first object to determine the position of the first object in the monitoring video comprises: matching the gait feature of each object against the gait feature of the first object, and determining the position at which an object with a successfully matched gait feature appears in the monitoring video to be the position of the first object.
With reference to the first aspect, an embodiment of the present application provides a fifth possible implementation of the first aspect, wherein determining the target area within the monitoring video according to the position of the first object comprises: taking the position of the first object as the centre of a circle and a preset distance from that position as the radius, and determining the target area in the monitoring video accordingly.
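The circular target area — centre at the first object's position, preset distance as radius — reduces to a point-in-circle test. A minimal sketch, assuming positions are 2-D coordinates in a common image or ground plane (the patent does not fix the coordinate system):

```python
import math

def in_target_area(first_pos, preset_radius, candidate_pos):
    """True if candidate_pos lies within the circle centred on the first
    object's position with the preset distance as radius."""
    dx = candidate_pos[0] - first_pos[0]
    dy = candidate_pos[1] - first_pos[1]
    return math.hypot(dx, dy) <= preset_radius

inside = in_target_area((0.0, 0.0), 5.0, (3.0, 4.0))    # distance exactly 5
outside = in_target_area((0.0, 0.0), 5.0, (3.0, 4.5))   # distance > 5
```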
With reference to the first aspect, an embodiment of the present application provides a sixth possible implementation of the first aspect, wherein obtaining, according to the second biological feature of each second object, the number of times each second object appears in the monitoring video within the set time range comprises performing the following operations for each second object:

acquiring a third biological feature of each object in the monitoring video within the set time range; matching the third biological feature against the second biological feature; determining a successfully matched object to be the second object, and tracking the second object in the monitoring video; and, when the second object can no longer be tracked in the monitoring video, counting one appearance of the second object, so as to determine the number of times the second object appears in the monitoring video.
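The tracking-based count above — one appearance is recorded each time the track is lost — can be sketched as follows. The per-frame detection lists and the `match` predicate are stand-ins for a real detector and tracker, which the patent does not specify:

```python
def count_appearances(frame_detections, second_feature, match):
    """Count appearances by tracking: while the second object is matched
    in successive frames it is being tracked; each time the track is
    lost, one appearance is counted."""
    count = 0
    tracking = False
    for detections in frame_detections:
        present = any(match(d, second_feature) for d in detections)
        if present:
            tracking = True
        elif tracking:
            tracking = False
            count += 1      # track lost: one completed appearance
    if tracking:
        count += 1          # object still visible at the end of the window
    return count

# Toy per-frame detections over the set time range; "x" is the second object.
frames = [["x"], ["x"], [], [], ["x"], []]
n = count_appearances(frames, "x", lambda a, b: a == b)
```

The object appears in frames 0-1 and again in frame 4, so two appearances are counted.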
With reference to the first aspect, an embodiment of the present application provides a seventh possible implementation of the first aspect, wherein obtaining, according to the second biological feature of each second object, the number of times each second object appears in the monitoring video within the set time range further comprises performing the following operations for each second object: acquiring, in each of a plurality of video segments of the monitoring video within the set time range, a third biological feature of each object in that segment; matching the third biological feature against the second biological feature of the second object; determining a successfully matched object to be the second object; and counting the number of video segments in which the second object appears, so as to determine the number of times the second object appears in the monitoring video.
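The segment-based variant counts the number of video segments containing a match rather than tracking continuously. A minimal sketch under the same illustrative assumptions (toy features, toy `match` predicate):

```python
def count_by_segments(segments, second_feature, match):
    """Count how many video segments within the set time range contain
    an object matching the second biological feature."""
    return sum(
        1 for seg in segments
        if any(match(feat, second_feature) for feat in seg)
    )

# Toy segments: each is the list of features observed in that segment.
segments = [["a", "b"], ["c"], ["b"], []]
n = count_by_segments(segments, "b", lambda x, y: x == y)
```

Segments 0 and 2 contain the second object, so its appearance count is 2. Compared with the tracking variant, this trades temporal precision for simplicity: a segment counts once no matter how many times the object enters and leaves it.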
With reference to the first aspect, an embodiment of the present application provides an eighth possible implementation of the first aspect, wherein obtaining, according to the second biological feature of each second object, the duration for which each second object appears in the monitoring video within the set time range comprises performing the following operations for each second object: acquiring a third biological feature of each object in the monitoring video within the set time range; matching the third biological feature against the second biological feature of the second object; determining a successfully matched object to be the second object, tracking the second object, and counting the number of image frames in which the second object continuously appears in the monitoring video; and determining, according to the number of image frames, the duration for which the second object appears in the monitoring video.
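Converting the counted image frames into an appearance duration only needs the video frame rate. The patent says merely that the duration is determined from the frame count; the fixed-fps conversion below is one plausible reading, and the 25 fps default is an assumption:

```python
def appearance_duration(consecutive_frames, fps=25.0):
    """Appearance duration in seconds, derived from the number of image
    frames in which the second object continuously appears. Assumes a
    constant frame rate (fps is illustrative)."""
    if fps <= 0:
        raise ValueError("fps must be positive")
    return consecutive_frames / fps

d = appearance_duration(250)   # 250 consecutive frames at 25 fps
```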
In a second aspect, an embodiment of the present application provides an identity recognition device in a monitoring scene, comprising: a first acquisition module for acquiring a first biological feature of a first object, the first biological feature comprising a gait feature and/or a facial feature of the first object; a first-object position determination module for determining the first object in a monitoring video according to the first biological feature; a first determination module for determining a target area in the monitoring video according to the position of the first object; a second acquisition module for acquiring, from the monitoring video, a second biological feature of at least one second object appearing in the target area, the second biological feature comprising a gait feature and/or a facial feature of the second object; a third acquisition module for obtaining, according to the second biological feature of each second object, the number of times or the duration for which each second object appears in the monitoring video within a set time range; a screening module for selecting, from the second objects, those whose appearance count is not less than a preset count or whose appearance duration is not less than a preset duration threshold; and a second determination module for determining the selected second objects to be target objects appearing in the monitoring video.
With reference to the second aspect, an embodiment of the present application provides a first possible implementation of the second aspect, wherein the first-object position determination module is specifically configured to acquire the biological feature of each object in the monitoring video, and to match the biological feature of each object against the first biological feature of the first object to determine the position of the first object in the monitoring video.
With reference to the first possible implementation of the second aspect, an embodiment of the present application provides a second possible implementation of the second aspect. When the biological feature of each object includes both the facial feature and the gait feature of that object, the first-object position determination module is specifically configured to: detect, for each object, whether its facial feature meets the face recognition condition;

for objects whose facial feature meets the face recognition condition, match that facial feature against the facial feature of the first object, and determine the position at which an object with a successfully matched facial feature appears in the monitoring video to be the position of the first object; and

for objects whose facial feature does not meet the face recognition condition, extract the gait feature, match the extracted gait feature against the gait feature of the first object, and determine the position at which an object with a successfully matched gait feature appears in the monitoring video to be the position of the first object.
With reference to the first possible implementation of the second aspect, an embodiment of the present application provides a third possible implementation of the second aspect, wherein, when the biological feature of each object includes the facial feature of that object, the first-object position determination module is specifically configured to match the facial feature of each object against the facial feature of the first object, and to determine the position at which an object with a successfully matched facial feature appears in the monitoring video to be the position of the first object.
With reference to the first possible implementation of the second aspect, an embodiment of the present application provides a fourth possible implementation of the second aspect, wherein, when the biological feature of each object includes the gait feature of that object, the first-object position determination module is specifically configured to match the gait feature of each object against the gait feature of the first object, and to determine the position at which an object with a successfully matched gait feature appears in the monitoring video to be the position of the first object.
With reference to the second aspect, an embodiment of the present application provides a fifth possible implementation of the second aspect, wherein the first determination module is specifically configured to take the position of the first object as the centre of a circle and a preset distance from that position as the radius, and to determine the target area in the monitoring video accordingly.
With reference to the second aspect, an embodiment of the present application provides a sixth possible implementation of the second aspect, wherein the third acquisition module is specifically configured to obtain, according to the second biological feature of each second object, the number of times each second object appears in the monitoring video within the set time range, by performing the following operations for each second object: acquiring a third biological feature of each object in the monitoring video within the set time range; matching the third biological feature against the second biological feature; determining a successfully matched object to be the second object, and tracking the second object in the monitoring video; and, when the second object can no longer be tracked in the monitoring video, counting one appearance of the second object, so as to determine the number of times the second object appears in the monitoring video.
With reference to the second aspect, an embodiment of the present application provides a seventh possible implementation of the second aspect, wherein the third acquisition module is further configured to obtain, according to the second biological feature of each second object, the number of times each second object appears in the monitoring video within the set time range, by performing the following operations for each second object: acquiring, in each of a plurality of video segments of the monitoring video within the set time range, a third biological feature of each object in that segment; matching the third biological feature against the second biological feature of the second object; determining a successfully matched object to be the second object; and counting the number of video segments in which the second object appears, so as to determine the number of times the second object appears in the monitoring video.
With reference to the second aspect, an embodiment of the present application provides an eighth possible implementation of the second aspect, wherein the third acquisition module is further configured to obtain, according to the second biological feature of each second object, the duration for which each second object appears in the monitoring video within the set time range, by performing the following operations for each second object: acquiring a third biological feature of each object in the monitoring video within the set time range; matching the third biological feature against the second biological feature of the second object; determining a successfully matched object to be the second object, tracking the second object, and counting the number of image frames in which the second object continuously appears in the monitoring video; and determining, according to the number of image frames, the duration for which the second object appears in the monitoring video.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor, a memory and a bus. The memory stores machine-readable instructions executable by the processor. When the electronic device runs, the processor and the memory communicate over the bus, and when the machine-readable instructions are executed by the processor, the steps of the identity recognition method in a monitoring scene according to the first aspect or any possible implementation of the first aspect are performed.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when run by a processor, performs the steps of the identity recognition method in a monitoring scene according to the first aspect or any possible implementation of the first aspect.
In the identity recognition method and device in a monitoring scene provided by the embodiments of the present application, an acquired first biological feature of a first object is used to determine the first object, a target area is set around the position of the first object, and a second biological feature of at least one second object is found within the target area; according to the second biological feature of each second object, the number of times or the duration for which each second object appears in the monitoring video within a set time range is obtained; second objects whose appearance count is not less than a preset count, or whose appearance duration is not less than a preset duration threshold, are selected and determined to be target objects appearing in the monitoring video. By preliminarily selecting suspects according to how often second objects appear in the monitoring video, the step of preliminarily screening suspects by manually analysing massive video files is eliminated, improving processing efficiency while allowing suspects to be determined accurately.
To make the above objects, features and advantages of the present application clearer and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings show only some embodiments of the application and are therefore not to be regarded as limiting its scope; for those of ordinary skill in the art, other relevant drawings can be obtained from these drawings without creative effort.
Fig. 1 shows a flow diagram of an identity recognition method in a monitoring scene provided by an embodiment of the present application;

Fig. 2 shows a flow diagram of determining the position of the first object provided by an embodiment of the present application;

Fig. 3 shows a flow diagram of another way of determining the position of the first object provided by an embodiment of the present application;

Fig. 4 shows a flow diagram of determining the number of times a second object appears in the monitoring video provided by an embodiment of the present application;

Fig. 5 shows a flow diagram of another way of determining the number of times a second object appears in the monitoring video provided by an embodiment of the present application;

Fig. 6 shows a flow diagram of determining the duration for which a second object appears in the monitoring video provided by an embodiment of the present application;

Fig. 7 shows a structural schematic diagram of an identity recognition device in a monitoring scene provided by an embodiment of the present application;

Fig. 8 shows a structural schematic diagram of an electronic device provided by an embodiment of the present application.
Reference numerals:

701 - first acquisition module; 702 - positioning module; 703 - first determination module; 704 - second acquisition module; 705 - third acquisition module; 706 - screening module; 707 - second determination module.
Detailed description of the embodiments
To make the purposes, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. The components of the embodiments, as generally described and illustrated in the drawings, may be arranged and designed in a variety of different configurations. The following detailed description of the embodiments provided in the drawings is therefore not intended to limit the claimed scope of the present application, but merely represents selected embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
Considering that searching for a specific target through a monitoring system is currently done mainly by manually analysing massive video files, and that the volume of monitoring video data is enormous, manual analysis is labour-intensive, inefficient, and may fail to identify a suspect accurately. On this basis, the embodiments of the present application provide an identity recognition method and device in a monitoring scene, which are described below through embodiments.
To facilitate understanding of the present embodiment, the identity recognition method in a monitoring scene disclosed in the embodiments of the present application is first described in detail.
Embodiment one:
An embodiment of the present application provides an identity recognition method in a monitoring scene. Referring to the flow diagram of such a method shown in Fig. 1, it comprises the following steps:

Step S102: acquire a first biological feature of a first object, the first biological feature comprising a gait feature and/or a facial feature of the first object.
The first object may be the person who reported the case or a relevant person who provided a clue. The first biological feature of the first object may be acquired by taking a photograph or capturing video with a recording device, and then extracting the gait feature and/or facial feature of the first object from the photograph or video. The gait feature includes, but is not limited to, step length, stride, cadence, speed, dynamic base, walking line, foot inclination angle and hip angle; gait analysis comprises measuring, introducing, analysing and interpreting the measurable parameters, from which related indicators of the first object, such as health status, age, body shape, weight and other bodily manifestations of personal characteristics, can be obtained. The facial feature is mainly obtained by identifying facial feature points, including but not limited to the inner corner of the right eye, the inner corner of the left eye, the right corner of the mouth and the bottom edge of the lower right eyelid, and the facial contour of the first object is recognised from the lines between these feature points.
Step S104: determine the first object in the monitoring video according to the first biological feature.

In the embodiments of the present application, when locating the first object in the monitoring video according to the first biological feature, the biological feature of each object appearing in the monitoring video may be acquired, and the biological feature of each object then matched against the first biological feature of the first object to determine the position of the first object in the monitoring video.

Specifically, the biological feature of each object in the monitoring video is matched one by one against the first biological feature of the first object; when a match result reaches a preset similarity value, the corresponding object in the monitoring video is determined to be the first object, and the first object is thereby determined in the monitoring video.
In a specific implementation, the matching process may be as follows:

for each object, detect whether its facial feature meets the face recognition condition; match the facial feature of each object that meets the condition against the facial feature of the first object, and determine the position at which an object with a successfully matched facial feature appears in the monitoring video to be the position of the first object;

for each object whose facial feature does not meet the face recognition condition, extract its gait feature, match the extracted gait feature against the gait feature of the first object, and determine the position at which an object with a successfully matched gait feature appears in the monitoring video to be the position of the first object.

The face recognition condition concerns, first, whether the image quality of each object's face image in the monitoring video is clear enough that the facial feature of the object can be recognised by image recognition technology and matched against the facial feature of the first object; and second, on the premise that the face image is clear, whether the key facial feature points can be recognised by image recognition technology. If, for example, only the right corner of the mouth can be found among an object's facial features in the acquired image, matching against the facial feature of the first object cannot be performed on the basis of that single feature point.
In the above embodiment, when judging whether a facial feature meets the face recognition condition, scores for the facial feature may be obtained based on a fully convolutional network, and the sum of the scores compared against a preset score threshold: if the sum of the scores is greater than or equal to the preset score threshold, the face recognition condition is met; if the sum of the scores is less than the preset score threshold, it is not met.
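The threshold test on summed scores can be written down directly. The per-landmark scores here are placeholders for the outputs of the fully convolutional network (which the patent does not specify further), and the threshold value is illustrative:

```python
def meets_face_condition(landmark_scores, score_threshold=3.2):
    """Face recognition condition: the sum of per-landmark quality
    scores (e.g. produced by a fully convolutional network) must reach
    a preset score threshold."""
    return sum(landmark_scores) >= score_threshold

# Hypothetical scores for four landmarks (inner eye corners, mouth
# corner, eyelid point).
clear_face = meets_face_condition([0.9, 0.8, 0.9, 0.7])    # sum 3.3
blurred_face = meets_face_condition([0.9, 0.1, 0.2, 0.3])  # sum 1.5
```

Only the clear face passes and proceeds to facial matching; the blurred one falls through to gait matching.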
In a kind of possible embodiment, the object of face recognition condition is met for facial characteristics, can will face it is special
The facial characteristics for levying the object for meeting face recognition condition is matched with the facial characteristics of the first object, when facial characteristic matching
Result reach default similarity value, then the object of above-mentioned successful match is determined as the first object in the position that monitor video occurs
Position.
Conversely, when the result of facial-feature matching does not reach the preset similarity value, the facial match has failed. In this case the object cannot be directly ruled out as the first object. Instead, gait features are further extracted for the objects whose facial match failed and compared with the gait feature of the first object; the position at which an object whose gait-matching result reaches the preset similarity value appears in the surveillance video is determined as the position of the first object. Only when the gait-matching result also fails to reach the preset similarity value is the object determined not to be the first object.
In another possible implementation, for objects whose facial features do not meet the face recognition condition, their gait features are further extracted and matched against the gait feature of the first object, and the position at which an object with a successful gait match appears in the surveillance video is determined as the position of the first object.
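The two implementations above amount to a face-first cascade with a gait fallback. A minimal sketch, assuming hypothetical similarity functions returning values in [0, 1] and candidate records with `face`, `gait`, and `position` fields (none of these names come from the application itself):

```python
def locate_first_object(candidates, first_face, first_gait, similarity_threshold,
                        face_similarity, gait_similarity):
    """Face-first cascade: try a facial match; fall back to a gait match.

    `candidates` is a list of dicts with hypothetical keys 'face', 'gait',
    and 'position'; 'face' may be None when no usable facial feature was
    extracted. Returns the positions at which the first object appears.
    """
    positions = []
    for obj in candidates:
        face_ok = (obj['face'] is not None and
                   face_similarity(obj['face'], first_face) >= similarity_threshold)
        if face_ok:
            positions.append(obj['position'])   # facial match succeeded
        elif gait_similarity(obj['gait'], first_gait) >= similarity_threshold:
            positions.append(obj['position'])   # gait fallback succeeded
        # otherwise: determined not to be the first object
    return positions
```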
Here, gait-feature matching requires multiple frames of the surveillance video to determine a gait feature. In a concrete implementation, the gait feature of an object whose facial features do not meet the face recognition condition is first extracted from at least one image associated with the N-th frame. The extracted gait feature is then matched against the gait feature of the first object; when the matching result reaches the preset similarity value, the position at which the successfully matched object appears in the surveillance video is determined as the position of the first object. The gait-matching procedure is similar to the facial-feature matching procedure and is not repeated here.
The associated images include, but are not limited to: the X frames immediately preceding and adjacent to the N-th frame in the surveillance video, the Y frames immediately following and adjacent to it, or both the X preceding and the Y following frames.
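Selecting the associated images is a simple index computation. The sketch below assumes 0-based frame numbering and clips the window at the video boundaries (an assumption; the application does not specify boundary handling):

```python
def associated_frames(n, x_before, y_after, total_frames):
    """Indices of the X frames immediately before and the Y frames immediately
    after frame n, clipped to the valid range [0, total_frames)."""
    before = list(range(max(0, n - x_before), n))
    after = list(range(n + 1, min(total_frames, n + 1 + y_after)))
    return before + after

print(associated_frames(10, 2, 2, 100))  # [8, 9, 11, 12]
```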
Optionally, gait features can be extracted with algorithms such as Hough-transform-based gait feature extraction or particle-filter-tracking-based gait feature extraction.
Optionally, referring to Fig. 2, a flow diagram of one way of determining the position of the first object, the embodiment of the present application determines the position of the first object as follows:
S202: obtain the facial features of each object in the surveillance video.
S204: match the facial features of each object against the facial features of the first object, and determine the position at which a successfully matched object appears in the surveillance video as the position of the first object.
Optionally, referring to Fig. 3, a flow diagram of another way of determining the position of the first object, the embodiment of the present application determines the position of the first object as follows:
S302: obtain the gait features of each object in the surveillance video.
S304: match the gait features of each object against the gait feature of the first object, and determine the position at which a successfully matched object appears in the surveillance video as the position of the first object.
After the position of the first object is determined, step S106 is executed.
Step S106: determine a target area from the surveillance video according to the position of the first object.
In one possible implementation, the target area is determined from the surveillance video as a circle centred on the position of the first object, with a preset distance from that position as the radius. This helps narrow the range in which the target object is sought.
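Assuming positions are 2-D pixel (or ground-plane) coordinates, membership in the circular target area reduces to a distance test:

```python
import math

def in_target_area(position, first_object_position, preset_distance):
    """True when `position` lies inside (or on) the circle centred on the
    first object's position whose radius is the preset distance."""
    return math.dist(position, first_object_position) <= preset_distance

print(in_target_area((3, 4), (0, 0), 5.0))  # True  (distance is exactly 5)
print(in_target_area((6, 8), (0, 0), 5.0))  # False (distance is 10)
```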
Step S108: obtain, from the surveillance video, the second biometric feature of at least one second object appearing in the target area, where the second biometric feature includes the gait feature and/or facial feature of the second object.
The acquisition of the second biometric feature of each second object is similar to the acquisition of the first biometric feature of the first object in step S102; for convenience and brevity of description, reference may be made to the corresponding process of step S102, which is not repeated here.
Step S110: according to the second biometric feature of each second object, obtain the number of appearances, or the appearance duration, of each second object in the surveillance video within a set time range.
After the second biometric feature of at least one second object has been obtained, the portion of the surveillance video covering the period in which an incident is most likely to have occurred (i.e. the surveillance video within the above set time range) is selected, and the appearances of each second object in the surveillance video within that time range are counted.
If a person appears frequently within the set time range at a location with heavy foot traffic, that person's behaviour is more suspicious, and he or she is likely a suspect. By counting the appearances of each second object in the surveillance video within the set time range, and judging by the appearance count or the length of the appearance duration, the range in which the target object is sought can therefore be narrowed.
For clarity, the process of obtaining, according to the second biometric feature of each second object, the number of appearances (or the appearance duration) of each second object in the surveillance video within the set time range is described in detail as follows.
For each second object, the following operations are performed (for the detailed process, see Fig. 4, a flow diagram of one way of determining the number of appearances of a second object in the surveillance video):
Step S402: obtain the third biometric feature of each object in the surveillance video within the set time range.
Step S404: match the third biometric feature against the second biometric feature.
Step S406: determine a successfully matched object as the second object, and track the second object in the surveillance video.
Step S408: when the second object can no longer be tracked in the surveillance video, count that appearance of the second object as one occurrence, so as to determine the number of appearances of the second object in the surveillance video.
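Steps S402-S408 can be sketched as follows, with per-frame detection lists and a hypothetical `matches` predicate standing in for the biometric matcher (both are assumptions, not part of the application); each contiguous tracked run counts as one appearance:

```python
def count_appearances(frame_detections, second_feature, matches):
    """Count appearances per the S402-S408 flow: each contiguous run of
    frames in which the second object is tracked counts as one appearance.

    frame_detections: per-frame lists of (hypothetical) third biometric
    features; `matches(feature, second_feature)` decides whether a detected
    feature belongs to the second object.
    """
    count = 0
    tracking = False
    for features in frame_detections:
        present = any(matches(f, second_feature) for f in features)
        if present and not tracking:
            tracking = True        # object re-appears: a new track begins
        elif not present and tracking:
            tracking = False       # track lost: one appearance completed
            count += 1
    if tracking:                   # still tracked when the video ends
        count += 1
    return count
```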
Optionally, Fig. 5 shows a flow diagram of another way of determining the number of appearances of a second object in the surveillance video. For each second object, the following operations are performed:
Step S502: in each of multiple video segments within the set time range of the surveillance video, obtain the third biometric feature of each object.
Step S504: match the third biometric feature against the second biometric feature of the second object.
Step S506: determine a successfully matched object as the second object.
Step S508: determine the number of appearances of the second object in the surveillance video by counting the number of video segments in which the second object appears.
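The segment-based variant of Fig. 5 is simpler: the appearance count is the number of video segments containing a match. A sketch under the same hypothetical `matches` predicate:

```python
def count_appearances_by_segment(video_segments, second_feature, matches):
    """S502-S508: count the video segments in which any object's third
    biometric feature matches the second object's second biometric feature.

    video_segments: a list of segments, each a list of (hypothetical)
    third biometric features detected in that segment.
    """
    return sum(
        1 for segment in video_segments
        if any(matches(f, second_feature) for f in segment)
    )
```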
In a concrete implementation, referring to Fig. 6, a flow diagram of one way of determining the appearance duration of a second object in the surveillance video, the embodiment of the present application determines the appearance duration as follows. For each second object, the following operations are performed:
Step S602: obtain the third biometric feature of each object in the surveillance video within the set time range.
Step S604: match the third biometric feature against the second biometric feature of the second object.
Step S606: determine a successfully matched object as the second object, track the second object, and count the number of image frames in the surveillance video in which the second object continuously appears.
Step S608: determine the appearance duration of the second object in the surveillance video from the number of image frames.
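Steps S602-S608 reduce the duration to a frame count divided by the frame rate. The sketch below assumes per-frame presence flags from the tracker and interprets "continuously appears" as the longest unbroken run of frames (an interpretation the application leaves open; the frame rate is assumed known from the video's metadata):

```python
def appearance_duration_seconds(frame_flags, fps):
    """S602-S608 sketch: frame_flags[i] is True when the tracked second
    object is present in frame i; the duration is the longest unbroken run
    of present frames divided by the video's frame rate."""
    longest = current = 0
    for present in frame_flags:
        current = current + 1 if present else 0   # extend or reset the run
        longest = max(longest, current)
    return longest / fps

# 50 consecutive frames at 25 fps correspond to 2 seconds on screen.
print(appearance_duration_seconds([True] * 50 + [False] + [True] * 25, 25))  # 2.0
```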
After the number of appearances, or the appearance duration, of each second object in the surveillance video has been determined, step S112 is executed.
Step S112: from the second objects, select those whose appearance count is not less than a preset count, or whose appearance duration is not less than a preset duration threshold, and determine the selected second objects as target objects appearing in the surveillance video.
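Step S112 is then a simple filter over the per-object statistics. In the sketch below, the `stats` mapping and its object ids are hypothetical bookkeeping, not part of the application:

```python
def select_target_objects(stats, preset_count, preset_duration):
    """Keep second objects whose appearance count is not less than the preset
    count, or whose appearance duration is not less than the preset duration
    threshold. `stats` maps an object id to (appearances, duration_seconds)."""
    return [
        obj_id for obj_id, (appearances, duration) in stats.items()
        if appearances >= preset_count or duration >= preset_duration
    ]

stats = {'p1': (5, 10.0), 'p2': (1, 3.0), 'p3': (2, 120.0)}
# p1 passes on count, p3 passes on duration, p2 passes on neither.
print(select_target_objects(stats, 4, 60.0))  # ['p1', 'p3']
```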
The identity recognition method under a surveillance scene provided by the embodiments of the present application first locates the first object using the first object's first biometric feature, sets a target area around the position of the first object, and obtains within that area the second biometric feature of at least one second object. According to the second biometric feature of each second object, the number of appearances (or the appearance duration) of each second object in the surveillance video within a set time range is obtained; the second objects whose appearance count is not less than a preset count, or whose appearance duration is not less than a preset duration threshold, are selected and determined as target objects appearing in the surveillance video. Preliminarily screening suspects by the number of times second objects appear in the surveillance video eliminates the step of completing the preliminary screening through manual analysis of massive video files, improving processing efficiency while still determining suspects accurately.
Embodiment two:
The embodiment of the present application provides an identity recognition device under a surveillance scene. Referring to Fig. 7, a structural schematic diagram of such a device, it comprises:
a first obtaining module 701, configured to obtain the first biometric feature of the first object, the first biometric feature including the gait feature and/or facial feature of the first object;
a first object position determining module 702, configured to locate the first object in the surveillance video according to the first biometric feature;
a first determining module 703, configured to determine a target area from the surveillance video according to the position of the first object, specifically in the following manner: taking the position of the first object as the centre of a circle, and a preset distance from that position as the radius, the target area is determined from the surveillance video;
a second obtaining module 704, configured to obtain, from the surveillance video, the second biometric feature of at least one second object appearing in the target area, the second biometric feature including the gait feature and/or facial feature of the second object;
a third obtaining module 705, configured to obtain, according to the second biometric feature of each second object, the number of appearances, or the appearance duration, of each second object in the surveillance video within a set time range;
a screening module 706, configured to select, from the second objects, those whose appearance count is not less than a preset count or whose appearance duration is not less than a preset duration threshold; and
a second determining module 707, configured to determine the selected second objects as target objects appearing in the surveillance video.
The first object position determining module 702 is specifically configured to obtain the biometric feature of each object in the surveillance video, match the biometric feature of each object against the first biometric feature of the first object, and determine the position of the first object in the surveillance video.
When the biometric feature of each object includes both the facial feature and the gait feature of that object, the first object position determining module 702 determines the position of the first object as follows:
for the facial feature of each object, detect whether the facial feature meets the face recognition condition;
match the facial features of objects that meet the face recognition condition against the facial feature of the first object, and determine the position at which an object with a successful facial match appears in the surveillance video as the position of the first object; and
extract the gait features of objects whose facial features do not meet the face recognition condition, match the extracted gait features against the gait feature of the first object, and determine the position at which an object with a successful gait match appears in the surveillance video as the position of the first object.
Optionally, when the biometric feature of each object includes the facial feature of that object, the first object position determining module 702 is specifically configured to match the facial feature of each object against the facial feature of the first object, and determine the position at which a successfully matched object appears in the surveillance video as the position of the first object.
Optionally, when the biometric feature of each object includes the gait feature of that object, the first object position determining module 702 is specifically configured to match the gait feature of each object against the gait feature of the first object, and determine the position at which a successfully matched object appears in the surveillance video as the position of the first object.
Specifically, the third obtaining module 705 is configured to obtain the number of appearances of each second object in the surveillance video within the set time range in the following manner. For each second object: obtain the third biometric feature of each object in the surveillance video within the set time range; match the third biometric feature against the second biometric feature; determine a successfully matched object as the second object and track it in the surveillance video; and, when the second object can no longer be tracked in the surveillance video, count that appearance as one occurrence, so as to determine the number of appearances of the second object in the surveillance video.
Alternatively, the third obtaining module 705 is configured to obtain the number of appearances of each second object in the surveillance video within the set time range in the following manner. For each second object: in each of multiple video segments within the set time range of the surveillance video, obtain the third biometric feature of each object; match the third biometric feature against the second biometric feature of the second object; determine a successfully matched object as the second object; and determine the number of appearances of the second object in the surveillance video by counting the video segments in which the second object appears.
Specifically, the third obtaining module 705 is configured to obtain the appearance duration of each second object in the surveillance video within the set time range in the following manner. For each second object: obtain the third biometric feature of each object in the surveillance video within the set time range; match the third biometric feature against the second biometric feature of the second object; determine a successfully matched object as the second object and track it, counting the number of image frames in the surveillance video in which the second object continuously appears; and determine the appearance duration of the second object in the surveillance video from the number of image frames.
It will be apparent to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, device, and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
Embodiment three:
Based on the same technical idea, the embodiment of the present application further provides an electronic device and a computer storage medium. Referring to Fig. 8, a structural schematic diagram of an electronic device, the particulars are described in the following embodiment.
The memory 41 may include high-speed random access memory (RAM) and may further include non-volatile memory, for example at least one disk storage. The bus 42 may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on.
The memory 41 is configured to store a program, and the processor 40 executes the program after receiving an execution instruction. The method performed by the flow-defined device disclosed in any of the foregoing embodiments of the present application may be applied in, or implemented by, the processor 40.
The processor 40 may be an integrated circuit chip with signal processing capability. In implementation, each step of the above method may be completed by an integrated hardware logic circuit in the processor 40 or by instructions in the form of software. The processor 40 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), and the like; it may also be a digital signal processor (Digital Signal Processing, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA), or another programmable logic device, discrete gate or transistor logic, or a discrete hardware component, capable of implementing or executing the methods, steps, and logic diagrams disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be embodied directly as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory 41; the processor 40 reads the information in the memory 41 and, in combination with its hardware, completes the steps of the above method.
Specifically, when the processor 40 runs the computer program stored in the memory 41, it can execute the above method of determining a target object, so that suspicious target objects in the surveillance video are found through image processing techniques. This solves the prior-art problems of low processing efficiency and low accuracy when screening suspects by manually viewing surveillance video, and thus improves both the processing efficiency and the accuracy of surveillance-video review.
The computer program product of the identity recognition method and device under a surveillance scene provided by the embodiments of the present application includes a computer-readable storage medium storing processor-executable non-volatile program code; the instructions included in the program code can be used to execute the method described in the foregoing method embodiments. For the concrete implementation, reference may be made to the method embodiments, which are not repeated here.
In the several embodiments provided by the present application, it should be understood that the disclosed system, device, and method may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division into units is only a division by logical function; in actual implementation there may be other divisions: for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, devices, or units, and may be electrical, mechanical, or of other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on this understanding, the technical solution of the present application, or in other words the part that contributes to the existing technology, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or a magnetic or optical disk.
Finally, it should be noted that the embodiments described above are only specific embodiments of the present application, intended to illustrate rather than limit its technical solution, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the technical field may, within the technical scope disclosed by the present application, still modify the technical solutions recorded in the foregoing embodiments, readily conceive of variations, or make equivalent replacements of some of the technical features; such modifications, variations, or replacements do not cause the essence of the corresponding technical solution to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (12)
1. An identity recognition method under a surveillance scene, characterized by comprising:
obtaining a first biometric feature of a first object, the first biometric feature including a gait feature and/or a facial feature of the first object;
locating the first object in a surveillance video according to the first biometric feature;
determining a target area from the surveillance video according to the position of the first object;
obtaining, from the surveillance video, a second biometric feature of at least one second object appearing in the target area, the second biometric feature including a gait feature and/or a facial feature of the second object;
obtaining, according to the second biometric feature of each second object, a number of appearances, or an appearance duration, of each second object in the surveillance video within a set time range;
selecting, from the second objects, those whose number of appearances is not less than a preset count, or whose appearance duration is not less than a preset duration threshold; and
determining the selected second objects as target objects appearing in the surveillance video.
2. The method according to claim 1, characterized in that locating the first object in the surveillance video according to the first biometric feature comprises:
obtaining a biometric feature of each object in the surveillance video; and
matching the biometric feature of each object against the first biometric feature of the first object, respectively, to determine the position of the first object in the surveillance video.
3. The method according to claim 2, characterized in that, when the biometric feature of each object includes a facial feature and a gait feature of that object, matching the biometric feature of each object against the first biometric feature of the first object to determine the position of the first object in the surveillance video comprises:
for the facial feature of each object, detecting whether the facial feature meets a face recognition condition;
matching the facial features of objects that meet the face recognition condition against the facial feature of the first object, and determining the position at which an object with a successful facial match appears in the surveillance video as the position of the first object; and
extracting the gait features of objects whose facial features do not meet the face recognition condition, matching the extracted gait features against the gait feature of the first object, and determining the position at which an object with a successful gait match appears in the surveillance video as the position of the first object.
4. The method according to claim 2, characterized in that, when the biometric feature of each object includes a facial feature of that object, matching the biometric feature of each object against the first biometric feature of the first object to determine the position of the first object in the surveillance video comprises:
matching the facial feature of each object against the facial feature of the first object, and determining the position at which a successfully matched object appears in the surveillance video as the position of the first object.
5. The method according to claim 2, characterized in that, when the biometric feature of each object includes a gait feature of that object, matching the biometric feature of each object against the first biometric feature of the first object to determine the position of the first object in the surveillance video comprises:
matching the gait feature of each object against the gait feature of the first object, and determining the position at which a successfully matched object appears in the surveillance video as the position of the first object.
6. The method according to claim 1, characterized in that determining a target area from the surveillance video according to the position of the first object comprises:
determining the target area from the surveillance video by taking the position of the first object as the centre of a circle, and a preset distance from that position as the radius.
7. The method according to claim 1, characterized in that obtaining, according to the second biometric feature of each second object, the number of appearances of each second object in the surveillance video within the set time range comprises, for each second object:
obtaining a third biometric feature of each object in the surveillance video within the set time range;
matching the third biometric feature against the second biometric feature;
determining a successfully matched object as the second object, and tracking the second object in the surveillance video; and
when the second object can no longer be tracked in the surveillance video, counting that appearance of the second object as one occurrence, to determine the number of appearances of the second object in the surveillance video.
8. The method according to claim 1, characterized in that obtaining, according to the second biometric feature of each second object, the number of appearances of each second object in the surveillance video within the set time range further comprises, for each second object:
obtaining a third biometric feature of each object in each of multiple video segments within the set time range of the surveillance video;
matching the third biometric feature against the second biometric feature of the second object;
determining a successfully matched object as the second object; and
determining the number of appearances of the second object in the surveillance video by counting the video segments in which the second object appears.
9. The method according to claim 1, characterized in that obtaining, according to the second biometric feature of each second object, the appearance duration of each second object in the surveillance video within the set time range comprises, for each second object:
obtaining a third biometric feature of each object in the surveillance video within the set time range;
matching the third biometric feature against the second biometric feature of the second object;
determining a successfully matched object as the second object, tracking the second object, and counting the number of image frames in the surveillance video in which the second object continuously appears; and
determining the appearance duration of the second object in the surveillance video from the number of image frames.
10. An identity recognition device under a monitoring scene, characterized by comprising:
a first obtaining module, configured to obtain a first biological feature of a first object, the first biological feature comprising a gait feature and/or a facial feature of the first object;
a first-object position determining module, configured to determine the first object in a monitor video according to the first biological feature;
a first determining module, configured to determine a target area from the monitor video according to the position of the first object;
a second obtaining module, configured to obtain, from the monitor video, a second biological feature of at least one second object appearing in the target area, the second biological feature comprising a gait feature and/or a facial feature of the second object;
a third obtaining module, configured to obtain, according to the second biological feature of each second object, an appearance count or an appearance duration of each second object in the monitor video within a set time range;
a screening module, configured to screen out, from the second objects, a selected second object whose appearance count is not less than a preset count or whose appearance duration is not less than a preset duration threshold; and
a second determining module, configured to determine the selected second object as a target object appearing in the monitor video.
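The screening and determining modules of claim 10 reduce to a simple threshold filter over the per-object statistics gathered by the third obtaining module. The sketch below is hypothetical: the class name, field names, and function signature are illustrative choices, not terms from the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SecondObject:
    obj_id: str
    appearance_count: int      # times the object appears in the monitor video
    appearance_seconds: float  # total appearance duration in the monitor video

def screen_targets(
    objects: List[SecondObject],
    preset_count: int,
    preset_seconds: float,
) -> List[SecondObject]:
    """Keep second objects whose appearance count is not less than the preset
    count, or whose appearance duration is not less than the preset duration
    threshold (the screening module); the survivors are the target objects."""
    return [
        o for o in objects
        if o.appearance_count >= preset_count
        or o.appearance_seconds >= preset_seconds
    ]
```

Note the disjunction: per the claim, meeting either threshold (count or duration) is sufficient for an object to be selected as a target.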
11. An electronic device, characterized by comprising a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the identity recognition method under a monitoring scene according to any one of claims 1 to 9.
12. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when run by a processor, performs the steps of the identity recognition method under a monitoring scene according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811223326.5A CN109508645A (en) | 2018-10-19 | 2018-10-19 | Personal identification method and device under monitoring scene |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109508645A true CN109508645A (en) | 2019-03-22 |
Family
ID=65746777
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811223326.5A Pending CN109508645A (en) | 2018-10-19 | 2018-10-19 | Personal identification method and device under monitoring scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109508645A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103246869A (en) * | 2013-04-19 | 2013-08-14 | 福建亿榕信息技术有限公司 | Crime monitoring method based on face recognition technology and behavior and sound recognition |
CN106203389A (en) * | 2016-07-22 | 2016-12-07 | 成都知人善用信息技术有限公司 | The intelligent transportation system of stolen vehicle is found in a kind of face recognition |
CN107393256A (en) * | 2017-07-31 | 2017-11-24 | 深圳前海弘稼科技有限公司 | Method for preventing missing, server and terminal equipment |
CN107909033A (en) * | 2017-11-15 | 2018-04-13 | 西安交通大学 | Suspect's fast track method based on monitor video |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110751116A (en) * | 2019-10-24 | 2020-02-04 | 银河水滴科技(北京)有限公司 | Target identification method and device |
CN110751116B (en) * | 2019-10-24 | 2022-07-01 | 银河水滴科技(宁波)有限公司 | Target identification method and device |
CN111178231A (en) * | 2019-12-26 | 2020-05-19 | 航天信息股份有限公司 | Object monitoring method and device |
CN111178231B (en) * | 2019-12-26 | 2023-07-18 | 航天信息股份有限公司 | Object monitoring method and device |
CN111461031A (en) * | 2020-04-03 | 2020-07-28 | 银河水滴科技(北京)有限公司 | Object recognition system and method |
CN111461031B (en) * | 2020-04-03 | 2023-10-24 | 银河水滴科技(宁波)有限公司 | Object recognition system and method |
CN111767782A (en) * | 2020-04-15 | 2020-10-13 | 上海摩象网络科技有限公司 | Tracking target determining method and device and handheld camera |
CN111767782B (en) * | 2020-04-15 | 2022-01-11 | 上海摩象网络科技有限公司 | Tracking target determining method and device and handheld camera |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10402627B2 (en) | Method and apparatus for determining identity identifier of face in face image, and terminal | |
CN109508645A (en) | Personal identification method and device under monitoring scene | |
CN108038176B (en) | Method and device for establishing passerby library, electronic equipment and medium | |
CN107644213A (en) | Video person extraction method and device | |
CN108197250B (en) | Picture retrieval method, electronic equipment and storage medium | |
CN110287889A (en) | A kind of method and device of identification | |
CN109145742B (en) | Pedestrian identification method and system | |
CN108875481B (en) | Method, device, system and storage medium for pedestrian detection | |
CN108038937B (en) | Method and device for showing welcome information, terminal equipment and storage medium | |
CN109766755B (en) | Face recognition method and related product | |
CN106997629A (en) | Access control method, apparatus and system | |
CN109359666A (en) | A kind of model recognizing method and processing terminal based on multiple features fusion neural network | |
CN105005777A (en) | Face-based audio and video recommendation method and face-based audio and video recommendation system | |
CN107133629B (en) | Picture classification method and device and mobile terminal | |
CN111914665A (en) | Face shielding detection method, device, equipment and storage medium | |
WO2014193220A2 (en) | System and method for multiple license plates identification | |
WO2019119396A1 (en) | Facial expression recognition method and device | |
CN108108711A (en) | Face supervision method, electronic equipment and storage medium | |
CN107844742A (en) | Facial image glasses minimizing technology, device and storage medium | |
CN110765903A (en) | Pedestrian re-identification method and device and storage medium | |
CN112417970A (en) | Target object identification method, device and electronic system | |
CN111291646A (en) | People flow statistical method, device, equipment and storage medium | |
CN111191531A (en) | Rapid pedestrian detection method and system | |
CN111476070A (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
US20160140395A1 (en) | Adaptive sampling for efficient analysis of ego-centric videos |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2019-03-22