CN105844128A - Method and device for identity identification - Google Patents
Method and device for identity identification
- Publication number
- CN105844128A (application CN201510019275.4A)
- Authority
- CN
- China
- Prior art keywords
- moving component
- user
- view data
- sample
- dynamic vision
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Human Computer Interaction (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
Abstract
The invention provides a method and a device for identity identification. The method comprises: collecting signals with a dynamic vision sensor and outputting the detected event points; accumulating the event points over a period of time to form image data; and performing identity identification on the image data with an identity classifier, wherein the identity classifier is trained in advance on image data formed from signals that the dynamic vision sensor collected from the user at the time of identity registration. With the method and device, identity can be identified with low energy consumption and simple, convenient operation, while the user's privacy is protected.
Description
Technical field
The present invention relates to the technical field of intelligent devices, and in particular to a method and apparatus for identity identification.
Background art
With the continuous growth of security requirements, identity recognition technology is widely used in surveillance, access control systems, and smart devices. For example, before unlocking, a smart device may first identify its current holder; if the identified identity matches a pre-registered user identity, the smart device is unlocked; otherwise, it remains locked or raises an alarm. The smart device may be a smartphone, smart glasses, a smart television, a smart home appliance, a smart car, and so on.
At present, traditional identity recognition methods mainly fall into two kinds: one identifies the user through articles such as keys, identity cards and smart cards; the other identifies the user through authentication information (for example, a password, a passphrase, or a specific operation). For example, a preset unlocking password is entered on an interactive interface popped up on a smartphone, and identification is completed by checking the password; alternatively, identification can be completed by sliding on the smartphone screen in a specific manner (for example, sliding a block in the screen, or connecting points on the screen in a particular order).

However, since authentication information such as passwords and authentication articles such as keys and smart cards may be obtained by other users, the traditional identity recognition methods above are easy to impersonate, i.e. their security is not high. Moreover, the operations these methods require are relatively cumbersome; for example, entering a password or connecting points on the screen requires touching the screen, and usually requires two-handed operation, which degrades the user experience.
Considering that, compared with traditional authentication information and authentication articles, personal biometric attributes are not easily acquired by others, there also exists a more secure identity recognition method based on personal attributes. It mainly uses a traditional imaging device based on a CCD (Charge-Coupled Device) or CMOS (Complementary Metal-Oxide-Semiconductor) sensor to acquire image information of the person (for example, images of the user's eyes, face, hands, or actions), matches the collected image information of the user against the pre-stored image information of registered users, and thereby identifies the user.
However, the existing attribute-based identity recognition method suffers from high energy consumption. Although electric energy can be saved by waking the device up before unlocking, this adds user operations. It is therefore necessary to provide an identity recognition method that is simple to operate and low in energy consumption.
Summary of the invention
The purpose of the present invention is to solve at least one of the above technical deficiencies, in particular the problems of cumbersome operation and high energy consumption.
The invention provides an identity recognition method, including:
collecting signals with a dynamic vision sensor and outputting the detected event points;
accumulating the event points over a period of time to form image data;
performing identity identification on the image data with an identity classifier.
The invention further provides an identity recognition device, including:
a signal collecting unit, configured to collect signals with a dynamic vision sensor and output the detected event points;
a target imaging unit, configured to accumulate the event points output by the signal collecting unit over a period of time to form image data;
an identity identification subunit, configured to perform identity identification, with an identity classifier, on the image data output by the target imaging unit.
In the scheme of this embodiment, the dynamic vision sensor can collect signals from a user during identity registration, and the identity classifier is trained in advance on the image data formed from the collected signals. In subsequent identity identification, signals are collected with the dynamic vision sensor, the detected event points are accumulated over a period of time to form image data, and the identity classifier performs identity identification on the image data so formed.
Compared with existing identity recognition methods, in the scheme provided by the present invention, the low-energy-consumption dynamic vision sensor can collect signals continuously; as long as the user moves within the field of view of the dynamic vision sensor, the sensor can capture the user and the user's actions timely and effectively. Identity identification is then performed on the signals collected by the dynamic vision sensor, without the user first waking up the terminal device and without any extra operation on the screen of the terminal device, so the operation is simple and convenient.
Additional aspects and advantages of the present invention will be set forth in part in the following description, will become apparent from the description, or may be learned by practice of the invention.
Brief description of the drawings

Fig. 1a is a flow diagram of the identity classifier training method of an embodiment of the present invention;
Fig. 1b is a schematic image of user image data of an embodiment of the present invention;
Fig. 2 is a flow diagram of the identity recognition method based on dynamic vision technology of an embodiment of the present invention;
Fig. 3a is a flow diagram of the method of performing identity identification with the identity classifier of an embodiment of the present invention;
Fig. 3b is a schematic image of a target area detected by an embodiment of the present invention;
Fig. 4 is a flow diagram of the part classifier training method of an embodiment of the present invention;
Fig. 5 is a structural diagram of the identity recognition device based on dynamic vision technology of an embodiment of the present invention;
Fig. 6 is a diagram of the internal structure of the identity identification subunit of an embodiment of the present invention;
Fig. 7 is a diagram of the internal structure of the action recognition unit of an embodiment of the present invention.
Detailed description of the invention
The technical scheme of the present invention is described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention rather than all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work fall within the scope protected by the present invention.
Terms such as "module" and "system" used in this application are intended to include computer-related entities, such as but not limited to hardware, firmware, combinations thereof, software, or software in execution. For example, a module may be, but is not limited to: a process running on a processor, a processor, an object, an executable program, a thread of execution, a program, and/or a computer. For example, both an application running on a computing device and the computing device itself can be modules. One or more modules may reside within one process and/or thread of execution, and a module may be located on one computer and/or distributed between two or more computers.
The inventors of the present invention found that the existing attribute-based identity recognition methods consume much energy because, during identity identification, a traditional imaging device needs to stay switched on to collect image information of the person, and the energy consumption of a traditional imaging device is very high, making the energy consumption of the whole identification process high.
Further, the inventors found that a dynamic vision sensor responds only to event points where the pixel brightness changes beyond a certain degree, and has characteristics such as low energy consumption and tolerance of a wide range of illumination conditions. Its low energy consumption allows it to stay in working state while a mobile device or other terminal is on standby, so that signals can be collected timely and rapidly, and the sensor can respond in time once the user needs to unlock the terminal device. Its wide range of workable illumination conditions allows the dynamic vision sensor to work effectively against different environmental backgrounds, collecting signals even in environments with very weak light sources.
Moreover, the image formed from the signals collected by a dynamic vision sensor substantially reflects only the contour information of the moving target, without conventional modal information such as color and texture, and automatically rejects the non-moving background of the scene in which the moving target is located. The dynamic vision sensor therefore also has strong confidentiality: even if the terminal device is hacked, the user's information will not be leaked, which helps protect the user's privacy, improves the security of user information, and improves the user experience.
Therefore, the inventors considered that the dynamic vision sensor can be used to collect signals from a registering user, and an identity classifier can be trained in advance on the image data formed from the collected signals. In subsequent identity identification, the dynamic vision sensor collects signals from the user to be identified, the detected event points are accumulated over a period of time to form image data, and the identity classifier then performs identity identification on the image data so formed.
Compared with existing identity recognition methods, in the scheme provided by the present invention, the low-energy-consumption dynamic vision sensor can collect signals continuously; the user only needs to move within the field of view of the dynamic vision sensor, and the sensor can capture the user and the user's actions timely and effectively. Identity identification can then be performed on the signals collected by the dynamic vision sensor, without the user waking up the terminal device in advance and without any extra operation on the screen of the terminal device to carry out identification, so the operation is simple and convenient.
The technical scheme of the present invention is described in detail below with reference to the accompanying drawings.
In the embodiment of the present invention, before identity identification is carried out, the identity classifier used for identity identification can be trained in advance. For example, when the user registers an identity, the smart device can train the above identity classifier in advance on the image data formed from the signals that the dynamic vision sensor collects from the user. Specifically, as shown in Fig. 1a, the training can be performed through the following steps:
S101: when user identity registration is carried out, collect a dynamic vision signal from the user with the dynamic vision sensor.

Specifically, registration of the user identity can first be carried out on the smart device. For example, the registering user can input a registration instruction to the smart device by means of a button, voice and the like; after receiving the registration instruction, the smart device enters registration mode.

In registration mode, the smart device collects a dynamic vision signal from this user with the dynamic vision sensor. For example, in registration mode, when the user moves his or her head within the field of view of the dynamic vision sensor of the smart device, the signal collected by the dynamic vision sensor serves as the dynamic vision signal of the user's head.
In fact, the smart device may have one or more registered users; the dynamic vision signal collected from a user may refer to a signal collected from a certain body part of the user, or to a signal collected from the user as a whole.
S102: detect event points from the collected dynamic vision signal, and take the event points output by the dynamic vision sensor as user event points.

In practical applications, since the dynamic vision sensor responds only to the event points where the pixel brightness changes beyond a certain degree, and transmits and stores the responding event points, the event points output by the dynamic vision sensor can serve as the user event points used by the smart device in registration mode.
S103: map the event points within a period of time into image data, i.e. accumulate the user event points within a period of time to form user image data.

Specifically, after accumulating the user event points within a period of time (for example, 20 ms), the smart device can convert the user event points obtained in step S102 into a corresponding image signal according to the coordinate position, response order and spatial adjacency of each user event point, forming the user image data.

As can be seen from the user image data shown in Fig. 1b, the converted image signal substantially reflects only the contour and body texture information of the moving registering user, and directly ignores the non-moving objects in the background, so that the subsequent training of the identity classifier can be carried out quickly and accurately.
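For illustration only, the accumulation described in step S103 can be sketched in Python. The sketch below is a minimal, hypothetical implementation assuming events arrive as (x, y, t) tuples with timestamps in microseconds; the 20 ms window comes from the description above, while the function name, the frame size and the binary-image output are assumptions made for illustration.

```python
import numpy as np

def accumulate_events(events, width, height, window_us=20_000):
    """Map DVS event points within successive time windows into frames of image data.

    events: iterable of (x, y, t) tuples, t in microseconds, already denoised.
    Returns a list of (window_start_time, frame) pairs.
    """
    frames = []
    frame = np.zeros((height, width), dtype=np.uint8)
    window_start = None
    for x, y, t in sorted(events, key=lambda e: e[2]):
        if window_start is None:
            window_start = t
        if t - window_start >= window_us:       # window full: emit the frame
            frames.append((window_start, frame))
            frame = np.zeros((height, width), dtype=np.uint8)
            window_start = t
        frame[y, x] = 255                       # mark the responding pixel
    if window_start is not None:
        frames.append((window_start, frame))    # flush the last partial window
    return frames
```

Because the frame records only pixels where brightness changed, the result is exactly the contour-only image described above: a non-moving background never produces events and therefore never appears.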
S104: train a deep convolutional network with the user image data and the registrant identity calibrated for the user image data, obtaining the identity classifier.

In this step, the smart device can calibrate the identity of the user image data of the registering user obtained in step S103; for example, it can be directly calibrated as registrant.

Moreover, those skilled in the art can also pre-store in the smart device image data of users determined to be non-registrants (which may subsequently be referred to as non-user image data), and store, for the non-user image data, the correspondingly pre-calibrated non-registrant identity.

In this way, the smart device can take the user image data and the non-user image data as sample data and calibrate corresponding calibration results for the sample data; the calibration result of the sample data specifically includes the registrant identity calibrated for the user image data and the non-registrant identity calibrated for the non-user image data. The deep convolutional network is then trained with the sample data and its calibration results, obtaining the identity classifier. In practical applications, the deep convolutional network is used to automatically learn user features from the user image data, an identity prediction model is trained on them, and the network parameters are optimized by the back-propagation method, which improves the classification accuracy of the resulting identity classifier. Those skilled in the art can train the deep convolutional network using existing methods, which are not described in detail here.
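The patent leaves the network architecture and training procedure to common practice. As one hedged illustration, and not the patented design, a small deep convolutional network with back-propagation could be set up in PyTorch roughly as follows; the layer sizes, the two-class registrant/non-registrant output and all names are assumptions.

```python
import torch
import torch.nn as nn

class IdentityNet(nn.Module):
    """A small deep convolutional network for registrant/non-registrant classification."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # for 64x64 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

def train_step(model, optimizer, frames, labels):
    """One back-propagation step on a batch of calibrated event frames."""
    criterion = nn.CrossEntropyLoss()
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()                  # optimize network parameters by back-propagation
    optimizer.step()
    return loss.item()

model = IdentityNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
frames = torch.rand(8, 1, 64, 64)            # stand-in batch of accumulated frames
labels = torch.randint(0, 2, (8,))           # calibrated identities (1 = registrant)
print(train_step(model, optimizer, frames, labels))
```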
In practical applications, the resulting identity classifier can be used to identify the identity of the user to be identified, i.e. to identify whether the user is a registrant; or, more preferably, in the case where the smart device has multiple registered users, the identity classifier can further be used to identify which registered user the user specifically is.

Therefore, more preferably, when the smart device calibrates the identity of the user image data of the registering user formed in step S103, the registrant identity calibrated for the user image data can further include the user identifier of the registrant. In this way, after the resulting identity classifier has identified the user identity, the recognition result it outputs can, besides registrant or non-registrant, further include the user identifier of the identified registrant.
The calibration of the registrant's user identifier can specifically be accomplished as follows: in registration mode, after collecting the dynamic vision signal from the user with the dynamic vision sensor, the smart device performs self-calibration according to the order of the registering users; for example, the calibrated user identifiers can be registrant A, registrant B, registrant C, and so on.

Alternatively, the smart device can return to the registering user prompt information prompting the input of a user-defined user identifier; the registering user can then input the registrant's user identifier to the smart device by means of a button, voice and the like. After the smart device receives the registrant's user identifier input by the user, it uses the received user identifier to calibrate the registrant identity for the user image data.
In practical applications, when the dynamic vision sensor collects the dynamic vision signal of a certain body part of the registering user, the trained identity classifier is an identity classifier based on the motion features of that body part; this identity classifier can identify users by the motion features of that body part of different users.
More preferably, the image data formed from the signals collected by the dynamic vision sensor from different body parts of the registering user (such as the user's ears, face, head, upper body, etc.) can all be used as training data for the deep convolutional network, so that the resulting identity classifier can identify users by the combined motion features of the various body parts of different users. Since the identity classifier trained by this method can identify users by synthesizing the motion features of the various body parts, it avoids the limitation of identifying user identity from the motion features of a single body part, and can thereby further improve the accuracy of identity identification by the identity classifier.
Based on the above identity classifier, the invention provides an identity recognition method based on dynamic vision technology, whose specific flow is shown in Fig. 2 and may include the following steps:
S201: collect signals with the dynamic vision sensor and output the detected event points.

Specifically, the smart device can collect signals in real time with the dynamic vision sensor; when the user to be identified moves within the field of view of the dynamic vision sensor, the sensor can collect the dynamic vision signal of the user to be identified and output the detected event points.

For example, when the user to be identified moves the smart device from below the head to the ear, since the dynamic vision sensor is always on, it can rapidly capture the user's action and collect the dynamic vision signal of the user to be identified.
Each event point output by the dynamic vision sensor has a pixel coordinate position, but the same pixel coordinate position may correspond to multiple event points. Therefore, before outputting event points, the dynamic vision sensor needs to exclude the repeated event points according to the response order of the event points, retaining and outputting the most recently generated one.

In practical applications, the collected signals may contain noise caused by the system, the environment and the like; the dynamic vision sensor can therefore remove the noise from the signals according to the response order and spatial adjacency of the event points, as in the sketch below.
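To make the deduplication and denoising concrete, here is a minimal sketch under assumed conventions: events are (x, y, t) tuples, a repeated pixel coordinate keeps only its latest event, and an event with no spatio-temporal neighbor is treated as isolated noise. The thresholds are illustrative, not values from the patent.

```python
def deduplicate(events):
    """Keep only the most recently generated event point at each pixel coordinate."""
    latest = {}
    for x, y, t in events:
        if (x, y) not in latest or t > latest[(x, y)][2]:
            latest[(x, y)] = (x, y, t)
    return sorted(latest.values(), key=lambda e: e[2])

def denoise(events, dt_us=5_000, radius=2):
    """Drop event points with no other event close in space and response order."""
    kept = []
    for i, (x, y, t) in enumerate(events):
        has_neighbor = any(
            abs(t - t2) <= dt_us and abs(x - x2) <= radius and abs(y - y2) <= radius
            for j, (x2, y2, t2) in enumerate(events) if j != i
        )
        if has_neighbor:
            kept.append((x, y, t))
    return kept

raw = [(10, 10, 100), (10, 10, 900), (11, 10, 950), (200, 5, 400)]
print(denoise(deduplicate(raw)))   # the isolated (200, 5) point is removed
```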
S202: map the event points within a period of time into image data, i.e. accumulate the event points within a period of time to form image data.

In this step, the smart device can accumulate the event points within a period of time (for example, 20 ms), i.e. the event points that responded while the user to be identified moved within that period, and, combining the positions of the event points, convert the accumulated event points into image data.
S203: perform identity identification on the image data with the identity classifier; if the recognition result is registrant, perform step S204; if the recognition result is non-registrant, do not perform the subsequent steps.

In this step, the smart device can perform identity identification, with the identity classifier, on the image data obtained in step S202 above; the recognition result can be registrant or non-registrant. Therefore, after identity identification, it can be judged whether the recognition result is registrant: if the recognition result of the user to be identified is registrant, step S204 is performed; otherwise, the smart device may not perform the subsequent steps and continues to keep its current state.

More preferably, in the case where the smart device has multiple registered users, if the recognition result obtained with the identity classifier is registrant, the recognition result can further include the user identifier of the identified registrant.
The identity classifier can be obtained through the training of steps S101-S104 above, or through other training methods. For example, a traditional imaging device can be used to collect a set amount of image data of the user, which is put into the sample set of the registering user as samples; after each sample is transformed by flipping, rotation, translation, scaling and the like to augment the sample set and generate training data, a feature classification model is trained on pre-designed target features, obtaining the identity classifier used to identify user identity. The pre-designed target features can be those of traditional face recognition, such as HOG (Histogram of Oriented Gradients) or M-SHIFT (Mean-Shift); the feature classification model can be a KNN (k-Nearest Neighbor) classification algorithm, an SVM (Support Vector Machine), a Boosting algorithm, and so on, as in the sketch below.
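For this alternative training route, a hedged sketch using HOG features and an SVM could look like the following; it assumes scikit-image and scikit-learn, stand-in random data, and augmentation reduced to a horizontal flip, none of which is prescribed by the patent.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))          # stand-in 64x64 grayscale sample set
labels = np.repeat([0, 1], 20)             # 0 = non-registrant, 1 = registrant

# simple augmentation: add horizontally flipped copies to the sample set
images = np.concatenate([images, images[:, :, ::-1]])
labels = np.concatenate([labels, labels])

# pre-designed target feature: Histogram of Oriented Gradients per image
feats = np.array([hog(im, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                  for im in images])

clf = SVC(kernel="rbf").fit(feats, labels)  # feature classification model
print(clf.predict(feats[:2]))
```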
How the smart device performs identity identification with the identity classifier on the image data formed in step S202 will be described in detail later.

In practical applications, after identifying the user's identity with the identity classifier, the smart device can carry out certain operations according to the recognition result, for example performing an unlocking operation or sending an alarm to the registered user.

More preferably, while identifying the user identity through steps S201-S203 above, or after identifying the user to be identified as a registrant, the smart device can carry out action recognition of the user. In this way, after the user to be identified has been identified as a registrant, a corresponding instruction can be matched according to the identified registrant's action and the corresponding operation performed, such as answering an incoming call or opening a car door.
The recognition process of the user's action may specifically include the following steps:

S204: identify the movement trajectory of a moving part according to the detected event points.

Specifically, if the smart device identified the identity of the user to be identified as registrant in step S203, it can use the part classifier to identify the category and position of the moving part from the event points currently detected in step S201, and determine the movement trajectory of the moving part according to the successive positions of the moving part of the identified category.

The part classifier is trained on sample signals collected by the dynamic vision sensor; it can be trained by other devices and stored in the smart device, or trained in advance by the smart device itself. The training method of the part classifier will be described in detail later.

In this step, the smart device can use the part classifier to determine, according to the neighbor points of a currently detected event point, the category of the moving part to which that event point belongs.

The neighbor points of an event point can be determined in the following way: for the currently detected event point, determine all the event points collected within a set time interval before the dynamic vision sensor detected this event point, select from them the event points within a set spatial range around this event point (for example, a rectangle of 80 x 80 pixels), and take these as the neighbor points of this event point, as in the sketch below.
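A literal reading of this neighbor rule can be sketched as below; the 80 x 80 pixel rectangle comes from the example above, while the time interval and the data layout are assumptions.

```python
def neighbor_points(event, history, dt_us=10_000, half_window=40):
    """Neighbor points: events within a set time interval before `event`
    and inside an 80x80-pixel rectangle centered on it."""
    x, y, t = event
    return [(x2, y2, t2) for x2, y2, t2 in history
            if 0 < t - t2 <= dt_us
            and abs(x2 - x) <= half_window
            and abs(y2 - y) <= half_window]

history = [(100, 100, 500), (130, 95, 900), (400, 400, 950)]
print(neighbor_points((105, 102, 1000), history))
# -> [(100, 100, 500), (130, 95, 900)]; the far-away point is excluded
```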
Further, after determining the categories of the moving parts to which all the detected event points belong, the smart device can also determine, for each category of moving part, the position of the moving part of that category according to the positions of the event points belonging to it.

For example, the center position of the event points belonging to the same category of moving part can be calculated, and the calculated center position taken as the position of the moving part of that category. In practical applications, any commonly used clustering method known to those skilled in the art can be used to obtain this center position; for example, the K-means clustering method can be used to obtain the center position of the moving part, facilitating the subsequent accurate tracking of the moving part.
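As one way to realize the center computation just mentioned, the sketch below applies K-means with a single cluster per part category, in which case it reduces to the mean of the points; the names and data are illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans

def part_position(points):
    """Center position of the event points belonging to one moving-part category."""
    km = KMeans(n_clusters=1, n_init=10).fit(np.asarray(points, dtype=float))
    return km.cluster_centers_[0]

head_points = [(100, 40), (104, 44), (98, 42), (102, 39)]
print(part_position(head_points))   # ~[101.0, 41.25], used as the head position
```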
In this step, after the category and position of the moving part have been identified as above, the movement trajectory of the moving part of that category can be determined from the successively identified positions of the moving part of that category.

In practical applications, tracking algorithms commonly used by those skilled in the art, such as smoothing filters or sequential tracking algorithms, can be used to determine the movement trajectory of the moving part; they are not described in detail here.
More preferably, in the embodiment of the present invention, after identifying the category and position of a moving part, the smart device can also perform region validity verification on the identified category of moving part, excluding the positions of wrongly judged moving parts, thereby improving the tracking efficiency for subsequent moving parts and the accuracy of action recognition.

Specifically, the smart device can judge whether the position of the currently identified category of moving part lies within a reasonable region range; if so, the verification passes; otherwise, it does not. If the identified category of moving part passes the verification, the identified category and position of the moving part are recorded correspondingly; for example, they can be recorded in a tracking-part list built in advance, which is used to keep a tracking record of the positions of moving parts. In this way, the movement trajectory can be determined from the successively recorded positions of the moving part of that category in the tracking-part list.
The reasonable region range is determined from the last recorded position of the moving part of that category and prior knowledge of the position range of moving parts of that category. For example, when the moving part is a specific body part such as the head or a hand of a human body, the distance between the position of the currently identified moving part of that category (for example, head or hand) and the last recorded position of the moving part of that category can be calculated; if the distance satisfies a certain condition and conforms to empirical knowledge of normal human form, the position of the currently identified moving part is within the reasonable region range.
In practical applications, owing to the particularity of dynamic vision sensor imaging, when a moving part pauses briefly, the movement trajectory of the moving part reflected by the event points detected by the dynamic vision sensor may briefly disappear. Continuous tracking of the different moving parts can therefore be realized by maintaining the tracking-part list, and the movement positions can be smoothed; the smoothing can use conventional means such as Kalman filtering.
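As an illustration of the tracking-list idea, the sketch below keeps per-part positions alive across brief gaps and smooths them; simple exponential smoothing stands in for the Kalman filtering mentioned above, and every constant and name is an assumption.

```python
class PartTracker:
    """Tracking-part list: smoothed positions that survive brief disappearances."""
    def __init__(self, alpha=0.5, max_gap_us=100_000):
        self.alpha = alpha            # smoothing weight (stand-in for Kalman filtering)
        self.max_gap_us = max_gap_us  # longest tolerated disappearance
        self.tracks = {}              # category -> (x, y, last_seen_t, trajectory)

    def update(self, category, pos, t):
        if category in self.tracks:
            x, y, last_t, traj = self.tracks[category]
            if t - last_t > self.max_gap_us:   # gone too long: start a new track
                traj = []
            x = self.alpha * pos[0] + (1 - self.alpha) * x
            y = self.alpha * pos[1] + (1 - self.alpha) * y
        else:
            (x, y), traj = pos, []
        traj.append((x, y))
        self.tracks[category] = (x, y, t, traj)
        return traj                            # smoothed movement trajectory so far

tracker = PartTracker()
for t, p in enumerate([(10, 10), (14, 12), (18, 15)]):
    trajectory = tracker.update("head", p, t * 20_000)
print(trajectory)                              # [(10, 10), (12.0, 11.0), (15.0, 13.0)]
```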
S205: match a corresponding instruction according to the movement trajectory of the moving part.

Specifically, the smart device can extract a trajectory feature from the movement trajectory of the moving part determined in the preceding step S204, and search the action dictionary for a stored feature of that moving part matching the extracted trajectory feature; if there is one, the instruction corresponding to the found feature is taken as the instruction corresponding to the movement trajectory of the moving part, as sketched below.

The action dictionary is built in advance by technicians; for each category of moving part, it stores the features matching the movement trajectories of moving parts of that category, with correspondingly recorded preset action instructions, for example an answer-call instruction or an open-car-door instruction.
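The matching in step S205 might be sketched as follows, with the trajectory feature reduced to a resampled, position- and scale-normalized point sequence and the match to a nearest-neighbor distance test; the dictionary entry, the threshold and all names are invented for illustration.

```python
import numpy as np

def trajectory_feature(track, n=8):
    """Resample a trajectory to n points and normalize out position and scale."""
    track = np.asarray(track, dtype=float)
    idx = np.linspace(0, len(track) - 1, n).round().astype(int)
    pts = track[idx] - track[0]
    scale = np.abs(pts).max() or 1.0
    return (pts / scale).ravel()

# action dictionary: per part category, stored trajectory features -> instructions
ACTION_DICT = {
    "ear": [(trajectory_feature([(0, 0), (2, 10), (4, 20), (5, 30)]), "answer_call")],
}

def match_instruction(part, track, tol=0.5):
    feat = trajectory_feature(track)
    for stored_feat, instruction in ACTION_DICT.get(part, []):
        if np.linalg.norm(feat - stored_feat) < tol:
            return instruction
    return None

print(match_instruction("ear", [(0, 0), (1, 11), (4, 19), (5, 29)]))  # answer_call
```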
S206: perform the corresponding operation according to the matched instruction.

Specifically, the smart device can perform the corresponding operation according to the instruction matched in step S205. For example, when the category of the moving part identified in step S204 is the nose or an ear, the smart device can determine the movement trajectory of the nose or ear through step S205 and match a corresponding instruction, such as an automatic answering instruction, according to the determined trajectory; the smart device then performs the corresponding operation according to the instruction, for example an automatic answering operation.

Alternatively, when the category of the moving part identified in step S204 is the nose, the eyes or a finger, the smart device can correspondingly match an automatic unlocking or danger reminder instruction according to the movement trajectory of the nose, eyes or finger, and then perform the unlocking or danger reminder operation.
In the embodiment of the present invention, the process, mentioned in step S203, of performing identity identification with the identity classifier on the image data formed in step S202 can, as shown in Fig. 3a, be implemented as follows:

S301: detect the target area in the image data.

Specifically, the smart device can detect each frame of image data formed in step S202 and detect the target area from the image data; the target area is set in advance, for example a face area, head area, hand area, body area, etc.
In practical applications, since the signals collected by the dynamic vision sensor have automatically filtered out the non-moving background, once the noise caused by the system, the environment and the like has been removed from the signals, all the event points output by the dynamic vision sensor should be responses produced by the movement of the user to be identified. Therefore, existing methods well known to those skilled in the art can be used to determine the horizontal and vertical boundaries of the target area based on projection histograms of the image data in the horizontal and vertical directions.

For example, when the target area specifically refers to the head area, the image within a fixed depth in the vertical direction can be projected onto the horizontal axis (removing the influence of the shoulders, etc.) to obtain a projection histogram; the width and the left and right boundaries of the target head are determined according to the continuity of the histogram, and the height of the target head is then calculated from a preset average aspect ratio of the head, which gives the upper and lower boundaries of the target head. The head area is thus detected, as shown in Fig. 3b and as in the sketch below.
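The head detection by projection histograms might be sketched as follows, under stated assumptions: a binary event frame, a fixed band of top rows standing in for the fixed depth above the shoulders, and an aspect ratio of 1.2; only the general projection idea comes from the description.

```python
import numpy as np

def detect_head(frame, depth=20, min_count=2, aspect=1.2):
    """Find head boundaries from the horizontal projection of the top `depth` rows."""
    hist = (frame[:depth] > 0).sum(axis=0)        # projection onto the horizontal axis
    cols = np.flatnonzero(hist >= min_count)      # columns with sustained responses
    if cols.size == 0:
        return None
    left, right = cols[0], cols[-1]               # left / right head boundaries
    width = right - left + 1
    top = np.flatnonzero((frame[:, left:right + 1] > 0).any(axis=1))[0]
    bottom = top + int(width * aspect)            # height from preset head aspect ratio
    return left, right, top, bottom

frame = np.zeros((64, 64), dtype=np.uint8)
frame[2:26, 20:38] = 255                          # synthetic head blob
print(detect_head(frame))                         # -> (20, 37, 2, 23)
```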
S302: regularize the target area.

In practical applications, since the physical distance between the smart device and the user may differ at different moments during the user's movement, the size of the target area (for example, the head) detected in step S301 varies in the image; directly using such input for identity identification would affect the accuracy of the recognition result. Therefore, the smart device can transform the detected target areas to the same size, i.e. carry out size normalization. For example, after obtaining the width of the target area, it can be scaled to a fixed width, the scaling ratio recorded, and the same operation done in the vertical direction.

Further, considering the influence of illumination conditions on the image data, the smart device can also regularize the illumination conditions of the detected target area. For example, after the illumination condition value of the current image data has been detected, the image is adjusted automatically so that the imaging features of the target area remain substantially consistent under different illumination conditions.
More preferably, the smart device can also regularize the movement speed. For example, the movement speed of the target area is graded by the difference between the time label of an event point and the time labels of the other event points in its neighborhood, and the regularized image data is generated with different integration times chosen according to the movement speed grade, so that a consistent image pattern is achieved at different speeds.
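The size part of the regularization could be sketched like this, assuming OpenCV is available; fixing both dimensions to 64 pixels and returning the scale ratios are illustrative choices, not values from the patent.

```python
import numpy as np
import cv2

def normalize_size(region, target=64):
    """Scale a detected target area to a fixed size, recording the scale ratios."""
    h, w = region.shape[:2]
    sx, sy = target / w, target / h               # ratios recorded for later use
    resized = cv2.resize(region, (target, target), interpolation=cv2.INTER_NEAREST)
    return resized, (sx, sy)

region = np.random.randint(0, 2, (30, 24), dtype=np.uint8) * 255
normalized, ratios = normalize_size(region)
print(normalized.shape, ratios)                   # (64, 64) (2.666..., 2.133...)
```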
S303: perform identity identification with the identity classifier on the regularized image data.

In practical applications, since the dynamic vision sensor responds only to changing event points, the image data formed at a given moment sometimes contains very few effective pixels, or contains only the response pixels of part of the user. Image data with few effective pixels or incorrectly positioned pixels often has an adverse effect on the recognition result, i.e. such image data is unsuitable for identification. Therefore, more preferably, after regularizing the detected target area, the smart device can also filter the regularized image data with a filter classifier, removing the image data calibrated as unsuitable for identification; correspondingly, the smart device can then perform identity identification, with the identity classifier, on the filtered image data.
The filter classifier is trained in advance on positive and negative samples collected by the dynamic vision sensor. The positive and negative samples are image data formed from the event points output by the dynamic vision sensor and then regularized, the event points having been detected from dynamic vision signals collected from the registering user or from other users.

The regularized image data formed from the event points output by the dynamic vision sensor can be calibrated according to information such as the number of effective pixels and the positions of the response pixels; the calibration result can specifically be positive sample or negative sample. A positive sample specifically refers to image data calibrated as suitable for identification; a negative sample specifically refers to image data calibrated as unsuitable for identification. Which image data is suitable or unsuitable for identification is calibrated in advance by technicians.
In practical applications, the filter classifier can be trained in advance by the smart device before identity identification, or trained in advance by other devices and then stored on the smart device. In either case, after the positive and negative samples have been collected, the filter classifier can be obtained by clustering or by training a classifier; for example, an SVM (Support Vector Machine) classifier can be trained on the positive and negative samples to obtain the filter classifier, as in the sketch below.
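A hedged sketch of such a filter classifier follows: a linear SVM trained on trivially simple per-frame statistics (effective-pixel count and response-pixel centroid), which merely stand in for whatever calibration features a real system would use.

```python
import numpy as np
from sklearn.svm import SVC

def frame_stats(frame):
    """Effective-pixel count and response-pixel centroid of one event frame."""
    ys, xs = np.nonzero(frame)
    if len(xs) == 0:
        return [0.0, 0.0, 0.0]
    return [len(xs), xs.mean() / frame.shape[1], ys.mean() / frame.shape[0]]

rng = np.random.default_rng(1)
# positive samples: frames with plenty of pixels; negatives: near-empty frames
pos = [(rng.random((64, 64)) > 0.7).astype(np.uint8) for _ in range(20)]
neg = [(rng.random((64, 64)) > 0.999).astype(np.uint8) for _ in range(20)]
X = np.array([frame_stats(f) for f in pos + neg])
y = np.array([1] * 20 + [0] * 20)              # 1 = suitable for identification

filter_clf = SVC(kernel="linear").fit(X, y)
keep = filter_clf.predict(X) == 1              # frames kept for identity identification
print(keep[:3], keep[-3:])
```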
In the embodiment of the present invention, the training method of the part classifier mentioned in step S204, whose flow is shown in Fig. 4, may specifically include the following steps:

S401: generate training samples from the event points output when the dynamic vision sensor collects sample signals.

In this step, the dynamic vision sensor can first collect sample signals from a moving part, and the event points output by the dynamic vision sensor are taken as sample event points. For example, after the user moves his or her head within the field of view of the dynamic vision sensor, the sensor can collect sample signals of the user's head.
Considering that the image data formed from the event points accumulated within a period of time can, to a certain extent, describe the motion contour of the user well, and that the contour information produced by these motions can also express the shape information of the user himself or herself, after the event points output from the collected sample signals have been determined as sample event points, the neighbor points of the currently output sample event point can be determined, and the currently output sample event point together with its neighbor points is taken as one training sample.

Further, the sample event point is classified according to the positions of the sample event point and its neighbor points, i.e. the category of the moving part to which the sample event point belongs is judged; the category of the moving part can specifically be the user's head, hand, body, etc. The category of the moving part determined for the sample event point is then used to calibrate the category of the moving part for this training sample.
S402: train a deep belief network with the generated training samples and their calibration results, obtaining the part classifier.

The calibration result of a training sample refers to the category of the moving part calibrated for that training sample.

In this step, the multiple training samples generated in step S401 can form a training sample set, and the deep belief network is trained with each training sample in the training sample set and its calibration result, obtaining the part classifier. As to how the deep belief network is trained, technical means commonly used by those skilled in the art can be adopted.
For example, the deep belief network is iteratively trained with the generated training samples and their calibration results. One iteration of training specifically includes: taking the training sample set formed by the multiple training samples as the input of the deep belief network; comparing the output of the deep belief network with the calibration result of each training sample; and, according to the comparison result, adjusting the layer parameters of the deep belief network and continuing with the next iteration, or stopping the iteration to obtain the part classifier.

The output of the deep belief network is in fact a guess of the category of the moving part to which a sample event point belongs; by comparing the guessed category with the more accurate calibration result calibrated in advance, the error value between the two is used, through the back-propagation training technique, to adjust the parameters of each layer of the deep belief network, improving the classification accuracy of the resulting part classifier and facilitating the accurate recognition of and response to subsequent user actions.
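The patent leaves the deep belief network training to common practice. One minimal, hedged approximation, in the style of scikit-learn's documented RBM example rather than a full DBN with layer-wise pre-training and back-propagating fine-tuning, stacks restricted Boltzmann machines under a logistic-regression output layer; the patch size, category set and all names are assumptions.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(2)
# training samples: flattened 80x80 binary neighborhoods of sample event points,
# calibrated with the moving-part category (0 = head, 1 = hand)
X = (rng.random((200, 6400)) > 0.95).astype(float)
y = rng.integers(0, 2, 200)

dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=5)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=5)),
    ("out", LogisticRegression(max_iter=200)),   # supervised output layer
])
part_classifier = dbn.fit(X, y)
print(part_classifier.predict(X[:3]))            # guessed moving-part categories
```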
Based on the above identity recognition method based on dynamic vision technology, an embodiment of the present invention provides an identity recognition device based on dynamic vision technology which, as shown in Fig. 5, may specifically include: a signal collecting unit 501, a target imaging unit 502, and an identity identification subunit 503.

The signal collecting unit 501 is configured to collect signals with the dynamic vision sensor and output the detected event points.

The target imaging unit 502 is configured to accumulate the event points output by the signal collecting unit 501 within a period of time to form image data.

The identity identification subunit 503 is configured to perform identity identification, with the identity classifier, on the image data output by the target imaging unit 502.
In practical applications, the recognition result output by the identity identification subunit 503 may be: registrant, or non-registrant. More preferably, in the case of multiple registered users, the recognition result output by the identity identification subunit 503 can also include the user identifier of the identified registrant.

In practical applications, the identity classifier is trained in advance, at the time of user identity registration, on the image data formed from the signals that the dynamic vision sensor collects from the user.
More preferably, in the embodiment of the present invention, the identity recognition device based on dynamic vision technology may also include an identity classifier training unit 504.

The identity classifier training unit 504 is configured to, when user identity registration is carried out, collect a dynamic vision signal from the user with the dynamic vision sensor and take the event points output by the dynamic vision sensor as user event points; accumulate the user event points within a period of time to form user image data; and train the deep convolutional network with the sample data and its calibration results, obtaining the identity classifier.

The sample data includes the user image data and the non-user image data; the calibration result of the sample data includes the registrant identity calibrated for the user image data and the non-registrant identity calibrated for the non-user image data.
Further, in the embodiment of the present invention, the identity recognition device based on dynamic vision technology may also include: an action recognition unit 505, an instruction matching unit 506, and an instruction response unit 507.

The action recognition unit 505 is configured to receive the recognition result output by the identity identification subunit 503 and, when the recognition result is registrant, identify the movement trajectory of a moving part according to the event points detected by the signal collecting unit 501.

The instruction matching unit 506 is configured to match a corresponding instruction according to the movement trajectory of the moving part identified by the action recognition unit 505.

The instruction response unit 507 is configured to perform the corresponding operation according to the instruction matched by the instruction matching unit 506.
In practical applications, as shown in Fig. 6, the identity identification subunit 503 can specifically include: a target area detection subunit 601, a target area regularization subunit 602, and an identity recognition subunit 603.

The target area detection subunit 601 is configured to detect the target area in the image data output by the target imaging unit 502.

The target area regularization subunit 602 is configured to regularize the target area detected by the target area detection subunit 601; specifically, it can perform size regularization, illumination condition regularization and movement speed regularization on the detected target area.

The identity recognition subunit 603 is configured to perform identity identification, with the identity classifier, on the image data regularized by the target area regularization subunit 602.
Further, the identity identification subunit 503 can further include: an image filtering subunit 604.

The image filtering subunit 604 is configured to filter, with the filter classifier, the image data regularized by the target area regularization subunit 602; correspondingly, the identity recognition subunit 603 is specifically configured to perform identity identification, with the identity classifier, on the image data filtered by the image filtering subunit 604.

The filter classifier is trained in advance on positive and negative samples collected by the dynamic vision sensor; the positive and negative samples are image data formed from the event points output by the dynamic vision sensor and then regularized; a positive sample is image data calibrated as suitable for identification, and a negative sample is image data calibrated as unsuitable for identification. In practical applications, the filter classifier can be trained in advance by the identity recognition device based on dynamic vision technology, or trained by other devices and then stored in the identity recognition device based on dynamic vision technology.
In practical applications, as shown in Fig. 7, the action recognition unit 505 may specifically include: a part identification subunit 701 and a trajectory tracking subunit 702.

The part identification subunit 701 is configured to identify, with the part classifier, the category and position of a moving part from the event points currently detected by the signal collecting unit 501; the part classifier is trained in advance on sample signals collected by the dynamic vision sensor.

The trajectory tracking subunit 702 is configured to determine the movement trajectory of the moving part from the successive positions of the moving part of the category identified by the part identification subunit 701.

Accordingly, the instruction matching unit 506 is specifically configured to extract a trajectory feature from the movement trajectory of the moving part determined by the trajectory tracking subunit 702, and to search the action dictionary for a stored feature of that moving part matching the extracted trajectory feature; if there is one, the instruction corresponding to the found feature is taken as the instruction corresponding to the movement trajectory of the moving part.

In practical applications, the part classifier can be trained in advance by other devices and then stored in the identity recognition device based on dynamic vision technology, or trained in advance by the identity recognition device based on dynamic vision technology itself.
Therefore, more preferably, the action recognition unit 505 can also include a part classifier training subunit 703.

The part classifier training subunit 703 is configured to generate training samples from the event points output when the dynamic vision sensor collects sample signals, and to train the deep belief network with the generated training samples and their calibration results, obtaining the part classifier.

The calibration result of a training sample refers to the category of the moving part calibrated for that training sample.
In the embodiment of the present invention, the specific functional implementation of each unit in the identity recognition device based on dynamic vision technology, and of the subunits under each unit, may refer to the specific steps of the above identity recognition method based on dynamic vision technology, and is not described in detail here.
In practical applications, the above smart device can be a smartphone. A smartphone configured with the above identity recognition device can identify the identity of the current holder of the smartphone; if the recognition result is registrant, i.e. a registered user of the smartphone, the smartphone is unlocked. Further, the smartphone can also identify the user's actions from the signals collected in real time by the low-energy-consumption dynamic vision sensor, match a corresponding action instruction, and perform the corresponding operation, such as automatically answering an incoming call or automatically playing media.

For example, a registered user only needs to hold the smartphone in one hand and shake it along a preset trajectory to unlock it, without touching the smartphone screen and without holding the phone in one hand while making an unlocking gesture on the screen with the other, which is simple and convenient to operate. When the smartphone has an incoming call, the user only needs to move it to the ear along the usual trajectory and the smartphone answers automatically, without triggering an answer key or completing an answering slide, which is convenient to use. On the other hand, even if a non-registered user operates in the manner of a registered user, the smartphone cannot be unlocked and the call cannot be answered, which improves the confidentiality of the smartphone.
Alternatively, the above smart device may be smart glasses applied to navigation for the blind. For example, when going out, a blind person can wear smart glasses configured with the above identity recognition device based on dynamic vision technology; while walking, the dynamic vision sensor in the device collects signals of the objects moving relative to the blind person in the scene ahead, and recognition is performed on the currently collected signals. If a road sign or a dangerous article is recognized ahead, the blind person can be reminded by different sounds or tactile cues to take different walking measures. Since the dynamic vision sensor consumes little energy, it can stay in working state at all times with a long standby time, which makes it especially suitable for navigation for the blind.
Alternatively, the above smart device may be a car configured with the above identity recognition device based on dynamic vision technology. For example, the dynamic vision sensor is installed above the car door and collects signals in real time. When the owner gradually approaches the car, the low-energy-consumption dynamic vision sensor can timely and rapidly collect the owner's facial information and motion trajectory information, completing actions such as automatically opening the door lock or powering on the car, which is simple and fast and improves the user experience. Moreover, a registered user who does not operate according to the registered action gets no response, and a non-registered user operating in the manner of a registered user gets no response either, which can improve the security of the vehicle.
Alternatively, the above smart device can be a smart television equipped with the above identity recognition device. For example, the dynamic vision sensor in the identity recognition device based on dynamic vision technology can be installed on top of the smart television to collect signals; when a user moves within the field of view of the dynamic vision sensor, the user's dynamic vision signal (such as the face, body, etc.) can be collected, and the identity recognition device based on dynamic vision technology identifies the user's identity. If the user is identified as an unrestricted user, the smart television can automatically jump to the channel that the user is most interested in, or pop up the user's viewing history for selection; if the user is identified as a restricted user, the smart television can block the relevant channels, forbidding the restricted user from watching, and count the restricted user's viewing time for the day, providing no viewing function once a set time limit is exceeded. In this way, the viewing authority for the corresponding channels, for example the television watching of young children, can be limited according to identity information through simple operations, which improves the user experience.
In the technical scheme of the present invention, the dynamic vision sensor can collect signals from the user during identity registration, and the identity classifier is trained in advance on the image data formed from the collected signals. In subsequent identity identification, signals are collected with the dynamic vision sensor, the detected event points are accumulated over a period of time to form image data, and the identity classifier performs identity identification on the image data so formed. Compared with existing identity recognition methods, in the scheme provided by the present invention, the low-energy-consumption dynamic vision sensor can collect signals continuously; as long as the user moves within the field of view of the dynamic vision sensor, the sensor can capture the user and the user's actions timely and effectively. Identity identification is performed on the signals collected by the dynamic vision sensor, without the user first waking up the terminal device and without any extra operation on the screen of the terminal device, so the operation is simple and convenient.
Those skilled in the art will appreciate that the present invention includes apparatus relating to performing one or more of the operations described in this application. Such apparatus can be specially designed and manufactured for the required purposes, or can include known devices in a general-purpose computer. Such apparatus has computer programs stored therein that are selectively activated or reconfigured. Such computer programs can be stored in a device-readable (for example, computer-readable) storage medium or in any type of medium suitable for storing electronic instructions and respectively coupled to a bus; the computer-readable medium includes but is not limited to any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards or optical cards. That is, a readable storage medium includes any medium that stores or transmits information in a form readable by a device (for example, a computer).
Those skilled in the art will appreciate that each block in these structural diagrams and/or block diagrams and/or flow diagrams, and combinations of the blocks therein, can be implemented by computer program instructions. Those skilled in the art will also appreciate that these computer program instructions can be supplied to a processor of a general-purpose computer, a special-purpose computer, or another programmable data-processing device, so that the processor of the computer or other programmable data-processing device executes the solutions specified in a block or blocks of the structural diagrams and/or block diagrams and/or flow diagrams disclosed by the present invention.
Those skilled in the art will appreciate that the various operations, methods, and the steps, measures, and solutions in the flows discussed in the present invention can be alternated, changed, combined, or deleted. Further, other steps, measures, and solutions in the various operations, methods, and flows discussed in the present invention can also be alternated, changed, rearranged, decomposed, combined, or deleted. Further, prior-art steps, measures, and solutions corresponding to the various operations, methods, and flows disclosed in the present invention can also be alternated, changed, rearranged, decomposed, combined, or deleted.
The above description covers only preferred embodiments of the present invention. It should be noted that a person of ordinary skill in the art can make several improvements and refinements without departing from the principles of the present invention, and such improvements and refinements shall also be regarded as falling within the protection scope of the present invention.
Claims (22)
1. An identity recognition method, characterized by comprising:
collecting signals with a dynamic vision sensor and outputting the detected event points;
accumulating the event points over a period of time to form image data;
performing identity recognition on said image data with an identity classifier.
2. The method of claim 1, characterized in that performing identity recognition on said image data with the identity classifier comprises:
detecting a target region in said image data;
regularizing said target region;
performing identity recognition on the regularized image data with said identity classifier.
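For illustration only: a minimal sketch of the regularization step in claim 2, assuming the target region is taken as the bounding box of the active event pixels and that regularization means rescaling the crop to a fixed size; both assumptions, and all names below, are illustrative rather than prescribed by the claim.

```python
import numpy as np
from scipy.ndimage import zoom

def regularize_target_region(image, out_size=(64, 64)):
    """Crop the bounding box of active pixels and rescale it to a fixed size."""
    ys, xs = np.nonzero(image)
    if len(xs) == 0:                                    # no target detected
        return np.zeros(out_size, dtype=np.float32)
    crop = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    factors = (out_size[0] / crop.shape[0], out_size[1] / crop.shape[1])
    return zoom(crop, factors, order=1).astype(np.float32)  # bilinear resize
```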
3. The method of claim 2, characterized by further comprising, after regularizing said target region:
filtering the regularized image data with a filtering classifier;
wherein said filtering classifier is trained in advance on positive and negative samples collected by said dynamic vision sensor;
wherein said positive and negative samples are regularized image data formed from the event points output by said dynamic vision sensor; and
said positive samples are image data calibrated as suitable for recognition, and said negative samples are image data calibrated as unsuitable for recognition.
4. The method of claim 3, characterized in that performing identity recognition on the regularized image data with said identity classifier is specifically:
performing identity recognition on the filtered image data with said identity classifier.
5. The method of any one of claims 1-4, characterized in that the recognition result output when said identity classifier performs identity recognition on said image data is: registered person or non-registered person.
6. The method of claim 5, characterized in that there are a plurality of registered users, and said recognition result further includes: the user ID of the identified registered person.
7. The method of any one of claims 1-4, characterized in that said identity classifier is trained in advance, during user identity registration, on the image data formed from the signals collected from said user by said dynamic vision sensor, as follows:
during user identity registration, collecting said user's dynamic vision signals with said dynamic vision sensor, and taking the event points output by said dynamic vision sensor as user event points;
accumulating the user event points over a period of time to form user image data;
training a deep convolutional network with sample data and its calibration results to obtain said identity classifier;
wherein said sample data includes: said user image data and non-user image data; and
the calibration results of said sample data include: a registered-person identity calibrated for said user image data, and a non-registered-person identity calibrated for said non-user image data.
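For illustration only: a minimal sketch of the training step in claim 7, using PyTorch as one possible deep-convolutional-network implementation; the claim does not prescribe a framework or architecture, so the layer sizes, the 64x64 input, and all names below are assumptions.

```python
import torch
import torch.nn as nn

# Binary identity classifier: label 1 = registered person, 0 = non-registered.
identity_classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),        # assumes 64x64 accumulated event images
)

def train_identity_classifier(user_images, nonuser_images, epochs=10):
    """user_images / nonuser_images: float tensors of shape N x 64 x 64."""
    x = torch.cat([user_images, nonuser_images]).unsqueeze(1)   # N x 1 x 64 x 64
    y = torch.cat([torch.ones(len(user_images), dtype=torch.long),
                   torch.zeros(len(nonuser_images), dtype=torch.long)])
    opt = torch.optim.Adam(identity_classifier.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(identity_classifier(x), y)   # full-batch for brevity
        loss.backward()
        opt.step()
```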
8. The method of claim 7, characterized in that the registered-person identity calibrated for said user image data specifically includes the user ID of the registered person.
9. The method of claim 5, characterized by further comprising, after performing identity recognition on said image data with the identity classifier:
if the recognition result is a registered person:
identifying the motion trajectory of a moving component from the detected event points;
after the motion trajectory of said moving component is matched to a corresponding instruction, performing the corresponding operation according to said instruction.
10. The method of claim 9, characterized in that identifying the motion trajectory of the moving component from the detected event points comprises:
identifying, with a part classifier, the class and position of the moving component for the currently detected event points;
determining the motion trajectory of said moving component from the successively determined positions of the moving component of the identified class.
11. The method of claim 10, characterized in that said part classifier is specifically trained as follows:
generating training samples from the event points output by said dynamic vision sensor when collecting sample signals;
training a deep belief network with the generated training samples and their calibration results to obtain said part classifier;
wherein the calibration result of a training sample is the moving-component class calibrated for that training sample.
12. The method of claim 11, characterized in that said training samples are specifically generated in the following manner:
taking the event points output by said dynamic vision sensor when collecting sample signals as sample event points;
determining the neighbour points of the currently output sample event point;
taking the currently output sample event point, together with the neighbour points of this sample event point, as one training sample.
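For illustration only: a minimal sketch of the sample generation in claim 12, assuming "neighbour points" means recent earlier events within a small spatial radius of the current event; the claim leaves the neighbourhood definition open, so the radius and window size below are assumptions.

```python
def make_training_sample(current_event, recent_events, radius=3, max_neighbours=8):
    """Build one training sample: the current event point plus its neighbour points.

    Events are (x, y, timestamp) tuples; recent_events is a short history buffer.
    """
    cx, cy, ct = current_event
    neighbours = [(x, y, t) for (x, y, t) in recent_events
                  if abs(x - cx) <= radius and abs(y - cy) <= radius and t < ct]
    neighbours.sort(key=lambda e: e[2], reverse=True)      # newest first
    return [current_event] + neighbours[:max_neighbours]
```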
13. The method of claim 9, characterized in that matching the motion trajectory of said moving component to a corresponding instruction comprises:
extracting trajectory features from the motion trajectory of said moving component;
searching an action dictionary for a stored feature that matches the trajectory feature extracted for said moving component;
if such a feature is found, taking the instruction corresponding to the found feature as the instruction corresponding to the motion trajectory of said moving component.
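For illustration only: a minimal sketch of the matching step in claim 13. The trajectory feature used here (the coarse direction between the trajectory's endpoints), the dictionary entries, and the instruction names are all illustrative assumptions; the claim leaves the exact trajectory features open.

```python
import math

ACTION_DICTIONARY = {                 # (component class, feature) -> instruction
    ("finger", "left_to_right"): "unlock",
    ("nose", "left_to_right"): "auto_answer",
}

def trajectory_feature(trajectory):
    """Reduce a trajectory (list of (x, y) positions) to a coarse direction label."""
    (x0, y0), (x1, y1) = trajectory[0], trajectory[-1]
    angle = math.degrees(math.atan2(y1 - y0, x1 - x0))
    return "left_to_right" if -45 <= angle <= 45 else "other"

def match_instruction(component_class, trajectory):
    """Return the stored instruction for this trajectory, or None if nothing matches."""
    return ACTION_DICTIONARY.get((component_class, trajectory_feature(trajectory)))
```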
14. The method of claim 10, characterized in that the class of said moving component is nose or ear; and
after the motion trajectory of said moving component is matched to a corresponding instruction, performing the corresponding operation according to said instruction comprises:
after the motion trajectory of said nose or ear is matched to the corresponding automatic call-answering instruction, performing the automatic call-answering operation.
15. The method of claim 10, characterized in that the class of said moving component is nose, eyes, or finger; and
after the motion trajectory of said moving component is matched to a corresponding instruction, performing the corresponding operation according to said instruction comprises:
after the motion trajectory of said nose, eyes, or finger is matched to the corresponding automatic-unlocking/danger-alert instruction, performing the unlocking/danger-alert operation.
16. An identity recognition device, characterized by comprising:
a signal collection unit for collecting signals with a dynamic vision sensor and outputting the detected event points;
a target imaging unit for accumulating the event points output by said signal collection unit over a period of time to form image data;
an identity recognition subunit for performing identity recognition, with an identity classifier, on the image data output by said target imaging unit.
17. The device of claim 16, characterized in that said identity recognition subunit specifically includes:
a target-region detection subunit for detecting a target region in the image data output by said target imaging unit;
a target-region regularization subunit for regularizing the target region detected by said target-region detection subunit;
an identity recognition subunit for performing identity recognition, with said identity classifier, on the image data regularized by said target-region regularization subunit.
18. The device of claim 16 or 17, characterized in that the recognition result output by said identity recognition subunit is: registered person or non-registered person.
19. The device of claim 16 or 17, characterized by further comprising:
an identity classifier training unit for, during user identity registration, collecting dynamic vision signals for said user with said dynamic vision sensor and taking the event points output by said dynamic vision sensor as user event points; accumulating the user event points over a period of time to form user image data; and training a deep convolutional network with sample data and its calibration results to obtain said identity classifier;
wherein said sample data includes: said user image data and non-user image data; and
the calibration results of said sample data include: a registered-person identity calibrated for said user image data, and a non-registered-person identity calibrated for said non-user image data.
20. The device of claim 18, characterized by further comprising:
an action recognition unit for receiving the recognition result output by said identity recognition subunit and, when said recognition result is a registered person, identifying the motion trajectory of a moving component from the event points detected by said signal collection unit;
an instruction matching unit for matching the motion trajectory of the moving component identified by said action recognition unit to a corresponding instruction;
an instruction response unit for performing the corresponding operation according to the instruction matched by said instruction matching unit.
21. The device of claim 20, characterized in that said action recognition unit specifically includes:
a part recognition subunit for identifying, with a part classifier, the class and position of the moving component for the event points currently detected by said signal collection unit, wherein said part classifier is trained in advance on sample signals collected by said dynamic vision sensor;
a trajectory tracking subunit for determining the motion trajectory of said moving component from the successively identified positions of the moving component of the class identified by said part recognition subunit.
22. The device of claim 21, characterized in that
said instruction matching unit is specifically for extracting trajectory features from the motion trajectory of said moving component determined by said trajectory tracking subunit; searching an action dictionary for a stored feature that matches the trajectory feature extracted for said moving component; and, if such a feature is found, taking the instruction corresponding to the found feature as the instruction corresponding to the motion trajectory of said moving component.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510019275.4A CN105844128B (en) | 2015-01-15 | 2015-01-15 | Identity recognition method and device |
KR1020150173971A KR102465532B1 (en) | 2015-01-15 | 2015-12-08 | Method for recognizing an object and apparatus thereof |
US14/995,275 US10127439B2 (en) | 2015-01-15 | 2016-01-14 | Object recognition method and apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510019275.4A CN105844128B (en) | 2015-01-15 | 2015-01-15 | Identity recognition method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105844128A true CN105844128A (en) | 2016-08-10 |
CN105844128B CN105844128B (en) | 2021-03-02 |
Family
ID=56579904
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510019275.4A Active CN105844128B (en) | 2015-01-15 | 2015-01-15 | Identity recognition method and device |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102465532B1 (en) |
CN (1) | CN105844128B (en) |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20180060257A (en) | 2016-11-28 | 2018-06-07 | 삼성전자주식회사 | Metohd and apparatus for object recognition |
KR102070956B1 (en) * | 2016-12-20 | 2020-01-29 | 서울대학교산학협력단 | Apparatus and method for processing image |
KR20180073118A (en) | 2016-12-22 | 2018-07-02 | 삼성전자주식회사 | Convolutional neural network processing method and apparatus |
KR20180092778A (en) | 2017-02-10 | 2018-08-20 | 한국전자통신연구원 | Apparatus for providing sensory effect information, image processing engine, and method thereof |
RU2656708C1 (en) * | 2017-06-29 | 2018-06-06 | Самсунг Электроникс Ко., Лтд. | Method for separating texts and illustrations in images of documents using a descriptor of document spectrum and two-level clustering |
KR101876433B1 (en) * | 2017-07-20 | 2018-07-13 | 주식회사 이고비드 | Activity recognition-based automatic resolution adjustment camera system, activity recognition-based automatic resolution adjustment method and automatic activity recognition method of camera system |
WO2019017720A1 (en) * | 2017-07-20 | 2019-01-24 | 주식회사 이고비드 | Camera system for protecting privacy and method therefor |
KR102086042B1 (en) * | 2018-02-28 | 2020-03-06 | 서울대학교산학협력단 | Apparatus and method for processing image |
KR102108951B1 (en) * | 2018-05-16 | 2020-05-11 | 한양대학교 산학협력단 | Deep learning-based object detection method and system utilizing global context feature of image |
KR102108953B1 (en) * | 2018-05-16 | 2020-05-11 | 한양대학교 산학협력단 | Robust camera and lidar sensor fusion method and system |
KR102083192B1 (en) | 2018-09-28 | 2020-03-02 | 주식회사 이고비드 | A method for controlling video anonymization apparatus for enhancing anonymization performance and a apparatus video anonymization apparatus thereof |
WO2021202526A1 (en) | 2020-03-30 | 2021-10-07 | Sg Gaming, Inc. | Gaming state object tracking |
US11861975B2 (en) | 2020-03-30 | 2024-01-02 | Lnw Gaming, Inc. | Gaming environment tracking optimization |
KR102384419B1 (en) * | 2020-03-31 | 2022-04-12 | 주식회사 세컨핸즈 | Method, system and non-transitory computer-readable recording medium for estimating information about objects |
KR102261880B1 (en) * | 2020-04-24 | 2021-06-08 | 주식회사 핀텔 | Method, appratus and system for providing deep learning based facial recognition service |
KR20220052620A (en) | 2020-10-21 | 2022-04-28 | 삼성전자주식회사 | Object traking method and apparatus performing the same |
KR20220102044A (en) * | 2021-01-12 | 2022-07-19 | 삼성전자주식회사 | Method of acquiring information based on always-on camera |
KR102422962B1 (en) * | 2021-07-26 | 2022-07-20 | 주식회사 크라우드웍스 | Automatic image classification and processing method based on continuous processing structure of multiple artificial intelligence model, and computer program stored in a computer-readable recording medium to execute the same |
KR20230056482A (en) * | 2021-10-20 | 2023-04-27 | 한화비전 주식회사 | Apparatus and method for compressing images |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101034189B1 (en) * | 2009-07-08 | 2011-05-12 | (주)엑스퍼넷 | Adult image detection method using object analysis and multi resizing scan |
KR101880998B1 (en) * | 2011-10-14 | 2018-07-24 | 삼성전자주식회사 | Apparatus and Method for motion recognition with event base vision sensor |
KR101441285B1 (en) * | 2012-12-26 | 2014-09-23 | 전자부품연구원 | Multi-body Detection Method based on a NCCAH(Normalized Cross-Correlation of Average Histogram) And Electronic Device supporting the same |
US9829984B2 (en) | 2013-05-23 | 2017-11-28 | Fastvdo Llc | Motion-assisted visual language for human computer interfaces |
KR102227494B1 (en) * | 2013-05-29 | 2021-03-15 | 삼성전자주식회사 | Apparatus and method for processing an user input using movement of an object |
- 2015
- 2015-01-15 CN CN201510019275.4A patent/CN105844128B/en active Active
- 2015-12-08 KR KR1020150173971A patent/KR102465532B1/en active IP Right Grant
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102129570A (en) * | 2010-01-19 | 2011-07-20 | 中国科学院自动化研究所 | Method for designing manifold based regularization based semi-supervised classifier for dynamic vision |
CN103533234A (en) * | 2012-07-05 | 2014-01-22 | 三星电子株式会社 | Image sensor chip, method of operating the same, and system including the image sensor chip |
CN104182169A (en) * | 2013-05-23 | 2014-12-03 | 三星电子株式会社 | Method and apparatus for user interface based on gesture |
US20140354537A1 (en) * | 2013-05-29 | 2014-12-04 | Samsung Electronics Co., Ltd. | Apparatus and method for processing user input using motion of object |
CN103761460A (en) * | 2013-12-18 | 2014-04-30 | 微软公司 | Method for authenticating users of display equipment |
CN103955639A (en) * | 2014-03-18 | 2014-07-30 | 深圳市中兴移动通信有限公司 | Motion sensing game machine and login method and device for motion sensing game |
Non-Patent Citations (1)
Title |
---|
JURGEN KOGLER ET AL: "Event-Based Stereo Matching Approaches for Frameless Address Event Stereo Data", ISVC 2011 *
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106597463B (en) * | 2016-12-29 | 2019-03-29 | 天津师范大学 | Photo-electric proximity sensor and detection method based on dynamic visual sensor chip |
CN106597463A (en) * | 2016-12-29 | 2017-04-26 | 天津师范大学 | Photoelectric proximity sensor based on dynamic vision sensor (DVS) chip, and detection method |
CN108563937A (en) * | 2018-04-20 | 2018-09-21 | 邓坚 | A kind of identity identifying method and bracelet based on vein |
CN108563937B (en) * | 2018-04-20 | 2021-10-15 | 北京锐思智芯科技有限公司 | Vein-based identity authentication method and wristband |
CN108764078A (en) * | 2018-05-15 | 2018-11-06 | 上海芯仑光电科技有限公司 | A kind of processing method and computing device of event data stream |
CN112114653A (en) * | 2019-06-19 | 2020-12-22 | 北京小米移动软件有限公司 | Terminal device control method, device, equipment and storage medium |
US11336818B2 (en) | 2019-06-19 | 2022-05-17 | Beijing Xiaomi Mobile Software Co., Ltd. | Method and apparatus for controlling camera, device and storage medium |
CN112118380A (en) * | 2019-06-19 | 2020-12-22 | 北京小米移动软件有限公司 | Camera control method, device, equipment and storage medium |
CN110796040A (en) * | 2019-10-15 | 2020-02-14 | 武汉大学 | Pedestrian identity recognition method based on multivariate spatial trajectory correlation |
CN110796040B (en) * | 2019-10-15 | 2022-07-05 | 武汉大学 | Pedestrian identity recognition method based on multivariate spatial trajectory correlation |
CN110929242A (en) * | 2019-11-20 | 2020-03-27 | 上海交通大学 | Method and system for carrying out attitude-independent continuous user authentication based on wireless signals |
CN111083354A (en) * | 2019-11-27 | 2020-04-28 | 维沃移动通信有限公司 | Video recording method and electronic equipment |
CN111177669A (en) * | 2019-12-11 | 2020-05-19 | 宇龙计算机通信科技(深圳)有限公司 | Terminal identification method and device, terminal and storage medium |
CN112669344A (en) * | 2020-12-24 | 2021-04-16 | 北京灵汐科技有限公司 | Method and device for positioning moving object, electronic equipment and storage medium |
CN112669344B (en) * | 2020-12-24 | 2024-05-28 | 北京灵汐科技有限公司 | Method and device for positioning moving object, electronic equipment and storage medium |
CN114077730A (en) * | 2021-11-26 | 2022-02-22 | 广域铭岛数字科技有限公司 | Login verification method, vehicle unlocking system, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
KR102465532B1 (en) | 2022-11-11 |
CN105844128B (en) | 2021-03-02 |
KR20160088224A (en) | 2016-07-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105844128A (en) | Method and device for identity identification | |
JP6549797B2 (en) | Method and system for identifying head of passerby | |
US10127439B2 (en) | Object recognition method and apparatus | |
CN102214291B (en) | Method for quickly and accurately detecting and tracking human face based on video sequence | |
CN101593425B (en) | Machine vision based fatigue driving monitoring method and system | |
Han et al. | Deep learning-based workers safety helmet wearing detection on construction sites using multi-scale features | |
CN106845344A (en) | Demographics' method and device | |
JP6234762B2 (en) | Eye detection device, method, and program | |
CN101246544A (en) | Iris locating method based on boundary point search and SUSAN edge detection | |
CN102324166A (en) | Fatigue driving detection method and device | |
CN110097724B (en) | Automatic article nursing method and system based on FPGA | |
CN106570490A (en) | Pedestrian real-time tracking method based on fast clustering | |
CN106652291A (en) | Indoor simple monitoring and alarming system and method based on Kinect | |
CN106778637B (en) | Statistical method for man and woman passenger flow | |
CN103456123B (en) | A kind of video smoke detection method based on flowing with diffusion characteristic | |
CN115841651B (en) | Constructor intelligent monitoring system based on computer vision and deep learning | |
CN103049788A (en) | Computer-vision-based system and method for detecting number of pedestrians waiting to cross crosswalk | |
CN103123690A (en) | Information acquisition device, information acquisition method, identification system and identification method | |
CN107221056B (en) | The method stopped based on human bioequivalence | |
CN103049748A (en) | Behavior-monitoring method and behavior-monitoring system | |
CN105243380A (en) | Single facial image recognition method based on combination of selective median filtering and PCA | |
CN107221058A (en) | Intelligent channel barrier system | |
CN203885510U (en) | Driver fatigue detection system based on infrared detection technology | |
CN116630853A (en) | Real-time video personnel tracking method and system for key transportation hub | |
JP3444115B2 (en) | Dozing state detection device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||