CN105844128B - Identity recognition method and device
- Publication number: CN105844128B
- Application number: CN201510019275.4A
- Authority
- CN
- China
- Prior art keywords
- identity
- image data
- user
- motion
- classifier
- Legal status: Active (assumed; not a legal conclusion)
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
The invention provides an identity recognition method and an identity recognition device. The method comprises the following steps: acquiring a signal with a dynamic vision sensor and outputting the detected event points; accumulating the event points over a period of time to form image data; and performing identity recognition on the image data with an identity classifier. The identity classifier is trained in advance on image data formed from the signals that the dynamic vision sensor collected from the user at the time of identity registration. The invention enables identity recognition that is low in energy consumption and simple to operate while protecting the user's privacy.
Description
Technical Field
The invention relates to the technical field of intelligent equipment, in particular to an identity recognition method and device.
Background
As security requirements continue to grow, identity recognition technology is widely applied in monitoring, access control systems, and smart devices. For example, before a smart device is unlocked, the identity of its holder may be recognized; if the recognized identity matches a pre-registered user identity, the device is unlocked, and otherwise it may remain locked or raise an alarm. The smart device may be a smart phone, smart glasses, a smart television, a smart home appliance, a smart car, and the like.
At present, traditional identity recognition methods fall mainly into two types: one identifies the user through articles such as keys, identity cards, and smart cards; the other identifies the user through authentication information (e.g., a password or a special operation). For example, a preset unlocking password is entered on an interactive interface popped up by a smart phone, and identity recognition is completed by verifying the password; alternatively, identification may be performed by sliding on the smart phone screen in a specific manner (e.g., sliding a block across the screen or connecting points on the screen in a specific order).
However, because authentication information such as passwords and passcodes, and authentication articles such as keys and smart cards, may be acquired by other users, the conventional identification methods described above are easy to impersonate; that is, their security is not high. Moreover, identification through these methods is cumbersome: entering a password or connecting points on a screen must be completed by touching the screen, which often requires both hands, reducing the user experience.
Because personal attributes are not as easily obtained as conventional authentication information and authentication articles, there exist more secure person-attribute-based identification methods. These mainly capture image information of the user (such as the eyes, face, hands, or motion images) through a conventional camera based on a CCD (Charge-Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor, match the captured image information against the pre-stored image information of registered users, and thereby identify the user.
However, existing person-attribute-based identification methods suffer from high energy consumption. Although the prior art can save power by requiring a wake-up before unlocking, this adds user operations. There is therefore a need for an identity recognition method that is simple to operate and consumes little power.
Disclosure of Invention
The invention aims to overcome at least one of the above technical defects, in particular the problems of cumbersome operation and high energy consumption.
The invention provides an identity recognition method, which comprises the following steps:
acquiring a signal by using a dynamic vision sensor and outputting a detected event point;
accumulating event points over a period of time to form image data;
and carrying out identity recognition according to the image data by using an identity classifier.
The scheme of the invention also provides an identity recognition device, which comprises:
the signal acquisition unit is used for acquiring signals by using the dynamic vision sensor and outputting detected event points;
the target imaging unit is used for accumulating the event points output by the signal acquisition unit over a period of time to form image data;
and the identity recognition subunit is used for recognizing the identity according to the image data output by the target imaging unit by using an identity classifier.
In the scheme of this embodiment, a dynamic visual sensor may be used to collect signals for a user who registers an identity, and an identity classifier may be trained in advance according to image data formed by the collected signals. Therefore, when identity recognition is carried out subsequently, a dynamic visual sensor can be used for collecting signals, and detected event points are accumulated for a period of time to form image data; and carrying out identity recognition according to the formed image data by using an identity classifier.
Compared with existing identity recognition methods, the scheme of the invention uses a low-energy dynamic vision sensor that can acquire signals at any time; as long as the user moves within its field of view, the sensor captures the user and the user's actions promptly and effectively. Because identity recognition is performed on the signals acquired by the dynamic vision sensor, the user neither needs to wake up the terminal device first nor perform any additional operation on its screen, so the operation is simple and convenient.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
FIG. 1a is a schematic flow chart of a method for training an identity classifier according to an embodiment of the present invention;
FIG. 1b is a diagram illustrating an image of user image data according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of an identity recognition method based on dynamic vision technology according to an embodiment of the present invention;
FIG. 3a is a flowchart illustrating a method for performing identity recognition using an identity classifier according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of an image of a target area detected according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a component classifier training method according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an identification apparatus based on dynamic vision technology according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an internal structure of an identity recognition subunit according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an internal structure of the motion recognition unit according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it is obvious that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As used in this application, the terms "module," "system," and the like are intended to include a computer-related entity, such as but not limited to hardware, firmware, a combination of hardware and software, or software in execution. For example, a module may be, but is not limited to: a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. For example, an application running on a computing device and the computing device may both be a module. One or more modules may reside within a process and/or thread of execution and a module may be localized on one computer and/or distributed between two or more computers.
The inventor has found that the existing person-attribute-based identification methods consume much energy for the following reason: during identity recognition, the conventional camera equipment must stay on at all times to capture image information of the person, and the energy consumption of conventional camera equipment is usually large, so the whole identification process consumes much energy.
Further, the inventor has found that a dynamic vision sensor responds only to event points whose pixel brightness changes beyond a certain degree, which gives it characteristics such as low energy consumption and a wide range of workable illumination conditions. The low energy consumption allows the sensor to remain working while a mobile or other terminal is on standby, so signals can be collected promptly and quickly, and the device can respond in time once the user needs to unlock it. The wide illumination range allows the dynamic vision sensor to work effectively against different environmental backgrounds, collecting signals even in dark environments where the light source is very weak.
Moreover, an image formed from the signals acquired by a dynamic vision sensor only roughly reflects the contour of a moving target; it carries no conventional modal information such as color or texture, and the stationary background of the scene is automatically excluded. The dynamic vision sensor therefore also offers strong confidentiality: even if the terminal device is attacked, the user's information is not leaked. This helps protect user privacy, improves the security of user information, and improves the user experience.
Accordingly, the present inventors have considered that a signal may be collected for a registered user using a dynamic vision sensor, and an identity classifier may be trained in advance based on image data formed from the collected signal. Therefore, when identity recognition is carried out subsequently, a dynamic vision sensor can be utilized to collect signals aiming at a user to be recognized, and detected event points are accumulated for a period of time to form image data; then, identity recognition is performed from the formed image data using an identity classifier.
Compared with existing identity recognition methods, the scheme of the invention uses a low-energy dynamic vision sensor that can acquire signals at any time; the user only needs to move within its field of view for the sensor to capture the user and the user's actions promptly and effectively. Identity recognition can then be performed on the signals acquired by the dynamic vision sensor, without the user waking up the terminal device in advance or performing any additional operation on its screen, so the operation is simple and convenient.
The technical scheme of the invention is explained in detail below with reference to the accompanying drawings.
In the embodiment of the invention, an identity classifier for identity recognition may be trained in advance, before identity recognition is performed. For example, the smart device may pre-train the identity classifier on image data formed from the signals that the dynamic vision sensor collected from the user at the time of identity registration. Specifically, as shown in FIG. 1a, training may proceed as follows:
S101: when registering the user identity, collect dynamic vision signals from the user with a dynamic vision sensor.
In particular, the registration of the user identity may be performed first in the smart device. For example, a registered user may input a registration instruction to the intelligent device by means of a key, voice, or the like, and the intelligent device enters a registration mode after receiving the registration instruction;
in the registration mode, the smart device utilizes the dynamic vision sensor to perform dynamic visual signal acquisition for the user. For example, in the registration mode, when the user moves his head within the field of view of the dynamic vision sensor of the smart device, the signal acquired by the dynamic vision sensor is used as the dynamic vision signal of the user's head.
In practice, the smart device may have one or more registered users, and the dynamic vision signal may be collected for a particular part of the user or for the user as a whole.
S102: detect event points from the acquired dynamic vision signal, taking the event points output by the dynamic vision sensor as user event points.
In practical applications, the dynamic vision sensor responds only to event points whose pixel brightness changes beyond a certain degree, and transmits and stores those responses. The event points output by the dynamic vision sensor can therefore serve as the user event points used by the smart device in the registration mode.
S103: map the user event points over a period of time to image data, i.e., accumulate the user event points over a period of time to form user image data.
Specifically, after accumulating the user event points within a period of time (for example, 20 ms), the smart device may convert the user event points obtained in step S102 into corresponding image signals according to each point's coordinate position, response order, and spatial proximity, thereby forming user image data. As the user image data in FIG. 1b shows, the converted image signal only roughly reflects the contour and partial texture of the moving registered user and directly ignores non-moving objects in the background, which helps train the identity classifier quickly and accurately in the subsequent step.
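As an illustration of this accumulation step, the following is a minimal sketch, assuming each event point from the dynamic vision sensor is an (x, y, t) tuple with the timestamp t in microseconds and that the event stream is sorted by time; the function name and the 20 ms window are only for demonstration.

```python
import numpy as np

def accumulate_events(events, width, height, window_us=20_000):
    """Accumulate the event points of one time window into a 2D image.

    events: list of (x, y, t) tuples sorted by t, with t in microseconds.
    Returns an 8-bit image whose bright pixels mark where events fired.
    """
    frame = np.zeros((height, width), dtype=np.uint8)
    if not events:
        return frame
    t_end = events[-1][2]
    for x, y, t in events:
        if t_end - t <= window_us:   # keep only the most recent window
            frame[y, x] = 255        # the newest response at a pixel wins
    return frame
```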
S104: train a deep convolutional network with the user image data and the registrant identity calibrated for that data, obtaining the identity classifier.
In this step, the smart device may perform identity calibration on the user image data of the registered user obtained in step S103, for example, may directly calibrate the user image data as a registrant.
Moreover, those skilled in the art may also store, in advance, image data of users determined to be non-registrants (hereinafter referred to simply as non-user image data) in the smart device, together with a pre-calibrated non-registrant identity for that data.
Thus, the smart device may take the user image data and the non-user image data as sample data and calibrate corresponding results for them; the calibration result of the sample data specifically includes a registrant identity calibrated for the user image data and a non-registrant identity calibrated for the non-user image data. A deep convolutional network can then be trained with the sample data and its calibration results to obtain the identity classifier. In practice, the deep convolutional network automatically learns the user features in the user image data, so after the identity prediction model is trained and the network parameters are optimized by back-propagation, the classification accuracy of the resulting identity classifier improves. Those skilled in the art can train the deep convolutional network with existing methods, which are not detailed here.
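A minimal PyTorch sketch of such training follows, assuming the accumulated event images are 64 x 64 grayscale; the architecture, class layout (one class per registrant plus one non-registrant class), and hyperparameters are illustrative assumptions, not the network disclosed here.

```python
import torch
import torch.nn as nn

class IdentityClassifier(nn.Module):
    def __init__(self, num_registrants):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # one class per registrant, plus one non-registrant class
        self.head = nn.Linear(32 * 16 * 16, num_registrants + 1)

    def forward(self, x):            # x: (N, 1, 64, 64)
        return self.head(self.features(x).flatten(1))

def train_step(model, optimizer, images, labels):
    """One back-propagation step over calibrated sample data."""
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```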
In practical applications, the obtained identity classifier may be used to identify whether the user to be identified is a registrant; more preferably, when the smart device has multiple registered users, the identity classifier may further identify which registered user the user specifically is.
Therefore, preferably, when the smart device calibrates the user image data formed in step S103, the registrant identity calibrated for the user image data may further include the registrant's user identifier. In that case, besides registrant or non-registrant, the recognition result output by the identity classifier may further include the user identifier of the identified registrant.
The registrant's user identifier may be calibrated as follows: in the registration mode, after collecting the dynamic vision signals for the user, the smart device assigns identifiers automatically according to the order of registration; for example, the calibrated user identifiers may be registrant A, registrant B, registrant C, and so on.
Alternatively, the smart device may return prompt information asking the registered user to input a custom user identifier, which the user can enter by key press, voice, or the like; after receiving the registrant's user identifier, the smart device uses it to calibrate the registrant identity for the user image data.
In practical application, under the condition that a dynamic visual sensor collects a dynamic visual signal of a certain part of a registered user, an identity classifier trained on the basis of the dynamic visual signal is an identity classifier based on the motion characteristics of the part; the identity classifier can be used for carrying out identity recognition on the user aiming at the motion characteristics of the part of different users.
Preferably, the image data formed by the dynamic vision sensor from signals collected from different parts of the registered user (such as the ears, face, head, and upper body) may all be used as training data for the deep convolutional network, yielding an identity classifier that recognizes users by the combined motion characteristics of all their parts. A classifier trained in this way integrates the motion characteristics of all of a user's parts, avoids the limitation of recognizing identity from the motion of a single part, and can further improve recognition accuracy.
Based on the above identity classifier, the invention provides an identity recognition method based on dynamic vision technology; its specific flow is shown in FIG. 2, and the method may comprise the following steps:
S201: acquire signals with the dynamic vision sensor and output the detected event points.
Specifically, the smart device may collect signals in real time by using the dynamic vision sensor, and when the user to be identified moves within the field of view of the dynamic vision sensor, the dynamic vision sensor may collect the dynamic vision signal of the user to be identified and output a detected event point.
For example, when the user to be recognized moves the smart device from a position below the head to the ear, since the dynamic vision sensor is always in an on state, the dynamic vision sensor can quickly capture the motion of the user and acquire the dynamic vision signal of the user to be recognized.
Each event point output by the dynamic vision sensor has a pixel coordinate position, but the same pixel coordinate may correspond to multiple event points. Therefore, before outputting event points, the dynamic vision sensor removes duplicates according to the response order of the event points, retaining and outputting only the most recently generated event point.
In practical applications, the acquired signal may contain noise caused by the system, the environment, and so on, so the dynamic vision sensor may remove this noise according to the response order and spatial proximity of the event points.
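The following sketch illustrates both the de-duplication and the noise removal just described, assuming events are (x, y, t) tuples; the support threshold, radius, and time window are assumed values, not parameters from this disclosure.

```python
def deduplicate(events):
    """Keep only the newest event per pixel coordinate (response order)."""
    latest = {}
    for x, y, t in events:
        if (x, y) not in latest or t > latest[(x, y)]:
            latest[(x, y)] = t
    return [(x, y, t) for (x, y), t in latest.items()]

def denoise(events, radius=2, window_us=5_000, min_support=2):
    """Drop isolated events lacking spatio-temporal neighbors."""
    kept = []
    for x, y, t in events:
        support = sum(
            1 for x2, y2, t2 in events
            if (x2, y2, t2) != (x, y, t)
            and abs(x2 - x) <= radius and abs(y2 - y) <= radius
            and abs(t2 - t) <= window_us
        )
        if support >= min_support:
            kept.append((x, y, t))
    return kept
```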
S202: map the event points over a period of time to image data, i.e., accumulate the event points over a period of time to form image data.
In this step, the smart device may accumulate the event points within a period of time (e.g., 20 ms), i.e., the event points triggered while the user to be identified moves during that period, and convert the accumulated event points into image data according to the position of each event point.
S203: performing identity recognition according to the image data by using an identity classifier; if the identification result is the registrant, go to step S204; if the identification result is a non-registrant, the subsequent steps are not executed.
In this step, the intelligent device may perform identity recognition according to the image data obtained in step S202 by using an identity classifier, and the obtained recognition result may be a registrant or a non-registrant. Therefore, after the identity identification, it may be further determined whether the identification result is a registrant, and if the identification result of the user to be identified is the registrant, step S204 is executed; otherwise, the smart device may not perform the subsequent steps and continue to maintain the current state.
Preferably, when there are a plurality of registered users of the smart device, if the identification result obtained by using the identity classifier is a registrant, the identification result may further include: identified as the subscriber identity of the registrant.
The identity classifier may be obtained through the training of steps S101-S104, or through other training methods. For example, a conventional camera may be used to collect image data from a set number of users as samples for a registered-user sample set; the sample set is then augmented by flipping, rotating, translating, scaling, and similar transformations to generate training data, and a feature classification model is trained on pre-designed target features to obtain the identity classifier. The pre-designed target features may be the HOG (Histogram of Oriented Gradients) features of traditional face recognition, Mean-Shift features, and the like; the feature classification model may be a KNN (k-Nearest Neighbor) classifier, an SVM (Support Vector Machine), a Boosting algorithm, or the like.
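As a hedged sketch of this alternative route, the following trains an SVM on HOG features with scikit-image and scikit-learn; data loading and the flip/rotate/translate/scale augmentation are elided, and all names and parameters are illustrative.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def extract_hog(images):
    """images: array of shape (n, H, W); returns the HOG feature matrix."""
    return np.array([
        hog(img, orientations=9, pixels_per_cell=(8, 8),
            cells_per_block=(2, 2))
        for img in images
    ])

def train_identity_svm(images, labels):
    """labels: 1 for registrant samples, 0 for non-registrant samples."""
    classifier = SVC(kernel="rbf", probability=True)
    classifier.fit(extract_hog(images), labels)
    return classifier
```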
How the intelligent device performs identity recognition according to the image data formed in step S202 by using the identity classifier will be described in detail later.
In practical application, after the identity of the user is identified by the identity classifier, the intelligent device may perform a certain operation according to the identification result, for example, perform an unlocking operation or send an alarm to a registered user.
Preferably, the smart device may perform the action recognition of the user during the process of recognizing the user identity through the above steps S201 to S203, or after recognizing that the user to be recognized is a registrant. Therefore, after the user to be identified is identified as the registrant, the corresponding instruction can be matched according to the action of the identified registrant, and corresponding operation is executed, such as answering a call, opening a vehicle door and the like.
The process of recognizing the user's motion may specifically include the following steps:
s204: and identifying the motion track of the moving part according to the detected event points.
Specifically, if the smart device has identified the user to be identified as a registrant in step S203, a component classifier may be used to identify the category and position of the moving part from the event points currently detected in step S201, and the motion trajectory of each moving part is then determined from the sequentially identified positions of the parts of that category.
The component classifier is trained on sample signals acquired by the dynamic vision sensor; it may be trained by another device and stored in the smart device, or pre-trained by the smart device itself. The training method for the component classifier is described in detail later.
In this step, the intelligent device may determine the category of the moving part to which the event point belongs, by using the part classifier, according to the neighbor point of the currently detected event point.
Wherein, the neighbor point of the event point can be determined by the following method:
For the currently detected event point, determine all event points acquired by the dynamic vision sensor within a set time interval before this point was detected; from these, select the event points lying within a set spatial range around it (for example, an 80 x 80-pixel rectangle) as its neighbor points.
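A minimal sketch of this neighbor-point rule follows; the 80 x 80-pixel window comes from the text above, while the 10 ms time interval and all names are assumptions.

```python
def neighbor_points(event, recent_events, half_window=40, interval_us=10_000):
    """Return the events within the set time interval and spatial range."""
    x, y, t = event
    return [
        (x2, y2, t2) for x2, y2, t2 in recent_events
        if 0 < t - t2 <= interval_us        # acquired before this event
        and abs(x2 - x) <= half_window      # inside the 80 x 80 rectangle
        and abs(y2 - y) <= half_window
    ]
```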
Further, after determining the categories of the moving parts to which all the detected event points belong, the intelligent device may determine, for each category of moving parts, the positions of the moving parts of the category according to the positions of the event points of the moving parts belonging to the category.
For example, the center position of the event points belonging to the same category of moving part may be calculated, and the calculated center taken as the position of that category's moving part. In practice, the center position can be obtained by any conventional clustering method known to those skilled in the art; for example, K-means clustering may be used, which facilitates accurate tracking of the moving part afterwards.
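A short sketch of locating a part this way, using scikit-learn's K-means as the text suggests; with k=1 this reduces to the plain centroid of the part's event points. The function name and defaults are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def part_position(part_events, k=1):
    """part_events: list of (x, y) event coordinates for one category.

    Returns the cluster center used as the position of the moving part.
    """
    points = np.asarray(part_events, dtype=float)
    centers = KMeans(n_clusters=k, n_init=10).fit(points).cluster_centers_
    return centers[0]
```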
In this step, after the category and position of a moving part have been identified as above, the motion trajectory of that category of moving part can be determined from its sequentially identified positions.
In practice, the determination of the motion trajectory of the moving component can be performed by using a tracking algorithm commonly used by those skilled in the art, such as a smoothing filter, a time-series tracking algorithm, etc., and will not be described in detail herein.
Preferably, in the embodiment of the present invention, after identifying the category and the position of the moving component, the intelligent device may further perform area rationality verification on the identified category of the moving component, and exclude the position of the moving component that is erroneously determined, so as to improve the tracking efficiency of subsequent moving components and improve the accuracy of motion identification.
Specifically, the smart device may judge whether the currently identified position of the moving part of that category lies within a reasonable area range; if so, the verification passes, and otherwise it fails. If the identified moving part passes the verification, its category and position are recorded accordingly. For example, the position of the moving part can be recorded in a pre-constructed tracking-part list, in which the part's positions are tracked and recorded. In this way, the motion trajectory can be determined from the sequentially recorded positions of that category of moving part in the tracking-part list.
Wherein the reasonable area range is determined according to the position of the moving part of the category recorded last time and the position range prior knowledge of the moving part of the category. For example, when the motion component is a specific part such as a head or a hand of a human body, a distance between a currently recognized position of the motion component (for example, the head or the hand) of the category and a previously recorded position of the motion component of the category may be calculated, and if the distance meets a certain condition and meets the conventional experience of human body morphology, it indicates that the currently recognized position of the motion component of the category is within a reasonable area range.
In practical applications, because of the dynamic vision sensor's special imaging, when a moving part pauses briefly, the motion trajectory reflected by the detected event points may be temporarily lost. Maintaining the tracking-part list therefore enables continuous tracking of different moving parts and smoothing of their positions; the smoothing may use common means such as a Kalman filter.
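The following sketch combines the area-rationality check and the tracking-part list described above; the pixel distance threshold, and the exponential smoothing used here in place of the Kalman filter the text mentions, are both assumptions.

```python
import math

class PartTracker:
    """Tracking-part list with a reasonable-area gate and smoothing."""

    def __init__(self, max_jump_px=60.0, alpha=0.5):
        self.tracks = {}              # category -> recorded positions
        self.max_jump_px = max_jump_px
        self.alpha = alpha            # weight given to the newest position

    def update(self, category, position):
        """Record a position only if it passes the area check."""
        track = self.tracks.setdefault(category, [])
        if track:
            px, py = track[-1]
            if math.dist((px, py), position) > self.max_jump_px:
                return False          # verification failed: discard it
            # smooth against the previously recorded position
            position = (self.alpha * position[0] + (1 - self.alpha) * px,
                        self.alpha * position[1] + (1 - self.alpha) * py)
        track.append(position)
        return True
```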
S205: match a corresponding instruction according to the motion trajectory of the moving part.
Specifically, the smart device may extract trajectory features from the motion trajectory of the moving part determined in step S204; search the action dictionary, under that moving part, for stored features matching the extracted trajectory features; and, if such features are found, take the instruction corresponding to them as the instruction matching the motion trajectory of the moving part.
The action dictionary is constructed in advance by technicians: for each category of moving part it stores the features of the matching motion trajectories and correspondingly records the preset action instructions, such as an answer-call instruction or an open-car-door instruction.
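An illustrative sketch of such a lookup follows; the trajectory feature used (coarse net-displacement direction) and the dictionary entries are assumptions chosen for demonstration, not contents of the actual action dictionary.

```python
ACTION_DICTIONARY = {
    # (moving-part category, trajectory feature) -> instruction
    ("ear", "up"): "answer_call",
    ("hand", "left"): "open_door",
}

def trajectory_feature(trajectory):
    """Reduce a trajectory [(x, y), ...] to a coarse direction label."""
    (x0, y0), (x1, y1) = trajectory[0], trajectory[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dy) >= abs(dx):
        return "up" if dy < 0 else "down"   # image y grows downward
    return "left" if dx < 0 else "right"

def match_instruction(category, trajectory):
    """Return the matched instruction, or None if nothing is stored."""
    return ACTION_DICTIONARY.get((category, trajectory_feature(trajectory)))
```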
S206: execute the corresponding operation according to the matched instruction.
Specifically, the smart device may perform a corresponding operation according to the matched instruction in step S205.
For example, when the category of the moving part is identified as nose or ear through step S204, the smart device may determine the movement track of the nose or ear through step S205, and match a corresponding instruction, such as an automatic answering instruction, according to the determined movement track of the nose or ear. In this way, the intelligent device can perform corresponding operations according to the instructions, such as performing automatic answering operations.
Alternatively, when the category of the moving part is identified in step S204 as the nose, an eye, or a finger, the smart device may match a corresponding automatic unlocking or danger-reminder instruction according to the motion trajectory of that nose, eye, or finger, and then execute the unlocking or danger-reminder operation.
In the embodiment of the present invention, as shown in FIG. 3a, the process in step S203 of performing identity recognition with the identity classifier on the image data formed in step S202 may be implemented through the following steps:
S301: detect a target region in the image data.
Specifically, the smart device may detect each frame of image data formed in step S202, and detect a target area from the image data; wherein the target area is preset, such as a human face area, a head area, a hand area, a body area, and the like.
In practical application, because background which does not move is automatically filtered out from signals acquired by the dynamic vision sensor, and noise caused by systems, environments and the like in the signals is removed, theoretically, all event points output by the dynamic vision sensor should be responses generated by the movement of a user to be identified. Accordingly, the horizontal and vertical boundaries of the target region may be determined based on the projected histograms of the horizontal and vertical directions of the image data using existing methods known to those skilled in the art.
For example, when the target region is specifically a head region, the image may be projected onto the horizontal axis over a fixed depth in the vertical direction (to remove the influence of the shoulders and the like) to obtain a projection histogram; the width and the left and right boundaries of the target head are determined from the continuity of the histogram. The height of the target head is then calculated from a preset average head aspect ratio, giving the upper and lower boundaries, so that the head region is detected, as shown in FIG. 3b.
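A sketch of this projection-histogram detection follows, operating on the accumulated event image; the head aspect ratio, minimum column count, and the simplified row scan are assumptions rather than the exact procedure of the embodiment.

```python
import numpy as np

def detect_head(frame, head_aspect=1.2, min_count=3):
    """frame: 2D uint8 event image; returns (left, top, right, bottom)."""
    hist = (frame > 0).sum(axis=0)           # project onto horizontal axis
    cols = np.where(hist >= min_count)[0]    # columns with enough events
    if cols.size == 0:
        return None
    left, right = int(cols[0]), int(cols[-1])
    width = right - left + 1
    rows = np.where((frame[:, left:right + 1] > 0).any(axis=1))[0]
    top = int(rows[0])
    bottom = top + int(width * head_aspect)  # height from average ratio
    return left, top, right, min(bottom, frame.shape[0] - 1)
```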
S302: regularize the target area.
In practical applications, because the physical distance between the smart device and the user may differ as the user moves at different times, the target area (for example, the head) detected in step S301 may appear at different sizes in the image; using it directly as input for identity recognition would affect the accuracy of the result. The smart device therefore converts the detected target areas to the same size, i.e., performs size regularization. For example, the width of the target area may be obtained and scaled to a fixed width, with the scaling factor recorded, and the same operation performed in the vertical direction.
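A minimal sketch of this size regularization, assuming OpenCV is available; the 64-pixel target size is an assumed value.

```python
import cv2

def regularize_size(region, target_w=64, target_h=64):
    """Scale a detected target region to a fixed size.

    Returns the resized patch and the scaling factors, which are
    recorded for later use as the text describes.
    """
    h, w = region.shape[:2]
    scale_x, scale_y = target_w / w, target_h / h
    resized = cv2.resize(region, (target_w, target_h),
                         interpolation=cv2.INTER_NEAREST)
    return resized, (scale_x, scale_y)
```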
Further, the intelligent device may perform regularization of the illumination condition on the detection target area in consideration of the influence of the illumination condition on the image data. For example, after detecting the illumination condition value of the current image data, the imaging is automatically adjusted so that the imaging characteristics of the target area are substantially consistent under different illumination conditions.
Preferably, the smart device may further perform regularization of the moving speed, for example, classify the moving speed of the target region according to a difference between a time label of the event point and time labels of other event points in a neighborhood of the event point, and select different integration times according to a difference of the moving speed grades to generate image data with the regularized moving speed, so as to achieve a consistent imaging modality at different speeds.
S303: perform identity recognition on the regularized image data with the identity classifier.
In practical applications, the dynamic vision sensor responds only to event points that change, so the converted image data sometimes contains very few effective pixels, or response pixels from only part of the user. Image data with few effective pixels or with incomplete parts often has an adverse effect on the recognition result; that is, it is image data unsuitable for recognition.
Therefore, preferably, after regularizing the detected target region, the intelligent device may further perform filtering processing on the regularized image data by using a filtering classifier, so as to remove the image data calibrated as unsuitable for identification. Accordingly, the intelligent device can perform identity recognition according to the filtered image data by using the identity classifier.
The filtering classifier is trained in advance on positive and negative samples collected by the dynamic vision sensor. The positive and negative samples are regularized image data formed from the event points output by the dynamic vision sensor, where those event points are detected from dynamic vision signals collected from registered or other users.
This regularized image data can be calibrated according to information such as the number of effective pixels and the positions of the response pixels, the calibration result being positive sample or negative sample. A positive sample is image data calibrated as suitable for recognition; a negative sample is image data calibrated as unsuitable for recognition. Which image data is suitable or unsuitable is calibrated by technicians in advance.
In practical applications, the filtering classifier may be pre-trained by the smart device before identity recognition is performed, or pre-trained by another device and then stored on the smart device. Either way, once the positive and negative samples are collected, the filtering classifier can be obtained by clustering or by classifier training; for example, an SVM (Support Vector Machine) classifier may be trained on the positive and negative samples to serve as the filtering classifier.
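A hedged sketch of such a filtering classifier with scikit-learn follows; flattened pixels are used as features purely for illustration, and the linear kernel is an assumption.

```python
import numpy as np
from sklearn.svm import SVC

def train_filter_classifier(positive_images, negative_images):
    """positive/negative_images: lists of regularized 2D arrays."""
    X = np.array([img.ravel() for img in positive_images + negative_images],
                 dtype=float)
    y = np.array([1] * len(positive_images) + [0] * len(negative_images))
    return SVC(kernel="linear").fit(X, y)

def keep_for_recognition(filter_clf, image):
    """True if the image is predicted suitable for identity recognition."""
    features = image.ravel().astype(float)[None, :]
    return bool(filter_clf.predict(features)[0])
```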
In the embodiment of the present invention, the flow of the training method for the component classifier mentioned in step S204 is shown in FIG. 4; the training method may specifically include the following steps:
S401: generate training samples from the event points output by the dynamic vision sensor while collecting sample signals.
In this step, a dynamic vision sensor can be used to collect a sample signal for the moving part; and taking the event point output by the dynamic vision sensor as a sample event point. For example, after a user moves his head within the field of view of the dynamic vision sensor, the dynamic vision sensor may acquire a sample signal for the user's head.
Image data formed by accumulating event points over a period of time can describe the motion contour of the user reasonably well, and the contour information generated by these motions also expresses shape information of the user himself.
Therefore, after determining an event point output by collecting a sample signal by the dynamic vision sensor as a sample event point, neighbor points of the currently output sample event point may be determined; and taking the currently output sample event point and the neighbor point of the sample event point as a training sample.
Further, according to the positions of the sample event points and the neighbor points thereof, the sample event points are classified, that is, the category of the moving part to which the sample event point belongs is judged. The category of the motion component may be specifically the head, hand, body, and the like of the user. In this way, the determined category of the moving part to which the sample event point belongs can be calibrated for the category of the moving part of the training sample.
S402: train a deep belief network with the generated training samples and their calibration results to obtain the component classifier.
The calibration result of the training sample refers to the category of the moving part calibrated for the training sample.
In this step, the training sample set may be formed by the plurality of training samples generated in step S401, and the deep belief network may be trained by using the training sample set and the calibration result of each training sample in the training sample set, so as to obtain the component classifier. Wherein, the technical means commonly used by those skilled in the art can be adopted for how to train the deep belief network.
For example, the deep belief network is iteratively trained multiple times with the generated training samples and their calibration results. One iteration specifically comprises: taking a training sample set composed of multiple training samples as the input of the deep belief network; comparing the output of the network with the calibration result of each training sample; and, according to the comparison, either adjusting the network's level parameters and continuing with the next iteration, or stopping the iteration to obtain the component classifier.
The output of the deep belief network is in effect a guess of the category of the moving part to which each sample event point belongs. Comparing this guess with the pre-calibrated accurate result and using the resulting error to adjust the parameters of each level of the network through back-propagation improves the category-classification accuracy of the final component classifier, which facilitates accurate recognition of and response to subsequent user actions.
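As a hedged sketch of step S402: a true deep belief network is pre-trained layer by layer before fine-tuning; the sketch below substitutes a plain feed-forward network trained with back-propagation, which corresponds only to the fine-tuning stage described, and every name and hyperparameter is an assumption.

```python
import torch
import torch.nn as nn

def train_component_classifier(samples, labels, num_categories,
                               epochs=10, lr=1e-3):
    """samples: float tensor (n, d) built from event points and neighbors;
    labels: long tensor (n,) of calibrated moving-part categories."""
    model = nn.Sequential(
        nn.Linear(samples.shape[1], 128), nn.ReLU(),
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, num_categories),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(samples), labels)
        loss.backward()    # adjust level parameters from the comparison
        optimizer.step()
    return model
```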
Based on the above identity recognition method based on the dynamic vision technology, an identity recognition apparatus based on the dynamic vision technology provided in the embodiment of the present invention, as shown in fig. 5, may specifically include: a signal acquisition unit 501, a target imaging unit 502 and an identity recognition subunit 503.
The signal acquisition unit 501 is configured to acquire a signal using a dynamic vision sensor and output a detected event point.
The target imaging unit 502 is configured to accumulate the event points output by the signal acquisition unit 501 over a period of time to form image data.
The identity recognition subunit 503 is configured to perform identity recognition according to the image data output by the target imaging unit 502 by using an identity classifier.
In practical applications, the recognition result output by the identity recognition subunit 503 may be registrant or non-registrant. Preferably, when there are multiple registered users, the recognition result output by the identity recognition subunit 503 may further include the user identifier of the identified registrant.
In practical application, the identity classifier is trained in advance according to image data formed by a dynamic visual sensor aiming at signals collected by a user when the user identity is registered.
Preferably, in the embodiment of the present invention, the identity recognition apparatus based on the dynamic vision technology may further include: an identity classifier training unit 504.
The identity classifier training unit 504 is configured to, when performing user identity registration, acquire a dynamic visual signal for a user by using a dynamic visual sensor, and use an event point output by the dynamic visual sensor as a user event point; accumulating user event points over a period of time to form user image data; and training the deep convolutional network by using the sample data and the calibration result thereof to obtain the identity classifier.
Wherein the sample data comprises: user image data, and non-user image data; the calibration result of the sample data comprises: a registrant identity that is targeted for user image data, and a non-registrant identity that is targeted for non-user image data.
Further, in the embodiment of the present invention, the identity recognition apparatus based on the dynamic vision technology may further include: an action recognition unit 505, an instruction matching unit 506, and an instruction response unit 507.
The motion recognition unit 505 is configured to receive the recognition result output by the identity recognition subunit 503, and recognize the motion trajectory of the moving component according to the event point detected by the signal acquisition unit 501 when the recognition result is the registrant.
The instruction matching unit 506 is configured to match a corresponding instruction according to the motion trajectory of the moving component identified by the motion identification unit 505.
The instruction response unit 507 is configured to execute a corresponding operation according to the instruction matched by the instruction matching unit 506.
In practical applications, as shown in fig. 6, the identity recognizing subunit 503 may specifically include: a target area detection subunit 601, a target area regularization subunit 602, and an identity identification subunit 603.
The target area detection subunit 601 is configured to detect a target area in the image data output by the target imaging unit 502.
The target area regularization subunit 602 is configured to regularize the target area detected by the target area detection subunit 601.
Specifically, the target area detected by the target area detection subunit 601 may be subjected to size regularization, illumination condition regularization, and movement speed regularization.
The identity recognition subunit 603 is configured to perform identity recognition according to the image data normalized by the target region regularization subunit 602 by using an identity classifier.
Further, the identity identifying subunit 503 may further include: an image filtering processing subunit 604.
The image filtering processing subunit 604 is configured to perform filtering processing on the image data normalized by the target region regularization subunit 602 by using a filtering classifier. Accordingly, the identity recognizing subunit 603 is specifically configured to perform identity recognition by using an identity classifier according to the image data filtered by the image filtering processing subunit 604.
The filtering classifier is trained in advance according to positive and negative samples collected by the dynamic visual sensor, and the positive and negative samples are image data which are formed by event points output by the dynamic visual sensor and are normalized; and positive samples are image data that are calibrated to be suitable for recognition, and negative samples are image data that are calibrated to be unsuitable for recognition. In practical application, the filtering classifier can be obtained by pre-training the identity recognition device based on the dynamic vision technology, or can be stored in the identity recognition device based on the dynamic vision technology after being trained by other devices.
In practical application, as shown in fig. 7, the action recognition unit 505 may specifically include: a component identification subunit 701, and a trajectory tracking subunit 702.
The component identification subunit 701 is configured to identify the category and the position of the moving component for the event point currently detected by the signal acquisition unit 501 by using a component classifier. The component classifier is trained in advance according to sample signals collected by the dynamic vision sensor.
The trajectory tracking subunit 702 is configured to determine a motion trajectory of the moving component according to the positions of the moving components of the categories sequentially identified by the component identifying subunit 701.
In this way, the instruction matching unit 506 is specifically configured to extract trajectory features from the motion trajectory of the moving part determined by the trajectory tracking subunit 702; search the action dictionary, under that moving part, for stored features matching the extracted trajectory features; and, if such features are found, take the instruction corresponding to them as the instruction matching the motion trajectory of the moving part.
In practical application, the component classifier may be pre-trained by other devices and then stored in the identity recognition device based on the dynamic vision technology, or pre-trained by the identity recognition device based on the dynamic vision technology.
Therefore, more preferably, the action recognition unit 505 may further include: the component classifier trains subunit 703.
The component classifier training subunit 703 is configured to generate a training sample according to an event point output by a dynamic vision sensor collecting a sample signal; and training the deep confidence network by using the generated training sample and the calibration result thereof to obtain the component classifier.
The calibration result of the training sample refers to the category of the moving part calibrated for the training sample.
In the embodiment of the present invention, the specific functions of each unit and sub-units under the unit in the identity recognition apparatus based on the dynamic vision technology are implemented by referring to the specific steps of the identity recognition method based on the dynamic vision technology, which are not described in detail herein.
In practical applications, the smart device may be a smart phone. In this way, the smart phone equipped with the identity recognition device can recognize the identity of the current holder of the smart phone, and if the recognition result is the registrant, namely the registered user of the smart phone, the smart phone is unlocked; furthermore, the smart phone can also identify the action of the user according to the signals collected by the low-energy-consumption dynamic vision sensor in real time, match out a corresponding action instruction, and execute corresponding operations, such as automatic call answering, automatic playing and the like.
For example, a registered user can unlock the smart phone simply by holding it in one hand and shaking it along a trajectory the user has set; there is no need to touch the smart phone's screen, nor to hold the phone in one hand while unlocking it on the screen with the other, so the operation is simple and convenient.
When the smart phone has an incoming call, the user only needs to move the smart phone to the ear along a conventional track, the smart phone can automatically answer the call without triggering an answer key or completing answer sliding operation, and the use of the smart phone is facilitated for the user. On the other hand, even if the unregistered user operates according to the operation mode of the registered user, the smart phone cannot be unlocked or answered, and the security of the smart phone is improved.
Alternatively, the smart device may be smart glasses applied to navigation for the blind. For example, when going out, a blind person can wear smart glasses equipped with the identity recognition apparatus based on dynamic vision technology; during travel, the dynamic vision sensor in the apparatus collects signals from objects in the scene ahead that move relative to the wearer and recognizes the currently collected signals. If road signs or dangerous goods appear ahead during travel, the wearer can be reminded through different sounds or tactile cues to take appropriate measures. Because of its low energy consumption, the dynamic vision sensor can stay in a working state at all times with a long standby time, making it very suitable for navigation for the blind.
Alternatively, the smart device may be a car equipped with the identity recognition apparatus based on dynamic vision technology. For example, a dynamic vision sensor installed above the car door can acquire signals in real time. As the owner gradually approaches the car, the low-energy dynamic vision sensor quickly and promptly captures the owner's facial information and motion trajectory, completing actions such as automatically unlocking the door or powering on the car; the operation is simple and fast and improves the user experience. Moreover, since no response is made to a registered user performing unregistered actions, nor to an unregistered user imitating a registered user's operations, the security of the vehicle can be improved.
Alternatively, the smart device may be a smart television configured with the identity recognition apparatus. For example, the dynamic vision sensor in the apparatus may be disposed above the smart television for signal acquisition; when a user moves within its field of view, the user's dynamic vision signals (such as the face and body) can be acquired, and the apparatus identifies the user. If the user is identified as an unrestricted user, the smart television may automatically jump to the channel that user is most interested in, or pop up the user's viewing history for selection. If the user is identified as a restricted user, the smart television may block the relevant channels and forbid that user from watching them; it may also count the restricted user's viewing time for the day and withhold the viewing function once a set time limit is exceeded. In this way, viewing permissions for the corresponding channels, such as children's television time, can be restricted according to identity information through simple operations, improving the user experience.
According to the technical scheme above, a dynamic vision sensor can collect signals for a user who registers an identity, and the identity classifier is trained in advance on image data formed from the collected signals. In subsequent identity recognition, the dynamic vision sensor collects signals, the detected event points are accumulated over a period of time to form image data, and identity recognition is performed on the formed image data with the identity classifier. Compared with existing identity recognition methods, the scheme of the invention uses a low-energy dynamic vision sensor that can acquire signals at any time; as long as the user moves within its field of view, the sensor captures the user and the user's actions promptly and effectively. Identity recognition is performed on the signals acquired by the dynamic vision sensor, so the user neither needs to wake up the terminal device first nor perform extra operations on its screen, and the operation is simple and convenient.
Those skilled in the art will appreciate that the present invention includes apparatus for performing one or more of the operations described in this application. These apparatus may be specially designed and manufactured for the required purposes, or they may comprise known devices in a general-purpose computer. Such devices store computer programs that are selectively activated or reconfigured. Such a computer program may be stored in a device-readable (e.g., computer-readable) medium, including, but not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards, or optical cards, or any other type of medium suitable for storing electronic instructions, each coupled to a bus. That is, a readable medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer).
It will be understood by those within the art that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks therein, can be implemented by computer program instructions. Those skilled in the art will appreciate that these computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, implement the features specified in the block or blocks of the block diagrams and/or flowchart illustrations of the present disclosure.
Those of skill in the art will appreciate that the various operations, methods, and steps in the processes, acts, or solutions discussed in this application may be alternated, modified, combined, or deleted. Further, other steps, measures, or schemes in the various operations, methods, and flows that have been discussed in this application may also be interchanged, modified, rearranged, decomposed, combined, or eliminated. Further, steps, measures, and schemes in the prior art that are part of the various operations, methods, and flows disclosed in the present invention may likewise be alternated, changed, rearranged, decomposed, combined, or deleted.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be construed as falling within the protection scope of the present invention.
Claims (32)
1. An identity recognition method, comprising:
forming image data based on event points of a user, the event points being acquired using a dynamic vision sensor;
detecting a target region in the image data;
acquiring motion features of different parts of the user based on the target region;
and performing identity recognition using an identity classifier according to a comprehensive motion feature generated from the motion features of the different parts of the user.
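For illustration only (not part of the claims): a minimal sketch of how the per-part motion features might be fused into one comprehensive motion feature. Concatenation is an assumption, since the claim does not fix the fusion rule.

```python
import numpy as np

def comprehensive_motion_feature(part_features):
    """Fuse per-part motion features (dict: part name -> 1-D vector)
    into one comprehensive motion feature vector."""
    parts = sorted(part_features)  # fixed order keeps the layout stable
    return np.concatenate([np.asarray(part_features[p]).ravel() for p in parts])
```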
2. The method of claim 1, wherein performing identity recognition using the identity classifier according to the image data comprises:
normalizing the target region;
and performing identity recognition using the identity classifier according to the normalized image data.
3. The method of claim 2, further comprising, after the normalizing of the target region:
filtering the normalized image data by using a filtering classifier;
wherein the filtering classifier is trained in advance according to positive and negative samples collected by the dynamic vision sensor;
the positive and negative samples are normalized image data formed from event points output by the dynamic vision sensor; and
the positive sample is image data calibrated as suitable for recognition, and the negative sample is image data calibrated as unsuitable for recognition.
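For illustration only (not part of the claims): a minimal sketch of such a filtering classifier trained on calibrated positive and negative samples. The claim does not fix the classifier family, so the scikit-learn SVM here is an assumption.

```python
import numpy as np
from sklearn.svm import SVC

def train_filtering_classifier(pos_images, neg_images):
    """pos_images / neg_images: normalized event images calibrated as
    suitable / unsuitable for recognition, per claim 3."""
    X = np.array([img.ravel() for img in pos_images + neg_images])
    y = np.array([1] * len(pos_images) + [0] * len(neg_images))
    clf = SVC(kernel="rbf")
    clf.fit(X, y)
    return clf

def is_suitable_for_recognition(clf, image):
    # True when the image should be passed on to the identity classifier.
    return bool(clf.predict(image.reshape(1, -1))[0])
```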
4. The method of claim 3, wherein performing identity recognition using the identity classifier according to the normalized image data specifically comprises:
performing identity recognition according to the filtered image data by using the identity classifier.
5. The method according to any one of claims 1-4, wherein the identity classifier performs identity recognition based on the image data and outputs a recognition result of: registrant, or non-registrant.
6. The method of claim 5, wherein there are a plurality of registrants, and the recognition result further comprises: the user identity of the registrant that is recognized.
7. The method of any of claims 1-4, wherein the identity classifier is trained in advance, according to image data formed by the dynamic vision sensor from signals collected for the user at user identity registration, as follows:
when user identity registration is performed, acquiring dynamic vision signals for the user by using the dynamic vision sensor, and taking event points output by the dynamic vision sensor as user event points;
accumulating the user event points over a period of time to form user image data;
and training a deep convolutional network by using sample data and the calibration results thereof to obtain the identity classifier;
wherein the sample data comprises: the user image data, and non-user image data;
the calibration result of the sample data comprises: the identity of the registrant calibrated for the user image data and the identity of the non-registrant calibrated for the non-user image data.
8. The method of claim 7, wherein the identity of the registrant calibrated for the user image data specifically includes a user identification of the registrant.
9. The method of claim 5, further comprising, after the identity recognition is performed with the identity classifier based on the image data:
if the recognition result is a registrant:
identifying a motion trajectory of a moving part according to the detected event points;
and after matching a corresponding instruction according to the motion trajectory of the moving part, executing a corresponding operation according to the instruction.
10. The method of claim 9, wherein identifying a motion trajectory of a moving part based on the detected event points comprises:
identifying, by using a part classifier, the category and the position of the moving part for the currently detected event points;
and determining the motion trajectory of the moving part according to the sequentially identified positions of moving parts of that category.
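For illustration only (not part of the claims): a minimal sketch of the trajectory determination of claim 10, where positions of the same category, in arrival order, form that part's motion trajectory. `classify_part` stands in for the trained part classifier and is an assumption.

```python
from collections import defaultdict

def track_trajectories(event_batches, classify_part):
    """classify_part: stand-in for the trained part classifier; maps a batch
    of event points to (category, (x, y) position), or (None, None)."""
    trajectories = defaultdict(list)  # category -> ordered list of positions
    for batch in event_batches:
        category, position = classify_part(batch)
        if category is not None:
            trajectories[category].append(position)
    return trajectories
```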
11. The method of claim 10, wherein the part classifier is trained according to the following method:
generating training samples according to event points output by the dynamic vision sensor collecting a sample signal;
and training a deep belief network by using the generated training samples and the calibration results thereof to obtain the part classifier;
wherein the calibration result of a training sample refers to the category of the moving part calibrated for that training sample.
12. The method of claim 11, wherein the training samples are generated by:
taking event points output by the dynamic vision sensor collecting a sample signal as sample event points;
determining neighbor points of a currently output sample event point;
and taking the currently output sample event point and its neighbor points as one training sample.
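For illustration only (not part of the claims): a minimal sketch of the sample generation of claim 12. The spatial neighborhood used to select neighbor points is an assumption, as the claim leaves the neighborhood definition open.

```python
def make_training_sample(current_event, recent_events, radius=3):
    """A sample is the current event point plus its neighbor points; here a
    neighbor is any recent event within `radius` pixels (an assumption)."""
    cx, cy = current_event[0], current_event[1]
    neighbors = [e for e in recent_events
                 if abs(e[0] - cx) <= radius and abs(e[1] - cy) <= radius]
    return {"event": current_event, "neighbors": neighbors}
```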
13. The method of claim 9, wherein matching a corresponding instruction according to the motion trajectory of the moving part comprises:
extracting trajectory features from the motion trajectory of the moving part;
searching, in an action dictionary, whether features matching the extracted trajectory features are stored for the moving part;
and if so, taking the instruction corresponding to the found features as the instruction corresponding to the motion trajectory of the moving part.
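For illustration only (not part of the claims): a minimal sketch of the matching step of claim 13. The trajectory feature (a direction histogram) and the similarity threshold are assumptions, since the claim does not fix them.

```python
import numpy as np

def trajectory_features(points, bins=8):
    """Histogram of motion directions along the trajectory (illustrative)."""
    deltas = np.diff(np.asarray(points, dtype=float), axis=0)
    angles = np.arctan2(deltas[:, 1], deltas[:, 0])
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)

def match_instruction(action_dict, part, points, threshold=0.8):
    """action_dict: part -> list of (stored_feature, instruction) pairs."""
    feat = trajectory_features(points)
    best, best_sim = None, threshold
    for stored_feat, instruction in action_dict.get(part, []):
        sim = 1.0 - 0.5 * np.abs(feat - stored_feat).sum()  # in [0, 1]
        if sim >= best_sim:
            best, best_sim = instruction, sim
    return best  # None when no stored feature matches
```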
14. The method of claim 10, wherein the category of the moving part is a nose or an ear; and
the executing of a corresponding operation according to the instruction after the corresponding instruction is matched according to the motion trajectory of the moving part comprises:
executing an automatic answering operation after a corresponding automatic answering instruction is matched according to the motion trajectory of the nose or the ear.
15. The method of claim 10, wherein the category of the moving part is a nose, an eye, or a finger; and
the executing of a corresponding operation according to the instruction after the corresponding instruction is matched according to the motion trajectory of the moving part comprises:
executing an unlocking/danger-reminding operation after a corresponding automatic unlocking/danger-reminding instruction is matched according to the motion trajectory of the nose, the eye, or the finger.
16. An identification device, comprising:
a target imaging unit, configured to form image data based on event points of a user, the event points being acquired by using a dynamic vision sensor;
a target region detection subunit, configured to detect a target region in the image data output by the target imaging unit;
a feature extraction unit, configured to obtain motion features of different parts of the user based on the target region output by the target region detection subunit;
and an identity recognition subunit, configured to perform identity recognition using an identity classifier according to a comprehensive motion feature generated from the motion features of the different parts of the user output by the feature extraction unit.
17. The apparatus according to claim 16, wherein the identity recognition subunit comprises:
a target region normalization subunit, configured to normalize the target region detected by the target region detection subunit;
and an identity recognition subunit, configured to perform identity recognition using the identity classifier according to the image data normalized by the target region normalization subunit.
18. The apparatus of claim 17, wherein the target region normalization subunit, after normalizing the target region, is further configured to:
filter the normalized image data by using a filtering classifier;
wherein the filtering classifier is trained in advance according to positive and negative samples collected by the dynamic vision sensor;
the positive and negative samples are normalized image data formed from event points output by the dynamic vision sensor; and
the positive sample is image data calibrated as suitable for recognition, and the negative sample is image data calibrated as unsuitable for recognition.
19. The apparatus according to claim 18, wherein the identity recognition subunit is specifically configured to:
perform identity recognition according to the filtered image data by using the identity classifier.
20. The apparatus according to any of claims 16-19, wherein the identity recognition subunit outputs the recognition result as: registrant, or non-registrant.
21. The apparatus as claimed in claim 20, wherein there are a plurality of registrants, and the recognition result output by the identity recognition subunit further comprises: the user identity of the registrant that is recognized.
22. The apparatus of any one of claims 16-19, further comprising:
an identity classifier training unit, configured to acquire dynamic vision signals for the user by using the dynamic vision sensor when user identity registration is performed, and take event points output by the dynamic vision sensor as user event points; accumulate the user event points over a period of time to form user image data; and train a deep convolutional network by using sample data and the calibration results thereof to obtain the identity classifier;
wherein the sample data comprises: the user image data, and non-user image data; and the calibration result of the sample data comprises: the identity of the registrant calibrated for the user image data and the identity of the non-registrant calibrated for the non-user image data.
23. The apparatus of claim 22, wherein the identity of the registrant calibrated for the user image data specifically includes a user identification of the registrant.
24. The apparatus of claim 20, further comprising:
an action recognition unit, configured to receive the recognition result output by the identity recognition subunit and, when the recognition result is a registrant, identify the motion trajectory of a moving part according to the event points acquired by the dynamic vision sensor;
an instruction matching unit, configured to match a corresponding instruction according to the motion trajectory of the moving part identified by the action recognition unit;
and an instruction response unit, configured to execute a corresponding operation according to the instruction matched by the instruction matching unit.
25. The apparatus according to claim 24, wherein the action recognition unit specifically comprises:
a part identification subunit, configured to identify, by using a part classifier, the category and the position of the moving part for the event points currently acquired by the dynamic vision sensor, the part classifier being trained in advance according to sample signals collected by the dynamic vision sensor;
and a trajectory tracking subunit, configured to determine the motion trajectory of the moving part according to the positions of moving parts of that category sequentially identified by the part identification subunit.
26. The apparatus of claim 25, wherein the action recognition unit further comprises a part classifier training subunit, configured to generate training samples from event points output by the dynamic vision sensor collecting a sample signal, and to train a deep belief network by using the generated training samples and the calibration results thereof to obtain the part classifier;
wherein the calibration result of a training sample refers to the category of the moving part calibrated for that training sample.
27. The apparatus of claim 26, wherein the part classifier training subunit is specifically configured to:
take event points output by the dynamic vision sensor collecting a sample signal as sample event points;
determine neighbor points of a currently output sample event point;
and take the currently output sample event point and its neighbor points as one training sample.
28. The apparatus of claim 24, wherein
the instruction matching unit is specifically configured to: extract trajectory features from the motion trajectory of the moving part determined by the action recognition unit; search, in an action dictionary, whether features matching the extracted trajectory features are stored for the moving part; and if so, take the instruction corresponding to the found features as the instruction corresponding to the motion trajectory of the moving part.
29. The apparatus of claim 25, wherein the category of the moving part is a nose or an ear; and
when executing the corresponding operation according to the instruction after the corresponding instruction is matched according to the motion trajectory of the moving part, the instruction response unit is specifically configured to:
execute an automatic answering operation after a corresponding automatic answering instruction is matched according to the motion trajectory of the nose or the ear.
30. The apparatus of claim 25, wherein the category of the moving part is a nose, an eye, or a finger; and
when executing the corresponding operation according to the instruction after the corresponding instruction is matched according to the motion trajectory of the moving part, the instruction response unit is specifically configured to:
execute an unlocking/danger-reminding operation after a corresponding automatic unlocking/danger-reminding instruction is matched according to the motion trajectory of the nose, the eye, or the finger.
31. An electronic device comprising a processor and a memory;
the memory is used for storing a computer program;
the processor is adapted to execute the computer program to implement the method of any of claims 1-15.
32. A computer-readable storage medium, characterized in that a computer program is stored in the storage medium, which computer program, when being executed by a processor, carries out the method of any one of claims 1-15.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510019275.4A CN105844128B (en) | 2015-01-15 | 2015-01-15 | Identity recognition method and device |
KR1020150173971A KR102465532B1 (en) | 2015-01-15 | 2015-12-08 | Method for recognizing an object and apparatus thereof |
US14/995,275 US10127439B2 (en) | 2015-01-15 | 2016-01-14 | Object recognition method and apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510019275.4A CN105844128B (en) | 2015-01-15 | 2015-01-15 | Identity recognition method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105844128A CN105844128A (en) | 2016-08-10 |
CN105844128B (en) | 2021-03-02 |
Family
ID=56579904
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510019275.4A Active CN105844128B (en) | 2015-01-15 | 2015-01-15 | Identity recognition method and device |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102465532B1 (en) |
CN (1) | CN105844128B (en) |
Families Citing this family (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20180060257A (en) | 2016-11-28 | Method and apparatus for object recognition |
KR102070956B1 (en) * | 2016-12-20 | 2020-01-29 | 서울대학교산학협력단 | Apparatus and method for processing image |
KR20180073118A (en) | 2016-12-22 | 2018-07-02 | 삼성전자주식회사 | Convolutional neural network processing method and apparatus |
CN106597463B (en) * | 2016-12-29 | 2019-03-29 | 天津师范大学 | Photo-electric proximity sensor and detection method based on dynamic visual sensor chip |
KR20180092778A (en) | 2017-02-10 | 2018-08-20 | 한국전자통신연구원 | Apparatus for providing sensory effect information, image processing engine, and method thereof |
RU2656708C1 (en) * | 2017-06-29 | 2018-06-06 | Самсунг Электроникс Ко., Лтд. | Method for separating texts and illustrations in images of documents using a descriptor of document spectrum and two-level clustering |
WO2019017720A1 (en) * | 2017-07-20 | 2019-01-24 | 주식회사 이고비드 | Camera system for protecting privacy and method therefor |
KR101876433B1 (en) * | 2017-07-20 | 2018-07-13 | 주식회사 이고비드 | Activity recognition-based automatic resolution adjustment camera system, activity recognition-based automatic resolution adjustment method and automatic activity recognition method of camera system |
KR102086042B1 (en) * | 2018-02-28 | 2020-03-06 | 서울대학교산학협력단 | Apparatus and method for processing image |
CN108563937B (en) * | 2018-04-20 | 2021-10-15 | 北京锐思智芯科技有限公司 | Vein-based identity authentication method and wristband |
CN108764078B (en) * | 2018-05-15 | 2019-08-02 | 上海芯仑光电科技有限公司 | A kind of processing method and calculating equipment of event data stream |
KR102108953B1 (en) * | 2018-05-16 | 2020-05-11 | 한양대학교 산학협력단 | Robust camera and lidar sensor fusion method and system |
KR102108951B1 (en) * | 2018-05-16 | 2020-05-11 | 한양대학교 산학협력단 | Deep learning-based object detection method and system utilizing global context feature of image |
KR102083192B1 (en) | 2018-09-28 | 2020-03-02 | 주식회사 이고비드 | A method for controlling video anonymization apparatus for enhancing anonymization performance and a apparatus video anonymization apparatus thereof |
CN112118380B (en) * | 2019-06-19 | 2022-10-25 | 北京小米移动软件有限公司 | Camera control method, device, equipment and storage medium |
CN112114653A (en) * | 2019-06-19 | 2020-12-22 | 北京小米移动软件有限公司 | Terminal device control method, device, equipment and storage medium |
CN110796040B (en) * | 2019-10-15 | 2022-07-05 | 武汉大学 | Pedestrian identity recognition method based on multivariate spatial trajectory correlation |
CN110929242B (en) * | 2019-11-20 | 2020-07-10 | 上海交通大学 | Method and system for carrying out attitude-independent continuous user authentication based on wireless signals |
CN111083354A (en) * | 2019-11-27 | 2020-04-28 | 维沃移动通信有限公司 | Video recording method and electronic equipment |
CN111177669A (en) * | 2019-12-11 | 2020-05-19 | 宇龙计算机通信科技(深圳)有限公司 | Terminal identification method and device, terminal and storage medium |
WO2021202526A1 (en) | 2020-03-30 | 2021-10-07 | Sg Gaming, Inc. | Gaming state object tracking |
US11861975B2 (en) | 2020-03-30 | 2024-01-02 | Lnw Gaming, Inc. | Gaming environment tracking optimization |
KR102384419B1 (en) * | 2020-03-31 | 2022-04-12 | 주식회사 세컨핸즈 | Method, system and non-transitory computer-readable recording medium for estimating information about objects |
KR102261880B1 (en) * | 2020-04-24 | 2021-06-08 | 주식회사 핀텔 | Method, appratus and system for providing deep learning based facial recognition service |
KR20220052620A (en) | 2020-10-21 | 2022-04-28 | 삼성전자주식회사 | Object traking method and apparatus performing the same |
CN112669344B (en) * | 2020-12-24 | 2024-05-28 | 北京灵汐科技有限公司 | Method and device for positioning moving object, electronic equipment and storage medium |
KR20220102044A (en) * | 2021-01-12 | 2022-07-19 | 삼성전자주식회사 | Method of acquiring information based on always-on camera |
KR102422962B1 (en) * | 2021-07-26 | 2022-07-20 | 주식회사 크라우드웍스 | Automatic image classification and processing method based on continuous processing structure of multiple artificial intelligence model, and computer program stored in a computer-readable recording medium to execute the same |
KR20230056482A (en) * | 2021-10-20 | 2023-04-27 | 한화비전 주식회사 | Apparatus and method for compressing images |
CN114077730A (en) * | 2021-11-26 | 2022-02-22 | 广域铭岛数字科技有限公司 | Login verification method, vehicle unlocking system, equipment and storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101034189B1 (en) * | 2009-07-08 | 2011-05-12 | (주)엑스퍼넷 | Adult image detection method using object analysis and multi resizing scan |
KR101880998B1 (en) * | 2011-10-14 | 2018-07-24 | 삼성전자주식회사 | Apparatus and Method for motion recognition with event base vision sensor |
KR101441285B1 (en) * | 2012-12-26 | 2014-09-23 | 전자부품연구원 | Multi-body Detection Method based on a NCCAH(Normalized Cross-Correlation of Average Histogram) And Electronic Device supporting the same |
US9829984B2 (en) * | 2013-05-23 | 2017-11-28 | Fastvdo Llc | Motion-assisted visual language for human computer interfaces |
KR102227494B1 (en) * | 2013-05-29 | 2021-03-15 | 삼성전자주식회사 | Apparatus and method for processing an user input using movement of an object |
- 2015-01-15 CN CN201510019275.4A patent/CN105844128B/en active Active
- 2015-12-08 KR KR1020150173971A patent/KR102465532B1/en active IP Right Grant
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102129570A (en) * | 2010-01-19 | 2011-07-20 | 中国科学院自动化研究所 | Method for designing manifold based regularization based semi-supervised classifier for dynamic vision |
CN103533234A (en) * | 2012-07-05 | 2014-01-22 | 三星电子株式会社 | Image sensor chip, method of operating the same, and system including the image sensor chip |
CN104182169A (en) * | 2013-05-23 | 2014-12-03 | 三星电子株式会社 | Method and apparatus for user interface based on gesture |
US20140354537A1 (en) * | 2013-05-29 | 2014-12-04 | Samsung Electronics Co., Ltd. | Apparatus and method for processing user input using motion of object |
CN103761460A (en) * | 2013-12-18 | 2014-04-30 | 微软公司 | Method for authenticating users of display equipment |
CN103955639A (en) * | 2014-03-18 | 2014-07-30 | 深圳市中兴移动通信有限公司 | Motion sensing game machine and login method and device for motion sensing game |
Non-Patent Citations (1)
Title |
---|
Jurgen Kogler et al., "Event-Based Stereo Matching Approaches for Frameless Address Event Stereo Data", ISVC 2011, 2011-09-28, pp. 674-684 *
Also Published As
Publication number | Publication date |
---|---|
KR102465532B1 (en) | 2022-11-11 |
CN105844128A (en) | 2016-08-10 |
KR20160088224A (en) | 2016-07-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105844128B (en) | | Identity recognition method and device |
US10127439B2 (en) | | Object recognition method and apparatus |
US11383676B2 (en) | | Vehicles, vehicle door unlocking control methods and apparatuses, and vehicle door unlocking systems |
CN102663452B (en) | | Suspicious act detecting method based on video analysis |
CN109145742B (en) | | Pedestrian identification method and system |
US10534957B2 (en) | | Eyeball movement analysis method and device, and storage medium |
JP5333080B2 (en) | | Image recognition system |
JP2018032391A (en) | | Liveness test method and apparatus |
WO2019080578A1 (en) | | 3d face identity authentication method and apparatus |
WO2017071064A1 (en) | | Area extraction method, and model training method and apparatus |
CN113366487A (en) | | Operation determination method and device based on expression group and electronic equipment |
KR101159164B1 (en) | | Fake video detecting apparatus and method |
US10521704B2 (en) | | Method and apparatus for distributed edge learning |
TWI492193B (en) | | Method for triggering signal and electronic apparatus for vehicle |
US11227156B2 (en) | | Personalized eye openness estimation |
US20230222842A1 (en) | | Improved face liveness detection using background/foreground motion analysis |
CN108197585A (en) | | Recognition algorithms and device |
CN113284274A (en) | | Trailing identification method and computer readable storage medium |
Li et al. | | An accurate and efficient user authentication mechanism on smart glasses based on iris recognition |
KR101337554B1 (en) | | Apparatus for trace of wanted criminal and missing person using image recognition and method thereof |
WO2016110061A1 (en) | | Terminal unlocking method and device based on eye print recognition |
CN111259757B (en) | | Living body identification method, device and equipment based on image |
CN108596057B (en) | | Information security management system based on face recognition |
CN110688969A (en) | | Video frame human behavior identification method |
CN113837006A (en) | | Face recognition method and device, storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||