CN111626074A - Face classification method and device - Google Patents
Face classification method and device
- Publication number
- CN111626074A (application number CN201910145371.1A)
- Authority
- CN
- China
- Prior art keywords
- face
- target
- face group
- group
- features
- Prior art date: 2019-02-27
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Abstract
The invention discloses a face classification method and device. The method comprises: extracting, from a face picture, target features corresponding to the face picture; obtaining the similarity between the target features and each face group according to the target features and the face features stored in each face group, wherein each face group stores the face features of one and the same person; determining, according to the similarity corresponding to each face group, the target face group to which the target features belong; and storing the target features in the target face group and returning the identification information corresponding to the target face group. The invention achieves automatic classification of face features without collecting face samples or registering face feature labels, and returns the identification information of the resulting class, which saves time, improves the efficiency of face feature classification, and makes the technique practical for ordinary users and households.
Description
Technical Field
The invention relates to the technical field of image recognition and classification, in particular to a face classification method and device.
Background
Smart devices with cameras, such as smart cameras, smart doorbells, smartphones and robots, are becoming increasingly common. These devices are generally equipped with a face recognition function, and their typical workflow is to capture an image or video, process and analyze it, and recognize the faces it contains.
Typically, a face recognition algorithm requires several face pictures of each person to be labeled manually; when a new picture arrives, the algorithm compares it with the labeled face data to determine which person the picture corresponds to. Because this classification approach depends on a large set of face samples, a practical deployment must spend a great deal of time collecting and labeling face samples to complete registration before faces can be classified and the classification identification information returned.
Disclosure of Invention
In view of the above problems, the face classification method and device provided by the invention achieve automatic classification of face features without collecting face samples or registering face feature labels, and return the identification information of the resulting class, thereby saving time, improving the efficiency of face feature classification, and making the technique practical for ordinary users and households.
In a first aspect, the present application provides the following technical solutions through an embodiment:
a method of face classification, the method comprising:
extracting, from a face picture, target features corresponding to the face picture; obtaining the similarity between the target features and each face group according to the target features and the face features stored in each face group, wherein each face group stores the face features of one and the same person; determining, according to the similarity corresponding to each face group, the target face group to which the target features belong; and storing the target features in the target face group, and returning the identification information corresponding to the target face group.
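The four steps above, together with the preferred refinements described below (threshold-based group selection, new-group creation and a per-group storage cap), can be illustrated with a minimal sketch in Python. The feature representation (NumPy vectors), the cosine similarity measure, the threshold value, the cap and all identifiers are illustrative assumptions, not part of the claimed implementation:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_face(target_feature, face_groups, threshold=0.7, max_per_group=10):
    """Assign a target feature to a face group and return that group's id.

    face_groups: dict mapping group id -> list of stored feature vectors,
    where each group holds the features of one person.
    """
    # Similarity between the target feature and a group = average of the
    # similarities to every face feature stored in that group.
    group_scores = {
        gid: float(np.mean([cosine_similarity(target_feature, f) for f in feats]))
        for gid, feats in face_groups.items() if feats
    }

    if group_scores and max(group_scores.values()) >= threshold:
        # The most similar group is similar enough: reuse it.
        target_gid = max(group_scores, key=group_scores.get)
    else:
        # No existing group matches well enough: create a new face group.
        target_gid = f"face_group_{len(face_groups) + 1}"
        face_groups[target_gid] = []

    # Enforce the per-group storage threshold by evicting the oldest feature.
    if len(face_groups[target_gid]) >= max_per_group:
        face_groups[target_gid].pop(0)
    face_groups[target_gid].append(target_feature)
    return target_gid
```

For example, calling `classify_face(feature, groups)` twice with features of the same person should return the same group id, while a sufficiently dissimilar feature yields a new id.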
Preferably, obtaining the similarity between the target feature and each face group according to the target feature and the stored face features in each face group comprises:
calculating the similarity between the target feature and each face feature stored in each face group; and averaging the similarities corresponding to the face features in the same face group, and taking the average as the similarity between the target feature and that face group.
Preferably, determining the target face group to which the target feature belongs according to the similarity corresponding to each face group comprises:
if the maximum of the similarities is greater than a preset judgment threshold, taking the face group corresponding to the maximum similarity as the target face group.
Preferably, determining the target face group to which the target feature belongs according to the similarity corresponding to each face group comprises:
if the maximum of the similarities is smaller than a preset judgment threshold, creating a new face group; and taking the newly created face group as the target face group.
Preferably, after the newly created face group is taken as the target face group, the method comprises:
assigning the identification information to the target face group.
Preferably, saving the target feature to the target face group comprises:
judging whether the number of face features stored in the target face group exceeds a preset storage threshold; if it exceeds the preset storage threshold, deleting the face feature with the earliest storage time and then storing the target feature in the target face group; and if it does not exceed the preset storage threshold, storing the target feature in the target face group directly.
In a second aspect, based on the same inventive concept, the present application provides the following technical solutions through an embodiment:
an apparatus for face classification, the apparatus comprising:
the feature extraction module is used for extracting target features corresponding to the face image from the face image; the similarity calculation module is used for obtaining the similarity between the target features and each face group according to the target features and the face features stored in each face group, wherein each face group stores the face features of one and the same person; the face group determining module is used for determining the target face group to which the target features belong according to the similarity corresponding to each face group; and the feature storage module is used for storing the target features in the target face group and returning the identification information corresponding to the target face group.
Preferably, the similarity calculation module is further configured to:
calculate the similarity between the target feature and each face feature stored in each face group; and average the similarities corresponding to the face features in the same face group, taking the average as the similarity between the target feature and that face group.
Preferably, the face group determination module is further configured to:
if the maximum of the similarities is greater than a preset judgment threshold, take the face group corresponding to the maximum similarity as the target face group.
Preferably, the face group determination module is further configured to:
if the maximum of the similarities is smaller than a preset judgment threshold, create a new face group;
and take the newly created face group as the target face group.
Preferably, the apparatus further comprises an identification assignment module, configured to:
assign the identification information to the target face group after the newly created face group is taken as the target face group.
Preferably, the feature saving module is further configured to:
judge whether the number of face features stored in the target face group exceeds a preset storage threshold; if it exceeds the preset storage threshold, delete the face feature with the earliest storage time and then store the target feature in the target face group; and if it does not exceed the preset storage threshold, store the target feature in the target face group directly.
In a third aspect, based on the same inventive concept, the present application provides the following technical solutions through an embodiment:
a face classification apparatus comprising a processor and a memory coupled to the processor, the memory storing instructions that, when executed by the processor, cause the face classification apparatus to perform the steps of the method of any of the first aspects above.
In a fourth aspect, based on the same inventive concept, the present application provides the following technical solutions through an embodiment:
a computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method of any of the first aspects.
The embodiment of the invention provides a face classification method and device. The method extracts, from a face picture, the target features corresponding to the face picture; obtains the similarity between the target features and each face group according to the target features and the face features stored in each face group, wherein each face group stores the face features of one and the same person; determines, according to the similarity corresponding to each face group, the target face group to which the target features belong; and stores the target features in the target face group and returns the identification information corresponding to the target face group. Throughout the classification process, the invention does not require face samples to be collected, labeled or registered in advance; the target face group is determined directly from the similarity calculation results, so the target features are classified automatically and the identification information stored for the target face group is returned. This saves the time otherwise spent on collecting face samples and registering labels, improves the efficiency of face feature classification, and makes the technique practical for ordinary users and households.
The foregoing is only an overview of the technical solutions of the present invention. The embodiments of the present invention are described below so that the technical means of the present invention can be understood more clearly and the above and other objects, features and advantages of the present invention become more apparent.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a flowchart illustrating a face classification method according to a first embodiment of the present invention;
FIG. 2 is a flow chart illustrating the sub-steps of step S20 of FIG. 1;
FIG. 3 is a functional block diagram of a face classification apparatus according to a second embodiment of the present invention;
fig. 4 shows a block diagram of a face classification apparatus according to a third embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
First embodiment
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for classifying a human face according to a first embodiment of the present invention. The method specifically comprises the following steps:
step S10: and extracting target characteristics corresponding to the face picture from the face picture.
Step S20: and according to the target features and the stored face features in each face group, obtaining the similarity between the target features and each face group, wherein one face group correspondingly stores the corresponding face features of the same person.
Step S30: and determining a target face group to which the target features belong according to the similarity corresponding to each face group.
Step S40: and storing the target features to the target face group, and returning identification information corresponding to the target face group.
Before step S10, the method may further include a step of obtaining the face picture; specifically, the face picture may be obtained by taking a picture with a camera installed on the smart device, or by recording a video and decoding it.
In step S10, the face picture is a picture or a video frame containing a face captured by a smart device, such as a smart doorbell or a smartphone. The target features corresponding to the face picture are features that can identify and represent the face in the picture. For example, they may be extracted with existing methods such as LBP (Local Binary Patterns) feature extraction, the Eigenface method, or facial feature point detection, and their representation may be a vector group describing face texture, an eigenface, facial feature points and the like, without limitation.
It should be noted that the above feature extraction methods are all existing methods, and detailed implementation processes thereof are not described again.
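As an illustration only, the following sketch computes a simple LBP histogram feature with scikit-image; the library choice, the parameter values and the normalisation are assumptions made for the example and are not prescribed by this embodiment:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_face_feature(gray_face, points=8, radius=1):
    """Return an L2-normalised uniform-LBP histogram for a grayscale face crop."""
    codes = local_binary_pattern(gray_face, P=points, R=radius, method="uniform")
    n_bins = points + 2  # the "uniform" method yields P + 2 distinct codes
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    hist = hist.astype(np.float64)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

Any other extractor (an eigenface projection, facial landmark coordinates, or a learned embedding) could be substituted, as long as every face picture is mapped to a comparable feature vector.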
In step S20, the face groups are used to store face features, and the face features stored in the same face group all belong to the same person. A storage threshold can be set for a face group to cap the number of face features it stores, which saves storage resources and similarity calculation resources.
For example, face group A is used to store the face features of user A; if its storage upper limit is 10 face features, the number of face features stored in face group A is at most 10.
In addition, the storage footprint of a face group may also be limited; that is, the storage threshold may be defined either as a feature count or as a data size, for example, each face group occupying no more than 1024 KB.
Specifically, referring to fig. 2, step S20 includes:
step S21: and calculating the similarity between the target feature and each face feature in each stored face group.
Step S22: and acquiring an average value between each similarity corresponding to each face feature in the same face group, and taking the average value as the similarity between the target feature and the current face group.
For example, a smart doorbell (or its server) stores two face groups, face group A and face group B, each with a storage upper limit of 10 face features; face group A currently stores 10 face features and face group B stores 7. To compute the group similarities, the similarity between the target feature and each of the 10 face features in face group A is calculated, yielding 10 similarities; these 10 similarities are then averaged, and the average is taken as the similarity between the target feature and face group A. Likewise, for face group B, the similarity between the target feature and each of its 7 face features is calculated, yielding 7 similarities; their average is taken as the similarity between the target feature and face group B.
The similarity calculation can be carried out on the local smart device or on a cloud server. For a smart device with limited data processing capability (a weaker processor), such as a smart doorbell or a smart band, the data may be uploaded to the cloud for the similarity calculation. For a smart device with stronger data processing capability (a more powerful processor), such as a smartphone or a smart robot, the similarity can be calculated on the device itself.
In addition, the specific similarity measure is not limited; for example, the similarity between the target feature and a face feature may be measured by cosine distance, Euclidean distance, Hamming distance and the like.
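For illustration, the sketch below shows how such measures might be expressed and plugged into the per-group averaging of steps S21 and S22; mapping the Euclidean distance to a similarity via 1/(1+d) is one common convention assumed here, not something specified by this embodiment:

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity: 1 for identical directions, -1 for opposite ones.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean_sim(a, b):
    # Map Euclidean distance into (0, 1]; identical vectors give 1.
    return 1.0 / (1.0 + float(np.linalg.norm(np.asarray(a) - np.asarray(b))))

def hamming_sim(a, b):
    # For binarised feature codes: the fraction of matching bits.
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    return float(np.mean(a == b))

def group_similarity(target_feature, group_features, sim=cosine_sim):
    # Steps S21 and S22: average the pairwise similarities over one face group.
    return float(np.mean([sim(target_feature, f) for f in group_features]))
```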
In step S30, specifically, after the similarity between the target feature and each face group is obtained, the face group corresponding to the largest similarity may be taken as the target face group in which to store the target feature.
However, to further improve the accuracy of target feature storage and prevent two similar face features that do not belong to the same person from being mistakenly stored in the same face group, a judgment threshold may be set: the target feature is stored into a face group only when the maximum similarity is greater than or equal to the judgment threshold. Otherwise, it is judged that none of the existing face groups can store the target feature, and a new face group needs to be created as the target face group for storing it.
To make it easy to determine the person corresponding to each face group, corresponding identification information can be added when a face group is created, so that different face groups can be distinguished. For example, each face group may be identified by a serial number, or the user may define the identification of a newly created face group; for instance, after a newly created face group is taken as the target face group, if the person corresponding to the target feature is Zhang San, the target face group can be labeled with "Zhang San" or with Zhang San's ID-card number as its identification information.
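A toy registry along these lines might look as follows; the class, its defaults and the serial-number scheme are purely illustrative assumptions:

```python
import itertools

class FaceGroupRegistry:
    """Keeps the identification information assigned to each face group."""

    def __init__(self):
        self._auto_id = itertools.count(1)
        self.labels = {}  # group id -> identification information

    def new_group(self, label=None):
        # Default to a serial-number style label; the user may override it
        # later with a custom one (e.g. a name or an ID-card number).
        gid = next(self._auto_id)
        self.labels[gid] = label if label is not None else f"face_group_{gid}"
        return gid

    def relabel(self, gid, label):
        self.labels[gid] = label

# Usage: a new group gets a serial label, which the user can later rename.
registry = FaceGroupRegistry()
gid = registry.new_group()          # labels[gid] == "face_group_1"
registry.relabel(gid, "Zhang San")  # user-defined identification information
```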
In step S40, before the target feature is stored in the target face group, it should further be determined whether the number of face features already stored in the target face group exceeds a preset storage threshold. The storage thresholds of different face groups may be the same or different; for example, in a smart doorbell, a larger storage upper limit may be set for the face group storing the owner's face features, so as to improve the accuracy of the similarity calculation and ensure that the owner of the smart doorbell is identified correctly.
Further, if the number of face features stored in the target face group exceeds the preset storage threshold, the face feature with the earliest storage time is deleted and the target feature is then stored in the target face group; if it does not exceed the preset storage threshold, the target feature is stored in the target face group directly. Continuing the example above, if the similarity between face group A and the target feature is greater than that of face group B and greater than the judgment threshold, then face group A is already at its storage limit, so the face feature with the earliest storage time in face group A must be deleted before the target feature is stored. If instead the similarity between face group B and the target feature is greater than that of face group A and greater than the judgment threshold, face group B stores only 7 face features and has not reached its upper limit, so the target feature can be stored in face group B without deleting any face feature.
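One simple way to realise this per-group cap, shown only as a sketch, is a bounded deque, which discards the oldest stored feature automatically when a new one is appended; the cap values are assumptions chosen for the example:

```python
from collections import deque

# Different face groups may have different storage thresholds, e.g. a larger
# cap for the device owner's group than for other groups.
owner_group = deque(maxlen=20)
visitor_group = deque(maxlen=10)

def save_feature(face_group, target_feature):
    # Appending to a full deque drops the element at the opposite end,
    # i.e. the face feature with the earliest storage time.
    face_group.append(target_feature)
```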
In step S40, the returned identification information may be, without limitation, the name of the person corresponding to the face group, a user-defined number, an ID-card number, and the like.
In summary, the face classification method provided in the embodiment of the present invention extracts, from a face picture, the target features corresponding to the face picture; obtains the similarity between the target features and each face group according to the target features and the face features stored in each face group, wherein each face group stores the face features of one and the same person; determines, according to the similarity corresponding to each face group, the target face group in which to store the target features; and stores the target features in the target face group and returns the identification information corresponding to the target face group. Throughout the classification process, no face samples need to be collected, labeled or registered in advance: the target face group is determined directly from the similarity calculation results, so the target features are classified automatically, that is, faces are grouped automatically without manual labeling. Moreover, a single input face picture is enough to return its identification information (such as an ID); there is no need to input many faces before the identification information of each face can be returned. This saves the time spent on collecting face samples and registering labels, improves the efficiency of face feature classification, is convenient for ordinary users and households, and can be popularized on a large scale.
Second embodiment
Based on the same inventive concept, the second embodiment of the present invention provides a face classification apparatus 400. Fig. 3 shows a functional block diagram of a face classification apparatus 400 according to a second embodiment of the present invention.
Specifically, the apparatus 400 includes:
the feature extraction module 401 is configured to extract a target feature corresponding to a face image from the face image.
A similarity calculating module 402, configured to obtain the similarity between the target feature and each face group according to the target feature and the face features stored in each face group, where each face group stores the face features of the same person.
A face group determining module 403, configured to determine, according to the similarity corresponding to each face group, a target face group to which the target feature belongs.
A feature storage module 404, configured to store the target feature in the target face group, and return identification information corresponding to the target face group.
As an optional implementation manner, the similarity calculation module 402 is further configured to:
calculate the similarity between the target feature and each face feature stored in each face group; and average the similarities corresponding to the face features in the same face group, taking the average as the similarity between the target feature and that face group.
As an optional implementation manner, the face group determining module 403 is further configured to:
if the maximum of the similarities is greater than a preset judgment threshold, take the face group corresponding to the maximum similarity as the target face group.
As an optional implementation manner, the face group determining module 403 is further configured to:
if the maximum of the similarities is smaller than a preset judgment threshold, create a new face group; and take the newly created face group as the target face group.
As an optional implementation manner, the apparatus further includes an identification assignment module, configured to:
assign the identification information to the target face group after the newly created face group is taken as the target face group.
As an alternative implementation, the feature saving module 404 is further configured to:
judge whether the number of face features stored in the target face group exceeds a preset storage threshold; if it exceeds the preset storage threshold, delete the face feature with the earliest storage time and then store the target feature in the target face group; and if it does not exceed the preset storage threshold, store the target feature in the target face group directly.
It should be noted that the apparatus 400 provided by the embodiment of the present invention has the same technical effects as those of the foregoing method embodiments, and for the sake of brief description, reference may be made to the corresponding contents in the foregoing method embodiments for the parts of the apparatus embodiments that are not mentioned.
Third embodiment
In addition, based on the same inventive concept, a third embodiment of the present invention further provides a face classification apparatus, including a processor and a memory, the memory being coupled to the processor, the memory storing instructions that, when executed by the processor, cause the face classification apparatus to perform the following operations:
extracting target features corresponding to the face picture from the face picture; according to the target features and stored face features in each face group, obtaining the similarity between the target features and each face group, wherein one face group correspondingly stores the corresponding face features of the same person; determining a target face group to which the target features belong according to the corresponding similarity of each face group; and storing the target features to the target face group, and returning identification information corresponding to the target face group.
It should be noted that, for the face classification device provided in the embodiment of the present invention, the specific implementation and the technical effects of each step are the same as those of the foregoing method embodiment; for the sake of brevity, reference may be made to the corresponding contents of the foregoing method embodiment for the parts not mentioned in this embodiment.
In the embodiment of the present invention, the face classification device runs an operating system and third-party application programs. The face classification device can be a terminal device such as a tablet computer, a mobile phone, a smart doorbell, a floor-sweeping robot, a notebook computer, a personal computer (PC), a wearable device or a vehicle-mounted terminal.
Fig. 4 shows a block diagram of modules of an exemplary face classification apparatus 500. As shown in fig. 4, the face classification apparatus 500 includes a memory 502, a memory controller 504, one or more processors 506 (only one is shown), a peripheral interface 508, a network module 510, an input-output module 512, a display module 514, and the like. These components communicate with one another via one or more communication buses/signal lines 516.
The memory 502 may be used to store software programs and modules, such as program instructions/modules corresponding to the face classification method and apparatus in the embodiment of the present invention, and the processor 506 executes various functional applications and data processing, such as the face classification method provided in the embodiment of the present invention, by operating the software programs and modules stored in the memory 502.
The memory 502 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. Access to the memory 502 by the processor 506, and possibly other components, may be under the control of the memory controller 504.
The network module 510 is used for receiving and transmitting network signals. The network signal may include a wireless signal or a wired signal.
The input/output module 512 is used for receiving input data from the user, so as to realize interaction between the user and the face classification device. The input/output module 512 can be, but is not limited to, a mouse, a keyboard, a touch screen, and the like.
The display module 514 provides an interactive interface (e.g., a user interface) between the face classification device 500 and the user or for displaying image data to the user for reference. In this embodiment, the display module 514 may be a liquid crystal display or a touch display. In the case of a touch display, the display can be a capacitive touch screen or a resistive touch screen, which supports single-point and multi-point touch operations. The support of single-point and multi-point touch operations means that the touch display can sense touch operations simultaneously generated from one or more positions on the touch display, and the sensed touch operations are sent to the processor for calculation and processing.
It will be appreciated that the configuration shown in fig. 4 is merely illustrative and that the face classification apparatus 500 may also include more or fewer components than shown in fig. 4, or have a different configuration than that shown in fig. 4. The components shown in fig. 4 may be implemented in hardware, software, or a combination thereof.
Fourth embodiment
A fourth embodiment of the present invention provides a computer storage medium. If the functional modules integrated in the face classification device of the second embodiment are implemented in the form of software functional modules and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the face classification method of the first embodiment may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content a computer-readable medium may contain can be increased or decreased as appropriate according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the face classification apparatus according to embodiments of the invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third and so on does not indicate any ordering; these words may be interpreted as names.
The invention discloses A1. A face classification method, characterized in that the method comprises the following steps:
extracting target features corresponding to the face picture from the face picture;
according to the target features and the face features stored in each face group, obtaining the similarity between the target features and each face group, wherein each face group stores the face features of one and the same person;
determining the target face group to which the target features belong according to the similarity corresponding to each face group;
and storing the target features in the target face group, and returning the identification information corresponding to the target face group.
A2. The method according to A1, wherein obtaining the similarity between the target feature and each face group according to the target feature and the stored face features in each face group comprises:
calculating the similarity between the target feature and each face feature stored in each face group;
and averaging the similarities corresponding to the face features in the same face group, and taking the average as the similarity between the target feature and that face group.
A3. The method according to A1, wherein determining the target face group to which the target feature belongs according to the similarity corresponding to each face group comprises:
if the maximum of the similarities is greater than a preset judgment threshold, taking the face group corresponding to the maximum similarity as the target face group.
A4. The method according to A1, wherein determining the target face group to which the target feature belongs according to the similarity corresponding to each face group comprises:
if the maximum of the similarities is smaller than a preset judgment threshold, creating a new face group;
and taking the newly created face group as the target face group.
A5. The method according to A4, wherein after the newly created face group is taken as the target face group, the method comprises:
assigning the identification information to the target face group.
A6. The method according to A1, wherein saving the target features to the target face group comprises:
judging whether the number of face features stored in the target face group exceeds a preset storage threshold;
if it exceeds the preset storage threshold, deleting the face feature with the earliest storage time, and then storing the target feature in the target face group;
and if it does not exceed the preset storage threshold, storing the target feature in the target face group directly.
The invention discloses B7. A face classification device, characterized in that the device comprises:
the feature extraction module is used for extracting target features corresponding to the face image from the face image;
the similarity calculation module is used for obtaining the similarity between the target features and each face group according to the target features and the face features stored in each face group, wherein each face group stores the face features of one and the same person;
the face group determining module is used for determining the target face group to which the target features belong according to the similarity corresponding to each face group;
and the feature storage module is used for storing the target features in the target face group and returning the identification information corresponding to the target face group.
B8. The apparatus of B7, wherein the similarity calculation module is further configured to:
calculate the similarity between the target feature and each face feature stored in each face group;
and average the similarities corresponding to the face features in the same face group, taking the average as the similarity between the target feature and that face group.
B9. The apparatus of B7, wherein the face group determination module is further configured to:
if the maximum of the similarities is greater than a preset judgment threshold, take the face group corresponding to the maximum similarity as the target face group.
B10. The apparatus of B7, wherein the face group determination module is further configured to:
if the maximum of the similarities is smaller than a preset judgment threshold, create a new face group;
and take the newly created face group as the target face group.
B11. The apparatus of B10, further comprising an identification assignment module, configured to:
assign the identification information to the target face group after the newly created face group is taken as the target face group.
B12. The apparatus of B7, wherein the feature saving module is further configured to:
judge whether the number of face features stored in the target face group exceeds a preset storage threshold;
if it exceeds the preset storage threshold, delete the face feature with the earliest storage time, and then store the target feature in the target face group;
and if it does not exceed the preset storage threshold, store the target feature in the target face group directly.
C13. A face classification apparatus, comprising a processor and a memory coupled to the processor, the memory storing instructions that, when executed by the processor, cause the face classification apparatus to perform the steps of the method of any one of A1-A6.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of the method of any one of A1-A6.
Claims (10)
1. A face classification method, characterized in that the method comprises:
extracting target features corresponding to the face picture from the face picture;
according to the target features and the face features stored in each face group, obtaining the similarity between the target features and each face group, wherein each face group stores the face features of one and the same person;
determining the target face group to which the target features belong according to the similarity corresponding to each face group;
and storing the target features in the target face group, and returning the identification information corresponding to the target face group.
2. The method of claim 1, wherein obtaining the similarity between the target feature and each face group according to the target feature and the stored face features in each face group comprises:
calculating the similarity between the target feature and each face feature stored in each face group;
and averaging the similarities corresponding to the face features in the same face group, and taking the average as the similarity between the target feature and that face group.
3. The method according to claim 1, wherein determining the target face group to which the target feature belongs according to the similarity corresponding to each face group comprises:
if the maximum of the similarities is greater than a preset judgment threshold, taking the face group corresponding to the maximum similarity as the target face group.
4. The method according to claim 1, wherein determining the target face group to which the target feature belongs according to the similarity corresponding to each face group comprises:
if the maximum of the similarities is smaller than a preset judgment threshold, creating a new face group;
and taking the newly created face group as the target face group.
5. The method according to claim 4, wherein after the newly created face group is taken as the target face group, the method comprises:
assigning the identification information to the target face group.
6. The method of claim 1, wherein saving the target features to the target face group comprises:
judging whether the number of face features stored in the target face group exceeds a preset storage threshold;
if it exceeds the preset storage threshold, deleting the face feature with the earliest storage time, and then storing the target feature in the target face group;
and if it does not exceed the preset storage threshold, storing the target feature in the target face group directly.
7. An apparatus for classifying a human face, the apparatus comprising:
the feature extraction module is used for extracting target features corresponding to the face image from the face image;
the similarity calculation module is used for obtaining the similarity between the target features and each face group according to the target features and the face features stored in each face group, wherein each face group stores the face features of one and the same person;
the face group determining module is used for determining the target face group to which the target features belong according to the similarity corresponding to each face group;
and the feature storage module is used for storing the target features in the target face group and returning the identification information corresponding to the target face group.
8. The apparatus of claim 7, wherein the similarity calculation module is further configured to:
calculate the similarity between the target feature and each face feature stored in each face group;
and average the similarities corresponding to the face features in the same face group, taking the average as the similarity between the target feature and that face group.
9. A face classification apparatus comprising a processor and a memory coupled to the processor, the memory storing instructions that, when executed by the processor, cause the face classification apparatus to perform the steps of the method of any of claims 1-6.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910145371.1A CN111626074A (en) | 2019-02-27 | 2019-02-27 | Face classification method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910145371.1A CN111626074A (en) | 2019-02-27 | 2019-02-27 | Face classification method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111626074A true CN111626074A (en) | 2020-09-04 |
Family ID: 72259618
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910145371.1A Pending CN111626074A (en) | 2019-02-27 | 2019-02-27 | Face classification method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111626074A (en) |
- 2019-02-27: Application CN201910145371.1A filed in China; published as CN111626074A, status pending
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101515324A (en) * | 2009-01-21 | 2009-08-26 | 上海银晨智能识别科技有限公司 | Control system applied to multi-pose face recognition and a method thereof |
CN103177102A (en) * | 2013-03-22 | 2013-06-26 | 北京小米科技有限责任公司 | Method and device of image processing |
CN104036259A (en) * | 2014-06-27 | 2014-09-10 | 北京奇虎科技有限公司 | Face similarity recognition method and system |
CN104036261A (en) * | 2014-06-30 | 2014-09-10 | 北京奇虎科技有限公司 | Face recognition method and system |
CN104133875A (en) * | 2014-07-24 | 2014-11-05 | 北京中视广信科技有限公司 | Face-based video labeling method and face-based video retrieving method |
CN105869235A (en) * | 2015-01-20 | 2016-08-17 | 阿里巴巴集团控股有限公司 | Safe gate inhibition method and system thereof |
WO2017162076A1 (en) * | 2016-03-24 | 2017-09-28 | 北京握奇数据股份有限公司 | Face identification method and system |
CN106156755A (en) * | 2016-07-29 | 2016-11-23 | 深圳云天励飞技术有限公司 | Similarity calculating method in a kind of recognition of face and system |
CN108021846A (en) * | 2016-11-01 | 2018-05-11 | 杭州海康威视数字技术股份有限公司 | A kind of face identification method and device |
CN106778653A (en) * | 2016-12-27 | 2017-05-31 | 北京光年无限科技有限公司 | Towards the exchange method and device based on recognition of face Sample Storehouse of intelligent robot |
CN109145679A (en) * | 2017-06-15 | 2019-01-04 | 杭州海康威视数字技术股份有限公司 | A kind of method, apparatus and system issuing warning information |
CN107741996A (en) * | 2017-11-30 | 2018-02-27 | 北京奇虎科技有限公司 | Family's map construction method and device based on recognition of face, computing device |
CN107944427A (en) * | 2017-12-14 | 2018-04-20 | 厦门市美亚柏科信息股份有限公司 | Dynamic human face recognition methods and computer-readable recording medium |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112114985A (en) * | 2020-09-22 | 2020-12-22 | 杭州海康威视系统技术有限公司 | Method, device and equipment for issuing face information |
CN112114985B (en) * | 2020-09-22 | 2024-03-01 | 杭州海康威视系统技术有限公司 | Method, device and equipment for issuing face information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |