CN108875654B - Face feature acquisition method and device - Google Patents


Publication number
CN108875654B
CN108875654B (application CN201810659963.0A)
Authority
CN
China
Prior art keywords
face
feature information
image
preset
image set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810659963.0A
Other languages
Chinese (zh)
Other versions
CN108875654A (en)
Inventor
Wu Wei (吴伟)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd filed Critical Shenzhen Intellifusion Technologies Co Ltd
Priority to CN201810659963.0A priority Critical patent/CN108875654B/en
Publication of CN108875654A publication Critical patent/CN108875654A/en
Application granted granted Critical
Publication of CN108875654B publication Critical patent/CN108875654B/en
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 — Feature extraction; Face representation
    • G06V40/172 — Classification, e.g. identification

Abstract

The invention discloses a face feature acquisition method and device. The method includes, but is not limited to, the following steps: acquiring an image that includes a human face; detecting face feature information in the image, where the image carries auxiliary information; determining, among a plurality of pieces of preset feature information, the preset feature information that matches the face feature information, where each piece of preset feature information corresponds to a face image set; and adding the image to the face image set corresponding to the matched preset feature information, where each face image set comprises a plurality of images with different auxiliary information. The method can automatically collect high-quality training data for a face recognition algorithm, improving data acquisition efficiency and reducing labor and material costs.

Description

Face feature acquisition method and device
Technical Field
The invention relates to the field of computer information, in particular to a face feature acquisition method and device.
Background
Face recognition is a biometric technology that identifies people based on their facial feature information. It comprises a series of related techniques: collecting images or video streams containing faces with a camera, automatically detecting and tracking the faces in those images, and then recognizing the detected faces. The face recognition algorithm is the core of this technology, and its recognition accuracy depends to a great extent on the training data: the higher the quality of the training data, the higher the recognition accuracy. However, collecting high-quality training data requires a significant investment of labor and material costs.
Disclosure of Invention
The embodiment of the invention provides a face feature acquisition method and device, which can automatically collect high-quality images as training data of a face recognition algorithm, thereby reducing the cost of manpower and material resources.
In a first aspect, the present application provides a method for acquiring facial features, including:
acquiring an image comprising a human face;
detecting face feature information in the image, wherein the image comprises auxiliary information;
determining preset feature information matched with the face feature information in a plurality of preset feature information; each preset feature information corresponds to a face image set;
adding the image to a face image set corresponding to preset feature information matched with the face feature information; wherein each face image set comprises a plurality of images with different auxiliary information.
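The four claimed steps of the first aspect can be sketched as a minimal pipeline. The patent does not specify the feature extractor or the matcher, so `detect_features` and `match_preset` below are hypothetical callables standing in for whatever models an implementation plugs in:

```python
def collect_face_image(image, detect_features, match_preset, image_sets):
    """Sketch of the first-aspect method: detect face features in an image
    carrying auxiliary information, match them against preset feature
    information, and file the image into the matching face image set.

    detect_features(image) -> feature representation (hypothetical)
    match_preset(features) -> key of the matched preset info, or None
    image_sets             -> dict mapping preset-info key to list of images
    """
    features = detect_features(image)
    key = match_preset(features)
    if key is not None:
        # Each preset feature information corresponds to one face image set.
        image_sets.setdefault(key, []).append(image)
    return image_sets
```

With stub callables, an unmatched image is simply not filed, mirroring the claim that only matched images are added to a set.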
In a second aspect, the present application provides a facial feature acquisition device, the device comprising:
the acquisition unit is used for acquiring an image comprising a human face;
the detection unit is used for detecting the face feature information in the image, and the image comprises auxiliary information;
the matching unit is used for determining preset feature information matched with the face feature information in a plurality of preset feature information; each preset feature information in the plurality of preset feature information corresponds to a face image set;
the adding unit is used for adding the image into a face image set corresponding to preset feature information matched with the face feature information; wherein each face image set comprises a plurality of images with different auxiliary information.
In a third aspect, the present application provides a terminal comprising a processor, a memory, and an input/output system, wherein the processor, the memory, and the input/output system are connected to each other, the memory is used for storing a computer program, the computer program comprises program instructions, and the processor is configured to call the program instructions to execute the method according to the first aspect.
In a fourth aspect, the present application provides a system comprising a terminal and a camera, where the camera is disposed separately from the terminal and is communicatively connected to it; the camera is configured to capture an image and send it to the terminal, and the terminal is configured to execute the method according to the first aspect.
In a fifth aspect, the present application provides a computer-readable storage medium storing a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method according to the first aspect.
Therefore, the face feature information of a photographed person can be detected from an image that carries auxiliary information, and after being matched against preset feature information, the image is added to the corresponding face image set. When a face image set contains a plurality of images with different auxiliary information, it can serve as high-quality training data for a face recognition algorithm. That is, through image acquisition over a period of time, images of the photographed person covering different angles, illumination, backgrounds, ages, postures, accessories, and so on can be obtained as training data that is rich in content and high in quality. Compared with dedicated collection sessions at different times, this saves time, effectively improves the efficiency of the training-data acquisition process, and greatly reduces labor and material costs.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a face feature acquisition method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of another human face feature acquisition method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a face feature acquisition device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1, fig. 1 is a schematic flow chart of a face feature acquisition method according to an embodiment of the present invention. This embodiment illustrates the face feature acquisition method as applied to a terminal with a data processing function; the terminal may be a smart band, a smart watch, a portable digital player, a smartphone, a palmtop computer, a tablet computer, a notebook computer, a desktop computer, a server, or the like. The face feature acquisition method comprises, but is not limited to, the following steps:
step 101, an image including a human face is acquired.
In the embodiment of the invention, the terminal may obtain the image including a face by receiving an image sent by another device, or by capturing it with a camera. The camera may be a digital camera, an analog camera, a charge-coupled device (CCD) camera, or a complementary metal-oxide-semiconductor (CMOS) camera. Further, the camera may be integrated with the terminal, such as a mobile phone camera, or disposed independently while communicatively connected to the terminal, such as a remote camera.
The terminal may start the camera to shoot after receiving a shooting instruction triggered by the photographed person, so as to obtain an image of that person. The terminal may also instruct the camera to remain in a continuous shooting state and acquire an image whenever a photographed person appears in the shooting area. It should be understood that the above examples are for illustration only and are not limiting.
And 103, detecting the face feature information in the image.
In the embodiment of the present invention, the face feature information is related to face features of a person to be photographed, which are embodied in the image, and the face feature information may be a face image, a face feature vector, a face image feature vector set, and the like, which is not limited herein. In the embodiment of the invention, the terminal detects the image shot by the camera and extracts the face characteristic information of the shot person. For example, the terminal may identify a face area of the person to be photographed in the image, and perform feature extraction on the image in the face area to obtain face feature information of the person to be photographed.
Further, the image includes auxiliary information; the auxiliary information can comprise one or more of light, background, face angle, personnel posture, jewelry wearing and personnel age; it should be understood that the auxiliary information is used to reflect information of multiple aspects such as the angle, illumination, background, age, posture, jewelry wearing, etc. of the face part of the person to be photographed, that is, the image of the person to be photographed at the moment of being photographed, and the auxiliary information of different images of the person to be photographed is different.
It should be noted that the terminal may detect the face feature information in the image through a preset feature extraction model (or feature extraction algorithm). The feature extraction model may be a neural network model, including but not limited to a Convolutional Neural Network (CNN) model, a Residual Network (ResNet) model, a Fully Convolutional Network (FCN) model, a Multi-task Network Cascades (MNC) model, or a Mask R-CNN model.
And 105, determining preset feature information matched with the face feature information in a plurality of pieces of preset feature information.
In the embodiment of the invention, each preset feature information in the plurality of preset feature information corresponds to one face image set, and each face image set comprises a plurality of images with different auxiliary information of a person to be shot. The preset feature information is face feature information which is preset and acquired by a shot person, for example, the preset feature information is face feature information acquired according to a resident identification card, an account opening photo or a passport of the shot person. In a possible embodiment, the face image set includes images corresponding to preset feature information of corresponding persons to be photographed.
In a specific embodiment, the terminal may detect a similarity value between the face feature information and each of the plurality of preset feature information, and determine a maximum similarity value therebetween; judging whether the maximum similarity value is greater than or equal to a first preset threshold value or not; and under the condition that the maximum similarity value is greater than or equal to a first preset threshold value, taking preset feature information corresponding to the maximum similarity value as preset feature information matched with the face feature information.
For example, suppose the similarity values between the face feature information and the pieces of preset feature information are 3%, 64%, 23%, 89%, 8%, and so on. The terminal determines that 89% is the maximum similarity value and compares 89% with the first preset threshold of 85%, thereby determining that this similarity value satisfies the first preset threshold. The terminal then takes the preset feature information corresponding to this maximum similarity as the preset feature information matched with the face feature information.
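The matching rule above (pick the maximum similarity, accept it only if it meets the first preset threshold) can be sketched as follows. The patent leaves the similarity measure unspecified; cosine similarity is assumed here, and the 0.85 default mirrors the 85% threshold in the worked example:

```python
import numpy as np

def match_preset(face_vec, preset_vecs, threshold=0.85):
    """Return the index of the best-matching preset feature vector,
    or None if the maximum similarity is below the first preset threshold.

    Cosine similarity is an assumption; the patent does not fix the measure.
    """
    face_vec = face_vec / np.linalg.norm(face_vec)
    sims = [float(np.dot(face_vec, p / np.linalg.norm(p))) for p in preset_vecs]
    best = int(np.argmax(sims))          # maximum similarity value
    return best if sims[best] >= threshold else None
```

A face vector close to one preset vector is matched to it; an ambiguous vector whose best similarity falls below the threshold is rejected rather than misfiled.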
In a specific embodiment, the terminal may input the detected face feature information into a face recognition model, where the face recognition model is a model trained in advance through preset feature information corresponding to a plurality of different photographed persons. The terminal can obtain preset feature information matched with the face feature information based on the face recognition model. It should be understood that the examples in the foregoing embodiments are only for illustration, and other matching processes are also possible, and are not limited herein.
And 107, adding the image into a face image set corresponding to preset feature information matched with the face feature information.
In the embodiment of the invention, after the terminal determines the preset feature information matched with the acquired face feature information, the image is added to a face image set corresponding to the preset feature information matched with the face feature information. It should be noted that the facial image set is a set of images of a single person to be photographed, and meanwhile, the facial image set is a set of training data of a single person to be photographed that can be used for model training.
It should be noted that, during model training, if the images in a face image set used as training data are too uniform in angle, illumination, background, age, posture, accessories, and the like, the trained model performs poorly and has low accuracy. In other words, the more completely a plurality of different images of the same photographed person covers angles, lighting, backgrounds, ages, postures, accessories, and so on, the higher the quality of these images as training data. Further, in the embodiment of the present invention, a plurality of images of the same photographed person may be obtained through image acquisition over a period of time (for example, one month or one quarter); these images are all stored in the face image set corresponding to that person, and each image in the set has different auxiliary information. For example, a user may be captured by a camera every day to extract facial feature information, but the angle, illumination, background, age, posture, accessories, and so on differ from day to day, so the images in the user's face image set differ from one another. When such images are used for model training, they provide a higher-quality training data set and yield a better-performing model.
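Since the quality argument above rests on how completely a set covers different auxiliary information, a simple coverage check can make it concrete. This helper is hypothetical — the patent only requires that images differ in auxiliary information, and the attribute names are illustrative:

```python
from collections import defaultdict

def auxiliary_diversity(aux_records):
    """Count distinct values per auxiliary attribute across a face image set.

    aux_records: one dict of auxiliary info per image, e.g.
                 {"angle": "front", "light": "day"} (attribute names assumed).
    Returns a dict mapping each attribute to its number of distinct values;
    higher counts suggest richer training data for that attribute.
    """
    values = defaultdict(set)
    for rec in aux_records:
        for attr, val in rec.items():
            values[attr].add(val)
    return {attr: len(vals) for attr, vals in values.items()}
```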
One possible application scenario of the embodiments of the present invention is described below in conjunction with the above method flows to facilitate a further understanding of the inventive concepts of the present invention.
In one possible application scenario, cameras are deployed at one or more locations with heavy foot traffic, such as bank halls and government service halls. Each camera is communicatively connected to the terminal, and the terminal collects face feature information by receiving the video images the camera sends. After the terminal acquires an image of a photographed person appearing in the camera's shooting area, it detects the face feature information in the image and matches it against the plurality of pieces of preset feature information stored in advance in the terminal's memory to obtain the matching preset feature information. The terminal then adds the acquired image to the face image set corresponding to that preset feature information.
The embodiment of the invention can photograph a person appearing in front of the camera to obtain an image containing that person's face, detect and match the person's face feature information, and add the image to the corresponding face image set, which is convenient and practical, improves the efficiency of data acquisition, and accurately files each person's image into the correct face image set for subsequent use. Because the images of a photographed person cover auxiliary information across angles, illumination, backgrounds, ages, postures, accessories, and other aspects, the collected faces are rich in content and high in data quality, which effectively improves the efficiency of the training-data acquisition process and greatly reduces labor and material costs.
Based on the same inventive concept, the embodiment of the invention also provides a flow schematic diagram of a human face feature acquisition method, which is shown in fig. 2. The face feature acquisition method comprises but is not limited to the following steps:
step 201, acquiring an image shot by a camera, and detecting face feature information in the image through a feature extraction model.
In a specific embodiment, the terminal starts detecting face feature information only when the image captured by the camera indicates the presence of a photographed person. For example, an image in a still state indicates that no object or person is moving significantly in the scene; when the terminal detects that the image is in a non-static state, it starts detecting the face feature information in the image.
In a specific embodiment, the method may further include a blur determination step. Specifically, the terminal can identify the face area in the image and measure the degree of blur of that region; when the blur exceeds a preset standard, the terminal discards the image and proceeds to acquire and process subsequent images captured by the camera.
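The blur determination step could be implemented with the common variance-of-Laplacian heuristic — an assumption, since the patent does not specify how the degree of blur is measured:

```python
import numpy as np

def is_too_blurry(gray, threshold=100.0):
    """Variance-of-Laplacian blur check on a face region.

    gray: 2-D array of grayscale pixel intensities (the face area).
    A sharp region has strong local intensity changes, hence a high-variance
    Laplacian; a blurry region has a low-variance Laplacian. The threshold
    plays the role of the 'preset standard' and is an assumed value.
    """
    # 4-neighbour discrete Laplacian over the interior pixels.
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var()) < threshold
```

An image that fails this check would be discarded, and the terminal would move on to the next frame from the camera.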
The specific implementation process of step 201 may refer to the related descriptions of step 101 and step 103 in the embodiment of fig. 1, and is not described herein again.
Step 203, determining preset feature information matched with the face feature information in a plurality of preset feature information based on the face feature information. The specific implementation process of step 203 may refer to the related description of step 105 in the embodiment of fig. 1, and is not described herein again.
Step 205, adding the image to a face image set corresponding to preset feature information matched with the face feature information. The specific implementation process of step 205 may refer to the related description of step 107 in the embodiment in fig. 1, and is not described herein again.
And step 207, performing data cleaning on the facial image set to obtain a facial image set which can be used for model training.
In the embodiment of the invention, data cleaning is used to find and remove recognizably low-quality or incorrect face feature information from a face image set. For example, suppose the terminal captures an image of a person who has not registered preset feature information with the terminal, i.e., the plurality of preset feature information does not include that person's preset feature information. After collecting the person's face feature information, the terminal nonetheless determines some preset feature information as a match and adds the image to the corresponding face image set. Clearly, the person is not the user to whom that face image set belongs, so the image should not be in the set. Data cleaning is used in this example to remove that person's image from the set. It should be understood that the above example is illustrative only and not limiting in any way.
In a specific embodiment, the terminal may perform data cleaning on a face image set as follows: the terminal determines a target image among the images of the set; detects the similarity values between the face feature information of the target image and that of every other image in the set; counts how many of those similarity values are smaller than a second preset threshold; and deletes the target image from the set when the ratio of this count to the number of images in the set is greater than or equal to a third preset threshold.
For example, the total number of images in the face image set is 1000, and the terminal counts 50 images whose face feature information has a similarity to the target image of less than 95%. These 50 images account for 5% of the total, which exceeds the third preset threshold of 3%, so the terminal deletes the target image. It should be noted that, in a possible embodiment, the second preset threshold may equal the first preset threshold in step 105 above.
It should be noted that the data cleansing step in the above embodiment is a processing procedure performed on one image in a face image set. In a further implementation, the terminal may perform the data cleansing process described in the above embodiments for each image of each set of facial images.
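The per-image outlier test described above can be sketched directly. The defaults mirror the worked example (second preset threshold 95%, third preset threshold 3%); note the denominator here is the number of comparison values, which approximates the patent's "number of images in the face image set" for large sets:

```python
def should_remove(target_sims, second_threshold=0.95, third_threshold=0.03):
    """Decide whether a target image should be deleted from its set.

    target_sims: similarity values between the target image's face features
                 and those of every other image in the face image set.
    Returns True when the fraction of low-similarity images reaches the
    third preset threshold, marking the target as a likely misfiled image.
    """
    low = sum(1 for s in target_sims if s < second_threshold)
    return low / len(target_sims) >= third_threshold
```

Running this for every image of every face image set reproduces the full data-cleaning pass of the embodiment.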
In a specific embodiment, the terminal may alternatively perform data cleaning on a face image set as follows: the terminal detects the similarity value between the face feature information of each image in the set and the corresponding preset feature information, and averages all the computed similarity values to obtain an average similarity value. The terminal then takes the average similarity value minus a preset value as a fourth preset threshold, and deletes from the set any image whose similarity value is smaller than the fourth preset threshold.
For example, the face image set contains 600 images, and averaging each image's similarity to the corresponding preset feature information gives an average similarity value of 96%. The terminal subtracts 3 percentage points from 96% to obtain a fourth preset threshold of 93%, and deletes every image whose similarity to the corresponding preset feature information is below 93%.
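The average-similarity variant of data cleaning can be sketched as follows, with the 3-percentage-point margin from the worked example standing in for the unspecified "preset value":

```python
def clean_by_average(sims_to_preset, margin=0.03):
    """Keep the indices of images that survive average-based cleaning.

    sims_to_preset: per-image similarity between each image's face features
                    and the set's corresponding preset feature information.
    The fourth preset threshold is the mean similarity minus `margin`;
    images below it are dropped.
    """
    avg = sum(sims_to_preset) / len(sims_to_preset)
    threshold = avg - margin             # fourth preset threshold
    return [i for i, s in enumerate(sims_to_preset) if s >= threshold]
```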
It should be understood that the above-described examples of embodiments are intended to be illustrative only and not limiting.
It should be noted that the terminal may perform the processing of step 201 to step 205 within a preset time, and then perform step 207 after the preset time. For example, the terminal performs steps 201 to 205 within one month to collect a sufficient amount of facial feature information, and performs step 207 when the terminal detects that the collection time reaches one month to perform data cleansing.
Step 209, training the feature extraction model according to the facial image set available for model training, so as to update the model parameters of the feature extraction model.
In the embodiment of the invention, the terminal trains the feature extraction model by using the facial image set which is subjected to data cleaning and can be used for model training, so as to update the model parameters of the feature extraction model, thereby extracting more accurate facial feature information. It should be noted that the face image set that can be used for model training may also be used for training of other models, for example, the terminal trains a face recognition model according to the face image set that can be used for model training, and may update the model parameters of the face recognition model, so that the face recognition model may recognize a face more accurately.
The embodiment of the invention can acquire the images of the shot persons appearing in the camera, detect the face characteristic information of the shot persons, and add the images into the corresponding face image sets, thereby being more convenient and practical and improving the data acquisition efficiency. The terminal can also carry out data cleaning on the face image set to obtain the face image set which is more accurate and more suitable for model training. And the face image set can also be used for training a feature extraction model, and the performance of the feature extraction model can be continuously improved in an iterative mode.
Based on the same inventive concept, an embodiment of the present invention provides a face feature acquisition apparatus; referring to fig. 3, the apparatus includes at least an acquisition unit 301, a detection unit 303, a determination unit 305, and an addition unit 307, and is configured to implement the face feature acquisition method described in the method embodiments of fig. 1 and fig. 2.
An acquisition unit 301 for acquiring an image including a human face;
a detecting unit 303, configured to detect face feature information in the image, where the image includes auxiliary information;
a determining unit 305, configured to determine preset feature information that matches the face feature information from a plurality of preset feature information; each preset feature information in the plurality of preset feature information corresponds to a face image set;
an adding unit 307, configured to add the image to a face image set corresponding to preset feature information matched with the face feature information; wherein each face image set comprises a plurality of images with different auxiliary information.
Specifically, the determining unit 305 is configured to: respectively detecting similarity values between the face feature information and each preset feature information in the plurality of preset feature information, and determining the maximum similarity value; judging whether the maximum similarity value is greater than or equal to a first preset threshold value or not; and under the condition that the maximum similarity value is greater than or equal to a first preset threshold value, taking preset feature information corresponding to the maximum similarity value as preset feature information matched with the face feature information.
Optionally, the auxiliary information comprises one or more of light, background, face angle, person posture, jewelry wearing, and person age; the device further comprises: and the data cleaning unit is used for cleaning the data of the face image set after the preset time is reached to obtain the face image set which can be used for model training.
Specifically, the data cleaning unit is configured to: determine a target image in the plurality of images of the face image set; respectively detect similarity values between the face feature information of the target image and the face feature information of each image other than the target image in the face image set; count the number of images whose similarity values are smaller than a second preset threshold value; and delete the target image from the face image set when the proportion of that number to the number of images in the face image set is greater than or equal to a third preset threshold value.
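The data-cleaning step above can be sketched as follows, again as an illustrative reconstruction: the similarity measure, threshold values, and function names are assumptions not fixed by the patent. Each image in the set is treated in turn as the target; an image that most other images in the set are dissimilar to is presumed to be a mislabeled sample and is dropped.

```python
import numpy as np

def cosine_similarity(a, b):
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def clean_face_set(features, second_threshold=0.5, third_threshold=0.5):
    """Given the feature vectors of one face image set, return the indices
    of the images that survive cleaning. An image is deleted when the
    proportion of set members dissimilar to it (similarity below the second
    threshold) reaches the third threshold. Threshold values are examples."""
    keep = []
    n = len(features)
    for i, target in enumerate(features):
        sims = [cosine_similarity(target, f)
                for j, f in enumerate(features) if j != i]
        dissimilar = sum(s < second_threshold for s in sims)
        if dissimilar / n < third_threshold:   # ratio of count to set size
            keep.append(i)                     # target image stays in the set
    return keep
```

Running this after the set has accumulated images under varied auxiliary information (lighting, angle, age, and so on) yields a set clean enough to serve as training data.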
Optionally, the detecting unit 303 is specifically configured to detect face feature information in the image through a feature extraction model; the device further comprises an updating unit, which is used for training the feature extraction model according to the face image set which can be used for model training so as to update the model parameters of the feature extraction model.
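The updating unit's role can be sketched as follows. The `StubModel` class and its `train_step` method are hypothetical placeholders for the feature extraction model's real training interface, which the patent does not describe; the sketch only shows the flow of cleaned image sets into labeled training pairs.

```python
class StubModel:
    """Hypothetical stand-in for the feature extraction model; records
    the training samples it receives instead of fitting parameters."""
    def __init__(self):
        self.seen = []

    def train_step(self, image, label):
        self.seen.append((image, label))

def update_feature_model(model, cleaned_sets):
    """Train the feature extraction model on the cleaned face image sets:
    each set supplies samples for one identity label, so the model's
    parameters are updated from the newly collected data."""
    for label, images in enumerate(cleaned_sets):
        for img in images:
            model.train_step(img, label)
    return model
```

In deployment the stub would be replaced by the actual feature extraction network, so that each round of collection and cleaning incrementally refreshes the model parameters.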
It should be noted that, through the detailed description of the foregoing method embodiments in fig. 1 and fig. 2, those skilled in the art can clearly know the implementation method of each unit included in the face feature acquisition device, and therefore, for the brevity of the description, no further description is given here.
Based on the same inventive concept, an embodiment of the present invention provides a terminal, shown in fig. 4, for implementing the face feature acquisition method described in the method embodiments of fig. 1 and fig. 2. As shown in fig. 4, the terminal may include: a processor 401, a memory 402, and an input/output system 403. These components may communicate over one or more communication buses 404. The terminal may further include a communication module 405 and a power module 406.
The processor 401 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 402 may include both read-only memory and random access memory, and provides instructions and data to the processor 401. A portion of the memory 402 may also include non-volatile random access memory. Additionally, the memory 402 may also store device type information.
The input/output system 403 is mainly used for receiving user instructions and capturing images. In a specific implementation, the input/output system may include a camera controller 4031, where each controller can be coupled to its respective peripheral device (the camera 4032). It should be noted that the input/output system 403 may also include other I/O peripherals.
The camera 4032 is an execution mechanism of the terminal. Specifically, the camera 4032 is used to capture the capture area to obtain an image.
The communication module 405 is mainly used for communicating with a server; the power module 406 is mainly used to provide stable power for other devices in the apparatus.
In this embodiment of the present invention, the processor 401 is configured to call an instruction stored in the memory 402, and execute the following steps:
acquiring an image comprising a human face;
detecting face feature information in the image, wherein the image comprises auxiliary information;
determining preset feature information matched with the face feature information in a plurality of preset feature information; each preset feature information corresponds to a face image set;
adding the image to a face image set corresponding to preset feature information matched with the face feature information; wherein each face image set comprises a plurality of images with different auxiliary information.
In a specific embodiment, the processor 401 may specifically execute the following steps: respectively detecting similarity values between the face feature information and each piece of preset feature information in the plurality of preset feature information, and determining the maximum similarity value; judging whether the maximum similarity value is greater than or equal to a first preset threshold value; and, when the maximum similarity value is greater than or equal to the first preset threshold value, taking the preset feature information corresponding to the maximum similarity value as the preset feature information matched with the face feature information.
In a specific embodiment, the auxiliary information includes one or more of lighting, background, face angle, person posture, worn jewelry, and person age; the processor 401 may also invoke the instructions stored in the memory 402 to perform the following step: after the preset time is reached, performing data cleaning on the face image set to obtain a face image set usable for model training.
In a specific embodiment, the processor 401 may specifically execute the following steps: determining a target image in a plurality of images of the face image set; respectively detecting similarity values between the face feature information of the target image and the face feature information of each image except the target image in the face image set; counting the number of images with similarity values smaller than a second preset threshold value in the similarity values between the face feature information of the target image and the face feature information of each image except the target image in the face image set; and deleting the target image from the face image set under the condition that the proportion of the number to the number of the images in the face image set is greater than or equal to a third preset threshold value.
In a specific embodiment, the processor 401 may specifically execute the following step: detecting the face feature information in the image through a feature extraction model. The processor may also invoke instructions stored in the memory 402 to perform: training the feature extraction model according to the face image set usable for model training, so as to update the model parameters of the feature extraction model.
It should be noted that, through the foregoing detailed description of the method embodiment in fig. 1 or fig. 2, those skilled in the art can clearly know the implementation method of each functional device included in the terminal, so for brevity of the description, no further description is provided here.
Based on the same inventive concept, an embodiment of the present invention provides a system, which includes a terminal and a camera, wherein the camera is configured to collect an image and send the image to the terminal, and the terminal is configured to perform the following steps:
acquiring an image comprising a human face;
detecting face feature information in the image, wherein the image comprises auxiliary information;
determining preset feature information matched with the face feature information in a plurality of preset feature information; each preset feature information corresponds to a face image set;
adding the image to a face image set corresponding to preset feature information matched with the face feature information; wherein each face image set comprises a plurality of images with different auxiliary information.
The camera is a remote camera with a communication function; the camera and the terminal are arranged independently, and the camera is in communication connection with the terminal. It should be noted that the camera and the terminal may communicate with each other in a wired or wireless manner. Wired methods include, but are not limited to: RS232, RS485, network cable, copper cable, etc. Wireless methods include, but are not limited to: a cellular communication mode, a device-to-device mode, or another wireless communication mode. The cellular communication method may be based on the Global System for Mobile communications (GSM), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), or other systems, and is not limited herein. The device-to-device method may adopt WIFI, Zigbee, Bluetooth, or another device-to-device communication method, which is not limited herein. In addition, the camera may be a digital camera, an analog camera, a charge-coupled device (CCD) camera, or a Complementary Metal Oxide Semiconductor (CMOS) camera, which is not limited herein.
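The patent leaves the camera-to-terminal wire format open. As one illustrative possibility (purely an assumption, not part of the patent), an image frame could be carried over any of the above links with a simple length-prefixed framing:

```python
import struct

def encode_frame(jpeg_bytes: bytes) -> bytes:
    """Prefix the image payload with a 4-byte big-endian length, so the
    terminal knows how many bytes to read from the stream."""
    return struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes

def decode_frame(buf: bytes) -> bytes:
    """Read the length prefix and return exactly that many payload bytes."""
    (n,) = struct.unpack(">I", buf[:4])
    return buf[4:4 + n]
```

Length-prefixed framing keeps the receiver independent of the underlying transport, which matters here because the patent explicitly allows both wired and wireless connections between camera and terminal.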
It should be noted that, through the detailed description of the foregoing method embodiment in fig. 1 or fig. 2, a person skilled in the art may clearly know the functions of each device terminal in the system, so for brevity of description, detailed description is omitted here.
Based on the same inventive concept, in another embodiment of the present invention, a computer-readable storage medium is provided, which stores a computer program comprising program instructions, which when executed by a processor, implement the method described in any of the method embodiments described above.
The computer readable storage medium may be an internal storage unit of the terminal according to any of the foregoing embodiments, for example, a hard disk or a memory of the terminal. The computer readable storage medium may also be an external storage device of the terminal, such as a plug-in hard disk, a SmartMedia Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the terminal. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the terminal. The computer-readable storage medium is used for storing the computer program and other programs and data required by the terminal, and may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the units and method steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both. To clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the units and the devices described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in this application, it should be understood that the disclosed units, devices and methods may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such an understanding, the technical solution of the present invention, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
While the invention has been described with reference to specific embodiments, it is not limited thereto, and various equivalent modifications and substitutions will readily occur to those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A face feature acquisition method is characterized by comprising the following steps:
acquiring an image comprising a human face;
detecting face feature information in the image, wherein the image comprises auxiliary information;
determining preset feature information matched with the face feature information in a plurality of preset feature information; each preset feature information corresponds to a face image set;
adding the image to a face image set corresponding to preset feature information matched with the face feature information; each face image set comprises a plurality of images with different auxiliary information;
after the preset time is reached, carrying out data cleaning on the face image set to obtain a face image set which can be used for model training; wherein, data cleaning includes: determining a target image in a plurality of images of the face image set; respectively detecting similarity values between the face feature information of the target image and the face feature information of each image except the target image in the face image set; counting the number of images with similarity values smaller than a second preset threshold value in the similarity values between the face feature information of the target image and the face feature information of each image except the target image in the face image set; and deleting the target image from the face image set under the condition that the proportion of the number to the number of the images in the face image set is greater than or equal to a third preset threshold value.
2. The method according to claim 1, wherein the determining of the preset feature information matching with the face feature information from the plurality of preset feature information comprises:
respectively detecting similarity values between the face feature information and each preset feature information in the plurality of preset feature information, and determining the maximum similarity value;
judging whether the maximum similarity value is greater than or equal to a first preset threshold value;
and under the condition that the maximum similarity value is greater than or equal to a first preset threshold value, taking preset feature information corresponding to the maximum similarity value as preset feature information matched with the face feature information.
3. The method of claim 1, wherein the auxiliary information comprises one or more of lighting, background, face angle, person posture, worn jewelry, and person age.
4. The method of claim 3,
the detecting the face feature information in the image comprises: detecting face feature information in the image through a feature extraction model;
after the facial image set is subjected to data cleaning to obtain a facial image set which can be used for model training, the method further comprises the following steps: and training the feature extraction model according to the face image set which can be used for model training so as to update the model parameters of the feature extraction model.
5. A face feature acquisition device, comprising:
an acquisition unit configured to acquire an image including a human face;
the detection unit is used for detecting the face feature information in the image, and the image comprises auxiliary information;
the determining unit is used for determining preset feature information matched with the face feature information in a plurality of preset feature information; each preset feature information in the plurality of preset feature information corresponds to a face image set;
the adding unit is used for adding the image into a face image set corresponding to preset feature information matched with the face feature information; each face image set comprises a plurality of images with different auxiliary information;
the data cleaning unit is used for cleaning the data of the face image set after the preset time is reached to obtain a face image set which can be used for model training; wherein the data cleansing unit is specifically configured to: determining a target image in a plurality of images of the face image set; respectively detecting similarity values between the face feature information of the target image and the face feature information of each image except the target image in the face image set; counting the number of images with similarity values smaller than a second preset threshold value in the similarity values between the face feature information of the target image and the face feature information of each image except the target image in the face image set; and deleting the target image from the face image set under the condition that the proportion of the number to the number of the images in the face image set is greater than or equal to a third preset threshold value.
6. The apparatus according to claim 5, wherein the determining unit is specifically configured to:
respectively detecting similarity values between the face feature information and each preset feature information in the plurality of preset feature information, and determining the maximum similarity value;
judging whether the maximum similarity value is greater than or equal to a first preset threshold value;
and under the condition that the maximum similarity value is greater than or equal to a first preset threshold value, taking preset feature information corresponding to the maximum similarity value as preset feature information matched with the face feature information.
7. The apparatus of claim 5, wherein the auxiliary information comprises one or more of lighting, background, face angle, person posture, worn jewelry, and person age.
8. The apparatus of claim 7,
the detection unit is specifically used for detecting the face feature information in the image through a feature extraction model;
the device further comprises an updating unit, which is used for training the feature extraction model according to the face image set which can be used for model training so as to update the model parameters of the feature extraction model.
CN201810659963.0A 2018-06-25 2018-06-25 Face feature acquisition method and device Active CN108875654B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810659963.0A CN108875654B (en) 2018-06-25 2018-06-25 Face feature acquisition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810659963.0A CN108875654B (en) 2018-06-25 2018-06-25 Face feature acquisition method and device

Publications (2)

Publication Number Publication Date
CN108875654A CN108875654A (en) 2018-11-23
CN108875654B true CN108875654B (en) 2021-03-05

Family

ID=64295603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810659963.0A Active CN108875654B (en) 2018-06-25 2018-06-25 Face feature acquisition method and device

Country Status (1)

Country Link
CN (1) CN108875654B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111382647B (en) * 2018-12-29 2021-07-30 广州市百果园信息技术有限公司 Picture processing method, device, equipment and storage medium
CN110427912A (en) * 2019-08-12 2019-11-08 深圳市捷顺科技实业股份有限公司 A kind of method for detecting human face and its relevant apparatus based on deep learning
WO2021098801A1 (en) * 2019-11-20 2021-05-27 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Data cleaning device, data cleaning method and face verification method
CN111310580A (en) * 2020-01-19 2020-06-19 四川联众竞达科技有限公司 Face recognition method under non-matching state
CN111710085A (en) * 2020-05-27 2020-09-25 南京金陵塑胶化工有限公司 Production workshop safety management method and system based on facial recognition

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103702126A (en) * 2013-12-10 2014-04-02 清华大学深圳研究生院 Parallel encoding optimization method based on standard video HEVC (High Efficiency Video Coding)
CN105184238A (en) * 2015-08-26 2015-12-23 广西小草信息产业有限责任公司 Human face recognition method and system
CN106529593A (en) * 2016-11-08 2017-03-22 广东诚泰交通科技发展有限公司 Pavement disease detection method and system
CN106650804A (en) * 2016-12-13 2017-05-10 深圳云天励飞技术有限公司 Facial sample cleaning method and system based on deep learning features
CN107292252A (en) * 2017-06-09 2017-10-24 南京华捷艾米软件科技有限公司 A kind of personal identification method of autonomous learning
CN107491685A (en) * 2017-09-27 2017-12-19 维沃移动通信有限公司 A kind of face identification method and mobile terminal
CN107563897A (en) * 2017-09-08 2018-01-09 廖海斌 Based on face matching famous person pursue a goal with determination recommendation and social networks method for building up and system
CN107679546A (en) * 2017-08-17 2018-02-09 平安科技(深圳)有限公司 Face image data acquisition method, device, terminal device and storage medium
CN107909104A (en) * 2017-11-13 2018-04-13 腾讯数码(天津)有限公司 The face cluster method, apparatus and storage medium of a kind of picture
CN107944020A (en) * 2017-12-11 2018-04-20 深圳云天励飞技术有限公司 Facial image lookup method and device, computer installation and storage medium
CN108052925A (en) * 2017-12-28 2018-05-18 江西高创保安服务技术有限公司 A kind of cell personnel archives intelligent management

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101396277A (en) * 2007-09-26 2009-04-01 中国科学院声学研究所 Ultrasonics face recognition method and device
CN104182734A (en) * 2014-08-18 2014-12-03 桂林电子科技大学 Linear-regression based classification (LRC) and collaborative representation based two-stage face identification method
CN104700076B (en) * 2015-02-13 2017-09-12 电子科技大学 Facial image virtual sample generation method
CN106295482B (en) * 2015-06-11 2019-10-29 中移信息技术有限公司 A kind of update method and device of face database
CN105513368B (en) * 2015-11-26 2017-10-17 银江股份有限公司 A kind of false-trademark car screening technique based on uncertain information
CN105631404B (en) * 2015-12-17 2018-11-30 小米科技有限责任公司 The method and device that photo is clustered
CN107301578A (en) * 2016-04-15 2017-10-27 上海新飞凡电子商务有限公司 Obtain and recognize the method and its device of customer information
CN106204779B (en) * 2016-06-30 2018-08-31 陕西师范大学 Check class attendance method based on plurality of human faces data collection strategy and deep learning
CN107423606A (en) * 2017-08-01 2017-12-01 黄河科技学院 A kind of identification system based on fuzzy control theory

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Multi-pose face recognition based on improved ORB features" (基于改进ORB特征的多姿态人脸识别); Zhou Kaiting et al.; Journal of Computer-Aided Design & Computer Graphics (《计算机辅助设计与图形学学报》); 2015-02-28; vol. 27, no. 2; abstract on p. 287, first paragraph of left column on p. 288, section 2 on pp. 289-290, and section 3.1 on p. 290 *

Also Published As

Publication number Publication date
CN108875654A (en) 2018-11-23

Similar Documents

Publication Publication Date Title
CN108875654B (en) Face feature acquisition method and device
CN109831622B (en) Shooting method and electronic equipment
US10534957B2 (en) Eyeball movement analysis method and device, and storage medium
CN107370942B (en) Photographing method, photographing device, storage medium and terminal
CN107506687B (en) Living body detection method and related product
JP7261296B2 (en) Target object recognition system, method, apparatus, electronic device, and recording medium
CN107657218B (en) Face recognition method and related product
US20170161553A1 (en) Method and electronic device for capturing photo
CN108737728B (en) Image shooting method, terminal and computer storage medium
CN111339831B (en) Lighting lamp control method and system
Vazquez-Fernandez et al. Built-in face recognition for smart photo sharing in mobile devices
CN107888822A (en) Image pickup method, device, terminal and readable storage medium storing program for executing
CN104767963A (en) Method and device for representing information of persons participating in video conference
CN109684993B (en) Face recognition method, system and equipment based on nostril information
US20090169108A1 (en) System and method for recognizing smiling faces captured by a mobile electronic device
CN102546945A (en) Method for automatically optimizing mobile phone photography
CN111104910A (en) Method for monitoring garbage delivery behavior and related product
CN110991231B (en) Living body detection method and device, server and face recognition equipment
CN112911139A (en) Article shooting method and device, electronic equipment and storage medium
CN107992816B (en) Photographing search method and device, electronic equipment and computer readable storage medium
CN107729736B (en) Face recognition method and related product
CN113657154A (en) Living body detection method, living body detection device, electronic device, and storage medium
WO2018121552A1 (en) Palmprint data based service processing method, apparatus and program, and medium
CN201774591U (en) Digital camera with address book and face recognition function
CN104933338B (en) Fingerprint identification sensor shooting method and mobile device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant