CN115273177A - Method, device and equipment for recognizing face types of heterogeneous faces and storage medium - Google Patents

Method, device and equipment for recognizing face types of heterogeneous faces and storage medium

Info

Publication number
CN115273177A
Authority
CN
China
Prior art keywords
image
face
special
recognized
face type
Prior art date
Legal status
Pending
Application number
CN202210746798.9A
Other languages
Chinese (zh)
Inventor
敖琦 (Ao Qi)
Current Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd filed Critical Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN202210746798.9A priority Critical patent/CN115273177A/en
Publication of CN115273177A publication Critical patent/CN115273177A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements using pattern recognition or machine learning
    • G06V10/82: Arrangements using neural networks
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; localisation; normalisation
    • G06V40/162: Detection using pixel segmentation or colour matching
    • G06V40/168: Feature extraction; face representation
    • G06V40/172: Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method, an apparatus, a device, and a storage medium for recognizing the face types of heterogeneous faces. The method comprises: acquiring an image to be recognized; extracting a feature vector to be recognized from the image to be recognized; comparing the feature vector to be recognized with the label feature vector of each special face type to obtain a target special face type that meets a preset condition, wherein the special face types are preset and each label feature vector is calculated from image feature vectors extracted from a plurality of special face images belonging to that special face type; and outputting the target special face type as the face type of the image to be recognized. The method determines the special face type of the face in the image to be recognized by comparing its image feature vector with pre-constructed label feature vectors that characterize each special face type, and offers high recognition accuracy, fast response, and strong interpretability.

Description

Method, device and equipment for recognizing face types of heterogeneous faces and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for recognizing face types of heterogeneous faces.
Background
With the development of artificial intelligence, face recognition has long been an active research area and, built on deep convolutional neural networks, has made great progress and is now widely applied across industries. For example, in internet content risk control, face images published by users must be reviewed to confirm whether they involve politically sensitive, pornographic, violent, or similar problems. At present, such review can be performed by a machine learning model. To evade this machine-learning-based review, however, some users render the outline of a human face as a cartoon or sketch and exaggerate or deform its human features, making it difficult for the model to detect problems in the face; as a result, non-compliant image data flows onto the internet and causes negative effects.
On the detection-algorithm side, face recognition of camera-captured faces based on deep convolutional neural networks is relatively mature in the industry and widely applied. Sketch/cartoon face recognition models have likewise been built by extracting low-level features from sketch/cartoon samples. However, once these three types of images are mixed together for heterogeneous face image recognition, the features of a real face and of a sketch/cartoon are not directly comparable, and a traditional face recognition algorithm cannot match the images correctly. On the resource side, most current face recognition methods adopt deep neural networks and improve model accuracy by stacking many layers and performing complex computation, which consumes enormous resources; if multiple types of face recognition models are run in parallel at the same time, the cost grows proportionally.
Disclosure of Invention
The application provides a method, an apparatus, a device, and a storage medium for recognizing the face types of heterogeneous faces, aiming to solve the problems that existing heterogeneous face recognition approaches are insufficiently accurate and inefficient.
In order to solve the technical problem, the application adopts a technical scheme that: a face type recognition method of a heterogeneous face is provided, which comprises the following steps: acquiring an image to be identified; extracting a feature vector to be identified from an image to be identified; comparing the feature vector to be recognized with the label feature vector of each special face type respectively to obtain a target special face type meeting preset conditions, wherein the special face type is preset, and the label feature vector is obtained by calculating image feature vectors extracted from a plurality of special face images, wherein the plurality of special face images belong to the plurality of special face types; and outputting the target special face type as the face type of the image to be recognized.
As a further improvement of the present application, the tag feature vector calculation process includes: respectively extracting the image feature vector of each special face image by using a pre-trained deep neural network; and respectively calculating a mean characteristic vector and a standard deviation characteristic vector of each special face type according to all image characteristic vectors corresponding to each special face type, wherein the mean characteristic vector and the standard deviation characteristic vector form a label characteristic vector.
As a further improvement of the present application, the feature vector to be recognized is compared with the tag feature vector of each special face type, so as to obtain a target special face type meeting the preset condition, including: constructing a feature value range of each dimension corresponding to each special face type by using the feature value of each dimension of the mean feature vector and the standard deviation feature vector corresponding to each special face type; comparing the characteristic value of each dimension of the characteristic vector to be recognized with the characteristic value range of each dimension corresponding to each special face type respectively to obtain a first special face type meeting a preset range condition, wherein the preset range condition comprises that the characteristic value of each dimension of the characteristic vector to be recognized falls into the characteristic value range of the corresponding dimension of the first special face type; calculating the Euclidean distance between the image characteristic vector of each special face image in the first special face type and the characteristic vector to be recognized; when at least one Euclidean distance meets a preset distance condition, a first special face type is reserved, otherwise, the first special face type is deleted; and selecting a first special face type corresponding to the image feature vector with the minimum Euclidean distance as a target special face type.
As a further improvement of the present application, the method further comprises: when no first special face type meeting the preset range condition exists, or no Euclidean distance meeting the preset distance condition exists, routing the image to be recognized to manual review.
As a further improvement of the application, the method for extracting the feature vector to be identified from the image to be identified comprises the following steps: detecting whether a face image exists in an image to be recognized or not by utilizing a pre-trained multi-task convolutional neural network; if the facial image exists, segmenting the image to be recognized to obtain a facial image, and extracting a feature vector of the facial image by using a pre-trained deep neural network to obtain a feature vector to be recognized; and if not, skipping the current image to be identified and continuously identifying the next image to be identified.
As a further improvement of the application, a face image is obtained by segmenting the image to be recognized, and then the feature vector of the face image is extracted by using the pre-trained deep neural network to obtain the feature vector to be recognized, wherein the method comprises the following steps: judging whether a plurality of face images exist in the image to be recognized or not; if yes, at least one face image is obtained by segmentation from the image to be recognized, and a unique identifier is given to each face image; and extracting the image characteristics of each face image by using a deep neural network to obtain the characteristic vector to be identified of each face image.
As a further improvement of the application, the number of the special face images corresponding to each special face type is the same, and the special face images are single face images.
In order to solve the above technical problem, another technical solution adopted by the present application is: the human face type recognition device of a heterogeneous human face is provided, comprising: the acquisition module is used for acquiring an image to be identified; the extraction module is used for extracting the characteristic vector to be identified from the image to be identified; the comparison module is used for comparing the feature vector to be identified with the label feature vector of each special face type respectively to obtain a target special face type meeting preset conditions, the special face types are preset, and the label feature vector is obtained by calculating image feature vectors extracted from a plurality of special face images, wherein the plurality of special face images belong to the plurality of special face types; and the output module is used for outputting the target special face type as the face type of the image to be recognized.
In order to solve the above technical problem, the present application adopts another technical solution: there is provided a computer device comprising a processor, a memory coupled to the processor, the memory having stored therein program instructions which, when executed by the processor, cause the processor to perform the steps of the method of face type recognition of a heterogeneous face as defined in any one of the preceding claims.
In order to solve the above technical problem, the present application adopts another technical solution that: there is provided a storage medium storing program instructions capable of implementing the above-described face type recognition method for a heterogeneous face.
The beneficial effects of this application are as follows. In the method for recognizing the face types of heterogeneous faces provided by the application, label feature vectors representing the special face types are constructed in advance. After the image to be recognized is acquired, the feature vector to be recognized is extracted from it and compared with each label feature vector, so that the target label feature vector closest to the feature vector to be recognized, and the corresponding target special face type, are determined; the target special face type is then output as the face type to which the image to be recognized belongs. The method can recognize special face images without relying on a deep neural network with a multilayer structure, so that while accuracy is guaranteed, the amount of computation is small, the response speed is high, and few system resources are occupied.
Drawings
FIG. 1 is a flow chart of a method for recognizing face types of heterogeneous faces according to an embodiment of the present invention;
FIG. 2 is a functional block diagram of a face type recognition apparatus for heterogeneous faces according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a computer device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a storage medium according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and "third" in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any indication of the number of technical features indicated. Thus, a feature defined as "first," "second," or "third" may explicitly or implicitly include at least one of the feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specifically limited otherwise. In the embodiment of the present application, all the directional indicators (such as upper, lower, left, right, front, and rear … …) are used only to explain the relative positional relationship between the components, the movement, and the like in a specific posture (as shown in the drawing), and if the specific posture is changed, the directional indicator is changed accordingly. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein may be combined with other embodiments.
Fig. 1 is a schematic flow chart of a face type identification method of a heterogeneous face according to an embodiment of the present invention. It should be noted that the method of the present invention is not limited to the flow sequence shown in fig. 1 if the results are substantially the same. As shown in fig. 1, the method comprises the steps of:
step S101: and acquiring an image to be identified.
Specifically, with the rapid development of internet platforms, large numbers of users publish video and image works through Weibo, Douyin (TikTok), and similar channels, and to avoid spreading politically sensitive, pornographic, or violent image content, the video or image works published by users need to be reviewed. The image to be recognized in this embodiment is a video or image work that a user wants to publish; when the user publishes a video work, the video is first split into frames, and each frame image is then recognized.
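Where a published work is a video, the per-frame processing described above can be sketched as follows. This is a minimal illustration, not part of the patent; OpenCV and the sampling interval `every_n_frames` are assumptions.

```python
# Hypothetical helper (not from the patent): split an uploaded video into frame
# images so each frame can be passed to the face-type recognition pipeline.
import cv2


def video_to_frames(video_path: str, every_n_frames: int = 25):
    """Yield every n-th frame of the video as a BGR image array."""
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            yield frame
        index += 1
    cap.release()
```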
It should be noted that the face type recognition method for a heterogeneous face of the present embodiment is mainly used for type recognition of a heterogeneous face, for example, recognition of a face image including a real face, a cartoon face, and a sketch face, so as to prevent a user from publishing illegal image content by means of the heterogeneous face.
Step S102: and extracting the feature vector to be identified from the image to be identified.
Specifically, to make the image to be recognized easier for a computer to process, in this embodiment, after the image to be recognized is acquired, the feature vector to be recognized is extracted from it for subsequent face image recognition. Feature vector extraction can be implemented with a deep neural network; for example, both VGGNet and ResNet can be used to extract image features. In this embodiment, the deep neural network is preferably a FaceNet face recognition model.
In this embodiment, the feature vector extracted from an image has 128 dimensions, and each dimension corresponds to one feature value.
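As an illustration only, the 128-dimensional feature extraction could be wrapped as below. The `model` object, its Keras-like `predict` interface, and the pre-whitening constants are assumptions about a typical pre-trained FaceNet export, not details disclosed by the patent.

```python
# Sketch of extracting a 128-d feature vector from one aligned face crop with a
# pre-trained FaceNet-style model (loading the model itself is assumed elsewhere).
import numpy as np

EMBEDDING_DIM = 128  # one feature value per dimension, as stated in the description


def extract_feature_vector(face_image: np.ndarray, model) -> np.ndarray:
    """Return an L2-normalised 128-d embedding for a single face image."""
    face = face_image.astype(np.float32)
    face = (face - 127.5) / 128.0          # common FaceNet pre-whitening (assumption)
    batch = face[np.newaxis, ...]          # shape (1, H, W, 3)
    embedding = model.predict(batch)[0]    # assumed Keras-like model, output shape (128,)
    return embedding / np.linalg.norm(embedding)
```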
Further, it should be understood that not every image in a video or image work published by a user needs to be recognized: an image that contains no human face can be skipped. Therefore, to improve recognition efficiency, step S102 specifically includes:
1. and detecting whether the image to be recognized has a face image or not by utilizing a pre-trained multi-task convolutional neural network.
Specifically, after the image to be recognized is acquired, it is input into the multi-task convolutional neural network, which detects whether a face image is present. A Multi-task Convolutional Neural Network (MTCNN) mainly comprises a three-layer architecture: P-Net, R-Net, and O-Net. P-Net, the first layer, proposes candidate face bounding-box coordinates and passes them to R-Net. R-Net, the second layer, extracts face data according to the coordinates output by P-Net, feeds them into a fully connected layer with 128 neurons to filter out false candidates, and finally refines the output with the NMS (non-maximum suppression) algorithm. O-Net, the third layer, extracts the five facial landmark points from the output of R-Net. With this multi-task convolutional neural network, whether the image to be recognized contains a face image is detected.
2. If the facial image exists, the facial image is obtained by segmenting the image to be recognized, and the feature vector of the facial image is extracted by utilizing the pre-trained deep neural network to obtain the feature vector to be recognized.
Specifically, if a face image exists in the image to be recognized, the face image is obtained by segmentation from the image to be recognized, wherein the segmentation of the face image can be realized through an RNN model or an FCN model.
3. And if not, skipping the current image to be identified and continuously identifying the next image to be identified.
Specifically, when no face image exists in the image to be recognized, face recognition of that image need not continue, and recognition can move on to the next image to be recognized. Detecting whether a face image exists allows images without faces to be filtered out, which greatly improves detection efficiency; a minimal sketch of this detect-or-skip logic is given below.
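The sketch below illustrates the detect-or-skip step using the open-source `mtcnn` package as a stand-in for the pre-trained multi-task convolutional neural network; the 0.9 confidence threshold is an assumption.

```python
# Detect faces with MTCNN; an empty result means the current image is skipped and
# the next image to be recognized is processed instead.
from mtcnn import MTCNN
import numpy as np

detector = MTCNN()


def detect_face_boxes(image_rgb: np.ndarray, min_confidence: float = 0.9):
    """Return (x, y, w, h) boxes for detected faces, or an empty list."""
    detections = detector.detect_faces(image_rgb)
    return [d["box"] for d in detections if d["confidence"] >= min_confidence]
```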
Further, one image to be recognized may contain several face images, and in that case every face image needs to be recognized one by one. Accordingly, segmenting the face image from the image to be recognized and then extracting its feature vector with the pre-trained deep neural network to obtain the feature vector to be recognized specifically includes the following steps:
1. and judging whether a plurality of face images exist in the image to be identified.
Specifically, whether a plurality of face images exist in the image to be recognized is confirmed according to the detection result of the multitask convolution neural network.
2. And if so, obtaining at least one face image by dividing the image to be recognized, and endowing each face image with a unique identifier.
Specifically, when a plurality of face images exist in the image to be recognized, each face image is segmented out, and, for ease of distinction, each face image is marked with a unique identifier, which can be formed from the image name of the image to be recognized plus a serial number.
3. And extracting the image characteristics of each face image by using a deep neural network to obtain the characteristic vector to be identified of each face image.
Specifically, after all face images in the image to be recognized are obtained through segmentation, feature extraction is performed on each face image separately; a sketch of the cropping and labeling step follows.
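A minimal sketch of the cropping and identifier assignment described in steps 2 and 3, reusing the (x, y, w, h) box format from the detection sketch above; the identifier pattern `name_serial` is one possible realisation of "image name plus serial number".

```python
# Crop every detected face out of the image to be recognized and tag each crop
# with a unique identifier built from the source image name plus a serial number.
def crop_faces(image_rgb, boxes, image_name: str):
    crops = {}
    for serial, (x, y, w, h) in enumerate(boxes, start=1):
        x, y = max(x, 0), max(y, 0)              # detector boxes may start slightly off-image
        face_id = f"{image_name}_{serial:02d}"   # unique identifier: image name + serial number
        crops[face_id] = image_rgb[y:y + h, x:x + w]
    return crops
```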
It should be understood that, when a single image to be recognized contains face images of several special face types, the union of all the detected special face types is output as the recognition result of that image.
Step S103: the feature vector to be recognized is compared with the label feature vector of each special face type respectively to obtain a target special face type meeting preset conditions, the special face types are preset, and the label feature vector is obtained by calculating image feature vectors extracted from a plurality of special face images, wherein the plurality of special face images belong to the plurality of special face types.
It should be noted that the special face type refers to a face type that needs to be supervised and is specified in advance. In this embodiment, the special face types include a real face, a cartoon face, and a sketch face.
Specifically, before the face type recognition method of the heterogeneous face is used to perform face type recognition, special face types and label feature vectors corresponding to the special face types need to be prepared in advance, and the label feature vectors are calculated according to image feature vectors of special face images corresponding to each special face type.
Furthermore, in order to ensure that the data is balanced enough, the number of the special face images corresponding to each special face type is the same, and the special face images are single face images.
Further, the tag feature vector calculation process includes:
1. and respectively extracting the image feature vector of each special face image by using a pre-trained deep neural network.
Specifically, the 128-dimensional image feature vector of each special face image of each special face type is extracted with a pre-trained FaceNet face recognition model.
2. And respectively calculating a mean characteristic vector and a standard deviation characteristic vector of each special face type according to all image characteristic vectors corresponding to each special face type, wherein the mean characteristic vector and the standard deviation characteristic vector form a label characteristic vector.
Specifically, after the image feature vector of each special face image is obtained, the feature values of the image feature vectors of all special face images belonging to the same special face type are summed dimension by dimension and divided by the number of those images to obtain the mean of each dimension; the means of all dimensions form the mean feature vector. The standard deviation feature vector is then calculated from the image feature vectors and the mean feature vector. Once obtained, the mean feature vector and the standard deviation feature vector together serve as the label feature vector of that special face type.
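A minimal sketch of this label-feature-vector calculation, assuming the sample embeddings of each special face type have already been stacked into an (N, 128) array:

```python
# For each special face type, the per-dimension mean and standard deviation of its
# sample image feature vectors form that type's label feature vector.
import numpy as np


def build_label_vectors(features_by_type: dict) -> dict:
    """features_by_type maps a face type name to an (N, 128) array of embeddings."""
    labels = {}
    for face_type, feats in features_by_type.items():
        labels[face_type] = {
            "mean": feats.mean(axis=0),  # mean feature vector
            "std": feats.std(axis=0),    # standard deviation feature vector
        }
    return labels
```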
Further, comparing the feature vector to be recognized with the label feature vector of each special face type respectively to obtain a target special face type meeting a preset condition, including:
1. and constructing a feature value range of each dimension corresponding to each special face type by using the feature value of each dimension of the mean feature vector and the standard deviation feature vector corresponding to each special face type.
Specifically, the feature value range of each dimension is [mean feature value - 2 × standard deviation feature value, mean feature value + 2 × standard deviation feature value], taken per dimension from the mean and standard deviation feature vectors.
2. And comparing the characteristic value of each dimension of the characteristic vector to be recognized with the characteristic value range of each dimension corresponding to each special face type respectively to obtain a first special face type meeting a preset range condition, wherein the preset range condition comprises that the characteristic value of each dimension of the characteristic vector to be recognized falls into the characteristic value range of the corresponding dimension of the first special face type.
3. And calculating the Euclidean distance between the image feature vector of each special face image in the first special face type and the feature vector to be recognized.
4. And when at least one Euclidean distance meets a preset distance condition, keeping the first special face type, otherwise, deleting the first special face type.
5. And selecting a first special face type corresponding to the image feature vector with the minimum Euclidean distance as a target special face type.
Further, when no first special face type meeting the preset range condition exists, or no Euclidean distance meeting the preset distance condition exists, the image to be recognized is routed to manual review. A sketch of this whole comparison flow is given below.
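The sketch below ties steps 1 to 5 and the manual-review fallback together. The distance threshold `max_distance` is an assumption; the patent only states that a preset distance condition is used.

```python
# Range filter with mean +/- 2 * std per dimension, then Euclidean distance against
# every sample embedding of each surviving ("first") special face type; returning
# None means the image to be recognized is routed to manual review.
import numpy as np


def classify_face_type(query: np.ndarray, labels: dict, samples_by_type: dict,
                       max_distance: float = 1.0):
    candidates = []
    for face_type, lab in labels.items():
        low, high = lab["mean"] - 2 * lab["std"], lab["mean"] + 2 * lab["std"]
        if np.all((query >= low) & (query <= high)):   # every dimension falls in range
            candidates.append(face_type)

    best_type, best_dist = None, np.inf
    for face_type in candidates:
        dists = np.linalg.norm(samples_by_type[face_type] - query, axis=1)
        nearest = dists.min()
        if nearest <= max_distance and nearest < best_dist:  # keep only if a sample is close enough
            best_type, best_dist = face_type, nearest
    return best_type
```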
Step S104: and outputting the target special face type as the face type of the image to be recognized.
In the method for recognizing the face types of heterogeneous faces provided by this embodiment of the invention, label feature vectors representing the special face types are constructed in advance; after the image to be recognized is acquired, the feature vector to be recognized is extracted from it and compared with each label feature vector, so that the target label feature vector closest to the feature vector to be recognized, and the corresponding target special face type, are determined, and the target special face type is output as the face type to which the image to be recognized belongs.
Fig. 2 is a functional module schematic diagram of a face type recognition apparatus for a heterogeneous face according to an embodiment of the present invention. As shown in fig. 2, the apparatus 20 includes an obtaining module 21, an extracting module 22, a comparing module 23, and an outputting module 24.
An obtaining module 21, configured to obtain an image to be identified;
the extraction module 22 is configured to extract a feature vector to be identified from an image to be identified;
the comparison module 23 is configured to compare the feature vector to be identified with the tag feature vector of each special face type, respectively, to obtain a target special face type meeting a preset condition, where the special face type is preset, and the tag feature vector is calculated from image feature vectors extracted from multiple special face images, where the multiple special face images belong to multiple special face types;
and the output module 24 is used for outputting the target special face type as the face type of the image to be recognized.
Optionally, the comparing module 23 is further configured to pre-calculate a tag feature vector, which includes: respectively extracting the image feature vector of each special face image by using a pre-trained deep neural network; and respectively calculating a mean characteristic vector and a standard deviation characteristic vector of each special face type according to all image characteristic vectors corresponding to each special face type, wherein the mean characteristic vector and the standard deviation characteristic vector form a label characteristic vector.
Optionally, the comparing module 23 performs an operation of comparing the feature vector to be recognized with the tag feature vector of each special face type, to obtain a target special face type meeting the preset condition, and the operation specifically includes: constructing a feature value range of each dimension corresponding to each special face type by using the feature value of each dimension of the mean feature vector and the standard deviation feature vector corresponding to each special face type; comparing the characteristic value of each dimension of the characteristic vector to be recognized with the characteristic value range of each dimension corresponding to each special face type respectively to obtain a first special face type meeting a preset range condition, wherein the preset range condition comprises that the characteristic value of each dimension of the characteristic vector to be recognized falls into the characteristic value range of the corresponding dimension of the first special face type; calculating the Euclidean distance between the image feature vector of each special face image in the first special face type and the feature vector to be recognized; when at least one Euclidean distance meets a preset distance condition, a first special face type is reserved, otherwise, the first special face type is deleted; and selecting a first special face type corresponding to the image feature vector with the minimum Euclidean distance as a target special face type.
Optionally, the comparison module 23 is further configured to: when no first special face type meeting the preset range condition exists, or no Euclidean distance meeting the preset distance condition exists, route the image to be recognized to manual review.
Optionally, the extracting module 22 performs an operation of extracting a feature vector to be identified from the image to be identified, which specifically includes: detecting whether a face image exists in an image to be recognized or not by utilizing a pre-trained multi-task convolutional neural network; if the facial image exists, segmenting the image to be recognized to obtain a facial image, and extracting a feature vector of the facial image by using a pre-trained deep neural network to obtain a feature vector to be recognized; and if the image to be identified does not exist, skipping the current image to be identified and continuously identifying the next image to be identified.
Optionally, the extracting module 22 performs an operation of obtaining a face image by segmenting the image to be recognized, and then extracting the feature vector of the face image by using a pre-trained deep neural network to obtain the feature vector to be recognized, which specifically includes: judging whether a plurality of face images exist in the image to be recognized or not; if yes, at least one face image is obtained by segmentation from the image to be recognized, and a unique identifier is given to each face image; and extracting the image characteristics of each face image by using a deep neural network to obtain the characteristic vector to be identified of each face image.
Optionally, the number of the special face images corresponding to each special face type is the same, and the special face images are single face images.
For other details of the technical solutions implemented by the modules in the face type recognition apparatus for a heterogeneous face in the foregoing embodiments, reference may be made to the description in the face type recognition method for a heterogeneous face in the foregoing embodiments, and details are not repeated here.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present invention. As shown in fig. 3, the computer device 30 includes a processor 31 and a memory 32 coupled to the processor 31, wherein the memory 32 stores program instructions, and when the program instructions are executed by the processor 31, the processor 31 executes the steps of the method for recognizing the face types of the heterogeneous faces according to any of the embodiments.
The processor 31 may also be referred to as a CPU (Central Processing Unit). The processor 31 may be an integrated circuit chip having signal processing capabilities. The processor 31 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a storage medium according to an embodiment of the invention. The storage medium of the embodiment of the present invention stores program instructions 41 capable of implementing all the methods described above, where the program instructions 41 may be stored in the storage medium in the form of a software product, and include several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, or computer equipment, such as a computer, a server, a mobile phone, and a tablet.
In the several embodiments provided in the present application, it should be understood that the disclosed computer apparatus, device and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. The above embodiments are merely examples and are not intended to limit the scope of the present disclosure, and all modifications, equivalents, and flow charts using the contents of the specification and drawings of the present disclosure or those directly or indirectly applied to other related technical fields are intended to be included in the scope of the present disclosure.

Claims (10)

1. A face type recognition method of heterogeneous faces is characterized by comprising the following steps:
acquiring an image to be identified;
extracting a feature vector to be identified from the image to be identified;
comparing the feature vector to be recognized with a label feature vector of each special face type respectively to obtain a target special face type meeting a preset condition, wherein the special face type is preset, and the label feature vector is obtained by calculating image feature vectors extracted from a plurality of special face images, wherein the plurality of special face images belong to a plurality of special face types;
and outputting the target special face type as the face type of the image to be recognized.
2. The method for recognizing the face types of the heterogeneous faces according to claim 1, wherein the label feature vector calculation process comprises the following steps:
respectively extracting the image feature vector of each special face image by using a pre-trained deep neural network;
and respectively calculating a mean characteristic vector and a standard deviation characteristic vector of each special face type according to all image characteristic vectors corresponding to each special face type, wherein the mean characteristic vector and the standard deviation characteristic vector form the label characteristic vector.
3. The method for recognizing the face types of the heterogeneous faces according to claim 2, wherein the step of comparing the feature vector to be recognized with the label feature vector of each special face type respectively to obtain a target special face type meeting a preset condition comprises the steps of:
constructing a feature value range of each dimension corresponding to each special face type by using the feature value of each dimension of the mean feature vector and the standard deviation feature vector corresponding to each special face type;
comparing the characteristic value of each dimension of the characteristic vector to be recognized with the characteristic value range of each dimension corresponding to each special face type respectively to obtain a first special face type meeting a preset range condition, wherein the preset range condition comprises that the characteristic value of each dimension of the characteristic vector to be recognized falls into the characteristic value range of the corresponding dimension of the first special face type;
calculating Euclidean distance between the image feature vector of each special face image in the first special face type and the feature vector to be recognized;
when at least one Euclidean distance meets a preset distance condition, the first special face type is reserved, otherwise, the first special face type is deleted;
and selecting a first special face type corresponding to the image feature vector with the minimum Euclidean distance as the target special face type.
4. The method for recognizing the face types of the heterogeneous faces according to claim 3, wherein the method further comprises the following steps:
and when the first special face type meeting the preset range condition does not exist or the Euclidean distance meeting the preset distance condition does not exist, the image to be recognized is conveyed for manual examination.
5. The method for recognizing the face types of the heterogeneous faces according to claim 1, wherein the step of extracting the feature vectors to be recognized from the images to be recognized comprises the following steps:
detecting whether a face image exists in the image to be recognized or not by utilizing a pre-trained multi-task convolutional neural network;
if the facial image exists, the facial image is obtained by segmenting the image to be recognized, and then the feature vector of the facial image is extracted by utilizing a pre-trained deep neural network to obtain the feature vector to be recognized;
and if the image to be identified does not exist, skipping the current image to be identified and continuously identifying the next image to be identified.
6. The method for recognizing the face types of the heterogeneous faces according to claim 5, wherein the obtaining of the face image by segmenting the image to be recognized and then extracting the feature vector of the face image by using a pre-trained deep neural network to obtain the feature vector to be recognized comprises:
judging whether a plurality of face images exist in the image to be recognized or not;
if yes, segmenting the image to be identified to obtain at least one face image, and endowing each face image with a unique identifier;
and extracting the image characteristics of each face image by using the deep neural network to obtain the characteristic vector to be identified of each face image.
7. The method for recognizing the human face types of the heterogeneous human faces according to claim 1, wherein the number of the special human face images corresponding to each special human face type is the same, and the special human face images are single human face images.
8. A face type recognition device for heterogeneous faces, comprising:
the acquisition module is used for acquiring an image to be identified;
the extraction module is used for extracting the feature vector to be identified from the image to be identified;
the comparison module is used for comparing the feature vector to be identified with the label feature vector of each special face type respectively to obtain a target special face type meeting preset conditions, the special face types are preset, and the label feature vector is obtained by calculating image feature vectors extracted from a plurality of special face images, wherein the plurality of special face images belong to a plurality of special face types;
and the output module is used for outputting the target special face type as the face type of the image to be recognized.
9. A computer device, characterized in that it comprises a processor, a memory coupled to the processor, in which memory program instructions are stored, which program instructions, when executed by the processor, cause the processor to carry out the steps of the method for face type recognition of heterogeneous faces according to any of claims 1 to 7.
10. A storage medium storing program instructions capable of implementing the face type recognition method of a heterogeneous face according to any one of claims 1 to 7.
CN202210746798.9A 2022-06-29 2022-06-29 Method, device and equipment for recognizing face types of heterogeneous faces and storage medium Pending CN115273177A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210746798.9A CN115273177A (en) 2022-06-29 2022-06-29 Method, device and equipment for recognizing face types of heterogeneous faces and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210746798.9A CN115273177A (en) 2022-06-29 2022-06-29 Method, device and equipment for recognizing face types of heterogeneous faces and storage medium

Publications (1)

Publication Number Publication Date
CN115273177A 2022-11-01

Family

ID=83763461

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210746798.9A Pending CN115273177A (en) 2022-06-29 2022-06-29 Method, device and equipment for recognizing face types of heterogeneous faces and storage medium

Country Status (1)

Country Link
CN (1) CN115273177A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116342968A (en) * 2023-01-18 2023-06-27 北京六律科技有限责任公司 Dual-channel face recognition method and device
CN116342968B (en) * 2023-01-18 2024-03-19 北京六律科技有限责任公司 Dual-channel face recognition method and device

Similar Documents

Publication Publication Date Title
Ni et al. Multilevel depth and image fusion for human activity detection
CN112434721A (en) Image classification method, system, storage medium and terminal based on small sample learning
CN109255352A (en) Object detection method, apparatus and system
Li et al. Adaptive metric learning for saliency detection
CN113762309B (en) Object matching method, device and equipment
Yan et al. Multiscale convolutional neural networks for hand detection
CN111737479B (en) Data acquisition method and device, electronic equipment and storage medium
CN110751097B (en) Semi-supervised three-dimensional point cloud gesture key point detection method
CN108198172B (en) Image significance detection method and device
Li et al. Hierarchical semantic parsing for object pose estimation in densely cluttered scenes
WO2023151237A1 (en) Face pose estimation method and apparatus, electronic device, and storage medium
CN108268510B (en) Image annotation method and device
CN111401339A (en) Method and device for identifying age of person in face image and electronic equipment
CN115035367A (en) Picture identification method and device and electronic equipment
CN115273177A (en) Method, device and equipment for recognizing face types of heterogeneous faces and storage medium
JP5734000B2 (en) Object identification system and method, and feature point position extraction system and method
CN111723688B (en) Human body action recognition result evaluation method and device and electronic equipment
CN114972492A (en) Position and pose determination method and device based on aerial view and computer storage medium
CN117058421A (en) Multi-head model-based image detection key point method, system, platform and medium
CN111680680A (en) Object code positioning method and device, electronic equipment and storage medium
CN114913330B (en) Point cloud component segmentation method and device, electronic equipment and storage medium
CN115240647A (en) Sound event detection method and device, electronic equipment and storage medium
CN114581978A (en) Face recognition method and system
CN114332599A (en) Image recognition method, image recognition device, computer equipment, storage medium and product
CN103514434B (en) Method and device for identifying image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination