CN114155589B - Image processing method, device, equipment and storage medium - Google Patents
Image processing method, device, equipment and storage medium
- Publication number
- CN114155589B (application number CN202111448404.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- abnormal
- face
- face recognition
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The disclosure provides an image processing method, apparatus, device, and storage medium, relating to the field of artificial intelligence, in particular to deep learning and computer vision, and applicable to face recognition scenarios. The specific implementation scheme is as follows: a target image is selected from the images to be processed, the target image being an image to be processed from which face information has been successfully extracted by a face recognition task; face features of the target image are extracted and clustered; and the abnormal images among the target images are determined according to the clustering result. Abnormal images that are prone to misrecognition can thus be collected automatically during the face recognition process.
Description
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to the technical fields of deep learning and computer vision, and can be used in face recognition scenarios.
Background
With the development of the Internet and artificial intelligence, face recognition has been widely applied in production and daily life. However, during face recognition there are abnormal images that affect recognition accuracy, such as non-face images, blurred images, or large-angle face images. Therefore, quickly and accurately extracting abnormal images from massive image data is important for the accuracy of face recognition.
Disclosure of Invention
The present disclosure provides an image processing method, apparatus, device, and storage medium.
According to an aspect of the present disclosure, there is provided an image processing method including:
selecting a target image from the images to be processed; the target image is an image to be processed from which face information has been successfully extracted by a face recognition task;
extracting face features of the target image and clustering the face features;
and determining an abnormal image in the target image according to the clustering result.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image processing method of any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the image processing method of any of the embodiments of the present disclosure.
According to the disclosed technology, abnormal images that are prone to misrecognition can be collected automatically during the face recognition process, providing a basis for improving the accuracy of face recognition based on these abnormal images.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow chart of an image processing method provided in accordance with an embodiment of the present disclosure;
FIG. 2 is a flow chart of another image processing method provided in accordance with an embodiment of the present disclosure;
FIG. 3 is a flow chart of another image processing method provided in accordance with an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a face recognition system according to an embodiment of the present disclosure;
fig. 5 is a schematic structural view of an image processing apparatus provided according to an embodiment of the present disclosure;
fig. 6 is a block diagram of an electronic device for implementing an image processing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
With the development of the Internet and artificial intelligence, face recognition has been widely applied in production and daily life. However, during face recognition there are abnormal images that affect recognition accuracy, such as non-face images, blurred images, or large-angle face images. Because conventional training sample sets contain little abnormal data, the trained face recognition system has difficulty detecting abnormal images accurately. Therefore, it is necessary to collect a large number of abnormal images to augment the training samples and iteratively train the face recognition system. At present, abnormal images are usually collected manually, which is costly and inefficient. The embodiments of the disclosure provide a new solution that automatically collects, during face recognition, the abnormal images that are prone to misrecognition.
Fig. 1 is a flowchart of an image processing method provided according to an embodiment of the present disclosure. The embodiment of the disclosure is suitable for automatically acquiring abnormal images that affect the recognition accuracy of a face recognition system, and in particular for rapidly extracting such abnormal images from massive image data. The method may be performed by an image processing apparatus, which may be implemented in software and/or hardware. As shown in fig. 1, the image processing method provided in this embodiment may include:
S101, selecting a target image from the images to be processed.
The target image is an image to be processed from which face information has been successfully extracted by a face recognition task. The face recognition task at least includes detecting whether the image to be processed is a face image, and may further include aligning the face image, evaluating image quality, detecting living bodies, identifying the user identity corresponding to the face image, and the like. In this embodiment the face recognition task may be performed by a face recognition system. Correspondingly, the face recognition system at least includes a face detection model and may further include a face alignment model, a quality evaluation model, a living body detection model, a face recognition model, and the like. It should be noted that, in the process of executing the face recognition task on any image to be processed, face information needs to be extracted from the image; face information cannot be extracted from a non-face image (such as a landscape image), and can only be successfully extracted from a face image.
The image to be processed may be unlabeled image data from an actual production environment or from an image library; that is, whether the image to be processed is a face image, the corresponding user identity, and so on are not labeled. The number of images to be processed in this embodiment is preferably plural.
It should be noted that the images to be processed in this embodiment may include the various types of images encountered in the face recognition process, for example: normal face images, excludable face images, and abnormal images. A normal face image may be a standard face image from which the face recognition system can accurately identify the user; an excludable face image may be an image, such as a landscape image, that is clearly unsuitable for face recognition and that the face recognition system can reliably detect as such; an abnormal image may be an image that should be screened out by the face recognition system but in practice is not, and that is unsuitable for face recognition, for example an animal face image, a blurred face image, or a large-angle face image.
Optionally, in this embodiment of the disclosure, when the face recognition system performs the face recognition task on the input images to be processed, it attempts face information extraction for each image, and if the face information is successfully extracted, that image is taken as a target image. Since the face recognition system may misrecognize abnormal images, that is, face information that should not be extractable from an abnormal image is in fact successfully extracted, the target images acquired in this embodiment include both abnormal images and normal face images.
Optionally, in this embodiment, the target images may be selected from the images to be processed while the face recognition system performs the face recognition task on them, or after the face recognition system has finished performing the face recognition task on them.
Since the number of images to be processed in the present embodiment is plural, the number of extracted target images is also generally plural.
S102, extracting the face features of the target image and clustering the face features.
Here, a face feature may be a feature that characterizes a person's facial information.
When normal face images are mapped into the face feature space, images of the same person are mapped to nearby positions, i.e., the face features of the same person in different images have high similarity. When abnormal images are mapped into the face feature space, abnormal images sharing the same cause of abnormality are likewise mapped to nearby positions, i.e., their face features have high similarity. For example, the face features of all blurred face images are highly similar to one another, the face features of all large-angle face images are highly similar to one another, and the face features of all non-face images are highly similar to one another. In view of this, the present embodiment can determine the abnormal images among the target images by extracting their face features and clustering them.
Specifically, in this embodiment, after the target images are obtained, face feature extraction is performed on each target image. There are many ways to extract the face features, and this embodiment does not limit them. For example, a face feature extraction algorithm may be used to process the target image, or the face features may be extracted by a pre-trained face feature extraction model. After the face features of each target image are obtained, they can be clustered; the resulting clustering result indicates which target images are grouped into each class, how many target images each class contains, and so on. The clustering method is likewise not limited in this embodiment and may be, for example, single-linkage or complete-linkage clustering.
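As a concrete illustration of S102, the following is a minimal sketch assuming each target image already has a face embedding (for example, one produced by the face recognition system, as described further below). The use of SciPy hierarchical clustering, the cosine metric, and the distance threshold value are illustrative assumptions, not requirements of the disclosure.

```python
# A minimal sketch of S102: cluster face embeddings of the target images.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_face_features(features: np.ndarray, max_distance: float = 0.4) -> np.ndarray:
    """Cluster (N, D) face embeddings with single-linkage hierarchical clustering.

    Returns an array of N cluster labels; complete linkage ("complete") could be
    used instead, as mentioned in the description.
    """
    z = linkage(features, method="single", metric="cosine")
    return fcluster(z, t=max_distance, criterion="distance")
```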
S103, determining an abnormal image in the target image according to the clustering result.
Optionally, when determining the abnormal images among the target images according to the clustering result, this embodiment may analyze whether each class in the clustering result is an abnormal class corresponding to some cause of abnormality, and take every image contained in each abnormal class as an abnormal image. The causes of abnormality may include, but are not limited to, image blurring, large-angle faces, non-faces, and the like. There are many ways to determine whether each class in the clustering result is an abnormal class, among them:
One embodiment is: for each class in the clustering result, select the face features of at least one target image in the class and analyze the cause of abnormality, either with an abnormal-class analysis algorithm or by manual analysis, and determine from the analysis result whether the class is an abnormal class and, if so, the corresponding cause of abnormality.
Another embodiment is: after a large number of target images are clustered, the clustering result usually follows a long-tail distribution, and the head of the long-tail distribution, that is, the classes containing many images, usually consists of abnormal images. This embodiment can therefore analyze the number of images contained in each class of the clustering result and take the classes with larger counts as abnormal classes.
Another embodiment is: calculate the feature distribution area of each class in the clustering result, and take the classes whose distribution area exceeds a preset range as abnormal classes. A sketch of this idea follows.
It should be noted that, in this embodiment, other manners may be used to determine the abnormal class in the clustering result, which is not limited.
Optionally, when determining the abnormal images among the target images, this embodiment may distinguish the abnormal images by cause of abnormality, that is, obtain a separate image set of abnormal images for each cause; or it may not distinguish them, i.e., take all abnormal images as one image set.
According to the scheme of this embodiment of the disclosure, target images from which face information has been successfully extracted by the face recognition task are selected from the images to be processed, face feature extraction and clustering are performed on the target images, and the abnormal images among the target images are determined according to the clustering result. With this scheme, abnormal images that are prone to misrecognition can be collected automatically and in batches from a large number of images to be processed during face recognition, at low cost and high efficiency; the face recognition system can then be optimized based on the abnormal images, providing a basis for improving the accuracy of the face recognition process.
Further, another optional way of extracting the face features of the target image in this embodiment is to acquire the face features of the target image output by the face recognition system while it performs the face recognition task on the target image. Specifically, since the target image is an image from which face information was successfully extracted by a face recognition task, and face features are necessarily used in determining the user identity, the face features of the target image are necessarily extracted while the face recognition system performs the face recognition task; for example, the face features of the target image may be obtained from the face recognition stage of the face recognition system. Directly reusing the face features already extracted by the face recognition system means no additional computing resources are spent on feature extraction, reducing resource consumption and improving the efficiency of face feature extraction.
Further, the method of this embodiment also includes: when an abnormal-image update condition is satisfied, taking the abnormal images among the images to be processed as new images to be processed and triggering the selection of target images from the new images to be processed. There may be many abnormal-image update conditions, which are not limited here; for example, the number of abnormal images reaching a preset threshold, or the current time reaching the update period for abnormal images. In this embodiment, when the update condition is satisfied, the abnormal images determined in S103 are taken as new images to be processed, S101 is triggered again to select target images from them, the face features of the newly selected target images are extracted and clustered, and the abnormal images among them are determined according to the clustering result; that is, S101-S103 are re-executed to continuously update the abnormal images among the images to be processed. The advantage of this is that the accuracy of the determined abnormal images is further improved through multiple iterations, as sketched below.
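A compact sketch of this iterative update is given below. The callables run_pipeline (standing in for S101-S103) and should_update (standing in for the abnormal-image update condition) are assumed interfaces supplied by the caller, not interfaces defined by the disclosure.

```python
# A sketch of the iterative refinement: re-run S101-S103 on the previously
# collected abnormal images until the update condition no longer holds.
def refine_abnormal_images(images, run_pipeline, should_update, max_rounds: int = 5):
    abnormal = run_pipeline(images)             # first pass over all images
    for _ in range(max_rounds):
        if not should_update(abnormal):
            break
        abnormal = run_pipeline(abnormal)       # treat the abnormal set as new input
    return abnormal
```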
Fig. 2 is a flowchart of another image processing method provided according to an embodiment of the present disclosure. On the basis of the foregoing embodiments, this embodiment of the disclosure further explains how to determine the abnormal images among the target images according to the clustering result. As shown in fig. 2, the image processing method provided in this embodiment may include:
S201, selecting a target image from the images to be processed.
S202, extracting face features of the target image and clustering the face features.
S203, determining an abnormality threshold.
The anomaly threshold may be a count threshold used to determine whether each class in the clustering result is an abnormal class.
Optionally, in this embodiment, an anomaly threshold may be determined separately for the abnormal classes corresponding to each cause of abnormality, or one uniform anomaly threshold may be determined for all abnormal classes. The specific manner of determining the anomaly threshold is not limited in this embodiment.
One embodiment is: perform a statistical analysis on the clustering results of the face features of a large number of target images and determine the anomaly threshold according to the sizes of the abnormal classes in each clustering result; the anomaly threshold may be, for example, the mean, a weighted mean, or the maximum of the abnormal-class sizes across those clustering results.
Another embodiment is: determine the anomaly threshold for the current target images, according to a certain rule, from the numbers of images contained in the different classes of the clustering result of the current target images' face features.
Optionally, for the second embodiment above, the anomaly threshold may preferably be determined as follows: calculate the quartile value and the interquartile range from the numbers of images contained in the different classes of the clustering result, and determine the anomaly threshold from the quartile value and the interquartile range. Specifically, after sorting the image counts of the classes, the upper quartile Q3 and the interquartile range IQR are determined according to their usual formulas, and the anomaly threshold is taken as Q3 plus a preset multiple (for example, 1.5 or 3) of the IQR. The advantage of this is that the accuracy of the anomaly threshold is greatly improved, which in turn guarantees the accuracy of the abnormal images extracted based on it.
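The quartile-based threshold can be sketched as follows, assuming the per-class image counts are already available; the multiplier k (1.5 or 3 in the description) is a tunable choice.

```python
# A sketch of S203: derive the anomaly threshold from cluster sizes (Q3 + k * IQR).
import numpy as np

def anomaly_threshold(cluster_sizes, k: float = 1.5) -> float:
    sizes = np.asarray(cluster_sizes, dtype=float)
    q1, q3 = np.percentile(sizes, [25, 75])     # lower and upper quartiles
    iqr = q3 - q1                               # interquartile range
    return q3 + k * iqr                         # classes above this count are abnormal
```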
S204, determining abnormal classes in the clustering result according to the number of images contained in different classes in the clustering result and the abnormal threshold.
Optionally, in this embodiment, for each class in the clustering result, the number of target images it contains may be compared with an anomaly threshold, either a uniform anomaly threshold or the anomaly threshold corresponding to that class. If the number of images is greater than or equal to the anomaly threshold, the class is an abnormal class; otherwise, it is a normal face class.
S205, the target image belonging to the abnormality class is set as the abnormality image.
Optionally, after each abnormal class in the clustering result is determined, the target images contained in each abnormal class are acquired and taken as abnormal images.
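Under the same assumptions, S204-S205 reduce to comparing each class's image count with the threshold and collecting the members of the classes that meet it, for example:

```python
# A sketch of S204-S205: classes whose image count reaches the threshold are
# abnormal classes, and all of their member images are abnormal images.
from collections import Counter

def collect_abnormal_images(labels, image_ids, threshold: float):
    counts = Counter(labels)
    abnormal_classes = {cls for cls, n in counts.items() if n >= threshold}
    return [img for img, lab in zip(image_ids, labels) if lab in abnormal_classes]
```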
According to the scheme of this embodiment of the disclosure, target images from which face information has been successfully extracted by the face recognition task are selected from the images to be processed, and face feature extraction and clustering are performed on them; an anomaly threshold is then determined, and the abnormal images among the target images are determined from the relation between the anomaly threshold and the numbers of images contained in the different classes of the clustering result. By comparing each class's image count with the anomaly threshold, abnormal images can be determined quickly and accurately, and the flexibility of the abnormal-image determination is improved.
Fig. 3 is a flowchart of another image processing method provided according to an embodiment of the present disclosure. On the basis of the above embodiments, this embodiment of the disclosure provides a preferred example of quickly judging whether a newly added image is an abnormal image. As shown in fig. 3, the image processing method provided in this embodiment may include:
S301, selecting a target image from the images to be processed.
S302, extracting the face features of the target image and clustering the face features.
S303, determining an abnormality threshold.
S304, determining abnormal classes in the clustering result according to the number of images contained in different classes in the clustering result and the abnormal threshold.
S305, taking the target image belonging to the abnormality class as an abnormality image.
S306, according to the abnormal image corresponding to the abnormal class, determining the central characteristic of the abnormal class.
The center feature of an abnormal class may be a feature representing the face information of the abnormal class as a whole. This embodiment may determine one center feature for all abnormal classes together, or determine a center feature for each abnormal class corresponding to a cause of abnormality.
Optionally, in this embodiment, the center feature of an abnormal class may be determined from its corresponding abnormal images as follows: for each abnormal class (or for all abnormal classes together), fuse the face features of the abnormal images contained in the class to obtain its center feature. There are many ways to fuse the face features of the abnormal images, which this embodiment does not limit. For example, the face features of the abnormal images may be averaged, or a weighted average may be taken, and the resulting mean or weighted mean used as the center feature of the abnormal class. For a weighted average, the weight of each abnormal image may be set according to its quality score or the norm (e.g., the L2 norm) of its face feature; for example, the higher the image quality or the larger the feature norm, the higher the weight.
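A minimal sketch of this center-feature computation is shown below, assuming the face features are embedding vectors; the optional weights (e.g., derived from quality scores or feature norms) are supplied by the caller.

```python
# A sketch of S306: the center feature of an abnormal class as a (weighted) mean
# of its members' face features.
import numpy as np

def class_center(features: np.ndarray, weights=None) -> np.ndarray:
    """features: (M, D) embeddings of one abnormal class; weights: optional (M,)."""
    if weights is None:
        center = features.mean(axis=0)
    else:
        w = np.asarray(weights, dtype=float)
        center = (features * w[:, None]).sum(axis=0) / w.sum()
    return center / np.linalg.norm(center)      # re-normalize for cosine comparison
```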
S307, according to the center feature of the abnormal class, determining whether a newly added image is to be taken as an abnormal image of the abnormal class; if so, execute S308, and if not, return to S307.
The newly added image may be newly added image data without labels in the production environment or the image library.
Optionally, in this embodiment, after S301-S306 have been executed to determine an initial set of abnormal images, whenever a newly added image is detected, its face features can be extracted and the similarity between these face features and the center feature of each abnormal class can be calculated. If the similarity is higher than a similarity threshold, the newly added image maps to a position in the face feature space close to the abnormal images of that class, i.e., it belongs to the abnormal class, and S308 is executed to take the newly added image as an abnormal image of that class. Otherwise, the newly added image does not belong to the abnormal class; the method then returns to S307 and, when another newly added image is detected, again determines whether it can be taken as an abnormal image of an abnormal class.
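This comparison can be sketched as follows, assuming unit-normalized embeddings and cosine similarity; the similarity threshold is an assumed hyperparameter.

```python
# A sketch of S307: compare the newly added image's face feature with each
# abnormal-class center and return the matching class, if any.
import numpy as np

def match_abnormal_class(new_feature: np.ndarray, centers: dict, sim_threshold: float = 0.8):
    """centers: {class_id: unit-norm center vector}. Returns a class id or None."""
    f = new_feature / np.linalg.norm(new_feature)
    best_cls, best_sim = None, -1.0
    for cls, center in centers.items():
        sim = float(np.dot(f, center))          # cosine similarity
        if sim > best_sim:
            best_cls, best_sim = cls, sim
    return best_cls if best_sim >= sim_threshold else None
```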
S308, taking the newly added image as an abnormal image of the abnormal class, and returning to S306.
Optionally, after taking the newly added image as an abnormal image of the abnormal class, this embodiment may return to S306 and update the center feature of the abnormal class based on the enlarged set of abnormal images, so as to keep the center feature of the abnormal class accurate.
Optionally, in this embodiment, if a newly added image in the production environment or the image library carries a label, abnormal images may be selected directly according to their labels and used directly as abnormal images of the corresponding abnormal classes.
According to the scheme of this embodiment of the disclosure, target images from which face information has been successfully extracted by the face recognition task are selected from the images to be processed, face feature extraction and clustering are performed on them, the abnormal images among the target images are determined according to the clustering result, and the center features of the abnormal classes are computed. When a newly added image appears, whether it belongs to an abnormal class is quickly judged from the similarity between its face features and the center features of the abnormal classes. In this way, once an initial set of abnormal images has been determined, whether a newly added image is abnormal can be decided quickly and accurately in a simpler and more convenient way, further improving the efficiency of abnormal-image determination.
On the basis of the above embodiments, after the abnormal images among the target images are determined, the embodiments of the disclosure may further train the face recognition system that performs the face recognition task, using the abnormal images as training samples. This optimizes the system parameters of the face recognition system, improves its accuracy in recognizing abnormal images, and thereby further improves its accuracy in performing the face recognition task.
Furthermore, after the system parameters of the face recognition system are updated, this embodiment may again perform the abnormal-image determination on the images to be processed using the updated face recognition system, so as to continuously expand the set of abnormal images and continuously improve the recognition performance of the face recognition system based on the expanded set.
Optionally, the face recognition system of this embodiment of the disclosure is a neural network model system for performing face recognition tasks; it may include at least one task model, each task model being a neural network model. Fig. 4 is a schematic structural diagram of a face recognition system provided according to an embodiment of the disclosure. As shown in fig. 4, the face recognition system 4 includes five task models: a face detection model 41, a face alignment model 42, a quality evaluation model 43, a living body detection model 44, and a face recognition model 45.
The face detection model 41 performs the face detection task: it performs face detection on an image to be processed input into the face recognition system 4 to determine whether the image contains a face region. If so, the face detection passes and the image is passed to the next task model, the face alignment model; otherwise, further processing of the image is refused.
The face alignment model 42 performs the face alignment task: it labels the key facial feature points of the face region identified by the face detection model 41 and passes the image, with the key feature points labeled, to the next task model, the quality evaluation model.
The quality evaluation model 43 performs the quality evaluation task: it evaluates the quality of the received image, for example whether the image sharpness and/or the shooting angle meets the recognition requirements. If the quality evaluation passes, the image is passed to the next task model, the living body detection model; otherwise, further processing of the image is refused.
The living body detection model 44 performs the living body detection task: it determines from the received image whether the target object is a living subject. If so, the living body detection passes and the image is passed to the next task model, the face recognition model; otherwise, further processing of the image is refused.
The face recognition model 45 performs the face recognition task: it recognizes the face in the received image to determine the user identity corresponding to the face image.
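The cascade of fig. 4 can be illustrated with the following sketch. The per-model call signatures and return conventions are assumptions for illustration; each stage either rejects the image or hands it to the next model, matching the gating described above.

```python
# An illustrative sketch of the task-model cascade in Fig. 4.
class FaceRecognitionSystem:
    def __init__(self, detector, aligner, quality, liveness, recognizer):
        self.detector, self.aligner = detector, aligner
        self.quality, self.liveness, self.recognizer = quality, liveness, recognizer

    def run(self, image):
        face = self.detector(image)             # face detection model 41
        if face is None:
            return None                         # reject: no face region found
        aligned = self.aligner(face)            # face alignment model 42
        if not self.quality(aligned):           # quality evaluation model 43
            return None                         # reject: quality requirements not met
        if not self.liveness(aligned):          # living body detection model 44
            return None                         # reject: not a living subject
        return self.recognizer(aligned)         # face recognition model 45 -> identity
```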
Optionally, when the face recognition system of this embodiment includes at least two task models, for example the face recognition system shown in fig. 4, training the face recognition system with the abnormal images as training samples further includes: determining, according to the abnormal class to which an abnormal image belongs, the task model to be updated in the face recognition system that performs the face recognition task; and training the task model to be updated with the abnormal image.
Specifically, one embodiment is: score the quality of each abnormal class according to the abnormal images it contains, determine, from a preset correspondence between score segments and task models, which segment the quality score of the abnormal class falls into, and take the task model corresponding to that segment as the task model to be updated for the abnormal class. Another embodiment is: the user configures the correspondence according to actual needs, for example setting the task model to be updated for non-face abnormal data to the face detection model, and the task model to be updated for blurred abnormal data to the quality evaluation model, and so on. The advantage of this is that the different task models of the face recognition system are trained in a targeted manner with abnormal images of different causes of abnormality, further improving the training accuracy of the task models.
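The user-configured embodiment can be sketched as a simple lookup from cause of abnormality to the task model to retrain; the mapping entries below are illustrative assumptions rather than values prescribed by the disclosure.

```python
# A sketch of selecting the task model to update for each abnormal class.
CAUSE_TO_TASK_MODEL = {
    "non_face": "face_detection_model",         # non-face abnormal data
    "blurred": "quality_evaluation_model",      # blurred abnormal data
    "large_angle": "face_alignment_model",      # assumed mapping for large-angle faces
}

def models_to_update(abnormal_class_causes: dict) -> dict:
    """abnormal_class_causes: {class_id: cause}. Returns {class_id: task model name}."""
    return {cls: CAUSE_TO_TASK_MODEL[cause]
            for cls, cause in abnormal_class_causes.items()
            if cause in CAUSE_TO_TASK_MODEL}
```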
Fig. 5 is a schematic structural view of an image processing apparatus according to an embodiment of the present disclosure. The embodiment of the disclosure is suitable for automatically acquiring the abnormal images affecting the recognition accuracy of the face recognition system, and is particularly suitable for rapidly extracting the abnormal images affecting the recognition accuracy of the face recognition system from massive image data. The apparatus may be implemented in software and/or hardware, and the apparatus may implement the image processing method of any embodiment of the disclosure. As shown in fig. 5, the image processing apparatus includes:
a target image selection module 501, configured to select a target image from the images to be processed, the target image being an image to be processed from which face information has been successfully extracted by a face recognition task;
the image feature processing module 502 is configured to extract a face feature of a target image, and cluster the face feature;
an abnormal image determining module 503, configured to determine an abnormal image in the target image according to the clustering result.
According to the scheme of this embodiment of the disclosure, target images from which face information has been successfully extracted by the face recognition task are selected from the images to be processed, face feature extraction and clustering are performed on the target images, and the abnormal images among the target images are determined according to the clustering result. With this scheme, abnormal images that are prone to misrecognition can be collected automatically and in batches from a large number of images to be processed during face recognition, at low cost and high efficiency; the face recognition system can then be optimized based on the abnormal images, providing a basis for improving the accuracy of the face recognition process.
Further, the abnormal image determining module 503 includes:
a threshold value determining unit configured to determine an abnormality threshold value;
the abnormal class determining unit is used for determining abnormal classes in the clustering result according to the number of images contained in different classes in the clustering result and the abnormal threshold value;
an abnormal image determination unit configured to take a target image belonging to an abnormality class as an abnormal image.
Further, the threshold determining unit is specifically configured to:
calculating a quartile value and a quartile range according to the number of images contained in different categories in the clustering result;
and determining an abnormal threshold according to the quartile value and the quartile range.
Further, the image processing apparatus further includes:
the central feature determining module is used for determining the central feature of the abnormal class according to the abnormal image corresponding to the abnormal class;
the abnormal image determining module is further used for determining whether the newly added image is used as an abnormal image of the abnormal class according to the central characteristics of the abnormal class.
Further, the image feature processing module 502 is specifically configured to:
acquire the face features of the target image that are output when the face recognition system executes the face recognition task on the target image.
Further, the image processing apparatus further includes:
an abnormal image updating module, configured to, when the abnormal-image update condition is satisfied, take the abnormal images among the images to be processed as new images to be processed and trigger the selection of a target image from the new images to be processed.
Further, the image processing apparatus further includes:
and the model training module is used for training the face recognition system for executing the face recognition task by taking the abnormal image as a training sample.
Further, the face recognition system comprises at least two task models, and the corresponding model training module comprises:
the training model determining unit is used for determining a task model to be updated in the face recognition system for executing the face recognition task according to the abnormal class of the abnormal image;
and the model training unit is used for training the task model to be updated according to the abnormal image.
The product can execute the method provided by any embodiment of the disclosure, and has the corresponding functional modules and beneficial effects of executing the method.
In the technical solution of the disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the various images involved (such as normal face images, target images, images to be processed, abnormal images, and newly added images) all comply with the relevant laws and regulations and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 6 illustrates a schematic block diagram of an example electronic device 600 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the apparatus 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 may also be stored. The computing unit 601, ROM 602, and RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Various components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, mouse, etc.; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the respective methods and processes described above, for example, an image processing method. For example, in some embodiments, the image processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When a computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the image processing method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the image processing method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor and can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, a host product in a cloud computing service system that overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS services. The server may also be a server of a distributed system or a server combined with a blockchain.
Artificial intelligence is the discipline of studying the process of making a computer mimic certain mental processes and intelligent behaviors (e.g., learning, reasoning, thinking, planning, etc.) of a person, both hardware-level and software-level techniques. Artificial intelligence hardware technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; the artificial intelligent software technology mainly comprises a computer vision technology, a voice recognition technology, a natural language processing technology, a machine learning/deep learning technology, a big data processing technology, a knowledge graph technology and the like.
Cloud computing refers to a technical system in which an elastically scalable pool of shared physical or virtual resources is accessed through a network; the resources can include servers, operating systems, networks, software, applications, storage devices, and the like, and can be deployed and managed in an on-demand, self-service manner. Cloud computing can provide efficient and powerful data processing capability for technical applications such as artificial intelligence and blockchain, and for model training.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.
Claims (16)
1. An image processing method, comprising:
selecting a target image from the images to be processed; the target image is an image to be processed from which face information is successfully extracted by a face recognition task;
extracting face features of the target image and clustering the face features;
determining an anomaly threshold;
determining abnormal classes in the clustering result according to the numbers of images contained in the different classes of the clustering result and the abnormal threshold, wherein the clustering result follows a long-tail distribution and the abnormal classes are the classes containing relatively more images among the different classes;
and taking the target image belonging to the anomaly class as an anomaly image.
2. The method of claim 1, wherein the determining an anomaly threshold value comprises:
calculating a quartile value and a quartile range according to the number of images contained in different categories in the clustering result;
and determining an abnormal threshold according to the quartile value and the quartile range.
3. The method of claim 1 or 2, further comprising:
determining the central characteristics of the abnormal class according to the abnormal image corresponding to the abnormal class;
and determining whether the newly added image is used as an abnormal image of the abnormal class according to the central characteristics of the abnormal class.
4. The method of claim 1, wherein the extracting facial features of the target image comprises:
acquiring the face features of the target image that are output when the face recognition system executes the face recognition task on the target image.
5. The method of any one of claims 1-2 and 4, further comprising:
and taking the abnormal image in the images to be processed as a new image to be processed and triggering execution of selecting a target image from the new image to be processed under the condition that the abnormal image updating condition is met.
6. The method of any one of claims 1-2 and 4, further comprising:
and training the face recognition system for executing the face recognition task by taking the abnormal image as a training sample.
7. The method of claim 6, wherein the face recognition system comprises at least two task models,
Correspondingly, the training of the face recognition system for executing the face recognition task by taking the abnormal image as a training sample comprises the following steps:
determining a task model to be updated in a face recognition system for executing the face recognition task according to the abnormal class to which the abnormal image belongs;
and training the task model to be updated according to the abnormal image.
8. An image processing apparatus comprising:
the target image selection module is used for selecting a target image from the images to be processed; the target image is an image to be processed from which face information is successfully extracted by a face recognition task;
the image feature processing module is used for extracting the face features of the target image and clustering the face features;
an abnormal image determination module comprising:
a threshold value determining unit configured to determine an abnormality threshold value;
the abnormal class determining unit is used for determining abnormal classes in the clustering result according to the numbers of images contained in the different classes of the clustering result and the abnormal threshold, wherein the clustering result follows a long-tail distribution and the abnormal classes are the classes containing relatively more images among the different classes;
an abnormal image determination unit configured to take a target image belonging to the abnormality class as an abnormal image.
9. The apparatus of claim 8, wherein the threshold determining unit is specifically configured to:
calculate quartile values and an interquartile range according to the number of images contained in the different classes in the clustering result;
and determine the abnormality threshold according to the quartile values and the interquartile range.
10. The apparatus of claim 8 or 9, further comprising:
the central feature determining module is used for determining the central feature of the abnormal class according to the abnormal image corresponding to the abnormal class;
the abnormal image determination module is further used for determining, according to the central feature of the abnormal class, whether a newly added image is taken as an abnormal image of the abnormal class.
11. The apparatus of claim 8, wherein the image feature processing module is specifically configured to:
output the face features of the target image when a face recognition system executes the face recognition task on the target image.
12. The apparatus of any one of claims 8-9 and 11, further comprising:
the abnormal image updating module is used for, in a case that an abnormal image updating condition is met, taking the abnormal images in the images to be processed as new images to be processed and triggering execution of the selecting a target image from the new images to be processed.
13. The apparatus of any one of claims 8-9 and 11, further comprising:
the model training module is used for training a face recognition system for executing the face recognition task by taking the abnormal image as a training sample.
14. The apparatus of claim 13, wherein the face recognition system comprises at least two task models,
correspondingly, the model training module comprises:
the training model determining unit is used for determining a task model to be updated in the face recognition system for executing the face recognition task according to the abnormal class to which the abnormal image belongs;
and the model training unit is used for training the task model to be updated according to the abnormal image.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image processing method of any one of claims 1-7.
16. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the image processing method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111448404.3A CN114155589B (en) | 2021-11-30 | 2021-11-30 | Image processing method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114155589A CN114155589A (en) | 2022-03-08 |
CN114155589B true CN114155589B (en) | 2023-08-08 |
Family
ID=80455168
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111448404.3A Active CN114155589B (en) | 2021-11-30 | 2021-11-30 | Image processing method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114155589B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114972211B (en) * | 2022-05-09 | 2024-09-27 | 推想医疗科技股份有限公司 | Training method, segmentation method, device, equipment and medium for image segmentation model |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107563279A (en) * | 2017-07-22 | 2018-01-09 | 复旦大学 | The model training method adjusted for the adaptive weighting of human body attributive classification |
CN108228684A (en) * | 2017-05-26 | 2018-06-29 | 北京市商汤科技开发有限公司 | Training method, device, electronic equipment and the computer storage media of Clustering Model |
CN110245679A (en) * | 2019-05-08 | 2019-09-17 | 北京旷视科技有限公司 | Image clustering method, device, electronic equipment and computer readable storage medium |
CN110414431A (en) * | 2019-07-29 | 2019-11-05 | 广州像素数据技术股份有限公司 | Face identification method and system based on elastic context relation loss function |
CN110427888A (en) * | 2019-08-05 | 2019-11-08 | 北京深醒科技有限公司 | A kind of face method for evaluating quality based on feature clustering |
CN110795975A (en) * | 2018-08-03 | 2020-02-14 | 浙江宇视科技有限公司 | Face false detection optimization method and device |
CN112668482A (en) * | 2020-12-29 | 2021-04-16 | 中国平安人寿保险股份有限公司 | Face recognition training method and device, computer equipment and storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10289825B2 (en) * | 2016-07-22 | 2019-05-14 | Nec Corporation | Login access control for secure/private data |
Non-Patent Citations (1)
Title |
---|
Zhu Yongzhi, Su Xiaoyun; Research on Face Recognition Technology Based on Multi-Task Convolutional Neural Network; Communication Technology (《通信技术》), Vol. 53, No. 3, pp. 718-723 *
Similar Documents
Publication | Title |
---|---|
CN113705425B (en) | Training method of living body detection model, and method, device and equipment for living body detection |
CN112906502A (en) | Training method, device and equipment of target detection model and storage medium |
CN112633276B (en) | Training method, recognition method, device, equipment and medium |
CN113869449A (en) | Model training method, image processing method, device, equipment and storage medium |
CN115457329B (en) | Training method of image classification model, image classification method and device |
CN113947188A (en) | Training method of target detection network and vehicle detection method |
CN113591736A (en) | Feature extraction network, training method of living body detection model and living body detection method |
CN113688887A (en) | Training and image recognition method and device of image recognition model |
CN114155589B (en) | Image processing method, device, equipment and storage medium |
CN113827240B (en) | Emotion classification method, training device and training equipment for emotion classification model |
CN114511756A (en) | Attack method and device based on genetic algorithm and computer program product |
CN113657248A (en) | Training method and device for face recognition model and computer program product |
CN115937993B (en) | Living body detection model training method, living body detection device and electronic equipment |
CN116894242A (en) | Identification method and device of track verification code, electronic equipment and storage medium |
CN115482436B (en) | Training method and device for image screening model and image screening method |
CN114764874B (en) | Deep learning model training method, object recognition method and device |
CN114445711B (en) | Image detection method, image detection device, electronic equipment and storage medium |
CN113361455B (en) | Training method of face counterfeit identification model, related device and computer program product |
CN115273148A (en) | Pedestrian re-recognition model training method and device, electronic equipment and storage medium |
CN114912541A (en) | Classification method, classification device, electronic equipment and storage medium |
CN114120410A (en) | Method, apparatus, device, medium and product for generating label information |
CN114120180A (en) | Method, device, equipment and medium for generating time sequence nomination |
CN115809687A (en) | Training method and device for image processing network |
CN114677691B (en) | Text recognition method, device, electronic equipment and storage medium |
CN114140851B (en) | Image detection method and method for training image detection model |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |