CN117523630A - Face recognition method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN117523630A
CN117523630A
Authority
CN
China
Prior art keywords
image
face
target
pixel point
gray value
Prior art date
Legal status
Pending
Application number
CN202311474772.4A
Other languages
Chinese (zh)
Inventor
陈妍伶
黄菁
唐琳娜
Current Assignee
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd (ICBC)
Priority to CN202311474772.4A
Publication of CN117523630A

Classifications

    • G06V40/172 Classification, e.g. identification
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/36 Applying a local operator, i.e. means to operate on image points situated in the vicinity of a given point; Non-linear local filtering operations, e.g. median filtering
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06T2207/30201 Face

Abstract

The application discloses a face recognition method and device, a storage medium, and electronic equipment, relating to biometric recognition, financial technology, and other related technical fields. The method comprises the following steps: acquiring a first image corresponding to the face of a target object, wherein the first image comprises X pixel points and each pixel point corresponds to a gray value and a coordinate value; performing a first operation on the first image according to a face template to obtain a second image, wherein the first operation updates the coordinate value corresponding to each pixel point included in the first image according to the face template; performing a second operation on the second image to obtain a third image, wherein the second operation updates the gray value corresponding to each pixel point included in the second image into a first preset interval; and inputting the third image into a target model to obtain a recognition result. The method and the device solve the prior-art technical problem of low face recognition efficiency caused by excessively strong or excessively weak illumination conditions.

Description

Face recognition method and device, storage medium and electronic equipment
Technical Field
The present application relates to the fields of biometric identification, financial technology, and other related technical fields, and in particular to a face recognition method and device, a storage medium, and electronic equipment.
Background
With the development of biometric identification technology, face recognition is applied ever more widely in the financial technology field, for example in face payment and identity authentication for financial software. In practical applications, because the illumination conditions of the user's environment differ from one capture to the next, the scanned image corresponding to the user may turn out too dark or too bright. In the prior art, most face recognition devices focus only on the accuracy of feature extraction from the face image during recognition and omit the enhancement processing of the face image before feature extraction, which leads to low face recognition efficiency.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The application provides a face recognition method and device, a storage medium, and electronic equipment, which are used to solve at least the technical problem in the prior art of low face recognition efficiency caused by excessively strong or excessively weak illumination conditions.
According to one aspect of the present application, there is provided a face recognition method, including: acquiring a first image corresponding to a face of a target object, wherein the first image includes X pixel points, X is a positive integer, each pixel point corresponds to a gray value and a coordinate value, the gray value is used for representing the brightness of the pixel point, and the coordinate value is used for representing the position information of the pixel point; performing a first operation on the first image according to a face template to obtain a second image, wherein the face template is a face image generated according to N first face images, N is a positive integer, and the first operation is used for updating the coordinate value corresponding to each pixel point included in the first image according to the face template; performing a second operation on the second image to obtain a third image, wherein the second operation is used for updating the gray value corresponding to each pixel point included in the second image into a first preset interval; and inputting the third image into a target model to obtain a recognition result, wherein the target model is a neural network model obtained by training on the N first face images, the recognition result is used for representing the similarity between the third image and each of M second face images, M is a positive integer, and each second face image is a face image corresponding to an object that has acquired authorization from the target software.
Optionally, the face recognition method further includes: calculating an average value of image sizes corresponding to each first face image in the N first face images to obtain an image size of a face template, wherein the face template comprises X pixel points; acquiring image information of a target object to obtain an initial image corresponding to the target object, wherein the initial image is used for representing human body information and human face information corresponding to the target object; and dividing the initial image according to the image size of the face template to obtain a first image with the image size of X.
Optionally, the face recognition method further includes: extracting features of coordinate values corresponding to each pixel point in X pixel points included in the first image to obtain X first feature vectors; extracting features of coordinate values corresponding to each pixel point in X pixel points included in the face template to obtain X average feature vectors; determining a one-to-one correspondence between each first feature vector in the X first feature vectors and each average feature vector in the X average feature vectors to obtain X mapping relations; updating each first feature vector corresponding to the first image according to each average feature vector in the X average feature vectors and the X mapping relations to obtain a second feature vector corresponding to the first feature vector; and generating a second image according to the second characteristic vector corresponding to each of the X first characteristic vectors.
Optionally, the face recognition method further includes: determining a first gray value and a second gray value corresponding to the second image, wherein the first gray value is a gray value corresponding to a pixel with the smallest gray value in X pixels included in the second image, and the second gray value is a gray value corresponding to a pixel with the largest gray value in X pixels included in the second image; taking the difference value between the second gray value and the first gray value as a first difference value; determining a difference value between a gray value corresponding to each pixel point in the X pixel points included in the second image and the first gray value to obtain a second difference value corresponding to the pixel point; and determining a gray value corresponding to each pixel point in the X pixel points included in the third image according to a target ratio, wherein the target ratio is a ratio of a second difference value corresponding to each pixel point in the X pixel points included in the second image to the first difference value.
Optionally, the face recognition method further includes: generating a target image according to the third image, wherein the gray value corresponding to each pixel point in the X pixel points included in the target image is located within a second preset interval, and the second preset interval is a sub-interval of the first preset interval; extracting features of the coordinate value and the gray value corresponding to each pixel point in the X pixel points included in the target image to obtain a target feature vector corresponding to the pixel point, wherein the target feature vector is used for representing the coordinate value and the gray value of the pixel point corresponding to the target feature vector; extracting features of the coordinate value and the gray value corresponding to each pixel point in the X pixel points included in each second face image to obtain a third feature vector corresponding to the pixel point, wherein the third feature vector is used for representing the coordinate value and the gray value of the pixel point corresponding to the third feature vector; determining a target similarity between the second face image and the target image according to the distance between the target feature vector corresponding to each pixel point included in the target image and the third feature vector corresponding to each pixel point included in the second face image, wherein the target similarity is used for representing difference information between the target image and the second face image; and determining the recognition result according to the target similarity between each second face image in the M second face images and the target image.
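The per-pixel feature-vector comparison above can be sketched as follows. The patent does not fix a specific distance or a specific mapping from distance to similarity, so Euclidean distance mapped into (0, 1] is assumed here, and all names are illustrative:

```python
import numpy as np

def target_similarity(target_vecs: np.ndarray, face_vecs: np.ndarray) -> float:
    """Sketch of the target-similarity computation described above.

    target_vecs, face_vecs: (X, D) arrays, one D-dimensional feature vector
    per pixel point (encoding coordinate value and gray value). The mapping
    1 / (1 + mean distance) is an assumption: identical images give 1.0,
    and larger per-pixel distances give a similarity closer to 0.
    """
    dists = np.linalg.norm(target_vecs - face_vecs, axis=1)  # one distance per pixel point
    return float(1.0 / (1.0 + dists.mean()))

# Identical feature vectors yield the maximum similarity of 1.0.
v = np.ones((4, 3))
print(target_similarity(v, v))
```

The recognition result is then obtained by computing this similarity against each of the M second face images.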
Optionally, the face recognition method further includes: dividing X pixel points included in a third image to obtain P filter blocks, wherein each filter block in the P filter blocks comprises Y pixel points, and P and Y are positive integers; determining an average gray value corresponding to each filter block in the P filter blocks, wherein the average gray value corresponding to each filter block is an average value of gray values of each pixel point in Y pixel points included in the filter block; updating the gray value of each pixel point in each filtering block according to the average gray value corresponding to each filtering block to obtain the gray value corresponding to each pixel point in the X pixel points included in the fourth image; and performing a third operation on the fourth image to obtain a target image, wherein the third operation is used for updating the gray value corresponding to each pixel point included in the fourth image into a second preset interval.
Optionally, the face recognition method further includes: when at least one target similarity corresponding to the second face image is larger than or equal to a preset similarity, determining the identification result as a first result, wherein the first result is used for representing that a target object corresponding to the target image has acquired authorization of target software; and under the condition that the target similarity corresponding to each of the M second face images is smaller than the preset similarity, determining the identification result as a second result, wherein the second result is used for representing that the target object corresponding to the target image does not acquire the target software authorization.
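The threshold decision just described can be sketched as follows; the string labels and function name are illustrative, and the preset similarity is a free parameter:

```python
def recognition_result(similarities, preset_similarity):
    """First result (authorized) if the target similarity of at least one
    second face image reaches the preset similarity; second result
    (not authorized) if every similarity falls below it."""
    if any(s >= preset_similarity for s in similarities):
        return "authorized"      # first result: target object has target-software authorization
    return "not_authorized"      # second result: no second face image matched
```

For example, `recognition_result([0.2, 0.95], 0.9)` returns the first result, while `recognition_result([0.2, 0.5], 0.9)` returns the second.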
According to another aspect of the present application, there is also provided a face recognition apparatus, including: an acquisition unit, configured to acquire a first image corresponding to a face of a target object, wherein the first image includes X pixel points, X is a positive integer, each pixel point corresponds to a gray value and a coordinate value, the gray value is used for representing the brightness of the pixel point, and the coordinate value is used for representing the position information of the pixel point; a first operation unit, configured to perform a first operation on the first image according to a face template to obtain a second image, wherein the face template is a face image generated according to N first face images, N is a positive integer, and the first operation is used for updating the coordinate value corresponding to each pixel point included in the first image according to the face template; a second operation unit, configured to perform a second operation on the second image to obtain a third image, wherein the second operation is used for updating the gray value corresponding to each pixel point included in the second image into a first preset interval; and an input unit, configured to input the third image into a target model to obtain a recognition result, wherein the target model is a neural network model obtained by training on the N first face images, the recognition result is used for representing the similarity between the third image and each of M second face images, M is a positive integer, and each second face image is a face image corresponding to an object that has acquired authorization from the target software.
According to another aspect of the present application, there is also provided a computer-readable storage medium having a computer program stored therein, wherein, when the computer program runs, a device in which the computer-readable storage medium is located is controlled to execute the face recognition method of any one of the above.
According to another aspect of the present application, there is also provided an electronic device, wherein the electronic device includes one or more processors and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the face recognition method of any of the above.
In the application, a first image corresponding to the face of a target object is first acquired, wherein the first image comprises X pixel points, X is a positive integer, and each pixel point corresponds to a gray value representing its brightness and a coordinate value representing its position information. Secondly, a first operation is performed on the first image according to a face template to obtain a second image, wherein the face template is a face image generated according to N first face images, N is a positive integer, and the first operation updates the coordinate value corresponding to each pixel point included in the first image according to the face template. Then, a second operation is performed on the second image to obtain a third image, wherein the second operation updates the gray value corresponding to each pixel point included in the second image into a first preset interval. Finally, the third image is input into a target model to obtain a recognition result, wherein the target model is a neural network model obtained by training on the N first face images, the recognition result represents the similarity between the third image and each of M second face images, M is a positive integer, and each second face image is a face image corresponding to an object that has acquired authorization from the target software.
According to the method and the device, before face recognition is performed on the first image corresponding to the target object, the coordinate value corresponding to each pixel point included in the first image is first updated according to the face template to obtain the second image, thereby achieving face alignment of the first image. The gray value corresponding to each pixel point included in the second image is then updated into the first preset interval, which reduces the difference between the gray values of the X pixel points included in the second image and thus reduces the probability of excessively bright or excessively dark areas in the third image. Finally, face recognition is performed on the third image by using the target model, improving the efficiency of face recognition of the target object.
Therefore, the technical scheme of the application performs the first operation and the second operation on the first image corresponding to the target object, thereby aligning the face in the face image corresponding to the target object and reducing the gray value difference between the pixel points in the face image. This achieves the technical effect of improving face recognition efficiency and solves the prior-art technical problem of low face recognition efficiency caused by excessively strong or excessively weak illumination conditions.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is a flow chart of an alternative face recognition method according to an embodiment of the present application;
FIG. 2 is a flow chart of an alternative second image acquisition method according to an embodiment of the present application;
FIG. 3 is a flow chart of an alternative third image acquisition method according to an embodiment of the present application;
fig. 4 is a schematic diagram of an alternative face recognition device according to an embodiment of the present application;
fig. 5 is a schematic diagram of an alternative electronic device according to an embodiment of the present application.
Detailed Description
In order to make the solution of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of protection of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be further noted that, the related information (including the image information corresponding to the target object) and the data (including, but not limited to, the data for presentation and the data for analysis) related to the present application are both information and data authorized by the user or sufficiently authorized by each party. For example, an interface is provided between the system and the relevant user or institution, before acquiring the relevant information, the system needs to send an acquisition request to the user or institution through the interface, and acquire the relevant information after receiving the consent information fed back by the user or institution.
The present application is further illustrated below in conjunction with various embodiments.
Example 1
According to the embodiments of the present application, there is provided an embodiment of a face recognition method, it should be noted that the steps shown in the flowcharts of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that herein.
The present application provides a face recognition system for performing a face recognition method in the present application, and fig. 1 is a flowchart of an alternative face recognition method according to an embodiment of the present application, as shown in fig. 1, and the method includes the following steps:
step S101, a first image corresponding to a face of a target object is acquired.
In step S101, the first image includes X pixels, where X is a positive integer, each pixel corresponds to a gray value and a coordinate value, the gray value is used to represent brightness of the pixel, and the coordinate value is used to represent position information of the pixel.
Optionally, the face recognition system scans the target object, acquires image information corresponding to the target object, and obtains an initial image of the target object, wherein the image size of the initial image is Z, Z is a positive integer greater than or equal to X, then a face template is generated according to the N first face images, and the initial image is positioned and segmented according to the face template, so that a first image with the image size of X is obtained.
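The positioning-and-segmentation step can be sketched as follows. A simple center crop stands in here for the template-based face localisation (the patent locates the face via the face template first), and all names are illustrative:

```python
import numpy as np

def crop_to_template(initial: np.ndarray, th: int, tw: int) -> np.ndarray:
    """Cut a (th, tw) face region out of the larger initial image.

    initial: gray-scale initial image of size Z >= X pixels.
    th, tw: height and width of the face template (th * tw = X).
    A center crop is assumed as a placeholder for the actual face
    localisation against the template.
    """
    h, w = initial.shape
    top, left = (h - th) // 2, (w - tw) // 2
    return initial[top:top + th, left:left + tw]

# A 6x6 initial image cropped down to a 4x4 first image.
img = np.arange(36, dtype=float).reshape(6, 6)
face = crop_to_template(img, 4, 4)
print(face.shape)
```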
Step S102, performing a first operation on the first image according to the face template to obtain a second image.
In step S102, the face template is a face image generated according to N first face images, N is a positive integer, and the first operation is used for updating coordinate values corresponding to each pixel included in the first image according to the face template.
Optionally, the face recognition system first acquires L key pixel points corresponding to each first face image in the N first face images, wherein L is a positive integer smaller than or equal to N, and the key pixel points are used for representing the facial-feature information (e.g. eyes, nose, and mouth) of the object corresponding to the first face image. The system then aligns the L key pixel points corresponding to each of the N first face images, calculates a first average gray value corresponding to the X pixel points included in each of the N first face images, and finally generates the face template according to the first average gray value corresponding to the X pixel points included in each first face image.
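The pixel-wise averaging that produces the face template can be sketched as follows. This is a minimal illustration that assumes the N first face images are already key-point aligned and equally sized; the function name is an assumption:

```python
import numpy as np

def build_face_template(aligned_faces: np.ndarray) -> np.ndarray:
    """Generate a face template from N aligned first face images.

    aligned_faces: (N, H, W) gray-scale stack, key points already aligned.
    Each template pixel is the average gray value of that pixel position
    across the N images, as described in the embodiment above.
    """
    return aligned_faces.mean(axis=0)

# Two toy 2x2 "faces": the template is their pixel-wise average.
stack = np.stack([np.zeros((2, 2)), np.full((2, 2), 2.0)])
print(build_face_template(stack))
```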
Step S103, performing a second operation on the second image to obtain a third image.
In step S103, the second operation is used for updating the gray value corresponding to each pixel included in the second image to the first preset interval.
Optionally, the face recognition system determines a minimum gray value and a maximum gray value corresponding to X pixel points included in the second image, calculates a difference value between the maximum gray value and the minimum gray value to obtain a first difference value, calculates a difference value between the gray value corresponding to each pixel point in the X pixel points and the minimum gray value to obtain a second difference value corresponding to each pixel point, and finally determines a gray value corresponding to each pixel point in the X pixel points included in the third image according to a ratio of the second difference value corresponding to each pixel point to the first difference value.
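This second operation is, in effect, min-max normalization of the gray values. A minimal sketch, assuming the first preset interval is [0, 255] and that the second image contains at least two distinct gray values (so the first difference is non-zero); names and the default interval are illustrative:

```python
import numpy as np

def rescale_gray(img: np.ndarray, lo: float = 0.0, hi: float = 255.0) -> np.ndarray:
    """Map each gray value of the second image into the preset interval.

    target ratio = (g - g_min) / (g_max - g_min), i.e. the second
    difference of each pixel point divided by the first difference,
    exactly as in the step above.
    """
    g_min, g_max = img.min(), img.max()          # first and second gray values
    ratio = (img - g_min) / (g_max - g_min)      # target ratio per pixel point
    return lo + ratio * (hi - lo)

out = rescale_gray(np.array([[50.0, 100.0], [150.0, 250.0]]))
print(out)
```

After this step the third image spans the full preset interval regardless of how dark or bright the capture was, which is what reduces the influence of illumination on recognition.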
Step S104, inputting the third image into the target model to obtain a recognition result.
In step S104, the target model is a neural network model obtained by training N first face images, the recognition result is used to represent the similarity between the third image and each of M second face images, M is a positive integer, and each of the second face images is a face image corresponding to the object authorized by the acquired target software.
Optionally, the face recognition system divides the X pixel points included in the third image through the target model to obtain P filtering blocks, then updates the gray value of each pixel point in each filtering block according to the average gray value corresponding to each filtering block, generates a fourth image according to the gray value corresponding to each updated pixel point, then performs a third operation on the fourth image to obtain the target image, and finally performs face recognition on the target object to obtain the recognition result.
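The filter-block smoothing described above can be sketched as follows, assuming non-overlapping square blocks that tile the image exactly; the block size and function name are illustrative:

```python
import numpy as np

def block_mean_filter(img: np.ndarray, block: int) -> np.ndarray:
    """Replace each pixel's gray value with the average gray value of its
    filter block, producing the fourth image. Assumes block divides both
    image dimensions evenly (P non-overlapping blocks of Y = block * block
    pixel points each)."""
    h, w = img.shape
    out = img.copy()
    for r in range(0, h, block):
        for c in range(0, w, block):
            out[r:r + block, c:c + block] = img[r:r + block, c:c + block].mean()
    return out

# One 2x2 filter block: every pixel becomes the block average.
smoothed = block_mean_filter(np.array([[0.0, 2.0], [4.0, 6.0]]), 2)
print(smoothed)
```

The third operation then rescales the fourth image's gray values into the second preset interval, analogous to the second operation but over a sub-interval.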
According to the method and the device, before face recognition is performed on the first image corresponding to the target object, the coordinate value corresponding to each pixel point included in the first image is first updated according to the face template to obtain the second image, thereby achieving face alignment of the first image. The gray value corresponding to each pixel point included in the second image is then updated into the first preset interval, which reduces the difference between the gray values of the X pixel points included in the second image and thus reduces the probability of excessively bright or excessively dark areas in the third image. Finally, face recognition is performed on the third image by using the target model, improving the efficiency of face recognition of the target object.
Therefore, the technical scheme of the application performs the first operation and the second operation on the first image corresponding to the target object, thereby aligning the face in the face image corresponding to the target object and reducing the gray value difference between the pixel points in the face image. This achieves the technical effect of improving face recognition efficiency and solves the prior-art technical problem of low face recognition efficiency caused by excessively strong or excessively weak illumination conditions.
In an alternative embodiment, the face recognition system first calculates an average value of image sizes corresponding to each first face image in the N first face images to obtain an image size of a face template, wherein the face template comprises X pixel points, then acquires image information of a target object to obtain an initial image corresponding to the target object, wherein the initial image is used for representing human body information and face information corresponding to the target object, and then segments the initial image according to the image sizes of the face templates to obtain a first image with the image size of X.
Optionally, the initial image of the target object acquired by the face recognition system includes both human body information and face information, but face recognition does not need the human body information of the target object. The face recognition system therefore performs face positioning and image segmentation on the initial image according to the face template, thereby removing redundant information that may interfere with the subsequent face recognition.
In an alternative embodiment, fig. 2 is a flowchart of an alternative second image acquisition method according to an embodiment of the present application, as shown in fig. 2, including the steps of:
In step S201, feature extraction is performed on coordinate values corresponding to each of X pixels included in the first image, so as to obtain X first feature vectors.
Step S202, extracting features of coordinate values corresponding to each pixel point in the X pixel points included in the face template to obtain X average feature vectors.
In step S203, a one-to-one correspondence between each of the X first feature vectors and each of the X average feature vectors is determined, so as to obtain X mapping relationships.
Step S204, updating each first feature vector corresponding to the first image according to each average feature vector in the X average feature vectors and the X mapping relations to obtain a second feature vector corresponding to the first feature vector.
In step S205, a second image is generated according to the second feature vector corresponding to each of the X first feature vectors.
Optionally, compared with the first image, the face position and the face angle in the second image obtained after the first operation are relatively fixed, so that the interference of the difference of the face position and the face angle on face recognition is reduced, and the accuracy of face recognition on the target object is improved.
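The first operation above maps each pixel's coordinate feature toward the template's average feature. One common way to realize such an alignment is a least-squares affine transform from detected landmark coordinates to the template's average landmark coordinates; the sketch below assumes hypothetical landmark positions and is not the patented mapping itself.

```python
import numpy as np

def align_to_template(landmarks, template):
    """Least-squares affine map sending image landmark coordinates onto template coordinates."""
    # Homogeneous coordinates [x, y, 1] for each landmark row
    A = np.hstack([landmarks, np.ones((len(landmarks), 1))])
    # Solve A @ T ~= template for the 3x2 affine matrix T
    T, *_ = np.linalg.lstsq(A, template, rcond=None)
    return T

src = np.array([[30.0, 40.0], [70.0, 40.0], [50.0, 80.0]])  # detected eye/eye/mouth (hypothetical)
dst = np.array([[25.0, 35.0], [75.0, 35.0], [50.0, 85.0]])  # template average positions
T = align_to_template(src, dst)
aligned = np.hstack([src, np.ones((3, 1))]) @ T
print(np.allclose(aligned, dst))  # True: three point pairs fit an affine map exactly
```

With more than three landmark pairs the solution becomes a genuine least-squares fit rather than an exact interpolation.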
In an alternative embodiment, fig. 3 is a flowchart of an alternative third image acquisition method according to an embodiment of the present application, as shown in fig. 3, including the steps of:
step S301, determining a first gray value and a second gray value corresponding to the second image.
In step S301, the first gray value is a gray value corresponding to a pixel having the smallest gray value among the X pixels included in the second image, and the second gray value is a gray value corresponding to a pixel having the largest gray value among the X pixels included in the second image.
In step S302, the difference between the second gray level value and the first gray level value is taken as the first difference.
Step S303, determining a difference value between the gray value corresponding to each pixel point in the X pixel points included in the second image and the first gray value, to obtain a second difference value corresponding to the pixel point.
Step S304, determining a gray value corresponding to each pixel point in the X pixel points included in the third image according to the target ratio.
In step S304, the target ratio is a ratio of the second difference value corresponding to each of the X pixels included in the second image to the first difference value.
Optionally, compared with the second image, the difference between gray values corresponding to each pixel point in the X pixel points in the third image obtained after the second operation is smaller, so that the probability of an excessively bright or excessively dark area in the third image is reduced, and the efficiency of face recognition of the target object is improved.
Optionally, the calculation formula of the target ratio is shown in the following formula (1):

I(i, j) = (I_o(i, j) - I_min) / (I_max - I_min)  (1)

In the above formula (1), I_o(i, j) is used to represent the original gray value corresponding to each pixel point, (i, j) is used to represent the coordinate value of the pixel point, I_min is used to represent the gray value of the pixel point with the smallest gray value, I_max is used to represent the gray value of the pixel point with the largest gray value, and I(i, j) is used to represent the target ratio corresponding to the pixel point.
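Formula (1) is a standard min-max gray stretch. The sketch below implements it with NumPy; the `low`/`high` bounds stand in for the first preset interval, whose concrete endpoints the embodiment leaves unspecified.

```python
import numpy as np

def minmax_stretch(image, low=0.0, high=1.0):
    """Formula (1): I(i,j) = (I_o(i,j) - I_min) / (I_max - I_min), rescaled to [low, high]."""
    i_min, i_max = image.min(), image.max()
    if i_max == i_min:  # flat image: avoid division by zero
        return np.full_like(image, low, dtype=np.float64)
    ratio = (image.astype(np.float64) - i_min) / (i_max - i_min)
    return low + ratio * (high - low)

second_image = np.array([[40, 120], [200, 80]], dtype=np.uint8)
third_image = minmax_stretch(second_image)
print(third_image.min(), third_image.max())  # 0.0 1.0
```

Because every output gray value is pinned inside `[low, high]`, extreme bright and dark regions are compressed into the preset interval, which is the stated goal of the second operation.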
In an alternative embodiment, the face recognition system first generates a target image according to the third image, wherein the gray value corresponding to each pixel point in the X pixel points included in the target image is located within a second preset interval, and the second preset interval is a subinterval of the first preset interval. Secondly, feature extraction is performed on the coordinate value and the gray value corresponding to each pixel point in the X pixel points included in the target image to obtain a target feature vector corresponding to the pixel point, wherein the target feature vector is used for representing the coordinate value and the gray value of the pixel point corresponding to the target feature vector. Then feature extraction is performed on the coordinate value and the gray value corresponding to each pixel point in the X pixel points included in each second face image to obtain a third feature vector corresponding to the pixel point, wherein the third feature vector is used for representing the coordinate value and the gray value of the pixel point corresponding to the third feature vector. Finally, the distance between the target feature vector corresponding to each pixel point included in the target image and the corresponding third feature vector in the second face image is determined, and the target similarity between the target image and the second face image is determined according to these distances.
Optionally, when determining the target similarity between a second face image and the target image, the face recognition system first determines a one-to-one correspondence between the target feature vector corresponding to each pixel point included in the target image and the third feature vector corresponding to each pixel point included in the second face image. It then determines the distance between each target feature vector and its corresponding third feature vector according to this correspondence, determines the sub-similarity of the corresponding pixel point according to that distance, and finally determines the target similarity between the second face image and the target image according to the X sub-similarities corresponding to the target image.
Optionally, the calculation formula of the target similarity is shown in the following formula (2):
in the above formula (2), p is used to characterize a target feature vector, and p' is used to characterize a third feature vector.
In an alternative embodiment, the face recognition system first divides the X pixel points included in the third image to obtain P filter blocks, wherein each filter block in the P filter blocks includes Y pixel points, P and Y are positive integers, and X = Y × P. Secondly, it determines an average gray value corresponding to each filter block in the P filter blocks, wherein the average gray value corresponding to each filter block is the average value of the gray values of the Y pixel points included in the filter block. Then the gray value of each pixel point in each filter block is updated according to the average gray value corresponding to the filter block, to obtain the gray value corresponding to each pixel point in the X pixel points included in the fourth image. Finally, a third operation is performed on the fourth image to obtain the target image, wherein the third operation is used for updating the gray value corresponding to each pixel point included in the fourth image into the second preset interval.
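The division into P filter blocks of Y pixel points each, followed by per-block averaging, can be sketched as follows for a square image with square, non-overlapping blocks (a simplifying assumption; the embodiment only requires X = Y × P):

```python
import numpy as np

def block_means(image, block):
    """Split an HxW image into non-overlapping block x block filter blocks and average each."""
    h, w = image.shape
    # Reshape so axes 1 and 3 run over the pixels inside one filter block
    blocks = image.reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3))  # one average gray value per filter block

third_image = np.array([[0, 2, 10, 10],
                        [2, 0, 10, 10],
                        [4, 4, 20, 20],
                        [4, 4, 20, 20]], dtype=np.float64)
print(block_means(third_image, 2))  # [[ 1. 10.] [ 4. 20.]]
```

Here X = 16, Y = 4, and P = 4, so X = Y × P holds; each of the four filter blocks contributes one average gray value.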
Optionally, in updating the gray value of each pixel point in each filter block according to the average gray value corresponding to the filter block, the formula used is shown in the following formula (3):
In the above formula (3), W(i, j) is used to represent the gray value corresponding to one pixel point in the fourth image, one coefficient in the formula is an offset coefficient, β is a feedback coefficient, m_X is used to represent the average value of the gray values corresponding to the X pixel points in the third image, m_Y is used to represent the average value of the gray values corresponding to the Y pixel points in the filtering block, t is the iteration number, and the initial values of t and β are both 1.
Optionally, in the process of performing the third operation on the fourth image, the face recognition system first obtains Q abnormal pixels in the X pixels included in the fourth image, where a gray value corresponding to the abnormal pixel is greater than 1, that is, W (i, j) >1, and then updates the feedback coefficient β according to the following formula (4), so as to achieve the purpose of updating the gray value corresponding to each pixel in the Q abnormal pixels.
β=-M×(0.1×t) (4)
In the above formula (4), M is a decision value: M is 1 when the pixel intensity value B of the adjacent region of the abnormal pixel point falls in the first interval [0, 50], M is 1/2 when B falls in the second interval (50, 200], and M is -1 when B is greater than 200, wherein the calculation formula of the pixel intensity value B is shown in the following formula (5):
In the above formula (5), the area size corresponding to the adjacent area of the abnormal pixel point is n×n, that is, n×n pixel points are included in the adjacent area of the abnormal pixel point.
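Formula (4) together with the decision value M can be sketched directly. Treating B = 50 as belonging to the first interval is an assumption made here for a well-defined rule, since the two intervals share that boundary as originally written; `neighborhood_intensity` stands in for the value B computed by formula (5).

```python
def update_beta(t, neighborhood_intensity):
    """Formula (4): beta = -M * (0.1 * t), with M chosen from the neighborhood intensity B."""
    if neighborhood_intensity <= 50:     # B in the first interval [0, 50] (boundary assumed)
        m = 1.0
    elif neighborhood_intensity <= 200:  # B in the second interval (50, 200]
        m = 0.5
    else:                                # B greater than 200
        m = -1.0
    return -m * (0.1 * t)

print(update_beta(1, 30))       # -0.1
print(update_beta(3, 220) > 0)  # True: beta flips sign for very bright neighborhoods
```

The sign flip for bright neighborhoods pushes the abnormal gray value in the opposite direction on the next iteration, which matches the stated goal of pulling W(i, j) back below 1.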
In an alternative embodiment, when at least one second face image has a target similarity greater than or equal to a preset similarity, the face recognition system determines that the recognition result is a first result, wherein the first result is used for representing that the target object corresponding to the target image has acquired authorization of the target software. When the target similarity corresponding to each second face image in the M second face images is less than the preset similarity, the face recognition system determines that the recognition result is a second result, wherein the second result is used for representing that the target object corresponding to the target image has not acquired authorization of the target software.
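This decision rule reduces to a threshold test over the M target similarities. A minimal sketch (the 0.8 threshold and the result strings are illustrative placeholders, not values fixed by the embodiment):

```python
def recognition_result(similarities, threshold):
    """First result iff at least one registered face reaches the preset similarity."""
    if any(s >= threshold for s in similarities):
        return "first result: target object is authorized"
    return "second result: target object is not authorized"

print(recognition_result([0.42, 0.91, 0.37], 0.8))
print(recognition_result([0.42, 0.55], 0.8))
```

Because a single match above the threshold suffices, the check can short-circuit as soon as one authorized face image matches.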
In the present application, a first image corresponding to the face of a target object is first acquired, wherein the first image includes X pixel points, X is a positive integer, each pixel point corresponds to a gray value and a coordinate value, the gray value is used for representing the brightness of the pixel point, and the coordinate value is used for representing the position information of the pixel point. Secondly, a first operation is performed on the first image according to a face template to obtain a second image, wherein the face template is a face image generated according to N first face images, N is a positive integer, and the first operation is used for updating the coordinate value corresponding to each pixel point included in the first image according to the face template. Then a second operation is performed on the second image to obtain a third image, wherein the second operation is used for updating the gray value corresponding to each pixel point included in the second image into a first preset interval. Finally, the third image is input into a target model to obtain a recognition result, wherein the target model is a neural network model obtained by training with the N first face images, the recognition result is used for representing the similarity between the third image and each of M second face images, M is a positive integer, and each of the M second face images is a face image corresponding to an object that has acquired authorization of the target software.
Example 2
According to an embodiment of the present application, an embodiment of a face recognition device is provided. Fig. 4 is a schematic diagram of an alternative face recognition device according to an embodiment of the present application, as shown in fig. 4, the face recognition device includes: an acquisition unit 401, a first operation unit 402, a second operation unit 403, and an input unit 404.
Optionally, the acquiring unit is configured to acquire a first image corresponding to the face of the target object, wherein the first image includes X pixel points, X is a positive integer, each pixel point corresponds to a gray value and a coordinate value, the gray value is used for representing the brightness of the pixel point, and the coordinate value is used for representing the position information of the pixel point. The first operation unit is configured to perform a first operation on the first image according to a face template to obtain a second image, wherein the face template is a face image generated according to N first face images, N is a positive integer, and the first operation is used for updating the coordinate value corresponding to each pixel point included in the first image according to the face template. The second operation unit is configured to perform a second operation on the second image to obtain a third image, wherein the second operation is used for updating the gray value corresponding to each pixel point included in the second image into a first preset interval. The input unit is configured to input the third image into a target model to obtain a recognition result, wherein the target model is a neural network model obtained by training with the N first face images, the recognition result is used for representing the similarity between the third image and each of M second face images, M is a positive integer, and each of the M second face images is a face image corresponding to an object that has acquired authorization of the target software.
In an alternative embodiment, the acquisition unit further comprises: the device comprises a first computing subunit, a collecting subunit and a dividing subunit.
Optionally, the first computing subunit is configured to compute an average value of image sizes corresponding to each first face image in the N first face images to obtain an image size of the face template, where the face template includes X pixels, the collecting subunit is configured to collect image information of the target object to obtain an initial image corresponding to the target object, where the initial image is used to represent human body information and face information corresponding to the target object, and the segmentation subunit is configured to segment the initial image according to the image size of the face template to obtain a first image with an image size X.
In an alternative embodiment, the first operation unit further comprises: the device comprises a first extraction subunit, a second extraction subunit, a first determination subunit, a first update subunit and a first generation subunit.
Optionally, the first extracting subunit is configured to perform feature extraction on the coordinate values corresponding to each pixel point in the X pixel points included in the first image to obtain X first feature vectors. The second extracting subunit is configured to perform feature extraction on the coordinate values corresponding to each pixel point in the X pixel points included in the face template to obtain X average feature vectors. The first determining subunit is configured to determine a one-to-one correspondence between each first feature vector in the X first feature vectors and each average feature vector in the X average feature vectors to obtain X mapping relationships. The first updating subunit is configured to update each first feature vector corresponding to the first image according to each average feature vector in the X average feature vectors and the X mapping relationships to obtain a second feature vector corresponding to the first feature vector. The first generating subunit is configured to generate the second image according to the second feature vectors corresponding to the X first feature vectors.
In an alternative embodiment, the second operation unit further comprises: the second determining subunit, the third determining subunit, the fourth determining subunit, and the fifth determining subunit.
Optionally, the second determining subunit is configured to determine a first gray value and a second gray value corresponding to the second image, wherein the first gray value is the gray value of the pixel point with the smallest gray value among the X pixel points included in the second image, and the second gray value is the gray value of the pixel point with the largest gray value among the X pixel points included in the second image. The third determining subunit is configured to take the difference between the second gray value and the first gray value as a first difference. The fourth determining subunit is configured to determine the difference between the gray value corresponding to each pixel point in the X pixel points included in the second image and the first gray value, to obtain a second difference corresponding to the pixel point. The fifth determining subunit is configured to determine the gray value corresponding to each pixel point in the X pixel points included in the third image according to a target ratio, wherein the target ratio is the ratio of the second difference corresponding to each pixel point in the X pixel points included in the second image to the first difference.
In an alternative embodiment, the input unit further comprises: the second generation subunit, the third extraction subunit, the fourth extraction subunit, the sixth determination subunit, and the seventh determination subunit.
Optionally, the second generating subunit is configured to generate a target image according to the third image, wherein the gray value corresponding to each pixel point in the X pixel points included in the target image is located within a second preset interval, and the second preset interval is a subinterval of the first preset interval. The third extracting subunit is configured to perform feature extraction on the coordinate value and the gray value corresponding to each pixel point in the X pixel points included in the target image to obtain a target feature vector corresponding to the pixel point, wherein the target feature vector is used for representing the coordinate value and the gray value of the pixel point corresponding to the target feature vector. The fourth extracting subunit is configured to perform feature extraction on the coordinate value and the gray value corresponding to each pixel point in the X pixel points included in each second face image to obtain a third feature vector corresponding to the pixel point, wherein the third feature vector is used for representing the coordinate value and the gray value of the pixel point corresponding to the third feature vector. The sixth determining subunit is configured to determine the distance between the target feature vector corresponding to each pixel point included in the target image and the corresponding third feature vector in the second face image. The seventh determining subunit is configured to determine the target similarity between the target image and the second face image according to the determined distances.
In an alternative embodiment, the second generation subunit further comprises: the device comprises a dividing module, a first determining module, an updating module and an operating module.
Optionally, the dividing module is configured to divide the X pixel points included in the third image to obtain P filter blocks, wherein each filter block in the P filter blocks includes Y pixel points, and P and Y are positive integers. The first determining module is configured to determine an average gray value corresponding to each filter block in the P filter blocks, wherein the average gray value corresponding to each filter block is the average value of the gray values of the Y pixel points included in the filter block. The updating module is configured to update the gray value of each pixel point in each filter block according to the average gray value corresponding to the filter block, to obtain the gray value corresponding to each pixel point in the X pixel points included in the fourth image. The operating module is configured to perform a third operation on the fourth image to obtain the target image, wherein the third operation is used for updating the gray value corresponding to each pixel point included in the fourth image into the second preset interval.
In an alternative embodiment, the seventh determining subunit further comprises: a second determination module and a third determination module.
Optionally, the second determining module is configured to determine that the recognition result is a first result when there is at least one target similarity corresponding to the second face image that is greater than or equal to a preset similarity, where the first result is used to characterize that the target object corresponding to the target image has acquired authorization of the target software, and the third determining module is configured to determine that the recognition result is a second result when the target similarity corresponding to each of the M second face images is less than the preset similarity, where the second result is used to characterize that the target object corresponding to the target image has not acquired authorization of the target software.
Example 3
According to another aspect of the embodiments of the present application, there is also provided a computer readable storage medium, wherein the computer readable storage medium includes a stored computer program, and the computer program, when executed, controls a device in which the computer readable storage medium is located to perform the face recognition method of any one of the foregoing embodiments of Embodiment 1.
Example 4
According to another aspect of the embodiments of the present application, there is also provided an electronic device, including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the face recognition method of any one of the foregoing embodiments of Embodiment 1 via execution of the executable instructions.
Fig. 5 is a schematic diagram of an alternative electronic device according to an embodiment of the present application. As shown in fig. 5, the embodiment of the present application provides an electronic device including a processor, a memory, and a program stored on the memory and executable on the processor, and the processor implements the face recognition method of any one of the foregoing embodiments of Embodiment 1 when executing the program.
The foregoing embodiment numbers of the present application are merely for description and do not represent the relative merits of the embodiments.
In the foregoing embodiments of the present application, each embodiment has its own emphasis; for any portion not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (10)

1. A face recognition method, comprising:
acquiring a first image corresponding to a face of a target object, wherein the first image comprises X pixel points, X is a positive integer, each pixel point corresponds to a gray value and a coordinate value, the gray value is used for representing the brightness of the pixel point, and the coordinate value is used for representing the position information of the pixel point;
performing a first operation on the first image according to a face template to obtain a second image, wherein the face template is a face image generated according to N first face images, N is a positive integer, and the first operation is used for updating coordinate values corresponding to each pixel point included in the first image according to the face template;
performing a second operation on the second image to obtain a third image, wherein the second operation is used for updating the gray value corresponding to each pixel point included in the second image into a first preset interval;
and inputting the third image into a target model to obtain a recognition result, wherein the target model is a neural network model obtained by training on the N first face images, the recognition result is used for representing the similarity between the third image and each of M second face images, M is a positive integer, and each of the M second face images is a face image corresponding to an object that has obtained authorization for target software.
2. The face recognition method according to claim 1, wherein acquiring a first image corresponding to a face of a target object includes:
calculating an average value of image sizes corresponding to each first face image in the N first face images to obtain an image size of the face template, wherein the face template comprises X pixel points;
acquiring image information of the target object to obtain an initial image corresponding to the target object, wherein the initial image is used for representing human body information and human face information corresponding to the target object;
and dividing the initial image according to the image size of the face template to obtain a first image comprising the X pixel points.
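For illustration only (none of these names appear in the claims), the sizing-and-cropping of claim 2 can be sketched as averaging the N training-image sizes and then cutting the captured image down to that size; center cropping is an assumption, since the claim does not say where the face region lies within the initial image:

```python
def template_size(sizes):
    """Average (height, width) over the N first face images, rounded to integers."""
    n = len(sizes)
    return (round(sum(h for h, _ in sizes) / n),
            round(sum(w for _, w in sizes) / n))

def center_crop(image, th, tw):
    """Crop a list-of-rows image to th x tw around its center,
    yielding a first image with the template's pixel count."""
    h, w = len(image), len(image[0])
    y0, x0 = (h - th) // 2, (w - tw) // 2
    return [row[x0:x0 + tw] for row in image[y0:y0 + th]]
```

Any face-detection step that locates the face before cropping is outside this sketch.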
3. The face recognition method according to claim 2, wherein performing a first operation on the first image according to a face template to obtain a second image includes:
extracting features of coordinate values corresponding to each pixel point in X pixel points included in the first image to obtain X first feature vectors;
extracting features of coordinate values corresponding to each pixel point in X pixel points included in the face template to obtain X average feature vectors;
determining a one-to-one correspondence between each first feature vector of the X first feature vectors and each average feature vector of the X average feature vectors to obtain X mapping relations;
updating each first feature vector corresponding to the first image according to each average feature vector in the X average feature vectors and the X mapping relations to obtain a second feature vector corresponding to the first feature vector;
and generating a second image according to the second characteristic vector corresponding to each of the X first characteristic vectors.
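The coordinate alignment of claim 3 maps each pixel's feature vector onto the template's average feature vectors. The claim requires a one-to-one correspondence; the sketch below approximates it with a simple nearest-neighbour match, and both that choice and the 2-D coordinate-only features are assumptions, not part of the claim:

```python
import math

def align_to_template(pixel_vecs, template_vecs):
    """Map each image feature vector to its nearest template vector and
    adopt the template's coordinates (the 'second feature vector' of claim 3)."""
    aligned = []
    for v in pixel_vecs:
        # nearest average feature vector under Euclidean distance
        nearest = min(template_vecs, key=lambda t: math.dist(v, t))
        aligned.append(nearest)
    return aligned
```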
4. The face recognition method according to claim 1, wherein performing the second operation on the second image to obtain a third image includes:
determining a first gray value and a second gray value corresponding to the second image, wherein the first gray value is a gray value corresponding to a pixel with the smallest gray value in X pixel points included in the second image, and the second gray value is a gray value corresponding to a pixel with the largest gray value in X pixel points included in the second image;
taking the difference value between the second gray value and the first gray value as a first difference value;
determining a difference value between the gray value corresponding to each pixel point in the X pixel points included in the second image and the first gray value to obtain a second difference value corresponding to the pixel point;
and determining a gray value corresponding to each pixel point in the X pixel points included in the third image according to a target ratio, wherein the target ratio is a ratio of a second difference value corresponding to each pixel point in the X pixel points included in the second image to the first difference value.
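The gray-value rescaling of claim 4 is, in effect, min-max normalization: each pixel's offset from the smallest gray value (the second difference) is divided by the overall range (the first difference). A minimal sketch, with the target interval `[low, high]` as an illustrative stand-in for the first preset interval:

```python
def minmax_normalize(gray_values, low=0.0, high=1.0):
    """Rescale gray values into [low, high] via the claim-4 target ratio:
    (g - g_min) / (g_max - g_min), i.e. second difference / first difference."""
    g_min = min(gray_values)      # first gray value (smallest in the image)
    g_max = max(gray_values)      # second gray value (largest in the image)
    first_diff = g_max - g_min
    if first_diff == 0:           # flat image: map everything to `low`
        return [low] * len(gray_values)
    return [low + (high - low) * (g - g_min) / first_diff for g in gray_values]
```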
5. The face recognition method according to claim 1, wherein inputting the third image into the target model to obtain the recognition result includes:
generating a target image according to the third image, wherein the gray value corresponding to each pixel point in the X pixel points included in the target image is located within a second preset interval, and the second preset interval is a sub-interval of the first preset interval;
extracting features of coordinate values and gray values corresponding to each pixel point in X pixel points included in the target image to obtain a target feature vector corresponding to the pixel point, wherein the target feature vector is used for representing the coordinate values and the gray values of the pixel points corresponding to the target feature vector;
extracting features of coordinate values and gray values corresponding to each pixel point in X pixel points included in each second face image to obtain a third feature vector corresponding to the pixel point, wherein the third feature vector is used for representing the coordinate values and the gray values of the pixel points corresponding to the third feature vector;
determining target similarity between the second face image and the target image according to the distance between a target feature vector corresponding to each pixel point included in the target image and a third feature vector corresponding to each pixel point included in the second face image, wherein the target similarity is used for representing difference information between the target image and the second face image;
and determining the recognition result according to the target similarity between each second face image in the M second face images and the target image.
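Claim 5 scores similarity from distances between per-pixel feature vectors of the probe image and each enrolled second face image. The claim names no particular distance; the sketch below uses Euclidean distance and maps the mean distance into a similarity in (0, 1], a mapping that is an assumption for illustration:

```python
import math

def target_similarity(probe_features, enrolled_features):
    """probe_features / enrolled_features: per-pixel feature vectors, each
    carrying coordinate and gray information as in claim 5.
    Returns a similarity in (0, 1]; 1.0 means identical feature sets."""
    total = 0.0
    for p, e in zip(probe_features, enrolled_features):
        total += math.dist(p, e)            # per-pixel Euclidean distance
    mean_dist = total / len(probe_features)
    return 1.0 / (1.0 + mean_dist)          # smaller distance -> higher similarity
```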
6. The face recognition method of claim 5, wherein generating a target image from the third image comprises:
dividing X pixel points included in the third image to obtain P filter blocks, wherein each filter block in the P filter blocks comprises Y pixel points, and P and Y are positive integers;
determining an average gray value corresponding to each filtering block in the P filtering blocks, wherein the average gray value corresponding to each filtering block is an average value of gray values of each pixel point in Y pixel points included in the filtering block;
updating the gray value of each pixel point in the filtering block according to the average gray value corresponding to each filtering block to obtain the gray value corresponding to each pixel point in the X pixel points included in the fourth image;
and performing a third operation on the fourth image to obtain a target image, wherein the third operation is used for updating the gray value corresponding to each pixel point included in the fourth image into a second preset interval.
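The filtering of claim 6 is block-wise mean (box) filtering: partition the X pixels into P blocks of Y pixels each and replace every gray value with its block's average. A sketch assuming non-overlapping square blocks whose side divides the image dimensions; the claim fixes neither the block shape nor the layout:

```python
def block_mean_filter(image, block):
    """Replace every pixel in each block x block tile with the tile's mean
    gray value. `image` is a list of rows; both dimensions are assumed to be
    divisible by `block`."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = [image[y][x] for y in range(by, by + block)
                                for x in range(bx, bx + block)]
            mean = sum(tile) / len(tile)   # average gray value of the block
            for y in range(by, by + block):
                for x in range(bx, bx + block):
                    out[y][x] = mean
    return out
```

The third operation of claim 6 would then rescale the result into the second preset interval, e.g. with the same min-max step sketched for claim 4.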
7. The face recognition method of claim 5, wherein determining the recognition result according to the target similarity between each of the M second face images and the target image comprises:
determining the recognition result as a first result when the target similarity corresponding to at least one second face image is greater than or equal to a preset similarity, wherein the first result is used for representing that a target object corresponding to the target image has obtained authorization for the target software;
and determining the recognition result as a second result when the target similarity corresponding to each of the M second face images is smaller than the preset similarity, wherein the second result is used for representing that the target object corresponding to the target image has not obtained authorization for the target software.
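The decision rule of claim 7 reduces to a threshold test over the M target similarities: the first result if any score reaches the preset similarity, the second result otherwise. A sketch in which the threshold value and the result labels are illustrative:

```python
def recognition_result(similarities, preset=0.8):
    """First result (authorized) if at least one enrolled image matches at or
    above `preset`; second result (not authorized) otherwise, per claim 7."""
    authorized = any(s >= preset for s in similarities)
    return "authorized" if authorized else "not_authorized"
```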
8. A face recognition device, comprising:
an acquisition unit, configured to acquire a first image corresponding to a face of a target object, wherein the first image comprises X pixel points, X is a positive integer, each pixel point corresponds to a gray value and a coordinate value, the gray value is used for representing the brightness of the pixel point, and the coordinate value is used for representing the position information of the pixel point;
a first operation unit, configured to perform a first operation on the first image according to a face template to obtain a second image, wherein the face template is a face image generated according to N first face images, N is a positive integer, and the first operation is used for updating coordinate values corresponding to each pixel point included in the first image according to the face template;
a second operation unit, configured to perform a second operation on the second image to obtain a third image, where the second operation is configured to update a gray value corresponding to each pixel point included in the second image to a first preset interval;
an input unit, configured to input the third image into a target model to obtain a recognition result, wherein the target model is a neural network model obtained by training on the N first face images, the recognition result is used for representing the similarity between the third image and each of M second face images, M is a positive integer, and each of the M second face images is a face image corresponding to an object that has obtained authorization for target software.
9. A computer readable storage medium, wherein a computer program is stored in the computer readable storage medium, and wherein, when executed, the computer program controls a device in which the computer readable storage medium is located to perform the face recognition method according to any one of claims 1 to 7.
10. An electronic device comprising one or more processors and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the face recognition method of any of claims 1-7.
CN202311474772.4A 2023-11-07 2023-11-07 Face recognition method and device, storage medium and electronic equipment Pending CN117523630A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311474772.4A CN117523630A (en) 2023-11-07 2023-11-07 Face recognition method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN117523630A true CN117523630A (en) 2024-02-06

Family

ID=89756077

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311474772.4A Pending CN117523630A (en) 2023-11-07 2023-11-07 Face recognition method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN117523630A (en)

Similar Documents

Publication Publication Date Title
CN108509915B (en) Method and device for generating face recognition model
CN109859227B (en) Method and device for detecting flip image, computer equipment and storage medium
CN108229419B (en) Method and apparatus for clustering images
JP5506785B2 (en) Fingerprint representation using gradient histogram
US9025889B2 (en) Method, apparatus and computer program product for providing pattern detection with unknown noise levels
CN110969046B (en) Face recognition method, face recognition device and computer-readable storage medium
CN108241855B (en) Image generation method and device
CN112967207A (en) Image processing method and device, electronic equipment and storage medium
CN112085721A (en) Damage assessment method, device and equipment for flooded vehicle based on artificial intelligence and storage medium
US10521918B2 (en) Method and device for filtering texture, using patch shift
CN111898408B (en) Quick face recognition method and device
CN110070017B (en) Method and device for generating human face artificial eye image
Ravi et al. Forensic analysis of linear and nonlinear image filtering using quantization noise
CN117523630A (en) Face recognition method and device, storage medium and electronic equipment
CN110633647A (en) Living body detection method and device
KR20070059607A (en) Method for verifying iris using cpa(change point analysis) based on cumulative sum and apparatus thereof
CN111971951A (en) Arithmetic device, arithmetic method, program, and authentication system
CN115273123A (en) Bill identification method, device and equipment and computer storage medium
CN116152542A (en) Training method, device, equipment and storage medium for image classification model
CN111753723B (en) Fingerprint identification method and device based on density calibration
CN112200004B (en) Training method and device for image detection model and terminal equipment
CN114329024A (en) Icon searching method and system
CN116363019B (en) Image data enhancement method, system and device
CN113254710B (en) Video concentration method, system and equipment
CN117473469B (en) Model watermark embedding method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination