CN111783593A - Human face recognition method and device based on artificial intelligence, electronic equipment and medium - Google Patents

Human face recognition method and device based on artificial intelligence, electronic equipment and medium

Info

Publication number
CN111783593A
CN111783593A
Authority
CN
China
Prior art keywords
face
image
target
determining
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010585000.8A
Other languages
Chinese (zh)
Inventor
邹泽宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Life Insurance Company of China Ltd
Original Assignee
Ping An Life Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Life Insurance Company of China Ltd filed Critical Ping An Life Insurance Company of China Ltd
Priority to CN202010585000.8A priority Critical patent/CN111783593A/en
Publication of CN111783593A publication Critical patent/CN111783593A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to artificial intelligence and provides a face recognition method and device based on artificial intelligence, an electronic device, and a medium. The method extracts an image to be recognized, determines a face region in it, extracts facial feature information points, constructs a three-dimensional face image of the image to be recognized based on those feature points, extracts a plurality of target features from the three-dimensional face image, calculates the similarity between the target features and configuration features in a configuration library to obtain a plurality of target values, and determines the target user of the image to be recognized based on those target values. The invention also relates to blockchain technology: the three-dimensional face image is stored in a blockchain.

Description

Human face recognition method and device based on artificial intelligence, electronic equipment and medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a human face recognition method and device based on artificial intelligence, electronic equipment and a medium.
Background
Face recognition is a biometric technology that identifies people based on their facial feature information. In practice, a camera collects images or video streams containing faces, the faces in the images are automatically detected and tracked, and recognition is then performed on the detected faces.
At present, because the relative position of the camera and the face varies, multiple face images of the same user at different angles often cannot be acquired, so recognition accuracy is low when face features are compared. In addition, because the face features stored in the configuration library come from images taken at a single shooting angle (for example, frontal shots), the user must capture the image to be recognized at that same angle, which reduces face recognition efficiency.
Disclosure of Invention
In view of the above, it is desirable to provide a face recognition method, device, electronic device, and medium based on artificial intelligence that can improve both the efficiency and the accuracy of face recognition.
An artificial intelligence based face recognition method, comprising:
when a face recognition instruction is received, extracting an image to be recognized from the face recognition instruction, and determining a face area from the image to be recognized;
extracting face feature information points from the face region;
constructing a three-dimensional face image of the image to be recognized based on the face feature information points;
extracting a plurality of target features from the three-dimensional face image;
calculating the similarity between the plurality of target features and configuration features in a configuration library to obtain a plurality of target values;
determining a target user of the image to be recognized based on the plurality of target values.
According to a preferred embodiment of the present invention, the determining a face region from the image to be recognized includes:
determining a plurality of skin color areas from the image to be identified;
selecting the plurality of skin color areas from the image to be identified by using a detection window to obtain a plurality of areas to be determined;
splicing the multiple regions to be determined to obtain a target image;
and detecting the target image by using a pre-trained face detector to obtain the face region.
According to a preferred embodiment of the present invention, before detecting the target image using a pre-trained face detector, the method further comprises:
acquiring a data set, wherein the data set comprises a positive sample and a negative sample, the positive sample is a human face image, and the negative sample is a background image;
dividing the data set to obtain a training sample and a test sample;
extracting pixel level difference features of the training samples, and constructing a depth binary tree according to the pixel level difference features;
cascading the deep binary tree by adopting a bootstrap framework to generate a learner;
testing the learner using the test sample;
and when the learner is detected to pass the test, determining the learner as the human face detector.
According to the preferred embodiment of the present invention, the extracting of the face feature information points from the face region includes:
performing gray-value processing on the face region to obtain a plurality of pixel points of the face region and a gray value corresponding to each pixel point;
when a gray value is detected to be greater than a threshold, determining the pixel point corresponding to that gray value as a pupil edge point;
and determining the pupil center of the face region from the pupil edge points as a facial feature information point, and detecting the eye corners, mouth corners, and eye edges in the face region as facial feature information points using the SUSAN operator method.
According to a preferred embodiment of the present invention, the three-dimensional face image is stored in a blockchain, and the constructing the three-dimensional face image of the image to be recognized based on the face feature information points includes:
acquiring a reference vector and an average face;
constructing a target face according to the reference vector and the average face;
determining two-dimensional coordinates of the face feature information points in the image to be recognized;
mapping the two-dimensional coordinates to obtain three-dimensional coordinates;
and adjusting the target face according to the three-dimensional coordinates to obtain the three-dimensional face image.
According to a preferred embodiment of the present invention, the calculating the similarity between the target features and the configuration features in the configuration library to obtain the target values includes:
for each feature among the plurality of target features, determining the type to which the feature belongs, and acquiring a plurality of configuration features corresponding to that type from the configuration library;
calculating the similarity between the feature and the plurality of configuration features using a cosine distance formula to obtain a plurality of similarity distance values for the feature;
determining the largest of the similarity distance values as the target value of the feature;
and integrating the target values of all the features to obtain the plurality of target values corresponding to the plurality of target features.
According to a preferred embodiment of the present invention, the determining the target user of the image to be recognized based on the plurality of target values comprises:
determining configuration characteristics corresponding to the target values, determining users corresponding to the configuration characteristics, and obtaining users corresponding to the target characteristics;
and calculating the number of target features corresponding to the users, and determining the users with the largest number as the target users.
An artificial intelligence based face recognition apparatus, comprising:
the device comprises a determining unit, a judging unit and a judging unit, wherein the determining unit is used for extracting an image to be recognized from a face recognition instruction and determining a face area from the image to be recognized when the face recognition instruction is received;
the extracting unit is used for extracting face characteristic information points from the face region;
the construction unit is used for constructing a three-dimensional face image of the image to be recognized based on the face feature information points;
the extracting unit is further used for extracting a plurality of target features from the three-dimensional face image;
the calculating unit is used for calculating the similarity between the target characteristics and the configuration characteristics in the configuration library to obtain a plurality of target values;
the determining unit is further configured to determine a target user of the image to be recognized based on the plurality of target values.
According to a preferred embodiment of the present invention, the determining unit determining the face region from the image to be recognized includes:
determining a plurality of skin color areas from the image to be identified;
selecting the plurality of skin color areas from the image to be identified by using a detection window to obtain a plurality of areas to be determined;
splicing the multiple regions to be determined to obtain a target image;
and detecting the target image by using a pre-trained face detector to obtain the face region.
According to a preferred embodiment of the invention, the apparatus further comprises:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a data set before a pre-trained face detector is used for detecting the target image, the data set comprises a positive sample and a negative sample, the positive sample is a face image, and the negative sample is a background image;
the dividing unit is used for dividing the data set to obtain a training sample and a test sample;
the extraction unit is further configured to extract pixel level difference features of the training samples, and construct a depth binary tree according to the pixel level difference features;
the generating unit is used for cascading the deep binary tree by adopting a bootstrap framework to generate a learner;
a test unit for testing the learner using the test sample;
the determining unit is further configured to determine the learner as the face detector when it is detected that the learner passes the test.
According to a preferred embodiment of the present invention, the extracting unit extracting the face feature information points from the face region includes:
performing gray-value processing on the face region to obtain a plurality of pixel points of the face region and a gray value corresponding to each pixel point;
when a gray value is detected to be greater than a threshold, determining the pixel point corresponding to that gray value as a pupil edge point;
and determining the pupil center of the face region from the pupil edge points as a facial feature information point, and detecting the eye corners, mouth corners, and eye edges in the face region as facial feature information points using the SUSAN operator method.
According to a preferred embodiment of the present invention, the three-dimensional face image is stored in a blockchain, and the construction unit is specifically configured to:
acquiring a reference vector and an average face;
constructing a target face according to the reference vector and the average face;
determining two-dimensional coordinates of the face feature information points in the image to be recognized;
mapping the two-dimensional coordinates to obtain three-dimensional coordinates;
and adjusting the target face according to the three-dimensional coordinates to obtain the three-dimensional face image.
According to a preferred embodiment of the present invention, the computing unit is specifically configured to:
for each feature among the plurality of target features, determine the type to which the feature belongs, and acquire a plurality of configuration features corresponding to that type from the configuration library;
calculate the similarity between the feature and the plurality of configuration features using a cosine distance formula to obtain a plurality of similarity distance values for the feature;
determine the largest of the similarity distance values as the target value of the feature;
and integrate the target values of all the features to obtain the plurality of target values corresponding to the plurality of target features.
According to a preferred embodiment of the present invention, the determining unit determining the target user of the image to be recognized based on the plurality of target values includes:
determining configuration characteristics corresponding to the target values, determining users corresponding to the configuration characteristics, and obtaining users corresponding to the target characteristics;
and calculating the number of target features corresponding to the users, and determining the users with the largest number as the target users.
An electronic device, the electronic device comprising:
a memory storing at least one instruction; and
and the processor executes the instructions stored in the memory to realize the artificial intelligence based face recognition method.
A computer-readable storage medium having at least one instruction stored therein, the at least one instruction being executable by a processor in an electronic device to implement the artificial intelligence based face recognition method.
According to the technical scheme, multiple face images at multiple angles do not need to be acquired by the camera: the target user is accurately determined from the configuration library using a single face image, which saves device resources. In addition, constructing the three-dimensional face image from one face image requires little computation, which improves face recognition efficiency, while detecting the three-dimensional target features extracted from the constructed three-dimensional face image improves face recognition accuracy.
Drawings
FIG. 1 is a flow chart of the face recognition method based on artificial intelligence according to the preferred embodiment of the present invention.
Fig. 2 is a functional block diagram of a preferred embodiment of the artificial intelligence based face recognition apparatus of the present invention.
Fig. 3 is a schematic structural diagram of an electronic device according to a preferred embodiment of the present invention for implementing an artificial intelligence-based face recognition method.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flow chart of a preferred embodiment of the face recognition method based on artificial intelligence according to the present invention. The order of the steps in the flow chart may be changed and some steps may be omitted according to different needs.
The artificial-intelligence-based face recognition method is applied to one or more electronic devices, which are devices capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; their hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The electronic device may be any electronic product capable of performing human-computer interaction with a user, for example, a Personal computer, a tablet computer, a smart phone, a Personal Digital Assistant (PDA), a game machine, an interactive Internet Protocol Television (IPTV), an intelligent wearable device, and the like.
The electronic device may also include a network device and/or a user device. The network device includes, but is not limited to, a single network server, a server group consisting of a plurality of network servers, or a cloud computing (cloud computing) based cloud consisting of a large number of hosts or network servers.
The Network where the electronic device is located includes, but is not limited to, the internet, a wide area Network, a metropolitan area Network, a local area Network, a Virtual Private Network (VPN), and the like.
In at least one embodiment of the present invention, the present invention is applied to the field of artificial intelligence.
And S10, when a face recognition instruction is received, extracting an image to be recognized from the face recognition instruction, and determining a face area from the image to be recognized.
In at least one embodiment of the present invention, the face recognition instruction may be automatically triggered within a preset time.
Further, the preset time may be a time period, for example: the time period may be 8 hours, etc.
In at least one embodiment of the present invention, the information carried in the face recognition instruction includes, but is not limited to: the image to be recognized.
In at least one embodiment of the present invention, the face region refers to a face region of a person in the image to be recognized.
In at least one embodiment of the present invention, the electronic device determining a face region from the image to be recognized includes:
The electronic device determines a plurality of skin color regions from the image to be recognized. It then selects those skin color regions from the image using a detection window to obtain a plurality of regions to be determined, splices the regions to be determined into a target image, and detects the target image with a pre-trained face detector to obtain the face region.
The skin color area comprises a face area and four limb areas.
By determining the face region, the calculation of useless regions (for example, pixels corresponding to body parts) can be reduced, which is beneficial to improving the recognition speed of face recognition, and meanwhile, the interference of the pixels corresponding to the body parts is removed, which is beneficial to improving the recognition precision of the face shape.
Specifically, the electronic equipment determines a plurality of skin color areas from the image to be identified, and comprises the following steps:
The electronic device determines the luminance value of each pixel in the image to be recognized and detects whether each luminance value lies within a preset interval. When the luminance value of a pixel is not within the preset interval, the region corresponding to that pixel is determined to be a non-skin-color region; eliminating the non-skin-color regions from the image to be recognized yields the plurality of skin color regions.
The preset intervals are determined according to skin color, and different skin colors correspond to different preset intervals. The specific interval values can be configured according to actual conditions, and the invention is not limited in this respect.
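The luminance-interval test above can be sketched as follows. The interval bounds `lo`/`hi` and the BT.601 luma formula are illustrative assumptions, since the patent leaves the preset interval to per-skin-tone configuration.

```python
import numpy as np

def skin_color_mask(image, lo=80, hi=230):
    """Mark pixels whose luminance falls inside a preset interval.

    `image` is an H x W x 3 uint8 RGB array; `lo`/`hi` stand in for the
    configurable preset interval described in the patent.
    """
    rgb = image.astype(np.float32)
    # ITU-R BT.601 luma approximation for the per-pixel luminance value
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # True = candidate skin-color pixel; False pixels form non-skin regions
    return (luma >= lo) & (luma <= hi)

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:2] = 150                      # bright block falls inside the interval
mask = skin_color_mask(img)
```

Connected groups of `True` pixels would then be cut out with the detection window to form the regions to be determined.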
In at least one embodiment of the invention, before detecting the target image using a pre-trained face detector, the method further comprises:
The electronic device obtains a data set comprising positive samples (face images) and negative samples (background images). It divides the data set into training samples and test samples, extracts pixel-level difference features from the training samples, constructs a depth binary tree from those features, and cascades the depth binary trees using a bootstrap framework to generate a learner. The electronic device then tests the learner with the test samples and, when the learner is detected to pass the test, determines the learner to be the face detector.
In at least one embodiment of the invention, after the electronic device acquires the data set, the method further comprises:
The electronic device calculates the number of images in the positive samples and detects whether that number is below a preset threshold; when it is, the electronic device increases the number of positive-sample images by a perturbation method.
If the number of positive-sample images is below the preset threshold, the positive-sample images can be perturbed to increase their number, which avoids the poor generalization that a face detector trained on too few positive samples would exhibit. The perturbation method itself is prior art and is not described further here.
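Since the perturbation method is left unspecified, a minimal illustrative stand-in is flipping and shifting randomly chosen positive samples until the set reaches the required size:

```python
import numpy as np

def perturb_positive_samples(images, target_count, rng=None):
    """Grow a positive-sample set to `target_count` by simple perturbations.

    The flip/shift operations are illustrative assumptions; any
    label-preserving perturbation of the face images would serve.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    ops = [
        lambda im: im[:, ::-1],                 # horizontal flip
        lambda im: np.roll(im, 1, axis=0),      # shift down one pixel
        lambda im: np.roll(im, 1, axis=1),      # shift right one pixel
    ]
    augmented = list(images)
    while len(augmented) < target_count:
        src = images[rng.integers(len(images))]  # pick an original sample
        op = ops[rng.integers(len(ops))]         # pick a perturbation
        augmented.append(op(src))
    return augmented

base = [np.arange(16).reshape(4, 4) for _ in range(3)]
out = perturb_positive_samples(base, target_count=8)
```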
Specifically, the dividing, by the electronic device, the data set to obtain the training sample and the test sample includes:
the electronic equipment randomly divides the data set into at least one data packet according to a preset proportion, determines any one data packet in the at least one data packet as the test sample, determines the rest data packets as the training samples, and repeats the steps until all the data packets are sequentially used as the test samples.
The preset ratio can be set by user, and the invention is not limited.
By dividing the data set, each data in the data set participates in training and testing, thereby improving the fitting degree of the face detector.
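The division scheme above — every packet serving once as the test sample while the rest train — can be sketched as a rotation over randomly formed packets (the packet count stands in for the user-set preset proportion):

```python
import random

def packet_splits(dataset, num_packets=5, seed=0):
    """Yield (train, test) pairs where each packet is the test set once."""
    items = list(dataset)
    random.Random(seed).shuffle(items)          # random division of the data set
    packets = [items[i::num_packets] for i in range(num_packets)]
    for i in range(num_packets):
        test = packets[i]
        train = [x for j, p in enumerate(packets) if j != i for x in p]
        yield train, test

splits = list(packet_splits(range(10), num_packets=5))
```

Every sample thus participates in both training and testing, which is what the text credits for the detector's improved fit.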
In other embodiments, when it is detected that the learner fails the test, the electronic device adjusts the learner by using a hyper-parameter mesh search method until the learner passes the test, so as to obtain the face detector.
And S11, extracting the face feature information points from the face region.
In at least one embodiment of the present invention, the facial feature information points include an eye corner, a mouth corner, a pupil center, a mouth center, and an eye edge.
In at least one embodiment of the present invention, the electronic device extracting facial feature information points from the facial region includes:
The electronic device performs gray-value processing on the face region to obtain its pixel points and the gray value of each pixel point. When a gray value is detected to be greater than a threshold, the corresponding pixel point is determined to be a pupil edge point. The electronic device then determines the pupil center of the face region from the pupil edge points as a facial feature information point, and detects the eye corners, mouth corners, and eye edges in the face region as facial feature information points using the SUSAN operator method.
The SUSAN (Smallest Univalue Segment Assimilating Nucleus) operator is a gray-scale-based feature point detection method. An approximately circular template is moved over the image, and the gray value of each pixel inside the template is compared with the gray value of the template's central pixel (the nucleus); if the difference between a pixel's gray value and the nucleus's gray value is smaller than a given threshold, the pixel is considered to have the same (or similar) gray value as the nucleus.
Through the implementation mode, the face feature information points can be accurately determined.
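The SUSAN template comparison can be sketched as below. The similarity threshold `t` and template radius are illustrative; for brevity the sketch computes only the USAN area (the count of template pixels similar to the nucleus), whose small values mark corner-like points such as eye and mouth corners.

```python
import numpy as np

def susan_usan_area(gray, t=27, radius=3):
    """Per-pixel USAN area under an approximately circular template."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    offsets = [(dy, dx) for dy, dx in zip(ys.ravel(), xs.ravel())
               if dy * dy + dx * dx <= radius * radius]
    g = gray.astype(np.int32)
    usan = np.zeros_like(g)
    for dy, dx in offsets:
        # compare each template pixel with the nucleus (np.roll wraps at edges)
        shifted = np.roll(np.roll(g, dy, axis=0), dx, axis=1)
        usan += (np.abs(shifted - g) <= t).astype(np.int32)
    return usan

gray = np.full((10, 10), 100, dtype=np.uint8)  # uniform image: maximal USAN
resp = susan_usan_area(gray)
```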
And S12, constructing the three-dimensional face image of the image to be recognized based on the face feature information points.
It should be emphasized that, to further ensure the privacy and security of the three-dimensional face image, it may also be stored in a node of a blockchain.
In at least one embodiment of the present invention, the reference vector includes a first feature vector of the 3D deformation model and a second feature vector of the 3D shape fusion model. The first feature vector refers to parameters of shape change of the face under different conditions, and further, the second feature vector refers to parameters of expression change of the face under different conditions.
In at least one embodiment of the invention, open-source 3D Morphable Models (3DMMs) are distributed together with an average face, so the electronic device can obtain the average face from the open-source website.
In at least one embodiment of the present invention, the electronic device, based on the facial feature information points, constructing a three-dimensional facial image of the image to be recognized includes:
The electronic device obtains a reference vector and an average face and constructs a target face from them. It then determines the two-dimensional coordinates of the facial feature information points in the image to be recognized, maps those two-dimensional coordinates to three-dimensional coordinates, and adjusts the target face according to the three-dimensional coordinates to obtain the three-dimensional face image.
And through the reference vector and the average face, the target face can be quickly determined, and then the three-dimensional face image can be quickly determined.
Specifically, the electronic device performs mapping processing on the two-dimensional coordinates by using a deep learning network to obtain three-dimensional coordinates.
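Constructing the target face from the reference vector and the average face can be sketched in the standard linear 3DMM form; the basis matrices and coefficient sizes below are toy stand-ins, since the patent does not specify the model's dimensions.

```python
import numpy as np

def build_target_face(mean_face, shape_basis, expr_basis,
                      shape_coef, expr_coef):
    """Target face = average face + shape deviation + expression deviation.

    `shape_basis`/`expr_basis` play the role of the first and second
    feature vectors of the reference vector described in the text.
    """
    return (mean_face
            + shape_basis @ shape_coef      # shape change of the face
            + expr_basis @ expr_coef)       # expression change of the face

n_vertices = 4                              # toy model: 4 vertices, xyz each
mean = np.zeros(3 * n_vertices)
shape_b = np.eye(3 * n_vertices, 2)         # 2 shape components (assumed)
expr_b = np.eye(3 * n_vertices, 2)          # 2 expression components (assumed)
face = build_target_face(mean, shape_b, expr_b,
                         np.array([1.0, 2.0]), np.array([0.5, 0.0]))
```

The subsequent adjustment step would move this target face's vertices toward the three-dimensional coordinates mapped from the detected feature points.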
And S13, extracting a plurality of target features from the three-dimensional face image.
In at least one embodiment of the invention, the plurality of target features may include: eyes, mouth, nose, etc.
In at least one embodiment of the present invention, a manner in which the electronic device extracts a plurality of target features from the three-dimensional face image may be the same as a manner in which the electronic device extracts face feature information points from the face region, which is not described in detail herein.
And S14, calculating the similarity between the target characteristics and the configuration characteristics in the configuration library to obtain a plurality of target values.
In at least one embodiment of the present invention, the electronic device calculates similarities between the target features and the configuration features in the configuration library, and obtaining the target values includes:
For each feature among the plurality of target features, the electronic device determines the type to which the feature belongs and acquires the configuration features corresponding to that type from the configuration library. The electronic device calculates the similarity between the feature and those configuration features using a cosine distance formula to obtain the feature's similarity distance values, determines the largest of them as the feature's target value, and integrates the target values of all the features to obtain the plurality of target values corresponding to the plurality of target features.
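The cosine comparison and largest-value selection for one target feature can be sketched as:

```python
import numpy as np

def best_match(feature, config_features):
    """Return (index, score) of the most similar configured feature.

    Cosine similarity is used, per the patent's cosine distance formula;
    the feature vectors themselves are assumed inputs.
    """
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = [cos(feature, c) for c in config_features]
    idx = int(np.argmax(scores))       # the largest similarity value wins
    return idx, scores[idx]

feat = np.array([1.0, 0.0])
configs = [np.array([0.0, 1.0]),       # orthogonal: similarity 0
           np.array([2.0, 0.0]),       # same direction: similarity 1
           np.array([1.0, 1.0])]
idx, score = best_match(feat, configs)
```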
S15, determining the target user of the image to be recognized based on the target values.
In at least one embodiment of the present invention, the electronic device determining the target user of the image to be recognized based on the plurality of target values comprises:
The electronic device determines the configuration features corresponding to the target values and the users corresponding to those configuration features, thereby obtaining the user matched by each target feature. It then counts the number of target features corresponding to each user and determines the user with the largest count as the target user.
Through the embodiment, the target user is determined according to the target values corresponding to the target characteristics, and the recognition accuracy of face recognition is improved.
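A minimal sketch of this majority vote, assuming each target feature has already been matched to its best-scoring user (all names here are hypothetical):

```python
from collections import Counter

def pick_target_user(per_feature_best):
    """per_feature_best: dict mapping feature type -> (user, score).
    The user matched by the largest number of target features wins."""
    counts = Counter(user for user, _ in per_feature_best.values())
    return counts.most_common(1)[0][0]
```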
According to the technical scheme, multiple face images at multiple angles do not need to be acquired by the camera equipment: the target user is accurately determined from the configuration library using only one face image, which saves equipment resources. In addition, the amount of calculation needed to construct the three-dimensional face image from one face image is small, so face recognition efficiency can be improved. Meanwhile, because a three-dimensional face image is constructed and the extracted three-dimensional target features are detected, face recognition precision can also be improved.
Fig. 2 is a functional block diagram of a face recognition device based on artificial intelligence according to a preferred embodiment of the present invention. The artificial intelligence based face recognition device 11 comprises a determining unit 110, an extracting unit 111, a constructing unit 112, a calculating unit 113, an acquisition unit 114, a dividing unit 115, a generating unit 116, a testing unit 117, a detecting unit 118, a perturbation unit 119 and an adjusting unit 120. A module/unit referred to in the present invention is a series of computer program segments, stored in the memory 12, that can be executed by the processor 13 and perform a fixed function. In the present embodiment, the functions of the modules/units will be described in detail in the following embodiments.
When receiving a face recognition instruction, the determination unit 110 extracts an image to be recognized from the face recognition instruction and determines a face region from the image to be recognized.
In at least one embodiment of the present invention, the face recognition instruction may be automatically triggered within a preset time.
Further, the preset time may be a time period, for example: the time period may be 8 hours, etc.
In at least one embodiment of the present invention, the information carried in the face recognition instruction includes, but is not limited to: the image to be recognized.
In at least one embodiment of the present invention, the face region refers to a face region of a person in the image to be recognized.
In at least one embodiment of the present invention, the determining unit 110 determines a face region from the image to be recognized includes:
the determining unit 110 determines a plurality of skin color regions from the image to be recognized; further, the determining unit 110 selects the plurality of skin color regions from the image to be recognized using a detection window to obtain a plurality of regions to be determined; the determining unit 110 splices the plurality of regions to be determined to obtain a target image; and the determining unit 110 detects the target image using a pre-trained face detector to obtain the face region.
The skin color regions include the face region and the limb regions.
By determining the face region, the calculation of useless regions (for example, pixels corresponding to body parts) can be reduced, which is beneficial to improving the recognition speed of face recognition, and meanwhile, the interference of the pixels corresponding to the body parts is removed, which is beneficial to improving the recognition precision of the face shape.
Specifically, the determining unit 110 determines a plurality of skin color regions from the image to be recognized, including:
the determining unit 110 determines the brightness value of each pixel in the image to be recognized and detects whether each brightness value is within a preset interval; when the brightness value of any pixel is not within the preset interval, the determining unit 110 determines the area corresponding to that pixel as a non-skin-color area; further, the determining unit 110 eliminates the non-skin-color areas from the image to be recognized to obtain the plurality of skin color areas.
The preset intervals are determined according to human skin color, and different skin colors correspond to different preset intervals, so a plurality of preset intervals may be configured. The specific values can be set according to actual conditions, and the invention is not limited in this respect.
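The brightness-interval test above can be sketched as follows (the interval bounds are illustrative placeholders, not values from the patent, and would be tuned per skin tone in practice):

```python
import numpy as np

def skin_mask(gray, low=80, high=230):
    """Keep pixels whose brightness falls inside the preset interval;
    pixels outside the interval are treated as non-skin-color areas."""
    return (gray >= low) & (gray <= high)
```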
In at least one embodiment of the invention, before the target image is detected using the pre-trained face detector, the acquisition unit 114 acquires a data set comprising positive samples and negative samples, where a positive sample is a face image and a negative sample is a background image. Further, the dividing unit 115 divides the data set to obtain training samples and test samples. The extracting unit 111 extracts pixel-level difference features of the training samples and constructs deep binary trees according to the pixel-level difference features. The generating unit 116 cascades the deep binary trees using a bootstrap framework to generate a learner. Further, the testing unit 117 tests the learner using the test samples, and when the learner is detected to have passed the test, the determination unit 110 determines the learner as the face detector.
In at least one embodiment of the present invention, after the acquisition unit 114 acquires the data set, the calculating unit 113 calculates the number of images corresponding to the positive samples; the detecting unit 118 then detects whether the number of images is smaller than a preset number threshold, and when it is, the perturbation unit 119 increases the number of positive-sample images by a perturbation method.
If the number of positive-sample images is less than the preset number threshold, the positive-sample images can be perturbed using a perturbation method to increase their number, which avoids the poor generalization ability in face recognition that would result from training the face detector on too few positive-sample images. The perturbation method itself is prior art and is not described in detail in the present invention.
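The patent leaves the perturbation method to prior art; as one common sketch (mirroring plus mild Gaussian noise — these specific perturbations are an assumption, not the patent's method), extra positive samples could be generated like this:

```python
import numpy as np

def perturb(image, rng):
    """Generate extra positive samples from one face image by
    horizontal mirroring and by adding mild Gaussian noise."""
    flipped = image[:, ::-1]                                  # horizontal mirror
    noisy = np.clip(image + rng.normal(0, 5, image.shape), 0, 255)
    return [flipped, noisy]
```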
Specifically, the dividing unit 115 divides the data set to obtain training samples and test samples includes:
the dividing unit 115 randomly divides the data set into at least one data packet according to a preset proportion, the dividing unit 115 determines any one data packet of the at least one data packet as the test sample, and the rest data packets as the training samples, and the above steps are repeated until all the data packets are sequentially used as the test samples.
The preset ratio can be set as needed by the user, and the invention is not limited in this respect.
By dividing the data set in this way, every item of data in the data set participates in both training and testing, thereby improving the fitting degree of the face detector.
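This rotating division, in which each packet serves once as the test sample while the remaining packets train, can be sketched as follows (function and variable names are illustrative):

```python
import random

def rotating_splits(dataset, k):
    """Randomly partition the data into k packets, then let each
    packet serve in turn as the test sample while the rest train."""
    data = list(dataset)
    random.shuffle(data)
    packets = [data[i::k] for i in range(k)]          # k roughly equal packets
    for i, test in enumerate(packets):
        train = [x for j, p in enumerate(packets) if j != i for x in p]
        yield train, test
```

This is essentially k-fold cross-validation, which matches the stated goal that all data participates in both training and testing.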
In other embodiments, when it is detected that the learner fails the test, the adjusting unit 120 adjusts the learner by using a hyper-parameter mesh search method until the learner passes the test, so as to obtain the face detector.
The extraction unit 111 extracts face feature information points from the face region.
In at least one embodiment of the present invention, the facial feature information points include an eye corner, a mouth corner, a pupil center, a mouth center, and an eye edge.
In at least one embodiment of the present invention, the extracting unit 111 extracts the face feature information points from the face region, including:
the extraction unit 111 performs gray-value processing on the face region to obtain a plurality of pixel points of the face region and the gray value corresponding to each pixel point. When any gray value is detected to be greater than a threshold value, the extraction unit 111 determines the pixel point corresponding to that gray value as a pupil edge point. The extraction unit 111 then determines the pupil center of the face region from the pupil edge points as a face feature information point, and detects the eye corners, mouth corners and eye edges in the face region as face feature information points using the SUSAN operator method.
The SUSAN (Smallest Univalue Segment Assimilating Nucleus) operator is a gray-scale-based feature point extraction method. The SUSAN operator uses an approximately circular template that is moved across the image; the gray value of each image pixel inside the template is compared with the gray value of the template's central pixel, and if the difference between the gray value of a pixel in the template and that of the central pixel (the nucleus) is smaller than a certain value, that pixel is considered to have the same (or similar) gray value as the nucleus.
Through the implementation mode, the face feature information points can be accurately determined.
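A simplified sketch of the SUSAN response described above (the thresholds `t` and the 0.75 geometric factor are illustrative defaults, not values fixed by the patent):

```python
import numpy as np

def susan_response(gray, t=27, radius=3):
    """Simplified SUSAN: for each pixel (the nucleus), count the
    template pixels whose gray value is within t of the nucleus (the
    USAN area). Small USAN areas indicate corners such as eye corners
    and mouth corners."""
    h, w = gray.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = ys**2 + xs**2 <= radius**2          # approximately circular template
    offsets = list(zip(ys[mask], xs[mask]))
    usan = np.zeros((h, w))
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            nucleus = int(gray[y, x])
            usan[y, x] = sum(
                abs(int(gray[y + dy, x + dx]) - nucleus) < t
                for dy, dx in offsets)
    g = 0.75 * mask.sum()                      # geometric threshold
    return np.where(usan < g, g - usan, 0.0)   # corner response
```

Flat regions yield a full USAN area and zero response, while corners (where the nucleus sees only a small patch of similar gray) produce a positive response.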
The construction unit 112 constructs a three-dimensional face image of the image to be recognized based on the face feature information points.
It should be emphasized that, in order to further ensure the privacy and security of the three-dimensional face image, the three-dimensional face image may also be stored in a node of a blockchain.
In at least one embodiment of the present invention, the reference vector includes a first feature vector of a 3D deformation model and a second feature vector of a 3D shape fusion model. The first feature vector refers to parameters describing how the shape of the face changes under different conditions, and the second feature vector refers to parameters describing how the expression of the face changes under different conditions.
In at least one embodiment of the invention, open-source 3DMMs (3D Morphable Models) are distributed together with an average face; therefore, the electronic device can obtain the average face from the open-source website.
In at least one embodiment of the present invention, the constructing unit 112 constructs a three-dimensional face image of the image to be recognized based on the face feature information points, including:
the construction unit 112 obtains a reference vector and an average face; the construction unit 112 constructs a target face according to the reference vector and the average face; further, the construction unit 112 determines the two-dimensional coordinates of the face feature information points in the image to be recognized; the construction unit 112 performs mapping processing on the two-dimensional coordinates to obtain three-dimensional coordinates; and the construction unit 112 adjusts the target face according to the three-dimensional coordinates to obtain the three-dimensional face image.
And through the reference vector and the average face, the target face can be quickly determined, and then the three-dimensional face image can be quickly determined.
Specifically, the constructing unit 112 performs mapping processing on the two-dimensional coordinates by using a deep learning network to obtain three-dimensional coordinates.
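In the standard 3DMM formulation this construction amounts to adding shape and expression deformations to the average face. A minimal sketch, assuming the shape basis (first feature vector) and expression basis (second feature vector) come with the model and all names are illustrative:

```python
import numpy as np

def build_target_face(mean_face, shape_basis, expr_basis, alpha, beta):
    """3DMM-style target face: the average face plus a shape
    deformation (coefficients alpha) and an expression deformation
    (coefficients beta). mean_face is a flattened (3n,) vertex vector;
    each basis is a (3n, k) matrix."""
    return mean_face + shape_basis @ alpha + expr_basis @ beta
```

With zero coefficients the result is exactly the average face, and the coefficients are what the construction unit would adjust to fit the mapped three-dimensional coordinates of the feature information points.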
The extraction unit 111 extracts a plurality of target features from the three-dimensional face image.
In at least one embodiment of the invention, the plurality of target features may include: eyes, mouth, nose, etc.
In at least one embodiment of the present invention, a manner of extracting a plurality of target features from the three-dimensional face image by the extracting unit 111 may be the same as a manner of extracting face feature information points from the face region by the extracting unit 111, which is not described in detail herein.
The calculating unit 113 calculates the similarity between the target features and the configuration features in the configuration library to obtain target values.
In at least one embodiment of the present invention, the calculating unit 113 calculating the similarity between the target features and the configuration features in the configuration library to obtain the plurality of target values includes:
for any feature among the plurality of target features, the calculating unit 113 determines the type to which that feature belongs and acquires, from the configuration library, a plurality of configuration features corresponding to that type; the calculating unit 113 calculates the similarity between that feature and each of the configuration features using a cosine distance formula to obtain a plurality of similarity distance values for that feature; the calculating unit 113 determines the largest of these similarity distance values as the target value of that feature; and the calculating unit 113 aggregates the target values of all of the features to obtain the plurality of target values corresponding to the plurality of target features.
The determination unit 110 determines a target user of the image to be recognized based on the plurality of target values.
In at least one embodiment of the present invention, the determining unit 110 determines the target user of the image to be recognized based on the plurality of target values includes:
the determining unit 110 determines the configuration feature corresponding to each target value and determines the user corresponding to each configuration feature, thereby obtaining the user matched by each target feature; it then counts the number of target features matched to each user and determines the user with the largest count as the target user.
Through the embodiment, the target user is determined according to the target values corresponding to the target characteristics, and the recognition accuracy of face recognition is improved.
According to the technical scheme, multiple face images at multiple angles do not need to be acquired by the camera equipment: the target user is accurately determined from the configuration library using only one face image, which saves equipment resources. In addition, the amount of calculation needed to construct the three-dimensional face image from one face image is small, so face recognition efficiency can be improved. Meanwhile, because a three-dimensional face image is constructed and the extracted three-dimensional target features are detected, face recognition precision can also be improved.
Fig. 3 is a schematic structural diagram of an electronic device according to a preferred embodiment of the present invention, which implements a face recognition method based on artificial intelligence.
In one embodiment of the present invention, the electronic device 1 includes, but is not limited to, a memory 12, a processor 13, and a computer program, such as an artificial intelligence based face recognition program, stored in the memory 12 and executable on the processor 13.
It will be appreciated by a person skilled in the art that the schematic diagram is only an example of the electronic device 1 and does not constitute a limitation of the electronic device 1; the device may comprise more or fewer components than shown, some components may be combined, or different components may be used. For example, the electronic device 1 may further comprise input/output devices, a network access device, a bus, etc.
The Processor 13 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The processor 13 is an operation core and a control center of the electronic device 1, and is connected to each part of the whole electronic device 1 by various interfaces and lines, and executes an operating system of the electronic device 1 and various installed application programs, program codes, and the like.
The processor 13 executes an operating system of the electronic device 1 and various installed application programs. The processor 13 executes the application program to implement the steps in each of the above embodiments of the artificial intelligence based face recognition method, such as the steps shown in fig. 1.
Illustratively, the computer program may be divided into one or more modules/units, which are stored in the memory 12 and executed by the processor 13 to accomplish the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program in the electronic device 1. For example, the computer program may be divided into a determination unit 110, an extraction unit 111, a construction unit 112, a calculation unit 113, an acquisition unit 114, a division unit 115, a generation unit 116, a test unit 117, a detection unit 118, a perturbation unit 119, and an adjustment unit 120.
The memory 12 can be used for storing the computer programs and/or modules, and the processor 13 implements various functions of the electronic device 1 by running or executing the computer programs and/or modules stored in the memory 12 and calling data stored in the memory 12. The memory 12 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the electronic device, and the like. Further, the memory 12 may include a non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
The memory 12 may be an external memory and/or an internal memory of the electronic device 1. Further, the memory 12 may be a memory having a physical form, such as a memory stick, a TF Card (Trans-flash Card), or the like.
The integrated modules/units of the electronic device 1 may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented.
Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying said computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM).
The blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks associated using cryptographic methods, where each data block contains information on a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
With reference to fig. 1, the memory 12 of the electronic device 1 stores a plurality of instructions to implement an artificial intelligence based face recognition method, and the processor 13 can execute the plurality of instructions to implement: when a face recognition instruction is received, extracting an image to be recognized from the face recognition instruction, and determining a face area from the image to be recognized; extracting face feature information points from the face region; constructing a three-dimensional face image of the image to be recognized based on the face feature information points; extracting a plurality of target features from the three-dimensional face image; calculating the similarity between the target characteristics and the configuration characteristics in the configuration library to obtain a plurality of target values; determining a target user of the image to be recognized based on the plurality of target values.
Specifically, the processor 13 may refer to the description of the relevant steps in the embodiment corresponding to fig. 1 for a specific implementation method of the instruction, which is not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means through software or hardware. Terms such as first and second are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A face recognition method based on artificial intelligence is characterized in that the face recognition method based on artificial intelligence comprises the following steps:
when a face recognition instruction is received, extracting an image to be recognized from the face recognition instruction, and determining a face area from the image to be recognized;
extracting face feature information points from the face region;
constructing a three-dimensional face image of the image to be recognized based on the face feature information points;
extracting a plurality of target features from the three-dimensional face image;
calculating the similarity between the target characteristics and the configuration characteristics in the configuration library to obtain a plurality of target values;
determining a target user of the image to be recognized based on the plurality of target values.
2. The artificial intelligence based face recognition method of claim 1, wherein the determining a face region from the image to be recognized comprises:
determining a plurality of skin color areas from the image to be identified;
selecting the plurality of skin color areas from the image to be identified by using a detection window to obtain a plurality of areas to be determined;
splicing the multiple regions to be determined to obtain a target image;
and detecting the target image by using a pre-trained face detector to obtain the face region.
3. The artificial intelligence based face recognition method of claim 2, wherein prior to detecting the target image with a pre-trained face detector, the artificial intelligence based face recognition method further comprises:
acquiring a data set, wherein the data set comprises a positive sample and a negative sample, the positive sample is a human face image, and the negative sample is a background image;
dividing the data set to obtain a training sample and a test sample;
extracting pixel level difference features of the training samples, and constructing a depth binary tree according to the pixel level difference features;
cascading the deep binary tree by adopting a bootstrap framework to generate a learner;
testing the learner using the test sample;
and when the learner is detected to pass the test, determining the learner as the human face detector.
4. The artificial intelligence based face recognition method of claim 1, wherein the extracting of the face feature information points from the face region comprises:
carrying out gray value processing on the face area to obtain a plurality of pixel points of the face area and a gray value corresponding to each pixel point;
when any gray value is detected to be larger than a threshold value, determining a pixel point corresponding to the any gray value as a pupil edge point;
and determining the pupil center of the face region as the face feature information point according to the pupil edge point, and detecting the eye corner, the mouth corner and the eye edge in the face region as the face feature information point by using a SUSAN operator method.
5. The artificial intelligence based face recognition method according to claim 1, wherein the three-dimensional face image is stored in a block chain, and the constructing the three-dimensional face image of the image to be recognized based on the face feature information points comprises:
acquiring a reference vector and an average face;
constructing a target face according to the reference vector and the average face;
determining two-dimensional coordinates of the face feature information points in the image to be recognized;
mapping the two-dimensional coordinates to obtain three-dimensional coordinates;
and adjusting the target face according to the three-dimensional coordinates to obtain the three-dimensional face image.
6. The artificial intelligence based face recognition method of claim 1, wherein the calculating the similarity of the target features and the configuration features in the configuration library to obtain target values comprises:
for any feature in the target features, determining the type of the any feature, and acquiring a plurality of configuration features corresponding to the type from a configuration library;
calculating the similarity of the arbitrary features and the plurality of configuration features by adopting a cosine distance formula to obtain a plurality of similar distance values of the arbitrary features;
determining the similarity distance value with the largest value in the similarity distance values as the target value of the arbitrary feature;
and integrating a plurality of target values of the arbitrary features to obtain a plurality of target values corresponding to the target features.
7. The artificial intelligence based face recognition method of claim 1, wherein the determining a target user of the image to be recognized based on the plurality of target values comprises:
determining configuration characteristics corresponding to the target values, determining users corresponding to the configuration characteristics, and obtaining users corresponding to the target characteristics;
and calculating the number of target features corresponding to the users, and determining the users with the largest number as the target users.
8. An artificial intelligence based face recognition apparatus, comprising:
the device comprises a determining unit, a judging unit and a judging unit, wherein the determining unit is used for extracting an image to be recognized from a face recognition instruction and determining a face area from the image to be recognized when the face recognition instruction is received;
the extracting unit is used for extracting face characteristic information points from the face region;
the construction unit is used for constructing a three-dimensional face image of the image to be recognized based on the face feature information points;
the extracting unit is further used for extracting a plurality of target features from the three-dimensional face image;
the calculating unit is used for calculating the similarity between the target characteristics and the configuration characteristics in the configuration library to obtain a plurality of target values;
the determining unit is further configured to determine a target user of the image to be recognized based on the plurality of target values.
9. An electronic device, characterized in that the electronic device comprises:
a memory storing at least one instruction; and
a processor executing instructions stored in the memory to implement the artificial intelligence based face recognition method according to any one of claims 1 to 7.
10. A computer-readable storage medium characterized by: the computer-readable storage medium has stored therein at least one instruction that is executable by a processor in an electronic device to implement the artificial intelligence based face recognition method according to any one of claims 1 to 7.
CN202010585000.8A 2020-06-23 2020-06-23 Human face recognition method and device based on artificial intelligence, electronic equipment and medium Pending CN111783593A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010585000.8A CN111783593A (en) 2020-06-23 2020-06-23 Human face recognition method and device based on artificial intelligence, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010585000.8A CN111783593A (en) 2020-06-23 2020-06-23 Human face recognition method and device based on artificial intelligence, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN111783593A true CN111783593A (en) 2020-10-16

Family

ID=72759684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010585000.8A Pending CN111783593A (en) 2020-06-23 2020-06-23 Human face recognition method and device based on artificial intelligence, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN111783593A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112270747A (en) * 2020-11-10 2021-01-26 杭州海康威视数字技术股份有限公司 Face recognition method and device and electronic equipment
CN112446310A (en) * 2020-11-19 2021-03-05 杭州趣链科技有限公司 Age identification system, method and device based on block chain
CN113326814A (en) * 2021-02-22 2021-08-31 王先峰 Face recognition equipment based on 5G framework
CN113673374A (en) * 2021-08-03 2021-11-19 支付宝(杭州)信息技术有限公司 Face recognition method, device and equipment
CN113673374B (en) * 2021-08-03 2024-01-30 支付宝(杭州)信息技术有限公司 Face recognition method, device and equipment

Similar Documents

Publication Publication Date Title
CN111783593A (en) Human face recognition method and device based on artificial intelligence, electronic equipment and medium
CN110874594B (en) Human body appearance damage detection method and related equipment based on semantic segmentation network
US11928800B2 (en) Image coordinate system transformation method and apparatus, device, and storage medium
CN111814620A (en) Face image quality evaluation model establishing method, optimization method, medium and device
CN112668453B (en) Video identification method and related equipment
CN113870395A (en) Animation video generation method, device, equipment and storage medium
CN113689436A (en) Image semantic segmentation method, device, equipment and storage medium
CN115222443A (en) Client group division method, device, equipment and storage medium
CN110008922A (en) Image processing method, unit, medium for terminal device
CN113705468A (en) Digital image identification method based on artificial intelligence and related equipment
CN113627576B (en) Code scanning information detection method, device, equipment and storage medium
CN116205723A (en) Artificial intelligence-based face tag risk detection method and related equipment
TWI803243B (en) Method for expanding images, computer device and storage medium
CN113850632B (en) User category determination method, device, equipment and storage medium
CN110751034B (en) Pedestrian behavior recognition method and terminal equipment
CN115035343A (en) Neural network training method, article detection method, apparatus, device and medium
CN111222448B (en) Image conversion method and related product
CN113435357A (en) Voice broadcasting method, device, equipment and storage medium
CN113516205A (en) Data classification method, device, equipment and storage medium based on artificial intelligence
CN103473562A (en) Automatic training and identifying system for specific human body action
CN114764814A (en) Plant height determination method and device, electronic equipment and medium
CN112070744A (en) Face recognition method, system, device and readable storage medium
CN113254710B (en) Video concentration method, system and equipment
Basurah et al. Implementation of a face recognition and liveness detection system using TensorFlow.js
CN113421575B (en) Voiceprint recognition method, voiceprint recognition device, voiceprint recognition equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination