CN113743191B - Face image alignment detection method and device, electronic equipment and storage medium - Google Patents

Face image alignment detection method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN113743191B
CN113743191B (application CN202110809114.0A)
Authority
CN
China
Prior art keywords
depth
point
face
face image
alignment
Prior art date
Legal status
Active
Application number
CN202110809114.0A
Other languages
Chinese (zh)
Other versions
CN113743191A (en)
Inventor
何金辉 (He Jinhui)
Current Assignee
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd filed Critical Shenzhen Intellifusion Technologies Co Ltd
Priority to CN202110809114.0A
Publication of CN113743191A
Application granted
Publication of CN113743191B

Links

Classifications

    • G06F18/214 — Pattern recognition; Analysing; Design or setup of recognition systems or techniques; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 — Pattern recognition; Analysing; Classification techniques
    • G06N3/045 — Computing arrangements based on biological models; Neural networks; Architecture; Combinations of networks
    • G06N3/08 — Computing arrangements based on biological models; Neural networks; Learning methods

Abstract

The embodiment of the invention provides a face image alignment detection method, which comprises the following steps: acquiring a face image, wherein the face image comprises a color channel and a depth channel; extracting search key points and a first alignment point from the color channel of the face image through a pre-trained face key point model; determining a search area in the depth channel of the face image according to the search key points; determining a second alignment point according to the depth value of each pixel point in the search area; and calculating the distance between the first alignment point and the second alignment point, and if the distance is smaller than a preset distance threshold, determining that the color channel of the face image is aligned with the depth channel of the face image. By calculating the distance between the first alignment point and the second alignment point, it can be judged whether the color channel of the face image is aligned with the depth channel, so that the color channel and the depth channel of a face image used for face recognition are guaranteed to be aligned and face recognition accuracy is ensured.

Description

Face image alignment detection method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of artificial intelligence, and in particular, to a face image alignment detection method, apparatus, electronic device, and storage medium.
Background
With the continuous development of artificial intelligence, applications based on image recognition have become increasingly popular, driving the development and maturation of image acquisition equipment, including equipment for applications based on 3D face recognition. 3D face recognition is a technology that performs face recognition using 3D face information, which includes not only the color texture information of a face but also its depth information; recognition is performed using both. At present, cameras used for 3D face recognition on the market are mainly based on the structured-light principle. With structured-light acquisition equipment, when the photographed subject is moving quickly, the depth map can become misaligned with the RGB map. If face recognition is performed on face images whose depth map is not aligned with the RGB map, the recognition performance of the 3D face recognition model is affected, resulting in low face recognition accuracy or even misrecognition.
Disclosure of Invention
The embodiment of the invention provides a face image alignment detection method. Face key points are extracted from the color channel of a face image to serve as search key points and a first alignment point; a second alignment point is determined in the depth channel of the face image according to the search key points; and the distance between the first alignment point and the second alignment point is calculated to judge whether the color channel of the face image is aligned with the depth channel. In this way, the color channel and depth channel of a face image used for face recognition are aligned, and face recognition accuracy is ensured.
In a first aspect, an embodiment of the present invention provides a face image alignment detection method, where the method includes:
acquiring a face image, wherein the face image comprises a color channel and a depth channel;
extracting a search key point and a first alignment point from a color channel of the face image through a pre-trained face key point model;
determining a search area in a depth channel of the face image according to the search key points;
determining a second alignment point according to the depth value of each pixel point in the search area;
and calculating the distance between the first alignment point and the second alignment point, and if the distance is smaller than a preset distance threshold value, determining that the color channel of the face image is aligned with the depth channel of the face image.
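The five steps above can be sketched end to end as follows. This is a minimal illustration, not the claimed implementation: `keypoint_model` is a hypothetical stand-in for the pre-trained face key point model, and the second alignment point is taken as the pixel with the maximal depth value in the search area (with the "third depth value" convention described later in the text, that is the point nearest the camera).

```python
import math

def check_alignment(color_channel, depth_channel, keypoint_model, dist_threshold):
    """Sketch of the claimed method. `keypoint_model` must return
    (search_points, first_alignment_point) as pixel coordinates."""
    # step 2: search key points and first alignment point from the color channel
    search_points, first_align = keypoint_model(color_channel)
    # step 3: search area in the depth channel, bounded by the search points
    xs = [p[0] for p in search_points]
    ys = [p[1] for p in search_points]
    area = [(x, y) for x in range(min(xs), max(xs) + 1)
                   for y in range(min(ys), max(ys) + 1)]
    # step 4: second alignment point from the depth values in the search area
    second_align = max(area, key=lambda p: depth_channel[p[1]][p[0]])
    # step 5: aligned iff the distance between the two points is below threshold
    return math.dist(first_align, second_align) < dist_threshold
```

The color and depth channels are assumed to share one pixel coordinate system, as the detailed description below states.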
Optionally, the acquiring the face image includes:
acquiring an image to be detected, wherein the image to be detected comprises a face to be detected;
performing face angle detection on the image to be detected through a pre-trained face angle detection model to obtain an angle value of the face to be detected;
judging whether the angle value is in a preset angle value range or not;
and if the angle value is within the preset angle value range, extracting a face image according to the image to be detected.
Optionally, the extracting a face image according to the image to be detected includes:
extracting a face region from the color channel of the image to be detected as a color channel of the face image;
and extracting a face region from the depth channel of the image to be detected as a depth channel of the face image.
Optionally, each pixel point in the depth channel of the image to be detected has a first depth value, and the extracting of a face region from the depth channel of the image to be detected as the depth channel of the face image includes:
extracting a face region from a depth channel of the image to be detected to obtain a first depth face region, wherein each pixel point in the first depth face region has a first depth value;
filtering an abnormal first depth value in the first depth face region to obtain a second depth value, and converting the first depth face region into a second depth face region based on the first depth value and the second depth value, wherein each pixel point in the second depth face region has the first depth value or the second depth value;
selecting a maximum depth value from the second depth face region, calculating the depth difference between the first or second depth value of each pixel point in the second depth face region and the maximum depth value to obtain a third depth value, and converting the second depth face region into a third depth face region based on the third depth value, wherein each pixel point in the third depth face region has a third depth value;
and determining the third depth face region as the depth channel of the face image, wherein the depth values in the depth channel of the face image are the third depth values.
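One plausible reading of the depth preprocessing described above, sketched in plain Python. The valid depth range and the rule for replacing abnormal first depth values (here, the mean of the valid depths) are assumptions; the text only says abnormal values are filtered and replaced by a second depth value.

```python
def normalize_depth_region(region, valid_range=(200, 2000)):
    """Convert a face region of raw (first) depth values into third depth
    values. Abnormal depths (outside valid_range, e.g. sensor dropouts
    reported as 0) are replaced with a second depth value; every depth is
    then re-expressed as its difference from the region's maximum depth,
    so the point nearest the camera gets the LARGEST third depth value."""
    lo, hi = valid_range
    # step 1: replace abnormal first depth values with a second depth value
    # (here the mean of the valid depths -- an assumption for illustration)
    valid = [d for row in region for d in row if lo <= d <= hi]
    fill = sum(valid) / len(valid)
    filtered = [[d if lo <= d <= hi else fill for d in row] for row in region]
    # step 2: third depth value = maximum depth - depth
    dmax = max(max(row) for row in filtered)
    return [[dmax - d for d in row] for row in filtered]
```

Flipping the sign this way is convenient: the later step of finding the pixel closest to the camera becomes a simple maximum over third depth values.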
Optionally, the search key point and the first alignment point are determined according to prior information of the face, where the prior information is that an ordinate of the first search point is smaller than an ordinate of the first alignment point, an ordinate of the second search point is larger than an ordinate of the first alignment point, an abscissa of the third search point is smaller than an abscissa of the first alignment point, and an abscissa of the fourth search point is larger than an abscissa of the first alignment point, and determining a search area in a depth channel of the face image according to the search key point includes:
determining the first search point, the second search point, the third search point and the fourth search point;
and determining a corresponding search area in a depth channel of the face image according to the first search point, the second search point, the third search point and the fourth search point.
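Given the prior information above, the four search points bound a rectangle in the depth channel; a minimal sketch (image coordinates, y increasing downward):

```python
def search_area(first_pt, second_pt, third_pt, fourth_pt):
    """Search rectangle implied by the prior information: the first and
    second search points bound the rows (above and below the first
    alignment point) and the third and fourth bound the columns (left
    and right of it). Returns (left, top, right, bottom)."""
    top, bottom = first_pt[1], second_pt[1]
    left, right = third_pt[0], fourth_pt[0]
    return left, top, right, bottom
```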
Optionally, the searching area includes a third depth value, and the determining the second alignment point according to the depth value of each pixel point in the searching area includes:
obtaining a maximum third depth value in the search area;
and determining the pixel point corresponding to the maximum third depth value as a second alignment point.
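The two sub-steps above amount to an argmax over the search area. A minimal sketch, where the region is a 2D list of third depth values:

```python
def second_alignment_point(depth_region):
    """Return (x, y) of the pixel with the maximum third depth value in the
    search area; since third depth = max depth - depth, this is the point
    nearest the camera (the nose tip under the stated prior)."""
    coords = [(x, y) for y, row in enumerate(depth_region)
                     for x in range(len(row))]
    return max(coords, key=lambda p: depth_region[p[1]][p[0]])
```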
Optionally, the calculating the distance between the first alignment point and the second alignment point includes:
acquiring a first coordinate value of the first alignment point relative to the face image and acquiring a second coordinate value of the second alignment point relative to the face image;
and calculating the Euclidean distance between the first alignment point and the second alignment point according to the first coordinate value and the second coordinate value.
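A minimal sketch of this distance calculation, with coordinates as (x, y) tuples relative to the face image:

```python
def alignment_distance(first_pt, second_pt):
    """Euclidean distance between the first and second alignment points;
    both coordinate pairs are relative to the face image, which shares
    one pixel coordinate system across its color and depth channels."""
    dx = first_pt[0] - second_pt[0]
    dy = first_pt[1] - second_pt[1]
    return (dx * dx + dy * dy) ** 0.5
```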
Optionally, the first alignment point is a nose tip key point, and the method further includes:
extracting two nostril key points from a color channel of the face image through a pre-trained face key point model;
calculating Euclidean distance between the two nostril key points;
and determining the preset distance threshold value through the Euclidean distance between the two nostril key points.
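Deriving the threshold from the inter-nostril distance makes the tolerance scale with the size of the face in the image. A sketch of this option; the scale factor 0.5 is an assumption, since the text only says the threshold is determined from the nostril distance:

```python
def distance_threshold(nostril_a, nostril_b, scale=0.5):
    """Preset distance threshold as a fraction of the Euclidean distance
    between the two nostril key points extracted from the color channel."""
    dx = nostril_a[0] - nostril_b[0]
    dy = nostril_a[1] - nostril_b[1]
    return scale * (dx * dx + dy * dy) ** 0.5
```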
In a second aspect, an embodiment of the present invention provides a face image alignment detection apparatus, including:
the acquisition module is used for acquiring a face image, wherein the face image comprises a color channel and a depth channel;
the extraction module is used for extracting search key points and first alignment points from the color channels of the face images through the pre-trained face key point model;
the first determining module is used for determining a search area in a depth channel of the face image according to the search key points;
the second determining module is used for determining a second alignment point according to the depth value of each pixel point in the searching area;
and the third determining module is used for calculating the distance between the first alignment point and the second alignment point, and determining that the color channel of the face image is aligned with the depth channel of the face image if the distance is smaller than a preset distance threshold.
In a third aspect, an embodiment of the present invention provides an electronic device, including: the face image alignment detection method comprises the steps of a memory, a processor and a computer program which is stored in the memory and can run on the processor, wherein the steps in the face image alignment detection method are realized when the processor executes the computer program.
In a fourth aspect, an embodiment of the present invention provides a computer readable storage medium, where a computer program is stored, where the computer program when executed by a processor implements the steps in the face image alignment detection method provided in the embodiment of the present invention.
In the embodiment of the invention, a face image is acquired, wherein the face image comprises a color channel and a depth channel; search key points and a first alignment point are extracted from the color channel of the face image through a pre-trained face key point model; a search area is determined in the depth channel of the face image according to the search key points; a second alignment point is determined according to the depth value of each pixel point in the search area; and the distance between the first alignment point and the second alignment point is calculated, and if the distance is smaller than a preset distance threshold, the color channel of the face image is determined to be aligned with the depth channel of the face image. Because the search key points and the first alignment point are taken from the color channel, and the second alignment point is determined in the depth channel according to the search key points, the distance between the first alignment point and the second alignment point indicates whether the color channel of the face image is aligned with the depth channel, so that the color channel and depth channel of a face image used for face recognition are aligned and face recognition accuracy is ensured.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a face image alignment detection method provided by an embodiment of the invention;
fig. 2 is a flowchart of a face image acquisition method according to an embodiment of the present invention;
fig. 3 is a flowchart of a method for obtaining a depth channel of a face image according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a face image alignment detection device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an acquisition module according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an extraction sub-module according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a second extraction unit according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a first determining module according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a second determining module according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a third determining module according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of another face image alignment detection device according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, fig. 1 is a flowchart of a face image alignment detection method according to an embodiment of the present invention, as shown in fig. 1, the face image alignment detection method includes the following steps:
101. and acquiring a face image.
In the embodiment of the invention, the face image comprises a color channel and a depth channel, and the color channel of the face image and the depth channel of the face image have the same size, resolution and pixel coordinate system.
The face image may be acquired by a 3D image acquisition device or uploaded by a user. The 3D image acquisition device may be a 3D camera, for example a structured-light 3D camera (such as an Orbbec structured-light camera) or a time-of-flight 3D camera. A structured-light 3D camera generally uses invisible laser light of a specific wavelength as the light source; the emitted light carries coded information that is projected onto the object, and the position and depth information of the object are obtained by computing the distortion of the returned coding pattern with a suitable algorithm. A time-of-flight 3D camera obtains distance from the measured flight time of light: it emits modulated light, which is reflected when it strikes an object, and captures the round-trip time; since the speed of light and the wavelength of the modulated light are known, the distance to the object can be calculated quickly and accurately. Because both camera types rely on a light round-trip time, when the photographed subject is moving quickly the depth map can become misaligned with the RGB map.
The color channels included in the face image may be RGB color channels, which are formed by three channels of R (red), G (green), and B (blue), and may be other color-related channels, such as gray-scale channels, CMYK channels, HSV channels, and the like. In the color channel, each pixel corresponds to a set of color values, for example, in the RGB color channel, each pixel corresponds to a set of RGB values, where R, G, B is an integer ranging from 0 to 255.
The depth channel comprises a depth value, each pixel in the face image corresponds to one depth value, and the depth value is used for representing the distance between the corresponding position and the camera.
102. And extracting the search key points and the first alignment points from the color channels of the face image through the pre-trained face key point model.
In the embodiment of the invention, the pre-trained face key point model is used for extracting face key points, wherein the face key points can be face contour key points, left eye key points, right eye key points, lip key points, eyebrow key points, chin key points and nose key points, and the nose key points can comprise nose tip key points, nostril key points, nose outline key points and the like.
The pre-trained face key point model is based on a face key point detection model obtained by an open source community, and can also be a face key point detection model which is self-trained.
In a possible embodiment, when the face key point detection model is trained in-house, face images whose face angle values are within a preset angle value range can be selected as labeled samples; for example, face images whose angle values in the three directions pitch, roll and yaw are all between -20 and 20 degrees are selected for face key point labeling. The labeled face images are input into a preset convolutional neural network for training. The preset convolutional neural network comprises convolutional layers, pooling layers and an output layer; the output layer comprises one classifier per face key point and a linear regression layer, and the linear regression layer maps the classifier results back onto the face image as face key points.
In one possible embodiment, the number of face keypoints may be 68, where 68 face keypoints include preferred face keypoints such as a nose tip keypoint, a lip bottom edge keypoint, a left eye corner keypoint, a right eye corner keypoint, and candidate face keypoints such as a forehead keypoint, a chin keypoint, a left ear keypoint, and a right ear keypoint. And after training is completed, a pre-trained face key point model is obtained. It should be noted that, the face image for training uses a color channel, for example, the face image for training is an RGB face image or the like.
The search key points are face key points at specific positions; there are multiple search key points, and they are not all on the same straight line.
The first alignment point is also a key point of a specific position in the key points of the face, and the color channel of the face image is used for alignment comparison with the depth channel of the face image. Preferably, the first alignment point has an extremum attribute in its prior information, such as a key point of a face that is empirically closest to or farthest from the camera under a certain face angle value. For example, the face key point closest to the camera is the nose tip key point, and the face key point farthest from the camera is a certain face contour key point in the face contour key points.
103. And determining a search area in the depth channel of the face image according to the search key points.
In the embodiment of the invention, each search key point corresponds to a pixel point in the color channel of the face image, and each pixel point in the color channel corresponds to a pixel point at the same coordinates in the depth channel, so the search area in the depth channel of the face image can be determined from the search key points. The number of search key points may be three or four, and the area enclosed by them can be determined from their positional relationship. This area is smaller than the face contour in the face image, which reduces the amount of search computation and keeps the search range within the face, avoiding background interference.
The value corresponding to each pixel point in the search area is a depth value, and the distance between each pixel point and the camera is represented by the depth value.
104. And determining a second alignment point according to the depth value of each pixel point in the search area.
In the embodiment of the invention, the search area is an area in the depth channel of the face image, and each pixel point in the search area has a corresponding depth value. A prior relationship exists between the second alignment point and the first alignment point: the first alignment point may be the face key point closest to or farthest from the camera among all face key points, and the second alignment point may correspondingly be the pixel point closest to or farthest from the camera. The pixel closest to the camera is the pixel with the smallest depth value in the face area, and the pixel farthest from the camera is the pixel with the largest depth value. Thus, when the first alignment point is the face key point closest to the camera, the second alignment point is the pixel with the smallest depth value in the face area; when the first alignment point is the face key point farthest from the camera, the second alignment point is the pixel with the largest depth value.
In the embodiment of the invention, the first alignment point is the face key point closest to the camera, and the second alignment point is the pixel point with the smallest depth value in the face area. The first alignment point can therefore be chosen as the nose tip key point; the corresponding prior relationship is that within a certain angle range the nose tip is the most protruding part of the face and, when the face is toward the camera, the point of the face closest to it. Empirically, when the pitch, roll and yaw angle values of the face are all between -20 and 20 degrees, the nose tip is the point of the face closest to the camera; therefore, within this range the nose tip key point is selected as the first alignment point, and the pixel point with the minimum depth value in the face area is selected as the second alignment point.
105. And calculating the distance between the first alignment point and the second alignment point, and if the distance is smaller than a preset distance threshold value, determining that the color channel of the face image is aligned with the depth channel of the face image.
In the embodiment of the invention, since the position of the first alignment point is based on the color channel of the face image and the position of the second alignment point is based on the depth channel of the face image, whether the first alignment point is aligned with the second alignment point is judged, so that whether the color channel of the face image is aligned with the depth channel of the face image can be judged.
In the embodiment of the invention, the color channel of the face image and the depth channel of the face image have the same size, resolution and pixel coordinate system, so the distance between the first alignment point and the second alignment point can be calculated. If the distance is small enough, the two points are close to each other and approximately aligned; if the distance is large, the two points are far apart and not aligned. Whether the distance counts as large or small is determined through a preset distance threshold. A first coordinate value of the first alignment point relative to the face image can be acquired, and a second coordinate value of the second alignment point relative to the face image can be acquired; the Euclidean distance between the first alignment point and the second alignment point is then calculated according to the first coordinate value and the second coordinate value.
For example, if the color channel of the face image is RGB, the first alignment point is the nose tip key point (X_rgb, Y_rgb) and the second alignment point is the pixel point (X_deep, Y_deep), then the distance D between (X_rgb, Y_rgb) and (X_deep, Y_deep) is computed. When D is greater than the preset distance threshold D_th, the distance between the first and second alignment points is large: the two points are not aligned, the color channel of the face image is not aligned with its depth channel, and the face image is not suitable for face recognition. When D is smaller than D_th, the distance is small: the two points are aligned, the color channel of the face image is aligned with its depth channel, and the face image can be used for face recognition.
In the embodiment of the invention, a face image is acquired, wherein the face image comprises a color channel and a depth channel; extracting a search key point and a first alignment point from a color channel of the face image through a pre-trained face key point model; determining a search area in a depth channel of the face image according to the search key points; determining a second alignment point according to the depth value of each pixel point in the search area; and calculating the distance between the first alignment point and the second alignment point, and if the distance is smaller than a preset distance threshold value, determining that the color channel of the face image is aligned with the depth channel of the face image. The color channel of the face image can be used as a search key point and a first alignment point, a second alignment point in the depth channel of the face image is determined according to the search key point, and the distance between the first alignment point and the second alignment point can be calculated, so that whether the color channel of the face image is aligned with the depth channel of the face image or not is judged, the color channel and the depth channel of the face image for face recognition are aligned, and the face recognition accuracy is guaranteed.
It should be noted that the face image alignment detection method provided by the embodiment of the invention can be applied to devices such as a smart phone, a computer, a camera, an access control device, a server and the like which can perform face recognition.
Optionally, referring to fig. 2, fig. 2 is a flowchart of a face image acquisition method according to an embodiment of the present invention, where, as shown in fig. 2, the face image acquisition method specifically includes the following steps:
201. and acquiring an image to be detected.
In an embodiment of the present invention, the image to be detected includes a face to be detected; the image may include one or more faces, and it includes a color channel and a depth channel. Further, the image to be detected may be understood as a large image acquired by a 3D image acquisition device or uploaded by a user, and the face image may be understood as a smaller face image extracted from that large image. Of course, the user may also directly upload a small face image.
202. And carrying out face angle detection on the image to be detected through a pre-trained face angle detection model to obtain an angle value of the face to be detected.
In the embodiment of the invention, the face angle detection model can be a face angle detection model obtained based on an open source community, such as an FSA-Net model, an opencv+dlib face pose estimation model and the like.
The pre-trained face angle detection model detects the face and outputs the angle values of the face to be detected, comprising pitch (rotation about the X axis), yaw (rotation about the Y axis) and roll (rotation about the Z axis).
203. And judging whether the angle value is within a preset angle value range.
In the embodiment of the invention, the preset angle value range may be -20 degrees to 20 degrees; if the three angle values pitch, roll and yaw of the face to be detected are all between -20 degrees and 20 degrees, it may be determined that the angle value of the face to be detected is within the preset angle value range.
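The angle check described above can be sketched as follows. This is a minimal illustration, not code from the patent; the function name and the default ±20-degree range are chosen for this example.

```python
# Hedged sketch of the preset-angle-range check: all three head-pose
# angles (in degrees) must lie within [low, high] for the face to pass.
def angles_in_range(pitch, yaw, roll, low=-20.0, high=20.0):
    # Return True only if every angle falls inside the preset range.
    return all(low <= a <= high for a in (pitch, yaw, roll))
```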
204. And if the angle value is within the preset angle value range, extracting a face image according to the image to be detected.
In the embodiment of the invention, the angle value of the face to be detected being within the preset angle value range ensures that the first alignment point in the face image has the extreme-value property. Extracting the face image from the image to be detected includes extracting both its color channel and its depth channel: a face region is extracted from the color channel of the image to be detected as the color channel of the face image, and a face region is extracted from the depth channel of the image to be detected as the depth channel of the face image. The face region may be determined by a face detection frame. Specifically, the image inside the face detection frame (x, y, w, h) may be extracted from the image to be detected as the face image, where (x, y) is the center point of the face detection frame, w is its width, and h is its height.
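Cropping a face region from one channel using the (x, y, w, h) detection-frame convention above might look like the following sketch. The function name is illustrative, and the clamping to image bounds is an added safeguard not stated in the patent text.

```python
import numpy as np

def crop_face(channel, x, y, w, h):
    """Crop a face region from one channel (an H x W array) using the
    detection frame (x, y, w, h), where (x, y) is the frame center and
    w, h are its width and height."""
    top = max(int(y - h / 2), 0)
    left = max(int(x - w / 2), 0)
    bottom = min(int(y + h / 2), channel.shape[0])
    right = min(int(x + w / 2), channel.shape[1])
    return channel[top:bottom, left:right]
```

The same crop would be applied to the color channel and the depth channel so that both face channels come from the same region of the image to be detected.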
Before the face image is extracted, face angle detection is carried out on the image to be detected, so that the face image with overlarge face angle can be prevented from being extracted, and the prior relation between the first alignment point and the second alignment point is prevented from being invalid.
Optionally, referring to fig. 3, fig. 3 is a flowchart of a method for obtaining a depth channel of a face image according to an embodiment of the present invention, where each pixel point in the depth channel of the image to be detected has a first depth value, as shown in fig. 3, the method for obtaining the depth channel of the face image specifically includes the following steps:
301. and extracting a face region from the depth channel of the image to be detected to obtain a first depth face region.
In the embodiment of the invention, each pixel point in the first depth face region has a first depth value. The first depth value is the initial depth value acquired by the 3D image acquisition device, or the initial depth value of the image uploaded by the user.
302. And filtering the abnormal first depth value in the first depth face region to obtain a second depth value, and converting the first depth face region into a second depth face region based on the first depth value and the second depth value.
In the embodiment of the present invention, each pixel point in the second depth face region has a first depth value or a second depth value. Because the structured-light camera has an effective acquisition distance and the face is also within a certain range, abnormal first depth values are preprocessed. An abnormal first depth value is one greater than a maximum preset threshold max_th or less than a minimum preset threshold min_th. The second depth value may be a preset depth value; for example, an abnormal first depth value may be filtered to 0 as the second depth value.
In one possible embodiment, the first depth value of the anomaly may be filtered to a maximum preset threshold max_th as the second depth value. Or the first depth value of the anomaly may be filtered to a random value between a maximum preset threshold max_th and a minimum preset threshold min_th as the second depth value.
In another possible embodiment, the first depth value of the anomaly may be repaired, and a smoothing filter algorithm may be employed to repair the first depth value of the anomaly. The smoothing filter algorithm may cause the abnormal first depth value to be restored to a value that smoothly transitions with surrounding normal first depth values.
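The first variant above, filtering abnormal first depth values to a preset second depth value of 0, can be sketched as follows. The threshold values are illustrative placeholders; real values depend on the structured-light camera's acquisition range.

```python
import numpy as np

def filter_abnormal_depth(depth, min_th=200, max_th=1500, fill=0):
    # First depth values outside [min_th, max_th] are treated as abnormal
    # and replaced by a preset second depth value (0 here). The thresholds
    # are assumptions for this sketch, not values from the patent.
    out = depth.copy()
    out[(out < min_th) | (out > max_th)] = fill
    return out
```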
303. Selecting a maximum depth value from the second depth face region, calculating a first depth value or a depth difference value between the second depth value and the maximum depth value of each pixel point in the second depth face region, obtaining a third depth value, and converting the second depth face region into the third depth face region based on the third depth value.
In the embodiment of the present invention, each pixel point in the third depth face area has the third depth value. The first depth value is an original depth value, and the second depth value is a depth value obtained by filtering the abnormal first depth value, for example, the second depth value is 0.
A maximum depth value M is selected from the second depth face region. Because the second depth value is a filtered depth value, the maximum depth value M is searched among the first depth values in the second depth face region. After the maximum depth value M is obtained, the depth value of each pixel point in the second depth face region is subtracted from the maximum depth value M to obtain the depth difference value of each pixel point, and the depth difference value is taken as the third depth value, thereby obtaining the third depth face region.
In one possible embodiment, after obtaining the maximum depth value M, subtracting the maximum depth value M from each pixel point which is not the second depth value in the second depth face region to obtain a depth difference value of each pixel point, and using the depth difference value as a third depth value to further obtain a third depth face region.
In the third depth face region, the pixel with the largest third depth value is the pixel closest to the camera.
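The conversion to third depth values can be sketched as below, following the variant in which only pixels that do not hold the second depth value are converted. The function name is illustrative, and filtered pixels are assumed to be marked by 0.

```python
import numpy as np

def to_third_depth(second_depth):
    # Subtract each non-filtered depth value from the region maximum M,
    # so the largest third depth value marks the pixel closest to the
    # camera. Pixels filtered to 0 (the second depth value) stay at 0.
    m = second_depth[second_depth > 0].max()
    third = m - second_depth
    third[second_depth == 0] = 0
    return third
```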
304. And determining the third depth face area as a depth channel of the face image.
In the embodiment of the invention, after the third depth face area is determined as the depth channel of the face image, the depth value in the depth channel of the face image is the third depth value. By carrying out depth value processing on the face region extracted from the depth channel of the image to be detected, the interference of abnormal depth values can be avoided, the second alignment point can be found more accurately, and the alignment detection accuracy of the first alignment point and the second alignment point is improved.
Optionally, the second alignment point is a pixel point closest to the camera, and specifically, a maximum third depth value in the search area may be obtained; and determining the pixel point corresponding to the maximum third depth value as a second alignment point.
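Locating the second alignment point as the pixel with the maximum third depth value in the search area is a simple argmax over the region, as in this sketch (names are illustrative):

```python
import numpy as np

def second_alignment_point(search_area):
    # Return the (row, col) of the pixel with the largest third depth
    # value in the search area, i.e. the pixel closest to the camera.
    flat_index = np.argmax(search_area)
    return np.unravel_index(flat_index, search_area.shape)
```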
Optionally, the above-mentioned search area may be determined according to a search key point, where the search key point and the first alignment point are determined according to prior information of the face, where the prior information is that an ordinate of the first search point is smaller than an ordinate of the first alignment point, an ordinate of the second search point is larger than an ordinate of the first alignment point, an abscissa of the third search point is smaller than an abscissa of the first alignment point, and an abscissa of the fourth search point is larger than an abscissa of the first alignment point. The first search point, the second search point, the third search point and the fourth search point can be determined from the face key points; and determining a corresponding search area in the depth channel of the face image according to the first search point, the second search point, the third search point and the fourth search point.
Specifically, the first search point, the second search point, the third search point and the fourth search point may determine the search area as a rectangular area. Taking the nose tip key point as the first alignment point, according to the prior information the ordinate of the first search point is smaller than the ordinate of the first alignment point, so the first search point may be a face key point whose ordinate is smaller than that of the first alignment point, such as a chin key point, a lowest-lip key point or the contour key point with the minimum ordinate. The ordinate of the second search point is greater than the ordinate of the first alignment point, so the second search point may be a face key point whose ordinate is greater than that of the first alignment point, such as a forehead key point, a left eye corner key point or a right eye corner key point. The abscissa of the third search point is smaller than the abscissa of the first alignment point, so the third search point may be a face key point whose abscissa is smaller than that of the first alignment point, such as a left eye corner key point or a left ear key point. The abscissa of the fourth search point is greater than the abscissa of the first alignment point, so the fourth search point may be a face key point whose abscissa is greater than that of the first alignment point, such as a right eye corner key point or a right ear key point.
Further, in the embodiment of the present invention, the first alignment point is the nose tip key point P, the first search point is the lowest-lip key point A1, the second search point is whichever of the left eye corner key point A2 and the right eye corner key point A3 has the larger ordinate, the third search point is the left eye corner key point A2, and the fourth search point is the right eye corner key point A3. The lower boundary of the rectangular area is determined by the ordinate of the lowest-lip key point A1, the upper boundary by the larger ordinate of the left eye corner key point A2 or the right eye corner key point A3, the left boundary by the abscissa of the left eye corner key point A2, and the right boundary by the abscissa of the right eye corner key point A3, thereby determining the corresponding rectangular area, which corresponds to the search area in the depth channel of the face image.
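The rectangular-area construction above can be sketched as follows, using the patent's convention that the ordinate increases upward. The point names mirror A1, A2 and A3; the function name is an assumption for this example.

```python
def search_rect(a1, a2, a3):
    """Boundaries of the rectangular search area from three keypoints:
    a1 is the lowest-lip keypoint, a2/a3 the left/right eye corner
    keypoints, each an (x, y) pair with the ordinate increasing upward."""
    left = a2[0]               # abscissa of the left eye corner A2
    right = a3[0]              # abscissa of the right eye corner A3
    lower = a1[1]              # ordinate of the lowest-lip keypoint A1
    upper = max(a2[1], a3[1])  # larger of the two eye-corner ordinates
    return left, right, lower, upper
```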
In one possible embodiment, the first search point, the second search point, the third search point and the fourth search point may determine the search area as a triangle area, specifically, the first alignment point is a nose tip key point P, a first corner point of a triangle may be determined by coordinates of a lowest lip edge key point A1, a second corner point of a triangle area may be determined by coordinates of a left eye corner key point A2, a third corner point of the triangle area may be determined by coordinates of a right eye corner key point A3, and the first corner point, the second corner point and the third corner point may be sequentially connected to obtain a triangle area, where the triangle area corresponds to the search area in a depth channel of the face image. Therefore, the area of the search area can be further reduced, the number of pixel points in the search area is further reduced, and the determination speed of the second alignment points is improved.
Optionally, the first alignment point is a nose tip key point; two nostril key points are extracted from the color channel of the face image through the pre-trained face key point model; the Euclidean distance between the two nostril key points is calculated; and the preset distance threshold is determined from the Euclidean distance between the two nostril key points.
In the embodiment of the invention, the first alignment point is the nose tip key point P, and the second alignment point is the representative point Q with the largest third depth value in the search area. The distance D between P and Q is calculated, the Euclidean distance D1 between the two nostril key points is calculated, and D1 is used directly as the preset distance threshold D_th, or D1 is weighted to obtain the preset distance threshold D_th. The distance D between P and Q is compared with the preset distance threshold D_th: if the distance D is greater than the preset distance threshold D_th, the first alignment point is not aligned with the second alignment point; if the distance D is smaller than the preset distance threshold D_th, the first alignment point is aligned with the second alignment point. If the first alignment point is aligned with the second alignment point, the color channel and the depth channel of the face image are aligned.
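The comparison of D against the nostril-distance-based threshold D_th can be sketched as follows; the function name and the default weight of 1.0 (i.e. using D1 directly as D_th) are assumptions for this example.

```python
import math

def channels_aligned(p, q, nostril_l, nostril_r, weight=1.0):
    # p: first alignment point (nose tip P from the color channel);
    # q: second alignment point (largest third-depth pixel Q from the
    # depth channel). The threshold D_th is the (optionally weighted)
    # Euclidean distance D1 between the two nostril keypoints.
    d = math.dist(p, q)
    d_th = weight * math.dist(nostril_l, nostril_r)
    return d < d_th
```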
Optionally, referring to fig. 4, fig. 4 is a schematic structural diagram of a face image alignment detection apparatus according to an embodiment of the present invention, as shown in fig. 4, where the apparatus includes:
An acquisition module 401, configured to acquire a face image, where the face image includes a color channel and a depth channel;
a first extraction module 402, configured to extract, through a pre-trained face key point model, a search key point and a first alignment point from a color channel of the face image;
a first determining module 403, configured to determine a search area in a depth channel of the face image according to the search key point;
a second determining module 404, configured to determine a second alignment point according to the depth value of each pixel point in the search area;
a third determining module 405, configured to calculate a distance between the first alignment point and the second alignment point, and determine that a color channel of the face image is aligned with a depth channel of the face image if the distance is less than a preset distance threshold.
Optionally, as shown in fig. 5, the obtaining module 401 includes:
a first obtaining submodule 4011, configured to obtain an image to be detected, where the image to be detected includes a face to be detected;
the angle detection submodule 4012 is used for carrying out face angle detection on the image to be detected through a pre-trained face angle detection model to obtain an angle value of the face to be detected;
A judging submodule 4013, configured to judge whether the angle value is within a preset angle value range;
the extraction submodule 4014 is configured to extract a face image according to the image to be detected if the angle value is within the preset angle value range.
Optionally, as shown in fig. 6, the extracting submodule 4014 includes:
a first extracting unit 40141, configured to extract a face area from the color channel of the image to be detected as a color channel of the face image;
the second extracting unit 40142 is configured to extract a face area from the depth channel of the image to be detected as a depth channel of the face image.
Alternatively, as shown in fig. 7, the second extracting unit 40142 includes:
an extraction subunit 401421, configured to extract a face region from a depth channel of the image to be detected, so as to obtain a first depth face region, where each pixel point in the first depth face region has a first depth value;
a filtering subunit 401422, configured to filter a first depth value of an anomaly in the first depth face region to obtain a second depth value, and convert the first depth face region into a second depth face region based on the first depth value and the second depth value, where each pixel point in the second depth face region has the first depth value or the second depth value;
A processing subunit 401423, configured to select a maximum depth value from the second depth face region, calculate a first depth value or a depth difference value between the second depth value and the maximum depth value of each pixel point in the second depth face region, obtain a third depth value, and convert the second depth face region into a third depth face region based on the third depth value, where each pixel point in the third depth face region has the third depth value;
a determining subunit 401424, configured to determine the third depth face area as a depth channel of a face image, where a depth value in the depth channel of the face image is the third depth value.
Optionally, the search key point and the first alignment point are determined according to prior information of the face, where the prior information is that an ordinate of the first search point is smaller than an ordinate of the first alignment point, an ordinate of the second search point is larger than an ordinate of the first alignment point, an abscissa of the third search point is smaller than an abscissa of the first alignment point, and an abscissa of the fourth search point is larger than an abscissa of the first alignment point, as shown in fig. 8, and the first determining module 403 includes:
A first determining submodule 4031 configured to determine the first search point, the second search point, the third search point, and the fourth search point;
and a second determining submodule 4032, configured to determine a corresponding search area in the depth channel of the face image according to the first search point, the second search point, the third search point and the fourth search point.
Optionally, as shown in fig. 9, the search area includes a third depth value, and the second determining module 404 includes:
a second obtaining submodule 4041, configured to obtain a maximum third depth value in the search area;
and a third determining submodule 4042, configured to determine a pixel point corresponding to the maximum third depth value as a second alignment point.
Optionally, as shown in fig. 10, the third determining module 405 includes:
a third obtaining submodule 4051, configured to obtain a first coordinate value of the first alignment point relative to the face image and obtain a second coordinate value of the second alignment point relative to the face image;
a calculating submodule 4052, configured to calculate a euclidean distance between the first alignment point and the second alignment point according to the first coordinate value and the second coordinate value.
Optionally, the first alignment point is a nose tip key point, as shown in fig. 11, and the apparatus further includes:
a second extraction module 406, configured to extract two nostril keypoints from the color channel of the face image through a pre-trained face keypoint model;
a calculating module 407, configured to calculate a euclidean distance between the two nostril keypoints;
a fourth determining module 408, configured to determine the preset distance threshold by using the euclidean distance between the two nostril keypoints.
It should be noted that the face image alignment detection device provided by the embodiment of the invention can be applied to devices such as a smart phone, a computer, a camera, an access control device, a server and the like which can perform face recognition.
The face image alignment detection device provided by the embodiment of the invention can realize all the processes realized by the face image alignment detection method in the method embodiment, and can achieve the same beneficial effects. In order to avoid repetition, a description thereof is omitted.
Referring to fig. 12, fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, as shown in fig. 12, including: a memory 1202, a processor 1201 and a computer program for the face image alignment detection method, stored on the memory 1202 and executable on the processor 1201, wherein:
The processor 1201 is configured to call a computer program stored in the memory 1202, and perform the following steps:
acquiring a face image, wherein the face image comprises a color channel and a depth channel;
extracting a search key point and a first alignment point from a color channel of the face image through a pre-trained face key point model;
determining a search area in a depth channel of the face image according to the search key points;
determining a second alignment point according to the depth value of each pixel point in the search area;
and calculating the distance between the first alignment point and the second alignment point, and if the distance is smaller than a preset distance threshold value, determining that the color channel of the face image is aligned with the depth channel of the face image.
Optionally, the acquiring the face image performed by the processor 1201 includes:
acquiring an image to be detected, wherein the image to be detected comprises a face to be detected;
performing face angle detection on the image to be detected through a pre-trained face angle detection model to obtain an angle value of the face to be detected;
judging whether the angle value is in a preset angle value range or not;
and if the angle value is within the preset angle value range, extracting a face image according to the image to be detected.
Optionally, the extracting, by the processor 1201, a face image according to the image to be detected includes:
extracting a face region from the color channel of the image to be detected as a color channel of the face image;
and extracting a face region from the depth channel of the image to be detected as a depth channel of the face image.
Optionally, each pixel point in the depth channel of the image to be detected has a first depth value, and the extracting, by the processor 1201, the face region in the depth channel of the image to be detected as the depth channel of the face image includes:
extracting a face region from a depth channel of the image to be detected to obtain a first depth face region, wherein each pixel point in the first depth face region has a first depth value;
filtering an abnormal first depth value in the first depth face region to obtain a second depth value, and converting the first depth face region into a second depth face region based on the first depth value and the second depth value, wherein each pixel point in the second depth face region has the first depth value or the second depth value;
selecting a maximum depth value from the second depth face region, calculating a first depth value or a depth difference value between the second depth value and the maximum depth value of each pixel point in the second depth face region, obtaining a third depth value, and converting the second depth face region into a third depth face region based on the third depth value, wherein each pixel point in the third depth face region has the third depth value;
And determining the third depth face area as a depth channel of the face image, wherein a depth value in the depth channel of the face image is the third depth value.
Optionally, the search key point and the first alignment point are determined according to prior information of the face, where the prior information is that an ordinate of the first search point is smaller than an ordinate of the first alignment point, an ordinate of the second search point is larger than an ordinate of the first alignment point, an abscissa of the third search point is smaller than an abscissa of the first alignment point, and an abscissa of the fourth search point is larger than an abscissa of the first alignment point, and the determining, by the processor 1201, a search area in a depth channel of the face image according to the search key point includes:
determining the first search point, the second search point, the third search point and the fourth search point;
and determining a corresponding search area in a depth channel of the face image according to the first search point, the second search point, the third search point and the fourth search point.
Optionally, the search area includes a third depth value, and determining, by the processor 1201, a second alignment point according to the depth value of each pixel point in the search area includes:
Obtaining a maximum third depth value in the search area;
and determining the pixel point corresponding to the maximum third depth value as a second alignment point.
Optionally, the calculating, performed by the processor 1201, a distance between the first alignment point and the second alignment point includes:
acquiring a first coordinate value of the first alignment point relative to the face image and acquiring a second coordinate value of the second alignment point relative to the face image;
and calculating the Euclidean distance between the first alignment point and the second alignment point according to the first coordinate value and the second coordinate value.
Optionally, the first alignment point is a nose tip key point, and the method executed by the processor 1201 further includes:
extracting two nostril key points from a color channel of the face image through a pre-trained face key point model;
calculating Euclidean distance between the two nostril key points;
and determining the preset distance threshold value through the Euclidean distance between the two nostril key points.
It should be noted that, the electronic device provided by the embodiment of the invention can be applied to devices such as a smart phone, a computer, a camera, an access control device, a server and the like which can perform face recognition.
The electronic equipment provided by the embodiment of the invention can realize each process realized by the face image alignment detection method in the embodiment of the method, and can achieve the same beneficial effects. In order to avoid repetition, a description thereof is omitted.
The embodiment of the invention also provides a computer readable storage medium, and a computer program is stored on the computer readable storage medium, and when the computer program is executed by a processor, the method for detecting the alignment of the face image or the method for detecting the alignment of the face image at the application end provided by the embodiment of the invention is realized, and the same technical effect can be achieved, so that repetition is avoided, and repeated description is omitted.
Those skilled in the art will appreciate that implementing all or part of the above-described methods in accordance with the embodiments may be accomplished by way of a computer program stored on a computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM) or the like.
The foregoing disclosure is illustrative of the present invention and is not to be construed as limiting the scope of the invention, which is defined by the appended claims.

Claims (11)

1. The face image alignment detection method is characterized by comprising the following steps of:
acquiring a face image, wherein the face image comprises a color channel and a depth channel;
extracting a search key point and a first alignment point from a color channel of the face image through a pre-trained face key point model;
determining a search area in a depth channel of the face image according to the search key points;
determining a second alignment point according to the depth value of each pixel point in the search area;
and calculating the distance between the first alignment point and the second alignment point, and if the distance is smaller than a preset distance threshold value, determining that the color channel of the face image is aligned with the depth channel of the face image.
2. The method of claim 1, wherein the acquiring a face image comprises:
acquiring an image to be detected, wherein the image to be detected comprises a face to be detected;
performing face angle detection on the image to be detected through a pre-trained face angle detection model to obtain an angle value of the face to be detected;
Judging whether the angle value is in a preset angle value range or not;
and if the angle value is within the preset angle value range, extracting a face image according to the image to be detected.
3. The method according to claim 2, wherein the extracting a face image from the image to be detected includes:
extracting a face region from the color channel of the image to be detected as a color channel of the face image;
and extracting a face region from the depth channel of the image to be detected as a depth channel of the face image.
4. A face image alignment detection method as claimed in claim 3, wherein each pixel point in the depth channel of the image to be detected has a first depth value, and the extracting the face region in the depth channel of the image to be detected as the depth channel of the face image comprises:
extracting a face region from a depth channel of the image to be detected to obtain a first depth face region, wherein each pixel point in the first depth face region has a first depth value;
filtering an abnormal first depth value in the first depth face region to obtain a second depth value, and converting the first depth face region into a second depth face region based on the first depth value and the second depth value, wherein each pixel point in the second depth face region has the first depth value or the second depth value;
Selecting a maximum depth value from the second depth face region, calculating a first depth value or a depth difference value between the second depth value and the maximum depth value of each pixel point in the second depth face region, obtaining a third depth value, and converting the second depth face region into a third depth face region based on the third depth value, wherein each pixel point in the third depth face region has the third depth value;
and determining the third depth face area as a depth channel of the face image, wherein a depth value in the depth channel of the face image is the third depth value.
5. The method of claim 4, wherein the search key point and the first alignment point are determined based on prior information of the face, the prior information being that an ordinate of a first search point is less than an ordinate of the first alignment point, an ordinate of a second search point is greater than an ordinate of the first alignment point, an abscissa of a third search point is less than an abscissa of the first alignment point, and an abscissa of a fourth search point is greater than an abscissa of the first alignment point, the determining a search area in a depth channel of the face image based on the search key point comprising:
Determining the first search point, the second search point, the third search point and the fourth search point;
and determining a corresponding search area in a depth channel of the face image according to the first search point, the second search point, the third search point and the fourth search point.
6. The method of claim 5, wherein the search area includes a third depth value, and wherein determining the second alignment point based on the depth value of each pixel point in the search area includes:
obtaining a maximum third depth value in the search area;
and determining the pixel point corresponding to the maximum third depth value as a second alignment point.
7. The method of claim 6, wherein the calculating the distance of the first alignment point from the second alignment point comprises:
acquiring a first coordinate value of the first alignment point relative to the face image and acquiring a second coordinate value of the second alignment point relative to the face image;
and calculating the Euclidean distance between the first alignment point and the second alignment point according to the first coordinate value and the second coordinate value.
8. The method of claim 7, wherein the first alignment point is a nose tip key point, the method further comprising:
extracting two nostril key points from the color channel of the face image through the pre-trained face key point model;
calculating the Euclidean distance between the two nostril key points; and
determining the preset distance threshold from the Euclidean distance between the two nostril key points.
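Claim 8 derives the threshold from the inter-nostril distance but does not state the mapping; the sketch below assumes a simple proportional rule, with the `scale` factor being a hypothetical tuning parameter not given in the patent:

```python
import math

def alignment_threshold(nostril_left, nostril_right, scale=1.0):
    """Derive the preset distance threshold from the Euclidean distance
    between the two nostril key points; `scale` is a hypothetical
    proportionality factor, not specified by the claim."""
    d = math.hypot(nostril_left[0] - nostril_right[0],
                   nostril_left[1] - nostril_right[1])
    return scale * d
```

Tying the threshold to the nostril spacing makes the alignment test scale with the face size in the image rather than using a fixed pixel count.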
9. A face image alignment detection apparatus, the apparatus comprising:
an acquisition module, configured to acquire a face image, wherein the face image comprises a color channel and a depth channel;
an extraction module, configured to extract search key points and a first alignment point from the color channel of the face image through a pre-trained face key point model;
a first determining module, configured to determine a search area in the depth channel of the face image according to the search key points;
a second determining module, configured to determine a second alignment point according to the depth value of each pixel point in the search area; and
a third determining module, configured to calculate the distance between the first alignment point and the second alignment point, and to determine that the color channel of the face image is aligned with the depth channel of the face image if the distance is smaller than a preset distance threshold.
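Putting the determining modules together, the core decision of claims 5–9 can be sketched end to end. Everything here is an illustrative reading: the function name, the `(top, bottom, left, right)` rectangle format, and the strict `<` comparison are assumptions, not the patent's implementation.

```python
import numpy as np

def is_aligned(depth, first_point, search_rect, threshold):
    """Find the deepest pixel inside the search rectangle of the depth
    channel and test whether its distance to the color-channel first
    alignment point (e.g. the nose tip) is below the threshold."""
    top, bottom, left, right = search_rect
    area = depth[top:bottom, left:right]
    row, col = np.unravel_index(np.argmax(area), area.shape)
    second_point = (left + col, top + row)     # back to full-image coords
    dist = np.hypot(first_point[0] - second_point[0],
                    first_point[1] - second_point[1])
    return bool(dist < threshold)
```

A depth pixel near the color-channel nose tip holding the extreme depth value is taken as evidence that the two channels are registered; a large offset indicates misalignment.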
10. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the face image alignment detection method according to any one of claims 1 to 8 when executing the computer program.
11. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the face image alignment detection method according to any one of claims 1 to 8.
CN202110809114.0A 2021-07-16 2021-07-16 Face image alignment detection method and device, electronic equipment and storage medium Active CN113743191B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110809114.0A CN113743191B (en) 2021-07-16 2021-07-16 Face image alignment detection method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113743191A CN113743191A (en) 2021-12-03
CN113743191B true CN113743191B (en) 2023-08-01

Family

ID=78728732

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110809114.0A Active CN113743191B (en) 2021-07-16 2021-07-16 Face image alignment detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113743191B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107851321B (en) * 2015-11-30 2020-09-11 华为技术有限公司 Image processing method and dual-camera system
US20180121713A1 (en) * 2016-10-28 2018-05-03 Qualcomm Incorporated Systems and methods for verifying a face
US11568645B2 (en) * 2019-03-21 2023-01-31 Samsung Electronics Co., Ltd. Electronic device and controlling method thereof

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101996416A (en) * 2009-08-24 2011-03-30 三星电子株式会社 3D face capturing method and equipment
KR101156547B1 (en) * 2010-12-30 2012-06-20 주식회사 나무가 Searching method of face and hands using rgb color image and depth image
KR101717379B1 (en) * 2015-12-07 2017-03-20 주식회사 에이스엠이 System for postprocessing 3-dimensional image
KR101853006B1 (en) * 2016-12-19 2018-04-30 동의대학교 산학협력단 Recognition of Face through Detecting Nose in Depth Image
CN109584358A (en) * 2018-11-28 2019-04-05 深圳市商汤科技有限公司 A kind of three-dimensional facial reconstruction method and device, equipment and storage medium
CN109934112A (en) * 2019-02-14 2019-06-25 青岛小鸟看看科技有限公司 A kind of face alignment method and camera
CN112434546A (en) * 2019-08-26 2021-03-02 杭州魔点科技有限公司 Face living body detection method and device, equipment and storage medium
CN110532979A (en) * 2019-09-03 2019-12-03 深圳市华芯技研科技有限公司 A kind of 3-D image face identification method and system
CN111160309A (en) * 2019-12-31 2020-05-15 深圳云天励飞技术有限公司 Image processing method and related equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Face alignment method based on deep regression networks; Feng Wenxiang; Wen Chang; Xie Kai; He Jianbiao; Computer Engineering and Design (Issue 07); full text *

Also Published As

Publication number Publication date
CN113743191A (en) 2021-12-03

Similar Documents

Publication Publication Date Title
CN107948517B (en) Preview picture blurring processing method, device and equipment
EP3477931A1 (en) Image processing method and device, readable storage medium and electronic device
EP1953675B1 (en) Image processing for face and face expression recognition
EP1650711B1 (en) Image processing device, imaging device, image processing method
KR100682889B1 (en) Method and Apparatus for image-based photorealistic 3D face modeling
JP4723834B2 (en) Photorealistic three-dimensional face modeling method and apparatus based on video
US10304164B2 (en) Image processing apparatus, image processing method, and storage medium for performing lighting processing for image data
KR101303877B1 (en) Method and apparatus for serving prefer color conversion of skin color applying face detection and skin area detection
CN107864337B (en) Sketch image processing method, device and equipment and computer readable storage medium
CN108537782B (en) Building image matching and fusing method based on contour extraction
JP4597391B2 (en) Facial region detection apparatus and method, and computer-readable recording medium
CN108416291B (en) Face detection and recognition method, device and system
CN110400338B (en) Depth map processing method and device and electronic equipment
JP6157165B2 (en) Gaze detection device and imaging device
CN114693760A (en) Image correction method, device and system and electronic equipment
CN112614136A (en) Infrared small target real-time instance segmentation method and device
CN110188640B (en) Face recognition method, face recognition device, server and computer readable medium
CN110276831A (en) Constructing method and device, equipment, the computer readable storage medium of threedimensional model
KR20140074201A (en) Tracking device
CN111080537B (en) Intelligent control method, medium, equipment and system for underwater robot
CN113128428B (en) Depth map prediction-based in vivo detection method and related equipment
CN113610865B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN109635682B (en) Face recognition device and method
CN111669492A (en) Method for processing shot digital image by terminal and terminal
CN113743191B (en) Face image alignment detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant