CN108960099B - Method, system, equipment and storage medium for estimating left and right inclination angles of human face - Google Patents


Info

Publication number
CN108960099B
CN108960099B (application CN201810653661.2A)
Authority
CN
China
Prior art keywords
image
face
relative difference
difference value
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810653661.2A
Other languages
Chinese (zh)
Other versions
CN108960099A (en)
Inventor
徐勇
刘宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University Shenzhen Graduate School
Shenzhen Graduate School Harbin Institute of Technology
Original Assignee
Peking University Shenzhen Graduate School
Shenzhen Graduate School Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University Shenzhen Graduate School, Shenzhen Graduate School Harbin Institute of Technology filed Critical Peking University Shenzhen Graduate School
Priority to CN201810653661.2A priority Critical patent/CN108960099B/en
Publication of CN108960099A publication Critical patent/CN108960099A/en
Application granted granted Critical
Publication of CN108960099B publication Critical patent/CN108960099B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method, a system, equipment and a storage medium for estimating the left and right inclination angles of a human face, comprising the following steps: dividing the face image into a first image and a second image in a specified manner; calculating the relative difference value of the pixel values between the first image and the second image; and calculating the corresponding left and right inclination angles of the human face according to the relative difference value. The method, system, equipment and storage medium for estimating the left and right inclination angles of the human face have the following beneficial effects: the left and right inclination angles of the face are calculated from the relative difference value of the pixel values, which simplifies the steps of estimating the face inclination angle, increases the calculation speed of the left and right inclination angles, and effectively avoids the influence of illumination changes.

Description

Method, system, equipment and storage medium for estimating left and right inclination angles of human face
Technical Field
The invention relates to the technical field of pattern recognition and computer vision, in particular to a method, a system, equipment and a storage medium for estimating a left and right inclination angle of a human face.
Background
The estimation of the left and right inclination angles of the human face has important application value in fields such as face recognition, video tracking, fatigue detection and human-computer interaction. Some face recognition systems require a frontal face image; in this case the camera needs to adjust its angle according to the face inclination angle in order to capture the frontal view. If the face inclination angle parameters can be obtained during video tracking, the posture of the camera can be adjusted dynamically so that it always stays in the optimal observation position for the monitored object. In addition, the face inclination angle parameters can be used to coordinate multiple surveillance cameras in time and space, realizing continuous tracking of the monitored object. An important issue in intelligent human-computer interaction research is the need to accurately judge the focus of a person's attention at a given moment, so that a computer can better understand the person's behavior and react accordingly. A driver may become fatigued while driving and, in serious cases, doze off; by estimating the face inclination angle, driver fatigue can be detected and an alarm raised in time, thereby avoiding accidents.
At present, most techniques for estimating the face inclination angle estimate the overall pose angle in a single step; their calculation methods are complex and involve many steps, so the estimation speed is slow.
Disclosure of Invention
The invention mainly aims to provide a method, a system, equipment and a storage medium for estimating a left and right inclination angle of a human face based on image pixel difference analysis, so as to improve the estimation efficiency of the left and right inclination angle of the human face.
The invention provides a method for estimating a left and right inclination angle of a human face, which comprises the following steps:
equally dividing the face image into a first image and a second image in a specified manner;
calculating the relative difference value of the pixel values between the first image and the second image;
and calculating the corresponding left and right inclination angles of the human face according to the relative difference values.
Further, before the step of equally dividing the face image into the first image and the second image in a specified manner, the method further comprises:
judging whether the face image is vertically inclined or not;
and if so, calibrating the face image by an affine transformation method.
Further, the step of determining whether the face image is vertically tilted includes:
respectively acquiring position points of two eye corners close to a nose in the face image;
and connecting the position points, and judging whether the line segment has an inclination angle with the horizontal line.
Further, the step of equally dividing the face image into the first image and the second image in a specified manner includes:
acquiring the number m of rows and the number n of columns of the face image matrix, and judging whether n is an even number;
if yes, equally dividing the face image into a first image and a second image each of size m × (n/2);
if not, discarding the first or last column of the face image matrix, and equally dividing the face image into a first image and a second image each of size m × ((n-1)/2).
Further, the step of calculating the relative difference value of the pixel values between the first image and the second image comprises:
respectively dividing the first image and the second image into a plurality of image blocks, wherein the number of pixel points contained in each image block is the same;
respectively labeling each image block in the image matrix of the first image and the image matrix of the second image, and labeling each pixel point in the image block;
calculating the relative difference value of the pixel values of the pixel points of the corresponding labels in the first image and the second image;
calculating the relative difference value of the pixel value of the image block corresponding to the label according to the relative difference value of the pixel value of each pixel point;
and calculating the relative difference value of the pixel values of the first image and the second image according to the relative difference value of the pixel values of each image block.
Further, the step of calculating the corresponding left-right inclination angle of the face according to the relative difference value comprises:
acquiring the relative difference value of the pixel values of the first image and the second image;
and calculating the corresponding left and right inclination angles of the human face according to the relative difference values of the pixel values of the first image and the second image.
Further, before the step of equally dividing the face image into the first image and the second image in a specified manner, the method further comprises:
establishing a calculation formula of the inclination angle, wherein the steps comprise:
obtaining the relative difference values of K historical face images, sequentially recorded as D_1, ..., D_K, and forming the matrix P = (D_1, ..., D_K)^T;
acquiring the left and right inclination angles of the K historical face images, sequentially recorded as α_1, ..., α_K, and forming the matrix α = (α_1, ..., α_K)^T;
establishing the equation set Pg = α, calculating the linear relation between P and α, and solving to obtain ĝ = (P^T P + γI)^(-1) P^T α;
for a face image h with an unknown inclination angle, first calculating its relative difference value D_h, and then calculating its inclination angle α_h by the formula α_h = ĝ D_h.
Wherein γ and I are respectively a small positive number and a unit matrix, and T represents the transposition operation of the matrix.
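The solution for ĝ above is a ridge-regularized least-squares fit. Assuming P is the single column (D_1, ..., D_K)^T as reconstructed above, the matrix inverse collapses to a scalar division, as in this minimal sketch (pure Python; function names and toy data are illustrative, not from the patent):

```python
def fit_tilt_coefficient(diffs, angles, gamma=1e-6):
    """Ridge least squares: g = (P^T P + gamma*I)^(-1) P^T alpha.

    With P the single column (D_1, ..., D_K)^T, P^T P is the scalar sum
    of squared relative differences, so the inverse is a plain division.
    gamma is the small positive number from the text.
    """
    num = sum(d * a for d, a in zip(diffs, angles))
    den = sum(d * d for d in diffs) + gamma
    return num / den

def estimate_tilt(g, d_h):
    """alpha_h = g * D_h for a face image with an unknown tilt angle."""
    return g * d_h
```

On toy data with an exact relation α = 2D, e.g. `fit_tilt_coefficient([1, 2, 3], [2, 4, 6])`, the recovered coefficient is close to 2, up to the small γ regularization.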
The invention also provides a system for estimating the left and right inclination angles of the human face, which comprises the following steps:
the face image segmentation module is used for equally dividing the face image into a first image and a second image in a specified mode;
a first calculating module, configured to calculate a relative difference value between pixel values of the first image and the second image;
and the second calculation module is used for calculating the corresponding left and right inclination angles of the human face according to the relative difference values.
The invention also proposes a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method according to any one of the embodiments when executing the program.
The invention also proposes a computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of the embodiments.
The invention has the following beneficial effects: the corresponding left and right inclination angles of the face are calculated from the relative difference value of the pixel values, which simplifies the steps of estimating the face inclination angle and increases the calculation speed of the left and right inclination angles; when the number of columns of the face image matrix is odd, the first or last column is discarded, ensuring the symmetry of the first image and the second image and improving the accuracy of the estimation; vertically tilted face images are first calibrated, separating the tilt of the face image in the vertical direction from that in the left-right direction, which further improves the accuracy of the estimation; the relative difference value of the pixel values is computed hierarchically, improving its calculation precision and thereby the accuracy of the estimation; and because the left and right inclination angles are estimated from the relative difference value of the pixel values, the influence of illumination changes is effectively avoided.
Drawings
FIG. 1 is a schematic flow chart illustrating a method for estimating a left-right tilt angle of a human face according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a method for estimating a left-right tilt angle of a human face according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart illustrating a method for estimating a left-right tilt angle of a human face according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a method for estimating a left-right tilt angle of a human face according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a method for estimating a left-right tilt angle of a human face according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating a method for estimating a left-right tilt angle of a human face according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating a method for estimating a left-right tilt angle of a human face according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a system for estimating a left-right tilt angle of a human face according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a computer device according to an embodiment of the present invention;
in the figure: 1. a face image segmentation module; 2. a first calculation module; 3. a second calculation module; 4. a computer device; 5. an external device; 6. a processing unit; 7. a bus; 8. a network adapter; 9. an (I/O) interface; 10. a display; 11. a system memory; 12. random Access Memory (RAM); 13. a cache memory; 14. a storage system; 15. a program/utility tool; 16. and (5) program modules.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, in the embodiment of the present invention, a method for estimating a left-right inclination angle of a human face is provided, including the following steps:
s1, equally dividing the face image into a first image and a second image in a specified manner;
s2, calculating the relative difference value of the pixel values between the first image and the second image;
and S3, calculating the corresponding left and right inclination angles of the human face according to the relative difference values.
In step S1, the face image is equally divided into the first image and the second image in a specified manner. The specified manner may generally be: obtaining the center point of the face image along its width and splitting the image along the line through this center point into the corresponding first and second images; splitting the width of the face image into two equal sections and forming a first image and a second image from the part of the face image over each section; or obtaining the area of the face image and dividing it into two parts of equal height and half the width of the original face image. In this embodiment, the preferred manner is to split the width into two equal sections and form the first image and the second image from the corresponding parts of the face image. Specifically, the first image and the second image, which contain the same number of pixels, are obtained by cropping or by marking a boundary; in this embodiment, cropping is preferred.
In step S2, the relative difference value of the pixel values between the first image and the second image is calculated; this relative difference value is linearly related to the corresponding left-right inclination angle of the face. When the face has no inclination in the left-right direction, the face image is approximately axisymmetric, and the relative difference value of the pixel values between the first image and the second image is small; when the face is inclined in the left-right direction, the face image is non-axisymmetric, and its asymmetry grows as the left-right inclination angle increases, so the relative difference value of the pixel values between the first image and the second image also grows with the left-right inclination angle of the face. In general, a pixel value is the value assigned by a computer when an image of an original is digitized; it represents the average luminance information of a small patch of the original, or the average reflection (transmission) density information of that patch, and it changes as the color information contained in a unit area of the image changes. For example, when the color information is white, the pixel value is 255; when the color information is black, the pixel value is 0.
In step S3, the corresponding left-right inclination angle of the face is calculated according to the relative difference value: the relative difference value is substituted into the inclination angle calculation formula α_h = ĝ D_h, from which the corresponding face inclination angle is obtained; here h denotes a face image with an unknown left-right inclination angle, α_h is its left-right inclination angle, D_h is its relative difference value, and ĝ is the calculated coefficient.
Referring to fig. 2, in this embodiment, before the step of dividing the face image into the first image and the second image in a specified manner, the method further includes:
s4, judging whether the face image is vertically inclined;
and S5, if yes, calibrating the face image by an affine transformation method.
Since the face inclination includes an inclination in the vertical direction and an inclination in the left-right direction, when there is an inclination in the vertical direction of the face image, if the inclination in the vertical direction of the face image is not corrected, the accuracy of the estimation of the left-right inclination angle of the face may be affected. Therefore, before executing steps S1 to S3, steps S4 to S5 are performed to calibrate the vertical tilt of the face image, so as to improve the accuracy of the estimation of the left-right tilt angle of the face;
In step S4, it is determined whether the face image is vertically tilted; generally, this is judged from whether the facial features are tilted. For example, one may determine whether the line connecting the two pupils has an inclination angle with the horizontal line, whether the line connecting the position points of the two eye corners near the nose has an inclination angle with the horizontal line, or whether the line connecting the two corners of the lips has an inclination angle with the horizontal line.
In step S5, if the face image is vertically tilted, it is calibrated by an affine transformation: a linear transformation followed by a translation is applied to one vector space of the face image to obtain another vector space, thereby calibrating the face image whose inclination angle lies in the vertical direction. In the calibrated face image, the line segment connecting the position points of the two eye corners near the nose is horizontal, and at this point the vertical tilt of the face image has been calibrated. If the face image is not vertically tilted, no calibration is performed.
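As an illustration of the rotation part of such an affine calibration, the sketch below rotates landmark coordinates about a center point (pure Python; the function name is a hypothetical illustration, not from the patent, and resampling a full image would additionally require pixel interpolation):

```python
import math

def rotate_points(points, angle_deg, center=(0.0, 0.0)):
    """Apply the rotation part of an affine transform to 2-D points.

    Rotating every coordinate by the negative of the measured tilt angle
    levels the inner-eye-corner segment; interpolation for resampling
    the rotated image is deliberately omitted from this sketch.
    """
    t = math.radians(angle_deg)
    cx, cy = center
    rotated = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        rotated.append((cx + dx * math.cos(t) - dy * math.sin(t),
                        cy + dx * math.sin(t) + dy * math.cos(t)))
    return rotated
```

For instance, eye corners at (0, 0) and (10, 10) form a 45° segment; rotating by -45° brings the second point back onto the horizontal through the first.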
In this embodiment, before the step of determining whether the face image is vertically tilted, the method further includes:
a1, judging whether the face image is a color image;
and a2, if yes, converting the color image into a gray scale image.
In step a1, it is determined whether the face image is a color image, generally according to the color information it contains. In a color image, each pixel value is divided into R, G and B primary color components, and each component directly determines the intensity of its primary color. For example, with an image depth of 24 bits and R:G:B = 8:8:8, each of R, G and B occupies 8 bits to represent the intensity of its component, so each primary color component has 2^8 = 256 intensity levels.
In step a2, if the image is a color image, it is converted into a grayscale image, which facilitates the subsequent calculation of the relative difference value between the pixel values of the first image and the second image. A grayscale image is an image with only one sampled color per pixel, generally displayed as shades of gray from the darkest black to the brightest white. If the image is not a color image, the original face image is used directly.
In this embodiment, the formula for converting the color image into a grayscale image is Y = 0.3R + 0.59G + 0.11B, where Y represents the grayscale value and R, G and B represent the red, green and blue color values, respectively.
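The conversion formula above can be sketched directly (pure Python; the function name is illustrative):

```python
def rgb_to_gray(r, g, b):
    """Grayscale value per the formula Y = 0.3*R + 0.59*G + 0.11*B.

    The weights sum to 1, so pure white (255, 255, 255) maps to 255
    and pure black (0, 0, 0) maps to 0.
    """
    return 0.3 * r + 0.59 * g + 0.11 * b
```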
Referring to fig. 3, in this embodiment, the step of determining whether the face image is vertically tilted includes:
s6, respectively acquiring the position points of two eye corners close to the nose in the face image;
and S7, connecting the position points and judging whether the line segment has an inclined angle with the horizontal line.
In step S6, the position points of the two eye corners near the nose in the face image are obtained, generally by a target detection method based on deep learning; the deep network is typically a ResNet100 network. Specifically, a batch of left-eye and right-eye images is first extracted from a number of face images as positive samples, while non-eye image regions from a large number of face images serve as negative samples, and the deep network is trained with the obtained positive and negative samples. When a face image is input, the trained deep network can then detect the position points of the two eye corners near the nose in the face image.
In step S7, the position points are connected, and it is determined whether the line segment formed by connecting the position points of the two eye corners near the nose has an inclination angle with the horizontal line; this inclination angle is the angle between the line segment and the horizontal line, and it indicates the degree of deflection of the face image in the vertical direction.
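The check in steps S6-S7 amounts to measuring the angle of the eye-corner segment against the horizontal, for example (pure Python; the function name is illustrative):

```python
import math

def segment_tilt_degrees(p_left, p_right):
    """Angle between the inner-eye-corner segment and the horizontal line.

    Points are (x, y) pixel coordinates; a result of 0 means the segment
    is horizontal, i.e. no vertical tilt by this criterion.
    """
    dx = p_right[0] - p_left[0]
    dy = p_right[1] - p_left[1]
    return math.degrees(math.atan2(dy, dx))
```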
In this embodiment, the dividing the face image into the first image and the second image in a designated manner includes:
s8, acquiring the row number m and the column number n of the face image matrix, and judging whether n is an even number;
s9, if yes, equally dividing the face image into a first image and a second image each of size m × (n/2);
s10, if not, discarding the first or last column of the face image matrix, and equally dividing the face image into a first image and a second image each of size m × ((n-1)/2).
As the above step S8, the number of rows m and the number of columns n of the face image matrix are obtained, and it is determined whether n is an even number, where the number of rows m and the number of columns n of the face image matrix depend on the model of the device obtaining the face image and the distance between the face and the device, and step S9 or step S10 is executed to split the face image according to the parity of the number of columns n, and the size unit of the face image is a pixel.
In step S9, when n is an even number, the face image is divided by cropping into a first image and a second image each of size m × (n/2); the image matrices of the first image and the second image then have equal numbers of rows and columns.
In step S10, when n is an odd number, the first or last column of the face image matrix is discarded, and the face image is then equally divided into a first image and a second image each of size m × ((n-1)/2), ensuring that the image matrices of the first image and the second image obtained by the division have equal numbers of rows and columns.
Steps S8-S10 may be replaced by steps b8-b9. Step b8, which replaces S8, is: cropping the face image matrix into a face image sub-matrix of fixed size; the fixed size generally has an even number of columns n, while the number of rows m may be even or odd, and in this embodiment the preferred fixed size is 64 × 64 px. Step b9, which replaces S9 and S10, is: equally dividing the face image sub-matrix into the first image and the second image; the division may be into a fixed number of equal sections across the width or by equal distances, and in this embodiment division by equal distances is preferred.
Referring to fig. 4, in this embodiment, the step of calculating the relative difference value of the pixel values between the first image and the second image includes:
s11, dividing the first image and the second image into a plurality of image blocks respectively, wherein the number of pixel points contained in each image block is the same;
s12, labeling each image block in the image matrix of the first image and the image matrix of the second image respectively, and labeling each pixel point in the image block;
s13, calculating the relative difference value of the pixel values of the pixels of the corresponding labels in the first image and the second image;
s14, calculating the relative difference value of the pixel value of the image block corresponding to the label according to the relative difference value of the pixel value of each pixel point;
s15, calculating a relative difference value between the pixel values of the first image and the second image according to the relative difference value between the pixel values of the image blocks.
In step S11, the first image and the second image are each divided into a plurality of image blocks; the width and height may be divided into a fixed number of sections or by a fixed distance, and in this embodiment division into a fixed number is preferred. Specifically, the width of each of the first and second images is divided into 5 equal sections and the height into 10 equal sections, and the image blocks are formed by the intersections of the width sections and the height sections. Each image block contains the same number of pixel points, i.e., all image blocks are of equal size.
In step S12, each image block in the image matrix of the first image and in the image matrix of the second image is labeled, and each pixel point within each image block is labeled. The first image is denoted L and the second image R; the image blocks of the first image are denoted in order L_1, ..., L_q, and those of the second image R_1, ..., R_q. Taking image blocks L_1 and R_1 as an example, the pixel points are labeled as follows: the pixel points in L_1 are denoted l_11, ..., l_1q, and the pixel points in R_1 are denoted r_11, ..., r_1q.
In step S13, the relative difference value between the pixel values of the pixel points with corresponding labels in the first image and the second image is calculated, using the pixel values of the pixel points as known parameters; corresponding labels means that pixel point l_ij corresponds to pixel point r_ij.
In this embodiment, the calculation formula of the step of calculating the relative difference value between the pixel values of the pixels corresponding to the labels in the first image and the second image is as follows:
d_ij = (l_ij − r_ij)² / (l_ij + 0.01),
where d_ij is the relative difference value of the pixel values of the corresponding pixel points in the first image and the second image, l_ij is the pixel value of the j-th pixel point in the i-th image block of the first image, and r_ij is the pixel value of the j-th pixel point in the i-th image block of the second image; the formula still applies in the special case where l_ij is zero.
In step S14, the relative difference value of the pixel value of the image block corresponding to the label is calculated according to the relative difference value of the pixel value of each pixel, wherein the relative difference value d of the pixel value of each pixel corresponding to the label is used as the relative difference value of the pixel value of each pixel corresponding to the labelijSumming the known parameters, averaging the known parameters with the result of the summation, and finally obtaining the resultIs the relative difference d of the pixel values of the image block corresponding to the labeli
In step S15, the relative difference value between the pixel values of the first image and the second image is calculated from the relative difference values of the image blocks: the relative difference values di of the image blocks are summed and the sum is averaged over the number of blocks, giving the relative difference value D between the pixel values of the first image and the second image.
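A minimal numpy sketch of steps S13 to S15 (the function name is illustrative, and averaging directly over all pixels is an assumed simplification that equals the two-stage block average when all blocks contain the same number of pixels, as the embodiment requires):

```python
import numpy as np

def relative_difference(left, right, eps=0.01):
    """Per-pixel relative difference d = (l - r)^2 / (l + eps),
    averaged to a single scalar D, following steps S13-S15."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    d = (left - right) ** 2 / (left + eps)  # stays finite even when l is 0
    return float(d.mean())
```

Identical halves (a perfectly symmetric face) yield D = 0, and D grows with the asymmetry between the two halves, matching the linear relation to the inclination angle described above.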
Referring to fig. 5, in this embodiment, before the step of calculating the relative difference value between the pixel values of the pixels corresponding to the labels in the first image and the second image, the method further includes:
and A3, acquiring the pixel value of each pixel point corresponding to the label in the first image and the second image.
In the step a3, the pixel values of the pixels corresponding to the labels in the first image and the second image are obtained, wherein the pixel values are generally between 0 and 255.
Referring to fig. 6, in this embodiment, the step of calculating the left-right inclination angle of the corresponding face according to the relative difference value includes:
s16, acquiring the relative difference value of the pixel values of the first image and the second image;
and S17, calculating the corresponding left and right inclination angles of the human face according to the relative difference values of the pixel values of the first image and the second image.
In step S16, the relative difference value between the pixel values of the first image and the second image is obtained, wherein the relative difference value is the result of performing the steps S13-S15.
In step S17, the left-right inclination angle of the face is calculated from the relative difference value between the pixel values of the first image and the second image: the relative difference value, as a known parameter, is substituted into the inclination angle calculation formula to calculate the inclination angle of the face; specifically, the inclination angle calculation formula is
αh = Dh ĝ, where ĝ = (PᵀP + γI)⁻¹Pᵀα.
Referring to fig. 7, before the step of dividing the face image into the first image and the second image in a specified manner, the method further includes:
s18, establishing a calculation formula of the inclination angle, wherein the steps comprise:
s19, obtaining relative difference values of K historical face images, and recording the relative difference values as D1......DKOrder matrix
Figure BDA0001705332140000082
S20, acquiring the left and right inclination angles of the K historical face images, recorded in sequence as α1, ..., αK; let the matrix
α = (α1, α2, ..., αK)ᵀ;
S21, establishing the equation set Pg = α, calculating the linear relation between P and α, and solving to obtain
ĝ = (PᵀP + γI)⁻¹Pᵀα.
For a face image with an arbitrary unknown inclination angle, first calculate its relative difference value Dh, and then use the formula
αh = Dh ĝ
to calculate the inclination angle αh,
where γ is a small positive number, I is the identity matrix, and ᵀ denotes the matrix transposition operation.
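The solve in steps S19 to S21 is an ordinary ridge-regularized least-squares fit, which can be sketched in numpy as follows (function names are illustrative):

```python
import numpy as np

def fit_g(D_values, angles, gamma=0.01):
    """Solve Pg = alpha in the regularized least-squares sense,
    g_hat = (P^T P + gamma*I)^(-1) P^T alpha, as in steps S19-S21.
    D_values are the relative differences of K reference face images,
    angles are their known left-right inclination angles."""
    P = np.asarray(D_values, dtype=float).reshape(-1, 1)
    alpha = np.asarray(angles, dtype=float).reshape(-1, 1)
    g = np.linalg.solve(P.T @ P + gamma * np.eye(P.shape[1]), P.T @ alpha)
    return float(g[0, 0])

def predict_angle(D_h, g_hat):
    """alpha_h = D_h * g_hat for a new image with relative difference D_h."""
    return D_h * g_hat
```

With synthetic data where angles are exactly twice the relative differences, the fitted ĝ is close to 2 (pulled slightly toward zero by γ), and a new image's angle follows from a single multiplication.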
In step S18, a tilt angle calculation formula is created, wherein the relative difference value and the left and right tilt angles of the face have a linear correlation, and therefore, the tilt angle calculation formula is created by using the left and right tilt angles of the known face image and the corresponding relative difference value.
As in step S19, the relative difference values of the K historical face images are obtained and recorded in sequence as D1, ..., DK; the matrix is
P = (D1, D2, ..., DK)ᵀ.
The matrix P comprises relative difference values of K historical face images.
As in step S20, the left and right inclination angles of the K historical face images are obtained and recorded in sequence as α1, ..., αK; the matrix is
α = (α1, α2, ..., αK)ᵀ.
The matrix alpha comprises left and right inclination angles of K historical face images.
As in step S21, the equation set Pg = α is established, the linear relation between P and α is calculated, and the solution is
ĝ = (PᵀP + γI)⁻¹Pᵀα.
For a face image with an arbitrary unknown inclination angle, first calculate its relative difference value Dh, and then use the formula
αh = Dh ĝ
to calculate the inclination angle αh. The value of γ has a significant influence on the solution of the equation set; extensive analysis and experimental verification show that the preferable range of γ is between 0.01 and 0.1, with more preferable values of 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09 or 0.1, and
‖Pĝ − α‖²
represents the fitness of the solution ĝ to the above inclination angle calculation formula; the smaller
‖Pĝ − α‖²
is, the more reasonable the corresponding value of γ and the better the solution of the above inclination angle calculation formula fits the current data. Therefore, the most preferable γ is the value that makes
‖Pĝ − α‖²
minimal; this criterion ensures that the solution of the inclination angle calculation formula has excellent numerical stability and robustness.
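Assuming the minimized quantity is the least-squares residual ‖Pĝ − α‖², the γ-selection criterion can be sketched as a grid search over the preferred candidate values (function name is illustrative):

```python
import numpy as np

def choose_gamma(D_values, angles,
                 candidates=(0.01, 0.02, 0.03, 0.04, 0.05,
                             0.06, 0.07, 0.08, 0.09, 0.1)):
    """Pick the gamma among the candidates that minimizes the
    residual ||P g_hat - alpha||^2, the selection criterion above."""
    P = np.asarray(D_values, dtype=float).reshape(-1, 1)
    alpha = np.asarray(angles, dtype=float).reshape(-1, 1)
    best_gamma, best_resid = None, None
    for gamma in candidates:
        g = np.linalg.solve(P.T @ P + gamma * np.eye(1), P.T @ alpha)
        resid = float(np.sum((P @ g - alpha) ** 2))
        if best_resid is None or resid < best_resid:
            best_gamma, best_resid = gamma, resid
    return best_gamma
```

Note that for a fixed data set the residual shrinks as γ shrinks, consistent with the text's remark that smaller values of γ are more reasonable.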
Referring to fig. 8, the present invention further provides a system for estimating a left-right inclination angle of a human face, including:
the face image segmentation module 1 is used for equally dividing a face image into a first image and a second image in a specified mode;
a first calculating module 2, configured to calculate a relative difference value between pixel values of the first image and the second image;
and the second calculating module 3 is used for calculating the corresponding left and right inclination angles of the human face according to the relative difference values.
In the face image segmentation module 1, after a face image is obtained, the face image is divided into a first image and a second image in a specified manner. The specified manner may be to first obtain the center point of the width of the face image and then divide the face image into the corresponding first and second images along the line through that center point; it may be to split the width of the face image into two equal-width sections and form the first image and the second image from the parts of the face image corresponding to each section; or it may be to obtain the area of the face image and divide it into a first image and a second image of equal height and half the width of the original face image. In this embodiment, preferably, the width of the face image is split into two equal-width sections and the first and second images are formed from the corresponding parts of the face image; specifically, the first image and the second image, which have the same number of pixels, are obtained by cropping or by marking a boundary, cropping being preferred in this embodiment.
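The preferred width-halving segmentation can be sketched as follows (a numpy sketch; the function name and the choice to discard the last rather than the first column are illustrative assumptions):

```python
import numpy as np

def split_face(face):
    """Split an m x n grayscale face matrix into equal left and right
    halves; if n is odd, the last column is discarded first."""
    m, n = face.shape
    if n % 2 == 1:
        face = face[:, :-1]  # discard the last column
        n -= 1
    return face[:, :n // 2], face[:, n // 2:]
```

Both halves always come out with the same number of rows and columns, which is what the per-pixel comparison in the first calculation module requires.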
In the first calculation module 2, after the pixel values of the first image and the second image are obtained, the relative difference value of the pixel values between the first image and the second image is calculated; this relative difference value is linearly related to the left-right inclination angle of the corresponding face. When the face has no left-right inclination, the face image is approximately axisymmetric and the relative difference value of the pixel values between the first image and the second image is small; when the face is inclined in the left-right direction, the face image is not axisymmetric, and its asymmetry grows with the inclination angle, so the relative difference value of the pixel values between the first image and the second image grows with the left-right inclination angle of the face. In general, a pixel value is the value assigned by a computer when an image of an original is digitized; it represents the average luminance information of a small patch of the original, or the average reflection (transmission) density information of that patch, and it changes as the color information contained in a unit area of the image changes. For example, when the color information is white, the pixel value is 255; when the color information is black, the pixel value is 0.
In the second calculating module 3, after a relative difference value between pixel values of the first image and the second image is obtained, a corresponding left and right inclination angle of the face is calculated according to the relative difference value, wherein the corresponding inclination angle can be calculated by substituting the relative difference value into a corresponding inclination angle calculating formula.
In this embodiment, the method further includes: the device comprises a vertical inclination judgment module, a vertical inclination calibration module, an image color judgment module, an image color conversion module, a position point acquisition module, a line segment judgment module, a column number judgment module, a first segmentation sub-module, a second segmentation sub-module, an image block forming module, a labeling module, a first calculation sub-module, a second calculation sub-module, a third calculation sub-module, a pixel value acquisition module, a first relative difference value acquisition module, an inclination angle calculation module, a formula establishment module, a second relative difference value acquisition module, a left and right inclination angle acquisition module and a formula establishment sub-module.
The vertical inclination judging module is configured to judge whether the face image is vertically inclined, where the face inclination includes an inclination in a vertical direction and an inclination in a left-right direction, and when the face image is inclined in the vertical direction, if the inclination in the vertical direction of the face image is not calibrated, the accuracy of estimating the left-right inclination angle of the face may be affected; generally, whether the facial features are vertically inclined or not is judged; for example, it is possible to determine whether a connection line of the position points of the pupils and the pupils has an inclination angle with the horizontal line, determine whether a connection line of the position points of the two corners of the eyes near the nose has an inclination angle with the horizontal line, and determine whether a connection line of the position points of the two corners of the lips has an inclination angle with the horizontal line.
The vertical inclination calibration module is used to calibrate the face image by an affine transformation, i.e., a linear transformation followed by a translation, which corrects a face image that is inclined in the vertical direction; in the calibrated face image, the line segment connecting the position points of the two eye corners near the nose is horizontal, at which point the vertical inclination of the face image has been calibrated.
The image color determining module is configured to determine whether the face image is a color image, generally according to the color information contained in the face image. A color image is one in which each pixel value is divided into R, G, B primary color components, each directly determining the intensity of its primary color; for example, if the image depth is 24 and R:G:B is 8:8:8, then R, G and B each occupy 8 bits to represent the intensity of their primary color component, and the intensity level of each primary color component is 2^8 = 256.
The image color conversion module is used to convert the color image into a grayscale image, which facilitates the subsequent calculation of the relative difference value between the pixel values of the first image and the second image. A grayscale image is an image with only one sample color per pixel, generally displayed as shades of gray from the darkest black to the brightest white. The formula for converting a color image into a grayscale image is Y = 0.3R + 0.59G + 0.11B, where Y represents the gray level and R, G, B represent the color values of red, green and blue respectively.
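The quoted conversion formula can be sketched directly (function name is illustrative):

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 RGB image to grayscale with the weighted
    formula Y = 0.3R + 0.59G + 0.11B quoted above."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.3 * r + 0.59 * g + 0.11 * b
```

Since the weights sum to 1, a white pixel (255, 255, 255) maps to gray level 255 and a black pixel to 0, matching the pixel-value range discussed above.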
The position point acquiring module is configured to acquire the position points of the two eye corners of the face near the nose, generally by a target detection method based on deep learning, typically a ResNet100 network. Specifically, a batch of left and right human eye images is first cropped from a number of face images as positive samples, while non-eye image regions from a large number of face images are taken as negative samples, and the positive and negative samples are used to train a deep network; when a face image is input, the trained deep network can detect the position points of the two eye corners near the nose in the face image.
The line segment judging module is used to judge whether an inclination angle exists between the line segment and the horizontal line; the inclination angle θ is the angle between the horizontal line and the line segment connecting the position points of the two eye corners near the nose in the face image, and it represents the degree of deflection of the face image in the vertical direction.
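Computing θ from the two inner eye-corner position points can be sketched as follows (names are illustrative; the detection of the points themselves is assumed done):

```python
import math

def vertical_tilt_angle(corner_left, corner_right):
    """Angle theta (degrees) between the horizontal line and the segment
    joining the two inner eye corners, given as (x, y) image coordinates
    (y grows downward)."""
    dx = corner_right[0] - corner_left[0]
    dy = corner_right[1] - corner_left[1]
    return math.degrees(math.atan2(dy, dx))
```

A nonzero θ indicates the face image needs the affine calibration described above before the left-right angle is estimated.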
The column number judging module is configured to acquire the number of rows m and the number of columns n of the face image matrix and to judge whether n is an even number; the number of rows m and the number of columns n depend on the model of the device that captures the face image and on the distance between the face and the device, and according to the parity of the number of columns n the face image is sent to the first segmentation submodule or the second segmentation submodule to be split; the size unit of the face image is the pixel.
The first segmentation submodule is used for dividing the face image, when n is an even number, into the first image and the second image of size
m × n/2.
When n is an even number, the face image is divided by cropping into the first image and the second image of size
m × n/2;
the image matrix of the first image and the image matrix of the second image have equal numbers of rows and columns.
The second segmentation submodule is used for discarding the first or last column of the face image matrix when n is an odd number, and equally dividing the face image into the first image and the second image of size
m × (n−1)/2.
When n is an odd number, the first or last column of the face image matrix is discarded and the face image is divided into the first image and the second image, so that the image matrix of the first image and the image matrix of the second image obtained by the division have equal numbers of rows and columns.
The column number judging module, the first segmentation submodule and the second segmentation submodule can be replaced by a cropping module and an averaging module. The column number judging module is replaced by the cropping module, which is generally used to crop the face image matrix to a face image sub-matrix of fixed size; the fixed size generally has an even number of columns and an even or odd number of rows, and in this embodiment the fixed size is preferably 64 × 64 px. The first and second segmentation submodules are replaced by the averaging module, which is generally configured to equally divide the face image sub-matrix into the first image and the second image, where the width of the face image is divided equally either into a fixed number of parts or at a fixed distance; in this embodiment, equal division at a fixed distance is preferred.
The image block forming module is configured to divide the first image and the second image into a plurality of image blocks; the width and height of the first and second images are divided either into a fixed number of parts or at a fixed distance, and in this embodiment division into a fixed number is preferred. Specifically, the width of each of the first and second images is divided into 5 uniformly spaced slices and the height into 10 uniformly spaced slices, and the image blocks are formed by the intersection of the width slices and the height slices; the number of pixels contained in each image block is equal, that is, the image blocks are of equal size.
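The 5-by-10 block grid can be sketched as follows (assuming the half-image height and width are divisible by the grid; names are illustrative):

```python
import numpy as np

def split_into_blocks(img, cols=5, rows=10):
    """Divide a half-image into rows x cols equal image blocks (the
    5-wide by 10-high grid described above)."""
    m, n = img.shape
    bh, bw = m // rows, n // cols
    return [img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            for i in range(rows) for j in range(cols)]
```

Every block has the same shape, so the per-block averages of the following submodules are directly comparable.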
The labeling module is configured to label each image block in the image matrix of the first image and each image block in the image matrix of the second image, and to label each pixel point within each image block, wherein the first image is denoted L, the second image is denoted R, the image blocks of the first image are denoted in sequence L1, ..., Lq, and the image blocks of the second image are denoted R1, ..., Rq. Taking image blocks L1 and R1 as an example, the pixel points are labeled as follows: each pixel point in L1 is labeled l11, ..., l1q, and each pixel point in R1 is labeled r11, ..., r1q.
The first calculating submodule is configured to calculate the relative difference value between the pixel values of correspondingly labeled pixel points in the first image and the second image, using the pixel values as known parameters; correspondingly labeled pixel points are a pixel point lij and its counterpart rij. The calculation formula is: dij = (lij − rij)^2 / (lij + 0.01), where dij is the relative difference value between the pixel values of the corresponding pixel points of the first and second images, lij is the pixel value of the jth pixel point in the ith image block of the first image, and rij is the pixel value of the jth pixel point in the ith image block of the second image; the formula remains applicable in the special case where lij is zero.
The second calculating submodule is configured to calculate the relative difference value of the pixel values of each correspondingly labeled image block from the relative difference values of its pixel points: the relative difference values dij of the pixel points of the block are summed and the sum is averaged over the number of pixel points, giving the relative difference value di of the pixel values of the image block.
The third calculating submodule is configured to calculate the relative difference value between the pixel values of the first image and the second image from the relative difference values of the image blocks: the relative difference values di of the image blocks are summed and the sum is averaged over the number of blocks, giving the relative difference value D between the pixel values of the first image and the second image.
The pixel value obtaining module is configured to obtain a pixel value of each pixel point of the corresponding label in the first image and the second image, where the pixel value is generally between 0 and 255.
The first relative difference value obtaining module is configured to obtain the relative difference value between the pixel values of the first image and the second image, where the relative difference value is the final calculation result of the first, second and third calculating submodules.
The inclination angle calculation module calculates the corresponding left-right inclination angle of the face from the relative difference value between the pixel values of the first image and the second image: the relative difference value, as a known parameter, is substituted into the inclination angle calculation formula to calculate the inclination angle of the face; specifically, the inclination angle calculation formula is
αh = Dh ĝ, where ĝ = (PᵀP + γI)⁻¹Pᵀα.
The formula establishing module is used for establishing the inclination angle calculation formula, wherein the relative difference value and the left and right inclination angles of the human face have linear correlation, so that the inclination angle calculation formula is established by using the left and right inclination angles of the known human face image and the corresponding relative difference value.
The second relative difference value obtaining module is configured to obtain the relative difference values of the K historical face images, recorded in sequence as D1, ..., DK; let the matrix
P = (D1, D2, ..., DK)ᵀ.
The matrix P comprises relative difference values of K historical face images.
The left and right inclination angle acquisition module is used to acquire the left and right inclination angles of the K historical face images, recorded in sequence as α1, ..., αK; let the matrix
α = (α1, α2, ..., αK)ᵀ.
The matrix alpha comprises left and right inclination angles of K historical face images.
The formula establishing submodule is used to establish the equation set Pg = α, calculate the linear relation between P and α, and solve to obtain
ĝ = (PᵀP + γI)⁻¹Pᵀα.
For a face image with an arbitrary unknown inclination angle, first calculate its relative difference value Dh, and then use the formula
αh = Dh ĝ
to calculate the inclination angle αh. The value of γ has a significant influence on the solution of the equation set; extensive analysis and experimental verification show that the preferable range of γ is between 0.01 and 0.1, with more preferable values of 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09 or 0.1, and
‖Pĝ − α‖²
represents the fitness of the solution ĝ to the above inclination angle calculation formula; the smaller
‖Pĝ − α‖²
is, the more reasonable the corresponding value of γ and the better the solution of the above inclination angle calculation formula fits the current data. Therefore, the most preferable γ is the value that makes
‖Pĝ − α‖²
minimal; this criterion ensures that the solution of the inclination angle calculation formula has excellent numerical stability and robustness.
Referring to fig. 9, in an embodiment of the present invention, the present invention further provides a computer device, where the computer device 4 is represented in a form of a general-purpose computing device, and components of the computer device 4 may include, but are not limited to: one or more processors or processing units 6, a system memory 11, and a bus 7 that couples various system components including the system memory 11 and the processing unit 6.
Bus 7 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Computer device 4 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 4 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 11 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 12 and/or cache memory 13. The computer device 4 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, the storage system 15 may be used to read from and write to non-removable, nonvolatile magnetic media (commonly referred to as a "hard drive"). Although not shown in FIG. 9, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to the bus 7 by one or more data media interfaces. The memory may include at least one program product having a set (e.g., at least one) of program modules 16, the program modules 16 being configured to carry out the functions of embodiments of the invention.
A program/utility 16 having a set (at least one) of program modules 16 may be stored, for example, in memory, such program modules 16 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. The program modules 16 generally perform the functions and/or methodologies of the described embodiments of the invention.
Computer device 4 may also communicate with one or more external devices 5 (e.g., keyboard, pointing device, display 10, camera, etc.), with one or more devices that enable a user to interact with computer device 4, and/or with any devices (e.g., network card, modem, etc.) that enable computer device 4 to communicate with one or more other computing devices. Such communication may be via an input/output (I/O) interface 9. Also, computer device 4 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via network adapter 8. As shown, network adapter 8 communicates with the other modules of computer device 4 over bus 7. It should be appreciated that although not shown in FIG. 9, other hardware and/or software modules may be used in conjunction with computer device 4, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 6 executes various functional applications and data processing by running the program stored in the system memory 11, for example, to implement the method for estimating the left and right inclination angles of a human face according to the embodiment of the present invention.
That is, the processing unit 6 implements, when executing the program,: after a face image is obtained, dividing the face image into a first image and a second image in an appointed mode; calculating the relative difference value of the pixel values between the first image and the second image; and calculating the corresponding left and right inclination angles of the human face according to the relative difference values.
In an embodiment of the present invention, the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for estimating the left and right tilt angles of a human face as provided in all embodiments of the present application:
that is, the program when executed by the processor implements: after a face image is obtained, dividing the face image into a first image and a second image in an appointed mode; calculating the relative difference value of the pixel values between the first image and the second image; and calculating the corresponding left and right inclination angles of the human face according to the relative difference values. Any combination of one or more computer-readable media may be employed. The computer readable medium may be a computer-readable storage medium or a computer-readable signal medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM)12, a read-only memory (ROM), an erasable programmable read-only memory (EPOM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (9)

1. A method for estimating the left and right inclination angles of a human face is characterized by comprising the following steps:
dividing the face image into a first image and a second image in a specified mode, wherein the method comprises the following steps:
acquiring the number m of rows and the number n of columns of the face image matrix, and judging whether n is an even number;
if so, dividing the face image equally into a first image and a second image, each of size m × (n/2);
if not, discarding the first or last column of the face image matrix, and dividing the face image equally into a first image and a second image, each of size m × ((n − 1)/2);
calculating a relative disparity value of pixel values between the first image and the second image, comprising:
respectively dividing the first image and the second image into a plurality of image blocks, wherein the number of pixel points contained in each image block is the same;
respectively labeling each image block in the image matrix of the first image and the image matrix of the second image, and labeling each pixel point in the image block;
calculating the relative difference value of the pixel values of the pixels of the corresponding labels in the first image and the second image;
and calculating the corresponding left and right inclination angles of the human face according to the relative difference values.
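The splitting step of claim 1 can be sketched as follows. This is a minimal NumPy sketch, not code from the patent (which contains none); the claim permits discarding either the first or the last column when n is odd, and dropping the last column here is an arbitrary choice.

```python
import numpy as np

def split_face_image(face):
    """Split an m x n face image matrix into left/right halves (claim 1).

    If the column count n is odd, the last column is discarded so that
    both halves have the same m x ((n - 1)/2) shape.
    """
    m, n = face.shape
    if n % 2 != 0:
        face = face[:, :-1]  # discard the last column to make n even
        n -= 1
    first = face[:, : n // 2]   # left half, size m x n/2
    second = face[:, n // 2 :]  # right half, size m x n/2
    return first, second

img = np.arange(15).reshape(3, 5)  # 3 x 5 matrix, odd column count
a, b = split_face_image(img)
print(a.shape, b.shape)  # both (3, 2)
```

With an even column count the image is split exactly in half and no column is discarded.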
2. The method of estimating a left-right inclination angle of a human face according to claim 1, wherein before the step of dividing the face image equally into the first image and the second image in a specified manner, the method further comprises:
judging whether the face image is vertically inclined or not;
and if so, calibrating the face image by an affine transformation method.
3. The method of claim 2, wherein the step of determining whether the face image is tilted vertically comprises:
respectively acquiring position points of two eye corners close to a nose in the face image;
and connecting the position points, and judging whether the line segment has an inclination angle with the horizontal line.
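The vertical-tilt check of claim 3 reduces to measuring the angle between the segment joining the two inner eye corners and the horizontal. A minimal sketch, assuming (x, y) pixel coordinates for the corner points (the claim specifies only the two position points and the comparison with the horizontal line):

```python
import math

def vertical_tilt_angle(left_inner_corner, right_inner_corner):
    """Angle (degrees) between the inner-eye-corner segment and the horizontal.

    A nonzero result indicates the face image is vertically tilted and
    should be calibrated, e.g. by affine transformation (claim 2).
    """
    (x1, y1), (x2, y2) = left_inner_corner, right_inner_corner
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

angle = vertical_tilt_angle((40, 52), (80, 60))
needs_calibration = abs(angle) > 0.0  # tilted if the segment is not horizontal
```

In practice a small tolerance would replace the exact zero comparison, since landmark detection is noisy.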
4. The method according to claim 1, wherein the step of calculating the relative difference value between the pixel values of the first image and the second image comprises:
calculating the relative difference value of the pixel value of the corresponding label image block according to the relative difference value of the pixel value of each pixel point;
and calculating the relative difference value of the pixel values of the first image and the second image according to the relative difference value of the pixel values of each image block.
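The block-wise aggregation of claim 4 can be sketched as below. The patent does not fix the exact per-pixel formula, so the mean absolute pixel difference per block is used here as a plausible stand-in; block size and shape are likewise assumptions.

```python
import numpy as np

def blockwise_relative_difference(first, second, block=(2, 2)):
    """Relative difference between two equally sized image halves (claim 4).

    Each half is partitioned into blocks of equal pixel count; the relative
    difference of a block pair is taken as the mean absolute pixel
    difference, and the overall value is the mean over all block pairs.
    """
    bh, bw = block
    m, n = first.shape
    diffs = []
    for i in range(0, m - bh + 1, bh):
        for j in range(0, n - bw + 1, bw):
            a = first[i:i + bh, j:j + bw].astype(float)
            b = second[i:i + bh, j:j + bw].astype(float)
            diffs.append(np.abs(a - b).mean())
    return float(np.mean(diffs))

D = blockwise_relative_difference(np.full((4, 4), 10.0), np.full((4, 4), 12.0))
print(D)  # 2.0
```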
5. The method according to claim 1, wherein the step of calculating the corresponding left-right face inclination angle according to the relative difference value comprises:
acquiring the relative difference value of the pixel values of the first image and the second image;
and calculating the corresponding left and right inclination angles of the human face according to the relative difference values of the pixel values of the first image and the second image.
6. The method of estimating a left-right inclination angle of a human face according to claim 1, wherein before the step of dividing the face image equally into the first image and the second image in a specified manner, the method further comprises:
establishing a calculation formula of the inclination angle, wherein the steps comprise:
obtaining the relative difference values of K historical face images, sequentially recorded as D_1, ..., D_K, and letting the matrix
P = (D_1, ..., D_K)^T;
acquiring the left and right inclination angles of the K historical face images, sequentially recorded as α_1, ..., α_K, and letting the vector
α = (α_1, ..., α_K)^T;
establishing the equation set Pg = α, which expresses the linear relation between P and α, and solving to obtain
g = (P^T P + γI)^(-1) P^T α;
for a face image with an unknown inclination angle, first calculating its relative difference value D_h, and then by the formula
α_h = D_h g
calculating the inclination angle α_h;
wherein γ and I are respectively a small positive number and an identity matrix, and T denotes the matrix transposition operation.
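The fit in claim 6 is a ridge-regularized least-squares solve. A minimal sketch, assuming the simplest reading in which P stacks the scalar relative differences D_1, ..., D_K as a K × 1 column; the patent does not pin down P's exact shape:

```python
import numpy as np

def fit_angle_model(D_hist, alpha_hist, gamma=1e-3):
    """Solve g = (P^T P + gamma*I)^-1 P^T alpha for the linear model Pg = alpha.

    gamma is the small positive ridge term that keeps P^T P + gamma*I
    invertible even when P is rank-deficient.
    """
    P = np.asarray(D_hist, dtype=float).reshape(-1, 1)
    alpha = np.asarray(alpha_hist, dtype=float)
    I = np.eye(P.shape[1])
    return np.linalg.solve(P.T @ P + gamma * I, P.T @ alpha)

def estimate_angle(D_h, g):
    """alpha_h = D_h * g for a new face image's relative difference D_h."""
    return float(np.atleast_1d(D_h) @ g)

g = fit_angle_model([1.0, 2.0, 3.0], [10.0, 20.0, 30.0], gamma=1e-6)
print(round(estimate_angle(2.5, g), 3))  # close to 25.0
```

Because gamma is small, the fit stays close to the ordinary least-squares solution on exactly linear data.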
7. A face left-right inclination angle estimation system is characterized by comprising:
the face image segmentation module is used for equally dividing the face image into a first image and a second image in a specified mode, and comprises:
acquiring the number m of rows and the number n of columns of the face image matrix, and judging whether n is an even number;
if yes, dividing the face image into a first image and a second image, each of size m × (n/2);
if not, discarding the first or last column of the face image matrix, and equally dividing the face image into a first image and a second image, each of size m × ((n − 1)/2);
a first calculation module for calculating a relative difference value of pixel values between the first image and the second image, comprising:
respectively dividing the first image and the second image into a plurality of image blocks, wherein the number of pixel points contained in each image block is the same;
respectively labeling each image block in the image matrix of the first image and the image matrix of the second image, and labeling each pixel point in the image block;
calculating the relative difference value of the pixel values of the pixels of the corresponding labels in the first image and the second image;
and the second calculation module is used for calculating the corresponding left and right inclination angles of the human face according to the relative difference values.
8. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any one of claims 1 to 6 when executing the program.
9. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method of any one of claims 1 to 6.
CN201810653661.2A 2018-06-22 2018-06-22 Method, system, equipment and storage medium for estimating left and right inclination angles of human face Active CN108960099B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810653661.2A CN108960099B (en) 2018-06-22 2018-06-22 Method, system, equipment and storage medium for estimating left and right inclination angles of human face

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810653661.2A CN108960099B (en) 2018-06-22 2018-06-22 Method, system, equipment and storage medium for estimating left and right inclination angles of human face

Publications (2)

Publication Number Publication Date
CN108960099A CN108960099A (en) 2018-12-07
CN108960099B (en) 2021-07-06

Family

ID=64486169

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810653661.2A Active CN108960099B (en) 2018-06-22 2018-06-22 Method, system, equipment and storage medium for estimating left and right inclination angles of human face

Country Status (1)

Country Link
CN (1) CN108960099B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102999748A (en) * 2012-12-12 2013-03-27 湖北微驾技术有限公司 Refactoring method for optimizing super resolution of facial images
CN104834928A (en) * 2015-05-08 2015-08-12 小米科技有限责任公司 Method for determining identification area in picture and device thereof
CN107358207A (en) * 2017-07-14 2017-11-17 重庆大学 A kind of method for correcting facial image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4712563B2 (en) * 2006-01-16 2011-06-29 富士フイルム株式会社 Face detection method, apparatus and program


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Face detection based on Adaboost fused with facial features; Ma Liqian; Journal of Huazhong University of Science and Technology (Natural Science Edition); 2013-10-31; full text *

Also Published As

Publication number Publication date
CN108960099A (en) 2018-12-07

Similar Documents

Publication Publication Date Title
CN110322500B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
CN109242903B (en) Three-dimensional data generation method, device, equipment and storage medium
CN110503074B (en) Information labeling method, device and equipment of video frame and storage medium
US8855406B2 (en) Egomotion using assorted features
EP3309703B1 (en) Method and system for decoding qr code based on weighted average grey method
JP2021515939A (en) Monocular depth estimation method and its devices, equipment and storage media
CN110348522B (en) Image detection and identification method and system, electronic equipment, and image classification network optimization method and system
CN108229475B (en) Vehicle tracking method, system, computer device and readable storage medium
CN111311543B (en) Image definition detection method, system, device and storage medium
JP7273129B2 (en) Lane detection method, device, electronic device, storage medium and vehicle
CN110349212B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
CN109934873B (en) Method, device and equipment for acquiring marked image
WO2019015344A1 (en) Image saliency object detection method based on center-dark channel priori information
CN108229494B (en) Network training method, processing method, device, storage medium and electronic equipment
CN110956131B (en) Single-target tracking method, device and system
CN110570435A (en) method and device for carrying out damage segmentation on vehicle damage image
CN110111341B (en) Image foreground obtaining method, device and equipment
CN114972421A (en) Workshop material identification tracking and positioning method and system
JP6244886B2 (en) Image processing apparatus, image processing method, and image processing program
CN114919584A (en) Motor vehicle fixed point target distance measuring method and device and computer readable storage medium
CN108960099B (en) Method, system, equipment and storage medium for estimating left and right inclination angles of human face
CN109901716B (en) Sight point prediction model establishing method and device and sight point prediction method
CN112434582A (en) Lane line color identification method and system, electronic device and storage medium
CN112085842A (en) Depth value determination method and device, electronic equipment and storage medium
CN116052120A (en) Excavator night object detection method based on image enhancement and multi-sensor fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant