CN113051978A - Face recognition method, electronic device and readable medium
- Publication number: CN113051978A (application CN201911380794.8A)
- Authority: CN (China)
- Prior art keywords: image, face, pedestrian, score, rectangular frame
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V: Image or video recognition or understanding (Section G, Physics; G06, Computing, calculating or counting)
- G06V40/16: Human faces, e.g. facial parts, sketches or expressions
- G06V10/993: Evaluation of the quality of the acquired pattern
- G06V40/18: Eye characteristics, e.g. of the iris
Abstract
The application relates to a face recognition method, an electronic device and a readable medium. The method acquires a pedestrian image captured by an image acquisition device; if human eyes are detected in the pedestrian image, it determines a deflection angle score of the face in the pedestrian image according to the position information of the human eyes; it calculates an image quality score from the pedestrian image; and if the pedestrian image is determined to meet a preset condition according to the deflection angle score and the image quality score, it sends the pedestrian image to a target server for face recognition. By calculating the deflection angle score and the image quality score of the face, only pedestrian images that meet the quality standard are screened out and transmitted, which reduces the amount of data transmitted, relieves the server of quality-screening a large number of images, and improves the accuracy of pedestrian image analysis.
Description
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to a face recognition method, an electronic device, and a readable medium.
Background
With the increasing power of the internet, technologies that rely on it continue to emerge; among them, face recognition has been successfully applied in many fields.
For example, in the field of computer vision, pedestrian analysis is an active research direction. In the prior art, most pedestrian analysis pipelines upload the images captured by an image acquisition device to a server; the server then screens out the images that contain pedestrians and further selects the clear pedestrian images for face detection and other processing.
However, owing to environmental factors and shooting angles, only a few of the large number of pedestrian images collected by the image acquisition device may meet the quality standard. The server must therefore perform quality screening on a large number of images, which increases its computational load and makes the calculation time excessively long.
Disclosure of Invention
In order to solve the technical problem or at least partially solve the technical problem, the present application provides a face recognition method, an electronic device and a readable medium.
In a first aspect, the present application provides a face recognition method, including:
acquiring a pedestrian image acquired by an image acquisition device;
if human eyes are detected in the pedestrian image, determining the deflection angle score of the human face in the pedestrian image according to the position information of the human eyes in the pedestrian image;
calculating an image quality score according to the pedestrian image;
and if the pedestrian image meets the preset condition according to the deflection angle score and the image quality score, sending the pedestrian image to a target server for face recognition.
Optionally, the step of determining a deflection angle score of a face in the pedestrian image according to the position information of the human eyes in the pedestrian image includes:
determining the human eye coordinates of the human eyes on the pedestrian image according to the position information;
constructing a rectangular frame on the pedestrian image based on the two human eye coordinates;
carrying out face detection on the pedestrian image to obtain a face image;
judging whether any one side of the rectangular frame is in the face image area;
if any one side of the rectangular frame is in the face image area, calculating the area of the rectangular frame to obtain the face area;
according to the formula:
calculating the face size score, where S_face represents the area of the rectangular frame, n represents a preset face area reference value, and λ1 represents the face size score;
or if any one side of the rectangular frame is not in the face image area, determining that the face size score is 0;
and determining the face size score as the deflection angle score.
Optionally, the step of constructing a rectangular frame on the pedestrian image based on the two human eye coordinates includes:
determining the midpoint of a connecting line of the two human eye coordinates as an origin, and establishing a plane rectangular coordinate system;
calculating the product of the distance between the two human eye coordinates and a preset multiple to obtain the width of a rectangular frame, wherein the midpoint of the width of the rectangular frame is located at a first preset position on the positive half-axis of the longitudinal axis of the plane rectangular coordinate system;
calculating the product of the width of the rectangular frame and a preset length-width ratio to obtain the length of the rectangular frame;
and constructing the rectangular frame according to the length of the rectangular frame, the width of the rectangular frame and the first preset position.
Optionally, the step of determining a deflection angle score of a face in the pedestrian image according to the position information of the human eyes in the pedestrian image includes:
determining the human eye coordinates of the human eyes on the pedestrian image according to the position information;
constructing a rectangular frame on the pedestrian image based on the two human eye coordinates;
determining the central coordinate of the central point of the rectangular frame on the pedestrian image;
carrying out face detection on the pedestrian image to obtain a face image;
determining the midpoint coordinate of the center point of the face image on the pedestrian image;
calculating the distance between the center point coordinate and the midpoint coordinate to obtain a reference distance;
calculating the face symmetry score using the formula λ2 = 1 − d, where λ2 represents the face symmetry score and d represents the normalized value of the reference distance;
and determining the face symmetry score as the deflection angle score.
Optionally, the step of calculating an image quality score from the pedestrian image comprises:
carrying out graying processing on the pedestrian image to obtain a grayscale image;
determining a normalized histogram of the face image based on the grayscale image;
calculating the mean value of the distribution probabilities of the normalized histogram to obtain the sharpness score of the face image, where n_i represents the number of pixels with gray value r_i in the image and MN represents the total number of pixels in the face image;
determining the sharpness score as the image quality score.
Optionally, the step of calculating an image quality score from the pedestrian image comprises:
carrying out graying processing on the pedestrian image to obtain a grayscale image;
determining a normalized histogram of the face image based on the grayscale image;
calculating a face contrast score, where n_i represents the number of pixels with gray value r_i in the image, MN represents the total number of pixels in the face image, the mean value and the variance δ2 of the gray distribution are computed, and λ4 represents the face contrast score;
and determining the face contrast score as the image quality score.
Optionally, the step of calculating an image quality score from the pedestrian image comprises:
carrying out face detection on the pedestrian image to obtain a face image;
carrying out graying processing on the pedestrian image to obtain a grayscale image;
extracting the gray value of each pixel point in the gray image to obtain a plurality of gray values;
calculating the average value of the gray values to obtain the average value of the gray values of the pedestrian image;
calculating the average value of the gray values of a plurality of pixel points in the face image area to obtain the average value of the gray values of the face image;
according to the formula:
calculating the brightness score, where the average gray value of the face image and the average gray value I_m of the pedestrian image are compared, and λ5 represents the brightness score;
determining the brightness score as the image quality score.
Optionally, before the step of acquiring the image of the pedestrian acquired by the image acquisition device, the method further includes:
controlling the image acquisition device to acquire an image of a preset monitoring area;
if a pedestrian is detected in the image, performing Kalman filtering on the image to obtain position information of the pedestrian on the image at the next moment;
and controlling the image acquisition device to shoot the pedestrian according to the position information.
In a second aspect, the present application further provides an electronic device, including a memory and a processor, where the memory stores a computer program executable on the processor, and the processor implements the steps of the method according to any one of the above first aspects when executing the computer program.
In a third aspect, the present application also provides a computer readable medium having non-volatile program code executable by a processor, the program code causing the processor to perform the method of any of the first aspects.
Compared with the prior art, the technical solution provided by the embodiments of the application has the following advantages. The method acquires a pedestrian image captured by an image acquisition device; if human eyes are detected in the pedestrian image, it determines a deflection angle score of the face in the pedestrian image according to the position information of the human eyes; it calculates an image quality score from the pedestrian image; and if the pedestrian image is determined to meet a preset condition according to the deflection angle score and the image quality score, it sends the pedestrian image to a target server for face recognition. By calculating the deflection angle score and the image quality score of the face, only pedestrian images that meet the quality standard are screened out and transmitted, which reduces the amount of data transmitted, relieves the server of quality-screening a large number of images, and improves the accuracy of pedestrian image analysis.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a schematic flow chart of a face recognition method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of step S102 according to an embodiment of the present application;
fig. 3 is a schematic flowchart of step S202 provided in the embodiment of the present application;
fig. 4 is a schematic flow chart of another face recognition method according to an embodiment of the present application;
fig. 5 is a structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
At present, in the field of pedestrian analysis, most approaches upload the images captured by the image acquisition device to a server; the server screens out the images that contain pedestrians and then further selects the clear pedestrian images for face detection and other work. However, owing to environmental factors and shooting angles, only a few of the large number of collected pedestrian images may meet the quality standard, so the server must perform quality screening on a large number of images, which increases its computational load and makes the calculation time excessively long. On this basis, the invention first provides a face recognition method. As shown in fig. 1, the method includes:
step S101, acquiring a pedestrian image acquired by an image acquisition device;
in the embodiment of the present invention, the image capturing device may enable the monitoring camera to be used for capturing the pedestrian image in the monitored area, wherein the captured pedestrian image may be stored in the memory of the image capturing device or in other terminals, and the specific setting mode may be determined according to the actual situation.
In practical applications, the face recognition method can be applied to front-end equipment that is connected with the image acquisition device to obtain the pedestrian image it captures.
Step S102, if human eyes are detected in the pedestrian image, determining a deflection angle score of the human face in the pedestrian image according to position information of the human eyes in the pedestrian image;
In the embodiment of the invention, if human eyes are detected in the pedestrian image, the position information of the human eyes on the pedestrian image is determined. Specifically, the position information may be the position coordinates of the human eyes on the pedestrian image, the offset of the human eyes on the pedestrian image, and so on. The deflection angle score of the face to which the eyes belong is then calculated from this position information, so that the image quality of the face portion of the pedestrian image can be assessed.
The deflection angle score of the face may be calculated from the position information in several ways: the actual positions of the eyes on the pedestrian image may be matched against preset standard positions, and the deflection angle score associated with the matched standard positions is taken as the score of the face under test; alternatively, a neural network model may be trained with pedestrian image samples annotated in advance with face deflection angle scores and eye positions, and the trained model then determines the deflection angle score of the face in the pedestrian image from the position information. The specific choice may be determined according to the actual situation.
Step S103, calculating an image quality score according to the pedestrian image;
in the embodiment of the present application, the image quality score is a score for reflecting the overall quality of the pedestrian image, such as: scoring the sharpness of the pedestrian image, scoring the contrast of the face in the pedestrian image, scoring the brightness of the pedestrian image, and the like.
The image quality score may be calculated from the pedestrian image with a pre-trained neural network model, or it may be obtained by matching the pedestrian image against reference images and taking the image quality score associated with the matched reference image.
And step S104, if the pedestrian image is determined to meet the preset condition according to the deflection angle score and the image quality score, sending the pedestrian image to a target server for face recognition.
In the embodiment of the present invention, the preset condition may be determined according to the actual situation. For example, the preset condition may be a preset threshold: the deflection angle score and the image quality score each correspond to a preset weight, with the score type that matters more in practice receiving the larger weight. When the sum of the deflection angle score multiplied by its weight and the image quality score multiplied by its weight is greater than the preset threshold, the pedestrian image is determined to meet the preset condition and is sent to the target server so that the server can perform face recognition based on it.
In practical applications, the face recognition method can be applied to front-end equipment. The front-end equipment is connected with the image acquisition device to obtain the pedestrian images it collects and sends only the pedestrian images that meet the preset condition to the server, which reduces the amount of data transmitted, reduces the computational load of the server, and improves the accuracy of pedestrian image analysis.
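For illustration only, the weighted-threshold check described above can be sketched in Python as follows; the weights and threshold (`W_ANGLE`, `W_QUALITY`, `SCORE_THRESHOLD`) are hypothetical values, since the patent leaves them to the implementer:

```python
# Hypothetical weights and threshold; the patent only says the more important
# score type receives the larger weight and the weighted sum is compared to a threshold.
W_ANGLE = 0.6
W_QUALITY = 0.4
SCORE_THRESHOLD = 0.7

def meets_preset_condition(deflection_score: float, quality_score: float) -> bool:
    """Weighted sum of the two scores compared against the preset threshold."""
    combined = W_ANGLE * deflection_score + W_QUALITY * quality_score
    return combined > SCORE_THRESHOLD

# Only images that pass this check are uploaded to the target server for face
# recognition; the rest are discarded on the front-end device.
```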
The inventor has found that, in practical pedestrian detection, a traditional face detection method only marks the face region; when the pedestrian's face is not directly facing the camera, or the person lowers the head or turns to the side, the pedestrian is still captured, but the resulting image quality is poor and unfavourable for subsequent face recognition. Therefore, another embodiment of the invention provides an alternative implementation of step S102. As shown in fig. 2, step S102 of determining the deflection angle score of the face in the pedestrian image according to the position information of the human eyes includes:
step S201, determining the coordinates of the human eyes on the pedestrian image according to the position information;
In the embodiment of the invention, a plane rectangular coordinate system is established on the pedestrian image; the origin of the coordinate system may be chosen according to the actual situation, and the eye coordinates are then determined from the position information of the eyes on the pedestrian image.
Step S202, constructing a rectangular frame on the pedestrian image based on the two human eye coordinates;
In the embodiment of the present invention, the rectangular frame may be constructed on the pedestrian image from the two eyes of the same face by using a preset ratio between the rectangular frame and the eye coordinates together with the position of the line segment connecting the eye coordinates inside the frame. On this basis, as shown in fig. 3, step S202 of constructing a rectangular frame on the pedestrian image based on the two eye coordinates further includes:
step S301, determining the midpoint of a connecting line of the two human eye coordinates as an origin, and establishing a plane rectangular coordinate system;
In the embodiment of the invention, the two eye coordinates are connected to obtain a line segment, and the midpoint of this segment is taken as the origin to establish a plane rectangular coordinate system. This coordinate system may be the same as or different from the original coordinate system of the two eye coordinates; the specific choice may be determined according to the actual situation.
Step S302, calculating the product of the distance between the two human eye coordinates and a preset multiple to obtain the width of a rectangular frame, wherein the midpoint of the width of the rectangular frame is located at a first preset position on the positive half-axis of the longitudinal axis of the plane rectangular coordinate system;
In the embodiment of the present invention, the distance between the two eye coordinates in the coordinate system is calculated first (for convenience, the coordinate axes are usually scaled by actual physical distance, e.g. one unit corresponds to one centimetre, so a distance of two units between two points means two centimetres). This distance is then multiplied by a preset multiple to obtain the width of the rectangular frame; the preset multiple is determined by the ratio of the inter-eye distance to the face width, so the resulting width is the width of the face in the ideal, non-deflected state. The midpoint of this width is placed at a first preset position on the positive half-axis of the longitudinal axis, where the first preset position is determined from an empirical value of the distance from the eyes to the top of the face.
Step S303, calculating the product of the width of the rectangular frame and a preset length-width ratio to obtain the length of the rectangular frame;
In the embodiment of the present invention, the preset aspect ratio is determined from an empirical value of the face length-to-width ratio; for example, if the face length-to-width ratio is 4:3, the preset aspect ratio can be set to 4:3 and the length of the rectangular frame is calculated accordingly. For instance, if the width of the rectangular frame is 30 cm, its length works out to 40 cm under the preset aspect ratio.
Step S304, constructing the rectangular frame according to the length of the rectangular frame, the width of the rectangular frame and the first preset position.
In the embodiment of the invention, after the width of the rectangular frame and its position on the pedestrian image are obtained, the length of the rectangular frame is calculated from the aspect ratio, and sides of that length are drawn at both ends of the width, perpendicular to it, to form the rectangular frame.
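As a sketch only, the construction of steps S301 to S304 might look like the following; the numeric constants (width multiple, aspect ratio, first preset position) are assumed placeholders, since the patent defines them only as empirical values of face proportions:

```python
import math

WIDTH_MULTIPLE = 2.0       # assumed ratio of face width to inter-eye distance
ASPECT_RATIO = 4.0 / 3.0   # assumed face length-to-width ratio
TOP_OFFSET_RATIO = 0.5     # assumed first preset position, as a fraction of the eye distance

def build_face_rectangle(eye_a, eye_b):
    """Construct the rectangular frame from two eye coordinates (x, y) in image pixels."""
    (x1, y1), (x2, y2) = eye_a, eye_b
    eye_dist = math.hypot(x2 - x1, y2 - y1)
    origin_x, origin_y = (x1 + x2) / 2.0, (y1 + y2) / 2.0   # midpoint of the eye line (step S301)

    width = WIDTH_MULTIPLE * eye_dist                       # step S302
    length = ASPECT_RATIO * width                           # step S303

    # Place the midpoint of the top side above the eye line (image y grows downwards),
    # then extend the two long sides downwards to close the rectangle (step S304).
    top_y = origin_y - TOP_OFFSET_RATIO * eye_dist
    return (origin_x - width / 2.0, top_y, origin_x + width / 2.0, top_y + length)

print(build_face_rectangle((100, 120), (140, 120)))   # e.g. (80.0, 100.0, 160.0, 206.67)
```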
Step S203, carrying out face detection on the pedestrian image to obtain a face image;
In the embodiment of the invention, the face image contains the pedestrian's face, but the whole face may not be visible because the pedestrian raises or lowers the head or turns it to the left or right.
Step S204, judging whether any side of the rectangular frame is in the face image area;
In the embodiment of the invention, the degree of face deflection can be judged from how the rectangular frame overlaps the face image area: since the rectangular frame is an ideally sized face constructed from the inter-eye distance, when the face is deflected relative to the shooting angle of the image acquisition device, a side of the rectangular frame will extend beyond the face image area.
In practical application, when the human face deflects left and right relative to the shooting angle of the image acquisition device, the distance between human eyes in a pedestrian image is reduced, and the whole area of the rectangular frame is reduced; when the face deflects up and down (head rises or falls) relative to the shooting angle of the image acquisition device, the rectangular frame exceeds the face area in the pedestrian image.
If yes, step S205, calculating the area of the rectangular frame to obtain the face area;
step S206, calculating the face size score according to a formula:
where S_face represents the area of the rectangular frame, n represents a preset face area reference value, and λ1 represents the face size score;
In the embodiment of the invention, n is a preset face area reference value, i.e. an empirical value of the face area. The larger the area of the rectangular frame, the higher the face size score, so it can be judged whether the face size in the pedestrian image meets the preset condition; this reduces the computational load on the server and improves face recognition accuracy.
If not, step S207, determining that the face size score is 0;
In the embodiment of the invention, if any side of the rectangular frame extends beyond the face area, the face is likely tilted up or down (head raised or lowered) relative to the shooting angle of the image acquisition device. To preserve face recognition accuracy, such a pedestrian image is not selected, so the face size score is set to 0 in this case.
And step S208, determining the face size score as the deflection angle score.
In the embodiment of the invention, the rectangular frame is a face of ideal size constructed from the inter-eye distance. When the face is deflected relative to the shooting angle of the image acquisition device, an edge of the rectangular frame extends beyond the face image area, so the degree of deflection can be judged from the overlap between the rectangular frame and the face image area, and the deflection angle score of the face in the pedestrian image is determined accordingly.
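A minimal sketch of the face size branch (steps S204 to S208). The exact form of the score formula is shown only as a figure in the patent, so the capped area ratio used here, and the helper `inside_face_region`, are assumptions:

```python
def face_size_score(rect, face_box, reference_area):
    """
    rect and face_box are (x_min, y_min, x_max, y_max) boxes: the constructed frame
    and the detected face image; reference_area is the preset face area reference n.
    Returns the face size score lambda_1.
    """
    def inside_face_region(r, f):
        # All four sides of the frame must lie within the face image area (step S204).
        return r[0] >= f[0] and r[1] >= f[1] and r[2] <= f[2] and r[3] <= f[3]

    if not inside_face_region(rect, face_box):
        return 0.0                                          # step S207
    s_face = (rect[2] - rect[0]) * (rect[3] - rect[1])      # step S205: area of the frame
    return min(s_face / reference_area, 1.0)                # step S206: assumed form S_face / n, capped at 1
```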
In addition, in the embodiment of the present invention, a symmetry score of the face may also be calculated from the eye coordinates. In this case, step S102 of determining the deflection angle score of the face in the pedestrian image according to the position information of the human eyes includes:
determining the human eye coordinates of the human eyes on the pedestrian image according to the position information;
constructing a rectangular frame on the pedestrian image based on the two human eye coordinates;
In the embodiment of the present invention, both the manner of determining the eye coordinates on the pedestrian image and the manner of constructing the rectangular frame from the two eye coordinates may follow the specific implementations described in the above embodiment, and the details are not repeated here.
Determining the central coordinate of the central point of the rectangular frame on the pedestrian image;
In the embodiment of the invention, the center point of the rectangular frame lies at the intersection of its two diagonals; its center coordinate on the pedestrian image is determined, and the center coordinate and the eye coordinates lie in the same coordinate system.
Carrying out face detection on the pedestrian image to obtain a face image;
in the embodiment of the invention, the face image contains the face part of the pedestrian.
Determining the midpoint coordinate of the center point of the face image on the pedestrian image;
in the embodiment of the invention, the center point of the face image is positioned at the intersection point of two diagonal lines of the face image, and the coordinate of the center point on the pedestrian image is determined to obtain the midpoint coordinate.
Calculating the distance between the center point coordinate and the midpoint coordinate to obtain a reference distance;
in the embodiment of the present invention, calculating the distance between the coordinates of the center point and the coordinates of the midpoint may obtain a reference distance, for example: the coordinate of the center point is (0, 1), the coordinate of the midpoint is (0, 0), and the reference distance is 1.
Calculating the face symmetry score using the formula λ2 = 1 − d, where λ2 represents the face symmetry score and d represents the normalized value of the reference distance;
In the embodiment of the present invention, d is the reference distance normalized to the range 0 to 1. For example, if the distance from the center of the face image to the focus is 2 and the reference distance is 1, the former is scaled to 1 and the reference distance is scaled proportionally, giving a normalized reference distance of 0.5. In practical applications, the image acquisition device records and stores the position of the focus when shooting the pedestrian image, and the reference distance is normalized by this proportional scaling; the face symmetry score is then obtained from the formula λ2 = 1 − d. The smaller the distance between the center point coordinate and the midpoint coordinate, the higher the symmetry score and the more symmetric the face image.
And determining the face symmetry score as the deflection angle score.
In the embodiment of the invention, the rectangular frame is constructed around the center point between the actually photographed eyes, so the center of the rectangular frame and the center between the eyes lie on the same line; the symmetry of the face can therefore be calculated by comparing the center of the rectangular frame with the center of the face image.
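A sketch of the symmetry branch. The patent normalizes the reference distance through a proportional scaling against a recorded focus position; here that scaling is abstracted into a caller-supplied normalization distance, which is an assumption:

```python
import math

def face_symmetry_score(rect_center, face_center, norm_distance):
    """
    rect_center: center of the constructed rectangular frame on the pedestrian image.
    face_center: center of the detected face image.
    norm_distance: distance used to scale the reference distance into [0, 1] (assumed input).
    Returns lambda_2 = 1 - d.
    """
    reference = math.dist(rect_center, face_center)   # reference distance
    d = min(reference / norm_distance, 1.0)           # normalized reference distance
    return 1.0 - d
```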
Further, the deflection angle score of the face in the pedestrian image can be determined by combining the face symmetry score and the face size score. In practical applications, each of the two scores may correspond to a preset weight, with the score type that receives more attention given the larger weight; the deflection angle score is obtained by multiplying each score by its weight and adding the products. The specific weighting may be determined according to the actual situation.
In the embodiment of the present invention, in step S103, the step of calculating an image quality score according to the pedestrian image includes:
carrying out graying processing on the pedestrian image to obtain a grayscale image;
determining a normalized histogram of the face image based on the grayscale image;
calculating the mean value of the distribution probabilities of the normalized histogram to obtain the sharpness score of the face image, where n_i represents the number of pixels with gray value r_i in the image and MN represents the total number of pixels in the face image; determining the sharpness score as the image quality score.
In the embodiment of the present invention, the pedestrian image is first converted to grayscale. Probability statistics over the gray value of each pixel in the grayscale image yield a normalized histogram containing the distribution probabilities of the different gray values, and the sharpness score of the face image is obtained as the mean of these distribution probabilities, which is used to judge whether the sharpness of the pedestrian image meets the preset condition. In this embodiment the image quality score is based on the sharpness score of the pedestrian image. Further, step S103 of calculating an image quality score from the pedestrian image may also include:
carrying out graying processing on the pedestrian image to obtain a grayscale image;
determining a normalized histogram of the face image based on the grayscale image;
calculating a face contrast score, where n_i represents the number of pixels with gray value r_i in the image, MN represents the total number of pixels in the face image, the mean value and the variance δ2 of the gray distribution are computed, and λ4 represents the face contrast score; and determining the face contrast score as the image quality score.
In the embodiment of the invention, the face contrast score of the pedestrian image can be determined by calculating the variance over the 256 gray levels (0 to 255), so that pedestrian images whose face contrast meets the preset condition are screened out; this reduces the subsequent computational load on the server and improves face recognition accuracy.
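The two histogram-based scores can be sketched together with OpenCV and NumPy. Both formulas appear only as figures in the patent, so reading the sharpness score as a function of the histogram mean and the contrast score as the (rescaled) histogram variance is an assumption:

```python
import cv2
import numpy as np

def histogram_scores(face_bgr):
    """Return (sharpness_score, contrast_score) in [0, 1] for a face image crop (assumed forms)."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    mn = gray.size                                     # MN: total number of pixels
    n_i = np.bincount(gray.ravel(), minlength=256)     # n_i: count of pixels with gray value r_i
    p = n_i / mn                                       # normalized histogram p(r_i) = n_i / MN

    r = np.arange(256)
    mean_gray = float((r * p).sum())                   # histogram mean
    variance = float((((r - mean_gray) ** 2) * p).sum())

    sharpness = mean_gray / 255.0                      # one possible reading of the histogram-mean score (assumption)
    contrast = min(variance / (127.5 ** 2), 1.0)       # lambda_4: rescaled variance (assumption)
    return sharpness, contrast
```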
Optionally, in practical applications, the face contrast score and the face sharpness score may each correspond to a preset weight, with the type of score that matters more receiving the larger weight; the image quality score is then obtained by multiplying each score by its weight and adding the products. The specific weighting may be determined according to the actual situation.
Further, according to step S103, the step of calculating an image quality score according to the pedestrian image includes:
carrying out face detection on the pedestrian image to obtain a face image;
in the embodiment of the invention, the face detection is used for detecting a face image from a pedestrian image, wherein the face image comprises a face part of a pedestrian.
Carrying out graying processing on the pedestrian image to obtain a grayscale image;
extracting the gray value of each pixel point in the gray image to obtain a plurality of gray values;
calculating the average value of the gray values to obtain the average value of the gray values of the pedestrian image;
calculating the average value of the gray values of a plurality of pixel points in the face image area to obtain the average value of the gray values of the face image;
according to the formula:
calculating the brightness score, where the average gray value of the face image and the average gray value I_m of the pedestrian image are compared, and λ5 represents the brightness score; and determining the brightness score as the image quality score.
In the embodiment of the invention, the difference between the average gray value of the pixels in the face image region and the average gray value of the pixels in the whole pedestrian image intuitively reflects the brightness of the face in the pedestrian image: the smaller this difference, the higher the brightness score, the smaller the overall brightness difference of the image, and the more readily the preset condition is met.
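A sketch of the brightness score, again with the caveat that the patent's formula is shown only as a figure; mapping the absolute difference between the two gray means into a score in [0, 1] is an assumption, as is passing the face region as a box:

```python
import cv2

def brightness_score(pedestrian_bgr, face_box):
    """face_box is (x_min, y_min, x_max, y_max) of the detected face on the pedestrian image."""
    gray = cv2.cvtColor(pedestrian_bgr, cv2.COLOR_BGR2GRAY)
    x0, y0, x1, y1 = face_box
    face_mean = float(gray[y0:y1, x0:x1].mean())   # average gray value of the face image
    image_mean = float(gray.mean())                # I_m: average gray value of the pedestrian image
    # Smaller difference between the two means -> higher brightness score (lambda_5, assumed mapping).
    return 1.0 - abs(face_mean - image_mean) / 255.0
```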
In addition, in practical applications the image quality score may be calculated from the brightness score, the face contrast score, the sharpness score and their respective weights; the specific weighting can follow the above embodiments and is not repeated here.
In the embodiment of the present invention, as shown in fig. 4, before the step of acquiring the image of the pedestrian acquired by the image acquisition device in step S101, the method further includes:
step S401, controlling the image acquisition device to acquire an image of a preset monitoring area;
in the embodiment of the present invention, the image capturing device may be a monitoring camera, and is configured to capture an image of a preset monitoring area.
Step S402, if a pedestrian is detected in the image, performing Kalman filtering on the image to obtain position information of the pedestrian on the image at the next moment;
In the embodiment of the invention, when a pedestrian is detected in the image, Kalman filtering is used to predict the position of the pedestrian on the image at the next moment so that the image acquisition device can capture the pedestrian. Preferably, these steps can be executed on the NNIE module of a Hi35xx-series chip, which further improves the accuracy of the computation.
And S403, controlling the image acquisition device to shoot the pedestrian according to the position information.
In the embodiment of the invention, by predicting the position of the pedestrian at the next moment, the image acquisition device is controlled to capture the pedestrian, which improves the sharpness of the captured image and ensures the accuracy of subsequent face recognition.
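A sketch of the prediction step using OpenCV's Kalman filter with a constant-velocity state (x, y, vx, vy). The noise settings, initialization and example detections are assumptions; the patent only states that Kalman filtering yields the pedestrian's position at the next moment:

```python
import cv2
import numpy as np

def make_pedestrian_kf():
    """Constant-velocity Kalman filter; the measurement is the pedestrian's (x, y) position."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], dtype=np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2        # assumed noise levels
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.errorCovPost = np.eye(4, dtype=np.float32)                  # assumed initial uncertainty
    return kf

kf = make_pedestrian_kf()
kf.statePost = np.array([[120.0], [240.0], [0.0], [0.0]], dtype=np.float32)  # first detection

for x, y in [(124.0, 238.0), (129.0, 236.0)]:      # subsequent example detections
    kf.predict()
    kf.correct(np.array([[x], [y]], dtype=np.float32))

next_pos = kf.predict()            # predicted (x, y) at the next moment
print(next_pos[:2].ravel())        # used to steer the image acquisition device toward the pedestrian
```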
In still another embodiment of the present invention, there is also provided an electronic apparatus including a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing the face recognition method in the embodiment of the method when executing the program stored in the memory.
According to the electronic device provided by the embodiment of the invention, by executing the program stored in the memory the processor acquires a pedestrian image captured by the image acquisition device; if human eyes are detected in the pedestrian image, determines a deflection angle score of the face according to the position information of the human eyes; calculates an image quality score from the pedestrian image; and, if the pedestrian image is determined to meet the preset condition according to the deflection angle score and the image quality score, sends the pedestrian image to a target server for face recognition. By calculating the deflection angle score and the image quality score of the face, only pedestrian images that meet the quality standard are screened out and transmitted, which reduces the amount of data transmitted, relieves the server of quality-screening a large number of images, and improves the accuracy of pedestrian image analysis.
The communication bus 1140 mentioned in the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 1140 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 5, but this is not intended to represent only one bus or type of bus.
The communication interface 1120 is used for communication between the electronic device and other devices.
The memory 1130 may include a Random Access Memory (RAM), and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor 1110 may be a general-purpose processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the integrated circuit may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, or discrete hardware components.
In still another embodiment of the present invention, a computer-readable storage medium is further provided, on which a program of a face recognition method is stored, and the program of the face recognition method realizes the steps of the face recognition method according to the method embodiment when executed by a processor.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions according to the embodiments of the invention are brought about in whole or in part when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (ssd)), among others.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. A face recognition method, comprising:
acquiring a pedestrian image acquired by an image acquisition device;
if human eyes are detected in the pedestrian image, determining the deflection angle score of the human face in the pedestrian image according to the position information of the human eyes in the pedestrian image;
calculating an image quality score according to the pedestrian image;
and if the pedestrian image meets the preset condition according to the deflection angle score and the image quality score, sending the pedestrian image to a target server for face recognition.
2. The face recognition method of claim 1, wherein the step of determining the deflection angle score of the face in the pedestrian image according to the position information of the human eyes in the pedestrian image comprises:
determining the human eye coordinates of the human eyes on the pedestrian image according to the position information;
constructing a rectangular frame on the pedestrian image based on the two human eye coordinates;
carrying out face detection on the pedestrian image to obtain a face image;
judging whether any one side of the rectangular frame is in the face image area;
if any one side of the rectangular frame is in the face image area, calculating the area of the rectangular frame to obtain the face area;
according to the formula:
calculating the face size score, where S_face represents the area of the rectangular frame, n represents a preset face area reference value, and λ1 represents the face size score;
or if any one side of the rectangular frame is not in the face image area, determining that the face size score is 0;
and determining the face size score as the deflection angle score.
3. The face recognition method of claim 2, wherein the step of constructing a rectangular frame on the pedestrian image based on the two eye coordinates comprises:
determining the midpoint of a connecting line of the two human eye coordinates as an origin, and establishing a plane rectangular coordinate system;
calculating the product of the distance between the two human eye coordinates and a preset multiple to obtain the width of a rectangular frame, wherein the midpoint of the width of the rectangular frame is located at a first preset position on the positive half-axis of the longitudinal axis of the plane rectangular coordinate system;
calculating the product of the width of the rectangular frame and a preset length-width ratio to obtain the length of the rectangular frame;
and constructing the rectangular frame according to the length of the rectangular frame, the width of the rectangular frame and the first preset position.
4. The face recognition method of claim 1, wherein the step of determining the deflection angle score of the face in the pedestrian image according to the position information of the human eyes in the pedestrian image comprises:
determining the human eye coordinates of the human eyes on the pedestrian image according to the position information;
constructing a rectangular frame on the pedestrian image based on the two human eye coordinates;
determining the central coordinate of the central point of the rectangular frame on the pedestrian image;
carrying out face detection on the pedestrian image to obtain a face image;
determining the midpoint coordinate of the center point of the face image on the pedestrian image;
calculating the distance between the center point coordinate and the midpoint coordinate to obtain a reference distance;
calculating the face symmetry score using the formula λ2 = 1 − d, where λ2 represents the face symmetry score and d represents the normalized value of the reference distance;
and determining the face symmetry score as the deflection angle score.
5. The face recognition method of claim 1, wherein the step of calculating an image quality score from the pedestrian image comprises:
carrying out graying processing on the pedestrian image to obtain a grayscale image;
determining a normalized histogram of the face image based on the grayscale image;
calculating the mean value of the distribution probabilities of the normalized histogram to obtain the sharpness score of the face image, where n_i represents the number of pixels with gray value r_i in the image and MN represents the total number of pixels in the face image;
determining the sharpness score as the image quality score.
6. The face recognition method of claim 1, wherein the step of calculating an image quality score from the pedestrian image comprises:
carrying out graying processing on the pedestrian image to obtain a grayscale image;
determining a normalized histogram of the face image based on the grayscale image;
calculating a face contrast score, where n_i represents the number of pixels with gray value r_i in the image, MN represents the total number of pixels in the face image, the mean value and the variance δ2 of the gray distribution are computed, and λ4 represents the face contrast score;
and determining the face contrast score as the image quality score.
7. The face recognition method of claim 1, wherein the step of calculating an image quality score from the pedestrian image comprises:
carrying out face detection on the pedestrian image to obtain a face image;
carrying out graying processing on the pedestrian image to obtain a grayscale image;
extracting the gray value of each pixel point in the gray image to obtain a plurality of gray values;
calculating the average value of the gray values to obtain the average value of the gray values of the pedestrian image;
calculating the average value of the gray values of a plurality of pixel points in the face image area to obtain the average value of the gray values of the face image;
according to the formula:
a brightness score is calculated, where the average gray value of the face image and the average gray value I_m of the pedestrian image are compared, and λ5 represents the brightness score;
determining the brightness score as the image quality score.
8. The face recognition method according to claim 1, wherein, before the step of acquiring the pedestrian image acquired by the image acquisition device, the method further comprises:
controlling the image acquisition device to acquire an image of a preset monitoring area;
if a pedestrian is detected in the image, performing Kalman filtering on the image to obtain position information of the pedestrian on the image at the next moment;
and controlling the image acquisition device to shoot the pedestrian according to the position information.
9. An electronic device comprising a memory and a processor, wherein the memory stores a computer program operable on the processor, and wherein the processor implements the steps of the method of any of claims 1 to 8 when executing the computer program.
10. A computer-readable medium having non-volatile program code executable by a processor, wherein the program code causes the processor to perform the method of any of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911380794.8A CN113051978A (en) | 2019-12-27 | 2019-12-27 | Face recognition method, electronic device and readable medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911380794.8A CN113051978A (en) | 2019-12-27 | 2019-12-27 | Face recognition method, electronic device and readable medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113051978A true CN113051978A (en) | 2021-06-29 |
Family
ID=76506903
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911380794.8A Pending CN113051978A (en) | 2019-12-27 | 2019-12-27 | Face recognition method, electronic device and readable medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113051978A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024174624A1 (en) * | 2023-02-22 | 2024-08-29 | 荣耀终端有限公司 | Image capture method and electronic device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101216885A (en) * | 2008-01-04 | 2008-07-09 | 中山大学 | Passerby face detection and tracing algorithm based on video |
CN107679504A (en) * | 2017-10-13 | 2018-02-09 | 北京奇虎科技有限公司 | Face identification method, device, equipment and storage medium based on camera scene |
CN108491784A (en) * | 2018-03-16 | 2018-09-04 | 南京邮电大学 | The identification in real time of single feature towards large-scale live scene and automatic screenshot method |
CN108986245A (en) * | 2018-06-14 | 2018-12-11 | 深圳市商汤科技有限公司 | Work attendance method and terminal based on recognition of face |
CN109190532A (en) * | 2018-08-21 | 2019-01-11 | 北京深瞐科技有限公司 | It is a kind of based on cloud side fusion face identification method, apparatus and system |
CN110569822A (en) * | 2019-09-16 | 2019-12-13 | 深圳市商汤科技有限公司 | image processing method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110660066B (en) | Training method of network, image processing method, network, terminal equipment and medium | |
CN109389135B (en) | Image screening method and device | |
CN108108754B (en) | Training and re-recognition method, device and system for re-recognition network | |
CN111046959A (en) | Model training method, device, equipment and storage medium | |
CN110059666B (en) | Attention detection method and device | |
CN111935479B (en) | Target image determination method and device, computer equipment and storage medium | |
CN112241976A (en) | Method and device for training model | |
CN107564020B (en) | Image area determination method and device | |
WO2017128990A1 (en) | Method and device for target detection | |
CN111767820B (en) | Method, device, equipment and storage medium for identifying focused object | |
CN111814776B (en) | Image processing method, device, server and storage medium | |
CN111612000B (en) | Commodity classification method and device, electronic equipment and storage medium | |
CN109753886B (en) | Face image evaluation method, device and equipment | |
CN115439875A (en) | Posture evaluation device, method and system | |
CN108615025B (en) | Door identification and positioning method and system in home environment and robot | |
CN112257567B (en) | Training of behavior recognition network, behavior recognition method and related equipment | |
CN113051978A (en) | Face recognition method, electronic device and readable medium | |
CN111161211B (en) | Image detection method and device | |
CN113014876A (en) | Video monitoring method and device, electronic equipment and readable storage medium | |
CN111126106B (en) | Lane line identification method and device | |
CN112784494A (en) | Training method of false positive recognition model, target recognition method and device | |
JP7243372B2 (en) | Object tracking device and object tracking method | |
CN110874538B (en) | Method and device for evaluating decoding result of bar code and electronic equipment | |
CN108764206B (en) | Target image identification method and system and computer equipment | |
CN112529845B (en) | Image quality value determining method and device, storage medium and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |