CN112287792A - Method and device for collecting face image and electronic equipment - Google Patents


Info

Publication number
CN112287792A
CN112287792A (application CN202011138813.9A; granted as CN112287792B)
Authority
CN
China
Prior art keywords
face
preview image
data
frame
electronic equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011138813.9A
Other languages
Chinese (zh)
Other versions
CN112287792B (en
Inventor
李薇
陈洁丹
舒玉强
卢道和
郭树霞
雷声伟
蔡志杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WeBank Co Ltd
Priority to CN202011138813.9A
Publication of CN112287792A
Priority to PCT/CN2021/123356 (published as WO2022083479A1)
Application granted
Publication of CN112287792B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • H04N23/684Vibration or motion blur correction performed by controlling the image sensor readout, e.g. by controlling the integration time
    • H04N23/6845Vibration or motion blur correction performed by controlling the image sensor readout, e.g. by controlling the integration time by combination of a plurality of images sequentially taken


Abstract

The invention discloses a method and device for acquiring a face image, and an electronic device. The method for acquiring a face image comprises the following steps: determining a face pose corresponding to at least one frame of preview image based on face feature points of each frame of preview image in the at least one frame of preview image acquired by the electronic device; determining a first state of the electronic device when the preview image is acquired, the first state characterizing a smoothness of the electronic device; and, under the condition that the face pose corresponding to the at least one frame of preview image meets a first set condition and the smoothness of the electronic device characterized by the determined first state meets a second set condition, outputting a first photo based on the at least one frame of preview image, the first photo being used for identity authentication by a server.

Description

Method and device for collecting face image and electronic equipment
Technical Field
The invention relates to the technical field of face recognition in financial technology (Fintech), in particular to a method and a device for collecting face images and electronic equipment.
Background
In the related art, in application scenarios involving financial services, such as account funds security or information security, a user's identity is verified by using face recognition technology to calculate the similarity between a currently acquired face image and a pre-stored face image. However, when the image quality of the currently acquired face image is poor, authentication of a legitimate user may fail.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and device for acquiring a face image, and an electronic device, so as to solve the problem in the related art that authentication of a legitimate user fails due to poor image quality.
To this end, the technical solution of the invention is realized as follows:
the embodiment of the invention provides a method for collecting a face image, which is applied to electronic equipment and comprises the following steps:
determining a face posture corresponding to at least one frame of preview image based on a face feature point of each frame of preview image in at least one frame of preview image acquired by the electronic equipment;
determining a first state of the electronic equipment when a preview image is acquired; the first state characterizes a smoothness of the electronic device;
under the condition that the face pose corresponding to the at least one frame of preview image meets a first set condition and the smoothness of the electronic equipment represented by the determined first state meets a second set condition, outputting a first photo based on the at least one frame of preview image; the first photo is used for authentication of the server.
In the foregoing solution, the determining a first state of the electronic device when acquiring the preview image includes:
and under the condition that the face pose corresponding to the at least one frame of preview image meets the first set condition, determining a first state of the electronic equipment when the at least one frame of preview image is collected.
In the foregoing solution, the determining a first state of the electronic device when acquiring the at least one preview image includes:
converting first data into second data conforming to a quaternion format; the first data represents motion state data acquired while the electronic device collects the at least one frame of preview image;
predicting a prior estimated value of the motion state data at time t based on a theoretical estimated value of the motion state data at time t-1; wherein t is a positive integer, and the theoretical estimated value and the prior estimated value are data conforming to the quaternion format;
determining third data corresponding to time t based on the second data corresponding to time t and the prior estimated value at time t;
inputting the third data corresponding to time t into a set model to obtain the first state of the electronic device when the preview image is acquired at time t; wherein
the set model is used for determining the corresponding first state according to the input data.
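The conversion and prediction steps above can be sketched as follows. This is a minimal illustration, assuming the "second data" is a rotation quaternion built from a gyroscope sample and the prior at time t is the theoretical estimate at t-1 composed with that rotation; the patent does not disclose these formulas explicitly, so the function names and the integration scheme are assumptions.

```python
import math

def quat_multiply(a, b):
    # Hamilton product of two quaternions (w, x, y, z).
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def gyro_to_delta_quat(omega, dt):
    # Convert an angular-velocity sample (rad/s) taken over dt seconds
    # into a small rotation quaternion -- one way to obtain "second data".
    wx, wy, wz = omega
    norm = math.sqrt(wx*wx + wy*wy + wz*wz)
    if norm < 1e-12:
        return (1.0, 0.0, 0.0, 0.0)
    half = norm * dt / 2.0
    s = math.sin(half) / norm
    return (math.cos(half), wx*s, wy*s, wz*s)

def predict_prior(theoretical_prev, omega, dt):
    # Prior estimated value at time t from the theoretical estimate at t-1.
    return quat_multiply(theoretical_prev, gyro_to_delta_quat(omega, dt))
```

The "third data" would then be some residual between the measured quaternion and this prior, which the set model maps to a first state.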
In the foregoing solution, when converting the first data into the second data conforming to the quaternion format, the method further includes:
deleting the first data corresponding to a first period; the starting time of the first period corresponds to the starting time at which the electronic device begins collecting the at least one frame of preview image.
In the foregoing solution, when the third data corresponding to time t is determined based on the second data corresponding to time t and the prior estimated value at time t, the method further includes:
replacing a first coefficient based on at least one confidence; each confidence in the at least one confidence corresponds to one type of motion state data; the first coefficient characterizes a calculation parameter of a Kalman filtering algorithm.
In the foregoing solution, the replacing the first coefficient based on at least one confidence level includes:
replacing a second coefficient in the first coefficient with zero; the second coefficient represents a covariance matrix of the prior estimated value at time t;
replacing a filter coefficient in the first coefficient based on the at least one confidence.
In the above solution, when the filter coefficient in the first coefficient is replaced based on at least one confidence level, the method further includes:
determining a new filter coefficient based on the confidence degree of the acceleration data and the confidence degree of the angular velocity data under the condition that the first data comprises the acceleration data and the angular velocity data; the new filter coefficients are used to replace filter coefficients in the first coefficients.
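As a hypothetical illustration of deriving the new filter coefficient from the two confidences (the patent does not disclose the fusion formula, so this relative weighting is purely an assumption):

```python
def fused_filter_coefficient(conf_accel, conf_gyro):
    """Assumed fusion rule: weight the gyroscope branch by its relative
    confidence; falls back to an even split when both confidences are zero."""
    total = conf_accel + conf_gyro
    if total == 0:
        return 0.5
    return conf_gyro / total
```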
In the above solution, the face pose includes at least one of:
a face pose angle;
whether the face image is shielded or not;
whether the user closes his eyes; wherein
under the condition that the face pose comprises a face pose angle, the first setting condition comprises that the face pose angle is in a set angle range;
under the condition that the face pose comprises whether a face image is shielded or not, the first set condition comprises that the face image is not shielded;
in the case that the face pose includes whether the user closes the eyes, the first setting condition includes that the user does not close the eyes.
In the foregoing solution, the outputting the first photo based on the at least one frame of preview image includes:
selecting a first preview image from the at least one frame of preview image, and outputting the first preview image as the first photo; wherein
the first preview image satisfies at least one of:
the picture occupation ratio of the face image is within a set range; the picture occupation ratio represents the ratio of the area of the face image within the set face frame to the area of the set face frame;
the distance between the first area and the second area in the face image is smaller than a set threshold value; the first area represents an area where the upper lip is located; the second region characterizes the region of the lower lip.
In the above scheme, the method further comprises at least one of:
under the condition that the determined face posture does not meet the first set condition, outputting first prompt information;
under the condition that the determined first state represents that the current smoothness of the electronic device does not meet the second set condition, outputting second prompt information; wherein
the first prompt information is used for prompting a user to adjust the face posture;
the second prompt message is used for prompting the user to stabilize the electronic equipment.
The embodiment of the invention also provides a device for collecting the face image, which comprises:
the first determining unit is used for determining a face posture corresponding to at least one frame of preview image based on a face feature point of each frame of preview image in at least one frame of preview image acquired by the electronic equipment;
the second determining unit is used for determining a first state of the electronic equipment when the electronic equipment acquires the preview image; the first state characterizes a smoothness of the electronic device;
the output unit is used for outputting a first photo based on the at least one frame of preview image under the condition that the face pose corresponding to the at least one frame of preview image meets a first set condition and the smoothness of the electronic equipment represented by the determined first state meets a second set condition; the first photo is used for authentication of the server.
An embodiment of the present invention further provides an electronic device, including: a processor and a memory for storing a computer program capable of running on the processor,
the processor is used for executing any one of the above steps of the method for acquiring the face image when the computer program is run.
The embodiment of the invention also provides a storage medium, on which a computer program is stored, and the computer program realizes the steps of any one of the above methods for acquiring the face image when being executed by a processor.
According to the embodiments of the invention, the electronic device determines the face pose corresponding to at least one frame of preview image based on the face feature points of each frame of preview image in the at least one frame of preview image it collects; determines a first state of the electronic device when the preview image is collected; and outputs a first photo based on the at least one frame of preview image when the face pose corresponding to the at least one frame of preview image meets a first set condition and the determined first state indicates that the smoothness of the electronic device meets a second set condition. Because the face pose corresponding to the preview image meets the first set condition, the output first photo meets the acquisition requirements of face recognition images, avoiding incomplete collection of face feature points in the output first photo caused by an incorrect face pose. Because the smoothness of the electronic device meets the second set condition while the preview image is collected, a clear face image can be captured, avoiding blurred face images and missing face feature points in the output photo caused by shaking of the electronic device during collection. Since the face image in the first photo output by the electronic device is clear and contains all face feature points used for face recognition, the success rate of legitimate-user identity authentication can be improved when the server performs identity authentication based on the first photo.
Drawings
Fig. 1 is a schematic flow chart of an implementation of a method for acquiring a face image according to an embodiment of the present invention;
fig. 2 is a schematic view of an implementation flow of determining a first state in a method for acquiring a face image according to an embodiment of the present invention;
fig. 3 is a schematic flow chart illustrating an implementation of a method for acquiring a face image according to another embodiment of the present invention;
fig. 4 is a schematic flow chart of an implementation of a method for collecting a face image according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a device for acquiring a face image according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a hardware component structure of an electronic device according to an embodiment of the present invention.
Detailed Description
In the related art, the electronic device collects a face image and sends it to the server, and the server performs identity verification by using face recognition technology to calculate the similarity between the received face image and a pre-stored face image.
In dark environments, the face image acquired by the electronic device may be of poor quality due to insufficient ambient light.
When the image quality of the face image acquired by the electronic device is poor, for example when the image is blurred or dim, the similarity calculated by the server may be smaller than a set threshold, causing authentication of a legitimate user to fail. Even if the server preprocesses the face image to obtain a clear image, authentication of a legitimate user may still fail when the face feature points used for face recognition are incomplete.
To solve this technical problem, an embodiment of the present invention provides a method for acquiring a face image, in which the electronic device outputs a first photo based on a collected preview image when the face pose corresponding to the preview image meets a first set condition and the first state of the electronic device while collecting the preview image indicates that the smoothness of the electronic device meets a second set condition. Because the face pose corresponding to the preview image meets the first set condition, incomplete collection of face feature points in the output first photo caused by an incorrect face pose can be avoided. Because the smoothness of the electronic device meets the second set condition while the preview image is collected, a clear face image can be captured, avoiding blurred or partially missing face images in the output photo caused by shaking of the electronic device during collection. Since the face image in the first photo output by the electronic device is clear and contains all face feature points used for face recognition, the success rate of legitimate-user identity authentication can be improved when the server performs identity authentication based on the first photo.
The technical solution of the present invention is further described in detail with reference to the drawings and the specific embodiments of the specification.
Fig. 1 shows a schematic implementation flow diagram of a method for acquiring a face image according to an embodiment of the present invention. In the embodiment of the present invention, an execution subject of the method for acquiring a face image is an electronic device, for example, a terminal such as a mobile phone and a tablet computer.
Referring to fig. 1, the method for acquiring a face image according to the embodiment of the present invention includes:
s101: and determining the face pose corresponding to at least one frame of preview image based on the face feature points of each frame of preview image in at least one frame of preview image acquired by the electronic equipment.
When the electronic device starts an image acquisition mode, at least one frame of preview image can be collected through a built-in or external camera. The electronic device determines the face feature points in the preview image by using face recognition technology, and determines the face pose corresponding to the preview image based on the determined face feature points. The face feature points represent the positions of facial features in the face image of the preview image, and include points corresponding to the face contour, eyebrows, pupils, eye corners, mouth, nose, and the like.
In practical application, the electronic device may also input at least one frame of preview image into a setting model for recognizing a face pose, and obtain the face pose output by the setting model. The setting model is used for determining the corresponding human face posture based on the human face characteristic points in the input image.
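As a rough sketch of landmark-based pose estimation, the horizontal offset of the nose tip relative to the outer eye corners can serve as a yaw proxy. The landmark layout and normalization here are simplified assumptions for illustration, not the patent's algorithm:

```python
def estimate_yaw_from_landmarks(left_eye_outer_x, right_eye_outer_x, nose_tip_x):
    """Yaw proxy in [-1, 1]: 0.0 ~ frontal face; the sign indicates which
    way the head is turned. Assumes the left eye corner is left of the right."""
    mid = (left_eye_outer_x + right_eye_outer_x) / 2.0
    half_span = (right_eye_outer_x - left_eye_outer_x) / 2.0
    if half_span == 0:
        return 0.0
    return (nose_tip_x - mid) / half_span
```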
And under the condition that the face pose corresponding to the preview image is determined, the electronic equipment judges whether the face pose meets a first set condition. The first setting condition is set based on an image acquisition requirement for face recognition. And when the human face posture does not meet the first set condition, continuously acquiring the preview image.
In one embodiment, the face pose includes at least one of:
a face pose angle;
whether the face image is shielded or not;
whether the user closes his eyes.
Under the condition that the face pose comprises a face pose angle, the first setting condition comprises that the face pose angle is in a set angle range;
under the condition that the face pose comprises whether the face image is shielded or not, the first set condition comprises that the face image is not shielded;
in the case that the face pose represents whether the user closes the eyes, the first setting condition includes that the user does not close the eyes.
Here, when the face pose angles are within the set angle ranges, the collected preview image is characterized as a frontal face image. The face pose angles include a pitch angle (pitch), a yaw angle (yaw), and a roll angle (roll), where the pitch angle represents rotation about the X-axis, the yaw angle represents rotation about the Y-axis, and the roll angle represents rotation about the Z-axis. Illustratively, the set angle ranges for the pitch angle and the yaw angle are each -15° to 15°, and the set angle range for the roll angle is -10° to 10°.
When the number of face feature points identified by the electronic device based on the face recognition algorithm is smaller than a set number, face feature points are missing and the face image is considered occluded. The set number characterizes the number of face feature points defined by the face recognition algorithm, for example 68. The electronic device can also compare the face feature points of at least two adjacent frames of preview images to determine whether face feature points are missing, and thereby whether the face image is occluded.
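The occlusion heuristic described above reduces to a count comparison (68 being the example full landmark set):

```python
def face_occluded(detected_landmark_count, expected_landmark_count=68):
    """Occlusion rule from the text: fewer detected landmarks than the
    algorithm's full set implies missing points, i.e. an occluded face."""
    return detected_landmark_count < expected_landmark_count
```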
The electronic device can detect the position of the pupil and the positions of the upper and lower eyelids based on a face recognition algorithm, and determine a first distance between the upper eyelid and the lower eyelid based on their positions. When no pupil is detected, it is determined that the user's eyes are closed; when a pupil is detected, it is determined that the user's eyes are open. Alternatively, when the first distance is smaller than a set distance, it is determined that the user's eyes are closed; when a pupil is detected and the first distance is greater than or equal to the set distance, it is determined that the user's eyes are open.
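Combining both signals, the eye-closure rule can be sketched as:

```python
def eyes_closed(pupil_detected, eyelid_gap, min_gap):
    """Eye-closure rule from the text: closed if no pupil is detected, or
    if the upper/lower-eyelid distance is below the set distance."""
    if not pupil_detected:
        return True
    return eyelid_gap < min_gap
```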
S102: determining a first state of the electronic equipment when a preview image is acquired; the first state characterizes a smoothness of the electronic device.
When collecting a preview image, the electronic device obtains its motion state data and determines the first state of the electronic device during collection based on that motion state data.
Here, the motion state data of the electronic device is collected by a built-in sensor, including at least one of: an angular velocity sensor and an acceleration sensor.
In practical application, motion state data corresponding to the case where the smoothness of the electronic device meets the second set condition may be stored in the electronic device in advance. When the motion state data obtained while collecting the preview image does not match the pre-stored motion state data, it is determined that the smoothness characterized by the first state does not meet the second set condition; when it matches, the smoothness is determined to meet the second set condition. The second set condition characterizes the smoothness at which the electronic device can acquire a clear image.
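One way to realize the pre-stored-data match is a per-sample tolerance test; the concrete matching rule is not given in the text, so this tolerance-based comparison is an assumption:

```python
def smoothness_ok(samples, reference, tolerance):
    """Device counts as 'smooth' if every motion sample stays within
    tolerance of the corresponding pre-stored reference value."""
    return all(abs(s - r) <= tolerance for s, r in zip(samples, reference))
```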
In practical applications, the electronic device may also determine the first state of the electronic device when the preview image is acquired through the trained model. The model is used for determining a corresponding first state based on motion state data of the electronic equipment.
When the first state indicates that the smoothness of the electronic device does not meet the second set condition, the electronic device continues acquiring preview images.
S103: under the condition that the face pose corresponding to the at least one frame of preview image meets a first set condition and the smoothness of the electronic equipment represented by the determined first state meets a second set condition, outputting a first photo based on the at least one frame of preview image; the first photo is used for authentication of the server.
When the face pose corresponding to the at least one frame of preview image meets the first set condition and the smoothness of the electronic device characterized by the determined first state meets the second set condition, the electronic device can acquire a clear and complete frontal face image; at this point it collects at least one frame of preview image and outputs a first photo based on it. The electronic device may send the first photo to the server so that the server performs identity authentication based on the first photo.
It should be noted that the electronic device may end the process when the first photo is output and prompt the user that the face image has been acquired successfully; or, if no first photo is output within a set duration, end the process and prompt the user that face image acquisition has failed.
In an embodiment, the outputting the first photograph based on the at least one preview image comprises:
selecting a first preview image from the at least one frame of preview image, and outputting the first preview image as the first photo; wherein
the first preview image satisfies at least one of:
the picture occupation ratio of the face image is within a set range; the picture occupation ratio represents the ratio of the area of the face image within the set face frame to the area of the set face frame; here, the set range characterizes a ratio;
the distance between the first area and the second area in the face image is smaller than a set threshold value; the first area represents an area where the upper lip is located; the second region characterizes the region of the lower lip.
When the face pose corresponding to the at least one frame of preview image meets the first set condition and the smoothness of the electronic device characterized by the determined first state meets the second set condition, the electronic device collects at least one frame of preview image, selects a first preview image from the collected preview images, and outputs it as the first photo. When the picture occupation ratio of the face image in the first preview image is within the set range, a clear frontal face image can be obtained even when the first photo is output based only on the image within the set face frame of the first preview image. Because the first preview image satisfies the condition that the distance between the first region and the second region in the face image is smaller than the set threshold, failed authentication of a legitimate user caused by the user's mouth being open can be avoided.
The electronic device can determine a first area of the set face frame displayed in the preview interface and a second area of the face image displayed within the set face frame, calculate the picture occupation ratio of the face image in the preview image based on the first area and the second area, and determine whether that ratio is within the set range. In practical applications, the set face frame may be shaped based on the shape of a person's head, and the set range is set based on the picture occupation ratio of a frontal face image displayed in the set face frame. The picture occupation ratio of the face image reflects the distance between the face and the display screen of the electronic device. When the picture occupation ratio of the face image in the first preview image is not within the set range, the electronic device can output prompt information to prompt the user to adjust the distance between the face and the display screen.
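The ratio check above is a simple area division; the 0.4-0.8 bounds below are illustrative placeholders, since the patent does not give concrete values for the set range:

```python
def frame_ratio_ok(face_area, frame_area, low=0.4, high=0.8):
    """Picture-occupation-ratio check: face area inside the set face frame
    divided by the frame area must fall within the set range."""
    ratio = face_area / frame_area
    return low <= ratio <= high
```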
The electronic device may determine a first region and a second region in the acquired preview image based on a face recognition technique, and determine a distance between the first region and the second region. When the distance between the first area and the second area is larger than or equal to the set threshold value, the user can be prompted not to open the mouth.
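The lip-distance test reduces to a threshold comparison on the vertical gap between the two lip regions (the scalar-coordinate representation here is a simplification):

```python
def mouth_open(upper_lip_y, lower_lip_y, threshold):
    """Returns True when the upper/lower-lip distance reaches the set
    threshold, i.e. when the user should be prompted to close the mouth."""
    return abs(lower_lip_y - upper_lip_y) >= threshold
```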
In the solution provided by the embodiments of the invention, the electronic device determines the face pose corresponding to at least one frame of preview image based on the face feature points of each frame of preview image in the at least one frame of preview image it collects; determines a first state of the electronic device when the preview image is collected; and outputs a first photo based on the at least one frame of preview image when the face pose corresponding to the at least one frame of preview image meets the first set condition and the determined first state indicates that the smoothness of the electronic device meets the second set condition. Because the face pose corresponding to the preview image meets the first set condition, the output first photo meets the acquisition requirements of face recognition images, avoiding incomplete collection of face feature points caused by an incorrect face pose. Because the smoothness of the electronic device meets the second set condition while the preview image is collected, a clear face image can be captured, avoiding blurred or partially missing face images in the output photo caused by shaking of the electronic device during collection. Since the face image in the first photo is clear and contains all face feature points used for face recognition, the success rate of legitimate-user identity authentication can be improved when the server performs identity authentication based on the first photo.
As another embodiment of the present invention, the determining a first state of the electronic device when acquiring the preview image includes:
and under the condition that the face pose corresponding to the at least one frame of preview image meets the first set condition, determining a first state of the electronic equipment when the at least one frame of preview image is collected.
Here, when the electronic device executes S101 and determines that the face pose corresponding to the at least one frame of preview image does not satisfy the first set condition, it returns to S101 or ends the current flow.
Determining the first state of the electronic device when acquiring the at least one frame of preview image only when the face pose corresponding to the at least one frame of preview image satisfies the first set condition saves power consumption of the electronic device. For the implementation of determining the first state when acquiring the at least one frame of preview image, please refer to the related description in S102, which is not repeated here.
In this embodiment, when the face pose corresponding to the at least one frame of preview image does not satisfy the first set condition, the condition for outputting the first photo is not met; in that case the step of determining the first state of the electronic device when acquiring the at least one frame of preview image is not performed, which saves the resources consumed by determining the first state, increases the data processing speed, and reduces power consumption.
As another embodiment of the present invention, fig. 2 shows a schematic implementation flow diagram for determining the first state in the method for acquiring a face image according to the embodiment of the present invention. Referring to fig. 2, the determining a first state of the electronic device when acquiring the at least one preview image includes:
S201: converting the first data into second data conforming to a quaternion format; the first data represents motion state data of the electronic device acquired when the electronic device acquires the at least one preview image.
Here, a quaternion is a hypercomplex number, expressed as q = (x, y, z, w) = ix + jy + kz + w, where i, j and k are all imaginary units.
Since i, j and k correspond to three-dimensional rotations, a quaternion can be expressed using a combination of a vector and a real number:

q = (v, w), where v = (x, y, z) is a vector and w is a real number.

A vector can be regarded as a quaternion whose real part is 0, and a real number can be regarded as a quaternion whose imaginary part is 0. The quaternion

q = ((x, y, z)·sin(θ/2), cos(θ/2))

can then characterize the operation of rotating a point in space by an angle θ around the axis of the unit vector (x, y, z).
In practical applications, since the electronic device can be abstracted to a point in three-dimensional space, the first data can be decomposed into components along the X-, Y- and Z-axes; a quaternion can thus also characterize a rotation of the electronic device by an angle θ around a unit vector (X, Y, Z). The electronic device converts the first data into second data conforming to the quaternion format:

second data q = ((x, y, z), w), where

x = X·sin(θ/2);

y = Y·sin(θ/2);

z = Z·sin(θ/2);

w = cos(θ/2).
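As a minimal sketch (assuming the rotation axis and angle have already been extracted from the motion sensor data; function and variable names are illustrative), the axis-angle-to-quaternion conversion above can be written as:

```python
import math

def axis_angle_to_quaternion(axis, theta):
    """Convert a rotation of angle theta (radians) about axis (X, Y, Z)
    into the quaternion format q = (x, y, z, w) described above:
    imaginary part = unit axis * sin(theta/2), real part w = cos(theta/2)."""
    x, y, z = axis
    norm = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / norm, y / norm, z / norm  # normalize the rotation axis
    s = math.sin(theta / 2.0)
    return (x * s, y * s, z * s, math.cos(theta / 2.0))
```

Because the axis is normalized first, the result is always a unit quaternion, which is what rotation quaternions require.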
in practical applications, the first data may include at least one of:
angular velocity data collected by an angular velocity sensor;
acceleration data collected by the acceleration sensor.
In one embodiment, in order to reduce data interference and improve the accuracy of the data, and thereby the accuracy of the determined first state, the method further comprises, when converting the first data into the second data conforming to the quaternion format:
deleting first data corresponding to the first time period; the starting time of the first time interval corresponds to the starting time of the electronic equipment for collecting the at least one frame of preview image.
Here, in the case where the electronic device acquires the first data, the electronic device deletes the first data corresponding to the first period, and then converts the remaining first data into second data conforming to the quaternion format.
In practical applications, the electronic device may delete the first data acquired within 100 milliseconds of entering the preview mode. The electronic device may also delete the first data acquired while capturing a set number of preview images; for example, it deletes the first data acquired while capturing the first 10 frames of preview images.
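A hedged sketch of this warm-up discard, assuming each first-data sample carries a millisecond timestamp (the 100 ms window, sample layout and names are illustrative):

```python
def drop_warmup_samples(samples, start_ms, warmup_ms=100):
    """Discard motion-sensor samples recorded during the first
    `warmup_ms` milliseconds after preview capture starts.
    Each sample is a (timestamp_ms, value) pair."""
    return [s for s in samples if s[0] - start_ms >= warmup_ms]
```

The same filter could be keyed on a frame counter instead of a timestamp to drop the data for the first N preview frames.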
S202: predicting a prior estimate of the motion state data at time t based on the theoretical estimate of the motion state data at time t−1; where t is a positive integer, and both the theoretical (optimal) estimate and the prior estimate are data in quaternion format.
Because system noise, measurement noise and the like cause a deviation between the data recorded by the sensor and the actual data, in this embodiment the second data is processed to improve its accuracy. Specifically, the true value at time t is estimated based on the observed value (or measured value) at time t and the theoretical estimated value (or optimal estimated value) at time t−1, to obtain the corresponding optimal estimate.
Here, the electronic device may predict the prior estimate of the motion state data at time t based on the theoretical estimate of the motion state data at time t-1, in combination with factors such as system noise and measurement noise.
The theoretical estimation value of the motion state data at the time t-1 is also called an optimal estimation value. The theoretical estimated value of the motion state data at the initial time is the second data at the initial time. The motion state data includes at least one of angular velocity data and acceleration data. Angular velocity data may be collected using an angular velocity sensor (e.g., a gyroscope) built into the electronic device; the acceleration data may be collected using an acceleration sensor built into the electronic device.
It should be noted that the motion state data at time t−1 and the second data corresponding to time t are the same type of motion state data. For example, when the motion state data at time t−1 includes only angular velocity data, the second data corresponding to time t is the second data corresponding to angular velocity data. When the motion state data at time t−1 includes both angular velocity data and acceleration data, the second data corresponding to time t includes second data corresponding to each.
S203: and determining third data corresponding to the t moment based on the second data corresponding to the t moment and the prior estimation value at the t moment.
Here, the second data corresponding to time t corresponds to the observed value at time t. The electronic device corrects the prior estimate at time t using the second data corresponding to time t to obtain the third data corresponding to time t, which corresponds to the optimal estimate at time t.
In the embodiment, the electronic device estimates the optimal estimation value corresponding to the time t based on the observation value recorded by the sensor at the time t and the optimal estimation value of the sensor at the time t-1. In practice, the measurements recorded by the sensors comprise at least one of angular velocity data and acceleration data.
In some embodiments, the second data may also be processed based on a kalman filter algorithm to obtain corresponding third data.
The Kalman filtering algorithm is a recursive prediction-correction method, and is divided into two steps:
Prediction: estimating the prior estimate at time t from the optimal estimate at time t−1;
Update: correcting the prior estimate at time t with the observed value at time t to obtain the a posteriori estimate at time t, also called the optimal estimate.
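The two-step cycle above can be sketched for a scalar state (a simplified one-dimensional illustration, not the patent's full quaternion-valued filter; it assumes F = H = 1 and no control input, and all names are illustrative):

```python
def kalman_step(x_prev, p_prev, z, q, r):
    """One predict/correct cycle of a scalar Kalman filter.
    x_prev, p_prev: optimal estimate and its covariance at time t-1
    z: observation at time t
    q, r: process / measurement noise covariances"""
    # Prediction (time update): prior estimate and its covariance
    x_prior = x_prev
    p_prior = p_prev + q
    # Update (state update): blend the prior with the observation z
    k = p_prior / (p_prior + r)            # Kalman gain
    x_post = x_prior + k * (z - x_prior)   # optimal (posterior) estimate
    p_post = p_prior - k * p_prior         # posterior covariance
    return x_post, p_post
```

Iterating `kalman_step` over a stream of noisy sensor readings yields progressively more confident estimates, since the posterior covariance shrinks with each correction.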
Here, the prediction stage is performed by a time update equation, and the update stage is realized by a state update equation. In S202, based on a time updating equation and a theoretical estimation value of the motion state data at the t-1 moment, predicting a priori estimation value of the motion state data at the t moment; in S203, third data (i.e., an optimal estimation value) corresponding to time t is determined based on the state update equation, the second data corresponding to time t, and the prior estimation value at time t. Wherein the time update equation comprises:
Figure BDA0002737560120000141
Pt=FtPt-1Ft T+Qt (2)
here, the first and second liquid crystal display panels are,
Figure BDA0002737560120000142
representing the optimal estimation value at the t-1 moment;
Figure BDA0002737560120000143
representing a prior estimation value of the t moment, namely predicting a result of the t moment according to the optimal estimation value of the t-1 moment; ftRepresenting a prediction matrix or a state transition matrix corresponding to the prediction process; ft TCharacterization FtTransposing; b istIs a control matrix;
Figure BDA0002737560120000144
is a control vector characterizing known potential influencing factors, for example, known potential influencing factors corresponding to the smoothness of the electronic device when the electronic device is in a vibration state. PtCharacterization of
Figure BDA0002737560120000145
The covariance matrix of (a); pt-1Characterization of
Figure BDA0002737560120000146
Of (2)A variance matrix; qtCharacterizing process excitation noise covariance, i.e. covariance of system processes, QtIs used to represent FtError from the actual process. QtIs characterized by external unknown influencing factors that influence the smoothness of the electronic device, such as sudden impacts, external wind speeds, etc.
The state update equations are:

x̂_t' = x̂_t^- + K'·(z̄_t − H_t·x̂_t^-)   (3)

P_t' = P_t − K'·H_t·P_t   (4)

K' = P_t·H_t^T·(H_t·P_t·H_t^T + R_t)^{-1}   (5)

Here, x̂_t' represents the optimal estimate at time t; K' represents the Kalman filter gain, also called the Kalman filter coefficient; z̄_t characterizes the distribution mean of the sensor's motion state data; H_t characterizes the observation matrix of the sensor; P_t' is the a posteriori estimate of the covariance matrix at time t, i.e., the covariance of x̂_t', and represents the uncertainty of the state; H_t^T characterizes the transpose of H_t; and R_t characterizes the measurement noise covariance matrix, i.e., the noise of the sensor.
In practical applications, the smoothness of the electronic device is related to the acceleration a and the angular velocity g of the electronic device, so the state is x = (g, a). The expression for the best estimate is

x̂_t = (E[g], E[a])

and the covariance matrix is

P = [ Σ_gg  Σ_ga ; Σ_ag  Σ_aa ]

where Σ_gg characterizes the correlation of g with g, Σ_ga the correlation of g with a, Σ_ag the correlation of a with g, and Σ_aa the correlation of a with a. Both g and a are randomly distributed and conform to a Gaussian distribution.
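Illustratively, the block structure of this covariance matrix can be seen by stacking angular-velocity and acceleration samples (synthetic data; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=(500, 3))            # angular-velocity samples (synthetic)
a = 0.5 * g + rng.normal(size=(500, 3))  # acceleration samples correlated with g
x = np.hstack([g, a])                    # stacked state samples [g, a]

cov = np.cov(x, rowvar=False)            # 6x6 covariance matrix P
sigma_gg = cov[:3, :3]                   # correlation of g with g
sigma_ga = cov[:3, 3:]                   # correlation of g with a
sigma_ag = cov[3:, :3]                   # correlation of a with g
sigma_aa = cov[3:, 3:]                   # correlation of a with a
```

The off-diagonal blocks Σ_ga and Σ_ag are transposes of each other, so the full matrix is symmetric, as any covariance matrix must be.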
The derivation processes of equations (3) to (5) are described in detail below:
Since the estimated value and the observed value of the sensor both conform to Gaussian distributions, the following expressions can be obtained:

(μ_0, Σ_0) = (H_t·x̂_t^-, H_t·P_t·H_t^T)   (6)

(μ_1, Σ_1) = (z̄_t, R_t)   (7)

Equation (6) corresponds to the predicted (estimated) value of the sensor mapped into measurement space, and equation (7) corresponds to the observed value. x̂_t^- characterizes the estimated value of the sensor, H_t the observation matrix, and H_t^T the transpose of H_t; μ_0 characterizes the mean and Σ_0 the covariance of the Gaussian distribution corresponding to the estimate. z̄_t characterizes the distribution mean of the sensor's motion state data and R_t the noise of the sensor; μ_1 characterizes the mean and Σ_1 the covariance of the Gaussian distribution corresponding to the observation.
Because the estimated value obtained by prediction may or may not be accurate, the predicted value and the observed value are combined: multiplying the Gaussian distribution corresponding to the predicted value by the Gaussian distribution corresponding to the observed value yields a new Gaussian distribution, and the optimal estimate is determined based on this new distribution. The new Gaussian distribution represents the overlapping region of the predicted value and the observed value, which is the region where the best estimate lies. The following describes the process of determining the new Gaussian distribution based on the one-dimensional Gaussian distribution curve equation:
The one-dimensional Gaussian distribution curve equation with expectation μ and variance σ² is:

N(x, μ, σ) = (1 / (σ·√(2π))) · e^{−(x−μ)² / (2σ²)}   (8)
multiplying the two gaussian distribution curves yields:
N(x, μ_0, σ_0) × N(x, μ_1, σ_1) = N(x, μ', σ')   (9)

where N(x, μ_0, σ_0) characterizes the first Gaussian curve, N(x, μ_1, σ_1) characterizes the second Gaussian curve, and N(x, μ', σ') characterizes the new Gaussian curve. Expanding equation (9) based on equation (8) yields:
μ' = μ_0 + σ_0²·(μ_1 − μ_0) / (σ_0² + σ_1²)   (10)

σ'² = σ_0² − σ_0⁴ / (σ_0² + σ_1²)   (11)
Here, letting

k = σ_0² / (σ_0² + σ_1²)

it is possible to obtain:

σ'² = σ_0² − k·σ_0²   (12)

μ' = μ_0 + k·(μ_1 − μ_0)   (13)
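Equations (12) and (13) can be checked numerically: the pointwise product of two one-dimensional Gaussian densities is, up to a constant factor, a Gaussian with the fused mean and variance (a sketch with illustrative names):

```python
import math

def fuse_1d(mu0, var0, mu1, var1):
    """Fused mean and variance from equations (12) and (13),
    with k = var0 / (var0 + var1)."""
    k = var0 / (var0 + var1)
    return mu0 + k * (mu1 - mu0), var0 - k * var0

def gauss(x, mu, var):
    """One-dimensional Gaussian density, equation (8)."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
```

Dividing the product gauss(x, μ_0, σ_0²)·gauss(x, μ_1, σ_1²) by gauss(x, μ', σ'²) gives the same constant for every x, confirming the two curves agree in shape.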
Extending k and equations (12) and (13) from one-dimensional space to multi-dimensional space, the resulting new Gaussian distribution is described as follows:

K = Σ_0·(Σ_0 + Σ_1)^{-1}   (14)

μ⃗' = μ⃗_0 + K·(μ⃗_1 − μ⃗_0)   (15)

Σ' = Σ_0 − K·Σ_0   (16)
Substituting equations (6) and (7) into equations (14) to (16) respectively yields:

K = H_t·P_t·H_t^T·(H_t·P_t·H_t^T + R_t)^{-1}   (17)

H_t·x̂_t' = H_t·x̂_t^- + K·(z̄_t − H_t·x̂_t^-)   (18)

H_t·P_t'·H_t^T = H_t·P_t·H_t^T − K·H_t·P_t·H_t^T   (19)

where K is the Kalman gain, also called the Kalman coefficient. Since the expression for K still contains H_t (note that K = H_t·K'), eliminating H_t from the expressions above simplifies them into equations (3) to (5).
In an embodiment, when the third data corresponding to the time t is determined based on the second data corresponding to the time t and the a priori estimation value at the time t, the method further includes:
replacing the first coefficient based on at least one confidence level; each confidence coefficient in the at least one confidence coefficient corresponds to a type of motion state data; the first coefficient characterizes a relevant calculation parameter of a Kalman filtering algorithm.
Here, the Kalman gain K' varies with the sensor noise R_t. Processing the second data with the Kalman filter algorithm yields the corresponding optimal estimate; in practical applications, however, considering factors such as the computing power and algorithm efficiency of the electronic device, the result only needs to fall within an allowable error range. Therefore, in this embodiment a fixed weight is determined based on at least one confidence, and the Kalman gain K' is replaced with the fixed weight to obtain an estimate within the error range.
The at least one confidence level comprises at least one of:
confidence in the acceleration data;
confidence in the angular velocity data.
In an embodiment, the electronic device replaces the first coefficient based on the at least one confidence level, thereby simplifying the above equations (2) to (5) to obtain the equation (20). Wherein said replacing the first coefficient based on the at least one confidence level comprises:
replacing a second coefficient of the first coefficients with zero; the second coefficient represents a covariance matrix of the prior estimated value at the moment t;
replacing a filter coefficient in the first coefficient based on at least one confidence.
Here, the second coefficient of the first coefficients corresponds to P_t in equations (2), (4) and (5) above, i.e., the covariance matrix of the prior estimate at time t. Replacing the second coefficient with zero, i.e., setting P_t to zero in equations (2), (4) and (5), makes these equations all evaluate to zero.
The filter coefficient in the first coefficients is the Kalman filter coefficient, also called the Kalman gain K'. Based on at least one confidence, the electronic device replaces the Kalman gain K' in equation (3) of the state update equations above, obtaining the new equations:

x̂_t' = x̂_t^- + ω'·(z̄_t − x̂_t^-)   (20)

x̂_t^- = F_t·x̂_{t-1} + B_t·u_t   (21)

where ω' characterizes the confidence. Equation (21) is the same as equation (1) above. Equations (20) and (21) no longer contain P_t or the Kalman gain K'. Given the observed value of the sensor at time t and the optimal estimate at time t−1, the electronic device can calculate the optimal estimate at time t by substituting them into equations (20) and (21).
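Under these simplifications the update reduces to an exponential-smoothing-style correction. A minimal scalar sketch of equations (20) and (21), assuming F is the identity and there is no control input (all names are illustrative):

```python
def fixed_weight_step(x_prev, z, omega):
    """Simplified update of equations (20)-(21): the fixed confidence
    weight omega replaces the Kalman gain, so no covariance matrix
    needs to be tracked.
    x_prev: optimal estimate at time t-1; z: observation at time t."""
    x_prior = x_prev                        # prediction, equation (21)
    return x_prior + omega * (z - x_prior)  # correction, equation (20)
```

With a constant observation, repeated application converges geometrically toward it; the weight trades responsiveness against noise suppression.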
In an embodiment, when replacing the filter coefficient included in the first coefficient based on at least one confidence level, the method further includes:
determining a new filter coefficient based on the confidence degree of the acceleration data and the confidence degree of the angular velocity data under the condition that the first data comprises the acceleration data and the angular velocity data; the new filter coefficients are used to replace filter coefficients in the first coefficients.
Here, the new filter coefficient ω' is a combination of the confidences of the two kinds of motion state data: ω_a characterizes the confidence of the acceleration data corresponding to the acceleration sensor, and ω_g characterizes the confidence of the angular velocity data corresponding to the angular velocity sensor.
S204: inputting third data corresponding to the time t to a set model to obtain a first state of the electronic equipment when a preview image is acquired at the time t; the setting model is used for determining a corresponding first state according to input data.
The set model is obtained by training on at least one piece of sample data based on a machine learning algorithm, where each piece of sample data is labeled with a corresponding first state. When training the set model, the first sample data is converted into second sample data conforming to the quaternion format; the second sample data is processed according to the flow of S202 to S203 to obtain third sample data, and the set model is trained based on the third sample data to obtain the trained set model. For the processing of the first sample data and the second sample data, please refer to the related descriptions of S201 to S203, which are not repeated here.
And the electronic equipment inputs the third data corresponding to the time t to the trained set model to obtain a first state of the electronic equipment output by the set model when the preview image is acquired at the time t. The setting model determines a corresponding first state by analyzing the change condition of the input data.
When the variation range of the input data is larger than the maximum threshold of the set range, the set model outputs a corresponding first state characterizing that the electronic device is shaking violently and that its smoothness does not satisfy the second set condition. Here, the set range characterizes the floating range of the data.
When the set model determines that the variation range of the input data is within the set range, it outputs a corresponding first state characterizing that the smoothness of the electronic device satisfies the second set condition.
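A minimal sketch of this range-based classification (the thresholds, names and string labels are illustrative assumptions; the patent's set model is a trained machine-learning model, not a hand-written rule):

```python
def smoothness_state(values, min_thr, max_thr):
    """Classify device smoothness from the variation range of the
    filtered motion data. Returns 'shaking' (fails the second set
    condition), 'smooth' (satisfies it), or 'suspicious' (variation
    implausibly small; verify the device identity before trusting it)."""
    spread = max(values) - min(values)
    if spread > max_thr:
        return "shaking"
    if spread < min_thr:
        return "suspicious"
    return "smooth"
```

The "suspicious" branch corresponds to the simulator check described below: real hand-held devices always exhibit some residual motion.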
In order to improve the accuracy of the determined first state, when the set model determines that the variation range of the input data is smaller than the minimum threshold of the set range, the electronic device acquires its identity and judges, based on the acquired identity, whether it is under attack by a simulator, so as to determine from the judgment result whether the output first state characterizes that the smoothness of the electronic device satisfies the second set condition. When the acquired identity is the same as the pre-stored identity, the input data is trusted, the electronic device is not under simulator attack, and the output first state characterizes that the smoothness of the electronic device satisfies the second set condition. When no identity is acquired, or the acquired identity differs from the pre-stored identity, the input data is untrusted, the electronic device is under simulator attack, and the output first state characterizes that the smoothness of the electronic device does not satisfy the second set condition.
In practical applications, when the electronic device is a mobile phone, the identity may be the International Mobile Equipment Identity (IMEI).
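The identity check can be sketched as follows (a hypothetical helper; the comparison logic is an assumption drawn from the description above, and the function name is illustrative):

```python
def is_trusted_device(reported_id, stored_id):
    """Treat the input data as trusted only when the device identity
    reported at runtime is present and matches the pre-stored one;
    otherwise assume a possible simulator attack."""
    return bool(reported_id) and reported_id == stored_id
```

A production implementation would also protect the stored identity against tampering, which this sketch does not address.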
In the scheme provided by the embodiment of the application, the first data is converted into second data conforming to the quaternion format, the second data is filtered to obtain third data, and the third data is input into the set model to obtain the first state of the electronic device when acquiring the preview image. The second data in quaternion format can conveniently and quickly represent a rotation of the electronic device about an arbitrary axis through the origin, avoiding gimbal lock; filtering the second data removes the noise it contains and improves data accuracy; and determining the corresponding first state with the set model improves the accuracy of the determined first state.
As another embodiment of the present invention, fig. 3 is a schematic flow chart illustrating an implementation of a method for acquiring a face image according to another embodiment of the present invention. Referring to fig. 3, on the basis of the embodiment corresponding to fig. 1, the method for acquiring a face image provided by this embodiment further includes at least one of the following:
S104: under the condition that the determined face pose does not satisfy the first set condition, outputting first prompt information; the first prompt information is used for prompting the user to adjust the face pose.
In practical application, the electronic device may prompt the user in a manner of text, voice, adjusting the color of the user interface, and the like.
In an embodiment, when the face pose includes a face pose angle, and the face pose angle includes a pitch angle, a yaw angle and a roll angle, outputting the first prompt information when the determined face pose does not satisfy the first set condition includes at least one of the following:
reminding the user not to raise the head when the pitch angle is smaller than the minimum threshold of the first set range;
reminding the user not to lower the head when the pitch angle is larger than the maximum threshold of the first set range;
reminding the user not to turn the face sideways when the yaw angle is not within the second set range;
reminding the user not to tilt the head when the roll angle is not within the third set range.
In practice, the first set range and the second set range may each be −15° to 15°, and the third set range may be −10° to 10°.
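A hypothetical helper mapping a face pose angle to the reminders of S104, using the example ranges above (the message wording, defaults and names are illustrative assumptions):

```python
def pose_prompts(pitch, yaw, roll,
                 pitch_range=(-15, 15), yaw_range=(-15, 15), roll_range=(-10, 10)):
    """Return the list of reminders for a face pose angle in degrees.
    Empty list means the pose angle satisfies all three ranges."""
    prompts = []
    if pitch < pitch_range[0]:
        prompts.append("do not raise your head")
    elif pitch > pitch_range[1]:
        prompts.append("do not lower your head")
    if not (yaw_range[0] <= yaw <= yaw_range[1]):
        prompts.append("do not turn your face sideways")
    if not (roll_range[0] <= roll <= roll_range[1]):
        prompts.append("do not tilt your head")
    return prompts
```

An empty return value corresponds to the front-face case in which the flow proceeds to the occlusion and eye-closure checks.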
In an embodiment, when the face pose includes whether the face image is occluded, the electronic device may output the first prompt message as "do not occlude the face" when determining that the face image is occluded.
In an embodiment, when the face pose includes whether the user closes the eyes, the electronic device may output the first prompt message as "do not close the eyes" when determining that the user closes the eyes.
S105: under the condition that the determined first state represents that the current smoothness of the electronic equipment does not meet the second set condition, outputting second prompt information; and the second prompt message is used for prompting the user to stabilize the electronic equipment.
Here, when the determined first state represents that the current smoothness of the electronic device does not satisfy the second setting condition, the electronic device outputs second prompt information to prompt a user to keep the electronic device stable, so that a clear image can be acquired.
In the scheme provided by this embodiment, the electronic device outputs corresponding prompt information to prompt the user to adjust the face posture or to keep the electronic device stable, so that the image acquisition efficiency can be improved.
Fig. 4 is a schematic flow chart illustrating an implementation of a method for acquiring a face image according to an embodiment of the present invention. Referring to fig. 4, the method for acquiring a face image according to the embodiment includes:
S401: determining a face pose angle corresponding to at least one frame of preview image based on the face feature points of each frame of preview image in the at least one frame of preview image acquired by the electronic device.
S402: and judging whether the determined face posture angle is in a set angle range.
In a case where the determined face pose angle is not within the set angle range, S403 is executed. And if the determined face pose angle is in the set angle range, representing that the preview image is a front face image, and executing S404.
S403: and outputting third prompt information, wherein the third prompt information is used for prompting a user to adjust the face posture angle.
S404: and judging whether the human face image in the at least one frame of preview image is blocked.
Executing S405 when the human face image in the at least one frame of preview image is blocked; and executing S406 when the human face image in the at least one frame of preview image is not blocked.
S405: and outputting fourth prompt information, wherein the fourth prompt information is used for prompting the face not to be shielded.
S406: and judging whether the eyes of the user are closed in the at least one frame of preview image.
Executing S407 when the user closes the eyes in the at least one frame of preview image; and executing S408 when the user does not close the eyes in the at least one frame of preview image.
S407: and outputting fifth prompting information, wherein the fifth prompting information is used for prompting the user not to close eyes.
S408: and judging whether the proportion of the face image in the at least one frame of preview image is in a set proportion range.
Executing S409 under the condition that the proportion of the face image in the at least one frame of preview image is not in the set proportion range; and executing S410 when the proportion of the face image in the at least one frame of preview image is in the set proportion range.
S409: outputting a sixth prompt message; and the sixth prompt message is used for prompting the user to adjust the distance between the face and the display screen of the electronic equipment.
S410: and determining a first state of the electronic equipment when the at least one frame of preview image is acquired.
S411: under the condition that the smoothness of the electronic equipment represented by the determined first state meets a second set condition, outputting a first photo based on the at least one frame of preview image; the first photo is used for authentication of the server.
S412: outputting seventh prompt information under the condition that the smoothness of the electronic equipment represented by the determined first state does not meet a second set condition; and the seventh prompt message is used for prompting the user to stabilize the electronic equipment.
In the scheme provided by this embodiment, the face pose angle corresponding to the preview image is within the set range, the output first picture is a clear front face image, and the user does not close his eyes in the first picture, and the face image is not blocked, so that the server can improve the success rate of the authentication of the legitimate user when performing the authentication based on the first picture.
In order to implement the method according to the embodiment of the present invention, an embodiment of the present invention further provides a device for collecting a face image, which is disposed on an electronic device, and as shown in fig. 5, the device for collecting a face image includes:
the first determining unit 51 is configured to determine a face pose corresponding to at least one frame of preview image based on a face feature point of each frame of preview image in the at least one frame of preview image acquired by the electronic device;
a second determining unit 52, configured to determine a first state of the electronic device when acquiring a preview image; the first state characterizes a smoothness of the electronic device;
the output unit 53 is configured to output a first photo based on the at least one frame of preview image when the face pose corresponding to the at least one frame of preview image meets a first setting condition and the smoothness of the electronic device represented by the determined first state meets a second setting condition; the first photo is used for authentication of the server.
In an embodiment, the second determining unit 52 is configured to: and under the condition that the face pose corresponding to the at least one frame of preview image meets the first set condition, determining a first state of the electronic equipment when the at least one frame of preview image is collected.
In an embodiment, the second determining unit 52 is configured to:
converting first data into second data conforming to a quaternion format; the first data represents motion state data of the electronic device collected when the electronic device collects the at least one frame of preview image;
predicting a priori estimated value of the motion state data at time t based on an optimal estimated value of the motion state data at time t-1; wherein t is a positive integer; both the optimal estimated value and the priori estimated value are data conforming to the quaternion format;
determining third data corresponding to time t based on the second data corresponding to time t and the priori estimated value at time t; and inputting the third data corresponding to time t into a set model to obtain the first state of the electronic device when the preview image is collected at time t; wherein,
the set model is configured to determine a corresponding first state according to input data.
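As a rough illustration of the flow just described — converting motion samples (the first data) to quaternions, predicting a priori estimate from the previous optimal estimate, and deriving the third data as the residual between measurement and prediction — the following sketch uses a fixed placeholder gain and a simple threshold in place of the patent's set model. All function names, the gain value, and the threshold are assumptions for illustration, not the actual implementation:

```python
import math

def euler_to_quaternion(roll, pitch, yaw):
    """Convert a device orientation sample (radians) to a quaternion (w, x, y, z)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (cr * cp * cy + sr * sp * sy,
            sr * cp * cy - cr * sp * sy,
            cr * sp * cy + sr * cp * sy,
            cr * cp * sy - sr * sp * cy)

def smoothness_state(first_data, residual_threshold=0.05):
    """first_data: list of (roll, pitch, yaw) samples taken while the preview
    images were collected. Returns 'steady' or 'unsteady'."""
    # Step 1: convert the first data to second data (quaternion format).
    second = [euler_to_quaternion(*s) for s in first_data]
    optimal = second[0]                   # initial optimal estimate
    residuals = []
    for z in second[1:]:
        prior = optimal                   # a priori estimate at time t,
                                          # predicted from the t-1 optimal estimate
        # Step 2: third data = innovation between measurement and prior estimate.
        third = [zi - pi for zi, pi in zip(z, prior)]
        residuals.append(math.sqrt(sum(c * c for c in third)))
        # Step 3: update the optimal estimate with a fixed placeholder gain.
        k = 0.5
        optimal = tuple(pi + k * ti for pi, ti in zip(prior, third))
    # Step 4: the "set model" here is just a threshold on the mean innovation.
    mean_res = sum(residuals) / len(residuals)
    return 'steady' if mean_res < residual_threshold else 'unsteady'
```

A device held still produces near-zero innovations and is classified as steady; large orientation jumps between frames push the mean innovation over the threshold.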
In one embodiment, the apparatus for acquiring a face image further comprises:
a deleting unit, configured to delete first data corresponding to a first time period; the start time of the first time period corresponds to the start time at which the electronic device starts collecting the at least one frame of preview image.
In one embodiment, the apparatus for acquiring a face image further comprises:
a replacing unit, configured to replace a first coefficient based on at least one confidence; each confidence in the at least one confidence corresponds to one type of motion state data; the first coefficient characterizes a relevant calculation parameter of a Kalman filtering algorithm.
In an embodiment, the replacement unit is configured to:
replacing a second coefficient of the first coefficients with zero; the second coefficient characterizes a covariance matrix of the priori estimated value at time t;
replacing a filter coefficient of the first coefficients based on the at least one confidence.
In an embodiment, the replacement unit is further configured to:
determining a new filter coefficient based on the confidence of the acceleration data and the confidence of the angular velocity data, in the case that the first data includes acceleration data and angular velocity data; the new filter coefficient is used to replace the filter coefficient in the first coefficients.
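One way to read this part of the scheme is that the Kalman parameters are rebuilt from per-sensor confidences: the prior-estimate covariance (the second coefficient) is zeroed, and a new filter coefficient is derived from the acceleration and angular-velocity confidences. A minimal sketch under that reading; the averaging combination rule and all names are assumptions the patent does not specify:

```python
def new_filter_coefficient(accel_confidence, gyro_confidence):
    """Derive a replacement filter coefficient from per-sensor confidences.
    Each confidence is in [0, 1]; higher means the sensor is trusted more.
    The averaging rule below is an illustrative assumption."""
    if not (0.0 <= accel_confidence <= 1.0 and 0.0 <= gyro_confidence <= 1.0):
        raise ValueError("confidences must lie in [0, 1]")
    return (accel_confidence + gyro_confidence) / 2.0

def replace_first_coefficients(accel_confidence, gyro_confidence):
    """Return the replaced Kalman parameters: the prior-estimate covariance
    (second coefficient) is set to zero, and the filter coefficient is
    rebuilt from the two confidences."""
    return {
        'prior_covariance': 0.0,   # second coefficient replaced with zero
        'filter_coefficient': new_filter_coefficient(accel_confidence,
                                                     gyro_confidence),
    }
```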
In one embodiment, the face pose includes at least one of:
a face pose angle;
whether the face image is occluded;
whether the user's eyes are closed; wherein,
in the case that the face pose includes the face pose angle, the first set condition includes that the face pose angle is within a set angle range;
in the case that the face pose includes whether the face image is occluded, the first set condition includes that the face image is not occluded;
in the case that the face pose includes whether the user's eyes are closed, the first set condition includes that the user's eyes are not closed.
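The first set condition above is a conjunction of per-item checks, one per pose attribute that is present. A hedged sketch; the angle range, dictionary keys, and default values are illustrative assumptions, not values from the patent:

```python
def face_pose_meets_first_condition(pose, angle_range=(-15.0, 15.0)):
    """pose: dict that may contain 'pose_angle' (degrees), 'occluded' (bool),
    and 'eyes_closed' (bool). Every item present must pass its sub-condition."""
    if 'pose_angle' in pose:
        lo, hi = angle_range
        if not (lo <= pose['pose_angle'] <= hi):
            return False                   # pose angle outside set range
    if pose.get('occluded', False):
        return False                       # face image must not be occluded
    if pose.get('eyes_closed', False):
        return False                       # user's eyes must be open
    return True
```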
In one embodiment, the output unit 53 is configured to:
selecting a first preview image from the at least one frame of preview image, and outputting the first preview image as the first photo; wherein,
the first preview image satisfies at least one of the following:
the picture occupancy ratio of the face image is within a set range; the picture occupancy ratio characterizes the ratio of the area of the face image within a set face frame to the area of the set face frame;
the distance between a first region and a second region in the face image is smaller than a set threshold; the first region characterizes the region where the upper lip is located; the second region characterizes the region where the lower lip is located.
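The two selection criteria — face occupancy within the set face frame, and a closed mouth inferred from the upper-lip/lower-lip distance — might be checked as follows. The occupancy range, lip-gap threshold, and field names are illustrative assumptions:

```python
def select_first_preview(frames, ratio_range=(0.4, 0.8), lip_gap_threshold=8.0):
    """frames: list of dicts with 'face_area' and 'frame_area' (pixel areas,
    both measured against the set face frame) and 'lip_gap' (pixel distance
    between the upper-lip and lower-lip regions). Returns the first frame
    satisfying at least one of the two conditions, else None."""
    for f in frames:
        occupancy = f['face_area'] / f['frame_area']
        in_range = ratio_range[0] <= occupancy <= ratio_range[1]
        mouth_closed = f['lip_gap'] < lip_gap_threshold
        if in_range or mouth_closed:
            return f
    return None
```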
In an embodiment, the apparatus for acquiring a face image further includes a prompt unit, where the prompt unit is at least configured to perform one of the following:
outputting first prompt information in the case that the determined face pose does not meet the first set condition;
outputting second prompt information in the case that the determined first state characterizes that the current smoothness of the electronic device does not meet the second set condition; wherein,
the first prompt information is used for prompting the user to adjust the face pose;
the second prompt information is used for prompting the user to hold the electronic device steady.
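The two prompts can be dispatched directly from the two checks, for example as below; the message strings are illustrative placeholders:

```python
def collection_prompts(pose_ok, device_steady):
    """Map the pose check and the smoothness check to user-facing prompts.
    pose_ok: whether the face pose meets the first set condition.
    device_steady: whether the device smoothness meets the second set condition."""
    prompts = []
    if not pose_ok:
        prompts.append("Please adjust your face pose")    # first prompt information
    if not device_steady:
        prompts.append("Please hold the device steady")   # second prompt information
    return prompts
```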
In practical application, each unit included in the apparatus for acquiring a face image can be implemented by a processor in the apparatus for acquiring a face image. Of course, the processor needs to run the program stored in the memory to realize the functions of the above-described program modules.
It should be noted that, when the device for collecting a face image provided by the above embodiment collects a face image, the division into the above program modules is only used as an example; in practical applications, the above processing may be allocated to different program modules as needed, that is, the internal structure of the device may be divided into different program modules to complete all or part of the processing described above. In addition, the device for collecting a face image provided by the above embodiment and the method for collecting a face image belong to the same concept; the specific implementation process thereof is detailed in the method embodiments and is not repeated here.
Based on the hardware implementation of the program module, in order to implement the method according to the embodiment of the present invention, an embodiment of the present invention further provides an electronic device. Fig. 6 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention, and as shown in fig. 6, the electronic device includes:
a communication interface 1 capable of information interaction with other devices such as a server and the like;
a processor 2, connected to the communication interface 1 to implement information interaction with other devices, and configured to execute the method for collecting a face image provided by one or more of the foregoing technical solutions when running a computer program; the computer program is stored in the memory 3.
In practice, of course, the various components in the electronic device are coupled together by the bus system 4. It will be appreciated that the bus system 4 is used to enable connection communication between these components. The bus system 4 comprises, in addition to a data bus, a power bus, a control bus and a status signal bus. For the sake of clarity, however, the various buses are labeled as bus system 4 in fig. 6.
The memory 3 in the embodiment of the present invention is used to store various types of data to support the operation of the electronic device. Examples of such data include: any computer program for operating on an electronic device.
It can be understood that the memory 3 may be a volatile memory or a nonvolatile memory, and may also include both volatile and nonvolatile memories. The nonvolatile memory may be a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be a magnetic disk memory or a magnetic tape memory. The volatile memory may be a Random Access Memory (RAM), which serves as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 3 described in the embodiments of the present invention is intended to include, but is not limited to, these and any other suitable types of memory.
The method disclosed by the above embodiment of the present invention can be applied to the processor 2, or implemented by the processor 2. The processor 2 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 2. The processor 2 described above may be a general purpose processor, a DSP, or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. The processor 2 may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present invention. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed by the embodiment of the invention can be directly implemented by a hardware decoding processor, or can be implemented by combining hardware and software modules in the decoding processor. The software modules may be located in a storage medium located in the memory 3, and the processor 2 reads the program in the memory 3 and in combination with its hardware performs the steps of the aforementioned method.
When the processor 2 executes the program, the corresponding flows of the methods of the embodiments of the present invention are implemented, which, for brevity, are not described again here.
In an exemplary embodiment, the embodiment of the present invention further provides a storage medium, specifically a computer-readable storage medium, for example, including a memory 3 storing a computer program, which is executable by a processor 2 to perform the steps in the foregoing embodiments corresponding to fig. 1 to 4. The computer readable storage medium may be Memory such as FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface Memory, optical disk, or CD-ROM.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may be separately used as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The technical means described in the embodiments of the present invention may be arbitrarily combined without conflict.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (13)

1. A method for acquiring a face image, applied to an electronic device, the method comprising:
determining a face pose corresponding to at least one frame of preview image based on face feature points of each frame of preview image in the at least one frame of preview image collected by the electronic device;
determining a first state of the electronic device when collecting the preview image; the first state characterizes a smoothness of the electronic device;
outputting a first photo based on the at least one frame of preview image in the case that the face pose corresponding to the at least one frame of preview image meets a first set condition and the smoothness of the electronic device characterized by the determined first state meets a second set condition; the first photo is used by a server for authentication.
2. The method of claim 1, wherein determining the first state of the electronic device when capturing the preview image comprises:
determining the first state of the electronic device when collecting the at least one frame of preview image, in the case that the face pose corresponding to the at least one frame of preview image meets the first set condition.
3. The method of claim 2, wherein determining the first state of the electronic device when capturing the at least one preview image comprises:
converting first data into second data conforming to a quaternion format; the first data represents motion state data of the electronic device collected when the electronic device collects the at least one frame of preview image;
predicting a priori estimated value of the motion state data at time t based on an optimal estimated value of the motion state data at time t-1; wherein t is a positive integer; both the optimal estimated value and the priori estimated value are data conforming to the quaternion format;
determining third data corresponding to time t based on the second data corresponding to time t and the priori estimated value at time t;
inputting the third data corresponding to time t into a set model to obtain the first state of the electronic device when the preview image is collected at time t; wherein,
the set model is configured to determine a corresponding first state according to input data.
4. The method of claim 3, wherein in converting the first data to second data conforming to a quaternion format, the method further comprises:
deleting first data corresponding to a first time period; the start time of the first time period corresponds to the start time at which the electronic device starts collecting the at least one frame of preview image.
5. The method according to claim 3, wherein when determining the third data corresponding to time t based on the second data corresponding to time t and the priori estimated value at time t, the method further comprises:
replacing a first coefficient based on at least one confidence; each confidence in the at least one confidence corresponds to one type of motion state data; the first coefficient characterizes a relevant calculation parameter of a Kalman filtering algorithm.
6. The method of claim 5, wherein replacing the first coefficient based on the at least one confidence level comprises:
replacing a second coefficient of the first coefficients with zero; the second coefficient characterizes a covariance matrix of the priori estimated value at time t;
replacing a filter coefficient of the first coefficients based on the at least one confidence.
7. The method of claim 6, wherein when replacing the filter coefficient in the first coefficient based on at least one confidence level, the method further comprises:
determining a new filter coefficient based on the confidence of the acceleration data and the confidence of the angular velocity data, in the case that the first data includes acceleration data and angular velocity data; the new filter coefficient is used to replace the filter coefficient in the first coefficients.
8. The method of claim 1, wherein the face pose comprises at least one of:
a face pose angle;
whether the face image is occluded;
whether the user's eyes are closed; wherein,
in the case that the face pose includes the face pose angle, the first set condition includes that the face pose angle is within a set angle range;
in the case that the face pose includes whether the face image is occluded, the first set condition includes that the face image is not occluded;
in the case that the face pose includes whether the user's eyes are closed, the first set condition includes that the user's eyes are not closed.
9. The method of claim 8, wherein outputting the first photograph based on the at least one preview image comprises:
selecting a first preview image from the at least one frame of preview image, and outputting the first preview image as the first photo; wherein,
the first preview image satisfies at least one of the following:
the picture occupancy ratio of the face image is within a set range; the picture occupancy ratio characterizes the ratio of the area of the face image within a set face frame to the area of the set face frame;
the distance between a first region and a second region in the face image is smaller than a set threshold; the first region characterizes the region where the upper lip is located; the second region characterizes the region where the lower lip is located.
10. The method of any one of claims 1-9, further comprising at least one of:
outputting first prompt information in the case that the determined face pose does not meet the first set condition;
outputting second prompt information in the case that the determined first state characterizes that the current smoothness of the electronic device does not meet the second set condition; wherein,
the first prompt information is used for prompting the user to adjust the face pose;
the second prompt information is used for prompting the user to hold the electronic device steady.
11. An apparatus for acquiring a face image, comprising:
a first determining unit, configured to determine a face pose corresponding to at least one frame of preview image based on face feature points of each frame of preview image in the at least one frame of preview image collected by an electronic device;
a second determining unit, configured to determine a first state of the electronic device when the electronic device collects the preview image; the first state characterizes a smoothness of the electronic device;
an output unit, configured to output a first photo based on the at least one frame of preview image in the case that the face pose corresponding to the at least one frame of preview image meets a first set condition and the smoothness of the electronic device characterized by the determined first state meets a second set condition; the first photo is used by a server for authentication.
12. An electronic device, comprising: a processor and a memory for storing a computer program capable of running on the processor,
wherein the processor is adapted to perform the steps of the method of any one of claims 1 to 10 when running the computer program.
13. A storage medium having a computer program stored thereon, the computer program, when being executed by a processor, realizing the steps of the method of any one of claims 1 to 10.
CN202011138813.9A 2020-10-22 2020-10-22 Method and device for collecting face image and electronic equipment Active CN112287792B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011138813.9A CN112287792B (en) 2020-10-22 2020-10-22 Method and device for collecting face image and electronic equipment
PCT/CN2021/123356 WO2022083479A1 (en) 2020-10-22 2021-10-12 Method and apparatus for capturing face image, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011138813.9A CN112287792B (en) 2020-10-22 2020-10-22 Method and device for collecting face image and electronic equipment

Publications (2)

Publication Number Publication Date
CN112287792A true CN112287792A (en) 2021-01-29
CN112287792B CN112287792B (en) 2023-03-31

Family

ID=74423585

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011138813.9A Active CN112287792B (en) 2020-10-22 2020-10-22 Method and device for collecting face image and electronic equipment

Country Status (2)

Country Link
CN (1) CN112287792B (en)
WO (1) WO2022083479A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113536900A (en) * 2021-05-31 2021-10-22 浙江大华技术股份有限公司 Method and device for evaluating quality of face image and computer readable storage medium
WO2022083479A1 (en) * 2020-10-22 2022-04-28 深圳前海微众银行股份有限公司 Method and apparatus for capturing face image, and electronic device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6647352B1 (en) * 1998-06-05 2003-11-11 Crossbow Technology Dynamic attitude measurement method and apparatus
CN102252676A (en) * 2011-05-06 2011-11-23 微迈森惯性技术开发(北京)有限公司 Method and related equipment for acquiring movement attitude data and tracking human movement attitude
CN105120167A (en) * 2015-08-31 2015-12-02 广州市幸福网络技术有限公司 Certificate picture camera and certificate picture photographing method
US20170161553A1 (en) * 2015-12-08 2017-06-08 Le Holdings (Beijing) Co., Ltd. Method and electronic device for capturing photo
WO2017128750A1 (en) * 2016-01-28 2017-08-03 中兴通讯股份有限公司 Image collection method and image collection device
CN108875473A (en) * 2017-06-29 2018-11-23 北京旷视科技有限公司 Living body verification method, device and system and storage medium
CN109660719A (en) * 2018-12-11 2019-04-19 维沃移动通信有限公司 A kind of information cuing method and mobile terminal
CN111540090A (en) * 2020-04-29 2020-08-14 北京市商汤科技开发有限公司 Method and device for controlling unlocking of vehicle door, vehicle, electronic equipment and storage medium
CN111553838A (en) * 2020-05-08 2020-08-18 深圳前海微众银行股份有限公司 Model parameter updating method, device, equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111291737B (en) * 2020-05-09 2020-08-28 支付宝(杭州)信息技术有限公司 Face image acquisition method and device and electronic equipment
CN112287792B (en) * 2020-10-22 2023-03-31 深圳前海微众银行股份有限公司 Method and device for collecting face image and electronic equipment



Also Published As

Publication number Publication date
CN112287792B (en) 2023-03-31
WO2022083479A1 (en) 2022-04-28

Similar Documents

Publication Publication Date Title
CN110232667B (en) Image distortion correction method, device, electronic equipment and readable storage medium
CN112419170B (en) Training method of shielding detection model and beautifying processing method of face image
JP6866889B2 (en) Image processing equipment, image processing methods and programs
CN111310705A (en) Image recognition method and device, computer equipment and storage medium
CN112287792B (en) Method and device for collecting face image and electronic equipment
JPWO2010122721A1 (en) Verification device, verification method and verification program
JP2019117577A (en) Program, learning processing method, learning model, data structure, learning device and object recognition device
CN107944395B (en) Method and system for verifying and authenticating integration based on neural network
CN114511041B (en) Model training method, image processing method, device, equipment and storage medium
CN111753764A (en) Gesture recognition method of edge terminal based on attitude estimation
CN111680573B (en) Face recognition method, device, electronic equipment and storage medium
CN114494347A (en) Single-camera multi-mode sight tracking method and device and electronic equipment
JP5648452B2 (en) Image processing program and image processing apparatus
CN111784658A (en) Quality analysis method and system for face image
CN116453232A (en) Face living body detection method, training method and device of face living body detection model
CN112906571B (en) Living body identification method and device and electronic equipment
CN109543557B (en) Video frame processing method, device, equipment and storage medium
CN115115552B (en) Image correction model training method, image correction device and computer equipment
CN111784660A (en) Method and system for analyzing face correcting degree of face image
JP2022526468A (en) Systems and methods for adaptively constructing a 3D face model based on two or more inputs of a 2D face image
CN115829975A (en) Palm vein image quality detection method, system, medium and electronic device
CN113223083B (en) Position determining method and device, electronic equipment and storage medium
CN115035566A (en) Expression recognition method and device, computer equipment and computer-readable storage medium
CN114022949A (en) Event camera motion compensation method and device based on motion model
CN113283318A (en) Image processing method, image processing device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant