WO2022083479A1 - Method, apparatus and electronic device for collecting face images - Google Patents


Info

Publication number
WO2022083479A1
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
preview image
data
face
frame
Application number
PCT/CN2021/123356
Other languages
English (en)
French (fr)
Inventor
李薇
陈洁丹
舒玉强
卢道和
郭树霞
雷声伟
蔡志杰
Original Assignee
深圳前海微众银行股份有限公司
Application filed by 深圳前海微众银行股份有限公司
Publication of WO2022083479A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H04N 23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/682 Vibration or motion blur correction
    • H04N 23/684 Vibration or motion blur correction performed by controlling the image sensor readout, e.g. by controlling the integration time
    • H04N 23/6845 Vibration or motion blur correction performed by controlling the image sensor readout, e.g. by controlling the integration time by combination of a plurality of images sequentially taken

Definitions

  • the present application relates to the technical field of face recognition in financial technology (Fintech), and in particular, to a method, apparatus and electronic device for collecting face images.
  • face recognition technology is used to calculate the similarity between the currently collected face image and a pre-stored face image, so as to authenticate the user's identity.
  • if the image quality of the currently collected face image is poor, authentication of a legitimate user will fail.
  • the embodiments of the present application provide a method, an apparatus, and an electronic device for collecting a face image.
  • An embodiment of the present application provides a method for collecting a face image, which is applied to an electronic device, and the method includes:
  • determining a face pose corresponding to at least one frame of preview image based on the facial feature points of each frame of preview image in the at least one frame of preview image collected by the electronic device; determining a first state of the electronic device when the preview image is collected, the first state representing the stability of the electronic device;
  • when the face pose corresponding to the at least one frame of preview image satisfies a first set condition, and the stability of the electronic device represented by the determined first state satisfies a second set condition, outputting a first photo based on the at least one frame of preview image; the first photo is used for identity verification by the server.
  • the embodiment of the present application also provides a device for collecting a face image, including:
  • a first determining unit configured to determine the facial posture corresponding to the at least one frame of preview image based on the facial feature points of each frame of preview image in the at least one frame of preview image collected by the electronic device;
  • a second determination unit configured to determine a first state of the electronic device when the preview image is collected; the first state represents the stability of the electronic device;
  • an output unit configured to: when the face pose corresponding to the at least one frame of preview image satisfies the first set condition, and the stability of the electronic device represented by the determined first state satisfies the second set condition, output a first photo based on the at least one frame of preview image; the first photo is used by the server for identity verification.
  • Embodiments of the present application also provide an electronic device, including: a processor and a memory configured to store a computer program that can be executed on the processor,
  • the processor is configured to execute the steps of any of the above-mentioned methods for collecting a face image when running the computer program.
  • the embodiments of the present application further provide a storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of any of the foregoing methods for collecting a face image are implemented.
  • In the embodiments of the present application, the electronic device determines the face pose corresponding to at least one frame of preview image based on the facial feature points of each frame of preview image collected by the electronic device, and determines the first state of the electronic device when the preview image is collected; when the face pose corresponding to the at least one frame of preview image satisfies the first set condition, and the determined first state indicates that the stability of the electronic device satisfies the second set condition, the electronic device outputs the first photo based on the at least one frame of preview image.
  • Because the face pose corresponding to the preview image satisfies the first set condition, the output first photo meets the collection requirements of face recognition, which avoids the situation where facial feature points in the output first photo are collected incompletely because the face pose is incorrect.
  • Because the stability of the electronic device satisfies the second set condition while the preview image is collected, a clear face image can be captured, avoiding blurring of the face image in the output photo, or missing facial feature points, caused by jitter of the electronic device during collection. Since the face image in the first photo output by the electronic device is clear and contains all the facial feature points used for face recognition, the success rate of legitimate user identity verification can be improved when the server performs identity verification based on the first photo.
  • FIG. 1 is a schematic diagram of an implementation flowchart of a method for collecting a face image provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of the implementation of determining a first state in a method for collecting a face image provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of an implementation flowchart of a method for collecting a face image provided by another embodiment of the present application
  • FIG. 4 is a schematic diagram of an implementation flowchart of a method for collecting a face image provided by an application embodiment of the present application
  • FIG. 5 is a schematic structural diagram of an apparatus for collecting a face image provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a hardware composition of an electronic device according to an embodiment of the present application.
  • In the related art, an electronic device collects a face image and sends it to the server, and the server uses face recognition technology to calculate the similarity between the received face image and a pre-stored face image, so as to authenticate the user.
  • However, the image quality of the face image collected by the electronic device may be poor, for example because of the lighting in the current environment.
  • In that case the similarity calculated by the server is less than the set threshold, so authentication of a legitimate user fails. Even if the server obtains a clear face image by preprocessing the collected image, authentication of a legitimate user will still fail when the facial feature points used for face recognition are incomplete.
  • In view of this, an embodiment of the present application provides a method for collecting a face image: when the face pose corresponding to the preview image collected by the electronic device satisfies the first set condition, and the first state of the electronic device when collecting the preview image indicates that the stability of the electronic device satisfies the second set condition, the first photo is output based on the preview image.
  • Because the face pose corresponding to the preview image satisfies the first set condition, the situation where facial feature points in the output first photo are collected incompletely because the face pose is incorrect is avoided.
  • Because the stability of the electronic device satisfies the second set condition while the preview image is collected, a clear face image can be captured, avoiding blurring or missing of the face image in the output photo caused by jitter of the electronic device during collection. Since the face image in the first photo output by the electronic device is clear and contains all the facial feature points used for face recognition, the success rate of legitimate user identity verification can be improved when the server performs identity verification based on the first photo.
  • FIG. 1 shows a schematic diagram of an implementation flow of a method for collecting a face image provided by an embodiment of the present application.
  • the execution subject of the method for collecting a face image is an electronic device, for example, a terminal such as a mobile phone or a tablet computer.
  • the method for collecting a face image includes:
  • S101 Determine a face pose corresponding to the at least one frame of preview image based on the face feature points of each frame of the preview image in the at least one frame of preview image collected by the electronic device.
  • When the electronic device starts the image capture mode, it can capture at least one frame of preview image through a built-in camera or an external camera.
  • the electronic device determines the face feature points in the preview image by using the face recognition technology, and determines the face pose corresponding to the corresponding preview image based on the determined face feature points.
  • the facial feature points represent the positions of facial features on the facial image in the preview image.
  • the facial feature points include points corresponding to the contours of the face, eyebrows, pupils, corners of eyes, mouths, noses, etc.
  • the electronic device may also input at least one frame of preview image into a setting model for recognizing the face pose, and obtain the face pose output by the setting model.
  • the setting model is used to determine the corresponding face pose based on the face feature points in the input image.
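The embodiments above leave the concrete pose computation open. As an illustrative sketch only (the two-landmark input and the function name are our own assumptions, not the patent's method), one of the three face pose angles, the roll, can be estimated from the line joining the two outer eye-corner feature points:

```python
import math

def roll_from_eye_corners(left_eye, right_eye):
    """Estimate the face roll angle in degrees from the two outer
    eye-corner feature points, given as (x, y) pixel coordinates
    (y grows downward, as in image coordinates)."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

# Level eyes -> roll of 0; right eye 20 px lower -> roll of about 11.3 degrees.
level = roll_from_eye_corners((100, 200), (200, 200))
tilted = roll_from_eye_corners((100, 200), (200, 220))
```

Pitch and yaw cannot be recovered from two points alone; in practice they would come from a 3D face model fitted to more landmarks, or from the set model mentioned above.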
  • the electronic device determines whether the facial posture satisfies the first setting condition.
  • the first set condition is set based on the image acquisition requirements of face recognition; when the face pose does not meet the first set condition, the electronic device continues to collect preview images.
  • the face pose includes at least one of the following: a face pose angle, whether the face image is occluded, and whether the user's eyes are closed. Correspondingly, the first set condition includes at least one of the following: the face pose angle is within a set angle range; the face image is not occluded; the user's eyes are not closed.
  • the face pose angles include pitch, yaw and roll.
  • the pitch angle represents the rotation angle around the X axis
  • the yaw angle represents the rotation angle around the Y axis
  • the roll angle represents the rotation angle around the Z axis.
  • the set angle ranges corresponding to the pitch angle and the yaw angle are both -15° to 15°
  • the set angle range corresponding to the roll angle is -10° to 10°.
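Using the ranges given above (pitch and yaw within -15° to 15°, roll within -10° to 10°), a minimal check of whether the face pose angle satisfies the first set condition might look like this (the function and dictionary names are illustrative):

```python
# Ranges taken from the embodiment above: pitch and yaw within
# [-15, 15] degrees, roll within [-10, 10] degrees.
POSE_LIMITS = {"pitch": (-15.0, 15.0), "yaw": (-15.0, 15.0), "roll": (-10.0, 10.0)}

def pose_within_limits(pitch, yaw, roll, limits=POSE_LIMITS):
    """Return True when every face pose angle lies inside its set range."""
    angles = {"pitch": pitch, "yaw": yaw, "roll": roll}
    return all(lo <= angles[name] <= hi for name, (lo, hi) in limits.items())
```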
  • the electronic device may also compare the face feature points of at least two adjacent frames of preview images to determine whether the face feature points are missing, thereby determining whether the face image is blocked.
  • the electronic device may detect the position of the pupil and the positions of the upper and lower eyelids based on a face recognition algorithm, and determine a first distance between the upper eyelid and the lower eyelid from the detected eyelid positions. If no pupil is detected, it is determined that the user's eyes are closed. If a pupil is detected but the first distance is smaller than a set distance, it is likewise determined that the user's eyes are closed; if a pupil is detected and the first distance is greater than or equal to the set distance, it is determined that the user's eyes are not closed.
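The eye-closure rule just described can be sketched as follows; pupil detection and eyelid localisation are assumed to happen elsewhere, and the names are ours:

```python
def eyes_closed(pupil_detected, eyelid_gap, set_distance):
    """Apply the rule described above: if no pupil is detected the eyes
    count as closed; otherwise they count as closed when the first
    distance (upper-to-lower eyelid gap) is below the set distance."""
    if not pupil_detected:
        return True
    return eyelid_gap < set_distance
```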
  • S102 Determine the first state of the electronic device when the preview image is collected; the first state represents the stability of the electronic device.
  • the electronic device acquires motion state data of the electronic device when collecting the preview image, and determines the first state of the electronic device when collecting the preview image based on the motion state data of the electronic device.
  • the motion state data of the electronic device is collected by a built-in sensor, and the sensor includes at least one of the following: an angular velocity sensor and an acceleration sensor.
  • motion state data recorded when the stability of the electronic device satisfies the second set condition may be pre-stored in the electronic device.
  • if the motion state data collected while the electronic device captures the preview image does not match the pre-stored motion state data, the first state indicates that the stability of the electronic device does not satisfy the second set condition; if it matches the pre-stored motion state data, the first state indicates that the stability satisfies the second set condition.
  • the second set condition represents the degree of stability at which the electronic device can acquire a clear image.
  • the electronic device can also determine the first state of the electronic device when the preview image is collected by using the trained model.
  • the model is used to determine the corresponding first state based on the motion state data of the electronic device.
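The patent leaves open how the collected motion data is matched against the pre-stored stable-state data or classified by the trained model. One minimal approximation (the variance test and the threshold are our own simplification, not the patent's matching rule) is to require that recent motion samples vary less than a threshold derived from the pre-stored stable-state recordings:

```python
def is_stable(samples, max_variance):
    """Treat the device as stable when the variance of recent motion
    samples (e.g. accelerometer magnitudes) stays below a threshold
    derived from pre-stored stable-state recordings."""
    n = len(samples)
    mean = sum(samples) / n
    variance = sum((s - mean) ** 2 for s in samples) / n
    return variance <= max_variance
```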
  • if the face pose does not satisfy the first set condition, or the determined first state indicates that the stability of the electronic device does not satisfy the second set condition, the electronic device continues to collect preview images.
  • When the face pose corresponding to at least one frame of the preview image satisfies the first set condition, and the stability of the electronic device represented by the determined first state satisfies the second set condition, it indicates that the electronic device can collect a clear and complete frontal face image; at this time, the electronic device outputs a first photo based on the at least one frame of collected preview image.
  • the electronic device may send the first photo to the server, so that the server performs identity verification based on the first photo.
  • the electronic device can end this process when the first photo is output and prompt the user that the face image was collected successfully; it can also end this process when the first photo has not been output within a set time period, prompting the user that face image collection failed.
  • the outputting the first photo based on the at least one frame preview image includes:
  • a first preview image is selected from the at least one frame of preview image, and the first preview image is output as a first photo;
  • the first preview image satisfies at least one of the following:
  • the proportion of the face image is in a set range; the proportion of the face image represents the ratio between the area of the face image within the set face frame and the area of the set face frame; the set range characterizes the allowable values of this ratio;
  • the distance between the first area and the second area in the face image is smaller than the set threshold; the first area represents the area where the upper lip is located; the second area represents the area where the lower lip is located.
  • the electronic device collects the at least one frame of preview image when the face pose corresponding to the at least one frame of preview image satisfies the first set condition and the stability of the electronic device represented by the determined first state satisfies the second set condition, selects a first preview image from the collected preview images, and outputs the first preview image as the first photo.
  • Because the proportion of the face image in the first preview image is within the set range, a clear face image can be obtained even if the first photo is output based only on the image within the set face frame of the first preview image.
  • Because the first preview image satisfies the condition that the distance between the first area and the second area in the face image is less than the set threshold, failure of legitimate-user authentication caused by the user opening his mouth is avoided.
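The two selection criteria above can be combined into a small predicate; all names and numeric inputs are illustrative assumptions:

```python
def frame_acceptable(face_area, frame_area, ratio_range, lip_gap, lip_threshold):
    """Check the two selection criteria described above: the face-to-frame
    area ratio lies in the set range, and the distance between the upper-lip
    and lower-lip areas stays below the set threshold (mouth closed)."""
    lo, hi = ratio_range
    ratio = face_area / frame_area
    return lo <= ratio <= hi and lip_gap < lip_threshold
```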
  • In practice, the electronic device can determine a first area, i.e. the area of the set face frame displayed in the preview interface, and a second area, i.e. the area of the face image displayed within the set face frame, calculate the proportion of the face image in the preview image based on the first area and the second area, and determine whether that proportion is within the set range.
  • the set face frame may be set based on the shape of a person's head.
  • the set range is set based on the proportion occupied by the face image when a frontal face image is displayed within the set face frame.
  • the proportion of the face image reflects the distance between the face and the display screen of the electronic device.
  • the electronic device may output prompt information to prompt the user to adjust the distance between the face and the display screen of the electronic device.
  • the electronic device may determine the first area and the second area in the captured preview image based on the face recognition technology, and determine the distance between the first area and the second area. When the distance between the first area and the second area is greater than or equal to the set threshold, the user may be prompted not to open his mouth.
  • In the embodiments of the present application, the electronic device determines the face pose corresponding to at least one frame of preview image based on the facial feature points of each frame of preview image collected by the electronic device, and determines the first state of the electronic device when the preview image is collected; when the face pose corresponding to the at least one frame of preview image satisfies the first set condition, and the determined first state indicates that the stability of the electronic device satisfies the second set condition, the first photo is output based on the at least one frame of preview image.
  • Because the face pose corresponding to the preview image satisfies the first set condition, the output first photo meets the collection requirements of face recognition, avoiding the situation where facial feature points in the output first photo are collected incompletely because the face pose is incorrect. Because the stability of the electronic device satisfies the second set condition while the preview image is collected, a clear face image can be captured, avoiding blurring of the face image in the output photo, or missing facial feature points, caused by jitter of the electronic device during collection. Since the face image in the first photo output by the electronic device is clear and contains all the facial feature points used for face recognition, the success rate of legitimate user identity verification can be improved when the server performs identity verification based on the first photo.
  • In an embodiment, the determining of the first state of the electronic device when capturing the preview image includes:
  • in the case that the face pose corresponding to the at least one frame of preview image satisfies the first set condition, determining the first state of the electronic device when the at least one frame of preview image was collected.
  • when the electronic device performs S101 and determines that the face pose corresponding to the at least one frame of preview image does not meet the first set condition, it returns to S101 or ends this process.
  • the electronic device determines the first state of the electronic device when the at least one frame of preview image was collected only when the face pose corresponding to the at least one frame of preview image satisfies the first set condition, which can save the power consumption of the electronic device.
  • for the implementation of determining the first state of the electronic device when the at least one frame of preview image is collected, please refer to the relevant description in S102, which is not repeated here.
  • when the face pose corresponding to the at least one frame of preview image does not satisfy the first set condition, the condition for outputting the first photo is not satisfied; skipping the step of determining the first state for that preview image therefore saves the resources consumed by determining the first state, improves data processing speed, and reduces power consumption.
  • FIG. 2 shows a schematic flowchart of an implementation of determining a first state in a method for collecting a face image provided by an embodiment of the present application.
  • the determining of the first state of the electronic device when collecting the at least one frame of preview image includes:
  • S201 Convert the first data into second data conforming to a quaternion format; the first data represents motion state data of the electronic device collected when the electronic device collects the at least one frame of preview image.
  • a quaternion has the form q = w + x·i + y·j + z·k, where i, j and k are all imaginary units and w, x, y and z are real numbers; that is, a quaternion can be represented by the combination of a real part w and a vector (x, y, z).
  • a vector can be regarded as a quaternion whose real part is 0, and a real number can be regarded as a quaternion whose imaginary part is 0.
  • the quaternion q = (cos(θ/2), x·sin(θ/2), y·sin(θ/2), z·sin(θ/2)) can represent a rotation operation in which a point in space is rotated by an angle θ with the unit vector (x, y, z) as the axis.
  • since the electronic device can be abstracted as a point in three-dimensional space, and the first data can be decomposed into data in the three directions of the X axis, the Y axis and the Z axis, this quaternion can also characterize a rotation operation in which the electronic device rotates by an angle θ with the unit vector (x, y, z) as the axis. The electronic device therefore converts the first data into second data conforming to the quaternion format.
  • the first data may include at least one of the following: angular velocity data collected by a gyroscope; acceleration data collected by an accelerometer.
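As a hedged sketch of the conversion into quaternion format (the patent does not give its exact conversion; this assumes one angular-velocity sample integrated over a sampling interval, using the axis-angle quaternion described above):

```python
import math

def gyro_to_quaternion(wx, wy, wz, dt):
    """Convert one angular-velocity sample (rad/s about X, Y, Z) over a
    sampling interval dt into the unit quaternion
    (cos(theta/2), x*sin(theta/2), y*sin(theta/2), z*sin(theta/2)),
    where theta is the rotation angle and (x, y, z) the unit axis."""
    theta = math.sqrt(wx * wx + wy * wy + wz * wz) * dt
    if theta == 0.0:
        return (1.0, 0.0, 0.0, 0.0)  # no rotation: identity quaternion
    ax = wx * dt / theta
    ay = wy * dt / theta
    az = wz * dt / theta
    half = theta / 2.0
    s = math.sin(half)
    return (math.cos(half), ax * s, ay * s, az * s)
```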
  • In order to reduce data interference and improve the accuracy of the data, and thereby the accuracy of the determined first state, when converting the first data into second data conforming to the quaternion format, the method also includes:
  • the first data corresponding to the first time period is deleted; the start time of the first time period corresponds to the start time when the electronic device collects the at least one frame of the preview image.
  • the electronic device when the electronic device acquires the first data, it deletes the first data corresponding to the first time period, and then converts the remaining first data into second data conforming to the quaternion format.
  • the electronic device may delete the first data acquired within 100 milliseconds of entering the preview mode.
  • the electronic device may also delete the first data collected during the period of collecting a set number of preview images, for example, the electronic device deletes the first data collected within the period corresponding to the first 10 frames of preview images.
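Discarding the first data of the first time period, e.g. the first 100 milliseconds after preview capture starts, can be sketched like this; the (timestamp, value) representation is our assumption:

```python
def drop_warmup(samples, warmup_ms):
    """Discard sensor samples whose timestamp falls inside the first
    warmup_ms milliseconds after preview capture starts; samples are
    (timestamp_ms, value) pairs relative to that start."""
    return [(t, v) for (t, v) in samples if t >= warmup_ms]
```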
  • S202 Based on the best estimate of the motion state data at time t-1, predict the a priori estimate of the motion state data at time t; wherein t is a positive integer, and both the best estimate and the a priori estimate are data in quaternion format.
  • Here, the second data is processed. Specifically, based on the observed value (or measured value) at time t and the best estimate (also called the theoretical or optimal estimate) at time t-1, the true value at time t is estimated to obtain the corresponding best estimate.
  • the electronic device can predict the a priori estimated value of the motion state data at time t based on the theoretical estimated value of the motion state data at time t-1, combined with factors such as system noise and measurement noise.
  • the motion state data includes at least one of angular velocity data and acceleration data.
  • the angular velocity data can be collected using an angular velocity sensor (eg, a gyroscope) built in the electronic device; the acceleration data can be collected using an acceleration sensor built in the electronic device.
  • the motion state data at time t-1 and the second data corresponding to time t are motion state data of the same type: when the motion state data is angular velocity data, the second data corresponding to time t is the second data corresponding to the angular velocity data; when the motion state data includes both types, the second data corresponding to time t includes second data corresponding to angular velocity data and second data corresponding to acceleration data.
  • the second data corresponding to time t corresponds to the observed value at time t.
  • the electronic device uses the second data corresponding to time t to correct the priori estimated value at time t to obtain third data corresponding to time t.
  • the third data corresponding to time t corresponds to the best estimated value at time t.
  • the electronic device estimates the best estimated value corresponding to time t based on the observation value recorded by the sensor at time t and the best estimated value of the sensor at time t-1.
  • the measured values recorded by the sensor include at least one of angular velocity data and acceleration data.
  • the second data may also be processed based on a Kalman filter algorithm to obtain corresponding third data.
  • the Kalman filter algorithm is a recursive prediction-correction method.
  • the Kalman filter algorithm is divided into two steps:
  • Prediction: estimate the a priori estimate at time t based on the best estimate at time t-1;
  • Update: use the observed value at time t to correct the a priori estimate at time t to obtain the a posteriori estimate at time t, also known as the best estimate.
  • the prediction stage is implemented by the time update equation, and the update stage by the state update equation.
  • In S202, the a priori estimate of the motion state data at time t is predicted based on the time update equation and the best estimate of the motion state data at time t-1; in S203, the third data (i.e. the best estimate) corresponding to time t is determined based on the state update equation, the second data corresponding to time t, and the a priori estimate at time t.
  • the time update equation includes:
  • x̂ₜ⁻ = Fₜ·x̂ₜ₋₁ + Bₜ·uₜ (1)
  • Pₜ = Fₜ·Pₜ₋₁·Fₜᵀ + Qₜ (2)
  • wherein x̂ₜ₋₁ characterizes the best estimate at time t-1; x̂ₜ⁻ represents the a priori estimate at time t, that is, the result at time t predicted based on the best estimate at time t-1; Fₜ represents the prediction matrix, or state transition matrix, corresponding to the prediction process; Fₜᵀ represents the transpose of Fₜ; Bₜ is the control matrix and uₜ the control vector; Pₜ characterizes the covariance matrix of x̂ₜ⁻, and Pₜ₋₁ characterizes the covariance matrix of x̂ₜ₋₁; Qₜ represents the process excitation noise covariance, that is, the covariance of the system process, and Qₜ is used to represent the error between Fₜ and the actual process.
  • the noise Qₜ characterizes external unknown influencing factors that affect the stability of the electronic device, e.g., a sudden impact, external wind speed, etc.
  • the state update equation includes:
  • x̂ₜ' = x̂ₜ⁻ + K'·(zₜ − Hₜ·x̂ₜ⁻) (3)
  • K' = Pₜ·Hₜᵀ·(Hₜ·Pₜ·Hₜᵀ + Rₜ)⁻¹ (4)
  • Pₜ' = Pₜ − K'·Hₜ·Pₜ (5)
  • wherein K' represents the Kalman filter gain, or Kalman filter coefficient; Hₜ represents the observation matrix of the sensor, which maps the state onto the sensor reading, and zₜ represents the observed value of the sensor at time t; Pₜ' represents the a posteriori estimated covariance matrix at time t, namely the covariance matrix of x̂ₜ', representing the uncertainty of the state; Rₜ characterizes the measurement noise covariance matrix, that is, the noise of the sensor.
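In one dimension the time update and state update equations collapse to scalars; the following sketch (our illustration, not the patent's implementation) runs one predict/correct cycle:

```python
def kalman_step(x_prev, p_prev, z, q, r, f=1.0, h=1.0):
    """One scalar predict/correct cycle.
    x_prev, p_prev: best estimate and its variance at time t-1
    z: observation at time t; q: process noise; r: measurement noise;
    f, h: scalar state-transition and observation factors."""
    # Prediction (time update): prior estimate at time t.
    x_prior = f * x_prev
    p_prior = f * p_prev * f + q
    # Update (state update): correct the prior with the observation.
    k = p_prior * h / (h * p_prior * h + r)  # Kalman gain
    x_post = x_prior + k * (z - h * x_prior)
    p_post = p_prior - k * h * p_prior
    return x_post, p_post
```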
  • for example, when the first data includes both angular velocity data g and acceleration data a, Rₜ can be written as the covariance matrix [[σ_gg, σ_ga], [σ_ag, σ_aa]], wherein σ_gg characterizes the correlation between g and g; σ_ga characterizes the correlation between g and a; σ_ag characterizes the correlation between a and g; σ_aa characterizes the correlation between a and a. Both g and a are randomly distributed and both conform to a Gaussian distribution.
  • formula (6) corresponds to the estimated value of the sensor, and formula (7) corresponds to the predicted (observed) value of the sensor:
  • (μ₀, Σ₀) = (Hₜ·x̂ₜ⁻, Hₜ·Pₜ·Hₜᵀ) (6)
  • (μ₁, Σ₁) = (zₜ, Rₜ) (7)
  • wherein Hₜᵀ represents the transposed matrix of Hₜ; μ₀ represents the mean of the Gaussian distribution corresponding to the estimated value of the sensor, and Σ₀ represents its covariance; μ₁ represents the mean of the Gaussian distribution corresponding to the observed value of the sensor, and Σ₁ represents its covariance; Rₜ represents the noise of the sensor.
  • the predicted estimated value may be accurate or inaccurate.
  • the Gaussian distribution corresponding to the predicted value and the Gaussian distribution corresponding to the estimated value are multiplied to obtain a new Gaussian distribution, and the best estimate is then determined based on the new Gaussian distribution.
  • the new Gaussian distribution represents the overlapping area of the predicted value and the estimated value, which is the area where the best estimate is located. The following describes the process of determining the new Gaussian distribution based on the one-dimensional Gaussian distribution curve equation:
  • the one-dimensional Gaussian distribution curve equation with expectation μ and variance σ² is N(x, μ, σ) = (1/(σ·√(2π)))·exp(−(x−μ)²/(2σ²)).
  • setting N(x, μ₀, σ₀)·N(x, μ₁, σ₁) = k·N(x, μ', σ'), wherein N(x, μ₀, σ₀) represents the first Gaussian curve, N(x, μ₁, σ₁) represents the second Gaussian curve, and N(x, μ', σ') represents the new Gaussian curve, it can be derived that K = σ₀²/(σ₀² + σ₁²), μ' = μ₀ + K·(μ₁ − μ₀), and σ'² = σ₀² − K·σ₀², wherein K is the Kalman gain, or Kalman coefficient.
  • formulas (3) to (5) are obtained by substituting the matrix forms (6) and (7) into the above expressions and canceling Hₜ from each term.
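The product-of-Gaussians step can be checked numerically; this sketch implements the standard one-dimensional result, with K = σ₀²/(σ₀² + σ₁²), consistent with the Kalman gain discussed above:

```python
def fuse_gaussians(mu0, var0, mu1, var1):
    """Multiply two one-dimensional Gaussians N(mu0, var0) and
    N(mu1, var1); up to a scale factor the product is a new Gaussian
    with K = var0 / (var0 + var1), mu' = mu0 + K*(mu1 - mu0),
    var' = var0 - K*var0."""
    k = var0 / (var0 + var1)
    return mu0 + k * (mu1 - mu0), var0 - k * var0, k
```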
  • the method further includes:
  • the first coefficient is replaced based on at least one confidence level; each confidence level in the at least one confidence level corresponds to a type of motion state data; the first coefficient represents the relevant calculation parameters of the Kalman filter algorithm.
  • the Kalman gain K' will change with the change of the noise R t of the sensor.
  • the Kalman filter algorithm is used to process the second data, and the corresponding best estimated value can be obtained. In practical applications, however, considering factors such as the computing power of the electronic device and algorithm efficiency, a result within the allowable error range is acceptable. Therefore, in this embodiment, a fixed weight is determined based on at least one confidence level, and the fixed weight is used to replace the Kalman gain K' to obtain an estimated value within the error range.
  • At least one confidence level includes at least one of the following:
  • the electronic device replaces the first coefficient based on at least one confidence level, thereby simplifying the above formulas (2) to (5) to obtain formula (20).
  • the replacement of the first coefficient based on at least one confidence level includes:
  • the second coefficient represents the covariance matrix of the a priori estimated value at time t;
  • the filter coefficients in the first coefficients are replaced based on at least one confidence level.
  • the second coefficient in the first coefficients corresponds to P_t in the above equations (2), (4), and (5); the covariance matrix of the a priori estimated value at time t is replaced by zero, that is, P_t in the above equations (2), (4), and (5) is set to zero, so that these equations all evaluate to zero.
  • the filter coefficients in the first coefficients are Kalman filter coefficients, also called Kalman gain K'.
  • the electronic device replaces the Kalman gain K' in formula (3) of the above state update equations based on at least one confidence level, and obtains the corresponding new equation (20).
  • when the filter coefficient included in the first coefficients is replaced based on at least one confidence level, the method further includes:
  • a new filter coefficient is determined based on the confidence of the acceleration data and the confidence of the angular velocity data; the new filter coefficient is used to replace the filter coefficient in the first coefficients.
  • in the new filter coefficient, α_a represents the confidence of the acceleration data corresponding to the acceleration sensor, and α_g represents the confidence of the angular velocity data corresponding to the angular velocity sensor.
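The fixed-weight replacement described above can be sketched as a simple confidence-weighted fusion of the two sensor-derived estimates. This is a minimal illustration, not the patent's implementation: the function name, the normalization step, and the example confidence values are assumptions.

```python
# Sketch: replacing the Kalman gain with fixed, confidence-based weights.
# alpha_a / alpha_g (confidences of the accelerometer and gyroscope data)
# are assumed names; the source gives no numeric values for them.

def fuse_fixed_weight(accel_estimate: float, gyro_estimate: float,
                      alpha_a: float, alpha_g: float) -> float:
    """Weighted fusion of two sensor-derived estimates of the same quantity."""
    total = alpha_a + alpha_g
    return (alpha_a * accel_estimate + alpha_g * gyro_estimate) / total

# With equal confidence, the result is the plain average of both estimates.
fused = fuse_fixed_weight(1.0, 3.0, alpha_a=0.5, alpha_g=0.5)
print(fused)  # 2.0
```

Unlike the full Kalman gain K', these weights do not adapt to the sensor noise R_t; they trade accuracy within the allowed error range for lower computation, which is the motivation stated above.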
  • S204: Input the third data corresponding to time t into the set model to obtain the first state of the electronic device when collecting the preview image at time t; the set model is used to determine the corresponding first state according to the input data.
  • the setting model is obtained by training at least one sample data based on a machine learning algorithm, and each sample data in the at least one sample data is set with a corresponding first state.
  • the set model is trained with three sample data, and the set model after training is obtained.
  • the processing procedures of the first sample data and the second sample data please refer to the relevant descriptions of S201 to S203, which will not be repeated here.
  • the electronic device inputs the third data corresponding to time t into the trained set model, and obtains the first state, output by the set model, of the electronic device when collecting the preview image at time t.
  • the setting model determines the corresponding first state by analyzing the change of the input data.
  • when the set model analyzes that the variation range of the input data is greater than the maximum threshold of the set range, the output first state indicates that the electronic device shakes violently and that the stability of the electronic device does not meet the second set condition.
  • the set range represents the floating range of the data.
  • the output corresponding first state indicates that the stability of the electronic device satisfies the second set condition.
  • the electronic device obtains its identity identifier at this time and, based on the obtained identity identifier, determines whether the electronic device is attacked by a simulator, so as to determine, based on the judgment result, whether the stability of the electronic device represented by the output first state satisfies the second set condition.
  • indicating that the input data is trusted data; the electronic device has not been attacked by the simulator, and the stability of the electronic device represented by the output first state satisfies the second set condition.
  • indicating that the input data is untrusted data; the electronic device has been attacked by the simulator, and the stability of the electronic device represented by the output first state does not meet the second set condition.
  • the identity identifier may be an International Mobile Equipment Identity (IMEI).
  • the first data is converted into second data conforming to the quaternion format, and the second data is filtered to obtain third data; the third data is input into the set model to obtain the first state of the electronic device when the preview image is collected.
  • the second data in the quaternion format can conveniently and quickly represent the electronic device performing a rotation operation around a vector passing through the origin, avoiding gimbal lock; filtering the second data can filter out the noise contained in it and improve data accuracy; determining the corresponding first state through the set model can improve the accuracy of the determined first state.
  • FIG. 3 shows a schematic diagram of an implementation flow of a method for collecting a face image provided by another embodiment of the present application.
  • the method for collecting a face image provided by this embodiment further includes at least one of the following:
  • S104 In the case that the determined face posture does not meet the first set condition, output first prompt information; the first prompt information is used to prompt the user to adjust the face posture.
  • the electronic device may prompt the user by means of text, voice, or adjusting the color of the user interface.
  • the determined facial posture does not meet the first set condition.
  • the output of the first prompt information includes at least one of the following:
  • the first setting range and the second setting range may be -15° to 15°
  • the third setting range may be -10° to 10°.
  • the electronic device may output the first prompt information "do not block the face" when it is determined that the facial image is blocked.
  • the electronic device may output the first prompt information "do not close your eyes" when it is determined that the user has closed his or her eyes.
  • S105: In the case where the determined first state indicates that the current stability of the electronic device does not meet the second set condition, output second prompt information; the second prompt information is used to prompt the user to hold the electronic device steady.
  • when the determined first state indicates that the current stability of the electronic device does not meet the second set condition, the electronic device outputs the second prompt information to prompt the user to hold the electronic device steady, so as to capture a clear image.
  • the electronic device outputs corresponding prompt information to prompt the user to adjust the posture of the face or hold the electronic device steadily, which can improve the image acquisition efficiency.
  • FIG. 4 shows a schematic diagram of an implementation flow of a method for collecting a face image provided by an application embodiment of the present application.
  • the method for collecting a face image provided by this implementation includes:
  • S401 Determine, based on the face feature points of each frame of preview image in at least one frame of preview image collected by the electronic device, a face posture angle corresponding to the at least one frame of preview image.
  • S402 Determine whether the determined face posture angle is within a set angle range.
  • S403 Output third prompt information, where the third prompt information is used to prompt the user to adjust the face posture angle.
  • S404 Determine whether the face image in the at least one frame of preview image is blocked.
  • S405 Output fourth prompt information, where the fourth prompt information is used to prompt not to block the face.
  • S406 Determine whether the user's eyes are closed in the at least one frame preview image.
  • In the case where it is determined that the user's eyes are closed in the at least one frame of preview image, S407 is performed; in the case where it is determined that the user's eyes are not closed, S408 is performed.
  • S407 Output fifth prompt information, where the fifth prompt information is used to prompt the user not to close their eyes.
  • S408 Determine whether the aspect ratio of the face image in the at least one frame of preview image is within a set ratio range.
  • S409 Output sixth prompt information; the sixth prompt information is used to prompt the user to adjust the distance between the human face and the display screen of the electronic device.
  • S410 Determine the first state of the electronic device when the at least one frame of preview image is captured.
  • the face posture angle corresponding to the preview image is within the set range
  • the outputted first photo is a clear frontal face image
  • the user's eyes are not closed in the first photo
  • the face image is not blocked. Therefore, when the server performs identity verification based on the first photo, the success rate of authenticating a legitimate user can be improved.
  • the embodiment of the present application further provides a device for collecting a face image, which is arranged on an electronic device.
  • the device for collecting a face image includes:
  • the first determining unit 51 is configured to determine the facial posture corresponding to the at least one frame of preview image based on the facial feature points of each frame of the preview image in the at least one frame of preview image collected by the electronic device;
  • the second determination unit 52 is configured to determine the first state of the electronic device when the preview image is collected; the first state represents the stability of the electronic device;
  • the output unit 53 is configured to: in the case that the face posture corresponding to the at least one frame of preview image meets the first set condition and the stability of the electronic device represented by the determined first state meets the second set condition, output a first photo based on the at least one frame of preview image; the first photo is used by the server for identity verification.
  • the second determining unit 52 is configured to: determine the first state of the electronic device when collecting the at least one frame of preview image, in the case that the face posture corresponding to the at least one frame of preview image satisfies the first set condition.
  • the second determining unit 52 is configured to:
  • the first data represents motion state data of the electronic device collected when the electronic device collects the at least one frame of preview image
  • the setting model is used to determine the corresponding first state according to the input data.
  • the apparatus for collecting a face image further includes:
  • the deletion unit is configured to delete the first data corresponding to the first period; the starting moment of the first period corresponds to the starting moment when the electronic device collects the at least one frame of the preview image.
  • the apparatus for collecting a face image further includes:
  • a replacement unit configured to replace the first coefficient based on at least one confidence level; each confidence level in the at least one confidence level corresponds to a type of motion state data; the first coefficient represents the relevant calculation parameters of the Kalman filter algorithm.
  • the replacement unit is configured as:
  • the second coefficient represents the covariance matrix of the a priori estimated value at time t;
  • the filter coefficients in the first coefficients are replaced based on at least one confidence level.
  • the replacement unit is further configured to:
  • a new filter coefficient is determined based on the confidence of the acceleration data and the confidence of the angular velocity data; the new filter coefficient is used to replace the filter coefficient in the first coefficients.
  • the facial gesture includes at least one of the following:
  • the first setting condition includes that the facial posture angle is within a set angle range
  • the first setting condition includes that the facial image is not blocked
  • the first setting condition includes that the user has not closed his eyes.
  • the output unit 53 is configured to:
  • a first preview image is selected from the at least one frame of preview image, and the first preview image is output as a first photo;
  • the first preview image satisfies at least one of the following:
  • the proportion of the face image is in the set range; the proportion of the face image represents the ratio between the area of the face image in the set face frame and the area of the set face frame;
  • the distance between the first area and the second area in the face image is smaller than the set threshold; the first area represents the area where the upper lip is located; the second area represents the area where the lower lip is located.
  • the apparatus for collecting a face image further includes a prompting unit, and the prompting unit is at least configured to perform one of the following:
  • the first prompt information is used to prompt the user to adjust the face posture
  • the second prompt information is used to prompt the user to hold the electronic device steadily.
  • each unit included in the apparatus for collecting face images may be implemented by a processor in the apparatus for collecting face images.
  • the processor needs to run the program stored in the memory to realize the functions of the above program modules.
  • it should be noted that when the device for collecting a face image provided by the above embodiment collects a face image, the division of the above program modules is only used as an example. In practical applications, the above processing can be allocated to different program modules as needed; that is, the internal structure of the device for collecting the face image is divided into different program modules to complete all or part of the processing described above.
  • the apparatus for collecting a face image provided in the above embodiment and the embodiment of the method for collecting a face image belong to the same concept, and the specific implementation process is detailed in the method embodiment, which will not be repeated here.
  • FIG. 6 is a schematic diagram of a hardware composition structure of an electronic device provided by an embodiment of the application. As shown in FIG. 6 , the electronic device includes:
  • Communication interface 1 which can exchange information with other devices such as servers;
  • the processor 2 is connected to the communication interface 1 to realize information exchange with other devices, and is configured to execute the method for collecting a face image provided by one or more of the above technical solutions when running a computer program.
  • the memory 3, configured to store a computer program capable of running on the processor 2.
  • the bus system 4 is configured to enable connection and communication between these components.
  • in addition to the data bus, the bus system 4 also includes a power bus, a control bus, and a status signal bus.
  • for clarity, the various buses are all labeled as the bus system 4 in FIG. 6.
  • the memory 3 in the embodiment of the present application is configured to store various types of data to support the operation of the electronic device.
  • Examples of such data include: any computer program configured to operate on an electronic device.
  • the memory 3 may be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memory.
  • the non-volatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferromagnetic random access memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be a disk memory or a tape memory.
  • the volatile memory may be a Random Access Memory (RAM). By way of example and not limitation, many forms of RAM are available, such as a Static Random Access Memory (SRAM), a Synchronous Static Random Access Memory (SSRAM), a Dynamic Random Access Memory (DRAM), a Synchronous Dynamic Random Access Memory (SDRAM), a Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), an Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), a SyncLink Dynamic Random Access Memory (SLDRAM), and a Direct Rambus Random Access Memory (DRRAM).
  • the memory 3 described in the embodiments of the present application is intended to include but not limited to these and any other suitable types of memory.
  • the methods disclosed in the above embodiments of the present application may be applied to the processor 2 or implemented by the processor 2 .
  • the processor 2 may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the above-mentioned method can be completed by a hardware integrated logic circuit in the processor 2 or an instruction in the form of software.
  • the above-mentioned processor 2 may be a general-purpose processor, a DSP, or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like.
  • the processor 2 may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application.
  • a general purpose processor may be a microprocessor or any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application can be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a storage medium, the storage medium is located in the memory 3, and the processor 2 reads the program in the memory 3, and completes the steps of the foregoing method in combination with its hardware.
  • the embodiment of the present application further provides a storage medium, that is, a computer storage medium, specifically a computer-readable storage medium, for example, including the memory 3 storing a computer program; the above computer program can be executed by the processor 2 to complete the steps of the foregoing method.
  • the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disk, or CD-ROM.
  • the disclosed devices and methods may be implemented in other manners.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division, and there may be other division methods in actual implementation.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces; the indirect coupling or communication connection of devices or units may be electrical, mechanical, or in other forms.
  • the units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing module, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the above integrated unit can be implemented either in the form of hardware or in the form of hardware plus software functional units.
  • the aforementioned program can be stored in a computer-readable storage medium; when the program is executed, the steps of the above method embodiments are performed. The aforementioned storage medium includes various media on which program code can be stored, such as a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
  • the term "and/or" in the embodiments of the present application is only an association relationship describing associated objects, indicating that three kinds of relationships may exist; for example, A and/or B may indicate that A exists alone, A and B exist at the same time, or B exists alone.
  • the term "at least one" herein refers to any one of a plurality of items or any combination of at least two of a plurality of items; for example, including at least one of A, B, and C may mean including any one or more elements selected from the set consisting of A, B, and C.


Abstract

The embodiments of this application disclose a method, an apparatus, and an electronic device for collecting a face image. The method includes: determining, based on the face feature points of each frame of preview image in at least one frame of preview image collected by the electronic device, the face posture corresponding to the at least one frame of preview image; determining a first state of the electronic device when collecting the preview images, where the first state represents the stability of the electronic device; and outputting a first photo based on the at least one frame of preview image when the face posture corresponding to the at least one frame of preview image satisfies a first set condition and the stability of the electronic device represented by the determined first state satisfies a second set condition, where the first photo is used by a server for identity verification.

Description

Method, Apparatus, and Electronic Device for Collecting Face Images
Cross-Reference to Related Applications
This application is based on, and claims priority to, Chinese patent application No. 202011138813.9 filed on October 22, 2020, the entire content of which is incorporated herein by reference.
Technical Field
This application relates to the field of financial technology (Fintech) face recognition, and in particular, to a method, an apparatus, and an electronic device for collecting face images.
Background
In the related art, in application scenarios involving account fund security or information security of financial services, face recognition technology is used to verify a user's identity by computing the similarity between a currently collected face image and a pre-stored face image. However, when the image quality of the currently collected face image is poor, identity verification of a legitimate user may fail.
Summary
To solve the above technical problems, the embodiments of this application provide a method, an apparatus, and an electronic device for collecting face images.
An embodiment of this application provides a method for collecting a face image, applied to an electronic device, the method including:
determining, based on the face feature points of each frame of preview image in at least one frame of preview image collected by the electronic device, the face posture corresponding to the at least one frame of preview image;
determining a first state of the electronic device when collecting the preview images, the first state representing the stability of the electronic device; and
outputting a first photo based on the at least one frame of preview image when the face posture corresponding to the at least one frame of preview image satisfies a first set condition and the stability of the electronic device represented by the determined first state satisfies a second set condition, the first photo being used by a server for identity verification.
An embodiment of this application further provides an apparatus for collecting a face image, including:
a first determining unit, configured to determine, based on the face feature points of each frame of preview image in at least one frame of preview image collected by the electronic device, the face posture corresponding to the at least one frame of preview image;
a second determining unit, configured to determine a first state of the electronic device when collecting the preview images, the first state representing the stability of the electronic device; and
an output unit, configured to output a first photo based on the at least one frame of preview image when the face posture corresponding to the at least one frame of preview image satisfies a first set condition and the stability of the electronic device represented by the determined first state satisfies a second set condition, the first photo being used by a server for identity verification.
An embodiment of this application further provides an electronic device, including a processor and a memory configured to store a computer program capable of running on the processor,
wherein the processor is configured to execute, when running the computer program, the steps of any one of the above methods for collecting a face image.
An embodiment of this application further provides a storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of any one of the above methods for collecting a face image.
In the embodiments of this application, the electronic device determines, based on the face feature points of each frame of preview image in at least one frame of preview image collected by the electronic device, the face posture corresponding to the at least one frame of preview image; determines the first state of the electronic device when collecting the preview images; and outputs a first photo based on the at least one frame of preview image when the face posture corresponding to the at least one frame of preview image satisfies the first set condition and the stability of the electronic device represented by the determined first state satisfies the second set condition. Because the face posture corresponding to the preview images satisfies the first set condition, the output first photo meets the collection requirements of face recognition images, avoiding incomplete collection of face feature points in the output first photo caused by an incorrect face posture. Because the stability of the electronic device when collecting the preview images satisfies the second set condition, a clear face image can be collected, avoiding blurred face images and missing face feature points in the output photo caused by shaking of the electronic device during collection. Since the face image in the first photo output by the electronic device is clear and contains all face feature points used for face recognition, the server can improve the success rate of identity verification of legitimate users when performing identity verification based on the first photo.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a method for collecting a face image provided by an embodiment of this application;
FIG. 2 is a schematic flowchart of determining a first state in a method for collecting a face image provided by an embodiment of this application;
FIG. 3 is a schematic flowchart of a method for collecting a face image provided by another embodiment of this application;
FIG. 4 is a schematic flowchart of a method for collecting a face image provided by an application embodiment of this application;
FIG. 5 is a schematic structural diagram of an apparatus for collecting a face image provided by an embodiment of this application;
FIG. 6 is a schematic diagram of the hardware composition structure of an electronic device provided by an embodiment of this application.
Detailed Description
In the related art, an electronic device collects a face image and sends the collected face image to a server; the server uses face recognition technology to perform identity verification by computing the similarity between the received face image and a pre-stored face image.
In dim light, the electronic device may collect a face image of poor quality because of the lighting conditions of the current environment.
When the image quality of the face image collected by the electronic device is poor, for example, the image is blurred or the lighting is dark, the similarity computed by the server is smaller than the set threshold, causing identity verification of a legitimate user to fail. Even if the server obtains a clear face image by preprocessing the face image, verification of a legitimate user will still fail when the face feature points used for face recognition are incomplete.
To solve the above technical problems, the embodiments of this application provide a method for collecting a face image: when the face posture corresponding to the collected preview images satisfies a first set condition, and the first state of the electronic device when collecting the preview images represents that the stability of the electronic device satisfies a second set condition, the electronic device outputs a first photo based on the preview images. Because the face posture corresponding to the preview images satisfies the first set condition, incomplete collection of face feature points in the output first photo caused by an incorrect face posture can be avoided. Because the stability of the electronic device when collecting the preview images satisfies the second set condition, a clear face image can be collected, avoiding blurred or missing face images in the output photo caused by shaking of the electronic device during collection. Since the face image in the first photo output by the electronic device is clear and contains all face feature points used for face recognition, the server can improve the success rate of identity verification of legitimate users when performing identity verification based on the first photo.
The technical solutions of this application are further described in detail below with reference to the accompanying drawings and specific embodiments.
FIG. 1 shows a schematic flowchart of a method for collecting a face image provided by an embodiment of this application. In this embodiment, the method is executed by an electronic device, for example, a terminal such as a mobile phone or a tablet computer.
Referring to FIG. 1, the method for collecting a face image provided by this embodiment includes:
S101: Determine, based on the face feature points of each frame of preview image in at least one frame of preview image collected by the electronic device, the face posture corresponding to the at least one frame of preview image.
When the electronic device starts the image collection mode, it can collect at least one frame of preview image through a built-in or external camera. Using face recognition technology, the electronic device determines the face feature points in each preview image and, based on the determined face feature points, determines the face posture corresponding to the preview image. The face feature points represent the positions of the facial features on the face image in the preview image, and include points corresponding to the face contour, eyebrows, pupils, eye corners, mouth, nose, and so on.
In practical applications, the electronic device may also input the at least one frame of preview image into a set model for recognizing face postures and obtain the face posture output by the set model. The set model is used to determine the corresponding face posture based on the face feature points in the input image.
When the electronic device determines the face posture corresponding to the preview image, it judges whether the face posture satisfies the first set condition. The first set condition is set based on the image collection requirements for face recognition. When the face posture does not satisfy the first set condition, preview images continue to be collected.
In an embodiment, the face posture includes at least one of the following:
a face posture angle;
whether the face image is occluded;
whether the user's eyes are closed.
When the face posture includes a face posture angle, the first set condition includes the face posture angle being within a set angle range;
when the face posture includes whether the face image is occluded, the first set condition includes the face image not being occluded;
when the face posture represents whether the user's eyes are closed, the first set condition includes the user's eyes not being closed.
Here, when the face posture angle is within the set angle range, it indicates that the collected preview image is a frontal face image. The face posture angle includes a pitch angle, a yaw angle, and a roll angle, where the pitch angle represents rotation around the X axis, the yaw angle represents rotation around the Y axis, and the roll angle represents rotation around the Z axis. For example, the set angle ranges corresponding to the pitch and yaw angles are both -15° to 15°, and the set angle range corresponding to the roll angle is -10° to 10°.
When the number of face feature points recognized by the face recognition algorithm is smaller than the set number, it indicates that face feature points are missing and the face image is occluded. The set number represents the number of face feature points corresponding to the face recognition algorithm, for example, 68. The electronic device may also compare the face feature points of at least two adjacent preview frames to determine whether face feature points are missing, and thus whether the face image is occluded.
The electronic device may detect, based on the face recognition algorithm, the position of the pupils as well as the positions of the upper and lower eyelids, and determine a first distance between the upper and lower eyelids based on the detected eyelid positions. When no pupil is detected, it is determined that the user's eyes are closed; when a pupil is detected, it is determined that the user's eyes are not closed. Alternatively, when the first distance is smaller than the set distance, it is determined that the user's eyes are closed; when a pupil is detected and the first distance is greater than or equal to the set distance, it is determined that the user's eyes are not closed.
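The first set condition described above reduces to a few range checks. The sketch below uses the angle ranges quoted in the text (pitch/yaw in [-15°, 15°], roll in [-10°, 10°]) and the 68-landmark example; the function name, the eyelid-gap default, and the field layout are illustrative assumptions, not part of the source.

```python
def pose_meets_first_condition(pitch: float, yaw: float, roll: float,
                               n_landmarks: int, eyelid_gap: float,
                               pupil_visible: bool,
                               expected_landmarks: int = 68,
                               min_eyelid_gap: float = 2.0) -> bool:
    """Check the face-posture conditions described in the text.

    Angle ranges follow the example values in the text; expected_landmarks
    and min_eyelid_gap are illustrative defaults, not values from the source.
    """
    frontal = -15 <= pitch <= 15 and -15 <= yaw <= 15 and -10 <= roll <= 10
    unoccluded = n_landmarks >= expected_landmarks   # missing points => occluded
    eyes_open = pupil_visible and eyelid_gap >= min_eyelid_gap
    return frontal and unoccluded and eyes_open

print(pose_meets_first_condition(5, -3, 2, 68, 4.0, True))   # True
print(pose_meets_first_condition(20, 0, 0, 68, 4.0, True))   # False (pitch out of range)
```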
S102: Determine a first state of the electronic device when collecting the preview images; the first state represents the stability of the electronic device.
When collecting preview images, the electronic device acquires its motion state data and determines, based on the motion state data, the first state of the electronic device when collecting the preview images.
Here, the motion state data of the electronic device is collected by built-in sensors, which include at least one of an angular velocity sensor and an acceleration sensor.
In practical applications, the electronic device may pre-store the motion state data corresponding to the case where the stability of the electronic device satisfies the second set condition. When the motion state data collected while collecting the preview images does not match the pre-stored motion state data, it is determined that the stability of the electronic device represented by the first state does not satisfy the second set condition; when it matches, it is determined that the stability represented by the first state satisfies the second set condition. The second set condition represents the stability at which the electronic device can collect clear images.
In practical applications, the electronic device may also determine the first state when collecting the preview images through a trained model, which is used to determine the corresponding first state based on the motion state data of the electronic device.
When the first state represents that the stability of the electronic device does not satisfy the second set condition, preview images continue to be collected.
S103: Output a first photo based on the at least one frame of preview image when the face posture corresponding to the at least one frame of preview image satisfies the first set condition and the stability of the electronic device represented by the determined first state satisfies the second set condition; the first photo is used by a server for identity verification.
When the face posture corresponding to the at least one frame of preview image satisfies the first set condition and the stability represented by the determined first state satisfies the second set condition, it indicates that the electronic device can collect a clear and complete frontal face image. At this point, at least one frame of preview image is collected, and the first photo is output based on the collected preview images. The electronic device can send the first photo to the server so that the server performs identity verification based on the first photo.
It should be noted that, when the first photo is output, the electronic device may end the current procedure and prompt the user that face image collection succeeded; when no first photo is output within a set duration, it may end the current procedure and prompt the user that face image collection failed.
In an embodiment, outputting the first photo based on the at least one frame of preview image includes:
selecting a first preview image from the at least one frame of preview image, and outputting the first preview image as the first photo; where
the first preview image satisfies at least one of the following:
the occupancy ratio of the face image is within a set range, where the occupancy ratio represents the ratio between the area of the face image within the set face frame and the area of the set face frame; here, the set range represents a range of ratios;
the distance between a first region and a second region in the face image is smaller than a set threshold, where the first region represents the region where the upper lip is located, and the second region represents the region where the lower lip is located.
Here, when the face posture corresponding to the at least one frame of preview image satisfies the first set condition and the stability of the electronic device represented by the determined first state satisfies the second set condition, the electronic device collects at least one frame of preview image, selects the first preview image from the collected preview images, and outputs the first preview image as the first photo. When the occupancy ratio of the face image in the first preview image is within the set ratio range, a clear frontal face image can be obtained even if the first photo is output based on the image within the set face frame of the first preview image. The first preview image also satisfies the condition that the distance between the first region and the second region in the face image is smaller than the set threshold, which avoids identity verification failure of a legitimate user caused by the user opening his or her mouth.
The electronic device can determine a first area of the set face frame displayed in the preview interface and a second area of the face image displayed within the set face frame, compute the occupancy ratio of the face image in the preview image based on the first area and the second area, and determine whether the occupancy ratio is within the set range. In practical applications, the set face frame can be configured based on the shape of a human head. The set range is configured based on the occupancy ratio of the face image when a frontal face image is displayed within the set face frame. The occupancy ratio of the face image represents the distance between the face and the display screen of the electronic device. When the occupancy ratio of the face image in the first preview image is not within the set range, the electronic device can output prompt information to prompt the user to adjust the distance between the face and the display screen.
The electronic device can determine the first region and the second region in the collected preview image based on face recognition technology, and determine the distance between them. When the distance between the first region and the second region is greater than or equal to the set threshold, the user can be prompted not to open his or her mouth.
In the solution provided by this embodiment, the electronic device determines, based on the face feature points of each frame of preview image in at least one frame of preview image collected by the electronic device, the face posture corresponding to the at least one frame of preview image; determines the first state of the electronic device when collecting the preview images; and outputs a first photo based on the at least one frame of preview image when the face posture satisfies the first set condition and the stability represented by the determined first state satisfies the second set condition. Because the face posture satisfies the first set condition, the output first photo meets the collection requirements of face recognition images, avoiding incomplete collection of face feature points caused by an incorrect face posture. Because the stability of the electronic device satisfies the second set condition during collection, a clear face image can be collected, avoiding blurred or missing face images caused by shaking of the electronic device. Since the face image in the output first photo is clear and contains all face feature points used for face recognition, the server can improve the success rate of identity verification of legitimate users when performing identity verification based on the first photo.
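The frame-selection conditions above (occupancy ratio within a set range, lip regions close together) can be sketched as follows. The source only says "set range" and "set threshold", so the numeric defaults here are illustrative assumptions, as is the function name.

```python
def frame_selectable(face_area: float, frame_area: float,
                     lip_gap: float,
                     ratio_range: tuple = (0.4, 0.8),
                     max_lip_gap: float = 5.0) -> bool:
    """Decide whether a preview frame can be output as the first photo.

    The occupancy ratio is the face area inside the set face frame divided
    by the frame area.  ratio_range and max_lip_gap are illustrative values,
    not taken from the source.
    """
    ratio = face_area / frame_area
    mouth_closed = lip_gap < max_lip_gap   # upper/lower lip regions close together
    return ratio_range[0] <= ratio <= ratio_range[1] and mouth_closed

print(frame_selectable(60.0, 100.0, 2.0))  # True
print(frame_selectable(10.0, 100.0, 2.0))  # False: face too far from the screen
```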
As another embodiment of this application, determining the first state of the electronic device when collecting the preview images includes:
determining the first state of the electronic device when collecting the at least one frame of preview image, in the case that the face posture corresponding to the at least one frame of preview image satisfies the first set condition.
Here, when executing S101, if the electronic device determines that the face posture corresponding to the at least one frame of preview image does not satisfy the first set condition, it returns to S101 or ends the current procedure.
Determining the first state only when the face posture corresponding to the at least one frame of preview image satisfies the first set condition can save power consumption of the electronic device. For the implementation of determining the first state when collecting the at least one frame of preview image, refer to the related description of S102, which is not repeated here.
In this embodiment, when the face posture corresponding to the at least one frame of preview image does not satisfy the first set condition, the condition for outputting the first photo is not met; in this case, the step of determining the first state of the electronic device when collecting the at least one frame of preview image is not executed, which saves the resources consumed in determining the first state, improves the data processing speed, and reduces power consumption.
As another embodiment of this application, FIG. 2 shows a schematic flowchart of determining the first state in a method for collecting a face image provided by an embodiment of this application. Referring to FIG. 2, determining the first state of the electronic device when collecting the at least one frame of preview image includes:
S201: Convert first data into second data conforming to the quaternion format; the first data represents the motion state data of the electronic device collected when the electronic device collects the at least one frame of preview image.
Here, a quaternion is a kind of hypercomplex number, expressed as q = (x, y, z, w) = ix + jy + kz + w, where i, j, and k are imaginary units.
Because i, j, and k are similar to three-dimensional rotations, a quaternion can be expressed as the combination of a vector and a real number: q = (v, w), where v = (x, y, z) is a vector and w is a real number. A vector can be regarded as a quaternion whose real part is 0, and a real number can be regarded as a quaternion whose imaginary part is 0. Then the quaternion q = ((x, y, z)·sin(θ/2), cos(θ/2)) can represent the rotation operation of a point in space by an angle θ around the unit vector (x, y, z) as the axis.
In practical applications, since the electronic device can be abstracted as a point in three-dimensional space and the first data can be decomposed into data along the X, Y, and Z axes, the quaternion q = ((x, y, z)·sin(θ/2), cos(θ/2)) can also represent the rotation operation of the electronic device by an angle θ around the unit vector (x, y, z) as the axis. The electronic device converts the first data into second data conforming to the quaternion format, where the second data is q = ((x, y, z), w), with x = u_x·sin(θ/2), y = u_y·sin(θ/2), z = u_z·sin(θ/2), and w = cos(θ/2); here (u_x, u_y, u_z) is the unit rotation axis and θ the rotation angle determined from the first data.
In practical applications, the first data may include at least one of the following:
angular velocity data collected by the angular velocity sensor;
acceleration data collected by the acceleration sensor.
In an embodiment, in order to reduce data interference and improve data accuracy, thereby improving the accuracy of the determined first state, when converting the first data into second data conforming to the quaternion format, the method further includes:
deleting the first data corresponding to a first period, where the starting moment of the first period corresponds to the starting moment at which the electronic device collects the at least one frame of preview image.
Here, when acquiring the first data, the electronic device deletes the first data corresponding to the first period and then converts the remaining first data into second data conforming to the quaternion format.
In practical applications, the electronic device may delete the first data acquired within 100 milliseconds of entering the preview mode. The electronic device may also delete the first data collected during the period of collecting a set number of preview images; for example, the electronic device deletes the first data collected during the period corresponding to the first 10 preview frames.
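The quaternion construction above can be sketched directly from the axis-angle form q = ((x, y, z)·sin(θ/2), cos(θ/2)). This is a generic illustration: the source's exact component formulas sit behind image placeholders, and the assumption here is that the axis is already a unit vector.

```python
import math

def axis_angle_to_quaternion(axis, theta):
    """Convert a rotation of theta radians about a unit axis (x, y, z)
    into the quaternion ((x, y, z)*sin(theta/2), cos(theta/2))."""
    s = math.sin(theta / 2.0)
    x, y, z = axis
    return (x * s, y * s, z * s, math.cos(theta / 2.0))

# A half-turn (pi radians) about the Z axis -> (0, 0, 1, 0) up to rounding.
q = axis_angle_to_quaternion((0.0, 0.0, 1.0), math.pi)
print([round(c, 6) for c in q])  # [0.0, 0.0, 1.0, 0.0]
```

Representing the motion as a quaternion in this way avoids the gimbal lock that Euler-angle representations suffer from, which is the stated reason for the conversion.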
S202:基于t-1时刻的运动状态数据的理论估计值,预测t时刻的运动状态数据的先验估计值;其中,t为正整数;所述最佳估计值以及所述先验估计值均为符合四元数格式的数据。
由于存在系统噪声和测量噪声等原因,传感器记录的数据和实际数据存在偏差,因此,本实施例中,为了提高数据的准确度,对第二数据进行处理。具体地,基于t时刻的观测值(或称测量值)以及t-1时刻理论估计值(或称最佳估计值),对t时刻对应真实值进行估计,从而得到对应的最佳估计值。
这里,电子设备可以基于t-1时刻的运动状态数据的理论估计值,结合系统噪声和测量噪声等因素,预测出t时刻的运动状态数据的先验估计值。
其中,t-1时刻的运动状态数据的理论估计值也称最佳估计值。需要说明的是,初始时刻的运动状态数据的理论估计值为初始时刻的第二数据。运动状态数据包括角速度数据和加速度数据中的至少一项。角速度数据可以采用电子设备中内置的角速度传感器(例如,陀螺仪)采集;加速度数据可以采用电子设备中内置的加速度传感器采集。
需要说明的是,t-1时刻的运动状态数据和t时刻对应的第二数据,为相同类型的运动状态数据。例如,当t-1时刻的运动状态数据只包括角速度数据时,t时刻对应的第二数据为角速度数据对应的第二数据。当t-1时刻的运动状态数据只包括角速度数据和加速度数据时,t时刻对应的第二数据包括角速度数据对应的第二数据和加速度数据对应的第二数据。
S203: Determine third data for time t based on the second data at time t and the a priori estimate at time t.
Here, the second data at time t serves as the observation at time t. The electronic device corrects the a priori estimate at time t with the second data at time t, obtaining the third data at time t, which corresponds to the best estimate at time t.
In this embodiment, the electronic device estimates the best estimate at time t from the observation recorded by the sensor at time t and the sensor's best estimate at time t-1. In practical applications, the measurements recorded by the sensor include at least one of angular velocity data and acceleration data.
In some embodiments, the second data may also be processed with a Kalman filter algorithm to obtain the corresponding third data.
The Kalman filter is a recursive predict-correct method with two steps:
Predict: estimate the a priori estimate at time t from the best estimate at time t-1;
Update: correct the a priori estimate at time t with the observation at time t, obtaining the a posteriori estimate at time t, also called the best estimate.
Here, the prediction stage uses the time-update equations, and the update stage is realized by the state-update equations. In S202, the a priori estimate of the motion state data at time t is predicted from the time-update equations and the theoretical estimate at time t-1; in S203, the third data at time t (i.e. the best estimate) is determined from the state-update equations, the second data at time t and the a priori estimate at time t. The time-update equations are:
x̂_t = F_t x̂_{t-1} + B_t u_t    (1)
P_t = F_t P_{t-1} F_t^T + Q_t    (2)
Here, x̂_{t-1} denotes the best estimate at time t-1; x̂_t denotes the a priori estimate at time t, i.e. the result at time t predicted from the best estimate at time t-1; F_t denotes the prediction matrix (state-transition matrix) of the prediction process, and F_t^T its transpose; B_t is the control matrix, and u_t is the control vector, representing known potential influencing factors — for example, the electronic device being in a vibration state is a known influence on the device's steadiness. P_t denotes the covariance matrix of x̂_t, and P_{t-1} the covariance matrix of x̂_{t-1}; Q_t denotes the process noise covariance, i.e. the covariance of the system process, and is used to express the error between F_t and the actual process. The noise in Q_t represents unknown external factors affecting the steadiness of the electronic device, such as a sudden impact or external wind.
The state-update equations are:
x̂'_t = x̂_t + K'(z̄_t − H_t x̂_t)    (3)
P'_t = P_t − K' H_t P_t    (4)
K' = P_t H_t^T (H_t P_t H_t^T + R_t)^{-1}    (5)
Here, x̂'_t denotes the best estimate at time t; K' denotes the Kalman filter gain, also called the Kalman filter coefficient; z̄_t denotes the distribution mean of the sensor's motion state data; H_t denotes the sensor's observation matrix; P'_t denotes the a posteriori estimate covariance matrix at time t, i.e. the covariance matrix of x̂'_t, expressing the uncertainty of the state; H_t^T denotes the transpose of H_t; R_t denotes the measurement noise covariance matrix, i.e. the sensor noise.
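A minimal scalar sketch of one predict/update cycle mirroring equations (1)-(5); the variable names and the default noise values are illustrative assumptions, not values from the patent:

```python
def kalman_step(x_prev, p_prev, z, f=1.0, b=0.0, u=0.0, q=1e-3, r=1e-1, h=1.0):
    """One predict/update cycle of a one-dimensional Kalman filter.
    Predict (time update):    x_pri = f*x_prev + b*u         # eq. (1)
                              p_pri = f*p_prev*f + q         # eq. (2)
    Update (state update):    k = p_pri*h / (h*p_pri*h + r)  # eq. (5)
                              x_post = x_pri + k*(z - h*x_pri)  # eq. (3)
                              p_post = p_pri - k*h*p_pri        # eq. (4)
    Returns the a posteriori (best) estimate and its variance."""
    x_pri = f * x_prev + b * u
    p_pri = f * p_prev * f + q
    k = p_pri * h / (h * p_pri * h + r)
    x_post = x_pri + k * (z - h * x_pri)
    p_post = p_pri - k * h * p_pri
    return x_post, p_post
```

The posterior estimate always lies between the prediction and the observation, weighted by the gain k, and the posterior variance shrinks relative to the prior.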
In practical applications, the steadiness x of the electronic device is related to its acceleration and angular velocity:
x = (g, a)^T
where g denotes angular velocity and a denotes acceleration. The best estimate is expressed as
x̂_t = (ĝ_t, â_t)^T
with covariance matrix
P = [ Σ_gg  Σ_ga ; Σ_ag  Σ_aa ]
where Σ_gg denotes the correlation of g with g, Σ_ga the correlation of g with a, Σ_ag the correlation of a with g, and Σ_aa the correlation of a with a. Both g and a are randomly distributed and follow Gaussian distributions.
The derivation of equations (3) to (5) is given in detail below.
Both the sensor's estimate and its observation follow Gaussian distributions, giving the following expressions:
(μ_0, Σ_0) = (H_t x̂_t, H_t P_t H_t^T)    (6)
(μ_1, Σ_1) = (z̄_t, R_t)    (7)
Here, equation (6) corresponds to the sensor's estimate (the prediction mapped into observation space) and equation (7) to the sensor's observation. H_t x̂_t denotes the estimate expressed in observation space, H_t the observation matrix and H_t^T its transpose; μ_0 denotes the mean of the Gaussian distribution of the estimate, and Σ_0 its covariance. z̄_t denotes the distribution mean of the sensor's motion state data, and R_t the sensor noise; μ_1 denotes the mean of the Gaussian distribution of the observation, and Σ_1 its covariance.
Because an estimate obtained from the prediction and the estimate may or may not be accurate, the matrix corresponding to the prediction is multiplied by the matrix corresponding to the estimate, yielding a new Gaussian distribution from which the best estimate is determined. The new Gaussian distribution characterizes the overlap region of the prediction and the estimate — the region in which the best estimate lies. The new Gaussian distribution is derived from the one-dimensional Gaussian curve equation as follows.
A one-dimensional Gaussian with mean μ and variance σ² is:
N(x, μ, σ) = (1 / (σ√(2π))) · e^{−(x−μ)² / (2σ²)}    (8)
Multiplying two Gaussian curves gives:
N(x, μ_0, σ_0) × N(x, μ_1, σ_1) = N(x, μ', σ')    (9)
where N(x, μ_0, σ_0) denotes the first Gaussian curve, N(x, μ_1, σ_1) the second, and N(x, μ', σ') the new Gaussian curve. Expanding equation (9) with equation (8) gives:
μ' = μ_0 + σ_0²(μ_1 − μ_0) / (σ_0² + σ_1²)    (10)
σ'² = σ_0² − σ_0⁴ / (σ_0² + σ_1²)    (11)
Letting
k = σ_0² / (σ_0² + σ_1²)
we obtain:
σ'² = σ_0² − kσ_0²    (12)
μ' = μ_0 + k(μ_1 − μ_0)    (13)
Extending k and equations (12) and (13) from one dimension to multiple dimensions, the new Gaussian distribution is described by:
K = Σ_0(Σ_0 + Σ_1)^{-1}    (14)
μ⃗' = μ⃗_0 + K(μ⃗_1 − μ⃗_0)    (15)
Σ' = Σ_0 − KΣ_0    (16)
Substituting equations (6) and (7) into equations (14) to (16) gives:
K = H_t P_t H_t^T (H_t P_t H_t^T + R_t)^{-1}    (17)
H_t x̂'_t = H_t x̂_t + K(z̄_t − H_t x̂_t)    (18)
H_t P'_t H_t^T = H_t P_t H_t^T − K H_t P_t H_t^T    (19)
where K is the Kalman gain, also called the Kalman coefficient. Since the expression for K still contains H_t, cancelling H_t (with K = H_t K') simplifies the expressions above into equations (3) to (5).
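Equations (12) and (13) can be checked numerically with a short sketch; the function name is illustrative:

```python
def fuse(mu0, var0, mu1, var1):
    """Fuse two scalar Gaussians via equations (12)-(13):
    k = var0 / (var0 + var1), mu' = mu0 + k*(mu1 - mu0),
    var' = var0 - k*var0."""
    k = var0 / (var0 + var1)
    return mu0 + k * (mu1 - mu0), var0 - k * var0
```

Fusing N(0, 1) with N(2, 1) gives the midpoint mean 1.0 and a variance of 0.5, smaller than either input — the overlap region the text describes.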
In one embodiment, when determining the third data at time t based on the second data at time t and the a priori estimate at time t, the method further includes:
replacing a first coefficient based on at least one confidence; each confidence of the at least one confidence corresponds to one type of motion state data, and the first coefficient represents a computational parameter of the Kalman filter algorithm.
Here, the Kalman gain K' changes as the sensor noise R_t changes, and processing the second data with the Kalman filter algorithm yields the corresponding best estimate. In practical applications, however, considering factors such as the computing power of the electronic device and algorithm efficiency, a result within an allowed error range suffices. Therefore, in this embodiment, a fixed weight is determined based on at least one confidence and used in place of the Kalman gain K', yielding an estimate within the error range.
The at least one confidence includes at least one of:
a confidence of the acceleration data;
a confidence of the angular velocity data.
In one embodiment, the electronic device replaces the first coefficient based on at least one confidence, thereby simplifying equations (2) to (5) into equation (20). Replacing the first coefficient based on at least one confidence includes:
replacing a second coefficient in the first coefficient with zero; the second coefficient represents the covariance matrix of the a priori estimate at time t;
replacing a filter coefficient in the first coefficient based on at least one confidence.
Here, the second coefficient in the first coefficient corresponds to P_t in equations (2), (4) and (5); replacing the covariance matrix of the a priori estimate at time t with zero — that is, setting P_t in equations (2), (4) and (5) to zero — makes equations (2), (4) and (5) all vanish.
The filter coefficient in the first coefficient is the Kalman filter coefficient, also called the Kalman gain K'. Based on at least one confidence, the electronic device replaces the Kalman gain K' in equation (3) of the state-update equations, obtaining the new equation for x̂'_t:
x̂'_t = x̂_t + ω'(z̄_t − H_t x̂_t)    (20)
x̂_t = F_t x̂_{t-1} + B_t u_t    (21)
where ω' denotes the confidence-based weight. Equation (21) is identical to equation (1) above, and equations (20) and (21) contain neither P_t nor the Kalman gain K'. Given the sensor's observation at time t and the best estimate at time t-1, the electronic device substitutes them into equations (20) and (21) to compute the best estimate at time t.
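The simplified update of equation (20) amounts to a fixed-weight blend of prediction and observation; a minimal sketch, assuming identity dynamics (F = I, B = 0) and scalar data for illustration:

```python
def fixed_weight_update(x_prev_best, z, w):
    """Simplified state update of equation (20): the Kalman gain is
    replaced by a fixed confidence-derived weight w in [0, 1], so the
    new best estimate is a constant-ratio blend of the prediction
    (here simply the previous best estimate, per eq. (21) with F = I,
    B = 0) and the observation z."""
    x_pred = x_prev_best
    return x_pred + w * (z - x_pred)
```

With w = 0 the observation is ignored; with w = 1 the estimate snaps to the observation; intermediate weights trade responsiveness against noise rejection, which is the design point of this simplification.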
In one embodiment, when replacing the filter coefficient contained in the first coefficient based on at least one confidence, the method further includes:
when the first data includes acceleration data and angular velocity data, determining a new filter coefficient based on the confidence of the acceleration data and the confidence of the angular velocity data; the new filter coefficient is used to replace the filter coefficient in the first coefficient.
The new filter coefficient is determined from ω_a and ω_g, where ω_a denotes the confidence of the acceleration data from the acceleration sensor and ω_g denotes the confidence of the angular velocity data from the angular velocity sensor.
S204: Input the third data at time t into a set model to obtain the first state of the electronic device when capturing the preview image at time t; the set model is used to determine the corresponding first state from the input data.
Here, the set model is obtained by training on at least one data sample with a machine learning algorithm, each sample being provided with a corresponding first state. When training the set model, first sample data is converted into second sample data in quaternion format; the second sample data is processed through the flow of S202-S203 to obtain third sample data, and the set model is trained on the third sample data to obtain the trained set model. For the processing of the first sample data and the second sample data, refer to the descriptions of S201-S203, which are not repeated here.
The electronic device inputs the third data at time t into the trained set model and obtains, as the model output, the first state of the electronic device when capturing the preview image at time t. The set model determines the corresponding first state by analysing how the input data varies.
When the set model finds that the variation amplitude of the input data exceeds the maximum threshold of a set range, the output first state indicates that the electronic device is shaking violently and the device's steadiness does not satisfy the second set condition. Here, the set range characterizes the fluctuation range of the data.
When the set model finds that the variation amplitude of the input data lies within the set range, the output first state indicates that the device's steadiness satisfies the second set condition.
To improve the accuracy of the determined first state, when the set model finds that the variation amplitude of the input data is below the minimum threshold of the set range, the electronic device obtains its identity identifier and, based on the obtained identifier, judges whether the device is under an emulator attack; the judgment result determines whether the output first state indicates that the device's steadiness satisfies the second set condition. If the obtained identifier matches the pre-stored identifier, the input data is trusted, the device is not under emulator attack, and the output first state indicates that the steadiness satisfies the second set condition. If no identifier is obtained, or the obtained identifier differs from the pre-stored one, the input data is untrusted, the device is under emulator attack, and the output first state indicates that the steadiness does not satisfy the second set condition.
In practical applications, when the electronic device is a mobile phone, the identity identifier may be the International Mobile Equipment Identity (IMEI).
In the solution provided by this embodiment of the application, the first data is converted into second data in quaternion format, the second data is filtered to obtain third data, and the third data is input into the set model to obtain the first state of the electronic device when capturing the preview image. The quaternion-format second data conveniently represents rotations of the electronic device about any axis through the origin and avoids gimbal lock; filtering the second data removes the noise it contains and improves data accuracy; and determining the first state through the set model improves the accuracy of the determined first state.
As another embodiment of this application, Fig. 3 shows a schematic flowchart of a method for capturing a face image provided by another embodiment. Referring to Fig. 3, on the basis of the embodiment corresponding to Fig. 1, the method provided by this embodiment further includes at least one of the following:
S104: When the determined face pose does not satisfy the first set condition, output first prompt information; the first prompt information prompts the user to adjust the face pose.
In practical applications, the electronic device may prompt the user by text, voice, changing the colour of the user interface, and so on.
In one embodiment, when the face pose includes face pose angles — pitch, yaw and roll — outputting the first prompt information when the determined face pose does not satisfy the first set condition includes at least one of:
when the pitch angle is below the minimum threshold of a first set range, reminding the user not to raise the head;
when the pitch angle exceeds the maximum threshold of the first set range, reminding the user not to lower the head;
when the yaw angle lies outside a second set range, reminding the user not to turn the face sideways;
when the roll angle lies outside a third set range, reminding the user not to tilt the head.
In practical applications, the first and second set ranges may be -15° to 15°, and the third set range may be -10° to 10°.
In one embodiment, when the face pose includes whether the face image is occluded and the electronic device determines that the face image is occluded, the output first prompt information may be "please do not cover your face".
In one embodiment, when the face pose includes whether the user's eyes are closed and the electronic device determines that the user's eyes are closed, the output first prompt information may be "please do not close your eyes".
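The angle checks above can be sketched as a small mapping from pose angles to prompts; the range defaults follow the example values in the text, while the function name and prompt strings are illustrative:

```python
def pose_prompts(pitch, yaw, roll,
                 pitch_range=(-15, 15), yaw_range=(-15, 15), roll_range=(-10, 10)):
    """Map face pose angles (degrees) to user prompts.
    Empty list means the pose angles satisfy the first set condition."""
    prompts = []
    if pitch < pitch_range[0]:
        prompts.append("please do not raise your head")
    elif pitch > pitch_range[1]:
        prompts.append("please do not lower your head")
    if not (yaw_range[0] <= yaw <= yaw_range[1]):
        prompts.append("please do not turn your face sideways")
    if not (roll_range[0] <= roll <= roll_range[1]):
        prompts.append("please do not tilt your head")
    return prompts
```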
S105: When the determined first state indicates that the current steadiness of the electronic device does not satisfy the second set condition, output second prompt information; the second prompt information prompts the user to hold the electronic device steady.
Here, when the determined first state indicates that the current steadiness does not satisfy the second set condition, the electronic device outputs the second prompt information to prompt the user to hold the device steady, so that a clear image can be captured.
In the solution provided by this embodiment, the electronic device outputs the corresponding prompt information to have the user adjust the face pose or hold the device steady, which improves image-capture efficiency.
Fig. 4 shows a schematic flowchart of a method for capturing a face image provided by an application embodiment of this application. Referring to Fig. 4, the method provided by this embodiment includes:
S401: Determine the face pose angles corresponding to at least one preview image frame, based on the facial feature points of each preview frame captured by the electronic device.
S402: Judge whether the determined face pose angles lie within the set angle range.
If the determined face pose angles do not lie within the set angle range, execute S403. If they do — indicating that the preview image is a frontal face image — execute S404.
S403: Output third prompt information; the third prompt information prompts the user to adjust the face pose angles.
S404: Judge whether the face image in the at least one preview frame is occluded.
If the face image is occluded, execute S405; if it is not occluded, execute S406.
S405: Output fourth prompt information; the fourth prompt information prompts the user not to cover the face.
S406: Judge whether the user's eyes are closed in the at least one preview frame.
If the user's eyes are closed, execute S407; if they are not closed, execute S408.
S407: Output fifth prompt information; the fifth prompt information prompts the user not to close the eyes.
S408: Judge whether the frame-occupancy ratio of the face image in the at least one preview frame lies within the set ratio range.
If the ratio does not lie within the set range, execute S409; if it does, execute S410.
S409: Output sixth prompt information; the sixth prompt information prompts the user to adjust the distance between the face and the display screen of the electronic device.
S410: Determine the first state of the electronic device when capturing the at least one preview frame.
S411: When the steadiness of the electronic device indicated by the determined first state satisfies the second set condition, output a first photo based on the at least one preview frame; the first photo is used by a server for identity verification.
S412: When the steadiness of the electronic device indicated by the determined first state does not satisfy the second set condition, output seventh prompt information; the seventh prompt information prompts the user to hold the electronic device steady.
In the solution of this embodiment, the face pose angles of the preview image lie within the set range, so the output first photo is a clear frontal face image in which the user's eyes are open and the face is not occluded; when the server performs identity verification based on the first photo, the verification success rate for legitimate users is therefore improved.
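The decision chain S401-S412 can be sketched as a sequence of gated checks; the boolean inputs stand in for the real detectors, and all names and strings are illustrative:

```python
def capture_decision(pose_ok, occluded, eyes_closed, ratio_ok, device_steady):
    """Walk the S401-S412 checks in order and return the prompt to show,
    or "output_photo" once every condition passes."""
    if not pose_ok:
        return "adjust face pose"           # S403
    if occluded:
        return "do not cover your face"     # S405
    if eyes_closed:
        return "do not close your eyes"     # S407
    if not ratio_ok:
        return "adjust distance to screen"  # S409
    if not device_steady:
        return "hold the device steady"     # S412
    return "output_photo"                   # S411
```

Ordering the checks this way means the user is always shown the earliest failing condition, matching the flowchart's branch order.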
To implement the method of the embodiments of this application, an embodiment of this application further provides an apparatus for capturing a face image, arranged on an electronic device. As shown in Fig. 5, the apparatus includes:
a first determining unit 51, configured to determine the face pose corresponding to at least one preview image frame, based on the facial feature points of each preview frame captured by the electronic device;
a second determining unit 52, configured to determine the first state of the electronic device when capturing the preview image; the first state characterizes the steadiness of the electronic device;
an output unit 53, configured to output a first photo based on the at least one preview frame when the face pose corresponding to the at least one preview frame satisfies the first set condition and the steadiness indicated by the determined first state satisfies the second set condition; the first photo is used by a server for identity verification.
In one embodiment, the second determining unit 52 is configured to: determine the first state of the electronic device when capturing the at least one preview frame, when the face pose corresponding to the at least one preview frame satisfies the first set condition.
In one embodiment, the second determining unit 52 is configured to:
convert first data into second data in quaternion format; the first data represents motion state data of the electronic device collected while the electronic device captures the at least one preview frame;
predict an a priori estimate of the motion state data at time t based on the theoretical estimate (best estimate) of the motion state data at time t-1, t being a positive integer; both the best estimate and the a priori estimate are data in quaternion format;
determine third data for time t based on the second data at time t and the a priori estimate at time t; input the third data at time t into a set model to obtain the first state of the electronic device when capturing the preview image at time t, wherein
the set model determines the corresponding first state from the input data.
In one embodiment, the apparatus for capturing a face image further includes:
a deleting unit, configured to delete first data corresponding to a first period; the start time of the first period corresponds to the moment the electronic device begins capturing the at least one preview frame.
In one embodiment, the apparatus further includes:
a replacing unit, configured to replace a first coefficient based on at least one confidence; each confidence of the at least one confidence corresponds to one type of motion state data, and the first coefficient represents a computational parameter of the Kalman filter algorithm.
In one embodiment, the replacing unit is configured to:
replace a second coefficient in the first coefficient with zero; the second coefficient represents the covariance matrix of the a priori estimate at time t;
replace a filter coefficient in the first coefficient based on at least one confidence.
In one embodiment, the replacing unit is further configured to:
when the first data includes acceleration data and angular velocity data, determine a new filter coefficient based on the confidence of the acceleration data and the confidence of the angular velocity data; the new filter coefficient is used to replace the filter coefficient in the first coefficient.
In one embodiment, the face pose includes at least one of:
a face pose angle;
whether the face image is occluded;
whether the user's eyes are closed; wherein
when the face pose includes the face pose angle, the first set condition includes the face pose angle lying within a set angle range;
when the face pose includes whether the face image is occluded, the first set condition includes the face image being unoccluded;
when the face pose includes whether the user's eyes are closed, the first set condition includes the user's eyes being open.
In one embodiment, the output unit 53 is configured to:
select a first preview image from the at least one preview frame and output the first preview image as the first photo, wherein
the first preview image satisfies at least one of:
the frame-occupancy ratio of the face image lies within a set range; the ratio represents the ratio between the area of the face image within a set face frame and the area of the set face frame;
the distance between a first region and a second region of the face image is less than a set threshold; the first region represents the region of the upper lip and the second region the region of the lower lip.
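A minimal sketch of the two frame-selection criteria; the numeric defaults are illustrative assumptions only, since the patent does not fix concrete thresholds:

```python
def is_candidate_frame(face_area, face_box_area, lip_gap,
                       ratio_range=(0.4, 0.8), lip_gap_max=5.0):
    """Check the two selection criteria for the first preview image:
    (1) the face-to-frame area ratio lies within a set range, and
    (2) the distance between the upper- and lower-lip regions is below
    a set threshold (mouth closed)."""
    ratio = face_area / face_box_area
    return ratio_range[0] <= ratio <= ratio_range[1] and lip_gap < lip_gap_max
```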
In one embodiment, the apparatus further includes a prompting unit configured to perform at least one of:
outputting first prompt information when the determined face pose does not satisfy the first set condition;
outputting second prompt information when the determined first state indicates that the current steadiness of the electronic device does not satisfy the second set condition; wherein
the first prompt information prompts the user to adjust the face pose;
the second prompt information prompts the user to hold the electronic device steady.
In practical applications, each unit of the apparatus may be implemented by a processor in the apparatus; the processor runs a program stored in a memory to realize the functions of the program modules above.
It should be noted that when the apparatus provided by the embodiments above captures a face image, the division into the program modules above is only an example; in practice, the processing may be assigned to different program modules as needed, i.e. the internal structure of the apparatus may be divided into different program modules to complete all or part of the processing described above. In addition, the apparatus provided by the embodiments above belongs to the same concept as the method embodiments; for its specific implementation, see the method embodiments, which are not repeated here.
Based on the hardware implementation of the program modules above, and to implement the method of the embodiments of this application, an embodiment of this application further provides an electronic device. Fig. 6 is a schematic diagram of the hardware composition of the electronic device; as shown in Fig. 6, the electronic device includes:
a communication interface 1, capable of exchanging information with other devices such as a server;
a processor 2, connected to the communication interface 1 to exchange information with other devices, configured to execute the method for capturing a face image provided by one or more of the technical solutions above when running a computer program; the computer program is stored in a memory 3.
In practical applications, the components of the electronic device are coupled together by a bus system 4, which is configured to enable connection and communication between these components. In addition to a data bus, the bus system 4 includes a power bus, a control bus and a status signal bus; for clarity, the various buses are all labelled as bus system 4 in Fig. 6.
The memory 3 in this embodiment is configured to store various types of data to support the operation of the electronic device; examples of such data include any computer program configured to operate on the electronic device.
The memory 3 may be volatile memory, non-volatile memory, or both. Non-volatile memory may be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), ferromagnetic random access memory (FRAM), flash memory, magnetic surface memory, an optical disc, or compact disc read-only memory (CD-ROM); magnetic surface memory may be disk storage or tape storage. Volatile memory may be random access memory (RAM), which serves as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as static random access memory (SRAM), synchronous static random access memory (SSRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDRSDRAM), enhanced synchronous dynamic random access memory (ESDRAM), sync link dynamic random access memory (SLDRAM) and direct Rambus random access memory (DRRAM). The memory 3 described in the embodiments of this application is intended to include, without being limited to, these and any other suitable types of memory.
The methods disclosed in the embodiments of this application above may be applied in, or implemented by, the processor 2. The processor 2 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the methods above may be completed by integrated logic circuits of hardware in the processor 2, or by instructions in software form. The processor 2 may be a general-purpose processor, a DSP, another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. The processor 2 may implement or execute the methods, steps and logic block diagrams disclosed in the embodiments of this application; a general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments may be embodied directly as being completed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. A software module may reside in a storage medium located in the memory 3; the processor 2 reads the program in the memory 3 and completes the steps of the foregoing methods in combination with its hardware.
When executing the program, the processor 2 implements the corresponding flows of the methods of the embodiments of this application, which, for brevity, are not repeated here.
In an exemplary embodiment, an embodiment of this application further provides a storage medium — a computer storage medium, specifically a computer-readable storage medium — for example, the memory 3 storing a computer program; the computer program is executable by the processor 2 to complete the steps of the embodiments corresponding to Figs. 1 to 4 above. The computer-readable storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, an optical disc, CD-ROM, or similar memory.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and other divisions are possible in actual implementation, e.g. multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical or of another form.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units — they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may all be integrated into one processing module, or each unit may stand alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in hardware, or as hardware plus software functional units.
Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments above may be completed by hardware related to program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The storage medium includes removable storage devices, read-only memory (ROM), random access memory (RAM), magnetic disks, optical discs and other media that can store program code.
It should be noted that "first", "second" and the like are used to distinguish similar objects and need not describe a particular order or sequence.
It should be noted that the technical solutions described in the embodiments of this application may be combined arbitrarily provided they do not conflict.
It should be noted that the term "and/or" in the embodiments of this application merely describes an association between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, the term "at least one of" herein denotes any one of multiple items or any combination of at least two of them; for example, "including at least one of A, B and C" may denote including any one or more elements selected from the set formed by A, B and C.
The above are only specific implementations of the embodiments of this application, but the protection scope of the embodiments of this application is not limited thereto; any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed by the embodiments of this application shall be covered by their protection scope. The protection scope of the embodiments of this application shall therefore be subject to the protection scope of the claims.

Claims (13)

  1. A method for capturing a face image, applied to an electronic device, the method comprising:
    determining, based on facial feature points of each of at least one preview image frame captured by the electronic device, a face pose corresponding to the at least one preview image frame;
    determining a first state of the electronic device when capturing the preview image, the first state characterizing a steadiness of the electronic device;
    when the face pose corresponding to the at least one preview image frame satisfies a first set condition and the steadiness of the electronic device characterized by the determined first state satisfies a second set condition, outputting a first photo based on the at least one preview image frame, the first photo being used by a server for identity verification.
  2. The method according to claim 1, wherein the determining the first state of the electronic device when capturing the preview image comprises:
    when the face pose corresponding to the at least one preview image frame satisfies the first set condition, determining the first state of the electronic device when capturing the at least one preview image frame.
  3. The method according to claim 2, wherein the determining the first state of the electronic device when capturing the at least one preview image frame comprises:
    converting first data into second data in quaternion format, the first data representing motion state data of the electronic device collected while the electronic device captures the at least one preview image frame;
    predicting an a priori estimate of the motion state data at time t based on a theoretical estimate (best estimate) of the motion state data at time t-1, t being a positive integer, the best estimate and the a priori estimate both being data in quaternion format;
    determining third data for time t based on the second data at time t and the a priori estimate at time t;
    inputting the third data at time t into a set model to obtain the first state of the electronic device when capturing the preview image at time t, wherein
    the set model determines the corresponding first state from the input data.
  4. The method according to claim 3, wherein, when converting the first data into the second data in quaternion format, the method further comprises:
    deleting first data corresponding to a first period, a start time of the first period corresponding to a start time at which the electronic device captures the at least one preview image frame.
  5. The method according to claim 3, wherein, when determining the third data for time t based on the second data at time t and the a priori estimate at time t, the method further comprises:
    replacing a first coefficient based on at least one confidence, each confidence of the at least one confidence corresponding to one type of motion state data, the first coefficient representing a computational parameter of a Kalman filter algorithm.
  6. The method according to claim 5, wherein the replacing the first coefficient based on at least one confidence comprises:
    replacing a second coefficient in the first coefficient with zero, the second coefficient representing a covariance matrix of the a priori estimate at time t;
    replacing a filter coefficient in the first coefficient based on the at least one confidence.
  7. The method according to claim 6, wherein, when replacing the filter coefficient in the first coefficient based on the at least one confidence, the method further comprises:
    when the first data includes acceleration data and angular velocity data, determining a new filter coefficient based on a confidence of the acceleration data and a confidence of the angular velocity data, the new filter coefficient being used to replace the filter coefficient in the first coefficient.
  8. The method according to claim 1, wherein the face pose comprises at least one of:
    a face pose angle;
    whether the face image is occluded;
    whether the user's eyes are closed; wherein,
    when the face pose comprises the face pose angle, the first set condition comprises the face pose angle lying within a set angle range;
    when the face pose comprises whether the face image is occluded, the first set condition comprises the face image being unoccluded;
    when the face pose comprises whether the user's eyes are closed, the first set condition comprises the user's eyes being open.
  9. The method according to claim 8, wherein the outputting the first photo based on the at least one preview image frame comprises:
    selecting a first preview image from the at least one preview image frame and outputting the first preview image as the first photo, wherein
    the first preview image satisfies at least one of:
    a frame-occupancy ratio of the face image lying within a set range, the ratio representing the ratio between the area of the face image within a set face frame and the area of the set face frame;
    a distance between a first region and a second region of the face image being less than a set threshold, the first region representing the region of the upper lip and the second region representing the region of the lower lip.
  10. The method according to any one of claims 1-9, further comprising at least one of:
    when the determined face pose does not satisfy the first set condition, outputting first prompt information;
    when the determined first state indicates that the current steadiness of the electronic device does not satisfy the second set condition, outputting second prompt information; wherein
    the first prompt information prompts the user to adjust the face pose;
    the second prompt information prompts the user to hold the electronic device steady.
  11. An apparatus for capturing a face image, comprising:
    a first determining unit, configured to determine, based on facial feature points of each of at least one preview image frame captured by an electronic device, a face pose corresponding to the at least one preview image frame;
    a second determining unit, configured to determine a first state of the electronic device when capturing the preview image, the first state characterizing a steadiness of the electronic device;
    an output unit, configured to, when the face pose corresponding to the at least one preview image frame satisfies a first set condition and the steadiness of the electronic device characterized by the determined first state satisfies a second set condition, output a first photo based on the at least one preview image frame, the first photo being used by a server for identity verification.
  12. An electronic device, comprising a processor and a memory configured to store a computer program executable on the processor,
    wherein the processor is configured to execute the steps of the method according to any one of claims 1 to 10 when running the computer program.
  13. A storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 10.
PCT/CN2021/123356 2020-10-22 2021-10-12 WO2022083479A1 — Method and apparatus for capturing a face image, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011138813.9A CN112287792B 2020-10-22 2020-10-22 Method and apparatus for capturing a face image, and electronic device
CN202011138813.9 2020-10-22
