WO2020077822A1 - Procédé et appareil de configuration et de vérification de caractéristique d'image, dispositif informatique et support - Google Patents

Publication number
WO2020077822A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature
key points
acquiring
portrait
Application number
PCT/CN2018/122731
Other languages
English (en)
Chinese (zh)
Inventor
胡金丹
Original Assignee
深圳壹账通智能科技有限公司
Application filed by 深圳壹账通智能科技有限公司
Publication of WO2020077822A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/40 — Extraction of image or video features
    • G06V10/46 — Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 — Salient features, e.g. scale invariant feature transforms [SIFT]
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 — Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 — Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 — User authentication
    • G06F21/32 — User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 — Movements or behaviour, e.g. gesture recognition

Definitions

  • The present application belongs to the field of image recognition, and more specifically relates to an image feature configuration and verification method, apparatus, computer device, and storage medium.
  • Mobile phones are becoming increasingly common, and the security of users' information on them is receiving more and more attention.
  • Mobile phone login is usually protected by encryption, and common methods include sliders, passwords, voice, portraits, and fingerprints. Because each of these methods unlocks in only a single way, known cracking techniques exist for them and cracking them is not especially difficult, so users' information security still cannot be well guaranteed.
  • Embodiments of the present application provide an image feature configuration and verification method, apparatus, computer device, and storage medium, to solve the problem that user login methods are easily cracked.
  • An image feature configuration and verification method, including:
  • acquiring N first images, where N is a positive integer greater than or equal to 2;
  • acquiring human key points of the N first images according to a preset training model;
  • acquiring a first portrait feature according to the human key points of the N first images;
  • configuring the first portrait feature as a standard image feature;
  • acquiring a second image, and acquiring human key points of the second image according to the preset training model;
  • acquiring a second portrait feature according to the human key points of the second image;
  • matching the second portrait feature with the standard image feature, and outputting a verification-passed result if the match is successful.
  • An image feature configuration and verification device including:
  • the first image acquisition module is used to acquire N first images, where N is a positive integer greater than or equal to 2;
  • a first human key point acquisition module configured to acquire the human key points of the N first images according to a preset training model
  • a first portrait feature acquisition module configured to acquire the first portrait feature according to the human key points of the N first images
  • a standard image feature configuration module configured to configure the first portrait feature as a standard image feature
  • a second human body key point obtaining module configured to obtain a second image, and obtain human body key points of the second image according to the preset training model
  • a second portrait feature acquisition module configured to acquire a second portrait feature according to the key points of the human body of the second image
  • the portrait feature matching verification module is configured to match the second portrait feature with the standard image feature, and if the match is successful, output a result of verification.
  • a computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor.
  • when the processor executes the computer-readable instructions, the following steps are implemented:
  • acquiring N first images, where N is a positive integer greater than or equal to 2; acquiring human key points of the N first images according to a preset training model; acquiring a first portrait feature according to the human key points of the N first images; and configuring the first portrait feature as a standard image feature;
  • acquiring a second image, and acquiring human key points of the second image according to the preset training model; acquiring a second portrait feature according to the human key points of the second image; and matching the second portrait feature with the standard image feature, and outputting a verification-passed result if the match is successful.
  • One or more non-volatile readable storage media storing computer-readable instructions, which when executed by one or more processors, cause the one or more processors to perform the following steps:
  • acquiring N first images, where N is a positive integer greater than or equal to 2; acquiring human key points of the N first images according to a preset training model; acquiring a first portrait feature according to the human key points of the N first images; and configuring the first portrait feature as a standard image feature;
  • acquiring a second image, and acquiring human key points of the second image according to the preset training model; acquiring a second portrait feature according to the human key points of the second image; and matching the second portrait feature with the standard image feature, and outputting a verification-passed result if the match is successful.
  • FIG. 1 is a schematic diagram of an application environment of an image feature configuration and verification method in an embodiment of the present application
  • FIG. 2 is a flowchart of an image feature configuration and verification method in an embodiment of the present application
  • FIG. 3 is another flowchart of the image feature configuration and verification method in an embodiment of the present application.
  • FIG. 6 is another flowchart of an image feature configuration and verification method in an embodiment of the present application.
  • FIG. 7 is a schematic block diagram of an image feature configuration and verification device in an embodiment of the present application.
  • FIG. 8 is a schematic block diagram of a first portrait feature acquisition module in an image feature configuration and verification device according to an embodiment of the present application
  • FIG. 9 is another schematic block diagram of the first portrait feature acquisition module in an image feature configuration and verification device according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a computer device in an embodiment of the present application.
  • The image feature configuration and verification method provided in this application can be applied in the application environment shown in FIG. 1, in which the client communicates with the server through a network. The server acquires N first images through the client and obtains the human key points of the N first images according to a preset training model; it acquires the first portrait feature based on those key points and configures the first portrait feature as the standard image feature. The server then acquires a second image, obtains the human key points of the second image according to the preset training model, and acquires the second portrait feature from them. Finally, the server matches the second portrait feature against the standard image feature and, if the match is successful, outputs the verification result to the client.
  • the client can be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers and portable wearable devices.
  • the server can be implemented with an independent server or a server cluster composed of multiple servers.
  • an image feature configuration and verification method is provided.
  • the method is applied to the server in FIG. 1 as an example for illustration, including the following steps:
  • the first image is a user portrait collected when the user sets image features.
  • the user's portrait can be collected through the shooting tool of the client, for example, the user's portrait can be collected through the shooting function of the camera of the mobile phone.
  • The server sends a user login verification request to the client so that the user can enter a password or fingerprint for login verification; if the result of the login verification is pass, the server then sends the instruction to collect the first images to the client.
  • the number of first images is N, and N is a positive integer greater than or equal to 2.
  • the first image may be multiple static images, or multiple images acquired by recording video data.
  • step S10 may specifically include:
  • The first video data is a video recorded of the user himself, for example, a video of the user blinking.
  • the server sends an instruction to collect the first image to the client.
  • the client opens the shooting tool according to the instruction to collect the first image, records the user's video, and obtains the first video data.
  • S12 Split the first video data into frames according to a preset time to obtain N images to be processed.
  • the preset time can be specifically set according to the actual situation.
  • Specifically, the total frame count and the overall duration of the first video data may be obtained, and the preset time may then be derived from their ratio, i.e. the average time per frame.
  • The server then splits the first video data into frames according to the obtained preset time to obtain N images to be processed.
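As a rough sketch of how the N images to be processed could be selected (the function name, the rounding policy, and the use of a uniform stride are illustrative assumptions, not details from this application):

```python
def frame_indices(total_frames, total_seconds, preset_time):
    """Pick the indices of frames sampled every `preset_time` seconds.

    total_frames / total_seconds gives the frame rate; multiplying the
    rate by the sampling time gives the stride between kept frames.
    """
    fps = total_frames / total_seconds
    stride = max(1, round(fps * preset_time))
    return list(range(0, total_frames, stride))
```

For example, a 2-second clip with 60 frames sampled every 0.5 s keeps frames 0, 15, 30, and 45.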
  • S13 Create a normalized image, obtain the height and width of the normalized image, normalize the N images to be processed based on that height and width, and replace the original pixel values of the N images to be processed with the normalized values to obtain N first images.
  • Specifically, the server first creates a normalized image, for example an image of 260 × 260 pixels, and obtains its height and width. Each image to be processed is then resampled to that height and width, and its original pixel values are replaced with the normalized values; the N first images are thereby obtained.
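A minimal sketch of the normalization step S13, using nearest-neighbour resampling (the function name and the resampling method are assumptions; the application does not specify the interpolation used):

```python
def normalize_image(pixels, out_h=260, out_w=260):
    """Nearest-neighbour resample of a 2-D pixel grid to out_h x out_w.

    `pixels` is a list of rows of pixel values; the resampled grid
    replaces the original pixel values, as in the 260 x 260 example.
    """
    in_h, in_w = len(pixels), len(pixels[0])
    return [
        [pixels[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
        for r in range(out_h)
    ]
```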
  • S20 Acquire the human key points of the N first images according to the preset training model.
  • the preset training model may be a face detection model, a feature point detection model, a posture detection model, an emotion detection model, and so on.
  • the key points of the human body refer to the points in the first image that reflect human body characteristics, such as eyebrows, eyes, mouth, shoulders, elbow joints, and wrists.
  • the preset training model can be trained by inputting sample images marked with key points, and learning to obtain key points of the human body.
  • the preset training model can identify the key points of the human body in the first image, thereby acquiring the key points of the human body of the first image.
  • S30 Acquire the first portrait feature according to the key points of the human body of the N first images.
  • The first portrait feature refers to a portrait feature composed of the features of the human key points of the N first images, and serves as the credential for determining whether the person is the user.
  • the first portrait feature may be a facial feature, an expression feature, a behavior feature, or the like.
  • the facial expression feature and the behavioral action feature can be combined as the first portrait feature, that is, the user's facial expression and body movement can be used as the login credential to improve the security of user information.
  • Behavioral action features refer to custom behaviors entered when the user sets up login credentials, such as making a hand-raising action or blinking the left eye to the camera, or a combination of blinking the left eye and raising the right hand. That is, the behavior feature may be a single behavior action, or a combination of a plurality of behavior actions.
  • the first portrait feature can be obtained by extracting, calculating, or recognizing features of key points of the human body by a preset training model.
  • For example, a preset training model can be used to extract and recognize the facial expression formed by the human key points; through features such as the tilt angle of the eyebrows, the downward movement of the mouth corners, and the angle between the eyes and the face, the corresponding facial expression can be identified to obtain the corresponding expression feature.
  • the position of the key point of the human body can be tracked, and the change of the position of the key point of the human body can be used as the behavior feature.
  • For example, a coordinate system is established for the multiple first images obtained from the video data. If a human key point such as the wrist moves from position A to position B across these first images, the coordinate change of its position is recorded; from this coordinate change information, the change in the position of the wrist key point can be obtained, yielding the user-defined behavior feature of a hand-raising action.
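A hedged sketch of turning a tracked key point's coordinate change into a behavior feature (the threshold, the labels, and the two-action vocabulary are illustrative assumptions, not from this application):

```python
def movement_feature(track, min_shift=0.1):
    """Classify a tracked key point's motion from its (x, y) positions.

    `track` holds e.g. the wrist coordinates across successive first
    images. Image y grows downward, so a raising motion appears as a
    negative change in y.
    """
    (x0, y0), (x1, y1) = track[0], track[-1]
    dx, dy = x1 - x0, y1 - y0
    if -dy > min_shift:
        return "raise_hand"
    if abs(dx) > min_shift:
        return "wave"
    return "still"
```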
  • After the first portrait feature is bound to the user ID in the server database and stored as the standard image feature, the configuration of the image feature is complete, and the standard image feature serves as the user's login credential.
  • the user ID is an identifier used by the server to distinguish different users, and may be the user's mobile phone number, account number, or ID number.
  • standard image features and other forms of passwords can be combined as credentials for user login, for example, combined with digital passwords, which can further enhance the security of user information.
  • S50 Acquire the second image, and acquire key points of the human body of the second image according to the preset training model.
  • the second image refers to a portrait image obtained when the user performs login verification.
  • the number of second images is at least one.
  • the second image is acquired through the shooting tool of the client. After acquiring the second image, the second image is input into the preset training model, and the key points of the human body of the second image are acquired according to the preset training model.
  • the process of acquiring the key points of the human body of the second image is the same as the process of acquiring the key points of the human body of the first image, and will not be repeated here.
  • S60 Acquire the second portrait feature according to the key points of the human body in the second image.
  • the process of acquiring the second portrait feature according to the key points of the human body of the second image is the same as the process of acquiring the first portrait feature, which will not be repeated here.
  • the second portrait feature is of the same type as the first portrait feature, for example, all are facial features, facial expression features, or behavioral features.
  • S70 Match the second portrait feature with the standard image feature, and if the match is successful, output a verified result.
  • the server matches the second portrait feature with the standard image feature, and determines whether the acquired second portrait feature matches the standard image feature.
  • When the standard image feature corresponds to a facial feature,
  • each facial feature in the second portrait feature is compared with the corresponding facial feature in the standard image feature to determine whether they are the same, such as whether the eyebrows are raised or whether the corners of the mouth are turned down. If the facial features are the same, the match is determined to be successful; otherwise the match fails.
  • When the standard image feature corresponds to an expression feature,
  • the expression corresponding to the second portrait feature is compared with the expression corresponding to the standard image feature to determine whether the expression results are the same, for example whether both are happy, sad, or surprised. If the expression results are the same, the match is determined to be successful; otherwise the match fails.
  • The result of the behavior action in the second portrait feature is compared with the result of the behavior action in the standard image feature to determine whether they are consistent. For example, if the behavior action in the standard image feature is raising the left hand, it is judged whether the behavior action of the second portrait feature is also raising the left hand; if the behavior action results are the same, the match is determined to be successful, otherwise the match fails.
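As a sketch of this matching step (the dict encoding and field names are assumptions for illustration; the application does not specify a data format), the standard and candidate portrait features can be compared component by component:

```python
def match_portrait(standard, candidate):
    """Check a candidate portrait feature against the stored standard.

    Both features are plain dicts, e.g. {"expression": "happy",
    "action": "raise_left_hand"}; every component configured in the
    standard must agree for verification to pass.
    """
    return all(candidate.get(key) == value for key, value in standard.items())
```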
  • If the server judges that the second portrait feature matches the standard image feature, it outputs the verification-passed result and allows the user to log in; if it judges that they do not match, it outputs a verification-failed result and rejects the login. It can be understood that when other users attempt to impersonate the user, they do not know whether the standard image feature corresponds to an expression feature, a behavior action feature, or a combination of the two, let alone the specific expression and behavior features, so the credential is difficult to crack.
  • In summary, the first portrait feature is configured as the standard image feature; the second image is then acquired and its human key points are obtained according to the preset training model; the second portrait feature is acquired from the human key points of the second image; finally, the second portrait feature is matched against the standard image feature, and if the match is successful, the verification result is output.
  • When the standard image features are used as the login credentials, the user does not need to enter a password to log in, which facilitates the user's operation.
  • obtaining portrait features based on key points of the human body and configuring the acquired portrait features as standard image features can make the image feature configuration more representative and improve the accuracy of the image feature configuration.
  • Using the standard image feature as the user's login credential lets the user input a customized facial expression or behavior action as the credential for login verification. It is difficult for anyone other than the user to obtain this credential, making it impossible to crack it and impersonate the user at login, which improves the security of user information.
  • Specifically, the first portrait feature can be obtained by establishing a coordinate system for the first images to obtain the coordinates of the human key points. As shown in FIG. 3, step S30, that is, acquiring the first portrait feature according to the human key points of the N first images, may specifically include:
  • S31 Acquire the coordinates of the human key points of the N first images according to the positions of the human key points of the N first images.
  • a coordinate system may be established in the photo frame of the shooting tool that collects the first image.
  • a coordinate system is established using the position of the user's eyebrow center in the photo frame as the origin, and then the coordinates of the key points of the human body of the first image are acquired.
  • Specifically, the preset training model is used to plot the user's portrait; as the portrait is plotted, the coordinates of each plotted point are obtained. For example, when points are plotted on the eyebrows, the coordinates of the eyebrow points can be obtained through the coordinate system.
  • the number of human body key points of the first image is counted first, and the coordinates of the human body key points of the first image are acquired after all necessary human body key points have entered the photo frame.
  • The necessary human key points can be determined from the training data. For example, if training indicates that all the human key points of the face and hands should enter the photo frame in order to obtain the corresponding expression features and hand behavior features, then the coordinates of the human key points of the first image are acquired only after all the face and hand key points have entered the photo frame.
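A minimal sketch of waiting for all necessary key points to enter the frame (the required set, the normalised [0, 1] coordinate convention, and the names are illustrative assumptions):

```python
# Illustrative required set; the application derives it from training data
# (e.g. all face and hand key points must enter the photo frame).
REQUIRED = {"left_eye", "right_eye", "mouth", "left_wrist", "right_wrist"}

def ready_to_capture(detected):
    """Return True once every required key point is inside the frame.

    `detected` maps key-point names to normalised (x, y) coordinates;
    points outside [0, 1] x [0, 1] are treated as out of frame.
    """
    in_frame = {
        name for name, (x, y) in detected.items()
        if 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0
    }
    return REQUIRED <= in_frame
```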
  • Since there are N first images, the user's position each time the frame is entered may differ, or the user's position may move while the video is recorded, so the coordinates of the human key points obtained from the N first images may differ each time. Therefore, to make the coordinates more representative, the acquired coordinates are further calculated to obtain coordinate values that can serve as the first portrait feature.
  • When the human key points are plotted, each key point yields a set of coordinates across the N first images.
  • For example, the eyebrow key point yields a set of coordinates when the coordinates are acquired; for each human key point, the range of the set of coordinates obtained is its feature interval value.
  • The coordinates of the human key points of the N first images are calculated using an exponentially weighted moving average algorithm (Exponentially Weighted Moving Average, EWMA for short), and the calculated results form the feature interval values of the human key points.
  • The formula for calculating the X coordinate with EWMA can be: X = w_1·x_1 + w_2·x_2 + … + w_n·x_n, where:
  • X is the weighted average coordinate value;
  • n is the number of first images (i.e. N);
  • x_i is the actual value of the i-th coordinate;
  • w_i is the weight of the i-th coordinate (the sum of the weights equals 1), namely w_1 + w_2 + … + w_n = 1.
  • In this way, for the same plotted point of the N first images, the EWMA of the X coordinate values and the EWMA of the Y coordinate values are obtained. The EWMA values obtained for each human key point are then combined to form the first feature interval value of that key point.
  • The weights can be set equal; for example, when there are 3 first images, 1/3 is taken as each weight value. Different weights can also be set according to the course of the action.
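The weighted-average formula and the feature interval can be sketched as follows (function names are illustrative; with no weights given the sketch falls back to the equal 1/n weighting mentioned above):

```python
def weighted_average(values, weights=None):
    """Weighted average X = w1*x1 + w2*x2 + ... + wn*xn, weights sum to 1.

    With no weights given, each of the n samples gets an equal weight
    1/n, matching the example of 1/3 for three first images.
    """
    n = len(values)
    if weights is None:
        weights = [1.0 / n] * n
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * x for w, x in zip(weights, values))

def feature_interval(values):
    """Range of one key point's coordinate across the N first images."""
    return (min(values), max(values))
```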
  • the first feature interval values of all the key points of the human body are used as the first portrait feature, which is bound to the user ID and stored in the database on the server side.
  • the user ID may be a mobile phone number, ID card number and account number used to distinguish different users.
  • In this embodiment, the coordinates of the human key points of the N first images are obtained from the positions of the human key points of the N first images; the exponentially weighted moving average algorithm is then applied to those coordinates to obtain the first feature interval values; finally, the first feature interval values are used as the first portrait feature.
  • Obtaining the user's portrait feature through the exponentially weighted moving average smooths the feature data derived from the first images into the first portrait feature, thereby improving the accuracy of the image feature configuration.
  • Using the standard image feature formed from the first portrait feature obtained in this embodiment as the login credential can effectively prevent logins without user authorization or logins by impersonators, thereby improving the security of user information.
  • In another embodiment, the first portrait feature may be obtained through feature extraction and recognition by the preset training model, where the preset training model includes a micro-expression recognition model and a gesture recognition model. Specifically, as shown in FIG. 5,
  • the method may further include:
  • S31' The N first images are divided into a first face image set and a first limb image set according to the human key points.
  • the sample image marked with area division may be input into a preset training model for training so that it can obtain the first face image and the first limb image according to key points of the human body.
  • the first image can be divided into a first face image and a first limb image by using the neck, which is a key point of the human body, as a dividing boundary.
  • The first face image set is composed of the first face images, and the first limb image set is composed of the first limb images.
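A simplified sketch of this division, splitting detected key points rather than image pixels (the neck-line threshold and the dict representation are assumptions for illustration):

```python
def split_by_neck(key_points, neck_y):
    """Split detected key points into face and limb groups at the neck.

    Points above the neck line (smaller y, since image y grows downward)
    form the face set; the rest form the limb set. A simplified stand-in
    for dividing each image along the neck key point.
    """
    face = {k: p for k, p in key_points.items() if p[1] < neck_y}
    limbs = {k: p for k, p in key_points.items() if p[1] >= neck_y}
    return face, limbs
```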
  • S32' The first face image set is input into the micro-expression recognition model, the human face key point features of the first face images are analyzed and recognized by the micro-expression recognition model, and the expression features of the first face image set are output as the standard face image features.
  • the facial expression features may include head features, eye features, and lip features.
  • Facial expression features include, for example, head raised, eyebrows raised, and mouth corners turned down. It can be understood that, since the first face image set includes multiple face images, the user's expression may be changing; therefore, the standard face image features can be acquired once the acquired expression features are stable.
  • Stability can be defined as obtaining the same expression feature in a preset number of consecutive face images.
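The stability criterion above can be sketched as follows (the function name and the default window of 3 frames are assumptions standing in for the "preset number"):

```python
def stable_expression(labels, window=3):
    """Return the expression once it repeats over `window` consecutive frames.

    `labels` is the per-image expression sequence; None is returned if
    no expression is ever held for `window` frames in a row.
    """
    run, current = 0, None
    for label in labels:
        run = run + 1 if label == current else 1
        current = label
        if run >= window:
            return current
    return None
```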
  • the expression features of the first face image can be used as the standard face image features, or they can be combined with the results of the expression to form the standard face image features.
  • the result of expression refers to expressions of happiness, anger or sadness.
  • An international micro-expression database can be accessed through the server to identify the facial expression from the micro-expression database.
  • The international micro-expression database includes 54 kinds of micro-expressions, and the specific expression can be obtained from subtle changes in the human key points.
  • S33' The first limb image set is input into a gesture recognition model, behavior recognition is performed on the human key points of the first limb images by the gesture recognition model, and the behavior action feature is output as the standard limb image feature. For example, if the output behavior action is raising the left hand, then raising the left hand is used as the standard limb image feature.
  • a sample set of a series of actions may be input in advance to allow the gesture recognition model to learn, so that the gesture recognition model recognizes the user's behavior.
  • a set of motion raising hand sample sets is input into the gesture recognition model, so that the gesture recognition model can recognize the motion of raising the hand.
  • S34' The standard face image feature obtained in step S32' and the standard limb image feature obtained in step S33' are combined into the first portrait feature.
  • a happy facial expression feature and a left-hand raised behavior feature are combined to form a first portrait feature.
  • the first face image set and the first limb image set are obtained by using the human key points of the N first images; then the first face image set is input into the micro-expression recognition model, respectively To get the standard face image features; input the first limb image set into the gesture recognition model to get the standard limb image features; finally, the standard face image features and the standard limb image features form the first portrait features.
  • the standard image features formed by the first portrait feature obtained according to this embodiment are used as credentials for user login, which can effectively avoid the situation of logging in without user authorization or impersonating the user, thereby improving the user information. safety.
  • Step S60, that is, acquiring the second portrait feature according to the human key points of the second image, as shown in FIG. 6, may specifically include:
  • S61 Acquire a second face image and a second limb image according to the key points of the human body of the second image.
  • The process of acquiring the second face image and the second limb image from the human key points of the second image is similar to the process of acquiring the first face image set and the first limb image set from the human key points of the first images: the second image is input into the trained preset training model, and the second face image and the second limb image are obtained according to the dividing boundary.
  • S62 Input the second face image into the micro-expression recognition model to obtain the characteristics of the test face image.
  • Specifically, the second face image is input into the micro-expression recognition model, the human key point features of the second face image are analyzed and recognized by the micro-expression recognition model, and the expression features of the second face image are output as the test face image features, such as head raised, eyebrows raised, and mouth corners turned down.
  • The test face image features are kept consistent in form with the standard face image features. For example, if the standard face image features are composed of the expression features of the first face images together with the expression result, then the test face image features are likewise composed of the expression features of the second face image together with its expression result.
  • S63 Input the second limb image into the gesture recognition model to obtain the test limb image features.
  • Specifically, the second limb image is input into the gesture recognition model, behavior recognition is performed on the human key points of the second limb image by the gesture recognition model, and the behavior action feature is output as the test limb image feature.
  • S64 The test face image features obtained in step S62 and the test limb image features obtained in step S63 constitute the second portrait feature.
  • In this embodiment, the second face image and the second limb image are obtained according to the human key points of the second image; the second face image is input into the micro-expression recognition model to obtain the test face image features; the second limb image is input into the gesture recognition model to obtain the test limb image features; finally, the test face image features and the test limb image features form the second portrait feature. The portrait feature of the second image can thus be extracted and compared with the standard image feature to enable verification of image features.
  • an image feature configuration and verification device is provided, and the image feature configuration and verification device corresponds to the image feature configuration and verification method in the above embodiment in one-to-one correspondence.
  • The image feature configuration and verification device includes a first image acquisition module 10, a first human body key point acquisition module 20, a first portrait feature acquisition module 30, a standard image feature configuration module 40, a second human body key point acquisition module 50, a second portrait feature acquisition module 60, and a portrait feature matching verification module 70. The detailed description of each functional module is as follows:
  • the first image acquisition module 10 is configured to acquire N first images, where N is a positive integer greater than or equal to 2.
  • first image acquisition module 10 is also used to:
  • the first human key point obtaining module 20 is configured to obtain N human key points of the first image according to a preset training model.
  • the first portrait feature acquisition module 30 is configured to acquire the first portrait feature according to the N human body key points of the first image.
  • the standard image feature configuration module 40 is configured to configure the first portrait feature as a standard image feature.
  • the second human key point obtaining module 50 is used to obtain a second image and obtain the human key points of the second image according to a preset training model.
  • the second portrait feature acquisition module 60 is configured to acquire the second portrait feature according to the key points of the human body in the second image.
  • The portrait feature matching verification module 70 is configured to match the second portrait feature with the standard image features and, if the matching succeeds, output a verification success result.
  • the first portrait feature acquisition module 30 includes a coordinate acquisition unit 31, a feature interval value acquisition unit 32, and a first portrait feature setting unit 33.
  • the coordinate acquiring unit 31 is configured to acquire the coordinates of the human key points of the N first images according to the positions of the human key points of the N first images.
  • The feature interval value obtaining unit 32 is configured to calculate the coordinates of the human body key points of the N first images using an exponentially weighted moving average algorithm to obtain the first feature interval value.
  • the first portrait feature setting unit 33 is configured to use the first feature interval value as the first portrait feature.
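The smoothing performed by unit 32 — a moving exponentially weighted average over the key-point coordinates of the N first images — can be sketched as follows. The flat coordinate layout, the decay factor `beta`, and the fixed `tolerance` used to widen the smoothed coordinates into an interval are illustrative assumptions, since the embodiment does not fix these values.

```python
def ewma_keypoints(coord_seq, beta=0.9):
    """Bias-corrected exponentially weighted moving average over a
    sequence of flat (x1, y1, x2, y2, ...) key-point coordinate lists,
    one list per first image."""
    v = [0.0] * len(coord_seq[0])
    corrected = v
    for t, coords in enumerate(coord_seq, start=1):
        v = [beta * vi + (1.0 - beta) * ci for vi, ci in zip(v, coords)]
        corrected = [vi / (1.0 - beta ** t) for vi in v]  # early-step bias fix
    return corrected

def first_feature_interval(coord_seq, tolerance=5.0, beta=0.9):
    """Widen the smoothed coordinates by a fixed tolerance to obtain
    lower and upper bounds, i.e. the first feature interval value."""
    center = ewma_keypoints(coord_seq, beta)
    return ([c - tolerance for c in center], [c + tolerance for c in center])
```

The interval, rather than a single coordinate, is what makes the later matching step tolerant to small pose differences between enrollment images.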
  • The preset training model includes a micro-expression recognition model and a gesture recognition model. Optionally, as shown in FIG. 9, the first portrait feature acquisition module 30 includes an image set acquisition unit 31', a standard facial feature acquisition unit 32', a standard limb feature acquisition unit 33', and a first portrait feature acquisition unit 34'.
  • The image set acquisition unit 31' is used to acquire the first face image set and the first limb image set based on the human body key points of the N first images.
  • The standard facial feature acquisition unit 32' is used to input the first face image set into the micro-expression recognition model to obtain the standard face image features.
  • The standard limb feature acquisition unit 33' is used to input the first limb image set into the gesture recognition model to obtain the standard limb image features.
  • The first portrait feature acquisition unit 34' is used to combine the standard face image features and the standard limb image features into the first portrait feature.
  • the second portrait feature acquisition module 60 is also used to:
  • The test face image features and the test limb image features are combined to form the second portrait feature.
  • Each module in the above image feature configuration and verification device may be implemented in whole or in part by software, hardware, and a combination thereof.
  • The above modules may be embedded in hardware in, or independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device is provided.
  • the computer device may be a server, and an internal structure diagram thereof may be as shown in FIG.
  • The computer device includes a processor, a memory, a network interface, and a database connected through a system bus. The processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer-readable instructions, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer-readable instructions in the non-volatile storage medium.
  • The database of the computer device is used to store the first images, the first video data, the preset training model, the standard image features, the exponentially weighted moving average algorithm, and the feature interval values.
  • The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer-readable instructions, when executed by the processor, implement an image feature configuration and verification method.
  • In an embodiment, a computer device is provided, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor.
  • When the processor executes the computer-readable instructions, the following steps are implemented:
  • N is a positive integer greater than or equal to 2;
  • The second portrait feature is matched with the standard image features, and if the matching succeeds, a verification success result is output.
  • one or more non-volatile readable storage media storing computer-readable instructions are provided.
  • When the computer-readable instructions are executed by one or more processors, the one or more processors perform the following steps:
  • N is a positive integer greater than or equal to 2;
  • The second portrait feature is matched with the standard image features, and if the matching succeeds, a verification success result is output.
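The matching step in the steps above might look like the following sketch. The dictionary feature layout and the exact-equality matching rule are assumptions for illustration; the criterion for a "successful" match (equality, interval containment, or a similarity threshold) is not fixed here.

```python
def match_portrait_feature(second_feature, standard_feature):
    """Verification succeeds only when every standard feature is matched
    by the corresponding test feature of the second portrait."""
    return all(second_feature.get(name) == value
               for name, value in standard_feature.items())

def verify(second_feature, standard_feature):
    """Output a verification result string, mirroring the step in which a
    verification success result is output when matching succeeds."""
    if match_portrait_feature(second_feature, standard_feature):
        return "verification success"
    return "verification failure"
```

Because every configured feature must match, a non-user who reproduces only the face but not the configured expression or gesture would fail verification, which is the security property claimed for this scheme.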
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Security & Cryptography (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an image feature configuration and verification method and apparatus, a computer device, and a medium. The method comprises: acquiring N first images, N being a positive integer greater than or equal to 2; acquiring N human body key points of the first image according to a preset training model; acquiring a first portrait feature according to the N human body key points of the first image; configuring the first portrait feature as a standard image feature; acquiring a second image and acquiring human body key points of the second image according to the preset training model; acquiring a second portrait feature according to the human body key points of the second image; and matching the second portrait feature with the standard image feature and, if the matching succeeds, outputting a verification success result. The technical solution of the present invention achieves high image feature configuration accuracy and makes it difficult for non-users to impersonate a user and log in, thereby ensuring the security of user information.
PCT/CN2018/122731 2018-10-17 2018-12-21 Procédé et appareil de configuration et de vérification de caractéristique d'image, dispositif informatique et support WO2020077822A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811208048.6 2018-10-17
CN201811208048.6A CN109472269A (zh) 2018-10-17 2018-10-17 图像特征配置及校验方法、装置、计算机设备及介质

Publications (1)

Publication Number Publication Date
WO2020077822A1 true WO2020077822A1 (fr) 2020-04-23

Family

ID=65665930

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/122731 WO2020077822A1 (fr) 2018-10-17 2018-12-21 Procédé et appareil de configuration et de vérification de caractéristique d'image, dispositif informatique et support

Country Status (2)

Country Link
CN (1) CN109472269A (fr)
WO (1) WO2020077822A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111986260A (zh) * 2020-09-04 2020-11-24 北京小狗智能机器人技术有限公司 一种图像处理的方法、装置及终端设备
CN112418146B (zh) * 2020-12-02 2024-04-30 深圳市优必选科技股份有限公司 表情识别方法、装置、服务机器人和可读存储介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005242677A (ja) * 2004-02-26 2005-09-08 Ntt Comware Corp 複合認証システムおよびその方法ならびにプログラム
CN102663413A (zh) * 2012-03-09 2012-09-12 中盾信安科技(江苏)有限公司 一种面向多姿态和跨年龄的人脸图像认证方法
CN106650555A (zh) * 2015-11-02 2017-05-10 苏宁云商集团股份有限公司 一种基于机器学习的真人验证方法及系统

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679159B (zh) * 2013-12-31 2017-10-17 海信集团有限公司 人脸识别方法
CN104537336B (zh) * 2014-12-17 2017-11-28 厦门立林科技有限公司 一种具备自学习功能的人脸识别方法和系统
CN204926116U (zh) * 2015-04-21 2015-12-30 同方威视技术股份有限公司 一种包含视频分析的安检判图系统
CN104966046B (zh) * 2015-05-20 2017-07-21 腾讯科技(深圳)有限公司 一种人脸关键点位定位结果的评估方法,及评估装置
CN106909870A (zh) * 2015-12-22 2017-06-30 中兴通讯股份有限公司 人脸图像的检索方法及装置
CN105426730A (zh) * 2015-12-28 2016-03-23 小米科技有限责任公司 登录验证处理方法、装置及终端设备
CN107360119A (zh) * 2016-05-09 2017-11-17 中兴通讯股份有限公司 一种云桌面登陆验证方法、云桌面控制系统及客户端
CN106127170B (zh) * 2016-07-01 2019-05-21 重庆中科云从科技有限公司 一种融合关键特征点的训练方法、识别方法及系统
CN106295568B (zh) * 2016-08-11 2019-10-18 上海电力学院 基于表情和行为双模态结合的人类自然状态情感识别方法
CN107679504A (zh) * 2017-10-13 2018-02-09 北京奇虎科技有限公司 基于摄像头场景的人脸识别方法、装置、设备及存储介质
CN108256459B (zh) * 2018-01-10 2021-08-24 北京博睿视科技有限责任公司 基于多摄像机融合的安检门人脸识别和人脸自动建库算法
CN108596039B (zh) * 2018-03-29 2020-05-05 南京邮电大学 一种基于3d卷积神经网络的双模态情感识别方法及系统
CN108537160A (zh) * 2018-03-30 2018-09-14 平安科技(深圳)有限公司 基于微表情的风险识别方法、装置、设备及介质

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667479A (zh) * 2020-06-10 2020-09-15 创新奇智(成都)科技有限公司 目标图像的图案核验方法及装置、电子设备、存储介质
CN111968203B (zh) * 2020-06-30 2023-11-14 北京百度网讯科技有限公司 动画驱动方法、装置、电子设备及存储介质
CN111968203A (zh) * 2020-06-30 2020-11-20 北京百度网讯科技有限公司 动画驱动方法、装置、电子设备及存储介质
CN112101124B (zh) * 2020-08-20 2023-12-08 深圳数联天下智能科技有限公司 一种坐姿检测方法及装置
CN112101123A (zh) * 2020-08-20 2020-12-18 深圳数联天下智能科技有限公司 一种注意力检测方法及装置
CN112101124A (zh) * 2020-08-20 2020-12-18 深圳数联天下智能科技有限公司 一种坐姿检测方法及装置
CN112101123B (zh) * 2020-08-20 2024-05-28 深圳数联天下智能科技有限公司 一种注意力检测方法及装置
CN112257645A (zh) * 2020-11-02 2021-01-22 浙江大华技术股份有限公司 人脸的关键点定位方法和装置、存储介质及电子装置
CN112257645B (zh) * 2020-11-02 2023-09-01 浙江大华技术股份有限公司 人脸的关键点定位方法和装置、存储介质及电子装置
CN112287866A (zh) * 2020-11-10 2021-01-29 上海依图网络科技有限公司 一种基于人体关键点的人体动作识别方法及装置
CN112287866B (zh) * 2020-11-10 2024-05-31 上海依图网络科技有限公司 一种基于人体关键点的人体动作识别方法及装置
CN113177442A (zh) * 2021-04-12 2021-07-27 广东省科学院智能制造研究所 一种基于边缘计算的人体行为检测方法及装置
CN113177442B (zh) * 2021-04-12 2024-01-30 广东省科学院智能制造研究所 一种基于边缘计算的人体行为检测方法及装置

Also Published As

Publication number Publication date
CN109472269A (zh) 2019-03-15

Similar Documents

Publication Publication Date Title
WO2020077822A1 (fr) Procédé et appareil de configuration et de vérification de caractéristique d'image, dispositif informatique et support
US10997445B2 (en) Facial recognition-based authentication
Zhang et al. Touch gesture-based active user authentication using dictionaries
Conti et al. Mind how you answer me! Transparently authenticating the user of a smartphone when answering or placing a call
US10275672B2 (en) Method and apparatus for authenticating liveness face, and computer program product thereof
US9355236B1 (en) System and method for biometric user authentication using 3D in-air hand gestures
Zhao et al. Mobile user authentication using statistical touch dynamics images
US20200380246A1 (en) Virtual avatar generation method and apparatus, and storage medium
US20120164978A1 (en) User authentication method for access to a mobile user terminal and corresponding mobile user terminal
US10606994B2 (en) Authenticating access to a computing resource using quorum-based facial recognition
KR20170000128A (ko) 다중 생체 인증을 통한 모바일 전자 문서 시스템
US10599824B2 (en) Authenticating access to a computing resource using pattern-based facial recognition
US10594690B2 (en) Authenticating access to a computing resource using facial recognition based on involuntary facial movement
US10885171B2 (en) Authentication verification using soft biometric traits
US10922533B2 (en) Method for face-to-unlock, authentication device, and non-volatile storage medium
Lu et al. Multifactor user authentication with in-air-handwriting and hand geometry
US20230100874A1 (en) Facial expression-based unlocking method and apparatus, computer device, and storage medium
Oza et al. Federated learning-based active authentication on mobile devices
US9594949B1 (en) Human identity verification via automated analysis of facial action coding system features
WO2020244160A1 (fr) Procédé et appareil de commande d'équipement terminal, dispositif informatique, et support de stockage lisible
CN110633677A (zh) 人脸识别的方法及装置
Malatji et al. Acceptance of biometric authentication security technology on mobile devices
Zhong et al. VeinDeep: Smartphone unlock using vein patterns
KR20210017230A (ko) 얼굴 영상의 생체 감지 장치 및 방법
TWI620076B (zh) 人體動作的分析系統

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18937335

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05/08/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18937335

Country of ref document: EP

Kind code of ref document: A1