WO2020077822A1 - Image feature configuration and verification method and apparatus, computer device and medium - Google Patents

Info

Publication number
WO2020077822A1
WO2020077822A1 (PCT/CN2018/122731)
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature
key points
acquiring
portrait
Prior art date
Application number
PCT/CN2018/122731
Other languages
French (fr)
Chinese (zh)
Inventor
胡金丹
Original Assignee
深圳壹账通智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳壹账通智能科技有限公司 filed Critical 深圳壹账通智能科技有限公司
Publication of WO2020077822A1 publication Critical patent/WO2020077822A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition

Definitions

  • The present application relates to the field of image recognition, and more specifically, to an image feature configuration and verification method and apparatus, a computer device, and a storage medium.
  • The use of mobile phones is becoming increasingly common, and the information security of users on mobile phones is receiving more and more attention.
  • Mobile phone login is usually protected by encryption, and common encryption methods include sliders, passwords, voice, portraits, and fingerprints. Because each of these methods relies on a single unlocking action, known cracking techniques exist and cracking is not especially difficult, so the user's information security still cannot be well guaranteed.
  • Embodiments of the present application provide an image feature configuration and verification method, apparatus, computer device, and storage medium to solve the problem that user login methods are easily cracked.
  • An image feature configuration and verification method including:
  • N is a positive integer greater than or equal to 2;
  • the second portrait feature is matched with the standard image feature, and if the match is successful, a result of verification is output.
  • An image feature configuration and verification device including:
  • the first image acquisition module is used to acquire N first images, where N is a positive integer greater than or equal to 2;
  • a first human key point obtaining module configured to obtain N human key points of the first image according to a preset training model
  • a first portrait feature acquisition module configured to acquire the first portrait feature according to N human key points of the first image
  • a standard image feature configuration module configured to configure the first portrait feature as a standard image feature
  • a second human body key point obtaining module configured to obtain a second image, and obtain human body key points of the second image according to the preset training model
  • a second portrait feature acquisition module configured to acquire a second portrait feature according to the key points of the human body of the second image
  • the portrait feature matching verification module is configured to match the second portrait feature with the standard image feature, and if the match is successful, output a result of verification.
  • a computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor.
  • the processor executes the computer-readable instructions, the following steps are implemented:
  • N is a positive integer greater than or equal to 2;
  • the second portrait feature is matched with the standard image feature, and if the match is successful, a result of verification is output.
  • One or more non-volatile readable storage media storing computer-readable instructions, which when executed by one or more processors, cause the one or more processors to perform the following steps:
  • N is a positive integer greater than or equal to 2;
  • the second portrait feature is matched with the standard image feature, and if the match is successful, a result of verification is output.
  • FIG. 1 is a schematic diagram of an application environment of an image feature configuration and verification method in an embodiment of the present application
  • FIG. 2 is a flowchart of an image feature configuration and verification method in an embodiment of the present application
  • FIG. 3 is another flowchart of the image feature configuration and verification method in an embodiment of the present application.
  • FIG. 6 is another flowchart of an image feature configuration and verification method in an embodiment of the present application.
  • FIG. 7 is a schematic block diagram of an image feature configuration and verification device in an embodiment of the present application.
  • FIG. 8 is a schematic block diagram of a first portrait feature acquisition module in an image feature configuration and verification device according to an embodiment of the present application
  • FIG. 9 is another principle block diagram of a first portrait feature acquisition module in an image feature configuration and verification device according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a computer device in an embodiment of the present application.
  • The image feature configuration and verification method provided in this application can be applied in the application environment shown in FIG. 1, in which the client communicates with the server through the network. The server obtains N first images through the client and obtains the human key points of the N first images according to the preset training model; it then obtains the first portrait feature based on the human key points of the N first images and configures the first portrait feature as the standard image feature. The server then obtains a second image, acquires the human key points of the second image according to the preset training model, and acquires the second portrait feature from those key points. Finally, the server matches the second portrait feature against the standard image feature and, if the matching is successful, outputs the verification result to the client.
  • the client can be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers and portable wearable devices.
  • the server can be implemented with an independent server or a server cluster composed of multiple servers.
  • an image feature configuration and verification method is provided.
  • the method is applied to the server in FIG. 1 as an example for illustration, including the following steps:
  • the first image is a user portrait collected when the user sets image features.
  • the user's portrait can be collected through the shooting tool of the client, for example, the user's portrait can be collected through the shooting function of the camera of the mobile phone.
  • Specifically, the server first sends a login verification request to the client so that the user can enter a password or fingerprint for login verification; if the result of that verification is a pass, the server then sends an instruction to the client to collect the first images.
  • the number of first images is N, and N is a positive integer greater than or equal to 2.
  • the first image may be multiple static images, or multiple images acquired by recording video data.
  • step S10 may specifically include:
  • The first video data is a video recording of the user, for example a video of the user blinking.
  • the server sends an instruction to collect the first image to the client.
  • the client opens the shooting tool according to the instruction to collect the first image, records the user's video, and obtains the first video data.
  • S12 Extract frames from the first video data at a preset time interval to obtain N images to be processed.
  • The preset time interval can be set according to the actual situation.
  • Alternatively, the total frame count and the overall duration of the first video data may be obtained, and the preset interval computed by dividing the overall duration by the total frame count.
  • The server then extracts frames from the first video data at the obtained preset interval to obtain N images to be processed.
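  • The frame-sampling step above can be sketched as follows. This is a minimal illustration under the assumption that the sampling interval has already been derived from the clip's frame count and duration; the function name `frame_indices` is illustrative, and a real implementation would read the selected frames with a video library.

```python
def frame_indices(total_frames: int, duration_s: float, interval_s: float) -> list[int]:
    """Return the indices of the frames to sample, one every `interval_s` seconds."""
    fps = total_frames / duration_s          # native frame rate of the clip
    step = max(1, round(interval_s * fps))   # frames to skip between samples
    return list(range(0, total_frames, step))

# Sampling a 10-second, 100-frame clip once per second yields 10 frame indices.
indices = frame_indices(total_frames=100, duration_s=10.0, interval_s=1.0)
```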
  • S13 Create a normalized image, obtain the height and width of the normalized image, scale each of the N images to be processed to that height and width, and replace the original pixel values of each image to be processed with the normalized pixel values to obtain N first images.
  • Specifically, the server first creates a normalized image, for example an image of 260 * 260 pixels; it then obtains the height and width of the normalized image, computes the normalized version of each image to be processed from that height and width, and replaces the original pixel values of the image to be processed with the normalized ones, yielding the N first images.
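  • A nearest-neighbour resize illustrates this normalization step. The sketch assumes NumPy arrays and uses the 260 * 260 size from the example above; the function name `normalize_image` is illustrative, and a production system would more likely use a library resize with interpolation.

```python
import numpy as np

def normalize_image(img: np.ndarray, size=(260, 260)) -> np.ndarray:
    """Rescale `img` (H x W [, C]) to `size` with nearest-neighbour sampling."""
    h, w = img.shape[:2]
    H, W = size
    rows = np.arange(H) * h // H   # source row for each target row
    cols = np.arange(W) * w // W   # source column for each target column
    return img[rows[:, None], cols]

# Each image to be processed is replaced by its normalized pixel values.
first_image = normalize_image(np.zeros((480, 640), dtype=np.uint8))
```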
  • S20 Acquire N human key points of the first image according to the preset training model.
  • the preset training model may be a face detection model, a feature point detection model, a posture detection model, an emotion detection model, and so on.
  • the key points of the human body refer to the points in the first image that reflect human body characteristics, such as eyebrows, eyes, mouth, shoulders, elbow joints, and wrists.
  • The preset training model can be trained by inputting sample images annotated with key points, so that it learns to locate the key points of the human body.
  • the preset training model can identify the key points of the human body in the first image, thereby acquiring the key points of the human body of the first image.
  • S30 Acquire the first portrait feature according to the key points of the human body of the N first images.
  • The first portrait feature refers to a portrait feature composed of the features of the human key points of the N first images, and is used as the credential for determining whether the person is the user themselves.
  • the first portrait feature may be a facial feature, an expression feature, a behavior feature, or the like.
  • the facial expression feature and the behavioral action feature can be combined as the first portrait feature, that is, the user's facial expression and body movement can be used as the login credential to improve the security of user information.
  • Behavioral action features refer to custom behaviors entered when the user sets up login credentials, such as making a hand-raising action or blinking the left eye to the camera, or a combination of blinking the left eye and raising the right hand. That is, the behavior feature may be a single behavior action, or a combination of a plurality of behavior actions.
  • the first portrait feature can be obtained by extracting, calculating, or recognizing features of key points of the human body by a preset training model.
  • For example, a preset training model can be used to extract and recognize the facial expression formed by the human key points: from features such as the tilt angle of the eyebrows, the downward movement of the corners of the mouth, and the angle between the eyes and the face, the corresponding facial expression can be identified and the corresponding expression feature obtained.
  • the position of the key point of the human body can be tracked, and the change of the position of the key point of the human body can be used as the behavior feature.
  • For example, a coordinate system is established over the multiple first images obtained from the video data. If a human key point such as the wrist moves from position A to position B across the multiple first images, the coordinate change information of that position can be obtained; from the coordinate change information, the change in the position of the wrist key point is derived, and from it the user-defined behavior feature of the hand-raising action.
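  • The position-tracking idea can be sketched with a hypothetical `is_hand_raise` check on the wrist coordinates tracked across the first images. It assumes image coordinates where y grows downward, so a raise means y decreases; the threshold value is illustrative.

```python
def is_hand_raise(wrist_positions, min_rise=50.0):
    """Classify a hand-raise from (x, y) wrist coordinates tracked over frames.

    Assumes image coordinates (y grows downward): a raise is a net upward
    move of the wrist, i.e. its y value decreasing by at least `min_rise`.
    """
    (_, y_start), (_, y_end) = wrist_positions[0], wrist_positions[-1]
    return (y_start - y_end) >= min_rise

# Wrist moves from position A (120, 300) up to position B (118, 180): a raise.
raised = is_hand_raise([(120, 300), (121, 250), (118, 180)])
```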
  • The first portrait feature and the user ID are bound and stored in the server database as the standard image feature; this completes the configuration of the image feature, and the standard image feature is used as the user's login credential.
  • the user ID is an identifier used by the server to distinguish different users, and may be the user's mobile phone number, account number, or ID number.
  • standard image features and other forms of passwords can be combined as credentials for user login, for example, combined with digital passwords, which can further enhance the security of user information.
  • S50 Acquire the second image, and acquire key points of the human body of the second image according to the preset training model.
  • the second image refers to a portrait image obtained when the user performs login verification.
  • the number of second images is at least one.
  • the second image is acquired through the shooting tool of the client. After acquiring the second image, the second image is input into the preset training model, and the key points of the human body of the second image are acquired according to the preset training model.
  • the process of acquiring the key points of the human body of the second image is the same as the process of acquiring the key points of the human body of the first image, and will not be repeated here.
  • S60 Acquire the second portrait feature according to the key points of the human body in the second image.
  • the process of acquiring the second portrait feature according to the key points of the human body of the second image is the same as the process of acquiring the first portrait feature, which will not be repeated here.
  • the second portrait feature is of the same type as the first portrait feature, for example, all are facial features, facial expression features, or behavioral features.
  • S70 Match the second portrait feature with the standard image feature, and if the match is successful, output a verified result.
  • the server matches the second portrait feature with the standard image feature, and determines whether the acquired second portrait feature matches the standard image feature.
  • If the standard image feature corresponds to facial features, each facial feature in the second portrait feature is compared with the corresponding facial feature in the standard image feature to determine whether they are the same, for example whether the eyebrows are raised or whether the corners of the mouth move down. If the facial features are the same, the match is determined to be successful; otherwise the match fails.
  • If the standard image feature corresponds to an expression feature, the expression corresponding to the second portrait feature is compared with the expression corresponding to the standard image feature to determine whether the expression results are the same, for example whether both are happy, sad, or surprised. If the expression results are the same, the match is determined to be successful; otherwise the match fails.
  • The behavior action result in the second portrait feature is compared with the behavior action result in the standard image feature to determine whether they are consistent. For example, if the behavior action result in the standard image feature is a left-hand raise, it is judged whether the behavior action result of the second portrait feature is also a left-hand raise; if the behavior action results are the same, the match is determined to be successful, otherwise the match fails.
  • If the server judges that the second portrait feature matches the standard image feature, it outputs the verification-passed result and allows the user to log in. If the server judges that the second portrait feature does not match the standard image feature, it outputs a verification-failed result and rejects the login. It can be understood that when other users attempt to log in by impersonating the user, they do not know whether the standard image feature corresponds to an expression feature, a behavior action feature, or a combination of the two, nor do they know the specific expression and behavior action features, so the credential is difficult to crack.
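  • A minimal sketch of this matching check, under the assumption that the standard image feature is stored as per-key-point coordinate intervals (the feature interval values discussed in this application); the dictionary layout and the name `match_features` are illustrative.

```python
def match_features(candidate: dict, standard: dict) -> bool:
    """Return True when every key point's coordinate falls in its standard interval.

    `standard` maps a key-point name to ((x_low, x_high), (y_low, y_high));
    `candidate` maps the same names to an observed (x, y) coordinate.
    """
    for name, ((x_lo, x_hi), (y_lo, y_hi)) in standard.items():
        if name not in candidate:
            return False                      # required key point missing
        x, y = candidate[name]
        if not (x_lo <= x <= x_hi and y_lo <= y <= y_hi):
            return False                      # coordinate outside its interval
    return True

standard = {"wrist": ((100.0, 130.0), (170.0, 200.0))}
ok = match_features({"wrist": (118.0, 180.0)}, standard)   # within both intervals
```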
  • In summary, the first portrait feature is configured as the standard image feature; the second image is then obtained and its human key points acquired according to the preset training model; the second portrait feature is acquired from the human key points of the second image; finally, the second portrait feature is matched with the standard image feature, and if the matching is successful, the verification result is output.
  • Since the standard image feature is used as the login credential, the user does not need to enter a password to log in, which simplifies the user's operation.
  • obtaining portrait features based on key points of the human body and configuring the acquired portrait features as standard image features can make the image feature configuration more representative and improve the accuracy of the image feature configuration.
  • Using the standard image feature as the user's login credential allows the user to input a customized facial or behavioral action as the credential for login verification. It is difficult for anyone other than the user to obtain this credential, which makes it hard to crack or to use to impersonate the user at login, thereby improving the security of user information.
  • The first portrait feature can be obtained by establishing a coordinate system for the first image to obtain the coordinates of the human key points. As shown in the corresponding flowchart, step S30, that is, acquiring the first portrait feature according to the human key points of the N first images, may specifically include:
  • S31 Acquire the coordinates of the human key points of the N first images according to the positions of the human key points of the N first images.
  • a coordinate system may be established in the photo frame of the shooting tool that collects the first image.
  • a coordinate system is established using the position of the user's eyebrow center in the photo frame as the origin, and then the coordinates of the key points of the human body of the first image are acquired.
  • Specifically, the preset training model is used to plot the points of the user's portrait; as each point is plotted, its coordinates are obtained. For example, when points are plotted on the eyebrows, the coordinates of the eyebrow points can be read from the coordinate system.
  • The number of human key points of the first image is counted first, and the coordinates of the human key points of the first image are acquired only after all the necessary human key points have entered the photo frame.
  • The necessary human key points can be determined from the training data. For example, if training indicates that all the key points of the face and hands must enter the photo frame before the corresponding expression features and hand behavior features can be obtained, then the coordinates of the key points of the first image are acquired only after all the key points of the face and hands have entered the photo frame.
  • Since there are N first images, the position of the user may differ each time they enter the frame, or the user may move while the video is being recorded, so the coordinates of the human key points obtained from each of the N first images may differ. To make the coordinates more representative, the acquired coordinates must be further calculated to obtain coordinate values that can serve as the first portrait feature.
  • When the human key points are plotted, each key point yields a set of coordinates across the N images.
  • For example, the eyebrow key point yields one set of coordinates during acquisition, so for each human key point, the range of values of that set of coordinates is its feature interval value.
  • The coordinates of the human key points of the N first images are calculated using an exponentially weighted moving average algorithm (Exponential Weighted Moving Average, EWMA for short), and the calculated results form the feature interval values of the human key points.
  • The formula for calculating the X coordinate with EWMA can be: X = w 1 x 1 + w 2 x 2 + ... + w n x n , where X is the weighted average coordinate value, n is the number of first images (i.e., N), x i is the actual value of the i-th coordinate, and w i is the weight of the i-th coordinate; the weights sum to 1, that is, w 1 + w 2 + ... + w n = 1.
  • The EWMA value of the X coordinate and the EWMA value of the Y coordinate are computed for the same plotted point across the N first images. The EWMA values obtained for each human key point are then combined to form the first feature interval value of that key point.
  • The weights can be set equal; for example, when there are 3 first images, each weight is 1/3. Different weights can also be set according to the course of the action.
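  • The weighted-average computation follows directly from the formula above; the equal-weight case with three first images reproduces the 1/3 example. The function name `weighted_average` is illustrative.

```python
def weighted_average(values, weights):
    """Compute X = sum(w_i * x_i); the weights are expected to sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * x for w, x in zip(weights, values))

# Equal weights for N = 3 first images: each coordinate contributes 1/3.
x_coords = [101.0, 104.0, 98.0]
x_avg = weighted_average(x_coords, [1 / 3] * 3)   # approximately 101.0
```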
  • the first feature interval values of all the key points of the human body are used as the first portrait feature, which is bound to the user ID and stored in the database on the server side.
  • the user ID may be a mobile phone number, ID card number and account number used to distinguish different users.
  • The coordinates of the human key points of the N first images are obtained from the positions of the human key points of the N first images; then the exponentially weighted moving average algorithm is applied to those coordinates to obtain the first feature interval values; finally, the first feature interval values are used as the first portrait feature.
  • The user's portrait feature is obtained through the exponentially weighted moving average algorithm, which smooths the feature data of the user's portrait derived from the first images into the first portrait feature, thereby improving the accuracy of the image feature configuration.
  • The standard image feature formed from the first portrait feature obtained according to this embodiment is used as the login credential, which can effectively prevent logins made without the user's authorization or logins impersonating the user, thereby improving the security of user information.
  • the first portrait feature may be obtained after feature extraction or recognition by a preset training model, where the preset training model includes a micro-expression recognition model and a gesture recognition model, specifically, as shown in FIG. 5,
  • the method may further include:
  • N first images are divided into a first face image set and a first limb image set according to key points of the human body.
  • the sample image marked with area division may be input into a preset training model for training so that it can obtain the first face image and the first limb image according to key points of the human body.
  • the first image can be divided into a first face image and a first limb image by using the neck, which is a key point of the human body, as a dividing boundary.
  • the first face image set is composed of the first face image
  • the first limb image set is composed of the first limb image.
  • the first face image set is input into the micro-expression recognition model, and the human face key point characteristics of the first face image are analyzed and recognized according to the micro-expression recognition model, and the expression features of the first face image set are output As a standard face image feature.
  • the facial expression features may include head features, eye features, and lip features.
  • Examples are facial expression features such as head raised, eyebrows raised, and mouth corners down. It can be understood that, since the first face image set includes multiple face images, the user's expression may change across them; therefore, the standard face image features are acquired only after the acquired expression features have become stable.
  • Stability can be defined as obtaining the same expression feature in a preset number of consecutive face images.
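  • This stability criterion, namely the same expression feature appearing in a preset number of consecutive face images, can be sketched as follows; the function name `expression_is_stable` and the window size of 3 are illustrative.

```python
def expression_is_stable(expressions, window=3):
    """True when the last `window` recognized expression features are identical."""
    if len(expressions) < window:
        return False
    tail = expressions[-window:]
    return all(e == tail[0] for e in tail)

# The expression becomes stable once "happy" is seen in 3 consecutive images.
stable = expression_is_stable(["neutral", "happy", "happy", "happy"])
```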
  • the expression features of the first face image can be used as the standard face image features, or they can be combined with the results of the expression to form the standard face image features.
  • the result of expression refers to expressions of happiness, anger or sadness.
  • An international micro-expression database can be accessed through the server to identify the facial expression from the micro-expression database.
  • The international micro-expression database includes 54 kinds of micro-expressions, and specific expressions can be identified from subtle changes in the human key points.
  • The first limb image set is input into a gesture recognition model, behavior recognition is performed on the human key points of the first limb image according to the gesture recognition model, and the behavior feature is output as the standard limb image feature. For example, if the output behavior action is a left-hand raise, the left-hand raise is used as the standard limb image feature.
  • a sample set of a series of actions may be input in advance to allow the gesture recognition model to learn, so that the gesture recognition model recognizes the user's behavior.
  • a set of motion raising hand sample sets is input into the gesture recognition model, so that the gesture recognition model can recognize the motion of raising the hand.
  • The standard face image features and the standard limb image features obtained in steps S32' and S33' are combined into the first portrait feature.
  • a happy facial expression feature and a left-hand raised behavior feature are combined to form a first portrait feature.
  • The first face image set and the first limb image set are obtained using the human key points of the N first images; the first face image set is input into the micro-expression recognition model to obtain the standard face image features; the first limb image set is input into the gesture recognition model to obtain the standard limb image features; finally, the standard face image features and the standard limb image features form the first portrait feature.
  • The standard image feature formed from the first portrait feature obtained according to this embodiment is used as the credential for user login, which can effectively prevent logins made without the user's authorization or logins impersonating the user, thereby improving the security of user information.
  • step S60 that is, obtaining the second portrait feature according to the key points of the human body of the second image, as shown in FIG. 6, it may specifically include:
  • S61 Acquire a second face image and a second limb image according to the key points of the human body of the second image.
  • The process of acquiring the second face image and the second limb image according to the human key points of the second image is similar to the process of acquiring the first face image set and the first limb image set according to the key points of the first image: the second image is input into the trained preset training model, and the second face image and the second limb image are obtained according to the dividing boundary.
  • S62 Input the second face image into the micro-expression recognition model to obtain the characteristics of the test face image.
  • The second face image is input into the micro-expression recognition model, the features of the human key points of the second face image are analyzed and recognized according to the micro-expression recognition model, and the expression features of the second face image are output as the test face image features, for example facial expression features such as head raised, eyebrows raised, and mouth corners down.
  • The test face image features are composed in the same way as the standard face image features. For example, if the standard face image features are formed from the expression features of the first face image together with the expression result, then the test face image features are likewise formed from the expression features of the second face image together with its expression result.
  • S63 Input the second limb image into the gesture recognition model to obtain the test limb image features.
  • The second limb image is input into the gesture recognition model, behavior recognition is performed on the human key points of the second limb image according to the gesture recognition model, and the behavior feature is output as the test limb image feature.
  • test face image features and the test limb image features obtained in step S62 and step S63 constitute a second portrait feature.
  • The second face image and the second limb image are obtained according to the human key points of the second image; the second face image is input into the micro-expression recognition model to obtain the test face image features; the second limb image is input into the gesture recognition model to obtain the test limb image features; finally, the test face image features and the test limb image features form the second portrait feature. In this way, the portrait feature of the second image can be extracted and compared with the standard image feature to verify the image features.
  • An image feature configuration and verification apparatus is provided, which corresponds one-to-one to the image feature configuration and verification method in the above embodiment.
  • The image feature configuration and verification apparatus includes a first image acquisition module 10, a first human body key point acquisition module 20, a first portrait feature acquisition module 30, a standard image feature configuration module 40, a second human body key point acquisition module 50, a second portrait feature acquisition module 60, and a portrait feature matching verification module 70. The detailed description of each functional module is as follows:
  • The first image acquisition module 10 is configured to acquire N first images, where N is a positive integer greater than or equal to 2.
  • The first image acquisition module 10 is further configured to:
  • The first human body key point acquisition module 20 is configured to acquire the human body key points of the N first images according to a preset training model.
  • The first portrait feature acquisition module 30 is configured to acquire the first portrait feature according to the human body key points of the N first images.
  • The standard image feature configuration module 40 is configured to configure the first portrait feature as the standard image feature.
  • The second human body key point acquisition module 50 is configured to acquire a second image and to acquire the human body key points of the second image according to the preset training model.
  • The second portrait feature acquisition module 60 is configured to acquire the second portrait feature according to the human body key points of the second image.
  • The portrait feature matching verification module 70 is configured to match the second portrait feature with the standard image feature and, if the matching is successful, to output a verification pass result.
  • The first portrait feature acquisition module 30 includes a coordinate acquisition unit 31, a feature interval value acquisition unit 32, and a first portrait feature setting unit 33.
  • The coordinate acquisition unit 31 is configured to acquire the coordinates of the human body key points of the N first images according to the positions of those key points.
  • The feature interval value acquisition unit 32 is configured to calculate the coordinates of the human body key points of the N first images using an exponentially weighted moving average algorithm to obtain the first feature interval value.
  • The first portrait feature setting unit 33 is configured to use the first feature interval value as the first portrait feature.
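The exponentially weighted moving average step described for the feature interval value acquisition unit 32 can be sketched as follows. This is a minimal illustration, not the patented implementation: the smoothing factor, tolerance, and key-point coordinates are assumed values.

```python
# Sketch of the feature-interval computation: the same human body key point is
# located in each of the N first images, and an exponentially weighted moving
# average (EWMA) of its coordinates yields a stable center, from which a
# tolerance interval is derived. beta and tolerance are illustrative choices.

def ewma(points, beta=0.9):
    """Exponentially weighted moving average of a sequence of (x, y) coordinates."""
    avg_x, avg_y = points[0]
    for x, y in points[1:]:
        avg_x = beta * avg_x + (1 - beta) * x
        avg_y = beta * avg_y + (1 - beta) * y
    return avg_x, avg_y

def feature_interval(points, tolerance=5.0):
    """Turn N observations of one key point into a [lo, hi] interval per axis."""
    cx, cy = ewma(points)
    return (cx - tolerance, cx + tolerance), (cy - tolerance, cy + tolerance)

# Coordinates of e.g. a left-eyebrow key point across N = 4 first images.
obs = [(120.0, 80.0), (121.0, 79.5), (119.5, 80.5), (120.5, 80.0)]
ix, iy = feature_interval(obs)
```

A later observation then matches the first portrait feature when its coordinates fall inside these intervals.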
  • The preset training model includes a micro-expression recognition model and a posture recognition model. Optionally, as shown in FIG. 9, the first portrait feature acquisition module 30 includes an image set acquisition unit 31', a standard face feature acquisition unit 32', a standard limb feature acquisition unit 33', and a first portrait feature acquisition unit 34'.
  • The image set acquisition unit 31' is configured to acquire the first face image set and the first limb image set based on the human body key points of the N first images.
  • The standard face feature acquisition unit 32' is configured to input the first face image set into the micro-expression recognition model to obtain the standard face image features.
  • The standard limb feature acquisition unit 33' is configured to input the first limb image set into the posture recognition model to obtain the standard limb image features.
  • The first portrait feature acquisition unit 34' is configured to combine the standard face image features and the standard limb image features into the first portrait feature.
  • The second portrait feature acquisition module 60 is further configured to:
  • The test face image features and the test limb image features are combined to form the second portrait feature.
  • Each module in the above image feature configuration and verification apparatus may be implemented in whole or in part by software, by hardware, or by a combination of the two.
  • The above modules may be embedded in, or independent of, the processor of the computer device in hardware form, or may be stored in the memory of the computer device in software form, so that the processor can call them and execute the operations corresponding to each module.
  • A computer device is provided. The computer device may be a server, and its internal structure diagram may be as shown in FIG.
  • The computer device includes a processor, a memory, a network interface, and a database connected through a system bus. The processor of the computer device is configured to provide computing and control capabilities.
  • The memory of the computer device includes a non-volatile storage medium and an internal memory.
  • The non-volatile storage medium stores an operating system, computer-readable instructions, and a database.
  • The internal memory provides an environment for the operation of the operating system and the computer-readable instructions in the non-volatile storage medium.
  • The database of the computer device is used to store the first images, the first video data, the preset training model, the standard image features, the exponentially weighted moving average algorithm, and the feature interval values.
  • The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer-readable instructions, when executed by the processor, implement an image feature configuration and verification method.
  • A computer device is provided, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor.
  • When the processor executes the computer-readable instructions, the following steps are implemented:
  • N is a positive integer greater than or equal to 2;
  • The second portrait feature is matched with the standard image feature, and if the matching is successful, a verification pass result is output.
  • One or more non-volatile readable storage media storing computer-readable instructions are provided.
  • When the computer-readable instructions are executed by one or more processors, the one or more processors perform the following steps:
  • N is a positive integer greater than or equal to 2;
  • The second portrait feature is matched with the standard image feature, and if the matching is successful, a verification pass result is output.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM: random access memory
  • DRAM: dynamic RAM
  • SDRAM: synchronous DRAM
  • DDR SDRAM: double data rate SDRAM
  • ESDRAM: enhanced SDRAM
  • SLDRAM: synchronous link (Synchlink) DRAM
  • RDRAM: direct RAM
  • DRDRAM: direct memory bus dynamic RAM
  • RDRAM: memory bus dynamic RAM

Abstract

Disclosed are an image feature configuration and verification method and apparatus, a computer device and a medium. The method comprises: acquiring N first images, where N is a positive integer greater than or equal to 2; acquiring the human body key points of the N first images according to a preset training model; acquiring a first portrait feature according to the human body key points of the N first images; configuring the first portrait feature as a standard image feature; acquiring a second image, and acquiring the human body key points of the second image according to the preset training model; acquiring a second portrait feature according to the human body key points of the second image; and matching the second portrait feature with the standard image feature, and outputting a verification pass result if the matching is successful. The technical solution provided in the present application achieves high accuracy in image feature configuration and makes it difficult for non-users to impersonate the user and log in, thereby ensuring the security of user information.

Description

Image feature configuration and verification method, apparatus, computer device and medium
This application is based on, and claims priority to, Chinese invention patent application No. 201811208048.6, filed on October 17, 2018 and entitled "Image feature configuration and verification method, apparatus, computer device and medium".
Technical Field
The present application belongs to the field of image recognition, and more specifically relates to an image feature configuration and verification method and apparatus, a computer device, and a storage medium.
Background
At present, mobile phones are used more and more widely, and the security of user information on mobile phones is receiving increasing attention. To protect user information, the login process of a mobile phone is usually protected by encryption; common methods include sliders, passwords, voice, portraits, and fingerprints. Because each of these methods relies on a single unlocking mechanism, each has known cracking techniques and is not very difficult to break, so the security of user information still cannot be well guaranteed.
Summary
Embodiments of the present application provide an image feature configuration and verification method, apparatus, device, and storage medium, to solve the problem that user login methods are easily cracked.
An image feature configuration and verification method includes:
acquiring N first images, where N is a positive integer greater than or equal to 2;
acquiring the human body key points of the N first images according to a preset training model;
acquiring a first portrait feature according to the human body key points of the N first images;
configuring the first portrait feature as a standard image feature;
acquiring a second image, and acquiring the human body key points of the second image according to the preset training model;
acquiring a second portrait feature according to the human body key points of the second image; and
matching the second portrait feature with the standard image feature, and outputting a verification pass result if the matching is successful.
An image feature configuration and verification apparatus includes:
a first image acquisition module, configured to acquire N first images, where N is a positive integer greater than or equal to 2;
a first human body key point acquisition module, configured to acquire the human body key points of the N first images according to a preset training model;
a first portrait feature acquisition module, configured to acquire a first portrait feature according to the human body key points of the N first images;
a standard image feature configuration module, configured to configure the first portrait feature as a standard image feature;
a second human body key point acquisition module, configured to acquire a second image and to acquire the human body key points of the second image according to the preset training model;
a second portrait feature acquisition module, configured to acquire a second portrait feature according to the human body key points of the second image; and
a portrait feature matching verification module, configured to match the second portrait feature with the standard image feature and, if the matching is successful, to output a verification pass result.
A computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor; when the processor executes the computer-readable instructions, the following steps are implemented:
acquiring N first images, where N is a positive integer greater than or equal to 2;
acquiring the human body key points of the N first images according to a preset training model;
acquiring a first portrait feature according to the human body key points of the N first images;
configuring the first portrait feature as a standard image feature;
acquiring a second image, and acquiring the human body key points of the second image according to the preset training model;
acquiring a second portrait feature according to the human body key points of the second image; and
matching the second portrait feature with the standard image feature, and outputting a verification pass result if the matching is successful.
One or more non-volatile readable storage media storing computer-readable instructions are provided; when the computer-readable instructions are executed by one or more processors, the one or more processors perform the following steps:
acquiring N first images, where N is a positive integer greater than or equal to 2;
acquiring the human body key points of the N first images according to a preset training model;
acquiring a first portrait feature according to the human body key points of the N first images;
configuring the first portrait feature as a standard image feature;
acquiring a second image, and acquiring the human body key points of the second image according to the preset training model;
acquiring a second portrait feature according to the human body key points of the second image; and
matching the second portrait feature with the standard image feature, and outputting a verification pass result if the matching is successful.
The details of one or more embodiments of the present application are set forth in the drawings and the description below; other features and advantages of the present application will become apparent from the description, the drawings, and the claims.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of an application environment of the image feature configuration and verification method in an embodiment of the present application;
FIG. 2 is a flowchart of the image feature configuration and verification method in an embodiment of the present application;
FIG. 3 is another flowchart of the image feature configuration and verification method in an embodiment of the present application;
FIG. 4 is another flowchart of the image feature configuration and verification method in an embodiment of the present application;
FIG. 5 is another flowchart of the image feature configuration and verification method in an embodiment of the present application;
FIG. 6 is another flowchart of the image feature configuration and verification method in an embodiment of the present application;
FIG. 7 is a schematic block diagram of the image feature configuration and verification apparatus in an embodiment of the present application;
FIG. 8 is a schematic block diagram of the first portrait feature acquisition module in the image feature configuration and verification apparatus in an embodiment of the present application;
FIG. 9 is another schematic block diagram of the first portrait feature acquisition module in the image feature configuration and verification apparatus in an embodiment of the present application;
FIG. 10 is a schematic diagram of a computer device in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments in the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present application.
The image feature configuration and verification method provided in the present application can be applied in the application environment shown in FIG. 1, in which a client communicates with a server through a network. The server acquires N first images through the client, acquires the human body key points of the N first images according to a preset training model, acquires a first portrait feature according to those key points, and configures the first portrait feature as a standard image feature. The server then acquires a second image, acquires the human body key points of the second image according to the preset training model, and acquires a second portrait feature according to those key points. Finally, the server matches the second portrait feature with the standard image feature and, if the matching is successful, outputs a verification pass result to the client. The client may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, or a portable wearable device. The server may be implemented as an independent server or as a server cluster composed of multiple servers.
In an embodiment, as shown in FIG. 2, an image feature configuration and verification method is provided. The method is described by taking its application to the server in FIG. 1 as an example, and includes the following steps:
S10: Acquire N first images, where N is a positive integer greater than or equal to 2.
The first images are user portraits collected when the user sets the image features. Optionally, the user portraits can be captured by the client's shooting tool, for example by the camera of a mobile phone. Optionally, to ensure that the user in the first images is the user himself or herself, before the first images are collected, the server sends a login verification request to the client so that the user logs in by entering a password or a fingerprint; if the login verification passes, the server then sends an instruction to the client to collect the first images.
To make the image features more representative, optionally, the number of first images is N, where N is a positive integer greater than or equal to 2. The first images may be multiple static images, or multiple images extracted from recorded video data. In an embodiment, as shown in FIG. 3, step S10 may specifically include:
S11: Acquire first video data.
The first video data is a video recorded of the user himself or herself, for example a video of the user blinking.
Specifically, the server sends an instruction to collect the first images to the client; the client opens the shooting tool according to the instruction, records a video of the user, and obtains the first video data.
S12: Split the first video data into frames according to a preset time to obtain N images to be processed.
The preset time can be set according to the actual situation. Optionally, the total number of frames and the total duration of the first video data can be obtained, and the preset time can be derived by dividing the total number of frames by the total duration. The server then splits the first video data into frames according to the resulting preset time, obtaining the N images to be processed.
S13: Create a normalized image, obtain the height and width information of the normalized image, obtain normalized versions of the N images to be processed based on the height and width information, and replace the original pixel values of the N images to be processed with the normalized images to obtain the N first images.
Specifically, the server first creates a normalized image, for example an image of 260×260 pixels; it then obtains the height and width information of the normalized image; next, for each image to be processed, it computes the image normalized to that height and width and replaces the pixel values of the original image to be processed with the normalized image, thereby obtaining the N first images.
In the embodiment corresponding to FIG. 3, the first video data is acquired; the first video data is split into frames according to the preset time to obtain N images to be processed; a normalized image is created, its height and width information is obtained, normalized versions of the N images to be processed are obtained based on that information, and the original pixel values of the N images to be processed are replaced with the normalized images to obtain the N first images. This allows the user to enter customized facial movements and behavioral actions as standard image features as needed, making the image feature configuration more accurate; it also avoids repeated capture sessions, improving the efficiency of acquiring the first images.
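Steps S11–S13 above (framing the first video data and normalizing each frame) can be sketched as follows. This is a minimal illustration in plain NumPy: the decoded video is modeled as a list of frames, the frame interval stands in for the preset time, and the nearest-neighbour resize to 260×260 is an assumed simplification of the normalization described in the text.

```python
import numpy as np

def split_frames(video, preset_interval):
    """S12: take every `preset_interval`-th frame from the decoded video
    (a list of H x W x 3 arrays), yielding the N images to be processed."""
    return video[::preset_interval]

def normalize_image(img, height=260, width=260):
    """S13: map an image onto the 260x260 normalized canvas using a simple
    nearest-neighbour resize, replacing the original pixel values."""
    h, w = img.shape[:2]
    rows = np.arange(height) * h // height   # source row for each target row
    cols = np.arange(width) * w // width     # source column for each target column
    return img[rows][:, cols]

# A synthetic 12-frame "first video" of 100x80 frames.
video = [np.full((100, 80, 3), i, dtype=np.uint8) for i in range(12)]
first_images = [normalize_image(f) for f in split_frames(video, preset_interval=4)]
```

A production version would decode real video (e.g. with a capture library) instead of a synthetic frame list, but the framing and normalization logic would be the same.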
S20: Acquire the human body key points of the N first images according to a preset training model.
The preset training model may be a face detection model, a feature point detection model, a posture detection model, an emotion detection model, and so on. The human body key points are the points in a first image that reflect human body characteristics, such as the eyebrows, eyes, mouth, shoulders, elbows, and wrists. Optionally, the preset training model can be trained on sample images annotated with key points so that it learns to locate human body key points. When a first image is input into the preset training model, the model identifies the human body key points in it, thereby acquiring the human body key points of the first image.
S30:根据N个第一图像的人体关键点获取第一人像特征。S30: Acquire the first portrait feature according to the key points of the human body of the N first images.
其中,第一人像特征是指由N个第一图像的人体关键点的特征组成的人像特征,用于判断是否为用户本人的凭证。例如,第一人像特征可以是面部特征、表情特征或行为动作特征等。可选地,可以将表情特征与行为动作特征组合作为第一人像特征,即根据用户的面部表情和肢体动作结合作为登录的凭证,提高用户信息的安全性。行为动作特征是指用户设置登录凭证时录入的自定义行为动作,例如对着摄像头作一个抬手动作或者眨一下左眼,又或者是眨一下左眼和抬一下右手这样的组合动作。即行为动作特征可以为一个单独的行为动作,也可以是复数个行为动作的组合。Among them, the first portrait feature refers to a portrait feature composed of the features of the key points of the human body of the N first images, and is used to determine whether it is the user's own credential. For example, the first portrait feature may be a facial feature, an expression feature, a behavior feature, or the like. Optionally, the facial expression feature and the behavioral action feature can be combined as the first portrait feature, that is, the user's facial expression and body movement can be used as the login credential to improve the security of user information. Behavioral action features refer to custom behaviors entered when the user sets up login credentials, such as making a hand-raising action or blinking the left eye to the camera, or a combination of blinking the left eye and raising the right hand. That is, the behavior feature may be a single behavior action, or a combination of a plurality of behavior actions.
具体地,第一人像特征可以由预设训练模型对人体关键点的特征进行提取、计算或识别后得到。可选地,可以采用预设训练模型对人体关键点形成的表情特征进行提取和识别,例如通过眉毛倾斜的角度、嘴巴下移和眼脸上扬的角度等特征,识别出相应的面部表情,得到对应的表情特征。可选地,可以通过对人体关键点的位置进行跟踪,将人体关键点的位置的变化作为行为动作特征。例如通过获取用户自定义的抬手动作的视频数据,对根据视频数据获得的多幅第一图像建立坐标系,获取手腕这一人体关键点在这多幅第一图像中从A位置移动到B位置的坐标变化信息,根据坐标变化信息可得到手腕这一人体关键点的位置的变化信息,从而获取用户自定义的抬手动作的行为动作特征。Specifically, the first portrait feature can be obtained by extracting, calculating, or recognizing features of key points of the human body by a preset training model. Optionally, a preset training model can be used to extract and recognize the facial expressions formed by key points of the human body, for example, through the features such as the angle of the eyebrow tilt, the downward movement of the mouth and the angle of the eye and the face, the corresponding facial expressions can be identified to obtain Corresponding expression characteristics. Optionally, the position of the key point of the human body can be tracked, and the change of the position of the key point of the human body can be used as the behavior feature. For example, by acquiring video data of a user-defined hand-lifting action, a coordinate system is established for multiple first images obtained from the video data, and the key point of the human body such as the wrist is moved from position A to position B in the multiple first images According to the coordinate change information of the position, according to the coordinate change information, the change information of the position of the wrist, which is a key point of the human body, can be obtained, so as to obtain the user-defined behavior characteristics of the hand-raising action.
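The key-point tracking idea above (the wrist moving from position A to position B across the first images) can be sketched as follows; the key-point name, threshold, and coordinate convention are illustrative assumptions, not values fixed by the disclosure.

```python
# Sketch of the behavioral action feature: track one human body key point
# (e.g. the right wrist) across the sequence of images and classify the
# movement as a hand-raise when its vertical coordinate decreases by more
# than a threshold (image coordinates: y grows downward).

def action_feature(keypoint_track, threshold=30.0):
    """keypoint_track: list of (x, y) positions of one key point over time.
    Returns a coarse action label derived from the vertical displacement."""
    (_, y_start), (_, y_end) = keypoint_track[0], keypoint_track[-1]
    if y_start - y_end > threshold:
        return "raise"
    if y_end - y_start > threshold:
        return "lower"
    return "still"

# Right wrist moving from position A (low in the frame) to position B (high).
track = [(200.0, 300.0), (205.0, 250.0), (210.0, 180.0)]
```

Here `action_feature(track)` yields the label stored as the behavioral action feature; a combination action (e.g. blink plus hand-raise) would simply be a tuple of such labels.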
S40: Configure the first portrait feature as the standard image feature.
Specifically, the first portrait feature is bound to the user ID in the server's database and saved as the standard image feature, completing the image feature configuration, so that the standard image feature serves as the user's login credential. The user ID is the identifier used by the server to distinguish different users, and may be the user's mobile phone number, account number, or ID card number. Optionally, the standard image feature can be combined with another form of password, for example a digit password, as the login credential, further strengthening the security of user information.
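Step S40 (binding the first portrait feature to the user ID and saving it as the standard image feature) might look like the following sketch, where an in-memory dictionary stands in for the server-side database and the optional password field illustrates the combination with another credential mentioned above. All names here are hypothetical.

```python
# Minimal sketch of step S40: bind the first portrait feature to a user ID and
# store it as the standard image feature. A dict stands in for the database.

standard_features = {}  # user_id -> {"portrait": ..., "password": ...}

def configure_standard_feature(user_id, portrait_feature, password=None):
    """Bind the portrait feature (and an optional extra credential) to the user ID."""
    standard_features[user_id] = {"portrait": portrait_feature, "password": password}

# Example: a mobile-phone-number user ID bound to an expression + action feature.
configure_standard_feature(
    "13800000000",
    {"expression": "happy", "action": "raise_left_hand"},
)
```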
S50: Acquire a second image, and acquire the human body key points of the second image according to the preset training model.
The second image is a portrait image acquired when the user performs login verification. Optionally, the number of second images is at least one.
Specifically, when the user logs in, the second image is acquired through the client's shooting tool. After the second image is acquired, it is input into the preset training model, and the human body key points of the second image are acquired according to the model. The process of acquiring the human body key points of the second image is the same as that of acquiring the human body key points of the first images, and is not repeated here.
S60:根据第二图像的人体关键点获取第二人像特征。S60: Acquire the second portrait feature according to the key points of the human body in the second image.
其中,根据第二图像的人体关键点获取第二人像特征的过程与获取第一人像特征的过程相同,这里不再赘述。可以理解,第二人像特征与第一人像特征的类型相同,例如都为面部特征、表情特征或行为动作特征等。Wherein, the process of acquiring the second portrait feature according to the key points of the human body of the second image is the same as the process of acquiring the first portrait feature, which will not be repeated here. It can be understood that the second portrait feature is of the same type as the first portrait feature, for example, all are facial features, facial expression features, or behavioral features.
S70:将第二人像特征与标准图像特征进行匹配,若匹配成功,则输出验证通过的结果。S70: Match the second portrait feature with the standard image feature, and if the match is successful, output a verified result.
具体地,服务端将第二人像特征与标准图像特征进行匹配,判断获取的第二人像特征与标准图像特征是否相符。可选地,当标准图像特征对应的为面部特征时,将第二人像特征中的每一个面部特征与标准图像特征中的每一个面部特征进行比较,判断面部特征是否相同,例如眉毛是否为上扬或嘴角是否为下移等,若面部特征相同,则判定匹配成功,否则判定匹配失败。当标准图像特征对应的为表情特征时,将第二人像特征对应的表情与标准图像特征对应的表情进行比较,判断表情的结果是否相同,例如表情的结果是否为高兴、悲伤或惊讶等,若表情的结果相同,则判定匹配成功,否则判定匹配失败。当标准图像特征对应的为行为特征时,将第二人像特征中行为动作的结果与标准图像特征中的行为动作的结果进行比较,判断行为动作的结果是否一致,例如标准图像特征中的行为动作的结果为左手抬手,则判断第二人像特征的行为动作的结果是否也为左手抬手,若行为动作的结果一致,则判定匹配成功,否则判定匹配失败。Specifically, the server matches the second portrait feature with the standard image feature, and determines whether the acquired second portrait feature matches the standard image feature. Optionally, when the standard image feature corresponds to a facial feature, each facial feature in the second portrait feature is compared with each facial feature in the standard image feature to determine whether the facial features are the same, such as whether the eyebrows are raised Or whether the corner of the mouth is moving down, etc. If the facial features are the same, it is determined that the matching is successful, otherwise it is determined that the matching fails. When the standard image feature corresponds to the expression feature, the expression corresponding to the second portrait feature is compared with the expression corresponding to the standard image feature to determine whether the result of the expression is the same, for example, whether the result of the expression is happy, sad, or surprised. If the result of the expression is the same, it is determined that the match is successful, otherwise it is determined that the match fails. 
When the standard image feature corresponds to the behavior feature, the result of the behavior action in the second portrait feature is compared with the result of the behavior action in the standard image feature to determine whether the result of the behavior action is consistent, for example, the behavior action in the standard image feature The result is that the left hand raises the hand, it is judged whether the result of the behavior action of the second portrait feature is also the left hand raise the hand, if the result of the behavior action is the same, it is determined that the match is successful, otherwise it is determined that the match fails.
具体地,若服务端判断第二人像特征与标准图像特征相匹配,则输出验证通过的结果,允许用户登陆。若服务端判断第二人像特征与标准图像特征不匹配,则输出验证不通过的结果,拒绝用户登陆。可以理解,当其它用户想假冒用户进行登录时,由于不知道标准图像特征对应的是表情特征,还是行为动作特征,还是表情特征和行为动作特征的组合,并且不知道具体的表情特征和行为动作特征,因此难以破解。Specifically, if the server judges that the second portrait feature matches the standard image feature, it outputs the result of the verification and allows the user to log in. If the server judges that the second portrait feature does not match the standard image feature, it outputs a result that the verification fails and rejects the user login. It can be understood that when other users want to log in as fake users, they do not know whether the standard image features correspond to facial expression features, behavioral action features, or a combination of facial expression features and behavioral action features, and do not know the specific facial expression features and behavioral actions Features, so it is difficult to crack.
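The type-dependent comparison in step S70 can be sketched as follows. This is a minimal illustration, assuming features are reduced to simple records with a `type` and a `value` field; both field names and the exact-equality comparison are illustrative assumptions, not the patent's implementation:

```python
def match_features(candidate: dict, standard: dict) -> bool:
    """Compare a candidate portrait feature against the stored standard feature.

    Both dicts carry a 'type' key ('facial', 'expression', or 'action') and a
    'value' key; the comparison rule depends on the type, mirroring S70.
    """
    if candidate["type"] != standard["type"]:
        return False  # features of different types never match
    if standard["type"] == "facial":
        # every individual facial attribute (eyebrows, mouth corners, ...) must agree
        return all(candidate["value"].get(k) == v for k, v in standard["value"].items())
    if standard["type"] in ("expression", "action"):
        # only the recognized expression/action result is compared
        return candidate["value"] == standard["value"]
    return False

# Verification outcome as in S70: success allows login, failure rejects it.
standard = {"type": "action", "value": "raise_left_hand"}
print(match_features({"type": "action", "value": "raise_left_hand"}, standard))   # True
print(match_features({"type": "action", "value": "raise_right_hand"}, standard))  # False
```

In a real deployment the comparison would tolerate measurement noise (e.g. interval containment as in the feature interval values below), rather than exact equality.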
In the embodiment corresponding to FIG. 2, N first images are acquired, and the human body key points of the N first images are acquired according to the preset training model; the first portrait feature is acquired from the human body key points of the N first images and configured as the standard image feature; then the second image is acquired, and its human body key points are acquired according to the preset training model; the second portrait feature is acquired from the human body key points of the second image; finally, the second portrait feature is matched against the standard image feature, and if the match succeeds, a result indicating that verification has passed is output. On the one hand, once the standard image feature serves as the login credential, the user no longer needs to enter a password to log in, which simplifies operation. On the other hand, acquiring portrait features from human body key points and configuring them as standard image features makes the configured features more representative and improves the accuracy of image feature configuration. Further, using the standard image feature as the login credential allows the user to register a self-defined facial action or behavioral action as the verification credential; anyone other than the user can hardly learn the credential, and therefore cannot crack it or log in by impersonating the user, which improves the security of user information.
In an embodiment, the first portrait feature can be obtained by establishing a coordinate system on the first image and acquiring the coordinates of the human body key points. As shown in FIG. 4, step S30, that is, acquiring the first portrait feature from the human body key points of the N first images, may specifically include:

S31: Acquire the coordinates of the human body key points of the N first images according to the positions of those key points.

Specifically, a coordinate system may be established within the viewfinder frame of the capture tool that collects the first images. Optionally, the coordinate system takes the position of the user's eyebrow center within the frame as the origin, after which the coordinates of the human body key points of the first image are acquired. When the user enters the frame, the preset training model plots points on the user's portrait; as each point is plotted, its coordinates are recorded. For example, when points are plotted on the eyebrows, the coordinates of those points can be read from the coordinate system. Optionally, when the user enters the frame, the number of human body key points in the first image is counted first, and the coordinates are acquired only after all necessary key points have entered the frame. The necessary key points can be learned from training data; for example, if training shows that all key points of the face and hands must be inside the frame before the corresponding expression features and hand action features can be extracted, the system can be configured to acquire the coordinates of the first image's key points only after all face and hand key points have entered the frame.
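Taking the eyebrow center as the coordinate origin amounts to subtracting its position from every plotted point. A minimal sketch of that re-referencing, assuming key points arrive as named (x, y) pixel tuples; the function and key point names are illustrative:

```python
def to_origin_coords(keypoints, origin):
    """Re-express key point pixel positions relative to a chosen origin.

    keypoints: dict mapping key point name -> (x, y) in frame pixels
    origin: (x, y) of the reference point (e.g. the eyebrow center)
    """
    ox, oy = origin
    return {name: (x - ox, y - oy) for name, (x, y) in keypoints.items()}

raw = {"eyebrow_center": (320, 180), "left_mouth_corner": (300, 260)}
rel = to_origin_coords(raw, raw["eyebrow_center"])
print(rel)  # {'eyebrow_center': (0, 0), 'left_mouth_corner': (-20, 80)}
```

Anchoring coordinates to a body-relative origin makes the stored feature insensitive to where the user stands in the frame.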
S32: Calculate the coordinates of the human body key points of the N first images using an exponentially weighted moving average algorithm to obtain the first feature interval value.

It can be understood that since there are N first images, when the preset training model plots the human body key points, the user may enter the frame at a different position each time, or may move while the video is being recorded, so the coordinates of the key points obtained from the N first images may differ. To make the coordinates more representative, the acquired coordinates therefore require further computation to yield coordinate values that can serve as the first portrait feature. In addition, when key points are plotted, each key point yields a set of coordinates. For example, an eyebrow comprises multiple plotted points, so the "eyebrow" key point corresponds to a set of coordinates; for each key point, the range of values of that set of coordinates constitutes a feature interval value.

Specifically, the coordinates of the human body key points of the N first images are computed with the exponentially weighted moving average algorithm (Exponential Weighted Moving Average, EWMA), and the computed results are assembled into the feature interval values of the key points. The formula for computing the X coordinate with EWMA can be:
X = β₁·x₁ + β₂·x₂ + … + βₙ·xₙ = Σᵢ₌₁ⁿ βᵢ·xᵢ
where X is the weighted average coordinate value; n is the number of first images (i.e., N); xᵢ is the actual value of the i-th coordinate; and βᵢ is the i-th weight (the weights sum to 1). That is, the EWMA of the X coordinate value is computed over the same plotted point across the N first images, and the EWMA of the Y coordinate is obtained in the same way. The EWMA values of the key points are then assembled to form the first feature interval values of the human body key points. The weights may be set equal, for example 1/3 each when there are three first images. Different weights may also be assigned according to the course of the action: for a hand-raising action, for instance, larger weights can be given to the starting and ending positions and smaller weights to the intermediate positions, so that the focus is on whether the start and end coordinates are in place rather than on the specific trajectory of the raised hand.
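The per-coordinate weighted average described by the formula in S32 can be sketched as follows, assuming each of the N first images contributes one (x, y) observation of a given plotted point and the weights βᵢ sum to 1; the function name and the sample values are illustrative:

```python
def weighted_avg_point(points, weights):
    """Weighted average of one plotted point observed across N first images.

    points: list of (x, y) observations, one per image
    weights: list of N weights summing to 1 (equal weights, or heavier at the
             start/end of an action, as the text suggests)
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    x = sum(b * px for b, (px, _) in zip(weights, points))
    y = sum(b * py for b, (_, py) in zip(weights, points))
    return (x, y)

# Three images, equal weights of 1/3 each (the example given in the text).
obs = [(10.0, 20.0), (12.0, 21.0), (11.0, 22.0)]
x, y = weighted_avg_point(obs, [1/3, 1/3, 1/3])
print(round(x, 6), round(y, 6))  # 11.0 21.0
```

Repeating this for every plotted point of a key point yields the set of averaged coordinates whose range forms that key point's feature interval value.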
S33: Use the first feature interval value as the first portrait feature.

Specifically, the first feature interval values of all human body key points are used as the first portrait feature, bound to the user ID, and stored in the server's database. The user ID may be a mobile phone number, an ID card number, an account number, or another identifier used to distinguish users.

In the embodiment corresponding to FIG. 4, the coordinates of the human body key points of the N first images are acquired from the positions of those key points; the coordinates are then computed with the exponentially weighted moving average algorithm to obtain the first feature interval value; finally, the first feature interval value is used as the first portrait feature. Acquiring the user's portrait feature through the exponentially weighted moving average algorithm smoothly derives the feature data of the user's portrait from the first images, improving the accuracy of image feature configuration. Further, forming the standard image feature from the first portrait feature obtained in this embodiment and using it as the login credential effectively prevents logins made without the user's authorization or by impersonating the user, thereby improving the security of user information.
In an embodiment, the first portrait feature may be obtained through feature extraction or recognition by the preset training model, where the preset training model includes a micro-expression recognition model and a gesture recognition model. Specifically, as shown in FIG. 5, step S30, that is, acquiring the first portrait feature from the human body key points of the N first images, may further include:

S31': Acquire a first face image set and a first limb image set according to the human body key points of the N first images.

Specifically, the N first images are divided into a first face image set and a first limb image set according to the human body key points. Optionally, sample images annotated with region divisions can be fed into the preset training model for training, so that it can derive the first face image and the first limb image from the human body key points. For example, the first image can be divided into a first face image and a first limb image using the region of the neck key point as the dividing boundary. The first face images then form the first face image set, and the first limb images form the first limb image set.

S32': Input the first face image set into the micro-expression recognition model to obtain standard face image features.

Specifically, the first face image set is input into the micro-expression recognition model, which analyzes and recognizes the features of the human body key points of the first face images and outputs the expression features of the first face image set as the standard face image features. Optionally, it can also determine which expression each first face image in the set belongs to. The expression features may include head features, eye features, and lip features, such as the head tilted back, raised eyebrows, or downturned mouth corners. It can be understood that since the first face image set contains multiple face images and the user's expression may be changing, the standard face image features can be set to be acquired only after the extracted expression features have stabilized, where stability is defined as obtaining the same expression feature from a preset number of consecutive face images.

Optionally, the expression features of the first face images may serve as the standard face image features on their own, or be combined with the expression result to form the standard face image features, where the expression result refers to an expression such as happiness, anger, or sadness. Optionally, when a first face image is acquired, the server can connect to an international micro-expression database and identify the expression of the face image from it. The international micro-expression database includes 54 micro-expressions, and the specific expression can be derived from subtle changes in the human body key points.
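The stability rule described above — accept an expression feature only after a preset number of consecutive frames report the same feature — can be sketched like this; the label stream and the run length of 3 are illustrative assumptions:

```python
def first_stable_label(labels, required_run=3):
    """Return the first expression label that repeats in `required_run`
    consecutive frames, or None if the stream never stabilizes."""
    run, current = 0, None
    for label in labels:
        if label == current:
            run += 1
        else:
            current, run = label, 1
        if run >= required_run:
            return current
    return None

# The expression fluctuates before settling on 'happy' for 3 frames in a row.
print(first_stable_label(["neutral", "happy", "neutral", "happy", "happy", "happy"]))  # happy
```

Gating registration on a stable run avoids enrolling a transient expression caught mid-change as the standard feature.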
S33': Input the first limb image set into the gesture recognition model to obtain standard limb image features.

Specifically, the first limb image set is input into the gesture recognition model, which performs behavioral action recognition on the features of the human body key points of the first limb images and outputs the behavioral action features as the standard limb image features; for example, if the recognized action is raising the left hand, "raising the left hand" serves as the standard limb image feature. Optionally, sample sets of a series of actions can be input in advance for the gesture recognition model to learn, so that it can recognize the user's behavioral actions. For example, a sample set of hand-raising actions can be input into the gesture recognition model so that it learns to recognize the hand-raising action.

S34': Combine the standard face image features and the standard limb image features into the first portrait feature.

Specifically, the standard face image features obtained in step S32' and the standard limb image features obtained in step S33' are combined into the first portrait feature; for example, a happy expression feature and a left-hand-raising action feature together form the first portrait feature.

In the embodiment corresponding to FIG. 5, the first face image set and the first limb image set are acquired from the human body key points of the N first images; the first face image set is input into the micro-expression recognition model to obtain the standard face image features, and the first limb image set is input into the gesture recognition model to obtain the standard limb image features; finally, the standard face image features and the standard limb image features are combined into the first portrait feature. Acquiring the face image features and the limb image features with the micro-expression recognition model and the gesture recognition model respectively, and then combining the two into the first portrait feature, improves the accuracy of image feature configuration. Further, forming the standard image feature from the first portrait feature obtained in this embodiment and using it as the user's login credential effectively prevents logins made without the user's authorization or by impersonating the user, thereby improving the security of user information.
In an embodiment, step S60, that is, acquiring the second portrait feature from the human body key points of the second image, as shown in FIG. 6, may specifically include:

S61: Acquire a second face image and a second limb image according to the human body key points of the second image.

The process of acquiring the second face image and the second limb image from the human body key points of the second image is similar to that of acquiring the first face image set and the first limb image set from the key points of the first images: the second image is input into the trained preset training model, and the second face image and the second limb image are obtained according to the dividing boundary.

S62: Input the second face image into the micro-expression recognition model to obtain test face image features.

Specifically, the second face image is input into the micro-expression recognition model, which analyzes and recognizes the features of the human body key points of the second face image and outputs its expression features as the test face image features, for example the head tilted back, raised eyebrows, or downturned mouth corners. The test face image features are configured consistently with the standard face image features: for example, if the standard face image features are composed of the expression features of the first face images together with the expression result, then the test face image features are likewise composed of the expression features of the second face image together with the expression result.

S63: Input the second limb image into the gesture recognition model to obtain test limb image features.

Specifically, the second limb image is input into the gesture recognition model, which performs behavioral action recognition on the features of the human body key points of the second limb image and outputs the behavioral action features as the test limb image features.

S64: Combine the test face image features and the test limb image features into the second portrait feature.

Specifically, the test face image features obtained in step S62 and the test limb image features obtained in step S63 are combined into the second portrait feature.

In the embodiment corresponding to FIG. 6, the second face image and the second limb image are acquired from the human body key points of the second image; the second face image is input into the micro-expression recognition model to obtain the test face image features; the second limb image is input into the gesture recognition model to obtain the test limb image features; finally, the test face image features and the test limb image features are combined into the second portrait feature. The portrait feature of the second image can thus be extracted for comparison against the standard image feature, achieving verification of the image features.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
In an embodiment, an image feature configuration and verification apparatus is provided, which corresponds one-to-one to the image feature configuration and verification method in the above embodiments. As shown in FIG. 7, the apparatus includes a first image acquisition module 10, a first human body key point acquisition module 20, a first portrait feature acquisition module 30, a standard image feature configuration module 40, a second human body key point acquisition module 50, a second portrait feature acquisition module 60, and a portrait feature matching verification module 70. The functional modules are described in detail as follows:

The first image acquisition module 10 is configured to acquire N first images, where N is a positive integer greater than or equal to 2.

Further, the first image acquisition module 10 is also configured to:

acquire first video data;

split the first video data into frames at preset time intervals to obtain N images to be processed;

create a normalized image, acquire the height and width information of the normalized image, obtain normalized versions of the N images to be processed based on the height and width information, and replace the original pixel values of the N images to be processed with the normalized images to obtain the N first images.
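The normalization step above resizes every frame to one fixed height and width. A dependency-free sketch of the idea, using nearest-neighbor resampling on a plain 2D pixel grid; a real system would use an image library, and the function and sizes here are illustrative:

```python
def normalize_image(pixels, out_h, out_w):
    """Nearest-neighbor resize of a 2D pixel grid to a fixed height/width,
    standing in for the normalization step (production code would use a
    library such as OpenCV or Pillow instead)."""
    in_h, in_w = len(pixels), len(pixels[0])
    return [
        [pixels[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
        for r in range(out_h)
    ]

# A 2x2 'image' upscaled to 4x4: each source pixel becomes a 2x2 block.
small = [[1, 2],
         [3, 4]]
for row in normalize_image(small, 4, 4):
    print(row)
# [1, 1, 2, 2]
# [1, 1, 2, 2]
# [3, 3, 4, 4]
# [3, 3, 4, 4]
```

Normalizing every frame to one size ensures the downstream key point model always receives input of the height and width it expects.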
The first human body key point acquisition module 20 is configured to acquire the human body key points of the N first images according to the preset training model.

The first portrait feature acquisition module 30 is configured to acquire the first portrait feature according to the human body key points of the N first images.

The standard image feature configuration module 40 is configured to configure the first portrait feature as the standard image feature.

The second human body key point acquisition module 50 is configured to acquire the second image and acquire its human body key points according to the preset training model.

The second portrait feature acquisition module 60 is configured to acquire the second portrait feature according to the human body key points of the second image.

The portrait feature matching verification module 70 is configured to match the second portrait feature against the standard image feature and, if the match succeeds, output a result indicating that verification has passed.
Further, as shown in FIG. 8, the first portrait feature acquisition module 30 includes a coordinate acquisition unit 31, a feature interval value acquisition unit 32, and a first portrait feature setting unit 33.

The coordinate acquisition unit 31 is configured to acquire the coordinates of the human body key points of the N first images according to the positions of those key points.

The feature interval value acquisition unit 32 is configured to calculate the coordinates of the human body key points of the N first images using the exponentially weighted moving average algorithm to obtain the first feature interval value.

The first portrait feature setting unit 33 is configured to use the first feature interval value as the first portrait feature.
Further, the preset training model includes a micro-expression recognition model and a gesture recognition model. Optionally, as shown in FIG. 9, the first portrait feature acquisition module 30 includes an image set acquisition unit 31', a standard face feature acquisition unit 32', a standard limb feature acquisition unit 33', and a first portrait feature acquisition unit 34'.

The image set acquisition unit 31' is configured to acquire the first face image set and the first limb image set according to the human body key points of the N first images.

The standard face feature acquisition unit 32' is configured to input the first face image set into the micro-expression recognition model to obtain the standard face image features.

The standard limb feature acquisition unit 33' is configured to input the first limb image set into the gesture recognition model to obtain the standard limb image features.

The first portrait feature acquisition unit 34' is configured to combine the standard face image features and the standard limb image features into the first portrait feature.
Further, the second portrait feature acquisition module 60 is also configured to:

acquire the second face image and the second limb image according to the human body key points of the second image;

input the second face image into the micro-expression recognition model to obtain the test face image features;

input the second limb image into the gesture recognition model to obtain the test limb image features;

combine the test face image features and the test limb image features into the second portrait feature.
For the specific limitations of the image feature configuration and verification apparatus, reference may be made to the limitations of the image feature configuration and verification method above, which are not repeated here. Each module of the above apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware form in, or independent of, the processor of a computer device, or stored in software form in the memory of a computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in FIG. 10. The computer device includes a processor, a memory, a network interface, and a database connected through a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer-readable instructions, and a database. The internal memory provides an environment for the operation of the operating system and the computer-readable instructions in the non-volatile storage medium. The database of the computer device stores the first images, the first video data, the preset training model, the standard image features, the exponentially weighted moving average algorithm, the feature interval values, and the like. The network interface of the computer device communicates with external terminals through a network connection. The computer-readable instructions, when executed by the processor, implement an image feature configuration and verification method.
In one embodiment, a computer device is provided, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor. When executing the computer-readable instructions, the processor implements the following steps:

acquiring N first images, where N is a positive integer greater than or equal to 2;

acquiring the human body key points of the N first images according to a preset training model;

acquiring a first portrait feature according to the human body key points of the N first images;

configuring the first portrait feature as a standard image feature;

acquiring a second image, and acquiring the human body key points of the second image according to the preset training model;

acquiring a second portrait feature according to the human body key points of the second image;

matching the second portrait feature against the standard image feature, and if the match succeeds, outputting a result indicating that verification has passed.
In one embodiment, one or more non-volatile readable storage media storing computer-readable instructions are provided. When the computer-readable instructions are executed by one or more processors, the one or more processors are caused to perform the following steps:
acquiring N first images, where N is a positive integer greater than or equal to 2;
acquiring human body key points of the N first images according to a preset training model;
acquiring a first portrait feature according to the human body key points of the N first images;
configuring the first portrait feature as a standard image feature;
acquiring a second image, and acquiring human body key points of the second image according to the preset training model;
acquiring a second portrait feature according to the human body key points of the second image; and
matching the second portrait feature against the standard image feature, and if the matching succeeds, outputting a result indicating that verification has passed.
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the foregoing embodiments may be completed by computer-readable instructions instructing relevant hardware. The computer-readable instructions may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the foregoing method embodiments. Any reference to the memory, storage, database or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or an external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
Those skilled in the art can clearly understand that, for convenience and conciseness of description, only the division into the functional units and modules described above is used as an example for illustration. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of the technical features may be equivalently replaced; and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all fall within the protection scope of the present application.

Claims (20)

  1. An image feature configuration and verification method, comprising:
    acquiring N first images, where N is a positive integer greater than or equal to 2;
    acquiring human body key points of the N first images according to a preset training model;
    acquiring a first portrait feature according to the human body key points of the N first images;
    configuring the first portrait feature as a standard image feature;
    acquiring a second image, and acquiring human body key points of the second image according to the preset training model;
    acquiring a second portrait feature according to the human body key points of the second image; and
    matching the second portrait feature against the standard image feature, and if the matching succeeds, outputting a result indicating that verification has passed.
  2. The image feature configuration and verification method according to claim 1, wherein the acquiring a first portrait feature according to the human body key points of the N first images comprises:
    acquiring coordinates of the human body key points of the N first images according to positions of the human body key points of the N first images;
    calculating the coordinates of the human body key points of the N first images using an exponentially weighted moving average algorithm to obtain a first feature interval value; and
    using the first feature interval value as the first portrait feature.
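The exponentially weighted moving average step of this claim can be sketched as follows. The bias-corrected EWMA formula and the min/max reduction of the smoothed series into a per-axis interval are assumptions for illustration; the claim itself does not spell out the exact computation.

```python
def ewma(values, beta=0.9):
    """Exponentially weighted moving average with bias correction."""
    avg, smoothed = 0.0, []
    for t, v in enumerate(values, start=1):
        avg = beta * avg + (1 - beta) * v
        smoothed.append(avg / (1 - beta ** t))  # correct early-step bias
    return smoothed


def feature_interval(coords):
    """coords: (x, y) positions of one human body key point across the
    N first images. Returns a per-axis (low, high) interval from the
    smoothed coordinate series -- one reading of a 'feature interval
    value'."""
    xs = ewma([x for x, _ in coords])
    ys = ewma([y for _, y in coords])
    return (min(xs), max(xs)), (min(ys), max(ys))
```

A keypoint that is stable across the enrollment images yields a tight interval; a keypoint that drifts yields a wider one, which later gives the matching step some tolerance.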
  3. The image feature configuration and verification method according to claim 1, wherein the preset training model comprises a micro-expression recognition model and a gesture recognition model; and
    the acquiring a first portrait feature according to the human body key points of the N first images comprises:
    acquiring a first face image set and a first limb image set according to the human body key points of the N first images;
    inputting the first face image set into the micro-expression recognition model to obtain a standard face image feature;
    inputting the first limb image set into the gesture recognition model to obtain a standard limb image feature; and
    combining the standard face image feature and the standard limb image feature into the first portrait feature.
  4. The image feature configuration and verification method according to claim 3, wherein the acquiring a second portrait feature according to the human body key points of the second image comprises:
    acquiring a second face image and a second limb image according to the human body key points of the second image;
    inputting the second face image into the micro-expression recognition model to obtain a test face image feature;
    inputting the second limb image into the gesture recognition model to obtain a test limb image feature; and
    combining the test face image feature and the test limb image feature into the second portrait feature.
  5. The image feature configuration and verification method according to claim 1, wherein the acquiring N first images comprises:
    acquiring first video data;
    dividing the first video data into frames at a preset time interval to obtain N images to be processed; and
    creating a normalized image, acquiring height and width information of the normalized image, acquiring normalized images of the N images to be processed based on the height and width information, and replacing original pixel values of the N images to be processed with the normalized images to obtain the N first images.
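The framing and normalization of this claim can be sketched as below. The fixed-interval frame sampling and the nearest-neighbour resampling are stand-ins for whatever video decoder and interpolation a real implementation would use (OpenCV or similar); pure Python keeps the sketch self-contained.

```python
def sample_frames(frames, step):
    """Keep every step-th frame, mimicking dividing the first video
    data into frames at a preset time interval."""
    return frames[::step]


def normalize(pixels, target_h, target_w):
    """Resample a 2-D grid of pixel values to the height and width of
    the normalized image using nearest-neighbour interpolation, so all
    N images to be processed end up with identical dimensions."""
    src_h, src_w = len(pixels), len(pixels[0])
    return [
        [pixels[r * src_h // target_h][c * src_w // target_w]
         for c in range(target_w)]
        for r in range(target_h)
    ]
```

Replacing each image's original pixel values with its resampled grid gives the N first images a uniform size, which is what the downstream keypoint model expects.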
  6. An image feature configuration and verification apparatus, comprising:
    a first image acquisition module, configured to acquire N first images, where N is a positive integer greater than or equal to 2;
    a first human body key point acquisition module, configured to acquire human body key points of the N first images according to a preset training model;
    a first portrait feature acquisition module, configured to acquire a first portrait feature according to the human body key points of the N first images;
    a standard image feature configuration module, configured to configure the first portrait feature as a standard image feature;
    a second human body key point acquisition module, configured to acquire a second image and acquire human body key points of the second image according to the preset training model;
    a second portrait feature acquisition module, configured to acquire a second portrait feature according to the human body key points of the second image; and
    a portrait feature matching and verification module, configured to match the second portrait feature against the standard image feature, and if the matching succeeds, output a result indicating that verification has passed.
  7. The image feature configuration and verification apparatus according to claim 6, wherein the first portrait feature acquisition module comprises a coordinate acquisition unit, a feature interval value acquisition unit and a first portrait feature setting unit;
    the coordinate acquisition unit is configured to acquire coordinates of the human body key points of the N first images according to positions of the human body key points of the N first images;
    the feature interval value acquisition unit is configured to calculate the coordinates of the human body key points of the N first images using an exponentially weighted moving average algorithm to obtain a first feature interval value; and
    the first portrait feature setting unit is configured to use the first feature interval value as the first portrait feature.
  8. The image feature configuration and verification apparatus according to claim 6, wherein the preset training model comprises a micro-expression recognition model and a gesture recognition model, and the first portrait feature acquisition module comprises an image set acquisition unit, a standard face feature acquisition unit, a standard limb feature acquisition unit and a first portrait feature acquisition unit;
    the image set acquisition unit is configured to acquire a first face image set and a first limb image set according to the human body key points of the N first images;
    the standard face feature acquisition unit is configured to input the first face image set into the micro-expression recognition model to obtain a standard face image feature;
    the standard limb feature acquisition unit is configured to input the first limb image set into the gesture recognition model to obtain a standard limb image feature; and
    the first portrait feature acquisition unit is configured to combine the standard face image feature and the standard limb image feature into the first portrait feature.
  9. The image feature configuration and verification apparatus according to claim 8, wherein the second portrait feature acquisition module is further configured to:
    acquire a second face image and a second limb image according to the human body key points of the second image;
    input the second face image into the micro-expression recognition model to obtain a test face image feature;
    input the second limb image into the gesture recognition model to obtain a test limb image feature; and
    combine the test face image feature and the test limb image feature into the second portrait feature.
  10. The image feature configuration and verification apparatus according to claim 6, wherein the first image acquisition module is further configured to:
    acquire first video data;
    divide the first video data into frames at a preset time interval to obtain N images to be processed; and
    create a normalized image, acquire height and width information of the normalized image, acquire normalized images of the N images to be processed based on the height and width information, and replace original pixel values of the N images to be processed with the normalized images to obtain the N first images.
  11. A computer device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein when the processor executes the computer-readable instructions, the following steps are implemented:
    acquiring N first images, where N is a positive integer greater than or equal to 2;
    acquiring human body key points of the N first images according to a preset training model;
    acquiring a first portrait feature according to the human body key points of the N first images;
    configuring the first portrait feature as a standard image feature;
    acquiring a second image, and acquiring human body key points of the second image according to the preset training model;
    acquiring a second portrait feature according to the human body key points of the second image; and
    matching the second portrait feature against the standard image feature, and if the matching succeeds, outputting a result indicating that verification has passed.
  12. The computer device according to claim 11, wherein the acquiring a first portrait feature according to the human body key points of the N first images comprises:
    acquiring coordinates of the human body key points of the N first images according to positions of the human body key points of the N first images;
    calculating the coordinates of the human body key points of the N first images using an exponentially weighted moving average algorithm to obtain a first feature interval value; and
    using the first feature interval value as the first portrait feature.
  13. The computer device according to claim 11, wherein the preset training model comprises a micro-expression recognition model and a gesture recognition model; and
    the acquiring a first portrait feature according to the human body key points of the N first images comprises:
    acquiring a first face image set and a first limb image set according to the human body key points of the N first images;
    inputting the first face image set into the micro-expression recognition model to obtain a standard face image feature;
    inputting the first limb image set into the gesture recognition model to obtain a standard limb image feature; and
    combining the standard face image feature and the standard limb image feature into the first portrait feature.
  14. The computer device according to claim 13, wherein the acquiring a second portrait feature according to the human body key points of the second image comprises:
    acquiring a second face image and a second limb image according to the human body key points of the second image;
    inputting the second face image into the micro-expression recognition model to obtain a test face image feature;
    inputting the second limb image into the gesture recognition model to obtain a test limb image feature; and
    combining the test face image feature and the test limb image feature into the second portrait feature.
  15. The computer device according to claim 11, wherein the acquiring N first images comprises:
    acquiring first video data;
    dividing the first video data into frames at a preset time interval to obtain N images to be processed; and
    creating a normalized image, acquiring height and width information of the normalized image, acquiring normalized images of the N images to be processed based on the height and width information, and replacing original pixel values of the N images to be processed with the normalized images to obtain the N first images.
  16. One or more non-volatile readable storage media storing computer-readable instructions, wherein when the computer-readable instructions are executed by one or more processors, the one or more processors are caused to perform the following steps:
    acquiring N first images, where N is a positive integer greater than or equal to 2;
    acquiring human body key points of the N first images according to a preset training model;
    acquiring a first portrait feature according to the human body key points of the N first images;
    configuring the first portrait feature as a standard image feature;
    acquiring a second image, and acquiring human body key points of the second image according to the preset training model;
    acquiring a second portrait feature according to the human body key points of the second image; and
    matching the second portrait feature against the standard image feature, and if the matching succeeds, outputting a result indicating that verification has passed.
  17. The non-volatile readable storage medium according to claim 16, wherein the acquiring a first portrait feature according to the human body key points of the N first images comprises:
    acquiring coordinates of the human body key points of the N first images according to positions of the human body key points of the N first images;
    calculating the coordinates of the human body key points of the N first images using an exponentially weighted moving average algorithm to obtain a first feature interval value; and
    using the first feature interval value as the first portrait feature.
  18. The non-volatile readable storage medium according to claim 16, wherein the preset training model comprises a micro-expression recognition model and a gesture recognition model; and
    the acquiring a first portrait feature according to the human body key points of the N first images comprises:
    acquiring a first face image set and a first limb image set according to the human body key points of the N first images;
    inputting the first face image set into the micro-expression recognition model to obtain a standard face image feature;
    inputting the first limb image set into the gesture recognition model to obtain a standard limb image feature; and
    combining the standard face image feature and the standard limb image feature into the first portrait feature.
  19. The non-volatile readable storage medium according to claim 18, wherein the acquiring a second portrait feature according to the human body key points of the second image comprises:
    acquiring a second face image and a second limb image according to the human body key points of the second image;
    inputting the second face image into the micro-expression recognition model to obtain a test face image feature;
    inputting the second limb image into the gesture recognition model to obtain a test limb image feature; and
    combining the test face image feature and the test limb image feature into the second portrait feature.
  20. The non-volatile readable storage medium according to claim 16, wherein the acquiring N first images comprises:
    acquiring first video data;
    dividing the first video data into frames at a preset time interval to obtain N images to be processed; and
    creating a normalized image, acquiring height and width information of the normalized image, acquiring normalized images of the N images to be processed based on the height and width information, and replacing original pixel values of the N images to be processed with the normalized images to obtain the N first images.
PCT/CN2018/122731 2018-10-17 2018-12-21 Image feature configuration and verification method and apparatus, computer device and medium WO2020077822A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811208048.6 2018-10-17
CN201811208048.6A CN109472269A (en) 2018-10-17 2018-10-17 Characteristics of image configuration and method of calibration, device, computer equipment and medium

Publications (1)

Publication Number Publication Date
WO2020077822A1 true WO2020077822A1 (en) 2020-04-23

Family

ID=65665930

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/122731 WO2020077822A1 (en) 2018-10-17 2018-12-21 Image feature configuration and verification method and apparatus, computer device and medium

Country Status (2)

Country Link
CN (1) CN109472269A (en)
WO (1) WO2020077822A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111986260A (en) * 2020-09-04 2020-11-24 北京小狗智能机器人技术有限公司 Image processing method and device and terminal equipment
CN112418146B (en) * 2020-12-02 2024-04-30 深圳市优必选科技股份有限公司 Expression recognition method, apparatus, service robot, and readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005242677A (en) * 2004-02-26 2005-09-08 Ntt Comware Corp Composite authentication system and method, and program for the same
CN102663413A (en) * 2012-03-09 2012-09-12 中盾信安科技(江苏)有限公司 Multi-gesture and cross-age oriented face image authentication method
CN106650555A (en) * 2015-11-02 2017-05-10 苏宁云商集团股份有限公司 Real person verifying method and system based on machine learning

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667479A (en) * 2020-06-10 2020-09-15 创新奇智(成都)科技有限公司 Pattern verification method and device for target image, electronic device and storage medium
CN111968203A (en) * 2020-06-30 2020-11-20 北京百度网讯科技有限公司 Animation driving method, animation driving device, electronic device, and storage medium
CN111968203B (en) * 2020-06-30 2023-11-14 北京百度网讯科技有限公司 Animation driving method, device, electronic equipment and storage medium
CN112101124A (en) * 2020-08-20 2020-12-18 深圳数联天下智能科技有限公司 Sitting posture detection method and device
CN112101123A (en) * 2020-08-20 2020-12-18 深圳数联天下智能科技有限公司 Attention detection method and device
CN112101124B (en) * 2020-08-20 2023-12-08 深圳数联天下智能科技有限公司 Sitting posture detection method and device
CN112257645A (en) * 2020-11-02 2021-01-22 浙江大华技术股份有限公司 Face key point positioning method and device, storage medium and electronic device
CN112257645B (en) * 2020-11-02 2023-09-01 浙江大华技术股份有限公司 Method and device for positioning key points of face, storage medium and electronic device
CN112287866A (en) * 2020-11-10 2021-01-29 上海依图网络科技有限公司 Human body action recognition method and device based on human body key points
CN113177442A (en) * 2021-04-12 2021-07-27 广东省科学院智能制造研究所 Human behavior detection method and device based on edge calculation
CN113177442B (en) * 2021-04-12 2024-01-30 广东省科学院智能制造研究所 Human behavior detection method and device based on edge calculation

Also Published As

Publication number Publication date
CN109472269A (en) 2019-03-15

Similar Documents

Publication Publication Date Title
WO2020077822A1 (en) Image feature configuration and verification method and apparatus, computer device and medium
US10997445B2 (en) Facial recognition-based authentication
Conti et al. Mind how you answer me! Transparently authenticating the user of a smartphone when answering or placing a call
US10275672B2 (en) Method and apparatus for authenticating liveness face, and computer program product thereof
US9355236B1 (en) System and method for biometric user authentication using 3D in-air hand gestures
US11403874B2 (en) Virtual avatar generation method and apparatus for generating virtual avatar including user selected face property, and storage medium
Zhao et al. Mobile user authentication using statistical touch dynamics images
KR101710478B1 (en) Mobile electric document system of multiple biometric
KR101242390B1 (en) Method, apparatus and computer-readable recording medium for identifying user
US20120164978A1 (en) User authentication method for access to a mobile user terminal and corresponding mobile user terminal
US10606994B2 (en) Authenticating access to a computing resource using quorum-based facial recognition
US10599824B2 (en) Authenticating access to a computing resource using pattern-based facial recognition
US10594690B2 (en) Authenticating access to a computing resource using facial recognition based on involuntary facial movement
JP2018538608A (en) Face verification method and electronic device
US10922533B2 (en) Method for face-to-unlock, authentication device, and non-volatile storage medium
US20200302039A1 (en) Authentication verification using soft biometric traits
Lu et al. Multifactor user authentication with in-air-handwriting and hand geometry
US20230100874A1 (en) Facial expression-based unlocking method and apparatus, computer device, and storage medium
US9594949B1 (en) Human identity verification via automated analysis of facial action coding system features
WO2020244160A1 (en) Terminal device control method and apparatus, computer device, and readable storage medium
CN110633677A (en) Face recognition method and device
Malatji et al. Acceptance of biometric authentication security technology on mobile devices
Zhong et al. VeinDeep: Smartphone unlock using vein patterns
KR20210017230A (en) Device and method for face liveness detection of facial image
TWI620076B (en) Analysis system of humanity action

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18937335

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05/08/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18937335

Country of ref document: EP

Kind code of ref document: A1