CN112132979B - Virtual resource selection method, device and equipment

Info

Publication number
CN112132979B
CN112132979B (application CN202011051577.7A)
Authority
CN
China
Prior art keywords
target user
face
key points
feature vector
classification category
Prior art date
Legal status
Active
Application number
CN202011051577.7A
Other languages
Chinese (zh)
Other versions
CN112132979A (en)
Inventor
宋远鹏
潘付文
严红樟
张岩
Current Assignee
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202011051577.7A priority Critical patent/CN112132979B/en
Publication of CN112132979A publication Critical patent/CN112132979A/en
Application granted granted Critical
Publication of CN112132979B publication Critical patent/CN112132979B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality
    • G06T 2200/00: Indexing scheme for image data processing or generation, in general
    • G06T 2200/04: involving 3D image data
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; face representation
    • G06V 40/171: Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G06V 40/172: Classification, e.g. identification

Abstract

The embodiments of the specification disclose a method, an apparatus, and a device for selecting virtual resources. The method comprises: obtaining key points of a preset part in the face of a target user, the key points comprising a plurality of points characterizing the hairstyle contour of the target user; classifying the acquired key points to obtain classified key points; determining, for the key points of each classification category, a feature vector corresponding to those key points, and constructing a feature vector group corresponding to the face of the target user based on the determined feature vectors; and determining, based on the feature vector group corresponding to the face of the target user and pre-stored feature vector groups corresponding to a plurality of different virtual resources, the similarity between the feature vector group corresponding to the face of the target user and each of the pre-stored feature vector groups, and selecting, based on the determined similarities, a virtual resource matching the target user from the plurality of different virtual resources.

Description

Virtual resource selection method, device and equipment
Technical Field
The present specification relates to the field of computer technologies, and in particular, to a method, an apparatus, and a device for selecting a virtual resource.
Background
The term "virtual image" is a product of the digital era of the rapid network development in recent years, and people can create an image by modifying a virtual and kneaded image. The virtual image can reflect the direction of people to another state, and the creator can give any role and scene to the virtual image.
The user can design his or her own avatar according to personal preference, including the avatar's hairstyle, clothes, expression, scene, and so on. Generally, the avatar is generated by a model constructed with machine-learning algorithms: a large number of training samples (each containing a user's facial image and the corresponding avatar) are obtained, a preset machine-learning model is trained with these samples, the trained model outputs the related information of the user's avatar, and the output avatar is then rendered dynamically in real time by a corresponding rendering engine so that the user can view it. However, this approach requires complicated processes such as model construction and model training, which consumes a large amount of resources, and it has low processing efficiency in online, highly concurrent service scenarios. It is therefore necessary to provide an avatar processing scheme with lower resource consumption that is better suited to online, highly concurrent service scenarios.
Disclosure of Invention
The embodiments of the specification aim to provide an avatar processing scheme with lower resource consumption and better suitability for online, high-concurrency service scenarios.
In order to implement the above technical solution, the embodiments of the present specification are implemented as follows:
the method for selecting virtual resources provided by the embodiment of the present specification includes: the method comprises the steps of obtaining key points of a preset part in the face of a target user, wherein the key points comprise a plurality of points used for representing the hairstyle contour of the target user. And classifying the acquired key points of the preset part in the face of the target user to obtain classified key points. And determining a feature vector corresponding to the key point of each classification category according to the key point of each classification category, and constructing a feature vector group corresponding to the face of the target user based on the determined feature vector corresponding to the key point of each classification category. And determining the similarity between the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the different virtual resources respectively based on the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the different virtual resources stored in advance, and selecting the virtual resource matched with the target user from the different virtual resources based on the determined similarity.
An embodiment of the present specification provides a virtual resource selection apparatus, where the apparatus includes: the key point acquisition module is used for acquiring key points of a preset part in the face of a target user, wherein the key points comprise a plurality of points used for representing the hair style contour of the target user. And the classification module is used for classifying the acquired key points of the preset part in the face of the target user to obtain the classified key points. And the vector determining module is used for determining the feature vector corresponding to the key point of each classification category aiming at the key point of each classification category, and constructing a feature vector group corresponding to the face of the target user based on the determined feature vector corresponding to the key point of each classification category. And the virtual resource selection module is used for determining the similarity between the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the different virtual resources based on the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the different virtual resources stored in advance, and selecting the virtual resource matched with the target user from the different virtual resources based on the determined similarity.
An embodiment of the present specification provides a virtual resource selection device, where the virtual resource selection device includes: a processor; and a memory arranged to store computer executable instructions that, when executed, cause the processor to: the method comprises the steps of obtaining key points of a preset part in the face of a target user, wherein the key points comprise a plurality of points used for representing the hairstyle contour of the target user. And classifying the acquired key points of the preset part in the face of the target user to obtain classified key points. And determining a feature vector corresponding to the key point of each classification category according to the key point of each classification category, and constructing a feature vector group corresponding to the face of the target user based on the determined feature vector corresponding to the key point of each classification category. And determining the similarity between the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the different virtual resources respectively based on the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the different virtual resources stored in advance, and selecting the virtual resource matched with the target user from the different virtual resources based on the determined similarity.
Embodiments of the present specification also provide a storage medium, where the storage medium is used to store computer-executable instructions, and the executable instructions, when executed, implement the following processes: the method comprises the steps of obtaining key points of a preset part in the face of a target user, wherein the key points comprise a plurality of points used for representing the hairstyle contour of the target user. And classifying the acquired key points of the preset part in the face of the target user to obtain classified key points. And determining a feature vector corresponding to the key point of each classification category according to the key point of each classification category, and constructing a feature vector group corresponding to the face of the target user based on the determined feature vector corresponding to the key point of each classification category. And determining the similarity between the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the different virtual resources respectively based on the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the different virtual resources stored in advance, and selecting the virtual resource matched with the target user from the different virtual resources based on the determined similarity.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only some embodiments described in the present specification, and for those skilled in the art, other drawings can be obtained according to the drawings without any creative effort.
Fig. 1 is a flowchart of an embodiment of a method for selecting virtual resources according to the present disclosure;
FIG. 2 is a schematic illustration of a face and facial key points in accordance with the present description;
FIG. 3 is a flowchart of another embodiment of a method for selecting virtual resources;
FIG. 4 is a schematic diagram of the key-point coordinates of a tilted face image according to the present description;
FIG. 5 is a schematic diagram of face key points and their coordinates before and after rotation according to the present description;
FIG. 6 is a schematic diagram of a polygon formed by key points in the present specification;
FIG. 7 is a flowchart of another embodiment of a method for selecting virtual resources;
FIG. 8 is a block diagram illustrating an embodiment of a virtual resource selection apparatus;
fig. 9 is an embodiment of a virtual resource selecting apparatus according to the present disclosure.
Detailed Description
The embodiment of the specification provides a method, a device and equipment for selecting virtual resources.
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only a part of the embodiments of the present specification, and not all of the embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification without any inventive step should fall within the scope of protection of the present specification.
Example one
As shown in fig. 1, the execution subject of the method may be a terminal device or a server. The terminal device may be a mobile device such as a mobile phone or a tablet computer, or a device such as a personal computer; the server may be an independent server or a server cluster formed by multiple servers. The terminal device or server may be one that handles a certain service. In the embodiments of the present specification, the server is taken as the execution subject by way of example; for the case where the execution subject is a terminal device, reference may be made to the related description below, which is not repeated here. The method may specifically comprise the following steps:
in step S102, key points of a preset portion in the face of the target user are obtained, where the key points include a plurality of points for characterizing the hair style contour of the target user.
The target user may be any user; in this embodiment of the present specification, the target user may be a user who needs to determine or acquire a virtual resource. The preset part of the face may be one part or several different parts of the face, for example the nose or the mouth. The key points may be points distributed on or around a certain part of the face at key positions of that part, and taken together the key points of a part can represent its important features and characteristics. The hairstyle contour may be the shape of the user's hair.
In practice, the term "virtual image" is a product of the digital era of the rapid network development in recent years, and people create a virtual image by modifying a virtual and kneaded image. The virtual image can reflect the direction of people to another state, and the creator can give any role and scene to the virtual image.
The user can design his/her avatar according to his/her preference, including the avatar's hairstyle, apparel, expression, scene, etc., and generally, the above-mentioned virtual image is generated based on the model constructed by the algorithm of machine learning, etc., concretely, a large number of training samples (including the face image of the user and the corresponding virtual image) are obtained, the training samples are used to train the preset machine learning model to obtain the corresponding model, then, the related information of the avatar of the user can be output based on the model, and then the output avatar is dynamically rendered in real time through a corresponding rendering engine, so that the user can view the own virtual image, however, the above-mentioned method needs to perform complicated model construction, model training and other processes, this can result in a large amount of resources being consumed and the above approach can be inefficient for online and highly concurrent service scenarios.
In addition to the above, the following may be implemented: the method comprises the steps of adopting a graphics related algorithm to perform dimension reduction processing on high-dimensional data of a human face, performing binarization processing on effective information of the face through a series of processing methods, filtering a background, establishing a face contour, and finally performing virtual image generation on the face contour through a corresponding algorithm and synthesizing the face contour with a related decorative picture. However, the above method can effectively identify multidimensional data such as the expression and color of a character, but the related data has still more dimensions, the processing and implementation process is still more complex and requires more resource consumption, and according to the business characteristics and related processing experience, the expression of the character only needs to have certain similarity (i.e., "look-alike"), and other dimensions (such as a wound, the texture of the face, the skin color, and accessories such as a hat) can be further optimized.
Therefore, an avatar processing scheme with lower resource consumption and better suitability for online, high-concurrency service scenarios is needed. The embodiments of the present specification provide an optional processing manner, which may specifically include the following:
Generally, among the body parts of a user, the face has a high degree of recognizability, so when an avatar is set for a user, a virtual resource with a certain similarity to the user's face may be selected or generated; for this reason, it may be preset that the avatar is determined from the face. The face itself comprises a number of different parts, such as the nose, mouth, eyes, eyebrows, and hair, and one or more of these parts may be preset as the main matching parts. For example, hair, eyes, eyebrows, and mouth may be preset as the main matching parts, and virtual resources with a certain similarity to these parts are then selected or generated for the user. In practical applications, different matching parts may be set for different users according to the actual situation: for user A, hair, eyes, and mouth may be set as the main matching parts; for user B, hair, eyes, mouth, nose, eyebrows, and face contour; and for user C, hair and face contour, and so on.
Based on the above, after the main matching parts of the face are preset, an algorithm for detecting the key points of those parts may be preset, for example an algorithm based on the ASM (Active Shape Model) or AAM (Active Appearance Model), an algorithm based on cascaded shape regression (CPR, Cascaded Pose Regression), or an algorithm based on deep learning (such as a convolutional neural network). When a user (i.e., the target user) needs to set an avatar, an image of the target user may be captured by an image capturing device (e.g., a mobile phone or a camera); the image contains at least the target user's face. The target user or the avatar-obtaining party then provides the image to the terminal device or server. As shown in fig. 2, the terminal device or server may run the key-point detection algorithm on the image to obtain the key points of the preset parts in the face of the target user, including the coordinates of each key point. In addition, to depict the target user more vividly, the embodiments of this specification may also set an algorithm for detecting key points of the target user's hair and hairstyle, for example a deep-learning algorithm (such as a convolutional neural network) or an OpenCV-based hairstyle key-point detection algorithm; the key-point detection algorithm may be selected according to the actual situation, which is not limited in the embodiments of this specification. On this basis, the obtained key points of the preset parts in the face of the target user may include a plurality of key points characterizing the hairstyle contour of the target user (including their coordinates).
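As an illustration of this key-point acquisition step, the following minimal sketch uses dlib's off-the-shelf 68-landmark predictor as the facial detector. The choice of dlib and the model file name are assumptions for illustration: the text only requires some ASM/AAM-, cascaded-regression-, or deep-learning-based detector, and the hairstyle-contour points would come from a separate model that is not shown here.

```python
# A sketch of step S102 under the assumptions above; dlib's predictor
# returns 68 facial landmarks (it does not cover the hairstyle contour).
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # illustrative path

def facial_keypoints(image):
    """Return (x, y) coordinates of facial key points for the first detected face."""
    faces = detector(image)
    if not faces:
        return []
    shape = predictor(image, faces[0])
    return [(p.x, p.y) for p in shape.parts()]
```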
In step S104, the obtained key points of the preset portion in the face of the target user are classified, so as to obtain classified key points.
In implementation, each part of the face may contain a large number of key points. If key points of different parts were pooled together for processing, the lack of association between different key points would make the processing complicated, time-consuming, and inefficient. For this reason, the obtained key points of the preset parts in the face of the target user may be classified. A classification algorithm, such as a decision-tree classifier, a support vector machine, or a naive Bayes classifier, may be preset according to the actual situation, and the obtained key points are classified by this algorithm to obtain classified key points. For example, if the key points include K1, K2, K3 … K10, then after classification the key points may be grouped as (K1, K5, K8), (K3, K6, K7), and (K2, K4, K9, K10).
It should be noted that the above classification is implemented by a preset classification algorithm; in practical applications, the key points may also be classified in other ways. A further optional processing method is as follows: after the key points of the preset parts in the face of the target user are detected, the obtained key points may be classified according to the different parts of the face. For example, the key points of the eye part may be put into one category and the key points of the hair part into another. Specifically, if the key points of the hair part are A1 to A500, the key points of the eye part are the B-series points B1, B2, …, and the key points of the mouth part are the C-series points C1, C2, …, then after classification by part the classified key points are (A1, A2, … A500), (B1, B2, …), and (C1, C2, …).
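A minimal sketch of this per-part classification, assuming each detected key point arrives labeled with the part it belongs to (the part labels are illustrative, not from the patent):

```python
from collections import defaultdict

def classify_by_part(labeled_keypoints):
    """Group (part_label, (x, y)) pairs into one class per facial part.

    labeled_keypoints: iterable of (label, point) pairs, e.g.
    ("hair", (12.0, 3.5)); labels such as "hair", "left_eye" and
    "mouth" are assumptions for illustration.
    """
    classes = defaultdict(list)
    for label, point in labeled_keypoints:
        classes[label].append(point)
    return dict(classes)
```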
In step S106, for the key points of each classification category, the feature vectors corresponding to the key points of each classification category are determined, and a feature vector group corresponding to the face of the target user is constructed based on the determined feature vectors corresponding to the key points of each classification category.
In implementation, a vectorization processing mechanism of the key points may be preset, the vectorization processing mechanism may include multiple types, and the key points of different classification categories may be provided with different vectorization processing mechanisms. After the classified key points are obtained in the above manner, for the key points of each classification category, a vectorization processing mechanism corresponding to the corresponding classification category may be obtained, and the obtained vectorization processing mechanism may perform vectorization processing on the key points of the corresponding classification category, so as to obtain a feature vector corresponding to the key point of a certain classification category. For example, the preset part in the face of the target user may include a hair part, an eye part, a mouth part, and the like, then the key points of the classification category corresponding to the hair part, the key points of the classification category corresponding to the eye part, and the key points of the classification category corresponding to the mouth part may be obtained, for the key points of the classification category corresponding to the hair part, the vectorization processing mechanism corresponding to the hair part may be obtained, the vectorization processing may be performed on the key points of the classification category corresponding to the hair part based on the obtained vectorization processing mechanism, so as to obtain the feature vectors corresponding to the key points of the hair part, and correspondingly, through the processing manners, the feature vectors corresponding to the key points of the eye part, the feature vectors corresponding to the key points of the mouth part, and the like may also be obtained, respectively.
After the feature vectors corresponding to the key points of each classification category are obtained in the above manner, fusion processing or combination processing may be performed on the feature vectors corresponding to the key points of the plurality of classification categories, so that a feature vector group after the fusion processing or the combination processing may be finally obtained, and the feature vector group after the fusion processing or the combination processing may be used as a feature vector group corresponding to the face of the target user.
In step S108, based on the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the plurality of different virtual resources stored in advance, the similarity between the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the plurality of different virtual resources is determined, and based on the determined similarity, a virtual resource matching the target user is selected from the plurality of different virtual resources.
The virtual resources may include multiple types, such as virtual head portraits, virtual images, virtual characters, expressions, and the like, which may be set according to actual conditions, and are not limited in this description.
In implementation, the terminal device or the server may store virtual resources of a type required by a user in advance, for example, a target user needs to obtain an avatar of the target user, and then the terminal device or the server may store a virtual resource library for characterizing the avatar of the user in advance, where the virtual resource library may include a plurality of different virtual resources (such as the avatar described above), and each virtual resource may correspond to a corresponding feature vector group, where the feature vector group may include one or more different feature vectors. Furthermore, in order to determine the virtual resource that is closest to or has the highest correlation with the face of the target user in the virtual resource library, a similarity calculation method, such as a cosine distance similarity calculation method, may be set in advance, and then, a similarity between the feature vector group corresponding to the face of the target user and the feature vector group corresponding to any one virtual resource in the virtual resource library may be calculated by the above-described similarity calculation method, and thereafter, a similarity between the feature vector group corresponding to the face of the target user and the feature vector group corresponding to another virtual resource in the virtual resource library may be calculated again by the above-described similarity calculation method until a similarity between the feature vector group corresponding to the face of the target user and the feature vector group corresponding to each of the plurality of different virtual resources is obtained.
After the similarity between the feature vector group corresponding to the face of the target user and the feature vector group corresponding to each of the multiple different virtual resources is obtained in the above manner, the obtained similarity values can be compared, the maximum value among them can be found, and the virtual resource corresponding to the maximum similarity can be taken as the virtual resource matched with the target user. For example, if the virtual resource that the target user needs to acquire is an avatar, the similarity between the feature vector group corresponding to the face of the target user and the feature vector group corresponding to each of the multiple different avatars may be calculated by the similarity algorithm in the above manner; the obtained similarity values are then compared, and the avatar corresponding to the maximum similarity is taken as the avatar matched with the target user. At this point, the terminal device or server can output the determined avatar to the target user, who can then view it. It should be noted that the target user may also adjust the obtained avatar, for example its mouth, nose, or hairstyle, so as to obtain an avatar meeting the user's needs.
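As a high-level sketch of this selection step, the following assumes cosine similarity (one of the measures named later in this specification) and a pre-stored library mapping each virtual resource to its feature vector group; the data layout is an assumption for illustration:

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def best_match(user_group, resource_library):
    """Pick the virtual resource whose feature vector group is most similar.

    user_group:       list of feature vectors for the target user's face
    resource_library: {resource_id: list of feature vectors}, assumed to be
                      aligned vector-for-vector with user_group
    """
    def group_similarity(g1, g2):
        return sum(cosine(u, v) for u, v in zip(g1, g2)) / len(g1)

    return max(resource_library,
               key=lambda rid: group_similarity(user_group, resource_library[rid]))
```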
The embodiments of the present specification provide a method for selecting virtual resources. Key points of a preset part in the face of a target user are obtained, the key points including a plurality of points characterizing the hairstyle contour of the target user, so that the hairstyle contour can be taken into account when determining virtual resources, which improves the accuracy of the obtained virtual resource. The obtained key points are then classified; for the key points of each classification category, the corresponding feature vector is determined, and a feature vector group corresponding to the face of the target user is constructed from these feature vectors; finally, based on this feature vector group and the pre-stored feature vector groups corresponding to a plurality of different virtual resources, a virtual resource matching the target user is selected from among them. By abstracting the high-dimensional information of the target user's face into key points, the dimensionality of the processed information is effectively reduced and high-dimensional processing is avoided, improving the performance of face identification. At the same time, because matching is performed against a set of pre-stored virtual resources, no complex processing flow is needed, which reduces resource consumption and makes the method better suited to virtual resource processing in online, high-concurrency service scenarios.
Example two
As shown in fig. 3, the execution subject of the method may be a terminal device or a server. The terminal device may be a mobile device such as a mobile phone or a tablet computer, or a device such as a personal computer; the server may be an independent server or a server cluster formed by multiple servers. The terminal device or server may be one that handles a certain service. In the embodiments of the present specification, the server is taken as the execution subject by way of example; for the case where the execution subject is a terminal device, reference may be made to the related description below, which is not repeated here. The method may specifically comprise the following steps:
in step S302, key points of a preset portion in the face of the target user are obtained, where the key points include a plurality of points for characterizing the hair style contour of the target user.
It should be noted that there may be multiple key points used for characterizing the hairstyle contour of the target user; the specific number may be set according to the algorithm adopted, for example 500, 400, or 100 key points. The hairstyle contour may include a front-end hairstyle contour (the hairstyle contour on the side where the face is located), a rear-end hairstyle contour, and the like. The obtained key points may include the coordinates of the key points, which may be two-dimensional or three-dimensional.
In step S304, based on the coordinates of the key points of the preset portion in the face of the target user and the length and width of the canvas where the face of the target user is located, the key points of the preset portion in the face of the target user are normalized to obtain the normalized key points of the preset portion in the face of the target user.
The canvas may be a canvas where the image of the target user's face is located, or may be a canvas required for normalization processing, and the size of the canvas (including the length and width of the canvas) may be set according to actual conditions, specifically, the width and length of the canvas may be 10 centimeters, or the width and length of the canvas may be represented based on the number of image color pixel points.
In implementation, after the coordinates of the key points of the preset part in the face of the target user are obtained in the above manner, the size of the canvas required for normalization processing can be determined, giving the length and width of the canvas (that is, of the canvas where the face of the target user is located). The key points may then be normalized as follows: if the coordinates of the center point of the face of the target user are (X, Y), the width of the canvas is W, the length of the canvas is H, and the key points of the preset part in the face are (X1, Y1), (X2, Y2), (X3, Y3) … (Xn, Yn), where n denotes the number of key points, then the normalized key points of the preset part in the face of the target user may be: ((X-X1) × W, (Y-Y1) × H), ((X-X2) × W, (Y-Y2) × H), ((X-X3) × W, (Y-Y3) × H) … ((X-Xn) × W, (Y-Yn) × H). Through this normalization, the key points of the preset part are moved so that the center point of the face of the target user serves as the coordinate origin, which facilitates subsequent coordinate processing and the unification of coordinate systems.
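A direct transcription of this normalization into code (a sketch; it assumes the key-point coordinates and the canvas dimensions use consistent units):

```python
def normalize_keypoints(keypoints, face_center, canvas_w, canvas_h):
    """Map each key point (Xi, Yi) to ((X - Xi) * W, (Y - Yi) * H),
    re-centring the lattice on the face center (X, Y) and scaling by
    the canvas width W and length H, as in the formula above."""
    cx, cy = face_center
    return [((cx - x) * canvas_w, (cy - y) * canvas_h) for (x, y) in keypoints]
```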
In step S306, the key points of the preset portion in the normalized face of the target user are classified, so as to obtain classified key points.
In implementation, the coordinates of the key points of the preset portion in the face of the target user may be transformed through the normalization processing, so as to obtain new coordinates of the key points, and the obtained key points may be classified based on the new coordinates of each key point, so as to obtain classified key points, where a specific processing process for classifying the key points may refer to relevant contents in the first embodiment, and is not described herein again.
In practical applications, the above-mentioned specific processing procedure for classifying the keypoints to obtain the classified keypoints may be various, and the following provides another optional processing method, which may specifically include the following: and classifying the key points of the preset parts in the acquired face of the target user by taking each preset part in the face of the target user as a classification category to obtain classified key points.
In implementation, different parts in the face of the target user often have different properties or attributes, such as different key point distribution positions, different key point distribution trends, and the like of different parts in the face, and therefore, each preset part in the face may be taken as a classification category, and the classification category may include: a classification category corresponding to an ear part (which may include a left ear and/or a right ear), a classification category corresponding to a nose part, a classification category corresponding to a mouth part, a classification category corresponding to an eye part (which may include a left eye and/or a right eye), a classification category corresponding to an eyebrow part (which may include a left eyebrow and/or a right eyebrow), a classification category corresponding to a face contour, a classification category corresponding to hair (which may include front hair and/or rear hair), and the like. Based on this, the key points of each preset part in the face can be classified according to the parts in the face corresponding to the key points to obtain the classified key points, for example, the key points of the hairstyle of hair can be classified into one category, the key points of the nose can be classified into one category, and the like, so that the key points of each part in the face can be obtained.
In practical applications, as shown in fig. 4, the image provided by the target user may be tilted at a certain angle, so the obtained lattice of key points is tilted by the same angle; during processing, the key-point lattice may therefore be rotated, as described in step S308 below.
In step S308, if the coordinates of the key points of the preset portion in the face of the target user are not within the preset coordinate range, the coordinates of the key points of the preset portion in the face of the target user are rotated based on a preset rotation algorithm, so as to obtain the coordinates of the key points of the preset portion in the face of the target user after rotation.
In implementation, the preset coordinate range may be set according to the actual situation. Specifically, a standard contour pattern of the face image may be set in advance, and a reference coordinate range may be set based on this pattern. For example, the contour of a face image is generally approximately elliptical with its major axis vertical; if the contour of a certain user's face image is close to such an ellipse but its major axis deviates from the vertical direction, it can be determined that the coordinates of the key points of the preset part in the face of the target user are not within the preset coordinate range. For a face image that needs to be rotated, a rotation algorithm may be preset, and the coordinates of the key points are rotated based on this algorithm to obtain the rotated coordinates. The rotation algorithm may take several forms; one optional processing manner is as follows: the rotated coordinates are determined from the coordinates of the center point corresponding to the key points of the preset part, together with the angle between the horizontal axis of the key-point coordinate system and the straight line through the leftmost key point of the left eye and the rightmost key point of the right eye of the target user. Specifically, as shown in fig. 5, the coordinates of the key points of the preset part in the face of the target user may be rotated using the formula

x1 = x cosβ + y sinβ
y1 = y cosβ - x sinβ    (1)

to obtain the rotated coordinates, where (x1, y1) denotes the rotated coordinates of a key point of the preset part in the face of the target user, (x, y) denotes the original coordinates, and β denotes the angle between the horizontal axis of the key-point coordinate system and the straight line formed by the leftmost key point of the left eye and the rightmost key point of the right eye of the target user.
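A sketch of formula (1) in code; the angle β is estimated from the two eye-corner key points named above (which indices those corners have in a given detector's output is an assumption left to the caller):

```python
import math

def eye_line_angle(left_eye_leftmost, right_eye_rightmost):
    """Angle between the horizontal axis and the straight line through the
    leftmost key point of the left eye and the rightmost key point of the
    right eye."""
    (xl, yl), (xr, yr) = left_eye_leftmost, right_eye_rightmost
    return math.atan2(yr - yl, xr - xl)

def rotate_keypoints(keypoints, beta):
    """Apply formula (1): x1 = x*cos(beta) + y*sin(beta),
    y1 = y*cos(beta) - x*sin(beta)."""
    cb, sb = math.cos(beta), math.sin(beta)
    return [(x * cb + y * sb, y * cb - x * sb) for (x, y) in keypoints]
```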
In addition, the above processing applies when the key-point coordinates are two-dimensional. When the coordinates are three-dimensional, the following rotation algorithms may be used: an algorithm constructed from a preset rotation matrix or rotation vector, an Euler-angle rotation algorithm, or a quaternion-based algorithm, applied to the coordinates of the key points of the preset part in the face of the target user to obtain the rotated coordinates.
The specific processing process may refer to the two-dimensional coordinate condition, and processing is performed by combining with a corresponding algorithm, which is not described herein again.
In practical applications, the feature vector corresponding to the key point of each classification category may be composed of a distance feature vector and an angle feature vector, and then the distance feature vector and the angle feature vector corresponding to the key point of each classification category may be respectively calculated, and the following processing in step S310 and step S312 may be specifically performed.
In step S310, based on the key points of the first classification category, the successor key points of each key point in the key points of the first classification category, and the predecessor key points of each key point, the perimeter of the polygon formed by the key points of the first classification category, and the included angle between the vector formed by each key point in the key points of the first classification category and its successor key point, and the vector formed by each key point and its predecessor key point are determined.
Here the first classification category is any one of the above classification categories. For a given key point, the successor key point is the adjacent key point in the reference direction, and the predecessor key point is the adjacent key point in the opposite direction; as shown in fig. 6, for key point Pi, if the reference direction is clockwise, the successor key point is Pi-next and the predecessor key point is Pi-pre.
In practice, the key points of any part of the face may be constructed into a polygon; for example, the key points of the right eye in the face may be constructed into the polygon shown in fig. 6. For a key point Pi in that polygon, with the reference direction clockwise, the successor key point is Pi-next and the predecessor key point is Pi-pre; the vector formed by the key point Pi and its successor key point is PiPi-next, and the vector formed by the key point Pi and its predecessor key point is PiPi-pre, where i denotes the i-th key point. In this way, the vector formed by each key point of the polygon and its successor, and the vector formed by each key point and its predecessor, can be obtained. The perimeter of the polygon formed by the key points of the classification category can then be calculated as

S = |P1P1-next| + |P2P2-next| + … + |PnPn-next|    (2)

where S denotes the perimeter, i.e. the sum of the lengths of the edges from each key point to its successor. In addition, the included angle corresponding to each key point can be determined; as shown in fig. 6, the included angle corresponding to the key point Pi is θi, the angle between PiPi-next and PiPi-pre. Through the above, the perimeter of the polygon formed by the key points of the first classification category, and the included angle at each key point between the vector to its successor and the vector to its predecessor, can be obtained.
In step S312, distance feature vectors corresponding to the key points of the first classification category are determined based on the perimeter of the polygon formed by the key points of the first classification category, and angle feature vectors corresponding to the key points of the first classification category are determined based on the included angle, so as to determine the distance feature vectors corresponding to the key points of each classification category and the angle feature vectors corresponding to the key points of each classification category.
In implementation, after the perimeter of the polygon formed by the key points of the first classification category is obtained in the above manner, the distance feature vector may be obtained as

df = {d1/S, d2/S, … dn/S}    (3)

where di denotes the length of the edge from the i-th key point to its successor and df denotes the distance feature vector corresponding to the key points of the first classification category.
Based on the above-mentioned included angles, the angle feature vector corresponding to the key points of the first classification category can be determined as

af = {arccos θ1, arccos θ2, … arccos θn}    (4)

where af denotes the angle feature vector corresponding to the key points of the first classification category.
The processing procedure may be repeatedly executed to obtain the distance feature vector corresponding to the key point of each classification category and the angle feature vector corresponding to the key point of each classification category.
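Combining the above into the two per-category feature vectors, as a sketch reusing polygon_perimeter_and_angles from the previous listing; the per-edge normalization by S follows the form of formula (3) assumed above:

```python
import math

def distance_and_angle_features(points):
    """Return (df, af) for one classification category of key points."""
    S, thetas = polygon_perimeter_and_angles(points)
    n = len(points)
    # df: each edge length normalized by the perimeter (assumed form of (3))
    df = [math.dist(points[i], points[(i + 1) % n]) / S for i in range(n)]
    # af: arccos of each stored cosine, as in formula (4); clamp for safety
    af = [math.acos(max(-1.0, min(1.0, t))) for t in thetas]
    return df, af
```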
In step S314, a feature vector group corresponding to the face of the target user is constructed based on the determined feature vectors corresponding to the key points of each classification category.
In step S316, based on the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the plurality of different virtual resources stored in advance, similarities between the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the plurality of different virtual resources, respectively, are determined by a preset similarity algorithm, the similarity algorithm including one or more of: cosine similarity algorithm, Euclidean distance algorithm, Manhattan distance algorithm, Minkowski distance algorithm, Jaccard similarity algorithm, and Pearson correlation coefficient algorithm.
It should be noted that the above-mentioned multiple similarity algorithms are only similarity algorithms that can implement the above-mentioned processing, and in practical applications, besides the above-mentioned similarity algorithms, multiple different similarity algorithms may be included, which may be set specifically according to actual situations, and this is not limited in the embodiments of the present specification.
In an embodiment of the present specification, an optional similarity calculation method is further provided; the above step S316 may specifically include the following processing of steps A2 and A4.
In step A2, based on the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the plurality of different virtual resources stored in advance, the maximum and minimum values of the similarity between the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the plurality of different virtual resources are determined.
In step A4, the similarity between the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the plurality of different virtual resources is determined based on these maximum and minimum values and a preset weighting coefficient.
In practice, for the above steps A2 and A4, based on the feature vector group corresponding to the face of the target user and the pre-stored feature vector groups corresponding to the plurality of different virtual resources, the similarity between the feature vector group corresponding to the face of the target user and the feature vector group corresponding to each virtual resource may be determined by the formula

score_k = α · (Σi min(Xi, Mik) / Σi max(Xi, Mik)) + β · (Σi min(Yi, Nik) / Σi max(Yi, Nik))    (5)

where score_k denotes the similarity value; α and β denote weighting coefficients; i denotes the i-th component of a feature vector group corresponding to the face of the target user (i.e., the number or serial number within that group); k denotes the k-th virtual resource (i.e., the number or serial number of the feature vector group corresponding to the virtual resource); X denotes the feature vector group, composed of distance feature vectors, corresponding to the face of the target user; Y denotes the feature vector group, composed of angle feature vectors, corresponding to the face of the target user; M denotes the feature vector group, composed of distance feature vectors, corresponding to a virtual resource; and N denotes the feature vector group, composed of angle feature vectors, corresponding to a virtual resource.
In implementation, through the above processing, the feature vector group corresponding to the face of the target user may be obtained as follows:
df={X1,X2,…Xn}
af={Y1,Y2,…Yn} (6)
the feature vector groups corresponding to the plurality of different virtual resources may be:
df’={M1k,M2k,…Mnk}
af’={N1k,N2k,…Nnk} (7)
the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the plurality of different virtual resources may be substituted into the formula to perform calculation, so as to obtain similarities between the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the plurality of different virtual resources.
It should be noted that the weights of the features may be adjusted through the weighting coefficients α and β. With this algorithm, the information of the other parts and the hairstyle in the face of the target user can be matched against the plurality of different virtual resources stored in the virtual resource library to find the optimal virtual resource, thus completing the mapping between the user's face and the virtual resource library.
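A sketch of this scoring step; the generalized weighted-Jaccard reading of formula (5) used here follows the minimum/maximum description in steps A2 and A4 and is an assumption, as are the default weights:

```python
def resource_score(df_user, af_user, df_res, af_res, alpha=0.5, beta=0.5):
    """score_k = alpha * J(X, Mk) + beta * J(Y, Nk), where J is the
    generalized Jaccard similarity sum(min)/sum(max) over components
    (an assumed reconstruction of formula (5))."""
    def jaccard(u, v):
        num = sum(min(a, b) for a, b in zip(u, v))
        den = sum(max(a, b) for a, b in zip(u, v))
        return num / den if den else 0.0
    return alpha * jaccard(df_user, df_res) + beta * jaccard(af_user, af_res)
```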
In step S318, a virtual resource matching the target user is selected from the plurality of different virtual resources based on the determined similarity.
In step S320, the selected relevant information of the virtual resource matching the target user is sent to the target user, so that the target user obtains the virtual resource matching the target user based on the relevant information.
The selected relevant information of the virtual resource matched with the target user includes download address information corresponding to the selected virtual resource matched with the target user, and the download address information may be, for example, a URL.
In implementation, after receiving the relevant information of the virtual resource, the terminal device of the target user may obtain the corresponding virtual resource, and after obtaining the virtual resource, the terminal device of the target user may render the virtual resource through the corresponding rendering engine and component, and finally, the target user may view the virtual resource.
The embodiments of the present specification provide a method for selecting virtual resources. Key points of a preset part in the face of a target user are obtained, the key points including a plurality of points characterizing the hairstyle contour of the target user, so that the hairstyle contour can be taken into account when determining virtual resources, which improves the accuracy of the obtained virtual resource. The obtained key points are then classified; for the key points of each classification category, the corresponding feature vector is determined, and a feature vector group corresponding to the face of the target user is constructed from these feature vectors; finally, based on this feature vector group and the pre-stored feature vector groups corresponding to a plurality of different virtual resources, a virtual resource matching the target user is selected from among them. By abstracting the high-dimensional information of the target user's face into key points, the dimensionality of the processed information is effectively reduced and high-dimensional processing is avoided, improving the performance of face identification. At the same time, because matching is performed against a set of pre-stored virtual resources, no complex processing flow is needed, which reduces resource consumption and makes the method better suited to virtual resource processing in online, high-concurrency service scenarios.
In addition, the algorithm used by the scheme is light, the calculation scale is only related to the lattice scale of the key points, the number of the identified key points can be adjusted, and optimization is further performed, and the scheme innovatively applies the polygon similarity calculation method to the virtual resource generation of the facial image. Meanwhile, the high-dimensional feature data of the face are reduced to the two-dimensional features of the edge and the angle, so that the related technology and the operation loss for realizing the virtual resource generation are greatly reduced.
EXAMPLE III
In this embodiment, the method for selecting virtual resources provided in the embodiments of the present specification is described in detail in combination with a specific application scenario. The corresponding application scenario is that of an online service, where the preset part of the face of the target user may include the hairstyle contour and the five sense organs of the face (i.e., eyes, nose, ears, mouth, and face contour), the virtual resource may be an avatar, and the relevant information of the virtual resource may be the URL of the avatar.
As shown in fig. 7, the execution subject of the method may be a terminal device or a server, where the terminal device may be a mobile terminal device such as a mobile phone or a tablet computer, or a device such as a personal computer, and the server may be an independent server or a server cluster formed by multiple servers. The terminal device or server may be one that handles a certain service. In the embodiments of the present specification, the execution subject is taken to be a server by way of example; for the case where the execution subject is a terminal device, the related content below may be referred to, and details are not repeated here. The method may specifically comprise the following steps:
in step S702, key points of the face of the target user are obtained, where the key points include a plurality of key points for characterizing the hair style contour of the target user and key points of the five sense organs of the face.
In step S704, based on the coordinates of the key points of the hairstyle contour and of the facial five sense organs of the target user, together with the length and width of the preset canvas, these key points are normalized to obtain the normalized key points of the face of the target user.
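A minimal sketch of this normalization step, assuming the key points are given as pixel coordinates and that normalization means dividing by the canvas dimensions (one common realization; the patent does not fix the exact formula):

```python
import numpy as np

def normalize_keypoints(points, canvas_width, canvas_height):
    """Scale key-point coordinates into the unit square [0, 1] x [0, 1].

    points: (N, 2) array of (x, y) pixel coordinates on the preset canvas.
    Dividing by the canvas width and height removes the dependence on image
    resolution before the points are classified and compared.
    """
    points = np.asarray(points, dtype=float)
    return points / np.array([canvas_width, canvas_height], dtype=float)
```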
In step S706, the normalized key points of the face of the target user are classified by using each part of the face of the target user as a classification category, so as to obtain classified key points.
Wherein, each part in the face of the target user can comprise a part corresponding to facial five sense organs, a part corresponding to hair and the like.
In step S708, if the coordinates of the key points of the target user's face are not within the preset coordinate range, the coordinates may be rotated according to

x1 = x·cosβ + y·sinβ
y1 = y·cosβ − x·sinβ

where (x, y) is an original coordinate, β is the rotation angle, and (x1, y1) is the rotated coordinate, to obtain the rotated coordinates of the key points of the face of the target user.
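The rotation above can be implemented directly; the sketch below follows the two equations given, with an optional center point (an assumption borrowed from the center-point variant of the rotation algorithm described later):

```python
import numpy as np

def rotate_keypoints(points, beta, center=(0.0, 0.0)):
    """Rotate key points by angle beta (in radians), implementing
    x1 = x*cos(beta) + y*sin(beta) and y1 = y*cos(beta) - x*sin(beta).

    points: (N, 2) array of (x, y) coordinates; `center` is the point
    about which the rotation is performed (origin by default).
    """
    c = np.asarray(center, dtype=float)
    pts = np.asarray(points, dtype=float) - c
    x, y = pts[:, 0], pts[:, 1]
    x1 = x * np.cos(beta) + y * np.sin(beta)
    y1 = y * np.cos(beta) - x * np.sin(beta)
    return np.stack([x1, y1], axis=1) + c
```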
In step S710, based on the key points of the first classification category and, for each such key point, its successor key point and its predecessor key point, the perimeter of the polygon formed by the key points of the first classification category is determined, together with, for each key point, the included angle between the vector formed by that key point and its successor key point and the vector formed by that key point and its predecessor key point.
Wherein the first classification category is any classification category among the above classification categories.
In step S712, distance feature vectors corresponding to the key points of the first classification category are determined based on the perimeter of the polygon formed by the key points of the first classification category, and angle feature vectors corresponding to the key points of the first classification category are determined based on the included angle, so as to determine the distance feature vectors corresponding to the key points of each classification category and the angle feature vectors corresponding to the key points of each classification category.
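A sketch of one plausible reading of steps S710 and S712: edge lengths are normalized by the polygon perimeter to form the distance feature vector, and the angle at each key point between the vectors to its successor and predecessor forms the angle feature vector (the exact construction in the patent may differ):

```python
import numpy as np

def polygon_features(points):
    """Distance and angle feature vectors for one classification category.

    points: (N, 2) array of ordered key points forming a closed polygon.
    """
    pts = np.asarray(points, dtype=float)
    succ = np.roll(pts, -1, axis=0)   # successor of each key point
    pred = np.roll(pts, 1, axis=0)    # predecessor of each key point

    edges = np.linalg.norm(succ - pts, axis=1)
    perimeter = edges.sum()
    distance_features = edges / perimeter      # scale-invariant edge lengths

    v_succ = succ - pts                        # vector to successor
    v_pred = pred - pts                        # vector to predecessor
    cos_angle = np.sum(v_succ * v_pred, axis=1) / (
        np.linalg.norm(v_succ, axis=1) * np.linalg.norm(v_pred, axis=1)
    )
    angle_features = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return distance_features, angle_features
```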
In step S714, a feature vector group corresponding to the face of the target user is constructed based on the determined feature vectors corresponding to the key points of each classification category.
In step S716, based on the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the plurality of different avatars stored in advance, the following formula is used
score_k = Σ_i [ α · sim(X_i^d, M_{k,i}^d) + β · sim(X_i^a, M_{k,i}^a) ]

to determine the similarity between the feature vector group corresponding to the face of the target user and the feature vector group corresponding to each of the plurality of different avatars, where X_i^d and X_i^a denote the distance and angle feature vectors of the i-th classification category of the target user's face, M_{k,i}^d and M_{k,i}^a the corresponding vectors of the k-th avatar, α and β the weighting coefficients, and sim(·,·) the chosen vector similarity measure.
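Under the reconstructed formula above, the scoring might look like the following sketch, using cosine similarity as the per-vector measure (the patent lists several alternatives); each feature vector group is taken to be a list of (distance, angle) vector pairs, one pair per classification category:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-D feature vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def score(face_group, avatar_group, alpha=0.5, beta=0.5):
    """Weighted similarity score_k between two feature vector groups.

    face_group, avatar_group: lists of (distance_features, angle_features)
    pairs, one per classification category; alpha and beta weight the
    distance and angle features. A sketch only, not the exact implementation.
    """
    total = 0.0
    for (xd, xa), (md, ma) in zip(face_group, avatar_group):
        total += alpha * cosine(xd, md) + beta * cosine(xa, ma)
    return total
```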
In step S718, an avatar matching the target user is selected from a plurality of different avatars based on the determined similarity.
In step S720, the URL of the selected avatar matching the target user is sent to the target user, so that the target user obtains the avatar matching the target user based on the URL, where the avatar is a map of the skeletal animation.
The embodiments of the present specification provide a method for selecting virtual resources. Key points of a preset part in the face of a target user are obtained, the key points including a plurality of points representing the hairstyle contour of the target user, so that the user's hairstyle contour can be taken into account when determining a virtual resource, which improves the accuracy of the virtual resource obtained. The obtained key points are then classified, a feature vector is determined for the key points of each classification category, and a feature vector group corresponding to the face of the target user is constructed from these feature vectors; finally, a virtual resource matching the target user is selected from a plurality of different virtual resources based on this feature vector group and the feature vector groups, stored in advance, corresponding to the different virtual resources. Abstracting the high-dimensional information of the face into key points effectively reduces the dimensionality of the information to be processed, avoiding high-dimensional processing and improving the performance of face identification. Moreover, because matching is performed against a set of virtual resources stored in advance, no complex processing flow is required, which reduces resource consumption and makes the method well suited to online, high-concurrency service scenarios.

In addition, the algorithm used by this scheme is lightweight: its computation scale depends only on the number of key points, which can itself be adjusted for further optimization. The scheme innovatively applies a polygon-similarity calculation method to the generation of virtual resources from facial images, and by reducing the high-dimensional feature data of the face to two-dimensional edge and angle features, it greatly lowers the technical complexity and computational cost of virtual resource generation.
EXAMPLE IV
Based on the same idea as the method for selecting virtual resources described above, an embodiment of the present specification further provides a device for selecting virtual resources, as shown in fig. 8.
The virtual resource selecting device comprises: a key point obtaining module 801, a classification module 802, a vector determination module 803, and a virtual resource selection module 804, wherein:
a key point obtaining module 801, configured to obtain key points of a preset portion in a face of a target user, where the key points include a plurality of points used for representing a hairstyle contour of the target user;
the classification module 802 is configured to classify the obtained key points of the preset portion in the face of the target user to obtain classified key points;
a vector determination module 803, configured to determine, for the key points of each classification category, the feature vector corresponding to the key points of that classification category, and to construct a feature vector group corresponding to the face of the target user based on the determined feature vectors corresponding to the key points of each classification category;

a virtual resource selection module 804, configured to determine, based on the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to a plurality of different virtual resources stored in advance, the similarity between the feature vector group corresponding to the face of the target user and each of those feature vector groups, and to select a virtual resource matching the target user from the plurality of different virtual resources based on the determined similarities.
In an embodiment of this specification, the apparatus further includes:
and the information sending module is used for sending the selected related information of the virtual resource matched with the target user to the target user so as to enable the target user to obtain the virtual resource matched with the target user based on the related information, wherein the virtual resource is a mapping of the skeleton animation.
In an embodiment of this specification, the selected information about the virtual resource that matches the target user includes download address information corresponding to the selected virtual resource that matches the target user.
In an embodiment of this specification, the apparatus further includes:
the normalization module is used for normalizing the key points of the preset parts in the face of the target user based on the coordinates of the key points of the preset parts in the face of the target user and the length and width of the canvas where the face of the target user is located to obtain the normalized key points of the preset parts in the face of the target user;
the classification module 802 classifies the key points of the preset portion in the normalized face of the target user to obtain classified key points.
In an embodiment of this specification, the apparatus further includes:
and the rotating module is used for rotating the coordinates of the key points of the preset part in the face of the target user based on a preset rotating algorithm if the coordinates of the key points of the preset part in the face of the target user are not in a preset coordinate range, so as to obtain the rotated coordinates of the key points of the preset part in the face of the target user.
In embodiments of the present description, the rotation algorithm includes one or more of:
the rotated coordinates of the key points of the preset part in the face of the target user are determined from the coordinates of the center point corresponding to those key points and from the included angle between the horizontal axis of the coordinate system of the key-point coordinates and the straight line through the leftmost key point of the left eye and the rightmost key point of the right eye of the target user;
the rotation algorithm is constructed by adopting a preset rotation matrix and a preset rotation vector;
the rotation algorithm is an Euler angle rotation algorithm;
the rotation algorithm is an algorithm constructed based on quaternions.
In this embodiment of the present specification, the classifying module 802 classifies the obtained key points of the preset portion in the face of the target user by using each preset portion in the face of the target user as a classification category, so as to obtain the classified key points.
In this embodiment of the present specification, the vector determining module 803 includes:
the parameter determining unit is used for determining the perimeter of a polygon formed by the key points of the first classification category and an included angle between a vector formed by each key point of the key points of the first classification category and the succeeding key point and a vector formed by each key point and the preceding key point based on the key points of the first classification category, the succeeding key point of each key point of the key points of the first classification category and the preceding key point of each key point;
and the vector determining unit is used for determining distance feature vectors corresponding to the key points of the first classification category based on the perimeter of a polygon formed by the key points of the first classification category, determining angle feature vectors corresponding to the key points of the first classification category based on the included angle, further determining distance feature vectors corresponding to the key points of each classification category and angle feature vectors corresponding to the key points of each classification category, wherein the first classification category is any classification category in the classification categories.
In this embodiment of the present specification, the virtual resource selecting module 804 determines, based on the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to a plurality of different virtual resources stored in advance, similarities between the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the plurality of different virtual resources respectively by using a preset similarity algorithm, where the similarity algorithm includes one or more of: cosine similarity algorithm, Euclidean distance algorithm, Manhattan distance algorithm, Minkowski distance algorithm, Jaccard similarity algorithm, and Pearson correlation coefficient algorithm.
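For illustration, the distance-based measures in this list can be converted into similarities; the 1/(1 + d) conversion below is a common convention rather than the patent's prescribed one:

```python
import numpy as np

def euclidean_similarity(a, b):
    """Similarity in (0, 1] derived from the Euclidean (L2) distance."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 1.0 / (1.0 + float(np.linalg.norm(a - b)))

def manhattan_similarity(a, b):
    """Similarity in (0, 1] derived from the Manhattan (L1) distance."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 1.0 / (1.0 + float(np.abs(a - b).sum()))
```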
In this embodiment of the present specification, the virtual resource selecting module 804 determines, based on the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to a plurality of different virtual resources stored in advance, a maximum value and a minimum value of similarity between the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the plurality of different virtual resources;
and determining the similarity between the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the different virtual resources respectively based on the maximum value and the minimum value of the similarity between the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the different virtual resources and a preset weighting coefficient.
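The max/min step above is not spelled out in detail; one possible reading is min-max normalization of the raw similarities followed by a preset weighting coefficient, sketched below purely as an assumption:

```python
def rescale_scores(raw_scores, weight=1.0):
    """Min-max normalize raw similarity values and apply a preset weight.

    raw_scores: dict mapping virtual-resource ID -> raw similarity value.
    An illustrative assumption only; the patent leaves the combination open.
    """
    s_max, s_min = max(raw_scores.values()), min(raw_scores.values())
    span = (s_max - s_min) or 1.0   # avoid division by zero when all equal
    return {rid: weight * (s - s_min) / span for rid, s in raw_scores.items()}
```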
In this embodiment of the present specification, the virtual resource selecting module 804, based on the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to a plurality of different virtual resources stored in advance, uses the following formula
score_k = Σ_i [ α · sim(X_i^d, M_{k,i}^d) + β · sim(X_i^a, M_{k,i}^a) ]

to determine the similarity between the feature vector group corresponding to the face of the target user and each of the feature vector groups corresponding to the plurality of different virtual resources, where score_k represents the similarity value, α and β respectively represent weighting coefficients, i indexes the feature vectors in the feature vector group corresponding to the face of the target user (X_i^d and X_i^a being the distance and angle feature vectors of the i-th classification category), k indexes the virtual resources, X represents the feature vector group corresponding to the face of the target user, M represents the feature vector group corresponding to a virtual resource, and sim(·,·) denotes the chosen vector similarity measure.
The embodiments of the present specification provide a device for selecting virtual resources. Key points of a preset part in the face of a target user are obtained, the key points including a plurality of points representing the hairstyle contour of the target user, so that the user's hairstyle contour can be taken into account when determining a virtual resource, which improves the accuracy of the virtual resource obtained. The obtained key points are then classified, a feature vector is determined for the key points of each classification category, and a feature vector group corresponding to the face of the target user is constructed from these feature vectors; finally, a virtual resource matching the target user is selected from a plurality of different virtual resources based on this feature vector group and the feature vector groups, stored in advance, corresponding to the different virtual resources. Abstracting the high-dimensional information of the face into key points effectively reduces the dimensionality of the information to be processed, avoiding high-dimensional processing and improving the performance of face identification. Moreover, because matching is performed against a set of virtual resources stored in advance, no complex processing flow is required, which reduces resource consumption and makes the device well suited to online, high-concurrency service scenarios.

In addition, the algorithm used by this scheme is lightweight: its computation scale depends only on the number of key points, which can itself be adjusted for further optimization. The scheme innovatively applies a polygon-similarity calculation method to the generation of virtual resources from facial images, and by reducing the high-dimensional feature data of the face to two-dimensional edge and angle features, it greatly lowers the technical complexity and computational cost of virtual resource generation.
EXAMPLE V
Based on the same idea as the apparatus for selecting virtual resources described above, an embodiment of the present specification further provides a device for selecting virtual resources, as shown in fig. 9.
The selection device of the virtual resource may be the terminal device or the server provided in the above embodiment.
The device for selecting virtual resources may differ considerably depending on its configuration or performance, and may include one or more processors 901 and a memory 902, where the memory 902 may store one or more applications or data. The memory 902 may be transient storage or persistent storage. An application program stored in the memory 902 may include one or more modules (not shown in the figure), and each module may include a series of computer-executable instructions for the device. Still further, the processor 901 may be configured to communicate with the memory 902 and to execute, on the device, the series of computer-executable instructions in the memory 902. The device may also include one or more power sources 903, one or more wired or wireless network interfaces 904, one or more input/output interfaces 905, and one or more keyboards 906.

Specifically, in this embodiment, the device for selecting virtual resources includes a memory and one or more programs, where the one or more programs are stored in the memory, may include one or more modules, and each module may include a series of computer-executable instructions for the device; the one or more programs, configured to be executed by the one or more processors, include computer-executable instructions for:
acquiring key points of a preset part in the face of a target user, wherein the key points comprise a plurality of points for representing the hair style contour of the target user;
classifying the acquired key points of the preset part in the face of the target user to obtain classified key points;
determining a feature vector corresponding to the key point of each classification category according to the key point of each classification category, and constructing a feature vector group corresponding to the face of the target user based on the determined feature vector corresponding to the key point of each classification category;
and determining the similarity between the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the different virtual resources respectively based on the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the different virtual resources stored in advance, and selecting the virtual resource matched with the target user from the different virtual resources based on the determined similarity.
In the embodiment of this specification, the method further includes:
and sending the selected related information of the virtual resource matched with the target user to the target user so that the target user can obtain the virtual resource matched with the target user based on the related information, wherein the virtual resource is a mapping of the skeleton animation.
In this embodiment of this specification, after obtaining the key point of the preset part in the face of the target user, the method further includes:
based on the coordinates of the key points of the preset part in the face of the target user and the length and width of the canvas where the face of the target user is located, carrying out normalization processing on the key points of the preset part in the face of the target user to obtain the normalized key points of the preset part in the face of the target user;
the classifying the obtained key points of the preset part in the face of the target user to obtain the classified key points comprises the following steps:
and classifying the key points of the preset part in the normalized face of the target user to obtain the classified key points.
In an embodiment of this specification, the method further includes:
and if the coordinates of the key points of the preset part in the face of the target user are not in the preset coordinate range, rotating the coordinates of the key points of the preset part in the face of the target user based on a preset rotation algorithm to obtain the rotated coordinates of the key points of the preset part in the face of the target user.
In this embodiment of this specification, the determining, for the keypoint of each classification category, a feature vector corresponding to the keypoint of each classification category includes:
determining, based on the key points of the first classification category and, for each such key point, its successor key point and its predecessor key point, the perimeter of the polygon formed by the key points of the first classification category, and the included angle between the vector formed by each key point and its successor key point and the vector formed by that key point and its predecessor key point;

determining the distance feature vectors corresponding to the key points of the first classification category based on the perimeter of the polygon formed by those key points, and determining the angle feature vectors corresponding to the key points of the first classification category based on the included angles, thereby determining the distance feature vectors and the angle feature vectors corresponding to the key points of each classification category, where the first classification category is any one of the classification categories.
In this embodiment of the present specification, the determining, based on the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to a plurality of different virtual resources stored in advance, similarities between the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the plurality of different virtual resources, includes:
based on the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to a plurality of different virtual resources stored in advance, using the following formula

score_k = Σ_i [ α · sim(X_i^d, M_{k,i}^d) + β · sim(X_i^a, M_{k,i}^a) ]

to determine the similarity between the feature vector group corresponding to the face of the target user and each of the feature vector groups corresponding to the plurality of different virtual resources, where score_k represents the similarity value, α and β respectively represent weighting coefficients, i indexes the feature vectors in the feature vector group corresponding to the face of the target user (X_i^d and X_i^a being the distance and angle feature vectors of the i-th classification category), k indexes the virtual resources, X represents the feature vector group corresponding to the face of the target user, M represents the feature vector group corresponding to a virtual resource, and sim(·,·) denotes the chosen vector similarity measure.
The embodiments of the present specification provide a device for selecting virtual resources. Key points of a preset part in the face of a target user are obtained, the key points including a plurality of points representing the hairstyle contour of the target user, so that the user's hairstyle contour can be taken into account when determining a virtual resource, which improves the accuracy of the virtual resource obtained. The obtained key points are then classified, a feature vector is determined for the key points of each classification category, and a feature vector group corresponding to the face of the target user is constructed from these feature vectors; finally, a virtual resource matching the target user is selected from a plurality of different virtual resources based on this feature vector group and the feature vector groups, stored in advance, corresponding to the different virtual resources. Abstracting the high-dimensional information of the face into key points effectively reduces the dimensionality of the information to be processed, avoiding high-dimensional processing and improving the performance of face identification. Moreover, because matching is performed against a set of virtual resources stored in advance, no complex processing flow is required, which reduces resource consumption and makes the device well suited to online, high-concurrency service scenarios.
EXAMPLE VI
Further, based on the methods shown in fig. 1 to fig. 7, one or more embodiments of the present specification further provide a storage medium for storing computer-executable instruction information. In a specific embodiment, the storage medium may be a USB flash drive, an optical disc, a hard disk, or the like, and the computer-executable instruction information it stores, when executed by a processor, can implement the following process:
acquiring key points of a preset part in the face of a target user, wherein the key points comprise a plurality of points for representing the hair style contour of the target user;
classifying the acquired key points of the preset part in the face of the target user to obtain classified key points;
determining a feature vector corresponding to the key point of each classification category according to the key point of each classification category, and constructing a feature vector group corresponding to the face of the target user based on the determined feature vector corresponding to the key point of each classification category;
and determining the similarity between the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the different virtual resources respectively based on the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the different virtual resources stored in advance, and selecting the virtual resource matched with the target user from the different virtual resources based on the determined similarity.
In the embodiment of this specification, the method further includes:
and sending the selected related information of the virtual resource matched with the target user to the target user so that the target user can obtain the virtual resource matched with the target user based on the related information, wherein the virtual resource is a mapping of the skeleton animation.
In this embodiment of this specification, after obtaining the key point of the preset part in the face of the target user, the method further includes:
based on the coordinates of the key points of the preset part in the face of the target user and the length and width of the canvas where the face of the target user is located, carrying out normalization processing on the key points of the preset part in the face of the target user to obtain the normalized key points of the preset part in the face of the target user;
the classifying the obtained key points of the preset part in the face of the target user to obtain the classified key points comprises the following steps:
and classifying the key points of the preset part in the normalized face of the target user to obtain the classified key points.
In an embodiment of this specification, the method further includes:
and if the coordinates of the key points of the preset part in the face of the target user are not in the preset coordinate range, rotating the coordinates of the key points of the preset part in the face of the target user based on a preset rotation algorithm to obtain the rotated coordinates of the key points of the preset part in the face of the target user.
In this embodiment of this specification, the determining, for the keypoint of each classification category, a feature vector corresponding to the keypoint of each classification category includes:
determining, based on the key points of the first classification category and, for each such key point, its successor key point and its predecessor key point, the perimeter of the polygon formed by the key points of the first classification category, and the included angle between the vector formed by each key point and its successor key point and the vector formed by that key point and its predecessor key point;

determining the distance feature vectors corresponding to the key points of the first classification category based on the perimeter of the polygon formed by those key points, and determining the angle feature vectors corresponding to the key points of the first classification category based on the included angles, thereby determining the distance feature vectors and the angle feature vectors corresponding to the key points of each classification category, where the first classification category is any one of the classification categories.
In this embodiment of the present specification, the determining, based on the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to a plurality of different virtual resources stored in advance, similarities between the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the plurality of different virtual resources, includes:
based on the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to a plurality of different virtual resources stored in advance, using the following formula

score_k = Σ_i [ α · sim(X_i^d, M_{k,i}^d) + β · sim(X_i^a, M_{k,i}^a) ]

to determine the similarity between the feature vector group corresponding to the face of the target user and each of the feature vector groups corresponding to the plurality of different virtual resources, where score_k represents the similarity value, α and β respectively represent weighting coefficients, i indexes the feature vectors in the feature vector group corresponding to the face of the target user (X_i^d and X_i^a being the distance and angle feature vectors of the i-th classification category), k indexes the virtual resources, X represents the feature vector group corresponding to the face of the target user, M represents the feature vector group corresponding to a virtual resource, and sim(·,·) denotes the chosen vector similarity measure.
The embodiments of the present specification provide a storage medium. Key points of a preset part in the face of a target user are obtained, the key points including a plurality of points representing the hairstyle contour of the target user, so that the user's hairstyle contour can be taken into account when determining a virtual resource, which improves the accuracy of the virtual resource obtained. The obtained key points are then classified, a feature vector is determined for the key points of each classification category, and a feature vector group corresponding to the face of the target user is constructed from these feature vectors; finally, a virtual resource matching the target user is selected from a plurality of different virtual resources based on this feature vector group and the feature vector groups, stored in advance, corresponding to the different virtual resources. Abstracting the high-dimensional information of the face into key points effectively reduces the dimensionality of the information to be processed, avoiding high-dimensional processing and improving the performance of face identification. Moreover, because matching is performed against a set of virtual resources stored in advance, no complex processing flow is required, which reduces resource consumption and makes the approach well suited to online, high-concurrency service scenarios.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
In the 1990s, an improvement in a technology could be clearly distinguished as an improvement in hardware (for example, an improvement in circuit structures such as diodes, transistors, and switches) or an improvement in software (an improvement in a process flow). However, as technology has advanced, many of today's process-flow improvements can be regarded as direct improvements in hardware circuit structure. Designers almost always obtain a corresponding hardware circuit structure by programming an improved process flow into a hardware circuit. Thus, it cannot be said that an improvement in a process flow cannot be realized with hardware entity modules. For example, a programmable logic device (PLD), such as a field-programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by a user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, this kind of programming is now mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development, while the original code to be compiled must be written in a specific programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), of which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained merely by slightly logic-programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely as computer-readable program code, the same functionality can be implemented by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be regarded as a hardware component, and the means included within it for performing the various functions may also be regarded as structures within the hardware component; or even the means for performing the functions may be regarded both as software modules for performing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the various elements may be implemented in the same one or more software and/or hardware implementations in implementing one or more embodiments of the present description.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present description are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the description. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable virtual resource selecting device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable virtual resource selecting device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable virtual resource selecting device to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable virtual resource selecting device to cause a series of operational steps to be performed on the computer or other programmable device to produce a computer implemented process such that the instructions which execute on the computer or other programmable device provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," and any other variations thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
One or more embodiments of the present description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present specification, and is not intended to limit the present specification. Various modifications and alterations to this description will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present specification should be included in the scope of the claims of the present specification.

Claims (16)

1. A method for selecting virtual resources comprises the following steps:
acquiring key points of a preset part in the face of a target user, wherein the key points comprise a plurality of points for representing the hair style contour of the target user;
classifying the acquired key points of the preset part in the face of the target user to obtain classified key points;
determining a feature vector corresponding to the key point of each classification category according to the key point of each classification category, and constructing a feature vector group corresponding to the face of the target user based on the determined feature vector corresponding to the key point of each classification category;
and determining the similarity between the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the different virtual resources respectively based on the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the different virtual resources stored in advance, and selecting the virtual resource matched with the target user from the different virtual resources based on the determined similarity.
2. The method of claim 1, further comprising:
sending the relevant information of the selected virtual resource matched with the target user to the target user so that the target user can obtain the virtual resource matched with the target user based on the relevant information, wherein the virtual resource is a mapping of skeletal animation, and the relevant information of the selected virtual resource matched with the target user comprises the download address information corresponding to the selected virtual resource matched with the target user.
3. The method of claim 1, after obtaining key points of a preset part in the face of the target user, the method further comprising:
based on the coordinates of the key points of the preset part in the face of the target user and the length and width of the canvas where the face of the target user is located, carrying out normalization processing on the key points of the preset part in the face of the target user to obtain the normalized key points of the preset part in the face of the target user;
the classifying the obtained key points of the preset part in the face of the target user to obtain the classified key points comprises the following steps:
and classifying the key points of the preset part in the normalized face of the target user to obtain the classified key points.
4. The method of claim 1 or 3, further comprising:
and if the coordinates of the key points of the preset part in the face of the target user are not in the preset coordinate range, rotating the coordinates of the key points of the preset part in the face of the target user based on a preset rotation algorithm to obtain the rotated coordinates of the key points of the preset part in the face of the target user.
5. The method of claim 4, the rotation algorithm comprising one or more of:
the rotated coordinates of the key points of the preset part in the face of the target user are determined from the coordinates of the center point corresponding to those key points and from the included angle between the horizontal axis of the coordinate system of the key-point coordinates and the straight line through the leftmost key point of the left eye and the rightmost key point of the right eye of the target user;
the rotation algorithm is constructed by adopting a preset rotation matrix and a preset rotation vector;
the rotation algorithm is an Euler angle rotation algorithm;
the rotation algorithm is an algorithm constructed based on quaternions.
6. The method of claim 1, wherein the classifying the obtained key points of the preset part in the face of the target user to obtain classified key points comprises:
and classifying the acquired key points of the preset parts in the face of the target user by taking each preset part in the face of the target user as a classification category to obtain classified key points.
7. The method of claim 6, wherein determining, for the keypoints for each classification category, the feature vector corresponding to the keypoint for each classification category comprises:
determining, based on the key points of the first classification category and, for each such key point, its successor key point and its predecessor key point, the perimeter of the polygon formed by the key points of the first classification category, and the included angle between the vector formed by each key point and its successor key point and the vector formed by that key point and its predecessor key point;

determining the distance feature vectors corresponding to the key points of the first classification category based on the perimeter of the polygon formed by those key points, and determining the angle feature vectors corresponding to the key points of the first classification category based on the included angles, thereby determining the distance feature vectors and the angle feature vectors corresponding to the key points of each classification category, where the first classification category is any one of the classification categories.
8. The method of claim 1, wherein the determining the similarity between the feature vector group corresponding to the target user's face and the feature vector groups corresponding to the plurality of different virtual resources based on the feature vector group corresponding to the target user's face and the feature vector groups corresponding to the plurality of different virtual resources stored in advance comprises:
determining the similarity between the feature vector group corresponding to the target user face and the feature vector groups corresponding to the plurality of different virtual resources through a preset similarity algorithm based on the feature vector group corresponding to the target user face and the feature vector groups corresponding to the plurality of different virtual resources stored in advance, wherein the similarity algorithm comprises one or more of the following methods: cosine similarity algorithm, Euclidean distance algorithm, Manhattan distance algorithm, Minkowski distance algorithm, Jaccard similarity algorithm, and Pearson correlation coefficient algorithm.
9. The method according to claim 1 or 8, wherein the determining the similarity between the feature vector group corresponding to the target user's face and the feature vector groups corresponding to the plurality of different virtual resources based on the feature vector group corresponding to the target user's face and the feature vector groups corresponding to the plurality of different virtual resources stored in advance comprises:
determining the maximum value and the minimum value of the similarity between the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to a plurality of different virtual resources based on the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the plurality of different virtual resources stored in advance;
and determining the similarity between the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the different virtual resources respectively based on the maximum value and the minimum value of the similarity between the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the different virtual resources and a preset weighting coefficient.
10. An apparatus for selecting a virtual resource, the apparatus comprising:
the key point acquisition module is used for acquiring key points of a preset part in the face of a target user, wherein the key points comprise a plurality of points used for representing the hair style contour of the target user;
the classification module is used for classifying the acquired key points of the preset part in the face of the target user to obtain classified key points;
the vector determination module is used for determining a feature vector corresponding to the key point of each classification category aiming at the key point of each classification category and constructing a feature vector group corresponding to the face of the target user based on the determined feature vector corresponding to the key point of each classification category;
and the virtual resource selection module is used for determining the similarity between the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the different virtual resources based on the feature vector group corresponding to the face of the target user and the feature vector groups corresponding to the different virtual resources stored in advance, and selecting the virtual resource matched with the target user from the different virtual resources based on the determined similarity.
11. The apparatus of claim 10, further comprising:
an information sending module configured to send, to the target user, related information of the selected virtual resource matching the target user, so that the target user can obtain the matching virtual resource based on the related information, wherein the virtual resource is a map of the skeletal animation, and the related information comprises download address information corresponding to the selected virtual resource.
12. The apparatus of claim 10, further comprising:
a normalization module configured to normalize the key points of the preset part in the target user's face based on the coordinates of those key points and the length and width of the canvas on which the target user's face is located, to obtain normalized key points;
wherein the classification module is configured to classify the normalized key points of the preset part to obtain the classified key points.
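One natural reading of the normalization is a simple rescale of each coordinate by the canvas dimensions, so that faces captured at different resolutions become comparable; a sketch under that assumption:

    def normalize_keypoints(points, canvas_width, canvas_height):
        # Map each (x, y) key point into the unit square [0, 1] x [0, 1].
        return [(x / canvas_width, y / canvas_height) for x, y in points]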
13. The apparatus of claim 10 or 12, further comprising:
a rotation module configured to rotate the coordinates of the key points of the preset part in the target user's face based on a preset rotation algorithm if those coordinates are not within a preset coordinate range, to obtain rotated coordinates of the key points.
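The claim does not fix the rotation algorithm; a standard 2D rotation about the canvas center (assuming the normalized coordinates above) is one candidate:

    import math

    def rotate_keypoints(points, angle_rad, center=(0.5, 0.5)):
        # Rotate each normalized (x, y) key point about the given center,
        # e.g. to level a tilted face before classification.
        cx, cy = center
        cos_a, sin_a = math.cos(angle_rad), math.sin(angle_rad)
        return [(cx + (x - cx) * cos_a - (y - cy) * sin_a,
                 cy + (x - cx) * sin_a + (y - cy) * cos_a)
                for x, y in points]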
14. The apparatus of claim 10, wherein the classification module classifies the acquired key points by taking each preset part in the target user's face as a classification category, to obtain the classified key points.
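Taking each facial part as its own category amounts to a simple grouping step; a sketch assuming the detector already labels each key point with its part name (a hypothetical input format):

    def classify_by_part(labeled_points):
        # labeled_points: [(part_name, (x, y)), ...]
        grouped = {}
        for part, point in labeled_points:
            grouped.setdefault(part, []).append(point)
        return grouped  # {part_name: [(x, y), ...]}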
15. A virtual resource selection device, comprising:
a processor; and
a memory arranged to store computer-executable instructions that, when executed, cause the processor to:
acquire key points of a preset part in a target user's face, wherein the key points comprise a plurality of points representing the hair style contour of the target user;
classify the acquired key points to obtain classified key points;
determine, for the key points of each classification category, a feature vector corresponding to the key points of that classification category, and construct a feature vector group corresponding to the target user's face based on the determined feature vectors; and
determine the similarity between the feature vector group corresponding to the target user's face and pre-stored feature vector groups corresponding to a plurality of different virtual resources, and select a virtual resource matching the target user from the plurality of different virtual resources based on the determined similarity.
16. A storage medium storing computer-executable instructions that, when executed, implement the following:
acquiring key points of a preset part in a target user's face, wherein the key points comprise a plurality of points representing the hair style contour of the target user;
classifying the acquired key points to obtain classified key points;
determining, for the key points of each classification category, a feature vector corresponding to the key points of that classification category, and constructing a feature vector group corresponding to the target user's face based on the determined feature vectors; and
determining the similarity between the feature vector group corresponding to the target user's face and pre-stored feature vector groups corresponding to a plurality of different virtual resources, and selecting a virtual resource matching the target user from the plurality of different virtual resources based on the determined similarity.
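Putting the claimed steps together, a toy end-to-end run using the illustrative helpers sketched after claims 8, 10, 12 and 14 (all coordinates, canvas sizes and resource names are made up for the example):

    # Raw detector output: part-labeled pixel coordinates on a 320x320 canvas.
    raw = [("hair_contour", (120, 40)), ("hair_contour", (180, 38)),
           ("eye", (140, 110)), ("eye", (200, 112))]

    grouped = classify_by_part(
        [(part, (x / 320, y / 320)) for part, (x, y) in raw])
    user_group = build_vector_group(grouped)

    # Pre-stored feature vector groups for two candidate virtual resources.
    resources = {
        "avatar_a": build_vector_group({
            "hair_contour": [(0.35, 0.10), (0.60, 0.11)],
            "eye": [(0.42, 0.33), (0.63, 0.34)]}),
        "avatar_b": build_vector_group({
            "hair_contour": [(0.20, 0.30), (0.75, 0.28)],
            "eye": [(0.45, 0.40), (0.60, 0.41)]}),
    }

    print(select_virtual_resource(user_group, resources, cosine_similarity))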
CN202011051577.7A 2020-09-29 2020-09-29 Virtual resource selection method, device and equipment Active CN112132979B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011051577.7A CN112132979B (en) 2020-09-29 2020-09-29 Virtual resource selection method, device and equipment


Publications (2)

Publication Number Publication Date
CN112132979A CN112132979A (en) 2020-12-25
CN112132979B (en) 2022-04-22

Family

ID=73844805

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011051577.7A Active CN112132979B (en) 2020-09-29 2020-09-29 Virtual resource selection method, device and equipment

Country Status (1)

Country Link
CN (1) CN112132979B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112764649B (en) * 2021-01-29 2023-01-31 北京字节跳动网络技术有限公司 Virtual image generation method, device, equipment and storage medium
CN114119935B (en) * 2021-11-29 2023-10-03 北京百度网讯科技有限公司 Image processing method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120070985A (en) * 2010-12-22 2012-07-02 에스케이플래닛 주식회사 Virtual experience system based on facial feature and method therefore
WO2017003031A1 (en) * 2015-06-29 2017-01-05 김영자 Method for providing lifelike avatar emoticon-based ultralight data animation creation system, and terminal device providing lifelike avatar emoticon for implementing same
CN106652015A (en) * 2015-10-30 2017-05-10 深圳超多维光电子有限公司 Virtual figure head portrait generation method and apparatus

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9013489B2 (en) * 2011-06-06 2015-04-21 Microsoft Technology Licensing, Llc Generation of avatar reflecting player appearance
WO2016161553A1 (en) * 2015-04-07 2016-10-13 Intel Corporation Avatar generation and animations




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant