CN112101072A - Face matching method, device, equipment and medium
- Publication number: CN112101072A
- Application number: CN201910526561.8A
- Authority: CN (China)
- Prior art keywords: video, similarity, face, user, hair style
- Prior art date: 2019-06-18
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications

- G06V40/168—Feature extraction; Face representation
- G06V40/172—Classification, e.g. identification

(Both under G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data; G06V40/10—Human or animal bodies; G06V40/16—Human faces.)
Abstract
An embodiment of the invention provides a face matching method, apparatus, device, and medium. The method includes: acquiring a face image of a user; extracting the user's face features from the face image; calculating the face similarity between the user's face and the face in each video according to the user's face features and the video face features corresponding to each video in a pre-stored video library; identifying the user's gender feature and hair style features from the face image; calculating the hair style similarity between the user's hair style features and those of the person in each video; and screening the video library for a target video corresponding to the user according to the face similarity and the hair style similarity, where the gender feature of the person in the target video is consistent with that of the user. Embodiments of the invention can screen out, from the video library, videos whose faces are similar to the user's, so that the video character after face swapping looks closer to the user's own face, improving the user experience.
Description
Technical Field
The invention relates to the technical field of face swapping, and in particular to a face matching method, apparatus, device, and medium.
Background
Face swapping is a popular application in the field of computer vision, commonly used for video synthesis, privacy protection, portrait replacement, and other creative applications.
At present, after a user swaps their face into a video, the result often differs greatly from the user's own face, and existing systems cannot use the user's facial features to recommend videos whose faces are closer to the user's, so the user experience is poor.
Disclosure of Invention
Embodiments of the invention provide a face matching method, apparatus, device, and medium that can screen out, from a video library, videos whose faces are similar to the user's, so that the video character after face swapping looks closer to the user's own face, improving the user experience.
In a first aspect, an embodiment of the present invention provides a face matching method, where the method includes:
acquiring a face image of a user;
extracting the face features of the user from the face image;
calculating the face similarity of the face of the user and the face in each video according to the face features of the user and video face features corresponding to each video in a pre-stored video library;
identifying gender characteristics of a user and hair style characteristics of the user by using the face image;
calculating the hair style similarity between the hair style characteristics of the user and the hair style characteristics of the person in each video;
and screening a target video corresponding to the user in a video library according to the face similarity and the hair style similarity, wherein the gender characteristics of the person in the target video are consistent with those of the user.
In a second aspect, an embodiment of the present invention provides a face matching apparatus, where the apparatus includes:
the acquisition module is used for acquiring a face image of a user;
the extraction module is used for extracting the face features of the user from the face image;
the first calculation module is used for calculating the face similarity between the face of the user and the face in each video according to the face features of the user and the video face features corresponding to each video in the pre-stored video library;
the identification module is used for identifying the gender characteristics of the user and the hair style characteristics of the user by using the face image;
the second calculation module is used for calculating the hair style similarity between the hair style characteristics of the user and the hair style characteristics of the person in each video;
and the selection module is used for screening a target video corresponding to the user in the video library according to the face similarity and the hair style similarity, wherein the gender characteristics of the person in the target video are consistent with the gender characteristics of the user.
In a third aspect, an embodiment of the present invention provides a computer device, including: at least one processor, at least one memory, and computer program instructions stored in the memory that, when executed by the processor, implement the method as in the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which computer program instructions are stored, wherein the computer program instructions, when executed by a processor, implement the method according to the first aspect.
Embodiments of the invention provide a face matching method, apparatus, device, and medium. A face image of a user is obtained; the user's face features are extracted from the face image; the face similarity between the user's face and the face in each video is calculated according to the user's face features and the video face features corresponding to each video in a pre-stored video library; the user's gender feature and hair style features are identified from the face image; the hair style similarity between the user's hair style features and those of the person in each video is calculated; and a target video corresponding to the user is screened from the video library according to the face similarity and the hair style similarity, where the gender feature of the person in the target video is consistent with that of the user. In this way, videos whose faces are similar to the user's can be screened out of the video library, so that the video character after face swapping looks closer to the user's own face, improving the user experience.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly described below; those skilled in the art can derive other drawings from these drawings without creative effort.
FIG. 1 illustrates a flow diagram of a face matching method provided in accordance with some embodiments of the invention;
FIG. 2 is a schematic structural diagram of a face matching apparatus according to some embodiments of the present invention;
FIG. 3 illustrates a schematic structural diagram of a computing device provided in accordance with some embodiments of the present invention.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention will be described in detail below, and in order to make objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention. It will be apparent to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present invention by illustrating examples of the present invention.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Referring to fig. 1, an embodiment of the present invention provides a face matching method, where the method includes: S101-S106.
S101: and acquiring a face image of the user.
In a specific implementation, the face image of the user may be a picture taken by the user with a terminal, or a picture uploaded by the user before the face-swapped video is recorded.
S102: and extracting the facial features of the user from the facial image.
In implementation, the face features of the user include the shape and position of the facial features (eyes, eyebrows, nose, mouth, and ears). For example, a 3D Morphable Model (3DMM) may be used to extract feature point coordinates from the user's face image. The extracted face features may form a vector of dimension 1 × P, where P is an integer greater than 1.
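As an illustration of this step, the sketch below uses the open-source face_recognition library as a stand-in feature extractor (an assumption for demonstration, not the 3DMM extractor described above; its encodings give P = 128), returning a 1 × P vector for the user's face:

```python
# A minimal sketch of S102: extract a fixed-length face feature vector.
# Assumes the open-source face_recognition library (pip install face_recognition)
# as a stand-in for the 3DMM-based extractor; its encodings have P = 128.
import numpy as np
import face_recognition

def extract_user_features(image_path: str) -> np.ndarray:
    """Return a 1 x P face feature vector for the first face found."""
    image = face_recognition.load_image_file(image_path)  # RGB ndarray
    encodings = face_recognition.face_encodings(image)    # one 128-d vector per face
    if not encodings:
        raise ValueError(f"no face detected in {image_path}")
    return encodings[0].reshape(1, -1)                    # shape (1, P)
```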
S103: and calculating the face similarity of the face of the user and the face in each video according to the face features of the user and the video face features corresponding to each video in the pre-stored video library.
In specific implementation, the video face features corresponding to each video in the video library are extracted in advance and stored as video feature vectors; for example, if the total number of videos in the video library is M, the stored video features form an M × P matrix. The similarity between the user's face features and the face in each video is then calculated: if the user's face features are a 1 × P vector and the video face features are an M × P matrix, computing (1 × P) · (M × P)^T yields a 1 × M feature vector. This vector represents M face similarities in one-to-one correspondence with the M videos, i.e., the face similarity between the user's face and the face in each of the M videos.
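In code, this step is a single matrix product. The sketch below is illustrative only (random vectors stand in for real embeddings) and assumes the features are L2-normalized so the dot product is a cosine similarity:

```python
# A sketch of S103: one matrix product yields the 1 x M similarity vector.
import numpy as np

M, P = 1000, 128                          # M videos, P-dimensional features
rng = np.random.default_rng(0)
user = rng.normal(size=(1, P))            # 1 x P user face features
library = rng.normal(size=(M, P))         # M x P pre-stored video face features

# L2-normalize so the dot product is cosine similarity in [-1, 1].
user /= np.linalg.norm(user, axis=1, keepdims=True)
library /= np.linalg.norm(library, axis=1, keepdims=True)

face_sim = user @ library.T               # (1 x P) @ (P x M) -> 1 x M
assert face_sim.shape == (1, M)
top5 = np.argsort(face_sim[0])[::-1][:5]  # indices of the 5 most similar videos
print(top5, face_sim[0, top5])
```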
In some embodiments, the videos among the M videos whose face similarity with the user's face is greater than a first preset value may be recommended to the user for face swapping.
S104: and identifying the gender characteristic of the user and the hair style characteristic of the user by using the face image.
When the method is implemented, the user's gender feature and hair style features are identified from the face image; for example, the user's hair style pattern, hair length, hair volume, hair color, and bang shape all belong to the user's hair style features.
S105: and calculating the hair style similarity between the hair style characteristics of the user and the hair style characteristics of the person in each video.
In implementation, the hair style similarity between the hair style characteristics of the user and the hair style characteristics of the person in each video is calculated.
In some embodiments, the hair style feature comprises N types of hair style features, and the hair style feature of the person in each video in the video library comprises the same N types. For each video, the hair style similarity between the user's hair style features and those of the person in a first video, where the first video is any video in the video library, is calculated as follows: traverse each of the N types, taking each type in turn as the type to be processed; then calculate the hair style similarity between the user's hair style feature of the type to be processed and the corresponding hair style feature of the person in the first video.
For example, the N types of hair style features include hair style pattern, hair length, hair volume, hair color, and bang shape; the per-type hair style similarities are calculated as follows (a code sketch follows this list):
and calculating the similarity between the hairstyle pattern of the user and the hairstyle pattern of the person in the first video as the hairstyle pattern similarity of the first video.
And calculating the similarity between the hair length of the user and the hair length of the person in the first video as the hair length similarity of the first video.
And calculating the similarity between the hair volume of the user and the hair volume of the person in the first video as the hair volume similarity of the first video.
And calculating the similarity between the hair color of the user and the hair color of the person in the first video as the hair color similarity of the first video.
And calculating the similarity between the shape of the bang of the user and the shape of the bang of the character in the first video library as the hair color similarity of the first video.
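A sketch of these per-type comparisons, under the assumption (left open by the description above) that pattern, color, and bang shape are categorical labels compared by exact match, while length and volume are scalars in [0, 1] compared by absolute difference:

```python
# A sketch of S105: per-type hair style similarities for one video.
# The comparison rules below are assumptions; the description leaves them open.
CATEGORICAL = ("pattern", "color", "bang_shape")  # exact match -> 1.0, else 0.0
SCALAR = ("length", "volume")                     # values assumed in [0, 1]

def hair_similarity(user: dict, video: dict) -> dict:
    """Return one similarity per hair style type, each in [0, 1]."""
    sims = {k: (1.0 if user[k] == video[k] else 0.0) for k in CATEGORICAL}
    sims.update({k: 1.0 - abs(user[k] - video[k]) for k in SCALAR})
    return sims

user_hair = {"pattern": "straight", "length": 0.7, "volume": 0.5,
             "color": "black", "bang_shape": "side-swept"}
video_hair = {"pattern": "straight", "length": 0.6, "volume": 0.5,
              "color": "brown", "bang_shape": "side-swept"}
print(hair_similarity(user_hair, video_hair))
# pattern and bang_shape match (1.0), color differs (0.0), length ~0.9, volume 1.0
```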
S106: and screening a target video corresponding to the user in a video library according to the face similarity and the hair style similarity, wherein the gender characteristics of the person in the target video are consistent with those of the user.
After the per-type hair style similarities are calculated, a target video corresponding to the user is screened from the video library using the hair style similarities and the face similarity in either of the following two ways (a combined code sketch follows the second way).
The first way is as follows:
Select the videos whose face similarity is greater than a first preset value from the video library as a second video set. Traverse each video in the video library, taking each video in turn as the first video: if the hair style pattern similarity, hair length similarity, hair volume similarity, hair color similarity, and bang shape similarity of the first video are all greater than a second preset value, add the first video to a first video set. After all videos have been traversed, take the intersection of the second video set and the first video set, select the target video from the videos in the intersection, and recommend the target video to the user.
The second way is as follows:
For each video in the video library, taking each in turn as the first video, use a similarity calculation model to compute a weighted sum of the face similarity, hair style pattern similarity, hair length similarity, hair volume similarity, hair color similarity, and bang shape similarity of the first video, obtaining the comprehensive similarity between the user and the person in the first video; traverse every video to obtain the comprehensive similarity between the user and the person in each video. Then screen the video library for the target videos whose comprehensive similarity is greater than a third preset value, and recommend them to the user.
Here, the gender feature of the person in every target video is consistent with that of the user. The gender feature can be identified from the face image, inferred from the user's face features and hair style features, or obtained from a gender label on the user's account collected from user data.
In some embodiments, the face matching method provided in the embodiments of the present invention further includes training a similarity calculation model, which specifically includes:
and acquiring a sample face image, and respectively calculating the similarity of the sample face image and the hair style characteristics of the figure corresponding to each video and the similarity of the face by using a plurality of models to be trained. The model to be trained comprises a plurality of parameters, and each parameter corresponds to the face similarity and the hair style similarity one to one, namely, the face similarity and the hair style similarity correspond to the weight. Here, a plurality of sets of parameters, that is, a plurality of models to be trained, are preset. Selecting a video with the face similarity larger than a first preset value and the similarity of the hair style characteristics larger than a second preset value for each model to be trained; changing the face of the sample face image to the face of the person corresponding to the selected video by using the face image changing model to obtain a plurality of target face images; the face image model may be a Generative Adaptive Networks (GAN) or a Cycle GAN. Inputting the sample face image and a plurality of target face images into a face recognition model, and calculating the similarity between the sample face image and the plurality of target face images; calculating the average value of the similarity of a plurality of target face images corresponding to each model to be trained; and selecting the model to be trained corresponding to the highest average value as a similarity calculation model.
An embodiment of the invention provides a face matching method: a face image of a user is obtained; the user's face features are extracted from the face image; the face similarity between the user's face and the face in each video is calculated according to the user's face features and the video face features corresponding to each video in a pre-stored video library; the user's gender feature and hair style features are identified from the face image; the hair style similarity between the user's hair style features and those of the person in each video is calculated; and a target video corresponding to the user is screened from the video library according to the face similarity and the hair style similarity, where the gender feature of the person in the target video is consistent with that of the user. In this way, videos whose faces are similar to the user's can be screened out of the video library, so that the video character after face swapping looks closer to the user's own face, improving the user experience.
Referring to fig. 2, an embodiment of the present invention provides a face matching apparatus, including:
an obtaining module 201, configured to obtain a face image of a user;
an extraction module 202, configured to extract facial features of a user from a facial image;
the first calculating module 203 is configured to calculate a human face similarity between a human face of a user and a human face in each video according to the human face features of the user and video human face features corresponding to each video in a pre-stored video library;
the identification module 204 is used for identifying the gender characteristics of the user and the hair style characteristics of the user by using the face image;
the second calculating module 205 is configured to calculate a hair style similarity between the hair style characteristics of the user and the hair style characteristics of the person in each video;
and the selecting module 206 is configured to screen a target video corresponding to the user in the video library according to the face similarity and the hair style similarity, where the gender feature of the person in the target video is consistent with the gender feature of the user.
In some embodiments, the hair style features of the user include N types of hair style features, the hair style features of the person in each video include N types of hair style features, the total number of videos in the video library is M, and M and N are integers greater than 1, respectively;
the second calculating module 205 is configured to calculate a hair style similarity between the hair style feature of the user and the hair style feature of the person in each video, and includes:
calculating the hair style similarity between the hair style characteristics of the user and the hair style characteristics of the person in the first video by the following steps:
traversing each type in the N types, and taking each type as a type to be processed;
calculating the hair style similarity between the hair style characteristics of the type to be processed of the user and the hair style characteristics of the type to be processed of the person in the first video; wherein the first video is any one of the videos in the video library.
In some embodiments, the N types of hair style features include: hair style pattern, hair length, hair volume, hair color, and bang shape;
the second calculating module 205 is specifically configured to calculate a hair style similarity between the hair style feature of the to-be-processed type of the user and the hair style feature of the to-be-processed type of the person in the first video, and includes:
calculating the similarity between the hair style pattern of the user and the hair style pattern of the person in the first video to serve as the hair style pattern similarity of the first video;
calculating the similarity between the hair length of the user and the hair length of the person in the first video to serve as the hair length similarity of the first video;
calculating the similarity between the hair volume of the user and the hair volume of the person in the first video to serve as the hair volume similarity of the first video;
calculating the similarity between the hair color of the user and the hair color of the person in the first video to serve as the hair color similarity of the first video;
and calculating the similarity between the bang shape of the user and the bang shape of the person in the first video to serve as the bang shape similarity of the first video.
The selecting module 206 is configured to filter a target video corresponding to the user in the video library according to the face similarity and the hair style similarity, and includes:
selecting videos with the face similarity larger than a first preset value from a video library to form a second video set;
traversing each video, taking each video as a first video, and if the hair style pattern similarity of the first video, the hair length similarity of the first video, the hair volume similarity of the first video, the hair color similarity of the first video and the bang shape similarity of the first video are all greater than a second preset value, adding the first video into the first video set;
and after traversing each video, taking the intersection of the second video set and the first video set, and selecting a target video from the intersected videos.
In some embodiments, the face similarity comprises a face similarity of the first video, the face similarity of the first video being a face similarity of a face of the user to a face in the first video;
the selecting module 206 is configured to filter a target video corresponding to the user in the video library according to the face similarity and the hair style similarity, and includes:
weighting and summing the face similarity of the first video, the hair style pattern similarity of the first video, the hair length similarity of the first video, the hair volume similarity of the first video, the hair color similarity of the first video and the bang shape similarity of the first video by using a similarity calculation model to obtain the comprehensive similarity of the user and the person in the first video;
traversing each video to obtain the comprehensive similarity between the user and the character in each video;
and screening the target videos with the comprehensive similarity larger than a third preset value in a video library.
In some embodiments, further comprising: a training module 207 for training the similarity calculation model;
a training module 207, configured to train the similarity calculation model, including:
and acquiring a sample face image.
And respectively calculating the similarity of the hair style characteristics of the sample face image and the person corresponding to each video and the face similarity by using a plurality of models to be trained.
And selecting a video with the face similarity being larger than a first preset value and the similarity of the hair style characteristics being larger than a second preset value for each model to be trained.
And changing the face of the sample face image to the face of the person corresponding to the selected video by using a face swapping model to obtain a plurality of target face images.
And inputting the sample face image and the plurality of target face images into a face recognition model, and calculating the similarity between the sample face image and the plurality of target face images.
Calculating the average value of the similarity of a plurality of target face images corresponding to each model to be trained;
and selecting the model to be trained corresponding to the highest average value as a similarity calculation model.
In some embodiments, the apparatus further comprises a recommending module 208;
and the recommending module 208 is configured to recommend the target video to the user.
In addition, the face matching method described in conjunction with FIG. 1 according to the embodiment of the present invention may be implemented by a computing device. FIG. 3 is a schematic diagram illustrating a hardware structure of a computing device according to an embodiment of the present invention.
The computing device may include a processor 301 and a memory 302 storing computer program instructions.
In particular, the processor 301 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing embodiments of the present invention.
The processor 301 reads and executes the computer program instructions stored in the memory 302 to implement any one of the face matching methods in the above embodiments.
In one example, the computing device may also include a communication interface 303 and a bus 310. As shown in FIG. 3, the processor 301, the memory 302, and the communication interface 303 are connected via the bus 310 and communicate with one another over it.
The communication interface 303 is mainly used for implementing communication between modules, apparatuses, units and/or devices in the embodiment of the present invention.
In addition, in combination with the face matching method in the foregoing embodiment, the embodiment of the present invention may provide a computer-readable storage medium to implement. The computer readable storage medium having stored thereon computer program instructions; the computer program instructions, when executed by a processor, implement any of the face matching methods in the above embodiments.
It is to be understood that the invention is not limited to the specific arrangements and instrumentality described above and shown in the drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications and additions or change the order between the steps after comprehending the spirit of the present invention.
The functional blocks shown in the above structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, they may be, for example, electronic circuits, application-specific integrated circuits (ASICs), suitable firmware, plug-ins, or function cards. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The programs or code segments may be stored in a machine-readable medium or transmitted as a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber-optic media, radio-frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the Internet or an intranet.
It should also be noted that the exemplary embodiments mentioned in this patent describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed simultaneously.
As described above, only the specific embodiments of the present invention are provided, and it can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the module and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It should be understood that the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present invention, and these modifications or substitutions should be covered within the scope of the present invention.
Claims (10)
1. A face matching method, characterized in that the method comprises:
acquiring a face image of a user;
extracting the facial features of the user from the facial image;
calculating the face similarity of the face of the user and the face in each video according to the face features of the user and video face features corresponding to each video in a pre-stored video library;
identifying gender characteristics of the user and hair style characteristics of the user by using the face image;
calculating the hair style similarity between the hair style characteristics of the user and the hair style characteristics of the person in each video;
and screening a target video corresponding to the user in the video library according to the face similarity and the hair style similarity, wherein the gender characteristics of the person in the target video are consistent with those of the user.
2. The method according to claim 1, wherein the hair style features of the user comprise N types of hair style features, the hair style features of the person in each video comprise the N types of hair style features, the total number of videos in the video library is M, and M and N are integers greater than 1, respectively;
the calculating the hair style similarity between the hair style characteristics of the user and the hair style characteristics of the person in each video comprises:
calculating the hair style similarity between the hair style characteristics of the user and the hair style characteristics of the person in the first video by the following steps:
traversing each type in the N types, and taking each type as a type to be processed;
calculating the hair style similarity between the hair style characteristics of the type to be processed of the user and the hair style characteristics of the type to be processed of the person in the first video; wherein the first video is any one of the videos in the video library.
3. The method of claim 2,
the N types of hair style features include: style, hair length, hair curl, hair color, bang shape;
the calculating the hair style similarity between the hair style feature of the to-be-processed type of the user and the hair style feature of the to-be-processed type of the person in the first video comprises:
calculating the similarity between the hair style pattern of the user and the hair style pattern of the person in the first video as the hair style pattern similarity of the first video;
calculating the similarity between the hair length of the user and the hair length of the person in the first video as the hair length similarity of the first video;
calculating the similarity between the hair volume of the user and the hair volume of the person in the first video as the hair volume similarity of the first video;
calculating the similarity between the hair color of the user and the hair color of the person in the first video as the hair color similarity of the first video;
and calculating the similarity between the bang shape of the user and the bang shape of the person in the first video as the bang shape similarity of the first video.
4. The method according to claim 3, wherein the screening of the target video corresponding to the user in the video library according to the face similarity and the hair style similarity comprises:
selecting the videos with the face similarity larger than a first preset value from the video library to form a second video set;
traversing each video, taking each video as the first video, and if the hair style pattern similarity of the first video, the hair length similarity of the first video, the hair volume similarity of the first video, the hair color similarity of the first video and the bang shape similarity of the first video are all greater than a second preset value, adding the first video into a first video set;
and after traversing each video, taking the intersection of the second video set and the first video set, and selecting the target video from the intersected videos.
5. The method of claim 3, wherein the face similarity comprises a face similarity of the first video, the face similarity of the first video being a face similarity of the user's face to a face in the first video;
the screening of the target video corresponding to the user in the video library according to the face similarity and the hair style similarity comprises:
weighting and summing the face similarity of the first video, the hair style pattern similarity of the first video, the hair length similarity of the first video, the hair volume similarity of the first video, the hair color similarity of the first video and the bang shape similarity of the first video by using a similarity calculation model to obtain the comprehensive similarity between the user and the person in the first video;
traversing each video to obtain the comprehensive similarity of the user and the characters in each video;
and screening the target video with the comprehensive similarity larger than a third preset value in the video library.
6. The method of claim 5, further comprising: training the similarity calculation model;
the training the similarity calculation model includes:
acquiring a sample face image;
respectively calculating the similarity of the hair style characteristics of the sample face image and the person corresponding to each video and the face similarity by using a plurality of models to be trained;
selecting a video with the face similarity being larger than a first preset value and the similarity of the hair style characteristics being larger than a second preset value for each model to be trained;
changing the face of the sample face image to the face of the person corresponding to the selected video by using a face swapping model to obtain a plurality of target face images;
inputting the sample face image and the target face images into a face recognition model, and calculating the similarity between the sample face image and the target face images;
calculating the average value of the similarity of the plurality of target face images corresponding to each model to be trained;
and selecting the model to be trained corresponding to the highest average value as the similarity calculation model.
7. The method of claim 1, further comprising:
and recommending the target video to the user.
8. A face matching apparatus, the apparatus comprising:
the acquisition module is used for acquiring a face image of a user;
the extraction module is used for extracting the face features of the user from the face image;
the first calculation module is used for calculating the face similarity of the face of the user and the face in each video according to the face features of the user and video face features corresponding to each video in a pre-stored video library;
the identification module is used for identifying the gender characteristic of the user and the hair style characteristic of the user by utilizing the face image;
a second calculating module, configured to calculate a hair style similarity between the hair style feature of the user and the hair style feature of the person in each video;
and the selecting module is used for screening a target video corresponding to the user in the video library according to the face similarity and the hair style similarity, wherein the gender characteristics of the person in the target video are consistent with the gender characteristics of the user.
9. A computing device, comprising: at least one processor, at least one memory, and computer program instructions stored in the memory that, when executed by the processor, implement the method of any of claims 1-7.
10. A computer-readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1-7.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN201910526561.8A | 2019-06-18 | 2019-06-18 | Face matching method, device, equipment and medium |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN112101072A (en) | 2020-12-18 |
Family

- ID: 73749385

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN201910526561.8A (Pending) | Face matching method, device, equipment and medium | 2019-06-18 | 2019-06-18 |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20120049761A (en) * | 2010-11-09 | 2012-05-17 | 김양웅 | Apparatus and method of searching for target based on matching possibility |
CN106372068A (en) * | 2015-07-20 | 2017-02-01 | 中兴通讯股份有限公司 | Method and device for image search, and terminal |
CN105005777A (en) * | 2015-07-30 | 2015-10-28 | 科大讯飞股份有限公司 | Face-based audio and video recommendation method and face-based audio and video recommendation system |
CN105069746A (en) * | 2015-08-23 | 2015-11-18 | 杭州欣禾圣世科技有限公司 | Video real-time human face substitution method and system based on partial affine and color transfer technology |
CN105868684A (en) * | 2015-12-10 | 2016-08-17 | 乐视网信息技术(北京)股份有限公司 | Video information acquisition method and apparatus |
CN108009521A (en) * | 2017-12-21 | 2018-05-08 | 广东欧珀移动通信有限公司 | Humanface image matching method, device, terminal and storage medium |
Non-Patent Citations (2)
Title |
---|
Fletcher, K. I.: "Attention to internal face features in unfamiliar face matching", British Journal of Psychology, vol. 99, 1 August 2008, pages 379-394
黄孝平 (Huang Xiaoping): "基于体绘制思维的人脸识别算法优化研究" [Research on optimization of a face recognition algorithm based on volume rendering], 现代电子技术 (Modern Electronics Technique), no. 24, 15 December 2015
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113658035A (en) * | 2021-08-17 | 2021-11-16 | 北京百度网讯科技有限公司 | Face transformation method, device, equipment, storage medium and product |
CN113658035B (en) * | 2021-08-17 | 2023-08-08 | 北京百度网讯科技有限公司 | Face transformation method, device, equipment, storage medium and product |
CN115776597A (en) * | 2021-08-30 | 2023-03-10 | 海信集团控股股份有限公司 | Audio and video generation method and device and electronic equipment |
CN113965802A (en) * | 2021-10-22 | 2022-01-21 | 深圳市兆驰股份有限公司 | Immersive video interaction method, device, equipment and storage medium |
Legal Events

| Date | Code | Title | Description |
| --- | --- | --- | --- |
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |