CN115130493A - Face deformation recommendation method, device, equipment and medium based on image recognition


Info

Publication number: CN115130493A
Application number: CN202111604399.0A
Authority: CN (China)
Prior art keywords: facial, face, deformation, user, image
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 张涛 (Zhang Tao)
Current Assignee: Tencent Technology Shenzhen Co Ltd
Original Assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Publication of CN115130493A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval of still image data
    • G06F16/53: Querying
    • G06F16/535: Filtering based on additional data, e.g. user or group profiles
    • G06F16/538: Presentation of query results

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to the technical field of artificial intelligence, and provides a facial deformation recommendation method, device, equipment and medium based on image recognition. The method comprises the following steps: displaying a plurality of selectable facial styles; in response to a style selection instruction of a user for at least one of the facial styles, comparing the facial features corresponding to a facial image to be deformed provided by the user with the facial common features corresponding to the selected facial style; acquiring facial deformation recommendation information corresponding to the facial feature comparison result; and displaying the facial deformation recommendation information together with the facial deformation effect obtained by applying it to the facial image to be deformed. In this way, a rich set of selectable facial styles can be provided for the user, the facial features of the provided facial image to be deformed are compared with the facial common features of the style the user selects, and facial deformation recommendation information and a facial deformation effect preview corresponding to the comparison result are provided to the user objectively and accurately.

Description

Face deformation recommendation method, device, equipment and medium based on image recognition
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a facial deformation recommendation method and apparatus based on image recognition, a computer device, and a storage medium.
Background
With the development of artificial intelligence technology, facial image recognition has emerged. It can identify the facial features contained in an image and is widely applied in face-related scenarios. Applying facial image recognition to the facial deformation recommendation scenario makes it possible to provide corresponding facial deformation suggestions to the user conveniently and quickly.
However, the facial deformation recommendation methods provided by the current technology rely on facial deformation thresholds set by a specific user group according to its own subjective standards. Such a standard facial deformation threshold is often too one-sided, so a facial deformation scheme cannot be recommended to the user objectively and accurately.
Disclosure of Invention
In view of the above, it is necessary to provide a face deformation recommendation method, apparatus, computer device and storage medium based on image recognition to solve the above technical problems.
A method for facial deformation recommendation based on image recognition, the method comprising:
displaying a plurality of facial styles;
responding to a style selection instruction for at least one facial style among the plurality of facial styles, and acquiring facial deformation recommendation information corresponding to a facial feature comparison result; the facial feature comparison result is obtained by comparing the facial features corresponding to a facial image to be deformed, which is provided by the user, with the facial common features corresponding to the selected facial style;
and displaying the facial deformation recommendation information, and displaying a facial deformation effect obtained by applying the facial deformation recommendation information to the facial image to be deformed.
An apparatus for recommending a facial deformation based on image recognition, the apparatus comprising:
the style display module is used for displaying a plurality of facial styles;
the information acquisition module is used for responding to a style selection instruction for at least one facial style among the plurality of facial styles and acquiring facial deformation recommendation information corresponding to a facial feature comparison result; the facial feature comparison result is obtained by comparing the facial features corresponding to a facial image to be deformed, which is provided by the user, with the facial common features corresponding to the selected facial style;
and the deformation display module is used for displaying the facial deformation recommendation information and displaying a facial deformation effect obtained by applying the facial deformation recommendation information to the facial image to be deformed.
A computer device comprising a memory storing a computer program and a processor that implements the following steps when executing the computer program:
displaying a plurality of facial styles; responding to a style selection instruction for at least one facial style among the plurality of facial styles, and acquiring facial deformation recommendation information corresponding to a facial feature comparison result; the facial feature comparison result is obtained by comparing the facial features corresponding to the facial image to be deformed provided by the user with the facial common features corresponding to the selected facial style; and displaying the facial deformation recommendation information, and displaying a facial deformation effect obtained by applying the facial deformation recommendation information to the facial image to be deformed.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the following steps:
displaying a plurality of facial styles; responding to a style selection instruction for at least one facial style among the plurality of facial styles, and acquiring facial deformation recommendation information corresponding to a facial feature comparison result; the facial feature comparison result is obtained by comparing the facial features corresponding to the facial image to be deformed provided by the user with the facial common features corresponding to the selected facial style; and displaying the facial deformation recommendation information, and displaying a facial deformation effect obtained by applying the facial deformation recommendation information to the facial image to be deformed.
A computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the steps of the method described above.
The facial deformation recommendation method, apparatus, computer device and storage medium based on image recognition display a plurality of selectable facial styles; in response to a style selection instruction of the user for at least one of the facial styles, the facial features corresponding to the facial image to be deformed provided by the user are compared with the facial common features corresponding to the selected facial style, facial deformation recommendation information corresponding to the facial feature comparison result is acquired, and then the facial deformation recommendation information and the facial deformation effect obtained by applying it to the facial image to be deformed are displayed. This scheme offers the user a rich set of selectable facial styles and, according to the style the user selects, compares the facial features of the provided facial image to be deformed with the facial common features of that style, so that facial deformation recommendation information and a facial deformation effect preview corresponding to the comparison result are provided to the user objectively and accurately.
Drawings
FIG. 1 is a diagram of an embodiment of an application environment of a facial deformation recommendation method based on image recognition;
FIG. 2 is a flowchart illustrating a method for recommending facial deformation based on image recognition according to an embodiment;
FIG. 3 is a schematic diagram of an interface showing a facial style in one embodiment;
FIG. 4 is a flowchart illustrating steps of obtaining facial deformation recommendation information corresponding to a facial feature comparison result in one embodiment;
FIG. 5 is an interface diagram showing a face template image characterized by face commonality features in one embodiment;
FIG. 6 is a schematic diagram of an interface for importing a face image to be deformed according to an embodiment;
FIG. 7 is a diagram illustrating an interface for recommending facial deformation information, according to an embodiment;
FIG. 8 is a schematic diagram of an interface for previewing a face deformation effect in one embodiment;
FIG. 9 is a flowchart illustrating the steps for obtaining face commonality characteristics corresponding to each face style in one embodiment;
FIG. 10 is a flowchart illustrating the steps of determining a face commonality feature that a plurality of face image samples have in one embodiment;
FIG. 11 is a flowchart illustrating a facial reshaping scheme intelligent recommendation and effect preview method based on a face recognition algorithm and deep learning in an application example;
FIG. 12 is a schematic structural diagram of a facial reshaping scheme intelligent recommendation and effect preview system based on a face recognition algorithm and deep learning in an application example;
FIG. 13 is a data processing flow diagram of the scheme generation module of the system in an application example;
FIG. 14 is a block diagram showing an exemplary configuration of a facial deformation recommendation apparatus according to an embodiment;
FIG. 15 is a diagram showing an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The facial deformation recommendation method based on image recognition provided by the present application can be applied in the application environment shown in fig. 1, in which the terminal 110 communicates with the server 120 through a network. The terminal 110 may be, but is not limited to, a personal computer, a notebook computer, a smart phone, a tablet computer or a portable wearable device, and the server 120 may be implemented as an independent server or a server cluster formed by a plurality of servers.
Specifically, the facial deformation recommendation method based on image recognition provided by the present application may be executed by the terminal 110 alone, or by the terminal 110 in cooperation with the server 120. When executed by the terminal 110 alone, the terminal 110 presents a plurality of facial styles to the user, and the user can select at least one of them on the terminal 110; the selected style is referred to as the selected facial style. The terminal 110 then responds to the user's style selection instruction for at least one of the displayed facial styles and acquires facial deformation recommendation information corresponding to a facial feature comparison result. The facial feature comparison result is obtained by the terminal 110 by comparing the facial features corresponding to the facial image to be deformed provided by the user with the facial common features corresponding to the selected facial style. After the comparison result is obtained, the terminal 110 acquires the corresponding facial deformation recommendation information, displays it to the user, and displays the facial deformation effect obtained by applying the facial deformation recommendation information to the facial image to be deformed.
In addition, in the scenario where the terminal 110 cooperates with the server 120, the display steps may be executed mainly by the terminal 110, while the facial feature comparison process and the facial style information may be provided to the terminal 110 by the server 120. Specifically, based on learning from facial images of multiple facial styles by means of artificial intelligence techniques, the server 120 may analyze, for each facial style, the features that the facial images of that style have in common, referred to as facial common features. These facial common features may be stored in advance and associated with the corresponding facial style, so that the corresponding facial common features can be looked up on the server 120 by facial style.
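The patent does not prescribe a concrete storage scheme for this association. As a minimal sketch, assuming each facial style is keyed by a style identifier and its common feature is a set of named, normalized landmark coordinates, the server-side lookup could be organized as follows (all names and values are illustrative):

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

# A facial common feature is modeled here as a set of named landmark
# positions (normalized x, y coordinates); this is an assumption made
# for illustration, not a structure defined by the patent.
@dataclass
class FaceCommonFeature:
    landmarks: Dict[str, Tuple[float, float]] = field(default_factory=dict)

# Server-side association: facial style id -> precomputed common feature.
STYLE_TO_COMMON_FEATURE: Dict[str, FaceCommonFeature] = {
    "style_region_A_female": FaceCommonFeature({"nose_tip": (0.50, 0.62),
                                                "chin": (0.50, 0.95)}),
    "style_region_B_male": FaceCommonFeature({"nose_tip": (0.50, 0.60),
                                              "chin": (0.50, 0.97)}),
}

def lookup_common_feature(style_id: str) -> FaceCommonFeature:
    """Return the stored facial common feature for a selected facial style."""
    return STYLE_TO_COMMON_FEATURE[style_id]
```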
Therefore, based on the artificial-intelligence learning of facial images, the server 120 can provide the terminal 110 with a plurality of facial styles for the user to choose from. When the user needs a facial deformation recommendation, the terminal 110 displays these facial styles; their number can be updated continuously as the server 120 learns from more facial images, so the styles available to the user can be enriched over time. After the terminal 110 displays the facial styles, the user selects at least one of them on the terminal 110. The terminal 110 then responds to the style selection instruction and requests the facial deformation recommendation information corresponding to the facial feature comparison result from the server 120. In other words, in the scenario where the terminal 110 and the server 120 cooperate, the facial feature comparison and the acquisition of the facial deformation recommendation information can be executed by the server 120 to reduce the data processing load on the terminal 110. As in the terminal-only case, the server 120 compares the facial features corresponding to the facial image to be deformed provided by the user with the facial common features corresponding to the selected facial style, acquires the facial deformation recommendation information corresponding to the comparison result, and obtains the facial effect produced by applying the recommendation information to the facial image to be deformed. The server 120 then feeds the facial deformation recommendation information and the corresponding facial deformation effect back to the terminal 110, and the terminal 110 displays both to the user.
It should be understood that the processes described above for the cooperative scenario, namely deriving facial common features by learning and analyzing facial images of multiple facial styles with artificial intelligence techniques, comparing facial features after the user selects a style, and providing facial deformation recommendation information and a facial deformation effect, may also be executed in whole or in part by the terminal 110 in the scenario where the terminal 110 alone executes the method provided by the present application.
The facial deformation recommendation method based on image recognition provided by the present application can be embodied in the above scenarios and realized by means of artificial intelligence techniques, so that objective and accurate facial deformation recommendation information and a corresponding facial deformation effect preview are provided to the user on an intelligent basis.
Among them, Artificial Intelligence (AI) is a theory, method, technique and application system that simulates, extends and expands human Intelligence using a digital computer or a machine controlled by a digital computer, senses the environment, acquires knowledge and uses the knowledge to obtain the best result. That is, artificial intelligence is a comprehensive technique of computer science, trying to understand the essence of intelligence and produce a new intelligent machine that can react in a similar way to human intelligence. Artificial intelligence is the research of the design principle and the realization method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
The artificial intelligence technology is a comprehensive subject, and relates to the field of extensive technology, namely the technology of a hardware level and the technology of a software level. The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
Among them, computer vision technology (CV) is the science of studying how to make machines "see"; more specifically, it refers to using cameras and computers instead of human eyes to perform machine vision tasks such as identifying and measuring targets, and to further process the images so that they become more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies theories and techniques that attempt to build artificial intelligence systems capable of acquiring information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, simultaneous localization and map construction, and also common biometric technologies such as face recognition and fingerprint recognition.
Machine Learning (ML) is a multi-domain interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and other disciplines. It specializes in studying how computers simulate or realize human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve their own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning and learning from instruction.
Accordingly, the facial deformation recommendation method based on image recognition provided by the present application can recognize and learn facial feature information from a large number of facial images by means of computer vision and machine learning in artificial intelligence, and obtain through analysis the facial common features corresponding to different facial styles. A user can then provide a facial image to be deformed and confirm a selected facial style; the difference between its facial features and those of the selected facial style is analyzed, and the corresponding facial deformation recommendation information and a facial deformation effect preview are provided. This overcomes the single, subjective and inaccurate facial deformation recommendations that arise in the conventional technology from relying on facial deformation thresholds set by a specific user group according to its own subjective standards, and allows facial deformation recommendation information and a facial deformation preview effect corresponding to the comparison result to be provided to the user objectively and accurately.
The following describes the image recognition-based facial deformation recommendation method provided in the present application with reference to the following embodiments and accompanying drawings.
In one embodiment, as shown in fig. 2, a facial deformation recommendation method based on image recognition is provided, which is described by taking the method as an example applied to the terminal 110 in fig. 1, and includes the following steps:
step S201, displaying a plurality of facial styles;
in this step, as shown in fig. 3, after the terminal 110 enters the face deformation recommendation system, a plurality of face styles may be displayed on the first interface 310 for the user to select, and the user may select one or more of the face styles, so as to assist the user in selecting the face style, the terminal 110 may further display a face template image corresponding to the face style near a position corresponding to the face style on the first interface 310. For example, the user may select at least one of the facial styles 1 to 5 on the first interface 310, and the user may also select a customized facial style on the first interface 310 to meet the personalized needs of the user for facial deformation.
Here, a facial style refers to the style of the face of an object, mainly a person, with attributes such as region, identity, age and/or gender: the faces of people with different regions, identities, ages and/or genders generally have different styles, while the faces of people sharing the same region, identity, age and/or gender generally have certain commonalities.
The terminal 110 can therefore provide a plurality of facial styles and display the face template images corresponding to them for the user to reference and select from. For example, the facial styles provided by the terminal 110 may include an A-region style, a B-region style, an AB-region mixed style and the like, and may further include a user-defined style provided by the user. The user can refer to the corresponding face template images and select one or more facial styles to submit to the terminal 110 for the subsequent recommendation process; the following description of the facial deformation recommendation process mainly takes the selection of a single facial style as an example.
Step S202, responding to a style selection instruction of at least one of the facial styles, and acquiring facial deformation recommendation information corresponding to the facial feature comparison result.
Specifically, the user may trigger a style selection instruction for a facial style (referred to as the selected facial style) by, for example, clicking the face template image corresponding to the desired style on the first interface 310 provided by the terminal 110. In response to this style selection instruction, the terminal 110 may compare the facial features corresponding to the facial image to be deformed provided by the user with the facial common features corresponding to the selected facial style; the result of this feature comparison is referred to as the facial feature comparison result, and the terminal 110 acquires the facial deformation recommendation information corresponding to it.
Before the terminal 110 performs the facial feature comparison, the user needs to provide a facial image to be deformed, i.e. an image containing the face to be deformed, which may be the user's own facial image or that of another user. After obtaining the facial image to be deformed and determining the facial style selected by the user, the terminal 110 may analyze the facial features of the facial image to be deformed based on a face recognition algorithm and compare them with the facial common features corresponding to the selected facial style. The resulting facial feature comparison result can represent the main differences between the facial features of the facial image to be deformed and the facial common features of the selected facial style, from which the terminal 110 can give facial deformation recommendation information, such as raising the nose bridge by 1 mm. In other words, the facial deformation recommendation information describes what deformation should be applied to the face presented in the facial image to be deformed in order to approach the selected facial style.
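The patent does not give a formula for turning feature differences into recommendation items. The following is a minimal sketch under the assumption that both the image's facial features and the style's common features are normalized landmark coordinates, that differences are converted to millimetres with an assumed pixel-to-millimetre scale, and that only differences above a small tolerance produce a recommendation item (all thresholds and names are illustrative):

```python
from typing import Dict, List, Tuple

Landmarks = Dict[str, Tuple[float, float]]  # name -> (x, y), normalized

def compare_features(user: Landmarks, common: Landmarks,
                     mm_per_unit: float = 100.0,
                     tolerance_mm: float = 0.5) -> List[str]:
    """Compare the user's facial landmarks with the style's common landmarks
    and produce human-readable facial deformation recommendation items."""
    recommendations = []
    for name, (cx, cy) in common.items():
        if name not in user:
            continue
        ux, uy = user[name]
        dx_mm = (cx - ux) * mm_per_unit
        dy_mm = (cy - uy) * mm_per_unit
        if abs(dx_mm) >= tolerance_mm or abs(dy_mm) >= tolerance_mm:
            recommendations.append(
                f"{name}: move {dx_mm:+.1f} mm horizontally, "
                f"{dy_mm:+.1f} mm vertically")
    return recommendations

# Example: a nose-bridge landmark sitting 1 mm lower than the style's.
items = compare_features({"nose_bridge": (0.50, 0.55)},
                         {"nose_bridge": (0.50, 0.54)})
print(items)
```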
In addition to the facial deformation recommendation information, the terminal 110 may also obtain the facial deformation effect produced by applying the recommendation information to the facial image to be deformed. The facial deformation effect may be presented statically or dynamically: the static form may be the single image obtained by applying the facial deformation recommendation information to the facial image to be deformed, while the dynamic form may show this application process as a video or animation.
Step S203, displaying the face deformation recommendation information and displaying the face deformation effect obtained by the face deformation recommendation information acting on the face image to be deformed.
After the terminal 110 obtains the facial deformation recommendation information and the facial deformation effect produced by applying it to the facial image to be deformed, it displays the facial deformation recommendation information and presents the facial deformation effect to the user.
In this facial deformation recommendation method based on image recognition, the terminal 110 displays a plurality of selectable facial styles; in response to the user's style selection instruction for at least one of them, it compares the facial features corresponding to the facial image to be deformed provided by the user with the facial common features corresponding to the selected facial style, obtains the facial deformation recommendation information corresponding to the comparison result, and then displays the recommendation information together with the facial deformation effect produced by applying it to the facial image to be deformed. The scheme thus offers the user a rich set of selectable facial styles and, according to the style the user selects, compares the facial features of the provided facial image to be deformed with the facial common features of that style, so that facial deformation recommendation information and a facial deformation effect preview corresponding to the comparison result are provided to the user objectively and accurately.
In some embodiments, before performing the facial feature comparison for the user, the terminal 110 may let the user adjust the facial common features corresponding to the selected facial style, making the facial deformation recommendation more personalized. Specifically, as shown in fig. 4, the acquisition of facial deformation recommendation information corresponding to the facial feature comparison result in response to a style selection instruction for at least one facial style in step S202 may include the following steps:
step S401, in response to the style selection instruction, displaying a face template image characterized by the facial common features, and adaptively changing the display of the face template image as the facial common features are adjusted by the user;
Referring to fig. 5, after the user selects one of the facial styles, the terminal 110 responds to the style selection instruction and displays on the second interface 510 the face template image 511 characterized by the facial common feature K1 corresponding to the selected facial style. It should be noted that the facial common feature K1 shown on the second interface 510 is illustrated mainly as a set of feature points located on the facial contour; the facial common feature corresponding to a facial style is not limited to feature points on the facial contour, and may include combinations of feature point sets located on other parts such as the eyes, mouth, eyebrows and nose. In practice, if the user is satisfied with the selected facial style, a marking operation indicating the degree of satisfaction may be performed on the facial style in the satisfaction selection area 512.
Further, the user may adjust the facial common feature, and the terminal 110 adaptively changes the display of the face template image as the feature is adjusted. Specifically, while the face template image 511 characterized by the facial common feature K1 is presented on the second interface 510, the user may adjust the position of one or more feature points in K1; when a position is adjusted, the terminal 110 changes the display of the face template image in real time according to the adjusted feature point position. In other words, during the display of the face template image, the user is given the ability to modify the selected facial style and to preview the modification effect in real time.
Specifically, the user's self-adjustment of the selected facial style may be achieved by the following steps. In one embodiment, adaptively changing the display of the face template image as the facial common feature is adjusted by the user in step S401 may include:
providing position parameters of the facial common feature points, i.e. the positions of the facial common feature points on the face template image, and adaptively changing the display of the face template image according to the user's adjustment of those position parameters.
Still referring to fig. 5, the terminal 110 may present a parameter adjustment area 513 on the second interface 510, in which the position parameters (or position coordinates) of the facial common feature points of the selected facial style's common feature K1 on the face template image 511 are provided. The user can change the position coordinates (such as x1, y1; x2, ...) of one or more facial common feature points in the parameter adjustment area 513, and the terminal 110 adaptively changes the display of the face template image 511 according to these adjustments. In this way, after selecting a facial style, the user is conveniently assisted in making personalized adjustments to the facial common features, with the modification effect previewed in real time.
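As a minimal sketch of this parameter-adjustment step, assuming the facial common feature is stored as named point coordinates and that a separate rendering routine redraws the template from those points (the render function below is a hypothetical placeholder, not something the patent specifies):

```python
from typing import Dict, Tuple

Point = Tuple[float, float]

def apply_point_adjustments(common_feature: Dict[str, Point],
                            adjustments: Dict[str, Point]) -> Dict[str, Point]:
    """Return a copy of the facial common feature with the user's new
    position parameters applied to the selected feature points."""
    updated = dict(common_feature)
    updated.update(adjustments)
    return updated

def render_face_template(points: Dict[str, Point]) -> None:
    # Hypothetical placeholder for the terminal redrawing the face template
    # image 511 from the (possibly adjusted) feature points in real time.
    print("redrawing template with", len(points), "feature points")

k1 = {"contour_left": (0.20, 0.50), "contour_right": (0.80, 0.50)}
k1_adjusted = apply_point_adjustments(k1, {"contour_left": (0.22, 0.50)})
render_face_template(k1_adjusted)
```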
Step S402, responding to a facial feature comparison instruction from a user, and acquiring facial deformation recommendation information corresponding to a facial feature comparison result.
As shown in fig. 5, after browsing the face template image corresponding to the selected facial style, the user may trigger a facial feature comparison instruction on the second interface 510 via "compare with me". The terminal 110 responds to this instruction and, once the user has provided a facial image to be deformed, performs the facial feature comparison between that image and the facial common feature to obtain the corresponding facial deformation recommendation information. Since the user may choose to adjust the facial common features of the selected facial style on the second interface 510 or leave them unchanged, the facial feature comparison result obtained by the terminal 110 is, in the unadjusted case, the comparison between the unadjusted facial common features and the facial features of the facial image to be deformed, and, in the adjusted case, the comparison between the user-adjusted facial common features and the facial features of the facial image to be deformed.
The terminal 110 may support multiple ways for the user to provide the facial image to be deformed, for example acquiring multi-angle facial information of the user in real time through the camera of the terminal 110, or importing an image from the picture library of the terminal 110, which also covers the situation where it is inconvenient for the user to turn on the camera. Specifically, in some embodiments, after the user clicks "compare with me" on the second interface 510 shown in fig. 5, the terminal 110 may display the third interface 610 shown in fig. 6, on which the user can use the camera or import a facial image to be deformed from the picture library of the terminal 110. After importing the facial image to be deformed, the user may click "start comparison" to instruct the terminal 110 to compare the facial features of the facial image to be deformed with the facial common features of the selected facial style, so that the terminal 110 obtains the corresponding facial deformation recommendation information from the comparison result.
As shown in fig. 7, after obtaining the facial deformation recommendation information, the terminal 110 may display it on the fourth interface 710. In some embodiments, when the terminal 110 displays the facial deformation recommendation information 714 on the fourth interface 710, the user may modify and adjust it and click "effect preview" to view the preview effect, i.e. the facial deformation effect obtained by applying the facial deformation recommendation information to the facial image to be deformed.
Specifically, displaying in step S203 the facial deformation effect obtained by applying the facial deformation recommendation information to the facial image to be deformed may include:
obtaining adjusted facial deformation recommendation information according to the user's adjustment of the facial deformation recommendation information; and, in response to an effect preview instruction from the user, displaying the facial deformation effect obtained by applying the adjusted facial deformation recommendation information to the facial image to be deformed.
In this embodiment, after the user clicks "start comparison" on the third interface 610 shown in fig. 6 to trigger the facial feature comparison, the terminal 110 obtains the facial deformation recommendation information corresponding to the comparison result and provides the fourth interface 710 shown in fig. 7. On the fourth interface 710, the terminal 110 displays the face template image 711 characterized by the facial common feature K1, the facial image to be deformed 712 provided by the user together with its facial features K2, and, in the recommendation information display area 714, the facial deformation recommendation information corresponding to the comparison result. If the user is satisfied with the comparison process and result of the facial deformation recommendation information, they can be marked or collected in the comparison result marking/collecting area 713.
Further, the terminal 110 may make the facial deformation recommendation information in the recommendation information display area 714 of the fourth interface 710 available for the user to modify and adjust. Specifically, the user may modify the parameters of individual items in the facial deformation recommendation information, and may even delete items or add additional adjustment items; for example, the user may change the "2 mm" in "raise the nose bridge by 2 mm", delete that item entirely, or add other adjustment items. The terminal 110 thus gives the user more autonomy and customization capability on the fourth interface 710 and avoids facial deformation effects that are uniform and lack personalization.
After adjusting the facial deformation recommendation information, the user can click "effect preview" to trigger the terminal 110 to apply the adjusted facial deformation recommendation information to the facial image to be deformed 712 to obtain a facial deformation effect, which is displayed on the fifth interface 810 shown in fig. 8. The facial deformation effect may be a picture, which the user can save. In other words, after adjusting the facial deformation recommendation scheme, the user can check the expected facial deformation effect of the scheme applied to the facial image to be deformed 712, which supports the user in tuning the recommendation scheme more precisely to achieve the desired facial shape.
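How the preview image is synthesized is not detailed in the patent. A minimal sketch is given below, assuming each adjusted recommendation item is a named landmark offset and that some image-warping routine moves the source landmarks toward the offset targets (the warp function is a stub placeholder, since the actual deformation algorithm is not specified):

```python
from typing import Dict, Tuple
import numpy as np

Point = Tuple[float, float]

def preview_deformation(image: np.ndarray,
                        source_landmarks: Dict[str, Point],
                        adjusted_offsets_mm: Dict[str, Point],
                        mm_per_unit: float = 100.0) -> np.ndarray:
    """Build target landmarks from the adjusted recommendation items and
    produce a preview of the facial deformation effect."""
    target_landmarks = {}
    for name, (x, y) in source_landmarks.items():
        dx_mm, dy_mm = adjusted_offsets_mm.get(name, (0.0, 0.0))
        target_landmarks[name] = (x + dx_mm / mm_per_unit,
                                  y + dy_mm / mm_per_unit)
    return warp_to_landmarks(image, source_landmarks, target_landmarks)

def warp_to_landmarks(image: np.ndarray,
                      src: Dict[str, Point],
                      dst: Dict[str, Point]) -> np.ndarray:
    # Stub: a real implementation would warp the face image so that the
    # source landmarks move to the destination landmarks (e.g. with a
    # thin-plate-spline or mesh-based warp); here the image is returned
    # unchanged so the sketch stays runnable.
    return image.copy()

face = np.zeros((256, 256, 3), dtype=np.uint8)
preview = preview_deformation(face,
                              {"nose_bridge": (0.50, 0.55)},
                              {"nose_bridge": (0.0, -1.0)})  # raise by 1 mm
```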
In one embodiment, as shown in fig. 9, before the plurality of facial styles are displayed in step S201, the method may further obtain the facial common feature corresponding to each facial style through the following steps:
Step S901, acquiring the facial image sample set corresponding to each facial style.
In this step, the terminal 110 may obtain facial image sample sets corresponding to the various facial styles. The facial image sample set for each facial style may include a plurality of facial image samples that satisfy a preset recommendation condition, i.e. every sample in each set satisfies the condition. The preset recommendation condition may, for example, require a preset identity attribute and a preset age attribute, or may be that the image was imported by the user; in other words, a facial image sample set may contain images of the faces of objects satisfying the preset identity and age attributes, or images imported by the user.
In some embodiments, step S901 may include:
the method comprises the steps of collecting a plurality of first face image samples of an object with a preset identity attribute and a preset age attribute, and classifying the plurality of first face image samples according to a plurality of preset face style classification attributes to obtain a face image sample set corresponding to each face style.
Specifically, the object with the preset identity attribute and the preset age attribute may be, for example, a star of a male or a female in the age range of 15 to 30, and the star of a male or a female is selected because the facial features of the star may substantially represent different definitions for beauty at different times, so the terminal 110 may collect a plurality of facial image samples of such an object with the preset identity attribute and the preset age attribute to obtain a plurality of first facial image samples, and the terminal 110 may further improve the universality and accuracy of the data source by performing measures such as rating and scoring on the first facial image samples. After obtaining the plurality of first face image samples, the terminal 110 may classify the first face image samples according to a plurality of preset face style classification attributes, so as to form a face image sample set corresponding to various face styles, for example, the preset face style classification attributes may include face style classification attributes of different regions and different genders, that is, the terminal 110 may classify the plurality of first face image samples into different face styles according to different regions and different genders to form a corresponding face image sample set.
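A minimal sketch of this grouping step is shown below, assuming each collected sample carries region and gender attributes and that the style key is simply the combination of those attributes (the attribute names and sample structure are illustrative):

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class FaceImageSample:
    path: str       # location of the first facial image sample
    region: str     # e.g. "region_A", "region_B"
    gender: str     # e.g. "female", "male"

def build_style_sample_sets(samples: List[FaceImageSample]
                            ) -> Dict[str, List[FaceImageSample]]:
    """Classify first facial image samples into facial-style sample sets
    according to region and gender classification attributes."""
    sample_sets: Dict[str, List[FaceImageSample]] = defaultdict(list)
    for sample in samples:
        style_key = f"{sample.region}_{sample.gender}"
        sample_sets[style_key].append(sample)
    return dict(sample_sets)

sets = build_style_sample_sets([
    FaceImageSample("img_001.jpg", "region_A", "female"),
    FaceImageSample("img_002.jpg", "region_B", "male"),
])
print(sorted(sets))  # ['region_A_female', 'region_B_male']
```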
In some other embodiments, step S901 may include:
and acquiring a plurality of second facial image samples imported by a user, and taking the plurality of second facial image samples as a facial image sample set corresponding to the custom facial style.
As shown in fig. 3, the terminal 110 may provide a user-customized style image import interface at the first interface 310, enabling the user to self-define a desired facial style by importing an image that is expected to conform to his/her facial deformation. Specifically, the terminal 110 obtains a plurality of second face image samples imported by the user, and directly classifies the plurality of second face image samples into a face image sample set corresponding to the custom face style.
Step S902, for the facial image sample set corresponding to each facial style, determining the facial common features shared by its facial image samples, to obtain the facial common features corresponding to each facial style.
Here the terminal 110 mainly analyzes the facial common features of the facial image samples contained in the facial image sample set of each facial style, obtaining the facial common features corresponding to the various facial styles. For example, the terminal 110 may obtain the facial common features of facial styles corresponding to different regions and different genders, as well as the facial common features corresponding to a user-defined facial style, so as to offer the user rich and personalized choices of facial deformation schemes.
Further, in some embodiments, as shown in fig. 10, determining the facial common features shared by the facial image samples in step S902 specifically includes:
Step S1001, acquiring the facial feature sample contained in each facial image sample of the facial image sample set.
In this step, after obtaining the facial image sample set, the terminal 110 may analyze, using a face recognition algorithm, the facial features of each facial image sample in the set (referred to as facial feature samples), to facilitate the subsequent analysis of the facial common features of the sample sets of the various facial styles.
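The patent does not name a specific face recognition algorithm. As one possible sketch, the 68-point facial landmark extraction below uses the dlib library; the choice of dlib and the model file path are assumptions made for illustration:

```python
import dlib

# Assumed model file: dlib's publicly distributed 68-landmark predictor.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_facial_feature_sample(image_path: str):
    """Extract the facial feature sample (68 landmark points) from one
    facial image sample, or return None if no face is detected."""
    img = dlib.load_rgb_image(image_path)
    faces = detector(img, 1)
    if not faces:
        return None
    shape = predictor(img, faces[0])
    # Normalize landmark coordinates by the detected face box so samples
    # from images of different sizes are comparable.
    box = faces[0]
    w, h = box.width(), box.height()
    return [((shape.part(i).x - box.left()) / w,
             (shape.part(i).y - box.top()) / h)
            for i in range(shape.num_parts)]
```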
Step S1002, inputting the facial feature samples of the facial image samples into a pre-constructed facial common feature recommendation model, and acquiring the recommendation probability value, output by the model, of the facial image sample set for an initial facial common feature.
In this step, the terminal 110 inputs the facial feature samples of the facial image samples into a pre-constructed facial common feature recommendation model. The model is constructed based on a preset initial facial common feature; given the input facial feature samples, it outputs a recommendation probability value of those samples for the preset initial facial common feature, which represents how strongly the input samples recommend that feature. The terminal 110 obtains the recommendation probability value output by the model for the facial image sample set; this value can be understood as the probability that the preset initial facial common feature appears in the facial feature samples of the facial image samples, and the higher that probability, the higher the degree to which these sample sets recommend the initial facial common feature.
The facial common feature recommendation model may, for example, be LR, FM, DNN, W&D, DeepFM or DIN. Taking LR (logistic regression) as an example, the terminal 110 may use a currently accepted range of facial feature data in the industry as the preset initial facial common feature, combine facial feature data of different regions and genders to generate training data over a five-year time span, and train a binary classification model for each facial style using the Spark implementation of the LR algorithm: if the imported training data satisfies the initial facial common feature within the given time, it is taken as a positive example, otherwise as a negative example. The trained model is saved as the facial common feature recommendation model and used to predict the probability that the initial facial common feature occurs. Thus, after the facial feature samples of the facial image samples are input into the pre-constructed facial common feature recommendation model, the recommendation probability values of the input samples for the initial facial common feature can be obtained from the model's output.
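The text describes a Spark-based LR binary classifier per facial style. As a minimal single-machine sketch, scikit-learn's LogisticRegression is used here as a stand-in, with the positive/negative labels standing for whether a sample satisfies the initial facial common feature; the feature vectors, labels and the 85% threshold below are illustrative placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a flattened facial feature sample; each label marks whether
# that sample satisfied the preset initial facial common feature (positive
# example = 1, negative example = 0). Values here are synthetic.
X_train = np.random.rand(200, 10)
y_train = (X_train[:, 0] + X_train[:, 1] > 1.0).astype(int)

# Binary model for one facial style (stand-in for the Spark LR training).
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def recommendation_probability(samples: np.ndarray) -> float:
    """Average probability that the initial facial common feature appears
    in the facial feature samples of one facial image sample set."""
    return float(model.predict_proba(samples)[:, 1].mean())

RECOMMENDATION_THRESHOLD = 0.85  # assumed, matching the 85% example in the text
sample_set = np.random.rand(30, 10)
p = recommendation_probability(sample_set)
if p >= RECOMMENDATION_THRESHOLD:
    print("use the initial facial common feature as this style's common feature")
else:
    print("adjust the initial facial common feature and retrain")
```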
Step S1003, if the recommendation probability value is greater than or equal to a recommendation probability threshold, taking the initial facial common feature as the facial common feature.
Specifically, if the recommendation probability value output by the facial common feature recommendation model is greater than or equal to the recommendation probability threshold, meaning that the probability of the preset initial facial common feature appearing in the facial feature samples of the input facial image samples exceeds a threshold of, for example, 85%, the terminal 110 may take the initial facial common feature as the facial common feature corresponding to that facial style. If the recommendation probability value is smaller than the threshold, the terminal 110 may further train and adjust the initial facial common feature until the recommendation probability threshold condition is met.
By means of image recognition and machine learning/deep learning techniques in artificial intelligence, the scheme provided by this embodiment can objectively, efficiently and accurately analyze facial common features corresponding to more scientific and reasonable facial styles, avoids malicious facial deformation schemes recommended by unqualified institutions, and helps to recommend facial deformation schemes to the user objectively, scientifically and reasonably even when the user does not know or understand the expected effect.
The present application further provides an application scenario in which the above facial deformation recommendation method based on image recognition is applied, specifically, recommending a facial reshaping scheme to the user. This can be implemented on the terminal 110 through interaction with the user; fig. 11 shows the flow of the recommendation method, whose main steps are described with reference to fig. 3 and fig. 5 to 8 as follows:
The terminal 110 displays the first interface 310 for the user to select a facial style. After the user selects a facial style, the terminal 110 displays on the second interface 510 the face template image 511 characterized by the facial common feature K1 of that style; the user can adjust and modify the position parameters of the facial common feature, and the terminal 110 provides a real-time preview of the modification effect during the adjustment. The user can then click "compare with me" on the second interface 510, and the terminal 110 enters the third interface 610 on which the user inputs a facial image to be deformed by importing a picture or using the camera. The user then clicks "start comparison" on the third interface 610; the terminal 110 compares the facial features of the facial image to be deformed with the facial common features of the selected facial style, generates the corresponding facial deformation recommendation information (or facial deformation recommendation scheme), and provides the fourth interface 710 to display it. The user can modify and adjust the data in the scheme on the fourth interface 710. If the user is unsatisfied with the facial deformation recommendation scheme and chooses not to adjust it manually, the terminal 110 returns to the first interface 310 so that the user can reselect a facial style; if the user adjusts the scheme on the fourth interface 710 and clicks "effect preview", the terminal 110 provides the fifth interface 810 and displays the facial deformation effect map on it.
This scheme supports importing the facial image to be deformed in various ways, in particular through the camera or from pictures, so that multi-angle facial information of the user can be collected even when it is inconvenient to turn on the camera. During the recommendation of the facial deformation scheme, the user can adjust the scheme autonomously: after the terminal 110 displays the facial deformation recommendation scheme, the user can modify its parameters and even delete or add adjustment items, which gives the user more autonomy and customization capability and avoids the uniform, unpersonalized facial deformation effects of the conventional technology. In addition, after adjusting the facial deformation scheme, the user can check in real time the expected effect of the current scheme, which supports the user in using a more precise facial deformation scheme to achieve the desired effect.
Further, the method may be implemented based on a system architecture as shown in fig. 12, specifically:
the data processing module is mainly responsible for collecting and processing facial image sample data, and the system can collect data in a classified manner according to attributes of different age groups, different regions, different sexes and the like in a sample data collection stage, analyze facial features of the data through a facial recognition algorithm and process and analyze the facial feature data. The system can further improve the universality and the accuracy of data sources subsequently or through measures such as evaluation and scoring of facial image sample data, and the like, and collects corresponding facial image sample data, and the data post-processing module arranges the facial features of different regions and different sexes into facial image sample sets belonging to different facial styles so as to facilitate further processing by the intelligent recommendation module.
For the intelligent recommendation module, the main contents of which are model training and model prediction, models such as LR, FM, DNN, W & D, deep FM, DIN, and the like can be adopted, taking LR (Logistic Regression model) as an example, and a set of facial features of different regions and different genders generated by processing with an extracted data processing module generates training data with 5 years as a time span, a range of beautiful facial feature data approved in the industry at present can be used as an initial facial common feature, the obtained training data is combined, a binary model is trained for each facial style by using a Spark-version LR algorithm, if the training data is imported to satisfy the initial facial common feature within a given time, the model is used as a positive example, otherwise, the trained model is used as a negative example to store the probability of occurrence of the initial facial common feature for prediction, and after facial feature samples of each facial image sample are input to a facial common feature recommendation model, and obtaining a recommendation probability value output by a face common feature recommendation model, if the recommendation probability value output by the face common feature recommendation model is greater than or equal to a recommendation probability threshold value, taking the initial face common feature as a face common feature corresponding to the face style, and if the recommendation probability value output by the face common feature recommendation model does not meet the threshold value condition, further training and adjusting the initial face common feature until the constructed recommendation model can meet the requirement of the recommendation probability threshold value condition.
The data processing module generates facial feature data for different regions and different genders, and model training is performed on each of these data sets to generate recommendation models (also called prediction models) for facial styles of different regions, different genders and so on. This yields the facial commonality features (also called recommendation features) of each facial style, and the facial styles corresponding to these commonality features are then offered to the user for selection.
For the scheme generation module, whose functional characteristics are shown in fig. 13, the user can select one of the facial styles provided by the system in the scheme generation stage, and user customization is also supported: the system can analyze photos imported by the user, for example an image of the desired facial style, or other inputs from which facial features can be identified and extracted, and generate the facial commonality features corresponding to the user-defined style. The user then imports his or her own facial information (corresponding to the facial image to be deformed) through the camera or a photo. The system analyzes the facial features of the facial image to be deformed with a facial recognition algorithm, compares them with the facial commonality features of the facial style selected by the user or of the user-defined style, and generates a facial deformation recommendation scheme from the main differences between the features, for example "raise the nose bridge by 1 mm". The generated facial deformation recommendation scheme can also be adjusted and modified by the user, for example by deleting part of its content.
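The comparison and scheme generation step could be sketched as follows; the feature names, millimetre units and the 0.5 mm significance cut-off are illustrative assumptions rather than values taken from this text.

```python
def generate_recommendation(user_features, commonality_features, min_diff_mm=0.5):
    # Both arguments are dicts such as {"nose_bridge_height_mm": 11.0, ...}.
    # Each sufficiently large difference becomes one editable adjustment item.
    scheme = []
    for name, target in commonality_features.items():
        current = user_features.get(name)
        if current is None:
            continue
        diff = target - current
        if abs(diff) >= min_diff_mm:
            scheme.append({"item": name, "adjust_mm": round(diff, 1)})
    return scheme

# The generated scheme remains editable: the user may delete items or change
# their parameters before previewing.
scheme = generate_recommendation(
    {"nose_bridge_height_mm": 10.0, "chin_length_mm": 42.0},
    {"nose_bridge_height_mm": 11.0, "chin_length_mm": 44.0},
)
scheme = [item for item in scheme if item["item"] != "chin_length_mm"]  # user deletes one item
```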
For the effect preview module, the expected facial deformation result can be generated by combining the facial image to be deformed provided by the user with the facial deformation recommendation scheme. If the user modifies the facial deformation recommendation scheme in the recommendation scheme generation stage, the system also adjusts the preview in real time according to the user's modified facial deformation recommendation scheme.
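A minimal sketch of such a preview step is given below; the landmark-based scheme format, the millimetre-to-pixel scale and the injected warp_face routine are all assumptions, since the actual rendering method is not specified here.

```python
MM_TO_PX = 3.5  # assumed scale between scheme millimetres and image pixels

def preview_effect(image, landmarks, scheme, warp_face):
    # Shift the landmarks named by the (possibly user-modified) scheme and pass
    # both landmark sets to the supplied warping routine; re-run whenever the
    # user edits the scheme so the preview stays in sync.
    adjusted = dict(landmarks)
    for item in scheme:
        x, y = adjusted[item["landmark"]]
        adjusted[item["landmark"]] = (x, y - item["offset_mm"] * MM_TO_PX)
    return warp_face(image, landmarks, adjusted)
```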
On the whole, the facial reshaping recommendation and effect preview system based on facial recognition and deep learning provided by this application can apply a facial recognition algorithm to a large number of celebrity images from different regions and of different genders, train the recommendation model on the resulting analysis data, and intelligently compute facial reshaping templates of different facial styles for the user to choose from, while continuously importing the latest photos to keep improving the scientific soundness and reliability of these templates. After selecting a facial reshaping template of a particular facial style, the user can import multi-angle images of himself or herself through the camera or pictures; the system analyzes and compares them and produces a facial reshaping recommendation scheme, and the user can modify the recommendation scheme and view the expected effect at any time. The beneficial effects can include:
the method has the advantages that a more scientific facial shaping scheme is provided for a user, malicious recommendation of no good beauty institution is avoided, and an objective, scientific and reasonable facial deformation scheme is recommended for the user under the condition that the user does not know and grasp the expected effect;
providing the user with richer and more personalized facial reshaping options: the system not only lets the user choose among facial styles of different regions and different categories, but also supports user-defined desired styles through imported reference photos or other inputs from which facial features can be identified and extracted;
after the facial reshaping recommendation scheme is generated, the system also allows the user to modify it according to personal preference and to view the expected effect in real time, which satisfies more personalized user needs.
It should be understood that, although the steps in the above flowcharts are shown in sequence as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not limited to the exact order illustrated and may be performed in other orders. Moreover, at least some of the steps in the above flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 14, an apparatus for recommending facial deformation based on image recognition is provided. The apparatus 1400 may be implemented as part of a computer device in the form of a software module, a hardware module, or a combination of the two, and specifically includes:
a style presentation module 1401 for presenting a plurality of facial styles;
an information obtaining module 1402, configured to obtain, in response to a style selection instruction for at least one facial style among the multiple facial styles, facial deformation recommendation information corresponding to a facial feature comparison result; the facial feature comparison result is obtained by comparing the facial features corresponding to the facial image to be deformed provided by the user with the facial common features corresponding to the selected facial style;
a deformation display module 1403, configured to display the facial deformation recommendation information and display a facial deformation effect obtained by applying the facial deformation recommendation information to the facial image to be deformed.
In one embodiment, the information obtaining module 1402 is configured to display a face template image characterized by the face commonality feature in response to the style selection instruction, and to adapt the display of the face template image when the face commonality feature is adjusted by a user; responding to a facial feature comparison instruction from the user, and acquiring facial deformation recommendation information corresponding to the facial feature comparison result; the facial feature comparison result comprises a comparison result of the facial common feature which is not adjusted by the user and the facial feature corresponding to the facial image to be deformed, or a comparison result of the facial common feature which is adjusted by the user and the facial feature corresponding to the facial image to be deformed.
In one embodiment, the information obtaining module 1402 is configured to provide position parameters of the face commonality feature points corresponding to the face commonality feature on the face template image, and to adaptively change the display of the face template image according to the user's adjustment of the position parameters of the face commonality feature points.
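As a non-limiting sketch, the adaptive display could be organized as below; the feature-point dictionary and the rendering callback are assumed to be supplied by the terminal, and no particular UI framework is implied.

```python
class FaceTemplateView:
    def __init__(self, feature_points, render_template):
        # feature_points: {"nose_tip": (x, y), "chin": (x, y), ...}
        self.feature_points = dict(feature_points)
        self.render_template = render_template
        self.render_template(self.feature_points)

    def adjust_point(self, name, new_position):
        # Called when the user edits the position parameter of a face
        # commonality feature point; the template display is re-rendered.
        self.feature_points[name] = new_position
        self.render_template(self.feature_points)
```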
In an embodiment, the deformation presenting module 1403 is configured to obtain the adjusted facial deformation recommendation information according to the adjustment of the facial deformation recommendation information by the user; and responding to an effect preview instruction from the user, and displaying a face deformation effect obtained by the adjusted face deformation recommendation information acting on the face image to be deformed.
In one embodiment, the apparatus 1400 further comprises: the feature acquisition unit is used for acquiring a face image sample set corresponding to each face style; the facial image sample set comprises a plurality of facial image samples meeting preset recommendation conditions; and determining the face common characteristics of the face image samples according to the face image sample set corresponding to each face style to obtain the face common characteristics corresponding to each face style.
In one embodiment, the feature acquisition unit is configured to collect a plurality of first facial image samples of subjects having a preset identity attribute and a preset age attribute, the preset recommendation condition including the preset identity attribute and the preset age attribute; and classify the plurality of first facial image samples according to a plurality of preset facial style classification attributes to obtain a facial image sample set corresponding to each facial style.
In one embodiment, the feature acquisition unit is configured to obtain a plurality of second facial image samples imported by a user, the preset recommendation condition including the user import; and take the plurality of second facial image samples as the facial image sample set corresponding to a custom facial style.
In one embodiment, the feature acquisition unit is configured to acquire a facial feature sample that each facial image sample included in the facial image sample set has; inputting the facial feature samples of the facial image samples into a pre-constructed facial common feature recommendation model, and acquiring recommendation probability values of the facial image sample sets output by the facial common feature recommendation model to initial facial common features; the face common feature recommendation model is constructed on the basis of preset initial face common features; and if the recommendation probability value is greater than or equal to a recommendation probability threshold value, taking the initial face common characteristic as the face common characteristic.
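The acceptance loop implied by this embodiment might be sketched as follows; the adjust_features and predict_probability callables and the 0.8 threshold are placeholders for the further-training step and model inference, which are not detailed here.

```python
def settle_commonality_features(model, feature_samples, initial_features,
                                adjust_features, predict_probability,
                                threshold=0.8, max_rounds=10):
    # Accept the initial facial commonality features once the recommendation
    # probability for the sample set reaches the threshold; otherwise adjust
    # them and try again, up to a bounded number of rounds.
    features = initial_features
    for _ in range(max_rounds):
        if predict_probability(model, feature_samples, features) >= threshold:
            return features
        features = adjust_features(features)
    return features
```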
For the specific definition of the facial deformation recommendation device based on image recognition, refer to the definition of the facial deformation recommendation method based on image recognition above, which is not repeated here. The modules in the facial deformation recommendation device based on image recognition can be implemented wholly or partially in software, hardware or a combination thereof. The modules can be embedded in hardware form in, or be independent of, a processor in the computer device, or can be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 15. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a method for facial deformation recommendation based on image recognition. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 15 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, in which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The computer instructions are read by a processor of a computer device from a computer-readable storage medium, and the computer instructions are executed by the processor to cause the computer device to perform the steps in the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical storage, or the like. Volatile Memory can include Random Access Memory (RAM) or external cache Memory. By way of illustration and not limitation, RAM may take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), for example.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the relevant laws and regulations and standards of the relevant country and region.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A facial deformation recommendation method based on image recognition is characterized by comprising the following steps:
displaying a plurality of facial styles;
responding to a style selection instruction for at least one facial style among the plurality of facial styles, and acquiring facial deformation recommendation information corresponding to a facial feature comparison result; the facial feature comparison result is obtained by comparing the facial features corresponding to a facial image to be deformed provided by the user with the facial common features corresponding to the selected facial style;
and displaying the facial deformation recommendation information, and displaying a facial deformation effect obtained by the facial deformation recommendation information acting on the facial image to be deformed.
2. The method according to claim 1, wherein the obtaining facial deformation recommendation information corresponding to the comparison result of the facial features in response to a style selection instruction for at least one of the plurality of facial styles comprises:
in response to the style selection instruction, displaying a face template image characterized by the face commonality feature and adaptively altering the display of the face template image when the face commonality feature is adjusted by a user;
responding to a facial feature comparison instruction from the user, and acquiring facial deformation recommendation information corresponding to the facial feature comparison result;
the facial feature comparison result comprises a comparison result of the facial common feature which is not adjusted by the user and the facial feature corresponding to the facial image to be deformed, or a comparison result of the facial common feature which is adjusted by the user and the facial feature corresponding to the facial image to be deformed.
3. The method of claim 2, wherein said adapting the display of the face template image as the face commonality features are adjusted by a user comprises:
providing position parameters of the face common characteristic points corresponding to the face common characteristic on the face template image;
and adaptively changing the display of the face template image according to the adjustment of the user on the position parameter of the face common characteristic point.
4. The method according to claim 1, wherein the displaying of the face-deformation effect of the face-deformation recommendation information on the face image to be deformed comprises:
obtaining adjusted face deformation recommendation information according to the adjustment of the face deformation recommendation information by the user;
and responding to an effect preview instruction from the user, and displaying a face deformation effect obtained by the adjusted face deformation recommendation information acting on the face image to be deformed.
5. The method of claim 1, wherein prior to said presenting a plurality of facial styles, said method further comprises:
acquiring a face image sample set corresponding to each face style; the facial image sample set comprises a plurality of facial image samples meeting preset recommendation conditions;
and determining the face common characteristics of the face image samples according to the face image sample set corresponding to each face style to obtain the face common characteristics corresponding to each face style.
6. The method of claim 5,
the acquiring of the face image sample set corresponding to each face style includes:
collecting a plurality of first facial image samples of a subject having a preset identity attribute and a preset age attribute;
the preset recommendation condition comprises the preset identity attribute and the preset age attribute;
classifying the plurality of first face image samples according to a plurality of preset face style classification attributes to obtain a face image sample set corresponding to each face style;
or,
the acquiring of the face image sample set corresponding to each face style includes:
obtaining a plurality of second facial image samples imported by a user; the preset recommendation condition comprises the user importing;
and taking the plurality of second facial image samples as a facial image sample set corresponding to the custom facial style.
7. The method of claim 5 or 6, wherein the determining the face commonality characteristics that the plurality of face image samples have comprises:
acquiring facial feature samples of each facial image sample in the facial image sample set;
inputting the facial feature samples of the facial image samples into a pre-constructed facial common feature recommendation model, and acquiring recommendation probability values of the facial image sample sets output by the facial common feature recommendation model to initial facial common features; the face common feature recommendation model is constructed on the basis of preset initial face common features;
and if the recommendation probability value is greater than or equal to a recommendation probability threshold value, taking the initial face common characteristic as the face common characteristic.
8. An apparatus for recommending facial deformation based on image recognition, the apparatus comprising:
the style display module is used for displaying various facial styles;
the information acquisition module is used for responding to a style selection instruction for at least one facial style among the plurality of facial styles and acquiring facial deformation recommendation information corresponding to a facial feature comparison result; the facial feature comparison result is obtained by comparing the facial features corresponding to the facial image to be deformed provided by the user with the facial common features corresponding to the selected facial style;
and the deformation display module is used for displaying the facial deformation recommendation information and displaying the facial deformation effect obtained by applying the facial deformation recommendation information to the facial image to be deformed.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202111604399.0A 2021-03-12 2021-12-24 Face deformation recommendation method, device, equipment and medium based on image recognition Pending CN115130493A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110268581 2021-03-12
CN2021102685817 2021-03-12

Publications (1)

Publication Number Publication Date
CN115130493A true CN115130493A (en) 2022-09-30

Family

ID=83375206

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111604399.0A Pending CN115130493A (en) 2021-03-12 2021-12-24 Face deformation recommendation method, device, equipment and medium based on image recognition

Country Status (1)

Country Link
CN (1) CN115130493A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116844217A (en) * 2023-08-30 2023-10-03 成都睿瞳科技有限责任公司 Image processing system and method for generating face data
CN116844217B (en) * 2023-08-30 2023-11-14 成都睿瞳科技有限责任公司 Image processing system and method for generating face data

Similar Documents

Publication Publication Date Title
CN110110118A (en) Dressing recommended method, device, storage medium and mobile terminal
WO2021155691A1 (en) User portrait generating method and apparatus, storage medium, and device
CN113538441A (en) Image segmentation model processing method, image processing method and device
Ren et al. Semantic facial descriptor extraction via axiomatic fuzzy set
US11263436B1 (en) Systems and methods for matching facial images to reference images
US20200118029A1 (en) General Content Perception and Selection System.
CN105721936A (en) Intelligent TV program recommendation system based on context awareness
CN116097320A (en) System and method for improved facial attribute classification and use thereof
CN104951770A (en) Construction method and application method for face image database as well as corresponding devices
Karbauskaitė et al. Kriging predictor for facial emotion recognition using numerical proximities of human emotions
CN106528676A (en) Entity semantic retrieval processing method and device based on artificial intelligence
CN115130493A (en) Face deformation recommendation method, device, equipment and medium based on image recognition
CN115935049A (en) Recommendation processing method and device based on artificial intelligence and electronic equipment
CN117523275A (en) Attribute recognition method and attribute recognition model training method based on artificial intelligence
CN116701706A (en) Data processing method, device, equipment and medium based on artificial intelligence
CN116977992A (en) Text information identification method, apparatus, computer device and storage medium
US20220319082A1 (en) Generating modified user content that includes additional text content
CN115408611A (en) Menu recommendation method and device, computer equipment and storage medium
JP6320844B2 (en) Apparatus, program, and method for estimating emotion based on degree of influence of parts
CN117011449A (en) Reconstruction method and device of three-dimensional face model, storage medium and electronic equipment
WO2022212669A1 (en) Determining classification recommendations for user content
CN112102304A (en) Image processing method, image processing device, computer equipment and computer readable storage medium
US20220198825A1 (en) Systems and methods for matching facial images to reference images
CN113269176B (en) Image processing model training method, image processing device and computer equipment
US11928167B2 (en) Determining classification recommendations for user content

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination