CN114299270A - Special effect prop generation and application method, device, equipment and medium - Google Patents


Info

Publication number
CN114299270A
Authority
CN
China
Prior art keywords
head
model
point
target
prop
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111664536.XA
Other languages
Chinese (zh)
Inventor
李亦彤
陈志兴
程远初
周自旺
范佳敏
宋旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202111664536.XA
Publication of CN114299270A
Legal status: Pending

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to a method, apparatus, device, and medium for generating and applying special effect props. The method for generating a special effect prop comprises the following steps: displaying a prop making interface comprising a prop display area; in response to a material import operation, importing an initial head model and an initial item model corresponding to a target head item; in response to an adjustment operation on the initial head model and the initial item model, displaying the adjusted head model and the adjusted item model in the prop display area; and establishing a mapping relation between at least two target item points in the adjusted item model and at least one head point corresponding to each target item point in the adjusted head model, and generating a head special effect prop corresponding to the target head item based on the adjusted item model and the mapping relation. Multi-point binding between the head model and the head special effect prop is thereby realized, improving the dynamic adaptability of the head special effect prop to the head model.

Description

Special effect prop generation and application method, device, equipment and medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a medium for generating and applying a special effect prop.
Background
With the development of computer technology, many virtual special effects, such as augmented-reality virtual head special effects, have appeared to make image and video shooting more fun.
However, existing head special effect props suffer from a clipping (mold penetration) problem. For example, for the hat special effect prop shown in fig. 1(a), when the user's head size is roughly consistent with the hat size of the prop, the augmented reality effect is good. However, when the user's head is larger than the hat of the prop, the display effect is as shown in fig. 1(b): part of the user's head pokes through the hat, exhibiting the clipping phenomenon. Such a virtual headwear item inevitably degrades the captured images/videos and reduces the fun of shooting.
Disclosure of Invention
In order to solve the technical problem, the present disclosure provides a method, an apparatus, a device, and a medium for generating and applying a special effect prop.
In a first aspect, the present disclosure provides a method for generating a special effect prop, including:
displaying a prop making interface; the prop making interface comprises a prop display area;
responding to material import operation, importing an initial head model and an initial article model corresponding to a target head article, and displaying the initial head model and the initial article model in the prop display area;
displaying the adjusted head model and the adjusted item model in the prop display area in response to an adjustment operation on the initial head model and the initial item model;
and establishing a mapping relation between a target item point in the adjusted item model and at least one head point corresponding to the target item point in the adjusted head model, and generating a head special effect prop corresponding to the target head item based on the adjusted item model and the mapping relation.
In a second aspect, the present disclosure also provides a special effect prop application method, including:
acquiring a face image, and determining a head key point based on the face image;
responding to application triggering operation of the head special effect prop, determining an initial head special effect prop, and determining a prop key point based on the head key point and a mapping relation contained in the initial head special effect prop; the mapping relation is used for recording the corresponding relation between a head point in a head model and an article point in an article model corresponding to the initial head special effect prop;
and adjusting the initial head special effect prop based on the prop key point to generate a target head special effect prop, and generating and displaying a head special effect image based on the target head special effect prop and the face image.
In a third aspect, the present disclosure further provides a special effect item generating device, including:
the prop making interface display module is used for displaying a prop making interface; the prop making interface comprises a prop display area;
the model importing module is used for responding to material importing operation, importing an initial head model and an initial article model corresponding to a target head article, and displaying the initial head model and the initial article model in the prop display area;
a model adjustment module for displaying the adjusted head model and the adjusted item model in the item display area in response to an adjustment operation on the initial head model and the initial item model;
and the head special effect prop generation module is used for establishing a mapping relation between a target item point in the adjusted item model and at least one head point corresponding to the target item point in the adjusted head model, and generating the head special effect prop corresponding to the target head item based on the adjusted item model and the mapping relation.
In a fourth aspect, the present disclosure further provides a special effect prop application device, including:
the head key point determining module is used for acquiring a face image and determining head key points based on the face image;
the system comprises a prop key point determining module, a head special effect prop determining module and a prop key point determining module, wherein the prop key point determining module is used for responding to application triggering operation of the head special effect prop, determining an initial head special effect prop and determining a prop key point based on a mapping relation contained in the head key point and the initial head special effect prop; the mapping relation is used for recording the corresponding relation between a head point in a head model and an article point in an article model corresponding to the initial head special effect prop;
and the head special effect image generation module is used for adjusting the initial head special effect prop based on the prop key point, generating a target head special effect prop, and generating and displaying a head special effect image based on the target head special effect prop and the face image.
In a fifth aspect, the present disclosure provides an electronic device comprising:
a processor;
a memory for storing executable instructions;
the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the special effect item generation method described in any embodiment of the present disclosure, or to implement the special effect item application method described in any embodiment of the present disclosure.
In a sixth aspect, the present disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the special effect item generation method described in any of the embodiments of the present disclosure, or the special effect item application method described in any of the embodiments of the present disclosure.
According to the special effect prop generation method, apparatus, device, and medium of the present disclosure, a prop making interface comprising a prop display area can be displayed during head special effect prop generation; in response to a material import operation, an initial head model and an initial item model corresponding to the target head item are imported and displayed in the prop display area; in response to an adjustment operation on the initial head model and the initial item model, the adjusted head model and the adjusted item model are displayed in the prop display area; and a mapping relation is established between target item points in the adjusted item model and at least one head point corresponding to each target item point in the adjusted head model, and a head special effect prop corresponding to the target head item is generated based on the adjusted item model and the mapping relation. Point-to-point binding between the head model and the head special effect prop is thus realized, solving problems such as clipping that arise when the head special effect prop is bound to the head model as a whole, improving the dynamic adaptability of the head special effect prop to the head model, and thereby improving the fidelity and fun of the head special effect prop.
According to the special effect prop application method, apparatus, device, and medium of the present disclosure, a face image of a real user can be acquired and head key points determined based on the face image; in response to an application triggering operation on the head special effect prop, an initial head special effect prop is determined, and prop key points are determined based on the head key points and the mapping relation contained in the initial head special effect prop; the initial head special effect prop is then adjusted based on the prop key points to generate a target head special effect prop, and a head special effect image is generated and displayed based on the target head special effect prop and the face image. This solves problems, such as clipping, in which the head special effect prop does not fit the user's head during application, and enables the head special effect prop to dynamically adapt to the user's head size and head movements, improving the augmented-reality quality and fun of the head special effect prop in use, thereby enhancing user experience and user stickiness.
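One plausible reading of the key-point step above (an assumption for illustration; the summary does not fix a formula) is that each prop key point is placed at the centroid of the head key points it is mapped to, so that the prop follows the tracked head:

```python
def prop_key_points(head_key_points, mapping):
    """Place each prop key point at the centroid of its mapped head key points.

    head_key_points: list of (x, y, z) positions tracked from the face image.
    mapping: dict {prop_point_index: [head_point_indices]} taken from the
    mapping relation stored in the prop (names here are illustrative).
    """
    points = {}
    for prop_idx, head_idxs in mapping.items():
        bound = [head_key_points[j] for j in head_idxs]
        n = len(bound)
        # Average each coordinate over the bound head key points.
        points[prop_idx] = tuple(sum(p[d] for p in bound) / n for d in range(3))
    return points

# A hat-brim point bound to two tracked head points lands midway between them.
pts = prop_key_points([(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)], {0: [0, 1]})
```

Because each prop key point depends only on its own bound head points, the prop deforms locally as the head moves, which is the stated goal of multi-point binding.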
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a schematic diagram of a display of a hat special effect prop in the prior art;
fig. 2 is a schematic flow chart of a method for generating a special effect prop according to an embodiment of the present disclosure;
fig. 3 is a schematic display view of a prop making interface provided in an embodiment of the present disclosure;
fig. 4 is a schematic flow chart of a method for applying a special effect prop according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a special effect item generation device according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a special effect prop application device according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
In the current process of making head special effect props, a binding relationship is often established between the virtual headwear as a whole and a head model, so the resulting head special effect prop often fails to dynamically adapt to the user's head size and head movements during application. Examples include the clipping phenomenon, in which the user's head pokes through the virtual headwear; the proportion mismatch phenomenon, in which the virtual headwear is too large or too small relative to the user's head; and the motion disjunction phenomenon, in which the movement distance and direction of the virtual headwear do not match those of the user's head.
Based on the above, the embodiments of the present disclosure provide a special effect prop generation scheme and a special effect prop application scheme that establish a multi-point binding relationship between the head special effect prop and the head model and dynamically adapt the head special effect prop to the user's head based on that relationship, thereby solving problems such as clipping in which the head special effect prop does not fit the head model, improving the dynamic adaptation of the head special effect prop to the user's head, and thus improving the fidelity and fun of the head special effect prop.
First, a method for generating a special effect item provided by the embodiment of the disclosure is described with reference to fig. 2 to 3.
In the embodiments of the disclosure, the special effect prop generation method is suitable for making special effect props, and in particular head special effect props, for example hat special effect props as well as non-hat special effect props such as hairpins, hair clasps, crowns, and hair sticks. The method may be executed by a special effect prop generation device, which may be implemented in software and/or hardware and integrated in an electronic device on which special effect prop making software can be installed. The electronic device may include, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, and the like.
Fig. 2 shows a flow chart of a special effect item generation method provided by the embodiment of the disclosure. As shown in fig. 2, the method for generating special effects prop may include the following steps:
and S210, displaying a prop making interface.
Specifically, when a user wants to make a head special effect prop, the user starts a special effect prop making tool, such as a special effect prop making application program on the electronic device. The electronic device then displays an application program interface, i.e., a prop making interface for making the special effect prop. The prop making interface comprises at least a material operation area and a prop display area. The material operation area is used to display and operate on the materials (i.e., special effect materials) required for making the head special effect prop, such as head model materials, head item model materials, and item material and style materials. The prop display area is used to display the effect of the imported special effect materials combined together, so that the user can see the head special effect prop more intuitively and adjust it.
As shown in fig. 3, a prop making interface 300 is displayed on the electronic device, with a material operation area 310 on the left, a prop display area 320 in the middle, and a parameter setting area 330 on the right. Two sub-areas, a "special effects material" sub-area and a "resource management" sub-area, are displayed in the material operation area 310, corresponding to a material editing function and a material import function, respectively.
S220, in response to a material import operation, importing an initial head model and an initial item model corresponding to a target head item, and displaying the initial head model and the initial item model in the prop display area.
The material import operation is an operation of importing, from outside the special effect prop making application program (e.g., from a local storage medium, an external storage medium, or the network), the special effect materials required for making the special effect prop. The initial head model is a pre-constructed standard static model of the head, for example a three-dimensional mesh model of the head. The initial item model is a pre-constructed standard static model of the target head item, for example a three-dimensional mesh model of the item. The target head item is an item worn on the head for which a special effect prop is to be made, such as a hat, hairpin, hair clasp, crown, or hair stick.
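For concreteness, a head or item model of the kind described here can be represented as a simple triangle mesh. A minimal sketch (the class and field names are illustrative assumptions; the patent does not specify a data format):

```python
from dataclasses import dataclass, field

# Minimal stand-in for the three-dimensional mesh models the disclosure
# describes: vertex positions are (x, y, z) tuples ("points"), and faces
# index into the vertex list as triangles.
@dataclass
class MeshModel:
    vertices: list                               # list of (x, y, z) points
    faces: list = field(default_factory=list)    # (i, j, k) vertex indices

# The initial head model and initial item model are just two such meshes,
# e.g. a head mesh and a hat mesh imported from material files.
head_model = MeshModel(
    vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    faces=[(0, 1, 2)],
)
hat_model = MeshModel(vertices=[(0.0, 1.2, 0.0), (1.0, 1.2, 0.0)])
```

The later steps (adjustment, multi-point binding) then operate on the `vertices` lists of these two meshes.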
Specifically, the user can perform the material import operation through controls related to the material import function provided in the special effect prop making application program. For example, in the upper right corner of the "resource management" sub-area in fig. 3, import function controls are provided, i.e., an import file control 311 and an import folder control 312. The user can click the import file control 311 or the import folder control 312 to perform the material import operation. Alternatively, a prompt such as "drag material here" may be shown at a position in the "resource management" sub-area, prompting the user to drag the desired special effect material directly into the sub-area, so that the material import operation can also be performed by dragging.
During the material import operation, the user can select the special effect materials of the body part (the head) corresponding to the special effect prop to be made, and the special effect materials required for the target head item. After detecting the user's material import operation, the electronic device receives the special effect materials imported by the operation and displays them in the material operation area.
In some embodiments, the material operation area 310 is divided into two sub-areas, "special effects material" and "resource management". When the user wants to make a hat special effect prop, the user selects the head try-on function provided in the special effect prop making application program and the hat-related special effect materials when performing the material import operation, so that the electronic device receives the head- and hat-related special effect materials. The head model is then displayed in the "special effects material" sub-area, and the imported "hat material package" is displayed in the "resource management" sub-area.
Then, in order to facilitate visualization operation, a loading trigger operation of a user may be further detected, and in response to the loading trigger operation, the special effect material to be loaded is loaded from the "resource management" sub-area to the "special effect material" sub-area. The loading trigger operation may include a double-click operation, a right-click operation, a drag operation, or the like. For example, the user double-clicks the "hat material package" in the "resource management" sub-area, and after the electronic device detects the double-click operation, the electronic device displays the special effect materials according to the model structure of the article model material, that is, the "hat material package" is loaded below the "head article" in the "special effect material" sub-area, and displays the special effect materials contained in the "hat material package", such as the special effect materials of style and material contained in the hat model and below the hat model in fig. 3. For another example, the user right clicks the "hat material package" in the "resource management" sub-area and selects the corresponding function option, so that the function of loading and displaying the "hat material package" below the "head article" in the "special effect material" sub-area can also be realized. For another example, if the user drags the "hat material package" in the "resource management" sub-region to the position below the "head article" in the "special effects material" sub-region, the electronic device may display each special effects material contained in the "hat material package" below the "head article" in the "special effects material" sub-region.
In other embodiments, when the material operation area is a single area, the user may directly load all the special effect materials corresponding to the target head item into the material operation area through the material import operation, and each special effect material is displayed according to the model structure of the item model material.
To let the user make the head special effect prop intuitively, the electronic device may display the imported initial head model and initial item model in the prop display area 320 according to default parameters, such as the position and orientation at import time.
S230, in response to an adjustment operation on the initial head model and the initial item model, displaying the adjusted head model and the adjusted item model in the prop display area.
Specifically, the user can adjust attributes such as the size, position, and orientation of the initial head model and/or the initial item model as needed. For example, the user may perform the adjustment operation by dragging the initial head model and/or the initial item model. As another example, the user may select the initial head model and/or the initial item model through a selection operation on the model, display the parameter setting sub-region of the selected model (i.e., the first parameter setting sub-region) in the parameter setting area 330 shown in fig. 3, and then perform the adjustment operation by setting the relevant parameters in that sub-region.
After the electronic device detects the above adjustment operation performed by the user, it may obtain the execution result of the operation, such as the size, position, and orientation after the model is dragged, or the parameter values after parameter setting. The electronic device then obtains the adjusted head model and/or the adjusted item model from this result and displays them in real time in the prop display area 320 shown in fig. 3.
In some embodiments, where the prop making interface includes a parameter setting area, step S230 may be implemented as follows: displaying a first parameter setting sub-region corresponding to a target model in the parameter setting area; and, in response to a parameter setting operation on a first target parameter displayed in the first parameter setting sub-region, determining a first target parameter value, and determining and displaying the adjusted target model based on the first target parameter value.
Here, the target model is the standard static model selected by the user for adjustment; it may be, for example, one or both of the initial head model and the initial item model. The first target parameter is an adjustable parameter of the target model.
Specifically, the electronic device may provide the parameters required for model adjustment, i.e., display a first parameter setting sub-region corresponding to the target model in the prop making interface, and display the first target parameter in the first parameter setting sub-region.
In one example, because parameter setting of the target model is important, the first parameter setting sub-region may be displayed in the prop making interface by default, i.e., the first parameter setting sub-region corresponding to the target model is displayed as soon as the prop making interface is displayed.
In another example, because the displayable content of the parameter setting area is limited, the first parameter setting sub-region corresponding to the target model may not be displayed when the prop making interface is first shown. In this case, the user can perform a selection operation, such as clicking, on the target model. After detecting the selection operation, the electronic device displays the first parameter setting sub-region in the parameter setting area.
In some embodiments, where the target model comprises both the initial head model and the initial item model, the first target parameters are the relative position, relative scale, and relative orientation between the initial head model and the initial item model. Specifically, to improve model adjustment efficiency, the first target parameters may be set as linkage adjustment parameters between the initial head model and the initial item model, i.e., the relative position, relative scale, and relative orientation. Accordingly, the electronic device displays the relative position, relative scale, and relative orientation in the first parameter setting sub-region.
As shown in fig. 3, when the user selects the head model and the hat model displayed in the material operation area 310, the electronic apparatus displays a first parameter setting sub-area 331 corresponding to the model adjustment in the parameter setting area 330 in response to the selection operation. A relative position option, a relative scale option, and a relative orientation option are displayed in the first parameter setting sub-region 331.
When the user wants to adjust the target model, parameter setting may be performed on the first target parameter displayed in the first parameter setting sub-region. For example, the user performs parameter setting operations such as selecting, filling in, or modifying parameter values for the first target parameter. After detecting the parameter setting operation, the electronic device obtains the parameter value of the first target parameter (i.e., the first target parameter value) from the operation. The electronic device then adjusts the target model according to the first target parameter value, obtains the adjusted target model, and displays it in the prop display area in real time so that the user can check the adjustment result.
For example, the user may fill in parameter values for the relative position option, the relative scale option, and the relative orientation option displayed in the first parameter setting sub-region 331 of fig. 3. After receiving the filled-in first target parameter values, the electronic device may synchronously adjust the initial head model and the initial item model according to those values, obtain the adjusted head model and the adjusted item model, and display them in the prop display area 320.
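A linkage adjustment of this kind can be sketched as applying the first target parameter values (relative position, relative scale, relative orientation) as a scale-rotate-translate transform of the item model's vertices in the head model's coordinate frame. The function name and the encoding of the orientation as a single yaw angle are illustrative assumptions, not the disclosure's implementation:

```python
import math

def adjust_item_model(vertices, rel_pos, rel_scale, rel_yaw_deg):
    """Scale, rotate (about the vertical y-axis), then translate item-model
    vertices relative to the head model's coordinate frame."""
    a = math.radians(rel_yaw_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    out = []
    for x, y, z in vertices:
        # Relative scale.
        x, y, z = x * rel_scale, y * rel_scale, z * rel_scale
        # Relative orientation: rotation about the y-axis by rel_yaw_deg.
        x, z = x * cos_a + z * sin_a, -x * sin_a + z * cos_a
        # Relative position: offset from the head model's origin.
        out.append((x + rel_pos[0], y + rel_pos[1], z + rel_pos[2]))
    return out

# E.g. shrink a hat vertex to 50%, turn it 90 degrees, and raise it 1 unit.
adjusted = adjust_item_model([(2.0, 0.0, 0.0)], rel_pos=(0.0, 1.0, 0.0),
                             rel_scale=0.5, rel_yaw_deg=90.0)
```

Applying the same transform to every vertex keeps the item model rigid while repositioning it against the head model, which matches the synchronous adjustment described above.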
S240, establishing a mapping relation between a target item point in the adjusted item model and at least one head point corresponding to the target item point in the adjusted head model, and generating a head special effect prop corresponding to the target head item based on the adjusted item model and the mapping relation.
The target item point refers to a point in the item model that is selected and used for establishing a binding relationship (i.e., a mapping relationship) between the head model and the item model. The head point refers to a point in the head model for establishing a mapping relationship between the head model and the article model.
For example, to avoid the problem in the related art that the head special effect prop cannot dynamically adapt to the head model because the item model is bound as a whole when establishing the mapping relation, the number of target item points in the embodiment of the present disclosure is at least two. In this way, the electronic device can establish a multi-point binding relationship between the head model and the item model, so that the subsequently generated head special effect prop can be adjusted and constrained at multiple points as the head model moves, achieving dynamic adaptation of the head special effect prop to the head model. Although a larger number of target item points improves the dynamic adaptation between the subsequently generated head special effect prop and the head model, an excessive number of target item points may cause excessive deformation of the head special effect prop and greater consumption of computing resources. Therefore, in the embodiment of the present disclosure, the dynamic adaptation effect and the resource consumption may be balanced to determine an appropriate number of target item points.
Specifically, the electronic device detects whether the adjustment of the head model and the item model has finished. For example, the electronic device determines whether no adjustment operation has been detected within a preset time period, or whether a trigger operation by the user on a control for ending the adjustment has been received. If the detection result is negative, the model adjustment has not finished, and the electronic device continues to adjust the models in response to adjustment operations. If the detection result is affirmative, the model adjustment has finished. At this time, the electronic device may determine the target item points from the adjusted item model. For example, the electronic device selects, as target item points, item points in the adjusted item model that satisfy a preset inter-point distance threshold or the mesh granularity of the mesh model. The electronic device then determines, from the adjusted head model, at least one head point corresponding to each target item point. For example, the electronic device calculates the distance between the target item point and each head point in the corresponding region of the adjusted head model, and selects the head points whose distances satisfy a preset third distance threshold, or a preset number of head points with the smallest distances, as the head points corresponding to that target item point. Similarly, the greater the number of head points, the more stable the mapping relation but the greater the computational resource consumption, so the stability of the mapping relation and the resource consumption can be balanced to determine a suitable third distance threshold or preset number, thereby determining the head points corresponding to each target item point.
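The head-point selection just described can be sketched as follows. This is a minimal illustration assuming 3-D point arrays; the function and parameter names (`distance_threshold`, `top_k`) are illustrative and not taken from the patent text.

```python
import numpy as np

def select_head_points(target_item_point, head_points, distance_threshold=None, top_k=None):
    """Pick the head points bound to a single target item point, either by
    keeping every head point within `distance_threshold` (the 'third distance
    threshold' in the text) or by keeping the `top_k` nearest ones."""
    dists = np.linalg.norm(head_points - target_item_point, axis=1)
    if distance_threshold is not None:
        idx = np.flatnonzero(dists <= distance_threshold)
    else:
        idx = np.argsort(dists)[:top_k]
    return idx, dists[idx]
```

Either criterion trades mapping stability against computation, as noted above: a looser threshold (or larger `top_k`) binds more head points per item point.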
Then, for each target item point, the electronic device may construct a conversion relation from the item point information of the target item point (e.g., item point position and item point orientation) and the head point information of each corresponding head point (e.g., head point position and head point orientation), and solve the conversion coefficient matrix of that relation to obtain the information conversion relation between the target item point and its corresponding head points, which is determined as the mapping relation corresponding to the target item point.
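One plausible concrete reading of "constructing a conversion relation and solving its coefficient matrix" is to express the item point as an affine combination of its bound head points and solve for the coefficients by least squares. The sketch below makes that assumption and covers positions only; the patent's exact formulation (which also covers orientations) is not specified.

```python
import numpy as np

def solve_conversion_coefficients(head_positions, item_position):
    """Solve coefficients c such that c @ [head_positions | 1] reproduces the
    item position; re-applying c to moved head points moves the item point."""
    # Homogeneous column: absorbs a constant offset and, when the system is
    # exactly solvable, forces the coefficients to sum to 1 (affine weights).
    H = np.hstack([head_positions, np.ones((len(head_positions), 1))])  # k x 4
    target = np.append(item_position, 1.0)                              # 4-vector
    coeffs, *_ = np.linalg.lstsq(H.T, target, rcond=None)               # k weights
    return coeffs

def apply_conversion(coeffs, head_positions):
    """Evaluate the stored combination on (possibly moved) head points."""
    H = np.hstack([head_positions, np.ones((len(head_positions), 1))])
    return (coeffs @ H)[:3]
```

Because the weights are affine, translating every bound head point translates the recovered item point by the same amount, which is the behavior the multi-point binding needs during head motion.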
The mapping relation corresponding to each target item point is obtained according to the above process. The electronic device may then package the adjusted item model together with the associated mapping relations to generate the head special effect prop corresponding to the target head item.
It should be noted that, in addition to the information conversion relation, the mapping relation may also include the distance value between the target item point and each corresponding head point, so as to further constrain the information conversion relation and further optimize the head special effect prop.
In some embodiments, after generating the head special effect prop corresponding to the target head item based on the adjusted item model and the mapping relation, the method further includes: exporting the head special effect prop so as to apply a special effect configuration to the head in original shooting data.
Specifically, after the user finishes making the head special effect prop, the export function provided by the electronic device can be used to export the head special effect prop as a finished special effect prop. Then, during special effect shooting or special effect synthesis, the user can select the finished special effect prop to apply a special effect configuration to the head in an image or video.
According to the special effect prop generation method provided by the embodiments of the present disclosure, a prop making interface including a prop display area can be displayed during the generation of the head special effect prop; in response to a material import operation, an initial head model and an initial item model corresponding to the target head item are imported and displayed in the prop display area; in response to an adjustment operation on the initial head model and the initial item model, the adjusted head model and the adjusted item model are displayed in the prop display area; and a mapping relation is established between each target item point in the adjusted item model and the at least one head point corresponding to that target item point in the adjusted head model, and the head special effect prop corresponding to the target head item is generated based on the adjusted item model and the mapping relation. This realizes multi-point binding between the head model and the head special effect prop, solves problems such as model clipping (mesh penetration) that arise when the head special effect prop is bound to the head model as a whole, improves the dynamic adaptability of the head special effect prop to the head model, and further improves the fidelity and interest of the head special effect prop.
In some embodiments, after S220, the special effect prop generation method further includes related steps of partition rendering of the head special effect prop and determining the distance values in the mapping relation, which may be implemented as the following steps A to C.
Step A, in response to a trigger operation on a partition rendering function in a parameter setting area contained in the prop making interface, determining a target distance threshold.
The partition rendering function is a preset function for performing partition rendering on the article model. The target distance threshold is a distance value for partitioning the item model, and may be one distance value or a plurality of distance values.
Specifically, through the description of the above embodiments, the electronic device may establish a binding relationship between a plurality of target item points in an item model and the head model, solving the problem that the head special effect prop cannot adapt well to the head model. On this basis, it is further considered that the target head item may not be entirely in contact with the head, i.e., at least part of the target head item does not need to move with the same magnitude as the head. Therefore, the embodiment of the present disclosure further provides a partition rendering function to divide the item model corresponding to the target head item into different regions with different degrees of association with the head model, so that different regions of the head special effect prop deform to different degrees as the head model moves, making the head special effect prop more lifelike during motion and giving a better augmented display effect.
In a specific implementation, the electronic device presets a partition rendering function, for example, a function control for partition rendering, in the parameter setting area of the prop making interface. The user only needs to trigger the function control, for example by selecting or clicking it. The electronic device then detects the trigger operation on the partition rendering function and determines the target distance threshold. For example, the electronic device determines a default distance threshold preset in the partition rendering function as the target distance threshold; as another example, the electronic device determines a distance value entered by the user through the partition rendering function as the target distance threshold.
In some embodiments, step a may be implemented as: responding to the triggering operation of the partition rendering function, and displaying a second parameter setting sub-area corresponding to the partition rendering function in the parameter setting area; the target distance threshold is determined in response to a parameter setting operation on the second target parameter displayed in the second parameter setting sub-area.
Wherein the second target parameter is a parameter that sets a distance threshold in the partition rendering function.
Specifically, referring to fig. 3, after the user selects the "partition rendering" function, the electronic device detects the trigger operation on the partition rendering function and then displays a second parameter setting sub-region 332 corresponding to the partition rendering function in the parameter setting region 330. The second parameter setting sub-region 332 displays parameter setting options for the second target parameter, such as distance threshold setting options 333 corresponding to the "first distance threshold" and the "second distance threshold", respectively. The user may input numerical values into the distance threshold setting options 333; the electronic device receives the input values and determines them as the target distance threshold. In this way the user can conveniently configure the partitioning, so that a head special effect prop meeting the user's requirements can be obtained subsequently.
It should be noted that, referring to fig. 3, the electronic device may display the corresponding numerical reference range in the peripheral region of each second target parameter, so as to prompt the user to perform the parameter setting operation.
Step B, dividing the target item model into at least two display ranges based on the target distance threshold and the distances between the item points in the target item model and the target head model.
Wherein the target item model comprises an initial item model or an adjusted item model and the target head model comprises an initial head model or an adjusted head model.
Specifically, different parts of the target item model lie at different distances from the target head model, and these distances determine the degree to which each part deforms with the head model. Therefore, the electronic device may calculate the distance between each item point in the target item model and the target head model. For example, the distance between the item point and each head point in the target head model is calculated, and the minimum of these distances is taken as the distance between the item point and the target head model. The electronic device then classifies each distance, and thus each item point, using the target distance threshold. Finally, the electronic device divides the target item model into different regions according to the category to which each item point belongs, with each region corresponding to one display range.
If the target distance threshold is a single distance value, the target item model is divided into two display ranges according to the above process. If the target distance threshold comprises two distance values, the target item model is divided into three display ranges, and so on.
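The division in step B amounts to bucketing each item point's distance against the sorted thresholds. A minimal sketch, with `np.digitize` standing in for the classification (the function name is illustrative):

```python
import numpy as np

def partition_item_points(item_to_head_distances, thresholds):
    """Assign each item point a display-range index from its distance to the
    head model: n thresholds yield n + 1 ranges (one threshold -> two ranges,
    two thresholds -> three ranges, matching the text)."""
    return np.digitize(item_to_head_distances, np.sort(np.asarray(thresholds)))
```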
Step C, updating the target item model based on the display ranges and the display style corresponding to each display range, and displaying the target head model and the updated target item model in the prop display area.
Wherein the number of display styles is at least two.
Specifically, the electronic device updates the target item model according to the divided display ranges and the display style (such as display color, display transparency, display texture, and the like) corresponding to each display range. The model-related parameters of the target item model (such as model size, position, orientation, rotation angle, and the like) are not changed; only the display style of the different display ranges is updated. The electronic device then refreshes the prop display area with the updated target item model while still displaying the unchanged target head model. To let the user judge the relationship between the parts of the target item model and the parts of the target head model more intuitively, and to assist the user in making the head special effect prop, the display styles of the display ranges can differ: display ranges of the same category share a display style, while display ranges of different categories may have different display styles.
Referring to fig. 3, in the case where the target distance threshold comprises two distance values, the target hat model 321 may be divided into three display ranges according to the above process: a main display range 3211, a transition display range 3212, and a visor display range 3213. The item points in the main display range 3211 are closest to the target head model 322 and belong to the part that deforms the most as the head model moves; the item points in the transition display range 3212 are the next closest and belong to the part that deforms to a smaller extent; the item points in the visor display range 3213 are the farthest and belong to the part that hardly deforms as the head model moves.
The execution order between S230 and steps A to C is not limited. The adjustment of the two models and the partitioned display of the item model have no fixed order and may be interleaved until an adjustment result satisfying the user is reached.
On the basis of the embodiments in which the target item model is divided into different display ranges, the mapping relation established in S240 may include the distance value between the target item point and each corresponding head point, which may specifically be implemented as the following steps D and E.
Step D, determining the target distance corresponding to each target item point based on the distance corresponding to the target item point.
The target distance is the distance value used to construct the mapping relation.
Specifically, according to the above description, the electronic device may calculate the distance between each item point in the target item model and the target head model, and since the target item points are selected from these item points, the distance between each target item point and the target head model is also available. The electronic device then determines this distance as the target distance corresponding to the target item point. Alternatively, to facilitate calculation and denoising, the electronic device may apply certain processing to the distance corresponding to the target item point and determine the processing result as the target distance.
Step E, for each target item point, establishing an information conversion relation between the target item point and the head points corresponding to the target item point, and determining the information conversion relation together with the target distance corresponding to the target item point as the mapping relation corresponding to the target item point.
The information conversion relation comprises a position conversion relation and an orientation conversion relation.
Specifically, for each target item point, the electronic device may calculate a conversion relationship between the item point information of the target item point and the corresponding head point information of each head point, so as to obtain an information conversion relationship. Then, the electronic device determines the information conversion relation corresponding to each target item point and the target distance together as the mapping relation corresponding to the corresponding target item point so as to optimize the mapping relation between the head model and the item model and further optimize the subsequently generated head special-effect prop.
In some embodiments, step D may be implemented as: based on a first distance threshold and a second distance threshold among the target distance thresholds, normalizing the distance corresponding to each target item point to obtain the target distance corresponding to that target item point.
Specifically, according to the above description, each target item point has its own mapping relation with its corresponding head points, so the head special effect prop as a whole carries multiple mapping relations. This may cause adjacent points of the head special effect prop to deform to different degrees during application, leading to abnormal display of the head special effect prop. Therefore, in the embodiment of the present disclosure, the item point information corresponding to the target item point is further added to the mapping relation, so that during prop application the item point information can be fused with the result calculated from the mapping relation. This smooths the results produced by the individual mapping relations and ensures display fidelity during application of the head special effect prop. The fusion of the item point information with the calculated result may be a weighted combination, and the weights may be determined by the target distance corresponding to the target item point.
Based on the above description, to simplify the calculation and allow the target distance to participate in the calculation as a weight, in this embodiment the distance corresponding to each target item point is normalized using two different thresholds among the target distance thresholds, namely the first distance threshold and the second distance threshold; the result is the target distance corresponding to that target item point.
For example, where the first distance threshold is less than the second distance threshold, the electronic device may normalize distances greater than the second distance threshold to a target distance having a value of 1, normalize distances less than the first distance threshold to a target distance having a value of 0, and normalize distances between the first distance threshold and the second distance threshold to a target distance having a value between 0 and 1.
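The normalization just described can be written as a clamped ramp between the two thresholds. A minimal sketch; the linear interpolation in the middle band is an assumption, since the text only requires a value between 0 and 1 there:

```python
import numpy as np

def normalize_distance(distance, first_threshold, second_threshold):
    """Map a raw distance into [0, 1]: below the first threshold -> 0,
    above the second threshold -> 1, linear ramp in between."""
    return float(np.clip((distance - first_threshold)
                         / (second_threshold - first_threshold), 0.0, 1.0))
```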
On the basis of the embodiment in which the target distance is obtained through the normalization processing, the determining, in the step E, the information conversion relationship and the target distance corresponding to the target item point as the mapping relationship corresponding to the target item point includes: determining the information conversion relation, the target distance corresponding to the target article point and the article point information as a mapping relation corresponding to the target article point; wherein the item point information comprises an item point location and an item point orientation.
Specifically, the electronic device uses the information conversion relationship, the target distance, and the item point information corresponding to each target item point as the mapping relationship of the corresponding target item point.
Next, the special effect prop application method provided by the embodiment of the present disclosure is described with reference to fig. 4.
In the embodiment of the present disclosure, the special effect prop application method is suitable for usage scenarios of the head special effect prop, for example, shooting images/videos with the head special effect prop, post-synthesis of images/videos with the head special effect prop, and the like. The special effect prop application method can be executed by a special effect prop application device, which can be implemented in software and/or hardware and integrated in an electronic device capable of using the head special effect prop. The electronic device may include, but is not limited to, a smartphone, a tablet computer, a laptop computer, a desktop computer, and the like.
Fig. 4 shows a flow chart of a special effect item application method provided by the embodiment of the disclosure. As shown in fig. 4, the method for applying a special effect prop may include the following steps:
S410, obtaining a face image, and determining head key points based on the face image.
Specifically, the electronic device obtains a face image by shooting with a camera, reading from a storage medium, or loading from the network side. The electronic device then constructs a user head model from the face image and extracts points from the user head model as head key points.
In some embodiments, determining the head keypoints based on the face images comprises: determining a face key point, a forehead key point and an ear key point based on the face image; generating a user head model based on the face key points, the forehead key points and the ear key points, and determining at least two head key points based on the user head model.
Specifically, because the application of the head special effect prop involves multiple parts of the user such as the face, forehead, and ears, in order to construct the user head model more accurately, the electronic device can identify face key points, forehead key points, and ear key points from the face image. The electronic device then adjusts an initial head model according to the face key points, forehead key points, and ear key points to obtain a user head model matching the face image. Finally, the electronic device extracts a plurality of points from the user head model according to the information of the head points selected when the mapping relation was constructed (such as their positions and number in the head model) and uses the extracted points as head key points.
S420, responding to application triggering operation of the head special effect prop, determining an initial head special effect prop, and determining a prop key point based on the head key point and a mapping relation contained in the initial head special effect prop.
The mapping relation is used for recording the corresponding relation between the head point in the head model and the article point in the article model corresponding to the initial head special effect prop.
Specifically, the user may select one of the provided head special effect props, and the electronic device detects the application trigger operation on the head special effect prop and determines the triggered head special effect prop as the initial head special effect prop. The initial head special effect prop is a finished special effect prop exported by the prop making application program and may not yet be adapted to the user head model. Therefore, the electronic device can calculate the points in the head special effect prop corresponding to the head key points, i.e., the prop key points, using the obtained head key points and the mapping relation contained in the initial head special effect prop.
For example, when the mapping relationship is an information conversion relationship between the head model and the article model, the electronic device may substitute information such as the position and orientation of each head key point into the information conversion relationship, and the calculated result is a prop key point corresponding to the head key point.
It should be noted that, if one target item point corresponds to multiple head points during the generation of the head special effect prop, then in this step, when the electronic device calculates the prop key points using the mapping relation, the same number of head key points at the same positions also need to be substituted into the information conversion relation to obtain the one prop key point corresponding to those head points.
In some embodiments, the mapping relationship may include an information transfer relationship between the head point and the item point and a target distance between the item point and the head point; the information conversion relation comprises a position conversion relation and an orientation conversion relation.
Correspondingly, determining at least two prop key points based on each head key point and the mapping relation contained in the initial head special effect prop in S420 includes: determining at least two initial prop points based on the head key points and the information conversion relation; and, for each initial prop point, correcting the prop point information of the initial prop point based on the target distance corresponding to the item point matched with the initial prop point, to obtain the prop key point corresponding to the initial prop point.
Specifically, substituting information such as the position and orientation of a head key point into the information conversion relation yields a preliminary prop point position and orientation, i.e., an initial prop point. The electronic device then corrects each initial prop point according to its corresponding target distance in the mapping relation. For example, when the absolute difference between the distance from the initial prop point to the user head model and the target distance is greater than a fourth distance threshold, the initial prop point may be adjusted until its distance falls within the range (target distance ± fourth distance threshold). The result of this correction is the prop key point corresponding to the initial prop point. In this way the prop key points are smoothed to a certain extent, their accuracy is improved, poor adaptation between the head special effect prop and the user head model is avoided, and the fidelity of the subsequent target head special effect prop is further improved.
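A minimal sketch of this correction step. It assumes the prop point is moved radially relative to a single reference head point; the choice of reference point and the radial move are illustrative assumptions, as the text does not fix the adjustment direction.

```python
import numpy as np

def correct_prop_point(initial_point, reference_head_point, target_distance, fourth_threshold):
    """If the prop point's distance to the head deviates from the target
    distance by more than the fourth distance threshold, move it along the
    head-to-point direction so its distance equals the target distance."""
    offset = initial_point - reference_head_point
    d = np.linalg.norm(offset)
    if abs(d - target_distance) <= fourth_threshold:
        return initial_point  # already within (target distance ± threshold)
    return reference_head_point + offset * (target_distance / d)
```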
On the basis of the above embodiment, the mapping relationship may further include item point information of the item point.
Correspondingly, correcting the prop point information of the initial prop point based on the target distance corresponding to the item point matched with the initial prop point, to obtain the prop key point corresponding to the initial prop point, includes: determining, based on the target distance, a first weight corresponding to the prop point information and a second weight corresponding to the item point information; and weighting the prop point information and the item point information based on the first weight and the second weight to obtain the prop key point.
Specifically, to further smooth the prop key points and thus further improve the fit between the head special effect prop and the user head model, in this embodiment the target distance may be used to weight the initial prop point against the item point information of the matching item point in the adjusted item model, yielding a more accurate prop key point.
According to the above description, the target distance is normalized by the first distance threshold and the second distance threshold and lies in the range [0, 1], so it can be used directly as a weight. That is, the target distance is determined as the second weight corresponding to the item point information, and (1 - target distance) is determined as the first weight corresponding to the initial prop point. The electronic device then weights the initial prop point by the first weight, weights the item point information by the second weight, and adds the two weighted results to obtain the prop key point corresponding to the initial prop point.
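Because the normalized target distance already lies in [0, 1], the weighting reduces to a linear blend. A sketch of the fusion described above:

```python
import numpy as np

def fuse_prop_point(initial_prop_point, item_point, target_distance):
    """Blend the computed initial prop point with the authored item point:
    weight (1 - target_distance) on the initial prop point (first weight)
    and target_distance on the item point (second weight), per the text."""
    return (1.0 - target_distance) * initial_prop_point + target_distance * item_point
```

At target distance 0 (part in full contact with the head) the computed prop point wins outright; at target distance 1 (part far from the head, e.g. the visor) the authored item point wins, so that region barely deforms with head motion.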
S430, adjusting the initial head special effect prop based on the prop key point, generating a target head special effect prop, and generating and displaying a head special effect image based on the target head special effect prop and the face image.
Specifically, the prop key points are calculated from the head key points and the mapping relation, so they are dynamically adapted to the user head model. Therefore, the electronic device can adjust the initial head special effect prop using the prop key points to obtain a target head special effect prop that dynamically adapts to the user head model. For example, if the initial head special effect prop corresponds to a mesh model, interpolation of the grid points across the whole mesh model can be performed using the prop key points to obtain the adjusted mesh model and thus the target head special effect prop.
Then, the electronic device synthesizes the target head special effect prop with the face image to obtain a head special effect image, and displays the head special effect image. Because the target head special effect prop is obtained through multi-dimensional adaptation of size, position, orientation, and the like according to the user head model, the effect presented by the head special effect image is consistent with the user actually wearing the target head item, giving higher fidelity.
The special effect prop application method provided by the embodiments of the present disclosure can acquire a face image of a real user and determine head key points based on the face image; in response to an application trigger operation on the head special effect prop, determine an initial head special effect prop and determine prop key points based on the head key points and the mapping relation of the initial head special effect prop; and adjust the initial head special effect prop based on the prop key points to generate a target head special effect prop, and generate and display a head special effect image based on the target head special effect prop and the face image. This solves problems such as model clipping that occur during application when the head special effect prop does not fit the user's head, realizes dynamic adaptation of the head special effect prop to the user's head size and head movements, and improves the augmented-reality quality and interest of the head special effect prop during application, thereby enhancing user experience and user stickiness.
Fig. 5 shows a schematic structural diagram of a special effect item generation device provided in an embodiment of the present disclosure. As shown in fig. 5, the special effect item generating device 500 may include:
a prop making interface display module 510, configured to display a prop making interface; the property making interface comprises a property display area;
the model importing module 520 is configured to import an initial head model and an initial article model corresponding to the target head article in response to the material importing operation, and display the initial head model and the initial article model in the prop display area;
a model adjusting module 530, configured to display the adjusted head model and the adjusted item model in the item display area in response to an adjusting operation on the initial head model and the initial item model;
the head special effect prop generating module 540 is configured to establish a mapping relationship between a target item point in the adjusted item model and at least one head point in the adjusted head model corresponding to the target item point, and generate a head special effect prop corresponding to the target head item based on the adjusted item model and the mapping relationship.
The special effect prop generation device provided by the embodiments of the disclosure can, in the process of generating a head special effect prop, display a prop making interface including a prop display area; in response to a material import operation, import an initial head model and an initial item model corresponding to a target head item, and display them in the prop display area; in response to an adjustment operation on the initial head model and the initial item model, display the adjusted head model and the adjusted item model in the prop display area; and establish a mapping relationship between a target item point in the adjusted item model and at least one head point corresponding to the target item point in the adjusted head model, and generate a head special effect prop corresponding to the target head item based on the adjusted item model and the mapping relationship. This realizes point-to-point binding between the head model and the head special effect prop, solves the problem that a prop bound to the head as a single whole fails to fit the head model (for example, clipping through the head mesh), improves the dynamic adaptability of the head special effect prop to the head model, and thereby improves the fidelity and appeal of the head special effect prop.
In some embodiments, the number of target item points is at least two.
In some embodiments, the property making interface further includes a parameter setting area;
accordingly, the model adjustment module 530 includes:
the first parameter setting sub-region display sub-module is used for displaying a first parameter setting sub-region corresponding to the target model in the parameter setting region; wherein the target model comprises an initial head model and/or an initial item model;
and the model adjusting sub-module is used for responding to the parameter setting operation of the first target parameter displayed in the first parameter setting sub-area, determining a first target parameter value, and determining and displaying the adjusted target model based on the first target parameter value.
Further, in the case where the target model includes the initial head model and the initial item model, the first target parameters are the relative position, relative proportion, and relative orientation between the initial head model and the initial item model.
In some embodiments, the special effect prop generating apparatus 500 further comprises a partition rendering module, which includes:
a target distance threshold determining submodule, configured to determine a target distance threshold in response to a trigger operation on a partition rendering function in a parameter setting area contained in the prop making interface, after the initial head model and the initial item model corresponding to the target head item are imported in response to the material import operation and displayed in the prop display area;
a display range dividing submodule, configured to divide the target item model into at least two display ranges based on the target distance threshold and the distance between each item point in the target item model and the target head model; wherein the target item model comprises the initial item model or the adjusted item model, and the target head model comprises the initial head model or the adjusted head model;
and a partition rendering and displaying submodule, configured to update the target item model based on each display range and the display style corresponding to that display range, and display the target head model and the updated target item model in the prop display area; wherein the number of display styles is at least two.
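One plausible sketch of the display-range division follows. The two-range split by nearest distance to the head model is an assumption (the disclosure allows any number of ranges from at least two), and the names are illustrative.

```python
import numpy as np

def partition_item_points(item_points, head_points, threshold):
    """Split the item model's points into two display ranges by each
    point's nearest distance to the head model: within the target
    distance threshold ("near") or beyond it ("far")."""
    head = np.asarray(head_points, dtype=float)
    ranges = {"near": [], "far": []}
    for idx, p in enumerate(np.asarray(item_points, dtype=float)):
        d = np.min(np.linalg.norm(head - p, axis=1))  # nearest head point
        ranges["near" if d <= threshold else "far"].append(idx)
    return ranges
```

Each resulting range would then be rendered with its own display style so the prop maker can see which item points sit close to the head model.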
In some embodiments, head special effects prop generation module 540 is specifically configured to:
determining a target distance corresponding to each target item point based on the distance corresponding to that target item point;
and, for each target item point, establishing an information conversion relationship between the target item point and the head point corresponding to the target item point, and determining the information conversion relationship and the target distance corresponding to the target item point as the mapping relationship corresponding to the target item point; the information conversion relationship comprises a position conversion relationship and an orientation conversion relationship.
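A minimal sketch of recording and applying such a mapping relationship is shown below. Only the position conversion is modeled (the orientation conversion is omitted for brevity), and the representation of the conversion as a simple offset from the bound head point is an assumption; names are illustrative.

```python
import numpy as np

def build_mapping(item_point, head_point, target_dist):
    """Record one item point's mapping relationship: a position
    conversion (offset from its bound head point) plus the target
    distance carried along for later correction."""
    return {
        "offset": np.asarray(item_point, float) - np.asarray(head_point, float),
        "target_distance": float(target_dist),
    }

def apply_position_conversion(head_keypoint, mapping):
    """Recover the prop point position from a detected head keypoint."""
    return np.asarray(head_keypoint, float) + mapping["offset"]
```

At application time, each detected head keypoint is pushed through its stored conversion to yield the corresponding initial prop point.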
Further, the head special effect item generating module 540 is specifically configured to:
based on a first distance threshold and a second distance threshold among the target distance thresholds, normalizing the distance corresponding to each target item point to obtain the target distance corresponding to that target item point;
determining the information conversion relationship and the target distance corresponding to the target item point as the mapping relationship corresponding to the target item point comprises the following step:
determining the information conversion relationship, the target distance corresponding to the target item point, and the item point information as the mapping relationship corresponding to the target item point; wherein the item point information comprises an item point location and an item point orientation.
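The normalization step could look like the following. A linear clamp-and-scale between the first and second distance thresholds is an assumption, as the disclosure does not give the exact formula.

```python
def normalize_distance(distance, first_threshold, second_threshold):
    """Normalize a raw item-point-to-head distance into [0, 1] using
    the first and second distance thresholds, clamping values that
    fall outside the range."""
    t = (distance - first_threshold) / (second_threshold - first_threshold)
    return min(max(t, 0.0), 1.0)
```

The resulting target distance then serves as a per-point blending factor when prop points are corrected during application.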
In some embodiments, the target distance threshold determination submodule is specifically configured to:
responding to the triggering operation of the partition rendering function, and displaying a second parameter setting sub-area corresponding to the partition rendering function in the parameter setting area;
the target distance threshold is determined in response to a parameter setting operation on the second target parameter displayed in the second parameter setting sub-area.
In some embodiments, the special effect prop generating device 500 further comprises a special effect prop export module configured to:
after the head special effect prop corresponding to the target head item is generated based on the adjusted item model and the mapping relationship, export the head special effect prop so as to perform special effect configuration on the head in the original shooting data.
It should be noted that the special effect item generation device 500 shown in fig. 5 may execute each step in the special effect item generation method provided in any embodiment of the present disclosure, and implement each process and effect in the special effect item generation method provided in any embodiment of the present disclosure, which are not described herein again.
Fig. 6 shows a schematic structural diagram of a special effect item application device provided in an embodiment of the present disclosure. As shown in fig. 6, the special effect item application device 600 may include:
a head key point determining module 610, configured to obtain a face image, and determine a head key point based on the face image;
a prop key point determining module 620, configured to determine an initial head special effect prop in response to an application trigger operation on the head special effect prop, and determine prop key points based on the head key points and the mapping relationship contained in the initial head special effect prop; the mapping relationship is used to record the correspondence between head points in a head model and item points in the item model corresponding to the initial head special effect prop;
the head special effect image generation module 630 is configured to adjust the initial head special effect prop based on the prop key point, generate a target head special effect prop, and generate and display a head special effect image based on the target head special effect prop and the face image.
The special effect prop application device provided by the embodiments of the disclosure can acquire a face image of a real user and determine head key points based on the face image; in response to an application trigger operation on a head special effect prop, determine an initial head special effect prop, and determine prop key points based on the head key points and the mapping relationship contained in the initial head special effect prop; and adjust the initial head special effect prop based on the prop key points to generate a target head special effect prop, then generate and display a head special effect image based on the target head special effect prop and the face image. This solves the problem of head special effect props failing to fit the user's head during application (for example, clipping through the head mesh), enables the head special effect prop to dynamically adapt to the user's head size and head movements, and improves the augmented-reality quality and appeal of the head special effect prop during application, thereby enhancing user experience and user retention.
In some embodiments, the mapping relationship includes an information conversion relationship between the head point and the item point and a target distance between the item point and the head point; the information conversion relationship comprises a position conversion relationship and an orientation conversion relationship;
accordingly, the item keypoint determination module 620 comprises:
the initial prop point determining submodule is used for determining at least two initial prop points based on the head key points and the information conversion relation;
and a prop key point obtaining submodule, configured to, for each initial prop point, correct the prop point information of the initial prop point based on the target distance corresponding to the item point matched with the initial prop point, to obtain the prop key point corresponding to the initial prop point.
Further, the mapping relation further comprises article point information of the article points, and the article point information comprises article point positions and article point orientations;
correspondingly, the property key point obtaining submodule is specifically configured to:
determining, based on the target distance, a first weight corresponding to the prop point information and a second weight corresponding to the item point information;
and performing weighting processing on the prop point information and the item point information based on the first weight and the second weight to obtain the prop key points.
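A sketch of this weighting step is given below, under the assumption that the two weights are complementary and derived linearly from the normalized target distance; the disclosure does not specify the weight function, so this is one plausible choice with illustrative names.

```python
import numpy as np

def correct_prop_point(prop_point, item_point, target_distance):
    """Blend the head-driven prop point information with the authored
    item point information. Assumed weighting: a small target distance
    (point close to the head) weights the head-driven information
    heavily, while a large one preserves more of the authored pose."""
    first_weight = 1.0 - target_distance   # weight for prop point info
    second_weight = target_distance        # weight for item point info
    return (first_weight * np.asarray(prop_point, float)
            + second_weight * np.asarray(item_point, float))
```

With this choice, item points bound tightly to the head track the head keypoints exactly, while loosely bound points keep the prop's authored shape.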
In some embodiments, the head keypoint determination module 610 is specifically configured to:
determining a face key point, a forehead key point and an ear key point based on the face image;
generating a user head model based on the face key points, the forehead key points and the ear key points, and determining at least two head key points based on the user head model.
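How head key points might be derived geometrically from the detected face, forehead, and ear landmarks can be sketched as follows. The construction and the 0.6 lift factor are purely illustrative assumptions; a production system would fit a full 3D user head model to the landmarks instead.

```python
import numpy as np

def head_keypoints_from_landmarks(forehead_pts, left_ear, right_ear):
    """Derive two coarse head key points: the head centre (midpoint of
    the ears) and a crown point lifted above the mean forehead landmark
    by a fraction of the ear spacing (assumed factor 0.6)."""
    left = np.asarray(left_ear, float)
    right = np.asarray(right_ear, float)
    center = 0.5 * (left + right)                     # head centre
    width = np.linalg.norm(right - left)              # head width estimate
    crown = (np.mean(np.asarray(forehead_pts, float), axis=0)
             + np.array([0.0, 0.6 * width, 0.0]))     # lifted crown point
    return center, crown
```

This satisfies the "at least two head key points" requirement with the cheapest possible geometry, which is enough to anchor and scale a prop.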
It should be noted that the special effect item application device 600 shown in fig. 6 may execute each step in the special effect item application method provided in any embodiment of the present disclosure, and implement each process and effect in the special effect item application method provided in any embodiment of the present disclosure, which are not described herein again.
Embodiments of the present disclosure also provide an electronic device that may include a processor and a memory, which may be used to store executable instructions. The processor may be configured to read the executable instructions from the memory and execute the executable instructions to implement the special effect item generation method or the special effect item application method in any embodiment of the present disclosure.
Fig. 7 shows a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure. It should be noted that the electronic device 700 shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of the application of the embodiments of the present disclosure.
As shown in fig. 7, the electronic device 700 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 701 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 702 or a program loaded from a storage means 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data necessary for the operation of the electronic device 700. The processing means 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 700 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
Embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the processor is enabled to implement a special effect item generation method or a special effect item application method in any embodiment of the present disclosure.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 709, or may be installed from the storage means 708, or may be installed from the ROM 702. When executed by the processing device 701, the computer program performs the above-described functions defined in the special effect item generation method or the special effect item application method in any embodiment of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP, and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs, which when executed by the electronic device, cause the electronic device to perform the steps of the special effect item generation method or the special effect item application method in any embodiment of the present disclosure.
In embodiments of the present disclosure, computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is merely illustrative of the preferred embodiments of the disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by interchanging the above features with (but not limited to) features with similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (17)

1. A method for generating special effects props is characterized by comprising the following steps:
displaying a prop making interface; the property making interface comprises a property display area;
responding to material import operation, importing an initial head model and an initial article model corresponding to a target head article, and displaying the initial head model and the initial article model in the prop display area;
displaying the adjusted head model and the adjusted item model in the prop display area in response to an adjustment operation on the initial head model and the initial item model;
and establishing a mapping relation between a target item point in the adjusted item model and at least one head point corresponding to the target item point in the adjusted head model, and generating a head special effect prop corresponding to the target head item based on the adjusted item model and the mapping relation.
2. The method of claim 1, wherein the number of target item points is at least two.
3. The method according to claim 1, wherein the item making interface further comprises a parameter setting area;
the displaying the adjusted head model and the adjusted item model in the prop display area in response to the adjusting operation of the initial head model and the initial item model comprises:
displaying a first parameter setting subarea corresponding to the target model in the parameter setting area; wherein the target model comprises the initial head model and/or the initial item model;
and in response to the parameter setting operation of the first target parameter displayed in the first parameter setting subarea, determining a first target parameter value, and determining and displaying an adjusted target model based on the first target parameter value.
4. The method according to claim 3, wherein, in the case where the target model comprises the initial head model and the initial item model, the first target parameters are the relative position, relative proportion, and relative orientation between the initial head model and the initial item model.
5. The method according to any one of claims 1 to 3, wherein after said importing an initial head model and an initial item model corresponding to a target head item in response to a material import operation and displaying the initial head model and the initial item model in the prop display area, the method further comprises:
determining a target distance threshold value in response to a triggering operation of a partition rendering function in a parameter setting area included in the property making interface;
dividing the target item model into at least two display ranges based on the target distance threshold and the distance between an item point in the target item model and a target head model; wherein the target item model comprises the initial item model or the adjusted item model, and the target head model comprises the initial head model or the adjusted head model;
updating the target item model based on each display range and the display style corresponding to the display range, and displaying the target head model and the updated target item model in the prop display area; wherein the number of the display styles is at least two.
6. The method according to claim 5, wherein the establishing a mapping relationship between a target item point in the adjusted item model and at least one head point in the adjusted head model corresponding to the target item point comprises:
determining a target distance corresponding to each target item point based on the distance corresponding to the target item point;
for each target item point, establishing an information conversion relationship between the target item point and the head point corresponding to the target item point, and determining the information conversion relationship and the target distance corresponding to the target item point as the mapping relationship corresponding to the target item point; the information conversion relationship comprises a position conversion relationship and an orientation conversion relationship.
7. The method of claim 6, wherein the determining a target distance for each of the target item point correspondences based on the distances for the target item point correspondences comprises:
based on a first distance threshold and a second distance threshold among the target distance thresholds, normalizing the distance corresponding to each target item point to obtain the target distance corresponding to the target item point;
determining the information conversion relationship and the target distance corresponding to the target item point as the mapping relationship corresponding to the target item point comprises:
determining the information conversion relationship, the target distance corresponding to the target item point, and the item point information as the mapping relationship corresponding to the target item point; wherein the item point information includes an item point location and an item point orientation.
8. The method according to claim 5, wherein the determining a target distance threshold in response to a triggering operation of a zone rendering function in a parameter setting area included in the prop preparation interface comprises:
responding to the triggering operation of the partition rendering function, and displaying a second parameter setting sub-area corresponding to the partition rendering function in the parameter setting area;
determining the target distance threshold in response to a parameter setting operation on a second target parameter displayed in the second parameter setting sub-area.
9. The method of claim 1, wherein after generating the head specific prop corresponding to the target head item based on the adjusted item model and the mapping relationship, the method further comprises:
and exporting the head special effect prop so as to carry out special effect configuration on the head in the original shooting data.
10. A special effect prop application method is characterized by comprising the following steps:
acquiring a face image, and determining a head key point based on the face image;
responding to application triggering operation of the head special effect prop, determining an initial head special effect prop, and determining a prop key point based on the head key point and a mapping relation contained in the initial head special effect prop; the mapping relation is used for recording the corresponding relation between a head point in a head model and an article point in an article model corresponding to the initial head special effect prop;
and adjusting the initial head special effect prop based on the prop key point to generate a target head special effect prop, and generating and displaying a head special effect image based on the target head special effect prop and the face image.
11. The method of claim 10, wherein the mapping relationship comprises an information conversion relationship between the head point and the item point and a target distance between the item point and the head point; the information conversion relationship comprises a position conversion relationship and an orientation conversion relationship;
the determining the prop key points based on the head key points and the mapping relationship contained in the initial head special effect prop comprises:
determining at least two initial prop points based on each head key point and the information conversion relation;
and for each initial prop point, correcting the prop point information of the initial prop point based on the target distance corresponding to the item point matched with the initial prop point, to obtain the prop key point corresponding to the initial prop point.
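As an illustration only (not part of the claims), the conversion-and-correction steps of claim 11 can be sketched as follows; the encoding of the information conversion relation as a rotation matrix plus offset vector is an assumption, since the claim does not fix a representation:

```python
import numpy as np

def initial_prop_point(head_pos, head_dir, offset, rot):
    """Apply a recorded information conversion relation (claim 11 sketch):
    a position conversion (offset rotated into the head frame) and an
    orientation conversion (fixed rotation). `offset` and `rot` are
    hypothetical encodings of that relation."""
    pos = head_pos + rot @ offset        # position conversion relation
    direction = rot @ head_dir           # orientation conversion relation
    return pos, direction / np.linalg.norm(direction)

def correct_by_target_distance(prop_pos, head_pos, target_dist):
    """Move the initial prop point along its offset from the matched head
    point until it sits at the recorded target distance (the correction
    step of claim 11)."""
    v = prop_pos - head_pos
    return head_pos + v * (target_dist / np.linalg.norm(v))
```

With an identity rotation the prop point is simply the head point plus the offset; the correction then rescales that offset to the stored target distance.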
12. The method of claim 11, wherein the mapping relation further comprises item point information of the item points, the item point information comprising an item point position and an item point orientation;
the correcting the prop point information of the initial prop point based on the target distance corresponding to the item point matched with the initial prop point to obtain the prop key point corresponding to the initial prop point comprises:
determining, based on the target distance, a first weight corresponding to the prop point information and a second weight corresponding to the item point information;
and performing weighting processing on the prop point information and the item point information based on the first weight and the second weight to obtain the prop key point.
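Illustratively (again not part of the claims), the weighting of claim 12 might look like the sketch below; the specific distance-to-weight mapping, a normalized falloff with a hypothetical `scale` parameter, is an assumption, as the claim only states that both weights are determined from the target distance:

```python
import numpy as np

def blend_prop_point(prop_pos, item_pos, target_dist, scale=1.0):
    """Claim 12 sketch: derive two weights from the target distance and
    blend the tracked prop point position with the authored item point
    position. The falloff used here is an assumption for illustration."""
    w_item = 1.0 / (1.0 + target_dist / scale)  # small distance: trust the authored model
    w_prop = 1.0 - w_item                       # large distance: trust the tracked point
    return w_prop * prop_pos + w_item * item_pos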
13. The method of claim 10, wherein the determining head key points based on the face image comprises:
determining face key points, forehead key points and ear key points based on the face image;
generating a user head model based on the face key points, the forehead key points and the ear key points, and determining at least two head key points based on the user head model.
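As a non-authoritative sketch of claim 13, face, forehead and ear key points could be fused into a rough user head model from which head key points are sampled; the sphere fit and ring sampling below are assumptions standing in for the unspecified head model construction:

```python
import numpy as np

def head_keypoints_from_landmarks(face_pts, forehead_pts, ear_pts, n=8):
    """Claim 13 sketch: fuse face, forehead and ear key points into a
    rough user head model (here a centroid-plus-radius sphere, an
    assumption) and sample n >= 2 head key points from it."""
    pts = np.vstack([face_pts, forehead_pts, ear_pts])
    center = pts.mean(axis=0)                              # head model origin
    radius = np.linalg.norm(pts - center, axis=1).max()    # head model extent
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    ring = np.stack([np.cos(angles), np.sin(angles), np.zeros(n)], axis=1)
    return center + radius * ring                          # n head key points
```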
14. A special effect prop generation device, comprising:
the prop making interface display module is used for displaying a prop making interface; the property making interface comprises a property display area;
the model importing module is used for responding to a material importing operation, importing an initial head model and an initial item model corresponding to a target head item, and displaying the initial head model and the initial item model in the prop display area;
a model adjustment module for displaying the adjusted head model and the adjusted item model in the item display area in response to an adjustment operation on the initial head model and the initial item model;
and the head special effect prop generation module is used for establishing a mapping relation between at least two target item points in the adjusted item model and at least one head point corresponding to each target item point in the adjusted head model, and generating the head special effect prop corresponding to the target head item based on the adjusted item model and the mapping relation.
15. A special effect prop application device, comprising:
the head key point determining module is used for acquiring a face image and determining head key points based on the face image;
the prop key point determining module is used for responding to an application triggering operation on the head special effect prop, determining an initial head special effect prop, and determining prop key points based on the head key points and a mapping relation contained in the initial head special effect prop; the mapping relation is used for recording the correspondence between head points in a head model and item points in an item model corresponding to the initial head special effect prop;
and the head special effect image generation module is used for adjusting the initial head special effect prop based on the prop key point, generating a target head special effect prop, and generating and displaying a head special effect image based on the target head special effect prop and the face image.
16. An electronic device, comprising:
a processor;
a memory for storing executable instructions;
wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the special effect prop generation method of any one of claims 1 to 9 or the special effect prop application method of any one of claims 10 to 13.
17. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, causes the processor to implement the special effect prop generation method of any one of claims 1 to 9 or the special effect prop application method of any one of claims 10 to 13.
CN202111664536.XA 2021-12-31 2021-12-31 Special effect prop generation and application method, device, equipment and medium Pending CN114299270A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111664536.XA CN114299270A (en) 2021-12-31 2021-12-31 Special effect prop generation and application method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111664536.XA CN114299270A (en) 2021-12-31 2021-12-31 Special effect prop generation and application method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN114299270A true CN114299270A (en) 2022-04-08

Family

ID=80973927

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111664536.XA Pending CN114299270A (en) 2021-12-31 2021-12-31 Special effect prop generation and application method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114299270A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024037557A1 (en) * 2022-08-19 2024-02-22 北京字跳网络技术有限公司 Special effect prop processing method and apparatus, electronic device, and storage medium


Similar Documents

Publication Publication Date Title
CN110766777B (en) Method and device for generating virtual image, electronic equipment and storage medium
WO2017202383A1 (en) Animation generation method, terminal, and storage medium
KR20210119438A (en) Systems and methods for face reproduction
WO2021169307A1 (en) Makeup try-on processing method and apparatus for face image, computer device, and storage medium
WO2021008166A1 (en) Method and apparatus for virtual fitting
EP3839879B1 (en) Facial image processing method and apparatus, image device, and storage medium
CN110782515A (en) Virtual image generation method and device, electronic equipment and storage medium
US11494999B2 (en) Procedurally generating augmented reality content generators
US20130321412A1 (en) Systems and methods for adjusting a virtual try-on
US20220237812A1 (en) Item display method, apparatus, and device, and storage medium
US10147240B2 (en) Product image processing method, and apparatus and system thereof
CN111275650B (en) Beauty treatment method and device
CN111369428A (en) Virtual head portrait generation method and device
CN110148191A (en) The virtual expression generation method of video, device and computer readable storage medium
CN108170282A (en) For controlling the method and apparatus of three-dimensional scenic
CN112734633A (en) Virtual hair style replacing method, electronic equipment and storage medium
CN115512014A (en) Method for training expression driving generation model, expression driving method and device
CN114299270A (en) Special effect prop generation and application method, device, equipment and medium
CN113298956A (en) Image processing method, nail beautifying method and device, and terminal equipment
CN110580677A (en) Data processing method and device and data processing device
US20220101419A1 (en) Ingestion pipeline for generating augmented reality content generators
CN115702443A (en) Applying stored digital makeup enhancements to recognized faces in digital images
CN107133347B (en) Method and device for displaying visual analysis chart, readable storage medium and terminal
CN111291218B (en) Video fusion method, device, electronic equipment and readable storage medium
CN112099712B (en) Face image display method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination