CN115187759A - Special effect making method, device, equipment, storage medium and program product

Info

Publication number: CN115187759A
Application number: CN202210793075.4A
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 叶展鸿
Current assignee: Beijing Zitiao Network Technology Co Ltd
Original assignee: Beijing Zitiao Network Technology Co Ltd
Application filed by Beijing Zitiao Network Technology Co Ltd
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/20: Scenes; Scene-specific elements in augmented reality scenes

Abstract

The present disclosure relates to a special effect making method, apparatus, device, storage medium and program product, comprising: in response to a first operation on a special effect making interface, displaying a picture recognition control in the special effect making interface; in response to a second operation on the picture recognition control, obtaining a picture recognition model based on a received sample picture; in response to a third operation on the special effect making interface, acquiring a target virtual object; and generating a target special effect prop based on the picture recognition model and the target virtual object, wherein the target special effect prop is used to recognize a target picture or target video and to control the target virtual object to be displayed on the target picture or target video. With the present disclosure, an AR picture recognition special effect can be made simply and conveniently through an open special effect prop making tool, meeting the needs of different users.

Description

Special effect making method, device, equipment, storage medium and program product
Technical Field
The present disclosure relates to the field of computer processing technologies, and in particular, to a special effect making method, apparatus, device, storage medium, and program product.
Background
With the development of computer technology, many special effect props based on technologies such as augmented reality (AR) and virtual reality have appeared, for example virtual headwear props and virtual sticker props.
AR capability is an indispensable part of special effect capability. An AR picture recognition special effect displays a specific AR object over a picture once that picture is recognized, which makes it highly engaging; because it is triggered by a specific picture, it can be targeted precisely at a particular audience.
At present, some suppliers provide special effect making tools so that users can make the special effects they need by themselves. However, existing special effect prop making tools cannot easily produce a special effect that implements AR picture recognition.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present disclosure provide a special effect making method, apparatus, device, storage medium, and program product, which make it simple and convenient to produce an AR picture recognition special effect through an open special effect prop making tool, meeting the needs of different users.
In a first aspect, an embodiment of the present disclosure provides a special effect making method, where the method includes:
in response to a first operation on a special effect making interface, displaying a picture recognition control in the special effect making interface;
in response to a second operation on the picture recognition control, obtaining a picture recognition model based on a received sample picture;
in response to a third operation on the special effect making interface, acquiring a target virtual object;
and generating a target special effect prop based on the picture recognition model and the target virtual object, wherein the target special effect prop is used to recognize a target picture or target video and to control the target virtual object to be displayed on the target picture or target video.
In a second aspect, an embodiment of the present disclosure provides a special effect making apparatus, where the apparatus includes:
a picture recognition control display module, configured to display a picture recognition control in a special effect making interface in response to a first operation on the special effect making interface;
a picture recognition model obtaining module, configured to obtain a picture recognition model based on a received sample picture in response to a second operation on the picture recognition control;
a target virtual object acquisition module, configured to acquire a target virtual object in response to a third operation on the special effect making interface;
and a target special effect prop generation module, configured to generate a target special effect prop based on the picture recognition model and the target virtual object, wherein the target special effect prop is used to recognize a target picture or target video and to control the target virtual object to be displayed on the target picture or target video.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the special effect making method according to any one of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored, and the computer program, when executed by a processor, implements the special effect making method according to any one of the above first aspect.
In a fifth aspect, an embodiment of the present disclosure provides a computer program product, which includes a computer program or instructions that, when executed by a processor, implement the special effect making method according to any one of the above first aspect.
Embodiments of the present disclosure provide a special effect making method, apparatus, device, medium and product. The special effect making method comprises: in response to a first operation on a special effect making interface, displaying a picture recognition control in the special effect making interface; in response to a second operation on the picture recognition control, obtaining a picture recognition model based on a received sample picture; in response to a third operation on the special effect making interface, acquiring a target virtual object; and generating a target special effect prop based on the picture recognition model and the target virtual object, wherein the target special effect prop is used to recognize a target picture or target video and to control the target virtual object to be displayed on the target picture or target video. In this way, an AR picture recognition special effect can be made simply and conveniently through an open special effect prop making tool, meeting the needs of different users.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart of a special effect making method in an embodiment of the present disclosure;
Fig. 2 is a schematic diagram illustrating a step of displaying a picture recognition control in an embodiment of the present disclosure;
Fig. 3 is a schematic diagram illustrating a step of setting parameters of a picture recognition model in an embodiment of the present disclosure;
Fig. 4 is a schematic flow chart of a special effect making method in an embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of a special effect making apparatus in an embodiment of the present disclosure;
Fig. 6 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and the embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein is intended to be open-ended, i.e., "including but not limited to". The term "based on" is "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a" or "an" in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will appreciate that references to "one or more" are intended to be exemplary and not limiting unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Before the embodiments of the present disclosure are described in further detail, the terms and expressions used in the embodiments are explained; the following explanations apply throughout.
"In response to" indicates a condition or state on which a performed operation depends: when the condition or state it depends on is satisfied, the one or more operations performed may be carried out in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which the operations are performed.
At present, some suppliers provide special effect making tools so that users can make the special effects they need by themselves. However, existing special effect prop making tools cannot easily produce a special effect that implements AR picture recognition.
To solve this problem, embodiments of the present disclosure provide a special effect making method, apparatus, device, storage medium, and program product: an AR picture recognition prop is obtained through simple interface operations and parameter input, the user downloads the prop to a client, the prop recognizes an uploaded target picture, and a virtual object (i.e., an AR object) is controlled, according to the recognition result, to be displayed on the target picture. This implements an AR picture recognition special effect and is convenient for the user to operate.
The special effect making method provided by the embodiments of the present disclosure can be applied to making a picture recognition special effect prop. For example, a picture recognition special effect prop may display a corresponding virtual object on a target picture in an augmented reality manner when a target object exists in the target picture; the target object may be a human face, and the virtual object may be a dynamically or statically displayed virtual hair accessory. That is, while the user is using the picture recognition special effect prop, if a face is recognized in the target picture, a virtual hair accessory is added to the target picture.
The special effect making method provided by the embodiments of the present disclosure is described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a special effect making method in an embodiment of the present disclosure. This embodiment is applicable to making a special effect prop on an open special effect prop making platform. The method may be executed by a special effect making apparatus, which may be implemented in software and/or hardware and may be configured in an electronic device.
The electronic device may be a mobile terminal, a fixed terminal, or a portable terminal, such as a mobile handset, a station, a unit, a device, a multimedia computer, a multimedia tablet, an internet node, a communicator, a desktop computer, a laptop computer, a notebook computer, a netbook computer, a tablet computer, a Personal Communication System (PCS) device, a personal navigation device, a Personal Digital Assistant (PDA), an audio/video player, a digital camera/camcorder, a positioning device, a television receiver, a radio broadcast receiver, an electronic book device, a gaming device, or any combination thereof, including accessories and peripherals of these devices, or any combination thereof.
As shown in fig. 1, the special effect making method provided by the embodiment of the present disclosure mainly includes steps S101 to S104.
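The overall flow of steps S101 to S104 can be sketched as a minimal illustrative model. This is not the tool's actual code; all class and function names (`EffectProject`, `on_first_operation`, and so on) are hypothetical stand-ins for the behaviour the steps describe:

```python
# Hypothetical sketch of the S101-S104 flow; names are illustrative, not the tool's API.

class EffectProject:
    """Accumulates the state built up across steps S101 to S104."""

    def __init__(self):
        self.controls = []              # controls shown in the making interface (S101)
        self.recognition_model = None   # picture recognition model (S102)
        self.virtual_object = None      # target virtual object (S103)

    def on_first_operation(self):
        # S101: display a picture recognition control in the interface.
        self.controls.append("picture_recognition_control")

    def on_second_operation(self, sample_picture):
        # S102: obtain a picture recognition model from the received sample picture.
        self.recognition_model = {"trained_on": sample_picture}

    def on_third_operation(self, virtual_object):
        # S103: acquire the target virtual object (e.g. a 3D lantern).
        self.virtual_object = virtual_object

    def generate_prop(self):
        # S104: package the model and virtual object into the target special effect prop.
        assert self.recognition_model is not None and self.virtual_object is not None
        return {"model": self.recognition_model, "object": self.virtual_object}

project = EffectProject()
project.on_first_operation()
project.on_second_operation("lantern_sample.png")
project.on_third_operation("3d_lantern")
prop = project.generate_prop()
```

The point of the sketch is the ordering constraint: the prop in S104 can only be generated once both the model (S102) and the virtual object (S103) exist.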
S101: in response to a first operation on the special effect making interface, displaying a picture recognition control in the special effect making interface.
In one embodiment of the present disclosure, before responding to the first operation on the special effect making interface, the method further comprises: displaying the special effect making interface.
Specifically, when a user needs to make a special effect prop, the user starts a special effect prop making tool, such as a special effect prop making application program on the electronic device. The electronic device then displays an application program interface, that is, the special effect making interface for making the special effect prop. The special effect making interface comprises a material operation area and a parameter information area.
In one embodiment of the present disclosure, the first operation on the special effect making interface may be understood as an operation of adding the special effect prop. Specifically, an adding control for AR picture recognition may be displayed in the special effect making interface, and a corresponding picture recognition control is added to the material operation area in response to a trigger operation on the adding control.
In a specific embodiment, as shown in fig. 2, a special effect making interface 200 is displayed on the electronic device; a special effect information area 21 is displayed on the left side of the special effect making interface 200, and a parameter information area 22 on the right side. Two sub-areas, "special effect information" and "resource management", are displayed in the special effect information area 21, corresponding to the material editing function and the material importing function, respectively. In response to a trigger operation on the special effect information adding control 211 in the area corresponding to the material editing function, a list of producible special effect tools is displayed in the special effect making interface 200 in the form of a primary list; in response to a trigger operation on the AR making control 212 in that list, a list of producible AR special effect items is displayed in the form of a secondary list; and in response to a trigger operation on the AR picture recognition making control 213 in the AR special effect item list, a corresponding picture recognition control 214 is displayed in the "special effect information" sub-area. The target special effect prop can be understood as a prop that has picture recognition capability and adds a corresponding virtual object according to the recognition result.
S102: in response to a second operation on the picture recognition control, obtaining a picture recognition model based on the received sample picture.
Implementing the picture recognition function mainly requires two parts: model training parameters and an AR recognition script. The AR recognition script is hidden from the user and is not visible in the special effect making interface 200. The model training parameters are exposed to the user: a parameter setting area is added to the special effect making interface 200, and in this area the user can input a sample picture. The sample picture is the picture, uploaded by the user, that the user wants to recognize. For example, if recognizing a lantern image in a target picture should trigger a target special effect prop that adds a virtual lantern to the target picture, then the sample picture is a picture containing a lantern image.
In an embodiment of the present disclosure, the second operation on the picture recognition control may be a trigger operation on the picture recognition control: a parameter setting area is displayed, and a sample picture corresponding to an input operation is acquired in response to the user's operation on the parameter setting area.
In an embodiment of the present disclosure, obtaining the picture recognition model based on the received sample picture may include: training a preset neural network based on the received sample picture to obtain the picture recognition model.
In one embodiment of the present disclosure, the local side (the electronic device) trains a preset neural network based on the received sample picture to obtain the picture recognition model.
In one embodiment of the present disclosure, the local side (the electronic device) sends the received sample picture and a model training request to a server; after receiving them, the server trains a preset neural network based on the sample picture to obtain the picture recognition model, and then sends the trained model back to the electronic device.
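The client-server exchange just described might look like the following sketch. The request fields, the response format, and the `fake_server` stub are all assumptions for illustration, not a documented API; a real client would replace `send_fn` with an actual network call:

```python
# Illustrative sketch of the client-side model training request described above.
# Payload fields and response shape are assumptions, not the real protocol.

def request_model_training(sample_picture_bytes, send_fn):
    """Send the sample picture plus a training request; return the trained model."""
    request = {
        "type": "model_training_request",
        "sample_picture": sample_picture_bytes,
    }
    response = send_fn(request)   # e.g. an HTTP POST in a real client
    return response["picture_recognition_model"]

def fake_server(request):
    """Stub server: 'trains' a model on the sample picture and returns it."""
    assert request["type"] == "model_training_request"
    return {"picture_recognition_model": {"trained_on": request["sample_picture"]}}

model = request_model_training(b"sample-picture-bytes", fake_server)
```

Passing the transport in as `send_fn` keeps the same request-building code usable for both the local-training and server-training embodiments mentioned above.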
In one embodiment of the present disclosure, during model training, a model training information prompt area is displayed in the special effect making interface. The prompt area contains prompt information, which includes text information and a model training progress bar: the text information prompts the user that model training is in progress, and the progress bar shows the training progress. The prompt area also includes a training cancellation control, and the ongoing model training is cancelled in response to a trigger operation on this control.
In an embodiment of the present disclosure, after the picture recognition model is obtained locally, the model training information prompt area disappears, prompting the user that model training is complete.
In one embodiment of the present disclosure, generating the picture recognition model based on the received sample picture in response to the second operation on the picture recognition control generally comprises steps A to C.
Step A: in response to a trigger operation on the picture recognition control, displaying a parameter setting area in the special effect making interface.
In the embodiment of the present disclosure, as shown in fig. 3, the trigger operation may be a click operation. The parameter setting area 215 is a sub-area of the parameter information area 22.
In one embodiment of the present disclosure, in response to a trigger operation on the picture recognition control, multiple items of parameter information corresponding to the picture recognition control are displayed in the parameter information area 22. For example, the parameter information area 22 includes a picture recognition control basic parameter area, a virtual object change information area, a model rendering parameter area, and the parameter setting area 215. The basic parameter area mainly includes the name, ID and display level of the picture recognition control, whether it is synchronized to the next level, whether the element is enabled, and other related information. The virtual object change information area mainly includes the position coordinates, zoom coordinates and rotation coordinates of the virtual object. The model rendering parameter area mainly includes the model name, selection and setting of model materials, shadow casting, automatic sorting, view clipping, rendering order settings, and the like.
Further, a sample picture input control and the model training control 216 are mainly included in the parameter setting area.
Step B: in response to a fourth operation on the parameter setting area, acquiring a sample picture corresponding to the fourth operation.
In the embodiment of the present disclosure, the parameter setting area 215 mainly includes a sample picture selection control 219 and/or a sample picture adding control 220.
In one embodiment of the present disclosure, in response to a trigger operation on the sample picture selection control 219, historical sample pictures are displayed; in response to a selection operation on the historical sample pictures, the sample picture corresponding to the selection operation is determined. If no historical sample picture exists, nothing is displayed, which prompts the user that there is no historical sample picture.
In one embodiment of the present disclosure, the sample picture is obtained by dragging it onto the sample picture adding control 220. Alternatively, in response to a trigger operation on the sample picture adding control 220, locally stored pictures are displayed, and in response to a trigger operation on a locally stored picture, the sample picture corresponding to that trigger operation is acquired.
Step C: in response to an operation on the model training control in the parameter setting area, obtaining a picture recognition model based on the sample picture.
In the embodiment of the present disclosure, as shown in fig. 3, after the sample picture is obtained according to the above steps, model training is performed based on the model training parameters in response to a trigger operation on the model training control 216 in the parameter setting area, so as to obtain the picture recognition model.
In the embodiment of the present disclosure, the model training based on the model training parameters may be performed locally (on the electronic device), or the model training parameters may be sent to the server, so that the server performs the training according to the model training parameters, obtains the picture recognition model, and sends it back to the local side.
In the embodiment of the present disclosure, a step of obtaining the sample picture is provided, and model training is performed after the sample picture is obtained; the training can be completed with only a simple click operation by the user, which is simple, convenient and easy to operate.
In an embodiment of the present disclosure, after the sample picture corresponding to the fourth operation is obtained, the method further includes: obtaining model parameters, wherein the model parameters include a recognition type and/or a recognition mode. Obtaining the picture recognition model based on the sample picture then includes: obtaining the picture recognition model based on the sample picture and the model parameters.
In the embodiment of the present disclosure, the model parameters may be obtained first and then the sample picture, or the sample picture first and then the model parameters. The acquisition order of the model parameters and the sample picture is determined by the user's operations and is not specifically limited in the embodiments of the present disclosure.
It should be noted that when the user does not input the model parameters, default model parameters may be used, so as to facilitate subsequent model training.
The model training parameters mainly comprise a recognition type and/or a recognition mode. The recognition type includes plane and cylinder, corresponding to whether the recognized object is planar or cylindrical. The recognition mode includes fast, high quality and moderate quality, corresponding to the accuracy with which the picture recognition model recognizes the object.
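The recognition types and modes above, together with the default fallback when the user supplies no parameters, could be modelled as in this sketch. The enumeration names and the particular default values are assumptions chosen for illustration; the passage does not specify which values the tool actually defaults to:

```python
# Illustrative model of the training parameters; names and defaults are hypothetical.
from enum import Enum

class RecognitionType(Enum):
    PLANE = "plane"        # the recognized object is planar
    CYLINDER = "cylinder"  # the recognized object is cylindrical

class RecognitionMode(Enum):
    FAST = "fast"
    HIGH_QUALITY = "high_quality"
    MODERATE_QUALITY = "moderate_quality"

# Assumed defaults used when the user inputs no model parameters.
DEFAULTS = {"type": RecognitionType.PLANE, "mode": RecognitionMode.MODERATE_QUALITY}

def resolve_model_parameters(user_params=None):
    """Merge user-supplied parameters over the defaults."""
    params = dict(DEFAULTS)
    params.update(user_params or {})
    return params

# User picks only a mode; the type falls back to the default.
params = resolve_model_parameters({"mode": RecognitionMode.FAST})
```

Merging over a defaults dictionary is one simple way to realise the "default model parameters" behaviour without forcing the user to fill in every field.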
In an embodiment of the present disclosure, the process of obtaining the recognition type includes: in response to a trigger operation on the recognition type control 217, displaying a primary menu list corresponding to the recognition type, where the menu list includes a plurality of preset recognition types, for example plane and cylinder; and in response to a selection operation in the primary menu list, determining the recognition type corresponding to the selection operation. Specifically, the recognition type mainly indicates the type of picture that the target special effect prop can recognize.
In the embodiment of the present disclosure, after the picture recognition model is obtained based on the recognition type and the sample picture, the plane or cylinder corresponding to the object the user wants to recognize is displayed in the special effect preview area of the special effect making interface, so that the user can conveniently perform a subsequent placement operation on the target virtual object.
In an embodiment of the present disclosure, the process of obtaining the recognition mode includes: in response to a trigger operation on the recognition mode control 218, displaying a primary menu list corresponding to the recognition mode, where the menu list includes a plurality of preset recognition modes, for example fast, high quality and moderate quality; and in response to a selection operation in the primary menu list, determining the recognition mode corresponding to the selection operation.
In the embodiment of the present disclosure, the recognition mode corresponds to the accuracy with which the picture recognition model recognizes a picture, and multiple recognition modes are provided for the user to choose from, so as to meet the needs of different users.
S103: in response to a third operation on the special effect making interface, acquiring a target virtual object.
The target virtual object can be understood as the virtual object that the target special effect prop adds to the picture after the picture is recognized. Alternatively, the target virtual object may be an AR virtual object.
In one embodiment of the present disclosure, acquiring a target virtual object in response to a third operation on the special effect making interface includes: in response to selection or import of special effect material in the "resource management" sub-area, determining the selected or imported special effect material as the target virtual object. After the target virtual object is acquired, as shown in fig. 2, the target virtual object 224 is displayed in the "special effect information" sub-area.
In one embodiment of the present disclosure, in response to a trigger operation on the special effect information adding control 211 in the area corresponding to the material editing function, a plurality of virtual objects are displayed in the special effect making interface 200 in the form of a primary list; in response to a trigger operation on a virtual object, the virtual object corresponding to the trigger operation is determined as the target virtual object. The manner of adding the target virtual object is not specifically limited in the embodiments of the present disclosure.
After the target virtual object is acquired, an association relationship between the target virtual object and the picture recognition control may be established; the association relationship may be that the target virtual object and the picture recognition control are located in the same rendering layer.
In an embodiment of the present disclosure, the target virtual object may be a two-dimensional virtual object or an AR virtual object. The target virtual object may be an object of the same category as the object contained in the sample picture; further, it may be an AR representation of that object. For example, when the sample picture contains a lantern image, a three-dimensional lantern may be selected as the target virtual object.
In one implementation of the present disclosure, one picture recognition control may be associated with multiple target virtual objects. In that case, after the target picture or target video is recognized, the target special effect prop controls the multiple target virtual objects to be displayed on the target picture or target video.
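The association described above, in which the picture recognition control and its target virtual objects share one rendering layer and a single control may be linked to several objects, can be sketched as follows. `RenderLayer` and `associate` are illustrative names, not the tool's actual API:

```python
# Sketch of the "same rendering layer" association; names are hypothetical.

class RenderLayer:
    """A rendering layer that groups a control with its virtual objects."""

    def __init__(self, name):
        self.name = name
        self.members = []

def associate(control, virtual_objects, layer):
    """Place the picture recognition control and its virtual objects in one layer."""
    layer.members.append(control)
    layer.members.extend(virtual_objects)
    return layer

# One control associated with several target virtual objects.
layer = associate(
    "picture_recognition_control",
    ["3d_lantern", "lantern_glow"],
    RenderLayer("ar_layer"),
)
```

Grouping by layer makes the one-to-many case natural: every object in the layer is shown together once the control's recognition fires.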
And S104, generating a target special effect prop based on the image recognition model and the target virtual object, wherein the target special effect prop is used for recognizing a target image or a target video and controlling the target virtual object to be displayed on the target image or the target video.
In the embodiment of the disclosure, the image recognition model and the target virtual object are encapsulated to obtain the target special effect prop.
In one embodiment of the present disclosure, the target virtual object is an AR virtual object, and the picture recognition model is an AR picture recognition model. And further, generating an AR image recognition special effect prop based on the AR image recognition model and the AR virtual object.
Furthermore, during use of the AR picture recognition special effect prop, the captured real-world image is recognized, and after the target object is recognized, the AR virtual object included in the AR picture recognition special effect prop is obtained and superimposed on the real-world view. For example: during use of the AR picture recognition special effect prop, a captured real-world image is recognized, and after a lantern image is recognized, a three-dimensional virtual lantern is superimposed on the real-world view.
In the embodiment of the disclosure, during use of the AR picture recognition special effect prop, after the target object is recognized, the AR virtual object is superimposed on the real-world view, enabling special effects with richer functions to be produced.
In one embodiment of the disclosure, a picture recognition model in a target special effect prop is used for recognizing a target picture or video uploaded by a user to obtain an algorithm result; and then adjusting the display parameters of the target virtual object according to the algorithm result, so that the target virtual object is displayed on the target picture. The display parameters of the target virtual object include a display position, a rotation angle, and a display size of the target virtual object.
The display parameters of the target virtual object are adjusted and determined according to the algorithm result: the numerical values included in the algorithm result are assigned to the model matrix corresponding to the target virtual object according to a preset assignment rule, so that the target virtual object is displayed on the target picture or the target video.
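As a minimal sketch of such a preset assignment rule, the numerical values of an algorithm result can be written into a 4x4 model matrix as follows. The result fields `x`, `y`, `angle`, and `scale` are illustrative assumptions; the actual output format of the recognition model is not specified in this document.

```python
import math

def build_model_matrix(result):
    """Build a row-major 4x4 model matrix from a recognition result.

    `result` is a hypothetical algorithm output with keys 'x', 'y',
    'angle' (radians, rotation about the z axis) and 'scale'; the real
    prop's result fields are not specified in this document.
    """
    c, s = math.cos(result["angle"]), math.sin(result["angle"])
    k = result["scale"]
    # Scale and rotation fill the upper-left block; translation goes in
    # the last column, i.e. M = T * R * S applied to column vectors.
    return [
        [k * c, -k * s, 0.0, result["x"]],
        [k * s,  k * c, 0.0, result["y"]],
        [0.0,    0.0,   k,   0.0],
        [0.0,    0.0,   0.0, 1.0],
    ]

def transform(matrix, point):
    """Apply the model matrix to a 3D point (homogeneous w = 1)."""
    v = (*point, 1.0)
    return tuple(sum(row[i] * v[i] for i in range(4)) for row in matrix[:3])

# A result with no rotation or scaling just moves the object's origin to
# the recognized position (2, 3).
m = build_model_matrix({"x": 2.0, "y": 3.0, "angle": 0.0, "scale": 1.0})
print(transform(m, (0.0, 0.0, 0.0)))  # (2.0, 3.0, 0.0)
```

Because the assignment only rewrites matrix entries, applying a new result each time the picture is recognized repositions the virtual object without touching any other scene state.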
In one embodiment of the disclosure, during use of the target special effect prop, when the target picture uploaded by the user is not recognized, the model parameters corresponding to the target virtual object remain unchanged. When the target picture uploaded by the user is recognized, an algorithm result corresponding to the target picture is obtained, and the numerical values in that algorithm result are assigned to the model matrix corresponding to the target virtual object according to a preset assignment rule, so as to change the display position and display form of the target virtual object on the target picture. Meanwhile, because the parameters remain unchanged when the uploaded picture is not recognized, this characteristic can be combined with existing visual creation functions to produce special effects with richer functions.
In one embodiment of the present disclosure, when the target special effect prop recognizes an uploaded video, recognition is performed on each video frame to obtain an algorithm result; the display parameters of the target virtual object are then adjusted according to the algorithm result, so that the target virtual object is displayed on the video frame. Because the target object included in each video frame may change position at any time, the target special effect prop recognizes objects at different positions and obtains different algorithm results, thereby adjusting the display position of the target virtual object on each video frame, so that the target virtual object moves along with the object and is displayed as a dynamic effect.
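The per-frame tracking loop described above can be sketched as follows. `recognize` and `update_display` are hypothetical stand-ins for the prop's recognition model and its display-parameter update; the document does not name the real APIs.

```python
def track_object(frames, recognize, update_display):
    """Run recognition on each video frame and move the virtual object
    with the target.

    `recognize` returns an algorithm result for a frame, or None when
    nothing is recognized; `update_display` applies a result to the
    target virtual object's display parameters. Both are illustrative
    stand-ins, not the tool's actual interfaces.
    """
    last_result = None
    for frame in frames:
        result = recognize(frame)
        if result is not None:
            # The target was found (possibly at a new position):
            # refresh the display position/rotation/size.
            update_display(result)
            last_result = result
        # On a miss, the display parameters stay unchanged.
    return last_result

# Toy usage: frames are recognition results themselves (None = miss).
positions = []
frames = [{"x": 1}, None, {"x": 5}]
final = track_object(frames, lambda f: f, lambda r: positions.append(r["x"]))
print(positions)  # [1, 5]
```

Only recognized frames trigger an update, which is exactly why the object appears to follow the target while misses leave it in place.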
In an embodiment of the present disclosure, the picture identification control further includes a picture identification script, where the picture identification script is configured to control the picture identification model to identify a target picture uploaded by a user, obtain an algorithm result, and adjust a model matrix parameter of the target virtual object according to the algorithm result, so that the target virtual object is displayed on the target picture.
In the embodiment of the present disclosure, the above manner of assigning the model parameter may be driven by the picture recognition script.
The embodiment of the disclosure provides a special effect making method, which includes: responding to a first operation on a special effect making interface, and displaying a picture recognition control in the special effect making interface; responding to a second operation on the picture recognition control, and obtaining a picture recognition model based on the received sample picture; responding to a third operation on the special effect making interface, and acquiring a target virtual object; and generating a target special effect prop based on the picture recognition model and the target virtual object, wherein the target special effect prop is used for recognizing a target picture or a target video and controlling the target virtual object to be displayed on the target picture or the target video. According to the embodiment of the disclosure, the AR picture recognition special effect can be made simply and conveniently through the open special effect prop making tool, meeting the requirements of different users.
On the basis of the above embodiment, the special effect making method is optimized; the optimized special effect making method mainly includes steps S201 to S205.
S201, responding to a first operation on a special effect making interface, and displaying a picture recognition control in the special effect making interface;
It should be noted that step S201 provided in the embodiment of the present disclosure has the same execution flow as step S101 in the foregoing embodiment; for details, refer to the description in the foregoing embodiment, which is not repeated here.
S202, determining a virtual camera associated with the picture identification control;
the virtual camera associated with the picture recognition control can be understood as a virtual camera dedicated to AR picture recognition. In order to realize the AR picture recognition function, the specific parameter setting of the virtual camera may be different from that of a common virtual camera. In this embodiment, there is no specific limitation, and the specific parameter setting manner of the virtual camera is described above.
In an embodiment of the present disclosure, as shown in fig. 2, in response to a triggering operation on an AR picture recognition making control 213 in the AR special effect item list, an AR picture recognition control 214 is obtained, and while the AR picture recognition control 214 is displayed in the "special effect information" sub-area, a virtual camera associated with the picture recognition control is obtained, and the virtual camera control 222 is displayed in the "special effect information" sub-area.
In one embodiment of the present disclosure, in response to a trigger operation on the virtual camera control 222, a parameter setting area of the virtual camera is displayed in the parameter information area 22, and the user can set relevant parameters of the virtual camera in the parameter setting area of the virtual camera.
Relevant parameters of the virtual camera can be set according to specific functions of the target special effect prop by the user, and details are not repeated in the embodiment of the disclosure.
In an embodiment of the present disclosure, a rendering layer of the virtual camera is the same as a layer where the picture recognition control is located.
And the layer where the picture identification control is located is the same as the layer where the target virtual object and/or the example virtual object are located.
In the embodiment of the disclosure, the layer where the picture recognition control is located and the rendering layer of each virtual camera are obtained; a virtual camera whose rendering layer is the same as the layer of the picture recognition control is determined as the virtual camera associated with the picture recognition control. If the rendering layer of every virtual camera differs from the layer where the picture recognition control is located, a new virtual camera with the same rendering layer as that layer is created, and the newly created virtual camera is determined as the virtual camera associated with the picture recognition control.
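The camera-selection logic above can be sketched as follows; `cameras` and `new_camera` are illustrative stand-ins for the editor's real scene objects, not its actual API.

```python
def camera_for_control(control_layer, cameras, new_camera):
    """Pick the virtual camera whose render layer matches the picture
    recognition control's layer, or create one on that layer if none
    exists.

    `cameras` maps camera name -> render layer; `new_camera` builds a
    fresh camera on a given layer. Both are hypothetical stand-ins.
    """
    for name, layer in cameras.items():
        if layer == control_layer:
            return name  # reuse the camera already on the control's layer
    return new_camera(control_layer)  # otherwise create a matching camera

created = []
def make_camera(layer):
    name = f"ar_camera_layer{layer}"
    created.append(name)
    return name

# No existing camera shares layer 3, so a new one is created on it.
pick = camera_for_control(3, {"main": 0, "ui": 1}, make_camera)
print(pick)  # ar_camera_layer3
```

When a camera on the matching layer already exists (e.g. layer 1 above), it is reused and no new camera is created.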
In the embodiment of the disclosure, the virtual camera and the picture recognition control are constrained to the same rendering layer, realizing same-layer rendering and improving the rendering effect of the target virtual object during use of the target special effect prop.
In one embodiment of the present disclosure, the method further comprises: responding to a first operation on a special effect making interface, acquiring an example virtual object, and displaying the example virtual object in the special effect making interface; the example virtual object is used to determine a size of the target virtual object.
In the embodiment of the present disclosure, as shown in fig. 2, in response to a triggering operation on an AR picture recognition special effect making control 213 in the AR special effect item list, an AR picture recognition control 214 is obtained, and while the picture recognition control 214 is displayed in the "special effect information" sub-area, an example virtual object 223 corresponding to the picture recognition control is obtained, and the example virtual object 223 is displayed in the "special effect information" sub-area.
In the embodiment of the present disclosure, the example virtual object is mainly used for prompting the user of the display position and the display form of the virtual object in the process of manufacturing the target special effect prop. The example virtual object is also used for prompting a user to adjust the size of the target virtual object according to the size in the example virtual object in the manufacturing process of the target special effect prop, so that the phenomenon that the size of the target virtual object is too large or too small is avoided.
It should be noted that the example virtual object 223 only prompts the user to make an effect in the process of making the special effect item, and before the target special effect item is generated, the example virtual object 223 may be deleted, that is, the target special effect item may not include the example virtual object 223.
S203, responding to a second operation on the picture identification control, and obtaining a picture identification model based on the received sample picture.
S204, responding to a third operation on the special effect making interface, and acquiring a target virtual object.
It should be noted that steps S203 to S204 provided in the embodiment of the present disclosure have the same execution flow as steps S102 to S103 in the foregoing embodiment; for details, refer to the description in the foregoing embodiment, which is not repeated here.
S205, generating a target special effect prop based on the picture recognition model, the virtual camera and the target virtual object.
In an embodiment of the present disclosure, the target special effect prop being used for recognizing a target picture or a target video and controlling the target virtual object to be displayed on the target picture or the target video includes: the picture recognition model included in the target special effect prop is used for recognizing the target picture or the target video to obtain an algorithm result, and the algorithm result is used for adjusting the projection matrix of the virtual camera and the model matrix of the target virtual object, so that the target virtual object is displayed on the target picture or the target video.
In the embodiment of the disclosure, the picture recognition script realizes the AR picture recognition effect according to the trained picture recognition model. Specifically, the picture recognition model recognizes the target picture uploaded by the user to obtain an algorithm result, and the picture recognition script modifies the projection matrix of the virtual camera and the model matrix of the target virtual object in real time according to the algorithm result, which in effect changes the display position of the target virtual object under the picture recognition control. The projection matrix of the virtual camera and the model matrix of the target virtual object determine the position, angle, and size of the target virtual object. When the uploaded picture is not recognized, the projection matrix of the virtual camera and the model matrix of the target virtual object remain unchanged; when the uploaded picture is recognized, the two matrices are modified according to the algorithm result, so that the target virtual object appears near the picture. Meanwhile, because these parameters remain unchanged when the picture is not recognized, this characteristic can be combined with existing visual creation functions to produce special effects with richer functions, for example: after a specific picture is recognized, a group of AR object animations is played.
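The script's update rule, modify both matrices on a recognition hit and leave them untouched on a miss, can be sketched as follows. The class and attribute names are assumptions for illustration; the actual script API is not exposed in this document.

```python
class ARDisplayState:
    """Minimal sketch of the picture recognition script's update rule:
    the projection and model matrices change only when the picture is
    recognized. Attribute names are illustrative, not the real API."""

    def __init__(self, projection, model):
        self.projection = projection  # virtual camera's projection matrix
        self.model = model            # target virtual object's model matrix

    def on_frame(self, algorithm_result):
        if algorithm_result is None:
            return  # not recognized: leave both matrices unchanged
        # Recognized: derive new matrices from the result (stand-in rule).
        self.projection = algorithm_result["projection"]
        self.model = algorithm_result["model"]

state = ARDisplayState(projection="P0", model="M0")
state.on_frame(None)                                 # miss: no change
state.on_frame({"projection": "P1", "model": "M1"})  # hit: both update
print(state.projection, state.model)  # P1 M1
```

The "unchanged on a miss" branch is also what lets recognition events be chained to other creation features, such as triggering an animation only once a hit occurs.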
In the embodiment of the disclosure, the adjustment of the display effect of the target virtual object is realized by adjusting the projection matrix and the model matrix according to the algorithm result, the realization process is simple to edit, and the user does not need to perform complicated operation.
In the embodiment of the present disclosure, the process of modifying the projection matrix of the virtual camera and the model matrix of the target virtual object according to the algorithm result is briefly described below.
In the rendering pipeline, an object that is ultimately rendered on the screen typically undergoes three matrix transformations: a Model matrix, a View (observation) matrix, and a Projection matrix, collectively referred to as the MVP matrices. The model matrix corresponds to the position of the object, the observation matrix corresponds to the position of the camera, and the projection matrix corresponds to the observation range of the camera, where the observation range refers to the nearest plane, the farthest plane, the viewing angle, and the like that the camera can observe. The MVP matrices determine the position and angle at which the object captured by the camera finally appears in the picture.
In general, the vertex coordinates of an object are local coordinates in a model space centered on the object itself; the model matrix can scale, rotate, and translate these local coordinates into world coordinates. The observation matrix then converts the world coordinates into a visual space centered on the camera. Because the camera has a limited observation range, the visual space needs to be clipped: the projection matrix clips the object's vertex coordinates in the visual space, and finally the x, y, and z values of the vertex coordinates are transformed by projection into the range [-1, 1], giving the object's coordinates as finally displayed on the screen.
The model matrix and the observation matrix can be combined into one matrix: translating the object leftward and translating the camera rightward both move the object's final on-screen position leftward, so the effect of the observation matrix can be realized by modifying the model matrix. Therefore, the position and state of the object on the screen can be determined by the model matrix and the projection matrix alone. Corresponding to the operations in the script, the projection matrix of the virtual camera and the model matrix of the target virtual object are modified according to the algorithm result, so that the target virtual object appears on the recognized picture on the screen.
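The matrix chain above can be checked with a small worked example. The orthographic projection and translation helpers below are generic illustrations of the MVP pipeline, not the tool's API.

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major) by a 4-component column vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def mat_mul(a, b):
    """Multiply two 4x4 matrices."""
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def translate(tx, ty, tz):
    """Model (or folded model-view) matrix translating by (tx, ty, tz)."""
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def ortho(half_extent):
    """Toy orthographic projection mapping [-half_extent, half_extent]
    to [-1, 1] on each axis."""
    s = 1.0 / half_extent
    return [[s, 0, 0, 0], [0, s, 0, 0], [0, 0, s, 0], [0, 0, 0, 1]]

# The view matrix is folded into the model matrix: moving the camera
# right is equivalent to moving the object left, so one translation
# stands in for both.
model = translate(1, 0, 0)      # a vertex at local x = 1 lands at world x = 2
mvp = mat_mul(ortho(4), model)

# Local vertex (1, 0, 0) -> world x = 2 -> NDC x = 2/4 = 0.5, inside [-1, 1].
ndc = mat_vec(mvp, [1, 0, 0, 1])
print(ndc[0])  # 0.5
```

Changing either `translate` (the model matrix) or `ortho` (the projection matrix) moves the final normalized device coordinate, which is exactly the lever the script uses to place the virtual object on screen.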
In the embodiment of the present disclosure, a virtual camera associated with the picture recognition control is added, and the projection matrix of the virtual camera is adjusted in combination with the model matrix of the target virtual object, so as to better control the display effect of the target virtual object.
Fig. 5 is a schematic structural diagram of a special effect making device in the embodiment of the present disclosure, where the embodiment is applicable to a case where a special effect item is made in an open special effect item making platform, the special effect making device may be implemented in a software and/or hardware manner, and the special effect making device may be configured in an electronic device.
As shown in fig. 5, a special effect making apparatus 50 provided in the embodiment of the present disclosure mainly includes: a picture recognition control display module 51, a picture recognition model obtaining module 52, a target virtual object obtaining module 53, and a target special effect prop generating module 54.
The picture recognition control display module is used for responding to a first operation on a special effect making interface and displaying a picture recognition control in the special effect making interface; the picture recognition model obtaining module is used for responding to a second operation on the picture recognition control and obtaining a picture recognition model based on the received sample picture; the target virtual object obtaining module is used for responding to a third operation on the special effect making interface and acquiring a target virtual object; and the target special effect prop generating module is used for generating a target special effect prop based on the picture recognition model and the target virtual object, wherein the target special effect prop is used for recognizing a target picture or a target video and controlling the target virtual object to be displayed on the target picture or the target video.
In one embodiment of the present disclosure, the image recognition model obtaining module 52 includes: the parameter setting area display unit is used for responding to the triggering operation of the picture identification control and displaying a parameter setting area in the special effect manufacturing interface; the sample picture unit is used for responding to a fourth operation on the parameter setting area and acquiring a sample picture corresponding to the fourth operation; and the picture recognition model obtaining unit is used for responding to the operation of a model training control in the parameter setting area and obtaining a picture recognition model based on the sample picture.
In an embodiment of the present disclosure, the image recognition model obtaining module 52 further includes: a model parameter obtaining unit, configured to obtain a model parameter after obtaining the sample picture corresponding to the fourth operation, where the model parameter includes an identification type and/or an identification mode; and the picture identification model obtaining unit is also used for obtaining a picture identification model based on the sample picture and the model parameters.
In one embodiment of the present disclosure, the apparatus further includes: a virtual camera determination module for determining a virtual camera associated with the picture recognition control; and the target special effect prop generating module 54 is further configured to generate a target special effect prop based on the picture recognition model, the virtual camera, and the target virtual object.
In an embodiment of the present disclosure, the target special effect prop is configured to identify a target picture or a target video, and control the target virtual object to be displayed on the target picture or the target video, and includes: the image recognition model included in the target special effect prop is used for recognizing a target image or a target video to obtain an algorithm result, and the algorithm result is used for adjusting a projection matrix of the virtual camera and a model matrix of the target virtual object so that the target virtual object is displayed on the target image or the target video.
In an embodiment of the present disclosure, a rendering layer of the virtual camera is the same as a layer where the picture recognition control is located.
In one embodiment of the present disclosure, the apparatus further comprises: an example virtual object display module, configured to respond to a first operation on the special effect making interface, acquire an example virtual object, and display the example virtual object in the special effect making interface; the example virtual object is used to determine a size of the target virtual object.
The special effect making device provided by the embodiment of the disclosure can execute the steps executed in the special effect making method provided by the embodiment of the disclosure, and the execution steps and the beneficial effects are not repeated herein.
Fig. 6 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure. Referring now specifically to fig. 6, a schematic diagram of an electronic device 600 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device 600 in the disclosed embodiments may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), wearable terminal devices, and the like, and fixed terminals such as digital TVs, desktop computers, smart home devices, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 600 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes to implement the special effects making method of the embodiments as described in this disclosure according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage device 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the terminal apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, or the like; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the terminal device 600 to perform wireless or wired communication with other devices to exchange data. While fig. 6 illustrates a terminal apparatus 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be alternatively implemented or provided.
In particular, the processes described above with reference to the flow diagrams may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart, thereby implementing the special effects production method as described above. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or installed from the storage means 608, or installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to: responding to a first operation on a special effect making interface, and displaying a picture recognition control in the special effect making interface; responding to a second operation on the picture identification control, and obtaining a picture identification model based on the received sample picture; responding to a third operation on the special effect making interface, and acquiring a target virtual object; and generating a target special effect prop based on the image identification model and the target virtual object, wherein the target special effect prop is used for identifying a target image or a target video and controlling the target virtual object to be displayed on the target image or the target video.
Optionally, when the one or more programs are executed by the terminal device, the terminal device may further perform other steps described in the above embodiments.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to object oriented programming languages such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), application Specific Integrated Circuits (ASICs), application Specific Standard Products (ASSPs), systems on a chip (SOCs), complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided a special effect making method, including: in response to a first operation on a special effect making interface, displaying a picture recognition control in the special effect making interface; in response to a second operation on the picture recognition control, obtaining a picture recognition model based on a received sample picture; in response to a third operation on the special effect making interface, acquiring a target virtual object; and generating a target special effect prop based on the picture recognition model and the target virtual object, wherein the target special effect prop is used for recognizing a target picture or a target video and controlling the target virtual object to be displayed on the target picture or the target video.
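The four-step flow described above can be sketched as follows. This is a minimal illustrative model only; every class, method, and file name here is an assumption, not the disclosed implementation:

```python
# Illustrative sketch of the disclosed effect-creation flow.
# All names are hypothetical; the real tool's API is not given in the source.

class EffectEditor:
    """Minimal stand-in for the special effect making interface."""

    def __init__(self):
        self.recognition_control = None
        self.recognition_model = None
        self.target_object = None

    def on_first_operation(self):
        # Step 1: display the picture recognition control in the interface.
        self.recognition_control = "picture_recognition_control"

    def on_second_operation(self, sample_pictures):
        # Step 2: obtain a picture recognition model from the sample pictures
        # (a dict stands in for actual training here).
        self.recognition_model = {"trained_on": list(sample_pictures)}

    def on_third_operation(self, virtual_object):
        # Step 3: acquire the target virtual object (e.g. a 3D asset).
        self.target_object = virtual_object

    def build_prop(self):
        # Step 4: bundle model and object into the target special effect prop,
        # which later recognizes a target picture/video and overlays the object.
        return {"model": self.recognition_model, "object": self.target_object}

editor = EffectEditor()
editor.on_first_operation()
editor.on_second_operation(["sample1.png", "sample2.png"])
editor.on_third_operation("cartoon_cat.glb")
prop = editor.build_prop()
```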
According to one or more embodiments of the present disclosure, there is provided a special effect making method, wherein obtaining a picture recognition model based on the received sample picture in response to the second operation on the picture recognition control includes: in response to a trigger operation on the picture recognition control, displaying a parameter setting area in the special effect making interface; in response to a fourth operation on the parameter setting area, acquiring a sample picture corresponding to the fourth operation; and in response to an operation on a model training control in the parameter setting area, obtaining a picture recognition model based on the sample picture.
According to one or more embodiments of the present disclosure, there is provided a special effect making method, wherein, after the sample picture corresponding to the fourth operation is acquired, the method further includes: acquiring model parameters, wherein the model parameters include a recognition type and/or a recognition mode; and obtaining a picture recognition model based on the sample picture includes: obtaining the picture recognition model based on the sample picture and the model parameters.
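How the optional model parameters might feed into training can be sketched as below. The parameter names, default values, and the stand-in "training" step are all assumptions, since the disclosure states only that a recognition type and/or a recognition mode are obtained:

```python
# Hypothetical sketch: merging user-set model parameters with defaults
# before training. None of these names comes from the source document.

def train_picture_recognition_model(sample_picture, model_params=None):
    # Assumed defaults for the recognition type and recognition mode.
    params = {"recognition_type": "image_marker",
              "recognition_mode": "continuous"}
    if model_params:
        # User-supplied parameters from the parameter setting area override
        # the defaults.
        params.update(model_params)
    # Stand-in for real training: record what would be trained and with what.
    return {"sample": sample_picture, **params}

model = train_picture_recognition_model(
    "poster.png", {"recognition_mode": "single_shot"})
```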
According to one or more embodiments of the present disclosure, there is provided a special effect making method, further including: determining a virtual camera associated with the picture recognition control; and generating the target special effect prop based on the picture recognition model, the virtual camera, and the target virtual object.
According to one or more embodiments of the present disclosure, there is provided a special effect making method, wherein the target special effect prop is used for recognizing a target picture or a target video and controlling the target virtual object to be displayed on the target picture or the target video, including: the picture recognition model included in the target special effect prop is used for recognizing the target picture or the target video to obtain an algorithm result, and the algorithm result is used for adjusting a projection matrix of the virtual camera and a model matrix of the target virtual object, so that the target virtual object is displayed on the target picture or the target video.
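The matrix adjustment described above can be illustrated with a small sketch: the algorithm result supplies a recognized pose (rotation and position in camera space) and an estimated projection; the object's model matrix is rebuilt from that pose and the virtual camera adopts the projection. All names and values here are hypothetical, and 4x4 matrices are plain nested lists:

```python
# Hypothetical sketch of applying an algorithm result to the virtual camera's
# projection matrix and the target object's model matrix. The pose values are
# made up; the source does not specify the algorithm's output format.

def mat_mul(a, b):
    # Multiply two 4x4 row-major matrices.
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    # 4x4 translation matrix (row-major, column-vector convention).
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

IDENTITY = [[float(i == j) for j in range(4)] for i in range(4)]

def apply_algorithm_result(result, virtual_camera, virtual_object):
    # Place the object at the recognized picture's pose in camera space:
    # model matrix = translate to the recognized position, then rotate.
    virtual_object["model_matrix"] = mat_mul(
        translation(*result["position"]), result["rotation"])
    # Adopt the projection estimated by the recognition algorithm so the
    # overlay lines up with the target picture or video frame.
    virtual_camera["projection_matrix"] = result["projection"]

camera, obj = {}, {}
result = {"rotation": IDENTITY,              # placeholder: no rotation
          "position": (0.1, -0.2, 1.5),      # placeholder pose in metres
          "projection": IDENTITY}            # placeholder projection
apply_algorithm_result(result, camera, obj)
```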
According to one or more embodiments of the present disclosure, there is provided a special effect making method, wherein a rendering layer of the virtual camera is the same as the layer in which the picture recognition control is located.
According to one or more embodiments of the present disclosure, there is provided a special effect making method, further including: in response to the first operation on the special effect making interface, acquiring an example virtual object and displaying the example virtual object in the special effect making interface, wherein the example virtual object is used for determining a size of the target virtual object.
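The disclosure states only that the example virtual object is used to determine the target virtual object's size; one plausible reading, sketched here with hypothetical names, is a uniform scale factor derived from the two objects' extents:

```python
# Hypothetical sketch: sizing the target virtual object against the example
# virtual object shown in the special effect making interface.

def fit_to_example(target_size, example_size):
    # Uniform scale factor that makes the target's largest dimension match
    # the example object's largest dimension.
    return max(example_size) / max(target_size)

# Placeholder extents (width, height, depth) in arbitrary scene units.
scale = fit_to_example(target_size=(2.0, 4.0, 2.0),
                       example_size=(1.0, 1.0, 1.0))
```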
According to one or more embodiments of the present disclosure, there is provided a special effect making apparatus, including: a picture recognition control display module, configured to display a picture recognition control in a special effect making interface in response to a first operation on the special effect making interface; a picture recognition model obtaining module, configured to obtain a picture recognition model based on a received sample picture in response to a second operation on the picture recognition control; a target virtual object acquiring module, configured to acquire a target virtual object in response to a third operation on the special effect making interface; and a target special effect prop generating module, configured to generate a target special effect prop based on the picture recognition model and the target virtual object, wherein the target special effect prop is used for recognizing a target picture or a target video and controlling the target virtual object to be displayed on the target picture or the target video.
According to one or more embodiments of the present disclosure, there is provided a special effect making apparatus, wherein the picture recognition model obtaining module includes: a parameter setting area display unit, configured to display a parameter setting area in the special effect making interface in response to a trigger operation on the picture recognition control; a sample picture unit, configured to acquire a sample picture corresponding to a fourth operation in response to the fourth operation on the parameter setting area; and a picture recognition model obtaining unit, configured to obtain a picture recognition model based on the sample picture in response to an operation on a model training control in the parameter setting area.
According to one or more embodiments of the present disclosure, there is provided a special effect making apparatus, wherein the picture recognition model obtaining module further includes: a model parameter acquiring unit, configured to acquire model parameters after the sample picture corresponding to the fourth operation is acquired, wherein the model parameters include a recognition type and/or a recognition mode; and the picture recognition model obtaining unit is further configured to obtain the picture recognition model based on the sample picture and the model parameters.
According to one or more embodiments of the present disclosure, there is provided a special effect making apparatus, further including: a virtual camera determining module, configured to determine a virtual camera associated with the picture recognition control; and the target special effect prop generating module is further configured to generate the target special effect prop based on the picture recognition model, the virtual camera, and the target virtual object.
According to one or more embodiments of the present disclosure, there is provided a special effect making apparatus, wherein the target special effect prop is used for recognizing a target picture or a target video and controlling the target virtual object to be displayed on the target picture or the target video, including: the picture recognition model included in the target special effect prop is used for recognizing the target picture or the target video to obtain an algorithm result, and the algorithm result is used for adjusting a projection matrix of the virtual camera and a model matrix of the target virtual object, so that the target virtual object is displayed on the target picture or the target video.
According to one or more embodiments of the present disclosure, there is provided a special effect making apparatus, wherein a rendering layer of the virtual camera is the same as the layer in which the picture recognition control is located.
According to one or more embodiments of the present disclosure, there is provided a special effect making apparatus, wherein the apparatus further includes: an example virtual object display module, configured to acquire an example virtual object and display the example virtual object in the special effect making interface in response to the first operation on the special effect making interface, wherein the example virtual object is used for determining a size of the target virtual object.
In accordance with one or more embodiments of the present disclosure, there is provided an electronic device including:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any of the special effect making methods provided by the present disclosure.
According to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the special effect making method according to any one of the embodiments provided by the present disclosure.
Embodiments of the present disclosure further provide a computer program product, including a computer program or instructions which, when executed by a processor, implement the special effect making method described above.
The foregoing description is merely an explanation of the preferred embodiments of the present disclosure and the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to the particular combination of features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (11)

1. A special effect making method is characterized by comprising the following steps:
responding to a first operation on a special effect making interface, and displaying a picture recognition control in the special effect making interface;
responding to a second operation on the picture recognition control, and obtaining a picture recognition model based on the received sample picture;
responding to a third operation on the special effect making interface, and acquiring a target virtual object;
and generating a target special effect prop based on the picture recognition model and the target virtual object, wherein the target special effect prop is used for recognizing a target picture or a target video and controlling the target virtual object to be displayed on the target picture or the target video.
2. The method of claim 1, wherein obtaining a picture recognition model based on the received sample picture in response to the second operation on the picture recognition control comprises:
responding to the triggering operation of the picture recognition control, and displaying a parameter setting area in the special effect making interface;
responding to a fourth operation on the parameter setting area, and acquiring a sample picture corresponding to the fourth operation;
and responding to the operation of a model training control in the parameter setting area, and obtaining a picture recognition model based on the sample picture.
3. The method according to claim 2, wherein after acquiring the sample picture corresponding to the fourth operation, the method further comprises: acquiring model parameters, wherein the model parameters comprise a recognition type and/or a recognition mode; and obtaining a picture recognition model based on the sample picture comprises:
and obtaining a picture identification model based on the sample picture and the model parameters.
4. The method of claim 1, further comprising:
determining a virtual camera associated with the picture recognition control;
generating a target special effect prop based on the picture recognition model, the virtual camera and the target virtual object.
5. The method according to claim 4, wherein the target special effect prop is used for recognizing a target picture or a target video and controlling the target virtual object to be displayed on the target picture or the target video, comprising:
the picture recognition model included in the target special effect prop is used for recognizing the target picture or the target video to obtain an algorithm result, and the algorithm result is used for adjusting a projection matrix of the virtual camera and a model matrix of the target virtual object, so that the target virtual object is displayed on the target picture or the target video.
6. The method of claim 4, wherein a rendering layer of the virtual camera is the same as the layer in which the picture recognition control is located.
7. The method of claim 1, further comprising:
responding to a first operation on the special effect making interface, acquiring an example virtual object, and displaying the example virtual object in the special effect making interface; wherein the example virtual object is used to determine a size of the target virtual object.
8. A special effect making apparatus, comprising:
a picture recognition control display module, configured to display a picture recognition control in a special effect making interface in response to a first operation on the special effect making interface;
a picture recognition model obtaining module, configured to obtain a picture recognition model based on a received sample picture in response to a second operation on the picture recognition control;
a target virtual object acquiring module, configured to acquire a target virtual object in response to a third operation on the special effect making interface;
and a target special effect prop generating module, configured to generate a target special effect prop based on the picture recognition model and the target virtual object, wherein the target special effect prop is used for recognizing a target picture or a target video and controlling the target virtual object to be displayed on the target picture or the target video.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
11. A computer program product comprising a computer program or instructions which, when executed by a processor, implement the method of any one of claims 1 to 7.
CN202210793075.4A 2022-07-05 2022-07-05 Special effect making method, device, equipment, storage medium and program product Pending CN115187759A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210793075.4A CN115187759A (en) 2022-07-05 2022-07-05 Special effect making method, device, equipment, storage medium and program product


Publications (1)

Publication Number Publication Date
CN115187759A (en) 2022-10-14

Family

ID=83516857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210793075.4A Pending CN115187759A (en) 2022-07-05 2022-07-05 Special effect making method, device, equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN115187759A (en)

Similar Documents

Publication Publication Date Title
CN112929582A (en) Special effect display method, device, equipment and medium
CN111970571B (en) Video production method, device, equipment and storage medium
CN113806306B (en) Media file processing method, device, equipment, readable storage medium and product
CA3114601A1 (en) A cloud-based system and method for creating a virtual tour
CN111414225A (en) Three-dimensional model remote display method, first terminal, electronic device and storage medium
CN114679628B (en) Bullet screen adding method and device, electronic equipment and storage medium
CN114363686B (en) Method, device, equipment and medium for publishing multimedia content
CN116934577A (en) Method, device, equipment and medium for generating style image
CN111818265B (en) Interaction method and device based on augmented reality model, electronic equipment and medium
CN111833459B (en) Image processing method and device, electronic equipment and storage medium
CN109636917B (en) Three-dimensional model generation method, device and hardware device
CN115830224A (en) Multimedia data editing method and device, electronic equipment and storage medium
CN115619904A (en) Image processing method, device and equipment
CN116527993A (en) Video processing method, apparatus, electronic device, storage medium and program product
CN116847147A (en) Special effect video determining method and device, electronic equipment and storage medium
CN115187759A (en) Special effect making method, device, equipment, storage medium and program product
CN114117092A (en) Remote cooperation method, device, electronic equipment and computer readable medium
CN114528433A (en) Template selection method and device, electronic equipment and storage medium
CN111696214A (en) House display method and device and electronic equipment
US20230405475A1 (en) Shooting method, apparatus, device and medium based on virtual reality space
CN116459508A (en) Special effect prop generation method, picture processing method and device and electronic equipment
CN116301530A (en) Virtual scene processing method and device, electronic equipment and storage medium
CN114549607A (en) Method and device for determining main body material, electronic equipment and storage medium
CN116578226A (en) Image processing method, apparatus, device, storage medium, and program product
CN115941841A (en) Associated information display method, device, equipment, storage medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination