WO2019154339A1 - Method and apparatus for generating a special effect program file package, method and apparatus for generating special effects, and electronic device - Google Patents

Method and apparatus for generating a special effect program file package, method and apparatus for generating special effects, and electronic device

Info

Publication number
WO2019154339A1
Authority
WO
WIPO (PCT)
Prior art keywords
sub-materials
key point
parameter
program file
Prior art date
Application number
PCT/CN2019/074503
Other languages
English (en)
French (fr)
Inventor
许亲亲
李展鹏
Original Assignee
北京市商汤科技开发有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京市商汤科技开发有限公司
Priority to JP2020536227A (patent JP7167165B2)
Priority to AU2019218423A (patent AU2019218423A1)
Priority to SG11202006351VA (patent SG11202006351VA)
Priority to KR1020207019275A (patent KR102466689B1)
Priority to EP19750743.7A (patent EP3751413A4)
Publication of WO2019154339A1
Priority to US16/914,622 (patent US11368746B2)


Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
            • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
              • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
                • H04N 21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
                  • H04N 21/4351 Processing of additional data involving reassembling additional data, e.g. rebuilding an executable program from recovered modules
    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
            • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
              • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
                • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
                  • G06F 3/0482 Interaction with lists of selectable items, e.g. menus
                • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
                  • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
                  • G06F 3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
          • G06F 8/00 Arrangements for software engineering
            • G06F 8/60 Software deployment
              • G06F 8/61 Installation
            • G06F 8/70 Software maintenance or management
              • G06F 8/72 Code refactoring
          • G06F 9/00 Arrangements for program control, e.g. control units
            • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
              • G06F 9/44 Arrangements for executing specific programs
                • G06F 9/451 Execution arrangements for user interfaces
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 13/00 Animation
            • G06T 13/20 3D [Three Dimensional] animation
              • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
          • G06T 19/00 Manipulating 3D models or images for computer graphics
            • G06T 19/006 Mixed reality
          • G06T 2213/00 Indexing scheme for animation
            • G06T 2213/12 Rule based animation
      • G11 INFORMATION STORAGE
        • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
          • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
            • G11B 27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
              • G11B 27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
            • G11B 27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
              • G11B 27/34 Indicating arrangements

Definitions

  • the present application relates to computer vision technology, and in particular to a method and device for generating a special effect program file package and a special effect, and an electronic device.
  • Augmented Reality (AR) is a technology that "seamlessly" integrates real-world information and virtual-world information: virtual information is simulated and superimposed onto entity information at a given time and space in the real world, so that real-world characters, environments, and virtual objects are overlaid in real time in the same picture or space, achieving a sensory experience beyond reality.
  • the embodiment of the present application provides a technical solution for generating a special effect program file package and a technical solution for generating a special effect.
  • a method for generating a special effect program file package includes: importing a set of sub-materials, where the set of sub-materials includes a plurality of sub-materials; acquiring parameter values of play parameters of the set of sub-materials; and generating a special effect program file package according to the set of sub-materials and the parameter values of the play parameters.
  • the plurality of sub-materials have a predetermined playing sequence.
  • the playing timing of the plurality of sub-materials is determined based on file names of the plurality of sub-materials.
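As a hedged illustration of this naming convention (not the patent's actual implementation), the play order can be recovered by sorting the file names numerically; the file names below are assumptions:

```python
# Sketch: derive the playback order of sub-materials from their file names.
# A numeric index embedded in each name (e.g. "rainbow_2.png") is assumed.
import re

def play_order(filenames):
    """Sort sub-material file names by their embedded numeric index."""
    def index_of(name):
        match = re.search(r"(\d+)", name)
        return int(match.group(1)) if match else 0
    return sorted(filenames, key=index_of)

# Numeric sorting keeps "rainbow_10.png" after "rainbow_2.png",
# which plain lexical sorting would not.
order = play_order(["rainbow_2.png", "rainbow_10.png", "rainbow_1.png"])
```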
  • a special effect generating method includes: acquiring parameter values of play parameters of at least one set of sub-materials in a special effect program file package, where the set of sub-materials includes a plurality of sub-materials; performing key point detection on a video image; and generating a special effect based on the at least one set of sub-materials on the video image according to the detected key points and the parameter values of the play parameters of the at least one set of sub-materials.
  • a device for generating a special effect program file package includes:
  • a first import module configured to import a set of sub-materials; the set of sub-materials includes a plurality of sub-materials;
  • a first acquiring module configured to acquire a parameter value of a playing parameter of the set of sub-materials
  • a first generating module configured to generate a special effect program file package according to the set of sub-materials and the parameter values of the play parameters.
  • a special effect generating apparatus including:
  • a second obtaining module configured to obtain a parameter value of a playing parameter of at least one set of sub-materials in the special effect program file package; wherein the set of sub-materials includes a plurality of sub-materials;
  • a first detecting module configured to perform key point detection on the video image
  • a second generating module configured to generate an effect based on the at least one set of sub-materials on the video image according to the detected key points and parameter values of the playing parameters of the at least one set of sub-materials.
  • an electronic device including:
  • a memory for storing a computer program
  • a processor configured to execute a computer program stored in the memory, and when the computer program is executed, implements the method described in any one of the embodiments of the present application.
  • a computer readable storage medium having stored thereon a computer program, which when executed by a processor, implements the method of any of the embodiments of the present application.
  • a computer program comprising computer instructions that, when executed in a processor of a device, implement the method of any of the embodiments of the present application.
  • with the method and device for generating a special effect program file package, the electronic device, the program, and the medium provided by the foregoing embodiments of the present application, a group of sub-materials is imported when a special effect program file package is generated, the group of sub-materials including a plurality of sub-materials; the parameter values of the play parameters of the group of sub-materials are acquired; and the special effect program file package is generated according to the group of sub-materials and the parameter values of the play parameters, so that dynamic special effect processing can be performed on a video based on the special effect program file package and dynamic special effects can be realized on the played video.
  • the embodiments of the present application can generate the special effect program file executable by a rendering engine without manually writing a program file; the operation is simple and the required time is short, which improves the overall efficiency of implementing dynamic special effects and effectively ensures the accuracy of the special effects.
  • the special effect generating method and device, the electronic device, the program, and the medium provided by the foregoing embodiments of the present application obtain parameter values of the play parameters of at least one set of sub-materials in the special effect program file package, where the set of sub-materials includes multiple sub-materials; perform key point detection on the video image; and generate an effect based on the at least one set of sub-materials on the video image according to the detected key points and the parameter values of the play parameters of the at least one set of sub-materials.
  • in this way, a dynamic special effect is generated on the video by using the parameter values of the play parameters of at least one set of sub-materials in the pre-generated special effect program file package together with the key points in the video image, so that dynamic special effect play is implemented in the video and the playback effect of the video is improved.
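The per-frame flow just described can be sketched as follows; this is a minimal illustration, and `detect_keypoints`, `draw`, and the parameter names are stand-ins rather than the patent's actual interfaces:

```python
# Sketch: overlay sub-materials on video frames using detected key points.
# `detect_keypoints` and `draw` are injected stand-ins for key point
# detection and rendering; `interval` and `position` mirror the play
# parameters described in the text.

def apply_effect(frames, sub_materials, params, detect_keypoints, draw):
    """Return frames with one sub-material drawn at a key point anchor."""
    out = []
    for i, frame in enumerate(frames):
        keypoints = detect_keypoints(frame)
        # Advance through the set: a new sub-material every `interval` frames.
        material = sub_materials[(i // params["interval"]) % len(sub_materials)]
        out.append(draw(frame, material, keypoints[params["position"]]))
    return out
```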
  • FIG. 1 is a flowchart of an embodiment of a method for generating a special effect program file package of the present application.
  • FIG. 2 is a diagram showing an example of an operation interface of a device for generating a special effect program file package according to an embodiment of the present application.
  • FIG. 3 is an exemplary schematic diagram of a play parameter setting interface of a set of sub-materials when the reference part is a hand in an embodiment of the present application.
  • FIG. 4 is an exemplary schematic diagram of a hand motion in an embodiment of the present application.
  • FIG. 5 is an exemplary schematic diagram of key points of a face in an embodiment of the present application.
  • FIG. 6 is a flowchart of another embodiment of a method for generating a special effect program file package of the present application.
  • FIG. 7 is a flowchart of an embodiment of a method for generating a special effect of the present application.
  • FIG. 8 is a flowchart of another embodiment of a method for generating a special effect of the present application.
  • FIG. 9 is a schematic structural diagram of an embodiment of a device for generating a special effect program file package according to the present application.
  • FIG. 10 is a schematic structural diagram of another embodiment of a device for generating a special effect program file package according to the present application.
  • FIG. 11 is a schematic structural diagram of an embodiment of a special effect generating apparatus of the present application.
  • FIG. 12 is a schematic structural diagram of another embodiment of a special effect generating apparatus of the present application.
  • FIG. 13 is a schematic structural diagram of an application embodiment of an electronic device according to the present application.
  • "a plurality" may mean two or more, and "at least one" may mean one, two, or more.
  • the term "and/or" in the disclosure merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate three cases: A exists alone, A and B exist simultaneously, or B exists alone.
  • the character "/" in the present application generally indicates that the associated objects before and after it are in an "or" relationship.
  • Embodiments of the present application can be applied to electronic devices such as terminal devices, computer systems, servers, etc., which can operate with numerous other general purpose or special purpose computing system environments or configurations.
  • Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing environments including any of the above, and the like.
  • Electronic devices such as terminal devices, computer systems, servers, etc., can be described in the general context of computer system executable instructions (such as program modules) being executed by a computer system.
  • program modules may include routines, programs, target programs, components, logic, data structures, and the like that perform particular tasks or implement particular abstract data types.
  • the computer system/server can be implemented in a distributed cloud computing environment where tasks are performed by remote processing devices that are linked through a communication network.
  • program modules may be located on a local or remote computing system storage medium including storage devices.
  • FIG. 1 is a flowchart of an embodiment of a method for generating a special effect program file package of the present application.
  • the method for generating the special effect program file package in each embodiment of the present application may be implemented by, for example but not limited to, a single device (referred to in the embodiments of the present application as the device for generating a special effect program file package).
  • the method for generating the special effect program file package of the embodiment includes:
  • a plurality of sub-materials in a set of sub-materials have predetermined playback timings.
  • the playing timing of the plurality of sub-materials in the set of sub-materials may be determined based on the file names of the plurality of sub-materials.
  • the operation 102 may be performed by a processor invoking a corresponding instruction stored in a memory or by a first import module executed by the processor.
  • the operation 104 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a first acquisition module executed by the processor.
  • the operation 106 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a first generation module executed by the processor.
  • a set of sub-materials may be imported, or multiple sets of sub-materials may be imported.
  • operations 102-104 may be performed respectively for each group of sub-materials, and operation 106 is then performed for the plurality of groups of sub-materials, generating the special effect program file package from the plurality of groups of sub-materials and the parameter values of their play parameters. That is, a special effect program file package may include one set of sub-materials or multiple sets of sub-materials.
  • the special effect program file package can be used for special effect processing of a video, generating a dynamic special effect of a set of sub-materials on the video; for example, an AR effect may be rendered and processed by an AR engine or an electronic device having an AR drawing function.
  • with the method for generating the special effect program file package provided by the above embodiments of the present application, a set of sub-materials is imported when the special effect program file package is generated, the set of sub-materials including a plurality of sub-materials; the parameter values of the play parameters of the set of sub-materials are obtained; and the special effect program file package is generated according to the set of sub-materials and the parameter values of the play parameters, so that dynamic special effect processing can be performed on a video based on the special effect program file package and dynamic special effects can be implemented on the played video. The embodiments of the present application can generate the special effect program file executable by a rendering engine without manually writing a program file; the operation is simple and the required time is short, which improves the overall efficiency of implementing dynamic special effects and effectively guarantees the accuracy of the special effects.
  • the device for generating the special effect program file package may include a preset special effect program file, which may be, for example, a file in JSON (JavaScript Object Notation), a lightweight data interchange format based on a subset of the JavaScript language, or any other executable program file.
  • the parameter value of the play parameter in the special effect program file may be vacant or preset to a default value.
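For illustration only, such a json special effect program file might look like the sketch below; the field names and the filled-in default values are assumptions, not the patent's actual schema:

```python
# Sketch of a hypothetical json effect program file: one group of
# sub-materials plus parameter values for its play parameters. Vacant
# parameters are filled with assumed defaults here.
import json

effect_program = {
    "groups": [
        {
            "materials": ["rainbow_0.png", "rainbow_1.png"],  # the sub-materials
            "params": {
                "Display": "Yes",          # show the sub-materials
                "Interval": 1,             # frames between adjacent sub-materials
                "TriggerAction": "open_mouth",
                "Loop": 0,                 # 0 agreed to mean infinite looping
                "Scale": ["PointA", "PointB"],
            },
        }
    ]
}

# The package would bundle this text together with the sub-material files.
program_text = json.dumps(effect_program, indent=2)
```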
  • the device for generating the special effect program file package may include an operation bar, where the operation bar is provided with at least one interactive interface for receiving parameter values set for the play parameters of a group of sub-materials. In addition, the device may further include a program file display column for displaying the program file of the play parameters of the set of sub-materials.
  • as shown in FIG. 2, which is an example of the operation interface of the device for generating the special effect program file package in an embodiment of the present application, the operation interface includes an operation bar and a program file display column. The program file display column displays the special effect program file when the play parameters of the set of sub-materials are vacant or preset to default values. When a parameter value set for a play parameter of the group of sub-materials is received through the interactive interface of the operation bar, the parameter value of that play parameter is updated to the most recently received value, and the program file display column displays the special effect program file with the updated parameter value in real time.
  • the operation 102 may include: receiving an import instruction sent through an interactive interface of the operation bar, and importing a plurality of sub-materials in the material folder pointed to by the import instruction as the above set of sub-materials.
  • the operation bar may include a play parameter setting interface including at least one interactive interface, and may further include other areas, such as a reference part display area; in this case, the play parameter setting interface may be the play parameter setting interface under each reference part.
  • the reference parts in the embodiments of the present application may include, but are not limited to, any one or more of the following: ear, hand, face, hair, neck, limb.
  • FIG. 3 shows an exemplary schematic diagram of the play parameter setting interface of a group of sub-materials when the reference part is a hand in an embodiment of the present application.
  • receiving the import instruction input through the interactive interface of the operation bar and importing the plurality of sub-materials in the material folder pointed to by the import instruction may include: receiving the import instruction sent through the interactive interface in the play parameter setting interface of the operation bar, and importing the plurality of sub-materials in the material folder pointed to by the import instruction.
  • receiving the import instruction input through the interactive interface of the operation bar and importing the plurality of sub-materials in the material folder pointed to by the import instruction may also include: receiving a selection instruction sent through the interactive interface of the operation bar, taking the reference part selected by the selection instruction as the target part to which a special effect is currently to be added, and displaying the play parameter setting interface under the target part in the operation bar; and receiving the import instruction sent through the interactive interface in the play parameter setting interface, and importing the plurality of sub-materials in the material folder pointed to by the import instruction.
  • the importing of the plurality of sub-materials in the material folder pointed to by the import instruction may include the following:
  • Each material folder may include a plurality of sub-materials.
  • for example, in one embodiment of the present application, a material folder may include a plurality of sub-materials of different shapes and colors, such as earrings and earmuffs.
  • the sub-materials at preset positions or with preset serial numbers in the material folder pointed to by the import instruction may be imported by default; for example, when the user does not select any sub-material, the first to fifth sub-materials in the material folder are selected and imported by default.
  • obtaining the parameter values of the play parameters of the plurality of sub-materials in operation 104 may include: in response to receiving, through the interactive interface in the play parameter setting interface, a parameter value set for a play parameter of the group of sub-materials, taking the set parameter value as the parameter value of that play parameter of the group of sub-materials; and/or, in response to not receiving, through the interactive interface in the play parameter setting interface, a parameter value set for a play parameter of the group of sub-materials, taking the preset parameter value as the parameter value of that play parameter of the group of sub-materials.
  • the embodiments of the present application do not need to generate the rendering-engine-executable file by manually writing a program file; the special effect program file package can be generated based on the user's selection of a set of sub-materials in the operation bar and the setting of parameter values. The operation is simple and the required time is short, which improves the overall efficiency of implementing dynamic special effects and effectively guarantees the accuracy of the special effects.
  • the parameter values of the play parameters may be the same for all the sub-materials in the set of sub-materials.
  • the play parameters of a set of sub-materials may include, for example but not limited to, any one or more of the following:
  • display parameter: used to indicate whether to display the plurality of sub-materials. The parameter values include "Yes" and "No": when the parameter value is "Yes", the corresponding plurality of sub-materials are displayed during video play; when the parameter value is "No", the plurality of sub-materials are not displayed during video play;
  • interval parameter: used to indicate the number of frames between two adjacent sub-materials of the plurality of sub-materials;
  • trigger action parameter: used to indicate the trigger action that triggers display of the plurality of sub-materials, that is, by which action the display of the plurality of sub-materials is triggered. The parameter value may be any of the trigger actions, and the user may select at least one action from a preset action set as the trigger action. During video play, when the corresponding trigger action is detected, display of the corresponding plurality of sub-materials may be triggered; for example, when the trigger action "opening mouth" specified in the trigger action parameter is detected in the video, an animation of spitting out a rainbow is started;
  • the start display time, the end display time, the display duration, and the like of the plurality of sub-materials may be determined according to the parameter values of other parameters, for example, according to the parameter values of the delay trigger parameter, the trigger end parameter, and the loop parameter;
  • loop parameter: used to indicate the number of times the plurality of sub-materials are played in a loop; a specific value of the number of loop plays, for example 1 or 5, may be set or selected as its parameter value, and it may be agreed that a parameter value of 0 means infinite loop play;
  • delay trigger parameter: used to indicate the time by which display of the plurality of sub-materials is delayed, that is, when a trigger action in the trigger action parameter is detected at a certain frame in the video, how many frames later the display of the plurality of sub-materials starts; the delay time may be set or selected as its parameter value;
  • trigger end parameter: used to indicate the action that ends display of the plurality of sub-materials, that is, by which action the display of the plurality of sub-materials is ended. The parameter value may be any of the trigger actions, and the user may select at least one action from the preset action set as the action that ends the display of the plurality of sub-materials. During video play, when the trigger action specified by the trigger end parameter is detected, display/play of the corresponding plurality of sub-materials may be ended. For example, if playing of a rainbow formed by the plurality of sub-materials is started when the trigger action "opening mouth" specified in the trigger action parameter is detected in the video, the parameter value of the trigger end parameter can be set to "closing mouth", so that the rainbow disappears when the "closing mouth" action occurs in the video;
  • Display size parameter (Scale): used to indicate the reference basis for changes in the display size of the plurality of sub-materials, so as to achieve a display effect in which the plurality of sub-materials become larger and smaller. The parameter value of the display size parameter (that is, the reference basis for the change in display size of the plurality of sub-materials) may be two or more of the preset key points (which may be denoted PointA and PointB); in this case, the display size of the plurality of sub-materials varies with the size of the region formed by these two or more key points in the video as reference. For example, if the plurality of sub-materials are glasses and the selected parameter values of the display size parameter represent key points of the left eye, the display size of the plurality of sub-materials will change as the length of the line connecting the two key points changes. If the parameter value of the display size parameter is not changed, the default parameter value may be two key points on the reference part corresponding to the plurality of sub-materials;
  • Position type parameter (PositionType): used to indicate the type of positional relationship of the plurality of sub-materials;
  • Position correlation parameter: used to indicate whether the plurality of sub-materials follow the movement of the preset reference part, that is, whether the plurality of sub-materials move with the position of the reference part. It may include "Yes (Move With Position)" and "No" options: when the parameter value "Yes" is selected, the plurality of sub-materials follow the position of the reference part; when the parameter value of the position type parameter is foreground, the parameter value "No" is selected, indicating that the plurality of sub-materials do not follow the movement of the reference part;
• Position parameter (Position): used to indicate the position binding relationship between the plurality of sub-materials and the preset key points, that is, the positional relationship between the plurality of sub-materials and the preset key points during video playback; the positions of the plurality of sub-materials may be bound to selected key points among the preset key points;
• Rotation center parameter (RotateCenter): used to indicate the key point around which the plurality of sub-materials rotate, that is, to select which key point the plurality of sub-materials rotate around during video playback.
• the trigger action corresponding to the trigger action parameter includes, for example, any one or more of the following:
• No action trigger (NULL);
• Eye movements: for example, blinking, closing the eyes, opening the eyes, etc.;
• Head movements: for example, shaking the head, nodding, tilting the head, turning the head, etc.;
• Eyebrow movements: for example, raising the eyebrows, etc.;
• Hand movements: for example, the heart gesture, the hand-clasp gesture, the open palm, the thumbs-up, the congratulation (fist-and-palm) gesture, the one-hand heart, the OK gesture, the scissors (V) gesture, the pistol gesture, pointing with the index finger, etc.;
• Mouth movements: for example, opening the mouth, closing the mouth, etc.
• Referring to FIG. 4, which is an exemplary schematic diagram of hand movements in the embodiment of the present application.
  • the location type parameter described above includes, for example, any of the following:
• a parameter used to represent the foreground (Foreground): in this case, the corresponding plurality of sub-materials are displayed as the foreground during video playback; the plurality of sub-materials are associated with the screen position of the display of the playback terminal during playback, and the position of their center point on the screen remains unchanged;
• a parameter used to indicate that the plurality of sub-materials are positioned and/or moved following the face position: indicates that the reference part corresponding to the plurality of sub-materials is a face, and the plurality of sub-materials are positioned and/or moved following the face position during video playback;
• a parameter used to indicate that the plurality of sub-materials are positioned and/or moved following the hand position: indicates that the reference part corresponding to the plurality of sub-materials is a gesture (i.e., a hand), and the plurality of sub-materials are positioned and/or moved following the hand position during video playback;
• a parameter used to indicate that the plurality of sub-materials are positioned and/or moved following the foot position: indicates that the plurality of sub-materials are positioned and/or moved following the foot position during video playback;
• a parameter used to indicate that the plurality of sub-materials are positioned and/or moved following the position of the human skeleton: indicates that the plurality of sub-materials are positioned and/or moved following the position of the human skeleton during video playback;
• the playback position relationship associated with the reference part may include, for example, any one or more of the following positional relationships: the plurality of sub-materials move following the position of the reference part; the plurality of sub-materials move following the position of the reference part and are scaled according to the size (Size) of the reference part; the plurality of sub-materials move following the position of the reference part, are scaled according to the size of the reference part, and are depth-scaled (Depth) following the rotation of the reference part; or the plurality of sub-materials move following the position of the reference part, are scaled according to the size of the reference part, are depth-scaled (Depth) following the rotation of the reference part, and rotate (Rotation) following the rotation of the reference part in its plane;
• a parameter used to indicate the background (Background): indicates that the plurality of sub-materials are displayed as the background during video playback; the plurality of sub-materials are associated with the screen position of the display of the playback terminal during video playback, and their size is adjusted so that the four vertex coordinates of the plurality of sub-materials coincide with the four vertices of the screen of the display.
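The three position types above (fixed foreground, follow-a-part, stretched background) can be summarized in a minimal sketch; the function name, the string values, and the box layout are assumptions for illustration only:

```python
def anchor_for(position_type, frame_size, part_box=None):
    """Illustrative sketch of how a PositionType value might choose the
    anchor point for a group of sub-materials."""
    w, h = frame_size
    if position_type == "Foreground":
        return (w / 2, h / 2)        # fixed screen centre, never follows a part
    if position_type == "Background":
        return (0, 0)                # stretched so vertices meet screen corners
    # follow-type positions (Face/Hand/Foot/Skeleton) need a detected part box
    if part_box is None:
        raise ValueError("follow-type positions need a detected part box")
    x, y, bw, bh = part_box
    return (x + bw / 2, y + bh / 2)  # centre of the detected part
```

A follow-type value recomputes the anchor on every frame from the detection result, whereas Foreground/Background are tied to the screen and ignore detection entirely.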
• the playing parameters further include: a correspondence between the display position of the set of sub-materials and at least one predetermined key point.
• the key points may include, for example but not limited to, any one or more of the following: head key points, face key points, shoulder key points, arm key points, gesture key points, waist key points, leg key points, foot key points, human skeleton key points, and so on.
• the head key points may include, for example but not limited to, any one or more of the following: a top-of-head key point, a nose-tip key point, a chin key point, and so on.
• the face key points may include, for example but not limited to, any one or more of the following: face contour key points, eye key points, eyebrow key points, nose key points, mouth key points, and so on.
• the eye key points may include, for example but not limited to, any one or more of the following: a left-eye orbit key point, a left-eye pupil center key point, a left-eye center key point, a right-eye orbit key point, a right-eye pupil center key point, a right-eye center key point, and so on.
• the eyebrow key points may include, for example but not limited to, any one or more of the following: a left eyebrow key point, a right eyebrow key point, and so on.
• the nose key points may include, for example but not limited to, any one or more of the following: a nose bridge key point, a key point below the nose, key points on the outside of the nose, and so on.
• the mouth key points may include, for example but not limited to, any one or more of the following: an upper lip key point, a lower lip key point, and so on.
• the shoulder key points may include, for example but not limited to, any one or more of the following: a shoulder-head intersection key point located at the intersection of the shoulder and the head, an arm-root contour key point, a shoulder contour midpoint key point located at the midpoint between the arm-root contour key point and the shoulder-head intersection key point, and so on.
• the arm key points may include, for example but not limited to, any one or more of the following: a wrist contour key point, an elbow contour key point, an arm-root contour key point, a forearm contour midpoint key point located at the midpoint between the wrist contour key point and the elbow contour key point, an upper-arm midpoint key point located at the midpoint between the elbow contour key point and the arm-root contour key point, and so on.
• the gesture key points may include, for example but not limited to, any one or more of the following: the four vertex key points of the gesture frame (i.e., the gesture detection frame), the center key point of the gesture frame, and so on.
• the leg key points may include, for example but not limited to, any one or more of the following: a crotch key point, a knee contour key point, an ankle contour key point, a thigh-root outer contour key point, a calf contour key point located at the midpoint between the knee contour key point and the ankle contour key point, a thigh inner contour midpoint key point located at the midpoint between the knee inner contour key point and the crotch key point, a thigh outer contour midpoint key point located at the midpoint between the knee outer contour key point and the thigh-root outer contour key point, and so on.
• the waist key points may include, for example but not limited to, any one or more of the following: the N points generated by equally dividing the region between the thigh-root outer contour key point and the arm-root contour key point into N parts, where N is greater than 1.
• the foot key points may include, for example but not limited to, any one or more of the following: a toe key point, a heel key point, and so on.
• the human skeleton key points may include, for example but not limited to, any one or more of the following: a right shoulder bone key point, a right elbow bone key point, a right wrist bone key point, a left shoulder bone key point, a left elbow bone key point, a left wrist bone key point, a right hip bone key point, a right knee bone key point, a right ankle bone key point, a left hip bone key point, a left knee bone key point, a left ankle bone key point, a head bone key point, a neck bone key point, and so on.
• the positions of the above key points may be set in advance, so that the correspondence between the display positions of the plurality of sub-materials and the key points can be established from the preset positional relationship.
• the key points may be selected directly from the preset key point set as the parameter values of the corresponding playing parameters.
• based on face detection and gesture detection respectively, multiple key points may be defined for the face and the gesture (hand); during effect generation, the correspondence of the positional relationship is established based on the face key points or the gesture key points.
  • FIG. 5 is an exemplary schematic diagram of a key point of a face in the embodiment of the present application.
  • a face key point may be defined as follows:
• the hand key points may be defined as follows:
• the key points numbered 110-113 are the four vertices of the gesture detection frame (i.e., the bounding frame of the hand), and the key point numbered 114 is the center of the gesture detection frame.
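As a minimal illustration of the numbering above, key point 114 can be recovered as the centroid of the four frame vertices 110-113; the coordinate layout used here is an assumption:

```python
def gesture_frame_center(vertices):
    """The four vertices numbered 110-113 define the gesture detection
    frame; key point 114 is its centre. This sketch recovers 114 as the
    average of the four vertex coordinates."""
    xs = [p[0] for p in vertices]
    ys = [p[1] for p in vertices]
    return (sum(xs) / 4.0, sum(ys) / 4.0)

# vertices 110-113 of an axis-aligned detection frame
center = gesture_frame_center([(10, 20), (110, 20), (110, 220), (10, 220)])
```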
• the method may further include: establishing a correspondence between the display position of the plurality of sub-materials and at least one predetermined key point; and/or establishing a correspondence between the display position of the plurality of sub-materials and the center key point of a detection frame.
• for example, when the key points in the correspondence established in the above embodiments of the present application are head key points, face key points, shoulder key points, arm key points, waist key points, leg key points, foot key points, or human skeleton key points, the correspondence between the display position of the plurality of sub-materials and at least one of these key points may be established;
• alternatively, a correspondence may be established between the display position of the plurality of sub-materials and the center point of the corresponding detection frame (for example, the head detection frame, the face detection frame, the gesture detection frame, or the human body detection frame).
  • the device for generating the special effect program package of the embodiment of the present application may further include a content display column.
  • the method further includes: displaying a reference image through the content display column, and displaying a key point on the reference image.
  • the reference image includes at least one reference portion.
  • the reference site may include, for example, any one or more of the following: ears, hands, face, hair, neck, shoulders, and the like.
• the reference image may be, for example, at least a partial image of a reference character, such as any one or more of the following: a head image, a face image, a shoulder image, an arm image, a gesture image, a waist image, a leg image, a foot image, or a complete image of the reference character, and so on.
• the method further includes: according to the parameter values of the playing parameters of the set of sub-materials and a preset parameter display strategy, displaying each imported sub-material in sequence in the content display column, or displaying the plurality of sub-materials of the imported set of sub-materials simultaneously; or, receiving a selection operation on a sub-material of the set of sub-materials, and displaying the sub-material selected by the selection operation in the content display column.
• the method further includes: according to a position moving operation, received through the content display column, on the set of sub-materials or on one of the sub-materials, updating the display position of the set of sub-materials in the content display column, and updating the corresponding parameter values in the playing parameters of the set of sub-materials.
• the user can select, with the mouse, the set of sub-materials or one of the sub-materials displayed in the content display column, move the mouse to the small box in the lower right corner of the selected sub-material, and zoom the selected sub-material by dragging the small box.
• the user can also select, with the mouse, a set of sub-materials or one of the sub-materials displayed in the content display column and directly move it, so as to move the selected sub-material to the correct or desired position.
• the position and display ratio of the plurality of sub-materials on the playback terminal will be consistent with their position and display ratio in the content display column.
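Because the layout on the playback terminal mirrors the content display column, a drag in the column can be stored as coordinates normalised to the column canvas. A hedged sketch (the parameter keys and function name are assumptions):

```python
def update_position_params(params, new_xy, canvas_size):
    """Store a dragged position as coordinates normalised to the
    content-display-column canvas, so the same ratios reproduce the
    layout on a playback terminal of any resolution."""
    cw, ch = canvas_size
    params["Position"] = {"x": new_xy[0] / cw, "y": new_xy[1] / ch}
    return params

# dragging a sub-material to the centre of a 640x360 canvas
p = update_position_params({}, (320, 180), (640, 360))
```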
  • the user may add special effects to multiple reference parts.
• the ear, the face, and the hand may be used as the target parts to which special effects need to be added, and the method of any of the above embodiments may be performed for each of them to add the corresponding special effects.
• the method further includes: for two or more sub-materials, adjusting the occlusion relationship between the two or more sub-materials according to a layer parameter adjustment instruction sent through the interaction interface of the operation column, and displaying the two or more sub-materials in the content display column according to the adjusted occlusion relationship and the parameter values of the playing parameters.
  • the method may further include:
• the special effect program file of the set of sub-materials is generated according to the preset special effect program file and the parameter values of the playing parameters of the set of sub-materials, and the special effect program file of the set of sub-materials is displayed through the program file column.
  • the above-mentioned effect program file may include, for example, but not limited to, a special effect program file generated by a json program or any other executable program.
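For illustration only, a json-based special effect program file might look like the sketch below; every field name here is an assumption rather than the actual format produced by the device:

```python
import json

# Purely illustrative contents of a json special effect program file.
effect_program = {
    "materials": ["rainbow_000.png", "rainbow_001.png", "rainbow_002.png"],
    "play_params": {
        "TriggerType": "open_mouth",   # action that starts the display
        "TriggerEnd": "close_mouth",   # action that ends the display
        "PositionType": "Face",        # follow the face position
        "Scale": {"PointA": 74, "PointB": 77},  # key points driving the size
    },
}

text = json.dumps(effect_program, indent=2)  # what the program file column could show
restored = json.loads(text)                  # what the playback side would parse
```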
  • the method further includes: generating, by the device that generates the special effect program file package, the operation interface, where the operation interface includes: an operation bar, Content display bar and program file bar.
• the above operation interface includes three areas: the left side, the middle, and the right side.
  • the displaying the operation interface may include: displaying an operation column on a left side of the operation interface, displaying a content display column in a middle portion of the operation interface, and displaying the program file column on a right side of the operation interface.
• the foregoing set of sub-materials can be imported through the interaction interface 20 in the left operation column, the occlusion relationship between the layers of the sets of sub-materials can be adjusted through the interaction interface 21, and the layer parameters of each set of sub-materials can be set.
  • the parameter value is set by the interaction interface 23 for the playing parameters of a group of sub-materials;
• the content display column uses an average human face as the reference face and directly displays all the imported sub-materials; the position of a displayed sub-material can be moved with the mouse;
• the program file display column on the right side is used to display, through the display area 24, the content of the special effect program file of the set of sub-materials for which parameter values are currently set; the special effect program file package can be exported through the save instruction interface 25 in the program file display column, that is, the special effect program file package is generated and saved.
  • FIG. 6 is a flowchart of another embodiment of a method for generating a special effect program file package of the present application. As shown in FIG. 6, the method for generating the special effect program file package of the embodiment includes:
  • the generating device of the special effect program file package is started according to the received startup command, and displays an operation interface.
  • the operation interface includes: an operation bar, a content display bar and a program file bar.
  • the reference image includes at least one reference portion.
• the operation 304 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by an operation interface or a content display column run by the processor.
  • the operation 306 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by the first import module executed by the processor.
  • the operation 308 may be performed by a processor invoking a corresponding instruction stored in a memory or by a first acquisition module executed by the processor.
• the operation 310 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by an operation interface or a program file column run by the processor.
  • the operation 312 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a first generation module executed by the processor.
  • the method may further include: saving the special effect program file package at a location pointed by the save instruction according to the received save instruction.
  • the saving of the special effect file package at the location pointed by the save instruction according to the received save instruction may include:
  • the special effect program file package can be compressed and saved, so as to be imported into the mobile phone terminal for special effect generation.
• the embodiment of the present application only compresses the size of the special effect program file package, and does not change the size of the sub-materials in the package; that is, the sub-materials in the special effect program file package keep the size they had before being imported.
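Exporting the package as a compressed archive, while storing each sub-material byte-for-byte so its dimensions are unchanged, can be sketched with the standard zipfile module; the function name is illustrative:

```python
import os
import zipfile

def export_package(folder, out_path):
    """Sketch of exporting the package as a compressed archive: the archive
    is smaller on disk, but each sub-material file is stored byte-for-byte,
    so the image dimensions inside it are unchanged."""
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(folder):
            for name in files:
                full = os.path.join(root, name)
                # keep paths relative to the package folder
                zf.write(full, os.path.relpath(full, folder))
```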
  • the special effect program file package can be imported into the terminal, and the dynamic effect of the video played by the terminal is generated.
  • FIG. 7 is a flowchart of an embodiment of a method for generating a special effect of the present application.
  • the special effect generating method of this embodiment includes:
  • a group of sub-materials includes multiple sub-materials.
  • the plurality of sub-materials have a predetermined playback timing.
  • the operation 402 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a second acquisition module executed by the processor.
• key point detection related to the correspondence may be performed on the video image by a neural network, and the key point detection result may be output.
• the key point detection result may include, for example but not limited to, any one or more of the following: the positions, in the video image, of the key points involved in the correspondence; and the preset numbers, in the special effect program file package, of the key points involved in the correspondence.
  • the operation 404 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a first detection module executed by the processor.
  • the operation 406 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a second generation module executed by the processor.
• the parameter values of the playing parameters of at least one set of sub-materials in the special effect program file package are obtained, wherein each set of sub-materials includes a plurality of sub-materials; key point detection is performed on the video image; and, based on the detected key points and the parameter values of the playing parameters of the at least one set of sub-materials, a special effect based on the at least one set of sub-materials is generated on the video image.
• a dynamic special effect is thus generated on the video by using the parameter values of the playing parameters of at least one set of sub-materials in the pre-generated special effect program file package together with the key points in the video image, thereby implementing dynamic special effect playback in the video and improving the video playback effect.
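The flow just summarized (obtain parameter values, detect key points, generate the effect) can be sketched for a single video image; the `draw` callback and the parameter keys are assumptions for illustration:

```python
def apply_effect(frame_keypoints, play_params, draw):
    """Minimal sketch: given the key points detected in one video image
    and the playing-parameter values from the package, compute where each
    sub-material goes and hand it to a renderer callback."""
    bound = play_params["Position"]            # numbers of the bound key points
    xs = [frame_keypoints[i][0] for i in bound]
    ys = [frame_keypoints[i][1] for i in bound]
    anchor = (sum(xs) / len(xs), sum(ys) / len(ys))  # centre of bound points
    for material in play_params["materials"]:
        draw(material, anchor)

# toy detection result: key points 4 and 7 found in the current frame
calls = []
apply_effect(
    {4: (40, 60), 7: (80, 100)},
    {"Position": [4, 7], "materials": ["m_000.png"]},
    lambda m, a: calls.append((m, a)),
)
```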
  • the method may further include: importing a special effect program file package.
• the special effect program file package may include at least one set of sub-materials and the parameter values of the playing parameters of the at least one set of sub-materials, and the parameter values of the playing parameters of a set of sub-materials include a correspondence between the display position of the set of sub-materials and at least one predetermined key point.
• importing the special effect program file package may include: reading the special effect program file package into the memory by calling a first interface function for reading the sticker material; and parsing the special effect program file package to obtain the at least one set of sub-materials and the special effect program file, wherein the special effect program file includes the parameter values of the playing parameters of the at least one set of sub-materials.
  • the above-mentioned effect program file may include: a json program or an effect program file of other executable programs.
  • the special effect program file package in the embodiment of the special effect generation method of the present application may be the special effect program file package generated by the embodiment of the method for generating the special effect program file package of the present application.
• the operation 402 may include: creating a sticker handle by calling a second interface function for creating a sticker handle; and reading the plurality of sub-materials and the parameter values of the playing parameters in the special effect program file package, and storing them in the sticker handle.
• the method further includes: determining the playing timing of the plurality of sub-materials according to the file names of the plurality of sub-materials; obtaining, from the special effect program file in the sticker handle, the parameter values of the playing parameters of the at least one set of sub-materials and the playing timing of the plurality of sub-materials; acquiring the display position and the video frame numbers of each of the at least one set of sub-materials in the video; and reading in advance, from the video, the video images corresponding to the video frame numbers.
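Determining the playing timing from file names might amount to a numeric sort, sketched below under the assumption of a trailing frame index such as "fx_2.png" (the real naming convention is not specified here):

```python
import re

def play_order(file_names):
    """Recover the playing order of the sub-materials from their file
    names, assuming each name ends with a numeric frame index. A plain
    lexicographic sort would misorder 'fx_10.png' before 'fx_2.png'."""
    def index(name):
        m = re.search(r"(\d+)\.\w+$", name)
        return int(m.group(1)) if m else 0
    return sorted(file_names, key=index)

order = play_order(["fx_10.png", "fx_2.png", "fx_1.png"])
```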
• the operation 406 may include: reading, by calling a third interface function for rendering the sticker material, the sub-materials that need to be displayed on the current video image of the video from the sticker handle; determining, according to the detected key points and the parameter values of the playing parameters, the display positions of those sub-materials on the current video image; and displaying the sub-materials that need to be displayed on the current video image at the determined display positions.
• the method further includes: in response to the playing of the special effect program file package being completed, destroying the sticker handle by calling a fourth interface function for destroying the sticker handle.
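The lifecycle formed by the four interface functions (read the package, create the sticker handle, render per frame, destroy the handle) can be caricatured with a toy class; the real functions and their signatures are not specified in this text:

```python
class StickerHandle:
    """Toy stand-in for the sticker-handle lifecycle described above."""

    def __init__(self, package):            # 2nd function: create the handle,
        self.materials = list(package["materials"])  # storing materials and
        self.params = package["play_params"]         # playing-parameter values
        self.alive = True

    def render(self, frame_index):          # 3rd function: render the material
        return self.materials[frame_index % len(self.materials)]

    def destroy(self):                      # 4th function: free the resources
        self.materials = []
        self.alive = False

h = StickerHandle({"materials": ["a.png", "b.png"], "play_params": {}})
first = h.render(0)   # material to overlay on the first frame
h.destroy()           # called once playback of the package completes
```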
  • FIG. 8 is a flowchart of another embodiment of a method for generating a special effect of the present application.
  • the special effect generating method of this embodiment includes:
  • the parameter value of the play parameter of the set of sub-materials includes a correspondence between a display position of the set of sub-materials and a predetermined at least one key point.
  • the above-mentioned effect program file may include: a json program or an effect program file of other executable programs.
• the special effect program file package in the embodiments of the special effect generation method of the present application may be the special effect program file package generated by the method for generating a special effect program file package according to any one of the embodiments of the present application.
  • the operations 502-504 may be performed by a processor invoking a corresponding instruction stored in a memory or by a second import module executed by the processor.
  • the operations 506-508 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a second acquisition module executed by the processor.
  • the operation 510 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a third acquisition module executed by the processor.
  • the operation 512 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a first detection module executed by the processor.
  • the operations 514-518 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a second generation module executed by the processor.
• the embodiments of the special effect generation method of the present application can be applied to various video playing scenarios, for example, a live video scenario containing a character: a dynamic special effect is generated for the live video, and at least one set of sub-materials in the special effect program file package is superimposed and played on the corresponding part of the character.
  • the corresponding parts can be, for example, ears, hands, face, hair, neck, shoulders, and the like.
  • the method for generating the special effect program file package and the special effect generation method provided by the embodiments of the present application may be performed by any suitable device having data processing capability, including but not limited to: a terminal device, a server, and the like.
• the method for generating the special effect program file package and the special effect generation method provided by the embodiments of the present application may be executed by a processor; for example, the processor executes any method for generating a special effect program file package or any special effect generation method mentioned in the embodiments of the present application by calling corresponding instructions stored in a memory. Details are not repeated below.
• the foregoing program may be stored in a computer readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed; and the foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
  • FIG. 9 is a schematic structural diagram of an embodiment of a device for generating a special effect program file package according to the present application.
  • the apparatus for generating the special effect program file package of the embodiment can be used to implement the embodiment of the method for generating the special effect program file package of the present application.
• the apparatus for generating a special effect program file package of the embodiment includes: a first import module, a first acquisition module, and a first generation module, wherein:
  • a first import module configured to import a set of sub-materials; the set of sub-materials includes a plurality of sub-materials.
  • a plurality of sub-materials in a group of sub-materials have a predetermined playing timing, and a playing timing of the plurality of sub-materials may be determined based on file names of the plurality of sub-materials.
  • the first obtaining module is configured to obtain a parameter value of a playing parameter of the set of sub-materials.
  • a first generating module configured to generate a special effect program file package according to the parameter values of the set of sub-materials and the playing parameters.
  • the special effects package may include a set of sub-materials or sets of sub-materials.
• the device for generating a special effect program file package imports a set of sub-materials, the set of sub-materials including a plurality of sub-materials; obtains the parameter values of the playing parameters of the set of sub-materials; and generates a special effect program file package according to the set of sub-materials and the parameter values of the playing parameters, so as to perform dynamic special effect processing on a video based on the special effect program file package and implement dynamic special effects on the played video.
• the special effect program file executed by the rendering engine can thus be generated without manually writing the program file; the operation is simple and takes little time, which improves the overall efficiency of dynamic effect realization and effectively guarantees the accuracy of the special effects.
  • FIG. 10 is a schematic structural diagram of another embodiment of a device for generating a special effect program file package according to the present application.
• compared with the embodiment shown in FIG. 9, the generating apparatus of this embodiment further includes an operation interface including an operation column; FIG. 2 is an example diagram of the operation interface.
  • the first importing module is configured to receive an importing instruction input through the interactive interface of the operation bar, and import a plurality of sub-materials in the material folder pointed by the importing instruction as the set of sub-materials.
• the first importing module is configured to: receive an import instruction sent through an interaction interface in a playing parameter setting interface under the operation column, and import the plurality of sub-materials in the material folder pointed to by the import instruction; or, receive a selection instruction sent through the interaction interface of the operation column, take the reference part selected by the selection instruction as the target part to which a special effect currently needs to be added, display the playing parameter setting interface under the target part in the operation column, receive an import instruction sent through the interaction interface in the playing parameter setting interface, and import the plurality of sub-materials in the material folder pointed to by the import instruction.
• alternatively, the first importing module is configured to: receive an import instruction sent through the interaction interface, and obtain and display the material folder pointed to by the import instruction; in response to receiving a selection operation on sub-materials in the material folder, import the plurality of sub-materials selected by the selection operation; and/or, in response to not receiving a selection operation on sub-materials in the material folder, select all or some of the sub-materials in the material folder according to a preset setting, and import the sub-materials selected according to the preset setting.
  • when importing the multiple sub-materials from the material folder pointed to by the import instruction, the first importing module is configured to: in response to the import instruction including a display order of the multiple sub-materials in the material folder pointed to by the import instruction, read and import the multiple sub-materials in that display order, and display the file names of the imported sub-materials in the operation bar in that display order; and/or, in response to the import instruction not including a display order of the multiple sub-materials in the material folder pointed to by the import instruction, read and import the multiple sub-materials in a preset order, and display the file names of the imported sub-materials in the operation bar in the preset order.
  • the first obtaining module is configured to: in response to receiving, through the interaction interface in the play parameter setting interface, a parameter value set for a play parameter of the set of sub-materials, use the set value as the parameter value of that play parameter of the set of sub-materials; and/or, in response to not receiving, through the interaction interface in the play parameter setting interface, a parameter value set for a play parameter of the set of sub-materials, use a preset parameter value as the parameter value of the play parameter of the set of sub-materials.
  • the play parameters may further include: a correspondence between the display position of the set of sub-materials and at least one predetermined key point.
  • the first obtaining module may be further configured to establish a correspondence between the display position of the set of sub-materials and at least one key point; and/or establish a correspondence between the display position of the set of sub-materials and the center key point of a detection frame.
  • the operation interface may further include a content display bar for displaying a reference image and displaying key points on the reference image, wherein the reference image includes at least one reference part.
  • the reference image may be, for example, an image of at least a part of a reference character, such as any one or more of the following: a full-body image, a head image, a face image, a shoulder image, an arm image, a gesture image, a waist image, a leg image, a foot image, and so on.
  • the content display bar may be further configured to display the imported set of sub-materials according to a preset display strategy and the parameter values of the play parameters of the set of sub-materials: each sub-material of the imported set may be displayed in turn, or multiple sub-materials of the imported set may be displayed simultaneously; or, upon receiving a selection operation on a sub-material in the set of sub-materials, the sub-material selected by the selection operation is displayed in the content display bar.
  • the generating apparatus of the foregoing embodiments of the present application may further include: a first updating module, configured to, according to a position-moving operation on the set of sub-materials or one of its sub-materials received through the content display bar, update the display position of the set of sub-materials in the content display bar and update the corresponding parameter values in the play parameters of the set of sub-materials.
  • the generating apparatus of the foregoing embodiments of the present application may further include: a second updating module, configured to, according to a resizing operation on the set of sub-materials or one of its sub-materials received through the content display bar, update the display size of the set of sub-materials in the content display bar and update the corresponding parameter values in the play parameters of the set of sub-materials.
  • the generating apparatus of the foregoing embodiments of the present application may further include: an adjusting module, configured to, according to a layer parameter adjustment instruction sent through the interaction interface of the operation bar for two or more sub-materials, adjust the occlusion relationship between the two or more sub-materials, and display the two or more sub-materials according to the adjusted occlusion relationship and the parameter values of their play parameters.
  • the operation interface may further include: a program file bar, configured to generate the special effect program file of the set of sub-materials according to a preset special effect program file and the parameter values of the play parameters of the set of sub-materials, and to display the special effect program file of the set of sub-materials in the program file bar.
  • the special effect program file may include, for example but is not limited to: a special effect program file in json format.
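A json special effect program file of this kind might look like the following sketch. The field names here are illustrative assumptions only; the document does not specify the actual schema.

```python
import json

# Hypothetical sketch of a json-format special effect program file.
# All field names are illustrative assumptions, not the real schema.
effect_program = {
    "parts": [
        {
            "name": "hand",                       # reference part the effect is bound to
            "sub_materials": ["rainbow_000.png",  # file names also encode play order
                              "rainbow_001.png",
                              "rainbow_002.png"],
            "play_params": {
                "Display": "Yes",
                "TriggerType": "open_mouth",      # action that triggers display
                "TriggerLoop": 0,                 # 0 = loop indefinitely
                "PositionType": "foreground",
            },
        }
    ]
}

# Serialize and parse back, as a rendering engine would when loading the package.
text = json.dumps(effect_program, indent=2)
loaded = json.loads(text)
print(loaded["parts"][0]["play_params"]["TriggerType"])  # open_mouth
```

When a user changes a value through the operation bar, the generating apparatus would rewrite the corresponding entry in such a file rather than requiring the file to be written by hand.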
  • the operation interface may include three areas: left, middle, and right.
  • the operation bar is displayed on the left side of the operation interface;
  • the content display bar is displayed in the middle of the operation interface;
  • the program file bar is displayed on the right side of the operation interface.
  • the generating apparatus of the foregoing embodiments of the present application may further include: a saving module, configured to save the special effect program file package at the location pointed to by a received save instruction.
  • the saving module is configured to: in response to receiving the save instruction, display a save path selection interface and a compression interface; receive a save location sent through the save path selection interface; receive a compression mode sent through the compression interface, and compress the special effect program file package of the sub-materials according to the compression mode to generate a compressed file package; and store the compressed file package in the folder pointed to by the save location.
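The save-and-compress step above can be sketched as follows. This is a minimal illustration using zip compression; the actual apparatus offers a choice of compression modes through its compression interface, and the function and file names here are hypothetical.

```python
import os
import tempfile
import zipfile

def save_effect_package(material_files, save_dir, archive_name="effect_package.zip"):
    """Compress the sub-material files of an effect package into an archive
    stored under the folder pointed to by the save location.
    (Illustrative sketch: zip/deflate stands in for the selectable
    compression mode described in the text.)"""
    os.makedirs(save_dir, exist_ok=True)
    archive_path = os.path.join(save_dir, archive_name)
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in material_files:
            # Store each sub-material under its base name; file contents
            # (and hence each sub-material's original size) are unchanged.
            zf.write(path, arcname=os.path.basename(path))
    return archive_path

# Usage: create two dummy sub-material files and pack them.
tmp = tempfile.mkdtemp()
mats = []
for name in ("frame_000.png", "frame_001.png"):
    p = os.path.join(tmp, name)
    with open(p, "wb") as f:
        f.write(b"\x89PNG fake data")
    mats.append(p)
out = save_effect_package(mats, os.path.join(tmp, "out"))
print(sorted(zipfile.ZipFile(out).namelist()))  # ['frame_000.png', 'frame_001.png']
```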
  • the size of each sub-material in the special effect program file package is kept the same as its size before import.
  • the special effect generating apparatus may be used to implement the foregoing special effect generating method embodiments of the present application, and may be, for example but not limited to, an AR engine or an electronic device having an AR special effect drawing function.
  • FIG. 11 is a schematic structural diagram of an embodiment of a special effect generating apparatus of the present application.
  • the special effect generating apparatus of this embodiment includes: a second obtaining module, a first detecting module, and a second generating module, wherein:
  • the second obtaining module is configured to obtain a parameter value of the playing parameter of the at least one set of the sub-materials in the special effect program file package, where the set of sub-materials includes a plurality of sub-materials.
  • the plurality of sub-materials have a predetermined playback timing.
  • the special effect program file package in each special effect generating apparatus embodiment of the present application may be a special effect program file package generated by any of the foregoing embodiments of the method or apparatus for generating a special effect program file package.
  • the first detecting module is configured to perform key point detection on the video image.
  • a second generating module, configured to generate, on the video image, a special effect based on the at least one set of sub-materials according to the detected key points and the parameter values of the play parameters of the at least one set of sub-materials.
  • the special effect generating apparatus obtains the parameter values of the play parameters of at least one set of sub-materials in the special effect program file package, where a set of sub-materials includes a plurality of sub-materials; performs key point detection on the video image; and generates, on the video image, a special effect based on the at least one set of sub-materials according to the detected key points and the parameter values of the play parameters of the at least one set of sub-materials.
  • in this way, a dynamic special effect is generated on the video by using the parameter values of the play parameters of at least one set of sub-materials in the pre-generated special effect program file package together with the key points in the video image, so that dynamic special effect playback is implemented in the video and the video playback effect is improved.
  • FIG. 12 is a schematic structural diagram of another embodiment of a special effect generating apparatus of the present application.
  • Compared with the foregoing embodiment, the special effect generating apparatus of this embodiment further includes: a second importing module for importing the special effect program file package.
  • the special effect program file package includes at least one set of sub-materials and the parameter values of the play parameters of the at least one set of sub-materials; the parameter values of the play parameters of a set of sub-materials include a correspondence between the display position of the set of sub-materials and at least one predetermined key point.
  • the play parameters include: a trigger action parameter, which is used to indicate the trigger action that triggers the display of the plurality of sub-materials.
  • the special effect generating apparatus of this embodiment may further include: a second detecting module, configured to detect whether the trigger action corresponding to the parameter value of the trigger action parameter occurs in the video image.
  • the second generating module is configured to: in response to detecting, in the video image, the trigger action corresponding to the parameter value of the trigger action parameter, generate a special effect based on the at least one set of sub-materials on the video according to the detected key points and the parameter values of the play parameters of the at least one set of sub-materials.
  • the play parameters further include: a trigger end parameter, which is used to indicate the action that ends the display of the plurality of sub-materials.
  • the second detecting module is further configured to detect whether the trigger action corresponding to the parameter value of the trigger end parameter occurs in the video image.
  • the second generating module is further configured to: in response to detecting, in the video image, the trigger action corresponding to the parameter value of the trigger end parameter, end the generation of the special effect based on the at least one set of sub-materials on the currently playing video.
  • the play parameters further include: a beautification/makeup effect parameter, which is used to indicate the beautification/makeup effect displayed at a preset part when the sub-materials are displayed.
  • the second generating module is further configured to: when generating, on the video image, the special effect based on the at least one set of sub-materials according to the detected key points and the parameter values of the play parameters, display the beautification/makeup effect at the preset part of the video image according to the beautification/makeup effect parameter.
  • the second importing module is configured to: read the special effect program file package into memory by calling a first interface function for reading sticker materials; and parse the special effect program file package to obtain the at least one set of sub-materials and the special effect program file, where the special effect program file includes the parameter values of the play parameters of the at least one set of sub-materials.
  • the special effect program file may include, for example, a special effect program file in json format.
  • the second obtaining module is configured to: create a sticker handle by calling a second interface function for creating a sticker handle; and read the at least one set of sub-materials and the parameter values of the play parameters of the at least one set of sub-materials in the special effect program file package, and store them in the sticker handle.
  • the special effect generating apparatus may further include: a determining module and a third acquiring module.
  • the determining module is configured to determine the play timing of the plurality of sub-materials according to the file names of the plurality of sub-materials.
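Determining the play timing from file names can be sketched as below. The numeric-index convention is an assumption; the document says only that the order is determined by the file names.

```python
import re

def play_order(filenames):
    """Order sub-materials by the numeric index embedded in their file names,
    falling back to plain lexicographic order when no index is found.
    (A sketch of one plausible naming convention; the actual rule is not
    specified beyond 'determined by the file names'.)"""
    def key(name):
        m = re.search(r"(\d+)", name)
        # Numeric sort avoids 'rainbow_10' landing before 'rainbow_2'.
        return (0, int(m.group(1))) if m else (1, name)
    return sorted(filenames, key=key)

frames = ["rainbow_10.png", "rainbow_2.png", "rainbow_1.png"]
print(play_order(frames))  # ['rainbow_1.png', 'rainbow_2.png', 'rainbow_10.png']
```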
  • a third obtaining module, configured to obtain, according to the parameter values of the play parameters of the at least one set of sub-materials in the special effect program file of the sticker handle and the play timing of the plurality of sub-materials, the position at which each sub-material of the at least one set is displayed in the video and the corresponding video frame numbers, and to read the video images corresponding to those frame numbers from the video in advance.
  • the second generating module is configured to: read, by calling a third interface function for rendering sticker materials, the sub-material that needs to be displayed on the current video image of the video from the sticker handle; determine, according to the detected key points and the parameter values of the play parameters, the position at which the sub-material is to be displayed on the current video image; and display the sub-material at that position on the current video image.
  • the second obtaining module is further configured to: in response to the special effect program file package finishing playing, destroy the sticker handle by calling a fourth interface function for destroying the sticker handle.
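The four interface functions (read, create, render, destroy) form a simple handle lifecycle, which might be sketched as follows. All names, signatures, and the rendering logic here are illustrative assumptions; the document names the functions only by their roles.

```python
class StickerHandle:
    """Minimal stand-in for the sticker handle: holds the sub-materials and
    the parameter values of their play parameters read from the package."""
    def __init__(self, sub_materials, play_params):
        self.sub_materials = sub_materials
        self.play_params = play_params
        self.alive = True

def read_sticker_package(package):               # role of the 1st interface function
    return package["sub_materials"], package["play_params"]

def create_sticker_handle(materials, params):    # role of the 2nd interface function
    return StickerHandle(materials, params)

def render_sticker(handle, frame_index, key_points):  # role of the 3rd interface function
    # Pick the sub-material for this frame (file-name order assumed) and
    # anchor it to the detected key point named by the Position parameter.
    material = handle.sub_materials[frame_index % len(handle.sub_materials)]
    anchor = key_points.get(handle.play_params["Position"], (0, 0))
    return material, anchor

def destroy_sticker_handle(handle):              # role of the 4th interface function
    handle.alive = False

package = {"sub_materials": ["a.png", "b.png"],
           "play_params": {"Position": "nose_tip"}}
mats, params = read_sticker_package(package)
h = create_sticker_handle(mats, params)
print(render_sticker(h, 3, {"nose_tip": (120, 88)}))  # ('b.png', (120, 88))
destroy_sticker_handle(h)
```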
  • the first detecting module is configured to perform key point detection on the video image via a neural network, and to output a key point detection result.
  • the key point detection result may include any one or more of the following: the positions, in the video image, of the key points involved in the correspondence; and the preset numbers of the key points involved in the correspondence.
  • another electronic device provided by the embodiment of the present application includes:
  • a memory for storing a computer program
  • a processor, configured to execute the computer program stored in the memory; when the computer program is executed, the method for generating a special effect program file package or the special effect generating method according to any embodiment of the present application is implemented.
  • FIG. 13 is a schematic structural diagram of an application embodiment of an electronic device according to the present application.
  • the electronic device includes one or more processors, a communication part, and the like, the processors being, for example, one or more central processing units (CPUs) and/or one or more graphics processors (GPUs); the processor can perform various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) or executable instructions loaded from a storage part into a random access memory (RAM).
  • the communication part may include, but is not limited to, a network card, which may include, but is not limited to, an IB (InfiniBand) network card; the processor may communicate with the read-only memory and/or the random access memory to execute the executable instructions, connect to the communication part through a bus, and communicate with other target devices via the communication part, so as to complete operations corresponding to any method provided by the embodiments of the present application, for example: importing a set of sub-materials, the set of sub-materials including a plurality of sub-materials; obtaining parameter values of play parameters of the set of sub-materials; and generating a special effect program file package according to the set of sub-materials and the parameter values of the play parameters.
  • the CPU, ROM, and RAM are connected to each other through a bus.
  • the ROM is an optional module.
  • the RAM stores executable instructions, or executable instructions are written into the ROM at runtime; the executable instructions cause the processor to perform operations corresponding to the methods described in any embodiment of the present application.
  • An input/output (I/O) interface is also connected to the bus.
  • the communication part can be integrated, or can be set up with multiple sub-modules (e.g., multiple IB network cards) linked on the bus.
  • the following components are connected to the I/O interface: an input part including a keyboard, a mouse, and the like; an output part including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage part including a hard disk and the like; and a communication part including a network interface card such as a LAN card, a modem, and the like.
  • the communication section performs communication processing via a network such as the Internet.
  • the drive is also connected to the I/O interface as needed.
  • a removable medium, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive as needed, so that a computer program read from it is installed into the storage part as needed.
  • FIG. 13 is only an optional implementation manner.
  • the number and type of the components in FIG. 13 may be selected, deleted, added, or replaced according to actual needs;
  • Different functional components can also be implemented separately or in an integrated manner; for example, the GPU and the CPU can be set separately, or the GPU can be integrated on the CPU; the communication part can be set separately, or integrated on the CPU or the GPU; and so on.
  • an embodiment of the present application includes a computer program product comprising a computer program tangibly embodied on a machine-readable medium; the computer program includes program code for executing the method illustrated in the flowchart, the program code including instructions corresponding to the steps of the method provided by the embodiments of the present application.
  • the computer program can be downloaded and installed from the network via a communication portion, and/or installed from a removable medium.
  • the embodiments of the present application further provide a computer program, including computer instructions, which, when run in a processor of a device, implement the method for generating a special effect program file package or the special effect generating method according to any embodiment of the present application.
  • the embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the method for generating a special effect program file package or the special effect generating method described in any of the foregoing embodiments of the present application is implemented.
  • the methods and apparatus of the present application may be implemented in a number of ways.
  • the methods and apparatus of the present application can be implemented in software, hardware, firmware, or any combination of software, hardware, and firmware.
  • the above-described order of steps for the methods is for illustrative purposes only; the steps of the methods of the present application are not limited to that order unless otherwise specifically stated.
  • the present application can also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present application.
  • the present application also covers a recording medium storing a program for executing the method according to the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)
  • Stored Programmes (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application disclose a method and apparatus for generating a special effect program file package, a special effect generating method and apparatus, and an electronic device. The method for generating a special effect program file package includes: importing a set of sub-materials, the set of sub-materials including a plurality of sub-materials; obtaining parameter values of play parameters of the set of sub-materials; and generating a special effect program file package according to the set of sub-materials and the parameter values of the play parameters. The embodiments of the present application can generate a dynamic special effect program file executable by a rendering engine without manually writing program files; the operation is simple, the time required is short, the overall efficiency of special effect implementation is improved, and the accuracy of dynamic special effects is effectively guaranteed.

Description

Generation of special effect program file package, special effect generating method and apparatus, and electronic device
This application claims priority to Chinese patent application No. CN201810129969.7, filed with the China Patent Office on February 8, 2018 and entitled "Generation of special effect program file package, special effect generating method and apparatus, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to computer vision technology, and in particular, to a method and apparatus for generating a special effect program file package, a special effect generating method and apparatus, and an electronic device.
Background
Augmented Reality (AR) is a new technology that "seamlessly" integrates real-world information and virtual-world information: entity information that originally belongs to a certain time and space range of the real world is simulated and then overlaid with virtual information, applying the virtual information to the real world, so that real-world characters and environments and virtual objects are superimposed in real time onto the same picture or space and coexist, thereby achieving a sensory experience beyond reality.
Summary
The embodiments of the present application provide a technical solution for generating a special effect program file package and a technical solution for generating a special effect.
According to one aspect of the embodiments of the present application, a method for generating a special effect program file package is provided, including:
importing a set of sub-materials, the set of sub-materials including a plurality of sub-materials;
obtaining parameter values of play parameters of the set of sub-materials;
generating a special effect program file package according to the set of sub-materials and the parameter values of the play parameters.
Optionally, in the method for generating a special effect program file package of another embodiment, the plurality of sub-materials have a predetermined play timing.
Optionally, in the method for generating a special effect program file package of another embodiment, the play timing of the plurality of sub-materials is determined based on the file names of the plurality of sub-materials.
Optionally, in the method for generating a special effect program file package of another embodiment,
According to another aspect of the embodiments of the present application, a special effect generating method is provided, including:
obtaining parameter values of play parameters of at least one set of sub-materials in a special effect program file package, where a set of sub-materials includes a plurality of sub-materials;
performing key point detection on a video image;
generating, on the video image, a special effect based on the at least one set of sub-materials according to the detected key points and the parameter values of the play parameters of the at least one set of sub-materials.
According to yet another aspect of the embodiments of the present application, an apparatus for generating a special effect program file package is provided, including:
a first importing module, configured to import a set of sub-materials, the set of sub-materials including a plurality of sub-materials;
a first obtaining module, configured to obtain parameter values of play parameters of the set of sub-materials;
a first generating module, configured to generate a special effect program file package according to the set of sub-materials and the parameter values of the play parameters.
According to yet another aspect of the embodiments of the present application, a special effect generating apparatus is provided, including:
a second obtaining module, configured to obtain parameter values of play parameters of at least one set of sub-materials in a special effect program file package, where a set of sub-materials includes a plurality of sub-materials;
a first detecting module, configured to perform key point detection on a video image;
a second generating module, configured to generate, on the video image, a special effect based on the at least one set of sub-materials according to the detected key points and the parameter values of the play parameters of the at least one set of sub-materials.
According to still another aspect of the embodiments of the present application, an electronic device is provided, including:
a memory for storing a computer program;
a processor, configured to execute the computer program stored in the memory; when the computer program is executed, the method according to any embodiment of the present application is implemented.
According to still another aspect of the embodiments of the present application, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the method according to any embodiment of the present application is implemented.
According to still another aspect of the embodiments of the present application, a computer program is provided, including computer instructions; when the computer instructions are run in a processor of a device, the method according to any embodiment of the present application is implemented.
According to the method and apparatus for generating a special effect program file package, the electronic device, the program, and the medium provided by the above embodiments of the present application, when generating a special effect program file package, a set of sub-materials is imported, the set including a plurality of sub-materials; parameter values of play parameters of the set of sub-materials are obtained; and a special effect program file package is generated according to the set of sub-materials and the parameter values of the play parameters, so that dynamic special effect processing can be performed on a video based on the special effect program file package and dynamic special effects can be realized on the played video. The embodiments of the present application can generate a special effect program file executable by a rendering engine without manually writing program files; the operation is simple, the time required is short, the overall efficiency of dynamic special effect implementation is improved, and the accuracy of the special effect is effectively guaranteed.
According to the special effect generating method and apparatus, the electronic device, the program, and the medium provided by the above embodiments of the present application, parameter values of play parameters of at least one set of sub-materials in a special effect program file package are obtained, where a set of sub-materials includes a plurality of sub-materials; key point detection is performed on a video image; and a special effect based on the at least one set of sub-materials is generated on the video image according to the detected key points and the parameter values of the play parameters of the at least one set of sub-materials. The embodiments of the present application generate dynamic special effects on a video by using the parameter values of the play parameters of at least one set of sub-materials in a pre-generated special effect program file package and the key points in the video image, thereby implementing dynamic special effect playback in the video and improving the video playback effect.
The technical solutions of the present application are further described in detail below with reference to the accompanying drawings and embodiments.
Brief Description of the Drawings
The accompanying drawings, which constitute a part of the specification, describe the embodiments of the present application and, together with the description, serve to explain the principles of the present application.
With reference to the accompanying drawings, the present application can be understood more clearly from the following detailed description, in which:
FIG. 1 is a flowchart of an embodiment of the method for generating a special effect program file package of the present application.
FIG. 2 is an example view of an operation interface of the apparatus for generating a special effect program file package in the embodiments of the present application.
FIG. 3 is an exemplary schematic diagram of the play parameter setting interface for sub-materials when the reference part is a hand in the embodiments of the present application.
FIG. 4 is an exemplary schematic diagram of hand actions in the embodiments of the present application.
FIG. 5 is an exemplary schematic diagram of face key points in the embodiments of the present application.
FIG. 6 is a flowchart of another embodiment of the method for generating a special effect program file package of the present application.
FIG. 7 is a flowchart of an embodiment of the special effect generating method of the present application.
FIG. 8 is a flowchart of another embodiment of the special effect generating method of the present application.
FIG. 9 is a schematic structural diagram of an embodiment of the apparatus for generating a special effect program file package of the present application.
FIG. 10 is a schematic structural diagram of another embodiment of the apparatus for generating a special effect program file package of the present application.
FIG. 11 is a schematic structural diagram of an embodiment of the special effect generating apparatus of the present application.
FIG. 12 is a schematic structural diagram of another embodiment of the special effect generating apparatus of the present application.
FIG. 13 is a schematic structural diagram of an application embodiment of the electronic device of the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise specified, the relative arrangement of components and steps, numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application.
It should also be understood that, in the embodiments of the present application, "a plurality of" may refer to two or more, and "at least one" may refer to one, two, or more.
Those skilled in the art can understand that terms such as "first" and "second" in the embodiments of the present application are only used to distinguish different steps, devices, or modules, and represent neither any particular technical meaning nor a necessary logical order between them.
It should also be understood that any component, data, or structure mentioned in the embodiments of the present application can generally be understood as one or more, unless explicitly defined otherwise or the context suggests otherwise.
It should also be understood that the description of the embodiments of the present application emphasizes the differences between the embodiments; for the same or similar aspects, reference may be made to one another, and for brevity they are not described repeatedly.
Meanwhile, it should be understood that, for ease of description, the dimensions of the parts shown in the drawings are not drawn according to actual proportional relationships.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the present application or its application or use.
Techniques, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be regarded as part of the specification.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further discussed in subsequent drawings.
In addition, the term "and/or" in the present disclosure merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate three cases: A alone, both A and B, and B alone. In addition, the character "/" in the present application generally indicates an "or" relationship between the associated objects before and after it.
The embodiments of the present application may be applied to electronic devices such as terminal devices, computer systems, and servers, which may operate together with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, distributed cloud computing environments including any of the above systems, and the like.
Electronic devices such as terminal devices, computer systems, and servers may be described in the general context of computer-system-executable instructions (such as program modules) executed by a computer system. Generally, program modules may include routines, programs, object programs, components, logic, data structures, and so on, which perform particular tasks or implement particular abstract data types. The computer system/server may be implemented in a distributed cloud computing environment in which tasks are performed by remote processing devices linked through a communication network. In the distributed cloud computing environment, program modules may be located on local or remote computing system storage media including storage devices.
FIG. 1 is a flowchart of an embodiment of the method for generating a special effect program file package of the present application. The method for generating a special effect program file package in each embodiment of the present application may be implemented, for example but not limited to, by an apparatus (referred to in the embodiments of the present application as the apparatus for generating a special effect program file package). As shown in FIG. 1, the method for generating a special effect program file package of this embodiment includes:
102: import a set of sub-materials.
In one implementation of the embodiments of the present application, the plurality of sub-materials in a set of sub-materials have a predetermined play timing. Exemplarily, the play timing of the plurality of sub-materials in a set may be determined based on the file names of the plurality of sub-materials.
In an optional example, operation 102 may be performed by a processor invoking corresponding instructions stored in a memory, or by a first importing module run by the processor.
104: obtain parameter values of play parameters of the set of sub-materials.
In an optional example, operation 104 may be performed by a processor invoking corresponding instructions stored in a memory, or by a first obtaining module run by the processor.
106: generate a special effect program file package according to the set of sub-materials and the parameter values of their play parameters.
In an optional example, operation 106 may be performed by a processor invoking corresponding instructions stored in a memory, or by a first generating module run by the processor.
In the embodiments of the present application, one set of sub-materials may be imported, or multiple sets of sub-materials may be imported. When multiple sets of sub-materials are imported, operations 102-104 may be performed separately for each set, and operation 106 then performed for the multiple sets, so that one special effect program file package is generated from the multiple sets of sub-materials and the parameter values of their play parameters; that is, a special effect program file package may include one set of sub-materials or multiple sets of sub-materials.
In the embodiments of the present application, the special effect program file package may be used for special effect processing of a video to generate a dynamic special effect of a set of sub-materials on the video, for example, rendering AR effects on the video through an AR engine or an electronic device having an AR drawing function.
According to the method for generating a special effect program file package provided by the above embodiment of the present application, when generating a special effect program file package, a set of sub-materials is imported, the set including a plurality of sub-materials; parameter values of play parameters of the set of sub-materials are obtained; and a special effect program file package is generated according to the set of sub-materials and the parameter values of the play parameters, so that dynamic special effect processing can be performed on a video based on the special effect program file package and dynamic special effects can be realized on the played video. The embodiments of the present application can generate a special effect program file executable by a rendering engine without manually writing program files; the operation is simple, the time required is short, the overall efficiency of dynamic special effect implementation is improved, and the accuracy of the special effect is effectively guaranteed.
In one implementation of the embodiments of the present application, the apparatus for generating a special effect program file package may include a preset special effect program file, which may be, for example, a file in the lightweight JavaScript-based data interchange format (JavaScript Object Notation, json) or any other executable program file. The parameter values of the play parameters in this special effect program file may be left blank or preset to default values; when parameter values set for the play parameters of a set of sub-materials are received, the corresponding parameter values in the special effect program file are automatically updated to the received values. Optionally, the apparatus for generating a special effect program file package may include an operation bar provided with at least one interaction interface for receiving parameter values set for the play parameters of a set of sub-materials; in addition, the apparatus may further include a program file display bar for displaying the program file of the play parameters of a set of sub-materials. As shown in FIG. 2, which is an example view of an operation interface of the apparatus for generating a special effect program file package in the embodiments of the present application, the operation interface of the apparatus includes the operation bar and the program file display bar. After the apparatus is started, corresponding to the play parameter setting interface for a set of sub-materials in the operation bar, the program file display bar displays the special effect program file with the play parameters of the set of sub-materials blank or preset to default values; when parameter values set for the play parameters of the set of sub-materials are received through the interaction interface of the operation bar, the parameter values of the play parameters of the set of sub-materials are updated to the most recently received values, and the program file display bar displays the updated special effect program file in real time.
In one implementation of the embodiments of the method for generating a special effect program file package of the present application, operation 102 may include: receiving an import instruction sent through the interaction interface of the operation bar, and importing a plurality of sub-materials in the material folder pointed to by the import instruction as the set of sub-materials.
As shown in FIG. 2, as an optional example rather than a limitation of the embodiments of the present application, the operation bar may include a play parameter setting interface containing at least one interaction interface; it may also include other areas, for example a reference part display area, in which case the play parameter setting interface may be the play parameter setting interface under each reference part. The reference parts in the embodiments of the present application may include, for example but not limited to, any one or more of the following: ear, hand, face, hair, neck, and limb. FIG. 3 is an exemplary schematic diagram of the play parameter setting interface for a set of sub-materials when the reference part is a hand in the embodiments of the present application.
In an optional example of the above implementation, receiving the import instruction input through the interaction interface of the operation bar and importing the plurality of sub-materials in the material folder pointed to by the import instruction may include: receiving an import instruction sent through the interaction interface in the play parameter setting interface under the operation bar, and importing the plurality of sub-materials in the material folder pointed to by the import instruction.
Alternatively, in another optional example of the above implementation, receiving the import instruction input through the interaction interface of the operation bar and importing the plurality of sub-materials in the material folder pointed to by the import instruction may include: receiving a selection instruction sent through the interaction interface of the operation bar, taking the reference part selected by the selection instruction as the target part to which a special effect currently needs to be added, and displaying the play parameter setting interface under the target part in the operation bar; and receiving an import instruction sent through the interaction interface in the play parameter setting interface, and importing the plurality of sub-materials in the material folder pointed to by the import instruction.
In yet another optional example of the above implementation, receiving the import instruction input through the interaction interface of the operation bar and importing the plurality of sub-materials in the material folder pointed to by the import instruction may include:
receiving an import instruction sent through the interaction interface, and obtaining and displaying the material folder pointed to by the import instruction;
in response to receiving a selection operation on sub-materials in the material folder, importing the plurality of sub-materials selected by the selection operation; and/or, in response to not receiving a selection operation on sub-materials in the material folder, selecting all or some of the sub-materials in the material folder according to a preset setting, and importing the sub-materials so selected.
Each material folder may include a plurality of sub-materials; for example, when the target part is the ear, the material folder may include multiple sub-materials such as earrings and earmuffs of different shapes and colors. In one implementation of the embodiments of the present application, when importing multiple sub-materials, if no user selection operation on the sub-materials in the material folder is received, it may be preset to import the sub-materials at preset positions or with preset sequence numbers in the material folder pointed to by the import instruction. For example, when the user does not select sub-materials, the first to fifth sub-materials in the material folder are selected and imported by default.
In one implementation of the embodiments of the present application, obtaining the parameter values of the play parameters of the plurality of sub-materials in operation 104 may include: in response to receiving, through the interaction interface in the play parameter setting interface, parameter values set for the play parameters of the set of sub-materials, using the set values as the parameter values of the play parameters of the set of sub-materials; and/or, in response to not receiving, through the interaction interface in the play parameter setting interface, parameter values set for the play parameters of the set of sub-materials, using preset parameter values as the parameter values of the play parameters of the set of sub-materials.
The embodiments of the present application do not need to generate the rendering-engine-executable file by manually writing program files; the special effect program package can be generated based on the user's selection of a set of sub-materials in the operation bar and the setting of parameter values. The operation is simple, the time required is short, the overall efficiency of dynamic special effect implementation is improved, and the accuracy of the special effect is effectively guaranteed.
In one implementation of the embodiments of the present application, once set, the parameter values of the play parameters of a set of sub-materials may be applied to all sub-materials in the set; that is, all sub-materials in a set share the same parameter values of the play parameters. The play parameters of a set of sub-materials may include, for example but not limited to, any one or more of the following:
1. Display parameter (Display): indicates whether to display the plurality of sub-materials. Its value has two options, "Yes" and "No": selecting "Yes" means the plurality of sub-materials are to be displayed during video playback; selecting "No" means they are not displayed during video playback.
2. Interval parameter: indicates the number of frames between the display of two adjacent sub-materials among the plurality of sub-materials.
3. Trigger action parameter (TriggerType): indicates the trigger action that triggers the display of the plurality of sub-materials, i.e., by what action the display of the sub-materials is triggered. Its value may include any of the trigger actions, and the user may select at least one action from a preset action set as the trigger action. That is, during video playback, detecting the corresponding trigger action triggers the display of the plurality of sub-materials; for example, when the trigger action "open mouth" specified by the trigger action parameter appears in the video, playback of the rainbow-spitting animation formed by the plurality of sub-materials starts. The start display time, end display time, display duration, and so on of the sub-materials may be determined according to the parameter values of other parameters, for example according to the parameter values of the delayed trigger parameter, the trigger end parameter, and the loop parameter, respectively.
4. Loop parameter (TriggerLoop): indicates the number of times the plurality of sub-materials are played in a loop; a specific number of loops, e.g., 1 or 5, may be set or selected as its value, and it may be agreed that a value of 0 means looping indefinitely.
5. Delayed trigger parameter (TriggerDelay): indicates the time by which the display of the plurality of sub-materials is delayed, i.e., when the trigger action of the trigger action parameter is detected at a certain frame of the video, how many frames later the display of the plurality of sub-materials starts; the delay time may be set or selected as its value.
6. Trigger end parameter (TriggerStop): indicates the action that ends the display of the plurality of sub-materials, i.e., by what action the display is ended. Its value includes the trigger actions, and the user may select at least one action from the preset action set as the action that ends the display. That is, during video playback, detecting the trigger action specified by the trigger end parameter ends the display/playback of the plurality of sub-materials; for example, for the rainbow-spitting animation formed by the plurality of sub-materials that starts playing when the trigger action "open mouth" is detected in the video, the trigger end parameter may be set to "close mouth", so that the rainbow disappears when the "close mouth" action is detected in the video.
7. Display size parameter (Scale): indicates the reference basis for the change of the display size of the plurality of sub-materials, used to achieve the display effect of the sub-materials appearing larger when near and smaller when far. Its value (i.e., the reference basis for the display size change) may be two or more of the preset key points (denoted PointA and PointB); the display size of the sub-materials then changes in proportion to the size formed by the two or more reference key points in the video. For example, when the plurality of sub-materials are glasses and the selected parameter values of the display size parameter are the key point of the left pupil center and the key point of the right pupil center, the display size of the sub-materials during video playback changes with the length of the line between these two key points. If the parameter value of the display size parameter is not changed, its default value may be two key points on the reference part corresponding to the plurality of sub-materials.
8. Position type parameter (PositionType): indicates the type of relationship between the plurality of sub-materials and position.
9. Position relation parameter (PositionRelationType): indicates whether the plurality of sub-materials move following a preset reference part, i.e., whether the sub-materials move with the position of the reference part. It may include the two options "Yes (Move With Position)" and "No": when "Yes" is selected, the sub-materials move with the position of the reference part; if the parameter value of the position type parameter is foreground and "No" is selected, the sub-materials do not move with the position of the reference part.
10. Position parameter (Position): indicates the positional binding relationship between the plurality of sub-materials and preset key points, i.e., the positional relationship between the sub-materials and the preset key points during video playback; which of the preset key points the positions of the sub-materials are bound to may be selected.
11. Rotation parameter (RotateCenter): indicates the key point on which the rotation of the plurality of sub-materials is based; the key point around which the sub-materials rotate during video playback may be selected.
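The interval, loop, and delayed-trigger parameters above jointly determine which sub-material, if any, is shown at each frame after the trigger action is detected. A minimal sketch of one plausible interpretation follows; the engine's exact scheduling rule is not specified in the text.

```python
def sub_material_at(frame, n_materials, delay=0, interval=1, loop=1):
    """Index of the sub-material to display `frame` frames after the trigger
    action was detected, given the delay / interval / loop play parameters.
    Returns None before the delay elapses or after all loops finish;
    loop=0 means loop forever. Illustrative interpretation only."""
    t = frame - delay
    if t < 0:
        return None                      # still within the trigger delay
    step = t // interval                 # one sub-material every `interval` frames
    if loop != 0 and step >= n_materials * loop:
        return None                      # finished all loops
    return step % n_materials

# 3 sub-materials, 2-frame delay, one sub-material every 2 frames, played twice:
seq = [sub_material_at(f, 3, delay=2, interval=2, loop=2) for f in range(16)]
print(seq)  # [None, None, 0, 0, 1, 1, 2, 2, 0, 0, 1, 1, 2, 2, None, None]
```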
In one optional example, the trigger actions corresponding to the trigger action parameter include any one or more of the following:
no-action trigger (NULL), i.e., the plurality of sub-materials are displayed without requiring any action;
eye actions, for example, blinking, closing the eyes, opening the eyes, etc.;
head actions, for example, shaking the head, nodding, tilting the head, turning the head, etc.;
eyebrow actions, for example, raising the eyebrows, etc.;
hand actions, for example, heart hand, supporting hand, palm, thumbs-up, fist-in-palm congratulation, one-hand heart, OK hand, scissors hand, pistol hand, index finger, etc.;
mouth actions, for example, opening the mouth, closing the mouth, etc.;
shoulder actions, for example, shrugging, etc.;
other actions.
FIG. 4 is an exemplary schematic diagram of hand actions in the embodiments of the present application.
In one optional example, the position type parameter includes, for example, any one of the following:
a parameter indicating foreground (Foreground): the corresponding plurality of sub-materials are displayed as foreground during video playback; during playback the sub-materials are associated with the screen position of the display of the playback terminal, and the position of their center point on the screen remains unchanged;
a parameter indicating that the plurality of sub-materials are positioned and/or moved following the position of the face: the reference part corresponding to the sub-materials is the face, and during video playback the sub-materials are positioned and/or moved following the position of the face;
a parameter indicating that the plurality of sub-materials are positioned and/or moved following the position of the hand: the reference part corresponding to the sub-materials is a gesture (i.e., the hand), and during video playback the sub-materials are positioned and/or moved following the position of the hand;
a parameter indicating that the plurality of sub-materials are positioned and/or moved following the position of the head: indicates that during video playback the sub-materials are positioned and/or moved following the position of the head;
a parameter indicating that the plurality of sub-materials are positioned and/or moved following the position of the shoulder: indicates that during video playback the sub-materials are moved following the position of the shoulder;
a parameter indicating that the plurality of sub-materials are positioned and/or moved following the position of the arm: indicates that during video playback the sub-materials are positioned and/or moved following the position of the arm;
a parameter indicating that the plurality of sub-materials are positioned and/or moved following the position of the waist: indicates that during video playback the sub-materials are positioned and/or moved following the position of the waist;
a parameter indicating that the plurality of sub-materials are positioned and/or moved following the position of the leg: indicates that during video playback the sub-materials are positioned and/or moved following the position of the leg;
a parameter indicating that the plurality of sub-materials are positioned and/or moved following the position of the foot: indicates that during video playback the sub-materials are positioned and/or moved following the position of the foot;
a parameter indicating that the plurality of sub-materials are positioned and/or moved following the position of the human skeleton: indicates that during video playback the sub-materials are positioned and/or moved following the position of the human skeleton;
play position relationships related to the reference part, which may include, for example, any one or more of the following: the sub-materials move following the position of the reference part (Position) and scale with the size of the reference part (Size); the sub-materials move following the position of the reference part (Position), scale with the size of the reference part (Size), and depth-scale following the rotation of the reference part (Depth); the sub-materials move following the position of the reference part (Position), scale with the size of the reference part (Size), depth-scale following the rotation of the reference part (Depth), and rotate following the in-plane rotation of the reference part (Rotation);
a parameter indicating background (Background): the corresponding plurality of sub-materials are displayed as background during video playback; during playback the sub-materials are associated with the screen position of the display of the playback terminal, and their size is adjusted so that their four vertex coordinates coincide with the four vertices of the display screen.
在本申请各实施例的一个实施方式中,上述播放参数中还包括:上述一组子素材的显示位置和预定的至少一关键点之间的对应关系。
其中,上述关键点例如可以包括但不限于以下任意一种或多种:头部关键点,脸部关键点,肩部关键点,手臂关键点,手势关键点,腰部关键点,腿部关键点,脚部关键点,人体骨骼关键点,等等。
在其中一个可选示例中,头部关键点例如可以包括但不限于以下任意一项或多项:头顶关键点,鼻尖关键点,以及下巴关键点,等等。
在其中一个可选示例中,脸部关键点例如可以包括但不限于以下任意一项或多项:脸部轮廓关键点,眼睛关键点,眉毛关键点,鼻子关键点,嘴部关键点,等等。
示例性地,眼睛关键点例如可以包括但不限于以下任意一项或多项:左眼眶关键点,左眼瞳孔中心关键点,左眼中心关键点,右眼眶关键点,右眼瞳孔中心关键点,以及右眼中心关键点,等等。眉毛关键点例如可以包括但不限于以下任意一项或多项:左眉毛关键点以及右眉毛关键点,等等。鼻子关键点例如可以包括但不限于以下任意一项或多项:鼻梁关键点,鼻子下沿关键点,以及鼻子外侧轮廓关键点,等等。嘴部关键点例如可以包括但不限于以下任意一项或多项:上嘴唇关键点,以及下嘴唇关键点,等等。
在其中一个可选示例中,肩部关键点例如可以包括但不限于以下任意一项或多项:位于肩部与头部交汇位置处的肩头交汇关键点,以及位于臂根轮廓关键点与肩头交汇关键点之间的中点位置处的肩轮廓中点关键点,等等。
在其中一个可选示例中,手臂关键点例如可以包括但不限于以下任意一项或多项:手腕轮廓关键点,胳膊肘轮廓关键点,臂根轮廓关键点,位于手腕轮廓关键点与胳膊肘轮廓关键点之间的中点位置处的小臂轮廓中点关键点,以及位于胳膊肘轮廓关键点与臂根轮廓关键点之间的中点位置处的大臂中点关键点,等等。
在其中一个可选示例中,手势关键点例如可以包括但不限于以下任意一项或多项:手势框(即:手势检测框)的四个顶点关键点,以及手势框的中心关键点,等等。
在其中一个可选示例中,腿部关键点例如可以包括但不限于以下任意一项或多项:裆部关键点,膝盖轮廓关键点,脚踝轮廓关键点,大腿根部外侧轮廓关键点,位于膝盖轮廓关键点与脚踝轮廓关键点之间的中点位置处的小腿轮廓中点关键点,位于膝盖内轮廓关键点与裆部关键点之间的中点位置处的大腿内轮廓中点关键点,以及位于膝盖外轮廓关键点与大腿根部外侧轮廓关键点之间的中点位置处的大腿外轮廓中点关键点,等等。
在其中一个可选示例中,腰部关键点例如可以包括但不限于以下任意一项或多项:将大腿根部外侧轮廓关键点与臂根轮廓关键点之间N等分,所产生的N个等分点;其中,N大于1。
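按上文表述,腰部关键点由大腿根部外侧轮廓关键点与臂根轮廓关键点之间的 N 等分点给出。下面用一个简化的 Python 函数示意一种可能的取法(将线段按 N+1 等分取其中 N 个内部点;具体取法以实际实现为准,函数名为假设):

```python
def equal_division_points(p_start, p_end, n):
    """在两个关键点之间生成 n 个等分点(n > 1)。

    一种可能的取法:将线段按 n+1 等分,取 n 个内部分割点(示意)。
    """
    assert n > 1
    (x0, y0), (x1, y1) = p_start, p_end
    return [(x0 + (x1 - x0) * i / (n + 1), y0 + (y1 - y0) * i / (n + 1))
            for i in range(1, n + 1)]

# 例:在 (0,0) 与 (0,10) 之间取 4 个等分点
pts = equal_division_points((0, 0), (0, 10), 4)
```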
在其中一个可选示例中,脚部关键点例如可以包括但不限于以下任意一项或多项:脚尖关键点以及足跟关键点,等等。
在其中一个可选示例中,人体骨骼关键点例如可以包括但不限于以下任意一项或多项:右肩骨骼关键点,右肘骨骼关键点,右腕骨骼关键点,左肩骨骼关键点,左肘骨骼关键点,左腕骨骼关键点,右髋骨骼关键点,右膝骨骼关键点,右踝骨骼关键点,左髋骨骼关键点,左膝骨骼关键点,左踝骨骼关键点,头顶骨骼关键点,以及脖子骨骼关键点,等等。
在本申请各实施例中,可以预先设置多个关键点的位置,以便将上述多个子素材的显示位置和关键点进行位置关系对应。设置上述多个子素材的播放参数的参数值时,可以直接从预先设置的关键点集合中选取关键点作为相应播放参数中的参数值。
例如,在其中一个实施方式中,可以基于人脸检测和手势检测,分别针对脸部和手势(手部)定义多个关键点,以实现特效生成中,基于人脸关键点或者手势关键点进行位置关系的对应。
例如,图5为本申请实施例中脸部关键点的一个示例性示意图,结合图5,在一个可选示例中,可以对脸部关键点进行如下定义:
关键点项目 关键点编号 关键点项目 关键点编号
脸框(脸部轮廓关键点) 0-32 鼻梁 43-46
左眉毛 33-37,64-67 右眉毛 38-42,68-71
左眼眶 52-57,72-73 右眼眶 58-63,75-76
左眼瞳孔 74,104 右眼瞳孔 77,105
鼻子下沿 47-51 鼻子外侧轮廓 78-83
上嘴唇 84-90,96-100 下嘴唇 91-95,101-103
在一个可选示例中,可以对手部关键点进行如下定义:
关键点项目 关键点编号 关键点项目 关键点编号
手势框 110-113 中心 114
其中,编号110-113的关键点分别为手势检测框(即手部的外接框)的四个顶点,编号114的关键点为手势检测框的中心。
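上述脸部与手部关键点编号表可以整理为如下 Python 映射(编号直接取自本文表格,数据结构与变量名仅为示意):

```python
# 脸部关键点分组与编号(取自本文表格)
FACE_KEYPOINTS = {
    "脸部轮廓": list(range(0, 33)),                          # 0-32
    "左眉毛": list(range(33, 38)) + list(range(64, 68)),     # 33-37, 64-67
    "右眉毛": list(range(38, 43)) + list(range(68, 72)),     # 38-42, 68-71
    "鼻梁": list(range(43, 47)),                             # 43-46
    "鼻子下沿": list(range(47, 52)),                         # 47-51
    "左眼眶": list(range(52, 58)) + [72, 73],                # 52-57, 72-73
    "右眼眶": list(range(58, 64)) + [75, 76],                # 58-63, 75-76
    "左眼瞳孔": [74, 104],
    "右眼瞳孔": [77, 105],
    "鼻子外侧轮廓": list(range(78, 84)),                     # 78-83
    "上嘴唇": list(range(84, 91)) + list(range(96, 101)),    # 84-90, 96-100
    "下嘴唇": list(range(91, 96)) + list(range(101, 104)),   # 91-95, 101-103
}

# 手部关键点:手势检测框四个顶点与中心
HAND_KEYPOINTS = {"手势框": [110, 111, 112, 113], "中心": [114]}
```

设置播放参数的参数值时,即可从此类预先设置的关键点集合中按编号选取关键点进行位置绑定。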
另外,在本申请上述各实施例中,还可以包括:建立上述多个子素材的显示位置和预定的至少一关键点之间的对应关系;和/或,建立上述多个子素材的显示位置和检测框的中心关键点之间的对应关系。
例如,在本申请上述实施例建立的对应关系中关键点为头部关键点、脸部关键点、肩部关键点、手臂关键点、腰部关键点、腿部关键点、脚部关键点、人体骨骼关键点时,可以建立上述多个子素材的显示位置和上述关键点中至少一关键点之间的对应关系;在本申请上述实施例建立的对应关系中关键点为头部关键点、脸部关键点、手势关键点、人体骨骼关键点时,建立上述多个子素材的显示位置和对应的检测框(例如,头部检测框、人脸检测框、手势检测框、人体检测框)的中心关键点之间的对应关系。
另外,再参见图2,本申请实施例的特效程序文件包的生成装置还可以包括内容显示栏。相应地,在本申请特效程序文件包的生成方法另一个实施例中,还可以包括:通过内容显示栏显示参考图像,并显示所述参考图像上的关键点。其中,该参考图像包括至少一个参考部位。该参考部位例如可以包括以下任意一项或多项:耳朵,手,脸,头发,脖子,肩膀等。
示例性地,上述参考图像例如可以是:参考人物的至少一部分图像,例如参考人物的以下任意一项或多项的图像:完整图像,头部图像,脸部图像,肩部图像,手臂图像,手势图像,腰部图像,腿部图像,脚部图像,等等。
另外,在本申请特效程序文件包的生成方法又一个实施例中,通过操作102导入上述多个子素材之后,还可以包括:根据上述一组子素材的播放参数的参数值,按照预设显示策略在上述内容显示栏依次显示导入的上述一组子素材中的每个子素材、或者同时显示导入的上述一组子素材中的多个子素材;或者,接收到对上述一组子素材中子素材的选取操作,在上述内容显示栏显示上述选取操作选取的子素材。
在内容显示栏显示导入的上述一组子素材后,用户可以更改该显示的上述一组子素材或其中一个子素材在内容显示栏中的显示位置或者调整其显示大小。由此,在进一步可选实施例中,还可以包括:根据通过上述内容显示栏接收到的对上述一组子素材或其中一个子素材的位置移动操作,更新上述一组子素材在上述内容显示栏的显示位置,并对上述一组子素材的播放参数中的相应参数值进行更新。和/或,还可以包括:根据通过上述内容显示栏接收到的对上述一组子素材或其中一个子素材的大小调整操作,更新上述一组子素材在上述内容显示栏的显示大小,并对上述一组子素材的播放参数中的相应参数值进行更新。
例如,用户可以通过鼠标选中内容显示栏中显示的上述一组子素材或其中一个子素材,将鼠标移动至选中的子素材右下角的小框处,通过移动该小框缩放选中的子素材,从而调整该选中的子素材的显示大小;用户可以通过鼠标选中内容显示栏中显示的一组子素材或其中一个子素材并直接移动其位置,将选中的子素材移动至正确或者想要的位置。在后续上述多个子素材的特效程序文件包的播放中,该上述多个子素材在播放终端上的位置、显示比例将会与在该内容显示栏中的位置、显示比例一致。基于本申请上述任一实施例,用户可以针对多个参考部位添加特效,例如,可以分别以耳朵、脸、手作为当前需要添加特效的目标部位,执行上述任一实施例,实现对耳朵、脸、手部位的一组多个子素材的特效效果。
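内容显示栏中“移动/缩放操作同步更新播放参数参数值”的联动逻辑,可以用如下 Python 片段粗略示意(字段名 position、size 均为举例假设,并非本申请规定的参数名):

```python
def apply_move(params, dx, dy):
    """位置移动操作:更新子素材的显示位置,并同步写回播放参数(示意)。"""
    x, y = params["position"]
    params["position"] = (x + dx, y + dy)
    return params

def apply_resize(params, scale):
    """大小调整操作:按比例更新显示大小,并同步写回播放参数(示意)。"""
    w, h = params["size"]
    params["size"] = (w * scale, h * scale)
    return params

# 例:先右移 10、上移 20,再缩小为一半
p = {"position": (100, 200), "size": (80, 40)}
apply_move(p, 10, -20)
apply_resize(p, 0.5)
```

如文中所述,后续播放时子素材在播放终端上的位置与显示比例将与内容显示栏中调整后的结果一致。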
当用户导入两组或以上子素材时,可以调整各组子素材的显示图层(即:遮挡关系)。由此,在本申请特效程序文件包的生成方法又一个实施例中,还可以包括:根据通过上述操作栏的交互接口接收到的针对两组或以上子素材发送的图层参数调整指令,调整上述两组或以上子素材之间的遮挡关系,并根据调整后的遮挡关系和上述播放参数的参数值,在内容显示栏显示该两组或以上子素材。
另外,在本申请特效程序文件包的生成方法的再一个实施例中,在生成特效程序文件包之前,还可以包括:
根据预先设置的特效程序文件和上述一组子素材的播放参数的参数值,生成上述一组子素材的特效程序文件,并通过程序文件栏显示上述一组子素材的特效程序文件。
示例性地,上述特效程序文件例如可以包括但不限于:以json程序或其他任意可执行程序生成的特效程序文件。
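以 json 形式的特效程序文件为例,一组子素材的播放参数参数值大致可按如下方式组织并序列化(其中的字段名与取值仅为示意假设,并非本申请规定的文件格式):

```python
import json

# 一组子素材及其播放参数参数值(字段名为示意假设)
effect_program = {
    "materials": ["frame_00.png", "frame_01.png", "frame_02.png"],
    "play_params": {
        "Display": True,
        "Interval": 0,                  # 相邻两个子素材间隔的帧数
        "TriggerType": "张嘴",          # 触发动作
        "TriggerLoop": 1,               # 循环播放次数
        "PositionType": "Foreground",   # 位置类型
        "PositionRelationType": "No",   # 是否跟随参考部位移动
        "Position": [84, 90],           # 绑定的关键点编号(示意)
        "RotateCenter": 46,             # 旋转依据的关键点编号(示意)
    },
}

# 序列化为 json 文本,即可在程序文件栏展示或写入文件
text = json.dumps(effect_program, ensure_ascii=False, indent=2)
```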
另外,在本申请特效程序文件包的生成方法的再一个实施例中,还包括:特效程序文件包的生成装置根据接收到的启动指令启动,并显示操作界面,该操作界面包括:操作栏,内容显示栏和程序文件栏。
如图2所示,在其中一个可选示例中,上述操作界面包括左侧、中部和右侧三个区域。相应地,上述显示操作界面可以包括:在操作界面的左侧显示操作栏,在操作界面的中部显示内容显示栏,在操作界面的右侧显示所述程序文件栏。
其中,可通过左侧操作栏中的交互接口20导入上述一组子素材,可通过交互接口21调整上述多组子素材图层之间的遮挡关系,设置每组子素材的图层参数,可通过交互接口23对一组子素材的播放参数设置参数值;内容显示栏以平均人脸为参考人脸,所有导入的一组子素材均直接显示,可通过鼠标移动显示的子素材的位置;右侧的程序文件显示栏用于通过其中的显示区域24显示当前设置参数值的一组子素材的播放程序文件的内容,通过程序文件显示栏中的保存指令接口25可以导出特效程序文件包,即:生成并保存该特效程序文件包。
图6为本申请特效程序文件包的生成方法另一实施例的流程图。如图6所示,该实施例特效程序文件包的生成方法包括:
302,特效程序文件包的生成装置根据接收到的启动指令启动,并显示操作界面。
该操作界面包括:操作栏,内容显示栏和程序文件栏。
304,通过内容显示栏显示参考图像,并显示该参考图像上的关键点。
其中,该参考图像包括至少一个参考部位。
在一个可选示例中,该操作304可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的操作界面或者内容显示栏执行。
306,接收通过操作栏的交互接口输入的导入指令,导入该导入指令指向的素材文件夹中的多个子素材作为一组子素材。
在一个可选示例中,该操作306可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的所述第一导入模块执行。
308,获取上述一组子素材的播放参数的参数值,并建立上述多个子素材的显示位置和预定的至少一关键点之间的对应关系。
在一个可选示例中,该操作308可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的第一获取模块执行。
310,根据预先设置的特效程序文件和获取到的上述一组子素材的播放参数的参数值,生成上述一组子素材的特效程序文件,并通过程序文件栏显示该一组子素材的特效程序文件。
在一个可选示例中,该操作310可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的操作界面或者程序文件栏执行。
312,根据上述一组子素材及其播放参数的参数值、以及其显示位置和预定的至少一关键点之间的对应关系和特效程序文件生成特效程序文件包。
在一个可选示例中,该操作312可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的第一生成模块执行。
另外,基于本申请上述任一实施例生成特效程序文件包之后,还可以包括:根据接收到的保存指令,在该保存指令指向的位置保存该特效程序文件包。
在其中一个实施方式中,根据接收到的保存指令,在该保存指令指向的位置保存该特效程序文件包,可以包括:
响应于接收到保存指令,显示保存路径选择接口和压缩接口;
接收通过上述保存路径选择接口发送的保存位置;以及接收基于上述压缩接口发送的压缩方式,并根据该压缩方式对上述多个子素材的特效程序文件包进行压缩,生成压缩文件包;
将上述压缩文件包存储至上述保存位置指向的文件夹中。
特效程序文件包较大时,不适合在手机终端中运行,因此本申请实施例可以对特效程序文件包进行压缩后保存,以便于导入手机终端中进行特效生成。本申请实施例仅压缩特效程序文件包的大小,并不更改特效程序文件包中子素材的大小,即:特效程序文件包中子素材的大小保持为该多个子素材被导入前的大小。
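压缩保存这一步可以用 Python 标准库 zipfile 粗略示意如下:仅对文件包整体做无损压缩打包,解压后子素材的内容与大小仍保持导入前原样(函数名与文件名均为举例假设):

```python
import io
import zipfile

def compress_package(files, buf):
    """将特效程序文件包压缩为 zip 写入 buf(示意)。

    files: {包内路径: 字节内容}。无损压缩仅减小包体积,
    解压后各子素材内容与大小不变。
    """
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for arcname, data in files.items():
            zf.writestr(arcname, data)
    return buf

# 例:把特效程序文件与子素材打成压缩文件包(内容为假数据)
pkg = compress_package(
    {"params.json": b"{}", "frame_00.png": b"\x89PNG fake bytes"},
    io.BytesIO(),
)
```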
基于本申请上述各实施例生成特效程序文件包后,便可以将该特效程序文件包导入终端中,对该终端播放的视频进行动态特效的生成。
图7为本申请特效生成方法一个实施例的流程图。如图7所示,该实施例的特效生成方法包括:
402,获取特效程序文件包中至少一组子素材的播放参数的参数值。
其中,一组子素材包括多个子素材。该多个子素材具有预定的播放时序。
在一个可选示例中,该操作402可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的第二获取模块执行。
404,对视频图像进行关键点检测。
在其中一个实施方式中,可以通过神经网络对视频图像进行对应关系涉及的关键点检测,并输出关键点检测结果。
其中的关键点检测结果,例如可以包括但不限于以下任意一项或多项:对应关系涉及的关键点在视频图像中的位置;特效程序文件包中对应关系涉及的关键点的预设编号。
在一个可选示例中,该操作404可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的第一检测模块执行。
406,基于检测到的关键点和上述至少一组子素材的播放参数的参数值,在视频图像上生成基于上述至少一组子素材的特效。
在一个可选示例中,该操作406可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的第二生成模块执行。
基于本申请上述实施例提供的特效生成方法,获取特效程序文件包中至少一组子素材的播放参数的参数值,其中,一组子素材包括多个子素材;对视频图像进行关键点检测;根据检测到的关键点和至少一组子素材的播放参数的参数值,在视频图像上生成基于至少一组子素材的特效。本申请实施例通过预先生成的特效程序文件包中至少一组子素材的播放参数的参数值和视频图像中的关键点,在视频上生成动态特效,在视频播放过程中实现了动态特效播放,提升了视频播放效果。
另外,在本申请特效生成方法的另一个实施例中,还可以包括:导入特效程序文件包。
其中,该特效程序文件包可以包括至少一组子素材和上述至少一组子素材的播放参数的参数值,上述一组子素材的播放参数的参数值包括上述一组子素材的显示位置和预定的至少一关键点之间的对应关系。
在其中一个实施方式中,导入特效程序文件包,可以包括:通过调用用于读取贴纸素材的第一接口函数,将上述特效程序文件包读入内存;解析上述特效程序文件包,获得上述至少一组子素材和特效程序文件,上述特效程序文件包括上述至少一组子素材的播放参数的参数值。
在其中一个可选示例中,上述特效程序文件可以包括:json程序或者其他可执行程序的特效程序文件。
在其中一个实施方式中,本申请各特效生成方法实施例中的特效程序文件包可以是通过本申请上述任一特效程序文件包的生成方法实施例生成的特效程序文件包。
在其中一个实施方式中,操作402可以包括:通过用于创建贴纸句柄的第二接口函数创建贴纸句柄;读取上述多个子素材和特效程序文件包中的播放参数的参数值、并存储至上述贴纸句柄中。
另外,在本申请特效生成方法的另一个实施例中,还可以包括:根据上述多个子素材的文件名确定上述多个子素材的播放时序;根据上述贴纸句柄中上述特效程序文件中至少一组子素材的播放参数的参数值和上述多个子素材的播放时序,获取上述至少一组子素材中每个子素材在上述视频中显示的位置和视频帧数,并预先从上述视频中读取上述视频帧数对应的视频图像。
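按文件名确定播放时序、再结合间隔参数(Interval)折算出每个子素材显示的视频帧号,这一过程可示意如下(排序规则与帧号分配方式均为假设的一种简化写法):

```python
def schedule_frames(filenames, interval, start_frame=0):
    """按文件名排序确定子素材播放时序,并按间隔帧数分配显示帧号(示意)。

    interval 为相邻两个子素材间隔的帧数。
    """
    order = sorted(filenames)  # 播放时序由文件名确定
    return {name: start_frame + i * (interval + 1)
            for i, name in enumerate(order)}

# 例:三个子素材,相邻间隔 1 帧
plan = schedule_frames(["frame_02.png", "frame_00.png", "frame_01.png"],
                       interval=1)
```

得到的帧号即可用于预先从视频中读取相应的视频图像。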
相应地,在其中一个实施方式中,操作406可以包括:通过用于调用渲染贴纸素材的第三接口函数,从上述贴纸句柄中读取需要显示在上述视频的当前视频图像上的子素材;根据上述检测到的关键点和上述播放参数的参数值,确定上述需要显示的子素材在当前视频图像上显示的位置;将上述需要显示在上述当前视频图像上的子素材显示在上述当前视频图像上的上述显示的位置上。
在其中一个实施方式中,还可以包括:响应于特效程序文件包播放完毕,通过用于调用销毁贴纸句柄的第四接口函数销毁贴纸句柄。
图8为本申请特效生成方法另一个实施例的流程图。如图8所示,该实施例的特效生成方法包括:
502,通过调用用于读取贴纸素材的第一接口函数,将该特效程序文件包读入内存。
504,解析该特效程序文件包,获得至少一组子素材和特效程序文件,该特效程序文件包括一组子素材的播放参数的参数值。
其中,该一组子素材的播放参数的参数值包括该一组子素材的显示位置和预定的至少一关键点之间的对应关系。
在其中一个可选示例中,上述特效程序文件可以包括:json程序或者其他可执行程序的特效程序文件。
在其中一个实施方式中,本申请各特效生成方法实施例中的特效程序文件包可以是通过本申请任一实施例的特效程序文件包的生成方法生成的特效程序文件包。
在一个可选示例中,该操作502-504可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的第二导入模块执行。
506,通过用于创建贴纸句柄的第二接口函数创建贴纸句柄。
508,从内存中读取上述至少一组子素材和特效程序文件中的播放参数的参数值、并存储至上述贴纸句柄中。
在一个可选示例中,该操作506-508可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的第二获取模块执行。
510,分别针对各组子素材:根据贴纸句柄中同一组子素材中多个子素材的文件名确定该多个子素材的播放时序;以及根据贴纸句柄中上述特效程序文件中至少一组子素材的播放参数的参数值和同一组子素材中多个子素材的播放时序,获取上述至少一组子素材中每个子素材在上述视频中显示的位置和视频帧数,并预先从上述视频中读取上述视频帧数对应的视频图像。
在一个可选示例中,该操作510可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的第三获取模块执行。
512,通过神经网络对视频图像进行对应关系涉及的关键点检测,并输出关键点检测结果。
在一个可选示例中,该操作512可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的第一检测模块执行。
514,通过用于调用渲染贴纸素材的第三接口函数,从贴纸句柄中读取需要显示在视频的当前视频图像上的子素材。
516,根据上述检测到的关键点和上述播放参数的参数值,确定上述需要显示的子素材在当前视频图像上显示的位置。
518,将上述需要显示在上述当前视频图像上的子素材显示在上述当前视频图像上的上述显示的位置上。
在一个可选示例中,该操作514-518可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的第二生成模块执行。
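操作502至518描述的“读入文件包—创建贴纸句柄—按帧读取并渲染—销毁句柄”调用顺序,可用如下 Python 类粗略示意(第二/第三/第四接口函数对应的功能取自上文描述,类名与方法签名均为假设):

```python
class StickerHandle:
    """贴纸句柄:保存子素材与播放参数,供渲染时按帧读取(示意实现)。"""

    def __init__(self, materials, play_params):
        # 对应第二接口函数:创建句柄并存入子素材与播放参数参数值
        self.materials = list(materials)
        self.play_params = play_params
        self.destroyed = False

    def render_material(self, frame_index):
        """对应第三接口函数:读取需要显示在当前视频图像上的子素材。"""
        assert not self.destroyed
        return self.materials[frame_index % len(self.materials)]

    def destroy(self):
        """对应第四接口函数:播放完毕后销毁句柄,释放资源。"""
        self.materials = []
        self.destroyed = True

# 例:创建句柄、读取第 0 帧对应的子素材、播放完毕后销毁
handle = StickerHandle(["a.png", "b.png"], {"TriggerType": "张嘴"})
first = handle.render_material(0)
handle.destroy()
```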
本申请各特效生成方法实施例可以用于各种视频播放场景,例如用于包含人物的视频直播场景,为该直播视频生成动态特效,将特效程序文件包中的至少一组子素材叠加在人物的相应部位上播放。其中的相应部位例如可以是:耳朵,手,脸,头发,脖子,肩膀等。
本申请实施例提供的任一特效程序文件包的生成方法和特效生成方法可以由任意适当的具有数据处理能力的设备执行,包括但不限于:终端设备和服务器等。或者,本申请实施例提供的任一特效程序文件包的生成方法和特效生成方法可以由处理器执行,如处理器通过调用存储器存储的相应指令来执行本申请实施例提及的任一特效程序文件包的生成方法和特效生成方法。下文不再赘述。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
图9为本申请特效程序文件包的生成装置一个实施例的结构示意图。该实施例特效程序文件包的生成装置可用于实现本申请上述各特效程序文件包的生成方法实施例。如图9所示,该实施例特效程序文件包的生成装置包括:第一导入模块,第一获取模块和第一生成模块。其中:
第一导入模块,用于导入一组子素材;所述一组子素材包括多个子素材。
在本申请各实施例的一个实施方式中,一组子素材中的多个子素材具有预定的播放时序,该多个子素材的播放时序可以基于该多个子素材的文件名确定。
第一获取模块,用于获取所述一组子素材的播放参数的参数值。
第一生成模块,用于根据所述一组子素材和所述播放参数的参数值生成特效程序文件包。
在本申请各实施例中,特效程序文件包可以包括一组子素材或者多组子素材。
基于本申请上述实施例提供的特效程序文件包的生成装置,导入一组子素材,该一组子素材包括多个子素材;获取一组子素材的播放参数的参数值;根据一组子素材和播放参数的参数值生成特效程序文件包,以便基于该特效程序文件包视频进行动态特效处理,在播放的视频上实现动态特效,本申请实施例无需通过手动书写程序文件,便可生成渲染引擎可执行的特效程序文件,操作简单、所需时间短,提升了动态特效实现的整体效率,有效保障了特效效果的准确性。
图10为本申请特效程序文件包的生成装置另一个实施例的结构示意图。如图10所示,与图9所示的实施例相比,该实施例的生成装置还包括操作界面,该操作界面包括操作栏,如图2所示,为操作界面的一个示例图。相应地,该实施例中,第一导入模块用于接收通过操作栏的交互接口输入的导入指令,导入该导入指令指向的素材文件夹中的多个子素材作为上述一组子素材。
在其中一个实施方式中,第一导入模块用于:接收通过操作栏下播放参数设置界面中的交互接口发送的导入指令,导入该导入指令指向的素材文件夹中的多个子素材;或者,接收通过操作栏的交互接口发送的选取指令,以选取指令选取的参考部位作为当前需要添加特效的目标部位,并在操作栏显示目标部位下的播放参数设置界面;接收通过播放参数设置界面中的交互接口发送的导入指令,导入该导入指令指向的素材文件夹中的多个子素材。
在其中一个可选示例中,第一导入模块用于:接收通过交互接口发送的导入指令,获取并显示导入指令指向的素材文件夹;响应于接收到对素材文件夹中的子素材选取操作,导入子素材选取操作选取的多个子素材;和/或,响应于未接收到对素材文件夹中的子素材选取操作,根据预先设置选取素材文件夹中的全部子素材或者部分子素材,并导入根据预先设置选取的子素材。
在其中一个可选示例中,第一导入模块导入上述导入指令指向的素材文件夹中的多个子素材时,用于:响应于导入指令中包括导入指令指向的素材文件夹中的多个子素材的显示顺序,按照显示顺序读取并导入多个子素材,并在操作栏按照显示顺序显示导入的多个子素材的文件名;和/或,响应于导入指令中未包括导入指令指向的素材文件夹中的多个子素材的显示顺序,按照预设顺序读取并导入多个子素材,并在操作栏按照预设顺序显示导入的多个子素材的文件名。
在其中一个实施方式中,第一获取模块用于:响应于接收到通过播放参数设置界面中的交互接口发送的针对一组子素材的播放参数设置的参数值,以设置的参数值作为一组子素材的播放参数的参数值;和/或,响应于未接收到通过播放参数设置界面中的交互接口发送的针对一组子素材的播放参数设置的参数值,以预设参数值作为一组子素材的播放参数的参数值。
在其中一个实施方式中,播放参数还可以包括:一组子素材的显示位置和预定的至少一关键点之间的对应关系。
在其中一个实施方式中,第一获取模块,还可用于建立一组子素材的显示位置和至少一关键点之间的对应关系;和/或,建立一组子素材的显示位置和检测框的中心关键点之间的对应关系。
另外,在另一个实施方式中,操作界面还可以包括内容显示栏,用于显示参考图像,并显示参考图像上的关键点,其中的参考图像包括至少一个参考部位。如图2所示,为操作界面的一个示例图。
示例性地,上述参考图像例如可以是:参考人物的至少一部分图像,例如参考人物的以下任意一项或多项的图像:完整图像,头部图像,脸部图像,肩部图像,手臂图像,手势图像,腰部图像,腿部图像,脚部图像,等等。
在本申请生成装置的又一个实施例中,内容显示栏,还可用于:根据一组子素材的播放参数的参数值,按照预设显示策略在内容显示栏依次显示导入的一组子素材中的每个子素材、或者同时显示导入的一组子素材中的多个子素材;或者,接收到对一组子素材中子素材的选取操作,在内容显示栏显示选取操作选取的子素材。
另外,再参见图10,本申请上述各实施例的生成装置还可以包括:第一更新模块,用于根据通过内容显示栏接收到的对一组子素材或其中一个子素材的位置移动操作,更新一组子素材在内容显示栏的显示位置,并对一组子素材的播放参数中的相应参数值进行更新。
另外,再参见图10,本申请上述各实施例的生成装置还可以包括:第二更新模块,用于根据通过内容显示栏接收到的对一组子素材或其中一个子素材的大小调整操作,更新一组子素材在内容显示栏的显示大小,并对一组子素材的播放参数中的相应参数值进行更新。
另外,再参见图10,本申请上述各实施例的生成装置还可以包括:调整模块,用于根据通过操作栏的交互接口接收到的针对两组或以上子素材发送的图层参数调整指令,调整两组或以上子素材之间的遮挡关系,并根据调整后的遮挡关系和播放参数的参数值显示两组或以上子素材。
另外,操作界面还可以包括:程序文件栏,用于根据预先设置的特效程序文件和一组子素材的播放参数的参数值,生成一组子素材的特效程序文件,并通过程序文件栏显示一组子素材的特效程序文件。其中的特效程序文件例如可以包括但不限于:以json程序生成的特效程序文件。
参见图2,该操作界面可以包括左侧、中部和右侧三个区域。其中,在操作界面的左侧显示操作栏,在操作界面的中部显示内容显示栏,在操作界面右侧显示程序文件栏。
另外,再参见图10,本申请上述各实施例的生成装置还可以包括:保存模块,用于根据接收到的保存指令在保存指令指向的位置保存特效程序文件包。
在其中一个实施方式中,保存模块用于:响应于接收到保存指令,显示保存路径选择接口和压缩接口;接收通过保存路径选择接口发送的保存位置;以及接收基于压缩接口发送的压缩方式,并根据压缩方式对子素材的特效程序文件包进行压缩,生成压缩文件包;将压缩文件包存储至保存位置指向的文件夹中。
在其中一个可选示例中,特效程序文件包中子素材的大小保持为各子素材被导入前的大小。
本申请各实施例中,特效生成装置可用于实现本申请上述各特效生成方法实施例,可以但不限于为AR引擎或者具有AR特效绘制功能的电子设备。
图11为本申请特效生成装置一个实施例的结构示意图。如图11所示,该实施例的特效生成装置包括:第二获取模块,第一检测模块和第二生成模块。其中:
第二获取模块,用于获取特效程序文件包中至少一组子素材的播放参数的参数值;其中,该一组子素材包括多个子素材。
在其中一个实施方式中,上述多个子素材具有预定的播放时序。
在其中一个实施方式中,本申请各特效生成装置实施例中的特效程序文件包可以是通过本申请上述任一特效程序文件包的生成方法或者装置实施例生成的特效程序文件包。
第一检测模块,用于对视频图像进行关键点检测。
第二生成模块,用于根据检测到的关键点和所述至少一组子素材的播放参数的参数值,在视频图像上生成基于所述至少一组子素材的特效。
基于本申请上述实施例提供的特效生成装置,获取特效程序文件包中至少一组子素材的播放参数的参数值,其中,一组子素材包括多个子素材;对视频图像进行关键点检测;根据检测到的关键点和至少一组子素材的播放参数的参数值,在视频图像上生成基于至少一组子素材的特效。本申请实施例通过预先生成的特效程序文件包中至少一组子素材的播放参数的参数值和视频图像中的关键点,在视频上生成动态特效,在视频播放过程中实现了动态特效播放,提升了视频播放效果。
图12为本申请特效生成装置另一个实施例的结构示意图。如图12所示,与图11所示的实施例相比,该实施例的特效生成装置还包括:第二导入模块,用于导入特效程序文件包。其中,该特效程序文件包包括至少一组子素材和至少一组子素材的播放参数的参数值,一组子素材的播放参数的参数值包括一组子素材的显示位置和预定的至少一关键点之间的对应关系。
在本申请特效生成装置的又一个实施例中,播放参数包括:触发动作参数,该触发动作参数用于表示触发多个子素材显示的触发动作。相应地,再参见图12,该实施例的特效生成装置还可以包括:第二检测模块,用于检测视频图像中是否出现触发动作参数的参数值对应的触发动作。相应地,该实施例中,第二生成模块用于:响应于检测到视频图像中出现触发动作参数的参数值对应的触发动作,根据检测到的关键点和至少一组子素材的播放参数的参数值,在视频上生成基于至少一组子素材的特效。
在本申请特效生成装置的再一个实施例中,播放参数还包括:触发结束参数:该触发结束参数用于表示结束多个子素材显示的动作。相应地,该实施例中,第二检测模块,还用于检测视频图像中是否出现触发结束参数的参数值对应的触发动作。第二生成模块,还用于响应于检测到视频图像中出现触发结束参数的参数值对应的触发动作,结束在当前正在播放的视频上生成至少一组子素材的特效。
在本申请特效生成装置的再一个实施例中,播放参数还包括:美颜/美妆效果参数,该美颜/美妆效果参数用于表示显示子素材时在预设部位显示的美颜/美妆效果。相应地,在该实施例中,第二生成模块,还用于根据检测到的关键点和至少一组子素材的播放参数的参数值,在视频图像上生成基于至少一组子素材的特效时,根据美颜/美妆效果参数,在视频图像中的预设部位显示美颜/美妆效果。
在其中一个实施方式中,第二导入模块用于:通过调用用于读取贴纸素材的第一接口函数,将特效程序文件包读入内存;解析动态特效程序文件包,获得至少一组子素材和特效程序文件,特效程序文件包括至少一组子素材的播放参数的参数值。其中的特效程序文件例如可以包括:json程序的特效程序文件。
在其中一个可选示例中,第二获取模块用于:通过用于创建贴纸句柄的第二接口函数创建贴纸句柄;读取至少一组子素材和特效程序文件包中至少一组子素材的播放参数的参数值、并存储至贴纸句柄中。
另外,再参见图12,在再一实施例中,特效生成装置还可以包括:确定模块和第三获取模块。其中,确定模块,用于根据多个子素材的文件名确定多个子素材的播放时序。第三获取模块,用于根据贴纸句柄中特效程序文件中至少一组子素材的播放参数的参数值和多个子素材的播放时序,获取至少一组子素材中每个子素材在视频中显示的位置和视频帧数,并预先从视频中读取视频帧数对应的视频图像。
在其中一个实施方式中,第二生成模块,用于:通过用于调用渲染贴纸素材的第三接口函数,从贴纸句柄中读取需要显示在视频的当前视频图像上的子素材;根据检测到的关键点和播放参数的参数值,确定需要显示的子素材在当前视频图像上显示的位置;将需要显示在当前视频图像上的子素材显示在当前视频图像上的显示的位置上。
另外,再参见图12,在本申请特效生成装置的又一个实施例中,第二获取模块,还可用于响应于特效程序文件包播放完毕,通过用于调用销毁贴纸句柄的第四接口函数销毁贴纸句柄。
在本申请各特效生成装置实施例的一个实施方式中,第一检测模块,用于通过神经网络,对视频图像进行对应关系涉及的关键点检测,并输出关键点检测结果。
其中的关键点检测结果例如可以包括以下任意一项或多项:对应关系涉及的关键点在视频图像中的位置;对应关系涉及的关键点的预设编号。
另外,本申请实施例提供的另一种电子设备,包括:
存储器,用于存储计算机程序;
处理器,用于执行存储器中存储的计算机程序,且计算机程序被执行时,实现本申请任一实施例所述的特效程序文件包的生成方法、或者特效生成方法。
图13为本申请电子设备一个应用实施例的结构示意图。下面参考图13,其示出了适于用来实现本申请实施例的终端设备或服务器的电子设备的结构示意图。如图13所示,该电子设备包括一个或多个处理器、通信部等,所述一个或多个处理器例如:一个或多个中央处理单元(CPU),和/或一个或多个图像处理器(GPU)等,处理器可以根据存储在只读存储器(ROM)中的可执行指令或者从存储部分加载到随机访问存储器(RAM)中的可执行指令而执行各种适当的动作和处理。通信部可包括但不限于网卡,所述网卡可包括但不限于IB(Infiniband)网卡,处理器可与只读存储器和/或随机访问存储器通信以执行可执行指令,通过总线与通信部相连、并经通信部与其他目标设备通信,从而完成本申请实施例提供的任一方法对应的操作,例如,导入一组子素材;所述一组子素材包括多个子素材;获取所述一组子素材的播放参数的参数值;根据所述一组子素材和所述播放参数的参数值生成特效程序文件包。再如,获取特效程序文件包中至少一组子素材的播放参数的参数值;其中,一组子素材包括多个子素材;对视频图像进行关键点检测;根据检测到的关键点和所述至少一组子素材的播放参数的参数值,在视频图像上生成基于所述至少一组子素材的特效。
此外,在RAM中,还可存储有装置操作所需的各种程序和数据。CPU、ROM以及RAM通过总线彼此相连。在有RAM的情况下,ROM为可选模块。RAM存储可执行指令,或在运行时向ROM中写入可执行指令,可执行指令使处理器执行本申请任一实施例所述的方法对应的操作。输入/输出(I/O)接口也连接至总线。通信部可以集成设置,也可以设置为具有多个子模块(例如多个IB网卡),并在总线链接上。
以下部件连接至I/O接口:包括键盘、鼠标等的输入部分;包括诸如阴极射线管(CRT)、液晶显示器(LCD)等以及扬声器等的输出部分;包括硬盘等的存储部分;以及包括诸如LAN卡、调制解调器等的网络接口卡的通信部分。通信部分经由诸如因特网的网络执行通信处理。驱动器也根据需要连接至I/O接口。可拆卸介质,诸如磁盘、光盘、磁光盘、半导体存储器等等,根据需要安装在驱动器上,以便于从其上读出的计算机程序根据需要被安装入存储部分。
需要说明的,如图13所示的架构仅为一种可选实现方式,在具体实践过程中,可根据实际需要对上述图13的部件数量和类型进行选择、删减、增加或替换;在不同功能部件设置上,也可采用分离设置或集成设置等实现方式,例如GPU和CPU可分离设置或者可将GPU集成在CPU上,通信部可分离设置,也可集成设置在CPU或GPU上,等等。这些可替换的实施方式均落入本申请公开的保护范围。
特别地,根据本申请的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本申请的实施例包括一种计算机程序产品,其包括有形地包含在机器可读介质上的计算机程序,计算机程序包含用于执行流程图所示的方法的程序代码,程序代码可包括执行本申请实施例提供的特效程序文件包的生成方法或者特效生成方法的步骤对应的指令。在这样的实施例中,该计算机程序可以通过通信部分从网络上被下载和安装,和/或从可拆卸介质被安装。在该计算机程序被CPU执行时,执行本申请的方法中限定的上述功能。
另外,本申请实施例还提供了一种计算机程序,包括计算机指令,当计算机指令在设备的处理器中运行时,实现本申请任一实施例所述的特效程序文件包的生成方法、或者特效生成方法。
另外,本申请实施例还提供了一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时,实现本申请上述任一实施例所述的特效程序文件包的生成方法、或者特效生成方法。
本说明书中各个实施例均采用递进的方式描述,每个实施例重点说明的都是与其它实施例的不同之处,各个实施例之间相同或相似的部分相互参见即可。对于系统实施例而言,由于其与方法实施例基本对应,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
可能以许多方式来实现本申请的方法和装置。例如,可通过软件、硬件、固件或者软件、硬件、固件的任何组合来实现本申请的方法和装置。用于所述方法的步骤的上述顺序仅是为了进行说明,本申请的方法的步骤不限于以上描述的顺序,除非以其它方式特别说明。此外,在一些实施例中,还可将本申请实施为记录在记录介质中的程序,这些程序包括用于实现根据本申请的方法的机器可读指令。因而,本申请还覆盖存储用于执行根据本申请的方法的程序的记录介质。
本申请的描述是为了示例和描述起见而给出的,而并不是无遗漏的或者将本申请限于所公开的形式。很多修改和变化对于本领域的普通技术人员而言是显然的。选择和描述实施例是为了更好说明本申请的原理和实际应用,并且使本领域的普通技术人员能够理解本申请从而设计适于特定用途的带有各种修改的各种实施例。

Claims (93)

  1. 一种特效程序文件包的生成方法,其特征在于,包括:
    导入一组子素材,所述一组子素材包括多个子素材;
    获取所述一组子素材的播放参数的参数值;
    根据所述一组子素材和所述播放参数的参数值生成特效程序文件包。
  2. 根据权利要求1所述的方法,其特征在于,所述多个子素材具有预定的播放时序。
  3. 根据权利要求2所述的方法,其特征在于,所述多个子素材的播放时序基于所述多个子素材的文件名确定。
  4. 根据权利要求1-3任一所述的方法,其特征在于,所述导入一组子素材,包括:接收通过操作栏的交互接口输入的导入指令,导入所述导入指令指向的素材文件夹中的多个子素材作为所述一组子素材。
  5. 根据权利要求4所述的方法,其特征在于,所述接收通过操作栏的交互接口输入的导入指令,导入所述导入指令指向的素材文件夹中的多个子素材,包括:接收通过所述操作栏下播放参数设置界面中的交互接口发送的导入指令,导入所述导入指令指向的素材文件夹中的多个子素材;或者,接收通过所述操作栏的交互接口发送的选取指令,以所述选取指令选取的参考部位作为所述当前需要添加特效的目标部位,并在所述操作栏显示所述目标部位下的播放参数设置界面;接收通过所述播放参数设置界面中的交互接口发送的导入指令,导入所述导入指令指向的素材文件夹中的多个子素材。
  6. 根据权利要求4或5所述的方法,其特征在于,接收通过交互接口发送的导入指令,导入所述导入指令指向的素材文件夹中的多个子素材,包括:
    接收通过所述交互接口发送的导入指令,获取并显示所述导入指令指向的素材文件夹;
    响应于接收到对所述素材文件夹中的子素材选取操作,导入所述子素材选取操作选取的多个子素材;和/或,响应于未接收到对所述素材文件夹中的子素材选取操作,根据预先设置选取素材文件夹中的全部子素材或者部分子素材,并导入根据预先设置选取的子素材。
  7. 根据权利要求1-6任一所述的方法,其特征在于,所述特效程序文件包包括一组子素材;或者,所述特效程序文件包包括多组子素材。
  8. 根据权利要求4-7任一所述的方法,其特征在于,所述导入所述导入指令指向的素材文件夹中的多个子素材,包括:响应于所述导入指令中包括所述导入指令指向的素材文件夹中的多个子素材的显示顺序,按照所述显示顺序读取并导入所述多个子素材,并在所述操作栏按照所述显示顺序显示导入的所述多个子素材的文件名;和/或,响应于所述导入指令中未包括所述导入指令指向的素材文件夹中的多个子素材的显示顺序,按照预设顺序读取并导入所述多个子素材,并在所述操作栏按照预设顺序显示导入的所述多个子素材的文件名。
  9. 根据权利要求5-8任一所述的方法,其特征在于,所述获取所述一组子素材的播放参数的参数值,包括:响应于接收到通过所述播放参数设置界面中的交互接口发送的针对所述一组子素材的播放参数设置的参数值,以所述设置的参数值作为所述一组子素材的播放参数的参数值;和/或,响应于未接收到通过所述播放参数设置界面中的交互接口发送的针对所述一组子素材的播放参数设置的参数值,以预设参数值作为所述一组子素材的播放参数的参数值。
  10. 根据权利要求1-9任一所述的方法,其特征在于,所述一组子素材的播放参数包括以下任意一项或多项;其中:
    显示参数:用于表示是否显示所述多个子素材;
    间隔参数:用于表示显示所述多个子素材中相邻两个子素材间隔的帧数;
    触发动作参数:用于表示触发所述多个子素材显示的触发动作;
    循环参数:用于表示所述多个子素材的循环播放次数;
    延迟触发参数:用于表示延迟显示所述多个子素材的时间;
    触发结束参数:用于表示结束所述多个子素材显示的动作;
    显示尺寸参数:用于表示多个子素材的显示大小变化的参考依据;
    位置类型参数:用于表示多个子素材和位置的关系类型;
    位置关联参数:用于表示多个子素材是否跟随预设参考部位移动;
    位置参数:用于表示多个子素材与预设关键点之间的位置绑定关系;
    旋转参数:用于表示多个子素材旋转依据的关键点;
    美颜/美妆效果参数:用于表示显示子素材时在预设部位显示的美颜/美妆效果。
  11. 根据权利要求10所述的方法,其特征在于,所述触发动作参数对应的触发动作包括以下任意一项或多项:无动作触发,眼部动作,头部动作,眉部动作,手部动作,嘴部动作,肩部动作。
  12. 根据权利要求10或11所述的方法,其特征在于,所述位置类型参数包括以下任意一项:
    用于表示前景的参数;
    用于表示所述多个子素材跟随脸部位置进行定位和/或移动的参数;
    用于表示所述多个子素材跟随手的位置进行定位和/或移动的参数;
    用于表示所述多个子素材跟随头部的位置进行定位和/或移动的参数;
    用于表示所述多个子素材跟随肩部的位置进行定位和/或移动的参数;
    用于表示所述多个子素材跟随手臂的位置进行定位和/或移动的参数;
    用于表示所述多个子素材跟随腰部的位置进行定位和/或移动的参数;
    用于表示所述多个子素材跟随腿部的位置进行定位和/或移动的参数;
    用于表示所述多个子素材跟随脚部的位置进行定位和/或移动的参数;
    用于表示所述多个子素材跟随人体骨骼的位置进行定位和/或移动的参数;
    与参考部位相关的播放位置关系;
    用于表示背景的参数。
  13. 根据权利要求12所述的方法,其特征在于,所述与参考部位相关播放位置关系包括以下任意一项或多项:
    所述多个子素材跟随所述参考部位的位置进行移动,所述多个子素材跟随所述参考部位的大小进行缩放;
    所述多个子素材跟随所述参考部位的位置进行移动,所述多个子素材跟随所述参考部位的大小进行缩放,所述多个子素材跟随所述参考部位的旋转进行纵深缩放;
    所述多个子素材跟随所述参考部位的位置进行移动,所述多个子素材跟随所述参考部位的大小进行缩放,所述多个子素材跟随所述参考部位的旋转进行纵深缩放,所述多个子素材跟随所述参考部位的平面旋转进行旋转。
  14. 根据权利要求1-13任一所述的方法,其特征在于,所述播放参数中包括:所述一组子素材的显示位置和预定的至少一关键点之间的对应关系;
    所述关键点包括以下任意一种或多种:头部关键点,脸部关键点,肩部关键点,手臂关键点,手势关键点,腰部关键点,腿部关键点,脚部关键点,人体骨骼关键点。
  15. 根据权利要求14所述的方法,其特征在于,所述头部关键点包括以下任意一项或多项:头顶关键点,鼻尖关键点,以及下巴关键点;和/或,
    所述脸部关键点包括以下任意一项或多项:脸部轮廓关键点,眼睛关键点,眉毛关键点,鼻子关键点,嘴部关键点;和/或,
    所述肩部关键点包括以下任意一项或多项:位于肩部与头部交汇位置处的肩头交汇关键点,以及位于臂根轮廓关键点与肩头交汇关键点之间的中点位置处的肩轮廓中点关键点;和/或,
    所述手臂关键点包括以下任意一项或多项:手腕轮廓关键点,胳膊肘轮廓关键点,臂根轮廓关键点,位于手腕轮廓关键点与胳膊肘轮廓关键点之间的中点位置处的小臂轮廓中点关键点,以及位于胳膊肘轮廓关键点与臂根轮廓关键点之间的中点位置处的大臂中点关键点;和/或,
    所述手势关键点包括以下任意一项或多项:手势框的四个顶点关键点,以及手势框的中心关键点;和/或,
    所述腿部关键点包括以下任意一项或多项:裆部关键点,膝盖轮廓关键点,脚踝轮廓关键点,大腿根部外侧轮廓关键点,位于膝盖轮廓关键点与脚踝轮廓关键点之间的中点位置处的小腿轮廓中点关键点,位于膝盖内轮廓关键点与裆部关键点之间的中点位置处的大腿内轮廓中点关键点,以及位于膝盖外轮廓关键点与大腿根部外侧轮廓关键点之间的中点位置处的大腿外轮廓中点关键点;和/或,
    所述腰部关键点包括以下任意一项或多项:将大腿根部外侧轮廓关键点与臂根轮廓关键点之间N等分,所产生的N个等分点;其中,所述N大于1;和/或,
    所述脚部关键点包括以下任意一项或多项:脚尖关键点以及足跟关键点;和/或,
    所述人体骨骼关键点包括以下任意一项或多项:右肩骨骼关键点,右肘骨骼关键点,右腕骨骼关键点,左肩骨骼关键点,左肘骨骼关键点,左腕骨骼关键点,右髋骨骼关键点,右膝骨骼关键点,右踝骨骼关键点,左髋骨骼关键点,左膝骨骼关键点,左踝骨骼关键点,头顶骨骼关键点,以及脖子骨骼关键点。
  16. 根据权利要求15所述的方法,其特征在于,所述眼睛关键点包括以下任意一项或多项:左眼眶关键点,左眼瞳孔中心关键点,左眼中心关键点,右眼眶关键点,右眼瞳孔中心关键点,以及右眼中心关键点;和/或,
    所述眉毛关键点包括以下任意一项或多项:左眉毛关键点以及右眉毛关键点;和/或,
    所述鼻子关键点包括以下任意一项或多项:鼻梁关键点,鼻子下沿关键点,以及鼻子外侧轮廓关键点;和/或,
    所述嘴部关键点包括以下任意一项或多项:上嘴唇关键点,以及下嘴唇关键点。
  17. 根据权利要求14-16任一所述的方法,其特征在于,还包括:建立所述一组子素材的显示位置和所述至少一关键点之间的对应关系;和/或,建立所述一组子素材的显示位置和检测框的中心关键点之间的对应关系。
  18. 根据权利要求14-17任一所述的方法,其特征在于,还包括:通过内容显示栏显示参考图像,并显示所述参考图像上的关键点;所述参考图像包括至少一个参考部位。
  19. 根据权利要求18所述的方法,其特征在于,所述参考图像包括:参考人物的至少一部分图像。
  20. 根据权利要求19所述的方法,其特征在于,所述参考人物的至少一部分图像包括所述参考人物的以下任意一项或多项的图像:完整图像,头部图像,脸部图像,肩部图像,手臂图像,手势图像,腰部图像,腿部图像,脚部图像。
  21. 根据权利要求18-20任一所述的方法,其特征在于,所述导入一组子素材之后,还包括:根据所述一组子素材的播放参数的参数值,按照预设显示策略在所述内容显示栏依次显示导入的所述一组子素材中的每个子素材、或者同时显示导入的所述一组子素材中的多个子素材;或者,接收到对所述一组子素材中子素材的选取操作,在所述内容显示栏显示所述选取操作选取的子素材。
  22. 根据权利要求21所述的方法,其特征在于,还包括:根据通过所述内容显示栏接收到的对所述一组子素材或其中一个子素材的位置移动操作,更新所述一组子素材在所述内容显示栏的显示位置,并对所述一组子素材的播放参数中的相应参数值进行更新。
  23. 根据权利要求21或22所述的方法,其特征在于,还包括:根据通过所述内容显示栏接收到的对所述一组子素材或其中一个子素材的大小调整操作,更新所述一组子素材在所述内容显示栏的显示大小,并对所述一组子素材的播放参数中的相应参数值进行更新。
  24. 根据权利要求1-23任一所述的方法,其特征在于,还包括:根据通过所述操作栏的交互接口接收到的针对两组或以上子素材发送的图层参数调整指令,调整所述两组或以上子素材之间的遮挡关系,并根据调整后的遮挡关系和所述播放参数的参数值显示所述两组或以上子素材。
  25. 根据权利要求1-24任一所述的方法,其特征在于,所述生成特效程序文件包之前,还包括:根据预先设置的特效程序文件和所述一组子素材的播放参数的参数值,生成所述一组子素材的特效程序文件,并通过程序文件栏显示所述一组子素材的特效程序文件。
  26. 根据权利要求25所述的方法,其特征在于,所述特效程序文件包括:以json程序生成的特效程序文件。
  27. 根据权利要求1-26任一所述的方法,其特征在于,还包括:根据接收到的启动指令启动,并显示操作界面,所述操作界面包括:操作栏,内容显示栏和/或程序文件栏。
  28. 根据权利要求27所述的方法,其特征在于,所述操作界面包括左侧、中部和右侧三个区域;
    所述显示操作界面,包括:在所述操作界面的左侧显示所述操作栏,在所述操作界面的中部显示所述内容显示栏,在所述操作界面右侧显示所述程序文件栏。
  29. 根据权利要求1-28任一所述的方法,其特征在于,所述生成特效程序文件包之后,还包括:根据接收到的保存指令在所述保存指令指向的位置保存所述特效程序文件包。
  30. 根据权利要求29所述的方法,其特征在于,所述根据接收到的保存指令在所述保存指令指向的位置保存所述特效程序文件包,包括:
    响应于接收到保存指令,显示保存路径选择接口和压缩接口;
    接收通过所述保存路径选择接口发送的保存位置;以及接收基于所述压缩接口发送的压缩方式,并根据所述压缩方式对所述特效程序文件包进行压缩,生成压缩文件包;
    将所述压缩文件包存储至所述保存位置指向的文件夹中。
  31. 根据权利要求1-30任一所述的方法,其特征在于,所述特效程序文件包中子素材的大小保持为子素材被导入前的大小。
  32. 一种特效生成方法,其特征在于,包括:
    获取特效程序文件包中至少一组子素材的播放参数的参数值;其中,一组子素材包括多个子素材;
    对视频图像进行关键点检测;
    根据检测到的关键点和所述至少一组子素材的播放参数的参数值,在视频图像上生成基于所述至少一组子素材的特效。
  33. 根据权利要求32所述的方法,其特征在于,还包括:导入所述特效程序文件包;
    所述特效程序文件包包括:至少一组子素材和所述至少一组子素材的播放参数的参数值,所述一组子素材的播放参数的参数值包括所述一组子素材的显示位置和预定的至少一关键点之间的对应关系。
  34. 根据权利要求32或33所述的方法,其特征在于,所述多个子素材具有预定的播放时序。
  35. 根据权利要求34所述的方法,其特征在于,所述播放参数包括:触发事件参数,所述触发事件参数用于表示触发所述多个子素材显示的触发事件;
    所述方法还包括:检测所述视频图像中是否出现所述触发动作参数的参数值对应的触发动作;
    所述基于检测到的关键点和所述至少一组子素材的播放参数的参数值,在所述当前正在播放的视频上生成所述至少一组子素材的特效,包括:响应于检测到所述视频图像中出现所述触发动作参数的参数值对应的触发动作,根据检测到的关键点和所述至少一组子素材的播放参数的参数值,在所述视频上生成基于所述至少一组子素材的特效。
  36. 根据权利要求35所述的方法,其特征在于,所述播放参数还包括:触发结束参数:所述触发结束参数用于表示结束所述多个子素材显示的动作;
    所述方法还包括:检测所述视频图像中是否出现所述触发结束参数的参数值对应的触发动作;响应于检测到所述视频图像中出现所述触发结束参数的参数值对应的触发动作,结束在所述当前正在播放的视频上生成所述至少一组子素材的特效。
  37. 根据权利要求35或36所述的方法,其特征在于,所述播放参数包括:美颜/美妆效果参数,所述美颜/美妆效果参数用于表示显示子素材时在预设部位显示的美颜/美妆效果;
    所述方法还包括:根据检测到的关键点和所述至少一组子素材的播放参数的参数值,在视频图像上生成基于所述至少一组子素材的特效时,根据所述美颜/美妆效果参数,在所述视频图像中的所述预设部位显示美颜/美妆效果。
  38. 根据权利要求32-37任一所述的方法,其特征在于,所述导入特效程序文件包,包括:通过调用用于读取贴纸素材的第一接口函数,将所述特效程序文件包读入内存;解析所述特效程序文件包,获得所述至少一组子素材和特效程序文件,所述特效程序文件包括所述至少一组子素材的播放参数的参数值。
  39. 根据权利要求38所述的方法,其特征在于,所述特效程序文件包括:json程序的特效程序文件。
  40. 根据权利要求38或39所述的方法,其特征在于,所述获取所述特效程序文件包中至少一组子素材的播放参数的参数值,包括:通过用于创建贴纸句柄的第二接口函数创建贴纸句柄;读取所述至少一组子素材和所述特效程序文件包中至少一组子素材的播放参数的参数值、并存储至所述贴纸句柄中。
  41. 根据权利要求40所述的方法,其特征在于,还包括:根据所述多个子素材的文件名确定所述多个子素材的播放时序;根据所述贴纸句柄中所述特效程序文件中至少一组子素材的播放参数的参数值和所述多个子素材的播放时序,获取所述至少一组子素材中每个子素材在所述视频中显示的位置和视频帧数,并预先从所述视频中读取所述视频帧数对应的视频图像。
  42. 根据权利要求40或41所述的方法,其特征在于,所述基于检测到的关键点和所述至少一组子素材的播放参数的参数值,在所述当前正在播放的视频上生成所述至少一组子素材的特效,包括:通过用于调用渲染贴纸素材的第三接口函数,从所述贴纸句柄中读取需要显示在所述视频的当前视频图像上的子素材;根据所述检测到的关键点和所述播放参数的参数值,确定所述需要显示的子素材在当前视频图像上显示的位置;将所述需要显示在所述当前视频图像上的子素材显示在所述当前视频图像上的所述显示的位置上。
  43. 根据权利要求40-42任一所述的方法,其特征在于,还包括:响应于所述特效程序文件包播放完毕,通过用于调用销毁贴纸句柄的第四接口函数销毁所述贴纸句柄。
  44. 根据权利要求33-43任一所述的方法,其特征在于,所述对所述视频图像进行关键点检测,包括:通过神经网络,对所述视频图像进行所述对应关系涉及的关键点检测,并输出关键点检测结果。
  45. 根据权利要求44所述的方法,其特征在于,所述关键点检测结果包括以下任意一项或多项:所述对应关系涉及的关键点在所述视频图像中的位置;所述对应关系涉及的关键点的预设编号。
  46. 根据权利要求32-45任一所述的方法,其特征在于,所述特效程序文件包为采用如权利要求1-31任一所述的方法生成的特效程序文件包。
  47. 一种特效程序文件包的生成装置,其特征在于,包括:
    第一导入模块,用于导入一组子素材,所述一组子素材包括多个子素材;
    第一获取模块,用于获取所述一组子素材的播放参数的参数值;
    第一生成模块,用于根据所述一组子素材和所述播放参数的参数值生成特效程序文件包。
  48. 根据权利要求47所述的装置,其特征在于,所述多个子素材具有预定的播放时序。
  49. 根据权利要求48所述的装置,其特征在于,所述多个子素材的播放时序基于所述多个子素材的文件名确定。
  50. 根据权利要求47-49任一所述的装置,其特征在于,还包括:操作界面,包括操作栏;
    所述第一导入模块,用于接收通过操作栏的交互接口输入的导入指令,导入所述导入指令指向的素材文件夹中的多个子素材作为所述一组子素材。
  51. 根据权利要求50所述的装置,其特征在于,所述第一导入模块,用于:接收通过所述操作栏下播放参数设置界面中的交互接口发送的导入指令,导入所述导入指令指向的素材文件夹中的多个子素材;或者,接收通过所述操作栏的交互接口发送的选取指令,以所述选取指令选取的参考部位作为所述当前需要添加特效的目标部位,并在所述操作栏显示所述目标部位下的播放参数设置界面;接收通过所述播放参数设置界面中的交互接口发送的导入指令,导入所述导入指令指向的素材文件夹中的多个子素材。
  52. 根据权利要求50或51所述的装置,其特征在于,所述第一导入模块,用于:接收通过所述交互接口发送的导入指令,获取并显示所述导入指令指向的素材文件夹;响应于接收到对所述素材文件夹中的子素材选取操作,导入所述子素材选取操作选取的多个子素材;和/或,响应于未接收到对所述素材文件夹中的子素材选取操作,根据预先设置选取素材文件夹中的全部子素材或者部分子素材,并导入根据预先设置选取的子素材。
  53. 根据权利要求47-52任一所述的装置,其特征在于,所述特效程序文件包包括一组子素材;或者,所述特效程序文件包包括多组子素材。
  54. 根据权利要求50-53任一所述的装置,其特征在于,所述第一导入模块导入所述导入指令指向的素材文件夹中的多个子素材时,用于:响应于所述导入指令中包括所述导入指令指向的素材文件夹中的多个子素材的显示顺序,按照所述显示顺序读取并导入所述多个子素材,并在所述操作栏按照所述显示顺序显示导入的所述多个子素材的文件名;和/或,响应于所述导入指令中未包括所述导入指令指向的素材文件夹中的多个子素材的显示顺序,按照预设顺序读取并导入所述多个子素材,并在所述操作栏按照预设顺序显示导入的所述多个子素材的文件名。
  55. 根据权利要求51-54任一所述的装置,其特征在于,所述第一获取模块用于:响应于接收到通过所述播放参数设置界面中的交互接口发送的针对所述一组子素材的播放参数设置的参数值,以所述设置的参数值作为所述一组子素材的播放参数的参数值;和/或,响应于未接收到通过所述播放参数设置界面中的交互接口发送的针对所述一组子素材的播放参数设置的参数值,以预设参数值作为所述一组子素材的播放参数的参数值。
  56. 根据权利要求51-55任一所述的装置,其特征在于,所述一组子素材的播放参数包括以下任意一项或多项;其中:
    显示参数:用于表示是否显示所述多个子素材;
    间隔参数:用于表示显示所述多个子素材中相邻两个子素材间隔的帧数;
    触发动作参数:用于表示触发所述多个子素材显示的触发动作;
    循环参数:用于表示所述多个子素材的循环播放次数;
    延迟触发参数:用于表示延迟显示所述多个子素材的时间;
    触发结束参数:用于表示结束所述多个子素材显示的动作;
    显示尺寸参数:用于表示多个子素材的显示大小变化的参考依据;
    位置类型参数:用于表示多个子素材和位置的关系类型;
    位置关联参数:用于表示多个子素材是否跟随预设参考部位移动;
    位置参数:用于表示多个子素材与预设关键点之间的位置绑定关系;
    旋转参数:用于表示多个子素材旋转依据的关键点;
    美颜/美妆效果参数:用于表示显示子素材时在预设部位显示的美颜/美妆效果。
  57. 根据权利要求56所述的装置,其特征在于,所述触发动作参数对应的触发动作包括以下任意一项或多项:无动作触发,眼部动作,头部动作,眉部动作,手部动作,嘴部动作,肩部动作。
  58. 根据权利要求55或56所述的装置,其特征在于,所述位置类型参数包括以下任意一项:
    用于表示前景的参数;
    用于表示所述多个子素材跟随脸部位置进行定位和/或移动的参数;
    用于表示所述多个子素材跟随手的位置进行定位和/或移动的参数;
    用于表示所述多个子素材跟随头部的位置进行定位和/或移动的参数;
    用于表示所述多个子素材跟随肩部的位置进行定位和/或移动的参数;
    用于表示所述多个子素材跟随手臂的位置进行定位和/或移动的参数;
    用于表示所述多个子素材跟随腰部的位置进行定位和/或移动的参数;
    用于表示所述多个子素材跟随腿部的位置进行定位和/或移动的参数;
    用于表示所述多个子素材跟随脚部的位置进行定位和/或移动的参数;
    用于表示所述多个子素材跟随人体骨骼的位置进行定位和/或移动的参数;
    与参考部位相关的播放位置关系;
    用于表示背景的参数。
  59. 根据权利要求58所述的装置,其特征在于,所述与参考部位相关播放位置关系包括以下任意一项或多项:
    所述多个子素材跟随所述参考部位的位置进行移动,所述多个子素材跟随所述参考部位的大小进行缩放;
    所述多个子素材跟随所述参考部位的位置进行移动,所述多个子素材跟随所述参考部位的大小进行缩放,所述多个子素材跟随所述参考部位的旋转进行纵深缩放;
    所述多个子素材跟随所述参考部位的位置进行移动,所述多个子素材跟随所述参考部位的大小进行缩放,所述多个子素材跟随所述参考部位的旋转进行纵深缩放,所述多个子素材跟随所述参考部位的平面旋转进行旋转。
  60. 根据权利要求47-59任一所述的装置,其特征在于,所述播放参数中包括:所述一组子素材的显示位置和预定的至少一关键点之间的对应关系;
    所述关键点包括以下任意一种或多种:头部关键点,脸部关键点,肩部关键点,手臂关键点,手势关键点,腰部关键点,腿部关键点,脚部关键点,人体骨骼关键点。
  61. 根据权利要求60所述的装置,其特征在于,所述头部关键点包括以下任意一项或多项:头顶关键点,鼻尖关键点,以及下巴关键点;和/或,
    所述脸部关键点包括以下任意一项或多项:脸部轮廓关键点,眼睛关键点,眉毛关键点,鼻子关键点,嘴部关键点;和/或,
    所述肩部关键点包括以下任意一项或多项:位于肩部与头部交汇位置处的肩头交汇关键点,以及位于臂根轮廓关键点与肩头交汇关键点之间的中点位置处的肩轮廓中点关键点;和/或,
    所述手臂关键点包括以下任意一项或多项:手腕轮廓关键点,胳膊肘轮廓关键点,臂根轮廓关键点,位于手腕轮廓关键点与胳膊肘轮廓关键点之间的中点位置处的小臂轮廓中点关键点,以及位于胳膊肘轮廓关键点与臂根轮廓关键点之间的中点位置处的大臂中点关键点;和/或,
    所述手势关键点包括以下任意一项或多项:手势框的四个顶点关键点,以及手势框的中心关键点;和/或,
    所述腿部关键点包括以下任意一项或多项:裆部关键点,膝盖轮廓关键点,脚踝轮廓关键点,大腿根部外侧轮廓关键点,位于膝盖轮廓关键点与脚踝轮廓关键点之间的中点位置处的小腿轮廓中点关键点,位于膝盖内轮廓关键点与裆部关键点之间的中点位置处的大腿内轮廓中点关键点,以及位于膝盖外轮廓关键点与大腿根部外侧轮廓关键点之间的中点位置处的大腿外轮廓中点关键点;和/或,
    所述腰部关键点包括以下任意一项或多项:将大腿根部外侧轮廓关键点与臂根轮廓关键点之间N等分,所产生的N个等分点;其中,所述N大于1;和/或,
    所述脚部关键点包括以下任意一项或多项:脚尖关键点以及足跟关键点;和/或,
    所述人体骨骼关键点包括以下任意一项或多项:右肩骨骼关键点,右肘骨骼关键点,右腕骨骼关键点,左肩骨骼关键点,左肘骨骼关键点,左腕骨骼关键点,右髋骨骼关键点,右膝骨骼关键点,右踝骨骼关键点,左髋骨骼关键点,左膝骨骼关键点,左踝骨骼关键点,头顶骨骼关键点,以及脖子骨骼关键点。
  62. 根据权利要求61所述的装置,其特征在于,所述眼睛关键点包括以下任意一项或多项:左眼眶关键点,左眼瞳孔中心关键点,左眼中心关键点,右眼眶关键点,右眼瞳孔中心关键点,以及右眼中心关键点;和/或,
    所述眉毛关键点包括以下任意一项或多项:左眉毛关键点以及右眉毛关键点;和/或,
    所述鼻子关键点包括以下任意一项或多项:鼻梁关键点,鼻子下沿关键点,以及鼻子外侧轮廓关键点;和/或,
    所述嘴部关键点包括以下任意一项或多项:上嘴唇关键点,以及下嘴唇关键点。
  63. 根据权利要求47-62任一所述的装置,其特征在于,所述第一获取模块,还用于:建立所述一组子素材的显示位置和所述至少一关键点之间的对应关系;和/或,建立所述一组子素材的显示位置和检测框的中心关键点之间的对应关系。
  64. 根据权利要求60-63任一所述的装置,其特征在于,所述操作界面还包括内容显示栏,用于显示参考图像,并显示所述参考图像上的关键点;所述参考图像包括至少一个参考部位。
  65. 根据权利要求64所述的装置,其特征在于,所述参考图像包括:参考人物的至少一部分图像。
  66. 根据权利要求65所述的装置,其特征在于,所述参考人物的至少一部分图像包括所述参考人物的以下任意一项或多项的图像:完整图像,头部图像,脸部图像,肩部图像,手臂图像,手势图像,腰部图像,腿部图像,脚部图像。
  67. 根据权利要求64-66任一所述的装置,其特征在于,所述内容显示栏,还用于:根据所述一组子素材的播放参数的参数值,按照预设显示策略在所述内容显示栏依次显示导入的所述一组子素材中的每个子素材、或者同时显示导入的所述一组子素材中的多个子素材;或者,接收到对所述一组子素材中子素材的选取操作,在所述内容显示栏显示所述选取操作选取的子素材。
  68. 根据权利要求67所述的装置,其特征在于,还包括:第一更新模块,用于根据通过所述内容显示栏接收到的对所述一组子素材或其中一个子素材的位置移动操作,更新所述一组子素材在所述内容显示栏的显示位置,并对所述一组子素材的播放参数中的相应参数值进行更新。
  69. 根据权利要求67或68所述的装置,其特征在于,还包括:第二更新模块,用于根据通过所述内容显示栏接收到的对所述一组子素材或其中一个子素材的大小调整操作,更新所述一组子素材在所述内容显示栏的显示大小,并对所述一组子素材的播放参数中的相应参数值进行更新。
  70. 根据权利要求47-69任一所述的装置,其特征在于,还包括:调整模块,用于根据通过所述操作栏的交互接口接收到的针对两组或以上子素材发送的图层参数调整指令,调整所述两组或以上子素材之间的遮挡关系,并根据调整后的遮挡关系和所述播放参数的参数值显示所述两组或以上子素材。
  71. 根据权利要求47-70任一所述的装置,其特征在于,所述操作界面还包括:程序文件栏,用于根据预先设置的特效程序文件和所述一组子素材的播放参数的参数值,生成所述一组子素材的特效程序文件,并通过程序文件栏显示所述一组子素材的特效程序文件。
  72. 根据权利要求71所述的装置,其特征在于,所述特效程序文件包括:以json程序生成的特效程序文件。
  73. 根据权利要求47-72任一所述的装置,其特征在于,所述操作界面包括左侧、中部和右侧三个区域;在所述操作界面的左侧显示所述操作栏,在所述操作界面的中部显示所述内容显示栏,在所述操作界面右侧显示所述程序文件栏。
  74. 根据权利要求47-73任一所述的装置,其特征在于,还包括:保存模块,用于根据接收到的保存指令在所述保存指令指向的位置保存所述特效程序文件包。
  75. 根据权利要求74所述的装置,其特征在于,所述保存模块,用于:响应于接收到保存指令,显示保存路径选择接口和压缩接口;接收通过所述保存路径选择接口发送的保存位置;以及接收基于所述压缩接口发送的压缩方式,并根据所述压缩方式对所述子素材的特效程序文件包进行压缩,生成压缩文件包;将所述压缩文件包存储至所述保存位置指向的文件夹中。
  76. 根据权利要求47-75任一所述的装置,其特征在于,所述特效程序文件包中子素材的大小保持为所述子素材被导入前的大小。
  77. 一种特效生成装置,其特征在于,包括:
    第二获取模块,用于获取特效程序文件包中至少一组子素材的播放参数的参数值;其中,一组子素材包括多个子素材;
    第一检测模块,用于对视频图像进行关键点检测;
    第二生成模块,用于根据检测到的关键点和所述至少一组子素材的播放参数的参数值,在视频图像上生成基于所述至少一组子素材的特效。
  78. 根据权利要求77所述的装置,其特征在于,还包括:第二导入模块,用于导入所述特效程序文件包;
    所述特效程序文件包包括:至少一组子素材和所述至少一组子素材的播放参数的参数值,所述一组子素材的播放参数的参数值包括所述一组子素材的显示位置和预定的至少一关键点之间的对应关系。
  79. 根据权利要求77或78所述的装置,其特征在于,所述多个子素材具有预定的播放时序。
  80. 根据权利要求79所述的装置,其特征在于,所述播放参数包括:触发事件参数,所述触发事件参数用于表示触发所述多个子素材显示的触发事件;
    所述装置还包括:第二检测模块,用于检测所述视频图像中是否出现所述触发动作参数的参数值对应的触发动作;
    所述第二生成模块用于:响应于检测到所述视频图像中出现所述触发动作参数的参数值对应的触发动作,根据检测到的关键点和所述至少一组子素材的播放参数的参数值,在所述视频上生成基于所述至少一组子素材的特效。
  81. 根据权利要求80所述的装置,其特征在于,所述播放参数还包括:触发结束参数:所述触发结束参数用于表示结束所述多个子素材显示的动作;
    所述第二检测模块,还用于检测所述视频图像中是否出现所述触发结束参数的参数值对应的触发动作;
    所述第二生成模块,还用于响应于检测到所述视频图像中出现所述触发结束参数的参数值对应的触发动作,结束在所述当前正在播放的视频上生成所述至少一组子素材的特效。
  82. 根据权利要求80或81所述的装置,其特征在于,所述播放参数包括:美颜/美妆效果参数,所述美颜/美妆效果参数用于表示显示子素材时在预设部位显示的美颜/美妆效果;
    所述第二生成模块,还用于根据检测到的关键点和所述至少一组子素材的播放参数的参数值,在视频图像上生成基于所述至少一组子素材的特效时,根据所述美颜/美妆效果参数,在所述视频图像中的所述预设部位显示美颜/美妆效果。
  83. 根据权利要求78-82任一所述的装置,其特征在于,所述第二导入模块,用于:通过调用用于读取贴纸素材的第一接口函数,将所述特效程序文件包读入内存;解析所述动态特效程序文件包,获得所述至少一组子素材和特效程序文件,所述特效程序文件包括所述至少一组子素材的播放参数的参数值。
  84. 根据权利要求83所述的装置,其特征在于,所述特效程序文件包括:json程序的特效程序文件。
  85. 根据权利要求83或84所述的装置,其特征在于,所述第二获取模块,用于:通过用于创建贴纸句柄的第二接口函数创建贴纸句柄;读取所述至少一组子素材和所述特效程序文件包中至少一组子素材的播放参数的参数值、并存储至所述贴纸句柄中。
  86. 根据权利要求85所述的装置,其特征在于,还包括:确定模块,用于根据所述多个子素材的文件名确定所述多个子素材的播放时序;
    第三获取模块,用于根据所述贴纸句柄中所述特效程序文件中至少一组子素材的播放参数的参数值和所述多个子素材的播放时序,获取所述至少一组子素材中每个子素材在所述视频中显示的位置和视频帧数,并预先从所述视频中读取所述视频帧数对应的视频图像。
  87. 根据权利要求85或86所述的装置,其特征在于,所述第二生成模块,用于:通过用于调用渲染贴纸素材的第三接口函数,从所述贴纸句柄中读取需要显示在所述视频的当前视频图像上的子素材;根据所述检测到的关键点和所述播放参数的参数值,确定所述需要显示的子素材在当前视频图像上显示的位置;将所述需要显示在所述当前视频图像上的子素材显示在所述当前视频图像上的所述显示的位置上。
  88. 根据权利要求85-87任一所述的装置,其特征在于,所述第二获取模块,还用于响应于所述特效程序文件包播放完毕,通过用于调用销毁贴纸句柄的第四接口函数销毁所述贴纸句柄。
  89. 根据权利要求77-88任一所述的装置,其特征在于,所述第一检测模块,用于通过神经网络,对所述视频图像进行所述对应关系涉及的关键点检测,并输出关键点检测结果。
  90. 根据权利要求89所述的装置,其特征在于,所述关键点检测结果包括以下任意一项或多项:所述对应关系涉及的关键点在所述视频图像中的位置;所述对应关系涉及的关键点的预设编号。
  91. 根据权利要求77-90任一所述的装置,其特征在于,所述特效程序文件包为采用如权利要求1-31任一所述的方法或者如权利要求47-76任一所述的装置生成的特效程序文件包。
  92. 一种电子设备,其特征在于,包括:
    存储器,用于存储计算机程序;
    处理器,用于执行所述存储器中存储的计算机程序,且所述计算机程序被执行时,实现上述权利要求1-46中任一项所述的方法。
  93. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,该计算机程序被处理器执行时,实现上述权利要求1-46中任一项所述的方法。
PCT/CN2019/074503 2018-02-08 2019-02-01 特效程序文件包的生成及特效生成方法与装置、电子设备 WO2019154339A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
JP2020536227A JP7167165B2 (ja) 2018-02-08 2019-02-01 特殊効果プログラムファイルパッケージの生成及び特殊効果生成方法、装置並びに電子機器
AU2019218423A AU2019218423A1 (en) 2018-02-08 2019-02-01 Method and device for generating special effect program file package, method and device for generating special effect, and electronic device
SG11202006351VA SG11202006351VA (en) 2018-02-08 2019-02-01 Method and device for generating special effect program file package, method and device for generating special effect, and electronic device
KR1020207019275A KR102466689B1 (ko) 2018-02-08 2019-02-01 특수 효과 프로그램 파일 패키지 및 특수 효과의 생성 방법과 장치, 전자 기기
EP19750743.7A EP3751413A4 (en) 2018-02-08 2019-02-01 METHOD AND DEVICE FOR GENERATING A SET OF SPECIAL EFFECT PROGRAM FILES, METHOD AND DEVICE FOR GENERATING A SPECIAL EFFECT, AND ELECTRONIC DEVICE
US16/914,622 US11368746B2 (en) 2018-02-08 2020-06-29 Method and device for generating special effect program file package, method and device for generating special effect, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810129969.7A CN108388434B (zh) 2018-02-08 2018-02-08 特效程序文件包的生成及特效生成方法与装置、电子设备
CN201810129969.7 2018-02-08

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/914,622 Continuation US11368746B2 (en) 2018-02-08 2020-06-29 Method and device for generating special effect program file package, method and device for generating special effect, and electronic device

Publications (1)

Publication Number Publication Date
WO2019154339A1 true WO2019154339A1 (zh) 2019-08-15

Family

ID=63075383

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/074503 WO2019154339A1 (zh) 2018-02-08 2019-02-01 特效程序文件包的生成及特效生成方法与装置、电子设备

Country Status (8)

Country Link
US (1) US11368746B2 (zh)
EP (1) EP3751413A4 (zh)
JP (1) JP7167165B2 (zh)
KR (1) KR102466689B1 (zh)
CN (2) CN108388434B (zh)
AU (1) AU2019218423A1 (zh)
SG (1) SG11202006351VA (zh)
WO (1) WO2019154339A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108259496B (zh) 2018-01-19 2021-06-04 Beijing SenseTime Technology Development Co., Ltd. Method and device for generating special effect program file package, method and device for generating special effect, and electronic device
CN108388434B (zh) * 2018-02-08 2021-03-02 Beijing SenseTime Technology Development Co., Ltd. Method and device for generating special effect program file package, method and device for generating special effect, and electronic device
CN110858409A (zh) * 2018-08-24 2020-03-03 Beijing Microlive Vision Technology Co., Ltd. Animation generation method and device
CN109167936A (zh) * 2018-10-29 2019-01-08 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, terminal, and storage medium
CN110428390B (zh) * 2019-07-18 2022-08-26 Beijing Dajia Internet Information Technology Co., Ltd. Material display method and device, electronic device, and storage medium
CN113497898B (zh) 2020-04-02 2023-04-07 Douyin Vision Co., Ltd. Video special effect configuration file generation method, and video rendering method and device
CN112637518B (zh) * 2020-12-21 2023-03-24 Beijing Zitiao Network Technology Co., Ltd. Method, device, apparatus, and medium for generating a simulated-photography special effect
CN115239845A (zh) * 2021-04-25 2022-10-25 Beijing Zitiao Network Technology Co., Ltd. Method, device, apparatus, and medium for generating a special effect configuration file
CN113760161A (zh) * 2021-08-31 2021-12-07 Beijing SenseTime Technology Development Co., Ltd. Data generation and image processing methods, devices, apparatus, and storage medium
CN116225267A (zh) * 2021-11-30 2023-06-06 Beijing Bytedance Network Technology Co., Ltd. Method, device, apparatus, and storage medium for generating an image special effect package

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102801924A (zh) * 2012-07-20 2012-11-28 Hefei University of Technology Kinect-based interactive television program hosting system
CN102984465A (zh) * 2012-12-20 2013-03-20 Beijing Dayang Technology Development Inc. Program composition system and method for networked, intelligent digital media
CN107341435A (zh) * 2016-08-19 2017-11-10 Beijing SenseTime Technology Development Co., Ltd. Video image processing method and device, and terminal equipment
CN108388434A (zh) * 2018-02-08 2018-08-10 Beijing SenseTime Technology Development Co., Ltd. Method and device for generating special effect program file package, method and device for generating special effect, and electronic device

Family Cites Families (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1118005A (ja) 1997-06-26 1999-01-22 Nagano Nippon Denki Software Kk Image effect processing system and computer
GB2340358B (en) 1998-07-31 2002-11-13 Sony Uk Ltd Video special effects
GB2340360B (en) 1998-07-31 2002-11-06 Sony Uk Ltd Animation of video special effects
JP2001307123A (ja) 2000-02-18 2001-11-02 Nippon Telegr & Teleph Corp <Ntt> Method and device for creating portraits with expression deformation, portrait creation system, transmitter and receiver for the portrait creation system, and portrait creation program and recording medium storing the program
WO2002050657A1 (en) * 2000-12-19 2002-06-27 Coolernet, Inc. System and method for multimedia authoring and playback
WO2003005201A1 (fr) 2001-07-04 2003-01-16 Okyz Method and system for exporting data associated with two- or three-dimensional geometric entities
JP2003092706A (ja) * 2001-09-18 2003-03-28 Sony Corp Effect adding device, effect adding method, and effect adding program
US8737810B2 (en) 2002-11-15 2014-05-27 Thomson Licensing Method and apparatus for cropping of subtitle elements
JP2004171184A (ja) * 2002-11-19 2004-06-17 Toppan Printing Co Ltd Web server and web content distribution method
TWI227444B (en) * 2003-12-19 2005-02-01 Inst Information Industry Simulation method for make-up trial and the device thereof
JP2005242566A (ja) * 2004-02-25 2005-09-08 Canon Inc Image composition device and method
CN1564202A (zh) * 2004-03-16 2005-01-12 Wudi Technology (Xi'an) Co., Ltd. Method for generating and playing image transition animation special effects
US7903927B2 (en) * 2004-07-08 2011-03-08 Sony Corporation Editing apparatus and control method thereof, and program and recording medium
JP2006260198A (ja) * 2005-03-17 2006-09-28 Toshiba Corp Virtual makeup device, virtual makeup method, and virtual makeup program
FR2884008A1 (fr) 2005-03-31 2006-10-06 France Telecom System and method for locating points of interest in an object image using a neural network
JP4799105B2 (ja) * 2005-09-26 2011-10-26 Canon Inc Information processing device and control method thereof, computer program, and storage medium
JP4760349B2 (ja) 2005-12-07 2011-08-31 Sony Corp Image processing device, image processing method, and program
US20070153091A1 (en) 2005-12-29 2007-07-05 John Watlington Methods and apparatus for providing privacy in a communication system
JP2007257585A (ja) 2006-03-27 2007-10-04 Fujifilm Corp Image processing method, device, and program
KR20100069648A (ko) 2007-09-12 2010-06-24 Hubase-i, Inc. Flash file generation system and original image information generation system
US20110289455A1 (en) * 2010-05-18 2011-11-24 Microsoft Corporation Gestures And Gesture Recognition For Manipulating A User-Interface
JP2012113677A (ja) * 2010-11-05 2012-06-14 Aitia Corp Information processing device and information processing program
CN102567031A (zh) 2012-03-01 2012-07-11 Shengle Information Technology (Shanghai) Co., Ltd. Video special effect extension method and system
US8824793B2 (en) 2012-03-02 2014-09-02 Adobe Systems Incorporated Methods and apparatus for applying a bokeh effect to images
CN102760303A (zh) * 2012-07-24 2012-10-31 Nanjing Shikun Culture Media Co., Ltd. Shooting technique and embedding method for virtual reality dynamic scene video
US9076247B2 (en) 2012-08-10 2015-07-07 Ppg Industries Ohio, Inc. System and method for visualizing an object in a simulated environment
WO2014028009A1 (en) 2012-08-15 2014-02-20 Empire Technology Development Llc Digital media privacy protection
CN104637078B (zh) 2013-11-14 2017-12-15 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and device
US9501701B2 (en) * 2014-01-31 2016-11-22 The Charles Stark Draper Technology, Inc. Systems and methods for detecting and tracking objects in a video stream
CN103928039B (zh) * 2014-04-15 2016-09-21 Beijing QIYI Century Science & Technology Co., Ltd. Video synthesis method and device
CN104967893B (zh) * 2014-07-10 2019-03-29 Tencent Technology (Beijing) Co., Ltd. Video generation method and device for a portable electronic device
CN105451090B (zh) * 2014-08-26 2019-03-29 Lenovo (Beijing) Co., Ltd. Image processing method and image processing device
EP2993895A1 (en) * 2014-09-05 2016-03-09 Canon Kabushiki Kaisha Image capturing apparatus and control method therefor
CN104394331A (zh) * 2014-12-05 2015-03-04 Xiamen Meitu Technology Co., Ltd. Video processing method for adding matching sound effects to picture video
CN104469179B (zh) * 2014-12-22 2017-08-04 Hangzhou Duanqu Network Media Technology Co., Ltd. Method for combining dynamic pictures into mobile phone video
CN104778712B (zh) * 2015-04-27 2018-05-01 Xiamen Meitu Technology Co., Ltd. Affine-transformation-based face mapping method and system
CN106296781B (zh) 2015-05-27 2020-09-22 Shenzhen SuperD Technology Co., Ltd. Special effect image generation method and electronic device
JP6754619B2 (ja) 2015-06-24 2020-09-16 Samsung Electronics Co., Ltd. Face recognition method and device
CN105975935B (zh) * 2016-05-04 2019-06-25 Tencent Technology (Shenzhen) Co., Ltd. Face image processing method and device
CN106097417B (zh) 2016-06-07 2018-07-27 Tencent Technology (Shenzhen) Co., Ltd. Theme generation method, device, and equipment
CN106101576B (zh) 2016-06-28 2019-07-16 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Augmented reality photo shooting method, device, and mobile terminal
CN106231205B (zh) 2016-08-10 2019-07-30 Suzhou Black Box Intelligent Technology Co., Ltd. Augmented reality mobile terminal
CN106341720B (zh) * 2016-08-18 2019-07-26 Beijing Qihoo Technology Co., Ltd. Method and device for adding facial special effects in live video streaming
CN106373170A (zh) 2016-08-31 2017-02-01 Beijing Yuntu Weidong Technology Co., Ltd. Video production method and device
EP3321846A1 (en) 2016-11-15 2018-05-16 Mastercard International Incorporated Systems and methods for secure biometric sample raw data storage
US10733699B2 (en) * 2017-10-24 2020-08-04 Deep North, Inc. Face replacement and alignment
CN108259496B (zh) * 2018-01-19 2021-06-04 Beijing SenseTime Technology Development Co., Ltd. Method and device for generating special effect program file package, method and device for generating special effect, and electronic device
US20190236547A1 (en) * 2018-02-01 2019-08-01 Moxtra, Inc. Record and playback for online collaboration sessions

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3751413A4

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591267A (zh) * 2021-06-17 2021-11-02 Dongfeng Motor Group Co., Ltd. Method and device for analyzing the mounting strength of a gearbox housing
CN113591267B (zh) * 2021-06-17 2023-12-19 Dongfeng Motor Group Co., Ltd. Method and device for analyzing the mounting strength of a gearbox housing

Also Published As

Publication number Publication date
AU2019218423A1 (en) 2020-09-24
JP2021508883A (ja) 2021-03-11
KR102466689B1 (ko) 2022-11-14
KR20200093034A (ko) 2020-08-04
US20200329272A1 (en) 2020-10-15
CN108388434A (zh) 2018-08-10
CN108388434B (zh) 2021-03-02
EP3751413A1 (en) 2020-12-16
US11368746B2 (en) 2022-06-21
JP7167165B2 (ja) 2022-11-08
EP3751413A4 (en) 2021-04-07
SG11202006351VA (en) 2020-07-29
CN112860168B (zh) 2022-08-02
CN112860168A (zh) 2021-05-28

Similar Documents

Publication Publication Date Title
WO2019154339A1 (zh) Method and device for generating special effect program file package, method and device for generating special effect, and electronic device
WO2019141126A1 (zh) Method and device for generating special effect program file package, method and device for generating special effect, and electronic device
WO2019154337A1 (zh) Method and device for generating deformation special effect program file package and for generating deformation special effect
CN108711180B (zh) Method and device for generating makeup and/or face-changing special effect program file package and for generating makeup and/or face-changing special effect
WO2019154338A1 (zh) Method and device for generating stroke special effect program file package and for generating stroke special effect
CN109035373B (zh) Method and device for generating three-dimensional special effect program file package and for generating three-dimensional special effect
US20220327755A1 (en) Artificial intelligence for capturing facial expressions and generating mesh data
CN116437137B (zh) Live streaming processing method and device, electronic device, and storage medium
US20230063681A1 (en) Dynamic augmentation of stimuli based on profile of user
TW202247107A (zh) Facial capture artificial intelligence for training models
CN115272631A (zh) Virtual face model processing method and related device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19750743

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020536227

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20207019275

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019750743

Country of ref document: EP

Effective date: 20200907

ENP Entry into the national phase

Ref document number: 2019218423

Country of ref document: AU

Date of ref document: 20190201

Kind code of ref document: A