CN110490956A - Dynamic effect material generation method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN110490956A
CN110490956A
Authority
CN
China
Prior art keywords
dynamic effect
effect material
key frame
data
facial expressions
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910750943.9A
Other languages
Chinese (zh)
Inventor
李亚男
肖明凯
黄群互
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kingsoft Internet Security Software Co Ltd
Original Assignee
Beijing Kingsoft Internet Security Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kingsoft Internet Security Software Co Ltd
Priority to CN201910750943.9A
Publication of CN110490956A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application proposes a dynamic effect material generation method, a device, electronic equipment, and a storage medium. The method includes: capturing a real-person video; obtaining multiple key frames from the video; for each key frame, performing image recognition on the key frame to obtain the expression and action data of the person in it, the data including expression data and action data; imitatively drawing the expression and action of a preset character icon according to that data to obtain the icon's expression and action data; and combining the icon expression and action data corresponding to each key frame to generate dynamic effect material. By taking the expression and action data in a real-person video as the reference for imitative drawing with a character icon, and combining the resulting icon data into dynamic effect material, the method improves animation production efficiency and saves the labor and time cost of developing character animations.

Description

Dynamic effect material generation method, device, electronic equipment and storage medium
Technical field
This application relates to the field of artificial intelligence, and in particular to a dynamic effect material generation method and device, electronic equipment, and a computer-readable storage medium.
Background art
With the development of artificial intelligence, intelligent terminals have become increasingly common in people's work and life. For example, children's home-education learning machines with voice-interaction capability have large amounts of built-in audio-video content, voice skills, and applications that satisfy children's entertainment and learning needs. Dynamic effects based on an IP character are popular with children and work well for both education and entertainment, but designing a dedicated dynamic effect script for the IP character scene by scene makes developing its dynamic effects costly and yields a low utilization rate.
Summary of the invention
The application is intended to solve at least one of the above technical problems to some extent.
To this end, the first objective of the application is to propose a dynamic effect material generation method that, based on the expression and action data in a real-person video, combines expression and action data with a character icon to generate dynamic effect material, improving animation production efficiency and saving the labor and time cost of character animation development.
The second objective of the application is to propose a dynamic effect material generation device.
The third objective of the application is to propose an electronic device.
The fourth objective of the application is to propose a computer-readable storage medium.
To achieve the above objectives, an embodiment of the first aspect of the application proposes a dynamic effect material generation method, including: capturing a real-person video; obtaining multiple key frames from the real-person video; for each of the multiple key frames, performing image recognition on the key frame to obtain the expression and action data of the person in the key frame, the expression and action data including expression data and action data; imitatively drawing the expression and action of a preset character icon according to the expression and action data of the person in the key frame, to obtain the icon's expression and action data; and combining the character-icon expression and action data corresponding to each of the multiple key frames to generate dynamic effect material.
In the dynamic effect material generation method of the embodiment, multiple key frames are obtained from a real-person video; image recognition is performed on each key frame to obtain the person's expression and action data, including expression data and action data; the expression and action of a preset character icon are drawn in imitation of that data to obtain the icon's expression and action data; and the icon data corresponding to the key frames are combined to generate dynamic effect material. Because the combination is driven by the expression and action data of a real-person video together with a character icon, the method improves animation production efficiency and saves the labor and time cost of character animation development.
According to one embodiment of the application, after the character-icon expression and action data corresponding to each of the multiple key frames are combined and the dynamic effect material is generated, the method further includes: displaying the dynamic effect material;
and, when an adjustment action on the dynamic effect material is received, adjusting the material according to that action.
According to one embodiment of the application, after the character-icon expression and action data corresponding to each of the multiple key frames are combined and the dynamic effect material is generated, the method further includes: obtaining the scene of the real-person video; determining the scene of the real-person video as the scene corresponding to the dynamic effect material; and storing the dynamic effect material together with its corresponding scene in a material database.
According to one embodiment of the application, the method further includes: receiving a dynamic effect material acquisition request from a terminal device, the request including an identifier of a first scene; querying the material database according to the identifier of the first scene to obtain the requested dynamic effect material; and providing the requested material to the terminal device, so that the terminal device generates a character animation of the first scene from the requested material and a preset character icon.
According to one embodiment of the application, the material database further includes an audio material corresponding to each scene, and the method further includes: querying the material database according to the identifier of the first scene to obtain the requested audio material; and providing the requested audio material to the terminal device, so that the terminal device generates a character animation carrying audio from the requested dynamic effect material, the requested audio material, and the preset character icon.
According to one embodiment of the application, the expression and action data of the character icon include action data for each motion unit of the icon's face and action data for each motion unit of the icon's limbs; the action data of a motion unit is the difference data between its current action and a preset action.
To achieve the above objectives, an embodiment of the second aspect of the application proposes a dynamic effect material generation device, including: a capture module for capturing a real-person video; an acquisition module for obtaining multiple key frames from the real-person video; an image recognition module for performing image recognition on each of the multiple key frames to obtain the expression and action data of the person in the key frame, the data including expression data and action data; a drawing module for imitatively drawing the expression and action of a preset character icon according to the expression and action data of the person in the key frame, to obtain the icon's expression and action data; and a generation module for combining the character-icon expression and action data corresponding to each of the multiple key frames to generate dynamic effect material.
In the dynamic effect material generation device of the embodiment, multiple key frames are obtained from a real-person video; image recognition is performed on each key frame to obtain the person's expression and action data, including expression data and action data; the expression and action of a preset character icon are drawn in imitation of that data to obtain the icon's expression and action data; and the icon data corresponding to the key frames are combined to generate dynamic effect material. The device can thus combine expression and action data from a real-person video with a character icon to generate dynamic effect material, improving animation production efficiency and saving the labor and time cost of character animation development.
According to one embodiment of the application, the device further includes a display module and an adjustment module; the display module displays the dynamic effect material, and the adjustment module adjusts the material according to an adjustment action when such an action on the material is received.
According to one embodiment of the application, the device further includes a determining module and a storage module; the acquisition module is also used to obtain the scene of the real-person video; the determining module determines the scene of the real-person video as the scene corresponding to the dynamic effect material; and the storage module stores the dynamic effect material together with its corresponding scene in a material database.
According to one embodiment of the application, the device further includes a receiving module and a providing module. The receiving module receives a dynamic effect material acquisition request from a terminal device, the request including an identifier of a first scene; the acquisition module is also used to query the material database according to the identifier of the first scene to obtain the requested dynamic effect material; and the providing module provides the requested material to the terminal device, so that the terminal device generates a character animation of the first scene from the requested material and a preset character icon.
According to one embodiment of the application, the material database further includes an audio material corresponding to each scene; the acquisition module is also used to query the material database according to the identifier of the first scene to obtain the requested audio material; and the providing module is also used to provide the requested audio material to the terminal device, so that the terminal device generates a character animation carrying audio from the requested dynamic effect material, the requested audio material, and the preset character icon.
According to one embodiment of the application, the expression and action data of the character icon include action data for each motion unit of the icon's face and action data for each motion unit of the icon's limbs; the action data of a motion unit is the difference data between its current action and a preset action.
To achieve the above objectives, an embodiment of the third aspect of the application proposes an electronic device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the program, it implements the dynamic effect material generation method described in the first-aspect embodiment of the application.
To achieve the above objectives, an embodiment of the fourth aspect of the application proposes a computer-readable storage medium; when instructions in the storage medium are executed by a processor, the dynamic effect material generation method described in the first-aspect embodiment of the application is implemented.
Additional aspects and advantages of the application are set forth in part in the following description; they will partly become apparent from that description or be learned through practice of the application.
Brief description of the drawings
The above and additional aspects and advantages of the application will become apparent and easy to understand from the following description of embodiments with reference to the accompanying drawings, in which:
Fig. 1 is a flowchart of the dynamic effect material generation method provided by one embodiment of the application;
Fig. 2 is a flowchart of the dynamic effect material generation method provided by another embodiment of the application;
Fig. 3 is a flowchart of the dynamic effect material generation method provided by yet another embodiment of the application;
Fig. 4 is a structural diagram of the dynamic effect material generation device provided by one embodiment of the application;
Fig. 5 is a structural diagram of the dynamic effect material generation device provided by another embodiment of the application;
Fig. 6 is a structural diagram of the dynamic effect material generation device provided by yet another embodiment of the application;
Fig. 7 is a structural diagram of the dynamic effect material generation device provided by a further embodiment of the application;
Fig. 8 is a block diagram of an example electronic device suitable for implementing embodiments of the application.
Detailed description of embodiments
Embodiments of the application are described in detail below, with examples shown in the accompanying drawings, where identical or similar reference numerals denote identical or similar elements, or elements with identical or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary and intended to explain the application; they should not be understood as limiting it.
The dynamic effect material generation method and device, electronic equipment, and storage medium of the embodiments of the application are described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a dynamic effect material generation method provided by an embodiment of the application. As shown in Fig. 1, the method includes the following steps:
Step 101: capture a real-person video.
In the embodiment of the application, to improve the authority and authenticity of the video, the real-person video may be obtained by recording or collecting teaching video material of real experts or teachers.
Step 102: obtain multiple key frames from the real-person video.
Specifically, to better obtain the relevant data in the real-person video, after the video is captured, key-frame extraction may be performed on it according to a preset algorithm to obtain multiple key frames. The preset algorithm may include, but is not limited to, video key-frame extraction algorithms on the OpenCV platform based on inter-frame difference, content, clustering, or optical flow.
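As a minimal sketch of the inter-frame-difference idea mentioned above (not the patent's actual implementation), the code below treats frames as flat grayscale pixel lists and keeps a frame as a key frame whenever its mean absolute difference from the last kept frame exceeds a threshold; the frame layout and threshold are assumptions made for illustration.

```python
def extract_key_frames(frames, threshold=10.0):
    """Select key frames by mean absolute inter-frame difference.

    frames: list of frames, each a flat list of grayscale pixel values.
    Returns the indices of the selected key frames; frame 0 is always kept.
    """
    if not frames:
        return []
    key_indices = [0]
    last = frames[0]
    for i, frame in enumerate(frames[1:], start=1):
        # Mean absolute pixel difference against the last kept key frame.
        diff = sum(abs(a - b) for a, b in zip(frame, last)) / len(frame)
        if diff > threshold:  # large visual change -> new key frame
            key_indices.append(i)
            last = frame
    return key_indices

# Three toy "frames": the second barely changes, the third changes a lot.
frames = [[0] * 16, [1] * 16, [200] * 16]
print(extract_key_frames(frames))  # [0, 2]
```

A production pipeline would instead decode the video (e.g., with OpenCV) and apply the same selection rule per decoded frame.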
Step 103: for each of the multiple key frames, perform image recognition on the key frame to obtain the expression and action data of the person in it; the expression and action data include expression data and action data.
Further, after the real-person video is captured and its multiple key frames obtained, image recognition may be performed on each key frame to obtain the person's expression and action data, that is, to capture facial expression and action data, where the expression and action data may include expression data and action data.
Step 104: according to the expression and action data of the person in the key frame, imitatively draw the expression and action of a preset character icon to obtain the icon's expression and action data.
Specifically, to generate a character animation from the expressions and actions in the real-person video, after the person's expression and action data are obtained for a key frame, the expression and action of a preset character icon may be drawn in imitation of that data, yielding the icon's expression and action data. The icon's expression and action data may include action data for each motion unit of the icon's face and for each motion unit of its limbs; the action data of a motion unit is the difference data between its current action and a preset action.
That is, for convenient expression of the character icon's expression and action data, the action data of each facial and limb motion unit can be used to represent it, with the action data of a motion unit being the difference data between its current action and a preset action. Taking a hand as an example, the preset action may be the hand hanging down and the current action the hand raised; the motion unit's action data is then the difference in position and angle between the raised hand and the hanging hand. Combining a motion unit's difference data with the preset-action data yields the icon's current action data. The character icon can be used for interaction between the terminal device and the user, for example the "Baobao Long" (leopard-dragon) character.
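The motion-unit representation described above can be sketched as follows; the specific fields (position and angle) and the hand example follow the text, but the data layout is an assumption, not the patent's actual format.

```python
from dataclasses import dataclass

@dataclass
class UnitAction:
    """Pose of one motion unit (e.g., a hand): position and angle."""
    x: float
    y: float
    angle_deg: float

def action_delta(current: UnitAction, preset: UnitAction) -> UnitAction:
    """Difference data: the current action minus the preset action."""
    return UnitAction(current.x - preset.x, current.y - preset.y,
                      current.angle_deg - preset.angle_deg)

def apply_delta(preset: UnitAction, delta: UnitAction) -> UnitAction:
    """Recover the current action from the preset action plus the delta."""
    return UnitAction(preset.x + delta.x, preset.y + delta.y,
                      preset.angle_deg + delta.angle_deg)

# Preset: hand hanging down; current: hand raised.
hanging = UnitAction(x=0.0, y=-1.0, angle_deg=0.0)
raised = UnitAction(x=0.0, y=1.0, angle_deg=90.0)
delta = action_delta(raised, hanging)
print(delta)                                   # UnitAction(x=0.0, y=2.0, angle_deg=90.0)
print(apply_delta(hanging, delta) == raised)   # True
```

Storing only deltas against a shared preset pose keeps per-key-frame data small, which matches the text's motivation of "convenient expression."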
Step 105: combine the character-icon expression and action data corresponding to each of the multiple key frames to generate dynamic effect material.
Understandably, to generate the dynamic effect material of the character icon, once the icon expression and action data corresponding to each of the multiple key frames have been obtained, they can be combined for animation driving, generating the icon's dynamic effect material.
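One way to picture the "combining" step is an ordered sequence of icon poses with timestamps that an animation driver can play back or interpolate between; the timing model, frame spacing, and pose structure below are assumptions for illustration, not the patent's format.

```python
def build_material(keyframe_poses, fps=25.0, key_interval=10):
    """Combine per-key-frame icon poses into an ordered animation material.

    keyframe_poses: list of pose dicts, one per key frame (in video order).
    key_interval: assumed number of source-video frames between key frames.
    Returns a list of (timestamp_seconds, pose) pairs.
    """
    material = []
    for i, pose in enumerate(keyframe_poses):
        timestamp = i * key_interval / fps  # position the pose on a timeline
        material.append((timestamp, pose))
    return material

poses = [{"mouth": "closed"}, {"mouth": "open"}, {"mouth": "closed"}]
for t, pose in build_material(poses):
    print(f"{t:.1f}s -> {pose}")
# 0.0s -> {'mouth': 'closed'}
# 0.4s -> {'mouth': 'open'}
# 0.8s -> {'mouth': 'closed'}
```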
In addition, to further improve the accuracy of the dynamic effect material, the following steps may be performed after step 105: display the dynamic effect material; and, when an adjustment action on the material is received, adjust it according to that action. After the material is displayed, it can also be adjusted and accepted by related personnel, such as animators. Further, optionally, to facilitate subsequent retrieval of the material by related personnel and improve animation production efficiency, as shown in Fig. 2, after the dynamic effect material is generated, the material and its corresponding scene can be stored in a material database. The specific steps are as follows:
Step 201: obtain the scene of the real-person video.
Step 202: determine the scene of the real-person video as the scene corresponding to the dynamic effect material.
Step 203: store the dynamic effect material together with its corresponding scene in the material database.
It can be understood from the above embodiments that the dynamic effect material is generated from the expression and action data of the person in the multiple key frames of the real-person video, so the scene of the video can correspondingly be identified as the scene of the material, after which both are stored in the material database. The scene types of a real-person video may include, but are not limited to: time scenes (e.g., morning, evening), life scenes (e.g., washing up, brushing teeth, reading, listening to songs, playing ball, eating, sleeping), weather scenes (e.g., sunny, rainy, hazy, windy, snowy), chat scenes (e.g., tell a joke, do a dance, "what little surprise have you prepared", "who are you"), teaching scenes (e.g., course teaching, course tutoring), and festival scenes (e.g., birthdays, holidays).
As an example, when the dynamic effect material and its corresponding scene are stored in the material database, scene identifiers and material identifiers can be stored in one-to-one correspondence. For instance, an ID can be configured for each dynamic effect material and each scene, so that the corresponding material can be queried by scene ID, or the required material quickly located by material ID.
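The one-to-one ID correspondence just described can be sketched with two dictionaries; the IDs, scene names, and payload shape are invented for illustration only.

```python
class MaterialLibrary:
    """Material database indexed both by scene ID and by material ID."""

    def __init__(self):
        self._by_scene = {}    # scene_id -> material_id (one-to-one)
        self._materials = {}   # material_id -> material payload

    def store(self, scene_id, material_id, material):
        self._by_scene[scene_id] = material_id
        self._materials[material_id] = material

    def by_scene(self, scene_id):
        """Query the material corresponding to a scene ID."""
        return self._materials.get(self._by_scene.get(scene_id))

    def by_material_id(self, material_id):
        """Quickly locate a material directly by its own ID."""
        return self._materials.get(material_id)

lib = MaterialLibrary()
lib.store(scene_id="scene-morning", material_id="mat-001",
          material={"poses": ["wave", "smile"]})
print(lib.by_scene("scene-morning"))  # {'poses': ['wave', 'smile']}
```

Either key resolves to the same stored record, mirroring the two query paths (by scene ID or by material ID) the text describes.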
In addition, as shown in Fig. 3, after the dynamic effect material and its corresponding scene are stored in the material database, the material corresponding to a scene can be obtained by its scene identifier, and a terminal device can generate the scene's character animation from the obtained material and a preset character icon. The specific steps are as follows:
Step 301: receive a dynamic effect material acquisition request from a terminal device, the request including an identifier of a first scene.
Step 302: query the material database according to the identifier of the first scene to obtain the requested dynamic effect material.
Step 303: provide the requested dynamic effect material to the terminal device, so that the terminal device generates a character animation of the first scene from the requested material and a preset character icon.
In the embodiment of the application, the server can receive the dynamic effect material acquisition request sent by the terminal device, the request including the identifier of the first scene, and query the material database by that identifier to obtain the requested dynamic effect material. The material is then provided to the terminal device, which generates the character animation of the first scene from it and the preset character icon. The identifier of the first scene may be, but is not limited to, a scene ID.
As an example, the terminal device sends a scene ID to obtain the dynamic effect material corresponding to that scene; the server queries the material database by the scene ID and provides the obtained material to the terminal device, which, on the Unity platform, combines the obtained material with the preset character icon to produce the character animation for that scene.
To make the character animation richer, the material database may also include an audio material corresponding to each scene. Optionally, the material database is queried by the identifier of the first scene to obtain the requested audio material, which is provided to the terminal device so that it can generate a character animation carrying audio from the requested dynamic effect material, the requested audio material, and the preset character icon.
That is, the server can query the material database by the identifier of the first scene for both the dynamic effect material and the audio material corresponding to that scene and provide both to the terminal device, which combines the scene's material and audio with the preset character icon to generate the scene's character animation carrying audio, for example a character animation with background music.
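The request flow of steps 301-303, extended with the per-scene audio material, might look like the server-side sketch below; the database layout, field names, and response structure are assumptions, not the patent's protocol.

```python
def handle_material_request(material_db, audio_db, scene_id):
    """Serve a terminal's acquisition request: look up the dynamic effect
    material and the audio material for the given scene identifier."""
    motion = material_db.get(scene_id)
    audio = audio_db.get(scene_id)
    if motion is None:
        return {"ok": False, "error": f"no material for scene {scene_id}"}
    # The terminal combines these with its preset character icon
    # (e.g., on the Unity platform) to render the animation with audio.
    return {"ok": True, "motion": motion, "audio": audio}

material_db = {"scene-birthday": {"poses": ["clap", "jump"]}}
audio_db = {"scene-birthday": "happy_birthday.mp3"}
resp = handle_material_request(material_db, audio_db, "scene-birthday")
print(resp["ok"], resp["audio"])  # True happy_birthday.mp3
```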
In the dynamic effect material generation method of the embodiment, multiple key frames are obtained from a real-person video; image recognition is performed on each key frame to obtain the person's expression and action data, including expression data and action data; the expression and action of a preset character icon are drawn in imitation of that data to obtain the icon's expression and action data; and the icon data corresponding to the key frames are combined to generate dynamic effect material. The method improves animation production efficiency and saves the labor and time cost of character animation development; at the same time, related personnel can flexibly call the material database to generate character animations according to scene requirements.
Corresponding to the dynamic effect material generation methods provided by the above embodiments, an embodiment of the application also provides a dynamic effect material generation device. Since the device corresponds to the methods, the embodiments of the methods also apply to the device provided in this embodiment and are not described in detail here. Fig. 4 is a structural diagram of a dynamic effect material generation device according to one embodiment of the application. As shown in Fig. 4, the device includes: a capture module 410, an acquisition module 420, an image recognition module 430, a drawing module 440, and a generation module 450.
Specifically, the capture module 410 captures a real-person video; the acquisition module 420 obtains multiple key frames from the real-person video; the image recognition module 430 performs image recognition on each of the multiple key frames to obtain the expression and action data of the person in the key frame, the data including expression data and action data; the drawing module 440 imitatively draws the expression and action of a preset character icon according to the expression and action data of the person in the key frame, to obtain the icon's expression and action data; and the generation module 450 combines the character-icon expression and action data corresponding to each of the multiple key frames to generate dynamic effect material.
As one possible implementation of the embodiment of the application, as shown in Fig. 5, on the basis of Fig. 4 the dynamic effect material generation device may further include a display module 460 and an adjustment module 470.
The display module 460 displays the dynamic effect material, and the adjustment module 470 adjusts the material according to an adjustment action when such an action on the material is received.
As one possible implementation of the embodiment of the application, as shown in Fig. 6, on the basis of Fig. 4 the dynamic effect material generation device may further include a determining module 480 and a storage module 490.
Specifically, the acquisition module 420 is also used to obtain the scene of the real-person video; the determining module 480 determines the scene of the real-person video as the scene corresponding to the dynamic effect material; and the storage module 490 stores the dynamic effect material together with its corresponding scene in a material database.
As one possible implementation of the embodiment of the application, as shown in Fig. 7, on the basis of Fig. 6 the dynamic effect material generation device may further include a receiving module 4100 and a providing module 4110.
Specifically, the receiving module 4100 receives a dynamic effect material acquisition request from a terminal device, the request including an identifier of a first scene; the acquisition module 420 is also used to query the material database according to the identifier of the first scene to obtain the requested dynamic effect material; and the providing module 4110 provides the requested material to the terminal device, so that the terminal device generates a character animation of the first scene from the requested material and a preset character icon.
As a possible implementation of this embodiment of the present application, the material database further contains an audio material corresponding to each scene. The obtaining module 420 is further configured to query the material database according to the identifier of the first scene, to obtain the requested audio material; the providing module 4110 is further configured to provide the requested audio material to the terminal device, so that the terminal device generates an avatar animation carrying audio according to the requested dynamic effect material, the requested audio material, and the preset avatar icon.
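The request-handling path described above can be sketched as follows. The database layout (one material and one audio clip per scene) and all names (`handle_request`, `scene_id`, `happy.mp3`) are assumptions made for illustration; the disclosure only requires that the material database be queried by the first scene's identifier and that both the dynamic effect material and the audio material be provided to the terminal device:

```python
# Hypothetical server-side handling of a dynamic-effect-material request.
material_db = {"birthday": {"material": {"frames": [...]}, "audio": "happy.mp3"}}

def handle_request(request):
    scene_id = request["scene_id"]   # identifier of the first scene
    entry = material_db.get(scene_id)
    if entry is None:
        return None                  # no material stored for this scene
    # Both the dynamic effect material and the scene's audio material are
    # returned, so the terminal can render an avatar animation with sound.
    return {"material": entry["material"], "audio": entry["audio"]}

resp = handle_request({"scene_id": "birthday"})
print(resp["audio"])  # happy.mp3
```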
As a possible implementation of this embodiment of the present application, the facial-expression-and-action data of the avatar icon include action data for each action unit of the avatar icon's face and action data for each action unit of the avatar icon's limbs; the action data include difference data between the action unit's current action and a preset action.
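The per-action-unit difference data can be sketched numerically. Encoding each action unit's pose as a single number, and the particular unit names and preset values, are assumptions for illustration; the disclosure states only that the action data hold the difference between a unit's current action and a preset action:

```python
# Hypothetical neutral (preset) pose per action unit of the face and limbs.
PRESET = {"mouth": 0.0, "left_eye": 0.0, "left_arm": 10.0}

def difference_data(current):
    """For each face/limb action unit, return current action minus the
    preset action, i.e. the difference data carried in the action data."""
    return {unit: current[unit] - PRESET[unit] for unit in current}

diff = difference_data({"mouth": 0.4, "left_eye": 0.1, "left_arm": 35.0})
print(diff["left_arm"])  # 25.0
```

Storing differences rather than absolute poses lets the same material drive any avatar icon whose action units share the preset reference.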
With the dynamic effect material generating apparatus of this embodiment of the present application, a plurality of key frames are obtained from a real-person video; for each of the key frames, image recognition is performed to obtain the facial-expression-and-action data of the real person in the key frame, the data including expression data and action data; the expression and action of a preset avatar icon are traced according to those data, to obtain the facial-expression-and-action data of the avatar icon; and the avatar-icon data corresponding to the key frames are combined to generate a dynamic effect material. The apparatus thus combines facial-expression-and-action data extracted from a real-person video with an avatar icon to generate dynamic effect materials, which improves animation production efficiency and saves the labor and time cost of avatar animation development; moreover, relevant personnel can flexibly query the material database by scene to generate avatar animations on demand.
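The overall flow just summarized can be sketched in Python. Every name here (`extract_key_frames`, `recognize_expression_and_action`, `map_to_avatar`, the sampling step, and the frame annotations) is a hypothetical stand-in; in particular, the recognition step would in practice run a face/pose model rather than pass annotations through:

```python
def extract_key_frames(video_frames, step=5):
    """Sample every `step`-th frame as a key frame (one possible strategy)."""
    return video_frames[::step]

def recognize_expression_and_action(frame):
    """Stand-in for image recognition: yields the real person's
    expression data and action data for one key frame."""
    return {"expression": frame["expression"], "action": frame["action"]}

def map_to_avatar(person_data):
    """Trace the real person's expression and action onto the preset
    avatar icon, yielding the avatar's expression/action data."""
    return {"avatar_expression": person_data["expression"],
            "avatar_action": person_data["action"]}

def generate_material(video_frames):
    key_frames = extract_key_frames(video_frames)
    per_frame = [map_to_avatar(recognize_expression_and_action(f))
                 for f in key_frames]
    # Combining the per-frame avatar data yields the dynamic effect material.
    return {"frames": per_frame}

# Toy input: each "frame" carries a mock annotation instead of pixels.
video = [{"expression": f"e{i}", "action": f"a{i}"} for i in range(10)]
material = generate_material(video)
print(len(material["frames"]))  # 2 key frames sampled from 10 frames
```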
To implement the above embodiments, an embodiment of the present application further provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the dynamic effect material generation method described in the embodiments of the present application.
Fig. 8 shows a block diagram of an example electronic device suitable for implementing embodiments of the present application. The electronic device 12 shown in Fig. 8 is merely an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
As shown in Fig. 8, the electronic device 12 takes the form of a general-purpose computing device. Components of the electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 connecting the different system components (including the system memory 28 and the processing unit 16).
The bus 18 represents one or more of several classes of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The electronic device 12 typically includes a variety of computer-system-readable media. These media may be any available media accessible by the electronic device 12, including volatile and non-volatile media and removable and non-removable media.
The memory 28 may include computer-system-readable media in the form of volatile memory, such as a random access memory (RAM) 30 and/or a cache memory 62. The electronic device 12 may further include other removable/non-removable, volatile/non-volatile computer-system storage media. By way of example only, a storage system 64 may be used to read from and write to non-removable, non-volatile magnetic media (not shown in Fig. 8, commonly referred to as a "hard drive"). Although not shown in Fig. 8, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disc drive for reading from and writing to a removable non-volatile optical disc (e.g., a compact disc read-only memory (CD-ROM), a digital versatile disc read-only memory (DVD-ROM), or other optical media), may be provided. In these cases, each drive may be connected to the bus 18 through one or more data media interfaces. The memory 28 may include at least one program product having a set of (e.g., at least one) program modules configured to carry out the functions of the embodiments of the present application.
A program/utility 40 having a set of (at least one) program modules 42 may be stored, for example, in the memory 28. Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a networking environment. The program modules 42 generally carry out the functions and/or methods of the embodiments described herein.
The electronic device 12 may also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, a display 24), with one or more devices that enable a user to interact with the electronic device 12, and/or with any device (e.g., a network card or modem) that enables the electronic device 12 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 52. Moreover, the electronic device 12 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via a network adapter 20. As shown, the network adapter 20 communicates with the other modules of the electronic device 12 over the bus 18. It should be understood that, although not shown in the drawings, other hardware and/or software modules may be used in conjunction with the electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
The processing unit 16 executes various functional applications and data processing by running the programs stored in the system memory 28, for example implementing the dynamic effect material generation method of the foregoing embodiments.
To implement the above embodiments, an embodiment of the present application further provides a computer-readable storage medium having a computer program stored thereon, which, when executed by a processor, implements the dynamic effect material generation method described in the above embodiments.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, provided they do not contradict one another, those skilled in the art may combine and integrate the different embodiments or examples, and the features of the different embodiments or examples, described in this specification.
In addition, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the technical features indicated. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality" means at least two, for example two, three, etc., unless specifically defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code including one or more executable instructions for implementing the steps of a custom logic function or process; and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in the reverse order depending on the functionality involved, as should be understood by those skilled in the art to which the embodiments of the present application pertain.
The logic and/or steps represented in a flowchart or otherwise described herein, for example an ordered list of executable instructions for implementing logical functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from the instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that parts of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art may be used: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
Those of ordinary skill in the art will understand that all or part of the steps carried by the methods of the above embodiments may be completed by instructing the relevant hardware through a program, which may be stored in a computer-readable storage medium and which, when executed, includes one of, or a combination of, the steps of the method embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, may exist physically separately, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like. Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and are not to be construed as limitations of the present application; those of ordinary skill in the art may change, modify, replace, and vary the above embodiments within the scope of the present application.

Claims (10)

1. A dynamic effect material generation method, characterized by comprising:
capturing a real-person video;
obtaining a plurality of key frames from the real-person video;
for each key frame of the plurality of key frames, performing image recognition on the key frame to obtain facial-expression-and-action data of the real person in the key frame, the facial-expression-and-action data comprising expression data and action data;
tracing the expression and action of a preset avatar icon according to the facial-expression-and-action data of the real person in the key frame, to obtain facial-expression-and-action data of the avatar icon; and
combining the avatar-icon facial-expression-and-action data corresponding to each key frame of the plurality of key frames, to generate a dynamic effect material.
2. The method according to claim 1, wherein, after the combining of the avatar-icon facial-expression-and-action data corresponding to each key frame of the plurality of key frames to generate the dynamic effect material, the method further comprises:
displaying the dynamic effect material; and
upon receiving an adjustment action on the dynamic effect material, adjusting the dynamic effect material according to the adjustment action.
3. The method according to claim 1 or 2, wherein, after the combining of the avatar-icon facial-expression-and-action data corresponding to each key frame of the plurality of key frames to generate the dynamic effect material, the method further comprises:
obtaining the scene of the real-person video;
determining the scene of the real-person video as the scene corresponding to the dynamic effect material; and
storing the dynamic effect material and the corresponding scene into a material database.
4. The method according to claim 3, further comprising:
receiving a dynamic-effect-material request from a terminal device, the request comprising an identifier of a first scene;
querying the material database according to the identifier of the first scene, to obtain the requested dynamic effect material; and
providing the requested dynamic effect material to the terminal device, so that the terminal device generates an avatar animation of the first scene according to the requested dynamic effect material and a preset avatar icon.
5. The method according to claim 4, wherein the material database further comprises an audio material corresponding to each scene;
the method further comprising:
querying the material database according to the identifier of the first scene, to obtain the requested audio material; and
providing the requested audio material to the terminal device, so that the terminal device generates an avatar animation carrying audio according to the requested dynamic effect material, the requested audio material, and the preset avatar icon.
6. The method according to claim 1, wherein the facial-expression-and-action data of the avatar icon comprise action data of each action unit of the avatar icon's face and action data of each action unit of the avatar icon's limbs;
the action data comprising difference data between the action unit's current action and a preset action.
7. A dynamic effect material generating apparatus, characterized by comprising:
an acquisition module, configured to capture a real-person video;
an obtaining module, configured to obtain a plurality of key frames from the real-person video;
an image recognition module, configured to, for each key frame of the plurality of key frames, perform image recognition on the key frame to obtain facial-expression-and-action data of the real person in the key frame, the facial-expression-and-action data comprising expression data and action data;
a drawing module, configured to trace the expression and action of a preset avatar icon according to the facial-expression-and-action data of the real person in the key frame, to obtain facial-expression-and-action data of the avatar icon; and
a generation module, configured to combine the avatar-icon facial-expression-and-action data corresponding to each key frame of the plurality of key frames, to generate a dynamic effect material.
8. The apparatus according to claim 7, further comprising a display module and an adjustment module, wherein:
the display module is configured to display the dynamic effect material; and
the adjustment module is configured to, upon receiving an adjustment action on the dynamic effect material, adjust the dynamic effect material according to the adjustment action.
9. The apparatus according to claim 7 or 8, further comprising a determining module and a storage module, wherein:
the obtaining module is further configured to obtain the scene of the real-person video;
the determining module is configured to determine the scene of the real-person video as the scene corresponding to the dynamic effect material; and
the storage module is configured to store the dynamic effect material and the corresponding scene into a material database.
10. The apparatus according to claim 9, further comprising a receiving module and a providing module, wherein:
the receiving module is configured to receive a dynamic-effect-material request from a terminal device, the request comprising an identifier of a first scene;
the obtaining module is further configured to query the material database according to the identifier of the first scene, to obtain the requested dynamic effect material; and
the providing module is configured to provide the requested dynamic effect material to the terminal device, so that the terminal device generates an avatar animation of the first scene according to the requested dynamic effect material and a preset avatar icon.
CN201910750943.9A 2019-08-14 2019-08-14 Dynamic effect material generation method, device, electronic equipment and storage medium Pending CN110490956A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910750943.9A CN110490956A (en) 2019-08-14 2019-08-14 Dynamic effect material generation method, device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN110490956A 2019-11-22

Family

ID=68551048

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910750943.9A Pending CN110490956A (en) 2019-08-14 2019-08-14 Dynamic effect material generation method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110490956A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111951357A (en) * 2020-08-11 2020-11-17 深圳市前海手绘科技文化有限公司 Application method of sound material in hand-drawn animation

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101739515A (en) * 2010-02-03 2010-06-16 深圳市新飞扬数码技术有限公司 Game generation control method and system
CN107004287A (en) * 2014-11-05 2017-08-01 英特尔公司 Incarnation video-unit and method
CN107274464A (en) * 2017-05-31 2017-10-20 珠海金山网络游戏科技有限公司 A kind of methods, devices and systems of real-time, interactive 3D animations
CN107945255A (en) * 2017-11-24 2018-04-20 北京德火新媒体技术有限公司 A kind of virtual actor's facial expression driving method and system
WO2018107918A1 (en) * 2016-12-15 2018-06-21 腾讯科技(深圳)有限公司 Method for interaction between avatars, terminals, and system
CN108986190A (en) * 2018-06-21 2018-12-11 珠海金山网络游戏科技有限公司 A kind of method and system of the virtual newscaster based on human-like persona non-in three-dimensional animation



Similar Documents

Publication Publication Date Title
Park et al. A metaverse: Taxonomy, components, applications, and open challenges
CN109996107A (en) Video generation method, device and system
Selby Animation
CN109348275A (en) Method for processing video frequency and device
Cornell et al. Mass Effect: Art and the Internet in the Twenty-First Century
CN105190699A (en) Karaoke avatar animation based on facial motion data
CN108536302A (en) A kind of teaching method and system based on human body gesture and voice
Ikeuchi et al. Describing upper-body motions based on labanotation for learning-from-observation robots
CN107092664A (en) A kind of content means of interpretation and device
CN101563698A (en) Personalizing a video
Wojcik Typecasting
CN110232722A (en) A kind of image processing method and device
CN108629821A (en) Animation producing method and device
Camurri et al. The MEGA project: Analysis and synthesis of multisensory expressive gesture in performing art applications
CN113633983A (en) Method, device, electronic equipment and medium for controlling expression of virtual character
CN110490956A (en) Dynamic effect material generation method, device, electronic equipment and storage medium
Kawaler et al. Database of speech and facial expressions recorded with optimized face motion capture settings
CN110502112A (en) Intelligent recommendation method and device, electronic equipment and storage medium
CN115529500A (en) Method and device for generating dynamic image
CN113918755A (en) Display method and device, storage medium and electronic equipment
Davenport Smarter tools for storytelling: Are they just around the corner?
CN1089922C (en) Cartoon interface editing method
CN108805951B (en) Projection image processing method, device, terminal and storage medium
Barrett Technological catastrophe and the robots of Nam June Paik
Jarman Drawing time: Winsor McCay’s lightning sketches on stage and screen

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191122