WO2020113733A1 - Animation generation method and apparatus, electronic device, and computer-readable storage medium - Google Patents

Animation generation method and apparatus, electronic device, and computer-readable storage medium

Info

Publication number
WO2020113733A1
WO2020113733A1 · PCT/CN2018/125392 · CN2018125392W
Authority
WO
WIPO (PCT)
Prior art keywords
music
animation
target
target music
picture
Prior art date
Application number
PCT/CN2018/125392
Other languages
English (en)
Chinese (zh)
Inventor
都之夏
Original Assignee
北京微播视界科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京微播视界科技有限公司
Publication of WO2020113733A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B 27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel

Definitions

  • The present disclosure relates to the technical field of image processing, and in particular to an animation generation method, apparatus, electronic device, and computer-readable storage medium.
  • In the prior art, the playback mode of the pictures in a multimedia animation generated from selected pictures and music is determined randomly; that is, the playback mode of the pictures in the generated animation is not related to the selected music.
  • The characteristics of the music (such as its rhythm, beat, and melody) do not affect the playback mode of the generated pictures, so the music's characteristics and the pictures' playback mode are unrelated, which lowers the viewing experience of the animation viewer.
  • For example, the rhythm of the music in a certain animation segment may be soothing while the corresponding picture transitions are very fast; such an abrupt playback style can feel uncomfortable to the viewer and reduces the viewing experience. The prior art therefore has the problem that the playback mode of the pictures in an animation generated from pictures and music is not related to the music, resulting in a poor viewing experience.
  • the present disclosure provides an animation generation method, the method including:
  • the target music and the target animation are synthesized so that the target music and the target animation can be synchronously played and displayed accordingly.
  • an animation generating device which includes:
  • the first determining module is used to determine the music element characteristics of the target music through a predetermined voice recognition method
  • the second determination module is used to determine a plurality of animation composition pictures, and determine the animation playback effect matching each animation composition picture according to the music element characteristics of the target music determined by the first determination module;
  • the animation generation module is used to generate a target animation according to the multiple animation composition pictures determined by the second determination module and the animation playback effects matching each animation composition picture;
  • the synthesis processing module is used for synthesizing the target music and the target animation generated by the animation generating module, so that the target music and the target animation can be synchronously played and displayed accordingly.
  • the present disclosure provides an electronic device including:
  • a processor; and a memory storing at least one application program which, when executed by the processor, causes the electronic device to execute the animation generation method according to the first aspect.
  • the present disclosure provides a computer-readable storage medium for storing computer instructions, which when executed on a computer, causes the computer to execute the animation generation method according to the first aspect.
  • Since the animation playback effect is determined according to the music element characteristics of the target music, each animation composition picture corresponds to a playback effect matching those characteristics. This avoids the discomfort caused when the picture playback effect is unrelated to the music element characteristics (for example, when the tempo of the target music is soothing but the corresponding picture transitions are extremely fast and do not match the music), thereby improving the user's animation viewing experience.
  • FIG. 1 is a schematic flowchart of an animation generation method according to an embodiment of the present disclosure
  • FIG. 2 is a schematic structural diagram of an animation generation device according to an embodiment of the present disclosure
  • FIG. 3 is a schematic structural diagram of another animation generating device according to an embodiment of the present disclosure.
  • FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • An embodiment of the present disclosure provides an animation generation method. As shown in FIG. 1, the method may include steps S101 to S104.
  • Step S101 Determine the music element characteristics of the target music through a predetermined voice recognition method.
  • Music recognition is an interdisciplinary research field, involving both musical knowledge and signal processing technology.
  • Music recognition includes analyzing the music to obtain the music element characteristics of the target music.
  • the target music can be a music file in WAV (Wave form audio format) format.
  • the WAV file is a waveform file that stores lossless music;
  • the target music can also be a music file in MIDI (Musical Instrument Digital Interface) format.
  • A MIDI file does not sample the music but records each note of the music as a number, so the file is much smaller than a WAV file.
  • the target music may be input by playing or humming, or it may be obtained by searching a local music library or downloading through the network.
  • If the target music is a file in a compressed format such as MP3 or WMA, it can first be decoded (i.e., decompressed) into a format such as WAV.
  • The music element characteristics of the target music are determined through a predetermined voice recognition method, where the predetermined voice recognition method may be a method based on time-frequency analysis, a method based on time-domain analysis, a method based on frequency-domain analysis, or another corresponding method; this is not limited here.
  • Step S102 Determine a plurality of animation composition pictures, and determine an animation playback effect matching each animation composition picture according to the music element characteristics of the target music.
  • multiple animation composition pictures are determined, where the multiple animation composition pictures may be manually selected by the user from the picture library, or may be automatically determined from the picture library through a corresponding picture determination method.
  • The animation playback effect matching each animation composition picture may be determined from the music element characteristics of the target music, based on a predetermined rule for matching music element characteristics to the playback effects of animation composition pictures.
  • Step S103 Generate a target animation according to multiple animation composition pictures and animation playback effects matching each animation composition picture.
  • Each animation composition picture can be processed based on its matching animation playback effect, so that each picture is played according to the determined matching effect; the processed animation composition pictures then undergo fusion processing to generate the target animation.
  • Step S104 Synthesizing the target music and the target animation so that the target music and the target animation can be synchronously played and displayed accordingly.
  • The target music and the target animation can be synthesized based on time state information so that they are played and displayed synchronously. Here, synchronized playback means that, at each corresponding time point or time period, the animation composition picture is played and displayed with the playback effect that corresponds to the music element characteristics of the target music at that time.
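  • As a minimal sketch of this time-alignment idea (the function name and data layout are illustrative assumptions, not taken from the patent), each animation composition picture can be assigned a display interval derived from the music's time points:

```python
def build_timeline(segment_boundaries, pictures):
    """Pair each picture with the music segment it should span.

    segment_boundaries: sorted segment start times in seconds, ending
    with the total duration (so N segments need N + 1 boundary values).
    pictures: list of picture identifiers, one per segment.
    """
    if len(pictures) != len(segment_boundaries) - 1:
        raise ValueError("need exactly one picture per music segment")
    timeline = []
    for i, pic in enumerate(pictures):
        start, end = segment_boundaries[i], segment_boundaries[i + 1]
        timeline.append({"picture": pic, "start": start, "end": end})
    return timeline

# three pictures over an 8-second piece of music
timeline = build_timeline([0.0, 2.5, 5.0, 8.0], ["a.png", "b.png", "c.png"])
```

  • A real implementation would derive the segment boundaries from the music segmentation described in the embodiment below and hand the timeline to the compositor.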
  • Since the animation playback effect is determined according to the music element characteristics of the target music, each animation composition picture corresponds to a playback effect matching those characteristics, which avoids the discomfort caused when the picture playback effect is unrelated to the music element characteristics (for example, when the tempo of the target music is soothing but the corresponding picture transitions are extremely fast), thereby improving the user's animation viewing experience.
  • step S101 may include step S1011 to step S1013.
  • Step S1011 extract the audio information of the target music.
  • The audio information of the target music is extracted through a corresponding audio information extraction technique. The extracted audio may be mixed with considerable noise, because the recording can be disturbed by electrical equipment, by the noise of other objects, or by power-frequency interference, so some noise is unavoidable. The extracted audio information can therefore be pre-processed to remove the influence of this noise, and the audio data can be compressed to reduce the amount of computation.
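  • A minimal preprocessing sketch, assuming plain Python lists of samples (a production pipeline would use a proper low-pass filter and an established DSP library): it smooths with a moving average to damp noise, then decimates to reduce the amount of data.

```python
def preprocess(samples, smooth_window=5, decimate=2):
    """Illustrative preprocessing: moving-average smoothing to damp
    high-frequency noise, then simple decimation to shrink the data."""
    if smooth_window < 1:
        raise ValueError("smooth_window must be >= 1")
    smoothed = []
    for i in range(len(samples)):
        # average over a window centered on sample i, clipped at the ends
        lo = max(0, i - smooth_window // 2)
        hi = min(len(samples), i + smooth_window // 2 + 1)
        window = samples[lo:hi]
        smoothed.append(sum(window) / len(window))
    # keep every `decimate`-th sample
    return smoothed[::decimate]

# a single spike is spread out and damped by the moving average
out = preprocess([0, 0, 10, 0, 0], smooth_window=3, decimate=1)
```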
  • Step S1012 (not shown in the figure): perform acoustic feature extraction on the extracted audio information to obtain corresponding acoustic feature information.
  • The audio information can be processed through a corresponding filter to obtain the acoustic characteristic information of the target music. For example, the audio can be passed through a Gaussian low-pass FIR filter to obtain the envelope of the PCM-format target music signal; peak detection combining frequency-domain and time-domain analysis then determines the positions of the notes of the target music, from which the acoustic feature information of each note is obtained.
  • The acoustic feature information includes, but is not limited to, characteristic information such as the zero-crossing rate and short-term energy.
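  • The two acoustic features named here have standard textbook definitions; a per-frame sketch (frame layout and function names are illustrative):

```python
def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    if len(frame) < 2:
        return 0.0
    crossings = sum(
        1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)


def short_term_energy(frame):
    """Mean squared amplitude of the frame."""
    return sum(x * x for x in frame) / len(frame) if frame else 0.0
```

  • High zero-crossing rates tend to indicate noisy or high-frequency content, while short-term energy tracks loudness over time; both are cheap enough to compute on every frame.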
  • Step S1013 determine the music element characteristics of the target music based on the extracted acoustic characteristic information.
  • the music element features of the target music are obtained.
  • The music element characteristics include, but are not limited to, at least one of the following: pitch, duration, intensity, beat, rhythm, tempo, and melody.
  • Pitch is determined by the frequency of an object's vibration: the faster the vibration, the higher the sound, and vice versa. Duration is determined by how long the object vibrates: the longer the vibration, the longer the sound. Intensity refers to the loudness of the sound and is determined by the amplitude of the vibration: the larger the amplitude, the stronger the sound. Rhythm refers to the organization of sounds of different lengths and strengths; it is related both to duration and to intensity. A beat is formed when equal time segments of strong and weak sounds repeat in a certain order. Tempo is a measure of the speed of the music's beat; for example, 144 BPM means 144 beats per minute. Melody usually refers to an organized, rhythmic sequence formed from several musical sounds through artistic conception; it arises from the organic combination of many basic elements of music, such as rhythm, beat, intensity, and duration. Melody takes three forms: descending, horizontal, and ascending.
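  • For the tempo feature, one simple sketch (illustrative, not the patent's specified method): given detected note-onset times, estimate the tempo in BPM from the median inter-onset interval, which is robust to a few spurious onsets.

```python
def estimate_bpm(onset_times):
    """Estimate tempo in beats per minute from note-onset times (seconds)."""
    if len(onset_times) < 2:
        raise ValueError("need at least two onsets")
    # inter-onset intervals, sorted so we can take the median
    intervals = sorted(b - a for a, b in zip(onset_times, onset_times[1:]))
    mid = len(intervals) // 2
    if len(intervals) % 2:
        median = intervals[mid]
    else:
        median = (intervals[mid - 1] + intervals[mid]) / 2
    return 60.0 / median

# onsets every 0.5 s correspond to 120 BPM
bpm = estimate_bpm([0.0, 0.5, 1.0, 1.5])
```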
  • music is composed of music segments, music segments are composed of music bars, music bars are composed of musical notes, and musical notes are the most basic elements of music;
  • To extract the music element characteristics of the target music, the characteristics of its notes can be determined first, where the note characteristics include basic features such as pitch, intensity, and duration; more complex music element characteristics of the target music, such as beat, rhythm, tempo, and melody, are then obtained by analyzing these note characteristics.
  • The music element features thus include pitch, duration, intensity, beat, rhythm, tempo, and melody. Such diverse music element features provide a basis for improving the correlation between the playback effect of the animation composition pictures and the target music.
  • step S102 may include step S1021 and step S1022.
  • Step S1021 determine the music type of the target music according to the music element characteristics of the target music.
  • A feature vector of the target music can be built from its music element characteristics, and the music type of the target music can then be obtained through a pre-trained neural network model. The music type may be that of a particular segment of the target music or of the target music as a whole, and it includes, but is not limited to, types such as gentle, intense, and calm.
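  • The text specifies a pre-trained neural network for this step; purely as a hypothetical stand-in, a rule-based sketch shows the shape of the mapping from a small feature vector to a music type (the thresholds are arbitrary illustrations):

```python
def classify_music_type(bpm, mean_energy):
    """Toy rule-based substitute for the pre-trained neural network
    described in the text; thresholds are illustrative only."""
    if bpm >= 120 and mean_energy >= 0.5:
        return "intense"
    if bpm <= 80 and mean_energy < 0.5:
        return "gentle"
    return "calm"
```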
  • Step S1022 (not shown in the figure): based on the music type of the target music, determine a plurality of animation composition pictures matching the target music.
  • a plurality of animation composition pictures matching the target music are determined respectively.
  • multiple animation composition pictures matching the target music are determined according to the music type of the target music, thereby improving the relevance of the animation composition pictures and the target music.
  • step S1022 may include step S10221 and step S10222.
  • Step S10221: determine the picture scene type that matches the music type of the target music.
  • For example, gentle music can be matched with scenes of tourist scenery, while intense music can be matched with scenes such as sports competitions and rock concerts.
  • Step S10222: determine a plurality of animation composition pictures that conform to the picture scene type.
  • multiple animation composition pictures are determined according to the picture scene type that matches the music type of the target music, which solves the problem of how to determine the animation composition picture according to the type of the target music.
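  • A hypothetical sketch of such a music-type-to-scene-type matching rule (the categories follow the examples above; the identifiers are invented for illustration):

```python
# Illustrative mapping from music type to matching picture scene types.
SCENE_TYPES_BY_MUSIC_TYPE = {
    "gentle": ["tourist_scenery"],
    "intense": ["sports_competition", "rock_concert"],
    "calm": ["tourist_scenery"],
}


def pick_scene_types(music_type):
    """Return candidate scene types, defaulting to scenery for unknown types."""
    return SCENE_TYPES_BY_MUSIC_TYPE.get(music_type, ["tourist_scenery"])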
  • step S102 may include step S1023 and step S1024.
  • Step S1023 perform segmentation processing on the target music according to the music element characteristics of the target music to obtain multiple music fragments.
  • The target music is segmented according to its music element characteristics to obtain multiple music fragments. A music fragment may correspond to one or more bars of the target music, or to a whole musical passage of it. The division into bars can be derived from the strong-weak characteristics of the music elements between notes, and the division into passages can be based on the similarity between the divided bars.
  • Step S1024: according to the music element characteristics corresponding to the respective music fragments, determine the animation playback effect of each animation composition picture corresponding to each music fragment, where the animation playback effect includes at least one of a transition mode, an animation special effect, and a filter mode.
  • The animation playback effect of each animation composition picture corresponding to each music fragment is determined according to the music element characteristics of that fragment, where the effect can be determined from a single feature or from a combination of features; for example, the transition mode of an animation composition picture can be determined from the rhythm and tempo, and the corresponding filter mode from the melody.
  • In this way, the target music is segmented according to its music element characteristics to obtain multiple music fragments, and the animation playback effect of the animation composition pictures corresponding to each fragment is then determined from that fragment's music element characteristics, which solves the problem of how to determine the animation playback effect of an animation composition picture according to the music element characteristics.
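  • A hypothetical sketch of the feature-to-effect rule described above, in which the transition mode follows the tempo and the filter mode follows the melody shape (the effect names and thresholds are illustrative assumptions, not taken from the patent):

```python
def choose_playback_effect(tempo_bpm, melody_shape):
    """Map a fragment's tempo and melody shape to a playback effect."""
    # faster fragments get faster transitions
    transition = "fast_cut" if tempo_bpm >= 120 else "slow_fade"
    # melody direction picks the filter (the three forms named in the text)
    filters = {
        "ascending": "brighten",
        "descending": "soften",
        "horizontal": "neutral",
    }
    return {
        "transition": transition,
        "filter": filters.get(melody_shape, "neutral"),
    }
```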
  • FIG. 2 is an animation generation device provided in an embodiment of the present disclosure.
  • the device 20 includes: a first determination module 201, a second determination module 202, an animation generation module 203, and a synthesis processing module 204, where:
  • the first determination module 201 is used to determine the music element characteristics of the target music through a predetermined voice recognition method
  • the second determination module 202 is used to determine a plurality of animation composition pictures, and determine the animation playback effect matching each animation composition picture according to the music element characteristics of the target music determined by the first determination module 201;
  • the animation generation module 203 is used to generate a target animation according to the plurality of animation composition pictures determined by the second determination module 202 and the animation playback effect matching each animation composition picture;
  • the synthesis processing module 204 is used for synthesizing the target music and the target animation generated by the animation generating module 203, so that the target music and the target animation can be synchronously played and displayed accordingly.
  • the device of this embodiment can execute an animation generation method provided in the above embodiments of the present disclosure, and its implementation principles are similar, and will not be repeated here.
  • the apparatus 30 of this embodiment may include: a first determining module 301, a second determining module 302, an animation generating module 303, and a synthesis processing module 304 .
  • the first determination module 301 is used to determine the music element characteristics of the target music through a predetermined voice recognition method.
  • the functions of the first determining module 301 in FIG. 3 and the first determining module 201 in FIG. 2 are the same or similar.
  • the second determination module 302 is used to determine a plurality of animation composition pictures, and determine an animation playing effect matching each animation composition picture according to the music element characteristics of the target music determined by the first determination module 301.
  • the functions of the second determination module 302 in FIG. 3 and the second determination module 202 in FIG. 2 are the same or similar.
  • the animation generation module 303 is used to generate a target animation according to the plurality of animation component pictures determined by the second determination module 302 and the animation playback effect matching each animation component picture.
  • the functions of the animation generating module 303 in FIG. 3 and the animation generating module 203 in FIG. 2 are the same or similar.
  • the synthesis processing module 304 is used for synthesizing the target music and the target animation generated by the animation generating module 303, so that the target music and the target animation can be synchronously played and displayed accordingly.
  • the functions of the synthesis processing module 304 in FIG. 3 and the synthesis processing module 204 in FIG. 2 are the same or similar.
  • the first determination module 301 may include a first extraction unit 3011, a second extraction unit 3012, and a first determination unit 3013, where:
  • the first extraction unit 3011 is used to extract audio information of the target music
  • the second extraction unit 3012 is used to perform acoustic feature extraction on the audio information extracted by the first extraction unit 3011 to obtain corresponding acoustic feature information;
  • the first determination unit 3013 is used to determine the music element characteristics of the target music based on the acoustic characteristic information extracted by the second extraction unit 3012.
  • the music element characteristics include at least one of the following: pitch, duration, intensity, beat, rhythm, tempo, and melody.
  • the second determination module 302 may include a second determination unit 3021 and a third determination unit 3022, where:
  • the second determining unit 3021 is used to determine the music type of the target music according to the music element characteristics of the target music
  • the third determination unit 3022 is used to determine a plurality of animation composition pictures matching the target music based on the music type of the target music determined by the second determination unit 3021.
  • the third determining unit 3022 may be further configured to determine a picture scene type that matches the music type of the target music, and determine a plurality of animation constituent pictures that conform to the picture scene type.
  • the second determination module 302 may further include a segment processing unit 3023 and a fourth determination unit 3024, where:
  • the segmentation processing unit 3023 is used to segment the target music according to the characteristics of the music elements of the target music to obtain multiple music segments;
  • The fourth determination unit 3024 is used to determine, according to the music element characteristics corresponding to each music fragment obtained by the segmentation processing unit 3023, the animation playback effect of each animation composition picture corresponding to each music fragment; the animation playback effect includes at least one of a transition mode, an animation special effect, and a filter mode.
  • the animation generating device provided by the embodiment of the present disclosure is applicable to the method shown in the above embodiment, and will not be repeated here.
  • an electronic device is provided, as shown in FIG. 4, which shows a schematic structural diagram of an electronic device (eg, terminal device or server) 40 suitable for implementing the embodiments of the present disclosure.
  • the terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals ( Mobile terminals such as car navigation terminals) and fixed terminals such as digital TVs, desktop computers, etc.
  • the electronic device shown in FIG. 4 is only an example, and should not bring any limitation to the functions and use scope of the embodiments of the present disclosure.
  • The electronic device 40 may include a processing device (for example, a central processing unit, a graphics processor, etc.) 401, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403.
  • The RAM 403 also stores various programs and data necessary for the operation of the electronic device 40.
  • the processing device 401, ROM 402, and RAM 403 are connected to each other via a bus 404.
  • An input/output (I/O) interface 405 is also connected to the bus 404.
  • The following devices can be connected to the I/O interface 405: input devices 406 such as a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 407 such as a liquid crystal display (LCD), speaker, and vibrator; storage devices 408 such as a magnetic tape or hard disk; and a communication device 409.
  • the communication device 409 may allow the electronic device 40 to perform wireless or wired communication with other devices to exchange data.
  • FIG. 4 shows an electronic device 40 having various devices, it should be understood that it is not required to implement or have all the devices shown. More or fewer devices may be implemented or provided instead.
  • This embodiment provides an electronic device applicable to the foregoing method embodiments, and details are not described herein again.
  • the process described above with reference to the flowchart may be implemented as a computer software program.
  • embodiments of the present disclosure include a computer program product that includes a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication device 409, or from the storage device 408, or from the ROM 402.
  • When the computer program is executed by the processing device 401, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two.
  • The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • the computer-readable storage medium may be any tangible medium containing or storing a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • the computer-readable signal medium may include a data signal that is propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to: electric wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • the computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being assembled into the electronic device.
  • the computer-readable medium carries one or more programs.
  • the electronic device is caused to: execute the animation generation method shown in the above method embodiment.
  • the computer program code for performing the operations of the present disclosure can be written in one or more programming languages or a combination thereof.
  • The above programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • The remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • This embodiment provides a computer-readable storage medium suitable for the foregoing method embodiments, and details are not described herein again.
  • Each block in the flowcharts or block diagrams may represent a module, program segment, or part of code that contains one or more executable instructions for implementing the specified logic functions.
  • The functions noted in the blocks may occur out of the order noted in the figures; for example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved.
  • each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts can be implemented with dedicated hardware-based systems that perform specified functions or operations Or, it can be realized by a combination of dedicated hardware and computer instructions.
  • The units described in the embodiments of the present disclosure may be implemented in software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides an animation generation method and apparatus, an electronic device, and a computer-readable storage medium. The method comprises: determining a music element feature of a target music by means of a predetermined speech recognition method (S101); determining a plurality of animation composition pictures, and determining, according to the music element feature of the target music, the animation playback effect corresponding to each animation composition picture (S102); generating a target animation according to the animation playback effects and the plurality of animation composition pictures (S103); and synthesizing the target music and the target animation so that they can be played correspondingly and synchronously (S104). Because the animation playback effect of each picture is determined on the basis of the music element feature of the target music, the method avoids the sense of incongruity caused by a mismatch between the playback effects of the pictures and the music element feature, and improves the animation viewing experience for users.
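The four steps recited in the abstract (S101–S104) can be sketched as a small pipeline. The sketch below is illustrative only: the names (`PictureEffect`, `assign_effects`, `generate_animation`) and the tempo-based effect mapping are assumptions for the example, not the patent's actual embodiment, and the music element feature is reduced here to a precomputed beat grid and tempo.

```python
from dataclasses import dataclass

@dataclass
class PictureEffect:
    picture: str     # an animation composition picture (S102)
    start: float     # appearance time in seconds, aligned to a beat
    duration: float  # display time until the next beat
    transition: str  # playback effect derived from the music feature

def assign_effects(pictures, beat_times, tempo_bpm):
    """S102: give each picture a playback effect based on the music
    element feature (here reduced to beat times and tempo)."""
    # Assumed mapping for illustration: fast music gets hard cuts,
    # slow music gets cross-fades.
    transition = "cut" if tempo_bpm >= 120 else "crossfade"
    intervals = zip(beat_times, beat_times[1:])
    return [PictureEffect(p, t0, t1 - t0, transition)
            for p, (t0, t1) in zip(pictures, intervals)]

def generate_animation(pictures, beat_times, tempo_bpm):
    """S103: assemble the timed pictures into one animation timeline
    that a player can render in sync with the target music (S104)."""
    return sorted(assign_effects(pictures, beat_times, tempo_bpm),
                  key=lambda e: e.start)

timeline = generate_animation(["a.png", "b.png", "c.png"],
                              beat_times=[0.0, 0.5, 1.0, 1.5],
                              tempo_bpm=128)
for e in timeline:
    # prints one line per picture, e.g. "a.png @ 0.0s for 0.5s (cut)"
    print(f"{e.picture} @ {e.start:.1f}s for {e.duration:.1f}s ({e.transition})")
```

Because each picture's start time and duration come directly from the beat grid, the rendered pictures change in step with the music, which is the synchronization property the abstract claims.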
PCT/CN2018/125392 2018-12-07 2018-12-29 Animation generation method and apparatus, electronic device, and computer-readable storage medium WO2020113733A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811496521.5A 2018-12-07 2018-12-07 Animation generation method and apparatus, electronic device, and computer-readable storage medium
CN201811496521.5 2018-12-07

Publications (1)

Publication Number Publication Date
WO2020113733A1 true WO2020113733A1 (fr) 2020-06-11

Family

ID=66008353

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/125392 WO2020113733A1 (fr) 2018-12-07 2018-12-29 Procédé et appareil de génération d'animation, dispositif électronique, et support d'informations lisible par ordinateur

Country Status (2)

Country Link
CN (1) CN109615682A (fr)
WO (1) WO2020113733A1 (fr)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097618B (zh) * 2019-05-09 2023-05-12 广州小鹏汽车科技有限公司 Music animation control method and apparatus, vehicle, and storage medium
CN110209844B (zh) * 2019-05-17 2021-08-31 腾讯音乐娱乐科技(深圳)有限公司 Multimedia data matching method and apparatus, and storage medium
CN111611430A (zh) * 2020-05-26 2020-09-01 广州酷狗计算机科技有限公司 Song playback method and apparatus, terminal, and storage medium
CN113852521A (zh) * 2020-06-09 2021-12-28 广东美的制冷设备有限公司 Household appliance, control method therefor, and computer-readable storage medium
CN113938744B (zh) * 2020-06-29 2024-01-23 抖音视界有限公司 Video transition type processing method, device, and storage medium
CN113938751B (zh) * 2020-06-29 2023-12-22 抖音视界有限公司 Video transition type determination method, device, and storage medium
CN111857923B (zh) * 2020-07-17 2022-10-28 北京字节跳动网络技术有限公司 Special effect display method and apparatus, electronic device, and computer-readable medium
CN112164128B (zh) * 2020-09-07 2024-06-11 广州汽车集团股份有限公司 Music visualization interaction method for in-vehicle multimedia, and computer device
CN112804578A (zh) * 2021-01-28 2021-05-14 广州虎牙科技有限公司 Atmosphere special-effect generation method and apparatus, electronic device, and storage medium
CN113365134B (zh) * 2021-06-02 2022-11-01 北京字跳网络技术有限公司 Audio sharing method and apparatus, device, and medium
CN114154003A (zh) * 2021-11-11 2022-03-08 北京达佳互联信息技术有限公司 Picture acquisition method and apparatus, and electronic device
CN116152393A (zh) * 2021-11-18 2023-05-23 脸萌有限公司 Video generation method and apparatus, device, and storage medium
CN116800908A (zh) * 2022-03-18 2023-09-22 北京字跳网络技术有限公司 Video generation method and apparatus, electronic device, and storage medium
CN114820888A (zh) * 2022-04-24 2022-07-29 广州虎牙科技有限公司 Animation generation method and system, and computer device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101826217A (zh) * 2010-05-07 2010-09-08 上海交通大学 Rapid face animation generation method
US20140178043A1 (en) * 2012-12-20 2014-06-26 International Business Machines Corporation Visual summarization of video for quick understanding
CN107172485A (zh) * 2017-04-25 2017-09-15 北京百度网讯科技有限公司 Method and apparatus for generating short videos
CN108428441A (zh) * 2018-02-09 2018-08-21 咪咕音乐有限公司 Multimedia file generation method, electronic device, and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4660861B2 (ja) * 2006-09-06 2011-03-30 富士フイルム株式会社 Music-and-image synchronized video scenario generation method, program, and apparatus
CN101853668B (zh) * 2010-03-29 2014-10-29 北京中星微电子有限公司 Method and system for generating animation from MIDI music
CN101901595B (zh) * 2010-05-05 2014-10-29 北京中星微电子有限公司 Method and system for generating animation from audio music
CN105224581B (zh) * 2014-07-03 2019-06-21 北京三星通信技术研究有限公司 Method and apparatus for presenting pictures while playing music
CN105227864A (zh) * 2015-10-16 2016-01-06 南阳师范学院 Video editing method for generating animation from pictures and splicing it with video clips
CN105550251A (zh) * 2015-12-08 2016-05-04 小米科技有限责任公司 Picture playback method and apparatus


Also Published As

Publication number Publication date
CN109615682A (zh) 2019-04-12

Similar Documents

Publication Publication Date Title
WO2020113733A1 (fr) Animation generation method and apparatus, electronic device, and computer-readable storage medium
WO2020253806A1 (fr) Display video generation method and apparatus, device, and storage medium
CN109543064B (zh) Lyric display processing method and apparatus, electronic device, and computer storage medium
WO2020098115A1 (fr) Subtitle adding method and apparatus, electronic device, and computer-readable storage medium
CN111798821B (zh) Voice conversion method and apparatus, readable storage medium, and electronic device
CN112153460B (zh) Video soundtrack matching method and apparatus, electronic device, and storage medium
WO2021259300A1 (fr) Sound effect adding method and apparatus, storage medium, and electronic device
CN113257218B (zh) Speech synthesis method and apparatus, electronic device, and storage medium
WO2021057740A1 (fr) Video generation method and apparatus, electronic device, and computer-readable medium
JP2019091014A (ja) Method and apparatus for playing multimedia
CN111782576B (zh) Background music generation method and apparatus, readable medium, and electronic device
WO2023051246A1 (fr) Video recording method and apparatus, device, and storage medium
JP2019015951A (ja) Electronic device wake-up method, apparatus, device, and computer-readable storage medium
US11272136B2 (en) Method and device for processing multimedia information, electronic equipment and computer-readable storage medium
CN112908292A (zh) Text-to-speech synthesis method and apparatus, electronic device, and storage medium
WO2023040520A1 (fr) Method and apparatus for matching music to video, computer device, and storage medium
WO2022160603A1 (fr) Song recommendation method and apparatus, electronic device, and storage medium
JP7497523B2 (ja) Custom-timbre singing voice synthesis method and apparatus, electronic device, and storage medium
WO2024078293A1 (fr) Image processing method and apparatus, electronic device, and storage medium
WO2024001548A1 (fr) Song playlist generation method and apparatus, electronic device, and storage medium
WO2023061229A1 (fr) Video generation method and device
WO2022143530A1 (fr) Audio processing method and apparatus, computer device, and storage medium
CN109495786B (zh) Pre-configuration method and apparatus for video processing parameter information, and electronic device
WO2023010949A1 (fr) Audio data processing method and apparatus
CN113345394B (zh) Audio data processing method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18941956

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.09.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18941956

Country of ref document: EP

Kind code of ref document: A1