WO2021129628A1 - Video special effect processing method and apparatus - Google Patents

Video special effect processing method and apparatus

Info

Publication number
WO2021129628A1
WO2021129628A1 PCT/CN2020/138415 CN2020138415W WO2021129628A1 WO 2021129628 A1 WO2021129628 A1 WO 2021129628A1 CN 2020138415 W CN2020138415 W CN 2020138415W WO 2021129628 A1 WO2021129628 A1 WO 2021129628A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
special effect
effect processing
frame image
target object
Prior art date
Application number
PCT/CN2020/138415
Other languages
English (en)
French (fr)
Inventor
李小奇
周景锦
Original Assignee
北京字节跳动网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司 filed Critical 北京字节跳动网络技术有限公司
Priority to KR1020227023958A priority Critical patent/KR20220106848A/ko
Priority to EP20905557.3A priority patent/EP4068757A4/en
Priority to BR112022012742A priority patent/BR112022012742A2/pt
Priority to JP2022539328A priority patent/JP7427792B2/ja
Publication of WO2021129628A1 publication Critical patent/WO2021129628A1/zh
Priority to US17/849,029 priority patent/US11882244B2/en
Priority to US18/528,745 priority patent/US20240106968A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00Details of electrophonic musical instruments
    • G10H1/36Accompaniment arrangements
    • G10H1/361Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
    • G10H1/366Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems with means for modifying or correcting the external signal, e.g. pitch correction, reverberation, changing a singer's voice
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • H04N21/4394Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/69Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H2210/076Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for extraction of timing, tempo; Beat detection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/765Interface circuits between an apparatus for recording and another apparatus
    • H04N5/77Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera

Definitions

  • This application relates to the technical field of video special effects processing, and in particular to a method and device for processing video special effects.
  • The technical problem addressed by the present disclosure is to provide a video special effect processing method that at least partially solves the problem in the prior art that video special effects are relatively limited and uniform.
  • A video special effect processing device, an electronic device, a computer-readable storage medium, and a video special effect processing terminal are also provided.
  • a video special effect processing method including:
  • a video special effect processing device including:
  • the music detection module is used to detect the music played along with the video during the playback of the video; wherein the video contains the target object;
  • An image acquisition module configured to acquire a video frame image to be played in the video when it is detected that the music is played to a preset rhythm
  • a special effect processing module configured to perform special effect processing on the target object in the video frame image to obtain a video frame image after the special effect processing
  • the special effect display module is used to display and play the video frame image after the special effect processing.
  • An electronic device including:
  • Memory for storing non-transitory computer readable instructions
  • the processor is configured to run the computer-readable instructions so that the processor implements any of the video special effects processing methods described above when executed.
  • A computer-readable storage medium is provided for storing non-transitory computer-readable instructions.
  • When the non-transitory computer-readable instructions are executed by a computer, the computer is caused to perform any one of the video special effect processing methods described above.
  • a video special effect processing terminal includes any of the above-mentioned video special effect processing devices.
  • In the embodiments of the present disclosure, the music played along with a video is detected during playback of the video; when the music is detected to have been played to a preset rhythm, the video frame image to be played in the video is acquired, special effect processing is performed on the target object in the video frame image to obtain a special-effect-processed video frame image, and the processed video frame image is displayed and played. In this way music is combined with special effects, enriching the video special effect functionality.
  • Fig. 1 is a schematic flowchart of a video special effect processing method according to an embodiment of the present disclosure
  • Fig. 2 is a schematic structural diagram of a video special effects processing device according to an embodiment of the present disclosure
  • Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • the video special effect processing method mainly includes the following steps S11 to S14.
  • Step S11 In the process of playing the video, detect the music played along with the video; wherein the video contains the target object.
  • the video may be a video obtained in real time.
  • the video may be obtained in real time through a camera or a video camera of the terminal.
  • the terminal may be a mobile terminal (for example, a smart phone, an iPhone, a tablet computer, a notebook or a wearable device), or a fixed terminal (for example, a desktop computer).
  • The target object may be preset, for example a human face, an animal, a plant, a human body, or a gesture. Specifically, an existing target detection algorithm may be used to detect the video and obtain the target object.
  • The target detection algorithm may be, for example, a deep-learning-based object detection algorithm or a neural-network-based image recognition algorithm.
  • the music can be the background music contained in the video itself, or it can be the music selected by the user when shooting the video. Specifically, during the video playback process, music is played at the same time, and the music may be music containing lyrics or light music.
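  • For illustration, the sketch below shows one way the target detection step might look in practice; the patent does not prescribe a particular detector, so an OpenCV Haar-cascade face detector (an assumption) stands in for the target object detection, and the function name detect_target_object is hypothetical.

```python
import cv2

# Hypothetical stand-in for "use an existing target detection algorithm":
# a Haar-cascade face detector; any deep-learning detector could be used instead.
_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_target_object(frame_bgr):
    """Return a bounding box (x, y, w, h) for the target object, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Keep the largest detection as the target object.
    return max(faces, key=lambda box: box[2] * box[3])
```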
  • Step S12 When it is detected that the music is played to a preset rhythm, obtain the video frame image to be played in the video.
  • a corresponding music signal is acquired according to the music, and then an audio signal detection algorithm is used to detect the acquired music signal to obtain rhythm information of the music.
  • The audio signal detection algorithm may be, for example, a BPM (beats per minute) algorithm or an improved BPM algorithm.
  • the rhythm information includes at least one of beat points, accent points, drum points, or preset melody points.
  • The meter (time signature) may be 1/4, 2/4, 3/4, 4/4, 3/8, 6/8, 7/8, 9/8, 12/8, and so on.
  • Accents include ordinary accents, double accents, and drum beats. An ordinary accent adds force and breath to a particular note. A double accent adds force or breath to a note until the end of its time value; for example, after a piano key has been struck, the vibration of the string continues to sound.
  • A drum point is a single hit or stroke on a drum, or a beat of the percussion section of an orchestra.
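  • As a minimal sketch of the rhythm-detection step, assuming librosa's beat tracker is an acceptable stand-in for the BPM algorithm mentioned above (accent and drum-point detection would need additional analysis); the function name detect_beat_times is hypothetical.

```python
import librosa

def detect_beat_times(music_path, hop_length=512):
    """Estimate the tempo (BPM) and beat times, in seconds, of the accompanying music."""
    y, sr = librosa.load(music_path, mono=True)
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr, hop_length=hop_length)
    beat_times = librosa.frames_to_time(beat_frames, sr=sr, hop_length=hop_length)
    return tempo, beat_times
```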
  • Step S13 Perform special effect processing on the target object in the video frame image to obtain a video frame image after the special effect processing.
  • The processing can be divided into two cases according to the video. In the first case, every frame of the video contains the target object; here, special effect processing can be performed on the target object directly according to the preset rhythm. In the second case, not every frame of the video contains the target object; here, special effect processing is performed on the target object according to the preset rhythm only when it is determined both that the target object appears at the current playback position of the video and that the music has been played to the preset rhythm.
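  • A minimal sketch of the second case described above, assuming beat times in seconds and a per-frame target bounding box; the names and the matching tolerance are assumptions.

```python
def should_apply_effect(frame_time_s, rhythm_times_s, target_box, tolerance_s=0.05):
    """Apply the effect only if the frame contains the target object AND the
    playback position coincides (within a tolerance) with a preset rhythm point."""
    if target_box is None:          # the current frame does not contain the target
        return False
    return any(abs(frame_time_s - t) <= tolerance_s for t in rhythm_times_s)
```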
  • Step S14 Display and play the video frame image after the special effect processing.
  • In this embodiment, the music played along with the video is detected during playback of the video.
  • When the music is detected to have been played to a preset rhythm, the video frame image to be played in the video is acquired, special effect processing is performed on the target object in that frame to obtain a special-effect-processed video frame image, and the processed frame is displayed and played. Music is thus combined with special effects, enriching the video special effect functionality.
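  • Tying steps S11-S14 together, a rough playback loop might look like the sketch below. It reuses the helpers sketched above, uses apply_zoom_effect sketched further below, omits audio playback for brevity, and is an illustration only, not the patent's implementation.

```python
import cv2

def play_with_music_effects(video_path, music_path):
    """Illustrative loop for steps S11-S14 (audio playback omitted)."""
    _, beat_times = detect_beat_times(music_path)            # S11: analyse the music
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    frame_idx = 0
    while True:
        ok, frame = cap.read()                               # S12: frame to be played
        if not ok:
            break
        t = frame_idx / fps
        box = detect_target_object(frame)
        if should_apply_effect(t, beat_times, box):
            frame = apply_zoom_effect(frame, box, scale=1.3) # S13: special effect
        cv2.imshow("preview", frame)                         # S14: display and play
        if cv2.waitKey(max(1, int(1000 / fps))) & 0xFF == 27:
            break
        frame_idx += 1
    cap.release()
    cv2.destroyAllWindows()
```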
  • step S13 specifically includes:
  • Step S131 Determine a special effect processing mode corresponding to the preset rhythm according to the preset rhythm.
  • Step S132 Perform special effect processing on the target object in the video frame image according to the special effect processing manner to obtain a video frame image after the special effect processing.
  • The special effect processing mode is pull-lens processing or push-lens processing: the pull-lens processing enlarges the target object in the video frame image to achieve a pull-lens video effect, and the push-lens processing reduces the target object in the video frame image to achieve a push-lens video effect.
  • When enlarging, only the target object in the video frame image may be enlarged. For example, the target object is first extracted (matted out) from the video frame image, the image area other than the target object is used as the background area, only the target object is enlarged while the background area is kept unchanged, and the enlarged target object is then superimposed on the background area to obtain the special-effect-processed video frame image.
  • Similarly, when reducing, only the target object in the video frame image may be reduced.
  • For example, the target object is first extracted from the video frame image, the image area other than the target object is used as the background area, only the target object is reduced while the background area is kept unchanged, and the reduced target object is then superimposed on the background area to obtain the special-effect-processed video frame image.
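  • The sketch below illustrates the crop-scale-composite idea described above, with a rectangular crop standing in for true matting of the target object (an assumption); scale > 1 gives the pull-lens (enlarging) effect and scale < 1 the push-lens (reducing) effect.

```python
import cv2

def apply_zoom_effect(frame, box, scale):
    """Scale only the target region and paste it back over the unchanged background."""
    out = frame.copy()
    x, y, w, h = [int(v) for v in box]
    target = frame[y:y + h, x:x + w]
    new_w, new_h = max(1, int(w * scale)), max(1, int(h * scale))
    scaled = cv2.resize(target, (new_w, new_h), interpolation=cv2.INTER_LINEAR)

    # Paste the scaled target centred on the original box, clipped to the frame.
    cx, cy = x + w // 2, y + h // 2
    x0, y0 = cx - new_w // 2, cy - new_h // 2
    fx0, fy0 = max(0, x0), max(0, y0)
    fx1 = min(frame.shape[1], x0 + new_w)
    fy1 = min(frame.shape[0], y0 + new_h)
    out[fy0:fy1, fx0:fx1] = scaled[fy0 - y0:fy1 - y0, fx0 - x0:fx1 - x0]
    return out
```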
  • In an optional embodiment, if the preset rhythm is a beat point of a strong beat, the special effect processing mode is pull-lens processing, and if the preset rhythm is a beat point of a weak beat, the special effect processing mode is push-lens processing; or, if the preset rhythm is a beat point of a strong beat, the special effect processing mode is push-lens processing, and if the preset rhythm is a beat point of a weak beat, the special effect processing mode is pull-lens processing.
  • Specifically, if the preset rhythm is a beat point of a strong beat, the target object is enlarged in the video frame image to achieve a pull-lens video effect, and if the preset rhythm is a beat point of a weak beat, the target object is reduced in the video frame image to achieve a push-lens video effect. Alternatively, if the preset rhythm is a beat point of a strong beat, the target object is reduced in the video frame image to achieve a push-lens video effect, and if the preset rhythm is a beat point of a weak beat, the target object is enlarged in the video frame image to achieve a pull-lens video effect.
  • In another optional embodiment, if the preset rhythm is a beat point of a strong beat, the special effect processing mode is pull-lens processing using a first magnification parameter, and if the preset rhythm is a beat point of a weak beat, the special effect processing mode is pull-lens processing using a second magnification parameter, where the first and second magnification parameters are different; or, if the preset rhythm is a beat point of a strong beat, the special effect processing mode is push-lens processing using a first reduction parameter, and if the preset rhythm is a beat point of a weak beat, the special effect processing mode is push-lens processing using a second reduction parameter, where the first and second reduction parameters are different.
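  • A small sketch of how strong and weak beat points might be mapped to the first and second parameters; the numeric values are assumptions chosen only to make the two parameters differ (and, per the later bullet, the first exceed the second), and the names are hypothetical.

```python
# Assumed example values: the first (strong-beat) parameter differs from the
# second (weak-beat) parameter; a "larger reduction" is read here as a
# smaller scale factor.
PULL_LENS_PARAMS = {"strong": 1.5, "weak": 1.2}   # first / second magnification
PUSH_LENS_PARAMS = {"strong": 0.6, "weak": 0.8}   # first / second reduction

def scale_for_beat(beat_strength, mode="pull"):
    """Return the scale factor for a 'strong' or 'weak' beat point."""
    params = PULL_LENS_PARAMS if mode == "pull" else PUSH_LENS_PARAMS
    return params[beat_strength]
```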
  • A corresponding scaling parameter can be set in advance for each beat point in the preset rhythm; the scaling parameter may be a reduction parameter or a magnification parameter.
  • If it is a reduction parameter, the target object is reduced when the corresponding beat point is reached during playback of the preset rhythm; if it is a magnification parameter, the target object is enlarged when the corresponding beat point is reached.
  • The first magnification parameter is greater than the second magnification parameter, and the first reduction parameter is greater than the second reduction parameter.
  • The preset rhythm may include multiple accents, multiple drum points, or at least one accent and at least one drum point. Different scaling parameters can be set according to the volume of the accents or drum points: accents or drum points with a higher volume correspond to larger magnification parameters, and those with a lower volume correspond to smaller magnification parameters.
  • In this way, as the preset rhythm is played, effects such as enlarging then reducing, reducing then enlarging, gradually enlarging, or gradually reducing the target object can be achieved.
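  • A possible volume-to-parameter mapping for accents and drum points, as described above (louder points get larger magnification parameters); the dB range and scale bounds are assumptions.

```python
def magnification_for_volume(volume_db, lo_db=-30.0, hi_db=0.0,
                             min_scale=1.1, max_scale=1.6):
    """Map an accent/drum-point volume to a magnification parameter."""
    t = (volume_db - lo_db) / (hi_db - lo_db)
    t = min(max(t, 0.0), 1.0)                 # clamp to [0, 1]
    return min_scale + t * (max_scale - min_scale)
```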
  • For example, when the meter is 2/4, each measure has two beats with the pattern strong, weak. The strong-weak beat sequence can be determined from the number of measures contained in the preset rhythm: one measure gives the sequence [strong, weak], two measures give [strong, weak, strong, weak], and so on, which will not be repeated here.
  • When the meter is 3/4, each measure has three beats with the pattern strong, weak, weak.
  • One measure of the preset rhythm gives the sequence [strong, weak, weak], two measures give [strong, weak, weak, strong, weak, weak], and so on.
  • When the meter is 4/4, each measure has four beats with the pattern strong, weak, secondary strong, weak.
  • One measure gives the sequence [strong, weak, secondary strong, weak], two measures give [strong, weak, secondary strong, weak, strong, weak, secondary strong, weak], and so on.
  • When the meter is 6/8, each measure has six beats with the pattern strong, weak, weak, secondary strong, weak, weak.
  • One measure gives the sequence [strong, weak, weak, secondary strong, weak, weak], two measures repeat this pattern twice, and so on.
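  • The per-measure patterns above can be expanded into a strong-weak sequence for however many measures the preset rhythm contains, for example as in this sketch (the data layout and names are assumptions).

```python
# Per-measure accent patterns described above.
METER_PATTERNS = {
    "2/4": ["strong", "weak"],
    "3/4": ["strong", "weak", "weak"],
    "4/4": ["strong", "weak", "secondary strong", "weak"],
    "6/8": ["strong", "weak", "weak", "secondary strong", "weak", "weak"],
}

def strong_weak_sequence(meter, n_measures):
    """Repeat the per-measure pattern for the number of measures in the rhythm."""
    return METER_PATTERNS[meter] * n_measures

# strong_weak_sequence("2/4", 2) -> ["strong", "weak", "strong", "weak"]
```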
  • Reduction parameters or magnification parameters can be set separately for the corresponding strong beats and weak beats.
  • The rule may be, for example, that the scaling factor of a strong beat is greater than the scaling factor of a weak beat, or that the scaling factor of a strong beat is smaller than that of a weak beat.
  • For example, the special effect parameter corresponding to strong beats can be set as a magnification parameter and the parameter corresponding to weak beats as a reduction parameter, so that during playback of the preset rhythm the target object is enlarged when a strong beat is played and reduced when a weak beat is played, thereby achieving an enlarge-and-reduce effect on the target object.
  • The pull-lens processing is specifically: in the video frame image, the target object is enlarged to achieve a pull-lens video effect until the target object has been enlarged to a maximum threshold, after which the video picture corresponding to the target object enlarged to the maximum threshold is shaken. The push-lens processing is specifically: in the video frame image, the target object is reduced to achieve a push-lens video effect until the target object has been reduced to a minimum threshold, after which the video picture corresponding to the target object reduced to the minimum threshold is shaken.
  • Specifically, when the target object displayed on the terminal screen has been enlarged to the maximum or reduced to the minimum, shake (jitter) parameters are acquired.
  • The shake parameters include a shake direction (for example, back-and-forth shaking), a shake amplitude, and a shake frequency.
  • The shake amplitude can be related to the scaling parameter; for example, it can be set so that the larger the magnification parameter, the larger the shake amplitude, and the larger the reduction parameter, the smaller the shake amplitude.
  • The shake direction and shake frequency can be customized.
  • The shake parameters can be set in advance and stored locally on the terminal or on a network, and are obtained from local storage or from the network when needed.
  • The video picture corresponding to the target object is then shaken according to the obtained shake parameters once the object has been enlarged to the maximum or reduced to the minimum.
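  • As an illustration of the shake applied once the maximum or minimum threshold is reached, the sketch below translates the picture back and forth with a sinusoidal offset; the particular direction, amplitude, and frequency values, and the use of a horizontal translation, are assumptions.

```python
import math
import numpy as np

def shake_frame(frame, t_seconds, amplitude_px=8.0, frequency_hz=6.0):
    """Shake the picture by shifting it back and forth horizontally."""
    dx = int(round(amplitude_px * math.sin(2.0 * math.pi * frequency_hz * t_seconds)))
    shaken = np.zeros_like(frame)
    if dx >= 0:
        shaken[:, dx:] = frame[:, :frame.shape[1] - dx]
    else:
        shaken[:, :dx] = frame[:, -dx:]
    return shaken
```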
  • the device embodiments of the present disclosure can be used to perform the steps implemented by the method embodiments of the present disclosure.
  • an embodiment of the present disclosure provides a video special effect processing device.
  • the device can execute the steps in the embodiment of the video special effect processing method described in the first embodiment.
  • The device mainly includes a music detection module 21, an image acquisition module 22, a special effect processing module 23, and a special effect display module 24, where:
  • the music detection module 21 is used to detect the music played along with the video during the playback of the video; wherein the video contains the target object;
  • the image acquisition module 22 is configured to acquire a video frame image to be played in the video when it is detected that the music is played to a preset rhythm;
  • the special effect processing module 23 is configured to perform special effect processing on the target object in the video frame image to obtain a video frame image after the special effect processing;
  • the special effect display module 24 is used for displaying and playing the video frame image after the special effect processing.
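  • A minimal skeleton of how the four modules might be wired together in code; the class and method names are assumptions, not the patent's API.

```python
class VideoEffectDevice:
    """Sketch of the apparatus: music detection (21), image acquisition (22),
    special effect processing (23), and special effect display (24) modules."""

    def __init__(self, music_detector, frame_source, effect_processor, display):
        self.music_detector = music_detector      # module 21
        self.frame_source = frame_source          # module 22
        self.effect_processor = effect_processor  # module 23
        self.display = display                    # module 24

    def step(self, playback_time_s):
        frame = self.frame_source.next_frame()
        if self.music_detector.hit_preset_rhythm(playback_time_s):
            frame = self.effect_processor.process(frame)
        self.display.show(frame)
```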
  • The special effect processing module 23 is specifically configured to: determine, according to the preset rhythm, a special effect processing mode corresponding to the preset rhythm; and perform special effect processing on the target object in the video frame image according to the special effect processing mode to obtain the special-effect-processed video frame image.
  • The special effect processing mode is pull-lens processing or push-lens processing: the pull-lens processing enlarges the target object in the video frame image to achieve a pull-lens video effect, and the push-lens processing reduces the target object in the video frame image to achieve a push-lens video effect.
  • Further, if the preset rhythm is a beat point of a strong beat, the special effect processing mode is pull-lens processing, and if the preset rhythm is a beat point of a weak beat, the special effect processing mode is push-lens processing;
  • or, if the preset rhythm is a beat point of a strong beat, the special effect processing mode is push-lens processing, and if the preset rhythm is a beat point of a weak beat, the special effect processing mode is pull-lens processing.
  • Further, if the preset rhythm is a beat point of a strong beat, the special effect processing mode is pull-lens processing using a first magnification parameter, and if the preset rhythm is a beat point of a weak beat, the special effect processing mode is pull-lens processing using a second magnification parameter, where the first and second magnification parameters are different;
  • or, if the preset rhythm is a beat point of a strong beat, the special effect processing mode is push-lens processing using a first reduction parameter, and if the preset rhythm is a beat point of a weak beat, the special effect processing mode is push-lens processing using a second reduction parameter, where the first and second reduction parameters are different.
  • Further, the first magnification parameter is greater than the second magnification parameter, and the first reduction parameter is greater than the second reduction parameter.
  • The pull-lens processing is specifically: in the video frame image, the target object is enlarged to achieve a pull-lens video effect until it has been enlarged to a maximum threshold, after which the video picture corresponding to the target object enlarged to the maximum threshold is shaken;
  • the push-lens processing is specifically: in the video frame image, the target object is reduced to achieve a push-lens video effect until it has been reduced to a minimum threshold, after which the video picture corresponding to the target object reduced to the minimum threshold is shaken.
  • the preset rhythm is a beat point, an accent point, a drum point or a preset melody point in the music.
  • Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 3 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
  • The electronic device 300 may include a processing device (such as a central processing unit or a graphics processor) 301, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 302 or a program loaded from a storage device 308 into a random access memory (RAM) 303.
  • The RAM 303 also stores various programs and data required for the operation of the electronic device 300.
  • the processing device 301, the ROM 302, and the RAM 303 are connected to each other through a bus 304.
  • An input/output (I/O) interface 305 is also connected to the bus 304.
  • The following devices can be connected to the I/O interface 305: input devices 306 such as a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, or gyroscope; output devices 307 such as a liquid crystal display (LCD), speaker, or vibrator; storage devices 308 such as a magnetic tape or hard disk; and a communication device 309.
  • the communication device 309 may allow the electronic device 300 to perform wireless or wired communication with other devices to exchange data.
  • Although Fig. 3 shows an electronic device 300 with various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
  • an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication device 309, or installed from the storage device 308, or installed from the ROM 302.
  • When the computer program is executed by the processing device 301, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or a combination of any of the above.
  • Computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein.
  • This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
  • the computer-readable signal medium may send, propagate or transmit the program for use by or in combination with the instruction execution system, apparatus, or device .
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (Radio Frequency), etc., or any suitable combination of the above.
  • The client and the server can communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (for example, a communication network).
  • Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (for example, the Internet), and peer-to-peer networks (for example, ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being assembled into the electronic device.
  • The above-mentioned computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: during playback of a video, detect music played along with the video, wherein the video contains a target object; when it is detected that the music has been played to a preset rhythm, acquire a video frame image to be played in the video; perform special effect processing on the target object in the video frame image to obtain a special-effect-processed video frame image; and display and play the special-effect-processed video frame image.
  • the computer program code used to perform the operations of the present disclosure can be written in one or more programming languages or a combination thereof.
  • The above-mentioned programming languages include, but are not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
  • The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • Each block in the flowcharts or block diagrams may represent a module, program segment, or part of code that contains one or more executable instructions for realizing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings; for example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved.
  • Each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure can be implemented in software or hardware. Wherein, the name of the unit does not constitute a limitation on the unit itself under certain circumstances.
  • the first obtaining unit can also be described as "a unit for obtaining at least two Internet Protocol addresses.”
  • exemplary types of hardware logic components include: Field Programmable Gate Array (FPGA), Application Specific Integrated Circuit (ASIC), Application Specific Standard Product (ASSP), System on Chip (SOC), Complex Programmable Logical device (CPLD) and so on.
  • a machine-readable medium may be a tangible medium, which may contain or store a program for use by the instruction execution system, apparatus, or device or in combination with the instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • the machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or any suitable combination of the foregoing.
  • machine-readable storage media would include electrical connections based on one or more wires, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the foregoing.
  • a video special effect processing method including:
  • the performing special effect processing on the target object in the video frame image to obtain the video frame image after the special effect processing includes:
  • The special effect processing mode is pull-lens processing or push-lens processing: the pull-lens processing enlarges the target object in the video frame image to achieve a pull-lens video effect, and the push-lens processing reduces the target object in the video frame image to achieve a push-lens video effect.
  • Further, if the preset rhythm is a beat point of a strong beat, the special effect processing mode is pull-lens processing, and if the preset rhythm is a beat point of a weak beat, the special effect processing mode is push-lens processing;
  • or, if the preset rhythm is a beat point of a strong beat, the special effect processing mode is push-lens processing, and if the preset rhythm is a beat point of a weak beat, the special effect processing mode is pull-lens processing.
  • Further, if the preset rhythm is a beat point of a strong beat, the special effect processing mode is pull-lens processing using a first magnification parameter, and if the preset rhythm is a beat point of a weak beat, the special effect processing mode is pull-lens processing using a second magnification parameter, where the first and second magnification parameters are different;
  • or, if the preset rhythm is a beat point of a strong beat, the special effect processing mode is push-lens processing using a first reduction parameter, and if the preset rhythm is a beat point of a weak beat, the special effect processing mode is push-lens processing using a second reduction parameter, where the first and second reduction parameters are different.
  • Further, the first magnification parameter is greater than the second magnification parameter, and the first reduction parameter is greater than the second reduction parameter.
  • The pull-lens processing is specifically: in the video frame image, the target object is enlarged to achieve a pull-lens video effect until it has been enlarged to a maximum threshold, after which the video picture corresponding to the target object enlarged to the maximum threshold is shaken;
  • the push-lens processing is specifically: in the video frame image, the target object is reduced to achieve a push-lens video effect until it has been reduced to a minimum threshold, after which the video picture corresponding to the target object reduced to the minimum threshold is shaken.
  • the preset rhythm is a beat point, an accent point, a drum point or a preset melody point in the music.
  • a video special effects processing device including:
  • the music detection module is used to detect the music played along with the video during the playback of the video; wherein the video contains the target object;
  • An image acquisition module configured to acquire a video frame image to be played in the video when it is detected that the music is played to a preset rhythm
  • a special effect processing module configured to perform special effect processing on the target object in the video frame image to obtain a video frame image after the special effect processing
  • the special effect display module is used to display and play the video frame image after the special effect processing.
  • The special effect processing module is specifically configured to: determine, according to the preset rhythm, a special effect processing mode corresponding to the preset rhythm; and perform special effect processing on the target object in the video frame image according to the special effect processing mode to obtain the special-effect-processed video frame image.
  • The special effect processing mode is pull-lens processing or push-lens processing: the pull-lens processing enlarges the target object in the video frame image to achieve a pull-lens video effect, and the push-lens processing reduces the target object in the video frame image to achieve a push-lens video effect.
  • Further, if the preset rhythm is a beat point of a strong beat, the special effect processing mode is pull-lens processing, and if the preset rhythm is a beat point of a weak beat, the special effect processing mode is push-lens processing;
  • or, if the preset rhythm is a beat point of a strong beat, the special effect processing mode is push-lens processing, and if the preset rhythm is a beat point of a weak beat, the special effect processing mode is pull-lens processing.
  • Further, if the preset rhythm is a beat point of a strong beat, the special effect processing mode is pull-lens processing using a first magnification parameter, and if the preset rhythm is a beat point of a weak beat, the special effect processing mode is pull-lens processing using a second magnification parameter, where the first and second magnification parameters are different;
  • or, if the preset rhythm is a beat point of a strong beat, the special effect processing mode is push-lens processing using a first reduction parameter, and if the preset rhythm is a beat point of a weak beat, the special effect processing mode is push-lens processing using a second reduction parameter, where the first and second reduction parameters are different.
  • Further, the first magnification parameter is greater than the second magnification parameter, and the first reduction parameter is greater than the second reduction parameter.
  • The pull-lens processing is specifically: in the video frame image, the target object is enlarged to achieve a pull-lens video effect until it has been enlarged to a maximum threshold, after which the video picture corresponding to the target object enlarged to the maximum threshold is shaken;
  • the push-lens processing is specifically: in the video frame image, the target object is reduced to achieve a push-lens video effect until it has been reduced to a minimum threshold, after which the video picture corresponding to the target object reduced to the minimum threshold is shaken.
  • the preset rhythm is a beat point, an accent point, a drum point or a preset melody point in the music.
  • an electronic device including:
  • Memory for storing non-transitory computer readable instructions
  • the processor is configured to run the computer-readable instructions so that the processor implements the above-mentioned video special effect processing method when executed.
  • a computer-readable storage medium for storing non-transitory computer-readable instructions.
  • When the non-transitory computer-readable instructions are executed by a computer, the computer is caused to perform the above-mentioned video special effect processing method.

Abstract

Embodiments of the present disclosure disclose a video special effect processing method and apparatus. The method includes: during playback of a video, detecting music played along with the video; when it is detected that the music has been played to a preset rhythm, acquiring a video frame image to be played in the video; performing special effect processing on a target object in the video frame image to obtain a special-effect-processed video frame image; and displaying and playing the special-effect-processed video frame image. By detecting, during playback of a video, the music played along with the video, acquiring the video frame image to be played when the music is detected to have been played to a preset rhythm, performing special effect processing on the target object in that video frame image to obtain a special-effect-processed video frame image, and displaying and playing the processed video frame image, the embodiments of the present disclosure combine music with special effects and further enrich the video special effect functionality.

Description

视频特效处理方法及装置
本申请要求于2019年12月26日提交中国专利局、申请号为201911364890.3、申请名称为“视频特效处理方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及视频特效处理技术领域,特别是涉及一种视频特效处理方法及装置。
背景技术
随着互联网技术以及视频特效处理技术的不断发展,在视频拍摄时,在视频中添加特效的方式逐渐受到了人们的追捧。用户可以通过选择相应的特效功能在拍摄视频中添加自己喜欢的特效,进而增加视频拍摄的趣味性。
而在现有技术中,仅针对视频进行处理,视频特效比较单一。
发明内容
提供该发明内容部分以便以简要的形式介绍构思,这些构思将在后面的具体实施方式部分被详细描述。该发明内容部分并不旨在标识要求保护的技术方案的关键特征或必要特征,也不旨在用于限制所要求的保护的技术方案的范围。
本公开解决的技术问题是提供一种视频特效处理方法,以至少部分地解决现有技术中视频特效比较单一的技术问题。此外,还提供一种视频特效处理装置、电子设备、计算机可读存储介质和视频特效处理终端。
为了实现上述目的,根据本公开的一个方面,提供以下技术方案:
一种视频特效处理方法,包括:
在视频的播放过程中,对伴随所述视频一起播放的音乐进行检测;其中,所述视频中包含目标对象;
当检测到所述音乐播放至预设节奏时,获取所述视频中待播放的视频帧图像;
对所述视频帧图像中的目标对象进行特效处理,得到特效处理后的视频帧图像;
对特效处理后的视频帧图像进行显示播放。
为了实现上述目的,根据本公开的一个方面,提供以下技术方案:
一种视频特效处理装置,包括:
音乐检测模块,用于在视频的播放过程中,对伴随所述视频一起播放的音乐进行检测;其中,所述视频中包含目标对象;
图像获取模块,用于当检测到所述音乐播放至预设节奏时,获取所述视频中待播放的视频帧图像;
特效处理模块,用于对所述视频帧图像中的目标对象进行特效处理,得到特效处理后的视频帧图像;
特效显示模块,用于对特效处理后的视频帧图像进行显示播放。
为了实现上述目的,根据本公开的一个方面,提供以下技术方案:
一种电子设备,包括:
存储器,用于存储非暂时性计算机可读指令;以及
处理器,用于运行所述计算机可读指令,使得所述处理器执行时实现上述任一项所述的视频特效处理方法。
为了实现上述目的,根据本公开的一个方面,提供以下技术方案:
一种计算机可读存储介质,用于存储非暂时性计算机可读指令,当所述非暂时性计算机可读指令由计算机执行时,使得所述计算机执行上述任一项所述的视频特效处理方法。
为了实现上述目的,根据本公开的又一个方面,还提供以下技术方案:
一种视频特效处理终端,包括上述任一视频特效处理装置。
本公开实施例通过在视频的播放过程中,对伴随所述视频一起播放的音乐进行检测,当检测到所述音乐播放至预设节奏时,获取所述视频中待播放的视频帧图像,对所述视频帧图像中的目标对象进行特效处理,得到特效处理后的视频帧图像,对特效处理后的视频帧图像进行显示播放,可以将音乐与特效相结合,更加丰富了视频特效功能。
上述说明仅是本公开技术方案的概述,为了能更清楚了解本公开的技术手段,而可依照说明书的内容予以实施,并且为让本公开的上述和其他目的、特征和优点能够更明显易懂,以下特举较佳实施例,并配合附图,详细说明如下。
附图说明
图1为根据本公开一个实施例的视频特效处理方法的流程示意图;
图2为根据本公开一个实施例的视频特效处理装置的结构示意图;
图3为根据本公开一个实施例的电子设备的结构示意图。
具体实施方式
下面将参照附图更详细地描述本公开的实施例。虽然附图中显示了本公开的某些实施例,然而应当理解的是,本公开可以通过各种形式来实现,而且不应该被解释为限于这里阐述的实施例,相反提供这些实施例是为了更加透彻和完整地理解本公开。应当理解的是,本公开的附图及实施例仅用于示例性作用,并非用于限制本公开的保护范围。
应当理解,本公开的方法实施方式中记载的各个步骤可以按照不同的顺序执行,和/或并行执行。此外,方法实施方式可以包括附加的步骤和/或省略执行示出的步骤。本公开的范围在此方面不受限制。
本文使用的术语“包括”及其变形是开放性包括,即“包括但不限于”。术语“基于” 是“至少部分地基于”。术语“一个实施例”表示“至少一个实施例”;术语“另一实施例”表示“至少一个另外的实施例”;术语“一些实施例”表示“至少一些实施例”。其他术语的相关定义将在下文描述中给出。
实施例一
为了解决现有技术中视频特效比较单一的技术问题,本公开实施例提供一种视频特效处理方法。如图1所示,该视频特效处理方法主要包括如下步骤S11至步骤S14。
步骤S11:在视频的播放过程中,对伴随所述视频一起播放的音乐进行检测;其中,所述视频中包含目标对象。
其中,视频可以为实时获取的视频,具体的,可以通过终端的摄像头或摄像机实时获取视频。或者预先存储在终端本地的视频、或者动态图像、或者由一系列静态图片组成的图像序列。其中,终端可以为移动终端(例如,智能手机、iPhone、平板电脑、笔记本或可穿戴设备),也可以为固定终端(例如,台式电脑)。
其中,目标对象可以预先设置,例如可以为人脸、动物、植物、人体、手势等,具体的,可以采用现有的目标检测算法对视频进行检测得到目标对象。可采用的目标检测算法可以为基于深度学习的目标检测算法、基于神经网络的图像识别算法。
其中,音乐可以为视频本身包含的背景音乐,也可以为用户在拍摄视频时选择的音乐。具体的,在视频播放过程中,同时播放音乐,该音乐可以为包含歌词的音乐,也可以为轻音乐。
步骤S12:当检测到所述音乐播放至预设节奏时,获取所述视频中待播放的视频帧图像。
具体的,首先根据所述音乐获取对应的音乐信号,然后采用音频信号检测算法对获取的音乐信号进行检测,得到所述音乐的节奏信息。可采用的音频信号检测算法包括BPM(Beat Per Minute,每分钟节拍数)算法或改进的BPM算法。
其中,所述节奏信息包含节拍点、重音点、鼓点或预设旋律点中的至少一种。其中,节拍包括1/4、2/4、3/4、4/4、3/8、6/8、7/8、9/8、12/8拍等等。重音包括一般重音、倍重音或鼓点。一般重音为在某个音上加力、加气。倍重音为在某个音上加力或加气直至时值结束,即在钢琴发力触键结束后,它的振弦波仍然在进行作用。鼓点为鼓上的一击或敲击声,或者为管弦乐队中打击乐声部的节拍鼓点。
步骤S13:对所述视频帧图像中的目标对象进行特效处理,得到特效处理后的视频帧图像。
具体的,根据视频可分如下两种情况进行处理:第一种,视频中的每一帧图像中均包含目标对象,此时,可直接根据预设节奏对所述目标对象进行特效处理;第二种,由于视频中并非每一帧图像中均包含所述目标对象,此时,只有确定视频在当前时刻播放位置处出现所述目标对象,并且确定所述音乐同时播放至所述预设节奏,才根据所述预设节奏对所述目标对象进行特效处理。
步骤S14:对特效处理后的视频帧图像进行显示播放。
本实施例通过在视频的播放过程中,对伴随所述视频一起播放的音乐进行检测,当检测到所述音乐播放至预设节奏时,获取所述视频中待播放的视频帧图像,对所述视频帧图像中的目标对象进行特效处理,得到特效处理后的视频帧图像,对特效处理 后的视频帧图像进行显示播放,可以将音乐与特效相结合,更加丰富了视频特效功能。
在一个可选的实施例中,步骤S13具体包括:
步骤S131:根据所述预设节奏确定与所述预设节奏对应的特效处理方式。
步骤S132:按照所述特效处理方式对所述视频帧图像中的目标对象进行特效处理,得到特效处理后的视频帧图像。
其中,所述特效处理方式为拉镜处理或推镜处理:所述拉镜处理为在所述视频帧图像中对所述目标对象进行放大以实现拉镜的视频效果,所述推镜处理为在所述视频帧图像中对所述目标对象进行缩小以实现推镜的视频效果。
具体的,在对所述目标对象进行放大时,可以只放大视频帧图像中的目标对象。例如,首先从视频帧图像中抠出所述目标对象,将视频帧图像中目标对象以外的图像区域作为背景区域,只对所述目标对象进行放大,背景区域保持不变,然后把放大后的目标对象叠加到背景区域中,得到特效处理后的视频帧图像。同理,在对所述目标对象进行缩小时,可以只缩小视频帧图像中的目标对象。例如,首先从视频帧图像中抠出所述目标对象,将视频帧图像中目标对象以外的图像区域作为背景区域,只对所述目标对象进行缩小,背景区域保持不变,然后把缩小后的目标对象叠加到背景区域中,得到特效处理后的视频帧图像。
在一个可选的实施例中,若所述预设节奏为强拍的节拍点,所述特效处理方式为拉镜处理;若所述预设节奏为弱拍的节拍点,所述特效处理方式为推镜处理;或,若所述预设节奏为强拍的节拍点,所述特效处理方式为推镜处理;若所述预设节奏为弱拍的节拍点,所述特效处理方式为拉镜处理。
具体的,若所述预设节奏为强拍的节拍点,则在所述视频帧图像中对所述目标对象进行放大实现拉镜的视频效果,若所述预设节奏为弱拍的节拍点,则在所述视频帧图像中对所述目标对象进行缩小实现推镜的视频效果。或者,若所述预设节奏为强拍的节拍点,则在所述视频帧图像中对所述目标对象进行缩小实现推镜的视频效果,若所述预设节奏为弱拍的节拍点,则在所述视频帧图像中对所述目标对象进行放大实现拉镜的视频效果。
在一个可选的实施例中,若所述预设节奏为强拍的节拍点,所述特效处理方式为采用第一放大参数的拉镜处理;若所述预设节奏为弱拍的节拍点,所述特效处理方式为采用第二放大参数的拉镜处理;其中,所述第一放大参数与所述第二放大参数为不同的参数;或,若所述预设节奏为强拍的节拍点,所述特效处理方式为采用第一缩小参数的推镜处理;若所述预设节奏为弱拍的节拍点,所述特效处理方式为采用第二缩小参数的推镜处理;其中,所述第一缩小参数与所述第二缩小参数为不同的参数。
其中,可以预先根据预设节奏中的节拍点设置对应的缩放参数,具体可以为缩小参数或放大参数。当为缩小参数时,则在所述预设节奏的播放过程中,当播放到对应的节拍点时,对所述目标对象进行缩小,当为放大参数时,则在所述预设节奏的播放过程中,当播放到对应的节拍点时,对所述目标对象进行放大。
在一个可选的实施例中,所述第一放大参数大于所述第二放大参数,所述第一缩小参数大于所述第二缩小参数。
具体的,所述预设节奏可以包含多个重音、或多个鼓点、或至少一个重音和至少 一个鼓点。且可以根据重音或鼓点的音量高低分别设置不同的缩放参数,即音量较高的重音或鼓点对应较大的放大参数,音量较低的重音或鼓点对应较小的放大参数,这样在所述预设节奏的播放过程中,当播放到不同的重音或鼓点时,可以实现对所述目标对象进行放大缩小、或缩小放大、或逐渐放大、或逐渐缩小的效果。
例如,当节拍为2/4拍时,每小节只有两拍,其节拍规律为强、弱,则可以根据预设节奏包含的节拍数确定强弱拍序列。例如,当预设节奏包含一节拍时,则可以确定强弱拍序列为[强弱],当预设节奏包含两节拍时,则可以确定强弱拍序列为[强弱强弱],以此类推,这里不再赘述。
当节拍为3/4拍时,每小节只有三拍,节拍规律为强、弱、弱。可以根据预设节奏包含的节拍数确定强弱拍序列。例如,当预设节奏包含一节拍时,则可以确定强弱拍序列为[强弱弱],当预设节奏包含两节拍时,则可以确定强弱拍序列为[强弱弱强弱弱],以此类推,这里不再赘述。
当节拍为4/4拍时,每小节只有四拍,节拍规律为强、弱、次强、弱。可以根据预设节奏包含的节拍数确定强弱拍序列。例如,当预设节奏包含一节拍时,则可以确定强弱拍序列为[强弱次强弱],当预设节奏包含两节拍时,则可以确定强弱拍序列为[强弱次强弱强弱次强弱],以此类推,这里不再赘述。
当节拍为6/8拍时,每小节只有六拍,节拍规律为强、弱、弱、次强、弱、弱。可以根据预设节奏包含的节拍数确定强弱拍序列。例如,当预设节奏包含一节拍时,则可以确定强弱拍序列为[强弱弱次强弱弱],当预设节奏包含两节拍时,则可以确定强弱拍序列为[强弱弱次强弱弱],以此类推,这里不再赘述。
可以根据对应的强拍弱拍分别设置缩小参数或放大参数。设置的规则可以为所述强拍的缩放系数大于所述弱拍的缩放系数,或所述强拍的缩放系数小于所述弱拍的缩小系数。
例如,当强弱拍序列为[强弱]时,可以将强拍对应的特效参数设为放大参数,将弱拍对应的特效参数设为缩小参数,这样在所述预设节奏的播放过程中,当播放到强拍时,对所述目标对象进行放大,当播放到弱拍时,对所述目标对象进行缩小,进而实现对所述目标对象进行放大缩小的效果。或者,将强拍对应的特效参数设为缩小参数,将弱拍对应的特效参数设为放大参数,这样在所述预设节奏的播放过程中,当播放到强拍时,对所述目标对象进行缩小,当播放到弱拍时,对所述目标对象进行放大,进而实现对所述目标对象进行缩小放大的效果。或者,将强拍对应的特效参数设为较小的缩小参数,将弱拍对应的特效参数设为较大的缩小参数,这样在所述预设节奏的播放过程中,当播放到强拍时,对所述目标对象进行缩小,当播放到弱拍时,对所述目标对象进行进一步缩小,进而实现对所述目标对象进行逐渐缩小的效果。或者,将强拍对应的特效参数设为较小的放大参数,将弱拍对应的特效参数设为较大的放大参数,这样在所述预设节奏的播放过程中,当播放到强拍时,对所述目标对象进行放大,当播放到弱拍时,对所述目标对象进行进一步放大,进而实现对所述目标对象进行逐渐放大的效果。
在一个可选的实施例中,所述拉镜处理具体为:在所述视频帧图像中,对所述目标对象进行放大以实现拉镜的视频效果,直至所述目标对象被放大至最大阈值,执行 抖动放大至最大阈值后的目标对象对应的视频画面的功能;所述推镜处理具体为:在所述视频帧图像中,对所述目标对象进行缩小以实现推镜的视频效果,直至所述目标对象被缩小至最小阈值,执行抖动缩小至最小阈值后的目标对象对应的视频画面的功能。
具体的,当所述终端屏幕上显示所述目标对象放大至最大或缩小至最小时,获取抖动参数。其中,抖动参数包含抖动方向(例如前后抖动)、抖动幅度和抖动频率等。其中,抖动幅度可以和缩放参数相关。例如,可以设为放大参数越大抖动幅度越大,缩放参数越大抖动幅度越小。抖动方向和抖动频率可以自定义设置。其中,抖动参数可以预先设置,将其保存在终端本地或网络上,在获取时从终端本地或网络上获取。根据获取的抖动参数抖动放大至最大或缩小至最小时所述目标对象对应的视频画面。
本领域技术人员应能理解,在上述各个实施例的基础上,还可以进行明显变型(例如,对所列举的模式进行组合)或等同替换。
在上文中,虽然按照上述的顺序描述了视频特效处理方法实施例中的各个步骤,本领域技术人员应清楚,本公开实施例中的步骤并不必然按照上述顺序执行,其也可以倒序、并行、交叉等其他顺序执行,而且,在上述步骤的基础上,本领域技术人员也可以再加入其他步骤,这些明显变型或等同替换的方式也应包含在本公开的保护范围之内,在此不再赘述。
下面为本公开装置实施例,本公开装置实施例可用于执行本公开方法实施例实现的步骤,为了便于说明,仅示出了与本公开实施例相关的部分,具体技术细节未揭示的,请参照本公开方法实施例。
实施例二
为了解决现有技术中视频特效比较单一的技术问题,本公开实施例提供一种视频特效处理装置。该装置可以执行上述实施例一所述的视频特效处理方法实施例中的步骤。如图2所示,该装置主要包括:音乐检测模块21、图像获取模块22、特效处理模块23和特效显示模块24;其中,
音乐检测模块21用于在视频的播放过程中,对伴随所述视频一起播放的音乐进行检测;其中,所述视频中包含目标对象;
图像获取模块22用于当检测到所述音乐播放至预设节奏时,获取所述视频中待播放的视频帧图像;
特效处理模块23用于对所述视频帧图像中的目标对象进行特效处理,得到特效处理后的视频帧图像;
特效显示模块24用于对特效处理后的视频帧图像进行显示播放。
进一步的,所述特效处理模块23具体用于:根据所述预设节奏确定与所述预设节奏对应的特效处理方式;按照所述特效处理方式对所述视频帧图像中的目标对象进行特效处理,得到特效处理后的视频帧图像;
其中,所述特效处理方式为拉镜处理或推镜处理:所述拉镜处理为在所述视频帧图像中对所述目标对象进行放大以实现拉镜的视频效果,所述推镜处理为在所述视频帧图像中对所述目标对象进行缩小以实现推镜的视频效果。
进一步的,若所述预设节奏为强拍的节拍点,所述特效处理方式为拉镜处理;若 所述预设节奏为弱拍的节拍点,所述特效处理方式为推镜处理;
或,
若所述预设节奏为强拍的节拍点,所述特效处理方式为推镜处理;若所述预设节奏为弱拍的节拍点,所述特效处理方式为拉镜处理。
进一步的,若所述预设节奏为强拍的节拍点,所述特效处理方式为采用第一放大参数的拉镜处理;若所述预设节奏为弱拍的节拍点,所述特效处理方式为采用第二放大参数的拉镜处理;其中,所述第一放大参数与所述第二放大参数为不同的参数;
或,
若所述预设节奏为强拍的节拍点,所述特效处理方式为采用第一缩小参数的推镜处理;若所述预设节奏为弱拍的节拍点,所述特效处理方式为采用第二缩小参数的推镜处理;其中,所述第一缩小参数与所述第二缩小参数为不同的参数。
进一步的,所述第一放大参数大于所述第二放大参数,所述第一缩小参数大于所述第二缩小参数。
进一步的,所述拉镜处理具体为:在所述视频帧图像中,对所述目标对象进行放大以实现拉镜的视频效果,直至所述目标对象被放大至最大阈值,执行抖动放大至最大阈值后的目标对象对应的视频画面的功能;
所述推镜处理具体为:在所述视频帧图像中,对所述目标对象进行缩小以实现推镜的视频效果,直至所述目标对象被缩小至最小阈值,执行抖动缩小至最小阈值后的目标对象对应的视频画面的功能。
进一步的,所述预设节奏为所述音乐中的节拍点、重音点、鼓点或预设旋律点。
有关视频特效处理装置实施例的工作原理、实现的技术效果等详细说明可以参考前述视频特效处理方法实施例中的相关说明,在此不再赘述。
实施例三
下面参考图3,其示出了适于用来实现本公开实施例的电子设备300的结构示意图。本公开实施例中的终端设备可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、PDA(个人数字助理)、PAD(平板电脑)、PMP(便携式多媒体播放器)、车载终端(例如车载导航终端)等等的移动终端以及诸如数字TV、台式计算机等等的固定终端。图3示出的电子设备仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
如图3所示,电子设备300可以包括处理装置(例如中央处理器、图形处理器等)301,其可以根据存储在只读存储器(ROM)302中的程序或者从存储装置308加载到随机访问存储器(RAM)303中的程序而执行各种适当的动作和处理。在RAM 303中,还存储有电子设备300操作所需的各种程序和数据。处理装置301、ROM 302以及RAM 303通过总线304彼此相连。输入/输出(I/O)接口305也连接至总线304。
通常,以下装置可以连接至I/O接口305:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置306;包括例如液晶显示器(LCD)、扬声器、振动器等的输出装置307;包括例如磁带、硬盘等的存储装置308;以及通信装置309。通信装置309可以允许电子设备300与其他设备进行无线或有线通信以交换数据。虽然图3示出了具有各种装置的电子设备300,但是应理解的是,并不要求 实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在非暂态计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置309从网络上被下载和安装,或者从存储装置308被安装,或者从ROM 302被安装。在该计算机程序被处理装置301执行时,执行本公开实施例的方法中限定的上述功能。
需要说明的是,本公开上述的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开中,计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读信号介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:电线、光缆、RF(射频)等等,或者上述的任意合适的组合。
在一些实施方式中,客户端、服务器可以利用诸如HTTP(HyperText Transfer Protocol,超文本传输协议)之类的任何当前已知或未来研发的网络协议进行通信,并且可以与任意形式或介质的数字数据通信(例如,通信网络)互连。通信网络的示例包括局域网(“LAN”),广域网(“WAN”),网际网(例如,互联网)以及端对端网络(例如,ad hoc端对端网络),以及任何当前已知或未来研发的网络。
上述计算机可读介质可以是上述电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。
上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该电子设备执行时,使得该电子设备:在视频的播放过程中,对伴随所述视频一起播放的音乐进行检测;其中,所述视频中包含目标对象;当检测到所述音乐播放至预设节奏时,获取所述视频中待播放的视频帧图像;对所述视频帧图像中的目标对象进行特效处理,得到特效处理后的视频帧图像;对特效处理后的视频帧图像进行显示播放。
可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码,上述程序设计语言包括但不限于面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为 一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的单元可以通过软件的方式实现,也可以通过硬件的方式来实现。其中,单元的名称在某种情况下并不构成对该单元本身的限定,例如,第一获取单元还可以被描述为“获取至少两个网际协议地址的单元”。
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如,非限制性地,可以使用的示范类型的硬件逻辑部件包括:现场可编程门阵列(FPGA)、专用集成电路(ASIC)、专用标准产品(ASSP)、片上系统(SOC)、复杂可编程逻辑设备(CPLD)等等。
在本公开的上下文中,机器可读介质可以是有形的介质,其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备,或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或快闪存储器)、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。
根据本公开的一个或多个实施例,提供了一种视频特效处理方法,包括:
在视频的播放过程中,对伴随所述视频一起播放的音乐进行检测;其中,所述视频中包含目标对象;
当检测到所述音乐播放至预设节奏时,获取所述视频中待播放的视频帧图像;
对所述视频帧图像中的目标对象进行特效处理,得到特效处理后的视频帧图像;
对特效处理后的视频帧图像进行显示播放。
进一步的,所述对所述视频帧图像中的目标对象进行特效处理,得到特效处理后的视频帧图像,包括:
根据所述预设节奏确定与所述预设节奏对应的特效处理方式;
按照所述特效处理方式对所述视频帧图像中的目标对象进行特效处理,得到特效处理后的视频帧图像;
其中,所述特效处理方式为拉镜处理或推镜处理:所述拉镜处理为在所述视频帧图像中对所述目标对象进行放大以实现拉镜的视频效果,所述推镜处理为在所述视频帧图像中对所述目标对象进行缩小以实现推镜的视频效果。
进一步的,若所述预设节奏为强拍的节拍点,所述特效处理方式为拉镜处理;若所述预设节奏为弱拍的节拍点,所述特效处理方式为推镜处理;
或,
若所述预设节奏为强拍的节拍点,所述特效处理方式为推镜处理;若所述预设节奏为弱拍的节拍点,所述特效处理方式为拉镜处理。
进一步的,若所述预设节奏为强拍的节拍点,所述特效处理方式为采用第一放大参数的拉镜处理;若所述预设节奏为弱拍的节拍点,所述特效处理方式为采用第二放大参数的拉镜处理;其中,所述第一放大参数与所述第二放大参数为不同的参数;
或,
若所述预设节奏为强拍的节拍点,所述特效处理方式为采用第一缩小参数的推镜处理;若所述预设节奏为弱拍的节拍点,所述特效处理方式为采用第二缩小参数的推镜处理;其中,所述第一缩小参数与所述第二缩小参数为不同的参数。
进一步的,所述第一放大参数大于所述第二放大参数,所述第一缩小参数大于所述第二缩小参数。
进一步的,所述拉镜处理具体为:在所述视频帧图像中,对所述目标对象进行放大以实现拉镜的视频效果,直至所述目标对象被放大至最大阈值,执行抖动放大至最大阈值后的目标对象对应的视频画面的功能;
所述推镜处理具体为:在所述视频帧图像中,对所述目标对象进行缩小以实现推镜的视频效果,直至所述目标对象被缩小至最小阈值,执行抖动缩小至最小阈值后的目标对象对应的视频画面的功能。
进一步的,所述预设节奏为所述音乐中的节拍点、重音点、鼓点或预设旋律点。
根据本公开的一个或多个实施例,提供了一种视频特效处理装置,包括:
音乐检测模块,用于在视频的播放过程中,对伴随所述视频一起播放的音乐进行检测;其中,所述视频中包含目标对象;
图像获取模块,用于当检测到所述音乐播放至预设节奏时,获取所述视频中待播放的视频帧图像;
特效处理模块,用于对所述视频帧图像中的目标对象进行特效处理,得到特效处理后的视频帧图像;
特效显示模块,用于对特效处理后的视频帧图像进行显示播放。
Further, the special effect processing module is specifically configured to: determine, according to the preset rhythm, a special effect processing mode corresponding to the preset rhythm; and perform special effect processing on the target object in the video frame image in the special effect processing mode to obtain a special-effect-processed video frame image;
wherein the special effect processing mode is pull-shot processing or push-shot processing: the pull-shot processing is enlarging the target object in the video frame image to achieve a pull-shot video effect, and the push-shot processing is shrinking the target object in the video frame image to achieve a push-shot video effect.
Further, if the preset rhythm is a beat point of a strong beat, the special effect processing mode is pull-shot processing; if the preset rhythm is a beat point of a weak beat, the special effect processing mode is push-shot processing;
or,
if the preset rhythm is a beat point of a strong beat, the special effect processing mode is push-shot processing; if the preset rhythm is a beat point of a weak beat, the special effect processing mode is pull-shot processing.
Further, if the preset rhythm is a beat point of a strong beat, the special effect processing mode is pull-shot processing using a first magnification parameter; if the preset rhythm is a beat point of a weak beat, the special effect processing mode is pull-shot processing using a second magnification parameter, where the first magnification parameter and the second magnification parameter are different parameters;
or,
if the preset rhythm is a beat point of a strong beat, the special effect processing mode is push-shot processing using a first reduction parameter; if the preset rhythm is a beat point of a weak beat, the special effect processing mode is push-shot processing using a second reduction parameter, where the first reduction parameter and the second reduction parameter are different parameters.
Further, the first magnification parameter is greater than the second magnification parameter, and the first reduction parameter is greater than the second reduction parameter.
Further, the pull-shot processing is specifically: in the video frame image, enlarging the target object to achieve a pull-shot video effect until the target object is enlarged to a maximum threshold, and then performing the function of shaking the video picture corresponding to the target object enlarged to the maximum threshold;
the push-shot processing is specifically: in the video frame image, shrinking the target object to achieve a push-shot video effect until the target object is shrunk to a minimum threshold, and then performing the function of shaking the video picture corresponding to the target object shrunk to the minimum threshold.
Further, the preset rhythm is a beat point, an accent point, a drum point, or a preset melody point in the music.
According to one or more embodiments of the present disclosure, an electronic device is provided, including:
a memory configured to store non-transitory computer-readable instructions; and
a processor configured to run the computer-readable instructions such that, when executed, the processor implements the video special effect processing method described above.
According to one or more embodiments of the present disclosure, a computer-readable storage medium is provided for storing non-transitory computer-readable instructions which, when executed by a computer, cause the computer to perform the video special effect processing method described above.
The above description is merely a description of preferred embodiments of the present disclosure and of the technical principles applied. Those skilled in the art should understand that the scope of disclosure involved herein is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
In addition, although the operations are depicted in a particular order, this should not be understood as requiring that they be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments may also be implemented in combination in a single embodiment; conversely, various features described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or methodological logical acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are merely example forms of implementing the claims.

Claims (11)

  1. A video special effect processing method, characterized by comprising:
    during playback of a video, detecting music that is played along with the video, wherein the video contains a target object;
    when it is detected that the music has reached a preset rhythm, acquiring a video frame image of the video that is to be played;
    performing special effect processing on the target object in the video frame image to obtain a special-effect-processed video frame image; and
    displaying and playing the special-effect-processed video frame image.
  2. The method according to claim 1, characterized in that the performing special effect processing on the target object in the video frame image to obtain a special-effect-processed video frame image comprises:
    determining, according to the preset rhythm, a special effect processing mode corresponding to the preset rhythm; and
    performing special effect processing on the target object in the video frame image in the special effect processing mode to obtain a special-effect-processed video frame image;
    wherein the special effect processing mode is pull-shot processing or push-shot processing: the pull-shot processing is enlarging the target object in the video frame image to achieve a pull-shot video effect, and the push-shot processing is shrinking the target object in the video frame image to achieve a push-shot video effect.
  3. The method according to claim 2, characterized in that
    if the preset rhythm is a beat point of a strong beat, the special effect processing mode is pull-shot processing; and if the preset rhythm is a beat point of a weak beat, the special effect processing mode is push-shot processing;
    or,
    if the preset rhythm is a beat point of a strong beat, the special effect processing mode is push-shot processing; and if the preset rhythm is a beat point of a weak beat, the special effect processing mode is pull-shot processing.
  4. The method according to claim 2, characterized in that
    if the preset rhythm is a beat point of a strong beat, the special effect processing mode is pull-shot processing using a first magnification parameter; and if the preset rhythm is a beat point of a weak beat, the special effect processing mode is pull-shot processing using a second magnification parameter, wherein the first magnification parameter and the second magnification parameter are different parameters;
    or,
    if the preset rhythm is a beat point of a strong beat, the special effect processing mode is push-shot processing using a first reduction parameter; and if the preset rhythm is a beat point of a weak beat, the special effect processing mode is push-shot processing using a second reduction parameter, wherein the first reduction parameter and the second reduction parameter are different parameters.
  5. The method according to claim 4, characterized in that the first magnification parameter is greater than the second magnification parameter, and the first reduction parameter is greater than the second reduction parameter.
  6. The method according to any one of claims 2 to 5, characterized in that
    the pull-shot processing is specifically: in the video frame image, enlarging the target object to achieve a pull-shot video effect until the target object is enlarged to a maximum threshold, and then performing the function of shaking the video picture corresponding to the target object enlarged to the maximum threshold; and
    the push-shot processing is specifically: in the video frame image, shrinking the target object to achieve a push-shot video effect until the target object is shrunk to a minimum threshold, and then performing the function of shaking the video picture corresponding to the target object shrunk to the minimum threshold.
  7. The method according to any one of claims 1 to 5, characterized in that the preset rhythm is a beat point, an accent point, a drum point, or a preset melody point in the music.
  8. A video special effect processing apparatus, characterized by comprising:
    a music detection module configured to, during playback of a video, detect music that is played along with the video, wherein the video contains a target object;
    an image acquisition module configured to, when it is detected that the music has reached a preset rhythm, acquire a video frame image of the video that is to be played;
    a special effect processing module configured to perform special effect processing on the target object in the video frame image to obtain a special-effect-processed video frame image; and
    a special effect display module configured to display and play the special-effect-processed video frame image.
  9. An electronic device, comprising:
    a memory configured to store non-transitory computer-readable instructions; and
    a processor configured to run the computer-readable instructions such that, when executed, the processor implements the video special effect processing method according to any one of claims 1 to 7.
  10. A computer-readable storage medium for storing non-transitory computer-readable instructions, characterized in that, when the non-transitory computer-readable instructions are executed by a computer, the computer is caused to perform the video special effect processing method according to any one of claims 1 to 7.
  11. A computer program, characterized in that, when the computer program is run by a computer device, the computer device performs the video special effect processing method according to any one of claims 1 to 7.
PCT/CN2020/138415 2019-12-26 2020-12-22 Video special effect processing method and apparatus WO2021129628A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
KR1020227023958A KR20220106848A (ko) 2019-12-26 2020-12-22 Video special effect processing method and apparatus
EP20905557.3A EP4068757A4 (en) 2019-12-26 2020-12-22 METHOD AND APPARATUS FOR PROCESSING VIDEO SPECIAL EFFECTS
BR112022012742A BR112022012742A2 (pt) 2019-12-26 2020-12-22 Video special effect processing method and device, electronic device, and non-transitory computer-readable storage medium
JP2022539328A JP7427792B2 (ja) 2019-12-26 2020-12-22 Video effect processing method and apparatus
US17/849,029 US11882244B2 (en) 2019-12-26 2022-06-24 Video special effects processing method and apparatus
US18/528,745 US20240106968A1 (en) 2019-12-26 2023-12-04 Video special effects processing method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911364890.3A CN113055738B (zh) Video special effect processing method and apparatus
CN201911364890.3 2019-12-26

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/849,029 Continuation US11882244B2 (en) 2019-12-26 2022-06-24 Video special effects processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2021129628A1 true WO2021129628A1 (zh) 2021-07-01

Family

ID=76505990

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/138415 WO2021129628A1 (zh) Video special effect processing method and apparatus

Country Status (7)

Country Link
US (2) US11882244B2 (zh)
EP (1) EP4068757A4 (zh)
JP (1) JP7427792B2 (zh)
KR (1) KR20220106848A (zh)
CN (1) CN113055738B (zh)
BR (1) BR112022012742A2 (zh)
WO (1) WO2021129628A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113905177B (zh) * 2021-09-29 2024-02-02 北京字跳网络技术有限公司 Video generation method, apparatus, device, and storage medium
WO2023051245A1 (zh) * 2021-09-29 2023-04-06 北京字跳网络技术有限公司 Video processing method, apparatus, device, and storage medium
CN113923378B (zh) * 2021-09-29 2024-03-19 北京字跳网络技术有限公司 Video processing method, apparatus, device, and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107967706A (zh) * 2017-11-27 2018-04-27 腾讯音乐娱乐科技(深圳)有限公司 Multimedia data processing method and apparatus, and computer-readable storage medium
CN108322802A (zh) * 2017-12-29 2018-07-24 广州市百果园信息技术有限公司 Sticker processing method for video images, computer-readable storage medium, and terminal
US10127943B1 * 2017-03-02 2018-11-13 Gopro, Inc. Systems and methods for modifying videos based on music
CN109729297A (zh) * 2019-01-11 2019-05-07 广州酷狗计算机科技有限公司 Method and apparatus for adding special effects to a video
CN110072047A (zh) * 2019-01-25 2019-07-30 北京字节跳动网络技术有限公司 Image deformation control method, apparatus, and hardware device
CN110070896A (zh) * 2018-10-19 2019-07-30 北京微播视界科技有限公司 Image processing method, apparatus, and hardware device
CN110392297A (zh) * 2018-04-18 2019-10-29 腾讯科技(深圳)有限公司 Video processing method and device, storage medium, and terminal

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100408297B1 (ko) 2001-09-11 2003-12-01 삼성전자주식회사 Method and apparatus for generating special effects in a video signal processing system
EP1422668B1 (en) * 2002-11-25 2017-07-26 Panasonic Intellectual Property Management Co., Ltd. Short film generation/reproduction apparatus and method thereof
CN101247482B (zh) * 2007-05-16 2010-06-02 北京思比科微电子技术有限公司 Method and apparatus for implementing dynamic image processing
JP2009069185A (ja) * 2007-09-10 2009-04-02 Toshiba Corp Video processing apparatus and video processing method
CN104754372A (zh) * 2014-02-26 2015-07-01 苏州乐聚一堂电子科技有限公司 Beat-synchronized special effect system and beat-synchronized special effect processing method
CN104811787B (zh) * 2014-10-27 2019-05-07 深圳市腾讯计算机系统有限公司 Game video recording method and apparatus
CN107124624B (zh) * 2017-04-21 2022-09-23 腾讯科技(深圳)有限公司 Method and apparatus for generating video data
CN108259983A (zh) * 2017-12-29 2018-07-06 广州市百果园信息技术有限公司 Video image processing method, computer-readable storage medium, and terminal
CN108259984A (zh) * 2017-12-29 2018-07-06 广州市百果园信息技术有限公司 Video image processing method, computer-readable storage medium, and terminal
CN108111911B (zh) * 2017-12-25 2020-07-28 北京奇虎科技有限公司 Real-time video data processing method and apparatus based on adaptive tracking-box segmentation
JP2018107834A (ja) 2018-04-05 2018-07-05 株式会社ニコン Playback presentation program and playback presentation apparatus
CN108810597B (zh) * 2018-06-25 2021-08-17 百度在线网络技术(北京)有限公司 Video special effect processing method and apparatus
CN109040615A (zh) * 2018-08-10 2018-12-18 北京微播视界科技有限公司 Video special effect adding method and apparatus, terminal device, and computer storage medium
US11297244B2 (en) * 2020-02-11 2022-04-05 Samsung Electronics Co., Ltd. Click-and-lock zoom camera user interface

Also Published As

Publication number Publication date
BR112022012742A2 (pt) 2022-09-06
JP7427792B2 (ja) 2024-02-05
JP2023508462A (ja) 2023-03-02
US20240106968A1 (en) 2024-03-28
CN113055738B (zh) 2022-07-29
KR20220106848A (ko) 2022-07-29
EP4068757A1 (en) 2022-10-05
CN113055738A (zh) 2021-06-29
EP4068757A4 (en) 2023-01-25
US11882244B2 (en) 2024-01-23
US20220321802A1 (en) 2022-10-06

Similar Documents

Publication Publication Date Title
WO2021129628A1 (zh) Video special effect processing method and apparatus
WO2020253806A1 (zh) Method and apparatus for generating a display video, device, and storage medium
WO2021093737A1 (zh) Method and apparatus for generating a video, electronic device, and computer-readable medium
WO2022083148A1 (zh) Special effect display method and apparatus, electronic device, and computer-readable medium
JP7199527B2 (ja) Image processing method, apparatus, and hardware device
WO2021027631A1 (zh) Image special effect processing method and apparatus, electronic device, and computer-readable storage medium
US20230421716A1 Video processing method and apparatus, electronic device and storage medium
CN110688082B (zh) Method, apparatus, device, and storage medium for determining volume adjustment ratio information
CN109243479B (zh) Audio signal processing method and apparatus, electronic device, and storage medium
CN108831425B (zh) Audio mixing method, apparatus, and storage medium
WO2021135864A1 (zh) Image processing method and apparatus
CN111445901A (zh) Audio data acquisition method and apparatus, electronic device, and storage medium
CN113257218B (zh) Speech synthesis method and apparatus, electronic device, and storage medium
CN110312162A (zh) Highlight clip processing method and apparatus, electronic device, and readable medium
CN109003621B (zh) Audio processing method and apparatus, and storage medium
CN111833460A (zh) Augmented reality image processing method and apparatus, electronic device, and storage medium
CN112153460A (zh) Method and apparatus for adding a soundtrack to a video, electronic device, and storage medium
WO2023169356A1 (zh) Image processing method, apparatus, device, and storage medium
CN114945892A (zh) Method, apparatus, system, device, and storage medium for playing audio
CN109065068B (zh) Audio processing method and apparatus, and storage medium
WO2021227953A1 (zh) Image special effect configuration method, image recognition method, apparatus, and electronic device
CN111048109A (zh) Acoustic feature determination method and apparatus, computer device, and storage medium
CN111081277A (zh) Audio evaluation method, apparatus, device, and storage medium
CN112435641B (zh) Audio processing method and apparatus, computer device, and storage medium
WO2021027547A1 (zh) Image special effect processing method and apparatus, electronic device, and computer-readable storage medium

Legal Events

Date Code Title Description
121   Ep: the epo has been informed by wipo that ep was designated in this application
      Ref document number: 20905557; Country of ref document: EP; Kind code of ref document: A1

ENP   Entry into the national phase
      Ref document number: 2022539328; Country of ref document: JP; Kind code of ref document: A

REG   Reference to national code
      Ref country code: BR; Ref legal event code: B01A; Ref document number: 112022012742; Country of ref document: BR

ENP   Entry into the national phase
      Ref document number: 2020905557; Country of ref document: EP; Effective date: 20220629

ENP   Entry into the national phase
      Ref document number: 20227023958; Country of ref document: KR; Kind code of ref document: A

NENP  Non-entry into the national phase
      Ref country code: DE

ENP   Entry into the national phase
      Ref document number: 112022012742; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20220624