WO2021052130A1 - Video processing method, apparatus and device, and computer-readable storage medium - Google Patents

Video processing method, apparatus and device, and computer-readable storage medium

Info

Publication number
WO2021052130A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
animation
special effect
video file
processing method
Prior art date
Application number
PCT/CN2020/111462
Other languages
English (en)
Chinese (zh)
Inventor
余俊
Original Assignee
西安中兴新软件有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 西安中兴新软件有限责任公司
Publication of WO2021052130A1 publication Critical patent/WO2021052130A1/fr

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects

Definitions

  • The present invention relates to the field of video processing, and in particular to a video processing method, apparatus, device, and computer-readable storage medium.
  • The video processing method, apparatus, device, and computer-readable storage medium provided by the embodiments of the present invention are intended to solve, at least to some extent, the problem that in some situations videos and pictures can only be viewed in a single way, which is not conducive to improving user satisfaction.
  • An embodiment of the present invention provides a video processing method, including: obtaining a video file and a picture to be processed, and obtaining a corresponding video playback special effect; and decoding and rendering the video file and the picture according to the obtained video playback special effect and then playing them, and/or decoding and rendering the video file and the picture according to the obtained video playback special effect and then storing the result in a new video file.
  • An embodiment of the present invention also provides a video processing apparatus, including: an acquisition module, configured to acquire the video file and picture to be processed and to acquire the corresponding video playback special effect; and a processing module, configured to decode and render the video file and picture according to the acquired video playback special effect and then play them, and/or to decode and render the video file and picture according to the acquired video playback special effect and then store the result in a new video file.
  • An embodiment of the present invention also provides a video processing device, including a processor and a memory, where the processor is configured to execute a computer program stored in the memory to implement the steps of the video processing method described above.
  • An embodiment of the present invention also provides a computer-readable storage medium that stores a computer program, and the computer program can be executed by a processor to implement the steps of the video processing method described above.
  • FIG. 1 is a schematic flowchart of a video processing method according to Embodiment 1 of the present invention;
  • FIG. 2 is a schematic structural diagram of a video processing apparatus according to Embodiment 2 of the present invention;
  • FIG. 3 is a schematic diagram of a video playback process according to Embodiment 2 of the present invention;
  • FIG. 4 is a schematic diagram of a new video generation process according to Embodiment 2 of the present invention;
  • FIG. 5 is a schematic structural diagram of a video processing device according to Embodiment 3 of the present invention.
  • In the embodiments of the present invention, the video file and picture to be processed can be decoded and rendered according to the obtained video playback special effect and then played, and/or stored in a new video file so that a new video file is generated for easy export, sharing, and the like. In this way, video files and pictures can be mixed/spliced and played with video playback special effects, which enriches the ways in which videos and pictures can be viewed, improves their playback effect, and is more conducive to improving user satisfaction.
  • For ease of understanding, this embodiment is described below with reference to the video processing method shown in FIG. 1 as an example, which includes the following steps.
  • Video files and pictures can be stored in an application database or in other locations, and the application database can be a network-side database or a local database of the device, which can be set flexibly according to the application scenario.
  • The video file and/or picture can be obtained by, but not limited to, entering a corresponding query keyword, or directly by obtaining the video file and/or picture corresponding to a received selection instruction.
  • A corresponding selection interface and query interface may be provided for selection and/or query.
  • The video file to be processed may be a complete video file or a part of a video file.
  • A complete video file may be divided into at least two parts (that is, into at least two subunits), and the video file to be processed may be one of the subunits.
  • The subunits of the video file can be processed flexibly, according to requirements, by using the video processing method in this embodiment. It should also be understood that when the video file is divided into subunits in this embodiment, the division method can be determined flexibly according to requirements.
  • For example, the video file can be divided according to a set playing time, or divided flexibly according to division instructions received in real time; the division into subunits can be performed before the video file is played, or during playback for the part that has not yet been played. A minimal sketch of fixed-length division is given below.
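  • The following is only an illustrative sketch, not part of the disclosure: it assumes fixed-length division by a set playing time, and the `SubUnit` type and function names are hypothetical.

```kotlin
// Hypothetical sketch: divide a video's duration into fixed-length subunits.
// The SubUnit type and the fixed-length rule are assumptions, not the disclosure.
data class SubUnit(val startUs: Long, val endUs: Long)

fun divideBySetTime(durationUs: Long, segmentUs: Long): List<SubUnit> {
    require(segmentUs > 0) { "segment length must be positive" }
    return (0L until durationUs step segmentUs).map { start ->
        SubUnit(start, minOf(start + segmentUs, durationUs))
    }
}
```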
  • A correspondence may be preset between the video file to be processed and the picture; in that case, after the video file to be processed is obtained, the corresponding picture can be obtained according to the correspondence. In other examples of this embodiment, there may be no preset correspondence between the video file to be processed and the picture; in that case, after the video file to be processed is obtained, the corresponding picture can be obtained according to a picture selection or query instruction and used as the picture corresponding to the video file to be processed.
  • In this embodiment, one picture may correspond to a video file to be processed, or two or more pictures may be set according to requirements.
  • One video image frame, or two or more video image frames, can be generated from the picture, and the position of the generated video image frames in the video file to be processed can also be set flexibly according to requirements.
  • A corresponding editing interface or configuration interface may be provided for editing and configuring the correspondence between the video file and the picture.
  • The corresponding video playback special effect may be obtained by, but is not limited to, either of the following methods:
  • Method 1: obtain the video playback special effect indicated by a received playback special effect selection instruction; the playback special effect selection instruction can be, but is not limited to being, issued by the user, or triggered automatically when set conditions are met.
  • Method 2: obtain special effect matching information, and match the video playback special effect corresponding to the obtained matching information according to a preset correspondence between special effect matching information and video playback special effects; in this way the video playback special effect corresponding to the current video file and picture to be processed can be matched automatically from the matching information.
  • The special effect matching information can be any information that can be put into correspondence with a video playback special effect for subsequent selection and matching.
  • When the corresponding video playback special effect is obtained by the second method described above, obtaining the special effect matching information may include at least one of the following:
  • The music style of the background music is acquired as the special effect matching information;
  • At least one of the current time and the current location is acquired as the special effect matching information.
  • In other words, the special effect matching information may include, but is not limited to, the music style of the background music, the theme and/or scene to which the picture content belongs, the theme and/or scene to which the video content belongs, the current time, and the current location. A hedged sketch of such a correspondence is given below.
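  • As a hedged sketch only: the disclosure does not fix any concrete mapping, so the `EffectPreset` type, the style keys, and the default fallback below are all assumptions made for illustration.

```kotlin
// Hypothetical preset table mapping a recognized music style to a playback-effect
// configuration (play animations, filter mode, lens-focus preference).
data class EffectPreset(
    val playAnimations: List<String>, // e.g. "zoomIn", "rotation"
    val filterMode: String,           // e.g. "stylized", "blur"
    val faceFocusFirst: Boolean       // face focus takes priority over scene focus
)

val presetByMusicStyle = mapOf(
    "cheerful" to EffectPreset(listOf("zoomIn", "rotation"), "stylized", true),
    "warm"     to EffectPreset(listOf("fadeIn", "pan"), "blur", true),
    "rhythmic" to EffectPreset(listOf("flip", "crop"), "noise", false)
)

// Fall back to a neutral preset when the style is not in the table.
fun matchPreset(musicStyle: String): EffectPreset =
    presetByMusicStyle[musicStyle] ?: EffectPreset(listOf("pan"), "none", true)
```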
  • The video file and picture to be processed can be decoded and rendered according to the acquired video playback special effect and then played in real time; alternatively, after the video file and picture to be processed are decoded and rendered according to the acquired video playback special effect, the result can be stored in a new video file so that a new video file is obtained, and the new video file can also be exported.
  • When played, the new video file has the same effect as the real-time playback described above; the resulting new video file can be saved and can also be shared.
  • Decoding and rendering the video file and picture includes: decoding the picture to be processed, generating at least one video image frame from it, inserting the generated frame into the corresponding position in the decoded video file, and then rendering each video image frame in the resulting video file. A minimal sketch of the insertion step is given below.
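  • A minimal sketch of the insertion step, assuming the picture has already been decoded into one or more frames; the `Frame` type and the helper function are hypothetical, and a real pipeline would also re-stamp presentation timestamps after insertion.

```kotlin
// Hypothetical frame container; in a real pipeline this would carry a decoded
// texture and a presentation timestamp that is re-stamped after insertion.
data class Frame(val textureId: Int, val presentationTimeUs: Long)

// Insert picture-derived frames into the decoded video frame list at a chosen
// position; every frame in the resulting list is then rendered in order.
fun insertPictureFrames(
    videoFrames: MutableList<Frame>,
    pictureFrames: List<Frame>,
    insertIndex: Int
): List<Frame> {
    videoFrames.addAll(insertIndex.coerceIn(0, videoFrames.size), pictureFrames)
    return videoFrames
}
```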
  • The video file can be encoded and decoded by, but not limited to, a hardware codec, and pictures can be encoded and decoded by, but not limited to, a graphics decoding framework.
  • Specifically, the textures obtained by decoding the video file and the picture may be rendered.
  • The decoded textures may be processed and rendered using, but not limited to, a graphics programming interface technology. An illustrative sketch is given below.
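  • For example, on Android the graphics programming interface could be OpenGL ES (this choice is an assumption; the disclosure names no specific API). A minimal sketch, assuming an OpenGL ES 2.0 context is already current on the calling thread, of turning a decoded picture into a texture that can be rendered like a video frame:

```kotlin
import android.graphics.Bitmap
import android.opengl.GLES20
import android.opengl.GLUtils

// Upload a decoded Bitmap as a GL texture so it can be rendered like a video frame.
// Assumes an EGL/GL ES 2.0 context is current on this thread.
fun uploadBitmapAsTexture(bitmap: Bitmap): Int {
    val ids = IntArray(1)
    GLES20.glGenTextures(1, ids, 0)
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, ids[0])
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR)
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR)
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0)
    return ids[0]
}
```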
  • The foregoing video processing method may further include the following.
  • Playing the video file and picture after decoding and rendering also includes decoding background music and playing it synchronously; this may include, but is not limited to, decoding the background music with a hardware decoder and playing it synchronously.
  • Decoding and rendering the video file and picture and storing them into a new video file also includes storing the audio stream obtained by decoding the background music into the new video file.
  • The video playback special effects in this embodiment can also be selected flexibly according to the specific application scenario.
  • The video playback special effect may include, but is not limited to, at least one of the following:
  • Play animation, including but not limited to at least one of: pan animation, zoom-in animation, zoom-out animation, crop animation, rotation animation, fade-in and/or fade-out animation, jelly-effect animation, flip animation, clone animation, mask animation, bullet-screen animation, superimposition animation, numerical-value-change animation, and delay animation; it should be understood that, in addition to the animations in the above example, other animation effects can be selected flexibly as needed;
  • Lens focus rules, for example: face focus takes priority over scene focus, that is, when a face is recognized the face area is used as the lens focus, and when no face is recognized the scene area is used as the lens focus; it can also be set so that face focus takes priority over other focuses, or the lens focus rules can be updated flexibly as the key content currently in focus changes;
  • Filter modes, which can include but are not limited to at least one of: noise filters, distortion filters, extraction filters, rendering filters, CSS filters, stylized filters, liquify filters, and blur filters; it should be understood that, in addition to the filter modes in the above example, other filter effects can be selected flexibly as needed.
  • When the acquired video playback special effect includes a filter mode, the filter mode (determined by, but not limited to, at least one of the exemplary methods above) is used to perform filter processing on the textures obtained by decoding the video file and the picture. An illustrative shader sketch is given below.
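  • As one hedged example of a filter mode applied to the decoded texture: a grayscale filter implemented as a GLES 2.0 fragment shader. The choice of filter, the shader source, and the helper below are assumptions for illustration, not the disclosure.

```kotlin
import android.opengl.GLES20

// A simple grayscale fragment shader, standing in for one possible filter mode.
const val GRAY_FRAGMENT_SHADER = """
    precision mediump float;
    varying vec2 vTexCoord;
    uniform sampler2D uTexture;
    void main() {
        vec4 c = texture2D(uTexture, vTexCoord);
        float g = dot(c.rgb, vec3(0.299, 0.587, 0.114));
        gl_FragColor = vec4(vec3(g), c.a);
    }
"""

// Compile a shader of the given type (GL_VERTEX_SHADER or GL_FRAGMENT_SHADER).
fun compileShader(type: Int, source: String): Int {
    val shader = GLES20.glCreateShader(type)
    GLES20.glShaderSource(shader, source)
    GLES20.glCompileShader(shader)
    return shader
}
```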
  • The foregoing decoding and rendering of the video file and picture may include: rendering the textures obtained by decoding the video file and the picture according to the play animation.
  • When the video playback special effect includes a lens focus rule, the method may also include: acquiring and recording, according to the acquired lens focus rule, the focus area in at least one image frame of the video file, and/or acquiring and recording the focus area in at least one image frame of the picture;
  • Rendering the video file and the picture may then include: rendering, according to the recorded focus areas, the corresponding focus area in the corresponding video frame of the video file as the lens focus, and/or rendering the corresponding focus area in at least one video frame generated from the picture as the lens focus. A minimal sketch of the face-over-scene rule is given below.
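  • A minimal sketch of the "face focus takes priority over scene focus" rule; the rectangle parameters are assumed to come from whatever recognition step recorded them, and the function name is hypothetical.

```kotlin
import android.graphics.Rect

// Choose the lens focus for one frame: prefer the recognized face rectangle,
// fall back to the recognized scene rectangle, and finally to the full frame.
fun chooseLensFocus(faceRect: Rect?, sceneRect: Rect?, frameRect: Rect): Rect =
    faceRect ?: sceneRect ?: frameRect
```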
  • When the video file and picture to be processed are decoded and rendered according to the acquired video playback special effect and then played in real time, the method may also include:
  • Editing the video file according to a received video playback editing instruction.
  • The video playback editing instruction in this embodiment can edit, but is not limited to, at least one of the following objects:
  • Background music, including but not limited to the length and content of the background music;
  • The current video file to be processed, including but not limited to the length and content of the video file;
  • The current picture to be processed, including but not limited to the display length and content of the picture.
  • In this way, the picture and video file to be processed can be mixed and played in real time.
  • The above method can also be used to export the whole process to generate a new video file, and the generated new video file can be stored, shared, and also played.
  • The video processing method provided in this embodiment can be applied to various terminal devices (for example, terminal devices whose operating system is, but is not limited to, the Android system). The pictures, video files, and audio currently to be processed can be encoded and decoded according to the above examples; the decoded textures can be processed and rendered using, but not limited to, a graphics programming interface technology environment; music can be played using the audio system; and intelligent recognition technology can be used, but is not limited to being used, to mark the lens focus of the pictures and video files to be processed, to recognize the music style of the background music, and to use the theme and/or scene of the picture content, the theme and/or scene of the video content, the current time, and the current location to determine at least one of the play animation, filter mode, and lens focus rule used. This can implement, but is not limited to, the following functions:
  • A complete video file can be divided into at least two subunits, or treated as a single subunit, and at least one subunit can be processed using the video processing method in the above example. The effect of the video obtained after processing can be simply understood as the pictures and video files to be processed being played out according to the determined video playback special effects, accompanied by the background music.
  • A complete video can be divided into several independent subunits according to requirements or set rules; each subunit displays the pictures and video files corresponding to that subunit, and animation effects are added to those pictures and video files during the display process.
  • The graphics programming interface technology can be used, but is not limited to being used, to apply filters to the textures generated by decoding the video and picture (that is, the filter effect) and to render the animation (that is, to play it on the terminal).
  • Intelligent recognition can be used, but is not limited to being used, to recognize scenes and faces in each image frame. After recognition, the recognized rectangles are recorded and connected in series, and during playback the rectangle is used as the lens focus for rendering.
  • Intelligent recognition can also be used to identify the music style of the background music and distinguish different styles, for example cheerful (light music), warm (slow rhythm), or rhythmic (fast and consistent rhythm).
  • During playback, the focus differs according to the recognized background music: scenes and faces are recognized separately in each picture, and after cheerful music is recognized, for example, the face focus is played first, followed by the scene focus or other focuses.
  • The scene and/or theme to which the content of the video file and/or picture belongs can also be used to determine the video playback special effect.
  • According to the recognized music style, the play animation of the subunit to be processed can be matched and determined dynamically (a subunit being the time period in which one picture or video file to be processed is displayed separately). All play animations can be stored on the terminal device and read during initialization, and the play animation of a subunit may be a combination of at least one of the example animations above.
  • The animation combination of a subunit spans the display time of that subunit, and the display times of all the subunits together make up the duration of the complete video file.
  • On the terminal device, the background music of the entire video file or of a single subunit can be replaced/edited. After the background music is changed, the play animation, filter mode, and focus effect of the corresponding subunit can be regenerated and updated automatically according to the result of intelligently identifying the music. If the duration of the background music is less than the duration of the entire video, it can be played in, but not limited to, loop playback; a minimal sketch of such looping is given below.
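  • A minimal sketch of loop playback when the background music is shorter than the video; this is purely illustrative (a real player would seek the audio decoder rather than compute a position), and the function name is hypothetical.

```kotlin
// Map a video playback position onto a shorter music track by looping it.
fun musicPositionUs(videoPositionUs: Long, musicDurationUs: Long): Long =
    if (musicDurationUs <= 0L) 0L else videoPositionUs % musicDurationUs
```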
  • At least one of the play animation, filter mode, and focus effect can be configured, updated, or edited on a per-subunit basis.
  • Subunits can be edited or deleted, or new subunits can be added; after editing, the play animation, filter mode, and focus effect of each subunit can be regenerated automatically according to the result of intelligently identifying the music.
  • A hardware codec and a multiplexer can be used to encode the rendered output and generate a new video file for viewing, sharing, or playing; an illustrative muxing sketch is given below.
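  • On Android, the multiplexer step could be MediaMuxer; this is an assumption, since the disclosure only requires a hardware codec and a multiplexer. A hedged sketch of creating the new file and writing encoded samples, with the encoder loop itself omitted:

```kotlin
import android.media.MediaCodec
import android.media.MediaFormat
import android.media.MediaMuxer
import java.nio.ByteBuffer

// Create an MP4 muxer with one video and one audio track; the MediaFormat objects
// are assumed to come from the encoders' output formats.
fun createMuxer(outputPath: String, videoFormat: MediaFormat, audioFormat: MediaFormat): Triple<MediaMuxer, Int, Int> {
    val muxer = MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
    val videoTrack = muxer.addTrack(videoFormat)
    val audioTrack = muxer.addTrack(audioFormat)
    muxer.start()
    return Triple(muxer, videoTrack, audioTrack)
}

// Write one encoded sample (video or audio) produced by a MediaCodec encoder.
fun writeSample(muxer: MediaMuxer, track: Int, buffer: ByteBuffer, info: MediaCodec.BufferInfo) {
    muxer.writeSampleData(track, buffer, info)
}
```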
  • During video playback, this embodiment realizes mixed/spliced playback of video files and pictures with video playback special effects, which enriches the ways in which videos and pictures can be viewed, improves their playback effect, and is more conducive to improving user satisfaction.
  • This embodiment provides a video processing apparatus, which can be deployed in various video processing devices.
  • The video processing device in this embodiment can be, but is not limited to, a set-top box, various smart terminals (such as mobile phones, iPads, and laptops), servers, and the like.
  • The video processing apparatus includes:
  • The acquisition module 201 is configured to obtain the video file and picture to be processed and to obtain the corresponding video playback special effect; for the specific acquisition process, refer to the above embodiment, which will not be repeated here.
  • The processing module 202 is configured to decode and render the video file and picture according to the acquired video playback special effect and then play them, and/or to decode and render the video file and picture according to the acquired video playback special effect and then store the result in a new video file.
  • For the specific processing process, refer to the above embodiment, which will not be repeated here.
  • S302: access the application database and obtain the current video files, pictures, background music information, and so on to be processed.
  • S304: determine the play animation effect, filter mode, and lens focus rule of the video file (or of a subunit of the video file) according to the recognized music style.
  • S305: obtain the set of video files and picture files to be processed, and intelligently identify the focus of each video image frame or picture.
  • S402: access the application database and obtain the current video files, pictures, background music information, and so on to be processed.
  • S404: determine the play animation effect, filter mode, and lens focus rule of the video file (or of a subunit of the video file) according to the recognized music style.
  • S405: obtain the set of video files and picture files to be processed, and intelligently identify the focus of each video image frame or picture.
  • S406: according to the determined play animation effect, filter effect, and lens focus, use the graphics programming interface to start rendering the texture, and store the rendered texture in a new video file.
  • S408: store the decoded audio stream into the above new video file.
  • The application of intelligent recognition can give the user a good experience.
  • The user only needs to select the pictures and videos to be viewed, and a video can then be generated intelligently; the user can also adjust the corresponding animation effects, filter effects, lens focus effects, and so on.
  • The intelligent lens focus can highlight the focal point of the display, providing a better user experience.
  • Intelligent music recognition makes the animation effects, filter effects, lens focus effects, and music work together more seamlessly, making operation more convenient for the user.
  • This embodiment also provides a video processing device. As shown in FIG. 5, it includes a processor 501, a memory 502, and a communication bus 503.
  • The communication bus 503 is used to implement a communication connection between the processor 501 and the memory 502.
  • The processor 501 may be used to execute a computer program stored in the memory 502 to implement the steps of the video processing method in the above embodiments.
  • The video processing device in this embodiment may be, but is not limited to, a set-top box, various smart terminals (for example, mobile phones, iPads, and notebook computers), servers, and the like.
  • This embodiment also provides a computer-readable storage medium, which includes volatile or non-volatile, removable or non-removable media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, computer program modules, or other data).
  • Computer-readable storage media include, but are not limited to, RAM (Random Access Memory), ROM (Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory or other memory technology, CD-ROM (Compact Disc Read-Only Memory), digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • The computer-readable storage medium in this embodiment may be used to store a computer program, and the computer program may be executed by a processor to implement the steps of the video processing method in the above embodiments.
  • This embodiment also provides a computer program (or computer software).
  • The computer program can be distributed on a computer-readable medium and executed by a computing device to implement at least one step of the video processing method in the above embodiments; and in some cases, at least one of the steps shown or described can be performed in an order different from that described in the foregoing embodiments.
  • This embodiment also provides a computer program product, including a computer-readable device on which any of the computer programs described above is stored.
  • The computer-readable device in this embodiment may include the computer-readable storage medium described above.
  • The video file and picture to be processed are acquired, together with the corresponding video playback special effect; after being decoded and rendered, the video file and picture are played, and/or stored in a new video file so that a new video file is generated for easy export, sharing, and the like. In this way, video files and pictures can be mixed/spliced and played with video playback special effects during video playback, which enriches the ways in which videos and pictures can be viewed, can improve their playback effect, and is more conducive to improving user satisfaction.
  • In addition, communication media usually contain computer-readable instructions, data structures, computer program modules, or other data in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery medium. Therefore, the present invention is not limited to any specific combination of hardware and software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)
  • Studio Devices (AREA)

Abstract

Video processing method, apparatus and device, and computer-readable storage medium. The method comprises: acquiring a video file and a picture to be processed, and acquiring a corresponding video playback special effect (S101); then, after the video file and picture to be processed are decoded and rendered, playing them according to the acquired video playback special effect, and/or saving them into a new video file so as to generate a new video file (S102).
PCT/CN2020/111462 2019-09-17 2020-08-26 Video processing method, apparatus and device, and computer-readable storage medium WO2021052130A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910877779.8 2019-09-17
CN201910877779.8A CN112533058A (zh) 2019-09-17 2019-09-17 Video processing method, apparatus, device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2021052130A1 true WO2021052130A1 (fr) 2021-03-25

Family

ID=74883323

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/111462 WO2021052130A1 (fr) 2019-09-17 2020-08-26 Video processing method, apparatus and device, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN112533058A (fr)
WO (1) WO2021052130A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113115099B (zh) * 2021-05-14 2022-07-05 北京市商汤科技开发有限公司 Video recording method and apparatus, electronic device, and storage medium
CN113422912B (zh) * 2021-05-25 2023-05-23 深圳市闪剪智能科技有限公司 Interactive short-video generation method, apparatus, device, and storage medium
CN114900736A (zh) * 2022-03-28 2022-08-12 网易(杭州)网络有限公司 Video generation method and apparatus, and electronic device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108471554A (zh) * 2017-02-23 2018-08-31 合网络技术(北京)有限公司 Multimedia resource synthesis method and apparatus
CN108259984A (zh) * 2017-12-29 2018-07-06 广州市百果园信息技术有限公司 Video image processing method, computer-readable storage medium, and terminal
CN108307127A (zh) * 2018-01-12 2018-07-20 广州市百果园信息技术有限公司 Video processing method, computer storage medium, and terminal
CN108769562B (zh) * 2018-06-29 2021-03-26 广州酷狗计算机科技有限公司 Method and apparatus for generating special-effect video
CN109040615A (zh) * 2018-08-10 2018-12-18 北京微播视界科技有限公司 Video special-effect adding method, apparatus, terminal device, and computer storage medium
CN109462776B (zh) * 2018-11-29 2021-08-20 北京字节跳动网络技术有限公司 Video special-effect adding method, apparatus, terminal device, and storage medium
CN109618222B (zh) * 2018-12-27 2019-11-22 北京字节跳动网络技术有限公司 Spliced video generation method, apparatus, terminal device, and storage medium
CN110049371A (zh) * 2019-05-14 2019-07-23 北京比特星光科技有限公司 Video synthesis, playback and modification method, video synthesis system, and device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050003690A (ko) * 2003-07-04 2005-01-12 주식회사 엠투그래픽스 Automatic video editing apparatus and method, and recording medium storing the automatic video editing method
CN103391414A (zh) * 2013-07-24 2013-11-13 杭州趣维科技有限公司 Video processing apparatus and processing method applied to a mobile phone platform
CN103905885A (zh) * 2014-03-25 2014-07-02 广州华多网络科技有限公司 Live video streaming method and apparatus
CN106993209A (zh) * 2016-01-20 2017-07-28 上海慧体网络科技有限公司 Method for short-video editing based on mobile terminal technology
CN107241646A (zh) * 2017-07-12 2017-10-10 北京奇虎科技有限公司 Multimedia video editing method and apparatus
CN107967706A (zh) * 2017-11-27 2018-04-27 腾讯音乐娱乐科技(深圳)有限公司 Multimedia data processing method, apparatus, and computer-readable storage medium
CN110611776A (zh) * 2018-05-28 2019-12-24 腾讯科技(深圳)有限公司 Special-effect processing method, computer device, and computer storage medium

Also Published As

Publication number Publication date
CN112533058A (zh) 2021-03-19

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20866060

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20866060

Country of ref document: EP

Kind code of ref document: A1