WO2022160699A1 - Video processing method and video processing apparatus (视频处理方法和视频处理装置) - Google Patents

Video processing method and video processing apparatus

Info

Publication number
WO2022160699A1
WO2022160699A1 (PCT/CN2021/115125, CN2021115125W)
Authority
WO
WIPO (PCT)
Prior art keywords
sound
image material
target
control
image
Prior art date
Application number
PCT/CN2021/115125
Other languages
English (en)
French (fr)
Inventor
汪谷
Original Assignee
北京达佳互联信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京达佳互联信息技术有限公司
Publication of WO2022160699A1 publication Critical patent/WO2022160699A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/16 Sound input; Sound output
    • G06F3/165 Management of the audio stream, e.g. setting of volume, audio stream path

Definitions

  • the present disclosure relates to the field of computers, and in particular, to a video processing method, apparatus, electronic device, and computer-readable storage medium.
  • In the related art, video content can be captured with a video production tool (such as a short-video application) and processed to some extent. Adding a sticker to the video is a common form of processing, where a sticker is an image element that can be displayed on top of the video content.
  • After adding a sticker to a video, the user can also adjust the position or size at which the sticker is displayed, making the video more engaging.
  • the present disclosure provides a video processing method, apparatus, electronic device, and computer-readable storage medium.
  • According to an aspect of the present disclosure, a video processing method is provided, comprising: in response to a first operation instruction generated by triggering a specified control, displaying an audiovisual material list that includes at least one audiovisual material, where an audiovisual material comprises an image material and a sound material associated with the image material; in response to a selection operation in the audiovisual material list, determining a target audiovisual material from the at least one audiovisual material; and displaying the image material of the target audiovisual material on the video content to be processed while playing the sound material of the target audiovisual material.
  • In some embodiments, the identifier of the target audiovisual material is displayed as an editing control.
  • In response to a second operation instruction generated by triggering the editing control, a sound editing interface for the target audiovisual material is displayed, and the sound material of the target audiovisual material is edited in the sound editing interface.
  • In some embodiments, the sound editing interface includes a sound selection control.
  • Editing the sound material of the target audiovisual material in the sound editing interface then includes: when the sound selection control is triggered, displaying a preset candidate sound list, and determining a target sound from the candidate sound list according to a selection operation, the target sound being used to replace the sound material of the target audiovisual material.
  • In some embodiments, the sound editing interface includes a recording control.
  • Editing the sound material of the target audiovisual material in the sound editing interface then includes: when the recording control is triggered, displaying a recording interface and obtaining a recorded sound through it, the recorded sound being used to replace the sound material of the target audiovisual material.
  • In some embodiments, the above method further includes displaying a time editing interface for the target audiovisual material, where the time editing interface includes a time axis of the video content and a time window on the time axis; the time window is used to adjust the start and end times at which the target audiovisual material is displayed on top of the video content.
  • In some embodiments, the time window includes a start control on its left edge and an end control on its right edge.
  • The above method further includes: receiving a first movement operation on the start control along the time axis, and adjusting the start time of the target audiovisual material on the time axis according to the first movement operation; and receiving a second movement operation on the end control along the time axis, and adjusting the end time of the target audiovisual material on the time axis according to the second movement operation.
  • In some embodiments, a waveform representing the sound material is displayed in the time window, and in response to the end control being moved to a position less than a preset distance from the tail of the waveform, the end control automatically snaps to the tail of the waveform.
  • In some embodiments, the start and end times of the sound material in the audiovisual material are the same as the start and end times of the image material; alternatively, the start time of the sound material is the same as the start time of the image material, while the end time of the sound material is determined by the sound material's own duration.
  • In some embodiments, the above method further includes: receiving a deletion operation on the target audiovisual material, and deleting both the image material and the sound material of the target audiovisual material from the video content.
  • In some embodiments, when the video content includes sound information, the above method further includes: in response to playing the sound material of the target audiovisual material, reducing the volume of the sound information of the video content.
  • According to another aspect of the present disclosure, a video processing apparatus is provided, comprising: a display unit configured to display an audiovisual material list in response to a first operation instruction generated by triggering a specified control, the list including at least one audiovisual material, where an audiovisual material comprises an image material and a sound material associated with the image material; and a determination unit configured to determine a target audiovisual material from the at least one audiovisual material in response to a selection operation in the audiovisual material list. The display unit is further configured to display the image material of the target audiovisual material on the video content to be processed and to play the sound material of the target audiovisual material.
  • In some embodiments, the above apparatus further includes an editing-control display unit configured to display the identifier of the target audiovisual material as an editing control.
  • In some embodiments, the above apparatus further includes an editing-interface display unit configured to display a sound editing interface for the target audiovisual material in response to a second operation instruction generated by triggering the editing control; the sound material of the target audiovisual material is edited in this interface.
  • In some embodiments, the sound editing interface includes a sound selection control, and the material determination unit includes a sound selection unit configured to display a preset candidate sound list when the sound selection control is triggered and to determine a target sound from the candidate sound list according to a selection operation, the target sound being used to replace the sound material of the target audiovisual material.
  • In some embodiments, the sound editing interface includes a recording control, and the material determination unit includes a sound recording unit configured to display a recording interface when the recording control is triggered and to obtain a recorded sound through it, the recorded sound being used to replace the sound material of the target audiovisual material.
  • In some embodiments, the above apparatus further includes a time editing unit configured to display a time editing interface for the target audiovisual material, where the time editing interface includes a time axis of the video content and a time window on the time axis; the time window is used to adjust the start and end times at which the target audiovisual material is displayed on the video content.
  • In some embodiments, the time window includes a start control on its left edge and an end control on its right edge, and the above apparatus further includes: a first adjustment unit configured to receive a first movement operation on the start control along the time axis and adjust the start time of the target audiovisual material on the time axis according to the first movement operation; and a second adjustment unit configured to receive a second movement operation on the end control along the time axis and adjust the end time of the target audiovisual material on the time axis according to the second movement operation.
  • In some embodiments, a waveform representing the sound material is displayed in the time window, and in response to the end control being moved to a position less than a preset distance from the tail of the waveform, the end control automatically snaps to the tail of the waveform.
  • In some embodiments, the start and end times of the sound material in the audiovisual material are the same as the start and end times of the image material; alternatively, the start time of the sound material is the same as the start time of the image material, while the end time of the sound material is determined by the sound material's own duration.
  • In some embodiments, the above apparatus further includes: a deletion-operation receiving unit configured to receive a deletion operation on the target audiovisual material; and a deletion unit configured to delete both the image material and the sound material of the target audiovisual material from the video content.
  • In some embodiments, the above apparatus further includes a volume adjustment unit configured to, when the video content includes sound information, reduce the volume of the sound information of the video content in response to playing the sound material of the target audiovisual material.
  • According to another aspect of the present disclosure, an electronic device is provided, comprising: a processor; and a memory for storing instructions executable by the processor, wherein the processor is configured to execute the instructions to implement the above video processing method.
  • According to another aspect of the present disclosure, a computer-readable storage medium is provided; when instructions in the computer-readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the above video processing method.
  • According to another aspect of the present disclosure, a computer program product comprising a computer program/instructions is provided; when executed by a processor, the computer program/instructions implement the above-described video processing method.
  • The above embodiments of the present disclosure display an audiovisual material list in response to a first operation instruction generated by triggering a specified control, the list including at least one audiovisual material, where an audiovisual material comprises an image material and a sound material associated with the image material; determine a target audiovisual material from the at least one audiovisual material in response to a selection operation in the list; and display the image material of the target audiovisual material on the video content to be processed while playing the sound material of the target audiovisual material.
  • The above solution combines sound effects with stickers (image materials), so that users can add a sound effect at the same time as selecting a sticker. The sticker thus changes from a silent image element into a multimedia element containing sound, which enriches the sticker's content and adds a new way of playing with stickers. This avoids the relatively limited sticker interactions of short-video applications in the related art and makes videos more entertaining.
  • Fig. 1 is a flowchart of a video processing method according to an exemplary embodiment.
  • Fig. 2 is a schematic diagram of selecting an audiovisual material according to an exemplary embodiment.
  • Fig. 3 is a schematic diagram showing a selected audiovisual material according to an exemplary embodiment.
  • Fig. 4 is a schematic diagram showing a candidate sound list according to an exemplary embodiment.
  • Fig. 5 is a schematic diagram of acquiring sound material by recording according to an exemplary embodiment.
  • FIG. 6 is a schematic diagram illustrating a recording process according to an exemplary embodiment.
  • Fig. 7 is a schematic diagram of a time window according to an exemplary embodiment.
  • Fig. 8 is a block diagram of a video processing apparatus according to an exemplary embodiment.
  • Fig. 9 is a block diagram of an electronic device according to an exemplary embodiment.
  • Fig. 1 is a flowchart of a video processing method according to an exemplary embodiment. As shown in Fig. 1, the video processing method can be used in a short-video application and can be executed by any electronic device having a video processing function.
  • The electronic device may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
  • In step S11, in response to a first operation instruction generated by triggering the designated control, an audiovisual material list is displayed, the list including at least one audiovisual material, where an audiovisual material comprises an image material and a sound material associated with the image material.
  • In some embodiments, the designated control may be an audiovisual sticker control; in response to this control being triggered, an audiovisual material list containing a plurality of audiovisual materials is displayed on the current interface.
  • the image material refers to the preset image content that can be displayed on the video.
  • the position and size of the image material displayed on the video can be adjusted.
  • The sound material may be sound-effect information, such as specially recorded sound effects or sound effects clipped from a film or television program.
  • An audiovisual material in the embodiments of the present disclosure includes an image material and a sound material, and the two are associated; that is, in response to adding the audiovisual material to a video, both the image material and the sound material of the audiovisual material are added to the video.
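  • The pairing described above can be sketched as a simple data model. The patent specifies no implementation; the following Python sketch is purely illustrative, and all names (`AudioVisualMaterial`, `add_to_video`, the file paths) are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AudioVisualMaterial:
    """A sticker material bundling an image with an associated sound."""
    material_id: str
    image_path: str   # the image (sticker) material
    sound_path: str   # the sound material associated with the image

def add_to_video(video_overlays, video_sounds, material):
    """Adding an audiovisual material attaches BOTH its image and its sound."""
    video_overlays.append(material.image_path)
    video_sounds.append(material.sound_path)
    return video_overlays, video_sounds

# Selecting one material adds its two components together:
overlays, sounds = add_to_video([], [], AudioVisualMaterial("m1", "laugh.png", "laugh.mp3"))
```

The point of the sketch is only that image and sound travel as one unit: a single selection operation yields both an overlay entry and an audio entry.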
  • Fig. 2 is a schematic diagram of selecting an audiovisual material according to an exemplary embodiment.
  • The short-video application provides an audiovisual sticker control (labeled "Audio" in Fig. 2). In response to the user touching this control, an audiovisual sticker window is displayed below the video, and thumbnails of a plurality of audiovisual materials are shown in the window.
  • In step S13, in response to a selection operation in the audiovisual material list, a target audiovisual material is determined from the at least one audiovisual material.
  • The user can select any audiovisual material through a touch operation.
  • In Fig. 2, the user has selected the second audiovisual material in the first row.
  • In step S14, the image material of the target audiovisual material is displayed on the video content to be processed, and the sound material of the target audiovisual material is played.
  • The image material is displayed on the video content and the sound material is played, allowing the user to preview the audiovisual material. Still referring to Fig. 2, the selected target audiovisual material has been displayed on the video.
  • The image material of the target audiovisual material can be adjusted; for example, its position, size, and orientation on the video can be changed. It should also be noted that when the sound material of the target audiovisual material needs to be played again, the thumbnail of the target audiovisual material can be clicked again, or long-pressed, to replay it.
  • The above embodiments of the present disclosure display an audiovisual material list in response to a first operation instruction generated by triggering a specified control, the list including at least one audiovisual material, where an audiovisual material comprises an image material and a sound material associated with the image material; determine a target audiovisual material from the at least one audiovisual material in response to a selection operation in the list; and display the image material of the target audiovisual material on the video content to be processed while playing the sound material of the target audiovisual material.
  • The above solution combines sound effects with stickers (image materials), so that users can add a sound effect at the same time as selecting a sticker. The sticker thus changes from a silent image element into a multimedia element containing sound, which enriches the sticker's content and enhances interactivity.
  • In some embodiments, the above method further includes: displaying the identifier of the target audiovisual material in the audiovisual material list as an editing control.
  • The editing control is a control for performing sound editing on the target audiovisual material.
  • Through the editing control, the sound material of the target audiovisual material used on the current video content can be edited. Editing the sound material may include: changing the sound material, adjusting its playback speed, and applying voice-changing processing to it.
  • In some embodiments, the above method further includes: in response to a second operation instruction generated by triggering the editing control, displaying a sound editing interface for the target audiovisual material, and editing the sound material of the target audiovisual material in the sound editing interface.
  • Fig. 3 is a schematic diagram showing a selected audiovisual material according to an exemplary embodiment.
  • As shown in Fig. 3, after the second audiovisual material in the first row is selected, its thumbnail in the list changes into the editing control. Clicking the editing control opens the sound editing interface.
  • In some embodiments, the sound editing interface includes a sound selection control and a recording control.
  • Editing the sound material of the target audiovisual material in the sound editing interface includes: when the sound selection control is triggered, displaying a preset candidate sound list and determining a target sound from it according to a selection operation, the target sound being used to replace the sound material of the target audiovisual material; and when the recording control is triggered, displaying a recording interface and obtaining a recorded sound through it, the recorded sound being used to replace the sound material of the target audiovisual material.
  • Fig. 4 is a schematic diagram of a candidate sound list according to an exemplary embodiment.
  • As shown in Fig. 4, a candidate sound list is displayed. The list contains the sound effects available for selection, and the user can choose one of them as the sound material of the current target audiovisual material.
  • Fig. 5 is a schematic diagram of obtaining sound material by recording according to an exemplary embodiment.
  • As shown in Fig. 5, when the user selects "record", the application switches to a recording interface that provides a recording control and displays the maximum recording duration; the user can long-press the recording control to record.
  • The recording interface can be as shown in Fig. 6. After the recording is completed, the recording can be used as the sound material of the current target audiovisual material.
  • The recording interface appears after the control is clicked. The recording duration for each audiovisual material is limited, and the limit may differ between materials. Users can freely choose to add a sound effect either from the sound-effect library or by recording their own; the original sound effect can be restored through the "Restore Default" button, and the operation is finally confirmed through the tick mark on the panel.
  • In some embodiments, the above method further includes displaying a time editing interface for the target audiovisual material, where the time editing interface includes a time axis of the video content and a time window on the time axis; the time window is used to adjust the start and end times at which the target audiovisual material is displayed on the video content.
  • The time window is displayed on the time axis to indicate the start and end times of displaying the target audiovisual material in the video; the interval between the indicated start time and end time is the period during which the target audiovisual material is shown in the video.
  • The display time of the target audiovisual material in the video content can therefore be adjusted by adjusting the time window.
  • Fig. 7 is a schematic diagram of a time window according to an exemplary embodiment.
  • a time axis and a time window above the time axis are displayed below the video browsing area.
  • The time axis may be composed of frames of the video at specified time points (not shown in the figure), and a sound-wave icon is displayed on the time window to indicate that the material currently being adjusted is an audiovisual material.
  • The vertical line on the time window indicates the current playback position of the browsing area.
  • The above solution provides the time axis of the video and the time window of the target audiovisual material, making the display of the target audiovisual material in the video controllable, thereby giving users more choices and increasing the diversity of videos.
  • In some embodiments, the time window includes a start control on its left edge and an end control on its right edge.
  • The above method further includes: receiving a first movement operation on the start control along the time axis, and adjusting the start time of the target audiovisual material on the time axis according to the first movement operation; and receiving a second movement operation on the end control along the time axis, and adjusting the end time of the target audiovisual material on the time axis according to the second movement operation.
  • The left and right sides of the time window each carry a handle; these two handles are the start control and the end control described above, and dragging them along the time axis adjusts the corresponding times.
  • The above solution allows the start time and end time at which the target audiovisual material is displayed in the video to be adjusted through the time window, making the display of stickers in the video more flexible.
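  • The handle-dragging behaviour above can be sketched as two clamp functions: the start handle may not pass the end handle, and neither may leave the timeline. This is a hypothetical Python sketch, not the patent's implementation; the function names and the minimum-window-length parameter `min_len` are assumptions.

```python
def move_start(start, end, delta, timeline_len, min_len=0.1):
    """Move the left (start) handle by delta seconds; clamp to [0, end - min_len]."""
    return min(max(0.0, start + delta), end - min_len)

def move_end(start, end, delta, timeline_len, min_len=0.1):
    """Move the right (end) handle by delta seconds; clamp to [start + min_len, timeline_len]."""
    return max(min(timeline_len, end + delta), start + min_len)
```

For example, dragging the start handle of a [2 s, 5 s] window left by 3 s on a 10 s timeline stops at 0 s, and dragging the end handle right by 10 s stops at the timeline's end.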
  • In some embodiments, a waveform representing the sound material is displayed in the time window, and in response to the end control being moved to a position less than a preset distance from the tail of the waveform, the end control automatically snaps to the tail of the waveform.
  • A waveform representing the sound material is displayed in the time window; the start and end positions of the waveform represent the start and end times of the sound material.
  • By default, the start position of the waveform coincides with the position of the start control, and the end position of the waveform indicates the time at which the sound material finishes playing once.
  • When the end control is moved to a position whose distance from the tail of the waveform is less than the preset distance, the end control is close to the tail of the waveform. Because of the precision limits of mobile devices and of the user's touch operation, it is usually difficult to move the end control exactly onto the tail of the waveform, so in this case the end control automatically snaps to the tail of the waveform.
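  • The snap behaviour reduces to a single comparison against the preset distance. A hypothetical Python sketch; the function name and default snap distance are assumptions, and positions are in arbitrary units (e.g. pixels).

```python
def snap_end_to_waveform(end_pos, waveform_tail, snap_distance=8.0):
    """If the end control is dropped within snap_distance of the waveform tail,
    snap it exactly onto the tail; otherwise leave it where it is."""
    if abs(end_pos - waveform_tail) < snap_distance:
        return waveform_tail
    return end_pos
```

A drop at 97 px with the tail at 100 px snaps to 100 px; a drop at 80 px stays at 80 px. This compensates for imprecise touch input, exactly matching sound end to window end.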
  • the display time of the image material is the same as the playback time of the sound material, that is, the two start and end at the same time.
  • In some embodiments, the start and end times of the sound material in the audiovisual material are the same as the start and end times of the image material; alternatively, the start time of the sound material is the same as the start time of the image material, while the end time of the sound material is determined by the sound material's own duration.
  • the start and end times of the sound material and the start and end times of the image material are the same, that is, the two start and end at the same time.
  • If the duration of the sound material is shorter than the duration of the image material, the sound material may be played on loop so that sound continues throughout the display of the image material.
  • If the duration of the sound material is greater than the duration of the image material, the sound material may be truncated.
  • Alternatively, the sound material starts at the same time as the image material, but its end time is determined by its own duration: it stops when it finishes playing once.
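  • The ways of reconciling sound duration with image duration described above (loop, truncate, or natural end) can be sketched as a small scheduling function. A hypothetical Python sketch; the names and the (offset, length) segment representation are assumptions, not the patent's implementation.

```python
def plan_sound_playback(image_dur, sound_dur, mode="match"):
    """Return (offset, play_length) segments for the sound material, in seconds.

    mode="match":   the sound is looped or truncated so it spans the image's
                    entire display time (the final repetition is cut short).
    mode="natural": the sound starts with the image and stops at its own end.
    """
    if mode == "natural":
        return [(0.0, sound_dur)]
    segments, t = [], 0.0
    while t < image_dur:
        seg = min(sound_dur, image_dur - t)  # truncate the last repetition
        segments.append((t, seg))
        t += seg
    return segments
```

For a 5 s sticker with a 2 s sound, "match" mode yields two full plays and one 1 s partial play; with a 5 s sound and a 3 s sticker it yields one truncated 3 s play.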
  • In some embodiments, the method further includes: receiving a deletion operation on the target audiovisual material, and deleting the image material and the sound material of the target audiovisual material from the video content.
  • A target audiovisual material that has been added to the video may be deleted; in response to receiving the deletion operation, the image material and the sound material of the target audiovisual material are deleted simultaneously.
  • In some embodiments, the above method further includes: in response to playing the sound material of the target audiovisual material, reducing the volume of the sound information of the video content.
  • When the video includes its own sound information, in response to playback reaching the target audiovisual material, the volume of the video's own sound information can be reduced to highlight the sound material of the target audiovisual material.
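  • This volume reduction (often called audio "ducking") can be sketched as a gain scaling applied while the sticker's sound plays. A hypothetical Python sketch; the function name and the duck factor are assumptions, not values from the patent.

```python
def duck_video_volume(video_volume, sticker_sound_playing, duck_factor=0.3):
    """While the sticker's sound material plays, scale the video's own audio
    down so the sticker sound stands out; restore full volume afterwards."""
    return video_volume * duck_factor if sticker_sound_playing else video_volume

# During sticker playback the video track drops to 30% gain, then recovers.
during = duck_video_volume(1.0, True)
after = duck_video_volume(1.0, False)
```

In a real player this gain would typically be ramped over a short fade to avoid an audible step, but the core behaviour is this conditional attenuation.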
  • Fig. 8 is a block diagram of a video processing apparatus according to an exemplary embodiment.
  • The apparatus includes a display unit 81 and a determination unit 82.
  • The display unit 81 is configured to display an audiovisual material list in response to a first operation instruction generated by triggering the designated control, the list including at least one audiovisual material, where an audiovisual material comprises an image material and a sound material associated with the image material.
  • The determination unit 82 is configured to determine a target audiovisual material from the at least one audiovisual material in response to a selection operation in the audiovisual material list.
  • The display unit 81 is further configured to display the image material of the target audiovisual material on the video content to be processed and to play the sound material of the target audiovisual material.
  • In some embodiments, the above apparatus further includes an editing-control display unit configured to display the identifier of the target audiovisual material in the audiovisual material list as an editing control.
  • In some embodiments, the above apparatus further includes an editing-interface display unit configured to display a sound editing interface for the target audiovisual material in response to a second operation instruction generated by triggering the editing control; the sound material of the target audiovisual material is edited in this interface.
  • the sound editing interface includes a sound selection control and a recording control.
  • the material determination unit includes: a sound selection unit configured to, when the sound selection control is triggered, display a preset list of candidate sounds and determine a target sound from that list according to a selection operation, the target sound being used to replace the sound material in the target sound image material;
  • and a sound recording unit configured to, when the recording control is triggered, display a recording interface and obtain a recorded sound through recording in that interface, the recorded sound being used to replace the sound material in the target sound image material.
  • the above apparatus further includes: a time editing unit configured to, after the target sound image material is determined from the at least one sound image material in response to the selection operation in the sound image material list, display a time editing interface of the target sound image material.
  • the time editing interface includes a time axis of the video content and a time window on the time axis; the time window is used to adjust the start and end times at which the target sound image material is displayed on the video content.
  • the time window includes a start control on the left and an end control on the right.
  • the above apparatus further includes: a first adjustment unit configured to receive a first movement operation on the start control along the time axis and adjust the start time of the target sound image material on the time axis according to the first movement operation; and a second adjustment unit configured to receive a second movement operation on the end control along the time axis and adjust the end time of the target sound image material on the time axis according to the second movement operation.
  • a waveform representing the sound material is displayed in the time window, and in response to the end control being moved to a position less than a preset distance from the tail of the waveform, the end control automatically snaps to the tail of the waveform.
  • the start and end times of the sound material in the sound image material are the same as the start and end times of the image material in the sound image material; or the start time of the sound material is the same as the start time of the image material, and the end time of the sound material is determined by the sound material's own duration.
  • the above-mentioned apparatus further includes: a deletion operation receiving unit configured to receive a deletion operation on the target sound image material after the image material of the target sound image material is displayed on the video content to be processed and its sound material is played; and a deletion unit configured to delete the image material and the sound material of the target sound image material from the video content.
  • the above apparatus further includes: a volume adjustment unit configured to reduce the volume of the sound information of the video content in response to playing the sound material in the target audio image material when the video content includes sound information.
  • the present application also provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the video processing method according to the embodiments of the present disclosure.
  • FIG. 9 is a block diagram of an electronic device 900 for executing the above video processing method according to an exemplary embodiment.
  • electronic device 900 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, personal digital assistant, and the like.
  • an electronic device 900 may include one or more of the following components: a processing component 902, a memory 904, a power component 906, a multimedia component 908, an audio component 910, an input/output (I/O) interface 912, a sensor component 914, and a communication component 916.
  • the processing component 902 generally controls the overall operation of the electronic device 900, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 902 may include one or more processors 920 to execute instructions to perform all or some of the steps of the methods described above. Additionally, processing component 902 may include one or more modules to facilitate interaction between processing component 902 and other components. For example, processing component 902 may include a multimedia module to facilitate interaction between multimedia component 908 and processing component 902.
  • Memory 904 is configured to store various types of data to support operation at device 900. Examples of such data include instructions for any application or method operating on electronic device 900, contact data, phonebook data, messages, pictures, videos, and the like. Memory 904 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • The power component 906 provides power to the various components of the electronic device 900.
  • The power component 906 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 900.
  • Multimedia component 908 includes a screen that provides an output interface between the electronic device 900 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP).
  • the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
  • the multimedia component 908 includes a front-facing camera and/or a rear-facing camera. When the device 900 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data.
  • Each of the front and rear cameras may be a fixed optical lens system or have focusing and optical zoom capability.
  • Audio component 910 is configured to output and/or input audio signals.
  • audio component 910 includes a microphone (MIC) that is configured to receive external audio signals when electronic device 900 is in operating modes, such as call mode, recording mode, and voice recognition mode. The received audio signal may be further stored in memory 904 or transmitted via communication component 916 .
  • audio component 910 also includes a speaker for outputting audio signals.
  • the I/O interface 912 provides an interface between the processing component 902 and a peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to: home button, volume buttons, start button, and lock button.
  • The sensor component 914 includes one or more sensors for providing status assessments of various aspects of the electronic device 900.
  • For example, the sensor component 914 can detect the on/off state of the device 900 and the relative positioning of components (for example, the display and keypad of the electronic device 900); it can also detect a change in position of the electronic device 900 or one of its components, the presence or absence of user contact with the electronic device 900, the orientation or acceleration/deceleration of the electronic device 900, and temperature changes of the electronic device 900.
  • The sensor component 914 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • The sensor component 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • In some embodiments, the sensor component 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 916 is configured to facilitate wired or wireless communication between electronic device 900 and other devices.
  • Electronic device 900 may access wireless networks based on communication standards, such as WiFi, carrier networks (e.g., 2G, 3G, 4G, or 5G), or a combination thereof.
  • the communication component 916 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 916 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • electronic device 900 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, to perform the above method.
  • a computer-readable storage medium including instructions, such as a memory 904 including instructions, which are executable by the processor 920 of the electronic device 900 to perform the above-described method.
  • the computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.
  • the present application also provides a computer-readable storage medium, when the instructions in the computer-readable storage medium are executed by the processor of the electronic device, the electronic device can execute the video processing method according to the embodiment of the present disclosure.
  • the present application also provides a computer program product, including a computer program/instruction, wherein the computer program/instruction, when executed by a processor, implements the video processing method according to an embodiment of the present disclosure.

Abstract

The present disclosure relates to a video processing method in the field of computer technology. The method includes: in response to a first operation instruction generated by triggering a designated control, displaying a sound image material list, the list including at least one sound image material, where a sound image material includes an image material and a sound material associated with the image material; in response to a selection operation in the sound image material list, determining a target sound image material from the at least one sound image material; and displaying the image material of the target sound image material on the video content to be processed while playing the sound material of the target sound image material.

Description

Video processing method and video processing apparatus
Cross-reference to related applications
This application is based on and claims priority to Chinese patent application No. 202110130737.5, filed on January 29, 2021, the entire contents of which are incorporated herein by reference.
Technical field
The present disclosure relates to the field of computer technology, and in particular to a video processing method, apparatus, electronic device, and computer-readable storage medium.
Background
In the related art, video content can be shot with a video production tool (for example, a short-video application) and then processed. Adding stickers to a video is one common form of processing; a sticker here refers to an image element that can be displayed on top of the video content. After a sticker is added to a video, its position and size in the video can also be adjusted, which makes the video more engaging.
Summary
The present disclosure provides a video processing method, apparatus, electronic device, and computer-readable storage medium.
According to a first aspect of the embodiments of the present disclosure, a video processing method is provided, including: in response to a first operation instruction generated by triggering a designated control, displaying a sound image material list including at least one sound image material, where a sound image material includes an image material and a sound material associated with the image material; in response to a selection operation in the sound image material list, determining a target sound image material from the at least one sound image material; and displaying the image material of the target sound image material on the video content to be processed while playing the sound material of the target sound image material.
In an embodiment, the identifier of the target sound image material is displayed as an editing control.
In an embodiment, in response to a second operation instruction generated by triggering the editing control, a sound editing interface of the target sound image material is displayed, and the sound material of the target sound image material is edited in the sound editing interface.
In an embodiment, the sound editing interface includes a sound selection control, and editing the sound material of the target sound image material in the sound editing interface includes: when the sound selection control is triggered, displaying a preset candidate sound list, and determining a target sound from the candidate sound list according to a selection operation, the target sound being used to replace the sound material in the target sound image material.
In an embodiment, the sound editing interface includes a recording control, and editing the sound material of the target sound image material in the sound editing interface includes: when the recording control is triggered, displaying a recording interface and obtaining a recorded sound through recording in the recording interface, the recorded sound being used to replace the sound material in the target sound image material.
In an embodiment, the method further includes: displaying a time editing interface of the target sound image material, where the time editing interface includes a time axis of the video content and a time window on the time axis, the time window being used to adjust the start and end times at which the target sound image material is displayed on the video content.
In an embodiment, the time window includes a start control on the left and an end control on the right, and the method further includes: receiving a first movement operation on the start control along the time axis and adjusting the start time of the target sound image material on the time axis according to the first movement operation; and receiving a second movement operation on the end control along the time axis and adjusting the end time of the target sound image material on the time axis according to the second movement operation.
In an embodiment, a waveform representing the sound material is displayed in the time window, and in response to the end control being moved to a position less than a preset distance from the tail of the waveform, the end control automatically snaps to the tail of the waveform.
In an embodiment, the start and end times of the sound material in the sound image material are the same as the start and end times of the image material in the sound image material; or the start time of the sound material is the same as the start time of the image material, and the end time of the sound material is the sound material's own end time.
In an embodiment, the method further includes: receiving a deletion operation on the target sound image material; and deleting the image material and the sound material of the target sound image material from the video content.
In an embodiment, when the video content includes sound information, the method further includes: in response to playing the sound material of the target sound image material, reducing the volume of the sound information of the video content.
According to a second aspect of the embodiments of the present disclosure, a video processing apparatus is provided, including: a presentation unit configured to display a sound image material list in response to a first operation instruction generated by triggering a designated control, the list including at least one sound image material, where a sound image material includes an image material and a sound material associated with the image material; and a determination unit configured to determine a target sound image material from the at least one sound image material in response to a selection operation in the sound image material list; where the presentation unit is further configured to display the image material of the target sound image material on the video content to be processed and play the sound material of the target sound image material.
In an embodiment, the apparatus further includes: an editing control display unit configured to display the identifier of the target sound image material as an editing control.
In an embodiment, the apparatus further includes: an editing interface display unit configured to display a sound editing interface of the target sound image material in response to a second operation instruction generated by triggering the editing control; and a material determination unit configured to edit the sound material of the target sound image material in the sound editing interface.
In an embodiment, the sound editing interface includes a sound selection control, and the material determination unit includes: a sound selection unit configured to, when the sound selection control is triggered, display a preset candidate sound list and determine a target sound from it according to a selection operation, the target sound being used to replace the sound material in the target sound image material.
In an embodiment, the sound editing interface includes a recording control, and the material determination unit includes: a sound recording unit configured to, when the recording control is triggered, display a recording interface and obtain a recorded sound through recording in that interface, the recorded sound being used to replace the sound material in the target sound image material.
In an embodiment, the apparatus further includes: a time editing unit configured to display a time editing interface of the target sound image material, where the time editing interface includes a time axis of the video content and a time window on the time axis, the time window being used to adjust the start and end times at which the target sound image material is displayed on the video content.
In an embodiment, the time window includes a start control on the left and an end control on the right, and the apparatus further includes: a first adjustment unit configured to receive a first movement operation on the start control along the time axis and adjust the start time of the target sound image material on the time axis according to the first movement operation; and a second adjustment unit configured to receive a second movement operation on the end control along the time axis and adjust the end time of the target sound image material on the time axis according to the second movement operation.
In an embodiment, a waveform representing the sound material is displayed in the time window, and in response to the end control being moved to a position less than a preset distance from the tail of the waveform, the end control automatically snaps to the tail of the waveform.
In an embodiment, the start and end times of the sound material in the sound image material are the same as the start and end times of the image material in the sound image material; or the start time of the sound material is the same as the start time of the image material, and the end time of the sound material is the sound material's own end time.
In an embodiment, the apparatus further includes: a deletion operation receiving unit configured to receive a deletion operation on the target sound image material; and a deletion unit configured to delete the image material and the sound material of the target sound image material from the video content.
In an embodiment, the apparatus further includes: a volume adjustment unit configured to, when the video content includes sound information, reduce the volume of the sound information of the video content in response to playing the sound material of the target sound image material.
According to a third aspect of the embodiments of the present disclosure, an electronic device is provided, including: a processor; and a memory for storing instructions executable by the processor; where the processor is configured to execute the instructions to implement the video processing method described above.
According to a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided; when the instructions in the computer-readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the video processing method described above.
According to a fifth aspect of the embodiments of the present disclosure, a computer program product is provided, including a computer program/instructions that, when executed by a processor, implement the video processing method described above.
In the above embodiments of the present application, in response to a first operation instruction generated by triggering a designated control, a sound image material list is displayed, the list including at least one sound image material, where a sound image material includes an image material and a sound material associated with the image material; in response to a selection operation in the sound image material list, a target sound image material is determined from the at least one sound image material; and the image material of the target sound image material is displayed on the video content to be processed while its sound material is played. This scheme combines sound effects with stickers (image materials), so that the user adds a sound effect at the same time as choosing a sticker, turning the sticker from a silent image element into a multimedia element that contains sound. This enriches sticker content, adds a new way of playing with stickers, avoids the rather monotonous sticker experience in short-video applications of the related art, and makes videos more entertaining.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure; they do not constitute an improper limitation of the present disclosure.
Fig. 1 is a flowchart of a video processing method according to an exemplary embodiment.
Fig. 2 is a schematic diagram of selecting a sound sticker material according to an exemplary embodiment.
Fig. 3 is a schematic diagram of a selected sound sticker material according to an exemplary embodiment.
Fig. 4 is a schematic diagram of a candidate sound list according to an exemplary embodiment.
Fig. 5 is a schematic diagram of obtaining a sound material by recording according to an exemplary embodiment.
Fig. 6 is a schematic diagram of the recording process according to an exemplary embodiment.
Fig. 7 is a schematic diagram of a time window according to an exemplary embodiment.
Fig. 8 is a block diagram of a video processing apparatus according to an exemplary embodiment.
Fig. 9 is a block diagram of an electronic device according to an exemplary embodiment.
Detailed description
To help those of ordinary skill in the art better understand the technical solutions of the present disclosure, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings.
It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of the present disclosure are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present disclosure described herein can be implemented in orders other than those illustrated or described herein. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
Fig. 1 is a flowchart of a video processing method according to an exemplary embodiment. As shown in Fig. 1, the video processing method can be used in a short-video application and can be executed by any electronic device with video processing capability. The electronic device may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, or personal digital assistant.
In step S11, in response to a first operation instruction generated by triggering a designated control, a sound image material list is displayed, the list including at least one sound image material, where a sound image material includes an image material and a sound material associated with the image material.
In some embodiments, the designated control may be a sound sticker control; in response to the sound sticker control being triggered, a sound image material list containing multiple sound image materials is displayed in the current interface.
An image material is preset image content that can be displayed overlaid on the video; its position and size on the video are both adjustable. A sound material may be sound-effect information, for example a specially recorded sound effect or one clipped from a film or TV program. In the embodiments of the present disclosure, a sound image material includes an image material and a sound material, and the two are associated: in response to the sound image material being added to a video, both its image material and its sound material are added to the video.
Fig. 2 is a schematic diagram of selecting a sound sticker material according to an exemplary embodiment. In one embodiment, taking Fig. 2 as an example, after a short video is shot, the short-video application provides a sound sticker control ("Sound" in Fig. 2); in response to the user touching the sound sticker control, a sound sticker window is displayed below the video, showing thumbnails of multiple sound image materials.
In step S13, in response to a selection operation in the sound image material list, a target sound image material is determined from the at least one sound image material.
On a mobile terminal, the user can select any sound image material by a touch operation. In the example of Fig. 2, the user has selected the second sound image material in the first row.
In step S14, the image material of the target sound image material is displayed on the video content to be processed, and the sound material of the target sound image material is played.
When a sound image material is selected, its image material is displayed on top of the video content and its sound material is played, which serves as a preview of the sound image material. Still referring to Fig. 2, the selected target sound image material is already displayed on top of the video.
It should be noted that while the target sound image material is displayed on the video, its image material can be adjusted, for example its position, size, and orientation on the video. It should also be noted that, to play the sound material of the target sound image material again, the user can tap or long-press the thumbnail of the target sound image material again, and the sound material will be replayed.
In the above embodiment of the present application, in response to a first operation instruction generated by triggering a designated control, a sound image material list including at least one sound image material is displayed, where a sound image material includes an image material and a sound material associated with the image material; in response to a selection operation in the sound image material list, a target sound image material is determined from the at least one sound image material; and the image material of the target sound image material is displayed on the video content to be processed while its sound material is played. This scheme combines sound effects with stickers (image materials), so that the user adds a sound effect at the same time as choosing a sticker, turning the sticker from a silent image element into a multimedia element that contains sound; sticker content is enriched and interactivity is enhanced.
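As an illustrative, non-limiting sketch, the data model implied by steps S11 to S14 can be expressed as follows; the class and function names are assumptions introduced here for illustration and do not appear in the disclosure:

```python
from dataclasses import dataclass

@dataclass
class SoundImageMaterial:
    """A sticker bundling an image material with an associated sound material."""
    material_id: str
    image_path: str   # image material overlaid on the video content
    sound_path: str   # sound material played when the sticker is added

def select_target(material_list, index):
    """Step S13 analogue: pick the target sound image material from the list."""
    if not 0 <= index < len(material_list):
        raise IndexError("selection outside the sound image material list")
    return material_list[index]
```

Selecting one entry yields both the image and the sound, mirroring the point that adding a sound image material adds the two to the video together.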
In an embodiment, the method further includes: displaying the identifier of the target sound image material in the sound image material list as an editing control.
In some embodiments, the editing control is used for sound editing of the target sound image material. In the sound editing interface, the sound material of the target sound image material as used on the current video content can be edited. Editing the sound material may include replacing it, adjusting its playback speed, and applying voice-change processing.
In an embodiment, the method further includes: in response to a second operation instruction generated by triggering the editing control, displaying the sound editing interface of the target sound image material; and editing the sound material of the target sound image material in the sound editing interface.
Fig. 3 is a schematic diagram of a selected sound sticker material according to an exemplary embodiment. In one embodiment, as shown in Fig. 3, after the second sound image material in the first row is selected, its thumbnail changes into an editing control; tapping the editing control opens the sound editing interface.
It should be noted that changes made to the sound material in the sound editing interface take effect only in the current video content; the sound material associated with the target sound image material by default is unchanged.
With the above scheme, on top of sound stickers, editing of a sticker's sound effect is also realized, which greatly enriches the user's choices.
In an embodiment, the sound editing interface includes a sound selection control and a recording control, and editing the sound material of the target sound image material in the sound editing interface includes: when the sound selection control is triggered, displaying a preset candidate sound list and determining a target sound from the list according to a selection operation, the target sound being used to replace the sound material in the target sound image material; and when the recording control is triggered, displaying a recording interface and obtaining a recorded sound through recording in that interface, the recorded sound being used to replace the sound material in the target sound image material.
The above scheme provides two ways of editing the sound material, described below.
In the first way, the user is given a candidate sound list containing the sounds allowed to be selected. Fig. 4 is a schematic diagram of a candidate sound list according to an exemplary embodiment. In one embodiment, as shown in Fig. 4, when the user chooses "Sound effect library", the candidate sound list is displayed; it contains the selectable sound effects, and the user can pick one of them as the sound element of the current target sound image element.
In the second way, the user can record a sound to use as the sound material. Fig. 5 is a schematic diagram of obtaining a sound material by recording according to an exemplary embodiment. In one embodiment, as shown in Fig. 5, when the user chooses "Record", the interface switches to a recording interface that provides a recording control and shows the maximum recording duration; the user long-presses the recording control to record, during which the interface may look like Fig. 6. After recording finishes, the recording can be used as the sound element of the current target sound image element.
It should be noted that in the second way, the recording interface appears after tapping; the recording duration for each sound image material is limited, and the limit may differ between materials. The user is free to add a sound effect either from the sound effect library or by recording. The original sound effect can be restored with a "Restore default" button, and the operation is finally confirmed with the check mark on the panel.
With the above scheme, on top of sound stickers, binding a custom sound effect to a sticker is also realized, which greatly enriches the user's choices.
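As a non-limiting sketch, the two editing paths above (choosing a preset sound effect, or recording a new one) both reduce to replacing the sound bound to the current use of the sticker, while leaving the material's default association untouched; the dictionary layout and function names below are assumptions for illustration:

```python
def replace_sound(material, new_sound_path):
    """Return a copy of the material with its sound replaced, so the
    default association of the original material is left unchanged."""
    edited = dict(material)          # material: {"image": ..., "sound": ...}
    edited["sound"] = new_sound_path
    return edited

def edit_via_library(material, library, choice):
    # path 1: pick a preset sound effect from the candidate sound list
    return replace_sound(material, library[choice])

def edit_via_recording(material, recorded_path):
    # path 2: use a sound recorded through the recording interface
    return replace_sound(material, recorded_path)
```

Returning a copy models the note that the edit takes effect only in the current video content.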
In an embodiment, after the target sound image material is determined from the at least one sound image material in response to the selection operation in the sound image material list, the method further includes: displaying a time editing interface of the target sound image material, where the time editing interface includes a time axis of the video content and a time window on the time axis, the time window being used to adjust the start and end times at which the target sound image material is displayed on the video content.
In some embodiments, the time window is displayed on the time axis and indicates the start and end times at which the target sound image material is shown in the video; the interval between those times is the period during which the target sound image material is displayed in the video. Adjusting the time window adjusts that period.
Fig. 7 is a schematic diagram of a time window according to an exemplary embodiment. In one embodiment, as shown in Fig. 7, a time axis and a time window on it are displayed below the video preview area. The time axis may be composed of video frames at specified time points (not shown), and a sound-wave icon is displayed on the time window to indicate that the image material being adjusted is a sound image material. The vertical line on the time window indicates the current playback position in the preview area.
The above scheme provides the video's time axis and the time window of the target sound image material, making the display of the target sound image material in the video controllable, thereby giving users more choices and increasing the diversity of videos.
In an embodiment, the time window includes a start control on the left and an end control on the right, and the method further includes: receiving a first movement operation on the start control along the time axis and adjusting the start time of the target sound image material on the time axis according to the first movement operation; and receiving a second movement operation on the end control along the time axis and adjusting the end time of the target sound image material on the time axis according to the second movement operation.
In one embodiment, still referring to Fig. 7, the left and right sides of the time window each have a handle; these two handles are the start and end controls. Moving the start control along the time axis adjusts the start time at which the target sound image material is displayed in the video content, and moving the end control adjusts the end time. Note that since the start time cannot be later than the end time, the start control is always to the left of the end control.
The above scheme adjusts the start and end times of the target sound image material in the video by adjusting the time window, making sticker display in the video more varied.
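As a non-limiting sketch, the handle-drag behavior above can be modeled as a clamping function that keeps the start control to the left of the end control; the minimum span and parameter names are assumptions introduced here:

```python
def move_handles(start, end, new_start=None, new_end=None,
                 video_duration=None, min_span=0.1):
    """Clamp handle moves so that 0 <= start < end <= video_duration.

    `new_start`/`new_end` are the requested handle positions (seconds);
    `min_span` keeps the window from collapsing to zero width.
    """
    if new_start is not None:
        start = max(0.0, min(new_start, end - min_span))
    if new_end is not None:
        hi = video_duration if video_duration is not None else new_end
        end = min(hi, max(new_end, start + min_span))
    return start, end
```

Dragging the start handle past the end handle (or off the timeline) is simply clamped, which preserves the invariant stated in the paragraph above.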
In an embodiment, a waveform representing the sound material is displayed in the time window; in response to the end control being moved to a position less than a preset distance from the tail of the waveform, the end control automatically snaps to the tail of the waveform.
In this scheme, the waveform displayed in the time window represents the sound material: its start and end positions represent the start and end times of the sound material. The start position of the waveform coincides with the start control by default, and its end position represents the time at which one full play of the sound material finishes. When the end control is moved to within the preset distance of the waveform tail, the control is already close to the tail; given the precision limits of mobile terminal devices and of user operations, it is usually hard to move the end control exactly onto the waveform tail, so in this case the end control is made to snap there automatically.
After the end control snaps to the waveform tail, the display time of the image material equals the playback time of the sound material, i.e., the two start and end together.
By automatically snapping the end control to the waveform tail when it is moved close to it, this scheme compensates for the inconvenience caused by limited device or operation precision.
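As a non-limiting sketch, the snapping behavior can be expressed as a simple threshold check; the default preset distance is an assumed value:

```python
def snap_end_control(end_pos, waveform_tail, preset_distance=0.2):
    """If the end control lands within `preset_distance` (seconds) of the
    waveform tail, snap it exactly onto the tail; otherwise leave it."""
    if abs(end_pos - waveform_tail) < preset_distance:
        return waveform_tail
    return end_pos
```

When the snap fires, image display time and sound playback time coincide, as noted above.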
In an embodiment, the start and end times of the sound material in the sound image material are the same as those of the image material; or the start time of the sound material is the same as that of the image material, and the end time of the sound material is the sound material's own end time.
The above scheme provides two ways of playing the sound material, described below.
In the first way, the sound material and the image material share the same start and end times, i.e., they start and end together. When the sound material is shorter than the image material, it can be played repeatedly while the image material is displayed; when it is longer, it can be truncated.
In the second way, the sound material starts together with the image material, but its end time is determined by its own duration: whether it is longer or shorter than the image material, it stops once it has finished playing.
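As a non-limiting sketch, the two playback modes can be expressed as a schedule of (offset, length) segments of the sound material, with looping when the sound is shorter than the image and truncation when it is longer; the mode names are assumptions:

```python
def sound_schedule(image_duration, sound_duration, mode):
    """Return the (offset, length) segments of the sound material to play.

    mode "match": the sound tracks the image material's start and end;
    it loops if shorter than the image and is truncated if longer.
    mode "once": the sound starts with the image but plays exactly once.
    """
    if mode == "once":
        return [(0.0, sound_duration)]
    segments, remaining = [], image_duration
    while remaining > 0:
        length = min(sound_duration, remaining)
        segments.append((0.0, length))
        remaining -= length
    return segments
```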
In an embodiment, after the image material of the target sound image material is displayed on the video content to be processed and its sound material is played, the method further includes: receiving a deletion operation on the target sound image material; and deleting the image material and the sound material of the target sound image material from the video content.
In this scheme, a target sound image material that has been added to the video can be deleted; in response to the deletion operation being received, both the image material and the sound material of the target sound image material are deleted.
In an embodiment, when the video content includes sound information, the method further includes: in response to playing the sound material of the target sound image material, reducing the volume of the sound information of the video content.
In this scheme, the video includes existing sound information, so when playback reaches the target sound image material, the volume of the sound information included in the video itself can be reduced to highlight the sound material of the target sound image material.
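As a non-limiting sketch, this volume reduction ("ducking") can be modeled as a time-dependent gain applied to the video's own soundtrack; the 0.3 duck gain is an assumed value:

```python
def duck_video_gain(t, sticker_start, sticker_end, duck_gain=0.3):
    """Gain applied to the video's own soundtrack at time t (seconds):
    reduced while the sticker's sound material is playing, full otherwise."""
    if sticker_start <= t < sticker_end:
        return duck_gain
    return 1.0
```

A player would evaluate this gain per audio frame and multiply it into the video track, leaving the sticker's sound material at full volume.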
Fig. 8 is a block diagram of a video processing apparatus according to an exemplary embodiment. Referring to Fig. 8, the apparatus includes a presentation unit 81 and a determination unit 82.
The presentation unit 81 is configured to display a sound image material list in response to a first operation instruction generated by triggering a designated control, the list including at least one sound image material, where a sound image material includes an image material and a sound material associated with the image material.
The determination unit 82 is configured to determine a target sound image material from the at least one sound image material in response to a selection operation in the sound image material list.
The presentation unit 81 is further configured to display the image material of the target sound image material on the video content to be processed and play the sound material of the target sound image material.
In an embodiment, the apparatus further includes: an editing control display unit configured to display the identifier of the target sound image material in the sound image material list as an editing control.
In an embodiment, the apparatus further includes: an editing interface display unit configured to display the sound editing interface of the target sound image material in response to a second operation instruction generated by triggering the editing control; and a material determination unit configured to edit the sound material of the target sound image material in the sound editing interface.
In an embodiment, the sound editing interface includes a sound selection control and a recording control, and the material determination unit includes: a sound selection unit configured to, when the sound selection control is triggered, display a preset candidate sound list and determine a target sound from it according to a selection operation, the target sound being used to replace the sound material in the target sound image material; and a sound recording unit configured to, when the recording control is triggered, display a recording interface and obtain a recorded sound through recording in that interface, the recorded sound being used to replace the sound material in the target sound image material.
In an embodiment, the apparatus further includes: a time editing unit configured to, after the target sound image material is determined from the at least one sound image material in response to the selection operation in the sound image material list, display a time editing interface of the target sound image material, where the time editing interface includes a time axis of the video content and a time window on the time axis, the time window being used to adjust the start and end times at which the target sound image material is displayed on the video content.
In an embodiment, the time window includes a start control on the left and an end control on the right, and the apparatus further includes: a first adjustment unit configured to receive a first movement operation on the start control along the time axis and adjust the start time of the target sound image material on the time axis according to the first movement operation; and a second adjustment unit configured to receive a second movement operation on the end control along the time axis and adjust the end time of the target sound image material on the time axis according to the second movement operation.
In an embodiment, a waveform representing the sound material is displayed in the time window, and in response to the end control being moved to a position less than a preset distance from the tail of the waveform, the end control automatically snaps to the tail of the waveform.
In an embodiment, the start and end times of the sound material in the sound image material are the same as the start and end times of the image material in the sound image material; or the start time of the sound material is the same as the start time of the image material, and the end time of the sound material is the sound material's own end time.
In an embodiment, the apparatus further includes: a deletion operation receiving unit configured to receive a deletion operation on the target sound image material after the image material of the target sound image material is displayed on the video content to be processed and its sound material is played; and a deletion unit configured to delete the image material and the sound material of the target sound image material from the video content.
In an embodiment, the apparatus further includes: a volume adjustment unit configured to, when the video content includes sound information, reduce the volume of the sound information of the video content in response to playing the sound material of the target sound image material.
Regarding the apparatus in the above embodiments, the specific ways in which each module performs its operations have been described in detail in the embodiments of the method, and will not be elaborated here.
The present application further provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; where the processor is configured to execute the instructions to implement the video processing method according to the embodiments of the present disclosure.
Fig. 9 is a block diagram of an electronic device 900 for executing the above video processing method according to an exemplary embodiment. For example, the electronic device 900 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, personal digital assistant, or the like.
Referring to Fig. 9, the electronic device 900 may include one or more of the following components: a processing component 902, a memory 904, a power component 906, a multimedia component 908, an audio component 910, an input/output (I/O) interface 912, a sensor component 914, and a communication component 916.
The processing component 902 generally controls the overall operations of the electronic device 900, such as operations associated with display, phone calls, data communication, camera operation, and recording. The processing component 902 may include one or more processors 920 to execute instructions to complete all or part of the steps of the above method. In addition, the processing component 902 may include one or more modules to facilitate interaction between the processing component 902 and other components; for example, it may include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operation at the device 900. Examples of such data include instructions for any application or method operating on the electronic device 900, contact data, phonebook data, messages, pictures, videos, and the like. The memory 904 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power component 906 provides power to the various components of the electronic device 900. The power component 906 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 900.
The multimedia component 908 includes a screen that provides an output interface between the electronic device 900 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). When the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with it. In some embodiments, the multimedia component 908 includes a front camera and/or a rear camera. When the device 900 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each of the front and rear cameras may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 910 is configured to output and/or input audio signals. For example, the audio component 910 includes a microphone (MIC) configured to receive external audio signals when the electronic device 900 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may be further stored in the memory 904 or sent via the communication component 916. In some embodiments, the audio component 910 also includes a speaker for outputting audio signals.
The I/O interface 912 provides an interface between the processing component 902 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. The buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 914 includes one or more sensors for providing status assessments of various aspects of the electronic device 900. For example, the sensor component 914 can detect the on/off state of the device 900 and the relative positioning of components (for example, the display and keypad of the electronic device 900); it can also detect a change in position of the electronic device 900 or one of its components, the presence or absence of user contact with the electronic device 900, the orientation or acceleration/deceleration of the electronic device 900, and temperature changes of the electronic device 900. The sensor component 914 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate wired or wireless communication between the electronic device 900 and other devices. The electronic device 900 may access wireless networks based on communication standards, such as WiFi, carrier networks (e.g., 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 916 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 916 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 900 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, to perform the above method.
In an exemplary embodiment, a computer-readable storage medium including instructions is also provided, for example the memory 904 including instructions, which can be executed by the processor 920 of the electronic device 900 to complete the above method. In some embodiments, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
The present application further provides a computer-readable storage medium; when the instructions in the computer-readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the video processing method according to the embodiments of the present disclosure.
The present application further provides a computer program product, including a computer program/instructions that, when executed by a processor, implement the video processing method according to the embodiments of the present disclosure.
All embodiments of the present disclosure may be executed alone or in combination with other embodiments, and all of them fall within the scope of protection claimed by the present disclosure.

Claims (25)

  1. A video processing method, comprising:
    in response to a first operation instruction generated by triggering a designated control, displaying a sound image material list, the sound image material list comprising at least one sound image material, wherein the sound image material comprises an image material and a sound material associated with the image material;
    in response to a selection operation in the sound image material list, determining a target sound image material from the at least one sound image material;
    displaying the image material of the target sound image material on video content to be processed, and playing the sound material of the target sound image material.
  2. The video processing method according to claim 1, wherein the method further comprises:
    displaying an identifier of the target sound image material as an editing control.
  3. The video processing method according to claim 2, wherein the method further comprises:
    in response to a second operation instruction generated by triggering the editing control, displaying a sound editing interface of the target sound image material;
    editing the sound material of the target sound image material in the sound editing interface.
  4. The video processing method according to claim 3, wherein the sound editing interface comprises a sound selection control, and editing the sound material of the target sound image material in the sound editing interface comprises:
    when the sound selection control is triggered, displaying a preset candidate sound list, and determining a target sound from the candidate sound list according to a selection operation, the target sound being used to replace the sound material in the target sound image material.
  5. The video processing method according to claim 3, wherein the sound editing interface comprises a recording control, and editing the sound material of the target sound image material in the sound editing interface comprises:
    when the recording control is triggered, displaying a recording interface, and obtaining a recorded sound through recording in the recording interface, the recorded sound being used to replace the sound material in the target sound image material.
  6. The video processing method according to claim 1, wherein the method further comprises:
    displaying a time editing interface of the target sound image material, wherein the time editing interface comprises a time axis of the video content and a time window on the time axis, the time window being used to adjust start and end times at which the target sound image material is displayed on the video content.
  7. The video processing method according to claim 6, wherein the time window comprises a start control on the left and an end control on the right, and the method further comprises:
    receiving a first movement operation on the start control along the time axis, and adjusting a start time of the target sound image material on the time axis according to the first movement operation;
    receiving a second movement operation on the end control along the time axis, and adjusting an end time of the target sound image material on the time axis according to the second movement operation.
  8. The video processing method according to claim 7, wherein a waveform representing the sound material is displayed in the time window, and in response to the end control being moved to a position less than a preset distance from a tail of the waveform, the end control automatically snaps to the tail of the waveform.
  9. The video processing method according to claim 1, wherein
    start and end times of the sound material in the sound image material are the same as start and end times of the image material in the sound image material; or
    a start time of the sound material in the sound image material is the same as a start time of the image material in the sound image material, and an end time of the sound material in the sound image material is the sound material's own end time.
  10. The video processing method according to claim 1, wherein the method further comprises:
    receiving a deletion operation on the target sound image material;
    deleting the image material and the sound material of the target sound image material from the video content.
  11. The video processing method according to claim 1, wherein, when the video content comprises sound information, the method further comprises:
    in response to playing the sound material of the target sound image material, reducing a volume of the sound information of the video content.
  12. A video processing apparatus, comprising:
    a presentation unit configured to display a sound image material list in response to a first operation instruction generated by triggering a designated control, the sound image material list comprising at least one sound image material, wherein the sound image material comprises an image material and a sound material associated with the image material;
    a determination unit configured to determine a target sound image material from the at least one sound image material in response to a selection operation in the sound image material list;
    wherein the presentation unit is further configured to display the image material of the target sound image material on video content to be processed and play the sound material of the target sound image material.
  13. The video processing apparatus according to claim 12, wherein the apparatus further comprises:
    an editing control display unit configured to display an identifier of the target sound image material as an editing control.
  14. The video processing apparatus according to claim 13, wherein the apparatus further comprises:
    an editing interface display unit configured to display a sound editing interface of the target sound image material in response to a second operation instruction generated by triggering the editing control;
    a material determination unit configured to edit the sound material of the target sound image material in the sound editing interface.
  15. The video processing apparatus according to claim 14, wherein the sound editing interface comprises a sound selection control, and the material determination unit comprises:
    a sound selection unit configured to, when the sound selection control is triggered, display a preset candidate sound list and determine a target sound from the candidate sound list according to a selection operation, the target sound being used to replace the sound material in the target sound image material.
  16. The video processing apparatus according to claim 14, wherein the sound editing interface comprises a recording control, and the material determination unit comprises:
    a sound recording unit configured to, when the recording control is triggered, display a recording interface and obtain a recorded sound through recording in the recording interface, the recorded sound being used to replace the sound material in the target sound image material.
  17. The video processing apparatus according to claim 12, wherein the apparatus further comprises:
    a time editing unit configured to display a time editing interface of the target sound image material, wherein the time editing interface comprises a time axis of the video content and a time window on the time axis, the time window being used to adjust start and end times at which the target sound image material is displayed on the video content.
  18. The video processing apparatus according to claim 17, wherein the time window comprises a start control on the left and an end control on the right, and the apparatus further comprises:
    a first adjustment unit configured to receive a first movement operation on the start control along the time axis and adjust a start time of the target sound image material on the time axis according to the first movement operation;
    a second adjustment unit configured to receive a second movement operation on the end control along the time axis and adjust an end time of the target sound image material on the time axis according to the second movement operation.
  19. The video processing apparatus according to claim 18, wherein a waveform representing the sound material is displayed in the time window, and in response to the end control being moved to a position less than a preset distance from a tail of the waveform, the end control automatically snaps to the tail of the waveform.
  20. The video processing apparatus according to claim 12, wherein
    start and end times of the sound material in the sound image material are the same as start and end times of the image material in the sound image material; or
    a start time of the sound material in the sound image material is the same as a start time of the image material in the sound image material, and an end time of the sound material in the sound image material is the sound material's own end time.
  21. The video processing apparatus according to claim 12, wherein the apparatus further comprises:
    a deletion operation receiving unit configured to receive a deletion operation on the target sound image material;
    a deletion unit configured to delete the image material and the sound material of the target sound image material from the video content.
  22. The video processing apparatus according to claim 12, wherein the apparatus further comprises:
    a volume adjustment unit configured to, when the video content comprises sound information, reduce a volume of the sound information of the video content in response to playing the sound material of the target sound image material.
  23. An electronic device, comprising:
    a processor;
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to execute the instructions to implement the following steps:
    in response to a first operation instruction generated by triggering a designated control, displaying a sound image material list, the sound image material list comprising at least one sound image material, wherein the sound image material comprises an image material and a sound material associated with the image material;
    in response to a selection operation in the sound image material list, determining a target sound image material from the at least one sound image material;
    displaying the image material of the target sound image material on video content to be processed, and playing the sound material of the target sound image material.
  24. A computer-readable storage medium, wherein, when instructions in the computer-readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the following steps:
    in response to a first operation instruction generated by triggering a designated control, displaying a sound image material list, the sound image material list comprising at least one sound image material, wherein the sound image material comprises an image material and a sound material associated with the image material;
    in response to a selection operation in the sound image material list, determining a target sound image material from the at least one sound image material;
    displaying the image material of the target sound image material on video content to be processed, and playing the sound material of the target sound image material.
  25. A computer program product, comprising a computer program/instructions, wherein the computer program/instructions, when executed by a processor, implement the following steps:
    in response to a first operation instruction generated by triggering a designated control, displaying a sound image material list, the sound image material list comprising at least one sound image material, wherein the sound image material comprises an image material and a sound material associated with the image material;
    in response to a selection operation in the sound image material list, determining a target sound image material from the at least one sound image material;
    displaying the image material of the target sound image material on video content to be processed, and playing the sound material of the target sound image material.
PCT/CN2021/115125 2021-01-29 2021-08-27 Video processing method and video processing apparatus WO2022160699A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110130737.5 2021-01-29
CN202110130737.5A CN112764636A (zh) Video processing method and apparatus, electronic device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2022160699A1 (zh)

Family

ID=75704092

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/115125 WO2022160699A1 (zh) 2021-01-29 2021-08-27 Video processing method and video processing apparatus

Country Status (2)

Country Link
CN (1) CN112764636A (zh)
WO (1) WO2022160699A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112764636A (zh) 2021-01-29 2021-05-07 北京达佳互联信息技术有限公司 Video processing method and apparatus, electronic device, and computer-readable storage medium
CN113946254B (zh) * 2021-11-01 2023-10-20 北京字跳网络技术有限公司 Content display method, apparatus, device, and medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN105957123A (zh) * 2016-04-19 2016-09-21 乐视控股(北京)有限公司 Picture editing method, apparatus, and terminal device
CN106373170A (zh) * 2016-08-31 2017-02-01 北京云图微动科技有限公司 Video production method and apparatus
WO2017106960A1 (en) * 2015-12-24 2017-06-29 Mydub Media Corporation Methods, apparatus and computer-readable media for customized media production and templates therefor
CN111899155A (zh) * 2020-06-29 2020-11-06 腾讯科技(深圳)有限公司 Video processing method and apparatus, computer device, and storage medium
CN112764636A (zh) * 2021-01-29 2021-05-07 北京达佳互联信息技术有限公司 Video processing method and apparatus, electronic device, and computer-readable storage medium

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN106804005B (zh) * 2017-03-27 2019-05-17 维沃移动通信有限公司 Video production method and mobile terminal
CN112153307A (zh) * 2020-08-28 2020-12-29 北京达佳互联信息技术有限公司 Method and apparatus for adding lyrics to a short video, electronic device, and storage medium
CN112087657B (zh) * 2020-09-21 2024-02-09 腾讯科技(深圳)有限公司 Data processing method and apparatus


Also Published As

Publication number Publication date
CN112764636A (zh) 2021-05-07


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21922294

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the EP bulletin as the address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.11.2023)