CN112764636A - Video processing method, video processing device, electronic equipment and computer-readable storage medium - Google Patents


Info

Publication number
CN112764636A
CN112764636A (application CN202110130737.5A)
Authority
CN
China
Prior art keywords
sound
video
image
image material
materials
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110130737.5A
Other languages
Chinese (zh)
Inventor
汪谷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202110130737.5A priority Critical patent/CN112764636A/en
Publication of CN112764636A publication Critical patent/CN112764636A/en
Priority to PCT/CN2021/115125 priority patent/WO2022160699A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/165Management of the audio stream, e.g. setting of volume, audio stream path

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The present disclosure relates to a video processing method, including: receiving a first operation instruction that triggers a designated control and displaying a sound image material list, where the sound image material list includes at least one sound image material and each sound image material includes an image material and a sound material associated with the image material; determining a target sound image material from the at least one sound image material in response to a selection operation in the sound image material list; and displaying the image material of the target sound image material on the video content to be processed while playing the sound material of the target sound image material. This solves the problem in the related art that the ways of using stickers in short video applications are relatively limited.

Description

Video processing method, video processing device, electronic equipment and computer-readable storage medium
Technical Field
The present disclosure relates to the field of computers, and in particular, to a video processing method, apparatus, electronic device, and computer-readable storage medium.
Background
In the related art, video content may be captured with a video production tool (e.g., a short video application) and then processed in various ways; adding a sticker to a video is a common one, where a sticker is an image element displayed on a layer above the video content. After a sticker is added to a video, its displayed position or size can be adjusted, making the video more engaging. However, the available sticker interactions are limited to adding a sticker, deleting a sticker, and adjusting a sticker's position and size, so the ways of using stickers remain limited.
Disclosure of Invention
The present disclosure provides a video processing method, apparatus, electronic device, and computer-readable storage medium to at least solve the problem in the related art that the ways of using stickers in short video applications are relatively limited. The technical solution of the disclosure is as follows:
According to a first aspect of the embodiments of the present disclosure, there is provided a video processing method, including: receiving a first operation instruction that triggers a designated control and displaying a sound image material list, where the sound image material list includes at least one sound image material and each sound image material includes an image material and a sound material associated with the image material; determining a target sound image material from the at least one sound image material in response to a selection operation in the sound image material list; and displaying the image material of the target sound image material on the video content to be processed while playing the sound material of the target sound image material.
As an alternative embodiment, the identifier of the target sound image material in the sound image material list is displayed as an editing control.
As an alternative embodiment, the method further includes: receiving a second operation instruction that triggers the editing control and displaying a sound editing interface of the target sound image material; and editing the sound material of the target sound image material in the sound editing interface.
As an alternative embodiment, the sound editing interface includes a sound selection control and a recording control, and editing the sound material of the target sound image material in the sound editing interface includes: when the sound selection control is triggered, displaying a preset list of candidate sounds and determining a target sound from the list according to a selection operation, where the target sound is used to replace the sound material of the target sound image material; and when the recording control is triggered, displaying a recording interface and obtaining a recorded sound through the recording interface, where the recorded sound is used to replace the sound material of the target sound image material.
As an alternative embodiment, after determining the target sound image material from the at least one sound image material in response to the selection operation in the sound image material list, the method further includes: displaying a time editing interface of the target sound image material, where the time editing interface includes a time axis of the video content and a time window on the time axis, and the time window is used to adjust the start and stop times at which the target sound image material is displayed on the video content.
As an alternative embodiment, the time window includes a start control on its left side and an end control on its right side, and the method further includes: receiving a first move operation of the start control on the time axis and adjusting the start time of the target sound image material on the time axis according to the first move operation; and receiving a second move operation of the end control on the time axis and adjusting the stop time of the target sound image material on the time axis according to the second move operation.
As an alternative embodiment, a waveform representing the sound material is displayed in the time window, and when the end control is moved to a position whose distance from the tail of the waveform is less than a preset distance, the end control automatically snaps to the tail of the waveform.
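The snapping behavior above can be sketched as follows. This is an illustrative sketch, not code from the patent; the function name, the pixel units, and the threshold value are all assumptions.

```python
# Hypothetical sketch: when the end control is dragged to within a preset
# distance of the waveform's tail, it snaps exactly onto the tail.
SNAP_DISTANCE = 10.0  # preset distance in pixels (assumed value)

def place_end_control(drag_x: float, waveform_tail_x: float,
                      snap_distance: float = SNAP_DISTANCE) -> float:
    """Return the final x position of the end control after a drag."""
    if abs(drag_x - waveform_tail_x) < snap_distance:
        return waveform_tail_x  # close enough: snap to the waveform tail
    return drag_x               # otherwise keep the dragged position

print(place_end_control(195.0, 200.0))  # within 10 px, snaps to 200.0
print(place_end_control(150.0, 200.0))  # too far, stays at 150.0
```

The same check applies on both sides of the tail, so overshooting the waveform end by a few pixels also snaps back.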
As an alternative embodiment, the start and stop times of the sound material in a sound image material are the same as the start and stop times of the image material in that sound image material; or the start time of the sound material is the same as the start time of the image material, and the stop time of the sound material is the sound material's own end time.
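The two timing policies above can be expressed as a small function. This is a sketch under assumed names; the patent does not specify an API.

```python
# Hypothetical sketch of the two timing policies for a sound material,
# given the image material's display interval on the video timeline.
def sound_interval(image_start: float, image_end: float,
                   sound_duration: float, follow_image: bool) -> tuple:
    if follow_image:
        # Policy 1: the sound's start/stop equal the image material's.
        return (image_start, image_end)
    # Policy 2: same start as the image, but the sound ends when the
    # sound clip itself ends.
    return (image_start, image_start + sound_duration)

print(sound_interval(2.0, 8.0, 3.5, follow_image=True))   # (2.0, 8.0)
print(sound_interval(2.0, 8.0, 3.5, follow_image=False))  # (2.0, 5.5)
```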
As an alternative embodiment, after displaying the image material of the target sound image material on the video content to be processed and playing the sound material of the target sound image material, the method further includes: receiving a delete operation on the target sound image material; and deleting the image material and the sound material of the target sound image material from the video content.
As an alternative embodiment, in the case that the video content includes its own sound information, the method further includes: reducing the volume of the sound information of the video content while the sound material of the target sound image material is being played.
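This volume reduction is commonly called "ducking"; a minimal sketch follows. The class, the callback names, and the attenuation factor are assumptions for illustration only.

```python
# Hypothetical sketch: while the sticker's sound material plays, the
# video's own audio is attenuated, then restored when the sound ends.
DUCK_FACTOR = 0.3  # assumed attenuation applied during sticker playback

class VideoAudio:
    def __init__(self, base_volume: float = 1.0):
        self.base_volume = base_volume
        self.volume = base_volume

    def on_sticker_sound_start(self) -> None:
        # Reduce the video's own volume while the sticker sound plays.
        self.volume = self.base_volume * DUCK_FACTOR

    def on_sticker_sound_end(self) -> None:
        # Restore the original volume afterwards.
        self.volume = self.base_volume

audio = VideoAudio()
audio.on_sticker_sound_start()
print(audio.volume)  # ducked
audio.on_sticker_sound_end()
print(audio.volume)  # restored
```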
According to a second aspect of the embodiments of the present disclosure, there is provided a video processing apparatus, including: a receiving unit configured to receive a first operation instruction that triggers a designated control and display a sound image material list, where the sound image material list includes at least one sound image material and each sound image material includes an image material and a sound material associated with the image material; a determination unit configured to determine a target sound image material from the at least one sound image material in response to a selection operation in the sound image material list; and a display unit configured to display the image material of the target sound image material on the video content to be processed and play the sound material of the target sound image material.
As an alternative embodiment, the apparatus further includes: an editing control display unit configured to display the identifier of the target sound image material in the sound image material list as an editing control.
As an alternative embodiment, the apparatus further includes: an editing interface display unit configured to receive a second operation instruction that triggers the editing control and display a sound editing interface of the target sound image material; and a material determination unit configured to edit the sound material of the target sound image material in the sound editing interface.
As an alternative embodiment, the sound editing interface includes a sound selection control and a recording control, and the material determination unit includes: a sound selection unit configured to display a preset list of candidate sounds when the sound selection control is triggered and determine a target sound from the list according to a selection operation, where the target sound is used to replace the sound material of the target sound image material; and a sound recording unit configured to display a recording interface when the recording control is triggered and obtain a recorded sound through the recording interface, where the recorded sound is used to replace the sound material of the target sound image material.
As an alternative embodiment, the apparatus further includes: a time editing unit configured to display a time editing interface of the target sound image material after the target sound image material is determined from the at least one sound image material in response to the selection operation in the sound image material list, where the time editing interface includes a time axis of the video content and a time window on the time axis, and the time window is used to adjust the start and stop times at which the target sound image material is displayed on the video content.
As an alternative embodiment, the time window includes a start control on its left side and an end control on its right side, and the apparatus further includes: a first adjusting unit configured to receive a first move operation of the start control on the time axis and adjust the start time of the target sound image material on the time axis according to the first move operation; and a second adjusting unit configured to receive a second move operation of the end control on the time axis and adjust the stop time of the target sound image material on the time axis according to the second move operation.
As an alternative embodiment, a waveform representing the sound material is displayed in the time window, and when the end control is moved to a position whose distance from the tail of the waveform is less than a preset distance, the end control automatically snaps to the tail of the waveform.
As an alternative embodiment, the start and stop times of the sound material in a sound image material are the same as the start and stop times of the image material in that sound image material; or the start time of the sound material is the same as the start time of the image material, and the stop time of the sound material is the sound material's own end time.
As an alternative embodiment, the apparatus further includes: a deletion operation receiving unit configured to receive a delete operation on the target sound image material after the image material of the target sound image material is displayed on the video content to be processed and the sound material of the target sound image material is played; and a deleting unit configured to delete the image material and the sound material of the target sound image material from the video content.
As an alternative embodiment, the apparatus further includes: a volume adjustment unit configured to, in the case that the video content includes its own sound information, reduce the volume of the sound information of the video content while the sound material of the target sound image material is being played.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to execute the instructions to implement the video processing method as described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions of the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the video processing method as described above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising computer programs/instructions which, when executed by a processor, implement the video processing method described above.
The technical solution provided by the embodiments of the present disclosure brings at least the following beneficial effects: a first operation instruction that triggers a designated control is received and a sound image material list is displayed, where the sound image material list includes at least one sound image material and each sound image material includes an image material and a sound material associated with the image material; a target sound image material is determined from the at least one sound image material in response to a selection operation in the sound image material list; and the image material of the target sound image material is displayed on the video content to be processed while the sound material of the target sound image material is played. By combining sound effects with stickers (image materials), this solution lets a user add a sound effect while selecting a sticker, turning the sticker from a silent image element into a multimedia element containing sound. This enriches sticker content, adds a new way of using stickers, solves the problem in the related art that the ways of using stickers in short video applications are limited, and makes videos more engaging.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a flow diagram illustrating a video processing method according to an example embodiment.
FIG. 2 is a diagram illustrating selection of a sound image material according to an exemplary embodiment.
FIG. 3 is a diagram illustrating a selected sound image material according to an exemplary embodiment.
Fig. 4 is a diagram illustrating a list of candidate sounds according to an exemplary embodiment.
Fig. 5 is a schematic diagram illustrating recording a sound material according to an exemplary embodiment.
FIG. 6 is a schematic diagram illustrating a recording process according to an example embodiment.
FIG. 7 is a diagram illustrating a time window in accordance with an exemplary embodiment.
Fig. 8 is a block diagram illustrating a video processing device according to an example embodiment.
Fig. 9 is a block diagram illustrating an electronic device 800 for performing the above-described video processing method according to an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Example 1
Fig. 1 is a flow diagram illustrating a video processing method according to an exemplary embodiment; the method may be used in a short video application and, as shown in fig. 1, includes the following steps.
In step S11, a first operation instruction that triggers a designated control is received, and a sound image material list is displayed, where the sound image material list includes at least one sound image material and each sound image material includes an image material and a sound material associated with the image material.
Specifically, the designated control may be a sound sticker control; when the sound sticker control is triggered, a sound image material list including a plurality of sound image materials is displayed on the current interface.
An image material is preset image content that can be superimposed on the video for display; its displayed position and size on the video can be adjusted. A sound material may be sound effect information, such as a specially recorded sound effect or a sound effect clipped from a film or television program. A sound image material in this embodiment includes an image material and a sound material that are associated with each other; that is, when the sound image material is added to the video, both its image material and its sound material are added to the video.
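The association described above can be sketched as a simple data model. This is an illustrative sketch, not the patent's implementation; the class and field names are assumptions.

```python
# Hypothetical sketch: a "sound image material" bundles an image material
# with an associated sound material, so adding it to a video adds both.
from dataclasses import dataclass, field

@dataclass
class SoundImageMaterial:
    image_id: str   # preset image content overlaid on the video
    sound_id: str   # sound effect clip associated with that image

@dataclass
class Video:
    overlays: list = field(default_factory=list)  # image materials shown
    sounds: list = field(default_factory=list)    # sound materials to play

    def add_material(self, material: SoundImageMaterial) -> None:
        # The image and sound are associated: adding the material
        # adds BOTH to the video together.
        self.overlays.append(material.image_id)
        self.sounds.append(material.sound_id)

video = Video()
video.add_material(SoundImageMaterial("cat_sticker", "meow_effect"))
print(video.overlays, video.sounds)  # ['cat_sticker'] ['meow_effect']
```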
Fig. 2 is a schematic diagram illustrating selection of a sound image material according to an exemplary embodiment. In an alternative embodiment, with reference to fig. 2, after a short video is captured, the short video application provides a sound sticker control ("voiced" in fig. 2) that, when triggered by the user, displays a sound sticker window beneath the video showing thumbnails of multiple sound image materials.
In step S13, in response to a selection operation in the sound image material list, a target sound image material is determined from the at least one sound image material.
On a mobile terminal, the user can select any one of the sound image materials by a touch operation. In the example of fig. 2, the user has selected the second sound image material in the first row.
In step S14, the image material of the target sound image material is displayed on the video content to be processed, and the sound material of the target sound image material is played.
When a sound image material is selected, its image material is displayed on the video content and its sound material is played, allowing the user to preview the sound image material. Still referring to fig. 2, the selected target sound image material is displayed over the video.
While the target sound image material is displayed on the video, its image material may be adjusted, for example its position, size, and orientation on the video. It should be noted that if the sound material of the target sound image material needs to be played again, the thumbnail of the target sound image material can be clicked again, or long-pressed, to replay the sound material.
In this method, a first operation instruction that triggers a designated control is received and a sound image material list is displayed, where the sound image material list includes at least one sound image material and each sound image material includes an image material and a sound material associated with the image material; a target sound image material is determined from the at least one sound image material in response to a selection operation in the sound image material list; and the image material of the target sound image material is displayed on the video content to be processed while the sound material of the target sound image material is played. By combining sound effects with stickers (image materials), this scheme lets a user add a sound effect while selecting a sticker, turning the sticker from a silent image element into a multimedia element containing sound. This enriches sticker content, adds a new way of using stickers, solves the problem in the related art that the ways of using stickers in short video applications are limited, and makes videos more engaging.
As an alternative embodiment, the method further includes: displaying the identifier of the target sound image material in the sound image material list as an editing control.
Specifically, the editing control is a control for editing the sound of the target sound image material. The sound material of the target sound image material used in the current video content can be edited in the sound editing interface. Editing the sound material may include: replacing the sound material, adjusting its playback speed, applying voice-changing effects, and the like.
As an alternative embodiment, the method further includes: receiving a second operation instruction that triggers the editing control and displaying a sound editing interface of the target sound image material; and editing the sound material of the target sound image material in the sound editing interface.
Fig. 3 is a diagram illustrating a selected sound image material according to an exemplary embodiment. In an alternative embodiment, with reference to fig. 3, after the second sound image material in the first row is selected, its thumbnail changes to an editing control, and clicking the editing control enters the sound editing interface.
It should be noted that in the sound editing interface, changes to the sound material take effect only in the current video content; that is, the sound material associated with the target sound image material by default is not changed.
With this scheme, in addition to sound stickers themselves, editing of a sticker's sound effect is also realized, which greatly enriches the user's choices.
As an alternative embodiment, the sound editing interface includes a sound selection control and a recording control, and editing the sound material of the target sound image material in the sound editing interface includes: when the sound selection control is triggered, displaying a preset list of candidate sounds and determining a target sound from the list according to a selection operation, where the target sound is used to replace the sound material of the target sound image material; and when the recording control is triggered, displaying a recording interface and obtaining a recorded sound through the recording interface, where the recorded sound is used to replace the sound material of the target sound image material.
The above scheme provides two ways of editing the sound material, which are described separately below.
In the first way, the user is provided with a list of candidate sounds that are allowed to be selected. Fig. 4 is a schematic diagram illustrating a list of candidate sounds according to an exemplary embodiment. In an alternative embodiment, with reference to fig. 4, when the user selects "sound effects library", the list of candidate sounds is displayed; the list contains the sound effects allowed to be selected, and the user can select one of them as the sound material of the current target sound image material.
In the second way, the user is provided with the ability to record a sound material. Fig. 5 is a schematic diagram illustrating recording a sound material according to an exemplary embodiment. In an alternative embodiment, with reference to fig. 5, when the user selects "recording", the interface switches to a recording interface that provides a recording control and displays the maximum recording duration; the user can record by long-pressing the recording control, and during recording the interface may be as shown in fig. 6. After recording finishes, the recording can be used as the sound material of the current target sound image material.
It should be noted that in the second way, a recording interface appears after the control is clicked, and the recording duration corresponding to each sound image material has a certain limit, which may differ between materials. The user is free to choose between a sound effect from the sound effects library and a self-recorded sound. The original sound effect can also be restored through a "restore default" button, and the operation is finally confirmed through the check mark on the panel.
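The editing flow described above can be sketched as follows. This is a minimal illustration under assumed names and limits; it is not the patent's implementation, and the change applies only to the current video (the material's default sound is untouched).

```python
# Hypothetical sketch of the two editing paths: pick a replacement sound
# from the library, or record a new clip clamped to a per-material limit.
class SoundEditor:
    def __init__(self, default_sound: str, max_record_seconds: float):
        self.default_sound = default_sound       # the material's bound sound
        self.current_sound = default_sound       # sound used in THIS video
        self.max_record_seconds = max_record_seconds

    def pick_from_library(self, sound_id: str) -> None:
        self.current_sound = sound_id            # replace for this video only

    def record(self, seconds: float) -> float:
        # Recording stops at the per-material duration limit.
        used = min(seconds, self.max_record_seconds)
        self.current_sound = "recording"
        return used

    def restore_default(self) -> None:
        self.current_sound = self.default_sound  # "restore default" button

editor = SoundEditor("meow_effect", max_record_seconds=5.0)
editor.pick_from_library("bark_effect")
print(editor.current_sound)   # bark_effect
print(editor.record(9.0))     # 5.0 (clamped to the limit)
editor.restore_default()
print(editor.current_sound)   # meow_effect
```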
Through the above scheme, sound stickers are implemented while also allowing the user to bind a custom sound effect to a sticker, which greatly enriches the user's choices.
As an alternative embodiment, after determining the target sound image material from the at least one sound image material in response to the selection operation in the sound image material list, the method further comprises: displaying a time editing interface of the target sound image material, wherein the time editing interface comprises: a time axis of the video content and a time window on the time axis, wherein the time window is used to adjust the start and end times at which the target sound image material is displayed on the video content.
Specifically, in the above scheme, the time window is displayed on the time axis and indicates the start and end times at which the target sound image material is displayed in the video; the interval between the indicated start time and end time is the period during which the target sound image material is displayed in the video. This period can be adjusted by adjusting the time window.
Fig. 7 is a schematic diagram of a time window according to an exemplary embodiment. In an alternative embodiment, with reference to fig. 7, a time axis and a time window above the time axis are displayed below the video browsing area. The time axis may be formed from frames of the video at specified time points (not shown in the figure), and a sound-wave indicator is displayed on the time window to indicate that the material currently being adjusted is a sound image material. The vertical line on the time window indicates the position currently played in the browsing area.
According to this scheme, providing the time axis of the video and the time window of the target sound image material makes the display of the target sound image material in the video controllable, offers the user more choices, and improves the diversity of the video.
As an alternative embodiment, the time window includes a start control on the left side and an end control on the right side, and the method further includes: receiving a first moving operation of the starting control on a time axis, and adjusting the starting time of the target audio-video material on the time axis according to the first moving operation; and receiving a second movement operation of the termination control on the time axis, and adjusting the termination time of the target audio-video material on the time axis according to the second movement operation.
In an alternative embodiment, still referring to fig. 7, the left side and the right side of the time window are each provided with a handle: the start control and the end control, respectively. The start time at which the target sound image material is displayed in the video content can be adjusted by moving the start control along the time axis, and the end time can be adjusted by moving the end control along the time axis. It should be noted that the start control must remain to the left of the end control, because the start time at which the target sound image material is displayed in the video cannot be later than the end time.
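The handle constraints described above can be sketched in a few lines (function names, the tuple representation, and the clamping choices are assumptions for illustration, not part of the disclosure): the start control is clamped so it can never cross the end control, and both stay within the timeline.

```python
def move_start_control(start: float, end: float, new_start: float) -> tuple:
    """Move the left (start) handle: clamp to the timeline origin and
    forbid crossing the end handle, since the display start time cannot
    be later than the display end time."""
    return (max(0.0, min(new_start, end)), end)

def move_end_control(start: float, end: float, new_end: float,
                     timeline_end: float) -> tuple:
    """Move the right (end) handle: clamp to the end of the video and
    forbid crossing the start handle."""
    return (start, min(timeline_end, max(new_end, start)))

# Dragging the start handle past the end handle stops at the end handle.
print(move_start_control(1.0, 4.0, 5.0))      # -> (4.0, 4.0)
# Dragging the end handle past the video's end stops at the video's end.
print(move_end_control(1.0, 4.0, 9.0, 8.0))   # -> (1.0, 8.0)
```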
This scheme adjusts the start time and the end time at which the target sound image material is displayed in the video by adjusting the time window, so that the display of the sticker in the video is more flexible and diversified.
As an alternative embodiment, a waveform diagram representing the sound material is displayed in the time window, and when the termination control is moved to a position whose distance from the tail of the waveform diagram is less than a preset distance, the termination control is automatically snapped to the tail of the waveform diagram.
In the above solution, a waveform diagram representing the sound material is displayed in the time window. The start and end positions of the waveform diagram represent the start and end times of the sound material; by default the start position of the waveform coincides with the position of the start control, and the end position of the waveform represents the time at which one playback of the sound material ends. When the termination control is moved to within the preset distance of the tail of the waveform, the control is already close to the tail; because of the limited precision of the mobile terminal device and of the user's touch, it is usually difficult to move the termination control exactly onto the tail of the waveform.
After the termination control is automatically snapped to the tail of the waveform diagram, the display duration of the image material is the same as the playback duration of the sound material; that is, the two start and end at the same time.
According to this scheme, when the termination control is moved close to the tail of the waveform diagram, it is automatically snapped onto the tail, which solves the operational inconvenience caused by limited device precision or operation precision.
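The snapping rule above amounts to a single comparison; the function name and the threshold value below are hypothetical choices for illustration only:

```python
SNAP_DISTANCE = 0.15  # seconds; hypothetical preset threshold

def snap_end_control(end_pos: float, waveform_tail: float,
                     snap_distance: float = SNAP_DISTANCE) -> float:
    """Snap the termination control onto the waveform tail when it lands
    nearby, compensating for touch-precision limits on mobile devices."""
    if abs(end_pos - waveform_tail) < snap_distance:
        return waveform_tail
    return end_pos

print(snap_end_control(4.9, 5.0))  # close enough -> snapped to 5.0
print(snap_end_control(3.0, 5.0))  # far away    -> stays at 3.0
```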
As an alternative embodiment, the start and end times of the sound material in the sound image material are the same as the start and end times of the image material in the sound image material; or, the start time of the sound material is the same as the start time of the image material, and the end time of the sound material is determined by the duration of the sound material itself.
The above scheme provides two ways of playing sound material, which are described separately below.
In the first mode, the start and end times of the sound material and of the image material are the same, i.e., both start and end at the same time. When the duration of the sound material is shorter than that of the image material, the sound material can be played continuously during the display of the image material by looping it. When the duration of the sound material is longer than that of the image material, the sound material can be truncated.
In the second mode, the sound material and the image material start at the same time, but the end time of the sound material is determined by its own duration; that is, the sound material stops after being played once, regardless of whether its duration is longer or shorter than that of the image material.
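The two playback modes can be sketched as a small scheduling function (a hypothetical model; the patent does not disclose code). Mode 1 loops the sound and truncates the final repeat so that playback covers exactly the image material's display span; mode 2 plays the sound once:

```python
def sound_segments(image_dur: float, sound_dur: float, match_image: bool):
    """Return (offset, length) pairs at which the sound material plays,
    relative to the image material's display start time."""
    if not match_image:
        # Mode 2: play exactly once, regardless of the image span.
        return [(0.0, sound_dur)]
    # Mode 1: loop the sound, truncating the final repeat, so that
    # playback covers exactly the image material's display span.
    segments, t = [], 0.0
    while t < image_dur:
        segments.append((t, min(sound_dur, image_dur - t)))
        t += sound_dur
    return segments

# A 2 s sound under a 5 s sticker: two full loops plus a 1 s truncated tail.
print(sound_segments(5.0, 2.0, match_image=True))
# -> [(0.0, 2.0), (2.0, 2.0), (4.0, 1.0)]
```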
As an alternative embodiment, after displaying the image material in the target sound image material on the video content to be processed and playing the sound material in the target sound image material, the method further comprises: receiving a deletion operation on the target sound image material; and deleting the image material and the sound material of the target sound image material from the video content.
In the above scheme, a target sound image material that has been added to the video may be deleted; when a deletion operation is received, the image material and the sound material in the target sound image material are deleted at the same time.
As an alternative embodiment, in a case where the video content includes sound information, the method further comprises: reducing the volume of the sound information of the video content while the sound material in the target sound image material is played.
In the above scheme, the video itself includes existing sound information, so when playback reaches the target sound image material, the volume of the video's own sound information can be reduced in order to highlight the sound material in the target sound image material.
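A minimal sketch of such volume "ducking" (the attenuation factor and all names are assumed for illustration) might compute the gain applied to the original track at a playback time t:

```python
def duck_original_volume(t: float, sticker_start: float, sticker_end: float,
                         base_volume: float = 1.0,
                         duck_factor: float = 0.3) -> float:
    """Gain for the video's own audio track at playback time t: lowered
    while the sticker's sound material is playing so it stands out."""
    if sticker_start <= t < sticker_end:
        return base_volume * duck_factor
    return base_volume
```

In a real editor the transition would normally be ramped over a few milliseconds rather than stepped, to avoid audible clicks.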
Example 2
Fig. 8 is a block diagram illustrating a video processing device according to an exemplary embodiment. Referring to fig. 8, the apparatus includes a receiving unit 81, a determination module 82 and a presentation unit 83.
The receiving unit 81 is configured to receive a first operation instruction generated by triggering a specified control, and to display a sound image material list including at least one sound image material, wherein the sound image material includes an image material and a sound material associated with the image material.
The determination module 82 is configured to determine a target sound image material from the at least one sound image material in response to a selection operation in the sound image material list.
The presentation unit 83 is configured to display the image material in the target sound image material on the video content to be processed, and to play the sound material in the target sound image material.
As an alternative embodiment, the apparatus further comprises: an editing control display unit configured to display the identification of the target sound image material in the sound image material list as an editing control.
As an alternative embodiment, the apparatus further comprises: an editing interface display unit configured to receive a second operation instruction generated by triggering the editing control, and to display a sound editing interface of the target sound image material; and a material determination unit configured to edit the sound material in the target sound image material in the sound editing interface.
As an alternative embodiment, the sound editing interface comprises a sound selection control and a recording control, and the material determination unit includes: a sound selection unit configured to display a preset list of candidate sounds when the sound selection control is triggered, and to determine a target sound from the list according to a selection operation, wherein the target sound is used to replace the sound material in the target sound image material; and a sound recording unit configured to display a recording interface when the recording control is triggered, and to obtain a recorded sound through recording on the recording interface, wherein the recorded sound is used to replace the sound material in the target sound image material.
As an alternative embodiment, the apparatus further comprises: a time editing unit configured to display a time editing interface of the target sound image material after the target sound image material is determined from the at least one sound image material in response to a selection operation in the sound image material list, wherein the time editing interface includes: a time axis of the video content and a time window on the time axis, wherein the time window is used to adjust the start and end times at which the target sound image material is displayed on the video content.
As an alternative embodiment, the time window includes a start control on the left side and an end control on the right side, and the apparatus further includes: the first adjusting unit is configured to receive a first moving operation of the starting control on a time axis and adjust the starting time of the target audio-video material on the time axis according to the first moving operation; and the second adjusting unit is configured to receive a second moving operation of the termination control on the time axis and adjust the termination time of the target audio-video material on the time axis according to the second moving operation.
As an alternative embodiment, a waveform diagram representing the sound material is displayed in the time window, and when the termination control is moved to a position whose distance from the tail of the waveform diagram is less than a preset distance, the termination control is automatically snapped to the tail of the waveform diagram.
As an alternative embodiment, the start and end times of the sound material in the sound image material are the same as the start and end times of the image material in the sound image material; or, the start time of the sound material is the same as the start time of the image material, and the end time of the sound material is determined by the duration of the sound material itself.
As an alternative embodiment, the apparatus further comprises: a deletion operation receiving unit configured to receive a deletion operation on the target sound image material after the image material in the target sound image material is displayed on the video content to be processed and the sound material in the target sound image material is played; and a deletion unit configured to delete the image material and the sound material of the target sound image material from the video content.
As an alternative embodiment, the apparatus further comprises: a volume adjustment unit configured to, in a case where the video content includes sound information, reduce the volume of the sound information of the video content while the sound material in the target sound image material is played.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Example 3
The present application further provides an electronic device, including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the video processing method of embodiment 1.
Fig. 9 is a block diagram illustrating an electronic device 800 for performing the above-described video processing method according to an example embodiment. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 9, electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the electronic device 800 to perform the above-described method is also provided. Alternatively, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Example 4
The present application also provides a computer-readable storage medium, wherein instructions of the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the video processing method according to embodiment 1.
Example 5
The present application also provides a computer program product comprising a computer program/instructions which, when executed by a processor, implements the video processing method of embodiment 1.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A video processing method, comprising:
receiving a first operation instruction generated by triggering a specified control, and displaying a sound image material list, wherein the sound image material list comprises at least one sound image material, and the sound image material comprises an image material and a sound material associated with the image material;
determining a target sound image material from the at least one sound image material in response to a selection operation in the sound image material list;
and displaying the image material in the target sound image material on the video content to be processed, and playing the sound material in the target sound image material.
2. The video processing method of claim 1, wherein the method further comprises:
and displaying the identification of the target sound image material in the sound image material list as an editing control.
3. The video processing method of claim 2, wherein the method further comprises:
receiving a second operation instruction generated by triggering the editing control, and displaying a sound editing interface of the target sound image material;
and editing the sound material in the target sound image material in the sound editing interface.
4. The video processing method according to claim 3, wherein the sound editing interface comprises a sound selection control and a recording control, and the step of editing the sound material in the target sound image material in the sound editing interface comprises:
in a case where the sound selection control is triggered, displaying a preset list of candidate sounds, and determining a target sound from the list according to a selection operation, wherein the target sound is used to replace the sound material in the target sound image material;
and in a case where the recording control is triggered, displaying a recording interface, and obtaining a recorded sound through recording on the recording interface, wherein the recorded sound is used to replace the sound material in the target sound image material.
5. The video processing method of claim 1, wherein, after determining a target sound image material from the at least one sound image material in response to a selection operation in the sound image material list, the method further comprises:
displaying a time editing interface of the target sound image material, wherein the time editing interface comprises: a time axis of the video content and a time window on the time axis, wherein the time window is used to adjust the start and end times at which the target sound image material is displayed on the video content.
6. The video processing method according to claim 1, wherein in the case where the video content includes sound information, the method further comprises:
and reducing the volume of the sound information of the video content when the sound material in the target sound image material is played.
7. A video processing apparatus, comprising:
a receiving unit configured to receive a first operation instruction generated by triggering a specified control, and to display a sound image material list, wherein the sound image material list comprises at least one sound image material, and the sound image material comprises an image material and a sound material associated with the image material;
a determination unit configured to determine a target sound image material from the at least one sound image material in response to a selection operation in the sound image material list;
and the display unit is configured to display image materials in the target audio-video materials on the video content to be processed and play sound materials in the target audio-video materials.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video processing method of any of claims 1 to 6.
9. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the video processing method of any of claims 1 to 6.
10. A computer program product comprising computer programs/instructions, characterized in that the computer programs/instructions, when executed by a processor, implement the video processing method of any of claims 1 to 6.
CN202110130737.5A 2021-01-29 2021-01-29 Video processing method, video processing device, electronic equipment and computer-readable storage medium Pending CN112764636A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110130737.5A CN112764636A (en) 2021-01-29 2021-01-29 Video processing method, video processing device, electronic equipment and computer-readable storage medium
PCT/CN2021/115125 WO2022160699A1 (en) 2021-01-29 2021-08-27 Video processing method and video processing apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110130737.5A CN112764636A (en) 2021-01-29 2021-01-29 Video processing method, video processing device, electronic equipment and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN112764636A true CN112764636A (en) 2021-05-07

Family

ID=75704092

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110130737.5A Pending CN112764636A (en) 2021-01-29 2021-01-29 Video processing method, video processing device, electronic equipment and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN112764636A (en)
WO (1) WO2022160699A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113946254A (en) * 2021-11-01 2022-01-18 北京字跳网络技术有限公司 Content display method, device, equipment and medium
WO2022160699A1 (en) * 2021-01-29 2022-08-04 北京达佳互联信息技术有限公司 Video processing method and video processing apparatus

Citations (3)

Publication number Priority date Publication date Assignee Title
CN106804005A (en) * 2017-03-27 2017-06-06 维沃移动通信有限公司 The preparation method and mobile terminal of a kind of video
CN112087657A (en) * 2020-09-21 2020-12-15 腾讯科技(深圳)有限公司 Data processing method and device
CN112153307A (en) * 2020-08-28 2020-12-29 北京达佳互联信息技术有限公司 Method and device for adding lyrics in short video, electronic equipment and storage medium

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CA2916295A1 (en) * 2015-12-24 2017-06-24 Mydub Media Corporation Method and apparatus for mixing media tracks
CN105957123A (en) * 2016-04-19 2016-09-21 乐视控股(北京)有限公司 Picture editing method, picture editing device and terminal equipment
CN106373170A (en) * 2016-08-31 2017-02-01 北京云图微动科技有限公司 Video making method and video making device
CN111899155B (en) * 2020-06-29 2024-04-26 腾讯科技(深圳)有限公司 Video processing method, device, computer equipment and storage medium
CN112764636A (en) * 2021-01-29 2021-05-07 北京达佳互联信息技术有限公司 Video processing method, video processing device, electronic equipment and computer-readable storage medium

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN106804005A (en) * 2017-03-27 2017-06-06 维沃移动通信有限公司 The preparation method and mobile terminal of a kind of video
CN112153307A (en) * 2020-08-28 2020-12-29 北京达佳互联信息技术有限公司 Method and device for adding lyrics in short video, electronic equipment and storage medium
CN112087657A (en) * 2020-09-21 2020-12-15 腾讯科技(深圳)有限公司 Data processing method and device

Cited By (3)

Publication number Priority date Publication date Assignee Title
WO2022160699A1 (en) * 2021-01-29 2022-08-04 北京达佳互联信息技术有限公司 Video processing method and video processing apparatus
CN113946254A (en) * 2021-11-01 2022-01-18 北京字跳网络技术有限公司 Content display method, device, equipment and medium
CN113946254B (en) * 2021-11-01 2023-10-20 北京字跳网络技术有限公司 Content display method, device, equipment and medium

Also Published As

Publication number Publication date
WO2022160699A1 (en) 2022-08-04

Similar Documents

Publication Publication Date Title
CN109151537B (en) Video processing method and device, electronic equipment and storage medium
US20170068380A1 (en) Mobile terminal and method for controlling the same
CN111770381B (en) Video editing prompting method and device and electronic equipment
CN110602394A (en) Video shooting method and device and electronic equipment
US20220291897A1 (en) Method and device for playing voice, electronic device, and storage medium
CN110636382A (en) Method and device for adding visual object in video, electronic equipment and storage medium
CN111479158B (en) Video display method and device, electronic equipment and storage medium
CN109660873B (en) Video-based interaction method, interaction device and computer-readable storage medium
CN107277628B (en) video preview display method and device
US20210266633A1 (en) Real-time voice information interactive method and apparatus, electronic device and storage medium
WO2022142871A1 (en) Video recording method and apparatus
CN110719530A (en) Video playing method and device, electronic equipment and storage medium
KR20220014278A (en) Method and device for processing video, and storage medium
WO2022160699A1 (en) Video processing method and video processing apparatus
CN113206948A (en) Image effect previewing method and device, electronic equipment and storage medium
CN113111220A (en) Video processing method, device, equipment, server and storage medium
CN111736746A (en) Multimedia resource processing method and device, electronic equipment and storage medium
CN105578051A (en) Image capturing method and image capturing apparatus
CN108984098B (en) Information display control method and device based on social software
CN107872620B (en) Video recording method and device and computer readable storage medium
CN113905192A (en) Subtitle editing method and device, electronic equipment and storage medium
CN113613082A (en) Video playing method and device, electronic equipment and storage medium
CN113157179A (en) Picture adjustment parameter adjusting method and device, electronic equipment and storage medium
CN111182362A (en) Video control processing method and device
CN110809184A (en) Video processing method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination