CN117440207A - Video processing method and device and electronic equipment - Google Patents

Video processing method and device and electronic equipment

Info

Publication number
CN117440207A
CN117440207A (application CN202311444371.4A)
Authority
CN
China
Prior art keywords
input
video
dubbing
identifier
file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311444371.4A
Other languages
Chinese (zh)
Inventor
崔伟玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202311444371.4A
Publication of CN117440207A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312: Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/439: Processing of audio elementary streams
    • H04N 21/47: End-user applications
    • H04N 21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47205: End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application discloses a video processing method, a video processing device, and electronic equipment, and belongs to the technical field of video. The method includes: receiving a first input while a video playing interface of a first video is displayed; in response to the first input, displaying at least one object identifier corresponding to a character object in the first video; receiving a second input on a first object identifier of the at least one object identifier; in response to the second input, displaying video clip information of a first character object corresponding to the first object identifier; receiving a third input; and in response to the third input, dubbing at least part of the video clips corresponding to the video clip information.

Description

Video processing method and device and electronic equipment
Technical Field
The application belongs to the technical field of video, and particularly relates to a video processing method, a video processing device, and electronic equipment.
Background
Today, when a user watches a video, only the original audio in the video can be heard. If the user wants to dub the video, a third-party application must be downloaded and used to dub it. This dubbing method requires the user to have professional dubbing skills, and the operation is difficult.
Disclosure of Invention
The embodiments of the present application aim to provide a video processing method, a video processing device, and electronic equipment, which can solve the technical problem in the related art that dubbing places high demands on the user's dubbing skills.
In a first aspect, an embodiment of the present application provides a video processing method, including:
receiving a first input under the condition of displaying a video playing interface of a first video;
responsive to the first input, displaying at least one object identification corresponding to a character object in the first video;
receiving a second input of a first object identification of the at least one object identification;
responsive to the second input, displaying video clip information of a first character object corresponding to the first object identification;
receiving a third input;
and responding to the third input, dubbing at least part of video clips corresponding to the video clip information.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including:
the first receiving module is used for receiving a first input under the condition of displaying a video playing interface of the first video;
the first display module is used for responding to the first input and displaying at least one object identifier corresponding to the character object in the first video;
A second receiving module, configured to receive a second input of a first object identifier of the at least one object identifier;
the second display module is used for responding to the second input and displaying video clip information of a first character object corresponding to the first object identification;
a third receiving module for receiving a third input;
and the dubbing module is used for dubbing at least part of the video clips corresponding to the video clip information in response to the third input.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as provided in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the method as provided in the first aspect.
In a fifth aspect, embodiments of the present application provide a chip comprising a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to execute programs or instructions to implement a method as provided in the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement a method as provided in the first aspect.
In this embodiment of the present application, a first input may be performed on a video playing interface, causing object identifiers corresponding to character objects in the video to be displayed. The user may perform an input on an object identifier to select the character object to be dubbed; after one of the character objects is selected, video clip information corresponding to that character object may be displayed, and the user may then select the video clip to be dubbed. In this way, dubbing of a character object can be achieved directly on the video playing interface, and the character and video clip to be dubbed can be located quickly through simple operations, without a third-party application or tool and without requiring the user to have professional dubbing skills.
Drawings
Fig. 1 is a schematic flow chart of a video processing method according to some embodiments of the present application;
Fig. 2 is a first interface diagram of a video processing method according to some embodiments of the present application;
Fig. 3 is a second interface diagram of a video processing method according to some embodiments of the present application;
Fig. 4 is a third interface diagram of a video processing method according to some embodiments of the present application;
Fig. 5 is a fourth interface diagram of a video processing method according to some embodiments of the present application;
Fig. 6 is a fifth interface diagram of a video processing method according to some embodiments of the present application;
Fig. 7 is a sixth interface diagram of a video processing method according to some embodiments of the present application;
Fig. 8 is a seventh interface diagram of a video processing method according to some embodiments of the present application;
Fig. 9 is an eighth interface diagram of a video processing method according to some embodiments of the present application;
Fig. 10 is a ninth interface diagram of a video processing method according to some embodiments of the present application;
Fig. 11 is a tenth interface diagram of a video processing method according to some embodiments of the present application;
Fig. 12 is a schematic structural diagram of a video processing apparatus according to some embodiments of the present application;
Fig. 13 is a schematic structural diagram of an electronic device according to another embodiment of the present application;
Fig. 14 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application fall within the scope of protection of the present application.
The terms "first," "second," and the like in the description of the present application are used for distinguishing between similar objects and not necessarily for describing a particular sequence or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. Objects identified by "first," "second," etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the specification denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
In order to solve the technical problems, the application provides a video processing method. The video processing method provided by the embodiment of the application is described in detail below by means of specific embodiments and application scenes thereof with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a video processing method according to an embodiment of the present application. As shown in fig. 1, the method may include:
s101, receiving a first input under the condition of displaying a video playing interface of a first video;
S102, responding to the first input, and displaying at least one object identifier corresponding to a character object in the first video;
in this embodiment, the video processing method is applied to the terminal device. The first video is a video played in the foreground of the terminal device. Therefore, in the case of displaying the playback interface of the first video on the terminal device, the user can make the first input at the video playback interface of the first video.
After receiving and responding to the first input of the user, at least one object identification is displayed on the video playing interface of the first video, and each object identification is used for identifying one role object in the first video.
The first input is used to trigger the display of an object identifier of at least one character object on the video playing interface. Illustratively, the first input includes, but is not limited to: a touch input performed by the user with a finger, a stylus, or another touch device; a voice command input by the user; a specific gesture input by the user; or another feasible input, which may be determined according to actual use requirements and is not limited in the embodiments of the present application. The specific gesture in some embodiments of the present application may be any one of a single-tap gesture, a swipe gesture, a drag gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture, and a double-tap gesture; the click input in some embodiments of the present application may be a single-click input, a double-click input, or any number of click inputs, and may also be a long-press input or a short-press input.
As shown in fig. 2, as an alternative embodiment, the video playing interface of the first video has a dubbing control 201 thereon. As shown in fig. 3, the first input is a click input of the dubbing control 201 on the video playing interface, after the user clicks the dubbing control, a character selection window 301 is displayed on the video playing interface of the first video, where the character selection window includes three object identifiers, that is, object identifiers of person 1, person 2, and person 3.
S103, receiving a second input of a first object identifier in the at least one object identifier;
s104, responding to the second input, and displaying video clip information of a first character object corresponding to the first object identification;
In this embodiment, after the object identifier of the at least one character object is displayed on the terminal device, the user may select any one or more of the at least one character object as the first character object to be dubbed.
After determining the first character object, the user may perform a second input on the first object identifier of the first character object, and in response to the second input, the video clip information corresponding to the first character object is displayed on the terminal device. The video clip information corresponding to the first character object may be information about the video clips in which the first character object speaks.
For example, the video clip information corresponding to the first character object may indicate the duration, start time, and end time of each video clip in which the first character object speaks.
S105, receiving a third input;
and S106, responding to the third input, and dubbing at least part of video clips corresponding to the video clip information.
In this embodiment of the present application, after the video clip information corresponding to the first character object is displayed on the video playing interface of the first video, the user may, based on the video clip information, select through a third input at least part of the video clips to be dubbed in the first video and dub the character object in those video clips, thereby generating a first dubbing file corresponding to the video clips.
The third input may include a voice input that dubs a character object in at least part of the video clips. Specifically, during dubbing, the muted video clips can be played on the terminal device, and the user can input voice according to the expressions or actions of the character object in the video clips so as to dub the character object. The terminal device can acquire the voice data of this voice input and generate the first dubbing file of the video clips according to the voice data.
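As a rough illustration of this recording step, the sketch below captures microphone audio while the caller plays the muted clip. Android's MediaRecorder is a real API, but the surrounding flow, the output format choice, and all function names here are assumptions; the patent does not specify an implementation.

```kotlin
import android.media.MediaRecorder
import java.io.File

// Hypothetical sketch: record the user's voice for the duration of a muted
// video clip and write it to outputFile as the first dubbing file.
fun startDubbingRecording(outputFile: File, onStarted: () -> Unit): MediaRecorder {
    val recorder = MediaRecorder().apply {
        setAudioSource(MediaRecorder.AudioSource.MIC)       // capture the user's voice
        setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)  // assumed container
        setAudioEncoder(MediaRecorder.AudioEncoder.AAC)     // assumed encoder
        setOutputFile(outputFile.absolutePath)
        prepare()
        start()
    }
    onStarted() // e.g. begin playing the muted video clip in sync
    return recorder // the caller stops and releases it when the clip ends
}
```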
In this embodiment of the present application, a first input may be performed on a video playing interface, causing object identifiers corresponding to character objects in the video to be displayed. The user may perform an input on an object identifier to select the character object to be dubbed; after one of the character objects is selected, video clip information corresponding to that character object may be displayed, and the user may then select the video clip to be dubbed. In this way, dubbing of a character object can be achieved directly on the video playing interface, and the character and video clip to be dubbed can be located quickly through simple operations, without a third-party application or tool and without requiring the user to have professional dubbing skills.
In some embodiments, the video clip information includes a first display identifier corresponding to a start-stop time of the first video clip, and S104 includes:
and displaying the first display identifier on a playing progress bar of the first video.
In this embodiment, the video playing interface of the first video further includes a playing progress bar, which may be displayed as a horizontal bar at the bottom or top of the video player. The playing progress bar can represent the playing progress of the video: each point on the bar represents a time point of the first video, each line segment on the bar represents a progress interval, and each progress interval represents the video clip within a certain duration range of the first video.
Because the video clip information includes the progress interval corresponding to the start and stop times of each video clip, after the first character object is determined, the start and stop times of the video clips corresponding to the first character object can be determined, and the first display identifier corresponding to those start and stop times is displayed on the playing progress bar. The first display identifier may characterize the duration ranges of the video segments in which the first character object speaks in the first video.
For example, the color of a progress interval may be adjusted on the playing progress bar: the progress interval corresponding to the start and stop times of the video segments of the first character object is marked in a color different from that of the playing progress bar, as the first display identifier. Alternatively, the transparency of the progress interval may be adjusted so that it differs from that of the playing progress bar, as the first display identifier.
As shown in fig. 4, the progress intervals corresponding to the first character object include a first interval 401, a second interval 402, and a third interval 403. The playing progress bar is originally in a first color, and the first, second, and third intervals can be marked in a second color different from that of the playing progress bar, as the first display identifier.
In this embodiment, by displaying on the playing progress bar the first display identifier corresponding to the start and stop times of the video segments of the first character object, the duration ranges of the video segments in which the first character object speaks can be displayed conveniently and accurately.
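A minimal sketch of this mapping follows; the ClipInterval and ProgressMark types and the fraction-based layout are assumptions for illustration, not the patent's implementation.

```kotlin
// Convert a character's speaking intervals (in ms) into fractional [0, 1]
// positions so a UI layer can draw them over the progress bar in a color
// different from the bar itself (the first display identifier).
data class ClipInterval(val startMs: Long, val endMs: Long)
data class ProgressMark(val startFraction: Float, val endFraction: Float, val argbColor: Int)

fun firstDisplayMarks(
    clips: List<ClipInterval>,
    videoDurationMs: Long,
    argbColor: Int,
): List<ProgressMark> = clips.map {
    ProgressMark(
        startFraction = it.startMs.toFloat() / videoDurationMs,
        endFraction = it.endMs.toFloat() / videoDurationMs,
        argbColor = argbColor,
    )
}
```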
In some embodiments, a second display identifier is displayed on the playing progress bar, and in a case where the character object corresponding to the second display identifier is different from the first character object, a display parameter of the second display identifier is different from that of the first display identifier.
In this embodiment, if there are multiple character objects available for dubbing, the second display identifier may be used to characterize the duration ranges of the video segments in which character objects other than the first character object speak in the first video, and the first display identifier and the second display identifier may be distinguished by different display parameters.
For example, the second display identifier may be displayed as a different color from the first display identifier on the play progress bar; the second display identifier may also be displayed on the playback progress bar with a different transparency than the first display identifier.
In this way, the duration ranges of the video segments in which different character objects speak can be displayed conveniently and accurately on the playing progress bar.
In some embodiments, the third input includes a first sub-input and a second sub-input, and S106 includes:
determining a second video clip corresponding to the video clip information in response to the first sub-input;
and dubbing the second video segment in response to the second sub-input.
In some embodiments, before the dubbing the second video segment, the method further comprises:
receiving a fourth input of a third display identifier corresponding to the second video segment;
in response to the fourth input, a start time or an end time of the second video segment is updated.
In the present embodiment, since the video clip information covers all the video clips in which the first character object speaks, it includes information of at least one video clip. The user may select a second video clip from the at least one video clip via the first sub-input.
After selecting the second video clip, the user may dub the second video clip through a second sub-input. Alternatively, the user may adjust at least one of the start time and the end time of the second video clip through the fourth input, so as to adjust its duration range and obtain the exact video clip to be dubbed.
After determining the video clip to be dubbed, the user may dub the first character object in that clip. Specifically, the terminal device may play the muted video clip to be dubbed; while the first character object speaks in the clip, the user may perform the second sub-input according to the actions, expressions, and scene of the first character object, and the terminal device may receive and respond to the second sub-input, thereby acquiring the voice information and processing it into a dubbing file.
Subsequently, when the dubbing file is played, the terminal device plays the acquired voice information.
Illustratively, as shown in fig. 5, based on the video clip information, the progress interval identified by the first display identifier corresponding to the first character object includes a first interval, a second interval, and a third interval on the playing progress bar. The first sub-input may be clicking on the second interval, thereby determining a video segment corresponding to the second interval as the second video segment.
As illustrated in fig. 6, the start time of the second interval is 18:10 and the end time is 30:00. The fourth input may be dragging, on the playing progress bar, a start slider representing the start time of the second video clip, or a stop slider representing its end time, so as to adjust the start time or end time and obtain the target video clip 601.
For example, as shown in fig. 7, after the video clip to be dubbed is determined, a dubbing operation control 701 may be displayed on the video playing interface of the first video. If the user long-presses the dubbing operation control, the video playing interface starts playing the muted video clip from its start time, and the user may dub the first character object while the clip plays; if the user stops pressing the dubbing operation control, playback and dubbing of the video clip are paused.
In this way, the user can flexibly select the video clip to be dubbed.
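The boundary adjustment can be sketched as two clamped updates; the ClipInterval type and helper names are illustrative assumptions:

```kotlin
// Fourth input, sketched: dragging the start or stop slider updates one
// boundary, clamped so the clip stays inside the video and keeps a
// positive length.
data class ClipInterval(val startMs: Long, val endMs: Long)

fun ClipInterval.withStart(newStartMs: Long): ClipInterval =
    copy(startMs = newStartMs.coerceIn(0L, endMs - 1))

fun ClipInterval.withEnd(newEndMs: Long, videoDurationMs: Long): ClipInterval =
    copy(endMs = newEndMs.coerceIn(startMs + 1, videoDurationMs))
```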
In some embodiments, the at least part of the video clips includes a third video clip, and after S106, the method further includes:
displaying at least one editing option;
receiving a fifth input of the first editing option;
and responding to the fifth input, and editing the first dubbing file corresponding to the third video clip according to an editing function corresponding to the first editing option, wherein the first editing option is any one of the at least one editing option.
In this embodiment, after dubbing is completed or playback of the dubbed video clip reaches its end time, at least one editing option is displayed on the terminal device. The user can perform a fifth input on the first editing option, and the terminal device, in response to the received fifth input, edits the first dubbing file accordingly.
Specifically, as shown in fig. 8, after dubbing is completed or playback of the target video clip reaches the end time, an edit box 801 is displayed on the terminal device. The edit box includes a plurality of editing options, which may include a delete option, a save option, a share option, a re-dub option, and a publish option. If the user performs the fifth input on the delete option, the terminal device deletes the first dubbing file; if on the save option, the terminal device saves the first dubbing file to a preset address; if on the re-dub option, the user reselects the first character object and the video clip to be dubbed in order to dub again; and if on the publish option, the terminal device publishes the first dubbing file and the corresponding video clip of the first video to the network.
In this embodiment, editing operations on the first dubbing file can be completed conveniently and quickly.
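A hedged sketch of this dispatch is shown below; the option names follow the description of fig. 8, but the types, stub functions, and save-path handling are assumptions:

```kotlin
import java.io.File

sealed interface EditOption {
    object Delete : EditOption
    data class Save(val targetDir: File) : EditOption // preset save address
    object Share : EditOption
    object ReDub : EditOption
    object Publish : EditOption
}

// Stubs standing in for the UI/network behavior described in the text.
fun showSharePathIdentifiers() { /* display the sharing path box (fig. 9) */ }
fun restartCharacterSelection() { /* reopen character and clip selection */ }
fun publishToNetwork(dubbing: File) { /* upload dubbing file and clip */ }

fun onFifthInput(option: EditOption, dubbingFile: File) {
    when (option) {
        EditOption.Delete -> dubbingFile.delete()
        is EditOption.Save -> dubbingFile.copyTo(File(option.targetDir, dubbingFile.name), overwrite = true)
        EditOption.Share -> showSharePathIdentifiers()
        EditOption.ReDub -> restartCharacterSelection()
        EditOption.Publish -> publishToNetwork(dubbingFile)
    }
}
```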
In some embodiments, when the first editing option is a sharing option, the editing the first dubbing file corresponding to the third video segment according to the editing function corresponding to the first editing option includes:
Displaying at least one path identifier, wherein each path identifier indicates a shared path;
receiving a sixth input of the first path identifier;
and responding to the sixth input, and sharing the third video segment and the first dubbing file corresponding to the third video segment through a sharing path corresponding to the first path identifier, wherein the first path identifier is any one of the at least one path identifier.
In this embodiment, if the user performs the fifth input on the sharing option, at least one path identifier is displayed on the terminal device, and the user may further perform the sixth input on the first path identifier, then the terminal device shares the first dubbing file and the third video clip corresponding to the first dubbing file according to the sharing path indicated by the first path identifier.
For example, as shown in fig. 9, after the user performs the fifth input on the sharing option, a sharing path box 901 is popped up on the display interface of the terminal device, where the sharing path box includes four different applications or sharing objects that can be used for sharing, that is, application 1, application 2, friend 1, and friend 2, and each application or sharing object corresponds to one sharing path. When the user performs the sixth input on the first path identifier, the terminal device shares the first dubbing file and the third video segment corresponding to the first dubbing file to the corresponding application program or the sharing object according to the sharing path indicated by the first path identifier. In addition, the first dubbing file and the whole first video can be shared.
If the shared content is the first dubbing file and the third video clip corresponding to it, then when the recipient views the shared content, the third video clip is automatically played with the first dubbing file as its dubbing.
If the shared content is the first dubbing file and the whole first video, then when the recipient views the shared content, the first video automatically jumps to the start time of the third video clip and begins playing, with the first dubbing file as the dubbing of the third video clip.
In this way, the video dubbed by the user can be shared quickly and conveniently.
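The sketch below illustrates both sides of this flow under stated assumptions: PathIdentifier, the send callback, and the start-position rule are hypothetical names, not the patent's API.

```kotlin
import java.io.File

// Sender side: each path identifier carries a label and a share action
// (an application or a contact, as in fig. 9).
data class PathIdentifier(val label: String, val send: (content: List<File>) -> Unit)

fun onSixthInput(path: PathIdentifier, clip: File, dubbing: File) =
    path.send(listOf(clip, dubbing))

// Receiver side: when the whole first video was shared, playback starts at
// the dubbed segment's start time rather than at 0.
fun startPositionMs(sharedWholeVideo: Boolean, dubbedSegmentStartMs: Long): Long =
    if (sharedWholeVideo) dubbedSegmentStartMs else 0L
```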
In some embodiments, the second dubbing file is obtained after dubbing at least a part of the video clips corresponding to the video clip information, and the method further includes:
receiving a seventh input;
displaying at least one file identification in response to the seventh input, wherein each of the file identifications indicates an audio file, wherein the audio file includes the second dubbing file;
receiving an eighth input of a first file identification, wherein the first file identification is any one of the at least one file identification;
And in response to the eighth input, playing the first video based on the audio file indicated by the first file identification.
In this embodiment, after the user completes dubbing, a second dubbing file is generated and saved. If the user chooses to view the first video again, the dubbing mode of the first video may be adjusted prior to viewing.
Specifically, the user may make a seventh input on the video playing interface of the first video, and in response to the seventh input, at least one file identifier is displayed on the display interface of the terminal device, where each file identifier indicates an audio file, and the stored second dubbing file also corresponds to one file identifier.
The user can perform an eighth input on a first file identifier among the at least one file identifier, and in response to the eighth input, the terminal device plays the first video with the audio file indicated by the first file identifier as the dubbing of the first video.
For example, as shown in fig. 10, a dubbing selection control exists on the video playing interface of the first video. The user may perform the seventh input on the dubbing selection control 1001, whereupon a dubbing selection box 1002 is displayed on the video playing interface; the box displays a plurality of file identifiers, such as the original video dubbing, video dubbing 1, video dubbing 2, video dubbing 3, and video dubbing 4. The user can perform the eighth input on any one file identifier, so that the first video is played with the audio file corresponding to that file identifier as its dubbing.
In this way, the dubbing audio used when the first video is played can be flexibly selected.
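Sketched below with assumed abstractions (FileIdentifier and VideoPlayer are not from the patent): the eighth input simply swaps the dubbing track before playback.

```kotlin
// Each file identifier indicates one audio file (original or a saved
// dubbing file); the selected file becomes the video's dubbing track.
data class FileIdentifier(val label: String, val audioPath: String?)

class VideoPlayer {
    var dubbingTrackPath: String? = null // null = original sound
    fun play(videoPath: String) {
        // Decode the video; where a dubbing track is set, mute the original
        // dialogue and mix in the dubbing audio instead.
    }
}

fun onEighthInput(player: VideoPlayer, selected: FileIdentifier, videoPath: String) {
    player.dubbingTrackPath = selected.audioPath
    player.play(videoPath)
}
```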
In some embodiments, after displaying the at least one file identifier, the method further includes:
receiving a ninth input of a second file identifier in the at least one file identifier, wherein the audio file indicated by the second file identifier comprises dubbing files of at least two character objects;
displaying at least one character identification in response to the ninth input, wherein each of the character identifications indicates a dubbing file of one of the character objects;
receiving a tenth input of a first character identification, wherein the first character identification is any one of the at least one character identification;
and responding to the tenth input, and playing the first video based on the dubbing file of the character object corresponding to the first character identifier.
In this way, if the second dubbing file includes dubbing of multiple character objects, the user can further select, when the second dubbing file is used as the dubbing of the first video, which character objects' dubbing to play.
Specifically, if the second dubbing file includes dubbing of a plurality of character objects, the user may, through the ninth input, cause at least one character identifier to be displayed on the video playing interface of the first video, where each character identifier corresponds to the dubbing file of one character object. The user can perform the tenth input on any one of the displayed character identifiers (the first character identifier), so that when the first video is played, the character object corresponding to the first character identifier uses the dubbing in the second dubbing file while the other character objects use the original dubbing of the first video.
Illustratively, as shown in fig. 11, after selecting the second dubbing file, the user performs the ninth input, and a dubbing play selection box 111 pops up on the video playing interface of the first video. The selection box includes three character identifiers, namely full audio, character 1 audio, and character 2 audio. The user may perform the tenth input on the character 2 audio identifier; then, during playback of the first video, character 1 is dubbed with the original dubbing file while character 2 is dubbed with the audio in the second dubbing file.
In this way, the dubbing audio of a particular character object in the first video can be flexibly selected.
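Per-segment track selection can be sketched as below; the function and parameter names are illustrative assumptions:

```kotlin
// For each speaking segment, use the dubbed audio only if the segment's
// character was selected via a character identifier; otherwise keep the
// original track (e.g. character 2 dubbed, character 1 original).
fun audioForSegment(
    segmentCharacterId: Int,
    selectedCharacterIds: Set<Int>,
    dubbedAudioPath: (characterId: Int) -> String,
    originalAudioPath: String,
): String =
    if (segmentCharacterId in selectedCharacterIds) dubbedAudioPath(segmentCharacterId)
    else originalAudioPath
```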
Fig. 12 is a schematic structural diagram of a video processing apparatus according to another embodiment of the present application, and as shown in fig. 12, the video processing apparatus may include:
a first receiving module 121, configured to receive a first input in a case of displaying a video playing interface of a first video;
a first display module 122, configured to display at least one object identifier corresponding to a character object in the first video in response to the first input;
a second receiving module 123, configured to receive a second input of a first object identifier of the at least one object identifier;
A second display module 124, configured to display video clip information of a first character object corresponding to the first object identifier in response to the second input;
a third receiving module 125 for receiving a third input;
and a dubbing module 126, configured to dub at least a part of the video clips corresponding to the video clip information in response to the third input.
In this embodiment of the present application, a first input may be performed on a video playing interface, causing object identifiers corresponding to character objects in the video to be displayed. The user may perform an input on an object identifier to select the character object to be dubbed; after one of the character objects is selected, video clip information corresponding to that character object may be displayed, and the user may then select the video clip to be dubbed. According to this embodiment of the application, dubbing of a character object can be realized directly on the video playing interface, and the character and video clip to be dubbed can be located quickly through simple operations, without a third-party application or tool and without requiring the user to have professional dubbing skills.
In some embodiments, the second display module 124 includes:
and the first display unit is used for displaying the first display identifier on the playing progress bar of the first video.
In some embodiments, a second display identifier is displayed on the playing progress bar, and in a case where the character object corresponding to the second display identifier is different from the first character object, a display parameter of the second display identifier is different from that of the first display identifier.
In some embodiments, the third input comprises a first sub-input and a second sub-input,
the dubbing module 126 further includes:
a first determining unit, configured to determine a second video clip corresponding to the video clip information in response to the first sub-input;
and the dubbing unit is used for dubbing the second video segment in response to the second sub-input.
In another optional example, the video processing apparatus further includes:
a fourth receiving module, configured to receive a fourth input of a third display identifier corresponding to the second video segment;
and the updating module is used for responding to the fourth input and updating the starting moment or the ending moment of the second video segment.
In some embodiments, the video processing device further comprises:
the third display module is used for displaying at least one editing option;
a fifth receiving module for receiving a fifth input of the first editing option;
And the editing module is used for responding to the fifth input, editing the first dubbing file corresponding to the third video clip according to the editing function corresponding to the first editing option, wherein the first editing option is any one of the at least one editing option.
In some embodiments, the editing module further comprises:
a second display unit, configured to display at least one path identifier, where each path identifier indicates a shared path;
a receiving unit configured to receive a sixth input of the first path identifier;
and the sharing unit is used for responding to the sixth input, and sharing the third video clip and the first dubbing file corresponding to the third video clip through a sharing path corresponding to the first path identifier, wherein the first path identifier is any one of the at least one path identifier.
In some embodiments, the video processing device further comprises:
a sixth receiving module for receiving a seventh input;
a fourth display module for displaying at least one file identification in response to the seventh input, wherein each of the file identifications indicates an audio file, wherein the audio file includes the second dubbing file;
A seventh receiving module, configured to receive an eighth input to a first file identifier, where the first file identifier is any one of the at least one file identifier;
and the first playing module is used for responding to the eighth input and playing the first video based on the audio file indicated by the first file identification.
In some embodiments, the video processing device further comprises:
an eighth receiving module, configured to receive a ninth input of a second file identifier in the at least one file identifier, where an audio file indicated by the second file identifier includes dubbing files of at least two character objects;
a fifth display module for displaying at least one character identifier in response to the ninth input, wherein each of the character identifiers indicates a dubbing file of one of the character objects;
a ninth receiving module, configured to receive a tenth input to a first role identifier, where the first role identifier is any one of the at least one role identifier;
and the second playing module is used for responding to the tenth input and playing the first video based on the dubbing file of the character object corresponding to the first character identifier.
The video processing apparatus in the embodiments of the present application may be an electronic device, or may be a component in an electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal or a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (Mobile Internet Device, MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a server, a network attached storage (Network Attached Storage, NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine, which is not specifically limited in the embodiments of the present application.
The video processing device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an IOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The video processing device provided in this embodiment of the present application can implement each process implemented by the method embodiment of fig. 1, so as to achieve the same technical effects, and in order to avoid repetition, a detailed description is omitted here.
Optionally, as shown in fig. 13, the embodiment of the present application further provides an electronic device 130, including a processor 131, a memory 132, and a program or an instruction stored in the memory 132 and capable of running on the processor 131, where the program or the instruction implements each process of the embodiment of the video processing method when executed by the processor 131, and the process can achieve the same technical effect, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Referring to fig. 14 in combination, fig. 14 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application. The electronic device 1400 includes, but is not limited to: a radio frequency unit 141, a network module 142, an audio output unit 143, an input unit 144, a sensor 145, a display unit 146, a user input unit 147, an interface unit 148, a memory 149, and a processor 140.
Those skilled in the art will appreciate that the electronic device 1400 may further include a power source (e.g., a battery) for powering the various components, and the power source may be logically coupled to the processor 140 via a power management system so as to manage charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 14 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange components differently, which is not described in detail herein.
Wherein, the user input unit 147 is configured to receive a first input in a case of displaying a video playing interface of the first video;
a display unit 146, configured to display at least one object identifier corresponding to a character object in the first video in response to the first input;
a user input unit 147 for receiving a second input of a first object identification of the at least one object identification;
a display unit 146, configured to display video clip information of a first character object corresponding to the first object identifier in response to the second input;
a user input unit 147 for receiving a third input;
And the processor 140 is configured to dub at least part of the video clips corresponding to the video clip information in response to the third input.
In this embodiment of the present application, a first input may be performed on a video playing interface, an object identifier corresponding to a character object in a video may be displayed, a user may input the object identifier, select a character object that wants to dub, after selecting one of the character objects, may display video clip information corresponding to the character object, and then the user may select a video clip that wants to dub. According to the embodiment of the invention, the dubbing of the diagonal object can be directly realized on the video playing interface, and the role and the video fragment which want to dub can be quickly positioned through simple operation, so that a third party application program or a tool is not needed, and a user does not need to have professional dubbing skills.
In some embodiments, the display unit 146 is further configured to display the first display identifier on a playing progress bar of the first video.
In some embodiments, a second display identifier is displayed on the playing progress bar, and in a case where the character object corresponding to the second display identifier is different from the first character object, a display parameter of the second display identifier is different from that of the first display identifier.
In some embodiments, the third input comprises a first sub-input and a second sub-input,
the processor 140 is further configured to determine, in response to the first sub-input, a second video clip corresponding to the video clip information;
and dubbing the second video segment in response to the second sub-input.
In some embodiments, the user input unit 147 is further configured to receive a fourth input of a third display identifier corresponding to the second video segment;
the processor 140 is configured to update a start time or an end time of the second video segment in response to the fourth input.
In some embodiments, the display unit 146 is configured to display at least one editing option;
the user input unit 147 is configured to receive a fifth input of the first editing option;
and the processor 140 is configured to edit the first dubbing file corresponding to the third video clip according to an editing function corresponding to the first editing option in response to the fifth input, where the first editing option is any one of the at least one editing option.
In some embodiments, the display unit 146 is configured to display at least one path identifier, where each path identifier indicates a shared path;
The user input unit 147 is configured to receive a sixth input of the first path identifier;
the processor 140 is configured to share, in response to the sixth input, the third video segment and the first dubbing file corresponding to the third video segment through a sharing path corresponding to the first path identifier, where the first path identifier is any one of the at least one path identifier.
In some embodiments, the user input unit 147 is configured to receive a seventh input;
the display unit 146 is further configured to display at least one file identifier in response to the seventh input, where each of the file identifiers indicates an audio file, and the audio file includes the second dubbing file;
the user input unit 147 is further configured to receive an eighth input for a first file identifier, where the first file identifier is any one of the at least one file identifier;
and a processor 140, configured to play the first video based on the audio file indicated by the first file identifier in response to the eighth input.
In some embodiments, the user input unit 147 is configured to receive a ninth input of a second file identifier of the at least one file identifier, where the audio file indicated by the second file identifier includes dubbing files of at least two character objects;
The display unit 146 is configured to display at least one character identifier in response to the ninth input, where each character identifier indicates a dubbing file of one character object;
the user input unit 147 is further configured to receive a tenth input of a first role identifier, where the first role identifier is any one of the at least one role identifier;
the audio output unit 143 is configured to respond to the tenth input and play the first video based on the dubbing file corresponding to the first character identifier.
It should be appreciated that in embodiments of the present application, the input unit 144 may include a graphics processor (Graphics Processing Unit, GPU) 1441 and a microphone 1442, the graphics processor 1441 processing image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 146 may include a display panel 1461, and the display panel 1461 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 147 includes at least one of a touch panel 1471 and other input devices 1472. Touch panel 1471, also known as a touch screen. The touch panel 1471 may include two parts, a touch detection device and a touch controller. Other input devices 1472 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 149 may be used to store software programs as well as various data. The memory 149 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system and application programs or instructions required for at least one function (such as a sound playing function and an image playing function). Further, the memory 149 may include volatile memory or nonvolatile memory, or both. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), or a direct rambus RAM (DRRAM). The memory 149 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 140 may include one or more processing units; optionally, the processor 140 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 140.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, where the program or the instruction realizes each process of the embodiment of the video processing method when executed by a processor, and the same technical effect can be achieved, so that repetition is avoided, and no detailed description is given here.
The processor is a processor in the electronic device in the above embodiment. Readable storage media include computer readable storage media such as computer readable memory ROM, random access memory RAM, magnetic or optical disks, and the like.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, the processor is used for running a program or instructions, each process of the embodiment of the video processing method can be realized, the same technical effect can be achieved, and in order to avoid repetition, the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
The embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the embodiments of the video processing method, and achieve the same technical effects, and are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or by means of hardware, though in many cases the former is preferred. Based on such an understanding, the technical solutions of the present application, or the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are also within the protection of the present application.

Claims (15)

1. A video processing method, comprising:
receiving a first input in a case where a video playing interface of a first video is displayed;
in response to the first input, displaying at least one object identifier corresponding to a character object in the first video;
receiving a second input of a first object identifier of the at least one object identifier;
in response to the second input, displaying video clip information of a first character object corresponding to the first object identifier;
receiving a third input;
and in response to the third input, dubbing at least part of the video clips corresponding to the video clip information.
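For illustration only: claim 1 describes an input-driven flow (select a character object, view its clip information, then dub). The Kotlin sketch below models that flow under assumed, hypothetical names (CharacterObject, VideoClip, DubbingFlow); the claim prescribes no data types, and the actual audio recording is stubbed out.

    // Illustrative sketch only; all types are hypothetical.
    data class CharacterObject(val id: String, val name: String)
    data class VideoClip(val startMs: Long, val endMs: Long)
    data class VideoClipInfo(val character: CharacterObject, val clips: List<VideoClip>)

    class DubbingFlow(
        private val characters: List<CharacterObject>,
        private val clipIndex: Map<String, List<VideoClip>>
    ) {
        // First input: while the playing interface of the first video is shown,
        // surface one object identifier per character object in the video.
        fun onFirstInput(): List<CharacterObject> = characters

        // Second input: the user selects a first object identifier; display the
        // video clip information of the corresponding first character object.
        fun onSecondInput(selected: CharacterObject): VideoClipInfo =
            VideoClipInfo(selected, clipIndex[selected.id].orEmpty())

        // Third input: dub at least part of the clips in the displayed info.
        // Actual recording and mixing are out of scope and stubbed out here.
        fun onThirdInput(info: VideoClipInfo, chosen: List<VideoClip>): List<VideoClip> =
            chosen.filter { it in info.clips }
    }

    fun main() {
        val hero = CharacterObject("c1", "Hero")
        val flow = DubbingFlow(
            characters = listOf(hero),
            clipIndex = mapOf("c1" to listOf(VideoClip(0, 5_000), VideoClip(12_000, 20_000)))
        )
        val info = flow.onSecondInput(flow.onFirstInput().first())
        println(flow.onThirdInput(info, info.clips.take(1))) // dub only the first clip
    }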
2. The method according to claim 1, wherein the video clip information comprises a first display identifier corresponding to start and stop times of a first video clip, and the displaying video clip information of a first character object corresponding to the first object identifier comprises:
displaying the first display identifier on a playing progress bar of the first video.
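For illustration only: placing a display identifier on a playing progress bar reduces to mapping the clip's start and stop times onto fractions of the video's total duration. A minimal Kotlin sketch of that mapping, with all names hypothetical:

    // Map a clip's start/stop times (ms) to fractional positions on the
    // playing progress bar of a video lasting totalMs (illustrative only).
    fun markerFractions(startMs: Long, endMs: Long, totalMs: Long): Pair<Double, Double> {
        require(totalMs > 0 && startMs in 0..endMs && endMs <= totalMs)
        return startMs.toDouble() / totalMs to endMs.toDouble() / totalMs
    }

    fun main() {
        // A clip from 12 s to 20 s of a 60 s video spans 0.2 .. 0.333 of the bar.
        println(markerFractions(12_000, 20_000, 60_000))
    }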
3. The method according to claim 2, wherein a second display identifier is displayed on the playing progress bar, and in a case where a character object corresponding to the second display identifier is different from the first character object, a display parameter of the second display identifier is different from a display parameter of the first display identifier.
4. The method of claim 1, wherein the third input comprises a first sub-input and a second sub-input, and the dubbing at least part of the video clips corresponding to the video clip information in response to the third input comprises:
determining a second video segment corresponding to the video clip information in response to the first sub-input;
and dubbing the second video segment in response to the second sub-input.
5. The method of claim 4, wherein prior to dubbing the second video segment, the method further comprises:
receiving a fourth input of a third display identifier corresponding to the second video segment;
and in response to the fourth input, updating a start time or an end time of the second video segment.
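For illustration only: updating a start or end time in response to an input on the display identifier can be modeled as a clamped boundary move. The clamping policy below (the segment may not invert or leave the video) is an assumption, not something the claim specifies:

    data class Segment(val startMs: Long, val endMs: Long)

    // Moving the start clamps it to [0, end]; moving the end clamps it to
    // [start, totalMs]; a hypothetical policy for keeping the segment valid.
    fun updateBoundary(seg: Segment, newTimeMs: Long, movingStart: Boolean, totalMs: Long): Segment =
        if (movingStart) seg.copy(startMs = newTimeMs.coerceIn(0L, seg.endMs))
        else seg.copy(endMs = newTimeMs.coerceIn(seg.startMs, totalMs))

    fun main() {
        println(updateBoundary(Segment(12_000, 20_000), 10_000, movingStart = true, totalMs = 60_000))
        // Segment(startMs=10000, endMs=20000)
    }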
6. The method of claim 1, wherein the at least part of the video clips comprises a third video clip, and after the dubbing at least part of the video clips corresponding to the video clip information, the method further comprises:
displaying at least one editing option;
receiving a fifth input of a first editing option;
and in response to the fifth input, editing a first dubbing file corresponding to the third video clip according to an editing function corresponding to the first editing option, wherein the first editing option is any one of the at least one editing option.
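For illustration only: applying "an editing function corresponding to the first editing option" amounts to dispatching on the selected option. The option set and behavior below are invented for the sketch; the claim names no concrete options:

    // Hypothetical option set; the claim leaves the editing functions open.
    enum class EditOption { TRIM, ADJUST_VOLUME, SHARE }

    // Dispatch the editing function keyed by the selected editing option.
    fun editDubbingFile(dubbingFilePath: String, option: EditOption): String = when (option) {
        EditOption.TRIM -> "$dubbingFilePath (trimmed)"
        EditOption.ADJUST_VOLUME -> "$dubbingFilePath (volume adjusted)"
        EditOption.SHARE -> "$dubbingFilePath (queued for sharing)"
    }

    fun main() {
        println(editDubbingFile("clip3_dub.aac", EditOption.TRIM))
    }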
7. The method of claim 6, wherein, in a case where the first editing option is a sharing option, the editing the first dubbing file corresponding to the third video clip according to the editing function corresponding to the first editing option comprises:
displaying at least one path identifier, wherein each path identifier indicates a sharing path;
receiving a sixth input of a first path identifier;
and in response to the sixth input, sharing the third video clip and the first dubbing file corresponding to the third video clip through a sharing path corresponding to the first path identifier, wherein the first path identifier is any one of the at least one path identifier.
8. The method of claim 1, wherein a second dubbing file is obtained by the dubbing at least part of the video clips corresponding to the video clip information, and the method further comprises:
receiving a seventh input;
in response to the seventh input, displaying at least one file identifier, wherein each file identifier indicates an audio file, and the audio files indicated by the at least one file identifier include the second dubbing file;
receiving an eighth input of a first file identifier, wherein the first file identifier is any one of the at least one file identifier;
and in response to the eighth input, playing the first video based on the audio file indicated by the first file identifier.
9. The method of claim 8, wherein after the displaying at least one file identifier, the method further comprises:
receiving a ninth input of a second file identifier of the at least one file identifier, wherein the audio file indicated by the second file identifier comprises dubbing files of at least two character objects;
in response to the ninth input, displaying at least one character identifier, wherein each character identifier indicates a dubbing file of one character object;
receiving a tenth input of a first character identifier, wherein the first character identifier is any one of the at least one character identifier;
and in response to the tenth input, playing the first video based on the dubbing file indicated by the first character identifier.
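For illustration only: claims 8 and 9 let the user pick an audio file and, when that file bundles dubbing files for several character objects, a single character's dubbing to replay with. A Kotlin sketch of that resolution step, with hypothetical types (DubTrack, AudioFile):

    data class DubTrack(val characterId: String, val path: String)
    data class AudioFile(val id: String, val tracks: List<DubTrack>)

    // Selecting a file identifier and, when the file holds several characters'
    // dubbing files, a character identifier yields the track to replay with.
    fun resolveTrack(files: List<AudioFile>, fileId: String, characterId: String?): DubTrack? {
        val file = files.firstOrNull { it.id == fileId } ?: return null
        return if (characterId == null) file.tracks.singleOrNull()
        else file.tracks.firstOrNull { it.characterId == characterId }
    }

    fun main() {
        val files = listOf(
            AudioFile("f2", listOf(DubTrack("c1", "hero_dub.aac"), DubTrack("c2", "villain_dub.aac")))
        )
        println(resolveTrack(files, "f2", "c2")) // play the first video with villain_dub.aac
    }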
10. A video processing apparatus, comprising:
a first receiving module, configured to receive a first input in a case where a video playing interface of a first video is displayed;
a first display module, configured to display, in response to the first input, at least one object identifier corresponding to a character object in the first video;
a second receiving module, configured to receive a second input of a first object identifier of the at least one object identifier;
a second display module, configured to display, in response to the second input, video clip information of a first character object corresponding to the first object identifier;
a third receiving module, configured to receive a third input;
and a dubbing module, configured to dub, in response to the third input, at least part of the video clips corresponding to the video clip information.
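For illustration only: claim 10's apparatus can be read as a composition of receiving, display, and dubbing modules. The interfaces below are hypothetical placeholders for that decomposition; the claim leaves each module's realization open:

    // Hypothetical module interfaces mirroring claim 10's decomposition.
    fun interface ReceivingModule { fun receive(): String }          // returns an input event
    fun interface DisplayModule { fun display(items: List<String>) } // renders identifiers/info
    fun interface DubbingModule { fun dub(clipIds: List<String>) }   // records over clips

    class VideoProcessingApparatus(
        val firstReceiving: ReceivingModule,
        val firstDisplay: DisplayModule,
        val secondReceiving: ReceivingModule,
        val secondDisplay: DisplayModule,
        val thirdReceiving: ReceivingModule,
        val dubbing: DubbingModule
    )

    fun main() {
        val apparatus = VideoProcessingApparatus(
            firstReceiving = ReceivingModule { "first input" },
            firstDisplay = DisplayModule { println("object identifiers: $it") },
            secondReceiving = ReceivingModule { "second input" },
            secondDisplay = DisplayModule { println("video clip information: $it") },
            thirdReceiving = ReceivingModule { "third input" },
            dubbing = DubbingModule { println("dubbing clips: $it") }
        )
        apparatus.firstDisplay.display(listOf("Hero", "Villain"))
        apparatus.dubbing.dub(listOf("clip-1"))
    }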
11. The apparatus of claim 10, wherein the video clip information comprises a first display identifier corresponding to start and stop times of a first video clip, and the second display module comprises:
a first display unit, configured to display the first display identifier on a playing progress bar of the first video.
12. The apparatus of claim 10, wherein the third input comprises a first sub-input and a second sub-input, the dubbing module comprising:
a determining unit, configured to determine a second video segment corresponding to the video clip information in response to the first sub-input;
and a dubbing unit, configured to dub the second video segment in response to the second sub-input.
13. The apparatus of claim 12, wherein the apparatus further comprises:
a fourth receiving module, configured to receive a fourth input of a third display identifier corresponding to the second video segment;
and an updating module, configured to update, in response to the fourth input, a start time or an end time of the second video segment.
14. The apparatus of claim 10, wherein the at least part of the video clips comprises a third video clip, and the apparatus further comprises:
a third display module, configured to display at least one editing option;
a fifth receiving module, configured to receive a fifth input of a first editing option;
and an editing module, configured to edit, in response to the fifth input, a first dubbing file corresponding to the third video clip according to an editing function corresponding to the first editing option, wherein the first editing option is any one of the at least one editing option.
15. An electronic device, comprising a processor and a memory storing a program or instructions executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the video processing method of any one of claims 1-9.
CN202311444371.4A 2023-11-01 2023-11-01 Video processing method and device and electronic equipment Pending CN117440207A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311444371.4A CN117440207A (en) 2023-11-01 2023-11-01 Video processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN117440207A true CN117440207A (en) 2024-01-23

Family

ID=89547761

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311444371.4A Pending CN117440207A (en) 2023-11-01 2023-11-01 Video processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN117440207A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination