CN117714762A - Video processing method, device, electronic equipment and readable storage medium


Info

Publication number
CN117714762A
CN117714762A (application number CN202311750548.3A)
Authority
CN
China
Prior art keywords
video, input, displaying, sub, time axis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311750548.3A
Other languages
Chinese (zh)
Inventor
杨思敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority application: CN202311750548.3A
Publication: CN117714762A
Legal status: Pending

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a video processing method, a video processing apparatus, an electronic device and a readable storage medium, and belongs to the technical field of images. The method includes: displaying a first video and a second video, where the first video and the second video are at least partially identical; determining a target duration according to a first duration of the first video and a second duration of the second video; displaying, according to the target duration, a time axis whose length corresponds to the target duration, where the starting point of the time axis is aligned with the starting positions of both the first video and the second video; receiving a first input for a first period on the time axis; and, in response to the first input, displaying first annotation information, where the first annotation information is the content entered by the first input and corresponds to the first period on the time axis.

Description

Video processing method, device, electronic equipment and readable storage medium
Technical Field
The application belongs to the technical field of images, and particularly relates to a video processing method, a video processing device, electronic equipment and a readable storage medium.
Background
At present, with the rapid growth of the internet and the self-media industry, creating and editing video on smart electronic devices has become increasingly common.
When creating and editing video, a user often needs to compare two or even more videos. In this case, the user usually views one video first and then switches to the other, comparing multiple videos by switching back and forth.
Therefore, in the prior art, switching back and forth between multiple videos for comparison makes the user's operation cumbersome.
Disclosure of Invention
An object of the embodiments of the present application is to provide a video processing method that can solve the prior-art problem of cumbersome user operation caused by switching back and forth between multiple videos for comparison.
In a first aspect, an embodiment of the present application provides a video processing method, including: displaying a first video and a second video, where the first video and the second video are at least partially identical; determining a target duration according to a first duration of the first video and a second duration of the second video; displaying, according to the target duration, a time axis whose length corresponds to the target duration, where the starting point of the time axis is aligned with the starting positions of both the first video and the second video; receiving a first input for a first period on the time axis; and, in response to the first input, displaying first annotation information, where the first annotation information is the content entered by the first input and corresponds to the first period on the time axis.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including: a first display module, configured to display a first video and a second video, where the first video and the second video are at least partially identical; a determining module, configured to determine a target duration according to a first duration of the first video and a second duration of the second video; a second display module, configured to display, according to the target duration, a time axis whose length corresponds to the target duration, where the starting point of the time axis is aligned with the starting positions of both the first video and the second video; a first receiving module, configured to receive a first input for a first period on the time axis; and a third display module, configured to display, in response to the first input, first annotation information, where the first annotation information is the content entered by the first input and corresponds to the first period on the time axis.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiments of the present application, in a scene of comparing a first video and a second video whose content is partially identical, the user triggers synchronous display of the first video and the second video through a preset input. Meanwhile, a time axis is displayed based on the durations of the two videos, and the starting positions of the two videos are aligned with the starting point of the time axis, so that the two videos share one time axis. Further, the user can perform a first input on the time axis: on one hand, the first input determines a certain period, such as a first period; on the other hand, the user enters content describing the differences between the two videos within that period, so that first annotation information is finally displayed at the first period on the time axis. Therefore, in the embodiments of the present application, when comparing multiple videos, synchronous display of the videos is supported and differences can be manually annotated on the time axis, making the differences between the videos clear at a glance. The user no longer needs to switch back and forth between the videos, which simplifies user operation.
Drawings
FIG. 1 is a flowchart of a video processing method according to an embodiment of the present application;
FIG. 2 is one of the display schematic diagrams of an electronic device according to an embodiment of the present application;
FIG. 3 is a second display schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 4 is a third display schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 5 is a fourth display schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 6 is a fifth display schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 7 is a sixth display schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 8 is a seventh display schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 9 is an eighth display schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 10 is a ninth display schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 11 is a tenth display schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 12 is a block diagram of a video processing apparatus according to an embodiment of the present application;
FIG. 13 is one of the hardware structure diagrams of an electronic device according to an embodiment of the present application;
FIG. 14 is a second hardware structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be described clearly below with reference to the accompanying drawings. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second" and the like in the description and claims are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It should be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. The objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The video processing method provided by the embodiment of the application is described in detail below by means of specific embodiments and application scenes thereof with reference to the accompanying drawings.
Fig. 1 shows a flowchart of a video processing method according to an embodiment of the present application. The method is described here as applied to an electronic device by way of example, and includes:
step 110: the first video and the second video are displayed. Wherein the first video and the second video are at least partially identical.
Before the first video and the second video are displayed, the user needs to select them.
As an example application scenario, after the user taps the selection option in the image list of an album application, multiple items in the list can be selected. When the user selects the first video and the second video, a review option is displayed at the bottom of the screen; the user taps the review option to enter a review page, and the first video and the second video are displayed in the review page.
In this embodiment, regarding the comparison scene of the first video and the second video: in a first aspect, the videos being compared may be two videos or a larger number of videos; in a second aspect, the first video and the second video are at least partially identical, for example two relatively similar videos. The following description takes the two videos represented by the first video and the second video as an example.
For example, as shown in fig. 2, a first video 201 and a second video 202 are displayed in a top-bottom arrangement. For another example, the electronic device includes two screens, and the first video and the second video are displayed on the two screens respectively.
In some application scenarios, the first video and the second video are an original video and an edited video, respectively, so that, based on the present embodiment, a difference between the original video and the edited video can be represented.
Step 120: and determining a target duration according to the first duration of the first video and the second duration of the second video.
Optionally, the target duration is the longer of the first duration and the second duration.
For example, if the first duration is one hour and the second duration is one and a half hours, the target duration is one and a half hours.
Optionally, one of the first video and the second video is a main video, and the target duration is the duration of the main video.
For example, as shown in fig. 2, the video displayed on top is the main video. Further, the user can long-press and drag the lower video to the upper position, thereby replacing the main video.
A main video is designated so that the differences of the other video relative to the main video can be analyzed from the main video's point of view.
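As a sketch of step 120's duration selection (the helper name and parameters are hypothetical; the patent does not prescribe an implementation), the target duration can be chosen as follows:

```python
def target_duration(first_duration, second_duration, main=None):
    """Pick the shared time-axis length for two compared videos.

    If one video is designated as the main video, its duration is used;
    otherwise the longer of the two durations is used, so neither video
    is truncated on the shared time axis.
    """
    if main == "first":
        return first_duration
    if main == "second":
        return second_duration
    return max(first_duration, second_duration)
```

With a one-hour first video and a one-and-a-half-hour second video, the target duration is 5400 seconds unless the first video is designated as the main video.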
Step 130: displaying, according to the target duration, a time axis whose length corresponds to the target duration. The starting point of the time axis is aligned with the starting position of the first video and the starting position of the second video.
In this step, a time axis corresponding to the first duration and a time axis corresponding to the second duration are obtained and aligned at their starting points. Based on the two aligned time axes, one of them is displayed, for example the longer one, or the time axis of the main video, that is, the time axis whose length is the target duration. The starting point of the displayed time axis therefore represents the starting positions of both videos.
For example, as shown in fig. 2, the time axis 203 is displayed while the first video 201 and the second video 202 are displayed.
Step 140: a first input is received for a first period of time on a time axis.
In some embodiments of the present application, the first input is used to select a certain period, such as the first period, on the time axis, and further to enter the first annotation information for that period; the first input may also be referred to as a first operation. Illustratively, the first input includes, but is not limited to: a touch input performed with a finger, a stylus or another touch device on a control or other screen area, a voice command, a specific gesture, or any other feasible input, which is not limited in the embodiments of the present application. The specific gesture may be any one of a single-tap gesture, a slide gesture, a drag gesture, a pressure-recognition gesture, a long-press gesture, an area-change gesture, a double-press gesture and a double-tap gesture; a tap input may be a single tap, a double tap or any number of taps, and may also be a long press or a short press.
For example, the first input may be the user selecting two playback positions on the time axis, where the period between the two positions is the first period, and entering the first annotation information after the first period is selected.
Optionally, the first annotation information comprises at least one of a color mark and annotation content.
Step 150: and responding to the first input, displaying first annotation information, wherein the first annotation information is input content of the first input, and corresponds to a first time period on a time axis.
In this step, the displayed first annotation information corresponds to a first period of time on the time axis.
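One possible data model for binding annotation information to a period on the shared time axis is sketched below. The class and field names are illustrative assumptions, not the patent's actual implementation:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Annotation:
    start: float                 # start of the annotated period, seconds
    end: float                   # end of the annotated period, seconds
    color: Optional[str] = None  # optional marker color (e.g. "red")
    note: Optional[str] = None   # optional annotation content

@dataclass
class Timeline:
    duration: float              # target duration of the shared time axis
    annotations: List[Annotation] = field(default_factory=list)

    def add(self, ann: Annotation) -> None:
        # Reject periods that fall outside the displayed time axis.
        if not (0 <= ann.start < ann.end <= self.duration):
            raise ValueError("period outside the time axis")
        self.annotations.append(ann)

    def at(self, t: float) -> List[Annotation]:
        """Return annotations whose period covers playback time t."""
        return [a for a in self.annotations if a.start <= t <= a.end]
```

Because both videos share one time axis, a single annotation record is enough to mark a difference that concerns both of them.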
Further, as shown in fig. 2, the user taps the completion option, thereby saving all review results generated on the review page.
For example, when a video is created and edited by a team, after a first member edits the video, that member can compare the original video with the edited video so that subsequent members can clearly see where the video was modified; during the comparison, the member enters annotation information to mark the differences between the two videos.
In another example application scenario, after one team member compares the two videos and finds that their colors differ in a certain period, the member marks that segment so that subsequent team members can adjust the colors.
In the embodiments of the present application, in a scene of comparing a first video and a second video whose content is partially identical, the user triggers synchronous display of the first video and the second video through a preset input. Meanwhile, a time axis is displayed based on the durations of the two videos, and the starting positions of the two videos are aligned with the starting point of the time axis, so that the two videos share one time axis. Further, the user can perform a first input on the time axis: on one hand, the first input determines a certain period, such as a first period; on the other hand, the user enters content describing the differences between the two videos within that period, so that first annotation information is finally displayed at the first period on the time axis. Therefore, in the embodiments of the present application, when comparing multiple videos, synchronous display of the videos is supported and differences can be manually annotated on the time axis, making the differences between the videos clear at a glance. The user no longer needs to switch back and forth between the videos, which simplifies user operation.
In a video processing method of another embodiment of the present application, the first input includes a first sub-input and a second sub-input.
In the flow of the present embodiment, step 150 includes:
Sub-step A1: in response to the first sub-input, adjusting a first slider on the time axis to a first position and a second slider on the time axis to a second position, where the period between the first position and the second position is the first period.
The first sub-input is used to select a first period of time on a time axis.
For example, as shown in fig. 2, two sliders, a first slider 204 and a second slider 205, are disposed on the time axis. The user slides either one to activate it; when a slider's color brightens, it is activated, and the user can then drag it to any position on the time axis. As shown in fig. 3, the portion between the first slider 301 and the second slider 302 is the first period 303.
Optionally, the period between the first position and the second position is highlighted on the time axis to emphasize the period selected by the user.
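Mapping the two slider positions to a period might look like the following sketch (pixel coordinates, axis width, and function name are illustrative assumptions):

```python
def slider_period(x1, x2, axis_width_px, target_duration_s):
    """Convert two slider pixel positions on the time axis to a period.

    The result is order-independent: whichever slider sits further left
    defines the start of the first period.
    """
    t1 = x1 / axis_width_px * target_duration_s
    t2 = x2 / axis_width_px * target_duration_s
    return min(t1, t2), max(t1, t2)
```

The `min`/`max` pair means the user may drag either slider past the other without producing an inverted period.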
Substep A2: in response to the second sub-input, a first marker color is displayed at a first period on the time axis.
The second sub-input is used to select the first marker color.
For example, for the second sub-input, as shown in fig. 3, a color mark option 304 is displayed below the time axis. The user taps the color mark option 304 and, as shown in fig. 4, multiple color icons are displayed, each filled with the color it represents; for example, the color icon 401 represents red. The user taps any color icon and, as shown in fig. 5, a red stripe 501 is displayed below the portion of the time axis corresponding to the first period. In other implementations, the background of the first period on the time axis may instead be filled with the first marker color, such as red.
In this embodiment, when comparing videos, a common period of the two videos can be marked with a selected color so as to highlight it and remind the relevant personnel to pay special attention to that period, without having to switch back and forth between the two videos to find where the difference occurs. This brings convenience to other video editors.
In a video processing method according to another embodiment of the present application, the first input further includes a third sub-input and a fourth sub-input.
In the flow of this embodiment, step 150 further includes:
substep B1: in response to the third sub-input, a first annotation box is displayed at a first time period on the timeline.
The third sub-input is used to insert at least one annotation box for the first period.
For example, for the third sub-input, as shown in fig. 3, an insert-annotation option 305 is displayed below the time axis. The user taps the insert-annotation option 305 and, as shown in fig. 6, an annotation box, namely a first annotation box 601, is displayed above the first period; a text prompt is displayed in the first annotation box 601 to prompt the user to enter the annotation content.
Optionally, the first annotation box is a bubble-like text entry box.
Substep B2: in response to the fourth sub-input, the first annotation content is displayed in the first annotation box.
The fourth sub-input is used to edit the first annotation content in the first annotation box.
For example, for the fourth sub-input, as shown in fig. 6, the user taps the first annotation box 601 and an input keyboard is displayed; the user then edits the content in the box. As shown in fig. 7, the content in the first annotation box 701 may be "the white balance parameter has a deviation here".
Optionally, the size of the first annotation box grows with the first annotation content, so that the box can always display all of the content.
Optionally, as shown in fig. 7, a close option 702 is displayed in the upper right corner of the first annotation box; after the user taps the close option 702, the first annotation content is saved automatically. As shown in fig. 8, after saving, an icon 801 representing the first annotation box is displayed at the first period on the time axis, that is, at the section corresponding to the first marker color; the icon 801 may be a minimized form of the first annotation box.
Further, as shown in fig. 8, the user taps the icon 801 of the first annotation box to restore and display the box, and can then add content to or delete content from it.
Further, as shown in fig. 8, the user long-presses the icon 801 of the first annotation box to display a delete option; tapping the delete option deletes the first annotation box, and the icon 801 is no longer displayed on the time axis.
In this embodiment, on top of the color marking that highlights the first period, the user can annotate the period with an annotation box, so that other video editors can quickly learn exactly which differences exist between the two videos within the first period, without switching back and forth between the two videos to find them. This brings convenience to other video editors.
Combining the above two embodiments: after the user taps the completion option on the review page, the page is saved in a custom project file format. When the user opens a file in this format, as shown in fig. 9, the review page is restored, that is, the first video 901, the second video 902, the time axis 903, the marker color bar 904 on the time axis, and the icon 905 of the annotation box corresponding to the marker color bar 904 are displayed. When the user taps the icon 905, the annotation box is restored and displayed at the section corresponding to the marker color bar 904, and the section of the time axis corresponding to the marker color bar 904 is highlighted.
In the flow of the video processing method according to another embodiment of the present application, after step 110, the method further includes:
step C1: the first information is displayed when the first video and the second video differ based on a first parameter between a first video frame and a second video frame respectively corresponding to the same time.
Wherein the first information comprises at least one of: the type of the first parameter, the first content for describing the difference of the first parameter, the number of times the first parameter is different, and the first time for representing each occurrence of the difference of the first parameter.
In the video processing method provided by the application, an automatic comparison scheme is further provided, and the automatic comparison result comprises first information.
Optionally, a first identifier is displayed on the review page, where the first identifier is used to indicate the first information.
For example, as shown in FIG. 2, a first logo 206 is displayed.
In this embodiment, the automatic comparison result assists the user in comparing the two videos and provides a reference for entering annotation information. During automatic comparison, each frame of the first video is compared with the corresponding frame of the second video; whenever the two frames differ, the difference is listed in the automatic comparison result.
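The frame-by-frame comparison described above can be sketched as follows, treating each frame as a dictionary of extracted parameter values (the parameter names and representation are illustrative; the patent does not fix one):

```python
def compare_frames(frames_a, frames_b,
                   params=("color", "subtitle", "effect")):
    """Compare corresponding frames of two videos parameter by parameter.

    Returns, for each parameter type, the indices of frames where the
    two videos disagree; these feed the automatic comparison result.
    """
    diffs = {p: [] for p in params}
    for i, (fa, fb) in enumerate(zip(frames_a, frames_b)):
        for p in params:
            if fa.get(p) != fb.get(p):
                diffs[p].append(i)
    return diffs
```

`zip` stops at the shorter video, which matches comparing only the frames that exist in both videos.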
Optionally, a third input to the first identification is received, and the first information is displayed in response to the third input.
In some embodiments of the present application, the third input is used to operate the first identifier so as to trigger display of the first information indicated by it; the third input may also be referred to as a third operation. Illustratively, the third input includes, but is not limited to: a touch input performed with a finger, a stylus or another touch device on a control or other screen area, a voice command, a specific gesture, or any other feasible input, which is not limited in the embodiments of the present application. The specific gesture may be any one of a single-tap gesture, a slide gesture, a drag gesture, a pressure-recognition gesture, a long-press gesture, an area-change gesture, a double-press gesture and a double-tap gesture; a tap input may be a single tap, a double tap or any number of taps, and may also be a long press or a short press.
For example, the third input may be, as shown in fig. 2, the user clicking on the first identifier 206.
Optionally, a window is displayed floating over the review page; the window is used to display the first information.
Optionally, the first information includes at least: the type of the first parameter, first content describing the difference in the first parameter, the number of times the first parameter differs, and a first time representing each occurrence of the difference.
As shown in fig. 10, in the window 10001: first, "color" is the type of the first parameter, "white balance parameters inconsistent" is the first content describing the difference, and the difference occurs once, at "02:12:12"; second, "subtitles" is the type of the first parameter, "subtitle contents differ" is the first content describing the difference, and the difference occurs twice, at "02:18:12" and "02:18:12"; third, "special effects" is the type of the first parameter, "special effects differ" is the first content describing the difference, and the difference occurs once, at "00:18:12".
Regarding the number of times the first parameter differs: if the same first parameter differs continuously across consecutive frames of the first video and the corresponding consecutive frames of the second video, this counts as one occurrence of the difference.
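This counting rule, where a difference persisting over consecutive frames counts as a single occurrence, amounts to collapsing runs of consecutive differing frame indices. A minimal sketch (hypothetical helper, assuming frame indices come from the frame-by-frame comparison):

```python
def count_occurrences(diff_frame_indices):
    """Collapse runs of consecutive differing frames into occurrences.

    Each run contributes one occurrence, and the first frame index of
    the run serves as the occurrence's first time.
    """
    starts = []
    prev = None
    for i in diff_frame_indices:
        if prev is None or i != prev + 1:
            starts.append(i)  # a new run of differing frames begins here
        prev = i
    return starts
```

For instance, differences at frames 5-7, 20-21, and 40 yield three occurrences, not six.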
Optionally, the user closes the window after clicking on the close option in the window.
Further, the user may enter annotation information for the period where a difference occurs, using the first information as a reference.
In this embodiment, the background may analyze the content of the first video and the second video, identify parameters such as color space, special-effect filters, subtitles and voice-over in the videos, and provide video-editing suggestions on color calibration, special-effect adaptation, text translation and so on. After the suggestions are generated, the user can view them by tapping the identifier, which displays the parameter types in which differences occur (such as color, subtitles and special effects), specific difference descriptions (such as inconsistent white balance parameters, inconsistent saturation parameters, different highlight parameters, different subtitle contents or different subtitle templates), the number of occurrences of each difference, and the times at which the differences occur. This embodiment thus provides reference suggestions for the user through automatic analysis and comparison, making it convenient to enter annotation information based on the comparison result, saving the user the step of manual comparison and simplifying user operation.
In the flow of the video processing method according to another embodiment of the present application, after step C1, the method further includes:
step D1: at least one first control is displayed. Wherein a first control is used to indicate a first moment.
In the first information, each first time that appears is indicated by a first control.
Optionally, for one first parameter, each time a difference occurs, one first control is displayed indicating the starting moment of that difference. Alternatively, for one first parameter, each time a difference occurs, two first controls are displayed, both indicating the starting moment of the difference, with each first control corresponding to one of the videos.
For example, as shown in fig. 10, when the white balance parameters of the first video and the second video differ, two first controls are displayed, indicating the time "02:12:12" of the first video and the time "02:12:12" of the second video respectively.
Step D2: a second input to the first control is received.
In some embodiments of the present application, the second input is used to operate the first control to trigger display of the video frames of the two videos at the indicated first moment, and the second input may be a second operation. Illustratively, the second input includes, but is not limited to: a touch input applied by the user to the first control on the screen through a touch device such as a finger or a stylus, a voice instruction input by the user, a specific gesture input by the user, or another feasible input, which is not limited in the embodiments of the present application. The specific gesture in the embodiments of the present application may be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture and a double-click gesture; the click input in the embodiments of the present application may be a single-click input, a double-click input, or any number of click inputs, and may also be a long-press input or a short-press input.
For example, the second input may be, as shown in fig. 10, clicking on a first control 10002 for indicating a first time of the second video.
Step D3: in response to the second input, video frames of the first video and the second video at a first time indicated by the first control are displayed.
In this step, the playing positions of the first video and the second video are synchronously adjusted to the first moment indicated by the first control corresponding to the second input, so that the review page displays the video frame of the first video at the first moment and the video frame of the second video at the first moment.
Accordingly, as shown in fig. 11, the cursor 1101 on the time axis is moved to a position corresponding to the first moment indicated by the first control corresponding to the second input, where the cursor 1101 is used to indicate the playing position.
Optionally, for a first parameter, if the difference occurs once, one group of first controls (or one first control) is displayed; if the difference occurs multiple times, multiple groups of first controls (or multiple first controls) are displayed.
In this embodiment, the automatic comparison result provides an entry that jumps directly to the moment at which the parameters differ, so that the user can check the difference in the video frames at that moment without manually sliding the cursor on the time axis to that moment, thereby simplifying the user operation.
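The jump entry of steps D1 to D3 can be sketched as follows, assuming a simple player model in which both videos and the shared-timeline cursor are seekable; the class and attribute names are illustrative only and are not drawn from the embodiment.

```python
# Minimal sketch (assumed player model) of step D3: operating a first control
# synchronously seeks both videos to the indicated first moment and moves the
# cursor on the shared time axis to the corresponding position.

class ReviewPage:
    def __init__(self):
        self.position_a = 0.0  # playback position of the first video (seconds)
        self.position_b = 0.0  # playback position of the second video (seconds)
        self.cursor = 0.0      # cursor position on the shared time axis

    def jump_to(self, first_moment):
        # Adjust both playback positions and the timeline cursor together,
        # so the review page shows both frames at the first moment.
        self.position_a = first_moment
        self.position_b = first_moment
        self.cursor = first_moment
        return (self.position_a, self.position_b)

page = ReviewPage()
# "02:12:12" from the figure, expressed in seconds for this sketch:
page.jump_to(2 * 3600 + 12 * 60 + 12)
```

In a real player the seek would go through the media framework's API rather than bare attributes; the point here is only that one input drives all three positions.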
In the flow of the video processing method according to another embodiment of the present application, after step 110, the method further includes:
step E1: the second identifier is displayed.
The second identifier is used to indicate second information. The second information is also part of the automatic comparison result.
The first information is used for indicating that corresponding frames of the two videos are different, and the second information is used for indicating that the two videos are different in whole.
Alternatively, the first identifier and the second identifier may be the same identifier, i.e. after the user clicks on this identifier, the first information and the second information are displayed.
Step E2: a fourth input is received for the second identification.
In some embodiments of the present application, the fourth input is used to operate the second identifier to trigger display of the second information indicated by the second identifier, and the fourth input may be a fourth operation. Illustratively, the fourth input includes, but is not limited to: a touch input applied by the user to the second identifier on the screen through a touch device such as a finger or a stylus, a voice instruction input by the user, a specific gesture input by the user, or another feasible input, which is not limited in the embodiments of the present application. The specific gesture in the embodiments of the present application may be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture and a double-click gesture; the click input in the embodiments of the present application may be a single-click input, a double-click input, or any number of click inputs, and may also be a long-press input or a short-press input.
For example, the fourth input may be that the user clicks on the second identifier.
Step E3: in response to the fourth input, second parameters of the first video and the second video are displayed.
Wherein the second parameter comprises at least one of: video duration, canvas size, frame rate, sampling frequency, audio channel, authoring time, video author, video rating, number of edits.
Alternatively, when some second parameter of the two videos is the same, the second parameter may be displayed for reference by the user.
In this embodiment, the overall differences between the two videos are provided on the basis of the automatic comparison of the two videos, and are not limited to frame-by-frame differences; this embodiment may also display some parameters that are the same for both videos, so as to present more parameters. Therefore, this embodiment can provide the user with a comparison reference covering more parameters, the operation step in which the user queries the related parameters of the two videos is saved, and convenience is provided for the user.
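A minimal sketch of the whole-video comparison of step E3, under the assumption that the second parameters are available as key-value metadata; differing parameters are reported as value pairs and identical parameters are kept for reference, as this embodiment suggests. The function name and sample fields are illustrative.

```python
# Illustrative whole-video comparison: second parameters such as duration,
# canvas size and frame rate are compared as metadata dictionaries.

def compare_metadata(meta_a, meta_b):
    """Return (differing, same) for two metadata dicts.

    differing: {param: (value_in_a, value_in_b)} for mismatched parameters.
    same:      {param: value} for parameters identical in both videos.
    """
    keys = set(meta_a) | set(meta_b)
    differing = {k: (meta_a.get(k), meta_b.get(k))
                 for k in keys if meta_a.get(k) != meta_b.get(k)}
    same = {k: meta_a[k]
            for k in keys
            if k in meta_a and k in meta_b and meta_a[k] == meta_b[k]}
    return differing, same

meta_a = {"duration": 125.0, "canvas": "1920x1080", "frame_rate": 30, "author": "A"}
meta_b = {"duration": 130.0, "canvas": "1920x1080", "frame_rate": 30, "author": "B"}

differing, same = compare_metadata(meta_a, meta_b)
```

The `same` dictionary corresponds to the optional display of identical second parameters for the user's reference.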
In summary, the video processing method provided by the application is used for meeting the requirements of multiple users on multi-video examination and annotation. In the application, firstly, the fine annotation of multiple videos is realized by expanding the interactive operations of the segment selection, the segment marking and the segment annotation of a time axis; secondly, automatically analyzing video content by combining background computing power to provide decision-making auxiliary information for a user; thirdly, through designing the appearance of the interactive interface and the interactive input step, the user operation experience is improved. Therefore, the video editing function is expanded, and the user experience of multi-user collaborative video creation editing is improved.
In the video processing method provided by the embodiments of the present application, the execution subject may be a video processing apparatus. In the embodiments of the present application, the video processing apparatus executing the video processing method is taken as an example to describe the video processing apparatus provided by the embodiments of the present application.
Fig. 12 shows a block diagram of a video processing apparatus according to an embodiment of the present application, the apparatus including:
a first display module 10 for displaying a first video and a second video; wherein the first video and the second video are at least partially identical;
a determining module 20, configured to determine a target duration according to a first duration of the first video and a second duration of the second video;
a second display module 30, configured to display a time axis with a length being the target duration according to the target duration; wherein the starting point of the time axis is aligned with the starting position of the first video and the starting position of the second video;
a first receiving module 40 for receiving a first input for a first period on a time axis;
the third display module 50 is configured to display first labeling information in response to the first input, where the first labeling information is input by the first input, and the first labeling information corresponds to a first period on the time axis.
In the embodiment of the application, in the scene of the first video and the second video with the same content in the comparison part, the user for video comparison triggers synchronous display of the first video and the second video in a preset input mode, meanwhile, based on the duration of the two videos, a time axis is displayed, and the starting positions of the two videos are aligned with the starting point of the time axis, so that the two videos can share the time axis. Further, the user can perform first input on the time axis, and on one hand, a certain time period, such as a first time period, is determined through the first input, and on the other hand, contents are input according to some differences of the two videos in the first time period, so that the effect that first annotation information is correspondingly displayed in the first time period on the time axis is finally achieved. Therefore, in the embodiment of the application, in the scene of comparing multiple videos, synchronous display of the multiple videos is supported, and meanwhile, manual annotation of differences on a time axis is also supported, so that the differences between the multiple videos are clear at a glance for a user, the user does not need to switch back and forth between the multiple videos, and the purpose of simplifying user operation is achieved.
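The text does not fix how the target duration is derived from the two durations; one natural choice, sketched here purely as an assumption, is the longer of the two, so that a time axis whose origin is aligned with both start positions covers both videos.

```python
# The embodiment only states that the target duration is determined "according
# to" the first duration and the second duration. An assumed rule, not mandated
# by the text: take the longer duration, so the shared time axis (whose start
# point is aligned with both videos' start positions) spans both videos.

def timeline_length(duration_a: float, duration_b: float) -> float:
    """Length of the shared time axis for two videos, in seconds."""
    return max(duration_a, duration_b)

# A 2 min 05 s video reviewed against a 2 min 10 s re-edit:
axis = timeline_length(125.0, 130.0)
```

Under this rule, any first period selected on the axis can address frames in either video, even where the shorter video has already ended.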
Optionally, the first input comprises a first sub-input and a second sub-input;
the third display module 50 includes:
an adjusting unit for adjusting a first slider on the time axis to a first position and a second slider on the time axis to a second position in response to the first sub-input; wherein the time period intermediate the first position and the second position is a first time period;
and a first display unit for displaying a first mark color at a first period on the time axis in response to the second sub-input.
Optionally, the first input further comprises a third sub-input and a fourth sub-input;
the third display module 50 further includes:
a second display unit for displaying a first comment frame at a first period on the time axis in response to the third sub-input;
and the third display unit is used for responding to the fourth sub-input and displaying the first annotation content in the first annotation frame.
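The four sub-inputs handled by the third display module can be modeled as data operations on the time axis: the first sub-input places the two sliders, the second applies the mark color, and the third and fourth create the annotation box and its content. The class and field names below are illustrative only.

```python
# Sketch of the first input as four sub-inputs acting on a shared time axis.

class Timeline:
    def __init__(self, length):
        self.length = length
        self.annotations = []  # each: {"period": (start, end), "color": ..., "text": ...}

    def annotate(self, first_pos, second_pos, color, text):
        start, end = sorted((first_pos, second_pos))  # sub-input 1: place the two sliders
        entry = {"period": (start, end),              # the first period between them
                 "color": color,                      # sub-input 2: first mark color
                 "text": text}                        # sub-inputs 3-4: annotation box + content
        self.annotations.append(entry)
        return entry

tl = Timeline(120.0)
entry = tl.annotate(10.0, 25.0, "red", "white balance drifts in this segment")
```

Keeping each annotation as a period-keyed record makes it straightforward to render the mark color over the first period and anchor the annotation box to it.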
Optionally, the apparatus further comprises:
the fourth display module is used for displaying first information when the first video and the second video are different based on a first parameter between a first video frame and a second video frame respectively corresponding to the same moment;
wherein the first information comprises at least one of: the type of the first parameter, the first content for describing the difference of the first parameter, the number of times the first parameter is different, and the first time for representing each occurrence of the difference of the first parameter.
Optionally, the apparatus further comprises:
the fifth display module is used for displaying at least one first control; wherein a first control is used for indicating a first moment;
the second receiving module is used for receiving a second input to the first control;
and the sixth display module is used for responding to the second input and displaying the video frames of the first video and the second video at the first moment indicated by the first control.
The device in the embodiments of the present application may be an electronic device, or may be a component in an electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or may be another device other than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, mobile internet device (Mobile Internet Device, MID), augmented reality (AR)/virtual reality (VR) device, robot, wearable device, ultra-mobile personal computer (UMPC), netbook or personal digital assistant (PDA), etc., but may also be a server, network attached storage (NAS), personal computer (PC), television (TV), teller machine or self-service machine, etc., which is not specifically limited in the embodiments of the present application.
The device of the embodiments of the present application may be a device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The device provided by the embodiment of the application can realize each process realized by the embodiment of the method and realize the same technical effect, and in order to avoid repetition, the description is omitted here.
Optionally, as shown in fig. 13, the embodiment of the present application further provides an electronic device 100, including a processor 101, a memory 102, and a program or an instruction stored in the memory 102 and capable of being executed on the processor 101, where the program or the instruction implements each step of any one of the video processing method embodiments described above when executed by the processor 101, and the steps achieve the same technical effects, and for avoiding repetition, a description is omitted herein.
The electronic device of the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 14 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: radio frequency unit 1001, network module 1002, audio output unit 1003, input unit 1004, sensor 1005, display unit 1006, user input unit 1007, interface unit 1008, memory 1009, processor 1010, camera 1011, and the like.
Those skilled in the art will appreciate that the electronic device 1000 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1010 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 14 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
Wherein, the display unit 1006 is configured to display a first video and a second video; wherein the first video and the second video are at least partially identical; a processor 1010 configured to determine a target duration according to a first duration of the first video and a second duration of the second video; a display unit 1006, configured to display a time axis with a length being the target duration according to the target duration; wherein the starting point of the time axis is aligned with the starting position of the first video and the starting position of the second video; a user input unit 1007 for receiving a first input of a first period on the time axis; the display unit 1006 is further configured to display first labeling information in response to the first input, where the first labeling information is input by the first input, and the first labeling information corresponds to the first period on the time axis.
In the embodiment of the application, in the scene of the first video and the second video with the same content in the comparison part, the user for video comparison triggers synchronous display of the first video and the second video in a preset input mode, meanwhile, based on the duration of the two videos, a time axis is displayed, and the starting positions of the two videos are aligned with the starting point of the time axis, so that the two videos can share the time axis. Further, the user can perform first input on the time axis, and on one hand, a certain time period, such as a first time period, is determined through the first input, and on the other hand, contents are input according to some differences of the two videos in the first time period, so that the effect that first annotation information is correspondingly displayed in the first time period on the time axis is finally achieved. Therefore, in the embodiment of the application, in the scene of comparing multiple videos, synchronous display of the multiple videos is supported, and meanwhile, manual annotation of differences on a time axis is also supported, so that the differences between the multiple videos are clear at a glance for a user, the user does not need to switch back and forth between the multiple videos, and the purpose of simplifying user operation is achieved.
Optionally, the first input includes a first sub-input and a second sub-input; a processor 1010 further configured to adjust a first slider on the timeline to a first position and a second slider on the timeline to a second position in response to the first sub-input; wherein a period intermediate the first position and the second position is the first period; the display unit 1006 is further configured to display a first marker color at the first period on the time axis in response to the second sub-input.
Optionally, the first input further comprises a third sub-input and a fourth sub-input; the display unit 1006 is further configured to display a first annotation frame at the first period on the time axis in response to the third sub-input; and to display first annotation content in the first annotation box in response to the fourth sub-input.
Optionally, the display unit 1006 is further configured to display first information when the first video and the second video are different based on a first parameter between a first video frame and a second video frame that respectively correspond to the same time; wherein the first information includes at least one of: the type of the first parameter, first content for describing the difference of the first parameter, the number of times the difference of the first parameter occurs, and first time for representing each occurrence of the difference of the first parameter.
Optionally, the display unit 1006 is further configured to display at least one first control; wherein one of the first controls is used for indicating one of the first moments; a user input unit 1007 also for receiving a second input to said first control; the display unit 1006 is further configured to display, in response to the second input, a video frame of the first video and the second video at the first moment indicated by the first control.
In summary, the video processing method provided by the application is used for meeting the requirements of multiple users on multi-video examination and annotation. In the application, firstly, the fine annotation of multiple videos is realized by expanding the interactive operations of the segment selection, the segment marking and the segment annotation of a time axis; secondly, automatically analyzing video content by combining background computing power to provide decision-making auxiliary information for a user; thirdly, through designing the appearance of the interactive interface and the interactive input step, the user operation experience is improved. Therefore, the video editing function is expanded, and the user experience of multi-user collaborative video creation editing is improved.
It should be understood that in the embodiments of the present application, the input unit 1004 may include a graphics processing unit (Graphics Processing Unit, GPU) 10041 and a microphone 10042, and the graphics processor 10041 processes image data of a still picture or a video image obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 may include two portions, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1009 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 1010 may integrate an application processor, which primarily handles the operating system, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 1010.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system, application programs or instructions required for at least one function (such as a sound playing function and an image playing function), and the like. Further, the memory 1009 may include volatile memory or non-volatile memory, or the memory 1009 may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be random access memory (Random Access Memory, RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), or direct Rambus RAM (DRRAM). The memory 1009 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, and the like, and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 1010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, where the program or the instruction realizes each process of the embodiment of the video processing method when executed by a processor, and the same technical effect can be achieved, so that repetition is avoided, and no detailed description is given here.
The processor is a processor in the electronic device in the above embodiment. Readable storage media include computer readable storage media such as computer readable memory ROM, random access memory RAM, magnetic or optical disks, and the like.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, the processor is used for running a program or instructions, each process of the embodiment of the video processing method can be realized, the same technical effect can be achieved, and in order to avoid repetition, the description is omitted here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a chip system, a system-on-a-chip, or the like.
The embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the embodiments of the video processing method, and achieve the same technical effects, and are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solutions of the present application may be embodied essentially or in a part contributing to the prior art in the form of a computer software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods of the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are also within the protection of the present application.

Claims (12)

1. A method of video processing, the method comprising:
displaying a first video and a second video; wherein the first video and the second video are at least partially identical;
determining a target duration according to the first duration of the first video and the second duration of the second video;
displaying a time axis with the length being the target time length according to the target time length; wherein the starting point of the time axis is aligned with the starting position of the first video and the starting position of the second video;
receiving a first input for a first period of time on the timeline;
and responding to the first input, displaying first annotation information, wherein the first annotation information is input content of the first input, and corresponds to the first time period on the time axis.
2. The method of claim 1, wherein the first input comprises a first sub-input and a second sub-input;
the displaying, in response to the first input, first annotation information, comprising:
in response to the first sub-input, adjusting a first slider on the timeline to a first position, and adjusting a second slider on the timeline to a second position; wherein a period intermediate the first position and the second position is the first period;
and displaying a first marker color at the first time period on the time axis in response to the second sub-input.
3. The method of claim 2, wherein the first input further comprises a third sub-input and a fourth sub-input;
the displaying, in response to the first input, first annotation information, further comprising:
displaying a first annotation box at the first time period on the timeline in response to the third sub-input;
and displaying first annotation content in the first annotation box in response to the fourth sub-input.
4. The method of claim 1, further comprising, after the displaying the first video and the second video:
displaying first information when the first video and the second video are different based on first parameters between first video frames and second video frames respectively corresponding to the same time;
wherein the first information includes at least one of: the type of the first parameter, first content for describing the difference of the first parameter, the number of times the difference of the first parameter occurs, and first time for representing each occurrence of the difference of the first parameter.
5. The method of claim 4, further comprising, after said displaying the first information:
displaying at least one first control; wherein one of the first controls is used for indicating one of the first moments;
receiving a second input to the first control;
and responding to the second input, and displaying video frames of the first video and the second video at the first moment indicated by the first control.
6. A video processing apparatus, the apparatus comprising:
the first display module is used for displaying a first video and a second video; wherein the first video and the second video are at least partially identical;
the determining module is used for determining a target duration according to the first duration of the first video and the second duration of the second video;
the second display module is used for displaying a time axis with the length being the target time length according to the target time length; wherein the starting point of the time axis is aligned with the starting position of the first video and the starting position of the second video;
a first receiving module for receiving a first input for a first period of time on the time axis;
and the third display module is used for responding to the first input and displaying first annotation information, the first annotation information is the content input by the first input, and the first annotation information corresponds to the first period on the time axis.
7. The apparatus of claim 6, wherein the first input comprises a first sub-input and a second sub-input;
the third display module includes:
an adjustment unit for adjusting a first slider on the time axis to a first position and a second slider on the time axis to a second position in response to the first sub-input; wherein a period intermediate the first position and the second position is the first period;
a first display unit for displaying a first marker color at the first period on the time axis in response to the second sub-input.
8. The apparatus of claim 7, wherein the first input further comprises a third sub-input and a fourth sub-input;
the third display module further includes:
a second display unit configured to display a first annotation frame at the first period on the time axis in response to the third sub-input;
and the third display unit is used for responding to the fourth sub-input and displaying the first annotation content in the first annotation frame.
9. The apparatus of claim 6, wherein the apparatus further comprises:
a fourth display module, configured to display first information when the first video and the second video are different based on a first parameter between a first video frame and a second video frame respectively corresponding to the same time;
wherein the first information includes at least one of: a type of the first parameter, first content describing the first parameter difference, the number of times the first parameter difference occurs, and a first moment representing each occurrence of the first parameter difference.
10. The apparatus of claim 9, wherein the apparatus further comprises:
the fifth display module is used for displaying at least one first control; wherein one of the first controls is used for indicating one of the first moments;
the second receiving module is used for receiving a second input to the first control;
and the sixth display module is used for responding to the second input and displaying the video frames of the first video and the second video at the first moment indicated by the first control.
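Claims 9 and 10 describe surfacing a parameter difference between frames of the two videos at the same moment, together with its type, a count, and the moments of occurrence. A hedged sketch of that comparison follows; the per-frame metadata shape (timestamp mapped to a parameter dictionary) and the function name are assumptions, since the patent does not define a data format.

```python
def find_parameter_differences(frames_a: dict, frames_b: dict,
                               param: str, tol: float = 0.0) -> dict:
    """Compare a per-frame parameter of two videos at matching
    timestamps and report each moment at which the values differ.

    frames_a / frames_b: {timestamp: {parameter_name: value}}
    (hypothetical shape, for illustration only).
    """
    moments = []
    # Only timestamps present in both videos can be compared.
    for t in sorted(set(frames_a) & set(frames_b)):
        va = frames_a[t].get(param)
        vb = frames_b[t].get(param)
        if va is not None and vb is not None and abs(va - vb) > tol:
            moments.append((t, va, vb))
    return {
        "parameter": param,            # type of the first parameter
        "count": len(moments),         # number of occurrences
        "moments": moments,            # first moments of each occurrence
    }

frames_a = {0.0: {"brightness": 0.5}, 1.0: {"brightness": 0.7}}
frames_b = {0.0: {"brightness": 0.5}, 1.0: {"brightness": 0.9}}
info = find_parameter_differences(frames_a, frames_b, "brightness")
```

Each reported moment could then back one of the "first controls" of claim 10, jumping the display to the two differing frames.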
11. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the video processing method of any one of claims 1 to 5.
12. A readable storage medium, characterized in that the readable storage medium has stored thereon a program or instructions which, when executed by a processor, implement the steps of the video processing method according to any of claims 1 to 5.
CN202311750548.3A 2023-12-18 2023-12-18 Video processing method, device, electronic equipment and readable storage medium Pending CN117714762A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311750548.3A CN117714762A (en) 2023-12-18 2023-12-18 Video processing method, device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN117714762A true CN117714762A (en) 2024-03-15

Family

ID=90149491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311750548.3A Pending CN117714762A (en) 2023-12-18 2023-12-18 Video processing method, device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN117714762A (en)

Similar Documents

Publication Publication Date Title
CN106776514B (en) Annotating method and device
WO2017211072A1 (en) Slide playback control method and apparatus
CN112672061B (en) Video shooting method and device, electronic equipment and medium
WO2023030306A1 (en) Method and apparatus for video editing, and electronic device
CN113918522A (en) File generation method and device and electronic equipment
CN113259743A (en) Video playing method and device and electronic equipment
WO2023093809A1 (en) File editing processing method and apparatus, and electronic device
CN113852757B (en) Video processing method, device, equipment and storage medium
CN114679546A (en) Display method and device, electronic equipment and readable storage medium
CN115344159A (en) File processing method and device, electronic equipment and readable storage medium
CN117714762A (en) Video processing method, device, electronic equipment and readable storage medium
CN115437736A (en) Method and device for recording notes
CN112929494B (en) Information processing method, information processing apparatus, information processing medium, and electronic device
CN113283220A (en) Note recording method, device and equipment and readable storage medium
CN113923392A (en) Video recording method, video recording device and electronic equipment
CN114302009A (en) Video processing method, video processing device, electronic equipment and medium
CN113806570A (en) Image generation method and generation device, electronic device and storage medium
CN114518824A (en) Note recording method and device and electronic equipment
CN114245193A (en) Display control method and device and electronic equipment
CN113805709A (en) Information input method and device
CN113726953B (en) Display content acquisition method and device
CN114860122A (en) Application program control method and device
CN117294900A (en) Video playing method and device, electronic equipment and readable storage medium
CN114519859A (en) Text recognition method, text recognition device, electronic equipment and medium
CN117311885A (en) Picture viewing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination