CN115665355A - Video processing method and device, electronic equipment and readable storage medium

Video processing method and device, electronic equipment and readable storage medium

Info

Publication number
CN115665355A
CN115665355A
Authority
CN
China
Prior art keywords
video
label
input
playing
user
Prior art date
Legal status
Pending
Application number
CN202211257771.XA
Other languages
Chinese (zh)
Inventor
覃致维
耿立新
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202211257771.XA
Publication of CN115665355A

Abstract

The application discloses a video processing method, a video processing device, electronic equipment and a readable storage medium, and belongs to the field of video processing. The video processing method comprises: displaying a first interface, wherein the first interface comprises a first video marked with a first label; receiving a first input to the first video; and in response to the first input, playing the first video with a first playing time point corresponding to the first label as the starting time.

Description

Video processing method and device, electronic equipment and readable storage medium
Technical Field
The application belongs to the technical field of video processing, and particularly relates to a video processing method and device, electronic equipment and a readable storage medium.
Background
At present, with the rapid development and popularization of mobile terminals, the cameras of mobile terminals are used more and more widely and frequently in daily life. Functions such as photographing and video recording based on the camera of a mobile terminal are also increasingly diversified, and more and more users like to record moments of daily life by taking photos and recording videos with the camera of a mobile terminal, such as a child's growth, their own study and work, the places they visit, or the progression of an event. However, for a plurality of recorded videos stored on a mobile terminal, when a user wants to locate a specific video segment, the user first needs to find the video that contains the segment and then watch that video from the beginning, searching for the segment frame by frame, which makes locating difficult and time-consuming.
Disclosure of Invention
An object of the embodiments of the present application is to provide a video processing method, an apparatus, an electronic device, and a readable storage medium, which can solve the problem that, when a user wants to locate a specific video segment among a plurality of recorded videos, the user has to search the videos frame by frame one by one, so that locating is difficult and time-consuming.
In a first aspect, an embodiment of the present application provides a video processing method, where the method includes:
displaying a first interface, wherein the first interface comprises a first video marked with a first label;
receiving a first input to the first video;
and in response to the first input, playing the first video with a first playing time point corresponding to the first label as the starting time.
In a second aspect, an embodiment of the present application provides an apparatus for video processing, where the apparatus includes:
the first display module is used for displaying a first interface, wherein the first interface comprises a first video marked with a first label;
a first receiving module for receiving a first input to the first video;
and the first playing module is used for playing, in response to the first input, the first video with a first playing time point corresponding to the first label as the starting time.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
In the embodiments of the present application, a user can mark, in advance, the starting video frame of a specific video segment in a first video with a first label, and the first video is displayed in a first interface, classified according to the labels marked by the user. When the user wants to locate the specific video segment, the user only needs to select, in the first interface, the first label corresponding to that segment and then select the first video corresponding to it; the first video is then played with the first playing time point corresponding to the first label as the starting time, so that the user can quickly locate the specific video segment to be watched. Locating is fast, and time and effort are saved.
Drawings
Fig. 1 is a schematic flowchart of a video processing method according to an embodiment of the present application;
FIG. 2 is a schematic view of a first interface provided by an embodiment of the present application;
fig. 3 is a schematic view of a second video recording interface provided in an embodiment of the present application;
FIG. 4 is a first schematic diagram of a video tag name modification interface provided in an embodiment of the present application;
FIG. 5 is a first schematic diagram of a video tag deletion interface provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of a video tag position modification interface provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a tag management interface provided by an embodiment of the application;
FIG. 8 is a schematic diagram of an add video tag interface provided by an embodiment of the present application;
FIG. 9 is a second schematic diagram of a video tag deletion interface provided in an embodiment of the present application;
FIG. 10 is a second schematic diagram of a video tag name modification interface provided by an embodiment of the present application;
fig. 11 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 13 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the data so used may be interchanged where appropriate, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. Moreover, the terms "first", "second" and the like usually denote one class of objects and do not limit their number; for example, a first object may be one object or more than one object.
The video processing method provided by the embodiments of the present application is described in detail below with reference to the accompanying drawings, through specific embodiments and application scenarios thereof.
Fig. 1 is a schematic flowchart of a video processing method according to an embodiment of the present application. The video processing method provided by the embodiments of the present application may be executed by an electronic device; the electronic device includes, but is not limited to, a smart phone, a tablet computer, a wearable device, and the like.
As shown in fig. 1 and fig. 2, the video processing method of this embodiment may include the following steps 101 to 103.
Step 101, displaying a first interface 201, wherein the first interface 201 comprises a first video 203 marked with a first label 202.
In this embodiment, after the electronic device receives a preset operation of the user, the first interface 201 may be displayed according to that operation, where the first interface 201 includes at least one label, and each label corresponds to at least one video.
In this embodiment, the preset operation may be the user writing a corresponding letter on the album APP interface, or a sliding operation on the album APP interface. This embodiment does not specifically limit this.
For example, when the electronic device opens the photo album APP, the user may write the letter "L" on the screen of the electronic device with a finger, and the electronic device may display the first interface 201 on the screen of the electronic device according to the received operation of the user writing the letter "L".
For another example, when the album APP is open on the electronic device, the user may slide a finger on the screen from the lower left corner to the upper right corner, and the electronic device displays the first interface 201 on the screen according to the received sliding operation.
In this embodiment, as shown in fig. 2, the first interface 201 may be a video tag classification management interface in the album APP, at least one tag in the first interface 201 may be a first tag 202, a second tag, a third tag, and a fourth tag, and at least one video corresponding to the first tag 202 may be a first video 203, a second video, a third video, a fourth video, a fifth video, and so on.
In this embodiment, when the first interface 201 includes a plurality of labels, the labels are sorted according to a preset sorting rule. The preset sorting rule may be sorting the labels by the number of times the user has clicked each label, from high to low; sorting them in the order of the first letters of their names; or sorting them by the time the user last clicked each label, from most recent to oldest. This embodiment does not specifically limit this, and the rule may be set by the user.
For example, when the first interface 201 has four labels, namely "Beijing travel", "Xi'an travel", "Yunnan travel" and "Chongqing travel", and the four labels are sorted in the order of the first letters of their names, the four labels in the first interface 201 are arranged from top to bottom in the order "Beijing travel", "Chongqing travel", "Xi'an travel" and "Yunnan travel".
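A minimal Kotlin sketch of the three sorting rules described above, assuming a hypothetical VideoTag class whose fields (clickCount, lastClickTimeMs) are named only for this illustration:

```kotlin
// Hypothetical tag model used only for this sketch; the field names are assumptions.
data class VideoTag(
    val name: String,
    val clickCount: Int,        // how many times the user has tapped this tag
    val lastClickTimeMs: Long   // timestamp of the most recent tap
)

// Sort by click count, from high to low.
fun sortByClickCount(tags: List<VideoTag>): List<VideoTag> =
    tags.sortedByDescending { it.clickCount }

// Sort in the order of the first letters of the tag names.
fun sortByName(tags: List<VideoTag>): List<VideoTag> =
    tags.sortedBy { it.name }

// Sort by click time, from most recent to oldest.
fun sortByLastClick(tags: List<VideoTag>): List<VideoTag> =
    tags.sortedByDescending { it.lastClickTimeMs }
```

Which of the three comparators is applied would simply be a user setting, matching the "may be set by the user" remark above.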
In this embodiment, in the case that one tag corresponds to a plurality of videos, the playing time point represented by the tag in each video may be the same or different.
For example, when the first label 202 is marked in both the first video 203 and the second video, the playing time point corresponding to the first label 202 in the first video 203 is three minutes and twenty seconds, and the playing time point corresponding to the first label 202 in the second video is also three minutes and twenty seconds, the videos corresponding to the first label 202 in the first interface 201 are the first video 203 and the second video, and the playing time points represented by the first label 202 in the first video 203 and the second video are the same.
For another example, when the first label 202 is marked in both the first video 203 and the second video, the playing time point corresponding to the first label 202 in the first video 203 is four minutes and thirty seconds, and the playing time point corresponding to the first label 202 in the second video is three minutes and twenty seconds, the videos corresponding to the first label 202 in the first interface 201 are the first video 203 and the second video, and the playing time points represented by the first label 202 in the first video 203 and the second video are different.
In this embodiment, as shown in fig. 2, when the first interface 201 is displayed, the first tag 202 is in the expanded state, and the plurality of videos corresponding to the first tag 202 are displayed below the first tag 202 in a target form; in the target form, each video uses, as its cover, the video frame corresponding to the playing time point represented by the first tag 202 in that video, and that playing time point is displayed on the cover.
For example, the first tag 202 may be "Xi'an food", and the plurality of videos corresponding to the first tag 202 are a first video 203, a second video and a third video. The playing time point represented by the first tag 202 in the first video 203 is four minutes and thirty seconds, and the content of the corresponding video frame is a bowl of mutton paomo; the playing time point represented by the first tag 202 in the second video is three minutes and twenty seconds, and the content of the corresponding video frame is a roujiamo (Chinese hamburger); the playing time point represented by the first tag 202 in the third video is five minutes and forty seconds, and the content of the corresponding video frame is a bowl of oil-splashed noodles. Therefore, when the first interface 201 is displayed, the first video 203 is displayed below the first tag 202 with the mutton paomo video frame as its cover, and "4'30" is displayed on the cover; the second video is displayed below the first tag 202 with the roujiamo video frame as its cover, and "3'20" is displayed on the cover; the third video is displayed below the first tag 202 with the oil-splashed-noodles video frame as its cover, and "5'40" is displayed on the cover.
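As a sketch of how such a cover could be produced, assuming Android's standard MediaMetadataRetriever API: the frame at the tag's playing time point is extracted as the cover, and the time point is formatted as the m'ss label shown in the example (e.g. "4'30"). The path and time values are placeholders.

```kotlin
import android.graphics.Bitmap
import android.media.MediaMetadataRetriever

// Format a playing time point in milliseconds as the m'ss label shown on the cover,
// e.g. 270_000 ms -> "4'30".
fun formatTagTime(timeMs: Long): String {
    val totalSeconds = timeMs / 1000
    return "${totalSeconds / 60}'${"%02d".format(totalSeconds % 60)}"
}

// Extract the video frame at the tag's playing time point to use as the video's cover.
// videoPath is a placeholder for wherever the album stores the file.
fun coverFrameAt(videoPath: String, tagTimeMs: Long): Bitmap? {
    val retriever = MediaMetadataRetriever()
    return try {
        retriever.setDataSource(videoPath)
        // getFrameAtTime expects microseconds; OPTION_CLOSEST_SYNC keeps extraction fast.
        retriever.getFrameAtTime(tagTimeMs * 1000, MediaMetadataRetriever.OPTION_CLOSEST_SYNC)
    } finally {
        retriever.release()
    }
}
```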
Step 102, receiving a first input to the first video 203.
In this embodiment, the first input is used to play the first video 203 with the first play time point corresponding to the first tag 202 as the start time. The first input may be a click input of the target content by the user, or a voice instruction input by the user, or a specific gesture input by the user, and may be determined according to an actual use requirement in an actual application, which is not specifically limited in this embodiment.
The click input in this embodiment may be a single-click input, a double-click input, or a click input of any number of times, and may also be a long-press input or a short-press input. In practical applications, the number of clicks may be preset and the user operates according to the preset number, or the number may be set by the user according to the user's own operation habits; this embodiment does not specifically limit this.
For example, in this embodiment, the first input may be a single-click operation performed by the user on the target content in the first interface 201, where the target content is a first video 203 in the multiple videos corresponding to the first tab 202.
Step 103, responding to the first input, and playing the first video 203 with a first playing time point corresponding to the first tag 202 as a starting time.
In this embodiment, in a case where the first input is a single-click operation of the user on the first video 203 in the plurality of videos corresponding to the first tab 202 in the first interface 201, the first video 203 is played with the first play time point corresponding to the first tab 202 as a start time in response to the first input.
For example, the first label 202 is "Xi'an food" and is in the expanded state when the first interface 201 is displayed, and the first video 203, the second video and the third video corresponding to the first label 202 are displayed below it. When the user clicks the first video 203, the electronic device plays the first video 203 with the four-minutes-and-thirty-seconds point of the first video 203 as the starting time, that is, the first video 203 is played with the video frame corresponding to the bowl of mutton paomo as the starting video frame. In this way, when the user wants to watch the video segment about the mutton paomo, the segment can be located quickly, without the user having to search for it frame by frame among a plurality of video files, which is convenient and saves time and effort.
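A minimal sketch of this jump-to-tag playback, assuming Android's MediaPlayer API; in a real app the player would be bound to a display surface and prepared asynchronously, which is omitted here.

```kotlin
import android.media.MediaPlayer

// Play the first video starting from the first playing time point corresponding to the first label.
// videoPath and tagTimeMs are placeholders for this sketch.
fun playFromTag(videoPath: String, tagTimeMs: Int): MediaPlayer {
    val player = MediaPlayer()
    player.setDataSource(videoPath)
    player.prepare()          // synchronous prepare for brevity; prepareAsync() is the usual choice in UI code
    player.seekTo(tagTimeMs)  // jump to the tag's playing time point instead of starting at 0
    player.start()
    return player
}
```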
In the embodiment of the invention, a user can mark, in advance, the starting video frame of a specific video segment in a first video with a first label, and the first video is displayed in a first interface, classified according to the labels marked by the user. When the user wants to locate the specific video segment, the user selects, in the first interface, the first label corresponding to that segment and then selects the first video corresponding to it, and the segment is played with the first playing time point corresponding to the first label as the starting time, so that the user can quickly locate the specific video segment to be watched. Locating is fast, and time and effort are saved.
In the embodiment of the invention, one application scenario is short-video content creation. A creator often shoots a large amount of material when creating short-video content and then searches and screens this material at a later stage; during the search, the creator usually has to open each video and pick out the required segments while watching it, which greatly affects efficiency. With the video processing method of this embodiment, the creator can mark the videos with labels, so that the videos can be filtered rapidly later and the desired video segments can be screened out, without opening a plurality of video files and screening from the beginning; the labels also let the creator jump each video directly to the marked segment, which greatly improves creation efficiency.
In one embodiment, before displaying the first interface, the video processing method further comprises: displaying a first video frame in the process of recording or playing a second video; receiving a second input to the first video frame; responding to the second input, adding the first label in the second video to obtain the first video; the first playing time point corresponding to the first label is the playing time of the first video frame.
In this embodiment, when the user records content of interest while recording the second video, or sees content of interest while playing the second video, the user may mark the starting video frame of that content, i.e. the first video frame, with the first label. The next time the user wants to view this content, the user can select the first label in the first interface and then select the first video corresponding to it, and the content of interest is played with the first playing time point corresponding to the first label as the starting time. The user can thus quickly locate the content of interest; locating is fast, and time and effort are saved.
In this embodiment, the second input is used to add a first tag to a first video frame in a second video during recording or playing of the second video. The second input may be a click input of the target control by the user, or a voice instruction input by the user, or a specific gesture input by the user, and may be determined according to an actual use requirement in an actual application, which is not specifically limited in this embodiment.
In this embodiment, the second input may be a user click input to the target control.
In this embodiment, as shown in fig. 3, during recording of the second video, the video recording interface 301 is provided with a first label control 302; the second input may be a click input by the user on the first label control 302, and in response to the second input, the first label is added to the first video frame of the second video to obtain the first video.
In this embodiment, during playing of the second video, the video playing interface is provided with a second label control; the second input may be a click input by the user on the second label control, and in response to the second input, the first label is added to the first video frame of the second video to obtain the first video.
In this embodiment, the video recording interface 301 refers to the video recording interface of the camera APP of the electronic device, and the target object in the video recording interface 301 may be a person, an animal or an object appearing in the video recording preview picture. After the user enters the camera APP but before the video recording function is started, the content displayed on the video recording interface 301 is the video recording preview picture.
In this embodiment, the first tag is a tag whose name identifies the content of the first video frame. The first tag may be a tag customized by the user, or a tag recognized by the electronic device with an AI algorithm, for example a birthday tag after a cake is detected, a building or place tag after a landmark is recognized, or a food tag after food is recognized.
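A sketch of the last step only, assuming some recognition result is already available; the mapping from a detected content label to a default tag name below is illustrative and not a defined behavior of this application.

```kotlin
// The content-recognition step itself (detecting "cake", "building", "food", ...) is outside this sketch;
// this only shows turning a detected label into a default tag name, which the user can rename later.
fun defaultTagNameFor(detectedContent: String): String = when (detectedContent) {
    "cake" -> "Birthday"
    "building", "landmark" -> "Place"
    "food" -> "Food"
    else -> detectedContent  // fall back to the raw label
}
```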
In this embodiment, during recording or playing of the second video, in response to the second input, the user may mark the second video with the first tag at any recording time point or playing time point as needed. The user may mark the second video with a single first tag, or mark different video frames of the second video with different tags, i.e. mark the second video with a plurality of tags.
In this embodiment, after receiving the second input to the first video frame from the user, the electronic device may obtain the first video frame and its playing time point in the second video, and display the first tag in the first interface, classified according to the first video frame and its playing time point in the second video.
In this embodiment, after the first tag is added to the second video to obtain the first video, it is further necessary to determine whether the first interface already has the first tag: if yes, the first video is added below the first tag in the first interface; if not, the first tag is created in the first interface first, and the first video is then added below it.
For example, when the user visits the Giant Wild Goose Pagoda on the first day and shoots a video A about it, the user sets a first tag on the video frame of the Giant Wild Goose Pagoda in video A and names the tag "Xi'an travel". Before this, the user has never created a tag named "Xi'an travel", so no such tag exists in the first interface; therefore, a tag named "Xi'an travel" is created in the first interface, and video A is added below it. When the user visits the Terracotta Army the next day and shoots a video B about it, the user marks the video frame of the terracotta warriors in video B with a second tag, also named "Xi'an travel". This time a tag named "Xi'an travel" already exists in the first interface, so video B can be added directly below the "Xi'an travel" tag in the first interface.
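The "create the tag if it does not exist, then add the video below it" logic can be sketched in Kotlin as follows; the TagIndex and TaggedVideo names are assumptions made for this example.

```kotlin
// Hypothetical record of one tagged video: which file, and the playing time point the tag marks in it.
data class TaggedVideo(val videoPath: String, val tagTimeMs: Long)

class TagIndex {
    // Tag name -> videos shown below that tag in the first interface (insertion order preserved).
    private val groups = linkedMapOf<String, MutableList<TaggedVideo>>()

    // If the first interface already has the tag, append the video below it;
    // otherwise create the tag first and then append (the video A / video B example above).
    fun addVideoUnderTag(tagName: String, video: TaggedVideo) {
        groups.getOrPut(tagName) { mutableListOf() }.add(video)
    }

    fun videosUnder(tagName: String): List<TaggedVideo> = groups[tagName].orEmpty()
}
```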
In the embodiment of the invention, the first label can be marked on the first video frame of the second video during recording of the second video to obtain the first video, which avoids the user having to watch the second video again after recording in order to set the first label for it, saving time and being convenient. The first label can also be marked on the first video frame during playing of the second video to obtain the first video, so that the user can set the first label while watching the second video. This makes adding a label to the second video more intelligent and convenient, improves the efficiency of setting video labels, enriches the functions of the mobile terminal, and increases the market competitiveness of the mobile terminal.
In one embodiment, after adding the first tag to the second video in response to the second input and obtaining the first video, the video processing method further includes: and displaying the first label at a first position of the playing progress bar of the first video, wherein the first position is a playing position corresponding to the playing time of the first video frame.
In the embodiment of the invention, the first label is displayed at the first position on the playing progress bar of the first video, and the first position is the playing position corresponding to the playing time of the first video frame. This allows the user to see the first playing time point corresponding to the first label directly and intuitively while the first video is playing. When the user wants to watch the video clip corresponding to the first label, the user can quickly jump the first video from the current playing time point to the first playing time point and play that clip, which avoids searching for the clip frame by frame and is convenient to operate.
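A sketch of placing the label on the progress bar: the horizontal offset is simply proportional to the playing time of the first video frame relative to the video duration (the names and pixel units are assumptions for this example).

```kotlin
// Place the first label on the playing progress bar: the horizontal offset is proportional
// to the play time of the first video frame relative to the whole video duration.
fun tagMarkerOffsetPx(tagTimeMs: Long, videoDurationMs: Long, progressBarWidthPx: Int): Int {
    if (videoDurationMs <= 0) return 0
    val fraction = (tagTimeMs.toDouble() / videoDurationMs).coerceIn(0.0, 1.0)
    return (fraction * progressBarWidthPx).toInt()
}
```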
In one embodiment, after adding the first tag to the second video in response to the second input and obtaining the first video, the video processing method further includes: receiving a third input to the first tag; responding to the third input, and performing a first editing operation on the first label; the first editing operation comprises at least one of: deleting the first label; changing the name of the first tag; and changing the first playing time point corresponding to the first label.
In this embodiment, the third input is used to perform an editing operation on the first tag marked in the second video. The third input may be a click input of the target control by the user, or a voice instruction input by the user, or a specific gesture input by the user, and may be determined according to an actual use requirement in an actual application, which is not specifically limited in this embodiment.
In this embodiment, during recording of the second video, the first tag that has been marked can be deleted or changed according to the actual situation, so that the first tag marked for the first video frame better matches the content of the first video frame and better meets the user's needs.
In this embodiment, a third input from the user is received, and in response to the third input, the first tag of the second video is processed according to the content of the third input.
In this embodiment, the third input may be a click input on the first tag by the user, a long-press input on the first tag by the user, or a drag input on the first tag by the user. In practical applications, this may be determined according to actual use requirements, and this embodiment does not specifically limit it.
As shown in fig. 4, in this embodiment, when the third input is a click input by the user on the first tag 401, in response to the third input the electronic device displays a tag name change interface 402 and changes the name of the first tag 401 according to the modified tag name entered by the user. For example, by clicking the first tag 401, the user may directly modify its original name "butterfly" into "a good memory at a certain place at a certain time".
As shown in fig. 5, in this embodiment, when the third input is a long-press input on the first tab 501 by the user, the electronic device displays a tab deletion interface 502 and deletes the first tab 501. For example, when the video tab deletion interface 502 appears on the screen of the electronic device after the user long-presses the first tab 501, the user may tap to delete the first tab 501.
As shown in fig. 6, in this embodiment, when the third input is a drag input on the first tab 601 by the user, the user drags the first tab 601 to a new position on the playing progress bar of the second video, and the electronic device changes the first playing time point corresponding to the first tab 601 according to the position to which the first tab 601 has been dragged. For example, by long-pressing the first tab 601 the user may drag it from its original first position to a second position that the user wants to mark, and the electronic device modifies the playing time point information corresponding to the first position into that corresponding to the second position.
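The inverse mapping used when the label is dragged, sketched under the same assumptions as above: the drag position on the progress bar is converted back into the new playing time point stored for the label.

```kotlin
// Convert the horizontal drag position on the playing progress bar back into a playing time point.
fun playTimeForDragPosition(dragXPx: Int, progressBarWidthPx: Int, videoDurationMs: Long): Long {
    if (progressBarWidthPx <= 0) return 0
    val fraction = (dragXPx.toDouble() / progressBarWidthPx).coerceIn(0.0, 1.0)
    return (fraction * videoDurationMs).toLong()
}
```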
In the embodiment of the present invention, after the second video is recorded, the marked first label can be deleted, renamed, or have its corresponding first playing time point changed according to the actual situation, so that the first label marked for the first video frame of the second video better matches the content of the first video frame and better meets the user's needs.
In one embodiment, the video processing method further comprises: displaying a second interface in the video recording or playing process, wherein the second interface comprises a target control; receiving a fourth input to the target control; performing a second editing operation on the first video in response to the fourth input; the second editing operation comprises at least one of: adding a second label; deleting the added tags in the first video; changing the name of the added label in the first video; and changing the playing time point corresponding to the added label in the first video.
In this embodiment, as shown in fig. 7, during video playing, a second interface 701 is displayed, and the second interface 701 is provided with a label management control 702. Through the label management control 702 the user may add a second label to the first video, or manage the first label 703 and the second label already in the first video, so as to better meet the user's needs.
In this embodiment, the label management controls 702 include an add label control 7021, a delete label control 7022, and a modify label control 7023.
In the present embodiment, the fourth input is used to perform an editing operation on the first video. The fourth input may be a click input of the target control by the user, or a voice instruction input by the user, or a specific gesture input by the user, and may be determined according to an actual use requirement in an actual application, which is not specifically limited in this embodiment.
In this embodiment, as shown in fig. 8, when the fourth input is a click input by the user on the add tab control 7021, in response to the fourth input a second tab 704 whose position is movable is generated on the second interface 701; the user moves the second tab 704 onto the playing progress bar of the first video, to the playing time point of the video frame to which the user wants to add it, and the addition of the second tab 704 is thereby completed.
In this embodiment, the label management control 702 includes an add label control 7021. When the user, while watching the first video, notices content that the user did not pay attention to before but now finds interesting, the user can use the add label control 7021 to add a second label to the video frame of interest, so as to meet the user's needs.
In this embodiment, as shown in fig. 9, when the fourth input is a click input by the user on the delete tab control 7022, in response to the fourth input the first tab 703 and the second tab 704 that have been added to the first video are displayed, each with a delete control beside it, and based on the content of the fourth input the corresponding first tab 703 or second tab 704 is deleted.
For example, where the fourth input is a user click input to the delete tab control 7022 and a click input to the delete control next to the first tab 703, the electronic device deletes the first tab 703 that has been added to the first video.
For another example, as shown in fig. 9, in a case where the fourth input is a click input by the user to the delete tab control 7022 and a click input to the delete control next to the second tab 704, the electronic device deletes the second tab 704 that has been added to the first video.
In this embodiment, as shown in fig. 10, when the fourth input is a click input by the user on the modify label control 7023, in response to the fourth input the first label 703 and the second label 704 that have been added to the first video are displayed, each with a modify control beside it. Based on the content of the fourth input, the name of the corresponding first label 703 or second label 704 is modified, or its position on the playing progress bar of the first video is modified.
For example, in the case where the fourth input is a click input by the user to the modify tab control 7023 and a click input to the modify control beside the first tab 703, the name of the first tab 703 that has been added to the first video is modified.
For another example, as shown in fig. 10, in a case where the fourth input is a click input by the user to the modify tab control 7023 and a click input to the modify control beside the second tab 704, the name of the second tab 704 that has been added to the first video is modified.
For another example, when the fourth input is a click input by the user on the modify tab control 7023 and a drag input of the first tab 703 on the playing progress bar of the first video, the position of the first tab 703 on the playing progress bar is changed, so as to change the playing time point corresponding to the first tab 703 that has been added to the first video.
For another example, when the fourth input is a click input by the user on the modify tab control 7023 and a drag input of the second tab 704 on the playing progress bar of the first video, the position of the second tab 704 on the playing progress bar is changed, so as to change the playing time point corresponding to the second tab 704 that has been added to the first video.
In the embodiment of the invention, providing the target control makes it convenient for the user, while recording or playing a video, to add the second label to the first video through the target control and to manage the first label and the second label already added to the first video through the target control, so as to better meet the user's needs.
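The four second-editing operations described above can be sketched as one small edit type applied to a video's label list; the TagEntry/TagEdit names below are illustrative and not taken from this application.

```kotlin
// Hypothetical mutable tag entry; the real storage format is not specified in this application.
data class TagEntry(var name: String, var playTimeMs: Long)

// The four second-editing operations: add, delete, rename, and change the playing time point.
sealed class TagEdit {
    data class Add(val entry: TagEntry) : TagEdit()
    data class Delete(val name: String) : TagEdit()
    data class Rename(val name: String, val newName: String) : TagEdit()
    data class Move(val name: String, val newPlayTimeMs: Long) : TagEdit()
}

fun applyEdit(tags: MutableList<TagEntry>, edit: TagEdit) {
    when (edit) {
        is TagEdit.Add -> tags.add(edit.entry)
        is TagEdit.Delete -> tags.removeAll { it.name == edit.name }
        is TagEdit.Rename -> tags.find { it.name == edit.name }?.name = edit.newName
        is TagEdit.Move -> tags.find { it.name == edit.name }?.playTimeMs = edit.newPlayTimeMs
    }
}
```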
In one embodiment, after adding the first tag to the second video in response to the second input and obtaining the first video, the video processing method further includes: under the condition that the first video comprises one first label, the first video is sent to a target contact person, so that when the target contact person plays the first video, a playing time point corresponding to the first label is used as a starting time to play the first video; under the condition that the first video comprises a plurality of first labels, the first video is sent to a target contact person, so that when the target contact person plays the first video, a playing time point corresponding to a specified label in the first labels is used as a starting moment to play the first video.
In this embodiment, when a user wants to share a specific video segment of the first video with a target contact, a first tag has been added at the starting playing position of that segment, and the first video contains only this one first tag, the first video can be sent to the target contact directly. When the target contact plays the first video, the playing time point corresponding to the first tag is used as the starting time, so the specific video segment is played for the target contact directly from its starting position, and the target contact does not need to search for it frame by frame in the first video.
In this embodiment, when the user wants to share a specific video segment of the first video with a target contact, a first tag has been added at the starting playing position of that segment, and the first video contains a plurality of first tags, the user needs to specify in advance, before sharing the first video, which of the first tags is the designated tag. If the user has preset the designated tag, then when the target contact plays the first video, the playing time point corresponding to the designated tag is used as the starting time. If the user has not preset a designated tag, then when the target contact plays the first video, the first tag closest to the original start of the first video is taken as the designated tag by default, and the first video is played with the playing time point corresponding to that designated tag as the starting time.
In this embodiment, it should be noted that the specific process by which the user presets which of the plurality of first tags is the designated tag is as follows: when the user wants to share the first video with a target contact and the first video contains a plurality of first tags, a prompt box for setting the designated tag pops up on the screen of the electronic device; the user sets the corresponding first tag as the designated tag in the prompt box according to the user's actual needs, and the first video is shared with the target contact after the designated tag is set.
In the embodiment of the invention, when the first video includes one first label, the first video is sent to the target contact, so that when the target contact plays the first video, the playing time point corresponding to the first label is used as the starting time; when the first video includes a plurality of first labels, the first video is sent to the target contact, so that when the target contact plays the first video, the playing time point corresponding to the designated label among the plurality of first labels is used as the starting time. The target contact can thus directly watch the video clip that the user wants to share, and the effect in use is good.
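A sketch of the choice of starting point when sharing, under the assumptions above: the user-designated tag wins if one was set; otherwise the tag closest to the video's original start is used by default.

```kotlin
data class ShareTag(val name: String, val playTimeMs: Long)

// Decide where the target contact's playback should start.
fun shareStartTimeMs(tags: List<ShareTag>, designated: ShareTag? = null): Long = when {
    tags.isEmpty() -> 0L                        // no tags: play from the original start
    designated != null -> designated.playTimeMs // user preset a designated tag
    else -> tags.minOf { it.playTimeMs }        // default: the tag closest to the original start
}
```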
In one embodiment, after adding the first tag to the second video in response to the second input and obtaining the first video, the video processing method further includes:
receiving a fifth input to the first tag;
in response to the fifth input, taking the playing time point corresponding to the first label as a starting time point to intercept a video segment with a preset time length in the first video to obtain a target video segment;
and sending the target video segment to a target contact.
In this embodiment, the fifth input is used to intercept a video segment of preset duration from the first video, with the playing time point corresponding to the first tag as the starting time point, to obtain a target video segment. The fifth input may be a click input on the target control by the user, a voice instruction input by the user, or a specific gesture input by the user, and may be determined according to actual use requirements; this embodiment does not specifically limit this. The click input may be a single-click input, a double-click input, or a click input of any number of times, and may also be a long-press input or a short-press input; in practical applications, the number of clicks may be preset and the user operates according to the preset number, or the number may be set by the user according to the user's own operation habits, which this embodiment does not specifically limit.
In this embodiment, the fifth input may be a triple-click input on the first tag by the user; in response to the fifth input, a video segment of preset duration is intercepted from the first video with the playing time point corresponding to the first tag as the starting time point, to obtain the target video segment.
In this embodiment, the preset duration may be set by the user; this embodiment does not specifically limit this. For example, the preset duration may be five minutes.
In the embodiment of the invention, when the user only wants to share a certain video segment of the first video with a target contact, a fifth input by the user on the first label is received; in response to the fifth input, a video segment of preset duration is intercepted from the first video with the playing time point corresponding to the first label as the starting time point, to obtain the target video segment, and the target video segment is sent to the target contact. The user therefore does not need to manually cut the first video into the target video segment before sending it to the target contact, which is convenient and efficient.
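A sketch of computing the range of the target video segment; actually cutting the file (for example with Android's MediaExtractor/MediaMuxer) is a separate step not shown here, and the names below are assumptions.

```kotlin
// Compute the [start, end) range of the target video segment: it starts at the first label's
// playing time point and lasts the preset duration (five minutes in the example above),
// clamped to the length of the first video.
fun clipRangeMs(tagTimeMs: Long, presetDurationMs: Long, videoDurationMs: Long): LongRange {
    val start = tagTimeMs.coerceIn(0, videoDurationMs)
    val end = (start + presetDurationMs).coerceAtMost(videoDurationMs)
    return start until end
}
```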
In the video processing method provided by the embodiments of the present application, the execution subject may be a video processing apparatus. In the embodiments of the present application, a video processing apparatus executing the video processing method is taken as an example to describe the video processing apparatus provided by the embodiments of the present application.
Fig. 11 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application. As shown in fig. 11, the video processing apparatus according to the embodiment of the present application includes a first display module 1101, a first receiving module 1102, and a first playing module 1103.
A first display module 1101, configured to display a first interface, where the first interface includes a first video marked with a first tag; a first receiving module 1102 for receiving a first input to the first video; a first playing module 1103, configured to, in response to the first input, play the first video with a first playing time point corresponding to the first tag as a starting time.
In the embodiment of the invention, a user can mark, in advance, the starting video frame of a specific video segment in a first video with a first label, and the first video is displayed in a first interface, classified according to the labels marked by the user. When the user wants to locate the specific video segment, the user selects, in the first interface, the first label corresponding to that segment and then selects the first video corresponding to it, and the segment is played with the first playing time point corresponding to the first label as the starting time, so that the user can quickly locate the specific video segment to be watched. Locating is fast, and time and effort are saved.
In one embodiment, the video processing apparatus further comprises a second display module, a second receiving module, and a first adding module; the second display module is used for displaying the first video frame in the process of recording or playing the second video; a second receiving module for receiving a second input to the first video frame; a first adding module, configured to add the first tag to the second video in response to the second input, so as to obtain the first video; the first playing time point corresponding to the first label is the playing time of the first video frame.
In the embodiment of the invention, the first label can be marked on the first video frame of the second video during recording of the second video to obtain the first video, which avoids the user having to watch the second video again after recording in order to set the first label for it, saving time and being convenient. The first label can also be marked on the first video frame during playing of the second video to obtain the first video, so that the user can set the first label while watching the second video. This makes adding a label to the second video more intelligent and convenient, improves the efficiency of setting video labels, enriches the functions of the mobile terminal, and increases the market competitiveness of the mobile terminal.
In one embodiment, the video processing apparatus further comprises a third display module; and the third display module is used for displaying the first label at a first position of the playing progress bar of the first video, wherein the first position is a playing position corresponding to the playing time of the first video frame.
In the embodiment of the invention, the first label is displayed at the first position on the playing progress bar of the first video, and the first position is the playing position corresponding to the playing time of the first video frame. This allows the user to see the first playing time point corresponding to the first label directly and intuitively while the first video is playing. When the user wants to watch the video clip corresponding to the first label, the user can quickly jump the first video from the current playing time point to the first playing time point and play that clip, which avoids searching for the clip frame by frame and is convenient to operate.
In one embodiment, the video processing apparatus further comprises a third receiving module and a first editing module; a third receiving module for receiving a third input to the first tag; the first editing module is used for responding to the third input and carrying out first editing operation on the first label; the first editing operation comprises at least one of: deleting the first label; changing the name of the first tag; and changing the first playing time point corresponding to the first label.
In the embodiment of the invention, after the second video is recorded, the marked first label can be deleted, renamed, or have its corresponding first playing time point changed according to the actual situation, so that the first label marked for the first video frame of the second video better matches the content of the first video frame and better meets the user's needs.
In one embodiment, the video processing apparatus further comprises a fourth display module, a fourth receiving module, and a second editing module; the fourth display module is used for displaying the second interface in the video recording or playing process; a fourth receiving module, configured to receive a fourth input to the target control; the second editing module is used for responding to the fourth input and performing second editing operation on the first video; the second editing operation comprises at least one of: adding a second label; deleting the added tags in the first video; changing the name of the added label in the first video; and changing the playing time point corresponding to the added label in the first video.
In the embodiment of the invention, providing the target control makes it convenient for the user, while recording or playing a video, to add the second label to the first video through the target control and to manage the first label and the second label already added to the first video through the target control, so as to better meet the user's needs.
In one embodiment, the video processing apparatus further comprises a first sending module, a second playing module, a second sending module and a third playing module; the first sending module is used for sending the first video to a target contact person under the condition that the first video comprises one first label; the second playing module is used for playing the first video by taking the playing time point corresponding to the first label as an initial moment when the first video is played by the target contact person; the second sending module is used for sending the first video to a target contact person under the condition that the first video comprises a plurality of first labels; and the third playing module is used for playing the first video by taking a playing time point corresponding to a specified label in the plurality of first labels as an initial moment when the target contact plays the first video.
In the embodiment of the invention, when the first video includes one first label, the first video is sent to the target contact, so that when the target contact plays the first video, the playing time point corresponding to the first label is used as the starting time; when the first video includes a plurality of first labels, the first video is sent to the target contact, so that when the target contact plays the first video, the playing time point corresponding to the designated label among the plurality of first labels is used as the starting time. The target contact can thus directly watch the video clip that the user wants to share, and the effect in use is good.
In one embodiment, the video processing apparatus further comprises a fifth receiving module, a clipping module, and a third sending module; a fifth receiving module, configured to receive a fifth input to the first tag; the intercepting module is used for responding to the fifth input, and intercepting a video segment with preset time duration in the first video by taking the playing time point corresponding to the first label as a starting time point to obtain a target video segment; and the third sending module is used for sending the target video segment to a target contact person.
In the embodiment of the invention, when the user only wants to share a certain video segment of the first video with a target contact, a fifth input by the user on the first label is received; in response to the fifth input, a video segment of preset duration is intercepted from the first video with the playing time point corresponding to the first label as the starting time point, to obtain the target video segment, and the target video segment is sent to the target contact. The user therefore does not need to manually cut the first video into the target video segment before sending it to the target contact, which is convenient and efficient.
The video processing apparatus in the embodiments of the present application may be an electronic device, or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an Ultra-Mobile Personal Computer (UMPC), a netbook or a Personal Digital Assistant (PDA), or may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, a self-service kiosk, or the like, and the embodiments of the present application are not specifically limited.
The video processing apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The video processing apparatus provided in the embodiment of the present application can implement each process implemented in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 12, an electronic device 1200 is further provided in an embodiment of the present application, and includes a processor 1201 and a memory 1202, where the memory 1202 stores a program or an instruction that can be executed on the processor 1201, and when the program or the instruction is executed by the processor 1201, the steps of the embodiment of the video processing method are implemented, and the same technical effect can be achieved, and are not described again here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 13 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 1300 includes, but is not limited to: a radio frequency unit 1301, a network module 1302, an audio output unit 1303, an input unit 1304, a sensor 1305, a display unit 1306, a user input unit 1307, an interface unit 1308, a memory 1309, a processor 1310, and the like.
Those skilled in the art will appreciate that the electronic device 1300 may further comprise a power source (such as a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1310 through a power management system, so as to manage charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 13 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, which is not described in detail here.
The display unit 1306 is configured to display a first interface, where the first interface includes a first video marked with a first label; the user input unit 1307 is configured to receive a first input to the first video; the processor 1310 is configured to play the first video with a first play time point corresponding to the first label as a start time in response to the first input.
In the embodiment of the invention, the user can mark the initial video frame of a specific video segment in the first video in advance with a first label, and the first video is classified and displayed in the first interface according to the first label marked by the user. When the user wants to locate the specific video segment, the user selects, in the first interface, the first label corresponding to that segment and then selects the corresponding first video, and the first video is played with the first playing time point corresponding to the first label as the starting point. The user can thus quickly locate the specific video segment to be watched; the locating speed is high, and time and labor are saved.
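To make this concrete, the following is a minimal Kotlin sketch of tag-based playback, assuming an Android-style MediaPlayer; the VideoTag and TaggedVideo types and the startFromTag function are illustrative names chosen for this sketch, not terms defined by the patent.

```kotlin
// Illustrative data model: a label and the play time point it marks.
data class VideoTag(
    val name: String,        // label text shown in the first interface
    val positionMs: Long     // first playing time point corresponding to the label
)

// A recorded video together with the labels marked on it.
data class TaggedVideo(
    val uri: String,
    val tags: MutableList<VideoTag> = mutableListOf()
)

// Play the first video starting from the play time point of the selected label
// instead of from the first frame.
fun startFromTag(player: android.media.MediaPlayer, video: TaggedVideo, tag: VideoTag) {
    player.reset()
    player.setDataSource(video.uri)
    player.prepare()                       // synchronous prepare, for brevity
    player.seekTo(tag.positionMs.toInt())  // jump to the tag's play time point
    player.start()
}
```

Playback then begins at the marked segment without the user scanning the video frame by frame.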
Optionally, the display unit 1306 is further configured to display a first video frame during recording or playing of the second video; the user input unit 1307 is further configured to receive a second input to the first video frame; the processor 1310 is further configured to add the first tag to the second video in response to the second input, to obtain the first video; the first playing time point corresponding to the first tag is the play time of the first video frame.
In the embodiment of the invention, the first label can be marked on the first video frame during recording of the second video to obtain the first video, so the user does not need to watch the second video again after recording in order to set the first label, which saves time and is convenient. The first label can also be marked on the first video frame during playing of the second video to obtain the first video, so the user can set the first label while watching the second video. This makes the operation of adding a label to the second video more intelligent and convenient, improves the efficiency of setting video labels, enriches the functions of the mobile terminal, and increases its market competitiveness.
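As a rough sketch (reusing the VideoTag and TaggedVideo types from the earlier example), a label can be added at the current position while the second video is playing; during recording, the elapsed recording time would supply the timestamp instead of MediaPlayer.currentPosition. The function name below is an assumption for illustration.

```kotlin
// Mark the first label on the currently displayed video frame while the
// second video is playing; the label records that frame's play time.
fun addTagAtCurrentPosition(player: android.media.MediaPlayer, video: TaggedVideo, name: String) {
    val positionMs = player.currentPosition.toLong()  // play time of the marked first video frame
    video.tags.add(VideoTag(name, positionMs))
    video.tags.sortBy { it.positionMs }               // keep labels ordered along the timeline
}
```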
Optionally, the display unit 1306 is further configured to display the first tag at a first position of a play progress bar of the first video, where the first position is a play position corresponding to a play time of the first video frame.
In the embodiment of the present invention, the first tag is displayed at the first position of the play progress bar of the first video, where the first position is the play position corresponding to the play time of the first video frame. This lets the user intuitively see the first playing time point corresponding to the first tag while the first video is playing. When the user wants to view the video clip corresponding to the first tag, the first video can be quickly jumped from the current playing time point to the first playing time point and the desired clip played, so the user does not have to search for it frame by frame, and the operation is convenient.
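A possible way to place the label marker on the progress bar and to jump to it is sketched below, again reusing the earlier types; durationMs and barWidthPx are assumed inputs rather than values defined by the patent.

```kotlin
// Horizontal pixel offset of the label marker on the play progress bar,
// proportional to the tag's play time within the total video duration.
fun tagMarkerX(tag: VideoTag, durationMs: Long, barWidthPx: Int): Int =
    ((tag.positionMs.toDouble() / durationMs) * barWidthPx).toInt()

// Tapping the marker jumps playback from the current play time point to the
// first playing time point corresponding to the label.
fun jumpToTag(player: android.media.MediaPlayer, tag: VideoTag) {
    player.seekTo(tag.positionMs.toInt())
}
```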
Optionally, the user input unit 1307 is further configured to receive a third input to the first tag; processor 1310 is further configured to perform a first editing operation on the first tag in response to the third input; the first editing operation comprises at least one of: deleting the first label; changing the name of the first tag; and changing the first playing time point corresponding to the first label.
In the embodiment of the invention, after the second video is recorded, the marked first label can be deleted, renamed, or have its corresponding first playing time point changed according to the actual situation, so that the content of the first label marked on the first video frame of the second video better matches the content of that video frame and better meets the user's requirements.
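The first editing operation could look like the following sketch, built with illustrative helpers over the earlier TaggedVideo type: deleting the label, renaming it, or moving its play time point.

```kotlin
// Delete the first label from the video's label list.
fun deleteTag(video: TaggedVideo, tag: VideoTag) {
    video.tags.remove(tag)
}

// Change the name of the first label.
fun renameTag(video: TaggedVideo, tag: VideoTag, newName: String) {
    val i = video.tags.indexOf(tag)
    if (i >= 0) video.tags[i] = tag.copy(name = newName)
}

// Change the first playing time point corresponding to the first label.
fun moveTag(video: TaggedVideo, tag: VideoTag, newPositionMs: Long) {
    val i = video.tags.indexOf(tag)
    if (i >= 0) {
        video.tags[i] = tag.copy(positionMs = newPositionMs)
        video.tags.sortBy { it.positionMs }   // keep markers ordered after the move
    }
}
```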
Optionally, the display unit 1306 is further configured to display a second interface during video recording or playing, where the second interface includes a target control; the user input unit 1307 is further configured to receive a fourth input to the target control; the processor 1310 is further configured to perform a second editing operation on the first video in response to the fourth input; the second editing operation comprises at least one of: adding a second label; deleting an added label in the first video; changing the name of an added label in the first video; and changing the playing time point corresponding to an added label in the first video.
In the embodiment of the invention, providing the target control makes it convenient for the user, while recording or playing a video, to add the second label to the first video through the target control and to manage the first label and the second label already added to the first video through the target control, thereby better meeting the user's requirements.
Optionally, the processor 1310 is further configured to, when the first video includes one first tag, send the first video to a target contact, and when the target contact plays the first video, play the first video with a play time point corresponding to the first tag as a start time; the processor 1310 is further configured to, if the first video includes a plurality of first tags, send the first video to a target contact, and play the first video with a play time point corresponding to a specified tag in the plurality of first tags as a start time when the target contact plays the first video.
In the embodiment of the invention, when the first video includes one first label, the first video is sent to a target contact so that, when the target contact plays the first video, playback starts from the playing time point corresponding to that first label. When the first video includes a plurality of first labels, the first video is sent to the target contact so that playback starts from the playing time point corresponding to a specified label among the plurality of first labels. In either case, the target contact can directly watch the video clip that the user wants to share, which gives a good viewing experience.
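The patent text does not specify how the starting label is conveyed to the receiving side; one hedged possibility is to attach the chosen label's play time point as sidecar metadata when the first video is sent, so the receiver's player can seek there before starting. The JSON layout below is purely an assumption for illustration.

```kotlin
import org.json.JSONObject

// Build sidecar metadata naming the designated label's play time point;
// the receiving player would parse startMs and call seekTo(startMs) before start().
fun buildShareMetadata(video: TaggedVideo, designated: VideoTag): String =
    JSONObject()
        .put("videoUri", video.uri)
        .put("startMs", designated.positionMs)
        .toString()
```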
Optionally, the user input unit 1307 is further configured to receive a fifth input to the first tag; the processor 1310 is further configured to, in response to the fifth input, capture a video segment of a preset duration from the first video, starting from the playing time point corresponding to the first tag, to obtain a target video segment; and the processor 1310 is further configured to send the target video segment to a target contact.
In the embodiment of the invention, when the user only wants to share a particular video segment of the first video with a target contact, a fifth input performed by the user on the first label is received. In response to the fifth input, a video segment of a preset duration is captured from the first video, starting from the playing time point corresponding to the first label, to obtain a target video segment, and the target video segment is sent to the target contact. The user therefore does not need to cut the first video manually to obtain the target video segment before sending it to the target contact.
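A minimal sketch of this segment capture, assuming the clip window is the tag's play time point plus a preset duration clamped to the video length; trimClip and sendToContact are hypothetical placeholders standing in for real trimming (e.g. a MediaMuxer or ffmpeg wrapper) and sending code, and are not defined by the patent.

```kotlin
// Hypothetical stand-in for a real trimming routine; here it only labels the window.
fun trimClip(uri: String, startMs: Long, endMs: Long): String = "$uri#t=$startMs,$endMs"

// Hypothetical stand-in for sending the clip to the target contact.
fun sendToContact(clipUri: String) = println("sending $clipUri to target contact")

// Capture the target video segment starting at the first label and share it.
fun shareSegment(video: TaggedVideo, tag: VideoTag, presetDurationMs: Long, videoDurationMs: Long) {
    val startMs = tag.positionMs
    val endMs = minOf(startMs + presetDurationMs, videoDurationMs)  // clamp to the video end
    sendToContact(trimClip(video.uri, startMs, endMs))
}
```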
It should be understood that, in the embodiment of the present application, the input unit 1304 may include a graphics processing unit (GPU) 13041 and a microphone 13042; the graphics processor 13041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 1306 may include a display panel 13061, and the display panel 13061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1307 includes a touch panel 13071 and at least one of other input devices 13072. The touch panel 13071, also known as a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 13072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here.
The memory 1309 may be used to store software programs as well as various data. The memory 1309 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.), and the like. Further, the memory 1309 may include volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchlink DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 1309 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 1310 may include one or more processing units; optionally, the processor 1310 integrates an application processor, which mainly handles operations related to the operating system, user interface, application programs, and the like, and a modem processor (such as a baseband processor), which mainly handles wireless communication signals.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the video processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above video processing method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as a system-on-chip, or a system-on-chip.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing video processing method embodiments, and can achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved; for example, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A video processing method, comprising:
displaying a first interface, wherein the first interface comprises a first video marked with a first label;
receiving a first input to the first video;
and responding to the first input, and playing the first video by taking a first playing time point corresponding to the first label as a starting time.
2. The method of claim 1, wherein prior to displaying the first interface, the method further comprises:
displaying a first video frame in the process of recording or playing a second video;
receiving a second input to the first video frame;
responding to the second input, adding the first label in the second video to obtain the first video;
the first playing time point corresponding to the first label is the playing time of the first video frame.
3. The method of claim 2, wherein after the adding the first tag to the second video in response to the second input, resulting in the first video, the method further comprises:
and displaying the first label at a first position of the playing progress bar of the first video, wherein the first position is a playing position corresponding to the playing time of the first video frame.
4. The method of claim 2, wherein after the adding the first tag in the second video in response to the second input, resulting in the first video, the method further comprises:
receiving a third input to the first tag;
responding to the third input, and performing a first editing operation on the first label;
the first editing operation comprises at least one of:
deleting the first label;
changing the name of the first tag;
and changing the first playing time point corresponding to the first label.
5. The method of claim 2, further comprising:
displaying a second interface in the video recording or playing process, wherein the second interface comprises a target control;
receiving a fourth input to the target control;
performing a second editing operation on the first video in response to the fourth input;
the second editing operation comprises at least one of:
adding a second label;
deleting the added tags in the first video;
changing the name of the added label in the first video;
and changing the playing time point corresponding to the added label in the first video.
6. The method of claim 2, wherein after the adding the first tag to the second video in response to the second input, resulting in the first video, the method further comprises:
under the condition that the first video comprises one first label, the first video is sent to a target contact person, so that when the target contact person plays the first video, a playing time point corresponding to the first label is used as a starting moment to play the first video;
and under the condition that the first video comprises a plurality of first labels, sending the first video to a target contact person, so that when the target contact person plays the first video, playing the first video by taking a playing time point corresponding to a specified label in the plurality of first labels as a starting time.
7. The method of claim 2, wherein after the adding the first tag to the second video in response to the second input, resulting in the first video, the method further comprises:
receiving a fifth input to the first tag;
in response to the fifth input, taking the playing time point corresponding to the first label as a starting time point to intercept a video segment with a preset time length in the first video to obtain a target video segment;
and sending the target video segment to a target contact.
8. A video processing apparatus, comprising:
the first display module is used for displaying a first interface, wherein the first interface comprises a first video marked with a first label;
a first receiving module for receiving a first input to the first video;
and the first playing module is used for responding to the first input and playing the first video by taking a first playing time point corresponding to the first label as a starting moment.
9. An electronic device, comprising a processor and a memory, the memory storing a program or instructions executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the video processing method of any of claims 1-7.
10. A readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the video processing method according to any one of claims 1 to 7.
CN202211257771.XA 2022-10-13 2022-10-13 Video processing method and device, electronic equipment and readable storage medium Pending CN115665355A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211257771.XA CN115665355A (en) 2022-10-13 2022-10-13 Video processing method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211257771.XA CN115665355A (en) 2022-10-13 2022-10-13 Video processing method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN115665355A true CN115665355A (en) 2023-01-31

Family

ID=84988129

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211257771.XA Pending CN115665355A (en) 2022-10-13 2022-10-13 Video processing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115665355A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination