CN111770386A - Video processing method, video processing device and electronic equipment


Info

Publication number: CN111770386A
Application number: CN202010478564.1A
Authority: CN (China)
Prior art keywords: video, target, input, window, user
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 张立恒
Assignee (original and current): Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202010478564.1A
Publication of CN111770386A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; operations thereof
    • H04N 21/47: End-user applications
    • H04N 21/478: Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788: Supplemental services communicating with other users, e.g. chatting
    • H04N 21/80: Generation or processing of content or additional data by content creator independently of the distribution process; content per se
    • H04N 21/83: Generation or processing of protective or descriptive data associated with content; content structuring
    • H04N 21/845: Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8455: Structuring of content involving pointers to the content, e.g. pointers to the I-frames of the video stream
    • H04N 21/8456: Structuring of content by decomposing the content in the time domain, e.g. in time segments
    • H04N 7/00: Television systems
    • H04N 7/14: Systems for two-way working
    • H04N 7/141: Systems for two-way working between two video terminals, e.g. videophone
    • H04N 7/142: Constructional details of the terminal equipment, e.g. arrangements of the camera and the display

Landscapes

  • Engineering & Computer Science
  • Multimedia
  • Signal Processing
  • General Engineering & Computer Science
  • Television Signal Processing For Recording

Abstract

The embodiments of the present application provide a video processing method, a video processing apparatus, and an electronic device, belonging to the technical field of data processing. The method includes: receiving a first input from a user on a first thumbnail in a communication program interface, where the first thumbnail indicates a first video; in response to the first input, displaying N person avatars from the first video; receiving a second input from the user on a target person avatar among the N person avatars; and in response to the second input, displaying M video clips, where the M video clips are the video clips in the first video that contain the target person, and the target person avatar is the avatar of the target person. With this scheme, the video clips containing a specific person in a video in the communication program interface can be located quickly, shortening the time the user spends finding such clips and improving the efficiency with which the user views them.

Description

Video processing method, video processing device and electronic equipment
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a video processing method, a video processing apparatus, and an electronic device.
Background
With the increasing popularity of the internet, users can share videos through various application programs on their electronic devices.
For example, school teachers often send videos of children's life and learning at school to parent chat groups so that parents can know how their children are doing, and parents likewise share videos they have shot of sports meets and artistic performances. However, since the videos sent by teachers and parents contain clips of many children, a parent may only be interested in the clips containing his or her own child. In such a scenario, if a user wants to view the video clips containing a specific person in a video received through the communication program interface, the user can only watch the entire video to find them.
This approach requires a lot of time and effort, and the user has to sit through a large amount of uninteresting content in order to screen out the content of interest, which is time-consuming and cumbersome.
Disclosure of Invention
The embodiments of the present application provide a video processing method, a video processing apparatus, and an electronic device, which can solve the prior-art problem that a user must spend a lot of time and effort watching a video in a communication program interface in its entirety in order to view a video clip containing a specific person, which is time-consuming and cumbersome.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a video processing method, where the method includes:
receiving a first input from a user on a first thumbnail in a communication program interface, where the first thumbnail indicates a first video;
in response to the first input, displaying N person avatars from the first video;
receiving a second input from the user on a target person avatar among the N person avatars;
in response to the second input, displaying M video clips, where the M video clips are video clips in the first video that contain a target person, and the target person avatar is the avatar of the target person;
where N and M are both positive integers.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including:
a first receiving module, configured to receive a first input from a user on a first thumbnail in a communication program interface, where the first thumbnail indicates a first video;
a first display module, configured to display N person avatars from the first video in response to the first input;
a second receiving module, configured to receive a second input from the user on a target person avatar among the N person avatars;
a second display module, configured to display M video clips in response to the second input, where the M video clips are video clips in the first video that contain a target person, and the target person avatar is the avatar of the target person;
where N and M are both positive integers.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the video processing method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a readable storage medium storing a program or instructions which, when executed by a processor, implement the steps of the video processing method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, which includes a processor and a communication interface coupled to the processor, where the processor is configured to execute a program or instructions to implement the video processing method according to the first aspect.
According to the video processing method, the video processing apparatus, and the electronic device provided by the embodiments of the present application, the avatars of the persons contained in a video in the communication program interface are displayed so that the user can select the desired target person avatar, and the video clips containing the target person are then displayed. The user can thus watch the clips containing a specific person without watching the video in its entirety: the clips can be located quickly, the time the user spends finding them is shortened, and the efficiency with which the user views the clips containing a specific person in a video in the communication program interface is improved.
Drawings
Fig. 1 is a flowchart illustrating the steps of a video processing method according to an embodiment of the present application;
Fig. 2 is a first schematic diagram illustrating an effect of the video processing method according to an embodiment of the present application;
Fig. 3 is a second schematic diagram illustrating an effect of the video processing method according to an embodiment of the present application;
Fig. 4 is a third schematic diagram illustrating an effect of the video processing method according to an embodiment of the present application;
Fig. 5 is a fourth schematic diagram illustrating an effect of the video processing method according to an embodiment of the present application;
Fig. 6 is a fifth schematic diagram illustrating an effect of the video processing method according to an embodiment of the present application;
Fig. 7 is a flowchart illustrating the steps of another video processing method according to an embodiment of the present application;
Fig. 8 is a sixth schematic diagram illustrating an effect of the video processing method according to an embodiment of the present application;
Fig. 9 is a flowchart illustrating the steps of a first video segment selection method according to an embodiment of the present application;
Fig. 10 is a seventh schematic diagram illustrating an effect of the video processing method according to an embodiment of the present application;
Fig. 11 is a flowchart illustrating the steps of a second video segment selection method according to an embodiment of the present application;
Fig. 12 is an eighth schematic diagram illustrating an effect of the video processing method according to an embodiment of the present application;
Fig. 13 is a flowchart illustrating the steps of a third video segment selection method according to an embodiment of the present application;
Fig. 14 is a flowchart illustrating the steps of a video compositing method according to an embodiment of the present application;
Fig. 15 is a ninth schematic diagram illustrating an effect of the video processing method according to an embodiment of the present application;
Fig. 16 is a block diagram of a video processing apparatus according to an embodiment of the present application;
Fig. 17 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms first, second and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application are capable of operation in sequences other than those illustrated or described herein.
A video processing method, a video processing apparatus, and an electronic device provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, an embodiment of the present application provides a video processing method, where the method includes:
step 101, receiving a first input of a user to a first thumbnail in a communication program interface, wherein the first thumbnail indicates a first video.
In the embodiments of the present application, the communication program interface refers to a session interface of a communication application, which may be a group chat session interface with multiple participants or a one-to-one chat session interface. The first video refers to a video contained in the communication program interface, that is, a video sent by a user participating in the session; there may be multiple such videos, sent by the same user or by different users.
The embodiments of the present application mainly apply to first videos containing images of persons. Generally, a video sent by a user is displayed in the communication program interface as a thumbnail, that is, the first video is represented by a first thumbnail, which may be the first video frame of the first video, any key video frame, or the last video frame; this can be determined according to actual requirements and is not limited here. The first input may take the form of a click, a long press, a slide, a gesture, a voice command, a floating-window input, or the like on the first video, which can likewise be determined according to actual requirements and is not limited here.
When a user wants to watch the video clips containing a certain person in a first video, in the prior art the user can only search by watching the first video in its entirety. In the embodiments of the present application, however, the user does not need to watch the first video: a first input on the first video triggers the subsequent processing steps that display the video clips the user wants to see. When there are multiple first videos in the communication program interface, the first input may be an operation performed on any one of them, triggering the subsequent processing steps for all first videos in the interface, or the steps may be triggered only for the first video specified by the first input; this can be determined according to actual requirements and is not limited here.
In practical applications, for example, in a chat group for the parents of a school, teachers and parents may share videos of the children's daily learning and extracurricular activities, but a parent usually only wants to see the clips featuring his or her own child rather than the entire videos. With the scheme of the embodiments of the present application, the parent only needs to perform a first input, such as a long press, on a first video in the chat group to trigger processing of the first videos sent by the users in the group and view the clips containing his or her own child. How the video clips containing a person are displayed is described in the following steps.
Step 102, in response to the first input, displaying N person avatars from the first video.
In the embodiments of the present application, a person avatar refers to the face-area image taken from a video frame of the first video that contains a person. Person recognition is performed on each video segment of the first video, which can be implemented with a person recognition model, to obtain the video frames containing person images; the face areas in those frames are cropped out to obtain N person avatars, which are then displayed, where N is a positive integer.
Further, when multiple avatars are captured, the facial features of the avatars can be compared, and avatars whose facial-feature similarity is greater than a face similarity threshold can be de-duplicated, preventing several avatars belonging to the same person from being displayed. The face similarity threshold may be measured experimentally or set by the user, and can be determined according to actual requirements; it is not limited here. After an avatar is acquired, an association can be established between the avatar and the video frames from which it was obtained; if de-duplication has been performed, the retained avatar is also associated with the video frames of the duplicate avatars that were removed, for use in subsequent steps.
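As an illustration of the avatar extraction and de-duplication described above, the following is a minimal sketch in Python. It assumes the open-source face_recognition and OpenCV libraries; the sampling stride and the 0.6 distance threshold are illustrative assumptions, not values taken from this application.

```python
# Sketch: extract de-duplicated person avatars from a video and keep the
# avatar-to-frame associations. Stride and threshold are assumptions.
import cv2
import face_recognition

def extract_person_avatars(video_path, stride=30, dist_threshold=0.6):
    """Return a list of (avatar_image, frame_indices) pairs, one per person."""
    cap = cv2.VideoCapture(video_path)
    persons = []  # each entry: {"encoding": ..., "avatar": ..., "frames": [...]}
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:  # sample frames instead of scanning every one
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            locations = face_recognition.face_locations(rgb)
            encodings = face_recognition.face_encodings(rgb, locations)
            for (top, right, bottom, left), enc in zip(locations, encodings):
                # de-duplicate: match against avatars already collected
                for person in persons:
                    if face_recognition.face_distance([person["encoding"]], enc)[0] < dist_threshold:
                        person["frames"].append(idx)  # keep the association
                        break
                else:
                    persons.append({
                        "encoding": enc,
                        "avatar": rgb[top:bottom, left:right],  # cropped face area
                        "frames": [idx],
                    })
        idx += 1
    cap.release()
    return [(p["avatar"], p["frames"]) for p in persons]
```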
In practical applications, the acquired avatars can also be screened according to actual requirements. For example, if the user needs to obtain the avatars of children, the age of the person corresponding to each avatar can be estimated and only the avatars whose age is below an age threshold retained; or, to make identification easier for the user, only the avatars whose sharpness exceeds a sharpness threshold can be kept. These screening methods are merely illustrative and can be determined according to actual requirements; they are not limited here.
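For the sharpness screening mentioned here, a common heuristic is the variance of the Laplacian; the sketch below assumes OpenCV, and the threshold value is illustrative only.

```python
# Sketch of avatar sharpness screening via the variance-of-Laplacian blur
# heuristic; the threshold is an assumption, not a value from this application.
import cv2

def is_sharp_enough(avatar_rgb, threshold=100.0):
    """Keep only avatars whose sharpness exceeds the sharpness threshold."""
    gray = cv2.cvtColor(avatar_rgb, cv2.COLOR_RGB2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() > threshold
```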
Step 103, receiving a second input from the user on a target person avatar among the N person avatars.
In the embodiments of the present application, the second input may be a click, a long press, a slide, a gesture, a voice command, a floating-window input, or the like performed by the user on one or more target person avatars among the displayed avatars, which can be determined according to actual needs and is not limited here. The target person avatar is the avatar of the target person whose video clips the user specifies, via the second input, that he or she wants to view. By displaying at least one person avatar for the user to browse, the user can perform the second input on at least one target person avatar in the avatar display interface according to his or her own needs, thereby triggering the subsequent video processing steps.
Step 104, in response to the second input, displaying M video segments, where the M video segments are the video segments of the first video that contain the target person, the target person avatar is the avatar of the target person, and N and M are positive integers.
In the embodiments of the present application, the video frames containing the target person can be obtained quickly through the association between avatars and video frames described in step 102, and the M video segments in which those frames lie are displayed; alternatively, the M segments may be those further selected by the user from the segments containing the target person, where M is a positive integer. Person recognition may also be performed again on each video frame of the first video: after the avatars in the frames containing person images are extracted, the facial features of each avatar are compared with those of the target person avatar, and the video segments containing the frames whose avatars have a feature similarity greater than the similarity threshold are displayed.
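Conceptually, turning the matched frames into the M video segments is a grouping of consecutive hits in the time domain; a minimal sketch follows, where the assumption that hits less than max_gap_sec apart belong to one segment is illustrative.

```python
def frames_to_segments(frame_indices, fps, max_gap_sec=2.0):
    """Group frame indices containing the target person into (start, end)
    segments in seconds; a new segment starts whenever the gap between
    consecutive hits exceeds max_gap_sec."""
    segments = []
    for idx in sorted(frame_indices):
        t = idx / fps
        if segments and t - segments[-1][1] <= max_gap_sec:
            segments[-1][1] = t          # extend the current segment
        else:
            segments.append([t, t])      # open a new segment
    return [(start, end) for start, end in segments]
```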
For example, the embodiments of the present application may be applied to processing a first video in the communication program interface of a group chat. Referring to fig. 2, which shows the first schematic diagram of the effect of the video processing method, the communication program interface is a group chat interface named "XX kindergarten parent chat group". The user may first perform a first input on a first video 1 in the interface, which switches to the second effect diagram shown in fig. 3: a person floating window 3 is displayed in the communication program interface, with four person avatars 2 displayed inside it. After browsing, the user may perform a second input on the target person avatar 21 in the upper-left corner, which switches to the third effect diagram shown in fig. 4, displaying a video clip 4 containing the target person corresponding to the target person avatar 21.
Further, referring to fig. 5, the fourth effect diagram of the video processing method, multiple first videos 1 exist in the communication program interface named "XX kindergarten parent chat group". The user only needs to perform a first input on any one of the first videos 1 to switch to the fifth effect diagram shown in fig. 6, in which person floating windows 3 holding the avatars from the multiple first videos 1 are displayed simultaneously. The user then only needs to perform a second input on a target person avatar among the avatars of one of the first videos, and the video clips containing the target person across the multiple first videos can be displayed, so the user can view the clips featuring the desired target person more efficiently.
The lower-left corner of the communication program interface in fig. 2 further includes a first function option 11 in the form of a camera, used to trigger the shooting function; the "send" second function option 12 in the lower-right corner is used to publish the information entered by the user into the session. The video playing window in fig. 4 further includes a "selection" third function option 13 for the user to switch the displayed video, a fourth function option 14 for pausing the video, and a fifth function option 15 for switching to the next video. The function options in the later effect diagrams can be understood with reference to this description and are not repeated below.
According to the video processing method provided by the embodiments of the present application, the avatars of the persons contained in a video in the communication program interface are displayed so that the user can select the desired target person avatar, and the video clips containing the target person are then displayed. The user can thus watch the clips containing a specific person without watching the video in its entirety: such clips can be located quickly, the time the user spends finding them is shortened, and the efficiency with which the user views the clips containing a specific person in a video in the communication program interface is improved.
Referring to fig. 7, an embodiment of the present application provides another video processing method, where the method includes:
step 201, receiving a first input of a first thumbnail in a communication program interface from a user, where the first thumbnail indicates a first video.
This step can refer to the detailed description of step 101, which is not repeated herein.
Step 202, filtering out, from the first videos, the videos that have already been processed in a target manner, where the target manner includes at least one of video extraction and video splicing.
In the embodiments of the present application, a video must be loaded and analyzed before the first videos in the communication program interface are processed, which occupies a certain amount of storage space. To avoid the data redundancy and the waste of storage and processing resources caused by repeatedly processing a video, the first videos that have already been processed by this scheme are filtered out. Specifically, because the scheme extracts and splices videos, the first videos that have undergone video extraction or video splicing can be filtered out, and the remaining first videos are passed to the subsequent processing.
By filtering out the videos already processed in the target manner before processing the first videos in the target application program interface, the embodiments of the present application spare the user from storing a large number of redundant videos, save a large amount of the electronic device's memory, avoid repeated processing, and save data processing resources.
Step 203, in response to the first input, displaying N person avatars from the first video.
This step can refer to the detailed description of step 102, which is not repeated here.
Step 204, receiving a second input from the user on a target person avatar among the N person avatars.
This step can refer to the detailed description of step 103, which is not repeated herein.
Step 205, in response to the second input, displaying a video playing window for the first video in the communication program interface, where the video playing window includes a progress bar, the progress bar contains M video segment identifiers, and each identifier indicates one video segment of the first video that contains the target person.
In the embodiments of the present application, the video playing window is a window in the communication program interface used to display the first video; it includes a progress bar used to display and control playback. A video segment identifier is a marker for a video segment containing the target person.
To help the user locate, on the progress bar, the positions of the segments containing the target person, the playback positions of those segments within the first video can be marked on the progress bar; that is, M video segment identifiers corresponding to the M video segments are displayed on the progress bar.
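For illustration, placing the M identifiers on the progress bar amounts to normalizing each segment's timestamps by the video duration; a minimal sketch, where the data layout is an assumption:

```python
def segment_identifiers(segments, duration_sec):
    """Map each (start, end) segment, in seconds, to fractional positions
    in [0, 1] along the progress bar for drawing the M identifiers."""
    return [(start / duration_sec, end / duration_sec) for start, end in segments]
```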
Step 206, receiving a third input from the user on a target video segment identifier among the M video segment identifiers.
In the embodiments of the present application, the third input may be a click, a long press, a slide, a gesture, a voice command, or the like on the target video segment identifier. Through the identifiers on the progress bar, the user can see exactly where the segments containing the target person lie, and can then perform the third input on the desired target identifier according to his or her own needs. The user needs to select at least one target video segment identifier.
Step 207, in response to the third input, displaying a floating window in the communication program interface, where the floating window contains a target video key frame of the target video segment indicated by the target video segment identifier.
In the embodiments of the present application, the target video key frame refers to the key frame corresponding to the target progress position selected by the user on the progress bar. It should be understood that different time nodes in a video correspond to different key frames; after the third input specifies the target progress position, the key frame adjacent to that position can be looked up according to the corresponding timestamp and used as the target video key frame. According to the third input on the target video segment identifier, a floating window can be displayed in the video playing window, showing the target video key frame indicated by the identifier. In this way, the user can quickly preview the content of a target segment by dragging along the progress bar in the video playing window.
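The adjacent-key-frame lookup described here can be sketched as a nearest-neighbor search over the key frames' timestamps; a minimal illustration, assuming the timestamps are available sorted in ascending order:

```python
import bisect

def nearest_keyframe(keyframe_timestamps, target_ts):
    """Return the key-frame timestamp adjacent to the target progress
    position; keyframe_timestamps must be sorted ascending."""
    i = bisect.bisect_left(keyframe_timestamps, target_ts)
    if i == 0:
        return keyframe_timestamps[0]
    if i == len(keyframe_timestamps):
        return keyframe_timestamps[-1]
    before, after = keyframe_timestamps[i - 1], keyframe_timestamps[i]
    return before if target_ts - before <= after - target_ts else after
```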
Referring to fig. 8, which shows the sixth effect diagram of the video processing method, a video segment selection interface containing multiple video segments 5 can be presented to the user, who performs a third input on the desired segments and then composites the selected segments by clicking the composition function option 6.
By displaying the video playing window in the communication program interface, with the progress bar carrying the identifiers that mark the positions of the segments containing the target person, the embodiments of the present application help the user conveniently learn where the desired segments lie; the user can also display the key video frames of a segment by selecting its identifier, so the segments containing the target person can be viewed quickly and conveniently.
Step 208, selecting at least one to-be-processed video clip from the M video clips.
In this embodiment of the present application, a to-be-processed video clip refers to a video clip that needs to be processed by at least one of video extraction, video splicing, and the like.
After the user has viewed the video clips containing the target person in the first video through this scheme, if the user further needs to quickly extract the clips containing the target person, or those clips together with other clips, the clips to be acquired can be designated as to-be-processed video clips for subsequent processing.
Step 209, taking the to-be-processed video clip as the target video when the number of to-be-processed video clips is 1.
In the embodiments of the present application, when there is only one to-be-processed video clip, it is extracted directly from the first video, and no video splicing is needed.
For example, suppose a first video A is 10 minutes long and the target person appears only in the segment from the 9th to the 10th minute. Directly extracting that segment from first video A yields a continuous, uninterrupted target video containing the target person.
Step 210, performing video composition on the at least two to-be-processed video clips to obtain the target video when the number of to-be-processed video clips is at least two.
In the embodiments of the present application, when there are at least two to-be-processed video clips, they can be spliced automatically into one complete video composed of all of them, saving the user the time and effort of splicing the clips manually afterwards.
For example, suppose the target person appears twice in a 10-minute first video B, in the segment from the 5th to the 6th minute and in the segment from the 9th to the 10th minute, with a three-minute interval between the two segments. The two segments can be spliced one after the other to obtain a target video consisting of both.
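A minimal sketch of steps 209 and 210 follows, using the moviepy library (v1.x API) as an assumed implementation vehicle; the application itself names no library, and the file names below are illustrative.

```python
# Sketch: extract a single clip directly, or splice several clips head-to-tail.
from moviepy.editor import VideoFileClip, concatenate_videoclips

def build_target_video(video_path, segments, out_path="target.mp4"):
    """segments: list of (start_sec, end_sec) to-be-processed clips."""
    source = VideoFileClip(video_path)
    clips = [source.subclip(start, end) for start, end in segments]
    if len(clips) == 1:
        target = clips[0]  # step 209: a single clip is extracted directly
    else:
        target = concatenate_videoclips(clips)  # step 210: splice in order
    target.write_videofile(out_path)
    source.close()

# First video B: the target person appears from minute 5-6 and minute 9-10.
build_target_video("first_video_b.mp4", [(300, 360), (540, 600)])
```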
By extracting and compositing the video clips designated by the user, the embodiments of the present application let the user quickly obtain the clips containing the target person in the first video without mastering any particular video processing technology, lowering the threshold for extracting specific clips from a video and saving the time and effort that extraction and composition would otherwise require.
Optionally, referring to fig. 9, the step 208 may include:
sub-step 2081, receiving a fourth input of the user to the progress bar.
In the embodiment of the present application, the fourth input may be an input in the form of a click, a long press, a slide, a voice input, a gesture input, and the like of the user with respect to the progress bar. Specifically, after the user views the video segment containing the target character in the first video, the user can select the required video segment by performing the first input on the progress bar.
Sub-step 2082, in response to the fourth input, determining the video segment selected by the fourth input as the video segment to be processed.
In the embodiment of the application, according to the fourth input of the user on the progress bar, the position of the progress bar where a video clip required by the user is located can be determined, and the video clip at the position of the progress bar is used as a to-be-processed video clip for subsequent video processing.
By determining, from the user's fourth input on the progress bar, the to-be-processed video clips that participate in extraction and composition, the embodiments of the present application let the user flexibly select the clips containing the specific person in the video according to his or her own needs.
Referring to fig. 10, which shows the seventh schematic diagram of the effect of the video processing method, video segment identifiers 7 corresponding to the segments containing the target person are displayed on the progress bar of the video playing window. After viewing the content containing the target person via the identifiers 7, the user may perform a fourth input on the target video segment identifier 71 on the progress bar, so that the video segment corresponding to the identifier 71 is taken as the to-be-processed video clip.
Optionally, referring to fig. 11, the progress bar further includes a start time identifier and an end time identifier that can be moved along the progress bar, and sub-step 2081 includes:
substep 20811, receiving a fourth input from the user for the start time identifier and the end time identifier.
In this embodiment, the start time identifier and the end time identifier are cursors that can be moved along the progress bar, and the user moves them through the fourth input. The start time identifier must precede the end time identifier on the progress bar; otherwise the end time would come before the start time. Alternatively, whichever identifier is at the earlier position can simply be treated as the start time identifier and the later one as the end time identifier, avoiding any end-before-start error.
Referring to fig. 11, the step 2082, including:
sub-step 20821, determining the video segment between the start time identifier and the end time identifier in the progress bar as the video segment to be processed.
In the embodiments of the present application, according to the start time identifier and the end time identifier determined by the user through the fourth input, the portion of the progress bar between the two identifiers determines where the desired video clip lies, and the video clip corresponding to that portion is taken as the to-be-processed video clip.
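A minimal sketch of sub-steps 20811 and 20821: the two identifiers are modeled as fractional progress-bar positions, and the earlier one is treated as the start time to avoid the end-before-start error described above (the fractional representation is an assumption of the sketch):

```python
def clip_between_markers(marker_a, marker_b, duration_sec):
    """marker_a and marker_b are positions in [0, 1] on the progress bar;
    whichever is earlier is taken as the start time identifier. Returns the
    (start_sec, end_sec) of the to-be-processed video clip."""
    start, end = sorted((marker_a, marker_b))
    return start * duration_sec, end * duration_sec
```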
Referring to fig. 12, which shows the eighth schematic diagram of the effect of the video processing method according to the embodiment of the present application, a start time identifier 8 and an end time identifier 9 are included on the progress bar. The user may adjust their positions by performing a fourth input on them, so that the video segment located between the start time identifier 8 and the end time identifier 9 is taken as the to-be-processed video clip.
By providing the start time identifier and the end time identifier on the progress bar, the embodiments of the present application make it more convenient for the user to select the desired video clip.
Optionally, the video playing window of the first video further includes a video composition window. Referring to fig. 13, step 208 may include:
substep 2083, receiving a fifth input from the user to P video segment identifiers of the M video segment identifiers, where P is not greater than M and P is a positive integer.
In the embodiments of the present application, the video composition window is a functional window, located in the video playing window, used to select and control the composition of video segments. It can be displayed automatically after the user selects a video segment identifier, or the user can trigger it manually by pressing the corresponding composition function key after selecting the identifier. The fifth input may take the form of a click, a long press, a slide, a gesture, a voice command, a floating-window input, or the like on the video segment identifiers.
After viewing the video segments containing the target person in the first video and finding P segments of interest, the user can select the P corresponding video segment identifiers among the M identifiers. The P identifiers may be some or all of the M identifiers.
Substep 2084, adding the target object indicated by the P video segment identifications to the video composition window.
In the embodiments of the present application, the target object refers to some or all of the video frames of the video segment corresponding to a video segment identifier, for example its key video frames, first video frame, or last video frame; it may also be the segment composed of all of its video frames. Anything that expresses the characteristics of the segment and makes it easy for the user to distinguish segments will do; the choice can be determined according to actual requirements and is not limited here.
The user can add the target objects indicated by the needed P video segment identifiers into the video composition window according to his or her own needs, so as to control the video composition.
Substep 2085, determining the video segment indicated by the target object as the video segment to be processed.
In the embodiment of the application, after the user adds the target object in the video composition window, the video clip corresponding to the target object can be used as a to-be-processed video clip for subsequent video composition.
In the embodiment of the application, the video composition window is displayed in the video playing window, so that a user can intuitively and conveniently select and control the video clips participating in video composition by adding the target object indicated by the required video clip into the video composition window.
Alternatively, referring to fig. 14, the target object includes a video thumbnail or video frames of a video clip, and sub-step 2084 includes:
substep 20841, in case the video composition window is a video clip composition window, adding P video thumbnails of the P video clips indicated by the P video clip identifications to the video composition window.
In the embodiments of the present application, the video segment composition window is a functional window that can display the composition of the P video segments; specifically, the video thumbnails of the P segments displayed in the window allow the user to identify them. A video thumbnail may be the first video frame, the last video frame, or any key frame of the segment, which can be determined according to actual requirements and is not limited here.
Sub-step 20842, in case the video composition window is a video frame composition window, adding all video frames of each of the P video segments indicated by the P video segment identifications to the video composition window.
In the embodiments of the present application, the video frame composition window is a functional window that can serve as a frame synthesizer, a functional module that composites multiple video frames by connecting them in series into a video. Specifically, after the user performs the fifth input on the P video segment identifiers, the video frames of the P segments indicated by the identifiers can be added to the video frame composition window, where the user can clearly view each frame of the segments and perform the subsequent composition.
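A frame synthesizer of the kind described here can be sketched with OpenCV's VideoWriter, which serializes individual frames into a video; the fps value and the mp4v codec below are illustrative assumptions.

```python
# Sketch: connect video frames in series into one video, in the order
# they appear in the video frame composition window.
import cv2

def synthesize_frames(frames, out_path="target.mp4", fps=30):
    """frames: BGR numpy arrays, all the same size, in display order."""
    height, width = frames[0].shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))
    for frame in frames:
        writer.write(frame)
    writer.release()
```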
Referring to fig. 15, which shows the ninth schematic diagram of the effect of the video processing method, the user performs a long-press operation on the video segment identifier 71 on the progress bar and drags the video thumbnails 91 and 92 of the corresponding video segments into the video segment composition window 10, so that the video segments corresponding to thumbnails 91 and 92 are taken as the videos to be processed.
Referring to fig. 14, the step 210 includes:
in substep 2101, when the video composition window is a video clip composition window, video composition is performed on the video clips indicated by the P video thumbnails to obtain a target video.
In the embodiment of the application, after the user adds the video thumbnail in the video clip composition window, the video clip indicated by the video thumbnail contained in the video clip composition window is subjected to video composition to obtain the target video required by the user.
Sub-step 2102, when the video composition window is a video frame composition window, performing video composition on the video frames displayed in the window to obtain the target video.
In the embodiments of the present application, after the user adds video frames to the video frame composition window, the frames contained in the window are composited to obtain the target video the user needs.
According to the embodiments of the present application, by displaying the video frame composition window, the user can add the frames of the desired video clips, making it more convenient to select and composite at the frame level; and by displaying the video segment composition window, the user can add the thumbnails of the desired clips, making it convenient to select and composite whole clips, which improves the flexibility of video composition.
Optionally, step 2101 includes: when the video composition window is a video segment composition window, determining a first composition order of the video segments corresponding to the video thumbnails based on the display position of each thumbnail in the window, and compositing the segments in that order to obtain the target video.
In the embodiments of the present application, the first composition order refers to the order of the video segments during composition. In the composition process, the segments are spliced head-to-tail in sequence, so a segment earlier in the composition order also plays earlier in the resulting target video. Specifically, the first composition order is determined from the display position of each video thumbnail in the video segment composition window, and the positions may be read from left to right, from right to left, from top to bottom, from bottom to top, and so on.
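A minimal sketch of determining the first composition order from display positions, assuming each thumbnail carries its window coordinates; a left-to-right, top-to-bottom reading order is chosen here, though the description equally allows the other orders:

```python
def first_composition_order(thumbnails):
    """thumbnails: list of dicts with 'x', 'y' display coordinates and the
    'segment' they indicate; returns the segments in composition order."""
    ordered = sorted(thumbnails, key=lambda t: (t["y"], t["x"]))
    return [t["segment"] for t in ordered]
```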
Optionally, step 2102 includes: when the video composition window is a video frame composition window, determining a second composition order of the video frames based on the display position of each frame in the window, and compositing the frames in that order to obtain the target video.
In the embodiments of the present application, the second composition order refers to the order of the video frames during composition; it is analogous to the first composition order in step 2101 and is not described again here to avoid repetition.
According to the embodiments of the present application, the video segments can be composited according to the determined first composition order, letting the user control composition in the video segment composition window, and the video frames can be composited according to the determined second composition order, letting the user control composition in the video frame composition window, which improves the flexibility of video composition.
According to the other video processing method provided by the embodiments of the present application, the avatars of the persons contained in a video in the communication program interface are displayed so that the user can select the desired target person avatar, and the video clips containing the target person are then displayed; the user can watch the clips containing a specific person without watching the video in its entirety, the clips can be located quickly, the search time is shortened, and viewing efficiency is improved. In addition, the video clips selected by the user can be extracted or composited, so the user can obtain the clips containing a specific person without mastering any particular video processing technology, lowering the threshold of video processing. Two modes, video segment composition and video frame composition, are provided for controlling composition, improving its flexibility. The selected clips can also be composited in the order set by the user's input, improving the flexibility with which the user extracts person videos. Finally, already-processed videos are filtered out, avoiding the waste of data processing and storage resources caused by repeated processing and analysis.
It should be noted that the video processing method provided in the embodiments of the present application may be executed by a video processing apparatus, or by a control module within the video processing apparatus for executing the method. In the embodiments of the present application, the method is described taking a video processing apparatus executing the video processing method as an example.
Referring to fig. 16, an embodiment of the present application further provides a block diagram of a video processing apparatus 30, where the video processing apparatus includes:
the first receiving module 301 is configured to receive a first input of a first thumbnail in a communication program interface from a user, where the first thumbnail indicates a first video.
A first display module 302, configured to display N person avatars from the first video in response to the first input.
A second receiving module 303, configured to receive a second input from the user on a target person avatar among the N person avatars.
A second display module 304, configured to display M video segments in response to the second input, where the M video segments are the video segments of the first video that contain a target person, the target person avatar is the avatar of the target person, and N and M are positive integers.
Optionally, the apparatus further includes:
a selecting module 305, configured to select at least one to-be-processed video segment from the M video segments.
A first processing module 306, configured to take the video segment to be processed as a target video if the number of the video segments to be processed is 1.
The second processing module 307 is configured to, if the number of the to-be-processed video segments is at least two, perform video synthesis on the at least two to-be-processed video segments to obtain a target video.
Optionally, the second display module 304 is further configured to:
responding to the second input, displaying a video playing window of the first video in the communication program interface, wherein the video playing window comprises a progress bar, the progress bar comprises M video segment identifications, and each video segment identification is used for indicating one video segment containing a target character in the first video;
receiving a third input of a user to a target video clip identifier in the M video clip identifiers;
and responding to the third input, and displaying a floating window in the communication program interface, wherein the floating window comprises a target video key frame of the target video clip indicated by the target video clip identification.
Optionally, the selecting module 305 is further configured to:
receiving a fourth input of the progress bar by the user;
and in response to the fourth input, determining the video segment selected by the fourth input as the video segment to be processed.
Optionally, the progress bar further includes a start time identifier and an end time identifier that can be moved along the progress bar, and the selecting module 305 is further configured to:
receiving a fourth input of the starting time identifier and the ending time identifier by a user;
the taking the video segment selected by the fourth input as the video segment to be processed includes:
and determining the video clip positioned between the starting time identifier and the ending time identifier in the progress bar as a video clip to be processed.
Optionally, the video playing window of the first video further includes: a video composition window; the selecting module 305 is further configured to:
receiving a fifth input of the user to P video segment identifications among the M video segment identifications;
adding the target objects indicated by the P video segment identifications to the video composition window;
and determining the video segment indicated by the target object as a video segment to be processed.
Optionally, the target object includes: a video thumbnail or a video frame of the video clip, and the selecting module 305 is further configured to:
adding P video thumbnails of P video clips indicated by the P video clip identifications to the video composition window when the video composition window is a video clip composition window;
in the case that the video composition window is a video frame composition window, adding all video frames of each of the P video clips indicated by the P video clip identifications to the video composition window;
in a case that the number of the to-be-processed video segments is at least two, the second processing module 307 is further configured to:
under the condition that the video synthesis window is a video clip synthesis window, carrying out video synthesis on the video clips indicated by the P video thumbnails to obtain a target video;
and under the condition that the video synthesis window is a video frame synthesis window, carrying out video synthesis on the video frames displayed in the video frame synthesis window to obtain a target video.
Optionally, the second processing module 307 is further configured to:
under the condition that the video synthesis window is a video segment synthesis window, determining a first synthesis sequence of the video segment corresponding to each video thumbnail based on the display position of each video thumbnail in the video segment synthesis window, and performing video synthesis on the video segment corresponding to each video thumbnail according to the first synthesis sequence to obtain a target video;
and under the condition that the video synthesis window is a video frame synthesis window, determining a second synthesis sequence of each video frame based on the display position of each video frame in the video frame synthesis window, and performing video synthesis on each video frame according to the second synthesis sequence to obtain a target video.
Optionally, the first display module 302 is further configured to:
filtering videos which are processed according to a target mode in the first videos;
wherein the target mode comprises: at least one of video extraction and video splicing.
The video processing apparatus in the embodiments of the present application may be a standalone apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), an automated teller machine, or a self-service kiosk; the embodiments of the present application are not specifically limited in this respect.
The video processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited in this respect.
The video processing apparatus provided in this embodiment of the present application can implement each process implemented by the video processing apparatus in the method embodiments of fig. 1 to fig. 15; to avoid repetition, details are not described here again.
According to the video processing apparatus of this embodiment, the person avatars contained in a video of the communication program interface are displayed so that the user can select the desired target person avatar, and the video segments containing the target person are then displayed. The user can therefore watch the video segments containing a specific person without watching the whole video in the communication program interface, and can quickly locate those segments. This shortens the time the user spends searching for the segments and improves the efficiency of viewing, in a video of the communication program interface, the video segments containing the specific person.
Optionally, an embodiment of the present application further provides an electronic device, including a processor 410, a memory 409, and a program or instruction stored in the memory 409 and executable on the processor 410. When executed by the processor 410, the program or instruction implements each process of the above video processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 17 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 400 includes, but is not limited to: a radio frequency unit 401, a network module 402, an audio output unit 403, an input unit 404, a sensor 405, a display unit 406, a user input unit 407, an interface unit 408, a memory 409, and a processor 410.
Those skilled in the art will appreciate that the electronic device 400 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 410 through a power management system so as to manage charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 17 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, and details are not described here again.
The user input unit 407 is configured to receive a first input from a user on a first thumbnail in the communication program interface, where the first thumbnail indicates a first video.
The display unit 406 is configured to display, in response to the first input, N person avatars in the first video.
The user input unit 407 is further configured to receive a second input from the user on a target person avatar among the N person avatars.
The display unit 406 is further configured to display, in response to the second input, M video segments, where the M video segments are video segments including a target person in the first video, and the target person avatar is an avatar of the target person.
N and M are both positive integers.
According to the electronic device of this embodiment, the person avatars contained in a video of the communication program interface are displayed so that the user can select the desired target person avatar, and the video segments containing the target person are then displayed. The user can therefore watch the video segments containing a specific person without watching the whole video in the communication program interface, and can quickly locate those segments. This shortens the time the user spends searching for the segments and improves the efficiency of viewing, in a video of the communication program interface, the video segments containing the specific person.
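By way of example, the following minimal sketch illustrates one way the video segments containing the target person could be located. The use of the open-source OpenCV and face_recognition packages, the one-frame-per-second sampling, and the helper names are illustrative assumptions; the present application does not specify a recognition method.

```python
# A minimal sketch, NOT the application's specified implementation: OpenCV and
# the face_recognition package stand in for an unspecified recognition engine.
import cv2
import face_recognition

def segments_with_person(video_path, target_encoding, step_s=1.0):
    """Return (start_s, end_s) pairs of segments containing the target person,
    identified by a face encoding of the selected target person avatar."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0     # fall back if FPS is unknown
    step = max(1, int(fps * step_s))            # sample roughly once per second
    segments, start, frame_idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hit = any(face_recognition.compare_faces([target_encoding], e)[0]
                      for e in face_recognition.face_encodings(rgb))
            t = frame_idx / fps
            if hit and start is None:
                start = t                       # a matching segment begins
            elif not hit and start is not None:
                segments.append((start, t))     # the matching segment ends
                start = None
        frame_idx += 1
    if start is not None:
        segments.append((start, frame_idx / fps))
    cap.release()
    return segments
```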
Optionally, the processor 410 is configured to: select at least one video segment to be processed from the M video segments;
take the video segment to be processed as a target video in a case that the number of video segments to be processed is one;
and perform video composition on the at least two video segments to be processed to obtain a target video in a case that the number of video segments to be processed is at least two.
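By way of example, the following minimal sketch illustrates this branch of the processor logic: a single selected segment is used directly as the target video, while two or more segments are composed. moviepy (v1.x import path) is again an illustrative stand-in for the unspecified composition engine.

```python
# A minimal sketch, NOT the application's specified implementation.
from moviepy.editor import VideoFileClip, concatenate_videoclips  # moviepy 1.x

def make_target_video(pending_paths, out_path):
    if len(pending_paths) == 1:
        # One segment to be processed: use it as the target video directly.
        VideoFileClip(pending_paths[0]).write_videofile(out_path)
    else:
        # At least two segments: perform video composition to obtain the target.
        clips = [VideoFileClip(p) for p in pending_paths]
        concatenate_videoclips(clips).write_videofile(out_path)
```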
Optionally, the display unit 406 is further configured to:
display, in response to the second input, a video playing window of the first video in the communication program interface, where the video playing window includes a progress bar, the progress bar includes M video segment identifiers, and each video segment identifier indicates one video segment containing the target person in the first video;
the user input unit 407 is further configured to receive a third input from the user on a target video segment identifier among the M video segment identifiers;
and the display unit 406 is further configured to display, in response to the third input, a floating window in the communication program interface, where the floating window includes a target video key frame of the target video segment indicated by the target video segment identifier.
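By way of example, the following minimal sketch illustrates fetching a frame of the target video segment for the floating window. Taking the middle frame as the key frame is an illustrative assumption; the application leaves the choice of key frame unspecified.

```python
# A minimal sketch, NOT the application's specified implementation: the middle
# frame of the segment is used as a stand-in for the target video key frame.
import cv2

def key_frame(segment_path):
    cap = cv2.VideoCapture(segment_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.set(cv2.CAP_PROP_POS_FRAMES, max(0, total // 2))  # seek to the middle
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None  # BGR ndarray for the floating window, or None
```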
Optionally, the user input unit 407 is further configured to receive a fourth input from the user on the progress bar;
and the processor 410 is further configured to determine, in response to the fourth input, the video segment selected by the fourth input as the video segment to be processed.
Optionally, the progress bar further includes: a start time identifier and an end time identifier that are movable along the progress bar;
the user input unit 407 is further configured to receive a fourth input from the user on the start time identifier and the end time identifier;
and the processor 410 is further configured to determine the video segment located between the start time identifier and the end time identifier in the progress bar as the video segment to be processed.
Optionally, the video playing window of the first video further includes: a video composition window;
the user input unit 407 is further configured to receive a fifth input from the user on P video segment identifiers among the M video segment identifiers;
the processor 410 is further configured to add the target objects indicated by the P video segment identifiers to the video composition window;
and to determine the video segments indicated by the target objects as the video segments to be processed.
Optionally, the target object includes: a video thumbnail or a video frame of the video segment, and the processor 410 is further configured to:
add, in a case that the video composition window is a video segment composition window, P video thumbnails of the P video segments indicated by the P video segment identifiers to the video composition window;
add, in a case that the video composition window is a video frame composition window, all video frames of each of the P video segments indicated by the P video segment identifiers to the video composition window;
perform, in a case that the video composition window is a video segment composition window, video composition on the video segments indicated by the P video thumbnails to obtain a target video;
and perform, in a case that the video composition window is a video frame composition window, video composition on the video frames displayed in the video frame composition window to obtain the target video.
Optionally, the processor 410 is further configured to:
determine, in a case that the video composition window is a video segment composition window, a first composition order of the video segments corresponding to the video thumbnails based on the display position of each video thumbnail in the video segment composition window, and perform video composition on those video segments in the first composition order to obtain a target video;
and determine, in a case that the video composition window is a video frame composition window, a second composition order of the video frames based on the display position of each video frame in the video frame composition window, and perform video composition on those video frames in the second composition order to obtain the target video.
Optionally, the processor 410 is further configured to:
filter out, from first videos, videos that have already been processed in a target mode;
where the target mode includes at least one of video extraction and video splicing.
According to the embodiments of the present application, the video segments selected by the user can be extracted or composed, so that the user can obtain video segments containing a specific person without mastering any video processing technique, which lowers the threshold of video processing. Providing the user with the two modes of video segment composition and video frame composition also gives the user control over the composition and improves its flexibility. The selected video segments can further be composed, according to the user's input, in the order set by the user, which improves the flexibility with which the user extracts person videos. Finally, videos that have already been processed are filtered out, which avoids wasting data processing and data storage resources on repeated processing and repeated analysis.
An embodiment of the present application further provides a readable storage medium storing a program or instruction. When executed by a processor, the program or instruction implements each process of the video processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above video processing method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a chip system, a system-on-a-chip, or the like.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a/an ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may be performed in a substantially simultaneous manner or in a reverse order, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly also by hardware alone, although in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods according to the embodiments of the present application.
While the embodiments of the present application have been described with reference to the accompanying drawings, the application is not limited to the specific embodiments described above, which are illustrative rather than restrictive; those of ordinary skill in the art may make various changes without departing from the spirit and scope of the application as defined by the appended claims.

Claims (11)

1. A method of video processing, the method comprising:
receiving a first input from a user on a first thumbnail in a communication program interface, wherein the first thumbnail indicates a first video;
in response to the first input, displaying N person head portraits in the first video;
receiving a second input from the user on a target person head portrait among the N person head portraits;
in response to the second input, displaying M video clips, wherein the M video clips are video clips containing a target person in the first video, and the target person head portrait is the head portrait of the target person;
wherein N and M are both positive integers.
2. The method according to claim 1, further comprising, after the displaying M video clips in response to the second input:
selecting at least one video clip to be processed from the M video clips;
taking the video clip to be processed as a target video in a case that the number of video clips to be processed is one;
and performing video composition on the at least two video clips to be processed to obtain a target video in a case that the number of video clips to be processed is at least two.
3. The method according to claim 2, wherein the displaying M video clips in response to the second input comprises:
in response to the second input, displaying a video playing window of the first video in the communication program interface, wherein the video playing window comprises a progress bar, the progress bar comprises M video clip identifiers, and each video clip identifier indicates one video clip containing the target person in the first video;
receiving a third input from the user on a target video clip identifier among the M video clip identifiers;
and in response to the third input, displaying a floating window in the communication program interface, wherein the floating window comprises a target video key frame of the target video clip indicated by the target video clip identifier.
4. The method according to claim 3, wherein the selecting at least one video clip to be processed from the M video clips comprises:
receiving a fourth input from the user on the progress bar;
and in response to the fourth input, determining the video clip selected by the fourth input as the video clip to be processed.
5. The method according to claim 4, wherein the progress bar further comprises: a start time identifier and an end time identifier that are movable along the progress bar;
the receiving a fourth input from the user on the progress bar comprises:
receiving a fourth input from the user on the start time identifier and the end time identifier;
and the determining the video clip selected by the fourth input as the video clip to be processed comprises:
determining the video clip located between the start time identifier and the end time identifier in the progress bar as the video clip to be processed.
6. The method according to claim 3, wherein the video playing window of the first video further comprises: a video composition window; and the selecting at least one video clip to be processed from the M video clips comprises:
receiving a fifth input from the user on P video clip identifiers among the M video clip identifiers;
adding the target objects indicated by the P video clip identifiers to the video composition window;
and determining the video clips indicated by the target objects as the video clips to be processed.
7. The method according to claim 6, wherein the target object comprises: a video thumbnail or a video frame of a video clip, and the adding the target objects indicated by the P video clip identifiers to the video composition window comprises:
adding, in a case that the video composition window is a video clip composition window, P video thumbnails of the P video clips indicated by the P video clip identifiers to the video composition window;
and adding, in a case that the video composition window is a video frame composition window, all video frames of each of the P video clips indicated by the P video clip identifiers to the video composition window;
wherein, in a case that the number of video clips to be processed is at least two, the performing video composition on the at least two video clips to be processed to obtain a target video comprises:
performing, in a case that the video composition window is a video clip composition window, video composition on the video clips indicated by the P video thumbnails to obtain the target video;
and performing, in a case that the video composition window is a video frame composition window, video composition on the video frames displayed in the video frame composition window to obtain the target video.
8. The method according to claim 7, wherein the performing, in a case that the video composition window is a video clip composition window, video composition on the video clips indicated by the P video thumbnails to obtain the target video comprises:
determining, in a case that the video composition window is a video clip composition window, a first composition order of the video clip corresponding to each video thumbnail based on the display position of each video thumbnail in the video clip composition window, and performing video composition on the video clips corresponding to the video thumbnails in the first composition order to obtain the target video;
and the performing, in a case that the video composition window is a video frame composition window, video composition on the video frames displayed in the video frame composition window to obtain the target video comprises:
determining, in a case that the video composition window is a video frame composition window, a second composition order of each video frame based on the display position of each video frame in the video frame composition window, and performing video composition on the video frames in the second composition order to obtain the target video.
9. The method according to claim 1, further comprising, before the displaying N person head portraits in the first video:
filtering out, from first videos, videos that have already been processed in a target mode;
wherein the target mode comprises: at least one of video extraction and video splicing.
10. A video processing apparatus, characterized in that the apparatus comprises:
a first receiving module, configured to receive a first input from a user on a first thumbnail in a communication program interface, wherein the first thumbnail indicates a first video;
a first display module, configured to display N person head portraits in the first video in response to the first input;
a second receiving module, configured to receive a second input from the user on a target person head portrait among the N person head portraits;
a second display module, configured to display M video clips in response to the second input, wherein the M video clips are video clips containing a target person in the first video, and the target person head portrait is the head portrait of the target person;
wherein N and M are both positive integers.
11. An electronic device, comprising a processor, a memory, and a program or instruction stored on the memory and executable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the video processing method according to any one of claims 1 to 9.
CN202010478564.1A 2020-05-29 2020-05-29 Video processing method, video processing device and electronic equipment Pending CN111770386A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010478564.1A CN111770386A (en) 2020-05-29 2020-05-29 Video processing method, video processing device and electronic equipment

Publications (1)

Publication Number Publication Date
CN111770386A true CN111770386A (en) 2020-10-13

Family

ID=72719892

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010478564.1A Pending CN111770386A (en) 2020-05-29 2020-05-29 Video processing method, video processing device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111770386A (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108476327A (en) * 2015-08-20 2018-08-31 皇家Kpn公司 Piece video is formed based on Media Stream
CN106095804A (en) * 2016-05-30 2016-11-09 维沃移动通信有限公司 The processing method of a kind of video segment, localization method and terminal
CN106375874A (en) * 2016-09-20 2017-02-01 北京小米移动软件有限公司 Video processing method, device, terminal equipment and server
CN106713964A (en) * 2016-12-05 2017-05-24 乐视控股(北京)有限公司 Method of generating video abstract viewpoint graph and apparatus thereof
CN106792218A (en) * 2016-12-20 2017-05-31 北京猎豹移动科技有限公司 Video clipping playing method and device
CN108040286A (en) * 2017-11-28 2018-05-15 北京潘达互娱科技有限公司 Video previewing method, device, electronic equipment and computer-readable recording medium
CN109963164A (en) * 2017-12-14 2019-07-02 北京搜狗科技发展有限公司 A kind of method, apparatus and equipment of query object in video
CN109936763A (en) * 2017-12-15 2019-06-25 腾讯科技(深圳)有限公司 The processing of video and dissemination method
CN109993025A (en) * 2017-12-29 2019-07-09 中移(杭州)信息技术有限公司 A kind of extraction method of key frame and equipment
CN110913244A (en) * 2018-09-18 2020-03-24 传线网络科技(上海)有限公司 Video processing method and device, electronic equipment and storage medium
CN109151595A (en) * 2018-09-30 2019-01-04 北京微播视界科技有限公司 Method for processing video frequency, device, terminal and medium
CN111061912A (en) * 2018-10-16 2020-04-24 华为技术有限公司 Method for processing video file and electronic equipment
CN110611848A (en) * 2019-09-30 2019-12-24 咪咕视讯科技有限公司 Information processing method, system, terminal, server and readable storage medium

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113010738A (en) * 2021-02-08 2021-06-22 维沃移动通信(杭州)有限公司 Video processing method and device, electronic equipment and readable storage medium
CN113010738B (en) * 2021-02-08 2024-01-30 维沃移动通信(杭州)有限公司 Video processing method, device, electronic equipment and readable storage medium
CN113315691A (en) * 2021-05-20 2021-08-27 维沃移动通信有限公司 Video processing method and device and electronic equipment
WO2022242577A1 (en) * 2021-05-20 2022-11-24 维沃移动通信有限公司 Video processing method and apparatus, and electronic device
CN113315691B (en) * 2021-05-20 2023-02-24 维沃移动通信有限公司 Video processing method and device and electronic equipment
CN113596555A (en) * 2021-06-21 2021-11-02 维沃移动通信(杭州)有限公司 Video playing method and device and electronic equipment
CN113596555B (en) * 2021-06-21 2024-01-19 维沃移动通信(杭州)有限公司 Video playing method and device and electronic equipment
CN114390356A (en) * 2022-01-19 2022-04-22 维沃移动通信有限公司 Video processing method, video processing device and electronic equipment
CN115278378A (en) * 2022-07-27 2022-11-01 维沃移动通信有限公司 Information display method, information display device, electronic apparatus, and storage medium
CN115278378B (en) * 2022-07-27 2024-06-21 维沃移动通信有限公司 Information display method, information display device, electronic apparatus, and storage medium

Similar Documents

Publication Publication Date Title
CN111770386A (en) Video processing method, video processing device and electronic equipment
US11317139B2 (en) Control method and apparatus
CN113093968B (en) Shooting interface display method and device, electronic equipment and medium
CN112437353B (en) Video processing method, video processing device, electronic apparatus, and readable storage medium
CN111757175A (en) Video processing method and device
CN112714253B (en) Video recording method and device, electronic equipment and readable storage medium
CN107870999B (en) Multimedia playing method, device, storage medium and electronic equipment
CN111857512A (en) Image editing method and device and electronic equipment
CN108228776B (en) Data processing method, data processing device, storage medium and electronic equipment
CN113596555B (en) Video playing method and device and electronic equipment
CN113794835B (en) Video recording method and device and electronic equipment
CN112887794A (en) Video editing method and device
CN112181252B (en) Screen capturing method and device and electronic equipment
CN106936830B (en) Multimedia data playing method and device
CN113596574A (en) Video processing method, video processing apparatus, electronic device, and readable storage medium
CN112752127B (en) Method and device for positioning video playing position, storage medium and electronic device
CN112328829A (en) Video content retrieval method and device
CN111954076A (en) Resource display method and device and electronic equipment
WO2022247766A1 (en) Image processing method and apparatus, and electronic device
CN109040848A (en) Barrage is put upside down method, apparatus, electronic equipment and storage medium
CN111225250B (en) Video extended information processing method and device
CN113268961A (en) Travel note generation method and device
CN113709565A (en) Method and device for recording facial expressions of watching videos
CN115278378B (en) Information display method, information display device, electronic apparatus, and storage medium
CN114245174B (en) Video preview method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20201013