CN112616082A - Video preview method, device, terminal and storage medium

Video preview method, device, terminal and storage medium

Info

Publication number
CN112616082A
CN112616082A
Authority
CN
China
Prior art keywords: playing, sub, video, area, target video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011442171.1A
Other languages
Chinese (zh)
Inventor
刘春宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd
Priority to CN202011442171.1A
Publication of CN112616082A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/443 OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H04N 21/4438 Window management, e.g. event handling following interaction with the user interface
    • H04N 21/47 End-user applications
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456 Structuring of content by decomposing the content in the time domain, e.g. in time segments

Abstract

The present application discloses a video preview method, apparatus, terminal, and storage medium, and belongs to the technical field of video processing. For a video that a user wants to preview quickly, before the video is played it is divided into a plurality of sub-videos according to its playing duration, and the corresponding sub-video is then played in each sub-playing area of the playing area. In this way, the plurality of sub-videos of the video are played simultaneously across the whole playing area, so that the user can quickly preview the complete content of the video, which greatly improves the efficiency with which the user previews the video.

Description

Video preview method, device, terminal and storage medium
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a video preview method, an apparatus, a terminal, and a storage medium.
Background
With the development of internet technology, more and more users prefer to acquire information by watching videos. When watching a long video, such as a movie, a user often wants to see its key content quickly in order to grasp the main content of the video.
In the related art, when a user wants to quickly see the key content of a video, the user can hover the mouse over the playing progress bar in the video player; the video player then locates the key frame corresponding to the mouse position in the video and displays that key frame above the progress bar, so that the key content of the video can be previewed.
However, this method requires the user to manually locate the key content to be viewed, which is time-consuming. In addition, because the video player locates only key frames, the video content near each key frame may be omitted, so the user cannot preview all of the video content of interest, resulting in inefficient video preview.
Disclosure of Invention
The embodiments of the present application provide a video preview method, apparatus, terminal, and storage medium, which can improve the efficiency with which a user previews a video. The technical solution is as follows:
in one aspect, a video preview method is provided, and the method includes:
determining a playing area of a target video to be played;
dividing the playing area into a plurality of sub-playing areas;
determining playing information of each sub-playing area based on a plurality of sub-playing areas and the playing duration of the target video;
acquiring a plurality of sub-videos of the target video based on the playing information of each sub-playing area, wherein one sub-video corresponds to one sub-playing area;
and playing the corresponding sub-video in each sub-playing area of the playing area.
In one possible implementation, the method further includes:
determining the number of the sub-play areas in response to the division operation of the play area;
and displaying a plurality of the sub-playing areas based on the size parameter of the playing area and the number of the sub-playing areas.
In one possible implementation, the method further includes:
determining the playing duration of the target video in each sub-playing area based on the number of the sub-playing areas and the playing duration of the target video;
and determining the starting playing time of the target video in each sub-playing area based on the playing duration of the target video in each sub-playing area and the playing time length of the target video.
In one possible implementation, the method further includes:
and dividing the playing time length of the target video evenly according to the number of the sub-playing areas to obtain the playing duration of the target video in each sub-playing area.
In one possible implementation, the method further includes:
and decoding the target video by the decoder corresponding to each sub-playing area based on the playing information of that sub-playing area, to obtain a plurality of sub-videos of the target video.
In one possible implementation, the method further includes:
the decoder corresponding to each sub-playing area locates each key frame corresponding to each starting playing time in the target video based on the starting playing time of the target video in each sub-playing area;
and the decoder corresponding to each sub-playing area decodes the target video by taking each positioned key frame as an initial frame based on the playing duration of the target video in each sub-playing area to obtain a plurality of sub-videos corresponding to each playing duration.
In one possible implementation, the method further includes:
and responding to the playing control operation of any sub-playing area, and playing the sub-video corresponding to the sub-playing area based on the playing control operation.
In one possible implementation, the method further includes:
when the playing control operation comprises a full-screen playing operation, responding to the full-screen playing operation of any one sub-playing area, and switching the sub-video played by the sub-playing area to the full-screen playing;
when the playing control operation comprises an amplification display operation, responding to the amplification display operation of any one of the sub-playing areas, performing amplification adjustment on the size of the sub-playing area, and playing the sub-video corresponding to the sub-playing area in the adjusted sub-playing area.
In one possible implementation, the method further includes:
when the playing control operation comprises a positioning playing operation, in response to the positioning playing operation of any one of the sub-playing areas, searching a key frame corresponding to the positioning playing operation in the sub-video corresponding to the sub-playing area, and playing the corresponding sub-video in the sub-playing area by taking the key frame as an initial frame.
In one aspect, a video preview apparatus is provided, the apparatus including:
the first determining module is used for determining a playing area of a target video to be played;
a dividing module, configured to divide the playing area into a plurality of sub-playing areas;
the second determining module is used for determining the playing information of each sub-playing area based on a plurality of sub-playing areas and the playing duration of the target video;
the acquisition module is used for acquiring a plurality of sub-videos of the target video based on the playing information of each sub-playing area, wherein one sub-video corresponds to one sub-playing area;
and the playing module is used for playing the corresponding sub-video in each sub-playing area of the playing area.
In one possible implementation, the dividing module is configured to:
determining the number of the sub-play areas in response to the division operation of the play area;
and displaying a plurality of the sub-playing areas based on the size parameter of the playing area and the number of the sub-playing areas.
In one possible implementation, the second determining module includes:
a first determining unit, configured to determine, based on the number of the sub-play areas and the play duration of the target video, a play duration of the target video in each of the sub-play areas;
and the second determining unit is used for determining the starting playing time of the target video in each sub-playing area based on the playing duration of the target video in each sub-playing area and the playing time length of the target video.
In a possible implementation manner, the first determining unit is configured to:
and dividing the playing time length of the target video evenly according to the number of the sub-playing areas to obtain the playing duration of the target video in each sub-playing area.
In one possible implementation, the apparatus further includes:
and the decoding module is used for decoding the target video by the decoder corresponding to each sub-playing area based on the playing information of each sub-playing area to obtain a plurality of sub-videos of the target video.
In one possible implementation, the decoding module is further configured to:
the decoder corresponding to each sub-playing area locates each key frame corresponding to each starting playing time in the target video based on the starting playing time of the target video in each sub-playing area;
and the decoder corresponding to each sub-playing area decodes the target video by taking each positioned key frame as an initial frame based on the playing duration of the target video in each sub-playing area to obtain a plurality of sub-videos corresponding to each playing duration.
In one possible implementation manner, the playing module is further configured to:
and responding to the playing control operation of any one of the sub-playing areas, and playing the sub-video corresponding to the sub-playing area based on the playing control operation.
In one possible implementation manner, the playing module is further configured to:
when the playing control operation comprises a full-screen playing operation, responding to the full-screen playing operation of any one sub-playing area, and switching the sub-video played by the sub-playing area to the full-screen playing;
when the playing control operation comprises an amplification display operation, responding to the amplification display operation of any one of the sub-playing areas, performing amplification adjustment on the size of the sub-playing area, and playing the sub-video corresponding to the sub-playing area in the adjusted sub-playing area.
In one possible implementation manner, the playing module is further configured to:
when the playing control operation comprises a positioning playing operation, in response to the positioning playing operation of any one of the sub-playing areas, searching a key frame corresponding to the positioning playing operation in the sub-video corresponding to the sub-playing area, and playing the corresponding sub-video in the sub-playing area by taking the key frame as an initial frame.
In one aspect, a terminal is provided, which includes a processor and a memory, where at least one program code is stored in the memory, and the at least one program code is loaded and executed by the processor to implement the video preview method.
In one aspect, a computer-readable storage medium having at least one program code stored therein is provided, the at least one program code being loaded and executed by a processor to implement the video preview method described above.
In one aspect, a computer program product or a computer program is provided. The computer program product or the computer program includes computer program code stored in a computer-readable storage medium. A processor of a computer device reads the computer program code from the computer-readable storage medium and executes it, so that the computer device performs the video preview method described above.
The present application provides a video preview method. For a video that a user wants to preview quickly, before the video is played it is divided into a plurality of sub-videos according to its playing duration, and the corresponding sub-video is then played in each sub-playing area of the playing area. In this way, the plurality of sub-videos of the video are played simultaneously across the whole playing area, so that the user can quickly preview the complete content of the video, which greatly improves the efficiency with which the user previews the video.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of a video preview method provided in an embodiment of the present application;
fig. 2 is a flowchart of a video preview method provided in an embodiment of the present application;
fig. 3 is a flowchart of another video previewing method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a video preview provided by an embodiment of the present application;
fig. 5 is a schematic structural diagram of a video preview device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a server provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an implementation environment of a video preview method according to an embodiment of the present application. Referring to fig. 1, the implementation environment includes: a terminal 101 and a server 102.
The terminal 101 may be at least one of a smartphone, a smart watch, a desktop computer, a laptop computer, a virtual reality terminal, an augmented reality terminal, a wireless terminal, and the like. The terminal 101 has a communication function and can access the internet. The terminal 101 may generally refer to one of a plurality of terminals, and this embodiment is only illustrated with the terminal 101; those skilled in the art will appreciate that the number of terminals may be greater or fewer. An application having a video playback function runs on the terminal 101.
The server 102 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms. The server 102 is configured to provide background services, such as a video storage service and a video transmission service, for the application program running on the terminal 101.
The server 102 and the terminal 101 may be directly or indirectly connected through wired or wireless communication, which is not limited in this embodiment of the application. Alternatively, the number of the servers 102 may be more or less, and the embodiment of the present application is not limited thereto.
Fig. 2 is a flowchart of a video preview method according to an embodiment of the present application. The embodiment is described by taking a terminal as an execution subject, and referring to fig. 2, the embodiment includes:
201. the terminal determines a playing area of a target video to be played.
In the embodiment of the application, the target video is a video which the user wants to preview. Optionally, the target video is a local video stored on the terminal. Optionally, the target video is an online video. The source of the target video is not particularly limited in the embodiments of the present application.
202. The terminal divides the playing area into a plurality of sub-playing areas.
In the embodiment of the application, after the terminal divides the playing area, the obtained divided playing area includes a plurality of partitions, each partition is a sub-playing area, and the size of the sub-playing area is smaller than that of the playing area.
203. And the terminal determines the playing information of each sub-playing area based on the plurality of sub-playing areas and the playing time length of the target video.
In the embodiment of the present application, the playing information refers to the starting playing time and the playing duration of the target video in the sub-playing area.
204. The terminal acquires a plurality of sub-videos of the target video based on the playing information of each sub-playing area, wherein one sub-video corresponds to one sub-playing area.
In the embodiment of the present application, each sub-play area corresponds to a decoder, and the decoder is configured to decode a target video to obtain a plurality of sub-videos of the target video.
205. And the terminal plays the corresponding sub-video in each sub-playing area of the playing area.
In the embodiment of the present application, a video preview method is provided. For a video that a user wants to preview quickly, before the video is played it is divided into a plurality of sub-videos according to its playing duration, and the corresponding sub-video is then played in each sub-playing area of the playing area. In this way, the plurality of sub-videos of the video are played simultaneously across the whole playing area, so that the user can quickly preview the complete content of the video, which greatly improves the efficiency with which the user previews the video.
Fig. 3 is a flowchart of another video preview method provided in an embodiment of the present application. The embodiment is described by taking a terminal as an execution subject, and referring to fig. 3, the embodiment includes:
301. and the terminal responds to the click operation of the user on the target video to be played and displays the playing area of the target video.
In the embodiment of the present application, the terminal provides a video playing function. When a user clicks a target video to be previewed on the terminal, the terminal responds to the click operation, starts the video playing function, obtains a video playing interface, and displays the playing area of the target video on the video playing interface.
Optionally, the target video is a local video stored on the terminal. A local video selection interface is displayed on the terminal, and the user selects the target video to be previewed by clicking it. The terminal then responds to the click operation, obtains a video playing interface, and displays the playing area of the target video on the video playing interface.
Optionally, the target video is an online video. An online video selection interface is displayed on the terminal, and the user selects the target video to be previewed by clicking it. The terminal responds to the click operation and sends an acquisition request for the target video to the target server, the request carrying an identifier of the target video and an identifier of the terminal. The target server sends the video resource of the target video to the terminal based on the acquisition request; the terminal receives the video resource, obtains a video playing interface, and displays the playing area of the target video on the video playing interface.
It should be noted that, in other embodiments, the terminal provides a plurality of videos on the acquired video playing interface, and the user can select a target video for previewing by performing a click operation on a video that the user wants to preview. The embodiment of the present application does not specifically limit the selection manner of the target video.
Additionally, in some embodiments, the target video is a segment of a complete video. Specifically, after the user clicks a first video to be previewed on the terminal, the terminal responds to the click operation, starts the video playing function, and obtains a video playing interface on which a segment selection operation for the first video is provided. The terminal detects the user's segment selection operation on the first video and determines a second video, namely the target video, based on the segment selection operation. For example, the segment selection operation is implemented through an editing operation on an editable dialog box: the user inputs the start time and the end time of the segment to be previewed in the editable dialog box to select the segment video.
It should be noted that, the embodiment of the present application is not limited to whether the target video is a complete video or a segment video in a complete video.
302. The terminal determines the number of the sub-play areas in response to the division operation of the play area.
In the embodiment of the application, after the terminal divides the playing area, the obtained divided playing area includes a plurality of partitions, each partition is a sub-playing area, and the size of the sub-playing area is smaller than that of the playing area. The terminal detects the dividing operation of the user to the playing area, and determines the number of the sub-playing areas based on the dividing operation.
Two implementations of the partitioning operation are explained below:
in some embodiments, the partitioning operation is implemented by a user editing operation on an editable dialog box. In the video playing interface, an editable dialog box for dividing the playing area is provided, a user inputs numbers in the editable dialog box to realize the dividing operation of the playing area, and when the terminal detects the editing operation of the user in the editable dialog box, the number of the sub-playing areas is determined based on the numbers input by the user. For example, if the user inputs the number 4 in the editable dialog box, the terminal determines that the number of the sub-play areas is 4.
In other embodiments, the partitioning is performed by a user selection of a partitioning option. In a video playing interface, a plurality of division options for the playing area are provided, a user realizes the division operation of the playing area by selecting the corresponding division options, and when the terminal detects the selection operation of the user on the division options, the number of the sub-playing areas is determined based on the division options selected by the user. For example, three division options for the playing area are provided in a video playing interface displayed by the terminal, namely "division into 2 areas", "division into 4 areas", and "division into 6 areas", the user selects the division option "division into 4 areas", and the terminal determines that the number of the sub-playing areas is 4 based on the selection operation of the user on the division option.
It should be noted that, in practical applications, the implementation manner of the dividing operation is not limited to the above two cases. Optionally, the user can implement the corresponding division operation by means of voice input, shortcut gestures, and the like. The embodiment of the present application does not specifically limit the specific implementation manner of the dividing operation.
303. The terminal displays a plurality of sub-play areas based on the size parameter of the play area and the number of the sub-play areas.
In the embodiment of the present application, the size parameter of the playing area refers to the width and height of the playing area. The terminal divides the playing area based on the size parameter of the playing area and the number of the sub-playing areas determined in the above step 302, and displays a plurality of sub-playing areas. Optionally, a cover picture of the target video is displayed in each sub-play area. Optionally, a default picture preset by the terminal is displayed in each sub-play area. The display content in the sub-play area in this step is not specifically limited in the embodiment of the present application.
Fig. 4 is a schematic diagram of a video preview provided in an embodiment of the present application. Referring to the left diagram in fig. 4, the size parameters of the playing area in the diagram are: width w and height h, wherein w >0 and h > 0. The terminal equally divides the playing area based on the size parameter and the number of the sub-playing areas, which may specifically refer to the right diagram in fig. 4, where the number of the sub-playing areas is 4, and the sizes of the sub-playing areas are the same.
The plurality of sub-play areas shown in fig. 4 are obtained by equally dividing the play area. In some embodiments, the terminal provides a size adjustment function for the sub-play area, and can obtain the sub-play area with a corresponding size based on the size adjustment requirement of the user for the sub-play area. For example, the terminal displays a size adjustment interface of the sub-play areas, on the size adjustment interface, the size parameter of each sub-play area is displayed, a user can trigger a corresponding size adjustment instruction by adjusting the size parameter of each sub-play area, and the terminal responds to the size adjustment instruction to display the corresponding sub-play area. For another example, the sub-play areas displayed by the terminal are operable, the user can perform zoom operation on the corresponding sub-play areas according to the size adjustment requirement on each sub-play area, trigger the corresponding size adjustment instruction, and the terminal responds to the size adjustment instruction to display the corresponding sub-play areas. The size display of the sub-play area is not specifically limited in the embodiment of the present application. By the method for adjusting the size of the sub-playing area, the personalized requirements of the user on the video preview effect can be met.
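To make the division of the playing area concrete, the following is a minimal sketch of one possible equal-split layout computation. It assumes a near-square grid (for example, 4 sub-playing areas arranged as 2 x 2, matching fig. 4); the function name and grid strategy are illustrative assumptions and are not mandated by the embodiment.

```python
import math

def split_play_area(width: int, height: int, count: int):
    """Return (x, y, w, h) for each sub-playing area of an equal grid split.

    Illustrative sketch: arranges `count` sub-playing areas in a near-square
    grid inside a playing area of size width x height (e.g. 4 -> 2 x 2).
    """
    cols = math.ceil(math.sqrt(count))
    rows = math.ceil(count / cols)
    sub_w, sub_h = width // cols, height // rows
    areas = []
    for i in range(count):
        r, c = divmod(i, cols)
        areas.append((c * sub_w, r * sub_h, sub_w, sub_h))
    return areas

# Example: a 1280 x 720 playing area divided into 4 sub-playing areas of 640 x 360.
print(split_play_area(1280, 720, 4))
```

When the user adjusts the size of a single sub-playing area as described above, the returned rectangles would simply be recomputed or rescaled accordingly.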
The above steps 302 to 303 are an embodiment in which the terminal displays a plurality of sub-play areas in response to the division operation of the play area. Optionally, after performing step 301, the terminal can display a plurality of sub-play areas in a default manner, for example, the default manner is to display 4 sub-play areas. The display mode of the multiple sub-play areas is not specifically limited in the embodiment of the present application.
304. And the terminal determines the playing information of each sub-playing area based on the plurality of sub-playing areas and the playing time of the target video.
In the embodiment of the application, the terminal acquires the playing time of the target video, and then determines the playing information of each sub-playing area based on the playing time and the number of the sub-playing areas. The playing information refers to the starting playing time and the playing duration of the target video in the sub-playing area.
The following describes a method for determining the playing information of each sub-playing area, and the method includes the following two steps:
the method comprises the following steps: and the terminal determines the playing duration of the target video in each sub-playing area based on the number of the sub-playing areas and the playing duration of the target video.
Optionally, the terminal divides the playing time of the target video evenly according to the number of the sub-playing areas to obtain the playing duration of the target video in each sub-playing area. For example, if the playing time of the target video is 20 minutes and the number of the sub-playing areas is 4, the playing duration of the target video in each sub-playing area is 5 minutes.
Optionally, the terminal provides a function of setting the playing duration. The terminal displays a setting interface for the playing duration, on which the playing time of the target video and a time setting option for each sub-playing area are displayed. The user can adjust the time setting option of each sub-playing area to set the playing duration of the target video in that area, and the terminal then determines the playing duration of the target video in each sub-playing area based on the user's adjustment operations. For example, the playing time of the target video is 20 minutes and the number of the sub-playing areas is 4. The terminal displays the time setting options of the 4 sub-playing areas, the user sets the playing durations of the target video in the 4 sub-playing areas to 4 minutes, 5 minutes, 6 minutes, and 5 minutes respectively by adjusting each time setting option, and the terminal determines the playing duration of the target video in each sub-playing area based on the adjustment operations.
The method for determining the playing duration of the target video in each sub-playing area is not particularly limited in the embodiment of the present application.
Step two: and the terminal determines the initial playing time of the target video in each sub-playing area based on the playing duration of the target video in each sub-playing area and the playing duration of the target video.
In the embodiment of the present application, the terminal takes the playing time length of the target video as the total duration and takes the playing duration of the target video in each sub-playing area as the length of the segment to be played in that area, and then determines the starting playing time of the target video in each sub-playing area in turn according to the order of the sub-playing areas.
Alternatively, this step is illustrated below with reference to fig. 4:
the right diagram of fig. 4 shows 4 sub-play areas, which are sub-play area 1, sub-play area 2, sub-play area 3, and sub-play area 4. The playing time of the target video is 20 minutes, the playing duration of the target video in each sub-playing area is 5 minutes, and the initial playing time of the target video in the sub-playing area 1 is 0 minute; the initial playing time of the target video in the sub-playing area 2 is 5 minutes; the initial playing time of the target video in the sub-playing area 3 is 10 minutes; the initial playing time of the target video in the sub-playing area 4 is 15 minutes.
Optionally, taking the playing durations of the target video in the 4 sub-playing regions as 4 minutes, 5 minutes, 6 minutes and 5 minutes, respectively, as an example, the starting playing time of the target video in the sub-playing region 1 is 0 minute; the initial playing time of the target video in the sub-playing area 2 is 4 minutes; the initial playing time of the target video in the sub-playing area 3 is 9 minutes; the initial playing time of the target video in the sub-playing area 4 is 15 minutes.
Through the first step and the second step, the terminal can determine the playing information of each sub-playing area, for example, the playing information of each sub-playing area is as follows:
"sub-play area 1: the initial playing time is 0 minute, and the playing duration time is 5 minutes;
sub-play area 2: the initial playing time is 5 minutes, and the playing duration time is 5 minutes;
sub-play area 3: the initial playing time is 10 minutes, and the playing duration time is 5 minutes;
sub-play area 4: the initial play time is 15 minutes and the play duration is 5 minutes ".
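The determination of playing information in the above two steps can be summarized with a short sketch. The functions below reproduce the two examples above (a 20-minute target video split evenly over 4 sub-playing areas, and user-set durations of 4, 5, 6, and 5 minutes); they are a simplified illustration, not the literal implementation of the embodiment.

```python
def compute_play_info(total_duration_min: float, num_areas: int):
    """Return (start_time, play_duration) in minutes for each sub-playing area,
    dividing the total playing duration of the target video evenly."""
    duration = total_duration_min / num_areas
    return [(i * duration, duration) for i in range(num_areas)]

def start_times_from_durations(durations):
    """Start times when the user sets per-area durations,
    e.g. [4, 5, 6, 5] -> [0, 4, 9, 15]."""
    starts, t = [], 0
    for d in durations:
        starts.append(t)
        t += d
    return starts

# A 20-minute target video and 4 sub-playing areas:
# [(0.0, 5.0), (5.0, 5.0), (10.0, 5.0), (15.0, 5.0)]
print(compute_play_info(20, 4))
print(start_times_from_durations([4, 5, 6, 5]))  # [0, 4, 9, 15]
```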
305. The terminal acquires a plurality of sub-videos of the target video based on the playing information of each sub-playing area, wherein one sub-video corresponds to one sub-playing area.
In the embodiment of the present application, each sub-playing area corresponds to a decoder, and the decoder is configured to decode the target video to obtain a plurality of sub-videos of the target video. The terminal decodes the target video by the decoder corresponding to each sub-playing area based on the playing information of that sub-playing area, to obtain a plurality of sub-videos of the target video. Optionally, the terminal acquires the target video and then sends a decoding instruction to the decoder of each sub-playing area based on the playing information of that sub-playing area. The decoding instruction carries the video data of the target video, the area identifier of the sub-playing area, and the playing information corresponding to the sub-playing area, and instructs the decoder to decode the target video to obtain the sub-video corresponding to the sub-playing area.
Specifically, this includes the following two steps:
the method comprises the following steps: and the decoder corresponding to each sub-playing area locates each key frame corresponding to each starting playing time in the target video based on the starting playing time of the target video in each sub-playing area.
Step two: and the decoder corresponding to each sub-playing area decodes the target video by taking each positioned key frame as an initial frame based on the playing duration of the target video in each sub-playing area to obtain a plurality of sub-videos corresponding to each playing duration.
The following specifically describes this step by taking the sub-playing area 2 in fig. 4 as an example. The decoder corresponding to the sub-playing area receives a decoding instruction that carries the video data of the target video, the area identifier "2" of the sub-playing area 2, and the playing information "sub-playing area 2: the starting playing time is 5 minutes, and the playing duration is 5 minutes". Based on the starting playing time, the decoder locates, in a seek manner, the key frame of the target video at the 5-minute mark in the received video data, and then, based on the playing duration, decodes the 5 minutes of video starting from that key frame to obtain the video frames of a sub-video that takes the key frame as the start frame and lasts 5 minutes.
The method for acquiring a sub-video corresponding to a certain sub-play area is described above by way of example, and accordingly, the method for acquiring each sub-video may refer to the above method, which is not described herein again.
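As an illustration of the per-area decoding described above (and not the embodiment's actual decoder), the sketch below extracts each sub-video with ffmpeg: placing -ss before -i makes ffmpeg seek to the key frame at or near the starting playing time, and -t limits the output to the playing duration, which corresponds to the key-frame positioning and duration-bounded decoding of steps one and two. The file names and function name are assumptions for the example.

```python
import subprocess

def extract_sub_videos(source: str, play_info):
    """play_info: list of (start_seconds, duration_seconds), one per sub-playing area."""
    outputs = []
    for i, (start, duration) in enumerate(play_info, start=1):
        out = f"sub_video_{i}.mp4"            # assumed output naming
        subprocess.run(
            ["ffmpeg", "-y",
             "-ss", str(start),               # seek: output starts at a key frame near `start`
             "-i", source,
             "-t", str(duration),             # keep only the playing duration
             "-c", "copy",                    # no re-encoding for this illustration
             out],
            check=True,
        )
        outputs.append(out)
    return outputs

# Example for the 20-minute target video and 4 sub-playing areas of 5 minutes each:
# extract_sub_videos("target_video.mp4", [(0, 300), (300, 300), (600, 300), (900, 300)])
```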
306. And the terminal plays the corresponding sub-video in each sub-playing area.
In the embodiment of the present application, after the decoder corresponding to each sub-playing area obtains the corresponding sub-video, the corresponding video frames are decoded, and the terminal then plays the corresponding sub-video in each sub-playing area. Referring to the right diagram in fig. 4, there are 4 sub-playing areas in total, and the terminal plays the corresponding sub-video in each sub-playing area, so that the complete target video is played within 5 minutes, the efficiency of video preview is increased fourfold, and the complete content of the target video can still be previewed.
This step can be achieved in any of the following three manners:
The first manner is as follows: the decoder of each sub-playing area scales the decoded video frames based on the size of each sub-playing area, so that the size of the video frames is adapted to the size of the corresponding sub-playing area. The terminal then plays the corresponding sub-video in each sub-playing area.
The second manner is as follows: the decoder of each sub-playing area scales the decoded video frames based on the size of each sub-playing area, so that the size of the video frames is adapted to the size of the corresponding sub-playing area; the decoder of the playing area then splices the video frames of the sub-playing areas in real time according to the time sequence to obtain real-time spliced video frames adapted to the size of the playing area, and the terminal plays the video in the playing area based on the real-time spliced video frames. In this manner, the video frames of the sub-playing areas are spliced in real time and the spliced video frames are played in the playing area in real time, thereby achieving the playing effect of playing the corresponding sub-video in each sub-playing area.
The third manner is as follows: the decoder of each sub-playing area scales the decoded video frames based on the size of each sub-playing area, so that the size of the video frames is adapted to the size of the corresponding sub-playing area; the decoder of the playing area then splices the video frames to obtain video frames adapted to the size of the playing area, and the terminal plays the video in the playing area based on the spliced video frames. In this manner, the video frames of the sub-playing areas are spliced and then played in the playing area, thereby achieving the playing effect of playing the corresponding sub-video in each sub-playing area.
It should be noted that, in practical applications, there may be other manners for the terminal to play the corresponding sub-video in each sub-play area, and the three manners described above are merely illustrative, and the manner for the terminal to play the sub-video is not specifically limited in the embodiment of the present application.
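For the second and third manners above, the splicing of scaled video frames into a single frame that fits the playing area can be pictured with the following sketch, which uses numpy arrays as stand-ins for decoded frames; the grid shape and function name are assumptions, not part of the embodiment.

```python
import numpy as np

def stitch_frames(frames, rows: int = 2, cols: int = 2):
    """Splice rows*cols already-scaled frames (H x W x 3 arrays of equal size)
    into one frame the size of the whole playing area."""
    assert len(frames) == rows * cols
    h, w = frames[0].shape[:2]
    canvas = np.zeros((rows * h, cols * w, 3), dtype=frames[0].dtype)
    for idx, frame in enumerate(frames):
        r, c = divmod(idx, cols)
        canvas[r * h:(r + 1) * h, c * w:(c + 1) * w] = frame
    return canvas

# Four 360 x 640 sub-area frames -> one 720 x 1280 frame for the playing area.
frames = [np.zeros((360, 640, 3), dtype=np.uint8) for _ in range(4)]
print(stitch_frames(frames).shape)  # (720, 1280, 3)
```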
Through the above steps 301 to 306, a playing effect of simultaneously playing a plurality of segments of a segment of video in the whole video playing area is achieved, and optionally, after executing step 306, the terminal can also execute the following step 307.
307. And the terminal responds to the playing control operation of the user on any sub-playing area and plays the sub-video corresponding to the sub-playing area based on the playing control operation.
In this embodiment of the application, when the terminal plays the sub-videos in the first or second manner described in step 306, after the terminal plays the corresponding sub-video in each sub-playing area, the user can perform a playing control operation on each sub-playing area to control video playback.
Optionally, the playing control operation is a full-screen playing operation, and the terminal switches the sub-video played in the sub-playing area to the full-screen playing in response to the full-screen playing operation on any sub-playing area. It should be noted that, after the terminal switches the sub-video played in the sub-play area to full-screen play, the terminal responds to the exit operation of the user on the current full-screen play, exits from full-screen play, and continues to play the corresponding sub-video in the sub-play area.
For example, the terminal displays a full-screen play button in a sub-play area, when the terminal detects that a user clicks the full-screen play button in a certain sub-play area, the sub-video played in the sub-play area is switched to full-screen play, the terminal displays a full-screen exit button in a full-screen play mode, when the terminal detects that the user clicks the full-screen exit button, the terminal exits full-screen play, and the corresponding sub-video continues to be played in the sub-play area. The embodiment of the present application does not specifically limit the implementation manner of the full screen play operation.
Optionally, the playing control operation is an enlargement display operation, the terminal responds to the enlargement display operation of any sub-playing area, enlarges and adjusts the size of the sub-playing area, and plays the sub-video corresponding to the sub-playing area in the adjusted sub-playing area.
For example, the sub-play area displayed by the terminal is operable, a user can perform an amplification display operation on the sub-play area according to an amplification display requirement on any sub-play area, trigger a corresponding size adjustment instruction, and the terminal responds to the size adjustment instruction to display the amplified sub-play area. It should be noted that, in the embodiment of the present application, when the terminal performs the enlargement adjustment on the size of the sub-play area, correspondingly, the size of the remaining sub-play areas that are not subjected to the enlargement display operation is reduced and adjusted to adapt to the entire play area; or, the terminal performs amplification adjustment on the size of the sub-playing area, so that the sub-playing area covers the rest sub-playing areas which are not subjected to amplification display operation. The embodiment of the present application does not specifically limit the implementation manner of adjusting the size of the sub-play area by the terminal.
Optionally, the playing control operation is a positioning playing operation, and the terminal, in response to the positioning playing operation for any one of the sub-playing areas, searches for a key frame corresponding to the positioning playing operation in the sub-video corresponding to the sub-playing area, and plays the corresponding sub-video in the sub-playing area with the key frame as an initial frame.
For example, the terminal displays a playing progress bar in a sub-playing area, and when the terminal detects a dragging operation performed by a user on the playing progress bar in a certain sub-playing area, or a clicking operation on a certain position on the playing progress bar, the terminal searches for a corresponding key frame in the sub-video based on the dragging operation or the clicking operation, and plays the corresponding sub-video in the sub-playing area with the key frame as a start frame.
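The positioning playing operation above amounts to mapping the clicked or dragged position on a sub-playing area's progress bar to the nearest preceding key frame within that area's sub-video. A minimal sketch follows; the key-frame list and names are illustrative assumptions.

```python
def locate_start_keyframe(progress_fraction: float, sub_duration: float, keyframe_times):
    """Map a progress-bar position (0..1) within a sub-playing area to the time of
    the nearest key frame at or before it, used as the start frame for playback."""
    target = progress_fraction * sub_duration
    earlier = [t for t in keyframe_times if t <= target]
    return max(earlier) if earlier else 0.0

# Dragging to 60% of a 300-second sub-video whose key frames fall every 10 s:
print(locate_start_keyframe(0.6, 300, list(range(0, 300, 10))))  # 180
```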
Optionally, the play control operation is a pause play operation, a resume play operation, a fast forward operation, a fast backward operation, and the like, and in practical applications, the terminal provides multiple play functions, so that a user performs the play control operation based on different play functions to realize play control of video play, which is not specifically limited in this embodiment of the present application.
Optionally, the playing control operation is implemented by a playing control displayed in the sub-playing area by the terminal, and the user can implement playing control of video playing by operating the playing control. Next, how to implement the play control operation by the play control when the terminal plays the sub-video in the first two manners shown in step 306 is illustrated.
In a scenario where the terminal plays the sub-video in the first manner shown in step 306:
for example, the playing control is a full-screen playing button for providing a full-screen playing function, when the terminal detects a click operation performed by a user on the full-screen playing button in a certain sub-playing area, a decoder corresponding to the sub-playing area amplifies a video frame corresponding to the sub-playing area, so that the size of the video frame is adapted to the size of full-screen playing, and then the terminal performs full-screen playing on the adjusted video frame. It should be noted that, when the terminal detects that the user performs a click operation on a full-screen play button in a certain sub-play area, the other sub-play areas that are not subjected to the click operation continue to play the corresponding sub-video, or the playing of the corresponding sub-video is suspended, and the user can set the play states of the other sub-play areas according to the requirement, which is not specifically limited in this embodiment of the application.
For another example, the playing control is a play button configured to provide the playing functions of pausing and resuming the video. When the terminal detects a click operation performed by the user on the play button, the terminal pauses playing the sub-video in the sub-playing area; when the terminal detects a click operation on the play button again, the terminal resumes playing the sub-video in the sub-playing area.
In a scenario where the terminal plays the sub-video in the second manner shown in step 306:
for example, the playing control is a full-screen playing button for providing a full-screen playing function, when the terminal detects that a user clicks the full-screen playing button in a certain sub-playing area, a decoder corresponding to the sub-playing area amplifies a video frame corresponding to the sub-playing area, so that the size of the video frame is adapted to the size of full-screen playing, and then the terminal performs full-screen playing on the adjusted video frame. It should be noted that, for other sub-play areas where the click operation is not performed, the terminal suspends splicing the video frames in the sub-play areas, or continues to splice and play the video frames in the sub-play areas in real time according to the time sequence, and the user can set the play states of the other sub-play areas according to the requirement, which is not specifically limited in this embodiment of the present application.
For another example, the playing control is a play button for providing the playing functions of pausing and resuming the video. When the terminal detects a click operation performed by the user on the play button in a certain sub-playing area, the terminal continues to splice and play, in real time and according to the time sequence, only the video frames of the other sub-playing areas on which no click operation was performed; that is, only the video in the clicked sub-playing area is in the paused state, and the other sub-playing areas remain in the normal playing state. When the terminal detects a click operation on the play button again, the terminal splices, in real time, the currently displayed video frame of the clicked sub-playing area as the start frame together with the video frames currently to be spliced of the other sub-playing areas, and then plays the spliced video frames in the playing area in real time; that is, the videos in all the sub-playing areas are then in the normal playing state.
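The pause behavior described for the second manner (a paused sub-playing area keeps showing its current frame while the other areas continue to be spliced and played in real time) can be sketched as follows; this is an assumed simplification, not the embodiment's decoder logic.

```python
class SubAreaStream:
    """Illustrative per-area frame source for real-time splicing."""

    def __init__(self, frames):
        self.frames = frames      # decoded frames, already scaled to the sub-area size
        self.index = 0
        self.paused = False
        self.current = frames[0]

    def next_frame(self):
        # A paused area keeps repeating its current frame; a playing area advances.
        if not self.paused and self.index < len(self.frames):
            self.current = self.frames[self.index]
            self.index += 1
        return self.current

# On each refresh, the terminal would splice [area.next_frame() for area in sub_areas]
# with stitch_frames(...) above and display the result in the playing area.
```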
In the embodiment of the present application, a video preview method is provided. For a video that a user wants to preview quickly, before the video is played it is divided into a plurality of sub-videos according to its playing duration, and the corresponding sub-video is then played in each sub-playing area of the playing area. In this way, the plurality of sub-videos of the video are played simultaneously across the whole playing area, so that the user can quickly preview the complete content of the video, which greatly improves the efficiency with which the user previews the video.
Fig. 5 is a schematic structural diagram of a video preview device provided in an embodiment of the present application, and referring to fig. 5, the device includes: a first determining module 501, a dividing module 502, a second determining module 503, an obtaining module 504 and a playing module 505.
A first determining module 501, configured to determine a playing area of a target video to be played;
a dividing module 502, configured to divide the playing area into multiple sub-playing areas;
a second determining module 503, configured to determine playing information of each sub-playing area based on a plurality of the sub-playing areas and playing time lengths of the target video;
an obtaining module 504, configured to obtain multiple sub-videos of the target video based on the playing information of each sub-playing area, where one sub-video corresponds to one sub-playing area;
the playing module 505 is configured to play the corresponding sub-video in each sub-playing area of the playing area.
In one possible implementation, the dividing module 502 is configured to:
determining the number of the sub-play areas in response to the division operation of the play area;
and displaying a plurality of the sub-playing areas based on the size parameter of the playing area and the number of the sub-playing areas.
In one possible implementation, the second determining module 503 includes:
a first determining unit, configured to determine, based on the number of the sub-play areas and the play duration of the target video, a play duration of the target video in each of the sub-play areas;
and the second determining unit is used for determining the starting playing time of the target video in each sub-playing area based on the playing duration of the target video in each sub-playing area and the playing time length of the target video.
In a possible implementation manner, the first determining unit is configured to:
and dividing the playing time length of the target video evenly according to the number of the sub-playing areas to obtain the playing duration of the target video in each sub-playing area.
In one possible implementation, the apparatus further includes:
and the decoding module is used for decoding the target video by the decoder corresponding to each sub-playing area based on the playing information of each sub-playing area to obtain a plurality of sub-videos of the target video.
In one possible implementation, the decoding module is further configured to:
the decoder corresponding to each sub-playing area locates each key frame corresponding to each starting playing time in the target video based on the starting playing time of the target video in each sub-playing area;
and the decoder corresponding to each sub-playing area decodes the target video by taking each positioned key frame as an initial frame based on the playing duration of the target video in each sub-playing area to obtain a plurality of sub-videos corresponding to each playing duration.
In a possible implementation manner, the playing module 505 is further configured to:
and responding to the playing control operation of any one of the sub-playing areas, and playing the sub-video corresponding to the sub-playing area based on the playing control operation.
In a possible implementation manner, the playing module 505 is further configured to:
when the playing control operation comprises a full-screen playing operation, responding to the full-screen playing operation of any one sub-playing area, and switching the sub-video played by the sub-playing area to the full-screen playing;
when the playing control operation comprises an amplification display operation, responding to the amplification display operation of any one of the sub-playing areas, performing amplification adjustment on the size of the sub-playing area, and playing the sub-video corresponding to the sub-playing area in the adjusted sub-playing area.
In a possible implementation manner, the playing module 505 is further configured to:
when the playing control operation comprises a positioning playing operation, in response to the positioning playing operation of any one of the sub-playing areas, searching a key frame corresponding to the positioning playing operation in the sub-video corresponding to the sub-playing area, and playing the corresponding sub-video in the sub-playing area by taking the key frame as an initial frame.
In the embodiment of the present application, a video preview apparatus is provided. For a video that a user wants to preview quickly, the video is divided, before playback, into a plurality of sub-videos according to its playing duration, and the corresponding sub-videos are then played in the respective sub-playing areas of the playing area. In this way, the plurality of sub-videos of the video are played simultaneously across the whole playing area, so that the user can quickly preview the complete content of the video, which greatly improves the efficiency of video previewing.
It should be noted that, when the video preview apparatus provided in the foregoing embodiment performs video preview, the division into the above functional modules is merely illustrative; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the video preview apparatus and the video preview method provided by the above embodiments belong to the same concept, and their specific implementation processes are detailed in the method embodiments and are not described herein again.
Fig. 6 is a schematic structural diagram of a server 600 according to an embodiment of the present application. The server 600 may vary considerably depending on its configuration or performance, and may include one or more processors (CPUs) 601 and one or more memories 602, where at least one program code is stored in the memory 602 and is loaded and executed by the processor 601 to implement the video preview method provided by the foregoing method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, and the server may further include other components for implementing device functions, which are not described herein again.
Fig. 7 is a schematic structural diagram of a terminal 700 according to an embodiment of the present application. The terminal 700 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 700 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and so on.
In general, terminal 700 includes: a processor 701 and a memory 702.
The processor 701 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 701 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also called a Central Processing Unit (CPU); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 701 may further include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. Memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 702 is used to store at least one program code for execution by processor 701 to implement the video preview method provided by the method embodiments herein.
In some embodiments, the terminal 700 may further optionally include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 703 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 704, a display screen 705, a camera assembly 706, an audio circuit 707, a positioning component 708, and a power source 709.
The peripheral interface 703 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 701 and the memory 702. In some embodiments, processor 701, memory 702, and peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 704 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 704 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 704 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 705 is a touch display screen, the display screen 705 also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 701 as a control signal for processing. At this point, the display screen 705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 705, disposed on the front panel of the terminal 700; in other embodiments, there may be at least two display screens 705, respectively disposed on different surfaces of the terminal 700 or in a folded design; in still other embodiments, the display screen 705 may be a flexible display screen disposed on a curved surface or a folded surface of the terminal 700. The display screen 705 may even be arranged in a non-rectangular irregular shape, that is, an irregularly shaped screen. The display screen 705 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 706 is used to capture images or video. Optionally, the camera assembly 706 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be combined to implement a background blurring function, and the main camera and the wide-angle camera can be combined to implement panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 706 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 707 may include a microphone and a speaker. The microphone is used to collect sound waves of the user and the environment, convert the sound waves into electrical signals, and input them to the processor 701 for processing or to the radio frequency circuit 704 for voice communication. For stereo sound collection or noise reduction, a plurality of microphones may be disposed at different portions of the terminal 700. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The speaker may be a traditional membrane speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 707 may also include a headphone jack.
The positioning component 708 is used to locate the current geographic location of the terminal 700 for navigation or LBS (Location Based Service). The positioning component 708 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 709 is provided to supply power to various components of terminal 700. The power source 709 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When power source 709 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 700 also includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyro sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 can detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the terminal 700. For example, the acceleration sensor 711 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 701 may control the display screen 705 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 711. The acceleration sensor 711 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 712 may detect a body direction and a rotation angle of the terminal 700, and the gyro sensor 712 may cooperate with the acceleration sensor 711 to acquire a 3D motion of the terminal 700 by the user. From the data collected by the gyro sensor 712, the processor 701 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 713 may be disposed on a side frame of terminal 700 and/or underneath display 705. When the pressure sensor 713 is disposed on a side frame of the terminal 700, a user's grip signal on the terminal 700 may be detected, and the processor 701 performs right-left hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed at a lower layer of the display screen 705, the processor 701 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 705. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 714 is used for collecting a fingerprint of a user, and the processor 701 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 identifies the identity of the user according to the collected fingerprint. When the user identity is identified as a trusted identity, the processor 701 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 714 may be disposed on the front, back, or side of the terminal 700. When a physical button or a vendor Logo is provided on the terminal 700, the fingerprint sensor 714 may be integrated with the physical button or the vendor Logo.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the display screen 705 based on the ambient light intensity collected by the optical sensor 715. Specifically, when the ambient light intensity is high, the display brightness of the display screen 705 is increased; when the ambient light intensity is low, the display brightness of the display screen 705 is adjusted down. In another embodiment, processor 701 may also dynamically adjust the shooting parameters of camera assembly 706 based on the ambient light intensity collected by optical sensor 715.
A proximity sensor 716, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 700. The proximity sensor 716 is used to collect the distance between the user and the front surface of the terminal 700. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front surface of the terminal 700 gradually decreases, the processor 701 controls the display screen 705 to switch from the screen-on state to the screen-off state; when the proximity sensor 716 detects that the distance between the user and the front surface of the terminal 700 gradually increases, the processor 701 controls the display screen 705 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 7 is not intended to be limiting of terminal 700 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, a computer-readable storage medium, such as a memory including program code, is also provided, where the program code is executable by a processor in a terminal or a server to perform the video preview method in the above embodiments. For example, the computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact-disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (20)

1. A method for video preview, the method comprising:
determining a playing area of a target video to be played;
dividing the playing area into a plurality of sub-playing areas;
determining playing information of each sub-playing area based on a plurality of sub-playing areas and the playing duration of the target video;
acquiring a plurality of sub-videos of the target video based on the playing information of each sub-playing area, wherein one sub-video corresponds to one sub-playing area;
and playing the corresponding sub-video in each sub-playing area of the playing area.
2. The method of claim 1, wherein the dividing the playing area into a plurality of sub-playing areas comprises:
determining the number of the sub-playing areas in response to a division operation on the playing area;
and displaying a plurality of the sub-playing areas based on the size parameter of the playing area and the number of the sub-playing areas.
3. The method according to claim 1, wherein the determining the playing information of each of the sub-playing areas based on the plurality of sub-playing areas and the playing duration of the target video comprises:
determining the playing duration of the target video in each sub-playing area based on the number of the sub-playing areas and the playing duration of the target video;
and determining the starting playing time of the target video in each sub-playing area based on the playing duration of the target video in each sub-playing area and the playing duration of the target video.
4. The method of claim 3, wherein the determining the playing duration of the target video in each of the sub-playing areas based on the number of the sub-playing areas and the playing duration of the target video comprises:
and dividing the playing duration of the target video evenly according to the number of the sub-playing areas to obtain the playing duration of the target video in each sub-playing area.
5. The method according to claim 1, wherein the obtaining a plurality of sub-videos of the target video based on the playing information of each sub-playing area comprises:
and decoding, based on the playing information of each sub-playing area, the target video with the decoder corresponding to each sub-playing area respectively, to obtain the plurality of sub-videos of the target video.
6. The method according to claim 5, wherein the decoding the target video by a decoder corresponding to each of the sub-playing areas based on the playing information of each of the sub-playing areas to obtain a plurality of sub-videos of the target video comprises:
a decoder corresponding to each sub-playing area locates each key frame corresponding to each starting playing time in the target video based on the starting playing time of the target video in each sub-playing area;
and the decoder corresponding to each sub-playing area decodes the target video by taking each positioned key frame as an initial frame based on the playing duration of the target video in each sub-playing area to obtain a plurality of sub-videos corresponding to each playing duration.
7. The method according to claim 1, wherein after the playing of the corresponding sub-video in each of the sub-playing areas of the playing area, the method further comprises:
responding to a playing control operation on any one sub-playing area, and playing the sub-video corresponding to the sub-playing area based on the playing control operation.
8. The method according to claim 7, wherein the playing the sub-video corresponding to the sub-playing area based on the playing control operation comprises:
when the playing control operation comprises a full-screen playing operation, responding to the full-screen playing operation on any one sub-playing area, and switching the sub-video played in the sub-playing area to full-screen playing;
when the playing control operation comprises an amplification display operation, responding to the amplification display operation on any one of the sub-playing areas, enlarging the size of the sub-playing area, and playing the sub-video corresponding to the sub-playing area in the adjusted sub-playing area.
9. The method according to claim 7, wherein the playing the sub-video corresponding to the sub-playing area based on the playing control operation comprises:
when the playing control operation comprises a positioning playing operation, responding to the positioning playing operation on any one of the sub-playing areas, searching the sub-video corresponding to the sub-playing area for a key frame corresponding to the positioning playing operation, and playing the corresponding sub-video in the sub-playing area using the key frame as an initial frame.
10. A video preview device, the device comprising:
the first determining module is used for determining a playing area of a target video to be played;
the dividing module is used for dividing the playing area into a plurality of sub-playing areas;
a second determining module, configured to determine playing information of each sub-playing area based on a plurality of sub-playing areas and a playing duration of the target video;
an obtaining module, configured to obtain multiple sub-videos of the target video based on playing information of each sub-playing area, where one sub-video corresponds to one sub-playing area;
and the playing module is used for playing the corresponding sub-video in each sub-playing area of the playing area.
11. The apparatus of claim 10, wherein the partitioning module is configured to:
determining the number of the sub-playing areas in response to a division operation on the playing area;
and displaying a plurality of the sub-playing areas based on the size parameter of the playing area and the number of the sub-playing areas.
12. The apparatus of claim 10, wherein the second determining module comprises:
a first determining unit, configured to determine, based on the number of the sub-playing areas and the playing duration of the target video, the playing duration of the target video in each of the sub-playing areas;
a second determining unit, configured to determine, based on the playing duration of the target video in each of the sub-playing areas and the playing duration of the target video, the starting playing time of the target video in each of the sub-playing areas.
13. The apparatus of claim 12, wherein the first determining unit is configured to:
and dividing the playing duration of the target video evenly according to the number of the sub-playing areas to obtain the playing duration of the target video in each sub-playing area.
14. The apparatus of claim 10, further comprising:
and the decoding module is configured to decode the target video with the decoder corresponding to each sub-playing area based on the playing information of each sub-playing area, to obtain the plurality of sub-videos of the target video.
15. The apparatus of claim 14, wherein the decoding module is configured to:
a decoder corresponding to each sub-playing area locates, in the target video, the key frame corresponding to each starting playing time based on the starting playing time of the target video in that sub-playing area;
and the decoder corresponding to each sub-playing area decodes the target video, using each located key frame as the initial frame, based on the playing duration of the target video in that sub-playing area, to obtain the plurality of sub-videos corresponding to the respective playing durations.
16. The apparatus of claim 10, wherein the playback module is further configured to:
responding to a playing control operation on any one sub-playing area, and playing the sub-video corresponding to the sub-playing area based on the playing control operation.
17. The apparatus of claim 16, wherein the playback module is further configured to:
when the playing control operation comprises a full-screen playing operation, responding to the full-screen playing operation on any one sub-playing area, and switching the sub-video played in the sub-playing area to full-screen playing;
when the playing control operation comprises an amplification display operation, responding to the amplification display operation on any one of the sub-playing areas, enlarging the size of the sub-playing area, and playing the sub-video corresponding to the sub-playing area in the adjusted sub-playing area.
18. The apparatus of claim 16, wherein the playback module is further configured to:
when the playing control operation comprises a positioning playing operation, responding to the positioning playing operation on any one of the sub-playing areas, searching the sub-video corresponding to the sub-playing area for a key frame corresponding to the positioning playing operation, and playing the corresponding sub-video in the sub-playing area using the key frame as an initial frame.
19. A terminal, characterized in that the terminal comprises a processor and a memory, the memory having stored therein at least one program code, which is loaded and executed by the processor to implement the operations performed by the video preview method of any of claims 1 to 9.
20. A computer-readable storage medium having at least one program code stored therein, the at least one program code being loaded and executed by a processor to perform operations performed by the video preview method of any of claims 1 to 9.
CN202011442171.1A 2020-12-08 2020-12-08 Video preview method, device, terminal and storage medium Pending CN112616082A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011442171.1A CN112616082A (en) 2020-12-08 2020-12-08 Video preview method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011442171.1A CN112616082A (en) 2020-12-08 2020-12-08 Video preview method, device, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN112616082A (en) 2021-04-06

Family

ID=75232711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011442171.1A Pending CN112616082A (en) 2020-12-08 2020-12-08 Video preview method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN112616082A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103916718A (en) * 2013-01-05 2014-07-09 腾讯科技(北京)有限公司 Method and system for playing video based on video clip
US20180071610A1 (en) * 2016-09-15 2018-03-15 Karhu Media, LLC Athletic training method and system for remote video playback
CN111698565A (en) * 2020-06-03 2020-09-22 咪咕动漫有限公司 Video playing method and device and electronic equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113766325A (en) * 2021-08-11 2021-12-07 珠海格力电器股份有限公司 Video playing method and device, electronic equipment and storage medium
CN114173178A (en) * 2021-12-14 2022-03-11 维沃移动通信有限公司 Video playing method, video playing device, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN110233976B (en) Video synthesis method and device
CN110267067B (en) Live broadcast room recommendation method, device, equipment and storage medium
CN108391171B (en) Video playing control method and device, and terminal
CN111147878B (en) Stream pushing method and device in live broadcast and computer storage medium
CN107908929B (en) Method and device for playing audio data
CN111065001B (en) Video production method, device, equipment and storage medium
CN108965922B (en) Video cover generation method and device and storage medium
CN109327608B (en) Song sharing method, terminal, server and system
CN110533585B (en) Image face changing method, device, system, equipment and storage medium
CN111464830B (en) Method, device, system, equipment and storage medium for image display
CN110248236B (en) Video playing method, device, terminal and storage medium
CN109144346B (en) Song sharing method and device and storage medium
CN110290392B (en) Live broadcast information display method, device, equipment and storage medium
CN107896337B (en) Information popularization method and device and storage medium
CN109982129B (en) Short video playing control method and device and storage medium
CN111741366A (en) Audio playing method, device, terminal and storage medium
CN112104648A (en) Data processing method, device, terminal, server and storage medium
CN111083526B (en) Video transition method and device, computer equipment and storage medium
CN111818358A (en) Audio file playing method and device, terminal and storage medium
CN109819314B (en) Audio and video processing method and device, terminal and storage medium
CN111818367A (en) Audio file playing method, device, terminal, server and storage medium
CN112616082A (en) Video preview method, device, terminal and storage medium
CN114245218A (en) Audio and video playing method and device, computer equipment and storage medium
CN112770177B (en) Multimedia file generation method, multimedia file release method and device
CN112866584B (en) Video synthesis method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination