CN111314784A - Video playing method and electronic equipment - Google Patents

Video playing method and electronic equipment

Info

Publication number
CN111314784A
CN111314784A
Authority
CN
China
Prior art keywords
video
playing
priority tendency
tables
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010131060.2A
Other languages
Chinese (zh)
Other versions
CN111314784B (en)
Inventor
李陈
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202010131060.2A
Publication of CN111314784A
Application granted
Publication of CN111314784B
Legal status: Active
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 - Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456 - Structuring of content by decomposing the content in the time domain, e.g. in time segments
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 - End-user applications
    • H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47217 - End-user interface for interacting with content, for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks

Abstract

The invention discloses a video playing method and an electronic device. The method includes: in the process of playing a target video, determining a second video segment that has a plot association relation with a first playing position of the target video, wherein the second video segment is located before the first playing position; displaying a target control at a preset position of a video playing interface of the target video; and receiving a target operation on the target control and, in response to the target operation, playing the video picture corresponding to the second video segment. Thus, during playback a specific control can be displayed on the video playing interface, and by operating this control the user can trigger the electronic device to automatically locate and play the preceding video segment whose plot is associated with the current playing position. The user can therefore quickly relate the preceding segment to the current plot and grasp the ins and outs of the story, which simplifies the operation of locating preceding segments in video skip-playing scenarios and improves video playing efficiency.

Description

Video playing method and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of video processing, and in particular to a video playing method and an electronic device.
Background
With the development of internet technology, watching videos has become one of the main ways for people to entertain themselves and obtain information in daily life. At present, when watching a video, a user often clicks a selection button of the video player or slides the progress bar below the video player to change the playing position of the video, so as to skip within the video.
In the prior art, when a video is played with skipping, if the plot of the video picture played after the skip differs greatly from that of the picture played before it, the user is usually confused about the plot of the picture being played after the skip. The user then has to search manually, frame by frame, through the skipped portion for the video segment whose plot relates to the picture now playing, and locate that segment for playback, which is a cumbersome operation.
Disclosure of Invention
The embodiment of the invention provides a video playing method and an electronic device, so as to solve the technical problem of cumbersome operation when searching for a preceding plot related to the currently played plot in a video skip-playing scenario.
To solve the above technical problem, the embodiment of the present invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides a video playing method, which is applied to an electronic device, and the method includes:
in the process of playing a target video by the electronic equipment, determining a second video segment having a plot association relation with a first playing position of the target video, wherein the second video segment is positioned in front of the first playing position of the target video;
displaying a target control at a preset position of a video playing interface of the target video;
and receiving target operation aiming at the target control, responding to the target operation, and playing a video picture corresponding to the second video clip.
Optionally, as an embodiment, the determining a second video segment having a plot association relationship with the first playing position of the target video includes:
determining a video clip where a first playing position of the target video is located;
determining, according to a preset relation table and the video clip where the first playing position is located, the video clip corresponding to the video clip where the first playing position is located; wherein the preset relation table includes a plurality of mapping relations, and each mapping relation records the correspondence between two or more video clips whose plots are associated;
and determining the video clip corresponding to the video clip where the first playing position is located as a second video clip.
Optionally, as an embodiment, before the step of determining, according to the preset relation table and the video clip where the first playing position is located, the video clip corresponding to the video clip where the first playing position is located, the method further includes: generating the preset relation table; wherein,
the generating the preset relationship table includes:
dividing the target video into a plurality of video segments;
generating a corresponding character graph according to the picture content of each video clip, wherein objects appearing in the video clips and the appearance duration of each object are recorded in the character graphs;
comparing the character graphs corresponding to two video clips among the plurality of video clips, and if the top-N objects by appearance duration in the two character graphs are the same, determining the video clips corresponding to the two character graphs as two video clips having a plot association relation, wherein N is an integer;
and generating the preset relation table based on the determined video segments whose plots are associated.
Optionally, as an embodiment, the generating, for each video segment, a corresponding character graph based on the picture content of the video segment includes:
for each video clip, sampling the video clip by extracting one video frame every 5 frames, to obtain a sampling frame sequence {P_1, P_2, …, P_M} of the video clip, where M is the number of video frames in the sampling frame sequence, P_i is the i-th video frame in the sampling frame sequence, 1 ≤ i ≤ M, and P_i is located before P_{i+1} in the video clip;
for each P_i, performing object detection on the P_i to obtain an object set T_i of the P_i;
generating a first priority tendency table based on every 5 consecutive T_i among the obtained T_i, to obtain a plurality of first priority tendency tables of the video clip, wherein one first priority tendency table records the objects whose occurrence counts rank in the top R among the corresponding 5 consecutive T_i, and R is a positive integer;
generating a second priority tendency table based on every 5 consecutive first priority tendency tables among the obtained first priority tendency tables, to obtain a plurality of second priority tendency tables of the video clip, wherein one second priority tendency table records the objects whose occurrence counts rank in the top R among the corresponding 5 consecutive first priority tendency tables;
generating a third priority tendency table based on every 5 consecutive second priority tendency tables among the obtained second priority tendency tables, to obtain a plurality of third priority tendency tables of the video clip, wherein one third priority tendency table records the objects whose occurrence counts rank in the top R among the corresponding 5 consecutive second priority tendency tables;
generating a fourth priority tendency table of the video clip based on every 5 consecutive third priority tendency tables among the obtained third priority tendency tables, wherein the fourth priority tendency table records the objects whose occurrence counts rank in the top R among the corresponding 5 consecutive third priority tendency tables;
and generating a character graph of the video clip based on the fourth priority tendency table of the video clip and the appearance duration of each object in the fourth priority tendency table of the video clip.
Optionally, as an embodiment, the target control displays scenario introduction information of the second video segment.
In a second aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
a determining unit, configured to determine, in the process of playing a target video by the electronic device, a second video segment having a plot association relation with a first playing position of the target video, wherein the second video segment is located before the first playing position of the target video;
the display unit is used for displaying a target control at a preset position of a video playing interface of the target video;
a receiving unit, configured to receive a target operation for the target control;
and the playing unit is used for responding to the target operation and playing the video picture corresponding to the second video clip.
Optionally, as an embodiment, the determining unit includes:
a first determining subunit, configured to determine the video clip where a first playing position of the target video is located;
a second determining subunit, configured to determine, according to a preset relation table and the video clip where the first playing position is located, the video clip corresponding to the video clip where the first playing position is located; wherein the preset relation table includes a plurality of mapping relations, and each mapping relation records the correspondence between two or more video clips whose plots are associated;
a third determining subunit, configured to determine the video clip corresponding to the video clip where the first playing position is located as the second video clip.
Optionally, as an embodiment, the electronic device further includes a generating unit, wherein the generating unit includes:
a dividing subunit, configured to divide the target video into a plurality of video segments;
a first generating subunit, configured to generate, for each video clip, a corresponding character graph based on the picture content of the video clip, wherein objects appearing in the video clip and the appearance duration of each object are recorded in the character graph;
a fourth determining subunit, configured to compare the character graphs corresponding to two video segments among the plurality of video segments, and if the top-N objects by appearance duration in the two character graphs are the same, determine the video segments corresponding to the two character graphs as two video segments having a plot association relation, where N is an integer;
a second generating subunit, configured to generate the preset relation table based on the determined video segments whose plots are associated.
Optionally, as an embodiment, the first generating subunit includes:
a frame sampling module, configured to sample each video clip by extracting one video frame every 5 frames, to obtain a sampling frame sequence {P_1, P_2, …, P_M} of the video clip, where M is the number of video frames in the sampling frame sequence, P_i is the i-th video frame in the sampling frame sequence, 1 ≤ i ≤ M, and P_i is located before P_{i+1} in the video clip;
an object detection module, configured to perform object detection on each P_i to obtain an object set T_i of the P_i;
a first generating module, configured to generate a first priority tendency table based on every 5 consecutive T_i among the obtained T_i, to obtain a plurality of first priority tendency tables of the video clip, wherein one first priority tendency table records the objects whose occurrence counts rank in the top R among the corresponding 5 consecutive T_i, and R is a positive integer;
a second generating module, configured to generate a second priority tendency table based on every 5 consecutive first priority tendency tables among the obtained first priority tendency tables, to obtain a plurality of second priority tendency tables of the video clip, wherein one second priority tendency table records the objects whose occurrence counts rank in the top R among the corresponding 5 consecutive first priority tendency tables;
a third generating module, configured to generate a third priority tendency table based on every 5 consecutive second priority tendency tables among the obtained second priority tendency tables, to obtain a plurality of third priority tendency tables of the video clip, wherein one third priority tendency table records the objects whose occurrence counts rank in the top R among the corresponding 5 consecutive second priority tendency tables;
a fourth generating module, configured to generate a fourth priority tendency table of the video clip based on every 5 consecutive third priority tendency tables among the obtained third priority tendency tables, wherein the fourth priority tendency table records the objects whose occurrence counts rank in the top R among the corresponding 5 consecutive third priority tendency tables;
and the fifth generating module is used for generating the character graph of the video clip based on the fourth priority tendency table of the video clip and the occurrence duration of each object in the fourth priority tendency table of the video clip.
Optionally, as an embodiment, the target control displays scenario introduction information of the second video segment.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the video playing method in the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when executed by a processor, the computer program implements the steps of the video playing method in the first aspect.
In the embodiment of the invention, in the process of playing the target video by the electronic device, the second video segment having a plot association relation with the first playing position of the target video can be determined, and the target control is displayed on the video playing interface of the target video, so that the user can, by operating the target control, trigger the electronic device to play the second video segment having the plot association relation with the first playing position of the target video. Compared with the prior art, in the embodiment of the invention a specific control can be displayed on the video playing interface during playback, and by operating this control the user can trigger the electronic device to automatically locate and play the preceding video segment whose plot is associated with the current playing position. The user can therefore quickly relate the preceding segment to the current plot and grasp the ins and outs of the story, which simplifies the operation of locating preceding segments in video skip-playing scenarios and improves video playing efficiency.
Drawings
Fig. 1 is a flowchart of a video playing method according to an embodiment of the present invention;
fig. 2A is an application scene diagram of a video playing method according to an embodiment of the present invention;
fig. 2B is a diagram of another application scenario of the video playing method according to the embodiment of the present invention;
fig. 2C is a diagram of another application scenario of the video playing method according to the embodiment of the present invention;
fig. 2D is a diagram of another application scenario of the video playing method according to the embodiment of the present invention;
fig. 2E is a diagram of another application scenario of the video playing method according to the embodiment of the present invention;
fig. 2F is a diagram of another application scenario of the video playing method according to the embodiment of the present invention;
FIG. 3 is a flow chart of one implementation of step 101 provided by an embodiment of the present invention;
FIG. 4 is a flowchart of a preset relationship table generating step according to an embodiment of the present invention;
FIG. 5A is a diagram illustrating an exemplary step of generating a preset relationship table according to an embodiment of the present invention;
FIG. 5B is a diagram illustrating another example of a preset relationship table generating step according to an embodiment of the present invention;
FIG. 5C is a diagram illustrating another example of a preset relationship table generating step according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
At present, when a video is played, it can be fast-forwarded and skipped, and when a key plot is reached, the approximate content of the current plot can be prompted through text. However, if the plot changes greatly across the skipped portion, or if some episodes of a series are skipped directly so that the user starts watching from a middle episode, the user will be confused about the current plot and has to search back through the skipped content, which is a cumbersome operation.
In order to solve the above technical problem, embodiments of the present invention provide a video playing method and an electronic device.
First, a video playing method provided by an embodiment of the present invention is described below.
It should be noted that the video playing method provided by the embodiment of the present invention is applicable to an electronic device, and in practical application, the electronic device may include: smart phones, tablet computers, personal digital assistants, and the like, which are not limited in this embodiment of the present invention.
Fig. 1 is a flowchart of a video playing method according to an embodiment of the present invention, and as shown in fig. 1, the method may include the following steps: step 101, step 102, step 103 and step 104.
In step 101, in the process of playing the target video by the electronic device, a second video segment having a plot association relationship with the first playing position of the target video is determined, wherein the second video segment is located before the first playing position of the target video.
In the embodiment of the invention, in the process of playing the target video by the electronic device, the second video segment having a plot association relation with the first playing position of the target video may be determined when a user operation that skips over a preceding plot is detected; alternatively, the second video segment having a plot association relation with the first playing position of the target video may be determined in real time by default during playback.
In the embodiment of the present invention, the target video may be a single episode, for example, the target video is a single episode of a movie, a documentary, or a series; alternatively, the target video may be a plurality of episodes, for example, the target video is a plurality of episodes of a series, which is not limited in this embodiment of the present invention.
In this embodiment of the present invention, the first playing position may be a current playing position of the target video. The scenario of the second video segment is associated with the scenario of the first play position of the target video.
In one example, the first playback position is at the 10 th minute of the target video, and then the second video segment is before the 10 th minute of the target video, e.g., the second video segment is at the 5 th minute of the target video.
In the embodiment of the invention, when the target video is a single episode, the first playing position and the second video clip belong to the single episode. When the target video is a plurality of episodes, the first playing position and the second video segment may belong to the same episode, for example, the first playing position belongs to a second episode of a series, and the second video segment also belongs to the second episode of the series; or the first play position and the second video segment may belong to two episodes separately, e.g. the second video segment belongs to a first episode of a series and the first play position belongs to a second episode of the series.
In step 102, a target control is displayed at a preset position of a video playing interface of a target video.
In the embodiment of the invention, the target control is used for triggering the electronic equipment to play the second video clip.
In order to ensure that the target control does not block the main picture content on the video playing interface, in an embodiment provided by the present invention, the step 102 may specifically include the following steps (not shown in the figure): step 1021, step 1022, and step 1023, wherein,
in step 1021, determining the display mode of the video playing interface of the target video;
in step 1022, if the display mode of the video playing interface is full-screen display, determining that the preset position is a lower right area of the video playing interface, and displaying the target control in the lower right area of the video playing interface;
in step 1023, if the display mode of the video playing interface is non-full screen display, the preset position is determined as the lower area of the progress bar of the video playing interface, and the target control is displayed in the lower area of the progress bar of the video playing interface.
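For illustration only, the placement logic of step 1021 to step 1023 can be sketched as follows; this is a minimal sketch, and the names DisplayMode and preset_position are assumptions of the sketch rather than part of the embodiment:

```python
from enum import Enum, auto

class DisplayMode(Enum):
    FULL_SCREEN = auto()
    NON_FULL_SCREEN = auto()

def preset_position(mode: DisplayMode) -> str:
    """Steps 1021-1023: choose where the target control is displayed."""
    if mode is DisplayMode.FULL_SCREEN:
        # Step 1022: full-screen playback puts the control at the lower right.
        return "lower-right area of the video playing interface"
    # Step 1023: otherwise the control goes below the progress bar.
    return "area below the progress bar of the video playing interface"
```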
In order to facilitate the user to know the scenario information of the second video segment, in the embodiment of the present invention, the target control may display the scenario introduction information of the second video segment, so as to intuitively provide the scenario introduction information of the second video segment to the user, where the scenario introduction information may be a rough scenario introduction or a detailed scenario introduction.
In the embodiment of the invention, the scenario introduction information of the second video segment can be acquired based on the subtitle file of the second video segment, and then the scenario introduction information of the second video segment is displayed on the target control; or acquiring the plot introduction information of the second video clip based on the audio file of the second video clip, and then displaying the plot introduction information of the second video clip on the target control; or, the scenario introduction information of the second video segment may also be obtained by combining the subtitle file and the audio file of the second video segment, and then the scenario introduction information of the second video segment is displayed on the target control, which is not limited in the embodiment of the present invention.
In view of the fact that processing of the subtitle files is simpler and faster than that of the audio files, in order to improve processing efficiency, in the embodiment of the present invention, it may be determined whether the second video segment has a corresponding subtitle file, and if the second video segment has a corresponding subtitle file, scenario introduction information of the second video segment is generated based on the subtitle file; and if the second video segment does not have the corresponding subtitle file, extracting the audio file corresponding to the second video segment, identifying the audio file as a text file, and generating the plot introduction information of the second video segment based on the text file.
In the embodiment of the invention, when the scenario introduction information of the second video segment is generated based on the subtitle file of the second video segment, the keyword and the high-frequency word of the subtitle file can be extracted, and the scenario introduction information is generated based on the keyword and the high-frequency word.
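As a minimal Python sketch of the keyword and high-frequency-word approach just described (the tokenization, stop-word list, and output format here are assumptions of the sketch, not specified by the embodiment):

```python
import re
from collections import Counter

# A tiny stop-word list for illustration; a real implementation would use a fuller one.
STOP_WORDS = {"the", "a", "an", "and", "of", "to", "is", "in", "it", "you", "i"}

def scenario_introduction(subtitle_text: str, top_k: int = 5) -> str:
    """Extract high-frequency words from subtitle text and format them
    as rough plot introduction information for the target control."""
    words = re.findall(r"[\w']+", subtitle_text.lower())
    freq = Counter(w for w in words if w not in STOP_WORDS and len(w) > 2)
    keywords = [w for w, _ in freq.most_common(top_k)]
    return "Previously: " + ", ".join(keywords)
```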
Therefore, in the embodiment of the invention, the scenario introduction information of the second video clip can be acquired in multiple ways, and the corresponding scenario introduction information acquisition ways can be provided for different types of video clips.
In step 103, a target operation for a target control is received.
In the embodiment of the invention, the target operation can be click operation, long-time press operation or sliding operation.
In step 104, in response to the target operation, playing a video picture corresponding to the second video clip.
In the embodiment of the invention, in order to ensure that a user can comprehensively know the scenario content of the second video segment, when the video picture corresponding to the second video segment is played, the video picture corresponding to the second video segment can be played from the first frame of the second video segment.
In order to meet the diversified requirements of the user, in an embodiment of the present invention, the step 104 may specifically include the following steps:
responding to the target operation, pausing the playing of the video picture corresponding to the first playing position, and playing the video picture corresponding to the second video clip in the form of a window on the video playing interface of the target video, wherein the size of the window is smaller than the size of the video playing interface; or,
responding to the target operation, and playing, in split screen on the display screen of the electronic device, the video picture corresponding to the second video clip and the video picture at the first playing position; or,
and responding to the target operation, when the electronic equipment is the folding screen electronic equipment, playing the video picture at the first playing position on one display screen of the electronic equipment, and playing the video picture corresponding to the second video clip on the other display screen of the electronic equipment.
In the embodiment of the invention, when the video picture corresponding to the second video segment is played, the user can choose to output the audio of the plot at the first playing position or the audio of the plot of the second video segment. It should be noted that, to prevent the two audios from interfering with each other and affecting the user's viewing, the two audio outputs are mutually exclusive: when the audio of one plot is selected for output, the audio of the other plot must be turned off.
In order to facilitate understanding of the technical solution of the embodiment of the present invention, the description is made with reference to application scene diagrams shown in fig. 2A to 2F.
When a user watches a video using the electronic device 20, the electronic device 20 displays a video playing interface 21, and the video picture at the first playing position is displayed on the video playing interface 21, as shown in fig. 2A. In the video playing process, the electronic device 20 determines the second video segment related to the plot at the first playing position and obtains the plot introduction information of the second video segment. The electronic device 20 displays the obtained plot introduction information of the second video segment on the video playing interface 21 in the form of a control: for example, as shown in fig. 2B, when the video playing interface 21 is not displayed in full screen, a "preceding plot introduction" control is displayed below the progress bar; or, as shown in fig. 2C, when the video playing interface 21 is displayed in full screen, a "preceding plot introduction" control is displayed at the lower right of the interface. By clicking the "preceding plot introduction" control, the user can trigger the electronic device 20 to play the second video segment: for example, as shown in fig. 2D, the video picture of the second video segment may be played in a floating window while the video picture at the first playing position is paused; or, if the electronic device 20 supports split screen, the video picture at the first playing position and the video picture of the second video segment can be played in split screen, as shown in fig. 2E; or, if the electronic device 20 has a folding screen, the two pictures may be played on the two screens on either side of the fold, as shown in fig. 2F.
Therefore, in the embodiment of the invention, when the user skips some plots while watching a video, the preceding plot associated with the later plot can be prompted while the later segment is playing, so that the user can directly learn the content of the preceding plot and quickly relate it to the plot currently being played.
As can be seen from the foregoing embodiments, in this embodiment, in the process of playing the target video by the electronic device, the second video segment having a plot association relation with the first playing position of the target video can be determined, and the target control is displayed on the video playing interface of the target video, so that the user can, by operating the target control, trigger the electronic device to play the second video segment having the plot association relation with the first playing position of the target video. Compared with the prior art, in the embodiment of the invention a specific control can be displayed on the video playing interface during playback, and by operating this control the user can trigger the electronic device to automatically locate and play the preceding video segment whose plot is associated with the current playing position. The user can therefore quickly relate the preceding segment to the current plot and grasp the ins and outs of the story, which simplifies the operation of locating preceding segments in video skip-playing scenarios and improves video playing efficiency.
In another embodiment provided by the present invention, the embodiment may be based on the embodiment shown in fig. 1, and as shown in fig. 3, the step 101 may specifically include the following steps: step 301, step 302 and step 303, wherein,
in step 301, in the process of playing the target video by the electronic device, a video clip where a first playing position of the target video is located is determined.
Since the plot of a video is formed by the picture content of a number of video frames, in the embodiment of the invention the video clip may be used as the processing unit: when determining the second video clip, the video clip where the first playing position is located may be determined first, and the second video clip is then determined based on that video clip.
In step 302, the video clip corresponding to the video clip where the first playing position is located is determined according to the preset relation table and the video clip where the first playing position is located; the preset relation table includes a plurality of mapping relations, and each mapping relation records the correspondence between two or more video clips whose plots are associated.
In the embodiment of the invention, after the video clip where the first playing position of the target video is located is determined, whether the preset relation table has been configured in advance may be detected. If the preset relation table is configured in advance, it is obtained directly, and the second video clip is determined according to the plot association relations, recorded in the preset relation table, among the video clips of the target video; if the preset relation table is not configured in advance, the preset relation table is generated first, and the second video clip is then determined according to the plot association relations, recorded in the preset relation table, among the video clips of the target video.
In step 303, the video segment corresponding to the video segment at which the first play position is located is determined as the second video segment.
Therefore, in the embodiment of the invention, the second video segment can be determined according to the video segment where the first playing position of the target video is located, and since the scenario of the video is formed based on the picture contents of the plurality of video frames, the determination of the associated scenario video segment is performed by taking the video segment as a processing unit, so that the accuracy of the scenario association degree of the determined second video segment and the first playing position can be ensured.
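Steps 301 to 303 can be pictured with the following minimal Python sketch. It assumes clips of equal duration and a relation table keyed by clip index, and it resolves ties by choosing the earliest associated clip; all of these are assumptions of the sketch (the embodiment below records associations via first video frames instead):

```python
from typing import Optional

def clip_index(position_s: float, clip_duration_s: float) -> int:
    """Step 301: index of the video clip containing the playing position."""
    return int(position_s // clip_duration_s)

def find_second_clip(position_s: float, clip_duration_s: float,
                     relation_table: dict[int, list[int]]) -> Optional[int]:
    """Steps 302-303: an associated clip lying before the first playing position."""
    current = clip_index(position_s, clip_duration_s)
    earlier = [c for c in relation_table.get(current, []) if c < current]
    return min(earlier) if earlier else None
```

For instance, with 10-minute clips and relation_table = {2: [0], 0: [2]}, a playing position at minute 25 falls in clip 2 and resolves to clip 0 as the second video clip.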
In another embodiment provided by the present invention, on the basis of the embodiment shown in fig. 3, when generating the preset relationship table, the preset relationship table may be generated by an electronic device, or may be generated by a server, as shown in fig. 4, the step of generating the preset relationship table may include the following steps: step 401, step 402, step 403 and step 404, wherein,
in step 401, a target video is divided into a plurality of video segments.
In the embodiment of the invention, the target video can be divided into a plurality of video segments with the same duration, that is, each video segment contains the same number of video frames, so as to ensure the accuracy of the generated relation table.
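As a one-function sketch of step 401 (frame-index ranges stand in for actual decoding, and handling a shorter final clip is an assumption the embodiment does not address):

```python
def divide_into_clips(total_frames: int, frames_per_clip: int) -> list[range]:
    """Step 401: partition the frame indices into equal-length clips."""
    return [range(start, min(start + frames_per_clip, total_frames))
            for start in range(0, total_frames, frames_per_clip)]
```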
In step 402, for each video segment, a corresponding character graph is generated based on the picture content of the video segment, wherein objects appearing in the video segment and the appearance duration of each object are recorded in the character graph.
In an embodiment provided by the present invention, the step 402 may specifically include the following steps: (not shown in the figure) step 4021, step 4022, step 4023, step 4024, step 4025, step 4026, and step 4027, wherein,
in step 4021, for each video clip, sampling the video clip every 5 frames to obtain a sequence of sampling frames { P } of the video clip1,P2,…,PMWhere M is the number of video frames in the sampled frame sequence, PiFor the ith video frame in the sampling frame sequence, i is more than or equal to 1 and less than or equal to M, PiAt position P of video clipi+1Before.
In one example, the target video is divided into 4 video segments: video segment 1, video segment 2, video segment 3, and video segment 4. Taking video segment 1 as an example, one video frame is extracted every 5 frames to obtain a sampling frame sequence { P ] of video segment 11,P2,…,PMIn which P is1For frame 1, P in video segment 12For frame 6, P in video segment 13For frame 11, P in video segment 14For frame 16, P in video segment 15 Frame 21 in video clip 1, and so on.
In step 4022, for each PiTo PiPerforming object detection to obtain PiObject set T ofi
Following the example in step 4021, a sequence of sample frames { P } for video segment 11,P2,…,P3125Carrying out object detection to obtain a corresponding object set (T)1,T2,…,TM}, e.g. P1Object set T of1Including person 1, person 2 and person 3, P2Object set T of2Including the person 1, P3Object set T of3Including the person 2, P4Object set T of4Including person 1 and person 2, P5Object set T of5Including person 1 and person 3.
In step 4023, a first priority tendency table is generated based on every 5 consecutive T_i among the obtained T_i, to obtain a plurality of first priority tendency tables of the video clip, where one first priority tendency table records the objects whose occurrence counts rank in the top R among the corresponding 5 consecutive T_i, and R is a positive integer.
Following the example in step 4022, with R = 5: as shown in fig. 5A, the occurrence counts of the persons in T_1, T_2, T_3, T_4, and T_5 are counted: person 1 appears 4 times, person 2 appears 3 times, and person 3 appears 2 times. Sorting in descending order gives the first priority tendency table of T_1~T_5, which includes person 1, person 2, and person 3. In the same way, the first priority tendency tables of T_6~T_10, T_11~T_15, T_16~T_20, T_21~T_25, and so on can be obtained.
In step 4024, a second priority tendency table is generated based on every 5 consecutive first priority tendency tables among the obtained first priority tendency tables, to obtain a plurality of second priority tendency tables of the video clip, where one second priority tendency table records the objects whose occurrence counts rank in the top R among the corresponding 5 consecutive first priority tendency tables.
Following the example in step 4023, with R = 5: for example, the first priority tendency table of T_6~T_10 includes person 1 and person 3, that of T_11~T_15 includes person 3 and person 2, that of T_16~T_20 includes person 2 and person 3, and that of T_21~T_25 includes person 1 and person 3. As shown in fig. 5B, the occurrence counts of the persons across the first priority tendency tables of T_1~T_5, T_6~T_10, T_11~T_15, T_16~T_20, and T_21~T_25 are counted: person 3 appears 5 times, and person 1 and person 2 each appear 3 times. Sorting in descending order gives the corresponding second priority tendency table, which includes person 3, person 1, and person 2. By analogy, the other corresponding second priority tendency tables can be obtained.
In step 4025, a third priority tendency table is generated based on every 5 consecutive second priority tendency tables among the obtained second priority tendency tables, to obtain a plurality of third priority tendency tables of the video clip, where one third priority tendency table records the objects whose occurrence counts rank in the top R among the corresponding 5 consecutive second priority tendency tables.
In step 4026, the fourth priority tendency table of the video clip is generated based on every 5 consecutive third priority tendency tables among the obtained third priority tendency tables, where the fourth priority tendency table records the objects whose occurrence counts rank in the top R among the corresponding 5 consecutive third priority tendency tables.
In this embodiment of the present invention, the generating manners of the third priority trend table and the fourth priority trend table are similar to the generating manners of the first priority trend table and the second priority trend table in step 4023 and step 4024, and are not described herein again.
In step 4027, a character graph of the video segment is generated based on the fourth priority tendency table of the video segment and the occurrence duration of each object in the fourth priority tendency table of the video segment.
In the above example, the fourth priority tendency table of video segment 1, the fourth priority tendency table of video segment 2, the fourth priority tendency table of video segment 3, and the fourth priority tendency table of video segment 4 are obtained through steps 4023 to 4026. For example, the fourth priority tendency table of video segment 1 includes person 1, person 2, person 4, person 5, and person 6; that of video segment 2 includes person 1, person 3, person 5, person 2, and person N+1; that of video segment 3 includes person 1, person 2, person 4, person 5, and person 6; and that of video segment 4 includes person 1, person 3, person 2, person N, and person N+1.
Correspondingly, the character graph of video segment 1 includes person 1, person 2, person 4, person 5, person 6 and their appearance durations; the character graph of video segment 2 includes person 1, person 3, person 5, person 2, person N+1 and their appearance durations; the character graph of video segment 3 includes person 1, person 2, person 4, person 5, person 6 and their appearance durations; and the character graph of video segment 4 includes person 1, person 3, person 2, person N, person N+1 and their appearance durations.
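Putting steps 4021 to 4027 together, a condensed Python sketch follows. Here detect_objects is a hypothetical stand-in for any per-frame object detector, and approximating each object's appearance duration by the number of sampled frames it appears in is an assumption of the sketch (the embodiment records durations without fixing how they are computed):

```python
from collections import Counter

def priority_table(groups: list[list[str]], r: int) -> list[str]:
    """Objects whose occurrence counts rank in the top R across the given
    consecutive inputs, each input counted at most once per object."""
    counts = Counter(obj for group in groups for obj in set(group))
    return [obj for obj, _ in counts.most_common(r)]

def aggregate_level(tables: list[list[str]], r: int) -> list[list[str]]:
    """One level of steps 4023-4026: a new table per 5 consecutive inputs."""
    return [priority_table(tables[i:i + 5], r)
            for i in range(0, len(tables) - 4, 5)]

def character_graph(frames: list, detect_objects, r: int = 5) -> dict[str, int]:
    sampled = frames[::5]                               # step 4021: P_1..P_M
    object_sets = [detect_objects(f) for f in sampled]  # step 4022: T_i
    level = [list(t) for t in object_sets]
    for _ in range(4):                                  # steps 4023-4026
        level = aggregate_level(level, r)
    fourth_table = level[0] if level else []            # fourth priority tendency table
    # Step 4027: pair each object with its appearance duration, approximated
    # here by its number of sampled-frame appearances (an assumption).
    hits = Counter(obj for t in object_sets for obj in t)
    return {obj: hits[obj] for obj in fourth_table}
```

Note that with the 5-fold fan-in at each of the four levels, one fourth priority tendency table summarizes 5^4 = 625 object sets; taking level[0] when more than one table remains is a choice made by this sketch.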
In step 403, the character graphs corresponding to two video segments among the plurality of video segments are compared, and if the top-N objects by appearance duration in the two character graphs are the same, the video segments corresponding to the two character graphs are determined as two video segments having a plot association relation, where N is an integer.
As shown in fig. 5C, in the above example, by comparing the character graphs of video segments 1 to 4, it is found that person 1, person 2, and person 4 appear in both video segment 1 and video segment 3 and occupy the top 3 positions by appearance duration in both, so it is determined that the plots of video segment 1 and video segment 3 are associated.
In step 404, the preset relation table is generated based on the determined video segments whose plots are associated.
In the embodiment of the invention, when the preset relation table is generated, the plot association relation between two video segments may be recorded by recording the plot association relation between the first video frames of the two video segments.
For example, it is determined in step 403 that a scenario association relationship exists between the video segment 1 and the video segment 3, and when the preset relationship table is generated, the scenario association relationship between the video segment 1 and the video segment 3 may be recorded by recording that a scenario association relationship exists between a first video frame of the video segment 1 and a first video frame of the video segment 3.
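Under the same assumptions, steps 403 and 404 can be sketched as follows: two clips are associated when the top-N objects by appearance duration in their character graphs coincide as sets, and the relation table is keyed by each clip's first video frame number as described above (frames_per_clip and the symmetric table layout are assumptions of the sketch):

```python
def top_n_objects(graph: dict[str, int], n: int) -> frozenset[str]:
    """Top-N objects of a character graph, ranked by appearance duration."""
    return frozenset(sorted(graph, key=graph.get, reverse=True)[:n])

def build_relation_table(graphs: list[dict[str, int]],
                         frames_per_clip: int,
                         n: int = 3) -> dict[int, list[int]]:
    """Steps 403-404: pairwise-compare character graphs and record the
    plot association symmetrically, keyed by first video frame numbers."""
    table: dict[int, list[int]] = {}
    for a in range(len(graphs)):
        for b in range(a + 1, len(graphs)):
            if top_n_objects(graphs[a], n) == top_n_objects(graphs[b], n):
                first_a, first_b = a * frames_per_clip, b * frames_per_clip
                table.setdefault(first_a, []).append(first_b)
                table.setdefault(first_b, []).append(first_a)
    return table
```

In the example of fig. 5C, this comparison would associate video segment 1 with video segment 3.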
Therefore, in the embodiment of the invention, because the occurrence frequency and the duration of the characters are important factors for representing the scenario, the scenario association relationship among the video segments is established according to the occurrence frequency and the duration of the characters in the video segments, and the accuracy of the established scenario association relationship is higher.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 6, the electronic device 600 may include: a determination unit 601, a display unit 602, a reception unit 603, and a playback unit 604, wherein,
a determining unit 601, configured to determine, in a process of playing a target video by the electronic device, a second video segment having a scenario association relationship with a first playing position of the target video, where the second video segment is located before the first playing position of the target video;
a display unit 602, configured to display a target control at a preset position of a video playing interface of the target video;
a receiving unit 603, configured to receive a target operation for the target control;
a playing unit 604, configured to play a video picture corresponding to the second video clip in response to the target operation.
As can be seen from the foregoing embodiments, in this embodiment, in the process of playing the target video by the electronic device, the second video segment having a plot association relation with the first playing position of the target video can be determined, and the target control is displayed on the video playing interface of the target video, so that the user can, by operating the target control, trigger the electronic device to play the second video segment having the plot association relation with the first playing position of the target video. Compared with the prior art, in the embodiment of the invention a specific control can be displayed on the video playing interface during playback, and by operating this control the user can trigger the electronic device to automatically locate and play the preceding video segment whose plot is associated with the current playing position. The user can therefore quickly relate the preceding segment to the current plot and grasp the ins and outs of the story, which simplifies the operation of locating preceding segments in video skip-playing scenarios and improves video playing efficiency.
Optionally, as an embodiment, the determining unit 601 may include:
a first determining subunit, configured to determine the video clip where a first playing position of the target video is located;
a second determining subunit, configured to determine, according to a preset relation table and the video clip where the first playing position is located, the video clip corresponding to the video clip where the first playing position is located; wherein the preset relation table includes a plurality of mapping relations, and each mapping relation records the correspondence between two or more video clips whose plots are associated;
a third determining subunit, configured to determine the video clip corresponding to the video clip where the first playing position is located as the second video clip.
Optionally, as an embodiment, the electronic device 600 may further include a generating unit, wherein the generating unit may include:
a dividing subunit, configured to divide the target video into a plurality of video segments;
a first generating subunit, configured to generate, for each video clip, a corresponding character graph based on the picture content of the video clip, wherein objects appearing in the video clip and the appearance duration of each object are recorded in the character graph;
a fourth determining subunit, configured to compare the character graphs corresponding to two video segments among the plurality of video segments, and if the top-N objects by appearance duration in the two character graphs are the same, determine the video segments corresponding to the two character graphs as two video segments having a plot association relation, where N is an integer;
a second generating subunit, configured to generate the preset relation table based on the determined video segments whose plots are associated.
Optionally, as an embodiment, the first generating subunit may include:
a frame sampling module, configured to sample each video clip by extracting one video frame every 5 frames, to obtain a sampling frame sequence {P_1, P_2, …, P_M} of the video clip, where M is the number of video frames in the sampling frame sequence, P_i is the i-th video frame in the sampling frame sequence, 1 ≤ i ≤ M, and P_i is located before P_{i+1} in the video clip;
an object detection module, configured to perform object detection on each P_i to obtain an object set T_i of the P_i;
a first generating module, configured to generate a first priority tendency table based on every 5 consecutive T_i among the obtained T_i, to obtain a plurality of first priority tendency tables of the video clip, wherein one first priority tendency table records the objects whose occurrence counts rank in the top R among the corresponding 5 consecutive T_i, and R is a positive integer;
a second generating module, configured to generate a second priority tendency table based on every 5 consecutive first priority tendency tables among the obtained first priority tendency tables, to obtain a plurality of second priority tendency tables of the video clip, wherein one second priority tendency table records the objects whose occurrence counts rank in the top R among the corresponding 5 consecutive first priority tendency tables;
a third generating module, configured to generate a third priority tendency table based on every 5 consecutive second priority tendency tables among the obtained second priority tendency tables, to obtain a plurality of third priority tendency tables of the video clip, wherein one third priority tendency table records the objects whose occurrence counts rank in the top R among the corresponding 5 consecutive second priority tendency tables;
a fourth generating module, configured to generate a fourth priority tendency table of the video clip based on every 5 consecutive third priority tendency tables among the obtained third priority tendency tables, wherein the fourth priority tendency table records the objects whose occurrence counts rank in the top R among the corresponding 5 consecutive third priority tendency tables;
and the fifth generating module is used for generating the character graph of the video clip based on the fourth priority tendency table of the video clip and the occurrence duration of each object in the fourth priority tendency table of the video clip.
Optionally, as an embodiment, the target control displays scenario introduction information of the second video segment.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device for implementing various embodiments of the present invention, and as shown in fig. 7, the electronic device 700 includes but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, a power supply 711, and the like. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 7 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like. Wherein:
a processor 710, configured to determine, in a process of playing a target video by the electronic device, a second video segment having a scenario association relationship with a first playing position of the target video, where the second video segment is located before the first playing position of the target video; displaying a target control at a preset position of a video playing interface of the target video;
the user input unit 707 is configured to receive a target operation for the target control;
the processor 710 is further configured to respond to the target operation and play a video picture corresponding to the second video segment.
Therefore, in the embodiment of the present invention, in the process of playing the target video by the electronic device, the second video segment having a plot association relation with the first playing position of the target video can be determined, and the target control is displayed on the video playing interface of the target video, so that the user can, by operating the target control, trigger the electronic device to play the second video segment having the plot association relation with the first playing position of the target video. Compared with the prior art, in the embodiment of the invention a specific control can be displayed on the video playing interface during playback, and by operating this control the user can trigger the electronic device to automatically locate and play the preceding video segment whose plot is associated with the current playing position. The user can therefore quickly relate the preceding segment to the current plot and grasp the ins and outs of the story, which simplifies the operation of locating preceding segments in video skip-playing scenarios and improves video playing efficiency.
Optionally, as an embodiment, the determining a second video segment having a plot association relationship with the first playing position of the target video includes:
determining a video clip where a first playing position of the target video is located;
determining, according to a preset relation table and the video clip where the first playing position is located, the video clip corresponding to the video clip where the first playing position is located, where the preset relation table includes a plurality of mapping relations, and each mapping relation records the correspondence of two or more video clips having a plot association relationship;
and determining the video clip corresponding to the video clip where the first playing position is located as the second video clip, as illustrated in the lookup sketch below.
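For illustration only, a minimal Python sketch of this lookup; the clip boundaries, the relation-table structure, and every name below are assumptions made for the sketch, not part of the disclosure:

    from bisect import bisect_right

    # Each video clip is (start_s, end_s, clip_id); boundaries are illustrative.
    clips = [(0, 300, 0), (300, 620, 1), (620, 900, 2)]

    # Preset relation table: clip_id -> ids of earlier, plot-associated clips.
    relation_table = {2: [0]}

    def clip_at(position_s):
        """Return the id of the video clip containing the playing position."""
        starts = [start for start, _, _ in clips]
        return clips[bisect_right(starts, position_s) - 1][2]

    def second_segments(first_play_position_s):
        """Earlier clips with a plot association to the current clip, if any."""
        return relation_table.get(clip_at(first_play_position_s), [])

    print(second_segments(700))  # clip at 700 s is clip 2 -> prints [0]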
Optionally, as an embodiment, before the step of determining, according to the preset relation table and the video clip where the first playing position is located, the video clip corresponding to the video clip where the first playing position is located, the method further includes generating the preset relation table, where the generating the preset relation table includes:
dividing the target video into a plurality of video segments;
generating, for each video clip, a corresponding character graph according to the picture content of the video clip, where the objects appearing in the video clip and the appearance duration of each object are recorded in the character graph;
comparing the character graphs corresponding to two video clips of the plurality of video clips, and if the objects whose appearance durations rank in the top N positions are the same in the two character graphs, determining the video clips corresponding to the two character graphs as two video clips having a plot association relationship, where N is an integer;
and generating the preset relation table based on the determined video segments having a plot association relationship (a comparison sketch follows this list).
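A minimal Python sketch of the top-N comparison, assuming a character graph is a mapping from object to total appearance duration in seconds; the graph format and all names are illustrative assumptions:

    from itertools import combinations

    def top_n_objects(character_graph, n):
        """Objects whose appearance durations rank in the top N positions."""
        ranked = sorted(character_graph, key=character_graph.get, reverse=True)
        return set(ranked[:n])

    def build_relation_table(character_graphs, n=3):
        """Record every pair of clips whose top-N objects coincide."""
        table = []
        for (i, g1), (j, g2) in combinations(enumerate(character_graphs), 2):
            if top_n_objects(g1, n) == top_n_objects(g2, n):
                table.append((i, j))  # clips i and j share a plot association
        return table

    graphs = [
        {"hero": 120.0, "villain": 80.0, "dog": 5.0},   # clip 0
        {"clerk": 40.0, "extra": 12.0, "cat": 3.0},     # clip 1
        {"hero": 90.0, "villain": 60.0, "dog": 30.0},   # clip 2
    ]
    print(build_relation_table(graphs))  # -> [(0, 2)]

Comparing top-N sets makes the relation symmetric, so each mapping naturally covers two or more clips.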
Optionally, as an embodiment, the generating, for each video segment, a corresponding character graph based on the picture content of the video segment includes:
for each video clip, sampling the video clip every 5 frames to obtain a sampling frame sequence {P1, P2, …, PM} of the video clip, where M is the number of video frames in the sampling frame sequence, Pi is the i-th video frame in the sampling frame sequence, 1 ≤ i ≤ M, and Pi is located before Pi+1 in the video segment;
for each Pi, performing object detection on the Pi to obtain an object set Ti of the Pi;
generating a first priority tendency table based on every 5 consecutive Ti among the obtained Ti, to obtain a plurality of first priority tendency tables of the video clip, where one first priority tendency table records the objects whose occurrence counts rank in the top R positions across the corresponding 5 consecutive Ti, and R is a positive integer;
generating a second priority tendency table based on every 5 consecutive first priority tendency tables among the obtained plurality of first priority tendency tables, to obtain a plurality of second priority tendency tables of the video clip, where one second priority tendency table records the objects whose occurrence counts rank in the top R positions across the corresponding 5 consecutive first priority tendency tables;
generating a third priority tendency table based on every 5 consecutive second priority tendency tables among the obtained plurality of second priority tendency tables, to obtain a plurality of third priority tendency tables of the video clip, where one third priority tendency table records the objects whose occurrence counts rank in the top R positions across the corresponding 5 consecutive second priority tendency tables;
generating a fourth priority tendency table of the video clip based on every 5 consecutive third priority tendency tables among the obtained plurality of third priority tendency tables, where the fourth priority tendency table records the objects whose occurrence counts rank in the top R positions across the corresponding 5 consecutive third priority tendency tables;
and generating the character graph of the video clip based on the fourth priority tendency table of the video clip and the appearance duration of each object in that table (see the sketch after this list).
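A minimal Python sketch of this four-level aggregation, assuming non-overlapping groups of five (a sliding window is an equally plausible reading of "every 5 consecutive"); detect_objects, frame_seconds, and all other names are stand-ins for the sketch rather than the patented implementation:

    from collections import Counter

    R = 3  # table width; R is a positive integer per the description

    def top_r(groups, r=R):
        """Objects whose occurrence counts rank in the top R positions."""
        counts = Counter(obj for group in groups for obj in group)
        return [obj for obj, _ in counts.most_common(r)]

    def aggregate(tables):
        """Fold each group of 5 consecutive tables into one top-R table."""
        return [top_r(tables[i:i + 5]) for i in range(0, len(tables) - 4, 5)]

    def character_graph(frames, detect_objects, frame_seconds):
        """Sample every 5th frame, detect objects, then aggregate four times."""
        sampled = frames[::5]
        tables = [detect_objects(f) for f in sampled]  # the object sets Ti
        for _ in range(4):  # first, second, third, fourth tendency tables
            tables = aggregate(tables)
        final = tables[0] if tables else []
        # Appearance duration: sampled frames containing the object, scaled
        # back up by the sampling interval (an assumption of this sketch).
        return {obj: sum(obj in detect_objects(f) for f in sampled)
                     * 5 * frame_seconds
                for obj in final}

With non-overlapping groups, one fourth-level table consumes 5^4 = 625 object sets, that is, 3125 source frames; a shorter clip simply yields an empty graph in this sketch.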
Optionally, as an embodiment, the target control displays scenario introduction information of the second video segment.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 701 may be used for receiving and sending signals during a message transmission/reception process or a call; specifically, it receives downlink data from a base station and delivers the data to the processor 710 for processing, and it sends uplink data to the base station. In general, the radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 701 may also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 702, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 703 may convert audio data received by the radio frequency unit 701 or the network module 702 or stored in the memory 709 into an audio signal and output as sound. Also, the audio output unit 703 may also provide audio output related to a specific function performed by the electronic apparatus 700 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.
The input unit 704 is used to receive audio or video signals. The input unit 704 may include a graphics processing unit (GPU) 7041 and a microphone 7042; the graphics processor 7041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image may be displayed on the display unit 706. The image processed by the graphics processor 7041 may be stored in the memory 709 (or other storage medium) or transmitted via the radio frequency unit 701 or the network module 702. The microphone 7042 may receive sound and process it into audio data. In the case of a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 701 and output.
The electronic device 700 also includes at least one sensor 705, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 7061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 7061 and/or a backlight when the electronic device 700 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 705 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 706 is used to display information input by the user or information provided to the user. The Display unit 706 may include a Display panel 7061, and the Display panel 7061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 707 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 7071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 7071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 710, and receives and executes commands from the processor 710. In addition, the touch panel 7071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 7071, the user input unit 707 may also include other input devices 7072. In particular, the other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described here again.
Further, the touch panel 7071 may be overlaid on the display panel 7061, and when the touch panel 7071 detects a touch operation on or near the touch panel 7071, the touch operation is transmitted to the processor 710 to determine the type of the touch event, and then the processor 710 provides a corresponding visual output on the display panel 7061 according to the type of the touch event. Although the touch panel 7071 and the display panel 7061 are shown in fig. 7 as two separate components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 7071 and the display panel 7061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 708 is an interface for connecting an external device to the electronic apparatus 700. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 708 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 700 or may be used to transmit data between the electronic apparatus 700 and the external device.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the device (such as audio data and a phonebook). Further, the memory 709 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 710 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 709 and calling data stored in the memory 709, thereby monitoring the whole electronic device. Processor 710 may include one or more processing units; preferably, the processor 710 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 710.
The electronic device 700 may also include a power supply 711 (e.g., a battery) for providing power to the various components, and preferably, the power supply 711 may be logically coupled to the processor 710 via a power management system, such that functions of managing charging, discharging, and power consumption are performed via the power management system.
In addition, the electronic device 700 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program, when executed by the processor, implements each process of any of the above embodiments of the video playing method, and can achieve the same technical effect, and details are not repeated here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of any of the above video playing method embodiments, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this specification, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (11)

1. A video playing method applied to an electronic device, characterized by comprising the following steps:
in the process of playing a target video by the electronic device, determining a second video segment having a plot association relationship with a first playing position of the target video, wherein the second video segment is located before the first playing position of the target video;
displaying a target control at a preset position of a video playing interface of the target video;
and receiving a target operation for the target control, and in response to the target operation, playing a video picture corresponding to the second video clip.
2. The method according to claim 1, wherein the determining a second video segment having a plot association relationship with the first playing position of the target video comprises:
determining a video clip where a first playing position of the target video is located;
determining, according to a preset relation table and the video clip where the first playing position is located, a video clip corresponding to the video clip where the first playing position is located, wherein the preset relation table comprises a plurality of mapping relations, and each mapping relation records the correspondence of two or more video clips having a plot association relationship;
and determining the video clip corresponding to the video clip where the first playing position is located as a second video clip.
3. The method according to claim 2, wherein before the step of determining, according to the preset relation table and the video clip where the first playing position is located, the video clip corresponding to the video clip where the first playing position is located, the method further comprises generating the preset relation table, wherein the generating the preset relation table comprises:
dividing the target video into a plurality of video segments;
generating, for each video clip, a corresponding character graph according to the picture content of the video clip, wherein the objects appearing in the video clip and the appearance duration of each object are recorded in the character graph;
comparing the character graphs corresponding to two video clips of the plurality of video clips, and if the objects whose appearance durations rank in the top N positions are the same in the two character graphs, determining the video clips corresponding to the two character graphs as two video clips having a plot association relationship, wherein N is an integer;
and generating the preset relation table based on the determined video segments having a plot association relationship.
4. The method of claim 3, wherein generating, for each video clip, a corresponding character graph based on the picture content of the video clip comprises:
for each video clip, sampling the video clip every 5 frames to obtain a sampling frame sequence {P1, P2, …, PM} of the video clip, wherein M is the number of video frames in the sampling frame sequence, Pi is the i-th video frame in the sampling frame sequence, 1 ≤ i ≤ M, and Pi is located before Pi+1 in the video segment;
for each Pi, performing object detection on the Pi to obtain an object set Ti of the Pi;
generating a first priority tendency table based on every 5 consecutive Ti among the obtained Ti, to obtain a plurality of first priority tendency tables of the video segment, wherein one first priority tendency table records the objects whose occurrence counts rank in the top R positions across the corresponding 5 consecutive Ti, and R is a positive integer;
generating a second priority tendency table based on every 5 consecutive first priority tendency tables among the obtained plurality of first priority tendency tables, to obtain a plurality of second priority tendency tables of the video clip, wherein one second priority tendency table records the objects whose occurrence counts rank in the top R positions across the corresponding 5 consecutive first priority tendency tables;
generating a third priority tendency table based on every 5 consecutive second priority tendency tables among the obtained plurality of second priority tendency tables, to obtain a plurality of third priority tendency tables of the video clip, wherein one third priority tendency table records the objects whose occurrence counts rank in the top R positions across the corresponding 5 consecutive second priority tendency tables;
generating a fourth priority tendency table of the video clip based on every 5 consecutive third priority tendency tables among the obtained plurality of third priority tendency tables, wherein the fourth priority tendency table records the objects whose occurrence counts rank in the top R positions across the corresponding 5 consecutive third priority tendency tables;
and generating a character graph of the video clip based on the fourth priority tendency table of the video clip and the appearance duration of each object in the fourth priority tendency table of the video clip.
5. The method according to any one of claims 1 to 4, wherein the target control displays scenario introduction information of the second video segment.
6. An electronic device, characterized in that the electronic device comprises:
the electronic equipment comprises a determining unit, a judging unit and a judging unit, wherein the determining unit is used for determining a second video segment which has a plot incidence relation with a first playing position of a target video in the process of playing the target video by the electronic equipment, and the second video segment is positioned in front of the first playing position of the target video;
the display unit is used for displaying a target control at a preset position of a video playing interface of the target video;
a receiving unit, configured to receive a target operation for the target control;
and the playing unit is used for responding to the target operation and playing the video picture corresponding to the second video clip.
7. The electronic device according to claim 6, wherein the determination unit includes:
the first determining subunit is used for determining a video clip where a first playing position of the target video is located;
the second determining subunit is configured to determine, according to a preset relationship table and the video clip in which the first playing position is located, a video clip corresponding to the video clip in which the first playing position is located; the preset relation table comprises a plurality of mapping relations, and the corresponding relation of two or more than two video clips with the incidence relation of the scenario is recorded in each mapping relation;
and the third determining subunit is configured to determine the video segment corresponding to the video segment where the first playing position is located as the second video segment.
8. The electronic device of claim 7, further comprising: a generating unit, wherein the generating unit comprises:
a dividing subunit, configured to divide the target video into a plurality of video segments;
the system comprises a first generation subunit, a second generation subunit, a third generation subunit and a fourth generation subunit, wherein the first generation subunit is used for generating a corresponding character graph for each video clip based on the picture content of the video clip, and objects appearing in the video clip and the appearance duration of each object are recorded in the character graph;
a fourth determining subunit, configured to compare the character graphs corresponding to two of the plurality of video segments, and if the objects whose appearance durations rank in the top N positions are the same in the two character graphs, determine the video segments corresponding to the two character graphs as two video segments having a plot association relationship, wherein N is an integer;
and a second generating subunit, configured to generate the preset relation table based on the determined video segments having a plot association relationship.
9. The electronic device of claim 8, wherein the first generating subunit comprises:
a frame sampling module, configured to sample, for each video segment, the video segment every 5 frames to obtain a sampling frame sequence {P1, P2, …, PM} of the video segment, wherein M is the number of video frames in the sampling frame sequence, Pi is the i-th video frame in the sampling frame sequence, 1 ≤ i ≤ M, and Pi is located before Pi+1 in the video segment;
an object detection module, configured to perform object detection on each Pi to obtain an object set Ti of the Pi;
a first generating module, configured to generate a first priority tendency table based on every 5 consecutive Ti among the obtained Ti, to obtain a plurality of first priority tendency tables of the video segment, wherein one first priority tendency table records the objects whose occurrence counts rank in the top R positions across the corresponding 5 consecutive Ti, and R is a positive integer;
a second generating module, configured to generate a second priority tendency table based on every 5 consecutive first priority tendency tables among the obtained plurality of first priority tendency tables, to obtain a plurality of second priority tendency tables of the video clip, wherein one second priority tendency table records the objects whose occurrence counts rank in the top R positions across the corresponding 5 consecutive first priority tendency tables;
a third generating module, configured to generate a third priority tendency table based on every 5 consecutive second priority tendency tables among the obtained plurality of second priority tendency tables, to obtain a plurality of third priority tendency tables of the video clip, wherein one third priority tendency table records the objects whose occurrence counts rank in the top R positions across the corresponding 5 consecutive second priority tendency tables;
a fourth generating module, configured to generate a fourth priority tendency table of the video segment based on every 5 consecutive third priority tendency tables among the obtained plurality of third priority tendency tables, wherein the fourth priority tendency table records the objects whose occurrence counts rank in the top R positions across the corresponding 5 consecutive third priority tendency tables;
and a fifth generating module, configured to generate the character graph of the video clip based on the fourth priority tendency table of the video clip and the appearance duration of each object in that table.
10. The electronic device according to any one of claims 6 to 9, wherein the target control displays scenario introduction information of the second video segment.
11. An electronic device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the video playback method as claimed in any one of claims 1 to 5.
CN202010131060.2A 2020-02-28 2020-02-28 Video playing method and electronic equipment Active CN111314784B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010131060.2A CN111314784B (en) 2020-02-28 2020-02-28 Video playing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN111314784A true CN111314784A (en) 2020-06-19
CN111314784B CN111314784B (en) 2021-08-31

Family

ID=71148431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010131060.2A Active CN111314784B (en) 2020-02-28 2020-02-28 Video playing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111314784B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090193478A1 (en) * 2006-04-24 2009-07-30 Jones David D Content Shuffling System and Method
US8914386B1 (en) * 2010-09-13 2014-12-16 Audible, Inc. Systems and methods for determining relationships between stories
US9584874B1 (en) * 2014-06-16 2017-02-28 Juan José Farías Portal for collection and distribution of web-based audiovisual content blocks and creation of audience statistics
US20190335243A1 (en) * 2015-11-19 2019-10-31 Google Llc Reminders of Media Content Referenced in Other Media Content
CN108769831A (en) * 2018-05-30 2018-11-06 互影科技(北京)有限公司 The generation method and device of video advance notice
CN110149558A (en) * 2018-08-02 2019-08-20 腾讯科技(深圳)有限公司 A kind of video playing real-time recommendation method and system based on content recognition
CN108924604A (en) * 2018-08-22 2018-11-30 百度在线网络技术(北京)有限公司 Method and apparatus for playing video
CN109309860A (en) * 2018-10-16 2019-02-05 腾讯科技(深圳)有限公司 Methods of exhibiting and device, storage medium, the electronic device of prompt information
CN110430461A (en) * 2019-08-28 2019-11-08 腾讯科技(深圳)有限公司 A kind of method, apparatus and video playback apparatus controlling video playing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU, Juan: "Research and Implementation of Video Segmentation and Catalog Generation", China Master's Theses Full-text Database (Information Science and Technology) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111760266A (en) * 2020-07-01 2020-10-13 网易(杭州)网络有限公司 Game live broadcast method and device and electronic equipment
CN112487243A (en) * 2020-11-27 2021-03-12 上海连尚网络科技有限公司 Video display method, device and storage medium
WO2022111248A1 (en) * 2020-11-27 2022-06-02 上海连尚网络科技有限公司 Video displaying method and device, and storage medium
WO2022179415A1 (en) * 2021-02-25 2022-09-01 腾讯科技(深圳)有限公司 Audiovisual work display method and apparatus, and device and medium
CN115134648A (en) * 2021-03-26 2022-09-30 腾讯科技(深圳)有限公司 Video playing method, device, equipment and computer readable storage medium
CN113938712A (en) * 2021-10-13 2022-01-14 北京奇艺世纪科技有限公司 Video playing method and device and electronic equipment
CN113938712B (en) * 2021-10-13 2023-10-10 北京奇艺世纪科技有限公司 Video playing method and device and electronic equipment
CN114827736A (en) * 2022-04-24 2022-07-29 北京达佳互联信息技术有限公司 Video playback method and device, electronic equipment and storage medium
CN114827736B (en) * 2022-04-24 2024-04-30 北京达佳互联信息技术有限公司 Video playback method and device, electronic equipment and storage medium
CN115022705A (en) * 2022-05-24 2022-09-06 咪咕文化科技有限公司 Video playing method, device and equipment
WO2024046266A1 (en) * 2022-09-02 2024-03-07 维沃移动通信有限公司 Video management method and apparatus, electronic device, and readable storage medium

Also Published As

Publication number Publication date
CN111314784B (en) 2021-08-31

Similar Documents

Publication Publication Date Title
CN111314784B (en) Video playing method and electronic equipment
CN108762954B (en) Object sharing method and mobile terminal
CN110740259B (en) Video processing method and electronic equipment
CN110087117B (en) Video playing method and terminal
CN109597556B (en) Screen capturing method and terminal
CN109078319B (en) Game interface display method and terminal
CN108737904B (en) Video data processing method and mobile terminal
CN108763316B (en) Audio list management method and mobile terminal
CN110784771B (en) Video sharing method and electronic equipment
CN109213416B (en) Display information processing method and mobile terminal
CN108616771B (en) Video playing method and mobile terminal
CN109922294B (en) Video processing method and mobile terminal
CN109618218B (en) Video processing method and mobile terminal
CN109753202B (en) Screen capturing method and mobile terminal
CN110958485A (en) Video playing method, electronic equipment and computer readable storage medium
CN110719527A (en) Video processing method, electronic equipment and mobile terminal
CN111212316B (en) Video generation method and electronic equipment
CN110062281B (en) Play progress adjusting method and terminal equipment thereof
CN109672845B (en) Video call method and device and mobile terminal
CN111143614A (en) Video display method and electronic equipment
CN111158815A (en) Dynamic wallpaper fuzzy method, terminal and computer readable storage medium
CN111050214A (en) Video playing method and electronic equipment
CN108628534B (en) Character display method and mobile terminal
CN107920272B (en) Bullet screen screening method and device and mobile terminal
CN111049977B (en) Alarm clock reminding method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant