CN111225263A - Video playing control method and device, electronic equipment and storage medium - Google Patents

Video playing control method and device, electronic equipment and storage medium

Info

Publication number
CN111225263A
Authority
CN
China
Prior art keywords
video
target
frame
sub
time information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010053822.1A
Other languages
Chinese (zh)
Other versions
CN111225263B (en)
Inventor
郑权彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huya Technology Co Ltd
Original Assignee
Guangzhou Huya Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huya Technology Co Ltd filed Critical Guangzhou Huya Technology Co Ltd
Priority to CN202010053822.1A priority Critical patent/CN111225263B/en
Publication of CN111225263A publication Critical patent/CN111225263A/en
Application granted granted Critical
Publication of CN111225263B publication Critical patent/CN111225263B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
                    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
                        • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
                            • H04N 21/4302 Content synchronisation processes, e.g. decoder synchronisation
                                • H04N 21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
                            • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
                            • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
                                • H04N 21/44016 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
                    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
                        • H04N 21/81 Monomedia components thereof
                            • H04N 21/8166 Monomedia components thereof involving executable data, e.g. software
                                • H04N 21/8173 End-user applications, e.g. Web browser, game
                                • H04N 21/8193 Dedicated tools, e.g. video decoder software or IPMP tool
                        • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
                            • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
                                • H04N 21/8456 Structuring of content by decomposing the content in the time domain, e.g. in time segments
                        • H04N 21/85 Assembly of content; Generation of multimedia applications
                            • H04N 21/858 Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application provides a video playing control method and device, an electronic device, and a storage medium, relating to the technical field of video playing. In the present application, target display time information is first obtained based on the display time information of the first video frame displayed in a target video stream and the play duration information of the target video stream. Second, a target sub-video frame of each target sub-video stream is determined based on the target display time information, where the target video stream includes a plurality of target sub-video streams. Then, a splicing display operation is performed based on the target sub-video frames of the plurality of target sub-video streams. This method can solve the prior-art problem of poor playback control when video is played through a web client.

Description

Video playing control method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of video playing technologies, and in particular, to a video playing control method and apparatus, an electronic device, and a storage medium.
Background
Video playback can generally be performed in one of two ways: through a native client or through a web client. Because its program code is complete, a native client achieves better control when controlling video playback and can therefore provide the user with a better playback strategy.
However, the inventors have found through research that in the prior art, when video playback is controlled by a web client (web player), the control effect is poor.
Disclosure of Invention
In view of the above, an object of the present application is to provide a video playing control method and apparatus, an electronic device, and a storage medium, so as to solve the prior-art problem that the playback control effect is poor when video is played through a web client.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
a video playing control method comprises the following steps:
obtaining target display time information based on display time information of a first frame of video frame displayed in a target video stream and playing time length information of the target video stream;
determining a target sub-video frame of each target sub-video stream based on the target display time information, wherein the target video stream comprises a plurality of target sub-video streams;
and executing splicing display operation based on the target sub video frames of the plurality of target sub video streams.
In a preferred option of the embodiment of the present application, in the video playing control method, the step of obtaining the target display time information based on the display time information of the first frame of video frame displayed in the target video stream and the playing time information of the target video stream includes:
acquiring analysis time information of the first video frame displayed in the target video stream, and taking the analysis time information as the display time information of the first video frame;
and obtaining target display time information based on the display time information and the playing time length information of the target video stream.
In a preferred option of the embodiment of the present application, in the video playing control method, the step of obtaining the analysis time information of the first frame of video frame displayed in the target video stream includes:
acquiring a package file header of a first frame of video frame displayed in a target video stream, wherein the target video stream is in a streaming media format;
and analyzing the encapsulated file header to obtain timestamp information, and using the timestamp information as analysis time information of the first frame of video frame.
In a preferred option of the embodiment of the present application, in the video playing control method, the step of determining the target sub video frame of each target sub video stream based on the target display time information includes:
determining a plurality of target sub-video streams included in the target video stream;
and for each determined target sub-video stream, determining a target sub-video frame of the target sub-video stream based on the target display time information.
In a preferred option of the embodiment of the present application, in the video playing control method, the step of determining a plurality of target sub-video streams included in the target video stream includes:
generating a content selection parameter in response to a preset operation performed by a user on the currently displayed video frame, wherein the preset operation includes a direction selection operation, a zoom selection operation, and a position selection operation;
among the plurality of sub-video streams, a plurality of sub-video streams are determined based on the content selection parameter as a plurality of target sub-video streams included in the target video stream.
In a preferred option of the embodiment of the present application, in the video playing control method, the step of executing a splicing display operation based on the target sub video frames of the plurality of target sub video streams includes:
performing splicing operation based on target sub-video frames of the plurality of target sub-video streams to obtain target video frames, wherein the target video frames comprise each target sub-video frame;
and performing a display operation on at least part of the video data in the target video frame.
In a preferred option of the embodiment of the present application, in the video playing control method, the step of performing a display operation on at least a part of video data in the target video frame includes:
determining at least part of video data based on the obtained display parameters in the target video frame;
performing a display operation based on the at least part of the video data.
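The two sub-steps above can be sketched in TypeScript. This is a minimal illustration, assuming a hypothetical offset-and-zoom shape for the display parameters and an invented function name; the patent text itself does not specify either:

```typescript
// Hypothetical display parameters for showing part of the stitched target
// frame. The field names and semantics are assumptions for illustration.
interface DisplayParams {
  offsetX: number;       // left edge of the viewport within the target frame, px
  offsetY: number;       // top edge of the viewport within the target frame, px
  zoom: number;          // 1 = viewport-sized region, 2 = magnify 2x
  viewportWidth: number;
  viewportHeight: number;
}

interface Rect { x: number; y: number; width: number; height: number; }

// Determine "at least part of the video data" to display: compute the
// sub-rectangle of the target frame selected by the display parameters,
// clamped to the frame bounds.
function visibleRegion(frameWidth: number, frameHeight: number,
                       p: DisplayParams): Rect {
  const width = Math.min(p.viewportWidth / p.zoom, frameWidth);
  const height = Math.min(p.viewportHeight / p.zoom, frameHeight);
  const x = Math.min(Math.max(p.offsetX, 0), frameWidth - width);
  const y = Math.min(Math.max(p.offsetY, 0), frameHeight - height);
  return { x, y, width, height };
}
```

In a web player, the returned rectangle would typically be passed to `CanvasRenderingContext2D.drawImage` as the source rectangle when rendering the target frame.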
An embodiment of the present application further provides a video playing control device, including:
the information acquisition module is used for acquiring target display time information based on the display time information of a first frame of video frame displayed in the target video stream and the playing time length information of the target video stream;
a video frame determination module, configured to determine a target sub-video frame of each target sub-video stream based on the target display time information, where the target video stream includes a plurality of target sub-video streams;
and the operation execution module is used for executing splicing display operation based on the target sub-video frames of the plurality of target sub-video streams.
On the basis, an embodiment of the present application further provides an electronic device, including:
a memory for storing a computer program;
and the processor is connected with the memory and is used for executing the computer program so as to realize the video playing control method.
On the basis of the above, an embodiment of the present application further provides an electronic device, on which a computer program is stored, and the program, when executed, implements the video playback control method described above.
According to the video playing control method and apparatus, the electronic device, and the storage medium described above, the target display time information is obtained based on the display time information of the first video frame displayed in the target video stream and the play duration information of the target video stream, so that the plurality of target sub-video frames can be determined (synchronously) based on the target display time information, and the splicing display operation can then be performed on the determined target sub-video frames. In this way, the target sub-video frames can be synchronized during splicing display without acquiring the display time information of every target sub-video frame. This addresses the prior-art difficulty that, because a web client has no application program interface for acquiring display time information, the display time information of each target sub-video frame is hard to obtain directly, and video splicing display therefore cannot be performed. It thus solves the prior-art problem of poor playback control (for example, only a single video stream can be played) when video is played through a web client, and has high practical value.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
Fig. 1 is a schematic view of application interaction between an electronic device and a video server according to an embodiment of the present application.
Fig. 2 is a flowchart illustrating steps included in a video playback control method according to an embodiment of the present application.
Fig. 3 is a flowchart illustrating steps included in step S110 in fig. 2.
Fig. 4 is a flowchart illustrating steps included in step S120 in fig. 2.
Fig. 5 is a schematic effect diagram of a user performing a first preset operation according to an embodiment of the present application.
Fig. 6 is a schematic diagram illustrating an effect of a second preset operation performed by a user according to an embodiment of the present application.
Fig. 7 is a schematic diagram illustrating an effect of a third preset operation performed by a user according to an embodiment of the present application.
Fig. 8 is a flowchart illustrating steps included in step S130 in fig. 2.
Fig. 9 is a schematic effect diagram of performing a splicing operation on a video frame according to an embodiment of the present application.
Fig. 10 is a schematic diagram illustrating an effect of performing a display operation on a target video frame according to an embodiment of the present application.
Fig. 11 is a schematic diagram illustrating another effect of performing a display operation on a target video frame according to an embodiment of the present application.
Fig. 12 is a block diagram illustrating functional modules included in a video playback control apparatus according to an embodiment of the present disclosure.
Icon: 10-an electronic device; 12-a memory; 14-a processor; 100-video playback control means; 110-an information obtaining module; 120-a video frame determination module; 130-operation execution module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
As shown in fig. 1, an embodiment of the present application provides an electronic device 10, which may include a memory 12, a processor 14, and a video playback control apparatus 100.
The memory 12 and the processor 14 are electrically connected, directly or indirectly, to enable data transmission or interaction; for example, they may be connected through one or more communication buses or signal lines. The video playback control apparatus 100 includes at least one software functional module (which may be a web client) that can be stored in the memory 12 in the form of software or firmware. The processor 14 is configured to execute the executable computer programs stored in the memory 12, such as the software functional modules and computer programs included in the video playback control apparatus 100, so as to implement the video playing control method provided by the embodiment of the present application.
Alternatively, the memory 12 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
Also, the Processor 14 may be a general-purpose Processor including a Central Processing Unit (CPU), a Network Processor (NP), a System on chip (SoC), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components.
It is to be understood that the structure shown in fig. 1 is only an illustration, and the electronic device 10 may further include more or less components than those shown in fig. 1, or have a different configuration than that shown in fig. 1, for example, and may further include a communication unit for information interaction with other devices (e.g., a video server).
The electronic device 10 may include, but is not limited to, a mobile phone, a tablet computer, a computer, and other terminal devices with data processing capability. Also, in some examples, the electronic device 10 may be a live device, such as a terminal device used by a viewer to view video. The video server may be a live server for providing a video stream to a live device.
With reference to fig. 2, an embodiment of the present application further provides a video playing control method applicable to the electronic device 10, where the method steps defined in the related flow can be implemented by the electronic device 10. The specific flow shown in fig. 2 is described in detail below.
Step S110, obtaining target display time information based on the display time information of the first frame video frame displayed in the target video stream and the play time information of the target video stream.
In this embodiment, after the target video stream is started to be played, that is, after the first frame of video frame is displayed, the display time information of the first frame of video frame displayed in the target video stream may be obtained first, and the play time length information of the target video stream may be obtained.
Then, the target display time information may be obtained based on the display time information and the play duration information of the first frame of video frame.
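Step S110 can be sketched as a one-line computation, under the assumption (made explicit later in the text) that the target display time is the sum of the first displayed frame's timestamp and the elapsed play duration. The function name and millisecond units are illustrative, not from the patent:

```typescript
// Step S110 sketch: target display time = display time of the first displayed
// frame + elapsed play duration. Both arguments are in milliseconds.
function targetDisplayTime(firstFramePtsMs: number, playedMs: number): number {
  return firstFramePtsMs + playedMs;
}
```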
Step S120, determining the target sub video frame of each target sub video stream based on the target display time information.
In the present embodiment, after the target display time information is obtained based on step S110, the target sub video frame of each target sub video stream may be determined based on the target display time information.
Wherein the target video stream includes a plurality of the target sub-video streams, each of which includes a plurality of sub-video frames in a time direction. In this way, a frame of sub-video frame can be respectively determined as a target sub-video frame in a plurality of frames of sub-video frames included in each target sub-video stream based on the target display time information, so as to obtain a plurality of frames of target sub-video frames.
And step S130, executing splicing display operation based on the target sub video frames of the plurality of target sub video streams.
In the present embodiment, after determining the target sub video frame of each of the target sub video streams based on step S120, the splicing display operation may be performed based on the determined multiple frames of the target sub video frames. In this way, splicing display can be performed between different target sub-video streams.
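The splicing step can be illustrated with a small TypeScript sketch. The layout policy (a horizontal strip) and all names are assumptions; the patent only requires that the synchronized sub-frames be stitched into one target frame and then displayed:

```typescript
// One synchronized target sub-frame per target sub-video stream.
interface SubFrame { streamId: string; width: number; height: number; }

// Where each sub-frame lands inside the stitched target frame.
interface Placement { streamId: string; x: number; y: number; }

// Lay the sub-frames out left to right into a single target frame and
// report its total size. (The actual pixels would then be drawn per
// placement, e.g. onto one canvas.)
function horizontalStitchLayout(frames: SubFrame[]):
    { placements: Placement[]; width: number; height: number } {
  const placements: Placement[] = [];
  let x = 0;
  let height = 0;
  for (const f of frames) {
    placements.push({ streamId: f.streamId, x, y: 0 });
    x += f.width;
    height = Math.max(height, f.height);
  }
  return { placements, width: x, height };
}
```

In a browser-based player, each sub-frame would then be rendered at its placement with `ctx.drawImage(...)` on a single canvas, producing the stitched target frame.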
Based on the method, the synchronization of the target sub-video frames can be realized in the splicing display process without acquiring the display time information of each frame of target sub-video frames, so that the problem that the display time information of each frame of target sub-video frames is difficult to directly acquire because a webpage client does not have an application program interface for acquiring the display time information in the prior art is solved, the problem that the video splicing display cannot be performed due to the fact that the display time information cannot be acquired is solved, and the problem that the playing control effect is poor (such as only a single video stream can be played) when the video is played based on the webpage client in the prior art is solved.
It should be noted that, in step S110, a specific manner of obtaining the target display time information based on the display time information and the playing time length information is not limited, and may be selected according to an actual application requirement, and if an obtaining manner based on the display time information is different, step S110 may include different sub-steps.
For example, in an alternative example, the start playing time information of the electronic device 10 actually starting to play the target video stream may be used as the display time information of the first frame video frame, so as to obtain the target display time information based on the start playing time information and the playing time length information.
In detail, in a specific application example, if the user selects to start playing the target video stream at time a, the display time information of the first frame of video frame may be the time a.
For another example, in another alternative example, in conjunction with fig. 3, step S110 may include step S111 and step S113, which are described in detail below.
Step S111, acquiring parsing time information of a first frame of video frame displayed in the target video stream, and using the parsing time as display time information of the first frame of video frame.
In this embodiment, after the target video stream is started to be played, the parsing time information of the first frame of video frame displayed in the target video stream may be obtained first, and the parsing time information may be used as the display time information of the first frame of video frame.
Step S113, obtaining target display time information based on the display time information and the play time length information of the target video stream.
In this embodiment, after obtaining the display time information (i.e., the parsing time information) of the first frame of video frame based on step S111, the target display time information may be obtained based on the display time information and the playing time length information of the target video stream.
It should be noted that the inventors of the present application have found through research that the parsing time information of the first video frame is equal to its display time information; the parsing time information can therefore be used as the display time information.
However, for every video frame in the target video stream other than the first, the parsing time information of the frame is not equal to its display time information. The display time information of such a frame therefore cannot be obtained by acquiring its parsing time information; that is, parsing time information does not fully reflect display time information.
Thus, through long-term research, the inventors of the present application found that the target display time information can be obtained based on the parsing time information of the first video frame and the play duration information.
Optionally, a specific manner of executing step S111 to obtain the parsing time information of the first frame of video frame is not limited, and may be selected according to actual application requirements.
For example, depending on the format of the target video stream, different manners may be selected to obtain the parsing time information of the first video frame. In a specific application example, the target video stream may be in a streaming media (FLV, Flash Video) format, so that step S111 may include the following sub-steps:
firstly, a package file header of a first frame video frame displayed in a target video stream can be obtained; secondly, the encapsulated file header may be parsed to obtain timestamp information, and the timestamp information may be used as parsing time information of the first frame of video frame.
A specific way of obtaining the timestamp information (PTS, Presentation Time Stamp) is to parse the package header (FLV Body) and read the 3-byte field in its data area, thereby obtaining the timestamp information.
It can be understood that, since the first frame of video frame is formed by splicing multiple sub-video frames, the encapsulated header of one sub-video frame can be parsed to obtain the timestamp information.
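A sketch of this parse in TypeScript follows. Per the published FLV specification, bytes 4..6 of an FLV tag hold a 24-bit timestamp in milliseconds and byte 7 (TimestampExtended) holds its upper 8 bits; the text above mentions only the 3-byte field, so treating byte 7 as the high byte is an assumption drawn from the FLV format, not from the patent:

```typescript
// Extract the timestamp (PTS, in milliseconds) from an FLV tag.
// FLV tag layout: TagType (1 byte), DataSize (3 bytes),
// Timestamp (3 bytes, big-endian), TimestampExtended (1 byte), StreamID (3 bytes).
function parseFlvTagTimestamp(tag: Uint8Array): number {
  const lower24 = (tag[4] << 16) | (tag[5] << 8) | tag[6]; // 3-byte timestamp field
  const extended = tag[7];                                  // upper 8 bits of the timestamp
  return extended * 0x1000000 + lower24;                    // full value in ms
}
```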
Optionally, the specific manner of executing step S113 to obtain the target display time information is not limited, and may be selected according to the actual application requirement.
For example, in an alternative example, the target display time information may be equal to the sum (e.g., x + y) of the display time information of the first video frame (e.g., x) and the play duration information (e.g., y).
It can be understood that the specific manner of obtaining the play time length information is not limited, and may be selected according to the actual application requirements.
For example, in an alternative example, a timer may be started after the target video stream begins playing, so as to count in real time the play duration information of the target video stream, such as 3 s, 10 s, 2 min, 30 min, and so on.
For another example, in another alternative example, a counter may be generated after the target video stream starts to be played, so as to count the playing frame number information of the target video stream in real time, and then the playing duration information (e.g., n × m) is calculated based on the playing frame number information (e.g., n frames) and the duration of each frame of video frame (e.g., m seconds).
It should be noted that, in step S120, a specific manner of determining the target sub video frame of each target sub video stream is not limited, and may be selected according to actual application requirements.
For example, in an alternative example, if the plurality of target sub-video streams included in the target video stream are fixed, when step S120 is executed, the target sub-video frame of each target sub-video stream in the target video stream may be determined based on the target display time information directly for the target sub-video stream.
For another example, in another alternative example, if a plurality of target video streams included in the target video stream may change, in order to ensure validity of the multi-frame target sub-video frame determined in step S120, with reference to fig. 4, step S120 may include step S121 and step S123, which are described below in detail.
Step S121, determining a plurality of target sub-video streams included in the target video stream.
In this embodiment, before determining the target sub-video frame, a plurality of target sub-video streams included in the target video stream may be determined.
For example, in a specific application example, if there are 5 sub-video streams (e.g., sub-video stream A, sub-video stream B, sub-video stream C, sub-video stream D, and sub-video stream E), 3 sub-video streams (e.g., sub-video stream B, sub-video stream C, and sub-video stream D) can be determined from the 5 sub-video streams as the plurality of target sub-video streams included in the target video stream.
Step S123, for each determined target sub-video stream, determining a target sub-video frame of the target sub-video stream based on the target display time information.
In this embodiment, after determining the plurality of target sub-video streams based on step S121, for each determined target sub-video stream, a target sub-video frame of the target sub-video stream may be determined based on the target display time information, so as to obtain a plurality of frames of target sub-video frames.
For example, in a specific application example, in combination with the foregoing example, the plurality of target sub-video streams are a sub-video stream B, a sub-video stream C, and a sub-video stream D.
Thus, a frame of video frame can be determined in the sub-video stream B based on the target display time information, as the target sub-video frame (e.g., target sub-video frame b) of the sub-video stream B; a frame of video frame can be determined in the sub-video stream C based on the target display time information, as the target sub-video frame (e.g., target sub-video frame c) of the sub-video stream C; and a frame of video frame can be determined in the sub-video stream D based on the target display time information, as the target sub-video frame (e.g., target sub-video frame d) of the sub-video stream D. Accordingly, in performing step S130, a splicing display operation may be performed based on the target sub-video frame b, the target sub-video frame c, and the target sub-video frame d.
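The per-stream determination in step S123 can be sketched as follows, modeling each sub-video stream as a list of frames carrying display times and picking, in each stream, the frame whose display time is closest to (but not after) the target display time. The data model and all names are illustrative assumptions:

```typescript
interface SubVideoFrame {
  displayTime: number; // display time information of this frame
  data: string;        // stand-in for the frame's video data
}

// Pick, in one sub-video stream, the latest frame whose display time does not
// exceed the target display time information.
function pickTargetFrame(frames: SubVideoFrame[], targetTime: number): SubVideoFrame | undefined {
  let best: SubVideoFrame | undefined;
  for (const frame of frames) {
    if (frame.displayTime <= targetTime && (best === undefined || frame.displayTime > best.displayTime)) {
      best = frame;
    }
  }
  return best;
}

// Repeat for every target sub-video stream to obtain the multi-frame result
// used by the splicing display operation.
function pickTargetFrames(
  streams: Record<string, SubVideoFrame[]>,
  targetTime: number,
): Record<string, SubVideoFrame> {
  const result: Record<string, SubVideoFrame> = {};
  for (const name of Object.keys(streams)) {
    const frame = pickTargetFrame(streams[name], targetTime);
    if (frame !== undefined) result[name] = frame;
  }
  return result;
}
```

Because every stream is queried with the same target display time, the selected frames are mutually synchronized without reading per-frame display time from the player.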
Optionally, the specific manner of determining the plurality of target sub-video streams in step S121 is not limited, and may be selected according to actual application requirements.
For example, in an alternative example, a plurality of target sub-video streams included in the target video stream may be updated based on a certain preset rule, and thus, the plurality of target sub-video streams may be determined based on the preset rule.
With reference to the foregoing example, in a specific application example, if there are 5 sub-video streams, which are respectively the sub-video stream A, the sub-video stream B, the sub-video stream C, the sub-video stream D, and the sub-video stream E, the preset rule may be:
in a first period of time, the sub-video stream A and the sub-video stream B are determined as the plurality of target sub-video streams included in the target video stream, that is, splicing display is performed based on the sub-video stream A and the sub-video stream B in the first period of time;
in a second period of time, the sub-video stream B and the sub-video stream C are determined as the plurality of target sub-video streams included in the target video stream, that is, splicing display is performed based on the sub-video stream B and the sub-video stream C in the second period of time;
in a third period of time, the sub-video stream C and the sub-video stream D are determined as the plurality of target sub-video streams included in the target video stream, that is, splicing display is performed based on the sub-video stream C and the sub-video stream D in the third period of time;
and in a fourth period of time, the sub-video stream D and the sub-video stream E are determined as the plurality of target sub-video streams included in the target video stream, that is, splicing display is performed based on the sub-video stream D and the sub-video stream E in the fourth period of time.
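A minimal sketch of such a preset rule, generalizing the four-period example above to a rotation through adjacent pairs of sub-video streams. The indexing scheme and names are assumptions for illustration:

```typescript
// Hypothetical preset rule: in period k, splice the k-th adjacent pair of
// sub-video streams, wrapping around after the last pair.
const SUB_STREAMS = ["A", "B", "C", "D", "E"];

function targetStreamsForPeriod(periodIndex: number): string[] {
  const start = periodIndex % (SUB_STREAMS.length - 1); // 4 adjacent pairs for 5 streams
  return [SUB_STREAMS[start], SUB_STREAMS[start + 1]];
}
```

Period 0 yields (A, B), period 1 yields (B, C), and so on, matching the four periods described above.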
For another example, in another alternative example, the plurality of target sub-video streams included in the target video stream may be updated based on a user selection, and thus, the plurality of target sub-video streams may be determined based on the user selection. Based on this, step S121 may include the following substeps:
firstly, content selection parameters can be generated in response to preset operation of a user based on a currently displayed video frame; next, among the plurality of sub-video streams, a plurality of sub-video streams may be determined based on the content selection parameter as a plurality of target sub-video streams included in the target video stream.
That is, in the above example, depending on whether the user performs the preset operation and on the specific content of that operation, a content selection parameter may or may not be generated, and different content selection parameters may be generated, so that different pluralities of target sub-video streams can be determined among the existing plurality of sub-video streams.
For example, in a specific application example, in combination with the above example, if there are 5 sub-video streams, which are respectively the sub-video stream A, the sub-video stream B, the sub-video stream C, the sub-video stream D, and the sub-video stream E, then before the user performs a preset operation, the plurality of target sub-video streams included in the target video stream may be the sub-video stream A, the sub-video stream B, and the sub-video stream C; after the user performs the preset operation, the plurality of target sub-video streams included in the target video stream may be the sub-video stream C, the sub-video stream D, and the sub-video stream E.
The specific content of the preset operation is not limited, and may be selected according to the actual application requirements, for example, the specific content may include, but is not limited to, a direction selection operation, a zoom selection operation, a position selection operation, and the like.
It is understood that the direction selection operation may refer to an operation in which a user slides on the screen of the electronic device 10 with a finger or another device (e.g., a mouse), as shown in fig. 5.
The zoom selection operation may refer to an operation in which a user zooms or continuously clicks on the screen of the electronic device 10 with a finger or another device (e.g., a mouse); fig. 6 shows the user performing a zoom-in operation on the screen of the electronic device 10 with a finger.
The position selection operation may refer to an operation in which a user clicks a position display area (e.g., a game map) on the screen of the electronic device 10 with a finger or another device (e.g., a mouse). As shown in fig. 7, before the user performs the preset operation, the target video stream includes a plurality of target sub-video streams formed based on the video data at position A, and after the user performs the preset operation, it includes a plurality of target sub-video streams formed based on the video data at position B.
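A hedged sketch of the user-selection path described in the sub-steps above: a preset operation produces a content selection parameter, which in turn selects a window of sub-video streams. Only the direction selection operation is modeled, and the offset-based mapping and all names are assumptions for illustration:

```typescript
// Only the direction selection operation is modeled here; a real implementation
// would also cover zoom and position selection operations.
type PresetOperation = { kind: "direction"; deltaColumns: number } | null;

function selectTargetStreams(
  all: string[],     // every available sub-video stream, e.g. ["A","B","C","D","E"]
  current: string[], // the currently selected target sub-video streams
  op: PresetOperation,
): string[] {
  if (op === null) return current; // no preset operation -> no parameter, no change
  const firstIdx = all.indexOf(current[0]);
  // Content selection parameter: shift the selection window, clamped to bounds.
  const shift = Math.max(0, Math.min(all.length - current.length, firstIdx + op.deltaColumns));
  return all.slice(shift, shift + current.length);
}
```

With five streams and a current selection of (A, B, C), a rightward slide of two columns yields (C, D, E), matching the example above.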
It should be noted that, in step S130, a specific manner of performing the splicing display operation based on the plurality of target sub-video frames is not limited, and may be selected according to actual application requirements.
For example, in an alternative example, after determining the plurality of frames of target sub-video frames, a display operation may be performed directly on them, that is, the entire video data of each frame of target sub-video frame is presented to the user.
For another example, in another alternative example, in conjunction with fig. 8, step S130 may include step S131 and step S133, which are described in detail below.
Step S131, executing splicing operation based on the target sub-video frames of the plurality of target sub-video streams to obtain a target video frame.
In this embodiment, after determining multiple frames of the target sub-video frames based on step S120, a splicing operation may be performed based on the multiple frames of the target sub-video frames, so as to obtain a target video frame including each of the target sub-video frames.
Step S133, performing a display operation on at least a part of the video data in the target video frame.
In this embodiment, after obtaining the target video frame based on step S131, a display operation may be performed on at least part of the video data in the target video frame to show at least part of the video data in the target video frame to a user.
Optionally, the specific manner of executing the step S131 to perform the splicing operation on the multi-frame target sub-video frame is not limited, and may be selected according to the actual application requirement.
For example, in an alternative example, the multi-frame target sub-video frames may be spliced in any manner, such as splicing the target sub-video frame a to the right side, upper side, or lower side of the target sub-video frame b.
For another example, in another alternative example, in order to improve the viewing experience of the user, the multi-frame target sub-video frames may be spliced based on a certain rule. As shown in fig. 9, the target sub-video frame a may be spliced to the left side of the target sub-video frame b based on the continuity of the picture, so as to obtain the target video frame including the target sub-video frame a and the target sub-video frame b.
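The splicing operation of step S131 can be sketched as stitching two equal-height frames side by side, row by row. Frames are modeled as 2-D pixel arrays purely for illustration; a real implementation would operate on decoded image buffers:

```typescript
type Frame = number[][]; // rows of pixel values

// Splice the left frame onto the left side of the right frame, as in the
// picture-continuity example: each output row is the left row followed by
// the corresponding right row.
function spliceHorizontally(left: Frame, right: Frame): Frame {
  if (left.length !== right.length) {
    throw new Error("frames must have the same height to be spliced");
  }
  return left.map((row, i) => [...row, ...right[i]]);
}
```

Splicing target sub-video frame a to the left of target sub-video frame b is then `spliceHorizontally(a, b)`.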
Optionally, the specific manner of performing the step S133 to perform the display operation on at least a part of the video data in the target video frame is not limited, and may be selected according to the actual application requirement.
For example, in an alternative example, a display operation may be performed on any at least part of the video data in the target video frame, that is, that part of the video data is presented to the user.
For another example, in another alternative example, step S133 may include the following sub-steps to perform a display operation on at least part of the video data in the target video frame:
first, at least part of video data may be determined based on the obtained display parameters in the target video frame; second, a display operation may be performed based on the at least part of the video data.
The display parameter may include, but is not limited to, a picture scale parameter, and the like. That is, the determination of the video data may be made based on the picture scale parameter.
For example, the larger the picture scale parameter, the more video data may be determined; conversely, the smaller the picture scale parameter, the less video data may be determined.
In a specific application example, the picture scale parameter may include three levels, namely a first level, a second level, and a third level. Based on the target video frame shown in fig. 9, if the obtained picture scale parameter is the first level, one quarter of the video data of the target video frame may be displayed; if the obtained picture scale parameter is the second level, one half of the video data of the target video frame may be displayed; and if the obtained picture scale parameter is the third level, all the video data of the target video frame may be displayed.
That is, at least a portion of the video data presented to the user includes at least a portion of the video data of each frame of the target sub-video frame.
For example, in connection with the example shown in fig. 9, in this example, the target video frame includes a target sub-video frame a and a target sub-video frame b. Thus, if the obtained picture scale parameter is the first level, a display operation may be performed based on one fourth of the video data of the target sub-video frame a and one fourth of the video data of the target sub-video frame b, so as to show one fourth of the video data of each of the two target sub-video frames to the user (as shown in fig. 10).
For another example, if the obtained picture scale parameter is the second level, a display operation may be performed based on one half of the video data of the target sub-video frame a and one half of the video data of the target sub-video frame b, so as to present one half of the video data of each of the two target sub-video frames to the user (as shown in fig. 11).
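The quarter/half/full example above can be sketched as a mapping from the picture scale level to the displayed fraction of the spliced target video frame. The numeric level encoding is an assumption; the fractions follow the stated example:

```typescript
// Display parameter: picture scale level -> fraction of the target video frame shown.
const DISPLAYED_FRACTION: Record<1 | 2 | 3, number> = {
  1: 1 / 4, // first level: a quarter of the target video frame
  2: 1 / 2, // second level: half of the target video frame
  3: 1,     // third level: the whole target video frame
};

// How many columns of the spliced frame are shown at a given level.
function displayedColumns(totalColumns: number, level: 1 | 2 | 3): number {
  return Math.round(totalColumns * DISPLAYED_FRACTION[level]);
}
```

For an eight-column spliced frame, the first level shows two columns, the second level four, and the third level all eight.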
Further, considering that audio data is also generally played in the process of playing video data, after the splicing display operation is completed based on the above steps, the obtained target video frame and the audio data can be synchronized to achieve synchronous playing of video and audio.
With reference to fig. 12, an embodiment of the present application further provides a video playback control apparatus 100 applicable to the electronic device 10. The video playback control apparatus 100 may include an information obtaining module 110, a video frame determining module 120, and an operation executing module 130.
The information obtaining module 110 is configured to obtain target display time information based on display time information of a first frame of video frame displayed in a target video stream and play duration information of the target video stream. In this embodiment, the information obtaining module 110 may be configured to perform step S110 shown in fig. 2, and reference may be made to the foregoing description of step S110 regarding relevant contents of the information obtaining module 110.
The video frame determination module 120 is configured to determine a target sub-video frame of each target sub-video stream based on the target display time information, where the target video stream includes a plurality of target sub-video streams. In this embodiment, the video frame determination module 120 may be configured to perform step S120 shown in fig. 2, and reference may be made to the foregoing description of step S120 for relevant contents of the video frame determination module 120.
The operation executing module 130 is configured to execute a splicing display operation based on a target sub video frame of a plurality of target sub video streams. In this embodiment, the operation performing module 130 may be configured to perform step S130 shown in fig. 2, and reference may be made to the foregoing description of step S130 for relevant contents of the operation performing module 130.
Corresponding to the video playing control method, an embodiment of the present application further provides a computer-readable storage medium storing a computer program, where the computer program, when run, performs the steps of the video playing control method.
The steps performed when the computer program runs are not described in detail here; reference may be made to the foregoing explanation of the video playing control method.
In summary, according to the video playing control method and apparatus, the electronic device, and the storage medium provided by the present application, the target display time information is obtained based on the display time information of the first frame video frame displayed in the target video stream and the playing duration information of the target video stream, so that a plurality of target sub-video frames can be determined (synchronously) based on the target display time information, and a splicing display operation can then be performed on the determined plurality of target sub-video frames. In this way, the target sub-video frames are kept synchronized during the splicing display without acquiring the display time information of each frame of target sub-video frame. This avoids the prior-art problem that a webpage client has no application program interface for acquiring display time information, so that the display time information is difficult to acquire directly and video splicing display cannot be performed, and it also improves on the poor playing control effect of the prior art (for example, only a single video stream can be played) when a video is played on a webpage client. The method therefore has high practical value.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus and method embodiments described above are illustrative only. The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems which perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the portions thereof that substantially contribute to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, an electronic device, or a network device) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program codes.

It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A video playback control method, comprising:
obtaining target display time information based on display time information of a first frame of video frame displayed in a target video stream and playing time length information of the target video stream;
determining a target sub-video frame of each target sub-video stream based on the target display time information, wherein the target video stream comprises a plurality of target sub-video streams;
and executing splicing display operation based on the target sub video frames of the plurality of target sub video streams.
2. The video playback control method according to claim 1, wherein the step of obtaining target display time information based on the display time information of the first frame video frame displayed in the target video stream and the playback time length information of the target video stream includes:
acquiring analysis time information of a first frame of video frame displayed in a target video stream, and taking the analysis time information as the display time information of the first frame of video frame;
and obtaining target display time information based on the display time information and the playing time length information of the target video stream.
3. The video playback control method according to claim 2, wherein the step of obtaining the parsing time information of the first frame of video frame displayed in the target video stream includes:
acquiring an encapsulated file header of a first frame of video frame displayed in a target video stream, wherein the target video stream is in a streaming media format;
and analyzing the encapsulated file header to obtain timestamp information, and using the timestamp information as analysis time information of the first frame of video frame.
4. The video playback control method according to any one of claims 1 to 3, wherein the step of determining the target sub video frame of each target sub video stream based on the target display time information includes:
determining a plurality of target sub-video streams included in the target video stream;
and for each determined target sub-video stream, determining a target sub-video frame of the target sub-video stream based on the target display time information.
5. The video playback control method according to claim 4, wherein the step of determining a plurality of target sub-video streams included in the target video stream includes:
generating a content selection parameter in response to preset operation performed by a user based on a currently displayed video frame, wherein the preset operation comprises direction selection operation, zooming selection operation and position selection operation;
among the plurality of sub-video streams, a plurality of sub-video streams are determined based on the content selection parameter as a plurality of target sub-video streams included in the target video stream.
6. The video playback control method according to any one of claims 1 to 3, wherein the step of performing a splicing display operation based on the target sub video frames of the plurality of target sub video streams includes:
performing splicing operation based on target sub-video frames of the plurality of target sub-video streams to obtain target video frames, wherein the target video frames comprise each target sub-video frame;
and performing a display operation on at least part of the video data in the target video frame.
7. The video playback control method according to claim 6, wherein the step of performing a display operation on at least a portion of the video data in the target video frame includes:
determining at least part of video data based on the obtained display parameters in the target video frame;
performing a display operation based on the at least part of the video data.
8. A video playback control apparatus, comprising:
the information acquisition module is used for acquiring target display time information based on the display time information of a first frame of video frame displayed in the target video stream and the playing time length information of the target video stream;
a video frame determination module, configured to determine a target sub-video frame of each target sub-video stream based on the target display time information, where the target video stream includes a plurality of target sub-video streams;
and the operation execution module is used for executing splicing display operation based on the target sub-video frames of the plurality of target sub-video streams.
9. An electronic device, comprising:
a memory for storing a computer program;
a processor coupled to the memory for executing the computer program to implement the video playback control method of any of claims 1-7.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed, implements the video playback control method of any one of claims 1 to 7.
CN202010053822.1A 2020-01-17 2020-01-17 Video playing control method and device, electronic equipment and storage medium Active CN111225263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010053822.1A CN111225263B (en) 2020-01-17 2020-01-17 Video playing control method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111225263A true CN111225263A (en) 2020-06-02
CN111225263B CN111225263B (en) 2022-06-14

Family

ID=70829584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010053822.1A Active CN111225263B (en) 2020-01-17 2020-01-17 Video playing control method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111225263B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114374858A (en) * 2022-01-14 2022-04-19 京东方科技集团股份有限公司 Multicast playing end system and video splicing method and device thereof

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1933594A (en) * 2005-09-14 2007-03-21 王世刚 Multichannel audio-video frequency data network transmitting and synchronous playing method
CN101014123A (en) * 2007-02-05 2007-08-08 北京大学 Method and system for rebuilding free viewpoint of multi-view video streaming
WO2011080907A1 (en) * 2009-12-28 2011-07-07 パナソニック株式会社 Display apparatus and method, recording medium, transmission apparatus and method, and playback apparatus and method
CN105872569A (en) * 2015-11-27 2016-08-17 乐视云计算有限公司 Video playing method and system, and devices
CN105915937A (en) * 2016-05-10 2016-08-31 上海乐相科技有限公司 Panoramic video playing method and device
CN106412622A (en) * 2016-11-14 2017-02-15 百度在线网络技术(北京)有限公司 Method and apparatus for displaying barrage information during video content playing process
US20170181113A1 (en) * 2015-12-16 2017-06-22 Sonos, Inc. Synchronization of Content Between Networked Devices
CN107509100A (en) * 2017-09-15 2017-12-22 深圳国微技术有限公司 Audio and video synchronization method, system, computer installation and computer-readable recording medium
CN107645669A (en) * 2017-10-18 2018-01-30 青岛桐轩佳航科技有限公司 Multi-screen display control method, apparatus and system
CN108737874A (en) * 2018-06-05 2018-11-02 武汉斗鱼网络科技有限公司 A kind of video broadcasting method and electronic equipment
WO2019076309A1 (en) * 2017-10-17 2019-04-25 腾讯科技(深圳)有限公司 Animation synchronous playback method and device, storage medium and electronic device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YU GAO: "Optimizing frame structure with real-time computation for interactive multiview video streaming", 《2012 3DTV-CONFERENCE: THE TRUE VISION - CAPTURE, TRANSMISSION AND DISPLAY OF 3D VIDEO (3DTV-CON)》 *
彭梦琳: "全景视频拼接及播放技术研究与实现", 《中国优秀硕士学位论文全文库》 *


Also Published As

Publication number Publication date
CN111225263B (en) 2022-06-14

Similar Documents

Publication Publication Date Title
US11546667B2 (en) Synchronizing video content with extrinsic data
US11830161B2 (en) Dynamically cropping digital content for display in any aspect ratio
CN104994425B (en) A kind of video identifier method and apparatus
CN112188225B (en) Bullet screen issuing method for live broadcast playback and live broadcast video bullet screen playback method
CA2943975C (en) Method for associating media files with additional content
CN109714622B (en) Video data processing method and device and electronic equipment
CN113286197A (en) Information display method and device, electronic equipment and storage medium
US10531153B2 (en) Cognitive image obstruction
US10021433B1 (en) Video-production system with social-media features
CN112911343B (en) Multi-channel video playing method and device, electronic equipment and storage medium
EP3142357A1 (en) Operation instruction method and device for remote controller of smart television
CN110996157A (en) Video playing method and device, electronic equipment and machine-readable storage medium
CN111225263B (en) Video playing control method and device, electronic equipment and storage medium
CN106878807B (en) Video switching method and device
CN109714626B (en) Information interaction method and device, electronic equipment and computer readable storage medium
US9584859B2 (en) Testing effectiveness of TV commercials to account for second screen distractions
CN109194975B (en) Audio and video live broadcast stream following method and device
CN111667313A (en) Advertisement display method and device, client device and storage medium
CN108449646B (en) Method, equipment and medium for establishing social relationship through barrage
CN114363710A (en) Live broadcast watching method and device based on time shifting acceleration
CN111107293B (en) 360-degree video recording method and device, electronic equipment and storage medium
CN111147884A (en) Data processing method, device, system, user side and storage medium
CN112351314B (en) Multimedia information playing method, server, terminal, system and storage medium
CN110971959B (en) Display method, storage medium, electronic device and system for preventing abrupt change of picture
CN116527989B (en) Video playing device interface display method, system, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant