CN111741314A - Video playing method and display device

Video playing method and display device

Info

Publication number
CN111741314A
CN111741314A (application CN202010559501.9A)
Authority
CN
China
Prior art keywords: stream, sparse, target, video, complete
Prior art date
Legal status
Pending
Application number
CN202010559501.9A
Other languages
Chinese (zh)
Inventor
刘相双 (Liu Xiangshuang)
Current Assignee
Qingdao Hisense Media Network Technology Co Ltd
Juhaokan Technology Co Ltd
Original Assignee
Qingdao Hisense Media Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Hisense Media Network Technology Co Ltd
Priority to CN202010559501.9A
Publication of CN111741314A
Current legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/21805 Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/366 Image reproducers using viewer tracking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/258 Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N 21/25866 Management of end-user data
    • H04N 21/25891 Management of end-user data being end-user preferences
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N 21/44213 Monitoring of end-user related data
    • H04N 21/44218 Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Graphics (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The embodiment of the application discloses a video playing method and a display device. In the embodiment of the application, when a change in the field angle is detected, the display device acquires a target sparse stream, which is the sparse stream located after and closest to the current playing progress, together with a plurality of complete streams located after the target sparse stream. Each sparse stream contains an I frame generated from one video frame of the corresponding complete stream. Therefore, if the current playing progress has reached some video frame within a complete stream, the display device can decode the I frame in the target sparse stream and play it together with the video frames remaining after that I frame. Before the next complete stream starts playing, the user thus does not have to watch only the base video stream while waiting for the next high-definition picture, which reduces the time the user spends watching a low-resolution video picture and improves the user experience.

Description

Video playing method and display device
Technical Field
The present application relates to the field of streaming media technologies, and in particular, to a video playing method and a display device.
Background
Currently, in the AR (Augmented Reality) and VR (Virtual Reality) fields, a server may push a wide-angle video stream, for example a 360-degree panoramic video stream, to a display device. With the spread of high-resolution streaming media, users demand ever higher resolutions for wide-angle video streams. However, because of network bandwidth limitations and traffic costs, directly pushing a high-resolution wide-angle high-definition video stream to the display device is too expensive for the server. The server therefore pushes a lower-resolution base video stream to the display device and, at the same time, divides the high-resolution high-definition video stream into a plurality of slices, each of which is encoded into one sub-stream. It then selects the sub-stream combination corresponding to the user's current field angle and pushes it to the display device, which plays the received sub-stream combination and the base video stream in a combined manner.
Generally, a sub-stream consists of one I frame followed by a plurality of P frames; the I frame is the first frame of the sub-stream, and the display device can play the following P frames only after decoding that I frame. When the display device detects that the user's field angle has changed, it acquires the sub-stream combination corresponding to the changed field angle. However, downloading the sub-stream combination takes time, so by the time it arrives, the playing time of some video frame in the middle of the first sub-stream may already have been reached; the decoding time of the first sub-stream's I frame has then already passed. In this case, the display device can only look for the I frame of the next sub-stream to decode. As a result, the display device cannot play the first sub-stream at all, and the user has to wait until the playing time of the second sub-stream arrives before seeing the high-definition picture produced by combining the base video stream with the second sub-stream of the combination.
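To make this failure mode concrete, the following is a minimal sketch in Python (all names and data structures are hypothetical illustrations, not part of the patent): a sub-stream is playable only if its I frame, i.e. its first frame, is still ahead of the current playing progress when the download completes.

    def first_playable_substream(substreams, current_play_time):
        # A sub-stream can only be decoded from its I frame (its first frame),
        # so any sub-stream whose start time has already passed is unplayable.
        for stream in substreams:
            if stream["start_time"] >= current_play_time:
                return stream
        return None

    # The field angle changed and the download finished at t = 3.2 s, but the
    # first sub-stream of the new combination started at t = 3.0 s: it is lost,
    # and the user must wait for the second sub-stream at t = 5.0 s.
    substreams = [{"name": "sub1", "start_time": 3.0},
                  {"name": "sub2", "start_time": 5.0}]
    print(first_playable_substream(substreams, 3.2)["name"])  # -> "sub2"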
Disclosure of Invention
The application provides a video playing method and a display device, which can improve the user's experience of watching video when the field angle changes. The technical scheme is as follows:
in one aspect, a display device is provided, the display device comprising a controller and a display;
the controller is configured to, in response to a change in the field angle, acquire a sub-stream combination corresponding to the changed field angle according to the current playing progress, wherein the sub-stream combination comprises a target sparse stream and a plurality of complete streams, each complete stream corresponds to at least one sparse stream, each sparse stream comprises an I frame generated from one video frame in the corresponding complete stream, and the target sparse stream is the sparse stream that is located after the current playing progress and is closest to the current playing progress;
the controller is further configured to control the display to play video according to the target sparse stream and the plurality of complete streams.
In another aspect, a video playing method is provided, where the video playing method includes:
in response to a change in the field angle, acquiring a sub-stream combination corresponding to the changed field angle according to the current playing progress, wherein the sub-stream combination comprises a target sparse stream and a plurality of complete streams, each complete stream corresponds to at least one sparse stream, each sparse stream comprises an I frame generated from one video frame in the corresponding complete stream, and the target sparse stream is the sparse stream that is located after the current playing progress and is closest to the current playing progress;
and playing video according to the target sparse stream and the plurality of complete streams.
In another aspect, a computer-readable storage medium is provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the video playing method described above.
In another aspect, a computer program product is provided, comprising instructions which, when run on a computer, cause the computer to perform the steps of the video playing method described above.
The technical scheme provided by the application can at least bring the following beneficial effects:
in the embodiment of the application, when a change in the field angle is detected, the display device acquires a target sparse stream, which is the sparse stream located after and closest to the current playing progress, together with a plurality of complete streams located after the target sparse stream. Each sparse stream comprises an I frame generated from one video frame in the corresponding complete stream. Therefore, if the current playing progress has reached some video frame within a complete stream, the display device can decode the I frame in the target sparse stream and play it together with the video frames remaining after that I frame. Before the next complete stream starts playing, the user thus does not have to watch only the base video stream while waiting for the next high-definition picture, which reduces the time the user spends watching a low-resolution video picture and improves the user experience.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram illustrating an operational scenario between a display device and a control apparatus according to an exemplary embodiment;
fig. 2 is a block diagram showing a hardware configuration of a display device according to an exemplary embodiment;
fig. 3 is a block diagram illustrating a configuration of a control apparatus according to an exemplary embodiment;
FIG. 4 is a schematic diagram illustrating a functional configuration of a display device according to an exemplary embodiment;
FIG. 5 is a block diagram illustrating a configuration of a software system in a display device according to an exemplary embodiment;
FIG. 6 is a block diagram illustrating a configuration of an application in a display device according to an exemplary embodiment;
FIG. 7 is a flow diagram illustrating a method of video playback in accordance with an exemplary embodiment;
fig. 8 is a flow chart illustrating another video playback method in accordance with an exemplary embodiment.
Detailed Description
To make the objects, technical solutions and advantages of the exemplary embodiments of the present application clearer, the technical solutions in the exemplary embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described exemplary embodiments are only some, not all, of the embodiments of the present application.
All other embodiments obtained by a person skilled in the art from the exemplary embodiments shown in this application without inventive effort shall fall within the scope of protection of this application. Moreover, while the disclosure herein is presented in terms of one or more exemplary examples, it should be understood that each aspect of the disclosure can also be utilized independently of the other aspects.
It should be understood that the terms "first," "second," "third," and the like in the description, the claims and the drawings of the present application are used to distinguish between similar elements and are not necessarily intended to describe a particular order or sequence. It is to be understood that data so identified are interchangeable under appropriate circumstances, so that the embodiments of the application described herein can, for example, be implemented in sequences other than those illustrated or described herein.
Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
The term "module," as used herein, refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
The term "remote control" as used in this application refers to a component of an electronic device (such as the display device disclosed in this application) that is typically wirelessly controllable over a relatively short range of distances. The touch screen remote control device is generally connected with an electronic device by using infrared and/or Radio Frequency (RF) signals and/or bluetooth, and may also include functional modules such as WiFi, wireless USB (universal serial Bus), bluetooth, and a motion sensor.
The term "gesture" as used in this application refers to a user's behavior through a change in hand shape or an action such as hand motion to convey a desired idea, action, purpose, or result.
Fig. 1 is a schematic diagram illustrating an operational scenario between a display device and a control apparatus according to an exemplary embodiment. As shown in fig. 1, a user may operate the display device 200 through the mobile terminal 300 and the control apparatus 100.
The control device 100 may be a remote controller, which communicates with the display apparatus 200 through infrared protocol communication, Bluetooth protocol communication or other short-distance communication methods, and controls the display apparatus 200 wirelessly or in another wired manner. The user may input user commands through keys on the remote controller, voice input, control panel input, etc. to control the display apparatus 200. For example, the user can input corresponding control commands through the volume up/down keys, channel control keys, up/down/left/right movement keys, voice input key, menu key, power on/off key, etc. on the remote controller to control the functions of the display device 200.
In some embodiments, mobile terminals, tablets, computers, laptops, and other smart devices may also be used to control the display device 200. For example, the display device 200 is controlled using an application program running on the smart device. The application, through configuration, may provide the user with various controls in an intuitive User Interface (UI) on a screen associated with the smart device.
For example, a software application may be installed on both the mobile terminal 300 and the display device 200, so that connection and communication between them can be realized through a network communication protocol, achieving one-to-one control operation and data communication. For instance, a control instruction protocol can be established between the mobile terminal 300 and the display device 200, the remote control keyboard can be synchronized to the mobile terminal 300, and the display device 200 can be controlled through the user interface on the mobile terminal 300. The audio and video content displayed on the mobile terminal 300 can also be transmitted to the display device 200 to realize a synchronous display function.
As also shown in fig. 1, the display apparatus 200 performs data communication with the server 400 through various communication means. The display device 200 may be communicatively connected through a Local Area Network (LAN), a Wireless Local Area Network (WLAN), or other networks. The server 400 may provide various content and interactions to the display apparatus 200. Illustratively, the display device 200 receives software program updates, or accesses a remotely stored digital media library, by sending and receiving information and exchanging Electronic Program Guide (EPG) data. The server 400 may be one group or multiple groups of servers, and may be of one or more types. The server 400 also provides other web service content such as video on demand and advertisement services.
In some possible embodiments, the display device 200 also communicates data with VR devices such as VR glasses via a variety of communication means. In this way, when the posture of the user changes, the VR device may transmit the posture change parameter of the user to the display device 200, and the display device 200 may determine whether the angle of view is changed according to the posture change parameter.
The display device 200 may be a liquid crystal display, an OLED (Organic Light-Emitting Diode) display, or a projection display device. The particular display device type, size, resolution, etc. are not limited; those skilled in the art will appreciate that the performance and configuration of the display device 200 may be changed as desired.
The display apparatus 200 may additionally provide a smart network television function that offers computer support in addition to the broadcast receiving television function. Illustratively, the display device 200 may provide functions such as network television, smart television, Internet Protocol Television (IPTV), and the like.
Next, a description is given of a display device provided in an embodiment of the present application.
Referring to fig. 2, fig. 2 is a block diagram illustrating a hardware configuration of a display apparatus according to an exemplary embodiment. The display device 200 includes a controller 210, a communication interface 230, a detector 240, an input/output interface 250, a video processor 260-1, an audio processor 260-2, a display 280, an audio output 270, a memory 290, a power supply, and an infrared receiver.
The display 280 is used for receiving the image signal from the video processor 260-1 and displaying video content, images, and components of the menu manipulation interface. The display 280 includes a display screen assembly for presenting pictures and a driving assembly for driving the display of images. The displayed video content may be broadcast television content, i.e., various broadcast signals received through a wired or wireless communication protocol, or various image content sent by a network server and received through a network communication protocol.
Meanwhile, the display 280 may also display a user manipulation UI interface that is generated in the display apparatus 200 and used to control the display apparatus 200. In addition, depending on its type, the display 280 may further include a driving component for driving the display. Alternatively, if the display 280 is a projection display, it may also comprise a projection device and a projection screen.
The communication interface 230 is a component for communicating with an external device or an external server according to various types of communication protocols. For example, the communication interface 230 may be a WiFi chip 231, a Bluetooth communication protocol chip 232, a wired Ethernet communication protocol chip 233, or another network communication protocol chip or near field communication protocol chip, as well as an infrared receiver (not shown).
The display apparatus 200 may establish a connection for transmission and reception of control signals and data signals with an external control apparatus or a content providing apparatus through the communication interface 230. In addition, the infrared receiver is an interface for receiving an infrared control signal of the control device 100 (e.g., an infrared remote controller, etc.).
The detector 240 may be used to collect signals of the external environment or interaction with the outside. The detector 240 includes a light receiver 242, and the light receiver 242 is a sensor for collecting the intensity of ambient light, and by collecting the ambient light, parameter changes and the like can be adaptively displayed.
The detector 240 further includes an image collector 241, such as a camera, etc., which may be used to collect external environment scenes, collect attributes of the user or interact gestures with the user, adaptively change display parameters, and also recognize gestures of the user, so as to implement the function of interaction with the user.
In some embodiments, the detector 240 may further include a temperature sensor, and the display apparatus 200 may adaptively adjust a display color temperature of the image by sensing an ambient temperature through the temperature sensor. For example, when the ambient temperature is higher, the display apparatus 200 may be adjusted to display a cool tone, or when the ambient temperature is lower, the display apparatus 200 may be adjusted to display a warm tone.
In other embodiments, the detector 240 may further comprise a sound collector, such as a microphone, which may be used to receive a user's voice, a voice signal including a control instruction from the user to control the display device 200, or collect an ambient sound for identifying the type of ambient scene, and the display device 200 may be adapted to the ambient noise.
The input/output interface 250 is used for data transmission between the display device 200 and other external devices under the control of the controller 210. Such as receiving video signals, audio signals, command instructions, etc. from an external device.
The input/output interface 250 may include, but is not limited to: a High Definition Multimedia Interface (HDMI) 251, an analog or digital high-definition component input interface 253, a composite video input interface 252, a USB input interface 254, an RGB (Red Green Blue color mode) port (not shown in the figure), or any one or more of these interfaces.
In some exemplary embodiments, the input/output interface 250 may also be a composite input/output interface formed by the above-mentioned plurality of interfaces.
The video processor 260-1 is configured to receive an external video signal, and perform video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, image synthesis, and the like according to a standard codec protocol of the input signal, so as to obtain a signal that can be directly displayed or played on the display device 200.
For example, the video processor 260-1 may include a demultiplexing module, a video decoding module, an image synthesizing module, a frame rate conversion module, a display formatting module, and the like.
The demultiplexing module is used for demultiplexing the input audio/video data stream; for example, an input MPEG-2 stream is demultiplexed into a video signal and an audio signal.
The video decoding module is used for processing the demultiplexed video signal, including decoding, scaling, and the like.
The image synthesis module, such as an image synthesizer, is configured to superimpose and mix the graphics generated by a graphics generator, according to a GUI (Graphical User Interface) signal input by the user or generated by the system, with the scaled video image, so as to generate an image signal for display.
The frame rate conversion module is configured to convert the frame rate of the input video, for example converting a 60 Hz frame rate into a 120 Hz or 240 Hz frame rate, commonly by means of frame interpolation.
The display formatting module is used for converting the received frame-rate-converted video output signal into a signal conforming to the display format, such as an output RGB data signal.
The audio processor 260-2 is configured to receive an external audio signal, decompress and decode the received audio signal according to a standard codec protocol of the input signal, and perform noise reduction, digital-to-analog conversion, amplification processing, and the like to obtain an audio signal that can be played in the speaker.
In other exemplary embodiments, video processor 260-1 may include one or more chips. The audio processor 260-2 may also include one or more chips.
In other exemplary embodiments, the video processor 260-1 and the audio processor 260-2 may be separate chips or may be integrated with the controller 210 in one or more chips.
The audio output 270 receives the sound signal output by the audio processor 260-2 under the control of the controller 210. It includes the speaker 272 carried by the display device 200 itself, as well as an external sound output terminal 274 that can output to a sound-generating device of external equipment, such as an external sound interface or an earphone interface.
The power supply uses the power input from the external power source to provide power supply support for the display apparatus 200 under the control of the controller 210. The power supply may be a built-in power supply circuit installed inside the display device 200, or may be a power supply installed outside the display device 200, and provides a power supply interface for an external power supply in the display device 200.
A user input interface for receiving a user input signal and then transmitting the received user input signal to the controller 210. The user input signal may be a remote controller signal received through an infrared receiver, or may be various user control signals received through a network communication module.
Illustratively, the user inputs a user input signal through the control device 100 or the mobile terminal 300, and the user input interface responds to the user input signal through the controller 210 by the display device 200 according to the user input signal.
In some embodiments, a user may display a Graphical User Interface (GUI) on the display 280 to input a user command, which is received by the user input interface through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface receives the input user command by recognizing the sound or gesture through the sensor.
The controller 210 controls the operation of the display apparatus 200 and responds to the user's operation through various software control programs stored in the memory 290.
As shown in fig. 2, the controller 210 includes a RAM (Random Access Memory) 213, a ROM (Read-Only Memory) 214, a graphics processor 216, a CPU processor 212, and a communication interface 218, such as: a first interface 218-1 through an nth interface 218-n, and a communication bus. The RAM213 and the ROM214, the graphic processor 216, the CPU processor 212, and the communication interface 218 are connected via a bus.
The RAM 213 stores instructions for various system boots. When the display device 200 receives a power-on signal and starts up, the CPU (Central Processing Unit) 212 executes the system boot instructions in the RAM 213 and copies the operating system stored in the memory 290 into the RAM 213 to begin running the operating system. After the operating system has started, the CPU 212 copies the various application programs in the memory 290 into the RAM 213, and then starts and runs these applications.
A graphics processor 216 for generating various graphics objects, such as: icons, operation menus, user input instruction display graphics, and the like. The graphic processor 216 includes an operator for performing an operation by receiving various interactive instructions input by a user, and displays various objects according to display attributes. The graphics processor 216 also includes a renderer that generates various objects based on the operator and displays the rendered results on the display 280.
The CPU processor 212 is configured to execute the operating system and application program instructions stored in the memory 290, and execute various application programs, data and contents according to various received interactive instructions of external input, so as to finally display and play various audio-video contents.
In some exemplary embodiments, the CPU processor 212 may include a plurality of processors, which may comprise one main processor and one or more sub-processors. The main processor performs some operations of the display apparatus 200 in the pre-power-up mode and/or displays the screen in the normal mode. The sub-processor(s) perform operations in standby mode and the like.
The controller 210 may control the overall operation of the display apparatus 200. For example: in response to receiving a user command for selecting a UI object to be displayed on the display 280, the controller 210 may perform an operation related to the object selected by the user command.
Wherein the object may be any one of the selectable objects, such as a hyperlink or an icon. Operations related to the selected object may include, for example: displaying an operation connected to a hyperlink page, document, image, or the like, or performing an operation of a program corresponding to the icon. The user command for selecting the UI object may be a command input through various input means (e.g., a mouse, a keyboard, a touch pad, etc.) connected to the display apparatus 200 or a voice command corresponding to a voice spoken by the user.
The memory 290 includes a memory for storing various software modules for driving the display device 200. Such as: various software modules stored in memory 290, including: the system comprises a basic module, a detection module, a communication module, a display control module, a browser module, various service modules and the like.
The basic module is used for signal communication among the hardware components in the display device 200 and for sending processing and control signals from the bottom-layer software to the upper-layer modules. The detection module is used for collecting various information from sensors or user input interfaces and performing digital-to-analog conversion as well as analysis management.
For example, the voice recognition module comprises a voice analysis module and a voice instruction database module. The display control module is used for controlling the display 280 to display image content, and can be used for playing multimedia image content, UI interfaces and other information. The communication module is used for control and data communication with external devices. The browser module is used for performing data communication with browsing servers. The service module is used for providing various services, and includes various application programs.
Meanwhile, the memory 290 is also used to store visual effect maps and the like for receiving external data and user data, images of respective items in various user interfaces, and a focus object.
Referring to fig. 3, fig. 3 is a block diagram illustrating a configuration of a control apparatus according to an exemplary embodiment. The control device 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory 190, and a power supply 180.
The control device 100 is configured to control the display device 200: it can receive the user's input operation instructions and convert them into instructions that the display device 200 can recognize and respond to, serving as an intermediary for interaction between the user and the display device 200. For example, when the user operates the channel up/down keys on the control device 100, the display device 200 responds to the channel up/down operations.
In some embodiments, the control device 100 may be a smart device. Such as: the control apparatus 100 may install various applications that control the display apparatus 200 according to user demands.
In some embodiments, as shown in fig. 1, the mobile terminal 300 or another intelligent electronic device may perform a function similar to that of the control device 100 after installing an application that manipulates the display device 200. For example, by installing such an application, the user can use various function keys or virtual buttons of the graphical user interface available on the mobile terminal 300 or other intelligent electronic device to implement the functions of the physical keys of the control device 100.
The controller 110 includes a processor 112 and RAM113 and ROM114, a communication interface 118, and a communication bus. The controller 110 is used to control the operation of the control device 100, as well as the internal components for communication and coordination and external and internal data processing functions.
The communication interface 130 enables communication of control signals and data signals with the display apparatus 200 under the control of the controller 110. Such as: the received user input signal is transmitted to the display apparatus 200. The communication interface 130 may include at least one of a WiFi chip, a bluetooth module, an NFC module, and other near field communication modules.
A user input/output interface 140, wherein the input interface includes at least one of a microphone 141, a touch pad 142, a sensor 143, keys 144, and other input interfaces. Such as: the user can realize a user instruction input function through actions such as voice, touch, gesture, pressing, and the like, and the input interface converts the received analog signal into a digital signal and converts the digital signal into a corresponding instruction signal, and sends the instruction signal to the display device 200.
The output interface includes an interface that transmits the received user instruction to the display apparatus 200. In some embodiments, the interface may be an infrared interface or a radio frequency interface. For example, with an infrared signal interface, the user input instruction needs to be converted into an infrared control signal according to the infrared control protocol and sent to the display device 200 through the infrared sending module. As another example, with a radio frequency signal interface, the user input instruction needs to be converted into a digital signal, modulated according to the radio frequency control signal modulation protocol, and then sent to the display device 200 through the radio frequency sending terminal.
In some embodiments, the control device 100 includes at least one of a communication interface 130 and an output interface. The control device 100 is provided with a communication interface 130, such as: a WiFi module, a bluetooth module, an NFC (Near Field Communication) module, etc. which may encode a user input command through a WiFi protocol, a bluetooth protocol, or an NFC protocol and send the user input command to the display device 200.
A memory 190 for storing various operation programs, data, and applications for driving and controlling the display device 200 under the control of the controller 110. The memory 190 may store various control signal commands input by a user.
The power supply 180, which is used to provide operational power support for the various components of the control device 100 under the control of the controller 110, may include a battery and associated control circuitry.
Referring to fig. 4, fig. 4 is a schematic diagram illustrating a functional configuration of a display device according to an exemplary embodiment.
As shown in fig. 4, the memory 290 is used to store an operating system, applications, contents, user data, and the like, and performs system operations for driving the display device 200 and various operations in response to a user under the control of the controller 210. The memory 290 may include volatile and/or nonvolatile memory.
The memory 290 is specifically configured to store an operating program for driving the controller 210 in the display device 200, and to store various applications installed in the display device 200, various applications downloaded by a user from an external device, various graphical user interfaces related to the applications, various objects related to the graphical user interfaces, user data information, and internal data of various supported applications. The memory 290 is used for storing System software such as an OS (Operating System) kernel, middleware, and applications, and storing input video data and audio data, and other user data.
The memory 290 is specifically used for storing drivers and related data such as the audio/video processors 260-1 and 260-2, the display 280, the communication interface 230, the input/output interface of the detector 240, and the like.
In some embodiments, memory 290 may store software and/or programs, software programs for representing an Operating System (OS) including, for example: a kernel, middleware, an Application Programming Interface (API), and/or an Application program. For example, the kernel may control or manage system resources, or functions implemented by other programs (e.g., the middleware, APIs, or applications), and the kernel may provide interfaces to allow the middleware and APIs, or applications, to access the controller to implement controlling or managing system resources.
The memory 290, for example, includes a broadcast receiving module 2901, a channel control module 2902, a volume control module 2903, an image control module 2904, a display control module 2905, an audio control module 2906, an external instruction recognition module 2907, a communication control module 2908, a light receiving module 2909, a power control module 2910, an operating system 2911, and other applications 2912, a browser module, and the like. The controller 210 performs functions such as: a broadcast television signal reception demodulation function, a television channel selection control function, a volume selection control function, an image control function, a display control function, an audio control function, an external instruction recognition function, a communication control function, an optical signal reception function, an electric power control function, a software control platform supporting various functions, a browser function, and the like.
Referring to fig. 5, fig. 5 is a block diagram illustrating a configuration of a software system in a display device according to an exemplary embodiment.
As shown in FIG. 5, the operating system 2911 includes executing operating software for handling various basic system services and performing hardware-related tasks, and acts as an intermediary for data processing between application programs and hardware components. In some embodiments, part of the operating system kernel may contain a series of software used to manage the display device hardware resources and provide services to other programs or software code.
In other embodiments, portions of the operating system kernel may include one or more device drivers, which may be a set of software code in the operating system that assists in operating or controlling the devices or hardware associated with the display device. The drivers may contain code that operates the video, audio, and/or other multimedia components. Examples include a display screen, a camera, Flash, WiFi, and audio drivers.
The accessibility module 2911-1 is configured to modify or access the application program to achieve accessibility and operability of the application program for displaying content.
A communication module 2911-2 for connection to other peripherals via associated communication interfaces and a communication network.
The user interface module 2911-3 is configured to provide an object for displaying a user interface, so that each application program can access the object, and user operability can be achieved.
Control applications 2911-4 for controllable process management, including runtime applications and the like.
The event transmission system 2914 may be implemented within the operating system 2911 or within the application program 2912; in some embodiments, it is implemented partly in the operating system 2911 and partly in the application program 2912. It is configured to listen for various user input events and, according to the recognition of various types of events or sub-events, invoke handlers that perform one or more sets of predefined operations.
The event monitoring module 2914-1 is configured to monitor an event or a sub-event input by the user input interface.
The event identification module 2914-2 is configured to hold the definitions of the various types of events for the various user input interfaces, identify incoming events or sub-events, and pass them to the process that executes the corresponding one or more sets of handlers.
An event or sub-event refers to an input detected by one or more sensors in the display device 200 or an input from an external control device (e.g., the control device 100), such as various sub-events of voice input, gesture input through gesture recognition, or sub-events of remote control key commands from the control device. Illustratively, the sub-events from the remote control take various forms, including but not limited to one or a combination of pressing the up/down/left/right keys, the OK key, a key long-press, and so on, as well as non-physical-key operations such as move, hold, and release.
The interface layout manager 2913, directly or indirectly receiving the input events or sub-events from the event transmission system 2914, monitors the input events or sub-events, and updates the layout of the user interface, including but not limited to the position of each control or sub-control in the interface, and the size, position, and level of the container, and other various execution operations related to the layout of the interface.
Referring to fig. 6, fig. 6 is a block diagram illustrating a configuration of an application program in a display device according to an exemplary embodiment.
As shown in fig. 6, the application layer 2912 contains various applications that may be executed at the display device 200. Applications may include, but are not limited to, one or more applications such as: live television applications, video-on-demand applications, media center applications, application program centers, gaming applications, and the like.
The live television application can provide live television through different signal sources. For example, a live television application may provide television signals using input from cable television, radio broadcasts, satellite services, or other types of live television services. And, the live television application may display a video of the live television signal on the display device 200.
The video-on-demand application can provide videos from different storage sources. Unlike live television applications, video on demand plays video from a storage source; for example, the video may come from a cloud storage server or from a local hard disk that stores video programs.
The media center application can provide various multimedia content playing applications. For example, a media center may provide services other than live television or video on demand, allowing users to access various images or audio through the media center application.
The application program center can provide and store various applications. The application may be a game, an application, or some other application associated with a computer system or other device that may be run in the smart television. The application center may obtain these applications from different sources, store them in local storage, and then be operable on the display device 200.
The following explains the video playing method provided in the embodiments of the present application in detail.
In the embodiment of the application, the display device acquires the base video stream and the high-definition video stream of a wide-angle video from the server, where the resolution of the base video stream is lower than that of the high-definition video stream. The display device can pull the complete base video stream of the whole video directly from the server. For the high-definition video stream, however, out of consideration for network bandwidth, cost and the like, the server cuts it into a plurality of slices and encodes each slice into a sub-stream; the display device pulls sub-streams from the server and plays them combined with the base video stream. The video playing method provided by the embodiment of the application mainly describes how the display device pulls and plays these sub-streams.
Fig. 7 is a flowchart illustrating a video playing method according to an exemplary embodiment, which is applied to the display device. Referring to fig. 7, the method includes the following steps:
step 701: and responding to the change of the field angle, and acquiring the sub-stream combination corresponding to the changed field angle according to the current playing progress.
The sub-stream combination comprises a target sparse stream and a plurality of complete streams positioned behind the target sparse stream, each complete stream corresponds to at least one sparse stream, each sparse stream comprises an I frame generated according to one video frame in the corresponding complete stream, and the target sparse stream is a sparse stream which is positioned behind the current playing progress and is closest to the current playing progress.
In this embodiment, the server divides the high definition video stream into a plurality of slices, and encodes each slice to obtain a corresponding sub-stream. Each sub-stream comprises a plurality of video frames, and the first video frame of each sub-stream is an I-frame, and the following video frames are P-frames. These directly encoded substreams contain the complete slice data and are therefore referred to as complete streams. After obtaining the complete streams, the server generates at least one sparse stream corresponding to each complete stream.
In one possible implementation, the server generates n sparse streams from n video frames of a plurality of video frames included in one complete stream, wherein each sparse stream includes I frames generated from one of the n video frames.
For example, assuming that a full stream includes 30 video frames, the server generates 5 sparse streams from the 5 th, 10 th, 15 th, 20 th and 25 th video frames of the full stream. The 1 st sparse stream includes an I frame generated according to the 5 th frame of video data, the 2 nd sparse stream includes an I frame generated according to the 10 th frame of video data, and so on.
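A minimal sketch of this first implementation, assuming the frames are already decoded and that some intra-frame encoder is available (encode_as_i_frame is a placeholder, not an API from the patent):

    def encode_as_i_frame(frame):
        # Placeholder for a real intra-frame encode (e.g. via an encoder library).
        return frame

    def make_sparse_streams(frames, positions):
        # positions are 1-based indices of the chosen frames in the complete stream.
        return [{"i_frame": encode_as_i_frame(frames[p - 1]), "offset": p}
                for p in positions]

    # For the 30-frame complete stream above, 5 sparse streams from frames 5..25:
    frames = [f"frame{i}" for i in range(1, 31)]
    sparse = make_sparse_streams(frames, [5, 10, 15, 20, 25])
    print([s["offset"] for s in sparse])  # -> [5, 10, 15, 20, 25]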
Optionally, in order to indicate which video frame in the complete stream the I frame of each sparse stream was generated from, each sparse stream further includes an offset value that indicates the position, in the corresponding complete stream, of the video frame from which the I frame in that sparse stream was generated.
It should be noted that, in the embodiment of the present application, the offset value may be stored in a fixed area at the head or the tail of the sparse stream file.
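For illustration, a minimal sketch of such a fixed area, assuming a 4-byte big-endian offset field at the head of the file (the embodiment only requires a fixed area at the head or the tail; the exact layout here is an assumption):

    import struct

    def write_sparse_stream(path, offset, i_frame_bytes):
        with open(path, "wb") as f:
            f.write(struct.pack(">I", offset))  # fixed 4-byte offset area at the head
            f.write(i_frame_bytes)

    def read_offset(path):
        with open(path, "rb") as f:
            return struct.unpack(">I", f.read(4))[0]

    write_sparse_stream("sparse_15.bin", 15, b"\x00\x01\x02")
    print(read_offset("sparse_15.bin"))  # -> 15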
In another possible implementation, the server divides a complete stream into a plurality of pieces of segment data and, for every piece of segment data except the first, saves the first video frame as an I frame, thereby obtaining a plurality of sparse streams. In this way, each sparse stream includes one I frame and a number of P frames following it, and the P frames of each sparse stream are effectively the video frames between that sparse stream's I frame and the I frame of the next sparse stream.
For example, assuming that one complete stream includes 30 video frames, the server divides the complete stream into 3 segments, the 1 st to 10 th video frames being the first segment data, the 11 th to 20 th video frames being the second segment data, and the 21 st to 30 th video frames being the third segment data. For the first segment data, since the first video frame itself is an I-frame, it may not be necessary to generate a sparse stream. For the second segment data, the 11 th video frame may be saved as an I-frame, resulting in a sparse stream, and for the third segment data, the 21 st video frame may be saved as an I-frame, resulting in a second sparse stream.
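A minimal sketch of this second implementation (again with a placeholder encoder): the complete stream is split into fixed-length segments, and each segment after the first becomes a sparse stream whose first frame is re-saved as an I frame.

    def make_segment_sparse_streams(frames, segment_len):
        encode_as_i_frame = lambda frame: frame  # placeholder intra-frame encode
        sparse_streams = []
        for start in range(segment_len, len(frames), segment_len):
            segment = frames[start:start + segment_len]
            sparse_streams.append({
                "offset": start + 1,                      # 1-based I-frame position
                "i_frame": encode_as_i_frame(segment[0]),
                "p_frames": segment[1:],                  # frames up to the next I frame
            })
        return sparse_streams

    # 30 frames in segments of 10 -> two sparse streams, starting at frames 11 and 21.
    frames = [f"frame{i}" for i in range(1, 31)]
    print([s["offset"] for s in make_segment_sparse_streams(frames, 10)])  # -> [11, 21]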
Optionally, the at least one sparse stream corresponding to one complete stream may be arranged according to the sequence of the I frames included in each sparse stream. The sequence of the I frames included in each sparse stream refers to the sequence of the video frames generating the corresponding I frames in the corresponding complete stream.
In addition, in the embodiment of the present application, in order to facilitate subsequent obtaining of the sparse stream, the sparse stream corresponding to each complete stream may be named according to the position of the I frame in the complete stream. For example, again taking the example of 5 sparse streams corresponding to the aforementioned full stream comprising 30 video frames, the first sparse stream is named 5, the second sparse stream is named 10, and so on. Or, in some possible embodiments, the number of P frames between I frames of two adjacent sparse streams may be the same, and in this case, it is sufficient to directly arrange each sparse stream in the order of the I frames of each sparse stream in the complete stream to name each sparse stream in turn. For example, again taking the example of 5 sparse streams corresponding to the aforementioned full stream comprising 30 video frames, the first sparse stream may be named 1, the second sparse stream may be named 2, and so on.
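The two naming conventions can be sketched as follows (purely illustrative; beyond these examples the patent does not prescribe file names):

    def name_by_position(i_frame_positions):
        # First scheme: name each sparse stream after its I frame's position.
        return [str(p) for p in i_frame_positions]

    def name_by_order(count):
        # Second scheme (equal P-frame spacing): name sparse streams 1, 2, 3, ...
        return [str(i + 1) for i in range(count)]

    print(name_by_position([5, 10, 15, 20, 25]))  # -> ['5', '10', '15', '20', '25']
    print(name_by_order(5))                       # -> ['1', '2', '3', '4', '5']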
The foregoing only provides several implementation manners of sparse streams designed for facilitating obtaining of sparse streams, and in some possible embodiments, the searching for sparse streams may also be implemented by adding other information to sparse streams or by other ways of storing sparse streams, which is not limited in this embodiment of the present application.
In the embodiment of the application, for a video to be played, a display device first obtains a media description file of a high-definition video stream of the video to be played from a server. The media description file may include slice information of a plurality of slices, where the plurality of slices are slices obtained by the server dividing the high definition video stream.
Illustratively, the display device sends an acquisition request to the server, where the acquisition request carries an identifier of a video to be played. After receiving the acquisition request, the server acquires a media description file of a high-definition video stream of the video according to the identifier of the video to be played, and sends the media description file to the display device.
After receiving a media description file of a high-definition video stream of a video to be played, which is sent by a server, a display device analyzes the media description file, so that slice information of a plurality of slices is obtained.
After the slice information of the plurality of slices is obtained, the display apparatus may acquire the user's field angle in real time. The display device establishes a communication connection with a VR device, such as VR glasses, and the VR device detects the user's posture in real time. The display device receives the user posture detected by the VR device in real time and determines the user's field angle from it. When, at some moment, the current field angle determined by the display device differs from the field angle determined last time, the display device acquires the sub-stream combination corresponding to the current field angle according to the current playing progress; the current field angle is the changed field angle.
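As a rough sketch of this detection loop (the posture parameters and their mapping to a field angle are assumptions for illustration; a real headset would report an orientation quaternion or Euler angles):

    def field_angle_from_posture(posture):
        # Hypothetical mapping from posture parameters to a discrete field angle.
        return (round(posture["yaw"]), round(posture["pitch"]))

    def make_detector(on_change):
        last = [None]
        def on_posture_update(posture):
            angle = field_angle_from_posture(posture)
            if last[0] is not None and angle != last[0]:
                on_change(angle)  # triggers step 701: request the sub-stream combination
            last[0] = angle
        return on_posture_update

    detect = make_detector(lambda a: print("field angle changed to", a))
    detect({"yaw": 10.2, "pitch": 0.1})
    detect({"yaw": 25.7, "pitch": 0.3})  # -> field angle changed to (26, 0)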
It should be noted that when the display device detects that the field angle has changed, according to the current playing progress a P frame of some complete stream may currently be due to play. For convenience of description, the P frame about to be played is referred to as the target video frame, and the complete stream to which it belongs is referred to as the target complete stream. In this case, since the playing time of the target complete stream's I frame has already passed, even if that I frame were acquired now it could no longer be decoded in time, and accordingly the other video frames located after the target video frame in the target complete stream could not be decoded and played either. Based on this, the display device may obtain, from the at least one sparse stream corresponding to the target complete stream, the target sparse stream that is located after and closest to the current playing progress, so as to load the subsequent video frames by means of the I frame included in the target sparse stream.
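The selection itself reduces to a simple comparison. A minimal sketch with hypothetical structures (offset is the I frame's position in the target complete stream, as described above):

    def pick_target_sparse_stream(sparse_streams, target_frame_index):
        # Keep only sparse streams whose I frame lies after the frame about to play,
        # then take the one closest to the current playing progress.
        candidates = [s for s in sparse_streams if s["offset"] > target_frame_index]
        return min(candidates, key=lambda s: s["offset"], default=None)

    sparse = [{"offset": p} for p in (5, 10, 15, 20, 25)]
    print(pick_target_sparse_stream(sparse, 12))  # -> {'offset': 15}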
In some possible embodiments, after determining that the current field angle has changed from the last determined field angle, the display device determines the corresponding video area according to the current field angle, and further determines, according to the video area, the slice numbers of the plurality of slices corresponding to the current field angle. The display device then sends a video data acquisition request to the server, the request carrying the slice numbers of the plurality of slices corresponding to the current field angle and the current playing progress. The current playing progress may be represented by the current playing time of the video. After receiving the video data acquisition request, the server determines, according to the slice numbers, the complete stream corresponding to the first slice among the plurality of slices; this complete stream is the target complete stream corresponding to the current playing progress. The server then determines, according to the current playing progress, which video frame in the target complete stream the display device is about to play, and acquires the target sparse stream from the at least one sparse stream corresponding to the target complete stream according to the determined video frame. In addition, the server may further obtain the plurality of complete streams corresponding to the remaining slices other than the first slice among the plurality of slices corresponding to the current field angle, and feed back the target sparse stream and the plurality of complete streams to the display device as a sub-stream combination, namely the sub-stream combination corresponding to the changed field angle.
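The server-side handling just described can be sketched roughly as follows; the data structures and function names are hypothetical stand-ins for this illustration, not the patent's actual implementation.

from dataclasses import dataclass
from typing import Optional

@dataclass
class CompleteStream:
    slice_no: int
    # Ascending positions, within this complete stream, of the video frames
    # from which the I frames of its sparse streams were generated.
    sparse_i_frame_positions: list

def handle_video_data_request(streams, slice_numbers, progress_frame):
    # streams: mapping slice_no -> CompleteStream for the video to be played.
    # slice_numbers: slices covered by the changed field angle; the first one
    # determines the target complete stream.
    # progress_frame: position of the target video frame (current progress).
    target = streams[slice_numbers[0]]
    # Target sparse stream: the one whose I frame lies after the target video
    # frame and is closest to it; None if the progress is past the last one.
    target_sparse: Optional[int] = next(
        (p for p in target.sparse_i_frame_positions if p > progress_frame), None)
    # Complete streams for the remaining slices of the field angle.
    remaining = [streams[n] for n in slice_numbers[1:]]
    return {"target_sparse": target_sparse, "complete_streams": remaining}

streams = {n: CompleteStream(n, [5, 10, 15, 20, 25]) for n in (1, 2, 3)}
print(handle_video_data_request(streams, [1, 2, 3], 16)["target_sparse"])  # 20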
In some possible embodiments, after acquiring the target sparse stream, the server may further acquire the remaining video frames of the target complete stream located after the target sparse stream. In addition, the server may further obtain the plurality of complete streams obtained by encoding the remaining slices other than the first slice among the plurality of slices corresponding to the current field angle, and feed back the target sparse stream, the remaining video frames located after the target sparse stream, and the plurality of complete streams to the display device as a sub-stream combination, namely the sub-stream combination corresponding to the changed field angle. Alternatively, if each sparse stream includes an I frame and a plurality of P frames, the server may, after acquiring the target sparse stream, acquire the remaining sparse streams located after the target sparse stream in the at least one sparse stream corresponding to the target complete stream, and send the target sparse stream, those remaining sparse streams, and the plurality of complete streams to the display device as a sub-stream combination.
In some possible embodiments, after determining that the current field angle has changed from the last determined field angle, the display device determines the corresponding video area according to the current field angle, and further determines, according to the video area, the slice numbers of the plurality of slices corresponding to the current field angle. The display device then sends a video data acquisition request to the server, the request carrying the slice numbers of the plurality of slices corresponding to the current field angle. After receiving the video data acquisition request, the server acquires, according to the carried slice numbers, the complete stream corresponding to each of the plurality of slices and the at least one sparse stream corresponding to each complete stream, and sends the obtained plurality of complete streams and the plurality of sparse streams corresponding to them to the display device.
After receiving the plurality of complete streams and the plurality of sparse streams, the display device determines the target complete stream to be played according to the current playing progress, determines which video frame in the target complete stream is to be played next, acquires the target sparse stream from the at least one sparse stream corresponding to the target complete stream according to that video frame, and takes the acquired target sparse stream and the plurality of complete streams located after it as the sub-stream combination, namely the sub-stream combination corresponding to the changed field angle.
It should be noted that, whether it is the server or the display device that obtains the target sparse stream from the at least one sparse stream corresponding to the target complete stream, the implementation process may be as follows: first, detect whether the target video frame is located behind a reference video frame in the target complete stream. The target video frame is the video frame in the target complete stream to be played next, determined according to the current playing progress, and the reference video frame is the video frame used to generate the I frame of the last sparse stream corresponding to the target complete stream.
For example, assuming that the target complete stream includes 30 video frames, 5 sparse streams are generated based on the 5th, 10th, 15th, 20th and 25th video frames of the target complete stream. In this case, the video frame used to generate the I frame of the last of the 5 sparse streams is the 25th video frame of the target complete stream, and the reference video frame is therefore the 25th video frame.
If the target video frame is located behind the reference video frame, the playing time of the target video frame is later than that of the I frame of the last sparse stream of the target complete stream; that is, the current playing time has already passed the playing time of that I frame, in which case the P frames after the target video frame cannot be loaded and played from the I frame of the last sparse stream. At this point, it is sufficient to directly obtain the next complete stream after the target complete stream; that is, in this case the sub-stream combination may not include a target sparse stream.
If the target video frame is not located behind the reference video frame, the target sparse stream is acquired from the at least one sparse stream corresponding to the target complete stream, the target sparse stream being the sparse stream whose included I frame is located behind the target video frame and closest to it. Saying that the included I frame is located behind the target video frame means that the video frame used to generate that I frame is located behind the target video frame in the target complete stream; similarly, saying that the included I frame is closest to the target video frame means that the video frame used to generate that I frame is closest to the target video frame in the target complete stream.
For example, assuming that the target complete stream includes 30 video frames, 5 sparse streams are generated based on the 5th, 10th, 15th, 20th and 25th video frames of the target complete stream. If the target video frame is the 16th video frame of the target complete stream, the sparse-stream source frame that is located after the target video frame and closest to it is the 20th video frame; therefore, the sparse stream generated from the 20th video frame can be used as the target sparse stream.
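The selection rule described above amounts to the following small sketch, under the assumption (made for this example) that each sparse stream is identified by the position of the video frame from which its I frame was generated:

def pick_target_sparse(i_frame_positions, target_frame):
    # i_frame_positions: ascending positions of the frames that generated the
    # I frames of the sparse streams, e.g. [5, 10, 15, 20, 25].
    # target_frame: position of the target video frame in the target complete stream.
    reference = i_frame_positions[-1]   # the reference video frame
    if target_frame > reference:
        return None                     # fall through to the next complete stream
    # Closest I frame whose source frame lies after the target video frame.
    candidates = [p for p in i_frame_positions if p > target_frame]
    return min(candidates) if candidates else None

print(pick_target_sparse([5, 10, 15, 20, 25], 16))  # 20: sparse stream from the 20th frame
print(pick_target_sparse([5, 10, 15, 20, 25], 27))  # None: progress is past the reference frame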
It should be noted that, in the embodiments of the present application, the method for acquiring the target sparse stream according to the target video frame differs with the implementation of the sparse stream.
When each sparse stream includes an I frame and an offset value, after it is determined that the target video frame is not located behind the reference video frame, the target sparse stream located behind the target video frame and closest to it is determined according to the position of the target video frame in the target complete stream and the offset value of each sparse stream corresponding to the target complete stream.
When each sparse stream is named according to the position of its I frame in the corresponding complete stream, the target sparse stream is acquired according to the names of the sparse streams.
When each sparse stream is named according to the order of its I frame in the corresponding complete stream, it is first determined, according to the number of sparse streams corresponding to the target complete stream and the name of each sparse stream, from which video frame the I frame included in each sparse stream was generated; the target sparse stream is then determined by comparing the target video frame with the video frames of the target complete stream corresponding to the I frames of the sparse streams.
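For the order-based naming just mentioned, the source frame of each I frame can be recovered as sketched below, assuming the I frames are spread evenly over the complete stream (an assumption of this example, following from the equal spacing between adjacent I frames described earlier):

def i_frame_position(order_name, sparse_count, frame_count):
    # With sparse_count sparse streams spread evenly over a complete stream of
    # frame_count frames, adjacent I frames are a constant spacing apart.
    spacing = frame_count // (sparse_count + 1)
    return int(order_name) * spacing

# 30-frame complete stream with 5 sparse streams: spacing is 5, so the sparse
# stream named "4" was generated from the 20th video frame.
print(i_frame_position("4", 5, 30))  # 20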
Step 702: perform video playing according to the target sparse stream and the plurality of complete streams included in the sub-stream combination.
After the sub-stream combination corresponding to the current field angle is obtained, the display device performs video playing according to the target sparse stream and the plurality of complete streams included in the sub-stream combination.
If the target sparse stream includes an I frame and an offset value, and the display device pulls from the server the target sparse stream, the remaining video frames of the target complete stream located after the I frame of the target sparse stream, and the plurality of complete streams located after the target sparse stream, the display device directly plays, in sequence, the I frame of the target sparse stream, the remaining video frames after the I frame, and the plurality of complete streams located after the target sparse stream. By pulling the target sparse stream, the remaining video frames after its I frame, and the complete streams in a single request, the number of interactions between the display device and the server can be reduced, as can the amount of data transmitted by the server.
Optionally, if the target sparse stream includes an I frame and an offset value, and the sub-stream combination acquired by the display device from the server includes only the target sparse stream and the plurality of complete streams, then after acquiring the target sparse stream the display device acquires, from the target complete stream stored by the server and according to the offset value in the sparse stream, all video frames located after a first video frame, the first video frame being the video frame used to generate the I frame in the target sparse stream. The display device then sequentially plays the I frame included in the target sparse stream, all video frames after the first video frame, and the plurality of complete streams in the sub-stream combination. The remaining video frames of the target complete stream after the first video frame may also be referred to as the remaining video frames of the target complete stream after the I frame of the target sparse stream.
For example, assuming that the target complete stream includes 30 video frames, 5 sparse streams are generated from the 5th, 10th, 15th, 20th and 25th video frames of the target complete stream. If the target sparse stream is the 4th sparse stream, the offset value in the target sparse stream indicates that the video frame used to generate its I frame is the 20th video frame in the target complete stream. Based on this, the display device pulls from the server, according to the offset value, all video frames following the 20th video frame among the video frames included in the target complete stream; these are all P frames. The display device may then decode the I frame of the target sparse stream, the acquired P frames, and the plurality of complete streams located after the target sparse stream in the sub-stream combination, and sequentially play them starting from the playing time of the I frame of the target sparse stream.
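A minimal sketch of this assembly step, with placeholder frame labels standing in for actual encoded frames (purely illustrative, not the patent's data format):

def playback_sequence(sparse_i_frame, offset, complete_stream_frames, later_streams):
    # offset: 1-based position, in the target complete stream, of the video
    # frame from which the sparse stream's I frame was generated.
    remaining_p_frames = complete_stream_frames[offset:]  # frames after the source frame
    sequence = [sparse_i_frame] + remaining_p_frames
    for stream in later_streams:                          # complete streams after the target
        sequence.extend(stream)
    return sequence

frames = [f"F{i}" for i in range(1, 31)]                  # 30-frame target complete stream
seq = playback_sequence("I@20", 20, frames, [["I31", "P32", "P33"]])
print(seq[:3])                                            # ['I@20', 'F21', 'F22']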
Optionally, if the target sparse stream includes an I frame and an offset value, and the display device pulls the target complete stream and a plurality of complete streams located after the target sparse stream from the server, the display device obtains other video frames after the I frame of the target sparse stream from the obtained target complete stream according to the offset value of the target sparse stream, and then sequentially plays the I frame of the target sparse stream, the obtained other video frames, and the plurality of complete streams located after the target sparse stream.
Optionally, if the target sparse stream includes an I frame and a plurality of P frames, and the display device pulls the target sparse stream, a remaining sparse stream located after the target sparse stream, and a plurality of complete streams from the server, the display device sequentially plays the target sparse stream, the remaining sparse stream located after the target sparse stream, and the plurality of complete streams located after the target sparse stream.
For example, assuming that the target complete stream includes 30 video frames, the server divides the target complete stream into 3 pieces of segment data, each including 10 video frames, and thereby obtains 3 sparse streams. Assuming that the target sparse stream is the second sparse stream, the display device may acquire the second and third sparse streams, and sequentially load and play the second sparse stream, the third sparse stream, and the next complete stream after the target complete stream.
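The same example can be sketched with placeholder frames (again purely illustrative): the playlist is the target sparse stream, the sparse streams after it, and then the next complete stream.

# Three sparse streams, each an I frame followed by nine P frames, covering
# frames 1-10, 11-20 and 21-30 of the target complete stream.
sparse_streams = [
    ["I1"] + [f"P{i}" for i in range(2, 11)],
    ["I11"] + [f"P{i}" for i in range(12, 21)],
    ["I21"] + [f"P{i}" for i in range(22, 31)],
]
target_index = 1                          # the second sparse stream is the target
playlist = []
for stream in sparse_streams[target_index:]:
    playlist.extend(stream)
playlist.extend(["I31", "P32"])           # next complete stream (placeholder frames)
print(playlist[:3])                       # ['I11', 'P12', 'P13']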
Therefore, suppose the current playing progress indicates that the target complete stream to be played includes 30 video frames and that the target video frame to be played next is the 16th frame. According to the video playing method in the prior art, since the 16th frame cannot be decoded without its I frame, the display device looks for the next decodable I frame, which is the I frame of the next complete stream; the display device therefore has to wait 14 frames' time before it can play a high-definition picture again, and during those 14 frames the user can only view the base video stream with a lower resolution, that is, the user cannot see a high-definition picture for 14 frames' time. If video is played by the method provided in the embodiments of the present application, with 5 sparse streams generated from the 5th, 10th, 15th, 20th and 25th of the 30 video frames, the display device acquires, according to the target video frame, the target sparse stream generated from the 20th video frame of the target complete stream. By decoding the I frame included in the target sparse stream, the display device only needs to wait 4 frames before it can play that I frame and the P frames remaining after it in the target complete stream; in other words, the user also only needs to wait 4 frames. Compared with the prior art, the time for which the user watches low-resolution video is shortened, and the user experience is improved.
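Written out, the two waiting times in this comparison are (frame numbers taken from the example above, not fixed by the method):

waiting time in the prior art: 30 - 16 = 14 frames (until the I frame of the next complete stream)
waiting time with sparse streams: 20 - 16 = 4 frames (until the I frame generated from the 20th video frame)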
Fig. 8 shows a complete flow diagram of the display device interacting with the server to obtain the sub-stream combination and playing it. Referring to Fig. 8, the display device first sends an acquisition request to the server, the request carrying the identifier of the video to be played, and the server feeds back the media description file of the high-definition video stream of the video to be played according to that identifier. The display device then determines the current first field angle, determines according to it the slice information of the plurality of slices corresponding to the first field angle, and sends the server a video data acquisition request carrying that slice information. The server returns the sub-stream combination corresponding to the first field angle according to the video data acquisition request. The display device sequentially plays each complete stream in the sub-stream combination corresponding to the first field angle and, during playing, detects in real time whether the field angle changes. When a change of the field angle is detected, the slice information corresponding to the changed field angle is determined, and the sub-stream combination corresponding to the changed field angle is pulled from the server through a video data acquisition request. This sub-stream combination includes the target sparse stream located behind and closest to the current playing progress, the remaining video frames of the target complete stream corresponding to the target sparse stream that are located after the I frame of the target sparse stream, and the plurality of complete streams located after the target sparse stream. The display device then loads and plays, in sequence according to the target sparse stream, the I frame in the target sparse stream, the other video frames of the target complete stream located after the frame that generated that I frame, and the plurality of complete streams.
In the embodiments of the present application, when a change of the field angle is detected, the display device acquires the target sparse stream located behind and closest to the current playing progress, together with the plurality of complete streams located after it. Because a sparse stream includes an I frame generated from one video frame of the corresponding complete stream, if the current playing progress has reached some video frame in the middle of a complete stream, the display device can use the target sparse stream acquired for the current playing progress to play the I frame in the target sparse stream and the video frames remaining after it. Before the playing of the complete streams begins, the user therefore does not have to watch only the base video stream while waiting for the next high-definition picture, which reduces the time for which the user watches a low-resolution video picture and improves the user experience.
In some embodiments, a computer-readable storage medium is also provided, in which a computer program is stored, which when executed by a processor implements the steps of the video playing method in the above embodiments. For example, the computer readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It is noted that the computer-readable storage medium referred to herein may be a non-volatile storage medium, in other words, a non-transitory storage medium.
It should be understood that all or part of the steps for implementing the above embodiments may be implemented by software, hardware, firmware or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions, which may be stored in the computer-readable storage medium described above.
That is, in some embodiments, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps of the video playback method described above.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (12)

1. A display device, characterized in that the display device comprises a display and a controller;
the controller is used for responding to the change of the field angle, and acquiring a sub-stream combination corresponding to the changed field angle according to the current playing progress, wherein the sub-stream combination comprises a target sparse stream and a plurality of complete streams positioned behind the target sparse stream, each complete stream corresponds to at least one sparse stream, each sparse stream comprises an I frame generated according to one video frame in the corresponding complete stream, and the target sparse stream is a sparse stream positioned behind the current playing progress and closest to the current playing progress;
the controller is further configured to control the display to play video according to the target sparse stream and the plurality of complete streams.
2. The display device of claim 1, wherein each sparse stream further comprises an offset value indicating where in the corresponding full stream the video frames that generated the I frames in the respective sparse stream are located.
3. The display device according to claim 2, wherein the sub-stream combination further includes a remaining video frame of a target complete stream after a first video frame, the target complete stream is a complete stream corresponding to the target sparse stream, and the first video frame is a video frame used for generating an I frame of the target sparse stream;
the controller is specifically configured to control the display to sequentially play the target sparse stream, the remaining video frames located after the first video frame, and the plurality of complete streams.
4. The display device according to claim 2,
the controller is specifically configured to obtain, according to an offset value in the target sparse stream, all video frames located after the first video frame from a target complete stream corresponding to the target sparse stream, where the first video frame is a video frame used to generate an I frame in the target sparse stream;
the controller is further specifically configured to control the display to sequentially play the I frame included in the target sparse stream, all video frames subsequent to the first video frame, and the plurality of complete streams.
5. The display device of claim 1, wherein the at least one sparse stream corresponding to each complete stream is arranged according to the sequence, in the corresponding complete stream, of the video frames that generated the I frames of the respective sparse streams; each sparse stream further comprises a plurality of P frames, the plurality of P frames being the video frames between a first video frame and a second video frame in the complete stream corresponding to the respective sparse stream, the first video frame being the video frame used for generating the I frame included in the respective sparse stream, and the second video frame being the video frame used for generating the I frame included in the next sparse stream after the respective sparse stream; the sub-stream combination further includes the remaining sparse streams located after the target sparse stream in the at least one sparse stream corresponding to the target complete stream, the target complete stream being the complete stream corresponding to the target sparse stream.
6. The display device of claim 5, wherein the controller is specifically configured to control the display to sequentially play the target sparse stream, the remaining sparse stream located after the target sparse stream, and the plurality of complete streams.
7. A video playback method, the method comprising:
responding to the change of the field angle, and acquiring a sub-stream combination corresponding to the changed field angle according to the current playing progress;
the sub-stream combination comprises a target sparse stream and a plurality of complete streams positioned behind the target sparse stream, each complete stream corresponds to at least one sparse stream, each sparse stream comprises an I frame generated according to one video frame in the corresponding complete stream, and the target sparse stream is a sparse stream positioned behind the current playing progress and closest to the current playing progress;
and playing the video according to the target sparse flow and the plurality of complete flows.
8. The method of claim 7, wherein each sparse stream further comprises an offset value indicating where in the corresponding full stream the video frames that generated the I frames in the respective sparse stream are located.
9. The method of claim 8, wherein the sub-stream combination further includes a remaining video frame of a target complete stream after a first video frame, the target complete stream being a complete stream corresponding to the target sparse stream, the first video frame being a video frame used for generating an I frame of the target sparse stream;
the playing the video according to the target sparse stream and the plurality of complete streams includes:
and sequentially playing the target sparse stream, the remaining video frames after the first video frame and the plurality of complete streams.
10. The method of claim 8, wherein the playing the video according to the target sparse stream and the plurality of complete streams comprises:
according to the offset value in the target sparse stream, all video frames positioned after the first video frame are obtained from a target complete stream corresponding to the target sparse stream, wherein the first video frame is a video frame used for generating an I frame in the target sparse stream;
and sequentially playing the I frame included by the target sparse stream, all video frames after the first video frame and the plurality of complete streams.
11. The method according to claim 7, wherein the at least one sparse stream corresponding to each complete stream is arranged according to the sequence, in the corresponding complete stream, of the video frames that generated the I frames of the respective sparse streams; each sparse stream further includes a plurality of P frames, the plurality of P frames being the video frames between a first video frame and a second video frame in the complete stream corresponding to the respective sparse stream, the first video frame being the video frame used for generating the I frame included in the respective sparse stream, and the second video frame being the video frame used for generating the I frame included in the next sparse stream after the respective sparse stream; the sub-stream combination further includes the remaining sparse streams located after the target sparse stream in the at least one sparse stream corresponding to the target complete stream, the target complete stream being the complete stream corresponding to the target sparse stream.
12. The method of claim 11, wherein the playing video according to the target sparse stream and the plurality of complete streams comprises:
and sequentially playing the target sparse stream, the rest sparse streams behind the target sparse stream and the plurality of complete streams.
CN202010559501.9A 2020-06-18 2020-06-18 Video playing method and display equipment Pending CN111741314A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010559501.9A CN111741314A (en) 2020-06-18 2020-06-18 Video playing method and display equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010559501.9A CN111741314A (en) 2020-06-18 2020-06-18 Video playing method and display equipment

Publications (1)

Publication Number Publication Date
CN111741314A true CN111741314A (en) 2020-10-02

Family

ID=72649734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010559501.9A Pending CN111741314A (en) 2020-06-18 2020-06-18 Video playing method and display equipment

Country Status (1)

Country Link
CN (1) CN111741314A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113691797A (en) * 2021-08-27 2021-11-23 咪咕文化科技有限公司 Video playing processing method, device, equipment and storage medium
WO2022242482A1 (en) * 2021-05-21 2022-11-24 北京字跳网络技术有限公司 Playback control method and device, storage medium, and program product

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101854533A (en) * 2010-06-10 2010-10-06 华为技术有限公司 Frequency channel switching method, device and system
US20130223812A1 (en) * 2012-02-26 2013-08-29 Antonio Rossi Streaming video navigation systems and methods
CN106303682A (en) * 2016-08-09 2017-01-04 华为技术有限公司 The method and device of channel switch
CN106937141A (en) * 2017-03-24 2017-07-07 北京奇艺世纪科技有限公司 A kind of bitstreams switching method and device
CN108632681A (en) * 2017-03-21 2018-10-09 华为软件技术有限公司 Play method, server and the terminal of Media Stream
US20190313144A1 (en) * 2016-12-20 2019-10-10 Koninklijke Kpn N.V. Synchronizing processing between streams
CN110351607A (en) * 2018-04-04 2019-10-18 优酷网络技术(北京)有限公司 A kind of method, computer storage medium and the client of panoramic video scene switching



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201002