CN110944239A - Video playing method and device - Google Patents
- Publication number
- CN110944239A (application CN201911191989.8A)
- Authority
- CN
- China
- Prior art keywords
- video
- user
- cutting
- current
- bottom layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- H04N21/816 — Monomedia components thereof involving special video data, e.g. 3D video
- H04N21/234 — Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/4312 — Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/44 — Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/440263 — Reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA
Abstract
The invention provides a video playing method and a video playing device. The method determines the cut images that fall within the user's current visible area, where the user equipment and the corresponding service equipment agree on the number of slices into which the current panoramic video is cut after being unfolded, each cut image being obtained by that cutting; requests the corresponding video data from the service equipment according to the cut images falling within the user's current visible area; and renders and displays the video data, superimposing it on a base layer that displays low-resolution video content corresponding to the current panoramic video.
Description
Technical Field
The invention relates to the field of computer technology, and in particular to video playing technology.
Background
360-degree video offers an immersive viewing experience and is an important mode of future viewing. Because of its structure, however, a user cannot see content outside the field of view, so the content actually viewed is only a small fraction of the total video transmitted, which wastes considerable bandwidth. For example, with a field of view (FOV) of 90 degrees and a 360-degree video resolution of 1080p, the end user sees picture quality equivalent to 270p ordinary video. To match the picture quality of 1080p ordinary video, the original 360-degree video must be at least 4320p. This raises two problems: first, the bandwidth requirement increases; second, the demands on the terminal device's decoding capability also rise. Typically, 1080p video requires about 8 Mbps of bandwidth, while 4320p video requires about 80 Mbps. Most users' network environments cannot meet an 80 Mbps bandwidth requirement, and most existing terminal devices cannot decode 4320p video at all; even the few devices that can do so consume a great deal of energy.
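The resolution figures above follow from a simple proportionality argument. The sketch below reproduces that arithmetic under the simplifying assumption (not stated in the patent) that perceived vertical resolution scales with the fraction of the panorama's 360-degree width covered by the horizontal FOV:

```python
def viewed_equivalent_height(panorama_height_p: float, fov_deg: float) -> float:
    """Vertical resolution a viewer effectively sees from a panorama whose
    full width spans 360 degrees, given a horizontal FOV of fov_deg."""
    return panorama_height_p * (fov_deg / 360.0)

def required_panorama_height(target_height_p: float, fov_deg: float) -> float:
    """Panorama height needed so the in-FOV picture matches target_height_p."""
    return target_height_p * (360.0 / fov_deg)

print(viewed_equivalent_height(1080, 90))   # 270.0  ("equivalent to 270p")
print(required_panorama_height(1080, 90))   # 4320.0 ("at least 4320p")
```

With a 90-degree FOV covering one quarter of the panorama's width, these reproduce the 1080p → 270p and 1080p → 4320p figures in the text.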
Therefore, how to play high-resolution, high-bitrate 360-degree video effectively has become one of the problems to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a video playing method and device.
According to an aspect of the present invention, a method for video playing on a user equipment side is provided, wherein the method includes:
a. determining the cut images that fall within the user's current visible area, wherein the user equipment and the corresponding service equipment agree on the number of slices into which the current panoramic video is cut after being unfolded, each cut image being obtained by that cutting;
b. requesting the corresponding video data from the service equipment according to the cut images that fall within the user's current visible area;
c. rendering and displaying the video data and superimposing it on a base layer, wherein the base layer displays low-resolution video content corresponding to the current panoramic video.
According to another aspect of the present invention, there is also provided an apparatus for video playing at a user equipment, wherein the apparatus includes:
a cutting device for determining the cut images that fall within the user's current visible area, wherein the user equipment and the corresponding service equipment agree on the number of slices into which the current panoramic video is cut after being unfolded, each cut image being obtained by that cutting;
a requesting device for requesting the corresponding video data from the service equipment according to the cut images that fall within the user's current visible area;
and a rendering device for rendering and displaying the video data and superimposing it on a base layer, wherein the base layer displays low-resolution video content corresponding to the current panoramic video.
According to yet another aspect of the invention, there is also provided a computer readable storage medium storing computer code which, when executed, performs a method as in any one of the preceding.
According to yet another aspect of the invention, there is also provided a computer program product, which when executed by a computer device, performs the method of any of the preceding claims.
According to still another aspect of the present invention, there is also provided a computer apparatus including:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any preceding claim.
Compared with the prior art, the present invention requests the corresponding video data from the service equipment for rendering and display according to the cut images falling within the user's current visible area. By superimposing the video data of the high-resolution cut images on a low-resolution base layer, the invention improves the source quality while keeping bandwidth unchanged, improves the user's viewing experience, prevents display abnormalities caused by network problems, and solves the problem of low definition in panoramic live broadcasting. The tiling principle also reduces the memory footprint during decoding, so that high-definition video can be played on ordinary machines.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments made with reference to the following drawings:
FIG. 1 illustrates a block diagram of an exemplary computer system/server 12 suitable for use in implementing embodiments of the present invention;
FIG. 2 illustrates a schematic diagram of an apparatus for video playback in accordance with an aspect of the present invention;
fig. 3 illustrates a flow diagram of a method of video playback in accordance with another aspect of the invention.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
The term "computer device" (or "computer") herein refers to an intelligent electronic device that can execute predetermined processes, such as numerical and/or logical calculation, by running predetermined programs or instructions. It may include a processor and a memory, in which case the processor executes instructions pre-stored in the memory to carry out the predetermined processes; alternatively, the predetermined processes may be executed by hardware such as an ASIC, FPGA or DSP, or a combination thereof. Computer devices include, but are not limited to, servers, personal computers, laptops, tablets, smartphones, and the like.
Computer devices include user equipment and network devices. User equipment includes, but is not limited to, computers, smartphones, PDAs, and the like; network devices include, but are not limited to, a single network server, a server group consisting of multiple network servers, or a cloud of a large number of computers or network servers based on cloud computing, where cloud computing is a kind of distributed computing: a super virtual computer composed of a group of loosely coupled computers. A computer device may operate alone to implement the invention, or may access a network and implement the invention through interoperation with other computer devices in the network. The network in which a computer device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and the like.
It should be noted that the user equipment, the network device, the network, etc. are only examples, and other existing or future computer devices or networks may also be included in the scope of the present invention, and are included by reference.
The methods discussed below, some of which are illustrated by flow diagrams, may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine or computer readable medium such as a storage medium. The processor(s) may perform the necessary tasks.
Specific structural and functional details disclosed herein are merely representative and are provided for purposes of describing example embodiments of the present invention. The present invention may, however, be embodied in many alternate forms and should not be construed as limited to only the embodiments set forth herein.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element may be termed a second element, and, similarly, a second element may be termed a first element, without departing from the scope of example embodiments. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, there are no intervening elements present. Other words used to describe the relationship between elements (e.g., "between" versus "directly between", "adjacent" versus "directly adjacent to", etc.) should be interpreted in a similar manner.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed substantially concurrently, or the figures may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
The present invention is described in further detail below with reference to the attached drawing figures.
FIG. 1 illustrates a block diagram of an exemplary computer system/server 12 suitable for use in implementing embodiments of the present invention. The computer system/server 12 shown in FIG. 1 is only one example and should not be taken to limit the scope of use or the functionality of embodiments of the present invention.
As shown in FIG. 1, computer system/server 12 is in the form of a general purpose computing device. The components of computer system/server 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Computer system/server 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The computer system/server 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 1, and commonly referred to as a "hard drive"). Although not shown in FIG. 1, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
The computer system/server 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with the computer system/server 12, and/or with any devices (e.g., network card, modem, etc.) that enable the computer system/server 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the computer system/server 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet) via the network adapter 20. As shown, network adapter 20 communicates with the other modules of computer system/server 12 via bus 18. It should be appreciated that although not shown in FIG. 1, other hardware and/or software modules may be used in conjunction with the computer system/server 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing by executing programs stored in the memory 28.
For example, the memory 28 stores a computer program for executing the functions and processes of the present invention, and the processing unit 16 executes that computer program, thereby implementing the video playing functions of the present invention on the user equipment side.
The specific apparatus/steps of the present invention for video playback will be described in detail below.
Fig. 2 shows a schematic diagram of an apparatus for video playback in accordance with an aspect of the present invention.
The apparatus 1 comprises a cutting means 201, a requesting means 202 and a rendering means 203.
The cutting device 201 determines the cut images that fall within the user's current visible area, where the user equipment and the corresponding service equipment agree on the number of slices into which the unfolded current panoramic video is cut, each cut image being obtained by that cutting.
Specifically, because of the particularities of panoramic video display (a panoramic video is usually shown on a sphere or a cube), presenting the user with the same viewing effect as ordinary 2D video requires a higher definition, and raising the definition of a panoramic video also enlarges it. The user equipment and the corresponding service equipment therefore agree on the number of slices into which the unfolded panoramic video is cut; the tiling can be any m × n grid, where m and n are integers greater than or equal to 1, and each cut image is obtained by that cutting. A user cannot view the whole current panoramic video at once: only the video content within the user's field of view (FOV) is visible, and content outside the FOV cannot be seen. The FOV is referred to herein as the user's current visible area, and the cutting device 201 determines the cut images that fall within it.
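As an illustration of how a client might map its field of view to tile indices, the following sketch assumes an equirectangular unfolding, a rectangular FOV in yaw/pitch coordinates, and 1-based row-major tile numbering; none of these specifics are mandated by the patent:

```python
def visible_tiles(yaw_deg, pitch_deg, h_fov_deg, v_fov_deg, m, n):
    """Return 1-based, row-major indices of the m x n tiles of an unfolded
    (equirectangular) panorama that intersect the viewer's field of view.
    Columns split the 0..360-degree yaw range; rows split -90..+90 pitch."""
    tile_w, tile_h = 360.0 / m, 180.0 / n
    # Columns covered by the horizontal FOV; yaw wraps around modulo 360.
    left = (yaw_deg - h_fov_deg / 2.0) % 360.0
    cols, covered = set(), 0.0
    while covered < h_fov_deg:
        cols.add(int(((left + covered) % 360.0) // tile_w))
        covered += tile_w - ((left + covered) % tile_w)  # jump to next tile boundary
    cols.add(int(((left + h_fov_deg) % 360.0) // tile_w))  # include the right edge
    # Rows covered by the vertical FOV; pitch is clamped, no wraparound.
    lo = max(-90.0, pitch_deg - v_fov_deg / 2.0) + 90.0
    hi = min(90.0, pitch_deg + v_fov_deg / 2.0) + 90.0
    rows = range(int(lo // tile_h), min(n - 1, int(hi // tile_h)) + 1)
    return sorted(r * m + c + 1 for r in rows for c in cols)

# Looking straight ahead with a just-under-90-degree FOV on a 4 x 4 grid,
# the FOV straddles the yaw seam and covers the four central tiles:
print(visible_tiles(0, 0, 89, 89, 4, 4))   # [5, 8, 9, 12]
```

With the 16-piece (4 × 4) cutting used as an example later in the description, a 90-degree FOV selects only 4 of the 16 tiles, which is the bandwidth saving the scheme relies on.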
The requesting device 202 requests the corresponding video data from the service device according to the cut images that fall within the user's current visible area.
Specifically, according to the cut images determined by the cutting device 201 as falling within the user's current visible area, the requesting device 202 requests from the corresponding server the video data of those cut images. Here, the user device determines which cut images' video data should be requested from the service device based on the head rotation angle plus the FOV (field of view).
The rendering device 203 renders and displays the video data, superimposing it on a base layer that displays the low-resolution video content corresponding to the current panoramic video.
Specifically, the rendering device 203 renders and displays, within the current visible area, the video data received from the corresponding service device; it also receives a base layer from the service device and displays it across the entire current panoramic video. The base layer shows low-resolution video content, while the rendering within the current visible area shows high-resolution content, so the content the user actually watches is high resolution. Because the user's current visible area changes with the user's posture, the position within the panorama occupied by the high-resolution rendered video data changes accordingly.
Here, the cut images of the current visible area are rendered and displayed only after the device 1 has received the corresponding video data and the video data of both the low-resolution base layer and the high-resolution cut images have been fully loaded. Superimposing the high-resolution cut images on the low-resolution base layer in this way improves the user's viewing experience and prevents display abnormalities caused by network problems, such as local black blocks or other effects that impair viewing.
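The fallback behavior described here (a full-panorama low-resolution base layer underneath, with high-resolution tiles drawn over it as their data arrives) can be sketched as follows; the class name and the string "frames" are purely illustrative:

```python
class PanoramaCompositor:
    """Base layer first, then whichever high-resolution tiles are ready.
    A tile whose data has not arrived simply leaves the base layer showing,
    so the user never sees a black block."""

    def __init__(self, base_frame):
        self.base_frame = base_frame   # low-res frame covering the whole panorama
        self.tiles = {}                # tile_id -> high-res frame, filled as data arrives

    def on_tile_data(self, tile_id, frame):
        self.tiles[tile_id] = frame

    def compose(self, visible_ids):
        """Draw list for this frame: the base layer, then each ready visible tile."""
        draw_list = [("base", self.base_frame)]
        draw_list += [(tid, self.tiles[tid]) for tid in visible_ids if tid in self.tiles]
        return draw_list

comp = PanoramaCompositor(base_frame="low-res")
comp.on_tile_data(5, "hi-res-5")
print(comp.compose([5, 8]))   # tile 8 has not arrived; the base layer covers it
```

The draw order is the essential point: because the base layer is always emitted first, a late or missing high-resolution tile degrades quality rather than producing a visible hole.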
Preferably, the device 1 further comprises a determining device (not shown). The determining device determines the user's current visible area according to the current video playing progress and the user equipment's own posture.
Specifically, the determining device determines a field of view (FOV) from the current video playing progress and the user equipment's own posture, determines the user's current visible area from that FOV, and thereby determines which cut images the visible area contains. The user equipment then generates a corresponding video request based on the current visible area, sends it to the service equipment, and receives in return the video data of the cut images contained in the current visible area.
Preferably, the device 1 further comprises a concealing device (not shown). When a predetermined trigger condition is met, the concealing device hides the video data corresponding to the user's current visible area and displays only the base layer;
wherein the predetermined trigger condition comprises:
- the user's current visible area changes; or
- the network speed of the user equipment falls below a predetermined threshold.
Specifically, to avoid black blocks being displayed when the content of the current visible area is not updated in time (for example, when the user turns their head quickly so that the visible area in the line of sight changes, or when the network is slow), the concealing device checks whether a predetermined trigger condition is met: the user's current visible area has changed, or the network speed of the user equipment is below a predetermined threshold. When the condition is met, the concealing device hides all visible areas except the base layer and displays only the low-resolution base layer, so that even if the visible area's content is stale, the user sees at least the low-resolution base layer rather than black video blocks.
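A minimal sketch of that trigger check follows; the 2 Mbps threshold is an arbitrary illustration, since the patent leaves the predetermined threshold unspecified:

```python
def should_show_base_only(prev_visible_ids, cur_visible_ids,
                          net_speed_mbps, min_speed_mbps=2.0):
    """True when the high-resolution tiles should be hidden and only the
    low-resolution base layer shown: the visible area changed, or the
    network speed fell below the predetermined threshold."""
    return cur_visible_ids != prev_visible_ids or net_speed_mbps < min_speed_mbps

print(should_show_base_only({5, 8}, {5, 8}, 10.0))  # False: nothing triggered
print(should_show_base_only({5, 8}, {6, 9}, 10.0))  # True: visible area changed
print(should_show_base_only({5, 8}, {5, 8}, 0.5))   # True: network too slow
```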
Preferably, the device 1 further comprises a control device (not shown). The control device sets a corresponding texture map for each cut image obtained by cutting, allocates a corresponding identifier to each cut image and to the base layer, and controls the display and hiding of the cut images and the base layer according to those identifiers.
Specifically, after the current panoramic video is unfolded, the user equipment side and the service equipment side agree on how to cut it, for example into 16 pieces. The control device sets a corresponding texture map for each cut image; each texture is bound to an OpenGLContext created by the native layer, and the video content is drawn directly through an OpenGL shared context. The device 1 allocates a corresponding identifier to each cut image and to the base layer: for example, the 16 cut images receive identifiers 1 to 16 and the base layer receives identifier 17. Seventeen memory regions are then created, each identifier (id) is bound to its cut image (cut image 1 to id1, cut image 2 to id2, and so on), and the corresponding id space is passed to the underlying decoding and rendering thread. The control device can then control the display and hiding of each cut image and of the base layer by these identifiers.
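The identifier scheme in the 16-piece example (tiles 1 to 16, base layer 17) might be managed as sketched below; the visibility map stands in for the per-texture show/hide state, and the function names are invented for illustration:

```python
def allocate_ids(num_tiles=16):
    """Assign identifiers as in the 16-piece example: cut images get ids
    1..num_tiles and the base layer gets id num_tiles + 1 (17 for 16 tiles).
    Only the base layer is visible until tile data is ready."""
    base_id = num_tiles + 1
    visible = {i: (i == base_id) for i in range(1, base_id + 1)}
    return base_id, visible

def set_visible(visible, base_id, tile_ids):
    """Show exactly the given tiles; the base layer always stays visible."""
    for i in visible:
        visible[i] = (i in tile_ids) or (i == base_id)
    return visible

base_id, vis = allocate_ids(16)
set_visible(vis, base_id, {5, 8})
print(base_id, vis[5], vis[8], vis[17], vis[1])   # 17 True True True False
```

Keeping the base layer permanently visible in the map mirrors the concealing behavior above: hiding the high-resolution tiles is just `set_visible(vis, base_id, set())`.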
In this method, image rendering is performed directly in the pre-applied address space through the OpenGLContext binding, which reduces communication delay and solves the refresh-delay problem.
Preferably, the apparatus 1 further comprises a creating device (not shown). The creating device creates a rendering thread pool;
and the rendering device 203 starts decoders from the rendering thread pool for the video data and the base layer, so as to decode and render them.
Specifically, the number of cut images visible within the current visible area may differ between viewing angles, and each cut image corresponds to one decoder, so the cut images appearing in the current visible area determine which decoders are started. The creating device creates a rendering thread pool; the device then matches the received video data and base layer against the cut images appearing in the current visible area and starts the corresponding decoders in the rendering thread pool to decode and render the video data and the base layer.
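One way to organize the per-tile decoders over a thread pool is sketched below; the `_decode` body is a stand-in (uppercasing a string) for a real video decoder, and the pool size is illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

class DecoderPool:
    """One decoder per cut image, started lazily the first time the tile
    appears in the visible area; decoding jobs run on a shared thread pool."""

    def __init__(self, max_workers=4):
        self.pool = ThreadPoolExecutor(max_workers=max_workers)
        self.started = set()   # ids of tiles whose decoder has been started

    def decode_visible(self, visible_ids, tile_data):
        """Start (if needed) and run the decoders for the visible tiles."""
        self.started.update(visible_ids)
        futures = {tid: self.pool.submit(self._decode, tile_data[tid])
                   for tid in visible_ids}
        return {tid: f.result() for tid, f in futures.items()}

    @staticmethod
    def _decode(payload):
        return payload.upper()   # placeholder for real decode work

pool = DecoderPool()
print(pool.decode_visible([5, 8], {5: "frame-5", 8: "frame-8"}))
```

Because only the decoders for visible tiles ever run, memory during decoding stays proportional to the FOV rather than to the whole panorama, which is the memory saving the summary claims.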
Preferably, the current panoramic video comprises live video and video on demand.
The method and the device can be applied to scenarios that require transmitting high-bitrate content, such as video-on-demand and live broadcasting, and can be used on devices based on the Unity + Android platform.
Fig. 3 illustrates a flow diagram of a method of video playback in accordance with another aspect of the invention.
In step S301, the device 1 determines the cut images that fall within the user's current visible area, where the user device and the corresponding service device agree on the number of slices into which the unfolded current panoramic video is cut, each cut image being obtained by that cutting.
Specifically, because of the particularities of panoramic video display (a panoramic video is usually shown on a sphere or a cube), presenting the user with the same viewing effect as ordinary 2D video requires a higher definition, and raising the definition of a panoramic video also enlarges it. The user equipment and the corresponding service equipment therefore agree on the number of slices into which the unfolded panoramic video is cut; the tiling can be any m × n grid, where m and n are integers greater than or equal to 1, and each cut image is obtained by that cutting. A user cannot view the whole current panoramic video at once: only the video content within the user's field of view (FOV) is visible, and content outside the FOV cannot be seen. The FOV is referred to herein as the user's current visible area, and in step S301 the device 1 determines the cut images that fall within it.
In step S302, the apparatus 1 requests the corresponding video data from the service device according to the cut images falling within the user's current visible area.
Specifically, in step S302, based on the cut images determined in step S301 as falling within the user's current visible area, the apparatus 1 requests from the corresponding server the video data of those cut images. The user device indicates which cut images' video data should be requested from the service device in the form of a head rotation angle plus the FOV.
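The "head rotation angle + FOV" request described above could be composed along the following lines. The endpoint path and parameter names are hypothetical; the patent does not specify a wire format.

```python
# Hedged sketch: compose a request URL carrying the viewing direction and
# the ids of the cut images needed. All names here are assumptions.

def build_tile_request(base_url, video_id, yaw_deg, pitch_deg, tile_ids):
    """Compose a request URL for the high-resolution tiles in view."""
    tiles = ",".join(str(t) for t in sorted(tile_ids))
    return (f"{base_url}/video/{video_id}/tiles"
            f"?yaw={yaw_deg:.1f}&pitch={pitch_deg:.1f}&tiles={tiles}")
```

In practice the same information could equally be carried in an HTTP POST body or a DASH-style segment request; the URL form is only for illustration.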
In step S303, the device 1 renders and displays the video data, and superimposes a bottom layer that displays low-resolution video content corresponding to the current panoramic video.
Specifically, in step S303, the apparatus 1 renders and displays, in the current visible area, the video data received from the corresponding service device; it also receives a bottom layer from the service device and displays that layer across the entire current panoramic video. The bottom layer shows low-resolution video content while the current visible area shows high-resolution content, so the content the user actually watches is high resolution. Because the user's current visible area moves as the user's posture changes, the region of the current panoramic video occupied by the high-resolution rendered video data moves with it.
Here, the cut images of the current visible region are rendered and displayed only after the device 1 has received the corresponding video data and both the low-resolution bottom layer and the high-resolution cut images are fully loaded. Superimposing the high-resolution cut images on the low-resolution bottom layer in this way improves the viewing experience and prevents display abnormalities caused by network problems, such as local black blocks or other artifacts that affect viewing.
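The loading rule above (show high-resolution tiles only once everything they depend on is ready, otherwise fall back to the bottom layer) can be modelled with a small state holder. Class and method names are illustrative, not from the patent.

```python
# Sketch of the compositing rule: tiles become visible only when the
# bottom layer AND every needed tile are fully loaded; until then only
# the low-resolution bottom layer is drawn, so no black blocks appear.

class ViewportCompositor:
    def __init__(self):
        self.base_loaded = False
        self.tiles_loaded = set()   # ids of fully decoded tiles
        self.tiles_needed = set()   # ids currently in the visible area

    def on_base_layer_ready(self):
        self.base_loaded = True

    def on_tile_ready(self, tile_id):
        self.tiles_loaded.add(tile_id)

    def layers_to_display(self):
        """Return which layers are drawn this frame."""
        if self.base_loaded and self.tiles_needed <= self.tiles_loaded:
            return ["base"] + [f"tile-{t}" for t in sorted(self.tiles_needed)]
        # Fall back to the low-resolution bottom layer alone.
        return ["base"] if self.base_loaded else []
```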
Preferably, the method further comprises step S304 (not shown). In this step S304, the apparatus 1 determines the current visible area of the user according to the current video playing progress and the self-posture of the user equipment.
Specifically, in step S304, the apparatus 1 determines the field of view (FOV) from the current video playing progress and the posture of the user equipment, determines the user's current visible area from that FOV, and thereby determines the cut images included in the current visible area. The user equipment then generates a corresponding video request based on those cut images and sends it to the service device, receiving in return the video data corresponding to the cut images included in the current visible area.
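One plausible way to derive the viewing direction from the device's attitude, as step S304 requires, is to convert an orientation quaternion into yaw and pitch angles. The quaternion convention below (w, x, y, z with a y-up vertical axis) is an assumption; real sensor APIs differ.

```python
import math

# Hedged sketch: convert a device-attitude quaternion to (yaw, pitch)
# in degrees. Convention assumed: (w, x, y, z), yaw about the vertical
# y axis, pitch about the horizontal x axis.

def attitude_to_view(w, x, y, z):
    """Convert an orientation quaternion to (yaw, pitch) in degrees."""
    yaw = math.degrees(math.atan2(2 * (w * y + x * z),
                                  1 - 2 * (y * y + x * x)))
    # Clamp the asin argument to guard against floating-point drift.
    s = max(-1.0, min(1.0, 2 * (w * x - y * z)))
    pitch = math.degrees(math.asin(s))
    return yaw, pitch
```

The resulting (yaw, pitch) pair, together with the device's FOV extents, fixes the current visible area and hence the cut images to request.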
Preferably, the method further comprises step S305 (not shown). In this step S305, when a predetermined trigger condition is satisfied, the apparatus 1 hides the video data corresponding to the current visible area of the user and displays only the bottom layer;
wherein the predetermined trigger condition comprises:
the current visible area of the user changes;
the network speed of the user equipment is lower than a preset threshold value.
Specifically, when the user turns the head quickly (so the current visible area in the line of sight changes) or the network is slow, the content of the current visible area may not be updated in time, which would show black blocks. To avoid this, in step S305 the apparatus 1 checks whether a predetermined trigger condition is met, for example that the user's current visible area has changed, or that the network speed of the user equipment is below a predetermined threshold. When the apparatus 1 determines that the condition is met, it hides all the visible-area video data and displays only the low-resolution bottom layer, so that even if the current visible area has not been updated in time, no video black block is shown and at least the low-resolution bottom layer remains visible.
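The fallback rule of step S305 reduces to a small predicate plus a state change. The threshold value and function names below are illustrative assumptions; the patent only says the threshold is "predetermined".

```python
# Sketch of the trigger condition: hide all high-resolution tiles and
# keep only the bottom layer when the view has just changed or the
# network speed drops below an (assumed) threshold.

MIN_SPEED_KBPS = 2000  # illustrative threshold, not from the patent

def apply_trigger(visible_tile_ids, view_changed, network_speed_kbps,
                  min_speed_kbps=MIN_SPEED_KBPS):
    """Return the tile ids to keep displayed; [] means base layer only."""
    if view_changed or network_speed_kbps < min_speed_kbps:
        return []          # hide all tiles; the bottom layer stays visible
    return sorted(visible_tile_ids)
```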
Preferably, the method further comprises step S306 (not shown). In this step S306, the apparatus 1 sets a corresponding texture map for each cut image obtained by the cutting, and assigns a corresponding identifier to each cut image and the base map layer; and controlling the display and the hiding of the cutting image and the bottom layer according to the identification.
Specifically, after the current panoramic video is unfolded, the user device and the service device agree on how to cut it, for example into 16 parts. In step S306, the apparatus 1 sets a corresponding texture map for each cut image; each texture is bound to an OpenGL texture created by the native layer, and the video content is drawn directly through that OpenGL texture. The apparatus 1 assigns a corresponding identifier to each cut image and to the bottom layer, for example identifiers 1 to 16 for the cut images and 17 for the bottom layer. Seventeen memory regions are then created, each created identifier (id) is bound to its cut image (cut image 1 to id1, cut image 2 to id2, and so on), and the corresponding id space is passed to the bottom-layer decoding and rendering thread. The device 1 can then control the display and hiding of each cut image and of the bottom layer through these identifiers.
In this method, binding through the OpenGL context lets image rendering write directly into the pre-allocated address space, which reduces communication delay and solves the refresh-latency problem.
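The identifier bookkeeping described above (ids 1 to 16 for the cut images, 17 for the bottom layer, with show/hide controlled per id) can be sketched as follows. The OpenGL texture binding itself is omitted; this models only the id table, and the class name is an assumption.

```python
# Sketch of the id scheme from the 16-part example: tiles get ids 1..16,
# the bottom layer gets id 17, and a visibility table keyed by id drives
# display and hiding of each layer.

TILE_COUNT = 16
BASE_LAYER_ID = TILE_COUNT + 1   # id 17, per the example in the text

class LayerRegistry:
    def __init__(self):
        # One slot per cut image plus the bottom layer; tiles start hidden.
        self.visible = {i: False for i in range(1, BASE_LAYER_ID + 1)}
        self.visible[BASE_LAYER_ID] = True   # bottom layer always shown

    def show(self, ids):
        for i in ids:
            self.visible[i] = True

    def hide_all_tiles(self):
        for i in range(1, TILE_COUNT + 1):
            self.visible[i] = False

    def shown(self):
        return sorted(i for i, v in self.visible.items() if v)
```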
Preferably, the method further comprises step S307 (not shown). In this step S307, the apparatus 1 creates a rendering thread pool;
wherein, rendering and displaying the video data and overlaying a bottom layer comprises:
and according to the video data and the bottom layer, starting a decoder from the rendering thread pool to decode and render the video data and the bottom layer.
Specifically, the number of cut images visible in the current visible region can differ from one viewing angle to another, and each cut image corresponds to one decoder, so the cut images appearing in the current visible region determine which decoders are started.
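The one-decoder-per-tile arrangement above maps naturally onto a thread pool. In this sketch real decoding is replaced by a placeholder function, and the pool size and names are assumptions.

```python
# Sketch of the rendering thread pool: start a decoder for each cut
# image in the current visible area, plus one for the bottom layer.

from concurrent.futures import ThreadPoolExecutor

def decode(layer_id):
    # Placeholder for decoding and rendering one layer's video data.
    return f"decoded layer {layer_id}"

def decode_visible(visible_tile_ids, base_layer_id=17, max_workers=4):
    """Run decoders from a thread pool for the visible tiles + base."""
    targets = sorted(visible_tile_ids) + [base_layer_id]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(decode, targets))  # order-preserving
    return results
```

Because `ThreadPoolExecutor.map` preserves input order, the results line up with the sorted tile ids followed by the bottom layer.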
Preferably, the current panoramic video comprises live video and video on demand.
The method and the device can be applied to scenarios that require transmitting high-bit-rate content, such as video-on-demand and live broadcasting, and can be used on devices based on a Unity + Android platform.
The invention also provides a computer readable storage medium having stored thereon computer code which, when executed, performs a method as in any one of the preceding claims.
The invention also provides a computer program product, which when executed by a computer device, performs the method of any of the preceding claims.
The present invention also provides a computer device, comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any preceding claim.
It is noted that the present invention may be implemented in software and/or in a combination of software and hardware, for example, the various means of the invention may be implemented using Application Specific Integrated Circuits (ASICs) or any other similar hardware devices. In one embodiment, the software program of the present invention may be executed by a processor to implement the steps or functions described above. Also, the software programs (including associated data structures) of the present invention can be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Further, some of the steps or functions of the present invention may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Claims (15)
1. A method for playing video at a user equipment end comprises the following steps:
a, determining cutting images falling in a current visible area of a user, wherein the user equipment and corresponding service equipment agree on the number of cutting copies of a current panoramic video after being unfolded, and cutting to obtain each cutting image;
b, according to the cutting image falling in the current visible area of the user, requesting the service equipment to obtain corresponding video data;
and c, rendering and displaying the video data, and overlaying a bottom layer, wherein the bottom layer displays the low-resolution video content corresponding to the current panoramic video.
2. The method of claim 1, wherein the method further comprises:
and determining the current visible area of the user according to the current video playing progress and the self posture of the user equipment.
3. The method of claim 1 or 2, wherein the method further comprises:
when a preset trigger condition is met, hiding video data corresponding to the current visible area of the user and only displaying the bottom layer;
wherein the predetermined trigger condition comprises:
the current visible area of the user changes;
the network speed of the user equipment is lower than a preset threshold value.
4. The method of any of claims 1 to 3, wherein the method further comprises:
setting corresponding texture maps for all cut images obtained by cutting, and distributing corresponding marks for all the cut images and the bottom map layer;
and controlling the display and the hiding of the cutting image and the bottom layer according to the identification.
5. The method of any of claims 1 to 4, wherein the method further comprises:
creating a rendering thread pool;
wherein, rendering and displaying the video data and overlaying a bottom layer comprises:
and according to the video data and the bottom layer, starting a decoder from the rendering thread pool to decode and render the video data and the bottom layer.
6. The method of any of claims 1-5, wherein the current panoramic video comprises live video and video on demand.
7. An apparatus for video playing at a user equipment, wherein the apparatus comprises:
the cutting device is used for determining cutting images falling in a current visible area of a user, wherein the user equipment and corresponding service equipment agree on the number of cutting copies of a current panoramic video after being unfolded, and each cutting image is obtained through cutting;
the request device is used for requesting the service equipment to obtain corresponding video data according to the cutting image falling in the current visible area of the user;
and the rendering device is used for rendering and displaying the video data and superposing a bottom layer, wherein the bottom layer displays the low-resolution video content corresponding to the current panoramic video.
8. The apparatus of claim 7, wherein the apparatus further comprises:
and the determining device is used for determining the current visible area of the user according to the current video playing progress and the self posture of the user equipment.
9. The apparatus of claim 7 or 8, wherein the apparatus further comprises:
hiding means for hiding video data corresponding to the current visible area of the user and displaying only the bottom layer when a predetermined trigger condition is satisfied;
wherein the predetermined trigger condition comprises:
the current visible area of the user changes;
the network speed of the user equipment is lower than a preset threshold value.
10. The apparatus of any one of claims 7 to 9, wherein the apparatus further comprises control means for:
setting corresponding texture maps for all cut images obtained by cutting, and distributing corresponding marks for all the cut images and the bottom map layer;
and controlling the display and the hiding of the cutting image and the bottom layer according to the identification.
11. The apparatus of any one of claims 7 to 10, wherein the apparatus further comprises:
creating means for creating a rendering thread pool;
wherein the rendering device is configured to:
and according to the video data and the bottom layer, starting a decoder from the rendering thread pool to decode and render the video data and the bottom layer.
12. The apparatus of any of claims 7-11, wherein the current panoramic video comprises live video and video on demand.
13. A computer readable storage medium storing computer code which, when executed, performs the method of any of claims 1 to 6.
14. A computer program product which, when executed by a computer device, performs the method of any one of claims 1 to 6.
15. A computer device, the computer device comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911191989.8A CN110944239A (en) | 2019-11-28 | 2019-11-28 | Video playing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911191989.8A CN110944239A (en) | 2019-11-28 | 2019-11-28 | Video playing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110944239A true CN110944239A (en) | 2020-03-31 |
Family
ID=69908275
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911191989.8A Withdrawn CN110944239A (en) | 2019-11-28 | 2019-11-28 | Video playing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110944239A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20170045633A (en) * | 2015-10-19 | 2017-04-27 | 주식회사 케이티 | Method for providing panoramic video service, panoramic video providing server and system |
CN107040794A (en) * | 2017-04-26 | 2017-08-11 | 盯盯拍(深圳)技术股份有限公司 | Video broadcasting method, server, virtual reality device and panoramic virtual reality play system |
CN108632674A (en) * | 2017-03-23 | 2018-10-09 | 华为技术有限公司 | A kind of playback method and client of panoramic video |
WO2019080792A1 (en) * | 2017-10-23 | 2019-05-02 | 腾讯科技(深圳)有限公司 | Panoramic video image playing method and device, storage medium and electronic device |
CN110149542A (en) * | 2018-02-13 | 2019-08-20 | 华为技术有限公司 | Transfer control method |
CN110446070A (en) * | 2019-07-16 | 2019-11-12 | 重庆爱奇艺智能科技有限公司 | A kind of method and apparatus of video playing |
2019-11-28: application CN201911191989.8A filed (published as CN110944239A; status: withdrawn)
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112672131A (en) * | 2020-12-07 | 2021-04-16 | 聚好看科技股份有限公司 | Panoramic video image display method and display equipment |
CN112672131B (en) * | 2020-12-07 | 2024-02-06 | 聚好看科技股份有限公司 | Panoramic video image display method and display device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9043800B2 (en) | Video player instance prioritization | |
US8898577B2 (en) | Application sharing with occlusion removal | |
US8477852B2 (en) | Uniform video decoding and display | |
US7969444B1 (en) | Distributed rendering of texture data | |
CN108984137B (en) | Double-screen display method and system and computer readable storage medium | |
CN106131540B (en) | Method and system based on D3D playing panoramic videos | |
EP2245598B1 (en) | Multi-buffer support for off-screen surfaces in a graphics processing system | |
TWI698834B (en) | Methods and devices for graphics processing | |
CN112843676A (en) | Data processing method, device, terminal, server and storage medium | |
CN116821040B (en) | Display acceleration method, device and medium based on GPU direct memory access | |
CN106384330B (en) | Panoramic picture playing method and panoramic picture playing device | |
CN107317960A (en) | Video image acquisition methods and acquisition device | |
CN110944239A (en) | Video playing method and device | |
CN115715464A (en) | Method and apparatus for occlusion handling techniques | |
CN116880937A (en) | Desktop screen capturing data processing method, device, equipment and medium for interactive classroom | |
CN109814703B (en) | Display method, device, equipment and medium | |
CN116348904A (en) | Optimizing GPU kernels with SIMO methods for downscaling with GPU caches | |
US20200380745A1 (en) | Methods and apparatus for viewpoint visibility management | |
CN111091848B (en) | Method and device for predicting head posture | |
US20090231330A1 (en) | Method and system for rendering a three-dimensional scene using a dynamic graphics platform | |
CN114930288B (en) | Method and apparatus for facilitating region processing of images for an under-display device display | |
US11875452B2 (en) | Billboard layers in object-space rendering | |
CN112637681B (en) | Video redirection method and device | |
WO2021073519A1 (en) | Methods and apparatus to facilitate regional processing of images for under-display device displays | |
US20130254704A1 (en) | Multiple Simultaneous Displays on the Same Screen |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | Application publication date: 20200331 ||