CN111741343B - Video processing method and device and electronic equipment - Google Patents

Video processing method and device and electronic equipment

Info

Publication number
CN111741343B
Authority
CN
China
Prior art keywords
video
data
model
processing chip
video data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010553664.6A
Other languages
Chinese (zh)
Other versions
CN111741343A (en)
Inventor
成小全
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Video Technology Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Video Technology Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Video Technology Co Ltd, MIGU Culture Technology Co Ltd
Priority to CN202010553664.6A
Publication of CN111741343A
Application granted
Publication of CN111741343B
Active (current legal status)
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/426 Internal components of the client; Characteristics thereof
    • H04N 21/42653 Internal components of the client; Characteristics thereof for processing graphics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/4402 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81 Monomedia components thereof
    • H04N 21/8146 Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The embodiment of the invention relates to the technical field of image processing, and discloses a video processing method and device and electronic equipment. The video processing method comprises the following steps: processing the received program media data to obtain video compressed data, and sending the video compressed data to a graphics processing chip, so that the graphics processing chip decodes the video compressed data to obtain panoramic video data; and processing the panoramic video data based on a VR model, and sending the processed panoramic video data to the graphics processing chip. In the invention, the video compressed data can be decoded by the graphics processing chip, which makes full use of the hardware decoding capability, lowers the hardware configuration requirement, and facilitates popularization on low-configuration hardware; moreover, the method does not depend on the Android native framework, which simplifies the program playback flow and improves the playback and control speed.

Description

Video processing method and device and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to a video processing method and device and electronic equipment.
Background
In recent years, Virtual Reality (VR) has become very popular, and various VR products such as VR glasses and VR helmets have emerged. The VR algorithms of different VR products may differ slightly, but the technical principles behind them are basically the same: all are based on the Android native framework.
The inventor has found that the prior art has at least the following problems: current VR products have high hardware configuration requirements and cannot be popularized on low-configuration hardware; moreover, they rely too heavily on the Android native framework and its decoder, so the flow is complex, time-consuming, and offers poor real-time performance.
Disclosure of Invention
The embodiment of the invention aims to provide a video processing method, a video processing device and electronic equipment, which can decode video compressed data by means of a graphics processing chip, making full use of the hardware decoding capability, lowering the hardware configuration requirement, and facilitating popularization on low-configuration hardware; the method is suitable for large displays and low-configuration electronic equipment, and enables high-bit-rate VR (Virtual Reality) film sources (such as 4K, 10-bit) to be played on low-configuration electronic equipment. Moreover, the method does not depend on the Android native framework, which simplifies the program playback flow and improves the playback and control speed.
To solve the above technical problem, an embodiment of the present invention provides a video processing method, including: processing the received program media data to obtain video compression data, and sending the video compression data to a graphic processing chip for the graphic processing chip to decode the video compression data to obtain panoramic video data; and processing the panoramic video data based on the VR model, and sending the processed panoramic video data to a graphic processing chip.
The embodiment of the invention also provides a video processing device, which comprises a playing device and a VR toolkit which are connected with each other; the playing device is used for processing the received program media data to obtain video compressed data, and sending the video compressed data to the graphics processing chip so that the graphics processing chip can decode the video compressed data to obtain panoramic video data; the VR toolkit is used for processing the panoramic video data based on the VR model and sending the processed panoramic video data to the graphic processing chip.
The embodiment of the invention also provides electronic equipment which comprises the video processing device.
Compared with the prior art, in the embodiment of the invention, when program media data is received, it is processed to obtain video compressed data and the video compressed data is sent to the graphics processing chip, which decodes it to obtain panoramic video data; the panoramic video data is then processed based on the VR model and rendered onto the VR model, and the VR model with the rendered panoramic video data is sent to the graphics processing chip, so that the graphics processing chip renders the image under the user's view angle on the display screen for the user. In the invention, the video compressed data is decoded by the graphics processing chip, which makes full use of the hardware decoding capability, lowers the hardware configuration requirement, and facilitates popularization on low-configuration hardware; the method is suitable for large displays and low-configuration electronic equipment, and enables high-bit-rate VR film sources (such as 4K, 10-bit) to be played on such equipment. In addition, the method does not depend on the Android native framework, which simplifies the program playback flow and improves the playback and control speed.
In addition, when a view angle change parameter is received, the VR model is updated based on the view angle change parameter. In this embodiment, when the view angle change parameter input by the user is received, the VR model can be adjusted accordingly; that is, the panoramic video data provided to the graphics processor is adjusted according to the view angle set by the user, so that the electronic device can display the view-angle-adjusted video image on the display.
In addition, processing the received program media data to obtain video compressed data includes: processing the received program media data to obtain video compressed data and program video information.
In addition, before processing the panoramic video data based on the VR model, the method further includes: from the program video information, a VR model is created. A specific implementation of creating a VR model is provided in this embodiment.
In addition, before processing the panoramic video data based on the VR model, the method further includes: establishing a binding relationship with the graphic processing chip; and acquiring panoramic video data from the graphic processing chip based on the binding relationship. In this embodiment, the panoramic video data is obtained by establishing a binding relationship with the graphics processing chip.
Drawings
One or more embodiments are illustrated by way of example in the corresponding figures of the accompanying drawings, in which like reference numerals refer to similar elements; unless otherwise specified, the figures are not drawn to scale.
Fig. 1 is a detailed flow of a video processing method according to a first embodiment of the present invention;
fig. 2 is a detailed flowchart of a video processing method according to a second embodiment of the present invention;
fig. 3 is a detailed configuration diagram of a video processing apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic diagram of the flow of playing a panoramic video by the electronic device according to the third embodiment of the present invention;
fig. 5 is a specific configuration diagram of a playback apparatus in a video processing apparatus according to a fifth embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the embodiments of the present invention will be described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the various embodiments in order to provide a better understanding of the present application; however, the technical solutions claimed in the present application can be implemented without these technical details, and with various changes and modifications based on the following embodiments. The division into embodiments below is for convenience of description and should not constitute any limitation on the specific implementation of the present invention; the embodiments may be combined with and referred to each other where there is no contradiction.
The first embodiment of the present invention relates to a video processing method, which is applied to an electronic device, such as a mobile phone, a tablet computer, a television set-top box, a television, and the like. The electronic device includes a video processing device; the video processing device includes a playing device and a VR toolkit (i.e., a VR SDK), and may be video playing software installed in the electronic device. In addition, the electronic device further includes a graphics processing chip, which includes a decoder and a graphics processing unit (GPU), and the playing device is connected to the decoder.
Fig. 1 shows a specific flow of the video processing method according to the present embodiment.
Step 101, processing the received program media data to obtain video compressed data, and sending the video compressed data to a graphics processing chip, so that the graphics processing chip decodes the video compressed data to obtain panoramic video data.
Specifically, when the user opens the video playing software in the electronic equipment, a program list is displayed on the display of the electronic equipment. After receiving the program that the user selects to play from the program list, the playing device in the video playing software sends a playing request for the program to a preset licensing party; the licensing party starts playback, creates a playing program instance, and sends the video address of the program to the playing device. Based on the video address of the program, the playing device accesses the server to obtain the program media data, parses the program media data to obtain program video information and video compressed data (the video compressed data being a video code stream), and then sends the video compressed data to the decoder in the graphics processing chip, which decodes it to obtain panoramic video data. The program video information includes the audio/video format, the audio/video coding format, the resolution, the memory footprint, and the like. The playing device interfaces directly with the decoder in the graphics processing chip, without going through the MediaCodec codec interface of the Android native framework.
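For illustration only, the following minimal C++ sketch mirrors the flow just described: the program media data is parsed into program video information plus a compressed video stream, and the compressed packets are handed straight to the chip's hardware decoder without going through MediaCodec. The IDemuxer and IHardwareDecoder interfaces and all other names are hypothetical stand-ins, not taken from the patent or from any specific chip SDK.

```cpp
// Hypothetical sketch: split program media data into program video information
// and a compressed video stream, then feed the packets directly to the chip's decoder.
#include <cstdint>
#include <string>
#include <vector>

struct ProgramVideoInfo {
    std::string containerFormat;   // e.g. "mp4"
    std::string videoCodec;        // e.g. "hevc"
    int width = 0, height = 0;     // resolution
    int bitDepth = 8;              // 10 for 10-bit film sources
};

struct Packet { std::vector<uint8_t> data; int64_t pts = 0; };

class IDemuxer {                    // assumed parser over the program media data
public:
    virtual ~IDemuxer() = default;
    virtual ProgramVideoInfo videoInfo() const = 0;   // parsed program video information
    virtual bool readPacket(Packet& out) = 0;         // next compressed video packet
};

class IHardwareDecoder {            // assumed wrapper around the graphics chip's decoder
public:
    virtual ~IHardwareDecoder() = default;
    virtual void configure(const ProgramVideoInfo& info) = 0;
    virtual void queuePacket(const Packet& pkt) = 0;  // decoded panoramic frames end up in GPU memory
};

void playProgram(IDemuxer& demuxer, IHardwareDecoder& decoder) {
    decoder.configure(demuxer.videoInfo());           // pass format, resolution, etc.
    Packet pkt;
    while (demuxer.readPacket(pkt)) {                 // the video code stream
        decoder.queuePacket(pkt);                     // sent straight to the chip, no MediaCodec
    }
}
```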
Step 102, processing the panoramic video data based on the VR model, and sending the processed panoramic video data to the graphics processing chip.
Specifically, the VR toolkit may create a VR model corresponding to the program according to the program video information and the Open Graphics Library (OpenGL) configured in the electronic device, where the VR model is, for example, a sphere model, a box model, or the like.
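As a rough illustration of the sphere model mentioned above, the following C++ sketch generates sphere vertices with per-vertex texture coordinates. The equirectangular UV mapping and the vertex layout are assumptions; the patent only states that a sphere or box VR model is created via OpenGL.

```cpp
// Minimal sketch of building a sphere VR model for panoramic playback.
#include <cmath>
#include <vector>

struct Vertex { float x, y, z; float u, v; };

// rings x sectors sphere; u/v map each vertex into the panoramic texture.
std::vector<Vertex> buildSphere(int rings, int sectors, float radius) {
    std::vector<Vertex> verts;
    verts.reserve(static_cast<size_t>(rings + 1) * (sectors + 1));
    const float kPi = 3.14159265358979f;
    for (int r = 0; r <= rings; ++r) {
        float phi = kPi * r / rings;                     // 0 .. pi (top to bottom)
        for (int s = 0; s <= sectors; ++s) {
            float theta = 2.0f * kPi * s / sectors;      // 0 .. 2*pi around the axis
            Vertex v;
            v.x = radius * std::sin(phi) * std::cos(theta);
            v.y = radius * std::cos(phi);
            v.z = radius * std::sin(phi) * std::sin(theta);
            v.u = static_cast<float>(s) / sectors;       // longitude -> texture U
            v.v = static_cast<float>(r) / rings;         // latitude  -> texture V
            verts.push_back(v);
        }
    }
    return verts;   // index buffer generation omitted for brevity
}
```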
After decoding the video compressed data into panoramic video data, the decoder sends the panoramic video data to the graphics processing unit (GPU); the VR toolkit obtains the panoramic video data from the GPU and then, based on the created VR model, calls the OpenGL library to perform texture mapping, re-projection and other processing on the panoramic video data, i.e., each frame of the panoramic video data is converted into RGB format and rendered onto the VR model.
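The per-frame conversion to RGB during texture mapping could, for example, be done in a fragment shader. The sketch below embeds an OpenGL ES shader in a C++ string; the two-plane NV12 layout, the GL_LUMINANCE_ALPHA chroma upload and the BT.601-style coefficients are assumptions, as the patent does not specify the decoder's output pixel format or color matrix.

```cpp
// Assumed shader: sample the decoded Y and CbCr planes and output RGB while
// the panoramic frame is textured onto the VR model.
static const char* kYuvToRgbFragmentShader = R"(
precision mediump float;
varying vec2 vTexCoord;            // UVs coming from the sphere/box VR model
uniform sampler2D uLumaTex;        // Y plane from the hardware decoder
uniform sampler2D uChromaTex;      // interleaved CbCr plane (NV12 assumed, Cb in .r, Cr in .a)
void main() {
    float y  = texture2D(uLumaTex, vTexCoord).r;
    vec2  cc = texture2D(uChromaTex, vTexCoord).ra - vec2(0.5, 0.5);  // (Cb, Cr) offsets
    gl_FragColor = vec4(y + 1.402 * cc.y,                  // R
                        y - 0.344 * cc.x - 0.714 * cc.y,   // G
                        y + 1.772 * cc.x,                  // B
                        1.0);
}
)";
```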
Then, based on the current view angle parameters, the model matrix of the VR model, and the generated view matrix of the user's view angle, the VR model with the rendered panoramic video data is sent to the graphics processing unit (GPU); according to the view matrix, the GPU renders the panoramic video data under the user's view angle on the VR model and outputs the image under the current user view angle to the display of the electronic equipment.
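As an illustrative sketch of this step, the following C++ code combines the model, view, and projection matrices into an MVP matrix and issues the draw call that lets the GPU rasterize the panoramic texture under the current view angle; the Mat4 helper and the single-uniform layout are assumptions rather than details given in the patent.

```cpp
// Sketch: compute MVP on the CPU and let the GPU render the VR model for the user's view angle.
#include <GLES2/gl2.h>
#include <array>

using Mat4 = std::array<float, 16>;  // column-major 4x4 matrix

static Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                r[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
    return r;
}

void drawPanoramaFrame(GLuint program, GLint mvpLocation,
                       const Mat4& projection, const Mat4& view, const Mat4& model,
                       GLsizei indexCount) {
    Mat4 mvp = multiply(projection, multiply(view, model));
    glUseProgram(program);
    glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, mvp.data());
    // The sphere's vertex attributes and index buffer are assumed to be bound already.
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, nullptr);
}
```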
In one example, after parsing the program video information from the program media data, the playing device may determine whether the program is of the VR program type based on the program video information; if the program is determined to be of the VR program type, the VR toolkit is bound to the GPU and a binding relationship is established between them, so that the VR toolkit is directly docked with the GPU and the panoramic video data is fetched from the GPU. The graphics processing chip is generally provided with a chip SDK that interfaces with the VR SDK. In addition, because the VR toolkit is directly connected to the GPU, the panoramic video data can be sent to the GPU without the SurfaceView of the Android native framework.
Compared with the prior art, in this method, when program media data is received, it is processed to obtain video compressed data and the video compressed data is sent to the graphics processing chip, which decodes it to obtain panoramic video data; the panoramic video data is then processed based on the VR model and rendered onto the VR model, and the VR model with the rendered panoramic video data is sent to the graphics processing chip, so that the graphics processing chip renders the image under the user's view angle on the display screen for the user. In the invention, the video compressed data is decoded by the graphics processing chip, which makes full use of the hardware decoding capability, lowers the hardware configuration requirement, and facilitates popularization on low-configuration hardware; the method is suitable for large displays and low-configuration electronic equipment, and enables high-bit-rate VR film sources (such as 4K, 10-bit) to be played on such equipment. Moreover, the method does not depend on the Android native framework, which simplifies the program playback flow and improves the playback and control speed.
A second embodiment of the present invention relates to a video processing method. The main difference between the present embodiment and the first embodiment is that the panoramic video data input to the graphics processor is adjusted according to the view angle set by the user.
Fig. 2 shows a specific flow of the video processing method according to the present embodiment.
Step 201, processing the received program media data to obtain video compressed data, and sending the video compressed data to a graphics processing chip for the graphics processing chip to decode the video compressed data to obtain panoramic video data. Substantially the same as step 101 in the first embodiment, and will not be described herein again.
Step 202, when the view angle change parameter is received, updating the VR model based on the view angle change parameter.
Step 203, processing the panoramic video data based on the VR model, and sending the processed panoramic video data to the graphics processing chip.
Specifically, the user can adjust the steering and the angle as needed; for example, when the electronic device is a television box, the user can adjust the steering and the angle through a remote controller. When the playing device of the electronic device receives view angle change parameters including the steering and the angle, it sends them to the VR toolkit; the VR toolkit adjusts the VR model based on the view angle change parameters and processes the subsequent panoramic video data with the updated VR model, so that the panoramic video data matches the user's adjusted view angle and an updated view matrix is generated. Based on the updated view matrix, the GPU can output the view-angle-adjusted video image to the display.
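A minimal sketch of this update, assuming the steering and angle parameters map to yaw and pitch deltas (a parameterization the patent does not specify), might look as follows; the clamping range is likewise an assumption.

```cpp
// Sketch: apply remote-control steering/angle deltas and rebuild the view matrix.
#include <algorithm>
#include <array>
#include <cmath>

struct ViewAngleChange { float deltaYawDeg; float deltaPitchDeg; };  // "steering and angle"

struct ViewerState {
    float yawDeg = 0.0f;
    float pitchDeg = 0.0f;

    void apply(const ViewAngleChange& change) {
        yawDeg = std::fmod(yawDeg + change.deltaYawDeg, 360.0f);
        pitchDeg = std::clamp(pitchDeg + change.deltaPitchDeg, -89.0f, 89.0f);  // avoid flipping over the poles
    }

    // View matrix for a viewer at the sphere's center: Rx(-pitch) * Ry(-yaw), column-major 4x4.
    std::array<float, 16> viewMatrix() const {
        const float kDegToRad = 3.14159265358979f / 180.0f;
        float cy = std::cos(-yawDeg * kDegToRad),   sy = std::sin(-yawDeg * kDegToRad);
        float cp = std::cos(-pitchDeg * kDegToRad), sp = std::sin(-pitchDeg * kDegToRad);
        return { cy,       sp * sy,  -cp * sy,  0.0f,
                 0.0f,     cp,        sp,       0.0f,
                 sy,      -sp * cy,   cp * cy,  0.0f,
                 0.0f,     0.0f,      0.0f,     1.0f };
    }
};
```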
Compared with the first embodiment, when the view angle change parameter input by the user is received, the VR model can be adjusted accordingly; that is, the panoramic video data provided to the graphics processor is adjusted according to the view angle set by the user, so that the electronic device can display the view-angle-adjusted video image on the display.
A third embodiment of the present invention relates to a video processing apparatus, which is applied to an electronic device, such as a mobile phone, a tablet computer, a television set-top box, a television, and the like, and can display a panoramic video on a display of the electronic device or a display connected to the electronic device.
Referring to fig. 3, the video processing apparatus 1 includes a playing apparatus 11 and a VR toolkit 12 (i.e., a VR SDK) connected to each other; the video processing apparatus may be video playing software installed in the electronic device. In addition, the electronic device further includes a graphics processing chip 3, which includes a decoder 31 and a graphics processing unit (GPU) 32, and the playing apparatus 11 is connected to the decoder 31.
In this embodiment, the playing device 11 is configured to process the received program media data to obtain video compressed data, and send the video compressed data to the graphics processing chip 3, so that the graphics processing chip 3 decodes the video compressed data to obtain panoramic video data.
The VR toolkit 12 is configured to process the panoramic video data based on the VR model, and send the processed panoramic video data to the graphics processing chip 3.
Specifically, the playing device 11 processes the received program media data to obtain video compressed data and sends the video compressed data to the decoder 31 of the graphics processing chip 3; the decoder 31 decodes the video compressed data to obtain panoramic video data and sends the panoramic video data to the GPU 32, and the VR toolkit 12 obtains the panoramic video data from the GPU 32.
The VR toolkit may create, according to the program video information and the Open Graphics Library (OpenGL) configured in the electronic device, a VR model corresponding to the program, where the VR model is, for example, a sphere model, a box model, or the like; it performs texture mapping, re-projection, and other processing on the acquired panoramic video data based on the created VR model, and sends the processed panoramic video data to the GPU 32.
In one example, the playing device 11 is configured to establish a binding relationship between the VR toolkit and the graphics processing chip before the panoramic video data is processed based on the VR model, so that the VR toolkit 12 can obtain, based on the binding relationship, the panoramic video data output by the decoder 31 from the graphics processor 32 of the graphics processing chip 3.
A specific flow of playing back a program by the video processing apparatus 1 of the present embodiment will be described with reference to fig. 4.
When the user opens the video playing software in the electronic equipment, a program list is displayed on the display 5 of the electronic equipment. When the program that the user selects to play from the program list is received, the playing device 11 in the video playing software sends a playing request for the program to a preset licensing party 4; the licensing party 4 starts playback, creates a playing program instance, and sends the video address of the program to the playing device 11. The playing device 11 accesses the server based on the video address of the program to obtain the program media data, then parses the program media data to obtain program video information and video compressed data (the video compressed data being a video code stream), and sends the video compressed data to the decoder 31 in the graphics processing chip 3, which decodes it to obtain panoramic video data. The program video information includes the audio/video format, the audio/video coding format, the resolution, the memory footprint, and the like. The playing device interfaces directly with the decoder in the graphics processing chip, without going through the MediaCodec codec interface of the Android native framework.
After receiving the program video address, the playing device 11 can also initialize the decoder 31 and the graphics processor 32, bind the VR toolkit 12 with the GPU 32 to establish a binding relationship between the two, and then send the program video information to the VR toolkit 12; the VR toolkit 12 can create a VR model corresponding to the program according to the program video information and the Open Graphics Library (OpenGL) configured in the electronic device, where the VR model is, for example, a sphere model, a box model, and the like.
After decoding the video compressed data into panoramic video data, the decoder 31 sends the panoramic video data to the GPU 32; the VR toolkit 12 directly interfaces with the GPU 32 so as to obtain the panoramic video data from it, and then, based on the created VR model, calls the OpenGL library to perform texture mapping, re-projection and other processing on the obtained panoramic video data, i.e., each frame of the panoramic video data is converted into RGB format and rendered onto the VR model.
Then, based on the current view angle parameter, the model matrix of the VR model, and the generated view matrix of the user's view angle, the VR model with the rendered panoramic video data is sent to the GPU 32; according to the view matrix, the GPU 32 renders the panoramic video data under the user's view angle on the VR model and outputs the image under the current user view angle to the display 5 of the electronic equipment.
In this embodiment, the hardware decoding format of the decoder 31 may also be expanded to support decoding of more video formats, so as to improve the user experience.
Compared with the prior art, in this embodiment, when program media data is received, it is processed to obtain video compressed data and the video compressed data is sent to the graphics processing chip, which decodes it to obtain panoramic video data; the panoramic video data is then processed based on the VR model and rendered onto the VR model, and the VR model with the rendered panoramic video data is sent to the graphics processing chip, so that the graphics processing chip renders the image under the user's view angle on the display screen for the user. In the invention, the video compressed data is decoded by the graphics processing chip, which makes full use of the hardware decoding capability, lowers the hardware configuration requirement, and facilitates popularization on low-configuration hardware; the method is suitable for large displays and low-configuration electronic equipment, and enables high-bit-rate VR film sources (such as 4K, 10-bit) to be played on such equipment. In addition, the method does not depend on the Android native framework, which simplifies the program playback flow and improves the playback and control speed.
A fourth embodiment of the present invention relates to a video processing apparatus, and differs from the third embodiment mainly in that the panoramic video data input to the graphics processor is adjusted according to the view angle set by the user.
Referring to fig. 3, the playing device 11 is further configured to send the view angle variation parameter to the VR toolkit 12 when receiving the view angle variation parameter, and the VR toolkit 12 is capable of updating the VR model based on the view angle variation parameter.
Specifically, the user can adjust the steering and the angle as needed; for example, when the electronic device is a television box, the user can adjust them through a remote controller. When the playing device 11 of the electronic device receives view angle change parameters including the steering and the angle, it sends them to the VR toolkit 12; the VR toolkit 12 adjusts the VR model based on the view angle change parameters and processes the subsequent panoramic video data with the updated VR model, so that the panoramic video data matches the user's adjusted view angle and an updated view matrix is generated. Based on the updated view matrix, the GPU 32 can output the view-angle-adjusted video image to the display 5.
Compared with the third embodiment, when the view angle change parameter input by the user is received, the VR model can be adjusted accordingly; that is, the panoramic video data provided to the graphics processor is adjusted according to the view angle set by the user, so that the electronic device can display the view-angle-adjusted video image on the display.
A fifth embodiment of the present invention relates to a video processing apparatus, and is different from the third embodiment mainly in that: a specific structure of the playback apparatus 11 is provided.
Referring to fig. 5, the playing device 11 includes: a main control module 111, a flow control module 112, a decoding control module 113, and a data parsing module 114.
The main control module 111 is configured to respond to the received program video address and obtain the media data from the video address through the stream control module 112. Specifically, the main control module 111 is the control module of the video processing apparatus 1; it can receive an external operation command and control the stream control module 112, the decoding control module 113, and the data parsing module 114 based on the operation command. When receiving a program video address, the main control module 111 sends the program video address to the stream control module 112; the stream control module 112 establishes a session with the server based on the program video address, acquires the media data of the program from the server, and stores the media data in a preset buffer area of the electronic device, so that multiple types of streaming media protocols such as HLS, HTTP, IGMP, and the like can be supported.
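The buffer area between the stream control module (producer) and the decoding control module (consumer) could be modelled, purely as an assumption, by a simple bounded thread-safe queue like the following sketch; the patent only states that media data is written into a preset buffer area and read back from it.

```cpp
// Sketch of a bounded buffer: the stream control module pushes fetched media
// chunks, the decoding control module pops them and forwards them for parsing.
#include <condition_variable>
#include <cstdint>
#include <deque>
#include <mutex>
#include <vector>

class MediaBuffer {
public:
    explicit MediaBuffer(size_t maxChunks) : maxChunks_(maxChunks) {}

    void push(std::vector<uint8_t> chunk) {                // called by the stream control module
        std::unique_lock<std::mutex> lock(mutex_);
        notFull_.wait(lock, [this] { return chunks_.size() < maxChunks_; });
        chunks_.push_back(std::move(chunk));
        notEmpty_.notify_one();
    }

    std::vector<uint8_t> pop() {                           // called by the decoding control module
        std::unique_lock<std::mutex> lock(mutex_);
        notEmpty_.wait(lock, [this] { return !chunks_.empty(); });
        std::vector<uint8_t> chunk = std::move(chunks_.front());
        chunks_.pop_front();
        notFull_.notify_one();
        return chunk;
    }

private:
    size_t maxChunks_;
    std::deque<std::vector<uint8_t>> chunks_;
    std::mutex mutex_;
    std::condition_variable notEmpty_, notFull_;
};
```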
The decoding control module 113 is configured to send the acquired media data to the data parsing module 114, i.e., the decoding control module 113 reads the media data from the buffer and sends it to the data parsing module 114. In addition, the decoding control module 113 can also control the decoder 31, for example setting up the decoder 31, stopping playback, and storing information about the decoder 31, such as the buffer status and the occupancy of the current playback.
The data parsing module 114 is configured to process the received program media data to obtain video compressed data, send the video compressed data to the decoder 31 of the graphics processing chip, and parse the program video information from the media data, where the program video information includes the audio/video format, the audio/video coding format, the resolution, the memory footprint, and the like.
In this embodiment, the data parsing module 114 has a unified interface adapted to different graphics processing chips 3, through which it can start the decoder 31 in the graphics processing chip 3, set the audio/video format, write audio/video data, start/pause/stop playback, obtain decoder information, and perform other conventional operations, so as to adapt to the graphics processing chip 3.
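Purely for illustration, such a unified interface might be sketched as the abstract class below, covering the operations listed above; all type and method names are hypothetical and would be implemented by a per-chip adapter rather than being part of the patent or of any vendor SDK.

```cpp
// Assumed unified interface the data parsing module could expose toward
// different graphics processing chips; each chip SDK sits behind an adapter.
#include <cstdint>
#include <string>
#include <vector>

struct AvFormat { std::string container; std::string videoCodec; std::string audioCodec; };

struct DecoderInfo { size_t bufferedBytes = 0; double bufferOccupancy = 0.0; };

class IChipDecoderAdapter {
public:
    virtual ~IChipDecoderAdapter() = default;
    virtual bool start(const AvFormat& format) = 0;                // start the chip's decoder, set the A/V format
    virtual bool writeData(const std::vector<uint8_t>& data) = 0;  // write audio/video data
    virtual void play() = 0;                                       // start playback
    virtual void pause() = 0;                                      // pause playback
    virtual void stop() = 0;                                       // stop playback
    virtual DecoderInfo queryInfo() const = 0;                     // buffer status, occupancy, etc.
};
```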
Compared with the third embodiment, this embodiment provides a specific structure of the playing device.
A sixth embodiment of the present invention relates to an electronic device, such as a mobile phone, a tablet computer, a television set-top box, a television, etc.; by applying the video processing method of the present invention, a panoramic video can be displayed on the display of the electronic device or on a display connected to the electronic device.
In this embodiment, an electronic device includes the video processing apparatus of any one of the third to fifth embodiments.
The electronic device in this embodiment may include at least one processor; and a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the video processing method as in the first or second embodiments.
Where the memory and processor are connected by a bus, the bus may comprise any number of interconnected buses and bridges, the bus connecting together various circuits of the memory and the processor or processors. The bus may also connect various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor is transmitted over a wireless medium through an antenna, which further receives the data and transmits the data to the processor.
The processor is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And the memory may be used to store data used by the processor in performing operations.
Compared with the prior art, in this embodiment, when program media data is received, it is processed to obtain video compressed data and the video compressed data is sent to the graphics processing chip, which decodes it to obtain panoramic video data; the panoramic video data is then processed based on the VR model and rendered onto the VR model, and the VR model with the rendered panoramic video data is sent to the graphics processing chip, so that the graphics processing chip renders the image under the user's view angle on the display screen for the user. In the invention, the video compressed data is decoded by the graphics processing chip, which makes full use of the hardware decoding capability, lowers the hardware configuration requirement, and facilitates popularization on low-configuration hardware; the method is suitable for large displays and low-configuration electronic equipment, and enables high-bit-rate VR film sources (such as 4K, 10-bit) to be played on such equipment. In addition, the method does not depend on the Android native framework, which simplifies the program playback flow and improves the playback and control speed.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples of practicing the invention, and that various changes in form and detail may be made therein without departing from the spirit and scope of the invention in practice.

Claims (6)

1. The video processing method is applied to electronic equipment, the electronic equipment comprises a graphic processing chip, a playing device and a VR toolkit, the graphic processing chip comprises a decoder, and the playing device is connected with the decoder;
the method comprises the following steps:
processing received program media data to obtain program video information and video compression data, binding a VR toolkit with a graphic processor and sending the video compression data to a graphic processing chip when the program is judged to be of a VR program type based on the program video information, so that the graphic processing chip decodes the video compression data to obtain panoramic video data;
acquiring the panoramic video data from the graphic processing chip, processing the panoramic video data based on a VR model, and rendering the panoramic video data onto the VR model;
based on the current view angle parameter, a model matrix of the VR model and a view matrix of a generated user view angle, sending the VR model with rendered panoramic video data to the graphics processing chip, so that the graphics processing chip renders the acquired panoramic video data of the current view angle on the VR model according to the view matrix and outputs a rendered video image to a display of the electronic device; wherein the VR model is pre-created according to program video information and an open graphics library configured in the electronic device;
when receiving a visual angle change parameter for adjusting steering and angle, updating the VR model based on the visual angle change parameter, processing subsequent panoramic video data by using the updated VR model, matching the panoramic video data with the adjusted visual angle, generating an updated view matrix, and sending the updated view matrix to the graphics processing chip, so that the graphics processing chip outputs a video image with the adjusted visual angle to the display based on the updated view matrix.
2. The video processing method of claim 1, prior to the processing the panoramic video data based on the VR model, further comprising:
establishing a binding relationship with the graphic processing chip;
and acquiring the panoramic video data from the graphic processing chip based on the binding relationship.
3. A video processing apparatus comprising: the playing device and the VR toolkit are connected with each other;
the playing device is used for processing the received program media data to obtain program video information and video compression data, and sending the video compression data to the graphics processing chip so that the graphics processing chip can decode the video compression data to obtain panoramic video data;
the VR toolkit is used for acquiring the panoramic video data from the graphic processing chip, processing the panoramic video data based on a VR model and rendering the panoramic video data to the VR model; based on the current view angle parameter, a model matrix of the VR model and a view matrix for generating a user view angle, sending the VR model with rendered panoramic video data to the graphics processing chip, so that the graphics processing chip renders the obtained panoramic video data of the current view angle on the VR model according to the view matrix and outputs the rendered video image to a display of the electronic equipment; wherein the VR model is pre-created according to program video information and open graphics configured in the electronic device; when receiving visual angle change parameters for adjusting steering and angle, updating the VR model based on the visual angle change parameters, processing subsequent panoramic video data by using the updated VR model, matching the panoramic video data with the adjusted visual angle, generating an updated view matrix, and sending the updated view matrix to the graphic processing chip, so that the graphic processing chip outputs a video image with the adjusted visual angle to the display based on the updated view matrix.
4. The video processing apparatus according to claim 3, wherein the playing means is configured to process the received program media data to obtain video compression data and program video information.
5. The video processing apparatus according to claim 4, wherein the playing apparatus is configured to establish a binding relationship between the VR toolkit and the graphics processing chip before the VR-based model processes the panoramic video data;
the VR toolkit is used for acquiring the panoramic video data from the graphic processing chip based on the binding relationship.
6. An electronic device, comprising: the video processing device of any of claims 3 to 5.
CN202010553664.6A 2020-06-17 2020-06-17 Video processing method and device and electronic equipment Active CN111741343B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010553664.6A CN111741343B (en) 2020-06-17 2020-06-17 Video processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010553664.6A CN111741343B (en) 2020-06-17 2020-06-17 Video processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111741343A CN111741343A (en) 2020-10-02
CN111741343B true CN111741343B (en) 2022-11-15

Family

ID=72649535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010553664.6A Active CN111741343B (en) 2020-06-17 2020-06-17 Video processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111741343B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112804514A (en) * 2020-12-31 2021-05-14 百视通网络电视技术发展有限责任公司 VR panoramic video display interaction method, medium and equipment
CN113473104A (en) * 2021-07-12 2021-10-01 广州浩传网络科技有限公司 Video playing method, player and playing device based on naked eye VR

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103841389A (en) * 2014-04-02 2014-06-04 北京奇艺世纪科技有限公司 Video playing method and player
CN108012178A (en) * 2016-10-27 2018-05-08 三星电子株式会社 The method of image display device and display image
CN108289228A (en) * 2017-01-09 2018-07-17 阿里巴巴集团控股有限公司 A kind of panoramic video code-transferring method, device and equipment
CN110572712A (en) * 2018-06-05 2019-12-13 杭州海康威视数字技术股份有限公司 decoding method and device
CN110930489A (en) * 2018-08-29 2020-03-27 英特尔公司 Real-time system and method for rendering stereoscopic panoramic images

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9232177B2 (en) * 2013-07-12 2016-01-05 Intel Corporation Video chat data processing
US20170186243A1 (en) * 2015-12-28 2017-06-29 Le Holdings (Beijing) Co., Ltd. Video Image Processing Method and Electronic Device Based on the Virtual Reality


Also Published As

Publication number Publication date
CN111741343A (en) 2020-10-02

Similar Documents

Publication Publication Date Title
CN107018370B (en) Display method and system for video wall
US7667707B1 (en) Computer system for supporting multiple remote displays
US20140074911A1 (en) Method and apparatus for managing multi-session
WO2019134235A1 (en) Live broadcast interaction method and apparatus, and terminal device and storage medium
US9420324B2 (en) Content isolation and processing for inline video playback
WO2017107911A1 (en) Method and device for playing video with cloud video platform
CN111741343B (en) Video processing method and device and electronic equipment
KR102646030B1 (en) Image providing apparatus, controlling method thereof and image providing system
US20140012898A1 (en) Web browser proxy-client video system and method
WO2017080175A1 (en) Multi-camera used video player, playing system and playing method
US20120069218A1 (en) Virtual video capture device
JP2010286811A (en) Assembling display equipment, and methdo and system for control of screen thereof
CN111510773A (en) Resolution adjustment method, display screen, computer storage medium and equipment
CN102664939A (en) Method and device for mobile terminal of screen mirror image
CN111464828A (en) Virtual special effect display method, device, terminal and storage medium
WO2010114512A1 (en) System and method of transmitting display data to a remote display
CN113596571B (en) Screen sharing method, device, system, storage medium and computer equipment
US9335964B2 (en) Graphics server for remotely rendering a composite image and method of use thereof
CN113475091A (en) Display apparatus and image display method thereof
JP2009145727A (en) Image display apparatus
JP2023093406A (en) Image processing method and apparatus for virtual reality device, and virtual reality device
EP3582504B1 (en) Image processing method, device, and terminal device
US8982128B2 (en) Method of providing image and display apparatus applying the same
CN111385590A (en) Live broadcast data processing method and device and terminal
JP6396342B2 (en) Wireless docking system for audio-video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant