CN116700943A - Video playing system and method and electronic equipment - Google Patents

Video playing system and method and electronic equipment

Info

Publication number
CN116700943A
CN116700943A (application CN202210205973.3A)
Authority
CN
China
Prior art keywords
video
decoded data
data
decoding
video playing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210205973.3A
Other languages
Chinese (zh)
Inventor
戴真
朱志军
夏青
段豪杰
李奇
汪明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202210205973.3A priority Critical patent/CN116700943A/en
Publication of CN116700943A publication Critical patent/CN116700943A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/544Buffers; Shared memory; Pipes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application provides a video playing system, a video playing method, and an electronic device. The system includes a video decoding unit, a shared memory unit, and a video playing unit. The video decoding unit receives a video stream sent by a video playing application and decodes it to obtain decoded data; determines decoded data indication information, which indicates the address at which the decoded data is stored in the shared memory unit; and transmits the decoded data indication information to the video playing application. The video playing unit receives the decoded data indication information and reads the decoded data from the shared memory unit according to it; it then draws video playing data based on the decoded data and sends the video playing data to the video playing application for playing. Transmitting the decoded data indication information among the video decoding unit, the video playing application, and the video playing unit achieves the same effect as copying the decoded data without an actual copy, which reduces the processor consumption caused by copying the decoded data and improves the playing performance of the video playing application.

Description

Video playing system and method and electronic equipment
Technical Field
The present application relates to the field of video playback, and in particular, to a video playback system, a video playback method, and an electronic device.
Background
A player generally refers to an application on an electronic device that plays local video, audio, and other media files, and is one of the main applications on personal computers, mobile phones, televisions, and other electronic devices. Player performance is mainly characterized by the number of supported audio and video protocols, the maximum supported playback resolution, the maximum supported playback frame rate, the smoothness of actual playback, the occupancy rate of the central processing unit (central processing unit, CPU) during use, and the like. In general, the electronic device uses the CPU to software-decode and play audio and video data; this approach has the disadvantages of supporting few video protocols, occupying too much of the CPU, and the like.
To solve this problem, electronic devices generally use a video hard decoder built into the system on chip (System On Chip, SOC) to improve the decoding performance of the player. This can improve playing performance to a certain extent and reduce the CPU occupancy rate. However, limited by the current mainstream player frameworks, a large number of copies of video data still occur inside the player, so the CPU occupancy rate remains high, which affects other application software and the overall user experience of the electronic device. The copying of video data inside the player therefore becomes a performance bottleneck for the player, which is increasingly evident when playing high definition video resources at 4K resolution, 8K resolution, and the like.
The large number of data copies inside the player mainly refers to copies of the decoded pixel data in YUV (luminance-chrominance) format, a pixel format in which the luminance parameter and the chrominance parameters are represented separately. Decoded video consists of a sequence of single-frame images, which are typically in YUV format. The larger the resolution of a video frame, the more pixels its YUV data contains and the more storage the data requires.
A typical player framework with a video hard decoder is shown in fig. 1. The player runs in user mode, while the demultiplexer (demux), the video decoder, and the graphics processor (graphics processing unit, GPU) run in kernel mode. The general workflow of the player is as follows: the player reads an original video stream (video stream) file in user mode and sends it to the demultiplexer, which demultiplexes it into a basic audio stream and a basic video stream; the basic video stream is sent to the video decoder, which decodes it into video YUV data that the GPU can draw; the player then copies the YUV data into user mode and finally copies it again to the GPU for drawing and display. In this flow there are two copies of the YUV data: from the video decoder to the player, and from the player to the GPU. Since the YUV data itself is large, a large amount of CPU resources is occupied during copying. In view of this, a new video playing approach needs to be proposed to reduce the CPU occupancy rate and improve the performance of the media player.
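By way of illustration only, the following sketch shows this conventional two-copy path from user mode; the device paths /dev/vdec and /dev/gpu_draw are hypothetical placeholders standing in for whatever interfaces the decoder driver and the GPU driver actually expose.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

/* Illustrative only: /dev/vdec and /dev/gpu_draw are hypothetical device
 * nodes standing in for the real decoder and GPU interfaces. */
int play_one_frame(size_t yuv_size)
{
    uint8_t *yuv = malloc(yuv_size);          /* user-mode buffer owned by the player */
    if (yuv == NULL)
        return -1;

    int dec_fd = open("/dev/vdec", O_RDONLY); /* video decoder output */
    int gpu_fd = open("/dev/gpu_draw", O_WRONLY);

    /* Copy 1: kernel (decoder) -> user (player). */
    ssize_t n = read(dec_fd, yuv, yuv_size);

    /* Copy 2: user (player) -> kernel (GPU) for drawing. */
    if (n > 0)
        write(gpu_fd, yuv, (size_t)n);

    close(dec_fd);
    close(gpu_fd);
    free(yuv);
    return n > 0 ? 0 : -1;
}
```

Both copies touch every byte of the YUV frame, which is why the CPU cost grows directly with the video resolution.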
Disclosure of Invention
The application provides a video playing system, a video playing method and electronic equipment, which are used for reducing the CPU occupancy rate and improving the performance of a media player.
In a first aspect, the present application provides a video playing system, comprising: a video decoding unit, a shared memory unit and a video playing unit; the shared memory space in the shared memory unit is shared by the video decoding unit and the video playing unit. The video decoding unit is used for receiving a video stream sent by a video playing application and decoding the video stream to obtain decoded data; storing the decoded data in the shared memory unit; determining decoded data indication information, wherein the decoded data indication information is used for indicating the address at which the decoded data is stored in the shared memory unit; and transmitting the decoded data indication information to the video playing application. The video playing unit is used for receiving the decoded data indication information sent by the video playing application and reading the decoded data from the shared memory unit according to the decoded data indication information; and drawing video playing data based on the decoded data, and sending the video playing data to the video playing application for playing.
In the video playing system provided by the application, the video decoding unit stores the decoded data obtained by decoding the video stream in the shared memory and obtains the decoded data indication information, so that transmitting the decoded data indication information among the video decoding unit, the video playing application and the video playing unit achieves the same effect as copying the decoded data, without an actual copy. The video playing unit directly uses the decoded data output by the video decoding unit, eliminating the multiple copies of the decoded data, which reduces the central processing unit (CPU) consumption caused by copying the decoded data and improves the performance of the media player.
As a possible implementation manner, when storing the decoded data in the shared memory unit and determining the decoded data indication information, the video decoding unit is configured to send a memory allocation request to the shared memory unit, where the memory allocation request carries the memory capacity required by the decoded data. The shared memory unit is configured to, in response to the memory allocation request, allocate shared memory for the decoded data according to the required memory capacity and store the decoded data sent by the video decoding unit in the allocated shared memory; establish a mapping relation between the decoded data and the address of the allocated shared memory, generate the decoded data indication information according to the mapping relation, and send the decoded data indication information to the video decoding unit. The video decoding unit is configured to receive the decoded data indication information sent by the shared memory unit. When the shared memory unit stores the decoded data, the association relationship between the decoded data and the shared memory unit can be established in advance; that is, the decoded data sent by the video decoding unit may be mapped to any block of buffer space within the shared memory unit, as illustrated by the sketch below.
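By way of example only, the request and the indication information exchanged in this implementation might be represented as follows; the structure and field names are assumptions made for illustration and do not limit the application.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical message sent by the video decoding unit to the shared
 * memory unit: it only carries the capacity the decoded data needs. */
struct mem_alloc_request {
    size_t required_bytes;     /* memory capacity required by the decoded data */
};

/* Hypothetical decoded data indication information returned to the
 * decoding unit and forwarded through the video playing application.
 * Only this small structure is passed between units; the YUV payload
 * itself stays in the shared memory. */
struct decoded_data_indication {
    uint32_t block;            /* shared memory block holding the data  */
    uint32_t offset;           /* offset of the data within that block  */
    size_t   length;           /* size of the decoded data in bytes     */
};
```

Passing a structure of a few dozen bytes instead of a multi-megabyte YUV frame is what removes the copy cost from the CPU.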
As a possible implementation, the shared memory unit is further configured to delete the stored decoded data after the video playing application has played the video playing data corresponding to the decoded data. In this way, memory space is saved and the video playing efficiency is further improved.
As a possible implementation, the data type of the decoded data may be YUV data.
As a possible implementation manner, when the video decoding unit performs decoding processing on the video stream to obtain decoded data, the video decoding unit is specifically configured to: demultiplex the video stream to obtain a basic video stream (video elementary stream); and decode the basic video stream to obtain the decoded data. The video decoding unit may include a demultiplexer, and the video stream is demultiplexed by the demultiplexer to obtain the basic video stream.
As a possible implementation, the video decoding unit is further configured to: receive a video playing code rate indication sent by the video playing application and decode the video stream accordingly, wherein the video playing code rate indication comprises a video playing code rate indicated by a user. Decoding the video stream according to the code rate indication yields decoded data corresponding to that code rate, so the frame rate of the output video data can be adjusted in real time.
As one possible implementation, the video playing unit includes a video processing module and a video output module. The video processing module is used for receiving the decoded data indication information sent by the video playing application; reading the decoded data from the shared memory unit according to the decoded data indication information; and preprocessing the decoded data to obtain an image with at least one resolution. The video output module is used for receiving the decoded data indication information sent by the video playing application and reading the decoded data from the shared memory unit according to the decoded data indication information; and drawing video playing data according to the decoded data and the at least one resolution image, and sending the video playing data to the video playing application for playing. For cost reasons, some SOCs are not equipped with a graphics processor. A graphics processor is a complex arithmetic unit containing a large number of logic arrays; it converts the required display information into drive and line scan signals for the screen so that the screen displays correctly, and performs the complex mathematical and geometric calculations required for graphics rendering. The video processing module, by contrast, is a simple hardware logic unit: compared with the GPU it contains fewer logic arrays, occupies less area, and consumes less power, and it is generally used to scale images to a size matching the display device.
The video processing module supports unified preprocessing of an input image, such as denoising and de-interlacing, then performs scaling, sharpening and the like on each channel separately, and finally outputs images with different resolutions. In the embodiment of the application, the smaller and simpler video processing module and video output module replace the GPU to perform part of the format conversion and scaling operations, so the power consumption is lower and the user experience is improved.
In a second aspect, the present application provides a video playing method, which can be performed by a video decoding unit in the video playing system, the method comprising: receiving a video stream sent by a video playing application, and decoding the video stream to obtain decoded data; storing the decoded data in the shared memory, and determining decoded data indication information, wherein the decoded data indication information is used for indicating an address of the decoded data stored in the shared memory; and transmitting the decoded data indication information to a video playing application.
In a third aspect, the present application provides a video playing method, which can be performed by a video playing unit in the video playing system, the method comprising: receiving decoding data indication information sent by a video playing application, and reading decoding data from a shared memory according to the decoding data indication information; and drawing video playing data based on the decoding data, and sending the video playing data to a video playing application for playing.
In a fourth aspect, the present application provides a video playback device comprising means for performing the method of the second aspect described above, and means for performing the method of the third aspect described above.
In a fifth aspect, the present application provides a computer readable storage medium storing computer program instructions/code which, when executed by an electronic device, cause the electronic device to carry out the method of the above second aspect or the above third aspect.
The technical effects that can be achieved by any one of the second to fifth aspects described above may be understood with reference to the effects achievable by each of the possible designs in the first aspect described above, and these and other aspects of the present application will be more clearly understood from the following description of embodiments.
Drawings
FIG. 1 is a schematic diagram of a conventional video playback process;
FIG. 2 is a schematic diagram of a video playback system;
FIG. 3 is a schematic diagram of a video playback system;
fig. 4 is a schematic structural diagram of a video playing device.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings. However, the exemplary embodiments can be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar structures, and thus a repetitive description thereof will be omitted. The words expressing the positions and directions described in the present application are described by taking the drawings as an example, but can be changed according to the needs, and all the changes are included in the protection scope of the present application. The drawings of the present application are merely schematic representations of relative positional relationships and are not intended to represent true proportions.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
The electronic device usually uses the CPU to decode and play audio and video data; this approach supports few video protocols and occupies too much of the CPU. To solve this problem, the SOC on the electronic device may improve the decoding performance of the player through a built-in video hard decoder. However, limited by the current mainstream player frameworks, a large number of copies of video data still occur inside the player, so the CPU occupancy rate remains high, affecting other application software and the overall user experience of the electronic device. Thus, the copying of video data inside the player degrades the playing performance of the player, and this effect is especially pronounced when playing high definition, high resolution audio and video.
With continued reference to fig. 1, as a possible implementation manner, the YUV data decoded by the video decoder in kernel space is transferred to the player in user space and then transferred by the player to the image processor in kernel space. When the video decoder decodes, the storage space occupied by the YUV data in each transfer can be reduced by compressing the YUV data, thereby reducing the CPU performance consumption caused by the two copies and improving player performance. However, in this basic flow there are still two copies of YUV data between user space and kernel space; even if the YUV data is compressed, the compressed data still grows with the video resolution, the player's CPU consumption remains very high, and it is difficult to meet users' growing demand for higher video resolutions.
It should be understood that, in the embodiment of the present application, the video stream sent by the video playing application may include not only the video file, but also: a combination of files such as audio files, image files, animation files, etc. The types of video played by a video playback application may be broadly referred to as: video, image, audio, animation, and so forth. In the following, a video file is taken as an example to describe the video playing system of the present application, and when the type of the video file is an audio file or an image file, the description related to playing the video can be referred to, and will not be repeated here.
The running medium of the video playing application in the embodiment of the application can be an electronic device, such as a portable terminal. Portable terminals include, for example, a mobile phone, a tablet computer, a notebook computer, and the like. In this case, the electronic device may play the corresponding video in response to a request from the video playing application. In other embodiments, the video playback device may also be a non-portable terminal, such as a smart screen, desktop computer, television, or in-car playback device. For example, in the case where the video playing device is a television, the corresponding video may be played in response to the user selecting a certain channel.
The device for running the video playing system in the embodiment of the application can be an electronic device, such as a portable terminal. For example, the portable terminal may be a mobile phone, a tablet computer, a notebook computer, or the like. Alternatively, the device for running the video playing system in the embodiment of the present application may be a non-portable terminal, such as a smart screen, a desktop computer, or a television, which is not limited thereto. When the device running the video playing system is a portable terminal or a non-portable terminal, the device running the video playing system and the running medium of the video playing application may be the same device or different devices, which is not limited in the present application. Or, the running medium of the video playing application may also be a video server, etc., and the video server in the embodiment of the present application may be a cloud server or a local server, which is not limited thereto. The video playing system of the embodiment of the application can be applied to video-on-demand scenes and live scenes, and is not particularly limited.
Of course, in other embodiments of the application, the video file may be generated by other devices. For example, a video shooting application on an electronic device running a video playing application responds to a shooting request of a user to shoot, and after shooting video is completed, the shot video file and a video playing request corresponding to the video file are sent to a video playing system.
In the following, some terms involved in the embodiments of the present application are explained for easy understanding by those skilled in the art.
(1) YUV data: YUV is a color coding method, referring to a pixel coding format in which the luminance parameter (Y: luminance or luma) and the chrominance parameters (UV: chrominance or chroma) are expressed separately. Because the amount of YUV data in one frame is large, in media processing YUV is typically encoded into a basic video stream (video elementary stream) and then packaged together with a basic audio stream (audio elementary stream) into an encapsulated stream for storage and transmission. The size of YUV data is related to the resolution of the video: the larger the resolution of a video frame, the more pixels the YUV data contains and the more storage it requires. For example, in YUV color coding the color of a pixel may be denoted (Y, U, V), where "Y" represents luminance and "U" and "V" represent the two chrominance components. The sketch below illustrates how the size of one YUV frame grows with resolution.
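By way of example only, and assuming the common YUV 4:2:0 sampling (one pair of chroma samples per 2×2 block of luma samples), one 8-bit frame occupies width × height × 3/2 bytes:

```c
#include <stddef.h>
#include <stdio.h>

/* Size in bytes of one 8-bit YUV 4:2:0 frame: Y plane plus two
 * quarter-size chroma planes, i.e. width * height * 3 / 2. */
static size_t yuv420_frame_bytes(size_t width, size_t height)
{
    return width * height * 3 / 2;
}

int main(void)
{
    /* A 3840x2160 (4K) frame needs about 12.4 MB, roughly four times
     * the ~3.1 MB of a 1920x1080 frame. */
    printf("1080p: %zu bytes\n", yuv420_frame_bytes(1920, 1080)); /* 3110400  */
    printf("4K:    %zu bytes\n", yuv420_frame_bytes(3840, 2160)); /* 12441600 */
    return 0;
}
```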
(2) Video frame rate: the video stream, after decoding, has a size related to the frame rate (frame rate) of the video, which is a measure for measuring the number of display frames, in addition to single-frame YUV data. The unit is the display frame number per second (frames per second, FPS) or hertz (Hz), the larger the video frame rate of the video with the same resolution, the larger the YUV total data amount of the video stream after decoding.
(3) Shared memory: refers to a large-capacity memory that can be accessed by different CPUs or different processing units in a computer system.
(4) File descriptor (file descriptor): also known as a file handle, the kernel uses a file descriptor to access a file. The file descriptor is a non-negative integer. When an existing file or a newly created file is opened, the kernel returns a file descriptor, and the file descriptor is also required to be used for specifying the file to be read and written.
(5) Multiplexing (mux), demultiplexing (demux): in the field of media playback, multiplexing, also called encapsulation, is used to mix multiple streams (video, audio, subtitles, etc.) into one output according to some container rule. De-multiplexing, also called de-encapsulation, is the inverse of multiplexing to parse and separate multiple streams (video, audio, subtitles, etc.) from one input.
In order to reduce the CPU occupancy rate and improve the performance of the player, the application provides a video playing system in which the video decoding unit stores the decoded data obtained by decoding the video stream in the shared memory and obtains decoded data indication information, so that transmitting the decoded data indication information among the video decoding unit, the video playing application and the video playing unit achieves the same effect as copying the decoded data, without an actual copy. The video playing unit directly uses the decoded data output by the video decoding unit, eliminating multiple copies of the decoded data, which reduces the CPU consumption caused by copying the decoded data and improves the performance of the media player.
Referring to fig. 2, a video playing system 200 according to an embodiment of the present application is shown, where the video playing system 200 includes: a video decoding unit 201, a shared memory unit 202, and a video playing unit 203; the shared memory space in the shared memory unit 202 is shared by the video decoding unit 201 and the video playback unit 203.
It should be noted that, in the embodiment of the present application, the shared memory unit 202 may be an external storage device, or a storage device preset on the SOC, for example: dynamic random access memory (dynamic random access memory, DRAM), flash memory devices, universal flash memory (universal flash storage, UFS), and the like. The storage space of the storage device may store the decoded data in the embodiment of the present application partially or completely. The storage space in the shared memory unit 202 may be dynamically adjusted according to the size relationship between the storage space and the data to be stored.
The video decoding unit 201 is configured to receive a video stream sent by a video playing application, and decode the video stream to obtain decoded data; and storing the decoded data in the shared memory unit 202; determining decoding data indication information, wherein the decoding data indication information is used for indicating the address of the decoding data stored in the shared memory unit; and sending the decoded data indication information to the video playing application.
The current layered architecture divides the software system into several layers, each with a distinct role and division of labour. The layers communicate with each other through software interfaces. In some embodiments, taking the Android system as an example, the Android system is divided into four layers from top to bottom: the application layer, the application framework layer, the Android runtime (Android runtime) and system libraries, and the kernel layer. The application layer may comprise a series of applications, and the video playing system runs in the kernel layer. Compared with the repeated copying of the decoded data in the prior art, the embodiment of the application copies only the decoded data indication information, thereby reducing CPU consumption. The video playing application can be a system-native video playing application or a third party application, where a third party application can be understood as an application program downloaded by a user from an application market or a network according to the user's own requirements.
Optionally, the method of decoding the video stream to obtain the decoded data may, but is not limited to, parsing the video stream by soft or hard decoding, thereby obtaining the decoded data. The format of the video stream may be avi, flv, mkv, mp4, and when the format of the video stream is the above format, the video stream needs to be demultiplexed, so as to obtain the basic video stream and the basic audio stream by decomposition. When the format of the video stream sent by the video playing application is the basic video stream, the basic video stream can be directly decoded to obtain decoded data. It should be noted that: the video stream may have only audio packets, only video packets, or both audio and video packets.
The application can decode the video stream by hard decoding or soft decoding. Hard decoding means using a video decoder instead of the CPU. Soft decoding means software decoding, that is, the CPU decodes by running decoding software. The processor performing soft decoding may include one or more processing units, such as: an application processor (application processor, AP), a modem processor, an image signal processor (image signal processor, ISP), a controller, a digital signal processor (digital signal processor, DSP), an application specific integrated circuit (application specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA), or the like. The present application does not limit the decoding method employed by the video decoding unit 201.
The storage space of the shared memory unit 202 may be divided into a plurality of groups, each group includes a plurality of data storage entries, each data storage entry includes an identification bit, an indication bit, and a data storage bit, where the identification bit is used to identify data stored in the data storage bit, the data storage bit is used to store the decoded data, and the indication bit corresponds to the decoded data stored in the data storage bit.
The identification bit may be a tag bit in a storage address of the decoded data in the shared memory unit 202, or may be a partial field in a tag in a storage address of the decoded data in the shared memory unit 202, or may be selected from a tag bit and an index bit in a storage address of the decoded data in the shared memory unit 202 as the identification bit.
In the embodiment of the present application, when the shared memory unit 202 stores the decoded data, the association relationship between the decoded data and the shared memory unit 202 may be established in advance. That is, the decoded data transmitted by the video decoding unit 201 may be mapped to any block of buffer space within the shared memory unit 202. When the shared memory unit 202 stores the decoded data, in addition to the decoded data itself, the association relationship between the storage address of the decoded data and the corresponding buffer space of the shared memory unit 202 may additionally be stored.
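By way of example only, one possible layout of the data storage entries described above is sketched below; the field names, widths, entry size, and group size are assumptions made for illustration and do not limit the application.

```c
#include <stdbool.h>
#include <stdint.h>

#define ENTRY_DATA_BYTES 4096   /* assumed size of one data storage field */

/* One data storage entry in a group of the shared memory unit. */
struct data_storage_entry {
    uint32_t identification;               /* identification bit: identifies the stored data  */
    uint32_t indication;                   /* indication bit: corresponds to the decoded data */
    uint8_t  data[ENTRY_DATA_BYTES];       /* data storage bit: holds (part of) the YUV data  */
    bool     in_use;                       /* whether this entry currently maps decoded data  */
};

/* A group holds several entries; the shared memory unit holds several groups. */
struct entry_group {
    struct data_storage_entry entries[8];  /* assumed group size */
};
```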
The video playing unit 203 is configured to receive decoding data indication information sent by the video playing application, and read the decoding data from the shared memory unit 202 according to the decoding data indication information; and drawing video playing data based on the decoding data, and sending the video playing data to the video playing application for playing.
The video playing unit 203 may use the decoded data indication information to read the decoded data from the shared memory unit 202. The video playback unit 203 may use the decoded data a plurality of times when performing video drawing. Whether the decoded data is used multiple times is related to the format of the decoded data or the output video format. For example, some of the decoded data are data that any one of the stream processors in the video playing unit 203 needs to use multiple times, and since the stream processors deploy a plurality of thread blocks to perform corresponding video drawing actions, each thread block needs to use data when performing video drawing actions, the data that any one of the stream processors needs to use multiple times is the data that the plurality of thread blocks in any one of the stream processors needs to use; while some decoded data are data associated between stream processors, i.e. data that need to be used by different stream processors.
Since the video playing application generally sends the video stream to the video decoding unit 201 in units of frames, each frame of video data in the video stream may be split into several media packets (decoded data). After receiving all media packets of one frame of video data, the video playing unit 203 may perform framing processing on them to generate one video frame, until all media packets corresponding to the plurality of video frames requested for playing have been framed into video frames. In addition, a corresponding frame sequence number can be generated according to the frame sequence number mark carried in each media packet, and video drawing is performed in order of increasing frame sequence number. For example, assume that 4 image frames are displayed continuously, the frame interval is 1 frame, and the sequence numbers of the currently played video frames are 1-4; if the adjacent video frame requested for playing is the next frame after the currently played video frames, the decoded data corresponding to the first frame in the shared memory unit 202 is deleted, or the memory space corresponding to the decoded data of the first frame is released, so as to save resource occupation.
Still taking the sending of video data in units of frames as an example: if the transmission frequency of the video stream is 30 Hz (30 frames per second) and the display frequency of the display screen is 60 Hz, then, in order to make the transmission frequency of the video data finally input to the display module equal to the display frequency of the display module, the video data generated from the decoded data may be transmitted repeatedly, thereby improving the smoothness of the video. If the transmission frequency of the video stream is 60 Hz and the display frequency of the display screen is 30 Hz, every other frame of the video data generated from the decoded data may be dropped, so that the rates match and playback remains smooth. The sketch below illustrates this rate adaptation.
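A minimal sketch of the rate adaptation just described, assuming only the two cases discussed above; submit_to_display is a placeholder for whatever actually pushes a drawn frame to the display module.

```c
#include <stdint.h>

/* Placeholder for the call that pushes one drawn frame to the display. */
void submit_to_display(const uint8_t *frame);

/* Adapt a decoded sequence at src_fps to a display running at dst_fps.
 * Only the two cases discussed above are handled: 30->60 (send each
 * frame twice) and 60->30 (send every other frame). */
void adapt_frame_rate(const uint8_t *frames[], int count,
                      int src_fps, int dst_fps)
{
    for (int i = 0; i < count; i++) {
        if (src_fps * 2 == dst_fps) {
            submit_to_display(frames[i]);   /* repeat to double the rate */
            submit_to_display(frames[i]);
        } else if (src_fps == dst_fps * 2) {
            if (i % 2 == 0)                 /* drop every other frame */
                submit_to_display(frames[i]);
        } else {
            submit_to_display(frames[i]);   /* rates already match */
        }
    }
}
```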
The display screen according to the embodiment of the present application may be a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a mini LED, a micro LED, a micro OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In addition, the display in the embodiment of the present application may further be accompanied by a speaker, a touch sensor, and the like, which is not limited.
As a possible implementation manner, the video decoding unit 201 stores the decoded data in the shared memory unit 202, and when determining the decoded data indication information, the video decoding unit 201 and the shared memory unit 202 are specifically configured to:
the video decoding unit 201 is configured to send a memory allocation request to the shared memory unit 202, where the memory allocation request carries a memory capacity required by the decoded data; the shared memory unit 202 is configured to allocate a shared memory for the decoded data according to a memory capacity required by the decoded data in response to the memory allocation request, and store the decoded data sent from the video decoding unit 201 in the allocated shared memory; establishing a mapping relation between the decoded data and the allocated address of the shared memory, generating the decoded data indication information according to the mapping relation, and sending the decoded data indication information to the video decoding unit 201; the video decoding unit 201 is configured to receive the decoded data indication information sent by the shared memory unit.
After receiving the memory allocation request sent by the video decoding unit 201, the shared memory unit 202 may allocate memory space for the decoded data based on the memory capacity required by the decoded data carried in the memory allocation request. When the remaining capacity of the currently partially occupied memory page in the shared memory unit 202 is less than the memory capacity required by the decoded data, the shared memory unit 202 may obtain an unallocated memory page, designate the memory space corresponding to the unallocated memory page as the memory space for storing the decoded data, and store the decoded data in the memory space corresponding to the newly allocated memory page, thereby preventing video playing errors caused by insufficient memory resources in the currently occupied memory page. Alternatively, when the remaining capacity of the currently partially occupied memory page in the shared memory unit 202 is less than the memory capacity required by the decoded data, the shared memory unit 202 may delete, from the currently occupied memory pages, the decoded data corresponding to video that has already been played, thereby releasing memory space, effectively reducing memory resource waste while ensuring smooth video playing. A sketch of this allocation policy follows.
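By way of example only, the following sketch captures the two fallbacks described above; the page structure and the order in which they are tried are assumptions made for illustration and do not limit the application.

```c
#include <stdbool.h>
#include <stddef.h>

struct mem_page {
    size_t capacity;
    size_t used;
    bool   playback_finished;   /* data here belongs to video already played */
};

/* Hypothetical allocation policy of the shared memory unit: keep using the
 * current page while it fits, otherwise take a fresh page, otherwise reclaim
 * a page whose video has already been played. Returns NULL on failure. */
struct mem_page *allocate_for_decoded_data(struct mem_page *current,
                                           struct mem_page *pages, int npages,
                                           size_t required_bytes)
{
    if (current && current->capacity - current->used >= required_bytes)
        return current;                      /* current page still fits */

    for (int i = 0; i < npages; i++) {       /* fallback 1: unallocated page */
        if (pages[i].used == 0)
            return &pages[i];
    }

    for (int i = 0; i < npages; i++) {       /* fallback 2: reclaim played data */
        if (pages[i].playback_finished) {
            pages[i].used = 0;
            return &pages[i];
        }
    }
    return NULL;                             /* no memory available */
}
```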
The decoded data indication information in the embodiment of the present application is used to identify the storage address of the decoded data, and any indication scheme that can identify the storage address of the decoded data is applicable to the embodiment of the present application. For example, the decoded data indication information may consist of a memory block address and an offset (offset), where the memory block address indicates the address of the memory block (block) in the shared memory unit 202 where the decoded data is located, and the offset indicates the offset of the decoded data within that memory block; that is, the combination of the block address and the offset determines the storage location of the decoded data in the shared memory unit 202. Illustratively, the memory block address may be divided into tag bits (tag) and index bits (index); combining the tag bits and the index bits determines the address of the memory block in the shared memory unit 202, as sketched below.
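A minimal sketch of resolving such indication information into a byte address, assuming a fixed block size and a fixed number of blocks per set; both constants, and the way tag and index combine, are illustrative assumptions rather than part of the application.

```c
#include <stdint.h>

/* Hypothetical layout of the decoded data indication information: the
 * memory block address is split into a tag and an index, and the offset
 * locates the data inside the block. */
struct indication {
    uint32_t tag;
    uint32_t index;
    uint32_t offset;
};

#define BLOCKS_PER_SET 64u                   /* assumed shared memory geometry */
#define BLOCK_BYTES    (2u * 1024u * 1024u)  /* assumed block size: 2 MB       */

/* Resolve the indication into a byte address inside the shared memory. */
uintptr_t resolve_address(uintptr_t shm_base, const struct indication *ind)
{
    uint32_t block = ind->tag * BLOCKS_PER_SET + ind->index;  /* block address */
    return shm_base + (uintptr_t)block * BLOCK_BYTES + ind->offset;
}
```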
As a possible implementation manner, the shared memory unit 202 is further configured to delete the stored decoded data after the video playing application has played the video playing data corresponding to the decoded data. In this way, memory space is saved and the video playing efficiency is further improved.
As a possible implementation manner, when the video decoding unit 201 performs decoding processing on the video stream to obtain decoded data, it is specifically configured to: demultiplex the video stream to obtain a basic video stream (video elementary stream); and decode the basic video stream to obtain the decoded data. The video decoding unit 201 may include a demultiplexer, and the video stream is demultiplexed by the demultiplexer to obtain the basic video stream.
As a possible implementation manner, the video decoding unit 201 is further configured to: receive a video playing code rate indication sent by the video playing application and decode the video stream according to the indicated video playing code rate to obtain video at the corresponding code rate, wherein the video playing code rate indication comprises a video playing code rate indicated by a user. Decoding the video stream according to the code rate indication yields decoded data corresponding to that code rate, and the frame rate of the output video data can be adjusted in real time.
As a possible implementation manner, referring to fig. 3, the video playing unit 203 specifically includes: a video processing module 2031 and a video output module 2032; the video processing module 2031 is configured to receive decoded data indication information sent by the video playing application; reading the decoded data from the shared memory unit according to the decoded data indication information; preprocessing the decoded data to obtain an image with at least one resolution; the video output module 2032 is configured to receive decoded data indication information sent by the video playing application, and read the decoded data from the shared memory unit 202 according to the decoded data indication information; and drawing video playing data according to the decoded data and the at least one resolution image, and sending the video playing data to the video playing application for playing.
Currently, when drawing video play data, an image processor (GPU) is typically used to process the decoded data, but the cost is also high because of the many arithmetic units and logic arrays in the image processor. However, when the decoded data is actually used to draw the video play data, it is not necessary to use excessive arithmetic units and logic arrays, but only the video processing module 2031 is used to preprocess the decoded data, and the drawing of the video play data can be completed, where the video processing module 2031 and the video output module 2032 are usually processing modules inherent to the device running the video play system 200, and the number of arithmetic units and logic arrays included in the video processing module 2031 and the video output module 2032 is smaller than that of image processors, so that the power consumption is lower and the cost is lower.
For example, on a device running a Linux operating system, the video processing module 2031 and the video output module 2032 may be a video processing subsystem (video process sub-system, VPSS) and a video output module (VO), respectively, and the VPSS and the VO replace a relatively high-cost image processor together to complete a corresponding video processing function. The VPSS supports unified simple preprocessing of an input image, such as denoising, de-interlacing and the like, then scaling, sharpening and the like are respectively carried out, and finally, multiple images with different resolutions are output. The VO is used as a video output port for outputting video playing data. After the decoded data is preprocessed through the VPSS, the preprocessed image is directly transmitted to the VO, and video playing data is sent to a display screen through a screen video access interface by utilizing the VO. In the embodiment of the application, the video processing module 2031 and the video output module 2032 which are smaller and simpler replace the GPU to draw the video playing data, so that the power consumption is smaller, and the video watching experience of a user is not influenced on the premise of saving the cost.
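On such a device, the GPU-free drawing path might look like the sketch below; the wrapper functions and the frame structure are hypothetical placeholders introduced for illustration and are not the actual VPSS or VO driver interface.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical wrappers around the platform's VPSS and VO drivers. */
struct yuv_frame { uint8_t *data; size_t width, height; };

struct yuv_frame vpss_preprocess(const struct yuv_frame *in,
                                 size_t out_width, size_t out_height);
void vo_send_to_screen(const struct yuv_frame *frame);

/* Draw one frame without involving a GPU: take the decoded frame read
 * from shared memory, let the VPSS denoise/de-interlace and scale it to
 * the panel resolution, then hand it straight to the VO output port. */
void draw_frame_without_gpu(const struct yuv_frame *decoded,
                            size_t panel_w, size_t panel_h)
{
    struct yuv_frame scaled = vpss_preprocess(decoded, panel_w, panel_h);
    vo_send_to_screen(&scaled);
}
```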
The video decoding unit in the video playing system provided by the application stores the decoded data obtained by decoding the video stream in the shared memory and obtains the decoded data indication information, so that transmitting the decoded data indication information among the video decoding unit, the video playing application and the video playing unit achieves the same effect as copying the decoded data, without an actual copy. The video playing unit directly uses the decoded data output by the video decoding unit, eliminating multiple copies of the decoded data, which reduces the CPU consumption caused by copying the decoded data and improves the performance of the media player.
Based on the same conception, the embodiment of the application also provides a video playing method, which comprises the following steps: receiving a video stream sent by a video playing application, and decoding the video stream to obtain decoded data; storing the decoded data in a shared memory, and determining decoded data indication information, wherein the decoded data indication information is used for indicating an address of the decoded data stored in the shared memory; and sending the decoded data indication information to the video playing application.
Based on the same conception, the embodiment of the application also provides a video playing method, which comprises the following steps: receiving decoding data indication information sent by a video playing application, and reading decoding data from a shared memory according to the decoding data indication information; and drawing video playing data based on the decoding data, and sending the video playing data to the video playing application for playing.
Based on the same concept, referring to fig. 4, the present application provides a video playing apparatus 400 including means for performing the video playing method described above separately or together. As one possible implementation, the video playback device 400 may include a processor 410 and a memory 420 therein; the memory 420 has stored therein program instructions; the program instructions, when executed, cause the video playback device 400 to perform the video playback method described in the above embodiments.
Further, in some embodiments, the video playback device 400 may further include a transceiver 430 for communicating with other devices via a transmission medium, so that the video playback device 400 may communicate with other devices. By way of example, transceiver 430 may be a communication interface, circuit, bus, module, etc., and the other devices may be terminals or servers, etc. Illustratively, transceiver 430 may be used to send video acquisition requests to a video server, receive video streams, and the like. In other embodiments, the video playback device 400 may further include a display screen 440 for displaying the rendered video.
In addition, the video playback device 400 in the embodiment of the present application may further include a speaker, a touch sensor, and the like, which is not limited. The connection medium between the processor 410, the memory 420, the transceiver 430, and the display screen 440 is not limited in the embodiment of the present application. For example, the processor 410, the memory 420, the transceiver 430, and the display 440 may be connected through buses, which may be divided into address buses, data buses, control buses, etc. in an embodiment of the present application.
Processor 410 in embodiments of the present application may be a general purpose processor, a digital signal processor (digital signal processor, DSP), an application specific integrated circuit (application specific integrated circuit, ASIC), an off-the-shelf programmable gate array (field programmable gate array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components. The disclosed methods, steps, and logic blocks in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be embodied directly in the execution of a hardware decoding processor, or in the execution of a combination of hardware and software modules in a decoding processor. The software modules may be located in a memory medium well known in the art such as random access memory (random access memory, RAM), flash memory, read-only memory (ROM), programmable read-only memory, or electrically erasable programmable memory, registers, and the like. The storage medium is located in a memory, and the processor reads instructions from the memory and, in combination with its hardware, performs the steps of the method described above.
In an embodiment of the present application, the memory 420 may be a nonvolatile memory, such as a hard disk (HDD) or a Solid State Drive (SSD), or may be a volatile memory (volatile memory), for example, a random-access memory (RAM). The memory is any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to such. The memory in embodiments of the present application may also be circuitry or any other device capable of performing memory functions for storing program instructions and/or data.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (11)

1. A video playback system, comprising: the video decoding unit, the shared memory unit and the video playing unit; the shared memory space in the shared memory unit is shared by the video decoding unit and the video playing unit;
the video decoding unit is used for receiving a video stream sent by a video playing application, and decoding the video stream to obtain decoded data; and storing the decoded data in the shared memory unit; determining decoding data indication information, wherein the decoding data indication information is used for indicating the address of the decoding data stored in the shared memory unit; transmitting the decoded data indication information to the video playing application;
The video playing unit is used for receiving decoding data indicating information sent by the video playing application and reading the decoding data from the shared memory unit according to the decoding data indicating information; and drawing video playing data based on the decoding data, and sending the video playing data to the video playing application for playing.
2. The system of claim 1, wherein the video decoding unit stores the decoded data in the shared memory unit, and wherein the video decoding unit and the shared memory unit are configured to, when determining the decoded data indication information:
the video decoding unit is configured to send a memory allocation request to the shared memory unit, where the memory allocation request carries a memory capacity required by the decoded data;
the shared memory unit is used for responding to the memory allocation request, allocating a shared memory for the decoded data according to the memory capacity required by the decoded data, and storing the decoded data sent by the video decoding unit in the allocated shared memory; establishing a mapping relation between the decoded data and the allocated address of the shared memory, generating the decoded data indication information according to the mapping relation, and sending the decoded data indication information to the video decoding unit;
The video decoding unit is configured to receive the decoded data indication information sent by the shared memory unit.
3. The system of claim 1 or 2, wherein the shared memory unit is further configured to:
and deleting the stored decoded data after the video playing application plays the video playing data corresponding to the decoded data.
4. A system according to any one of claims 1-3, characterized in that the data type of the decoded data is YUV data.
5. The system according to any one of claims 1-4, wherein the video decoding unit is configured to, when performing decoding processing on the video stream to obtain decoded data:
demultiplexing the video stream to obtain a basic video stream video elementary stream; and decoding the basic video stream to obtain the decoded data.
6. The system of any of claims 1-5, wherein the video decoding unit is further configured to: and receiving a video playing code rate indication sent by the video playing application, and decoding the video stream, wherein the video playing code rate indication comprises a video playing code rate indicated by a user.
7. The system of any of claims 1-6, wherein the video playback unit comprises: the video processing module and the video output module;
the video processing module is used for receiving decoding data indication information sent by the video playing application; reading the decoded data from the shared memory unit according to the decoded data indication information; preprocessing the decoded data to obtain an image with at least one resolution;
the video output module is used for receiving the decoded data indication information sent by the video playing application and reading the decoded data from the shared memory unit according to the decoded data indication information; and drawing video playing data according to the decoded data and the at least one resolution image, and sending the video playing data to the video playing application for playing.
8. A video playing method, the method comprising:
receiving a video stream sent by a video playing application, and decoding the video stream to obtain decoded data;
storing the decoded data in a shared memory, and determining decoded data indication information, wherein the decoded data indication information is used for indicating an address of the decoded data stored in the shared memory;
And sending the decoded data indication information to the video playing application.
9. A video playing method, the method comprising:
receiving decoding data indication information sent by a video playing application, and reading decoding data from a shared memory according to the decoding data indication information;
and drawing video playing data based on the decoding data, and sending the video playing data to the video playing application for playing.
10. A video playback device comprising means for performing the method of claim 8 and means for performing the method of claim 9.
11. A computer readable storage medium storing computer program instructions/code which, when executed by an electronic device, cause the electronic device to carry out the method of claim 8 or claim 9.
CN202210205973.3A 2022-02-28 2022-02-28 Video playing system and method and electronic equipment Pending CN116700943A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210205973.3A CN116700943A (en) 2022-02-28 2022-02-28 Video playing system and method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210205973.3A CN116700943A (en) 2022-02-28 2022-02-28 Video playing system and method and electronic equipment

Publications (1)

Publication Number Publication Date
CN116700943A true CN116700943A (en) 2023-09-05

Family

ID=87826368

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210205973.3A Pending CN116700943A (en) 2022-02-28 2022-02-28 Video playing system and method and electronic equipment

Country Status (1)

Country Link
CN (1) CN116700943A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117676064A (en) * 2024-02-01 2024-03-08 南京国兆光电科技有限公司 Video signal transmission method, equipment and storage medium based on SPI communication
CN117676064B (en) * 2024-02-01 2024-05-21 南京国兆光电科技有限公司 Video signal transmission method, equipment and storage medium based on SPI communication

Similar Documents

Publication Publication Date Title
US9990690B2 (en) Efficient display processing with pre-fetching
US20140086309A1 (en) Method and device for encoding and decoding an image
CN108881916A (en) The video optimized processing method and processing device of remote desktop
CN112929705B (en) Texture compression and decompression method and device, computer equipment and storage medium
US11755271B2 (en) Stitching display system and image processing method of the same
RU2656727C1 (en) Compression control surfaces supported by virtual memory
WO2023011033A1 (en) Image processing method and apparatus, computer device and storage medium
CN114339412B (en) Video quality enhancement method, mobile terminal, storage medium and device
US20120218292A1 (en) System and method for multistage optimized jpeg output
CN116700943A (en) Video playing system and method and electronic equipment
US9832476B2 (en) Multiple bit rate video decoding
CN115396674B (en) Method, apparatus, medium, and computing apparatus for processing at least one image frame
CN111052742A (en) Image processing
US20230388542A1 (en) A method and apparatus for adapting a volumetric video to client devices
US20130286285A1 (en) Method, apparatus and system for exchanging video data in parallel
US11503310B2 (en) Method and apparatus for an HDR hardware processor inline to hardware encoder and decoder
CN116601695A (en) Method and apparatus for adaptive subsampling for DEMURA correction
CN104737225A (en) System and method for memory-bandwidth efficient display composition
US9883194B2 (en) Multiple bit rate video decoding
US11622113B2 (en) Image-space function transmission
CN110930480B (en) Method for directly rendering startup animation video of liquid crystal instrument
CN108235144A (en) Broadcasting content acquisition methods, device and computing device
CN101166244A (en) Screen display device and its display method
CN118069582A (en) Data processing method and system-level chip
KR101695007B1 (en) Apparatus for parallel processing of large-scale video data and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination