CN117857872A - Memory processing method and display device - Google Patents

Memory processing method and display device

Publication number: CN117857872A
Application number: CN202310259284.5A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 廖院松, 汤雯, 刘剑, 杨圣
Current assignee: Vidaa Netherlands International Holdings BV
Original assignee: Vidaa Netherlands International Holdings BV
Application filed by Vidaa Netherlands International Holdings BV

Abstract

The embodiments of the present application disclose a memory processing method and a display device. In response to a media asset playing operation, media asset data is acquired from a server and a buffer is constructed, the buffer comprising at least one memory block. The media asset data is parsed to obtain a data stream comprising a plurality of continuously distributed frame data, where the frame data is video frame data or audio frame data. According to the data stream, a plurality of frame memory units for storing the frame data are correspondingly generated in the memory block, each frame memory unit being mapped to one frame data; the memory block stores a single copy of the common attribute information shared by the plurality of frame data of the data stream, and the frame memory units do not store the common attribute information. The method and device use the large memory blocks of the buffer to store the frame memory units of a data stream continuously and densely, which reduces the extra memory overhead incurred when small memory units are allocated in a fragmented manner, improves memory utilization, and further reduces memory consumption by merging the common attribute information of the frame data of the data stream into the memory block.

Description

Memory processing method and display device
Technical Field
The present disclosure relates to the field of display devices, and in particular, to a memory processing method and a display device.
Background
Applications may be installed in the display device, and the user may view media assets of interest through some of these applications. Some applications follow the MSE (Media Source Extensions) specification, which allows an application to preload media asset data to cope with network fluctuations. For example, when playback of a media asset has reached the 1st minute, the application has already downloaded the media asset data for the following 5 minutes, thereby avoiding playback stuttering and ensuring smooth playback. In scenarios such as media asset preloading, the application often consumes a large amount of memory, which affects the running performance of the system.
Disclosure of Invention
The embodiments of the present application provide a memory processing method and a display device, which are used to reduce the memory consumption of the display device when playing media assets and to improve memory use efficiency.
In a first aspect, embodiments of the present application provide a display device, including:
a communicator, configured to communicatively connect to a server so as to download media asset data from the server;
a controller for performing:
responding to a media asset playing operation, acquiring media asset data from a server, and constructing a buffer, wherein the buffer comprises at least one memory block;
parsing the media asset data to obtain a data stream, wherein the data stream comprises a plurality of continuously distributed frame data, and the frame data is video frame data or audio frame data;
and correspondingly generating, in the memory block according to the data stream, a plurality of frame memory units for storing the frame data, wherein each frame memory unit is mapped to one frame data, the memory block separately stores the common attribute information of the plurality of frame data included in the data stream, and the frame memory units do not store the common attribute information.
In the first aspect, large memory blocks are allocated from the buffer, and the plurality of frame memory units corresponding to a data stream are stored continuously and densely in these large memory blocks. This reduces the extra memory overhead produced by fragmented allocation of small memory units, improves memory utilization, and further reduces memory consumption by merging the common attribute information of each frame of the data stream into the large memory block. In particular, memory can be used and allocated reasonably in media asset preloading scenarios under the MSE specification, ensuring the running performance of the media asset application and the system.
In some implementations, the controller builds the buffer by: in response to the media asset playing operation, constructing a media source corresponding to the media asset, and constructing at least one source buffer mapped to the media source; constructing at least one track buffer in the source buffer, the track buffer being used for storing data of a single media type; and constructing at least one memory block in the track buffer. In this way, the application further provides, under the MSE specification, a layered architecture from the source buffer, through the track buffer and the memory block, down to the frame memory unit, which can reduce the extra memory overhead of fragmented small-memory allocation, improve memory utilization, and serve as the infrastructure for operations such as memory reading, writing and cleaning.
In some implementations, the controller is further configured to perform: identifying the working state of the memory block according to the state identifier of the memory block, wherein the working state comprises a used state and an unused state, and the used state comprises a writing state and a reading state; if the memory block is in the unused state, compressing the memory block; and if the memory block is in the used state, not compressing the memory block. In this way, compressing unused memory blocks reduces memory consumption, while memory blocks in use are left uncompressed so that operations such as reading, writing and memory cleaning can proceed normally.
In some implementations, the controller generates the plurality of frame memory units for storing the frame data in the memory block by: searching, according to the media type and the track ID, a first target track buffer corresponding to the data stream in the source buffer; searching, in the first target track buffer, a first target memory block in the writing state; if the first target memory block is found, generating the plurality of frame memory units in the first target memory block; if the first target memory block is not found, creating a first newly-added memory block in the first target track buffer, setting a first sub-identifier for the first newly-added memory block, and taking the first newly-added memory block as the first target memory block, the first sub-identifier being used for indicating that the memory block is in the writing state. In this way, based on this memory processing architecture, the memory block into which data can currently be written is located automatically when the memory writing flow is executed, improving the efficiency of writing data into memory.
In some implementations, after correspondingly generating the plurality of frame memory units for storing the frame data in the memory block, the controller is further configured to perform: if the first target memory block is fully written, canceling the first sub-identifier of the first target memory block; if the first target memory block is not in the reading state, setting a first state identifier for the first target memory block and compressing the first target memory block, the first state identifier being used for indicating that the memory block is in the unused state; if the first target memory block is in the reading state, or after the first target memory block is compressed, creating a second newly-added memory block in the first target track buffer, setting the first sub-identifier for the second newly-added memory block, and continuing to store frame data of the data stream into the second newly-added memory block. In this way, when a memory block is fully written, it can be compressed to reduce memory consumption, while a new memory block into which data can continue to be written is created, ensuring normal caching of the media asset data.
In some implementations, after correspondingly generating the plurality of frame memory units for storing the frame data in the memory block, the controller is further configured to perform: searching, according to the media type and the track ID, a second target track buffer corresponding to the data stream in the source buffer; searching, in the second target track buffer, a second target memory block in the reading state; controlling the decoder to read the cached data stream from the second target memory block and decode the data stream; and if the second target memory block has been fully read, setting the first state identifier for the second target memory block and compressing the second target memory block. In this way, based on this memory processing architecture, when the memory reading flow is executed, the memory block from which data can currently be read is located automatically, improving the efficiency of reading data from memory, and a memory block that has been fully read can be compressed to reduce memory consumption.
In some implementations, after compressing the second target memory block, the controller is further configured to: searching, in the second target track buffer, a third target memory block located immediately after the second target memory block; if the third target memory block is in the unused state, decompressing the third target memory block; if the third target memory block is in the used state, or after the third target memory block is decompressed, setting a second sub-identifier for the third target memory block, where the second sub-identifier is used to indicate that the memory block is in the reading state; and controlling the decoder to continue reading the cached data stream from the third target memory block and decoding the data stream. In this way, after one memory block is fully read, the next memory block from which the decoder can continue reading data is found automatically, ensuring normal decoding and playing of the media asset data.
In some implementations, the controller is further configured to perform: searching, in the source buffer according to the media type and the track ID of the data to be cleaned, a third target track buffer corresponding to the data to be cleaned; searching a fourth target memory block from the third target track buffer according to a preset time interval, the fourth target memory block comprising Q frame memory units to be cleaned determined according to the preset time interval, where Q is less than or equal to S; Q is a first number representing the number of frame memory units to be cleaned in the fourth target memory block, and S is a second number representing the total number of frame memory units included in the fourth target memory block; and deleting the fourth target memory block from the third target track buffer if the first number is equal to the second number. In this way, the memory block containing data to be cleaned is determined in advance, and if all the frame memory units in that memory block are to be cleaned, the whole memory block can be deleted directly from the track buffer, improving memory cleaning efficiency.
In some implementations, the controller is further configured to perform: if the first number is smaller than the second number and the fourth target memory block is in the unused state, decompressing the fourth target memory block and setting a second state identifier for it, the second state identifier being used for indicating that the memory block is in the used state; when the fourth target memory block is in the used state, determining again, according to the preset time interval, the frame memory units to be cleaned from the fourth target memory block; deleting the frame memory units to be cleaned from the fourth target memory block; and setting the first state identifier for the fourth target memory block and compressing it. In this way, if only some of the frame memory units in the memory block are to be cleaned, the frame memory units to be cleaned can be re-determined while the memory block is in the used state, ensuring the accuracy of memory cleaning; the memory block is compressed after the frame memory units to be cleaned are deleted, reducing memory consumption.
In a second aspect, an embodiment of the present application further provides a memory processing method, including:
responding to a media asset playing operation, acquiring media asset data from a server, and constructing a buffer, wherein the buffer comprises at least one memory block;
parsing the media asset data to obtain a data stream, wherein the data stream comprises a plurality of continuously distributed frame data, and the frame data is video frame data or audio frame data;
and correspondingly generating, in the memory block according to the data stream, a plurality of frame memory units for storing the frame data, wherein each frame memory unit is mapped to one frame data, the memory block separately stores the common attribute information of the plurality of frame data included in the data stream, and the frame memory units do not store the common attribute information.
In the second aspect, large memory blocks are allocated from the buffer, and the plurality of frame memory units corresponding to a data stream are stored continuously and densely in these large memory blocks. This reduces the extra memory overhead produced by fragmented allocation of small memory units, improves memory utilization, and further reduces memory consumption by merging the common attribute information of each frame of the data stream into the large memory block. In particular, memory can be used and allocated reasonably in media asset preloading scenarios under the MSE specification, ensuring the running performance of the media asset application and the system.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is an operation scenario between a display device and a control apparatus 100 provided in an embodiment of the present application;
fig. 2 is a hardware configuration block diagram of the control device 100 provided in the embodiment of the present application;
fig. 3 is a hardware configuration block diagram of a display device 200 provided in an embodiment of the present application;
fig. 4 is a software configuration diagram of a display device 200 according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a media source for implementing media playback according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of a memory structure of a TrackBuffer according to an embodiment of the present application;
fig. 7 is a schematic diagram of a memory structure of an ES frame buffer according to an embodiment of the present application;
fig. 8 is a schematic diagram of a memory structure based on a large memory block according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a system processing architecture according to an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of a first memory processing method according to an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of a second memory processing method according to an embodiment of the present disclosure;
fig. 12 is a schematic diagram of a third memory processing method according to an embodiment of the present application.
Detailed Description
For clarity of implementation of the present application, the following clearly and completely describes exemplary implementations of the present application with reference to the accompanying drawings in which those implementations are illustrated. It is apparent that the described exemplary implementations are only some, but not all, of the embodiments of the present application.
It should be noted that the brief description of the terms in the present application is only for convenience in understanding the embodiments described below, and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims and in the above-described figures are used for distinguishing between similar or similar objects or entities and not necessarily for limiting a particular order or sequence, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements explicitly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The display device provided in the embodiments of the present application may have various implementation forms, for example, a television, a smart television, a laser projection device, a monitor, an electronic whiteboard (electronic bulletin board), an electronic table, and the like. Figs. 1 and 2 show specific embodiments of the display device of the present application.
Fig. 1 is a schematic diagram of an operation scenario between a display device and a control apparatus 100 according to an embodiment. As shown in fig. 1, a user may operate the display device 200 through the smart device 300 or the control apparatus 100.
In some implementations, the control apparatus 100 may be a remote controller, and the communication between the remote controller and the display device includes infrared protocol communication, Bluetooth protocol communication, or other short-range communication methods, the display device 200 being controlled wirelessly or by wire. The user may control the display device 200 by inputting user instructions through keys on the remote control, voice input, control panel input, etc. Alternatively, the control apparatus 100 may be a mouse, and the mouse and the display device may be connected in a wired or wireless manner.
In some implementations, the smart device 300 (e.g., mobile terminal, tablet, computer, notebook, etc.) may also be used to control the display device 200. For example, the display device 200 is controlled using an application running on a smart device.
In some implementations, the display device may not receive instructions using the smart device or control device described above, but rather receive control of the user by touch or gesture, or the like.
In some implementations, the display device 200 is also in data communication with a server 400. The display device 200 may communicate over a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various contents and interactions to the display device 200. The server 400 may be one cluster or multiple clusters, and may include one or more types of servers.
Fig. 2 shows a block diagram of a configuration of the control apparatus 100 according to an exemplary embodiment in some embodiments. As shown in fig. 2, the control device 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory, and a power supply. The control apparatus 100 may receive an input operation instruction of a user and convert the operation instruction into an instruction recognizable and responsive to the display device 200, and function as an interaction between the user and the display device 200.
In some implementations, as shown in fig. 3, the display apparatus 200 includes at least one of a modem 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a memory, a power supply, a user interface.
In some implementations, the controller includes a processor, a video processor, an audio processor, a graphics processor, RAM, ROM, and first to nth input/output interfaces.
The display 260 includes a display screen component for presenting pictures and a driving component for driving image display, and is used to receive image signals output by the controller and display video content, image content, menu manipulation interfaces, and user manipulation UI interfaces.
The display 260 may be a liquid crystal display, an OLED display, or a projection display, and may also be a projection device and projection screen.
The communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example: the communicator may include at least one of a Wifi module, a bluetooth module, a wired ethernet module, or other network communication protocol chip or a near field communication protocol chip, and an infrared receiver. The display apparatus 200 may establish transmission and reception of control signals and data signals with the control device 100 or the server 400 through the communicator 220.
A user interface, which may be used to receive control signals from a control device 100 (e.g., an infrared remote control, a mouse, etc.).
The external device interface 240 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, etc. The input/output interface may be a composite input/output interface formed by a plurality of interfaces.
The controller 250 controls the operation of the display device and responds to the user's operations through various software control programs stored on the memory. The controller 250 controls the overall operation of the display apparatus 200. For example: in response to receiving a user command to select a UI object to be displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
In some implementations, the controller includes at least one of a central processing unit (Central Processing Unit, CPU), a video processor, an audio processor, a graphics processor (Graphics Processing Unit, GPU), RAM (Random Access Memory), ROM (Read-Only Memory), first to nth input/output interfaces, a communication bus (Bus), and the like.
The user may input a user command through a Graphical User Interface (GUI) displayed on the display 260, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
A "user interface" is a media interface for interaction and exchange of information between an application or operating system and a user, which enables conversion between an internal form of information and a user-acceptable form. A commonly used presentation form of the user interface is a graphical user interface (Graphic User Interface, GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the electronic device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
In some implementations, a graphics processor is used to generate various graphical objects, such as: at least one of icons, operation menus, page contents displayed based on an input instruction of a user, and the like. The graphic processor comprises an arithmetic unit, which is used for receiving various interactive instructions input by a user to operate and displaying various objects according to display attributes; the device also comprises a renderer for rendering various objects obtained based on the arithmetic unit, wherein the rendered objects are used for being displayed on a display.
In some implementations, as shown in fig. 4, the system of the display device is divided into three layers, an application layer, a middleware layer, and a hardware layer, respectively, from top to bottom.
In some implementations, the application layer mainly includes common applications on the television and an application framework (Application Framework), where the common applications are mainly applications developed based on a browser, such as HTML5 apps, as well as native applications (Native APPs);
in some implementations, the application framework (Application Framework) is a complete program model with all the basic functions required by standard application software, such as: file access, data exchange, and the interface for the use of these functions (toolbar, status column, menu, dialog box).
In some implementations, native APPs (Native APPs) may support online or offline, message pushing, or local resource access.
In some implementations, the middleware layer includes middleware such as various television protocols, multimedia protocols, and system components. The middleware can use basic services (functions) provided by the system software to connect various parts of the application system or different applications on the network, so that the purposes of resource sharing and function sharing can be achieved.
In some implementations, the hardware layer mainly includes a HAL interface, hardware and a driver, where the HAL interface is a unified interface for all television chips to interface, and specific logic is implemented by each chip. The driving mainly comprises: audio drive, display drive, bluetooth drive, camera drive, WIFI drive, USB drive, HDMI drive, sensor drive (e.g., fingerprint sensor, temperature sensor, pressure sensor, etc.), and power supply drive, etc.
Devices with the same or similar software and hardware configurations as the aforementioned display device may have different types of applications installed, and through some of these applications a user may view media resources (hereinafter referred to as media assets), for example: after the user starts video application A, any film source provided in video application A may be requested.
The display device requests the media asset data from the server 400 mapped to the application through the communicator 220. After the media asset data is acquired, the system may allocate memory for the application to buffer the media asset data. The decoder reads the media asset data from the designated memory and decodes it, sends the decoded video frame data to the display so that the display presents it in the video playing interface of the application, and sends the decoded audio frame data to the sound player so that the sound player plays it, with the video frame data and the audio frame data kept synchronized. The sound player includes, but is not limited to, a speaker built into the display device, or an external power amplifier device (such as a sound box or a Bluetooth headset) connected wirelessly or wired through the audio output interface.
In some implementations, an application installed on the display device may support the MSE specification; MSE enables plug-in-free, Web-based streaming media playback. MSE allows the application to preload media asset data, for example loading in advance the media data for a certain period (e.g., five minutes) beyond the current playing progress, which avoids playback stuttering caused by factors such as network fluctuation and makes media asset playing smoother.
In some implementations, based on MSE, the display device may dynamically construct media source objects for audio (audio) and video (video) through JavaScript, where the media source objects may be used as media data sources for HTMLMediaElement in HTML5, enabling a Web browser supporting HTML5 to play media resources without plug-ins.
Fig. 5 is a schematic diagram of media source implementation of media asset playing according to an embodiment of the present application. As shown in fig. 5, the MediaSource object may create n SourceBuffers, where n is greater than or equal to 1. A SourceBuffer corresponds to a container for storing a media stream, and an application may dynamically add Media Segments to the SourceBuffer object through the MediaSource. The Media Segments are a series of segments into which the server divides the media content, and each segment may contain related information such as codec format, segment duration, resolution, code rate, and URL, so that the display device can decode and play the media content.
Referring to fig. 5, the SourceBuffer manages its data through TrackBuffers. Each SourceBuffer may include m TrackBuffers, where m is greater than or equal to 1; the TrackBuffers work in parallel, each buffering data of one media type.
As shown in the example of fig. 5, the MediaSource object creates 3 sourcebuffers, sourceBuffer1, sourceBuffer2, and SourceBuffer3, respectively.
The SourceBuffer1 includes a TrackBuffer11, a TrackBuffer12 and a TrackBuffer13, wherein the TrackBuffer11 is used for buffering video data 1 in a media segment input to the SourceBuffer1, the TrackBuffer12 is used for buffering audio data 1 in a media segment input to the SourceBuffer1, and the TrackBuffer13 is used for buffering text data 1 (e.g. subtitles) in a media segment input to the SourceBuffer 1.
The SourceBuffer2 includes a TrackBuffer21, and the TrackBuffer21 is used for buffering video data 2 in the media segments input to the SourceBuffer2.
The SourceBuffer3 includes a TrackBuffer31, and the TrackBuffer31 is used for buffering the audio data 2 in the media segment input to the SourceBuffer3.
The video decoder 1 is used for reading and decoding video data 1 from the TrackBuffer11, the audio decoder 1 is used for reading and decoding audio data 1 from the TrackBuffer12, the text decoder 1 is used for reading and decoding text data 1 from the TrackBuffer13, the video decoder 2 is used for reading and decoding video data 2 from the TrackBuffer21, and the audio decoder 2 is used for reading and decoding audio data 2 from the TrackBuffer 31. In this way, the controller can transmit the decoded audio data 1 or audio data 2 to the sound player, and can also transmit the mixed data of the audio data 1 and the audio data 2 to the sound player. The controller can control the application to display the decoded Video data 1, the decoded Video data 2 and the decoded text data 1 in different window areas on the media playing interface based on the Video Tag of the HTML5, and can realize multi-path Video playing.
In some implementations, after the media asset application adds a media segment to the SourceBuffer, the controller may start a browser application, and the browser application calls a Stream Parser to parse the media segment into an ES (Elementary Stream) and related metadata information, where the ES contains one type of media content, for example the video stream data, audio stream data, or text stream data corresponding to the media segment. The ES data is stored, in the form of a linked list, in the buffer queue corresponding to the TrackBuffer. Taking fig. 5 as an example: the media segment in SourceBuffer1 is parsed into video stream data 1, audio stream data 1 and text stream data 1; video stream data 1 is then stored in the buffer queue of TrackBuffer11, audio stream data 1 in the buffer queue of TrackBuffer12, and text stream data 1 in the buffer queue of TrackBuffer13.
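To make this structure concrete, the following is a minimal C++ sketch of the MediaSource → SourceBuffer → TrackBuffer hierarchy of fig. 5. All type and member names here are illustrative assumptions, not the actual browser implementation.

```cpp
#include <cstdint>
#include <memory>
#include <vector>

enum class MediaType { kVideo, kAudio, kText };

// One TrackBuffer caches parsed ES data of a single media type.
struct TrackBuffer {
    MediaType type;
    int track_id;
    std::vector<std::vector<uint8_t>> es_queue;  // ES frames in decode order
};

// One SourceBuffer manages m TrackBuffers (m >= 1).
struct SourceBuffer {
    std::vector<std::unique_ptr<TrackBuffer>> tracks;

    TrackBuffer* FindTrack(MediaType type, int track_id) {
        for (auto& t : tracks)
            if (t->type == type && t->track_id == track_id) return t.get();
        return nullptr;
    }
};

// One MediaSource owns n SourceBuffers (n >= 1), as in fig. 5.
struct MediaSource {
    std::vector<std::unique_ptr<SourceBuffer>> source_buffers;

    SourceBuffer* AddSourceBuffer() {
        source_buffers.push_back(std::make_unique<SourceBuffer>());
        return source_buffers.back().get();
    }
};
```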
In the media asset preloading scenario, the display device may load media asset data of a specified duration or data amount in advance. The larger the amount of preloaded data, the better the application can withstand network fluctuations, but the system needs to allocate more memory for the application to cache the preloaded media data; in particular, for higher-resolution (e.g., UHD, Ultra High Definition) film sources, the SourceBuffer may need to use 50-100 MB of memory, resulting in excessive memory consumption.
Fig. 6 is a schematic diagram of a memory structure of a TrackBuffer according to an embodiment of the present application. As shown in fig. 6, the TrackBuffer may include K ES frame buffers, where K is greater than or equal to 1, and the ES frame buffers are used to store the preloaded media data. The TrackBuffer may link the K ES frame buffers sequentially according to DTS (Decoding Timestamp), i.e., the SourceBuffer and TrackBuffer are effectively a linked list formed by ordering multiple ES frame buffers. The decoder may read the data stored in the ES frame buffers by accessing the TrackBuffer.
Fig. 7 is a schematic diagram of a memory structure of an ES frame buffer according to an embodiment of the present application. As shown in fig. 7, the ES frame buffer may store Type, Track_ID, Data_ptr, Data_size, Side_data_ptr, Side_data_size, Is_key_frame (whether the frame is a key frame), Decrypt_config_ptr, Time_info, Decode_timestamp, Config_id, and the like.
Type may be used to indicate the data type of the current frame (e.g., video frame or audio frame). Track_ID is used to indicate the ID of the TrackBuffer to which the ES frame buffer belongs. Data_ptr is a pointer-type data structure for storing the frame data of the current frame in the ES frame buffer; Data_size is used to record the data size of the current frame. Side_data_ptr is used to store the side data in the ES frame buffer, which may include parameter information related to the current frame, including, but not limited to, StreamID (ID of the stream), SPS (Sequence Parameter Set), and PPS (Picture Parameter Set); Side_data_size is used to record the side data size of the current frame. Is_key_frame is used to record whether the current frame is a key frame. Decrypt_config_ptr is used to store the decryption configuration information of the current frame. Time_info is used to record the time information of the current frame (e.g., its dwell time on the display). Decode_timestamp is the decoding timestamp of the current frame. Config_id is the configuration ID of the current frame.
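A rough C++ rendering of this per-frame record may look as follows. The field names follow the text above, while the concrete types are assumptions.

```cpp
#include <cstddef>
#include <cstdint>

// Sketch of the ES frame buffer record from fig. 7 (types are assumed).
struct EsFrameBuffer {
    int      type;                // data type of the frame (video or audio)
    int      track_id;            // ID of the TrackBuffer the frame belongs to
    uint8_t* data_ptr;            // payload of the current frame
    size_t   data_size;           // payload size in bytes
    uint8_t* side_data_ptr;       // side data: StreamID, SPS, PPS, ...
    size_t   side_data_size;      // side data size in bytes
    bool     is_key_frame;        // whether the current frame is a key frame
    uint8_t* decrypt_config_ptr;  // decryption configuration of the frame
    int64_t  time_info;           // time information (e.g., dwell time)
    int64_t  decode_timestamp;    // DTS used to order frames in the TrackBuffer
    int      config_id;           // configuration ID of the frame
};
```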
For video stream data, the data size of an I frame is typically several hundred KB to 1 MB, and the data sizes of P and B frames are typically several KB to several tens of KB; the data size of each frame of audio stream data is typically several hundred bytes to several KB. The memory consumption of the SourceBuffer is therefore mainly reflected in the memory units corresponding to the ES frames, and the memory sizes occupied by the individual memory units may be the same or different. Assuming a frame rate of 30 fps (frames per second), if one minute of audio and video data is preloaded, the SourceBuffer includes 1800 ES frame buffers corresponding to video frames and 1800 ES frame buffers corresponding to audio frames; that is, the system needs to dynamically allocate a total of 3600 memory units corresponding to ES frame buffers for the SourceBuffer to store the preloaded data.
The system generally allocates memory dynamically according to a header/payload/pad structure, where the payload is the effective memory space that needs to be dynamically allocated, the header stores information related to managing the payload, and the pad is a hole created to achieve memory alignment; the header and the pad are additional memory overhead produced during allocation. The smaller the memory allocated by the system, the larger the relative proportion of this overhead. For example, if 100 bytes of memory are allocated, the actual physical memory consumption is 16 bytes (header) + 100 bytes (payload) + 12 bytes (pad) = 128 bytes, so the additional memory overhead is about 22% of the memory consumption; if 10000 bytes of memory are allocated with the same 28-byte overhead, the overhead is only about 0.3% of the memory consumption.
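A tiny worked example of this overhead calculation, using the 16-byte header and 12-byte pad from the example above (real header and pad sizes are allocator- and platform-dependent):

```cpp
#include <cstdio>
#include <initializer_list>

int main() {
    const int header = 16, pad = 12;             // example values from the text
    for (int payload : {100, 10000}) {
        int total = header + payload + pad;      // actual physical memory used
        double overhead = 100.0 * (header + pad) / total;
        std::printf("payload=%5d total=%5d overhead=%.2f%%\n",
                    payload, total, overhead);   // ~21.88% vs ~0.28%
    }
    return 0;
}
```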
Since ES data is usually hundreds to thousands of bytes, the ES frame buffers allocated by the system are small memory units; in other words, small memory is allocated in a fragmented manner, so each allocation of a small memory unit incurs relatively large additional memory overhead, resulting in low memory utilization and increased memory consumption.
In order to solve the above technical problem, fig. 8 is a schematic diagram of a memory structure based on large memory blocks according to an embodiment of the present application. Referring to view (a) in fig. 8, a TrackBuffer includes a plurality of (assumed to be P) Buffer Bulks (memory blocks), where a Buffer Bulk is a memory block with a larger memory space whose size is configurable. For example, a Buffer Bulk storing video frame data may be configured with 5 MB of memory, and a Buffer Bulk storing audio frame data may be configured with 512 KB of memory.
Referring to view (b) in fig. 8, each Buffer Bulk includes a plurality of ES frame buffers linked in series, i.e., the Buffer Bulk may be regarded as a linked list made up of multiple ES frame buffers, and the Buffer Bulk extracts and stores the common attribute information of these ES frame buffers. For example, the Type and Track_ID of all ES frames under the same track are identical, so the common attribute information includes, but is not limited to, Type and Track_ID.
In this way, multiple ES frame buffers are stored continuously and compactly inside one Buffer Bulk, which reduces the extra memory overhead produced when each ES frame buffer is allocated individually in a fragmented manner. Each Buffer Bulk stores only one copy of the common attribute information of its ES frame buffers; as shown in view (c) in fig. 8, each ES frame buffer no longer needs to repeatedly store common attributes such as Type and Track_ID, and instead compactly stores only per-frame information such as Data_size, Data, Side_data_size, Side_data, Is_key_frame, Decrypt_config, Time_info, Decode_timestamp, and Config_id, thereby reducing the memory waste caused by storing the common information multiple times.
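The following sketch illustrates this layout: one large allocation per Buffer Bulk, the common Type/Track_ID stored once, and compact per-frame records appended densely. The names, field choices, and offset-based storage are illustrative assumptions.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Compact per-frame entry: no Type/Track_ID, which live in the bulk itself.
struct FrameRecord {
    size_t  data_size;
    size_t  side_data_size;
    bool    is_key_frame;
    int64_t time_info;
    int64_t decode_timestamp;
    int     config_id;
    size_t  data_offset;  // where the payload sits inside the bulk storage
};

struct BufferBulk {
    int type;                         // common attribute, stored once per bulk
    int track_id;                     // common attribute, stored once per bulk
    std::vector<uint8_t> storage;     // one big allocation, e.g. 5 MB for video
    std::vector<FrameRecord> frames;  // densely packed frame memory units

    BufferBulk(int t, int id, size_t capacity) : type(t), track_id(id) {
        storage.reserve(capacity);
    }

    // Appends one frame; returns false when the bulk cannot hold it (full).
    bool Append(FrameRecord meta, const uint8_t* data, size_t n) {
        if (storage.size() + n > storage.capacity()) return false;
        meta.data_offset = storage.size();
        meta.data_size = n;
        storage.insert(storage.end(), data, data + n);
        frames.push_back(meta);
        return true;
    }
};
```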
Fig. 9 is a schematic diagram of a system processing architecture according to an embodiment of the present application. As shown in fig. 9, the system processing architecture may include a media asset application, a browser application, and a hardware platform. The media asset application is a network application supporting the MSE specification, through which the user plays media assets and film sources of interest. The browser application includes, but is not limited to, an HTML module, an MSE extension module, an HTTP module, and a media player module. The hardware platform includes, but is not limited to, audio decoders, video decoders, a display, and a sound player. The media asset application creates a MediaSource and SourceBuffers through the MSE extension module. During playback, the media asset application requests to download or preload media segments from the server through the HTTP module and sends them to the SourceBuffer object created by the browser application. The SourceBuffer calls modules such as the demultiplexer and the stream parser to parse the media segments into ES data, and writes the ES data into Buffer Bulks, so that the video decoder can read video frame data and related configuration information from a Buffer Bulk and send the decoded video frame data to the display, and the audio decoder can read audio frame data and related configuration information from a Buffer Bulk and send the decoded audio frame data to the sound player, enabling the display and the sound player to complete audio and picture playback synchronously.
In some implementations, to enable the controller 250 to accurately identify the running condition of each Buffer Bulk, a first state identifier may be set for a Buffer Bulk currently in the unused state (hereinafter referred to as a first memory block), and a second state identifier may be set for a Buffer Bulk in the used state (hereinafter referred to as a second memory block). Alternatively, the second state identifier may be set only for Buffer Bulks in the used state, a Buffer Bulk without the second state identifier being in the unused state by default. In this way, the controller can determine which Buffer Bulks are in use and which are not by querying the state identifiers.
In some implementations, to distinguish between read and write states when using the memory, the second state identifier may further include a first sub-identifier and a second sub-identifier, where the first sub-identifier is used to indicate that the use state of the Buffer Bulk is a write state ("write"), which indicates that there is currently an object (e.g., sourceBuffer) storing ES data into the Buffer Bulk. The second sub-flag is used to indicate that the use state of Buffer Bulk is read, indicating that there is currently an object (e.g., decoder) reading ES data already stored in Buffer Bulk.
In some implementations, the TrackBuffer may include one Buffer Bulk in the writing state (hereinafter referred to as a third memory block) and one Buffer Bulk in the reading state (hereinafter referred to as a fourth memory block). The third memory block and the fourth memory block may be the same Buffer Bulk or different ones. If they are the same Buffer Bulk, that Buffer Bulk is in a read-write state, that is, data is written into it at one end while data is read from it at the other.
In some implementations, since the first memory block is not currently being used, the controller may compress its data to reduce the memory occupied and consumed by the SourceBuffer. If a compressed Buffer Bulk is switched to the used state, the controller first decompresses it and then performs the read or write operation. If the second memory block is currently in the used state, the controller does not compress it, ensuring that it can be used normally. Since the decoder reads only one frame of data from a Buffer Bulk every several tens of milliseconds, the access frequency of the SourceBuffer memory is relatively low, so the controller's performance can afford to run compression and decompression routines based on the use state of each Buffer Bulk.
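One way to encode these state identifiers and the compress-only-when-unused rule is sketched below; the flag encoding and method names are assumptions, and the compression itself is elided to a flag.

```cpp
#include <cstdint>

enum class BulkState : uint8_t {
    kUnused  = 0,       // first state identifier: unused, may be compressed
    kWriting = 1 << 0,  // first sub-identifier: an object is storing ES data
    kReading = 1 << 1,  // second sub-identifier: an object is reading ES data
};

inline BulkState operator|(BulkState a, BulkState b) {
    return BulkState(uint8_t(a) | uint8_t(b));
}

struct Bulk {
    BulkState state = BulkState::kUnused;
    bool compressed = false;

    void MaybeCompress() {
        // Only compress when no reader or writer is attached.
        if (state == BulkState::kUnused) compressed = true;
    }
    void EnsureUsable(BulkState use) {
        if (compressed) compressed = false;  // decompress before any use
        state = state | use;
    }
};
```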
In some implementations, based on the memory allocation structure of large memory blocks, the embodiments of the present application further provide the following processing logic, including but not limited to: the media asset application adding media segments to the SourceBuffer, the decoder reading audio frames or video frames from the SourceBuffer, and SourceBuffer memory reclamation (i.e., how to release memory that is no longer needed).
Fig. 10 is a schematic diagram of a first memory processing method according to an embodiment of the present application, by which the media asset application adds media segments to the SourceBuffer. As shown in fig. 10, the method includes the following program steps that the controller is configured to perform:
step S101, in response to the operation of playing the target media asset, media fragment data of the target media asset are acquired from the server.
Step S102, analyzing the media fragment data to obtain a basic data stream. Wherein the base data stream comprises video stream data and audio stream data, each of which is processed separately.
Step S103, according to the media type and the track ID of the basic data stream, searching a first target track buffer corresponding to the basic data stream in the source buffer.
Step S104, a first target memory block in a writing state is searched in a first target track buffer.
Step S105, determine whether there is a first target memory block. If the first target memory block is not present, executing step S106; if there is a first target memory block, step S107 is performed.
Step S106, creating a new memory block in the first target track buffer, and setting the new memory block to be in a writing state to obtain the first target memory block.
In step S107, according to the basic data stream, frame memory units are written in the first target memory block, and each frame memory unit stores one frame data included in the basic data stream.
After the first target memory block, i.e., a Buffer Bulk object into which data can be written, is found or created, the controller can write the basic data stream, which comprises a plurality of continuous frames, into the first target memory block frame by frame, each frame mapping to one frame memory unit (i.e., an ES frame buffer), referring to fig. 8. The first target memory block thus contains the frame memory units mapped by the frames of the basic data stream, and these frame memory units are distributed sequentially, continuously and compactly, reducing memory consumption and improving memory utilization.
When the basic data stream is written into a Buffer Bulk, the Buffer Bulk may be left partially filled or may run out of space. If the Buffer Bulk is fully written, its memory space is insufficient; at this time, a new Buffer Bulk can be created to expand the memory of the TrackBuffer so that more data can be accommodated and stored.
Step S108, judging whether the first target memory block is fully written.
In some implementations, the controller may compare the amount of stored data with the capacity of the first target memory block; if the amount of stored data reaches the capacity, the first target memory block is fully written, otherwise it is not. Alternatively, the controller may obtain the remaining available memory of the first target memory block, and if the remaining available memory is less than a preset threshold (e.g., zero), the memory space of the first target memory block is insufficient.
If the first target memory block is fully written, step S109 and then step S1010 are performed; if it is not fully written, the remaining frame data continues to be written as in step S107.
In step S109, the writing state set for the first target memory block is canceled.
In step S1010, it is determined whether the first target memory block is currently in a read state.
If the first target memory block is not currently in the writing state and is also not in the reading state, i.e., the first target memory block is not in use, steps S1011 and S1012 are performed.
If the first target memory block is not currently in the writing state but is currently in the reading state, the basic data stream stored in it is being read and used, for example, the video decoder is reading and decoding the buffered video stream data; in this case the first target memory block is not compressed, and step S1012 is performed.
In step S1011, the first target memory block is set to be in an unused state, and the first target memory block is compressed.
Because the first target memory block is not currently used, the controller can perform data compression on the first target memory block to reduce the memory occupied and consumed by the SourceBuffer.
Step S1012, creating a new memory block in the first target track buffer, setting the new memory block to be in a writing state, and continuing to execute the memory writing process.
After step S1012, the basic data stream that the first target memory block can no longer accommodate may be imported into the newly created memory block, following the memory writing process illustrated in steps S107 to S1012, until the basic data stream is completely buffered.
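Steps S101 to S1012 condense into the sketch below. The types, the default capacity, and the boolean flags are simplified assumptions, and compression is again modeled as a flag.

```cpp
#include <cstdint>
#include <memory>
#include <vector>

struct Frame { std::vector<uint8_t> data; };

struct Bulk {
    bool writing = false, reading = false, compressed = false;
    size_t used = 0, capacity;
    std::vector<Frame> frames;  // frame memory units
    explicit Bulk(size_t cap) : capacity(cap) {}
};

struct Track {
    std::vector<std::unique_ptr<Bulk>> bulks;   // ordered by write time
    size_t bulk_capacity = 5 * 1024 * 1024;     // e.g. 5 MB for video

    Bulk* WritableBulk() {                      // S104: find the writing bulk
        for (auto& b : bulks) if (b->writing) return b.get();
        return nullptr;
    }
    Bulk* NewWritableBulk() {                   // S106/S1012: create and mark
        bulks.push_back(std::make_unique<Bulk>(bulk_capacity));
        bulks.back()->writing = true;
        return bulks.back().get();
    }
};

void WriteStream(Track& track, const std::vector<Frame>& es) {
    Bulk* bulk = track.WritableBulk();          // S103/S104
    if (!bulk) bulk = track.NewWritableBulk();  // S105/S106
    for (const Frame& f : es) {
        if (bulk->used + f.data.size() > bulk->capacity) {  // S108: full
            bulk->writing = false;                          // S109
            if (!bulk->reading) bulk->compressed = true;    // S1010/S1011
            bulk = track.NewWritableBulk();                 // S1012
        }
        bulk->frames.push_back(f);                          // S107
        bulk->used += f.data.size();
    }
}
```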
Fig. 11 is a schematic diagram of a second memory processing method according to an embodiment of the present application, by which a decoder reads a basic data stream from a SourceBuffer. As shown in fig. 11, the method includes the following program steps that the controller is configured to perform:
step S111, according to the media type and track ID of the base data stream, a second target track buffer corresponding to the base data stream is searched in the source buffer.
In step S112, the second target memory block currently in the read state is searched in the second target track buffer.
In step S113, the control decoder reads the buffered base data stream from the second target memory block, and performs decoding processing on the base data stream.
In step S114, it is determined whether the second target memory block is completely read.
When the basic data stream is read from a Buffer Bulk, the Buffer Bulk may likewise be fully read or only partially read. In some implementations, when reading the basic data stream stored in a Buffer Bulk, the decoder may read the frame memory units included in the Buffer Bulk in sequence. The controller may set a read state for each frame memory unit; for example, the read state includes an unread state and a read state.
In some implementations, after the decoder reads the data stored in one frame memory unit, the controller may set that frame memory unit to the read state and determine whether, after the current frame memory unit, there is a next frame memory unit in the second target memory block that is in the unread state. If there is, the second target memory block has not been fully read, and step S113 may continue. If there is not, the second target memory block has been fully read and no data is newly written (i.e., the second target memory block is not currently in a writing state), and step S115 is performed.
In step S115, the second target memory block is set to be in an unused state, and the second target memory block is compressed.
In step S116, a third target memory block located at a position subsequent to the second target memory block is searched in the second target track buffer.
In step S117, it is determined whether the third target memory block is in a use state.
If the third target memory block is in the unused state, indicating that the third target memory block has been previously compressed, step S118 is performed. If the third target memory block is in the use state (including the write state and/or the read state), indicating that the third target memory block is not currently compressed, step S119 is performed.
In step S118, the third target memory block is decompressed, ensuring that it can perform normal read and write operations.
In step S119, the third target memory block is set in a read state.
At this time, the third target memory block may be in the read state only, or in both the read state and the write state (i.e., read-write state).
In step S1110, the control decoder reads the buffered base data stream from the third target memory block, and performs decoding processing on the base data stream.
After step S1110, the data stored in the third target memory block may be read, following the memory read flow illustrated in steps S113 to S1110, until the data is completely read.
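Steps S111 to S1110 can be sketched in the same simplified model; the per-bulk read cursor and the helper names are assumptions.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

struct Bulk {
    bool reading = false, writing = false, compressed = false;
    std::vector<int> frames;  // frame memory units (payloads elided)
    size_t next_unread = 0;   // read position inside the bulk
};

struct Track { std::vector<std::unique_ptr<Bulk>> bulks; };

// Hands the decoder the next frame, or nullptr when no data is ready.
int* ReadNextFrame(Track& track) {
    for (size_t i = 0; i < track.bulks.size(); ++i) {
        Bulk* b = track.bulks[i].get();
        if (!b->reading) continue;                    // S112: find reading bulk
        if (b->next_unread < b->frames.size())
            return &b->frames[b->next_unread++];      // S113: frame to decoder
        if (b->writing) return nullptr;               // more data may arrive
        b->reading = false;                           // S115: mark unused
        b->compressed = true;                         //       and compress
        if (i + 1 == track.bulks.size()) return nullptr;
        Bulk* next = track.bulks[i + 1].get();        // S116: following bulk
        if (next->compressed) next->compressed = false;  // S117/S118
        next->reading = true;                         // S119: set read state
        return ReadNextFrame(track);                  // S1110: keep reading
    }
    return nullptr;
}
```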
Fig. 12 is a schematic diagram of a third memory processing method according to an embodiment of the present application, by which SourceBuffer memory reclamation is implemented to release memory that is not occupied. As shown in fig. 12, the method includes the following program steps that the controller is configured to perform:
step S121, a memory reclamation process is started, and a third target track buffer corresponding to the data to be cleaned is searched in the source buffer according to the media type and track ID of the data to be cleaned.
In some implementations, the controller may initiate the memory reclamation procedure after detecting that all frames included in a group of pictures have been read. A Group Of Pictures (GOP) comprises a group of consecutive picture frames. Once the frames in the group of pictures have been read, decoded and played, memory reclamation is performed on the frame memory units mapped by the picture frames included in the data to be cleaned, thereby releasing the memory occupied by data that is no longer needed.
Step S122, searching a fourth target memory block from the third target track buffer according to the preset time interval.
The preset time interval may be determined according to the start time (GCTime_start) and the end time (GCTime_end) of the memory reclamation, that is, the preset time interval is [GCTime_start, GCTime_end]. GCTime_start and GCTime_end may be set according to the memory cleaning requirement, for example, setting GCTime_start to the playing start time of the played group of pictures and GCTime_end to its playing end time.
The TrackBuffer may also record the generation time (hereinafter referred to as the first time) of each ES frame buffer, where the first time characterizes when the frame data was stored into the ES frame buffer. The target objects of memory reclamation can therefore be determined by searching for the ES frame buffers whose first time falls within the preset time interval.
After the third target track buffer is located, a fourth target memory block is searched for in it. The fourth target memory block comprises at least one frame memory unit to be cleaned, the first time of which falls within the preset time interval. Suppose the fourth target memory block includes Q frame memory units to be cleaned, with Q less than or equal to S, where Q represents a first number, i.e., the number of frame memory units to be cleaned in the fourth target memory block, and S represents a second number, i.e., the total number of frame memory units in the fourth target memory block.
Step S123, determining whether the first number is equal to the second number.
If Q = S, all S frame memory units included in the fourth target memory block fall within the memory reclamation range indicated by the preset time interval, and step S124 is performed. If Q ≠ S, i.e., Q < S, step S125 is performed.
In step S124, the fourth target memory block is deleted in the third target track buffer.
In step S125, it is determined whether the fourth target memory block is in a use state.
If the fourth target memory block is in the use state (including the write state and/or the read state), it has not been compressed, and step S127 is performed. If the fourth target memory block is in the unused state, it has previously been compressed, and step S126 is performed.
In step S126, the fourth target memory block is decompressed, and the fourth target memory block is set to be in a use state.
After the memory reclamation process is started, a memory block that is decompressed at this point is not necessarily in a writing or reading state. For example, the fourth target memory block may be in neither a reading nor a writing state, and decompression is performed only so that the frame memory units to be cleaned in the fourth target memory block can be accurately searched for or re-determined. For ease of distinction, the use state may therefore further include a memory-cleaning state, so that in step S126, after the fourth target memory block is decompressed, it may be set to the memory-cleaning state.
After the fourth target memory block is decompressed, it is in a usable state: if the fourth target memory block is not fully written, the controller may continue writing elementary stream data into it, and/or, if the fourth target memory block has not been fully read, the controller may control the decoder to read the required target data from it.
After the fourth target memory block is decompressed, the frame memory units to be cleaned in it may also change. For example, new frame memory units may have been added to the fourth target memory block, or the working state of the original frame memory units may have changed. In particular, the user may adjust the playback progress of the media asset through a seek, causing the decoder to request the SourceBuffer to read again the data cached in some frame memory units of the fourth target memory block; those frame memory units should then not be cleaned, so that the media application does not have to download already-played media content from the server again.
Step S127, determining the frame memory unit to be cleaned from the fourth target memory block again according to the preset time interval.
In some implementations, when performing step S127, the controller may readjust GCTime_start and GCTime_end to update the preset time interval, and re-determine the frame memory units to be cleaned based on the current state of the fourth target memory block, for example, based on the frame memory units it now includes and the working state of each frame memory unit.
In some implementations, when performing step S127, the preset time interval originally used in step S122 may instead be kept unchanged, and only the frame memory units to be cleaned in the current fourth target memory block are re-determined.
In step S128, the frame memory unit to be cleaned is deleted from the fourth target memory block.
In step S129, the fourth target memory block is set to be in an unused state, and the fourth target memory block is compressed, so that the memory reclamation process ends.
After the frame memory units meeting the preset time interval and the memory reclamation requirement are deleted, some frame memory units may still remain in the fourth target memory block. At this point the fourth target memory block is in none of the writing, reading, or memory-cleaning states, so it can be set to the unused state and compressed, reducing memory consumption, and the memory reclamation process ends.
In the embodiments of the present application, large memory blocks are allocated based on the SourceBuffer, and a large memory block is used to store, continuously and densely, the plurality of frame memory units corresponding to an ES data stream. This reduces the extra memory overhead incurred by allocating many small, fragmented memories. Memory consumption is further reduced by merging the common attribute information of the frames of the ES data within the large memory block and by compressing a memory block while it is not in use, which improves memory utilization. In particular, for MSE media preloading scenarios, memory can be used and reclaimed rationally, ensuring the running performance of the media application and the system. Based on this memory architecture, the present application provides flow logic for memory writing, memory reading, and memory reclamation, achieving accurate and efficient handling of each link in the use of the memory.
In some implementations, the embodiments of the present application further provide a memory configured to store the SourceBuffer constructed by a browser application when a user plays media assets. The memory may be controlled by the controller to cooperatively complete memory management and maintenance of the SourceBuffer, TrackBuffer, and Buffer Bulk, implementing the memory structure construction, memory read/write, and memory reclamation and release logic based on fig. 8, which is not described again here.
In some implementations, the embodiments of the present application also provide a computer storage medium, which may store a program. When the computer storage medium is configured in the display device 200 and the program is executed, the program may include the program steps of the memory processing method that the controller 250 is configured to perform in the above embodiments. The computer storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solutions of the present application. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced by equivalents, and such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the disclosure and to enable others skilled in the art to best utilize the embodiments.

Claims (10)

1. A display device, characterized by comprising:
the communicator is used for being in communication connection with a server so as to download media asset data from the server;
a controller for performing:
responding to media asset playing operation, acquiring media asset data from a server, and constructing a buffer zone, wherein the buffer zone comprises at least one memory block;
analyzing the media resource data to obtain a data stream, wherein the data stream comprises a plurality of continuously distributed frame data, and the frame data are video frame data or audio frame data;
and correspondingly generating a plurality of frame memory units for storing the frame data in the memory block according to the data stream, wherein one frame memory unit maps one frame data, the memory block independently stores common attribute information of the plurality of frame data included in the data stream, and the frame memory units do not store the common attribute information.
2. The display device of claim 1, wherein the controller constructs a buffer comprising:
responding to media resource playing operation, constructing a media source corresponding to the media resource, and constructing at least one source buffer area mapped by the media source;
constructing at least one track buffer in the source buffer, the track buffer for storing data of a single media type;
and constructing at least one memory block in the track buffer.
3. The display device of claim 2, wherein the controller is further configured to perform:
identifying the working state of the memory block according to the state identifier of the memory block, wherein the working state comprises a used state and an unused state, and the used state comprises a writing state and a reading state;
if the memory block is in an unused state, compressing the memory block;
and if the memory block is in a use state, not compressing the memory block.
4. A display device according to claim 3, wherein the controller generates a plurality of frame memory units for storing the frame data in the memory block, respectively, comprising:
searching a first target track buffer corresponding to the data stream in a source buffer according to the media type and track ID;
searching a first target memory block in a writing state in the first target track buffer area;
if the first target memory block is found, generating a plurality of frame memory units in the first target memory block;
if the first target memory block is not found, creating a first newly-added memory block in the first target track buffer, setting a first sub-identifier for the first newly-added memory block, and taking the first newly-added memory block as the first target memory block; the first sub-identifier is used for indicating that the memory block is in a writing state.
5. The display device of claim 4, wherein after correspondingly generating a plurality of frame memory units in the memory block for storing the frame data, the controller is further configured to perform:
if the first target memory block is fully written, canceling the first sub-identifier of the first target memory block;
if the first target memory block is not in a read-out state, setting a first state identifier for the first target memory block, and compressing the first target memory block; the first state identifier is used for indicating that the memory block is in an unused state;
if the first target memory block is in a read state, or after the first target memory block is compressed, creating a second newly-added memory block in the first target track buffer, setting the first sub-identifier for the second newly-added memory block, and continuing to store frame data of the data stream into the second newly-added memory block.
6. The display device of claim 5, wherein after correspondingly generating a plurality of frame memory units in the memory block for storing the frame data, the controller is further configured to perform:
searching a second target track buffer corresponding to the data stream in a source buffer according to the media type and the track ID;
searching a second target memory block in a readout state in the second target track buffer area;
controlling a decoder to read the cached data stream from the second target memory block, and decoding the data stream;
and if the second target memory block is read, setting the first state identifier for the second target memory block, and compressing the second target memory block.
7. The display device of claim 6, wherein after compressing the second target memory block, the controller is further configured to perform:
searching a third target memory block located after the second target memory block in the second target track buffer area;
if the third target memory block is in an unused state, decompressing the third target memory block;
if the third target memory block is in a use state, or after the third target memory block is decompressed, setting a second sub-identifier for the third target memory block, where the second sub-identifier is used to indicate that the memory block is in a readout state;
and controlling a decoder to read the cached data stream from the third target memory block continuously, and decoding the data stream.
8. The display device of claim 5, wherein the controller is further configured to perform:
searching a third target track buffer corresponding to the data to be cleaned in the source buffer according to the media type and the track ID of the data to be cleaned;
searching a fourth target memory block from the third target track buffer according to a preset time interval; the fourth target memory block comprises Q frame memory units to be cleaned, which are determined according to the preset time interval, wherein Q is less than or equal to S; q is a first number and used for representing the number of frame memory units to be cleaned in the fourth target memory block; s is a second number, used for representing the total number of frame memory units included in the fourth target memory block;
and deleting the fourth target memory block in the third target track buffer if the first number is equal to the second number.
9. The display device of claim 8, wherein the controller is further configured to perform:
if the first number is smaller than the second number and the fourth target memory block is in an unused state, decompressing the fourth target memory block and setting a second state identifier for the fourth target memory block, wherein the second state identifier is used for indicating that the memory block is in a used state;
when the fourth target memory block is in a use state, determining a frame memory unit to be cleaned from the fourth target memory block again according to the preset time interval;
deleting the frame memory unit to be cleaned from the fourth target memory block;
and setting the first state identifier for the fourth target memory block, and compressing the fourth target memory block.
10. A memory processing method, comprising:
responding to media asset playing operation, acquiring media asset data from a server, and constructing a buffer zone, wherein the buffer zone comprises at least one memory block;
analyzing the media resource data to obtain a data stream, wherein the data stream comprises a plurality of continuously distributed frame data, and the frame data are video frame data or audio frame data;
and correspondingly generating a plurality of frame memory units for storing the frame data in the memory block according to the data stream, wherein one frame memory unit maps one frame data, the memory block independently stores common attribute information of the plurality of frame data included in the data stream, and the frame memory units do not store the common attribute information.