CN117278796B - Multi-area image data display method, device, playing equipment and storage medium - Google Patents

Multi-area image data display method, device, playing equipment and storage medium

Info

Publication number
CN117278796B
Authority
CN
China
Prior art keywords
image data
playing
target
drm
area
Prior art date
Legal status
Active
Application number
CN202311568545.8A
Other languages
Chinese (zh)
Other versions
CN117278796A (en)
Inventor
刘良
Current Assignee
Shenzhen Fanlian Information Technology Co ltd
Shenzhen Youfang Data Technology Co ltd
Original Assignee
Shenzhen Youfang Data Technology Co ltd
Shenzhen Fanlian Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Youfang Data Technology Co ltd, Shenzhen Fanlian Information Technology Co ltd filed Critical Shenzhen Youfang Data Technology Co ltd
Priority to CN202311568545.8A priority Critical patent/CN117278796B/en
Publication of CN117278796A publication Critical patent/CN117278796A/en
Application granted granted Critical
Publication of CN117278796B publication Critical patent/CN117278796B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/43072Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of multiple content streams on the same device
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440281Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

An embodiment of the invention provides a multi-region image data display method and apparatus, a playback device, and a storage medium, relating to the technical field of video playback. The method comprises: acquiring video data of a video area and decoding the video data with a GPU to obtain first image data; performing play synchronization on the first image data and second image data to obtain target image data; and storing the target image data into the multi-canvas playing area of the corresponding DRM layer, so that the DRM renders and displays the target image data of each playing area in turn. Because the multi-region image data are superimposed and rendered directly through their binding relationships with DRM layers, consumption of CPU and GPU resources is effectively reduced. Combined with the play synchronization strategy for the multi-region image data, the efficiency of image data processing and rendering display is improved, and the smoothness of image playback is further improved. Using multiple canvases also improves rendering smoothness and effectively avoids image tearing when the DRM refreshes the display.

Description

Multi-area image data display method, device, playing equipment and storage medium
Technical Field
The present invention relates to the field of video playing technologies, and in particular, to a method and apparatus for displaying multi-region image data, a playing device, and a storage medium.
Background
With the development of the terminal playback industry and the diversification of application scenarios, playback devices are expected both to meet users' high performance requirements for media playback and to support embedding large artificial-intelligence models in big-data scenarios to provide efficient computing services. Playback devices therefore need to support smooth rendering of multi-region dynamic images while guaranteeing computing power.
However, current playback devices cannot guarantee multi-region image data processing and smooth rendering in many application scenarios. How to ensure smooth playback of multi-region dynamic images on a single screen is therefore a technical problem to be solved.
Disclosure of Invention
Accordingly, an object of the present invention is to provide a multi-region image data display method and apparatus, a playback device, and a storage medium, which superimpose and render multi-region image data directly through binding relationships with DRM layers, thereby improving rendering efficiency.
In order to achieve the above object, the technical solution adopted by the embodiments of the invention is as follows:
In a first aspect, the present invention provides a multi-region image data display method, the method comprising:
Acquiring video data of a video area, and decoding the video data through a GPU to obtain first image data;
performing playing synchronization according to the first image data and the second image data to obtain target image data; the second image data is the image data of the dynamic area;
storing the target image data into a plurality of canvas playing areas of a corresponding DRM layer so that DRM sequentially renders and displays the target image data of each playing area; each DRM layer is created with a corresponding multi-canvas playing area; the multi-canvas playing area includes a plurality of playing areas.
In an optional embodiment, the performing playing synchronization according to the first image data and the second image data to obtain target image data includes:
Calculating to obtain the actual playing time length and the theoretical playing time length of the candidate image data; the candidate image data includes the first image data and the second image data;
And determining target image data in the candidate image data according to the actual playing time length, the theoretical playing time length and the time threshold.
In an alternative embodiment, the calculating to obtain the actual playing duration and the theoretical playing duration of the candidate image data includes:
Acquiring a frame rate according to the first image data;
determining the theoretical playing duration according to the frame sequence number and the frame rate of each candidate image data;
determining the actual playing time length according to the actual playing time and the initial time of each candidate image data; the initial time characterizes a system time for turning on an image display function.
In an optional embodiment, the determining target image data in the candidate image data according to the actual playing duration, the theoretical playing duration and the time threshold includes:
when the difference value obtained by subtracting the theoretical playing time length from the actual playing time length is larger than the time threshold value, discarding the corresponding candidate image data;
When the difference value obtained by subtracting the actual playing time from the theoretical playing time is larger than the time threshold, accelerating the acquisition speed of video data, and determining the corresponding candidate image data as the target image data;
When the absolute value of the difference value between the actual playing time length and the theoretical playing time length is smaller than or equal to the time threshold value, determining the corresponding candidate image data as the target image data;
And when the target image data is the image data of the dynamic region, storing the target image data into a video memory.
In an alternative embodiment, the storing the target image data in the multiple canvas playing areas of the corresponding DRM layer, so that the DRM sequentially renders and displays the target image data of each playing area, including:
Determining a target DRM layer according to the target image data and a layer mapping table; the layer mapping table is used for recording one-to-one binding relationships between the areas to which image data belong and DRM layers;
Determining a target multi-canvas playing area according to the target DRM layer and the canvas mapping table; the canvas mapping table is used for recording the one-to-one correspondence between the DRM layers and the multiple canvas playing areas;
Determining a target playing area in the target multi-canvas playing area according to the target multi-canvas playing area and the updating cursor, and storing the target image data into the target playing area; the updating cursor is used for identifying a playing area of the image data to be updated;
Determining target image data of a playing area currently rendered and displayed by DRM according to the target multi-canvas playing area and the playing cursor; the play cursor is used to identify the play area where the DRM currently needs to render the display.
In an alternative embodiment, the determining the target DRM layer according to the target image data and the layer mapping table includes:
determining a target area identifier according to the target image data; the target area identifier is used for uniquely identifying an area;
When the layer mapping table records the target area identifier, determining the DRM layer corresponding to the target area identifier as the target DRM layer;
when the layer mapping table does not record the target area identifier, determining the target DRM layer among candidate DRM layers; the candidate DRM layers are DRM layers that have not established a binding relationship with any area.
In an alternative embodiment, the determining the target DRM layer in the candidate DRM layers includes:
traversing and comparing the playing format of the candidate DRM layer with the image format of the target image data;
When the playing format which is the same as the image format exists, determining the corresponding candidate DRM layer as the target DRM layer;
when the playing format which is the same as the image format does not exist, determining any candidate DRM layer as the target DRM layer;
and establishing a binding relation between the area of the target image data and the target DRM layer, and recording the binding relation to the layer mapping table.
In a second aspect, the present invention provides a multi-region image data display device, the device comprising:
The acquisition module is used for acquiring video data of the video area and decoding the video data through the GPU to obtain first image data;
The synchronization module is used for performing playing synchronization according to the first image data and the second image data to obtain target image data; the second image data is the image data of the dynamic area;
The processing module is used for storing the target image data into a plurality of canvas playing areas of the corresponding DRM layer so that DRM sequentially renders and displays the target image data of each playing area; each DRM layer is created with a corresponding multi-canvas playing area; the multi-canvas playing area includes a plurality of playing areas.
In a third aspect, the present invention provides a playback device comprising a memory for storing a computer program and a processor for executing the multi-region image data display method according to any one of the preceding embodiments when the computer program is invoked.
In a fourth aspect, the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the multi-region image data display method according to any of the preceding embodiments.
Compared with the prior art, in the multi-region image data display method and apparatus, playback device, and storage medium provided by the embodiments of the invention, video data of a video area is acquired and decoded by a GPU to obtain first image data; play synchronization is performed on the first image data and second image data to obtain target image data; and the target image data is stored into the multi-canvas playing area of the corresponding DRM layer, so that the DRM renders and displays the target image data of each playing area in turn. A binding strategy between the multi-region image data and DRM layers is realized by using the multi-layer resources in DRM, and multi-region, multi-format image data are merged and played in hardware by layer-stacked rendering, which effectively reduces CPU and GPU resource consumption. Combined with the play synchronization strategy for the multi-region image data, the efficiency of image data processing and rendering display is improved, and the smoothness of image playback is further improved. Using multiple canvases also improves rendering smoothness and effectively avoids image tearing when the DRM refreshes the display.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 shows a schematic diagram of a prior art software decoding transcoding scheme.
Fig. 2 shows a schematic diagram of a GPU decoding transcoding scheme in the prior art.
Fig. 3 is a schematic block diagram of a playback device according to an embodiment of the present invention.
Fig. 4 is a schematic flow chart of a multi-region image data display method according to an embodiment of the present invention.
Fig. 5 shows a flow diagram of the sub-steps of step S20 and step S30 of fig. 4.
Fig. 6 shows a flow diagram of the sub-steps of step S201 and step S202 in fig. 5.
Fig. 7 shows a flow diagram of the sub-steps of step S301 in fig. 5.
Fig. 8 is a schematic flow chart of a multi-region image data display method according to an embodiment of the invention.
Fig. 9 is a block diagram of a multi-region image data display device according to an embodiment of the present invention.
Reference numerals: 100 - playback device; 110 - memory; 120 - processor; 130 - communication module; 200 - multi-region image data display device; 201 - acquisition module; 202 - synchronization module; 203 - processing module.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
It is noted that relational terms such as "first" and "second" are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
With the development of the terminal playback industry, application scenarios have diversified: users have high performance requirements for media playback, and large artificial-intelligence models are embedded in big-data scenarios to provide efficient computing services. A playback device therefore needs a computing guarantee that not only supports smooth rendering of multi-region dynamic images but also conserves resources of the graphics processing unit (GPU).
In the prior art, multi-region dynamic composition and playback on one screen is mainly achieved by directly merging image data of the same image format within a layer; image data of different image formats are first converted into the same format and then merged within a layer. To convert image data of different formats into the same format, software decoding and transcoding or GPU decoding and transcoding is generally adopted.
The software decoding and transcoding scheme is shown in fig. 1. Assume that a display device communicatively connected to the playback device displays images of three areas on one screen, including two dynamic areas and one video area. The playback device acquires the media video data and performs software decoding to obtain image data. Software decoding means decoding performed by the central processing unit (CPU), i.e., decompressing the compressed video data with an algorithm such as H.264 or H.265 to obtain image data.
The playback device periodically acquires the image data of the dynamic areas to be displayed, while a playing thread controls the image data of the video area to be displayed. The playback device passes the image data of the three areas to be displayed to the graphical user interface (GUI); the GUI schedules its Update interface to trigger a drawing event, and on receiving the drawing event transcodes the image data into a uniform image format and renders it to the three corresponding areas of the display device, thereby achieving rendered playback.
Because the software decoding and transcoding scheme relies entirely on the rendering of the GUI framework, a large amount of image data must be processed in software, and software transcoding is slow. As a result, GUI rendering and display are inefficient, the refresh rates of the rendered areas are inconsistent, and the occupied CPU resources degrade the display of high-refresh-rate areas, causing stuttering.
The GPU decoding and transcoding scheme is shown in fig. 2, continuing the assumption that the display device communicatively coupled to the playback device includes two dynamic areas and one video area. The playback device acquires the media video data and decodes it in hardware to obtain image data, where hardware decoding means decoding with the GPU. The playback device periodically copies the dynamic-area image data to be displayed from memory to GPU video memory, while the CPU controls a playing thread to acquire the video-area image data to be displayed.
The playback device uses the GPU to transcode the image data of the dynamic areas into the same image format as the image data of the video area, for example transcoding dynamic-area image data in RGB format (image data represented by red, green, and blue color channels) into YUV format (image data represented by a luma component Y and chroma components U and V). Finally, the GPU merges the format-unified image data of the dynamic areas and the video area and renders the result to the display device.
The GPU decoding and transcoding scheme uses the GPU for transcoding, compositing, and rendering of the image data, which effectively reduces memory and video-memory usage. However, processing a large amount of image data on the GPU raises GPU utilization; for a playback device, GPU resources are relatively scarce and costly to expand, so excessive transcoding and compositing should be avoided on the premise of first guaranteeing smooth rendering. In scenarios where the volume of image data varies widely, both software decoding/transcoding and GPU decoding/transcoding adapt poorly, i.e., data processing and rendering performance are significantly affected by the data volume.
On this basis, the multi-region image data display method and apparatus provided by the embodiments of the present invention realize a binding strategy between multi-region image data and DRM layers by using the multi-layer resources in DRM, and merge and play multi-region, multi-format image data in hardware through layer-stacked rendering, which effectively reduces CPU and GPU resource consumption. Combined with the play synchronization strategy for the multi-region image data, the efficiency of image data processing and rendering display is improved, and the smoothness of image playback is further improved. Using multiple canvases also improves rendering smoothness and effectively avoids image tearing when the DRM refreshes the display.
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Referring to fig. 3, fig. 3 is a block diagram illustrating a playback device 100 according to an embodiment of the invention. The playback device 100 may be an electronic device with a video playback function in which the linux system is installed. The playback device 100 includes a memory 110, a processor 120, and a communication module 130. The memory 110, the processor 120, and the communication module 130 are electrically connected directly or indirectly to each other to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines.
Wherein the memory 110 is used for storing programs or data. The memory 110 may be, but is not limited to, random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), etc.
The processor 120 is used to read/write data or programs stored in the memory 110 and perform corresponding functions. For example, the multi-region image data display method disclosed in the above embodiments may be implemented when a computer program stored in the memory 110 is executed by the processor 120.
The communication module 130 is used for establishing a communication connection between the playback device 100 and other communication terminals through a network, and for transceiving data through the network.
It should be understood that the structure shown in fig. 3 is merely a schematic structural diagram of the playback device 100, and that the playback device 100 may further include more or fewer components than those shown in fig. 3, or have a different configuration than that shown in fig. 3. The components shown in fig. 3 may be implemented in hardware, software, or a combination thereof.
The multi-region image data display method and apparatus provided by the embodiments of the invention can be applied to the above playback device. The user configures the video resources of the video area and the image contents of the dynamic areas according to the actual application scenario, and also configures the position and size of the video area and of each dynamic area, ensuring that the areas do not overlap.
When the playback device is powered on and initialized, it reads the configuration file to obtain configuration information, which includes an area identifier, a resource path, an access mode, an area position, and an area size. At the same time, the Direct Rendering Manager (DRM) is used to acquire the layer information of the display device, which includes a layer identifier, the play formats supported by the layer (such as YUV, RGB, or YUV/RGB), and the total number of layers. If the layer information is acquired successfully, it is stored in the layer mapping table and the state of each layer is set to unbound. If acquisition of the layer information fails, a DRM initialization failure is returned and the display device cannot normally display the images of the video area and dynamic areas.
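The patent does not prescribe concrete data structures for the configuration information or the layer mapping table; the following C sketch is one way to hold them, with all struct and field names being illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

typedef enum { FMT_YUV, FMT_RGB, FMT_YUV_OR_RGB } play_format_t;

typedef struct {                 /* one area from the configuration file */
    char     region_id[32];      /* unique identifier of a video or dynamic area */
    char     resource_path[256]; /* where the media or image content comes from */
    char     access_mode[16];
    int32_t  x, y;               /* area position; areas must not overlap */
    uint32_t width, height;      /* area size */
} region_config_t;

typedef struct {                 /* one entry of the layer mapping table */
    uint32_t      layer_id;      /* DRM layer identifier */
    play_format_t play_format;   /* play format(s) supported by the layer */
    bool          bound;         /* false: unbound state at initialization */
    char          bound_region[32]; /* region_id once a binding is established */
} layer_entry_t;
```

If querying the DRM layer information fails, no layer_entry_t records are created and the initialization failure described above is reported.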
Referring to fig. 4, fig. 4 is a flow chart illustrating a multi-region image data display method according to an embodiment of the invention, the method includes the following steps:
step S10, obtaining video data of a video area, and decoding the video data through a GPU to obtain first image data.
In the embodiment of the invention, when the system is powered on, the playing device starts a video decoding thread, the video decoding thread acquires video data of a video area according to a resource path and an access mode in configuration information, then the GPU is utilized to decode the video data to obtain first image data, and the first image data is stored in a buffer area to be played.
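As a rough illustration of step S10, the sketch below shows a decoding thread that appends decoded frames to the to-be-played buffer. The patent does not name a specific GPU decoding API, so decode_next_frame() is a hypothetical wrapper, and the frame, buffer, and path names are assumptions.

```c
#include <pthread.h>
#include <stddef.h>
#include <stdint.h>

typedef struct frame {           /* one frame of first image data */
    void        *pixels;
    uint32_t     frame_no;       /* frame sequence number from PTS and time base */
    double       frame_rate;
    struct frame *next;
} frame_t;

typedef struct {                 /* buffer of frames waiting to be played */
    frame_t        *head, *tail;
    pthread_mutex_t lock;
} play_buffer_t;

/* hypothetical GPU decode step; returns NULL at end of stream or on error */
extern frame_t *decode_next_frame(const char *resource_path);

static void *video_decode_thread(void *arg)
{
    play_buffer_t *buf = (play_buffer_t *)arg;
    const char *path = "rtsp://example/stream";    /* resource path from the configuration */

    for (frame_t *f; (f = decode_next_frame(path)) != NULL; ) {
        f->next = NULL;
        pthread_mutex_lock(&buf->lock);            /* append to the to-be-played buffer */
        if (buf->tail) buf->tail->next = f; else buf->head = f;
        buf->tail = f;
        pthread_mutex_unlock(&buf->lock);
    }
    return NULL;
}
```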
Step S20, playing synchronization is carried out according to the first image data and the second image data, and target image data are obtained.
Wherein the second image data is image data of a dynamic region.
In the embodiment of the invention, the image data of a dynamic area is not compressed; the original image data is drawn directly into memory. After the image display function is started, the first image data and the second image data are obtained frame by frame from the to-be-played buffer and from memory respectively, the playing times of the first image data of the video area and of the second image data of each dynamic area are synchronized, and the first and second image data that meet the synchronous playing condition are determined as target image data.
And step S30, storing the target image data into the multi-canvas playing area of the corresponding DRM layer so that DRM sequentially renders and displays the target image data of each playing area.
Wherein, each DRM layer is created with a corresponding multi-canvas playing area; the multi-canvas playing area includes a plurality of playing areas.
In the embodiment of the invention, a multi-canvas playing area is created for each DRM layer when the playback device starts, and each multi-canvas playing area includes a plurality of playing areas. The target image data obtained by the play synchronization processing is updated and stored cyclically, in turn, into the playing areas of the corresponding DRM layer; the target image data of the updated playing areas is then rendered in turn according to the playing frequency and displayed in the corresponding areas of the display device communicatively connected to the playback device.
In summary, in the multi-region image data display method provided by the embodiment of the invention, video data of a video area is acquired and decoded by the GPU to obtain first image data; play synchronization is performed on the first image data and the second image data to obtain target image data; and the target image data is stored into the multi-canvas playing area of the corresponding DRM layer, so that the DRM renders and displays the target image data of each playing area in turn. A binding strategy between the multi-region image data and DRM layers is realized by using the multi-layer resources in DRM, and multi-region, multi-format image data are merged and played in hardware by layer-stacked rendering, which effectively reduces CPU and GPU resource consumption. Combined with the play synchronization strategy for the multi-region image data, the efficiency of image data processing and rendering display is improved, and the smoothness of image playback is further improved. Using multiple canvases also improves rendering smoothness and effectively avoids image tearing when the DRM refreshes the display.
Optionally, in practical application, the image data to be rendered and displayed meeting the synchronous playing condition can be screened according to the playing time length. Referring to fig. 5, the substeps of step S20 in fig. 4 may include:
step S201, calculating to obtain an actual playing duration and a theoretical playing duration of the candidate image data.
Wherein the candidate image data includes first image data and second image data.
Step S202, determining target image data in the candidate image data according to the actual playing time length, the theoretical playing time length and the time threshold.
In the embodiment of the invention, the first image data and the second image data of the current frame are used as candidate image data, and the actual playing time length and the theoretical playing time length of each candidate image data are calculated. And checking according to the difference value of the playing time length and the theoretical playing time length and the time threshold value, judging whether synchronous playing conditions are met, and determining candidate image data meeting the synchronous playing conditions as target image data.
Alternatively, in practical application, to ensure that the playing of the image data in multiple areas is synchronous, the system time for starting the image display function may be used as the initial time of the image data in each area, and the frame rate of the image data in the video area may be used as the frame rate of the image data played in each area. Referring to fig. 6, the substeps of step S201 in fig. 5 may include:
in step S2011, a frame rate is acquired according to the first image data.
In the embodiment of the invention, in order to ensure that the image data of each region can be synchronously rendered and displayed, the frame rate is acquired by utilizing the image data of the video region, and the frame rate of the first image data is used as the frame rate of each candidate image data in the current frame. Wherein the frame rate characterizes the number of frames of processed image data per unit time. If the first image data of the current frame is acquired, acquiring a frame rate from the first image data of the current frame; and if the first image data of the current frame is not acquired, acquiring the frame rate in the first image data of the historical frame.
Step S2012, determining theoretical playing time length according to the frame sequence number and the frame rate of each candidate image data.
In the embodiment of the present invention, the frame sequence number of each candidate image data is acquired. The frame sequence number of the first image data is calculated from the play timestamp and the time base of the video frame and is recorded in the first image data. For each dynamic area, the frame sequence number can simply be incremented each time second image data is acquired. For example, if the playback device records the frame sequence number of the first dynamic area as n, then when it acquires the second image data of the current frame of the first dynamic area, it sets the frame sequence number of that second image data to n+1 and updates the recorded frame sequence number of the first dynamic area to n+1.
As one embodiment, the theoretical playing duration is calculated from the frame sequence number and the frame rate of each candidate image data. The calculation formula of the theoretical playing duration is:
T_theory = n / f
where T_theory is the theoretical playing duration, n is the frame sequence number of the candidate image data of the current frame, and f is the frame rate of the candidate image data of the current frame.
In step S2013, an actual playing duration is determined according to the actual playing time and the initial time of each candidate image data.
Wherein the initial time characterizes a system time for turning on the image display function.
In the embodiment of the invention, to play the multi-region image data, an initial synchronization time, i.e., an initial synchronization timestamp, is preset, and the actual playing duration of the candidate image data of each area is calculated from it.
In one possible implementation, when the playback device is powered on and initialized, a play control thread is started for data interaction; for example, the video playing module in the playback device interacts with the application layer through callbacks, and the current system time at which the play control thread is started is taken as the initial time. The playing times of the image data of all areas are measured relative to this initial time, and the system time at which candidate image data is acquired is taken as its actual playing time. The actual playing duration is then calculated from the actual playing time and the initial time of each candidate image data.
The calculation formula of the actual playing duration is:
T_actual = t_play - t_init
where T_actual is the actual playing duration, t_play is the actual playing time of the candidate image data of the current frame, and t_init is the initial time at which the image display function was started.
Therefore, the embodiment of the invention adopts a play synchronization scheme that combines a reference clock with frame sequence numbers to decouple the play synchronization of the image data of each area; it synchronizes in real time and the policy is simple to maintain.
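A minimal C sketch of the two duration formulas above; the clock source and function names are assumptions, since the patent only fixes the formulas themselves.

```c
#include <stdint.h>
#include <time.h>

/* monotonic system time in seconds, used for t_init and t_play */
static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* T_theory = n / f: the frame rate of the video-area (first) image data is
 * shared by all candidate image data of the current frame */
static double theoretical_duration(uint32_t frame_no, double frame_rate)
{
    return frame_no / frame_rate;
}

/* T_actual = t_play - t_init: time at which the candidate image data was
 * acquired minus the time the image display function was started */
static double actual_duration(double t_play, double t_init)
{
    return t_play - t_init;
}
```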
Optionally, in practical application, the playing frequency of playing the multi-area image data is adjusted according to the actual playing time length, so that the playing synchronization of the multi-area image data is realized. Referring to fig. 6, the substeps of step S202 in fig. 5 may include:
in step S2021, when the difference obtained by subtracting the theoretical playing duration from the actual playing duration is greater than the time threshold, the corresponding candidate image data is discarded.
In the embodiment of the invention, the user can set the time threshold according to the required smoothness of image playback, and the duration error is determined from the difference between the actual playing duration and the theoretical playing duration. The calculation formula of the duration error is:
ΔT = T_actual - T_theory
where ΔT is the duration error between the actual playing duration and the theoretical playing duration of the candidate image data of the current frame, T_actual is the actual playing duration, and T_theory is the theoretical playing duration.
The duration error is checked against the preset time threshold to judge whether the candidate image data of the current frame meets the synchronous playing condition.
In one possible implementation, when the duration error is positive and greater than the time threshold, the image data of this area is playing too slowly. To ensure that the images of every area can be played smoothly, the candidate image data of the current frame of this area is discarded, and subsequent candidate image data of the area continues to be processed until the area meets the synchronous playing condition.
In step S2022, when the difference obtained by subtracting the actual playing time from the theoretical playing time is greater than the time threshold, the video data acquiring speed is increased, and the corresponding candidate image data is determined as the target image data.
In another possible implementation, when the duration error is negative and its absolute value is greater than the time threshold, it indicates that candidate image data cannot be obtained in time and the images of this area need to catch up, i.e., the speed of acquiring video data of the video area and decoding it with the GPU is increased until the synchronous playing condition is met. At the same time, the candidate image data of the current frame is determined as target image data.
In step S2023, when the absolute value of the difference between the actual playing duration and the theoretical playing duration is less than or equal to the time threshold, the corresponding candidate image data is determined as the target image data.
In another possible implementation, when the absolute value of the time length error does not exceed the time threshold, the corresponding candidate image data is considered to satisfy the synchronous play condition. Candidate image data of the current frame is determined as target image data.
In step S2024, when the target image data is the image data of the dynamic region, the target image data is saved in the video memory.
In the embodiment of the invention, the image data of the video area is stored in the buffer area to be played after being decoded, and belongs to video memory data. And the image data of the dynamic region are stored in the memory, belonging to the memory data. After the target image data is acquired, the target image data of the dynamic region is copied into the video memory from the memory, so that the target image data of each region is stored into a multi-canvas playing area corresponding to the DRM layer for subsequent rendering.
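The three checks of steps S2021–S2023 and the copy of step S2024 can be sketched as follows; the hook functions speed_up_acquisition() and copy_to_video_memory() are hypothetical names for the actions the patent describes.

```c
#include <stdbool.h>

typedef enum { SYNC_DROP, SYNC_PLAY } sync_action_t;

extern void speed_up_acquisition(void);      /* fetch and decode video data faster */
extern void copy_to_video_memory(void);      /* dynamic-area data: memory -> video memory */

static sync_action_t check_sync(double t_actual, double t_theory,
                                double threshold, bool is_dynamic_area)
{
    double err = t_actual - t_theory;         /* duration error */

    if (err > threshold)                      /* playing too slowly: discard this frame */
        return SYNC_DROP;

    if (-err > threshold)                     /* data arriving too slowly: speed up     */
        speed_up_acquisition();               /* acquisition but still play this frame  */

    if (is_dynamic_area)                      /* step S2024: target image data of a     */
        copy_to_video_memory();               /* dynamic area is saved to video memory  */

    return SYNC_PLAY;                         /* candidate becomes target image data    */
}
```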
Optionally, in practical application, the multi-canvas switching can be realized by using an asynchronous interval cyclic refreshing mode, so that image tearing during rendering and displaying can be effectively avoided. Referring to fig. 5, the substeps of step S30 in fig. 4 may include:
Step S301, determining a target DRM layer according to the target image data and the layer mapping table.
The layer mapping table is used for recording the one-to-one binding relationship between the area to which image data belongs and a DRM layer.
In the embodiment of the invention, the corresponding target area is acquired according to the target image data, and the target DRM layer is determined according to the target area and the layer mapping table.
Step S302, determining a target multi-canvas playing area according to the target DRM layer and the canvas mapping table.
The canvas mapping table is used for recording the one-to-one correspondence between the DRM layers and the multiple canvas playing areas.
In the embodiment of the invention, a DRM layer is uniquely identified by its layer identifier, and the canvas mapping table records the correspondence between each DRM layer identifier and the multi-canvas playing area of that DRM layer. The target multi-canvas playing area is obtained from the canvas mapping table according to the target DRM layer identifier.
Step S303, determining a target playing area in the target multi-canvas playing area according to the target multi-canvas playing area and the updating cursor, and storing target image data in the target playing area.
The update cursor is used for identifying a playing area of the image data to be updated.
In the embodiment of the invention, the playing areas of each DRM layer's multi-canvas playing area are assumed to be managed as an array. The update cursor records the array index of the playing area to be updated; the target playing area to be updated is found in the target multi-canvas playing area according to the update cursor, the target image data bound to the DRM layer is stored in that target playing area, and the update cursor is then advanced to the array index of the next playing area to be updated.
And step S304, determining target image data of the playing area currently rendered and displayed by the DRM according to the target multi-canvas playing area and the playing cursor.
Wherein the play cursor is used to identify the play area where the DRM currently needs to render the display.
In the embodiment of the invention, the play cursor is an array index distinct from the update cursor; it records the array index of the playing area that currently needs to be rendered and displayed. The playing area to be rendered is determined from the array of the target multi-canvas playing area according to the play cursor, and its target image data is rendered and displayed on the display device. The multiple canvases are thus used in rotation when image data is set, which effectively avoids image tearing during the DRM's asynchronous refresh and display.
In one possible implementation, the canvas switching is realized by asynchronous, interleaved cyclic refreshing. Following the DRM layer's asynchronous refresh-and-display mechanism, the playing areas of the created multi-canvas playing area can be regarded as a circular queue of image-data frame buffers; image data is updated into the corresponding playing area asynchronously, which ensures that image data currently being displayed by the playback device is never overwritten.
Assume the playing-area array is indexed 1 to n. When the play cursor is 1, the image data in the playing area of the 1st array element has been updated; while the DRM renders the image data of the 1st array element to the display device and keeps displaying it, the next frame of image data is updated into the playing area of the 2nd array element, i.e., the update cursor is 2. When the DRM renders the image data of the nth array element to the display device and keeps displaying it, the next frame of image data is updated into the playing area of the 1st array element, i.e., the play cursor is n and the update cursor is 1, and so on, cycling through the areas in turn.
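A compact C sketch of the two cursors over a multi-canvas playing area (using 0-based indexing rather than the 1..n of the example above); the canvas count and the use of DRM framebuffer ids are assumptions.

```c
#include <stdint.h>

#define CANVAS_COUNT 3                 /* assumed number of playing areas per DRM layer */

typedef struct {
    uint32_t fb_id[CANVAS_COUNT];      /* one framebuffer per playing area */
    int      update_cursor;            /* next playing area to receive image data */
    int      play_cursor;              /* playing area the DRM currently displays */
} multi_canvas_t;

/* store new target image data into the playing area selected by the update
 * cursor, then advance the cursor cyclically */
static uint32_t next_update_area(multi_canvas_t *mc)
{
    uint32_t fb = mc->fb_id[mc->update_cursor];
    mc->update_cursor = (mc->update_cursor + 1) % CANVAS_COUNT;
    return fb;
}

/* the playing area rendered by the DRM trails the update cursor, so a frame
 * that is still being displayed is never overwritten */
static uint32_t next_play_area(multi_canvas_t *mc)
{
    uint32_t fb = mc->fb_id[mc->play_cursor];
    mc->play_cursor = (mc->play_cursor + 1) % CANVAS_COUNT;
    return fb;
}
```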
Therefore, the embodiment of the invention sets a multi-canvas playing area for each DRM layer, and circularly queues the DRM layer canvases, thereby promoting smooth rendering and avoiding image tearing during DRM refreshing display.
It should be noted that the playback device calls the DRM interface to render the image data of the current playing area of each DRM layer to the display device. Because DRM layer rendering has a superposition characteristic, in the DRM hardware rendering process the preset display areas must be guaranteed not to cover one another; the image data is then superimposed directly onto the main layer to complete rendering and display.
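The patent only says that the DRM interface is called; on Linux, one plausible concrete call is drmModeSetPlane() from libdrm, sketched below under the assumption that each area is bound to one DRM plane.

```c
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* push one layer's current playing area (selected by the play cursor) to the
 * display; x/y/w/h come from the area configuration and must not overlap
 * other areas, since the planes are superimposed on the primary layer */
static int render_region(int drm_fd, uint32_t plane_id, uint32_t crtc_id,
                         uint32_t fb_id, int32_t x, int32_t y,
                         uint32_t w, uint32_t h)
{
    /* source coordinates are in 16.16 fixed point in this libdrm API */
    return drmModeSetPlane(drm_fd, plane_id, crtc_id, fb_id, 0,
                           x, y, w, h,
                           0, 0, w << 16, h << 16);
}
```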
Optionally, in practical application, when the image data of the region is processed for the first time or the image data format of the region is changed, a binding relationship between the region and the DRM layer needs to be established. Referring to fig. 7, the substeps of step S301 in fig. 5 may include:
step S3011, determining a target area identifier according to the target image data.
Wherein the target area identification is used to uniquely identify the area.
In the embodiment of the invention, the region identifier is used for uniquely identifying the video region and the dynamic region, the video region and the dynamic region are uniformly addressed, and the image data of each region carries the region identifier, which can be an ID or a name. The present invention is not limited in this regard.
In step S3012, when the layer mapping table records the target area identifier, the DRM layer corresponding to the target area identifier is determined to be the target DRM layer.
In the embodiment of the invention, if the layer mapping table records the target area identifier, a DRM layer already has a binding relationship with the target area; the DRM layer bound to the target area is obtained as the target DRM layer according to the target area identifier.
In step S3013, when the layer mapping table does not record the target area identifier, the target DRM layer is determined from the candidate DRM layers.
And the candidate DRM layers are DRM layers which do not establish binding relation with any area.
In the embodiment of the invention, if the target area identifier is not recorded in the layer mapping table, the target area is not established with any DRM layer. It is assumed that the DRM layer that has created the binding relationship is set to the bound state and the DRM layer that has not created the binding relationship is set to the unbound state. And screening the DRM layers in the unbound state as candidate DRM layers, and determining a target DRM layer corresponding to the target area identifier in the candidate DRM layers.
Optionally, in practical application, in order to improve the rendering display efficiency of the image data, a DRM layer with the same image format should be selected as far as possible to establish a binding relationship. The substeps of step S3013 in fig. 6 may include:
Traversing and comparing the playing format of the candidate DRM layers with the image format of the target image data, and determining the corresponding candidate DRM layers as the target DRM layers when the playing format identical to the image format exists; when there is no play format identical to the image format, any one of the candidate DRM layers is determined as a target DRM layer. And establishing a binding relation between the region to which the target image data belongs and the target DRM layer, and recording the binding relation to the layer mapping table.
In the embodiment of the invention, the layer identifiers (for example, layer IDs) of the candidate DRM layers are traversed in turn, and the underlying DRM interface is called to check, according to the layer identifier and the image format of the target image data, whether a candidate DRM layer supports that image format. If it does, that candidate DRM layer is set as the target DRM layer and its play format is set to the image format of the target image data, thereby determining the one-to-one correspondence between the area and the DRM layer.
When none of the candidate DRM layers supports the image format of the target image data, one DRM layer is selected as the target DRM layer, its play format is kept as an image format the layer supports, and a transcoding flag is set for the target image data. When rendering the image data, the GPU transcodes the target image data carrying the transcoding flag into the same image format as the target DRM layer, which prevents DRM rendering anomalies caused by inconsistent image formats. The binding between the area of the target image data and the target DRM layer is then established, the state of the DRM layer is set to bound, and the binding is recorded in the layer mapping table.
It should be noted that, if there are not enough layers, the image data of areas for which no binding relationship can be established may be rendered and displayed directly using the prior-art software decoding and transcoding scheme.
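The format-matching and binding logic of step S3013 can be sketched as follows, reusing the illustrative layer_entry_t table from the initialization sketch; layer_supports_format() stands in for the underlying DRM format query, which the patent does not name.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

extern bool layer_supports_format(uint32_t layer_id, play_format_t fmt);

static layer_entry_t *bind_layer(layer_entry_t *table, int n,
                                 const char *region_id, play_format_t img_fmt,
                                 bool *needs_transcode)
{
    layer_entry_t *chosen = NULL, *fallback = NULL;

    for (int i = 0; i < n && !chosen; i++) {
        if (table[i].bound)
            continue;                          /* only unbound candidates */
        if (layer_supports_format(table[i].layer_id, img_fmt))
            chosen = &table[i];                /* play format matches the image format */
        else if (!fallback)
            fallback = &table[i];              /* remember any free layer as fallback */
    }

    *needs_transcode = false;
    if (!chosen && fallback) {
        chosen = fallback;                     /* no matching format: bind anyway and */
        *needs_transcode = true;               /* let the GPU transcode before rendering */
    }
    if (!chosen)
        return NULL;                           /* no free layer: software fallback path */

    if (!*needs_transcode)
        chosen->play_format = img_fmt;         /* record the matched play format */
    chosen->bound = true;                      /* establish the binding and record it */
    snprintf(chosen->bound_region, sizeof chosen->bound_region, "%s", region_id);
    return chosen;
}
```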
To illustrate the multi-region image data display method provided by the embodiment of the present invention more clearly, fig. 8 gives an example. Based on the DRM framework of the linux system, instead of the prior-art approach in which software processes the multi-region image data and merges it before rendering, the DRM framework identifies the multi-layer resources of the display device to render multi-region, multi-format image data directly. In fig. 8, the GPU decodes the video data of the video area to obtain the first image data and stores it in video memory; the second image data of the dynamic areas is acquired periodically; the first and second image data of the current frame are play-synchronized to obtain the target image data; and the target image data of the dynamic areas is stored into video memory. Binding multiple areas to DRM layers using the multi-layer resources of the DRM framework makes it possible to avoid transcoding image data in a targeted way, reducing CPU scheduling and the power consumption of the playback device. In addition, because of the superposition characteristic of the layers, the compositing of multi-region image data can be avoided entirely, reducing GPU and CPU usage and effectively improving the efficiency of high-speed image data processing and high-speed rendering and display.
Based on the same inventive concept, the embodiment of the invention also provides a multi-region image data display device. The basic principle and the technical effects are the same as those of the above embodiments, and for brevity, reference is made to the corresponding matters in the above embodiments where the description of the present embodiment is omitted.
Referring to fig. 9, fig. 9 is a block diagram illustrating a multi-region image data display device 200 according to an embodiment of the invention. The multi-area image data display apparatus 200 includes an acquisition module 201, a synchronization module 202, and a processing module 203.
The obtaining module 201 is configured to obtain video data of the video area, and decode the video data by using the GPU to obtain first image data.
A synchronization module 202, configured to perform play synchronization according to the first image data and the second image data, so as to obtain target image data; the second image data is image data of a dynamic region.
The processing module 203 is configured to store the target image data in a multi-canvas playing area of the corresponding DRM layer, so that the DRM sequentially renders and displays the target image data of each playing area; each DRM layer is created with a corresponding multi-canvas playing area; the multi-canvas playing area includes a plurality of playing areas.
In summary, the multi-region image data display device provided in the embodiments of the present invention includes an acquisition module, a synchronization module, and a processing module. The acquisition module is configured to acquire video data of the video area and decode the video data through the GPU to obtain first image data. The synchronization module is configured to perform play synchronization on the first image data and the second image data to obtain target image data; the second image data is the image data of the dynamic region. The processing module is configured to store the target image data in the multi-canvas playing area of the corresponding DRM layer, so that DRM sequentially renders and displays the target image data of each playing area; each DRM layer is created with a corresponding multi-canvas playing area, and each multi-canvas playing area includes a plurality of playing areas. By using the multiple layer resources in DRM, a binding strategy between multi-region image data and DRM layers is realized, and multi-region, multi-format image data are merged and played in hardware through layer stacking and rendering, which effectively reduces CPU or GPU resource consumption. Combined with the play synchronization strategy for multi-region image data, the efficiency of image data processing and of rendering and display is improved, and the smoothness of image playback is further improved. Rendering smoothness is also improved by the multiple canvases, which effectively avoids image tearing when the DRM display is refreshed.
Optionally, the synchronization module 202 is specifically configured to calculate an actual playing duration and a theoretical playing duration of the candidate image data; the candidate image data includes first image data and second image data; and determining target image data in the candidate image data according to the actual playing time length, the theoretical playing time length and the time threshold.
Optionally, the synchronization module 202 is specifically configured to obtain a frame rate according to the first image data; determining theoretical playing time length according to the frame sequence number and the frame rate of each candidate image data; determining the actual playing time length according to the actual playing time and the initial time of each candidate image data; the initial time characterizes the system time for turning on the image display function.
Optionally, the synchronization module 202 is specifically configured to discard the corresponding candidate image data when a difference obtained by subtracting the theoretical playing duration from the actual playing duration is greater than a time threshold; when the difference value obtained by subtracting the actual playing time from the theoretical playing time is larger than the time threshold, the acquisition speed of the video data is increased, and the corresponding candidate image data is determined as target image data; when the absolute value of the difference value between the actual playing time length and the theoretical playing time length is smaller than or equal to a time threshold value, determining the corresponding candidate image data as target image data; and when the target image data is the image data of the dynamic region, storing the target image data into a video memory.
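The synchronization rule above can be read as a simple three-way decision against the time threshold. The following sketch is illustrative only; the millisecond units, the helper name sync_decide, and the enum values are assumptions, not part of the claimed method.

    /* Sketch of the play-synchronization decision.  The theoretical duration
     * is frame_seq / frame_rate; the actual duration is now - initial time
     * (the system time at which image display was turned on). */
    #include <stdint.h>

    enum sync_action { SYNC_DROP, SYNC_ACCELERATE, SYNC_PLAY };

    static enum sync_action sync_decide(uint64_t frame_seq, double frame_rate,
                                        uint64_t now_ms, uint64_t initial_ms,
                                        uint64_t threshold_ms)
    {
        double theoretical_ms = (double)frame_seq / frame_rate * 1000.0;
        double actual_ms      = (double)(now_ms - initial_ms);
        double diff           = actual_ms - theoretical_ms;

        if (diff > (double)threshold_ms)
            return SYNC_DROP;        /* frame is late: discard it           */
        if (-diff > (double)threshold_ms)
            return SYNC_ACCELERATE;  /* frame is early: speed up acquisition
                                        and still display this frame        */
        return SYNC_PLAY;            /* within threshold: display as-is     */
    }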
Optionally, the processing module 203 is specifically configured to determine a target DRM layer according to the target image data and the layer mapping table; the layer mapping table is used to record the one-to-one binding relationship between the region to which the image data belongs and the DRM layer; determine a target multi-canvas playing area according to the target DRM layer and the canvas mapping table; the canvas mapping table is used to record the one-to-one correspondence between DRM layers and multi-canvas playing areas; determine a target playing area in the target multi-canvas playing area according to the target multi-canvas playing area and the update cursor, and store the target image data in the target playing area; the update cursor is used to identify the playing area whose image data is to be updated; and determine, according to the target multi-canvas playing area and the play cursor, the target image data of the playing area currently rendered and displayed by DRM; the play cursor is used to identify the playing area that DRM currently needs to render and display.
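One illustrative way to realize a multi-canvas playing area with an update cursor and a play cursor is a small ring of canvases per DRM layer: new target image data is written into the slot under the update cursor while DRM scans out the slot under the play cursor, which keeps writing and display from colliding. The structure, helper names, and the fixed canvas count below are assumptions made for the sketch.

    /* Sketch: per-layer multi-canvas ring.  The writer fills the slot at
     * update_cursor, the scan-out path reads the slot at play_cursor;
     * keeping them apart is what avoids tearing during DRM refresh. */
    #include <stdint.h>

    #define NUM_CANVASES 3   /* illustrative; the method only requires a plurality */

    struct canvas {
        uint32_t fb_id;      /* DRM framebuffer bound to this canvas           */
        void    *pixels;     /* CPU/GPU-visible mapping of the canvas memory   */
    };

    struct multi_canvas_area {
        struct canvas slots[NUM_CANVASES];
        unsigned update_cursor;  /* next slot to receive new target image data */
        unsigned play_cursor;    /* slot DRM currently renders and displays    */
    };

    /* Return the slot to fill with new target image data, then advance. */
    static struct canvas *next_update_slot(struct multi_canvas_area *a)
    {
        struct canvas *c = &a->slots[a->update_cursor];
        a->update_cursor = (a->update_cursor + 1) % NUM_CANVASES;
        return c;
    }

    /* Return the slot DRM should render and display now, then advance. */
    static struct canvas *current_play_slot(struct multi_canvas_area *a)
    {
        struct canvas *c = &a->slots[a->play_cursor];
        a->play_cursor = (a->play_cursor + 1) % NUM_CANVASES;
        return c;
    }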
Optionally, the processing module 203 is specifically configured to determine a target area identifier according to the target image data; the target area identifier is used to uniquely identify a region; when the layer mapping table records the target area identifier, determine the DRM layer corresponding to the target area identifier as the target DRM layer; and when the layer mapping table does not record the target area identifier, determine the target DRM layer among the candidate DRM layers; a candidate DRM layer is a DRM layer that has not established a binding relationship with any region.
Optionally, the processing module 203 is specifically configured to traverse the candidate DRM layers and compare the playing format of each candidate DRM layer with the image format of the target image data; when a playing format identical to the image format exists, determine the corresponding candidate DRM layer as the target DRM layer; when no playing format identical to the image format exists, determine any candidate DRM layer as the target DRM layer; and establish a binding relationship between the region to which the target image data belongs and the target DRM layer, and record the binding relationship in the layer mapping table.
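The layer mapping table itself may be as simple as a small table keyed by the area identifier. The sketch below records the one-to-one binding between a region and a DRM plane and reports a miss when no binding exists yet, so that the candidate-layer selection described above can run; the array size, names, and linear search are illustrative choices rather than the patent's prescribed layout.

    /* Sketch: layer mapping table recording region-id -> DRM plane bindings. */
    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_REGIONS 8    /* illustrative upper bound on display regions */

    struct layer_binding {
        uint32_t region_id;  /* unique identifier of the display region */
        uint32_t plane_id;   /* DRM plane (layer) bound to that region  */
        bool     bound;
    };

    static struct layer_binding layer_map[MAX_REGIONS];

    /* Return the bound plane for region_id, or 0 if no binding is recorded. */
    static uint32_t lookup_layer(uint32_t region_id)
    {
        for (int i = 0; i < MAX_REGIONS; i++)
            if (layer_map[i].bound && layer_map[i].region_id == region_id)
                return layer_map[i].plane_id;
        return 0;
    }

    /* Record a new binding once a target plane has been chosen for the region. */
    static bool bind_layer(uint32_t region_id, uint32_t plane_id)
    {
        for (int i = 0; i < MAX_REGIONS; i++) {
            if (!layer_map[i].bound) {
                layer_map[i] = (struct layer_binding){ region_id, plane_id, true };
                return true;
            }
        }
        return false;        /* table full: fall back to the software path */
    }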
Embodiments of the present invention also provide a computer-readable storage medium having stored thereon a computer program which, when executed by the processor 120, implements the multi-region image data display method disclosed in the above embodiments.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may also be implemented in other manners. The apparatus embodiments described above are merely illustrative. The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code that comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in a block may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a special-purpose hardware-based system that performs the specified functions or acts, or by a combination of special-purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present invention may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
If the functions are implemented in the form of software functional modules and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description covers only preferred embodiments of the present invention and is not intended to limit the present invention; those skilled in the art may make various modifications and variations to the present invention. Any modification, equivalent replacement, improvement, or the like made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (9)

1. A multi-region image data display method, the method comprising:
Acquiring video data of a video area, and decoding the video data through a GPU to obtain first image data;
performing playing synchronization according to the first image data and the second image data to obtain target image data; the second image data is the image data of the dynamic area;
Storing the target image data into a plurality of canvas playing areas of a corresponding DRM layer so that DRM sequentially renders and displays the target image data of each playing area; each DRM layer is created with a corresponding multi-canvas playing area; the multi-canvas playing area comprises a plurality of playing areas;
the storing the target image data in the multi-canvas playing area of the corresponding DRM layer so that DRM sequentially renders and displays the target image data of each playing area, including:
Determining a target DRM layer according to the target image data and the layer mapping table; the layer mapping table is used for recording the one-to-one binding relation between the area to which the image data belongs and the DRM layer;
Determining a target multi-canvas playing area according to the target DRM layer and the canvas mapping table; the canvas mapping table is used for recording the one-to-one correspondence between the DRM layers and the multiple canvas playing areas;
Determining a target playing area in the target multi-canvas playing area according to the target multi-canvas playing area and the updating cursor, and storing the target image data into the target playing area; the updating cursor is used for identifying a playing area of the image data to be updated;
Determining target image data of a playing area currently rendered and displayed by DRM according to the target multi-canvas playing area and the playing cursor; the play cursor is used to identify the play area where the DRM currently needs to render the display.
2. The method for displaying multi-region image data according to claim 1, wherein the performing playback synchronization according to the first image data and the second image data to obtain target image data comprises:
Calculating to obtain the actual playing time length and the theoretical playing time length of the candidate image data; the candidate image data includes the first image data and the second image data;
And determining target image data in the candidate image data according to the actual playing time length, the theoretical playing time length and the time threshold.
3. The method for displaying multi-region image data according to claim 2, wherein the calculating the actual playing time length and the theoretical playing time length of the candidate image data includes:
Acquiring a frame rate according to the first image data;
determining the theoretical playing duration according to the frame sequence number and the frame rate of each candidate image data;
determining the actual playing time length according to the actual playing time and the initial time of each candidate image data; the initial time characterizes a system time for turning on an image display function.
4. The method of claim 2, wherein determining target image data in the candidate image data according to the actual playing time period, the theoretical playing time period, and a time threshold value comprises:
when the difference value obtained by subtracting the theoretical playing time length from the actual playing time length is larger than the time threshold value, discarding the corresponding candidate image data;
When the difference value obtained by subtracting the actual playing time from the theoretical playing time is larger than the time threshold, accelerating the acquisition speed of video data, and determining the corresponding candidate image data as the target image data;
When the absolute value of the difference value between the actual playing time length and the theoretical playing time length is smaller than or equal to the time threshold value, determining the corresponding candidate image data as the target image data;
And when the target image data is the image data of the dynamic region, storing the target image data into a video memory.
5. The multi-region image data display method of claim 1, wherein the determining a target DRM layer according to the target image data and the layer mapping table comprises:
determining a target area identifier according to the target image data; the target area identifier is used for uniquely identifying an area;
When the layer mapping table records the target area identifier, determining the DRM layer corresponding to the target area identifier as the target DRM layer;
when the layer mapping table does not record the target area identifier, determining the target DRM layer among candidate DRM layers; and the candidate DRM layers are DRM layers which have not established a binding relation with any area.
6. The multi-region image data display method of claim 5, wherein the determining the target DRM layer among candidate DRM layers comprises:
traversing and comparing the playing format of the candidate DRM layer with the image format of the target image data;
When the playing format which is the same as the image format exists, determining the corresponding candidate DRM layer as the target DRM layer;
when the playing format which is the same as the image format does not exist, determining any candidate DRM layer as the target DRM layer;
and establishing a binding relation between the area of the target image data and the target DRM layer, and recording the binding relation to the layer mapping table.
7. A multi-region image data display device, the device comprising:
The acquisition module is used for acquiring video data of the video area and decoding the video data through the GPU to obtain first image data;
The synchronization module is used for performing playing synchronization according to the first image data and the second image data to obtain target image data; the second image data is the image data of the dynamic area;
The processing module is used for determining a target DRM layer according to the target image data and the layer mapping table; the layer mapping table is used for recording the one-to-one binding relation between the area to which the image data belongs and the DRM layer; determining a target multi-canvas playing area according to the target DRM layer and the canvas mapping table; the canvas mapping table is used for recording the one-to-one correspondence between the DRM layers and the multi-canvas playing areas; determining a target playing area in the target multi-canvas playing area according to the target multi-canvas playing area and the updating cursor, and storing the target image data into the target playing area; the updating cursor is used for identifying a playing area of the image data to be updated; determining target image data of a playing area currently rendered and displayed by DRM according to the target multi-canvas playing area and the playing cursor; the play cursor is used for identifying a play area which is needed to be rendered and displayed by DRM at present; each DRM layer is created with a corresponding multi-canvas playing area; the multi-canvas playing area includes a plurality of playing areas.
8. A playback device comprising a memory for storing a computer program and a processor for executing the multi-region image data display method as claimed in any one of claims 1 to 6 when the computer program is invoked.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the multi-region image data display method according to any one of claims 1-6.
CN202311568545.8A 2023-11-23 2023-11-23 Multi-area image data display method, device, playing equipment and storage medium Active CN117278796B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311568545.8A CN117278796B (en) 2023-11-23 2023-11-23 Multi-area image data display method, device, playing equipment and storage medium


Publications (2)

Publication Number Publication Date
CN117278796A CN117278796A (en) 2023-12-22
CN117278796B true CN117278796B (en) 2024-04-19

Family

ID=89209133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311568545.8A Active CN117278796B (en) 2023-11-23 2023-11-23 Multi-area image data display method, device, playing equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117278796B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113050899A (en) * 2021-02-07 2021-06-29 厦门亿联网络技术股份有限公司 Method and system for directly displaying video and UI drm based on Wayland protocol
CN114143595A (en) * 2021-12-08 2022-03-04 珠海豹趣科技有限公司 Video wallpaper playing method and device, electronic equipment and readable storage medium
CN115809106A (en) * 2022-09-19 2023-03-17 阿里巴巴(中国)有限公司 Cloud desktop system, cloud desktop display method, terminal device and storage medium
WO2023116090A1 (en) * 2021-12-22 2023-06-29 京东方科技集团股份有限公司 Method and apparatus for synchronously playing video, and storage medium and electronic device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10592417B2 (en) * 2017-06-03 2020-03-17 Vmware, Inc. Video redirection in virtual desktop environments




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240328

Address after: 1305, yuemeite building, No.1, Gaoxin South 7th Road, high tech Zone community, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Applicant after: Shenzhen Fanlian Information Technology Co.,Ltd.

Country or region after: China

Applicant after: Shenzhen Youfang Data Technology Co.,Ltd.

Address before: 1305, yuemeite building, No.1, Gaoxin South 7th Road, high tech Zone community, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Applicant before: Shenzhen Fanlian Information Technology Co.,Ltd.

Country or region before: China

GR01 Patent grant
GR01 Patent grant