CN114040246A - Image format conversion method, device, equipment and storage medium of graphic processor - Google Patents
Image format conversion method, device, equipment and storage medium of graphic processor
- Publication number: CN114040246A
- Application number: CN202111315545.8A
- Authority: CN (China)
- Prior art keywords: pixel, data format, value, map, new
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44012—Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
- H04N21/4402—Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440218—Processing of video elementary streams involving reformatting operations by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
Abstract
The application provides an image format conversion method, apparatus, device and storage medium for a graphics processor, relating to the technical field of image processing. The method comprises the following steps: acquiring an original rendering map corresponding to the current frame picture from a central processing unit; reading a pixel value in a first data format of each pixel point in the current frame picture from the original rendering map; converting the pixel value in the first data format of each pixel point into a pixel value in a second data format, and generating a new rendering map according to the pixel values in the second data format; and performing rendering processing with the new rendering map to generate a target picture corresponding to the current frame picture. In this method, the conversion of the image data format is carried out in the graphics processor; based on the graphics processor's strength in floating-point computation and its dedicated capability for parallel graphics computation, the data-format conversion of a large batch of pixel points can be processed in parallel, so that the computational efficiency of image data format conversion is effectively improved.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image format conversion method, apparatus, device, and storage medium for a graphics processor.
Background
Generally, a network video watched by a user is rendered locally on a device and then sent remotely to the user terminal for display. A picture rendered in the traditional RGB (red, green, blue) pixel format has minimal distortion and is well suited for screen display, but its extremely high demands on bandwidth and speed during network video transmission cannot be met under current network conditions. The YUV (luminance-chrominance) pixel format allows the UV components to be compressed to a certain degree, so the transmission rate can be improved while a certain degree of image fidelity is preserved. Therefore, in the process of network video transmission, conversion from the RGB pixel format to the YUV pixel format is inevitably required.
In the prior art, a programming language with high execution efficiency (C/C++) is mostly used in combination with a high-clock-frequency CPU to perform the pixel format conversion from RGB to YUV, often together with multithreading technology to improve the concurrency of the conversion and shorten the conversion time.
However, the pixel conversion calculation consists entirely of floating-point operations, at which the CPU is comparatively weak; moreover, the number of physical CPU cores is very limited, so the degree of concurrency of the algorithm remains limited even when multithreading is used, resulting in low pixel format conversion efficiency.
Disclosure of Invention
An object of the present application is to provide an image format conversion method, apparatus, device and storage medium for a graphics processor, so as to solve the problem of low image pixel format conversion efficiency in the prior art.
In order to achieve the above purpose, the technical solutions adopted in the embodiments of the present application are as follows:
in a first aspect, an embodiment of the present application provides an image format conversion method for a graphics processor, which is applied to the graphics processor, and the method includes:
acquiring an original rendering map corresponding to the current frame picture from a central processing unit;
reading the pixel value of a first data format of each pixel point in the current frame picture from the original rendering map;
converting the pixel value of each pixel point in the first data format into a pixel value in a second data format, and generating a new rendering map according to the pixel values in the second data format;
and adopting the new rendering map for rendering processing to generate a target picture corresponding to the current frame picture.
Optionally, the first data format is a red-green-blue format, and the second data format is a luminance-chrominance format; the converting the pixel value of each pixel point in the first data format into a pixel value of each pixel point in a second data format includes:
determining the pixel value of each pixel point on the brightness component according to the mapping relation between the red, green and blue values and the value of the brightness component;
dividing each pixel point into a plurality of pixel areas;
determining the pixel value of each pixel area on the first chrominance component according to the mapping relation between the red, green and blue value of each pixel point in each pixel area and the value of the first chrominance component;
and determining the pixel value of each pixel area on the second chrominance component according to the mapping relation between the red, green and blue value of each pixel point in each pixel area and the value of the second chrominance component.
Optionally, the determining the pixel value of each pixel region on the first chrominance component according to the mapping relationship between the red, green, and blue values of each pixel point in each pixel region and the value of the first chrominance component includes:
determining a central pixel point of a first pixel area according to coordinates of each pixel point in the first pixel area, wherein the first pixel area is any one of the pixel areas;
and determining the pixel value of the first pixel area on the first chrominance component according to the first mapping relation between the red, green and blue values of the central pixel point and the value of the first chrominance component.
Optionally, the determining the pixel value of each pixel region on the second chrominance component according to the mapping relationship between the red, green, and blue values of each pixel point in each pixel region and the value of the second chrominance component includes:
determining a central pixel point of a first pixel area according to coordinates of each pixel point in the first pixel area, wherein the first pixel area is any one of the pixel areas;
and determining the pixel value of the first pixel area on the second chrominance component according to the second mapping relation between the red, green and blue values of the central pixel point and the value of the second chrominance component.
Optionally, before generating a new rendering map according to the pixel values of the second data format, the method further includes:
determining the size parameter of a new rendering map to be created according to the size parameter of the original rendering map, wherein the size parameter comprises: the height of the new rendering map to be created is a preset multiple of the original rendering map, and the width of the new rendering map to be created is the same as the width of the original rendering map;
creating a map template according to the size parameter of the new rendering map to be created;
creating a new cache space, and binding the cache space with the map template, wherein the new cache space is used for storing the pixel values of the second data format;
and storing the converted pixel values in the second data format into the new buffer space.
Optionally, the generating a new rendering map according to the pixel values of the second data format includes:
reading pixel values of the second data format from the new buffer space;
and transmitting the pixel values in the second data format into the map template to obtain the new rendering map.
Optionally, after the image rendering is performed by using the new rendering map and a target picture corresponding to a current frame picture is generated, the method further includes:
respectively generating at least one frame of target picture corresponding to at least one frame of picture;
and sending the at least one frame of target picture to a user terminal.
In a second aspect, an embodiment of the present application further provides an image format conversion apparatus for a graphics processor, where the apparatus is applied to the graphics processor, and the apparatus includes: the device comprises an acquisition module, a reading module, a conversion module and a generation module;
the acquisition module is used for acquiring an original rendering map corresponding to the current frame picture from the central processing unit;
the reading module is used for reading the pixel value of the first data format of each pixel point in the current frame picture from the original rendering map;
the conversion module is used for converting the pixel value of each pixel point in the first data format into a pixel value in a second data format and generating a new rendering map according to the pixel value in the second data format;
and the generating module is used for adopting the new rendering map to perform rendering processing and generating a target picture corresponding to the current frame picture.
Optionally, the first data format is a red-green-blue format, and the second data format is a luminance-chrominance format; the conversion module is specifically configured to:
determining the pixel value of each pixel point on the brightness component according to the mapping relation between the red, green and blue values and the value of the brightness component;
dividing each pixel point into a plurality of pixel areas;
determining the pixel value of each pixel area on the first chrominance component according to the mapping relation between the red, green and blue value of each pixel point in each pixel area and the value of the first chrominance component;
and determining the pixel value of each pixel area on the second chrominance component according to the mapping relation between the red, green and blue value of each pixel point in each pixel area and the value of the second chrominance component.
Optionally, the conversion module is specifically configured to:
Determining a central pixel point of a first pixel area according to coordinates of each pixel point in the first pixel area, wherein the first pixel area is any one of the pixel areas;
and determining the pixel value of the first pixel area on the first chrominance component according to the first mapping relation between the red, green and blue values of the central pixel point and the value of the first chrominance component.
Optionally, the conversion module is specifically configured to:
Determining a central pixel point of a first pixel area according to coordinates of each pixel point in the first pixel area, wherein the first pixel area is any one of the pixel areas;
and determining the pixel value of the first pixel area on the second chrominance component according to the second mapping relation between the red, green and blue values of the central pixel point and the value of the second chrominance component.
Optionally, the apparatus further comprises: a creation module;
the creating module is configured to determine a size parameter of a new rendering map to be created according to the size parameter of the original rendering map, where the size parameter includes: the height of the new rendering map to be created is a preset multiple of the original rendering map, and the width of the new rendering map to be created is the same as the width of the original rendering map;
creating a map template according to the size parameter of the new rendering map to be created;
creating a new cache space, and binding the cache space with the map template, wherein the new cache space is used for storing the pixel values of the second data format;
and storing the converted pixel values in the second data format into the new buffer space.
Optionally, the generating module is specifically configured to:
Reading pixel values of the second data format from the new buffer space;
and transmitting the pixel values in the second data format into the map template to obtain the new rendering map.
Optionally, the apparatus further comprises: a sending module;
the generating module is further configured to generate at least one frame of target picture corresponding to the at least one frame of picture;
and the sending module is used for sending the at least one frame of target picture to the user terminal.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating via the bus when the electronic device is operated, the processor executing the machine-readable instructions to perform the steps of the method as provided in the first aspect when executed.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, performs the steps of the method as provided in the first aspect.
The beneficial effects of this application are as follows:
the application provides an image format conversion method, device, equipment and storage medium of a graphic processor, wherein the method comprises the following steps: acquiring an original rendering map corresponding to the current frame picture from a central processing unit; reading a pixel value of a first data format of each pixel point in a current frame picture from an original rendering map; converting the pixel value of the first data format of each pixel point into the pixel value of the second data format, and generating a new rendering chartlet according to the pixel value of the second data format; and adopting the new rendering map for rendering processing to generate a target picture corresponding to the current frame picture. In the method, the original rendering chartlet of the current frame picture is obtained from the central processing unit, the conversion of the data format of the picture is carried out in the graphics processing unit, and the parallel conversion calculation of the data format of a large batch of pixel points can be processed based on the advantages of the floating point data calculation of the graphics processing unit and the specific graphics parallel calculation capacity, so that the calculation efficiency of the conversion of the data format of the picture is effectively improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a schematic view of a service scenario provided in an embodiment of the present application;
FIG. 2 is a first flowchart illustrating an image format conversion method of a graphics processor according to an embodiment of the present disclosure;
fig. 3 is a second flowchart illustrating an image format conversion method of a graphics processor according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of converting an RGB format into a YUV format according to an embodiment of the present application;
fig. 5 is a third flowchart illustrating an image format conversion method of a graphics processor according to an embodiment of the present disclosure;
fig. 6 is a fourth flowchart illustrating an image format conversion method of a graphics processor according to an embodiment of the present disclosure;
fig. 7 is a fifth flowchart illustrating an image format conversion method of a graphics processor according to an embodiment of the present disclosure;
fig. 8 is a sixth schematic flowchart of an image format conversion method of a graphics processor according to an embodiment of the present application;
fig. 9 is a seventh flowchart illustrating an image format conversion method of a graphics processor according to an embodiment of the present disclosure;
FIG. 10 is a diagram illustrating an image format conversion apparatus of a graphics processor according to an embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application. Additionally, it should be understood that the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be performed out of order, and steps without logical context may be performed in reverse order or simultaneously. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowchart.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that in the embodiments of the present application, the term "comprising" is used to indicate the presence of the features stated hereinafter, but does not exclude the addition of further features.
The noun terms to which this application may relate are explained first:
RGB: a pixel format that represents pixels by red, green and blue components.
YUV: a pixel format that represents pixels by a luminance component (Y) and chrominance components (UV).
CPU: central processing unit.
GPU: graphics processing unit (graphics processor).
GLSL: the OpenGL Shading Language, OpenGL's rendering language.
Fig. 1 is a schematic view of a service scenario provided in an embodiment of the present application. Taking the application of the method to a cloud game service scenario as an example, as shown in fig. 1, the service scenario may include: game applications, system interfaces, image synthesizers, screen images, luminance chrominance format encoders (YUV encoders), streaming media servers, networks. From generation to transmission to the network, the game application screen may go through several stages:
First: the game application generates a frame of the screen picture and sends it to an image synthesizer (Composer).
The game application may refer to any game application program, and the game screen frame may be generated locally at the terminal in which the game application is installed, and the terminal device may transmit the game screen frame to the image synthesizer.
Second: the image synthesizer combines the game picture with the system interface to generate the final on-screen picture.
The system interface may refer to system menus, status bars and other frames to be displayed besides the game picture, for example the system's settings bar, control bar, time and network indicators; the system interface and the game picture together form a complete picture frame.
It should be noted that the method focuses only on the game picture part and performs format conversion on the collected game picture data; the system interface is not the focus of attention.
Taking the Android system as an example, the SurfaceFlinger process can serve as the synthesizer. In the stage where the doComposition function composes the images of all layers, before the picture is processed by the postFramebuffer function, the game picture and the system interface have already been merged in the rendering cache; intercepting at this point with a mounted hook function yields the complete on-screen picture.
Third: while the picture goes on screen, a YUV encoder on the graphics processor intercepts the on-screen picture and performs YUV format conversion.
Optionally, the first and second steps may be executed locally in the terminal, with the terminal's central processing unit as the execution subject. In this step, the graphics processor obtains the on-screen picture from the central processing unit, that is, the image on which pixel format conversion is to be performed, and applies the preset format conversion to it.
Fourth: the YUV format picture obtained after the format conversion is completed is transferred to a streaming media server (Media Server).
The graphics processor converts the original RGB format picture into a YUV format picture, and then sends the converted picture to the streaming media server, wherein the streaming media server can be a server where a user terminal is located so as to display the picture at the user terminal.
Fifth: the streaming media server further packages the independent picture frames into a streaming media format for network transmission.
Optionally, since the above is performed for each frame of game picture, each frame of picture in YUV format obtained by conversion is sent to the streaming media server, and the streaming media server may encapsulate the frame of picture to generate a video, and further transmit the video through a network to a browser of the user terminal for the user to watch the video.
The above briefly describes the service scenario to which the present application applies. It should be noted that the image format conversion method of the graphics processor provided in the present application is a specific implementation of step three; it is described in detail below through a plurality of embodiments.
Fig. 2 is a first flowchart illustrating an image format conversion method of a graphics processor according to an embodiment of the present application, where an execution subject of the method may be the graphics processor, and as shown in fig. 2, the method may include:
s201, obtaining an original rendering map corresponding to the current frame picture from a central processing unit.
The central processing unit may first obtain the original data of the current frame image from the application program, where the application program may refer to any type of application program, and is not limited to a game application program, and the images generated by different application programs may be different, such as a game image, a movie image, and the like.
Optionally, based on the fetched raw data, the central processor may call glCopyTexImage2D to generate an original rendering map, i.e., a texture object, from the raw data fetched from the image compositor's rendering cache, for processing by the graphics processor.
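As an illustrative sketch of this capture step (not the patent's verbatim implementation), the call might be wrapped as follows; an OpenGL ES context is assumed in which the compositor's merged RGBA frame sits in the currently bound framebuffer, and the function and variable names are hypothetical:

```cpp
#include <GLES2/gl2.h>

// Copy the compositor's current framebuffer contents into a texture that the
// graphics processor can sample as the "original rendering map" (a sketch).
GLuint captureOriginalRenderingMap(GLsizei width, GLsizei height) {
    GLuint srcTex = 0;
    glGenTextures(1, &srcTex);
    glBindTexture(GL_TEXTURE_2D, srcTex);
    // glCopyTexImage2D reads from the currently bound framebuffer, assumed
    // here to hold the merged on-screen picture in RGBA.
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, width, height, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    return srcTex;
}
```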
The graphics processor can obtain the original rendering map corresponding to the current frame from the central processor and perform the subsequent processing.
S202, reading the pixel value of each pixel point in the current frame picture in the first data format from the original rendering map.
In some embodiments, since the graphics processor is configured to process a graphic, an input of the graphics processor is the original rendering map corresponding to the current frame picture, and the graphics processor may first read original data of the current frame picture from the original rendering map, that is, read a pixel value of the first data format of each pixel point in the current frame picture.
S203, converting the pixel value of the first data format of each pixel point into the pixel value of the second data format, and generating a new rendering map according to the pixel value of the second data format.
The graphics processor may invoke a corresponding function to convert the pixel value of each pixel point from the first data format into a pixel value in the second data format. In this embodiment, the core algorithm may be written in GLSL (OpenGL Shading Language), finally achieving the goal of performing the pixel conversion from the first data format to the second data format on the graphics processor.
It should be noted that the first data format and the second data format may be any data formats, and the first data format and the second data format are different.
Alternatively, since the input and output of the graphics processor are both image information, a new rendering map may be generated based on the converted pixel values in the second data format, that is, the pixel values in the second data format are stored in a map form.
And S204, adopting the new rendering map for rendering processing to generate a target picture corresponding to the current frame picture.
Optionally, based on the generated new rendering map, a conventional image rendering manner may be adopted to perform rendering processing, so as to generate a target picture corresponding to the current frame picture, that is, to generate a target picture in the second data format after the current frame picture is converted.
In summary, the image format conversion method of the graphics processor provided in this embodiment includes: acquiring an original rendering map corresponding to the current frame picture from a central processing unit; reading a pixel value in a first data format of each pixel point in the current frame picture from the original rendering map; converting the pixel value in the first data format of each pixel point into a pixel value in a second data format, and generating a new rendering map according to the pixel values in the second data format; and performing rendering processing with the new rendering map to generate a target picture corresponding to the current frame picture. In this method, the original rendering map of the current frame picture is obtained from the central processing unit, and the conversion of the picture's data format is carried out in the graphics processor; based on the graphics processor's strength in floating-point computation and its dedicated capability for parallel graphics computation, the data-format conversion of a large batch of pixel points can be computed in parallel, so that the computational efficiency of picture data format conversion is effectively improved.
Fig. 3 is a second flowchart illustrating an image format conversion method of a graphics processor according to an embodiment of the present disclosure, and fig. 4 is a diagram illustrating an RGB format converted into a YUV format according to an embodiment of the present disclosure. Optionally, the first data format is a red-green-blue format, and the second data format is a luminance-chrominance format; in step S203, converting the pixel value of the first data format of each pixel point into a pixel value of a second data format may include:
s301, determining the pixel value of each pixel point on the brightness component according to the mapping relation between the red, green and blue values and the brightness component value.
In this embodiment, the first data format may be the red-green-blue format, i.e., the RGB format, and the second data format the luminance-chrominance format, i.e., the YUV format. The method is used to convert an RGB format image into a YUV format image.
It should be noted that, in the YUV format, Y represents luminance, and U and V represent chrominance, and since human perception capability has the highest sensitivity to luminance and relatively low sensitivity to chrominance, the YUV format only completely retains the Y component when encoding photos or videos, and allows a certain degree of compression for the UV component, thereby effectively reducing transmission bandwidth and increasing transmission rate while ensuring a certain degree of image restoration.
Optionally, due to the above characteristics of the YUV format, when the RGB format is converted into the YUV format, as shown in fig. 4, taking a 4 × 4 image as an example, the luminance component, that is, the Y component, may keep the number of pixels unchanged, perform 1:1 sampling, and may respectively determine the pixel value of each pixel in the YUV format on the Y component according to the RGB value of each pixel in the RGB format image.
S302, dividing each pixel point into a plurality of pixel areas.
In addition, for the determination of the chrominance component, since the UV component can be compressed to a certain extent, each pixel point in the RGB format can be divided into a plurality of pixel regions first. As shown in fig. 4, an achievable division manner is shown, and each pixel in the RGB format can be divided into 4 pixel regions by using 4 adjacent pixels as one pixel region.
Of course, the figure only exemplifies a dividing mode, and in practical application, other different dividing modes may exist according to the image proportion of the RGB format and the number of included pixel points.
S303, determining the pixel value of each pixel area on the first chrominance component according to the mapping relation between the red, green and blue values of each pixel point in each pixel area and the value of the first chrominance component.
Alternatively, the first chrominance component may refer to the U component in the YUV format. Different pixel regions correspond to U components in the converted YUV format: each of the 4 pixel regions corresponds to one converted U component, so that the 16 pixel points are compressed into 4 U samples.
S304, determining the pixel value of each pixel area on the second chrominance component according to the mapping relation between the red, green and blue values of each pixel point in each pixel area and the value of the second chrominance component.
Alternatively, the second chrominance component may refer to the V component in the YUV format. Similar to the processing of the U component, different pixel regions correspond to V components in the converted YUV format: each of the 4 pixel regions corresponds to one converted V component, so that the 16 pixel points are compressed into 4 V samples.
After the mapping relationship between each pixel point in the RGB format and each pixel point in the YUV format is determined according to the mapping relationship, conversion calculation can be further performed according to the pixel value of each pixel point in the RGB format, and the pixel value of each pixel point in the YUV format is respectively obtained.
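To make the bookkeeping concrete, the sample counts under this 4:2:0-style layout work out as follows (an illustrative sketch; the arithmetic is standard for such layouts and is consistent with Fig. 4, though the patent does not spell it out):

```cpp
// Sample-count arithmetic for the 4:2:0-style mapping described above
// (illustrative; width and height are assumed to be even).
struct Yuv420Layout {
    int ySamples;  // one Y sample per pixel point (1:1 sampling)
    int uSamples;  // one U sample per 2x2 pixel region
    int vSamples;  // one V sample per 2x2 pixel region
    int total;     // equals width * height * 3 / 2
};

Yuv420Layout layoutFor(int width, int height) {
    Yuv420Layout l;
    l.ySamples = width * height;
    l.uSamples = (width / 2) * (height / 2);
    l.vSamples = (width / 2) * (height / 2);
    l.total    = l.ySamples + l.uSamples + l.vSamples;
    return l;
}
// For the 4 x 4 example of Fig. 4: 16 Y + 4 U + 4 V = 24 = 16 * 3 / 2 samples,
// which is why the new single-channel map described later is 1.5 times the
// original height.
```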
In some embodiments, color sampling may be performed by texture2D calls in GLSL to obtain RGB components of each pixel, and then corresponding YUV components may be calculated.
For the calculation of the Y component, the formula Y = 0.95 × (0.299 × R + 0.587 × G + 0.114 × B) can be used; that is, the Y component of each pixel point is calculated from the RGB component values of the corresponding pixel point.
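A minimal fragment-shader sketch of this Y pass, using the texture2D sampling the description mentions; the uniform and varying names are hypothetical, and the 0.95 scale follows the formula as reconstructed above:

```cpp
// Fragment shader for the Y (luminance) pass -- a sketch, not the patent's
// verbatim GLSL. Each output fragment maps 1:1 to a source pixel point.
const char* kYPassFragmentShader = R"(
precision highp float;
uniform sampler2D uSourceTex;  // original RGB rendering map
varying vec2 vTexCoord;        // interpolated texture coordinate

void main() {
    vec3 rgb = texture2D(uSourceTex, vTexCoord).rgb;
    float y = 0.95 * (0.299 * rgb.r + 0.587 * rgb.g + 0.114 * rgb.b);
    gl_FragColor = vec4(y, 0.0, 0.0, 1.0);  // written to a single-channel target
}
)";
```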
Fig. 5 is a third flowchart of the image format conversion method of the graphics processor according to the embodiment of the present application, and optionally, in step S303, determining the pixel value of each pixel region in the first chrominance component according to the mapping relationship between the red, green, and blue values of each pixel point in each pixel region and the value of the first chrominance component may include:
s501, determining a central pixel point of the first pixel area according to the coordinates of each pixel point in the first pixel area, wherein the first pixel area is any one of the pixel areas.
For the calculation of the U components, the central pixel point may be determined according to the coordinates of each pixel point in the pixel region corresponding to each U component.
Taking the first pixel region as shown in fig. 4 as an example, the coordinates of each pixel point in the first pixel region can be directly obtained, and the central pixel point of the first region can be obtained by calculation according to the coordinates of each pixel point.
S502, determining a pixel value of the first pixel area on the first chrominance component according to a first mapping relation between the red, green and blue values of the central pixel point and the value of the first chrominance component.
Alternatively, taking the first pixel region corresponding to the first U component as an example, the formula U = -0.169 × R - 0.331 × G + 0.5 × B + 0.502 may be adopted to determine the pixel value of the first pixel region on the first chrominance component, i.e., to calculate the pixel value of the first U component.
The RGB components in the formula can be used as the RGB components of the central pixel point of the first pixel region. Similarly, for each pixel region, the pixel value of the corresponding U component can be calculated correspondingly.
Fig. 6 is a fourth flowchart illustrating an image format conversion method of a graphics processor according to an embodiment of the present application, and optionally, in step S304, determining a pixel value of each pixel region in the second chrominance component according to a mapping relationship between a red, green, and blue value of each pixel point in each pixel region and a value of the second chrominance component may include:
s601, determining a central pixel point of the first pixel area according to the coordinates of each pixel point in the first pixel area, wherein the first pixel area is any one of the pixel areas.
Similar to the calculation of the U component, for the calculation of the V component, a central pixel point may be determined according to coordinates of each pixel point in a pixel region corresponding to each V component.
S602, determining a pixel value of the first pixel area on the second chrominance component according to a second mapping relation between the red, green and blue values of the central pixel point and the value of the second chrominance component.
Alternatively, taking the first pixel region corresponding to the first V component as an example, the formula V = 0.5 × R - 0.419 × G - 0.081 × B + 0.502 may be adopted to determine the pixel value of the first pixel region on the second chrominance component, i.e., to calculate the pixel value of the first V component.
The RGB components in the formula can be used as the RGB components of the central pixel point of the first pixel region. Similarly, for each pixel region, the pixel value of the corresponding V component can be calculated correspondingly.
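Sketching the chrominance pass in the same style (again with hypothetical names): if the render target for this pass is sized width/2 × height/2, each output fragment covers one 2 × 2 pixel region and its texture coordinate falls at the region center, so the formulas above apply directly to the center pixel's RGB values. For simplicity this sketch packs U and V into two channels; the patent's single-channel layout would instead write them into separate rows of the new map.

```cpp
// Fragment shader for the U/V (chrominance) pass -- an illustrative sketch.
// The render target is assumed to be width/2 x height/2, so vTexCoord falls
// at the center of the current 2x2 pixel region of the source map.
const char* kUvPassFragmentShader = R"(
precision highp float;
uniform sampler2D uSourceTex;  // original RGB rendering map
varying vec2 vTexCoord;        // center of the current 2x2 pixel region

void main() {
    vec3 rgb = texture2D(uSourceTex, vTexCoord).rgb;
    float u = -0.169 * rgb.r - 0.331 * rgb.g + 0.5   * rgb.b + 0.502;
    float v =  0.5   * rgb.r - 0.419 * rgb.g - 0.081 * rgb.b + 0.502;
    gl_FragColor = vec4(u, v, 0.0, 1.0);  // U and V packed into two channels
}
)";
```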
Fig. 7 is a fifth flowchart illustrating an image format conversion method of a graphics processor according to an embodiment of the present application, and optionally, before generating a new rendering map according to a pixel value in a second data format in step S203, the method of the present application may further include:
S701, determining the size parameters of a new rendering map to be created according to the size parameters of the original rendering map, where the size parameters include: the height of the new rendering map to be created, which is a preset multiple of the height of the original rendering map, and the width of the new rendering map to be created, which is the same as the width of the original rendering map.
Taking the image size shown in Fig. 4 as an example, assume the original rendering map is 4 × 4. For the new rendering map to be created, that is, the converted YUV format rendering map, glTexImage2D may be called to create a new rendering map (a single-channel texture object) according to the size of the original rendering map. The size parameters of the new rendering map may include height and width: the height may be 1.5 times the height of the original rendering map, and the width is unchanged.
S702, creating a map template according to the size parameter of the new rendering map to be created.
Because the conversion of the pixel data format has not yet been completed and the corresponding YUV format data has not yet been obtained, the template created at this point is only a map template; it contains no pixel data.
And S703, creating a new cache space, and binding the cache space with the mapping template, wherein the new cache space is used for storing the pixel values in the second data format.
Alternatively, a new cache space may be created through a glGenFramebuffers function call and bound to the map template through a glFramebufferTexture2D function call. The new buffer space is used for storing the converted pixel data, i.e., the pixel values in the converted second data format.
And S704, storing the pixel values of the converted second data format into a new buffer space.
It should be noted that the purpose of binding the new buffer space to the map template is that the converted pixel values in the YUV format can be read directly from the new buffer space and stored into the map template to generate the new rendering map.
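A sketch of steps S701 to S703 using the calls the description names (glTexImage2D, glGenFramebuffers, glFramebufferTexture2D); an OpenGL ES 3.0 context is assumed so that the single-channel GL_R8 format is color-renderable, and the format choice and variable names are assumptions rather than the patent's own:

```cpp
#include <GLES3/gl3.h>

// Create the map template (a single-channel texture with the same width and
// 1.5x the height of the original map) and bind a new cache space to it --
// steps S701 to S703, sketched under the assumptions stated above.
void createYuvTarget(GLsizei srcWidth, GLsizei srcHeight,
                     GLuint* outTex, GLuint* outFbo) {
    // S701/S702: the map template; no pixel data is supplied yet.
    glGenTextures(1, outTex);
    glBindTexture(GL_TEXTURE_2D, *outTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, srcWidth, srcHeight * 3 / 2,
                 0, GL_RED, GL_UNSIGNED_BYTE, nullptr);

    // S703: a new cache space bound to the map template, so that converted
    // second-data-format pixel values rendered into it land in the texture.
    glGenFramebuffers(1, outFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, *outFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, *outTex, 0);
}
```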
Fig. 8 is a sixth schematic flowchart of an image format conversion method of a graphics processor according to an embodiment of the present application, and optionally, in step S203, generating a new rendering map according to a pixel value in a second data format may include:
and S801, reading the pixel value of the second data format from the new buffer space.
Alternatively, the YUV format pixel values converted from the RGB format may be stored in a new buffer space, so that the YUV format pixel values may be read from the new buffer space.
S802, transmitting the pixel values in the second data format into a map template to obtain a new rendering map.
In some embodiments, the read pixel values in the YUV format may be transferred to a corresponding pixel channel in the map template, so as to generate a new rendering map, and the new rendering map may be used for image rendering, so as to render a picture in the YUV data format.
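The description does not name the calls used for S801/S802; one plausible reading, sketched under that assumption, pairs glReadPixels (reading the second-data-format pixel values out of the bound cache space) with glTexSubImage2D (transferring them into the map template). Whether single-channel GL_RED readback is available depends on the GL version and implementation:

```cpp
// S801/S802 sketch: read the converted pixel values back from the new cache
// space and transfer them into the map template. The specific calls are
// assumptions; the patent only describes the read-then-transfer behavior.
void transferToMapTemplate(GLuint fbo, GLuint mapTemplate,
                           GLsizei width, GLsizei yuvHeight,
                           unsigned char* scratch /* width * yuvHeight bytes */) {
    // S801: read the second-data-format pixel values from the cache space
    // (GL_RED readback is assumed to be supported by the implementation).
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glReadPixels(0, 0, width, yuvHeight, GL_RED, GL_UNSIGNED_BYTE, scratch);

    // S802: transfer the pixel values into the map template.
    glBindTexture(GL_TEXTURE_2D, mapTemplate);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, yuvHeight,
                    GL_RED, GL_UNSIGNED_BYTE, scratch);
}
```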
Fig. 9 is a seventh flowchart of an image format conversion method of a graphics processor according to an embodiment of the present application, and optionally, in step S204, after performing image rendering by using a new rendering map and generating a target picture corresponding to a current frame picture, the method of the present application may further include:
s901, respectively generating at least one frame of target picture corresponding to at least one frame of picture.
The above steps can be performed for each frame of picture to convert its pixel format, obtaining a converted target picture corresponding to each frame.
S902, at least one frame of target picture is sent to the user terminal.
Optionally, each obtained target picture may be transmitted to the user terminal in real time, that is, to the streaming media server serving the user terminal, so that multi-frame encapsulation can be performed on the streaming media server side: continuous multi-frame pictures are encapsulated to obtain a video picture, which can then be transmitted over the network and played in the user's browser.
In summary, the image format conversion method of the graphics processor provided in this embodiment includes: acquiring an original rendering map corresponding to the current frame picture from a central processing unit; reading a pixel value in a first data format of each pixel point in the current frame picture from the original rendering map; converting the pixel value in the first data format of each pixel point into a pixel value in a second data format, and generating a new rendering map according to the pixel values in the second data format; and performing rendering processing with the new rendering map to generate a target picture corresponding to the current frame picture. In this method, the original rendering map of the current frame picture is obtained from the central processing unit, and the conversion of the picture's data format is carried out in the graphics processor; based on the graphics processor's strength in floating-point computation and its dedicated capability for parallel graphics computation, the data-format conversion of a large batch of pixel points can be computed in parallel, so that the computational efficiency of picture data format conversion is effectively improved.
The following describes an apparatus, a device, a storage medium, and the like for executing the image format conversion method of the graphics processor provided in the present application, and specific implementation procedures and technical effects thereof are referred to above, and are not described again below.
Fig. 10 is a schematic diagram of an image format conversion apparatus of a graphics processor according to an embodiment of the present application, where functions implemented by the image format conversion apparatus of the graphics processor correspond to steps executed by the foregoing method. The apparatus may be understood as the above-described graphic processor, and as shown in fig. 10, the apparatus may include: the device comprises an acquisition module 110, a reading module 120, a conversion module 130 and a generation module 140;
an obtaining module 110, configured to obtain an original rendering map corresponding to a current frame from a central processing unit;
a reading module 120, configured to read a pixel value in a first data format of each pixel point in a current frame picture from an original rendering map;
the conversion module 130 is configured to convert the pixel value of the first data format of each pixel into a pixel value of a second data format, and generate a new rendering map according to the pixel value of the second data format;
and the generating module 140 is configured to perform rendering processing by using the new rendering map, and generate a target picture corresponding to the current frame picture.
Optionally, the first data format is a red-green-blue format, and the second data format is a luminance-chrominance format; the conversion module 130 is specifically configured to:
determining the pixel value of each pixel point on the brightness component according to the mapping relation between the red, green and blue values and the value of the brightness component;
dividing each pixel point into a plurality of pixel areas;
determining the pixel value of each pixel area on the first chrominance component according to the mapping relation between the red, green and blue value of each pixel point in each pixel area and the value of the first chrominance component;
and determining the pixel value of each pixel area on the second chrominance component according to the mapping relation between the red, green and blue value of each pixel point in each pixel area and the value of the second chrominance component.
Optionally, the conversion module 130 is specifically configured to:
Determining a central pixel point of a first pixel area according to the coordinates of each pixel point in the first pixel area, wherein the first pixel area is any one of the pixel areas;
and determining the pixel value of the first pixel area on the first chrominance component according to the first mapping relation between the red, green and blue values of the central pixel point and the value of the first chrominance component.
Optionally, the conversion module 130 is specifically configured to:
Determining a central pixel point of a first pixel area according to the coordinates of each pixel point in the first pixel area, wherein the first pixel area is any one of the pixel areas;
and determining the pixel value of the first pixel area on the second chrominance component according to the second mapping relation between the red, green and blue values of the central pixel point and the value of the second chrominance component.
Optionally, the apparatus further comprises: a creation module;
the creating module is configured to determine the size parameters of a new rendering map to be created according to the size parameters of the original rendering map, where the size parameters include: the height of the new rendering map to be created, which is a preset multiple of the height of the original rendering map, and the width of the new rendering map to be created, which is the same as the width of the original rendering map;
creating a map template according to the size parameter of the new rendering map to be created;
creating a new cache space, and binding the cache space with the map template, wherein the new cache space is used for storing the pixel values in the second data format;
and storing the pixel values of the second data format obtained by conversion into a new buffer space.
Optionally, the generating module 140 is specifically configured to:
Reading pixel values of a second data format from the new buffer space;
and transmitting the pixel values in the second data format into the map template to obtain a new rendering map.
Optionally, the apparatus further comprises: a sending module;
the generating module 140 is further configured to generate at least one frame of target picture corresponding to at least one frame of picture respectively;
and the sending module is used for sending the at least one frame of target picture to the user terminal.
The above-mentioned apparatus is used for executing the method provided by the foregoing embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
These modules may be one or more integrated circuits configured to implement the above methods, for example: one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), among others. As another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU), or another processor capable of calling program code. As yet another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).
The modules may be connected or in communication with each other via a wired or wireless connection. The wired connection may include a metal cable, an optical cable, a hybrid cable, etc., or any combination thereof. The wireless connection may comprise a connection over a LAN, WAN, bluetooth, ZigBee, NFC, or the like, or any combination thereof. Two or more modules may be combined into a single module, and any one module may be divided into two or more units. It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to corresponding processes in the method embodiments, and are not described in detail in this application.
Fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application, where the electronic device may be the graphics processor described above.
The apparatus may include: a processor 801 and a memory 802.
The memory 802 is used for storing programs, and the processor 801 calls the programs stored in the memory 802 to execute the above-mentioned method embodiments. The specific implementation and technical effects are similar, and are not described herein again.
Wherein the memory 802 stores program code that, when executed by the processor 801, causes the processor 801 to perform various steps in methods according to various exemplary embodiments of the present application described in the "exemplary methods" section above in this description.
The Processor 801 may be a general-purpose Processor, such as a Central Processing Unit (CPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware components, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present Application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
Optionally, the present application also provides a program product, such as a computer readable storage medium, comprising a program which, when being executed by a processor, is adapted to carry out the above-mentioned method embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to perform some of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Claims (10)
1. An image format conversion method for a graphics processor, the method being applied to the graphics processor and comprising:
acquiring an original rendering map corresponding to the current frame picture from a central processing unit;
reading the pixel value of a first data format of each pixel point in the current frame picture from the original rendering map;
converting the pixel value of each pixel point in the first data format into a pixel value in a second data format, and generating a new rendering map according to the pixel value in the second data format;
and performing rendering processing by using the new rendering map to generate a target picture corresponding to the current frame picture.
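For concreteness, the claimed flow can be sketched as follows. This is a minimal illustration rather than the patent's implementation; convert_frame is a hypothetical name, and the helpers rgb_to_yuv420 and make_new_rendering_map are hypothetical as well, sketched under claims 2 and 6 below.

```python
import numpy as np

# Minimal sketch of claim 1 (illustrative only): read the first-format
# (RGB) pixel values of the current frame, convert them to the second
# format, and build the new rendering map for the later rendering pass.
# rgb_to_yuv420 and make_new_rendering_map are hypothetical helpers,
# sketched under claims 2 and 6 below.
def convert_frame(original_rendering_map: np.ndarray) -> np.ndarray:
    rgb = original_rendering_map     # (H, W, 3) uint8, obtained from the CPU
    y, u, v = rgb_to_yuv420(rgb)     # first format -> second format
    return make_new_rendering_map(y, u, v)  # used to render the target picture
```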
2. The method of claim 1, wherein the first data format is a red-green-blue format and the second data format is a luma-chroma format; the converting the pixel value of each pixel point in the first data format into a pixel value in a second data format comprises:
determining the pixel value of each pixel point on the brightness component according to the mapping relation between the red, green and blue values and the value of the brightness component;
dividing the pixel points into a plurality of pixel areas;
determining the pixel value of each pixel area on the first chrominance component according to the mapping relation between the red, green and blue value of each pixel point in each pixel area and the value of the first chrominance component;
and determining the pixel value of each pixel area on the second chrominance component according to the mapping relation between the red, green and blue value of each pixel point in each pixel area and the value of the second chrominance component.
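Claim 2 fixes neither the exact mapping coefficients nor the size of a pixel area. Below is a minimal sketch assuming the common BT.601 full-range coefficients and 2x2 pixel areas, i.e. 4:2:0 chroma subsampling; rgb_to_yuv420 is an illustrative name, not from the patent.

```python
import numpy as np

# Sketch of claim 2 under assumed BT.601 full-range coefficients and
# 2x2 pixel areas (4:2:0 subsampling). rgb: (H, W, 3) uint8, H, W even.
def rgb_to_yuv420(rgb):
    r = rgb[..., 0].astype(np.float32)
    g = rgb[..., 1].astype(np.float32)
    b = rgb[..., 2].astype(np.float32)

    # Luma component: one value per pixel point.
    y = 0.299 * r + 0.587 * g + 0.114 * b

    # Chroma components: computed per pixel, then reduced to one value
    # per 2x2 pixel area (averaged here; claims 3-4 use a central pixel).
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128.0
    h, w = y.shape
    u = u.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    v = v.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    to_u8 = lambda a: np.clip(a + 0.5, 0.0, 255.0).astype(np.uint8)
    return to_u8(y), to_u8(u), to_u8(v)
```

Keeping one U and one V value per 2x2 area stores 1.5 bytes per pixel instead of the 3 bytes of the RGB map, halving the data volume of the converted picture.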
3. The method of claim 2, wherein determining the pixel value of each pixel region at the first chrominance component according to the mapping relationship between the red, green, and blue values of each pixel point in each pixel region and the value of the first chrominance component comprises:
determining a central pixel point of a first pixel area according to coordinates of each pixel point in the first pixel area, wherein the first pixel area is any one of the pixel areas;
and determining the pixel value of the first pixel area on the first chrominance component according to the first mapping relation between the red, green and blue values of the central pixel point and the value of the first chrominance component.
4. The method of claim 2, wherein determining the pixel value of each pixel region at the second chrominance component according to the mapping relationship between the red, green, and blue values of each pixel point in each pixel region and the value of the second chrominance component comprises:
determining a central pixel point of a first pixel area according to coordinates of each pixel point in the first pixel area, wherein the first pixel area is any one of the pixel areas;
and determining the pixel value of the first pixel area on the second chrominance component according to the second mapping relation between the red, green and blue values of the central pixel point and the value of the second chrominance component.
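Claims 3 and 4 replace the per-area aggregation with a single central pixel point. A minimal sketch, again assuming 2x2 areas and BT.601 coefficients; an even-sided area has no exact centre, so the pixel at offset (1, 1) inside each area is assumed here, and chroma_from_center is an illustrative name.

```python
import numpy as np

# Sketch of claims 3 and 4: the first and second chroma components of
# each pixel area are taken from one central pixel point of that area.
def chroma_from_center(rgb, area=2):
    c = area // 2                                    # assumed centre offset
    centers = rgb[c::area, c::area].astype(np.float32)
    r, g, b = centers[..., 0], centers[..., 1], centers[..., 2]
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0   # first chroma (U)
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128.0    # second chroma (V)
    return u, v                                      # one (U, V) per area
```

Compared with averaging, this reads one pixel per area instead of four, trading a little chroma accuracy for fewer memory accesses and multiplications.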
5. The method according to any one of claims 1 to 4, wherein before the generating the new rendering map according to the pixel values of the second data format, the method further comprises:
determining a size parameter of a new rendering map to be created according to the size parameter of the original rendering map, wherein the height of the new rendering map to be created is a preset multiple of the height of the original rendering map, and the width of the new rendering map to be created is the same as the width of the original rendering map;
creating a map template according to the size parameter of the new rendering map to be created;
creating a new cache space and binding the new cache space to the map template, wherein the new cache space is used for storing the pixel values of the second data format;
and storing the converted pixel values of the second data format into the new cache space.
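The preset multiple is left open by the claim. In the sketch below it is assumed to be 1.5, the value required when 4:2:0 data (a full-resolution Y plane plus quarter-resolution U and V planes) is packed into a single one-byte-per-texel map; create_map_template is an illustrative name.

```python
import numpy as np

# Sketch of claim 5: derive the new map's size parameter from the
# original map and allocate the new cache space bound to the template.
# The preset height multiple is assumed to be 1.5 (see lead-in above).
def create_map_template(orig_h, orig_w):
    new_h = orig_h * 3 // 2     # height: preset multiple (assumed 1.5x)
    new_w = orig_w              # width: unchanged
    # This array stands in for the new cache space; on a real GPU this
    # would be a buffer or texture allocation bound to the map template.
    return np.zeros((new_h, new_w), dtype=np.uint8)
```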
6. The method of claim 5, wherein the generating the new rendering map according to the pixel values of the second data format comprises:
reading the pixel values of the second data format from the new cache space;
and transmitting the pixel values of the second data format into the map template to obtain the new rendering map.
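Continuing the same 4:2:0 assumption, the converted values can be read from the cache and laid into the template as stacked planes; make_new_rendering_map is an illustrative name, and H divisible by 4 with W even is assumed so the planes tile exactly.

```python
import numpy as np

# Sketch of claim 6: write the second-format pixel values into the map
# template as stacked Y, U, V planes (H divisible by 4, W even).
def make_new_rendering_map(y, u, v):
    h, w = y.shape
    template = np.empty((h * 3 // 2, w), dtype=np.uint8)
    template[:h] = y                               # Y plane: h rows
    template[h:h + h // 4] = u.reshape(h // 4, w)  # U plane: h/4 rows
    template[h + h // 4:] = v.reshape(h // 4, w)   # V plane: h/4 rows
    return template
```

Together with the earlier sketches, convert_frame then yields a W x 1.5H single-channel map, matching the size parameter assumed under claim 5.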
7. The method according to claim 1, wherein after the rendering processing is performed by using the new rendering map and the target picture corresponding to the current frame picture is generated, the method further comprises:
generating at least one frame of target picture respectively corresponding to at least one frame of picture;
and sending the at least one frame of target picture to a user terminal.
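As a final illustrative composition of the sketches above (all names hypothetical): convert each frame's map, render it, and send the resulting target pictures to the user terminal; render_target and send_to_terminal are placeholders for the rendering pass and the transport actually used.

```python
# Sketch of claim 7: one target picture per frame, sent to the terminal.
def process_frames(frames, render_target, send_to_terminal):
    for original_map in frames:
        new_map = convert_frame(original_map)      # claim 1 sketch
        send_to_terminal(render_target(new_map))   # render, then transmit
```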
8. An image format conversion apparatus for a graphics processor, applied to the graphics processor, the apparatus comprising: an acquisition module, a reading module, a conversion module, and a generation module;
the acquisition module is used for acquiring an original rendering map corresponding to the current frame picture from the central processing unit;
the reading module is used for reading the pixel value of the first data format of each pixel point in the current frame picture from the original rendering map;
the conversion module is used for converting the pixel value of each pixel point in the first data format into a pixel value in a second data format and generating a new rendering map according to the pixel value in the second data format;
and the generating module is used for performing rendering processing by using the new rendering map to generate a target picture corresponding to the current frame picture.
9. An electronic device, comprising: a processor, a storage medium, and a bus, the storage medium storing program instructions executable by the processor, wherein when the electronic device is running, the processor and the storage medium communicate via the bus, and the processor executes the program instructions to perform the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111315545.8A | 2021-11-08 | 2021-11-08 | Image format conversion method, device, equipment and storage medium of graphic processor |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN114040246A | 2022-02-11 |
Family

ID=80143470

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202111315545.8A (pending) | Image format conversion method, device, equipment and storage medium of graphic processor | 2021-11-08 | 2021-11-08 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN114040246A (en) |
Patent Citations (8)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20040189677A1 * | 2003-03-25 | 2004-09-30 | Nvidia Corporation | Remote graphical user interface support using a graphics processing unit |
| CN104244087A * | 2014-09-19 | 2014-12-24 | Qingdao Hisense Mobile Communications Technology Co., Ltd. | Video rendering method and device |
| US20170201758A1 * | 2016-01-08 | 2017-07-13 | Futurewei Technologies, Inc. | Jpeg image to compressed gpu texture transcoder |
| CN106210883A * | 2016-08-11 | 2016-12-07 | Zhejiang Dahua Technology Co., Ltd. | A kind of method of Video Rendering, equipment |
| CN107770618A * | 2017-11-02 | 2018-03-06 | Tencent Technology (Shenzhen) Co., Ltd. | A kind of image processing method, device and storage medium |
| CN108322722A * | 2018-01-24 | 2018-07-24 | Alibaba Group Holding Ltd. | Image processing method, device based on augmented reality and electronic equipment |
| WO2021217428A1 * | 2020-04-28 | 2021-11-04 | Shenzhen Sitan Technology Co., Ltd. | Image processing method and apparatus, photographic device and storage medium |
| CN113096233A * | 2021-06-11 | 2021-07-09 | Tencent Technology (Shenzhen) Co., Ltd. | Image processing method and device, electronic equipment and readable storage medium |
Cited By (5)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115225615A * | 2022-06-30 | 2022-10-21 | Realsee (Beijing) Technology Co., Ltd. | Illusion engine pixel streaming method and device |
| CN115225615B | 2022-06-30 | 2024-02-23 | Realsee (Beijing) Technology Co., Ltd. | Illusion engine pixel streaming method and device |
| CN115766976A * | 2022-11-09 | 2023-03-07 | Shenzhen Nandian Information Engineering Co., Ltd. | Image display system and control method thereof |
| CN115766976B | 2022-11-09 | 2023-10-13 | Shenzhen Nandian Information Engineering Co., Ltd. | Image display system and control method thereof |
| CN116129816A * | 2023-02-06 | 2023-05-16 | Glenfly Intelligent Technology Co., Ltd. | Pixel rendering method, device, computer equipment and storage medium |
Similar Documents

| Publication | Title |
|---|---|
| CN106331850B | Browser live broadcast client, browser live broadcast system and browser live broadcast method |
| CN114040246A | Image format conversion method, device, equipment and storage medium of graphic processor |
| WO2018184468A1 | Image file processing method, device and storage medium |
| US20140092439A1 | Encoding images using a 3D mesh of polygons and corresponding textures |
| JP7359521B2 | Image processing method and device |
| CN113041617B | Game picture rendering method, device, equipment and storage medium |
| CN113096233B | Image processing method and device, electronic equipment and readable storage medium |
| US11882297B2 | Image rendering and coding method and related apparatus |
| JP2006014341A | Method and apparatus for storing image data using MCU buffer |
| CN110782387B | Image processing method and device, image processor and electronic equipment |
| US11263805B2 | Method of real-time image processing based on rendering engine and a display apparatus |
| CN115314617A | Image processing system and method, computer readable medium, and electronic device |
| WO2012109582A1 | System and method for multistage optimized JPEG output |
| CN114501141B | Video data processing method, device, equipment and medium |
| CN109658488B | Method for accelerating decoding of camera video stream through programmable GPU in virtual-real fusion system |
| CN110049347B | Method, system, terminal and device for configuring images on live interface |
| US20180097527A1 | 32-bit HDR pixel format with optimum precision |
| CN114245137A | Video frame processing method performed by GPU and video frame processing apparatus including GPU |
| US20240037701A1 | Image processing and rendering |
| EP3298766A1 | Method and device for processing color image data representing colors of a color gamut |
| KR20100098948A | Image processor, electric device including the same, and image processing method |
| CN116489457A | Video display control method, device, equipment, system and storage medium |
| CN114245027B | Video data hybrid processing method, system, electronic equipment and storage medium |
| CN112862905B | Image processing method, device, storage medium and computer equipment |
| JP2005322233A | Memory efficient method and apparatus for compression encoding large overlaid camera image |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20220211 |