WO2022257750A1 - Image processing method, apparatus, electronic device, program, and readable storage medium - Google Patents

Image processing method, apparatus, electronic device, program, and readable storage medium

Info

Publication number
WO2022257750A1
Authority
WO
WIPO (PCT)
Prior art keywords
texture
storage area
image data
target
encoding format
Prior art date
Application number
PCT/CN2022/094621
Other languages
English (en)
French (fr)
Inventor
曹文升
操伟
陈瑭羲
袁利军
王晓杰
张冲
翟萌
朱星元
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2022257750A1
Priority to US18/299,157 (published as US20230252758A1)

Classifications

    • G06T 11/40 — 2D [Two Dimensional] image generation; filling a planar surface by adding surface attributes, e.g. colour or texture
    • G06T 15/50 — 3D [Three Dimensional] image rendering; lighting effects
    • A63F 13/52 — Video games; controlling the output signals based on the game progress, involving aspects of the displayed game scene
    • G06T 1/20 — General purpose image data processing; processor architectures, processor configuration, e.g. pipelining
    • G06T 1/60 — General purpose image data processing; memory management
    • G06T 15/005 — 3D [Three Dimensional] image rendering; general purpose rendering architectures
    • G06T 15/04 — 3D [Three Dimensional] image rendering; texture mapping
    • G06V 10/54 — Extraction of image or video features relating to texture
    • G06V 10/56 — Extraction of image or video features relating to colour
    • H04N 19/186 — Adaptive coding of digital video signals, characterised by the coding unit being a colour or a chrominance component
    • A63F 2300/538 — Game servers; basic data processing performed on behalf of the game client, e.g. rendering
    • A63F 2300/66 — Methods for processing data by generating or executing the game program for rendering three dimensional images

Definitions

  • The present application relates to the fields of image processing, games, cloud technology, and blockchain, and in particular to an image processing method, apparatus, electronic device, program, and readable storage medium.
  • Image data can be encoded in different color spaces, for example, the red-green-blue RGB color space, the luminance-chrominance YUV color space, and so on.
  • Embodiments of the present application provide an image processing method, device, electronic equipment, program, and readable storage medium, which improve the efficiency of converting image data in a source coding format into target image data in a target coding format.
  • An embodiment of the present application provides an image processing method, executed by an electronic device, the method including:
  • acquiring the image size of the image to be processed and the original image data of the image to be processed, where the color encoding format corresponding to the original image data is the source encoding format; creating a first texture storage area according to the image size and storing the original image data in it; and creating a second texture storage area, for storing the target image data to be generated, according to the image size and the target encoding format; and
  • performing encoding format conversion, by a shader invoked by the graphics processing unit GPU, on the original image data stored in the first texture storage area, so as to generate the target image data corresponding to each texture coordinate of the second texture storage area, and storing the target image data corresponding to each texture coordinate in the corresponding storage location in the second texture storage area.
  • An embodiment of the present application provides an image processing device, which includes:
  • a source data acquisition module, configured to acquire the image size of the image to be processed and the original image data of the image to be processed, where the color encoding format corresponding to the original image data is the source encoding format;
  • a first texture processing module, configured to create a first texture storage area according to the image size and store the original image data of the image to be processed in the first texture storage area;
  • a second texture processing module, configured to create a second texture storage area, for storing the target image data to be generated, according to the above image size and the target encoding format, wherein the encoding format corresponding to the target image data is the target encoding format; and
  • a target data acquisition module, configured to perform encoding format conversion on the original image data stored in the first texture storage area through the shader invoked by the graphics processing unit GPU, so as to generate the target image data corresponding to each texture coordinate of the second texture storage area, and store the target image data corresponding to each texture coordinate in the corresponding storage location in the second texture storage area.
  • An embodiment of the present application provides an electronic device, which includes a processor and a memory connected to each other; the memory is used to store a computer program, and the processor is configured to invoke the computer program to execute the method provided by any possible implementation of the above image processing method.
  • an embodiment of the present application provides a computer-readable storage medium, the computer-readable storage medium stores a computer program, and the computer program is executed by a processor to implement any possible implementation of the above image processing method.
  • An embodiment of the present application provides a computer program product or computer program; the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • The processor of the electronic device reads the computer instructions from the computer-readable storage medium and executes them, so that the electronic device executes the method provided by any possible implementation of the above-mentioned image processing method.
  • FIG. 1 is a schematic structural diagram of an image processing system in an application scenario provided by an embodiment of the present application;
  • FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
  • FIG. 3 is a schematic layout diagram of a second texture storage area provided by an embodiment of the present application;
  • FIG. 4 is a schematic diagram of storing target image data provided by an embodiment of the present application;
  • FIG. 5 is a schematic diagram of texture coordinates of a first texture storage area and a second texture storage area provided by an embodiment of the present application;
  • FIG. 6 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
  • FIG. 7 is a schematic diagram of the principle of image encoding format conversion provided by an embodiment of the present application;
  • FIG. 8 is a schematic diagram of an image encoding format conversion process provided by an embodiment of the present application;
  • FIG. 9 is a schematic structural diagram of an image processing device provided by an embodiment of the present application;
  • FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • The image processing method provided by the embodiments of the present application relates to various fields of cloud technology, such as cloud computing, cloud services, and cloud gaming.
  • Cloud technology refers to a hosting technology that unifies a series of resources, such as hardware, software, and networks, within a wide area network or a local area network to realize the computing, storage, processing, and sharing of data.
  • The image processing method provided in the embodiments of the present application can be implemented based on cloud computing in cloud technology.
  • Cloud computing refers to obtaining the required resources over the network in an on-demand, easily scalable manner; it is the product of the integration of traditional computer and network technologies such as grid computing, distributed computing, parallel computing, utility computing, network storage, virtualization, and load balancing.
  • Cloud gaming, also known as gaming on demand, is an online gaming technology based on cloud computing. Cloud gaming technology enables thin clients with relatively limited graphics processing and data computing capabilities to run high-quality games.
  • In a cloud gaming scenario, the game does not run on the player's game terminal but in the cloud server; the cloud server renders the game scene into a video and audio stream and transmits it to the player's game terminal over the network.
  • The player's game terminal does not need powerful graphics computing and data processing capabilities; it only needs basic streaming media playback capability and the ability to obtain the player's input instructions and send them to the cloud server.
  • In some embodiments, the user terminals and servers (such as cloud game servers) involved may form a blockchain, with the user terminals and servers (such as cloud game servers) being nodes on the chain; the data involved in the image processing method or device of the embodiments of the present application, such as the image data of the image to be processed and the target image data, can be stored on the blockchain.
  • The applicable scenarios of the image processing method in the embodiments of the present application are not limited. In practical applications, the embodiments of the present application can be applied to any application that needs to convert an image encoded in one color space into an image encoded in another color space, including but not limited to game application scenarios; for example, in a game scenario, an image encoded in the RGB color space is converted into an image encoded in the YUV color space.
  • The embodiments of the present application place no limitation on the application scenario from which the image to be processed originates.
  • For example, it may be an image to be processed in a game application scenario.
  • The embodiments of the present application do not limit the specific type of the game application.
  • The game application can be a cloud game or a game that requires the installation of a client; users can experience online games through the user terminal.
  • The client may be a web client, an applet client, or a game client of the game application, which is not limited in the embodiments of the present application.
  • FIG. 1 shows a schematic structural diagram of an image processing system applicable to an application scenario of the embodiments of the present application. It can be understood that the image processing method provided in the embodiments of the present application can be applied to, but is not limited to, the application scenario shown in FIG. 1.
  • In this example, each image of at least one virtual scene image in a cloud game scene is taken as the image to be processed for illustration.
  • The image processing system in this embodiment of the present application may include a user terminal and a server. As shown in FIG. 1, the image processing system in this example may include, but is not limited to, a user terminal 101, a network 102, and a server 103.
  • The user terminal 101 (such as the user's smartphone) can communicate with the server 103 through the network 102, and the server 103 is used to convert the image data of the image to be processed in the source encoding format into the target image data in the target encoding format.
  • Step S11: acquire the image size of the image to be processed and the original image data of the image to be processed, where the color encoding format corresponding to the original image data of the image to be processed is the source encoding format.
  • Step S12: create a first texture storage area according to the image size, and store the original image data of the image to be processed in the first texture storage area.
  • Step S13: create a second texture storage area, for storing the target image data to be generated, according to the image size and the target encoding format, wherein the encoding format corresponding to the target image data is the target encoding format.
  • Step S14: the shader invoked by the graphics processing unit GPU converts, in a parallel computing manner, the encoding format of the original image data stored in the first texture storage area, so as to generate the target image data corresponding to each texture coordinate of the second texture storage area, and stores the target image data corresponding to each texture coordinate in the corresponding storage location in the second texture storage area.
  • In step S14, the original image data of the image to be processed stored in the first texture storage area is taken as the sampling source, and the second texture storage area is taken as the target to be rendered; the shader invoked by the graphics processing unit GPU resamples (that is, performs encoding format conversion on) the image data at each coordinate position in the first texture storage area in parallel, generates the target image data corresponding to each texture coordinate of the second texture storage area, and stores the target image data corresponding to each texture coordinate in the corresponding storage location in the second texture storage area.
  • Step S15: read the target image data from the second texture storage area corresponding to each of the at least one virtual scene image; perform image encoding processing on the read target image data to obtain a video stream; and send the video stream to the user terminal 101 through the network 102.
  • Step S16: the user terminal 101 receives the video stream sent by the server 103 and plays the video stream.
  • The execution body of step S11 to step S14 is the server 103.
  • The server can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server or server cluster providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (content delivery network), and big data and artificial intelligence platforms.
  • The above-mentioned network may include, but is not limited to, a wired network and a wireless network, where the wired network includes a local area network, a metropolitan area network, and a wide area network, and the wireless network includes Bluetooth, Wi-Fi, and other networks realizing wireless communication.
  • The user terminal can be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a notebook computer, a digital broadcast receiver, a MID (Mobile Internet Device), a PDA (personal digital assistant), a desktop computer, a vehicle-mounted terminal (such as a car navigation terminal), a smart speaker, a smart watch, and the like. The user terminal and the server may be directly or indirectly connected through wired or wireless communication, but are not limited thereto; the details can be determined based on actual application scenario requirements and are not limited here.
  • FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • The method can be executed by an electronic device, for example, a user terminal or a server, or a system or blockchain including a user terminal and a server.
  • For ease of description, the following takes the case where the electronic device is a server as an example.
  • the image processing method provided by the embodiment of the present application includes the following steps:
  • Step S201: acquire the image size of the image to be processed and the original image data of the image to be processed, where the color encoding format corresponding to the original image data of the image to be processed is the source encoding format.
  • Step S202: create a first texture storage area according to the image size, and store the original image data of the image to be processed in the first texture storage area.
  • Step S203: create a second texture storage area, for storing the target image data to be generated, according to the image size and the target encoding format, wherein the encoding format corresponding to the target image data is the above-mentioned target encoding format.
  • Step S204: through the shader invoked by the graphics processing unit GPU, perform encoding format conversion on the original image data stored in the first texture storage area to generate the target image data corresponding to each texture coordinate of the second texture storage area, and store the target image data corresponding to each texture coordinate in the corresponding storage location in the second texture storage area.
  • That is, the original image data of the image to be processed stored in the above-mentioned first texture storage area is used as the sampling source (that is, the data whose encoding format is to be converted), and the second texture storage area is used as the target to be rendered; the shader invoked by the graphics processing unit GPU calculates the target image data corresponding to each texture coordinate, and stores the target image data corresponding to each texture coordinate in the corresponding storage location in the second texture storage area.
  • the image to be processed may be an image acquired in various scenarios, which is not limited in this embodiment of the present application.
  • the image to be processed may be an image obtained by calling a virtual camera through a game engine in a game scene.
  • the image size of the image to be processed may be set as required, and this embodiment of the present application does not make any limitation here.
  • the size of the image to be processed may be Width*Height, where Width represents width and Height represents height.
  • the color coding format corresponding to the image data of the image to be processed is the source coding format.
  • The embodiments of the present application do not limit the specific source encoding format; it can be the encoding format corresponding to various forms of color space.
  • For example, the source encoding format can be the RGB encoding format.
  • The data structure of a texture storage area (that is, of the first texture storage area and the second texture storage area) can be, for example, a two-dimensional array, and the elements stored in the texture storage area are color values. Taking the first texture storage area as an example, its elements are the color values of each pixel of the image to be processed. Individual color values are called texture elements or texels. Each texel has a unique address in the texture; this address can be thought of as a column and row value, denoted by U and V respectively.
  • Texture coordinates are commonly referred to as UV coordinates, which can be understood as percentage coordinates of the image.
  • The coordinate in the horizontal direction is called the U coordinate, and the coordinate in the vertical direction is called the V coordinate; the value range of both is [0, 1], independent of the texture size and the texture aspect ratio, so UV coordinates are relative coordinates.
  • When the texture image is the image to be processed, the texture coordinates correspond to the image coordinates in the image to be processed.
  • For example, if the width and height of the image to be processed are W and H respectively and the lower left corner of the image is the origin of the image, then the texture coordinate (0, 0) corresponds to the origin of the image, and the texture coordinate (0.5, 0.5) corresponds to the pixel coordinate (0.5W, 0.5H) of the image to be processed.
  • The texture coordinates in the embodiments of the present application refer to the texture coordinates corresponding to the coordinates of the pixel whose pixel value is to be applied to a primitive when the image to be processed is used as the texture image; for example, if the pixel value at the above-mentioned pixel coordinate (0.5W, 0.5H) needs to be applied to a primitive, the texture coordinate corresponding to the pixel coordinate (0.5W, 0.5H) is (0.5, 0.5).
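  • The following is a minimal HLSL sketch of this mapping between relative UV coordinates and absolute pixel coordinates (illustrative only; the function names are hypothetical, not from the patent):

        // Convert a UV coordinate in [0, 1] x [0, 1] to an absolute pixel
        // coordinate, given the texture size in pixels.
        // Example: (0.5, 0.5) with size (W, H) yields (0.5W, 0.5H).
        float2 UvToPixel(float2 uv, float2 sizeInPixels)
        {
            return uv * sizeInPixels;
        }

        // The inverse mapping: an absolute pixel coordinate back to UV.
        float2 PixelToUv(float2 pixel, float2 sizeInPixels)
        {
            return pixel / sizeInPixels;
        }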
  • The first texture storage area (also referred to as an RGB texture storage area) has a size of Width*Height and is used to store the original image data of the image to be processed.
  • The original image data of the image to be processed can be written into the first texture storage area, that is, the original image data of the image to be processed is stored in the first texture storage area.
  • Then, a second texture storage area is created.
  • For example, when the target encoding format is the YUV encoding format, the width of the second texture storage area can be the same as the width of the image size, and the height of the second texture storage area can be 1.5 times the height of the image size; that is, the size of the second texture storage area is Width*1.5Height.
  • The second texture storage area is used to store the target image data to be generated, and the encoding format of the target image data is the target encoding format.
  • The embodiments of the present application do not limit the target encoding format; for example, the target encoding format may be the YUV encoding format.
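  • As a quick check of this sizing (a worked example based on the 4:2:0 layout described below, where every 2x2 block of pixels shares one U sample and one V sample): the Y area needs Width*Height samples, and the U and V areas each need (Width/2)*(Height/2) = 0.25*Width*Height samples, so the total is Width*Height + 2*0.25*Width*Height = 1.5*Width*Height, which is exactly a texture of size Width*1.5Height. For example, a 1920*1080 image yields a 1920*1620 second texture storage area.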
  • In some embodiments, performing encoding format conversion on the original image data stored in the first texture storage area through the shader invoked by the graphics processing unit GPU, so as to generate the target image data corresponding to each texture coordinate of the second texture storage area, includes:
  • the shader invoked by the graphics processing unit GPU performs encoding format conversion on the original image data stored in the first texture storage area in a parallel computing manner, so as to obtain the target image data corresponding to each texture coordinate.
  • That is, the original image data of the image to be processed stored in the first texture storage area is used as the sampling source; here, the sampling source means that the image data of the image to be processed serves as the image data whose encoding format is to be converted. The second texture storage area is used as the target to be rendered, and the shader invoked by the graphics processing unit GPU obtains, in a parallel computing manner, the target image data in the target encoding format corresponding to each texture coordinate, and stores the target image data in the corresponding storage location in the second texture storage area.
  • Here, a shader refers to code or modules written in a shader (Shader) language of the game engine, such as CG or the High Level Shader Language (HLSL for short), used to describe the way objects are rendered. Shader functions are supported by most mainstream game engines.
  • CG is a general term for graphics drawn by computer software, and its spread has formed a series of related industries that use computers as the main tool for visual design and production.
  • The main function of the high-level shader language HLSL is to quickly and efficiently complete complex image processing on the graphics card.
  • In the field of computer graphics, a shader refers to a set of instructions used by graphics resources when performing rendering tasks, to calculate the color or shading of an image.
  • In the embodiments of the present application, a first texture storage area for storing the original image data of the image to be processed is created according to the image size of the image to be processed, and a second texture storage area for storing the target image data is created according to the image size and the target encoding format.
  • Then, the original image data of the image to be processed stored in the first texture storage area is used as the sampling source, and the second texture storage area is used as the target to be rendered; the target image data corresponding to each texture coordinate is obtained through parallel calculation by the shader invoked by the Graphics Processing Unit (GPU for short), and the target image data corresponding to each texture coordinate is stored in the corresponding storage location in the second texture storage area.
  • In this way, the original image data of the image to be processed in the source encoding format is converted into the target image data in the target encoding format; because the original image data is stored by creating a texture, the GPU can process the to-be-converted original image data in parallel by invoking the shader, avoiding pixel-by-pixel calculation, quickly completing the image encoding format conversion, and improving the efficiency of converting the image data of the image to be processed in the source encoding format into the target image data in the target encoding format.
  • In some embodiments, the above-mentioned source encoding format is the red-green-blue RGB encoding format, and the above-mentioned target encoding format is the luminance-chrominance YUV encoding format.
  • The above-mentioned second texture storage area includes a first storage area for storing luminance components and a second storage area for storing chrominance components, wherein the first storage area and the second storage area are contiguous; each luminance component stored in the first storage area corresponds to a first chroma component and a second chroma component stored in the second storage area, and the target image data of the first chroma component and the target image data of the second chroma component are stored contiguously in the second storage area.
  • In some embodiments, the source encoding format may be the red-green-blue RGB encoding format.
  • In the RGB encoding format, each pixel of an image is composed of three components: red, green, and blue.
  • The target encoding format can be the luminance-chrominance YUV encoding format, which can perform lossy compression on an image in the RGB encoding format to reduce the occupied space, and can be used in the video encoding process.
  • The second texture storage area includes a first storage area and a second storage area. As shown in FIG. 3, the first storage area is the storage area for the luminance component (that is, the Y component of the YUV encoding format), and the second storage area is the storage area for the chrominance components (that is, the U component and the V component), namely the U area and the V area shown in FIG. 3.
  • When using the shader to convert the encoding format, it is necessary to ensure that the first storage area and the second storage area are contiguous, wherein the target image data of the first chroma component (i.e., the U component) and the target image data of the second chroma component (i.e., the V component) are stored contiguously in the second storage area.
  • The Y area, U area, and V area are contiguous when storing data: the Y area is used to store the color values of the color component Y, namely Y1 to Y24; the U area is used to store the color values of the color component U, namely U1 to U6; and the V area is used to store the color values of the color component V, namely V1 to V6.
  • In the YUV 4:2:0 sampling format, every 4 Y components share a set of UV components; among them, Y1, Y2, Y7, and Y8 share U1 and V1; Y3, Y4, Y9, and Y10 share U2 and V2; Y5, Y6, Y11, and Y12 share U3 and V3; Y13, Y14, Y19, and Y20 share U4 and V4; Y15, Y16, Y21, and Y22 share U5 and V5; and Y17, Y18, Y23, and Y24 share U6 and V6.
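  • This sharing pattern can be summarized by a small index formula (an illustrative HLSL sketch assuming the row-major, 6-samples-wide luminance grid above; the function name is hypothetical):

        // For the luminance sample at (col, row) in a grid 'width' samples wide,
        // return the 0-based index of the shared chroma sample (row-major).
        // Example with width = 6: Y1 (0,0), Y2 (1,0), Y7 (0,1), Y8 (1,1)
        // all yield index 0, i.e. they share U1 and V1.
        uint SharedChromaIndex(uint col, uint row, uint width)
        {
            return (row / 2) * (width / 2) + (col / 2);
        }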
  • Storing the target image data of the first chroma component and the target image data of the second chroma component contiguously in the second storage area is largely compatible with the working mode of the shader, improving adaptability.
  • In some embodiments, the size of the first storage area is the same as the image size; the second storage area includes a first sub-area corresponding to the first chrominance component and a second sub-area corresponding to the second chrominance component; the sizes of the first sub-area and the second sub-area are the same; the aspect ratios of the first storage area, the first sub-area, and the second sub-area are the same; and the widths of the first sub-area and the second sub-area are determined by the target encoding format.
  • The embodiments of the present application do not limit the sizes of the first storage area and the second storage area of the second texture storage area.
  • For example, the layout of the second texture storage area is as shown in FIG. 3: the first storage area is the Y area shown in the figure, and the Y area is used to store the color values corresponding to the luminance component (that is, the Y component).
  • The first storage area may have the same size as the image size of the image to be processed.
  • The second storage area includes a first sub-area and a second sub-area, wherein the first sub-area is the U area shown in the figure and the second sub-area is the V area shown in the figure; the U area is used to store the color values corresponding to the first chroma component (i.e., the U component), and the V area is used to store the color values corresponding to the second chroma component (i.e., the V component).
  • The U area and the V area have the same size. In practical applications, it is necessary to ensure that the aspect ratios of the first storage area, the first sub-area, and the second sub-area are the same, that is, the aspect ratios of the Y area, the U area, and the V area are the same.
  • When the target encoding format is the YUV encoding format, the widths of the first sub-area and the second sub-area are both 1/2 of the width of the first storage area; that is, the widths of the U area and the V area are both 1/2 of the width of the Y area.
  • Laying out the first storage area and the second storage area in the above manner can adapt to the working mode of the shader to a great extent, improving adaptability.
  • In some embodiments, performing encoding format conversion on the original image data stored in the first texture storage area through the above-mentioned shader invoked by the graphics processing unit GPU, so as to generate the target image data corresponding to each texture coordinate of the second texture storage area, and storing the target image data corresponding to each texture coordinate in the corresponding storage location in the second texture storage area, includes:
  • the GPU invokes the shader to perform the following operations for each texture coordinate to obtain the corresponding target image data: determining the first storage location corresponding to the texture coordinate in the second texture storage area; determining, according to the storage location correspondence between the first texture storage area and the second texture storage area, the second storage location corresponding to the texture coordinate in the first texture storage area; and calculating the target image data corresponding to the texture coordinate according to the original image data of the image to be processed corresponding to the second storage location, and storing the target image data in the first storage location.
  • In some embodiments, the shader invoked by the graphics processing unit GPU may perform encoding format conversion on the original image data stored in the first texture storage area in a parallel computing manner, so as to obtain the target image data corresponding to each texture coordinate.
  • That is, the original image data of the image to be processed stored in the first texture storage area is used as the sampling source, and the second texture storage area is used as the target to be rendered; the shader can be invoked by the GPU to determine the target image data corresponding to each texture coordinate at one time through parallel computing.
  • The processing of any one texture coordinate is taken as an example below.
  • The shader invoked by the graphics processing unit GPU determines the first storage location corresponding to the texture coordinate in the second texture storage area, and then, according to the storage location correspondence, determines the second storage location corresponding to the texture coordinate in the first texture storage area; that is, it first determines the first storage location where the current texture coordinate is located in the second texture storage area, and then determines the second storage location corresponding to that first storage location in the first texture storage area.
  • Then, according to the original image data of the image to be processed corresponding to the second storage location, the target image data corresponding to the texture coordinate is calculated and stored in the first storage location.
  • For example, as shown in FIG. 5, the size of the first texture storage area is Width*Height and the size of the second texture storage area is Width*1.5Height, where Width represents the image width and Height represents the image height.
  • The part shown on the left side of FIG. 5 gives the coordinates of the storage locations of the first texture storage area, expressed as texture coordinates (that is, UV coordinates whose values range over [0, 1]). For the first texture storage area, the texture coordinate (0, 0) represents the position where the actual width is 0 and the actual height is 0; (1, 0) represents the position where the actual width is Width and the actual height is 0; (0, 1) represents the position where the actual width is 0 and the actual height is Height; and (1, 1) represents the position where the actual width is Width and the actual height is Height.
  • The part shown on the right side of FIG. 5 gives the coordinates of the storage locations of the second texture storage area. For the second texture storage area, the texture coordinate (0, 0) represents the position where the actual width is 0 and the actual height is 0; (1, 0) represents the position where the actual width is Width and the actual height is 0; (0, 1) represents the position where the actual width is 0 and the actual height is 1.5Height; and (1, 1) represents the position where the actual width is Width and the actual height is 1.5Height.
  • Taking the texture coordinate (0, 1/3) as an example: the 0 means that the ratio of the width of the position to be rendered to the width of the second texture storage area is 0, and the 1/3 means that the ratio of the height of the current position to be rendered to the height of the second texture storage area is 1/3. That is, the texture coordinate (0, 1/3) represents the position of the lower left corner of the Y area of the second texture storage area (that is, point 1 shown in the figure), which is the first storage location. According to the storage location correspondence between the first texture storage area and the second texture storage area, the second storage location corresponding to this first storage location is the position (0, 0) in the first texture storage area (that is, point 2 shown in the figure).
  • In FIG. 5, the size of the first texture storage area is different from the size of the second texture storage area, but the size of the first texture storage area is the same as the size of the first storage area in the second texture storage area. In practical applications, the size of the first texture storage area and the size of the first storage area in the second texture storage area may also be different; in that case, it is necessary to ensure that the first texture storage area and the first storage area in the second texture storage area are processed under the same size ratio, so that the storage locations correspond to each other.
  • In the embodiments of the present application, the image data to be converted is processed by the GPU invoking the shader, which avoids pixel-by-pixel calculation and can quickly complete the image conversion, improving the efficiency of converting the image data of the image to be processed in the source encoding format into the target image data in the target encoding format.
  • In some embodiments, determining the second storage location corresponding to the texture coordinate in the first texture storage area includes: determining the target storage area to which the first storage location belongs, where the target storage area is one of the first storage area and the second storage area; and calculating the target image data corresponding to the texture coordinate includes: calculating the target image data corresponding to the texture coordinate by using the image data conversion method corresponding to the target storage area.
  • In some embodiments, determining the first storage location corresponding to the texture coordinate in the second texture storage area, and determining, according to the storage location correspondence, the second storage location corresponding to the texture coordinate in the first texture storage area, can be implemented in the following manner: the storage area to which the first storage location corresponding to the texture coordinate belongs is marked as the target storage area; then, according to the conversion method corresponding to the target storage area and the storage location correspondence, the texture coordinate is converted into a texture coordinate corresponding to the first texture storage area, and the second storage location of the converted texture coordinate in the first texture storage area is determined.
  • The second storage location corresponding to the converted texture coordinate in the first texture storage area corresponds to the first storage location corresponding to the pre-conversion texture coordinate in the second texture storage area.
  • That is, the current texture coordinate corresponds to the first storage location in the second texture storage area; the current texture coordinate is converted according to the mapping relationship between the first texture storage area and the second texture storage area (that is, the storage location correspondence) to obtain the converted texture coordinate, and the converted texture coordinate corresponds to the second storage location in the first texture storage area. Therefore, the second storage location of the first texture storage area corresponds to the first storage location of the second texture storage area.
  • Each texture coordinate is used to represent a position of the current object to be rendered (that is, the second texture storage area).
  • The axis representing the horizontal direction in the texture coordinates is taken as the X axis, and the coordinate in the X axis direction is marked as the abscissa X; the axis representing the vertical direction is taken as the Y axis, and the coordinate in the Y axis direction is marked as the ordinate Y.
  • The actual size of the first texture storage area (that is, the image size) is Width*Height, and the actual size of the second texture storage area is Width*1.5Height.
  • The texture coordinates are represented by UV coordinates, and the range of UV coordinates is [0, 1].
  • According to the ordinate Y of the texture coordinate UV1 and a first threshold (such as 1/3), it is determined which area of the second texture storage area the current texture coordinate belongs to: if the ordinate Y is greater than or equal to the first threshold, the texture coordinate belongs to the Y area.
  • According to the storage location correspondence and the conversion method corresponding to the Y area, the converted texture coordinate UV2 is obtained; the second storage location corresponding to UV2 in the first texture storage area (the RGB texture storage area) corresponds to the first storage location corresponding to the pre-conversion texture coordinate UV1 in the second texture storage area (the YUV texture storage area).
  • For example, if the current texture coordinate is (0.2, 0.5): since the ordinate Y of the current texture coordinate (i.e., 0.5) is greater than the first threshold 1/3, the current texture coordinate belongs to the Y area of the YUV texture; according to the conversion method corresponding to the Y area, subtract 1/3 from the ordinate 0.5 and multiply the result by 3/2, keeping the abscissa 0.2 unchanged, and the converted texture coordinate (0.2, 1/4) is obtained. The storage location corresponding to the texture coordinate (0.2, 1/4) in the RGB texture storage area corresponds to the storage location corresponding to the texture coordinate (0.2, 0.5) in the YUV texture storage area.
  • The texture coordinates are represented by UV coordinates, the range of UV coordinates is [0, 1], and the texture coordinates follow the convention of (0, 0) in the lower left corner and (1, 1) in the upper right corner.
  • If the ordinate Y is smaller than the first threshold (such as 1/3) and the abscissa X is smaller than a second threshold (such as 1/2), the texture coordinate belongs to the U area. According to the storage location correspondence and the conversion method corresponding to the U area, multiply the abscissa X of the current texture coordinate by 2 and the ordinate Y by 3 to obtain the converted texture coordinate UV2; the second storage location corresponding to UV2 in the first texture storage area corresponds to the first storage location corresponding to UV1 in the second texture storage area.
  • For example, if the current texture coordinate is (0.1, 0.2): the ordinate Y (i.e., 0.2) is smaller than the first threshold 1/3, and the abscissa X (i.e., 0.1) is smaller than the second threshold 1/2, so the texture coordinate belongs to the U area; the converted texture coordinate (0.2, 0.6) is obtained, and the storage location corresponding to the texture coordinate (0.2, 0.6) in the first texture storage area corresponds to the first storage location corresponding to the texture coordinate (0.1, 0.2) in the second texture storage area.
  • The texture coordinates are represented by UV coordinates, and the range of UV coordinates is [0, 1].
  • According to the ordinate Y of UV1 and the first threshold (such as 1/3), it is determined which area of the second texture storage area the current texture coordinate UV1 belongs to: if the ordinate Y is smaller than the first threshold, the texture coordinate belongs to the second storage area of the second texture storage area, and a further judgment is made according to the abscissa X of UV1 and the second threshold (such as 1/2): if the abscissa X is greater than or equal to the second threshold, the texture coordinate belongs to the V area. According to the storage location correspondence and the conversion method corresponding to the V area, subtract 1/2 from the abscissa X and multiply the result by 2, and multiply the ordinate Y by 3, to obtain the converted texture coordinate UV2; the second storage location corresponding to UV2 in the first texture storage area corresponds to the first storage location corresponding to UV1 in the second texture storage area.
  • For example, if the current texture coordinate is (0.8, 0.2): the ordinate Y (i.e., 0.2) is smaller than the first threshold 1/3, and the abscissa X (i.e., 0.8) is greater than the second threshold 1/2, so the texture coordinate belongs to the V area; the converted texture coordinate (0.6, 0.6) is obtained, and the storage location corresponding to the texture coordinate (0.6, 0.6) in the first texture storage area corresponds to the first storage location corresponding to the texture coordinate (0.8, 0.2) in the second texture storage area.
  • In the shader code, i.uv.y represents the ordinate Y in the texture coordinates, and i.uv.x represents the abscissa X in the texture coordinates.
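  • Putting the three cases together, the region test and coordinate remapping described above can be sketched in HLSL as follows (an illustrative reconstruction consistent with the worked examples above, not the patent's verbatim shader code; the function name is hypothetical):

        // Map a UV coordinate of the YUV render target (Width x 1.5*Height)
        // to the UV coordinate of the RGB source texture (Width x Height),
        // and report which region the coordinate falls in (0 = Y, 1 = U, 2 = V).
        // The UV origin is the lower-left corner, per the convention above.
        float2 MapYuvUvToRgbUv(float2 uv, out int region)
        {
            if (uv.y >= 1.0 / 3.0)        // top two thirds: Y area
            {
                region = 0;
                return float2(uv.x, (uv.y - 1.0 / 3.0) * 3.0 / 2.0);
            }
            else if (uv.x < 1.0 / 2.0)    // bottom-left quarter: U area
            {
                region = 1;
                return float2(uv.x * 2.0, uv.y * 3.0);
            }
            else                          // bottom-right quarter: V area
            {
                region = 2;
                return float2((uv.x - 1.0 / 2.0) * 2.0, uv.y * 3.0);
            }
        }

  • This reproduces the worked examples above: (0.2, 0.5) maps to (0.2, 1/4) in the Y area, (0.1, 0.2) maps to (0.2, 0.6) in the U area, and (0.8, 0.2) maps to (0.6, 0.6) in the V area.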
  • After the second storage location is determined, the original image data of the image to be processed corresponding to the second storage location, that is, the color value corresponding to the second storage location, can be obtained; the color value is then calculated using the image data conversion formula corresponding to the target storage area to which the texture coordinate belongs, to obtain the target image data corresponding to the texture coordinate.
  • When the image data of the image to be processed is applied to the field of video encoding, since the video encoder requires video input in the YUV 4:2:0 format, the YCbCr 4:2:0 sampling format can be used.
  • Here, BT refers to the broadcasting service (television) standards, and YCbCr is a scaled and offset version of YUV.
  • Y in YCbCr has the same meaning as Y in YUV, and Cb and Cr both refer to color: Y refers to the luminance component, Cb refers to the blue chrominance component, and Cr refers to the red chrominance component.
  • The image data conversion formulas are the formulas for converting the RGB encoding format into the YUV encoding format, that is, the formulas for conversion between YCbCr and RGB, mainly including three formulas, namely the Y formula, the U formula, and the V formula, as follows:
  • Y = 0.257*R + 0.504*G + 0.098*B + 16
  • U = -0.148*R - 0.291*G + 0.439*B + 128
  • V = 0.439*R - 0.368*G - 0.071*B + 128
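  • Combining the coordinate mapping with these formulas, the conversion pass can be sketched as the following HLSL fragment shader (illustrative only: it reuses the hypothetical MapYuvUvToRgbUv helper above, assumes the sampler _MainTex is bound to the RGB source texture, assumes the sampled channels are scaled to [0, 255] before the formulas are applied, and writes to a single-channel (e.g. R8) render target):

        sampler2D _MainTex;   // sampling source: the RGB texture

        struct v2f
        {
            float4 pos : SV_POSITION;
            float2 uv  : TEXCOORD0;   // texture coordinate of the render target
        };

        float frag(v2f i) : SV_Target
        {
            int region;
            float2 srcUv = MapYuvUvToRgbUv(i.uv, region);
            float3 rgb = tex2D(_MainTex, srcUv).rgb * 255.0;  // to [0, 255]

            float value;
            if (region == 0)        // Y area: Y formula
                value = 0.257 * rgb.r + 0.504 * rgb.g + 0.098 * rgb.b + 16.0;
            else if (region == 1)   // U area: U formula
                value = -0.148 * rgb.r - 0.291 * rgb.g + 0.439 * rgb.b + 128.0;
            else                    // V area: V formula
                value = 0.439 * rgb.r - 0.368 * rgb.g - 0.071 * rgb.b + 128.0;

            return value / 255.0;   // normalize for the single-channel target
        }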
  • In some embodiments, the image to be processed is a virtual scene image in a game scene, and the electronic device is a user terminal; after the target image data is generated, the method further includes:
  • reading the target image data from the second texture storage area, converting it into image data to be displayed corresponding to the source encoding format, and displaying the virtual scene image based on the image data to be displayed.
  • The game scene may be an ordinary game scene or a cloud game scene, which is not limited here.
  • In this case, the image to be processed is a virtual scene image in the game scene.
  • If the encoding format of the virtual scene image is the RGB encoding format and an image in the RGB encoding format needs to be displayed on the target terminal, the virtual scene image can be displayed on the target terminal in the following manner.
  • Based on the foregoing, the original image data of the image to be processed in the source encoding format (i.e., the virtual scene image data in the RGB encoding format) can be converted into the target image data in the target encoding format (i.e., the image data in the YUV encoding format); that is to say, an RGB image in the game scene can be converted into a YUV image.
  • Then, the target image data is read from the second texture storage area, that is, the YUV image data is read from the second texture storage area; the read YUV image data is converted into image data to be displayed corresponding to the source encoding format (i.e., the RGB encoding format), and the virtual scene image (i.e., the game screen) is displayed based on the image data to be displayed.
  • For the method of converting YUV image data into RGB image data, reference may be made to the method of converting RGB image data into YUV image data described above, performed in reverse, which will not be repeated here.
  • In this way, the target image data in the target encoding format can be converted into the image data to be displayed in the source encoding format as required; the encoding format of an image can be flexibly converted as needed to meet various needs and improve applicability.
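  • For reference, the inverse direction (YUV back to RGB) corresponding to the formulas above is commonly written as follows (one common rounding of the standard coefficients; these equations are not reproduced from the patent itself):
  • R = 1.164*(Y - 16) + 1.596*(V - 128)
  • G = 1.164*(Y - 16) - 0.392*(U - 128) - 0.813*(V - 128)
  • B = 1.164*(Y - 16) + 2.017*(U - 128)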
  • In some embodiments, the image to be processed is each image of at least one virtual scene image in a cloud game scene, and the electronic device is a cloud game server; in this case, the above method also includes the following processing.
  • In practical applications, when the cloud game server encodes the scene pictures in the game into a video stream (for example, a video stream whose digital video compression format is the H.264 format), image data in the YUV encoding format is required in this process, while the scene pictures in the game are original image data in the RGB encoding format; therefore, it is necessary to convert the original image data in the RGB encoding format into target image data in the YUV encoding format.
  • In this case, the image to be processed is each image of at least one virtual scene image in the cloud game scene, and the encoding format of each virtual scene image is the RGB encoding format.
  • Through the foregoing processing, the target image data stored in the second texture storage area corresponding to each virtual scene image is obtained; the target image data can also be understood as the rendered game screen. Then, the cloud game server reads the target image data from the second texture storage area corresponding to each virtual scene image and performs image encoding processing, that is, video compression processing, on the read target image data to obtain a video stream, wherein each virtual scene image corresponds to one frame of the video stream. The video stream is sent to the user terminal; the user terminal does not need any high-end processor or graphics card, but only basic video decompression capability, that is, the user terminal only needs to decompress and play the received video stream.
  • In a cloud game scenario, the cloud game server can compress the rendered game screens to obtain a video stream and transmit the video stream to the user terminal through the network.
  • The user terminal does not need any high-end processor or graphics card; basic video decompression capability is enough to play the video stream sent by the cloud game server.
  • The cloud game server can use the image processing method of the embodiments of the present application to convert the encoding format more efficiently, speeding up the processing of the game video stream and reducing the computing pressure on the server's CPU, which greatly increases the carrying capacity of cloud games, optimizes the game experience, and reduces the cost of cloud game servers.
  • The following takes a cloud game scene as an example for detailed description.
  • When the cloud server encodes the scene pictures in the game into a video stream, since the scene pictures in the game are RGB-encoded images, the image processing method of the embodiments of the present application can be used to convert the RGB-encoded images into YUV-encoded images, as shown in FIG. 6.
  • The detailed process is as follows:
  • Step 601: create a render texture (RenderTexture) storage area corresponding to image data in the RGB encoding format.
  • This texture storage area is denoted RT_RGB; RT_RGB is the first texture storage area described above and is used to store the game screen captured by the scene camera of the game engine.
  • For example, the game engine may be Unity, and the game screen is image data in the RGB encoding format, that is, the original image data of the image to be processed described above.
  • The size of the storage area RT_RGB is consistent with the image size of the game screen; for example, the size of the storage area RT_RGB and the image size of the game screen are both Width*Height. It can be understood that the embodiments of the present application do not limit the image size of the game screen, which can be determined according to actual needs, for example, adapted to the screen size of the corresponding user terminal. In one embodiment, the image size of the game screen is 1920*1080.
  • The embodiments of the present application do not limit the format of the RT_RGB storage area; for example, the format of the RT_RGB storage area may be BGRA32, ARGB32, or the like.
  • Step 602: create a render texture (RenderTexture) storage area to store target image data in the YUV encoding format.
  • This texture storage area is denoted RT_YUV; RT_YUV is the second texture storage area described above and is used to store the target image data in the YUV encoding format converted from the image data in the RGB encoding format (that is, the target image data in the target encoding format described above).
  • The size of the storage area RT_YUV is Width*1.5Height, that is, the height of RT_YUV is 1.5 times the height of RT_RGB; for example, for a 1920*1080 game screen, the size of RT_YUV is 1920*1620.
  • The embodiments of the present application do not limit the format of the storage area RT_YUV; for example, the format of RT_YUV may be the R8 format (that is, containing only one color channel) or the like.
  • FIG. 3 is a schematic diagram of the layout of the storage area RT_YUV. The storage area RT_YUV includes three areas, namely the Y area, U area, and V area shown in the figure; the aspect ratios of the Y area, U area, and V area need to be consistent.
  • The height of the Y area is 2/3 of the height of RT_YUV and its width is the same as the width of RT_YUV; the heights of the U area and the V area are 1/3 of the height of RT_YUV, and their widths are 1/2 of the width of RT_YUV.
  • Step 603: set the virtual camera rendering target of the game engine Unity to RT_RGB and execute the camera rendering operation to obtain an image in the RGB encoding format containing the content of the game scene. That is, the current game screen is obtained, captured, and stored in the storage area RT_RGB; in other words, the virtual camera is called for rendering as shown in the figure, and the obtained game screen is written into the RGB texture (RT_RGB).
Step 604: The game engine Unity performs a texture resampling operation in the GPU, with RT_RGB as the sampling source and RT_YUV as the rendering target (that is, the target texture storage area, which is the second texture storage area). By invoking a shader, the GPU converts the RGB image data into YUV image data (that is, texture resampling using shaders, as shown in the figure). The position of each texture point in RT_YUV (which can be understood as an image point in the YUV encoding format) has a mapping relationship with each pixel point of the RGB image data in RT_RGB. The main process is that the GPU calls the shader in parallel: each texture point of RT_YUV is mapped to the corresponding pixel point in RT_RGB and converted into the corresponding color value through the conversion formula.
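One way to express this resampling in Unity is a full-screen blit with a material wrapping the conversion shader; rgbToYuvMaterial below is a hypothetical material name not taken from the original text:

// Step 604 sketch: the GPU runs the conversion shader once per RT_YUV texel,
// in parallel, sampling RT_RGB as the source.
public static void ResampleToYuv(RenderTexture rtRgb, RenderTexture rtYuv, Material rgbToYuvMaterial)
{
    Graphics.Blit(rtRgb, rtYuv, rgbToYuvMaterial);
}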
The process of converting the game screen in RGB encoding format into an image in YUV encoding format is shown in Figure 8. For ease of description, the game screen may be called the original image and the rendering target the target image. The specific process is as follows:
Step S1: With RT_RGB as the sampling source and RT_YUV as the rendering target, the GPU calls the shader to perform resampling and inputs the texture coordinates (that is, the UV coordinates) of the rendering target to the shader. In other words, the GPU feeds the shader the texture coordinates of the target image. The UV coordinate indicates which position of the target image the shader invocation currently executed by the GPU is to render. U in the UV coordinates is the horizontal direction and V is the vertical direction; the axis representing the horizontal direction is taken as the X axis, with the coordinate in the X-axis direction denoted the abscissa X, and the axis representing the vertical direction is taken as the Y axis, with the coordinate in the Y-axis direction denoted the ordinate Y.
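As an illustration of this convention (an assumption consistent with the bottom-left-origin layout described below, not a function named in the original text), a UV coordinate can be mapped back to a pixel position as follows:

// Illustrative only: map a UV coordinate in [0,1]x[0,1] to a pixel position,
// with (0,0) at the bottom-left corner of the image.
public static Vector2Int UvToPixel(Vector2 uv, int width, int height)
{
    int px = Mathf.Clamp(Mathf.FloorToInt(uv.x * width), 0, width - 1);
    int py = Mathf.Clamp(Mathf.FloorToInt(uv.y * height), 0, height - 1);
    return new Vector2Int(px, py);
}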
Step S2: According to the UV coordinates, determine which position in RT_YUV the shader is currently rendering, that is, which of the Y area, U area, and V area the currently input UV coordinates fall into, based on the YUV layout shown in Figure 3. Then, according to the mapping relationship between RT_RGB and RT_YUV, obtain the image data in RT_RGB corresponding to the current UV coordinates, and use the conversion formula of the corresponding area (that is, the Y formula, U formula, or V formula) to calculate the color component value corresponding to the current UV coordinates. The following are several possible cases; a combined sketch of all three appears after Case 3.
Case 1: Use the Y formula, corresponding to the Y component in the conversion of RGB image data to YUV image data, to calculate the corresponding image data of RT_RGB through the shader and sample it into the Y area. If the ordinate Y in the UV coordinate is greater than or equal to 1/3, the current UV coordinate corresponds to the Y area and the Y formula is used for the calculation. Scale the ordinate Y to the range [0, 1], use the UV coordinates to sample RT_RGB, and obtain the RGB color corresponding to the current UV coordinates, that is, obtain from the original image the RGB image data corresponding to the current UV coordinates. Then apply the Y formula to that RGB image data to obtain the corresponding Y value.

The Y formula is: Y = 0.257*R + 0.504*G + 0.098*B + 16.

For example, in the layout diagram of RT_YUV shown in Figure 5, the lower-left corner is (0, 0), rightward is the positive direction of the UV coordinate X axis, and upward is the positive direction of the UV coordinate Y axis. Taking the UV coordinate (0, 1/3) as an example, (0, 1/3) is the origin of the Y area of RT_YUV and corresponds to the (0, 0) point of RT_RGB; at this time, the (0, 0) point of RT_RGB needs to be sampled.
Case 2: Use the U formula, corresponding to the U component in the conversion of RGB image data to YUV image data, to calculate the corresponding image data of RT_RGB through the shader and sample it into the U area. If the ordinate Y in the UV coordinate is less than 1/3, scale the ordinate Y to the range [0, 1] and further judge the abscissa X; if the abscissa X is less than or equal to 1/2, the current UV coordinate corresponds to the U area and the U formula is used for the calculation. Scale the abscissa X to the range [0, 1], use the UV coordinates to sample RT_RGB, and obtain the RGB color corresponding to the current UV coordinates, that is, obtain from the original image the RGB image data corresponding to the current UV coordinates. Then apply the U formula to that RGB image data to obtain the corresponding U value.

The U formula is: U = -0.148*R - 0.291*G + 0.439*B + 128.
Case 3: Use the V formula, corresponding to the V component in the conversion of RGB image data to YUV image data, to calculate the corresponding image data of RT_RGB through the shader and sample it into the V area. If the ordinate Y in the UV coordinate is less than 1/3, scale the ordinate Y to the range [0, 1] and further judge the abscissa X; if the abscissa X is greater than 1/2, the current UV coordinate corresponds to the V area and the V formula is used for the calculation. Scale the abscissa X to the range [0, 1], use the UV coordinates to sample RT_RGB, and obtain the RGB color corresponding to the current UV coordinates, that is, obtain from the original image the RGB image data corresponding to the current UV coordinates. Then apply the V formula to that RGB image data to obtain the corresponding V value.

The V formula is: V = 0.439*R - 0.368*G - 0.071*B + 128.
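Putting the three cases together, the per-texel logic that the shader performs can be sketched as a CPU-side reference in C#; sampleRgb is assumed, for illustration only, to return the RGB color of RT_RGB at a given UV coordinate, with each channel in [0, 255]:

// Reference sketch of the per-texel RGB-to-YUV conversion (Cases 1 to 3).
// uv is the RT_YUV texture coordinate being rendered.
public static float ConvertTexel(Vector2 uv, System.Func<float, float, Vector3> sampleRgb)
{
    Vector3 rgb;
    if (uv.y >= 1f / 3f)
    {
        // Case 1: Y area. Rescale the ordinate to [0,1], then apply the Y formula.
        rgb = sampleRgb(uv.x, (uv.y - 1f / 3f) * 1.5f);
        return 0.257f * rgb.x + 0.504f * rgb.y + 0.098f * rgb.z + 16f;
    }
    if (uv.x <= 0.5f)
    {
        // Case 2: U area. Rescale both coordinates to [0,1], then apply the U formula.
        rgb = sampleRgb(uv.x * 2f, uv.y * 3f);
        return -0.148f * rgb.x - 0.291f * rgb.y + 0.439f * rgb.z + 128f;
    }
    // Case 3: V area. Rescale both coordinates to [0,1], then apply the V formula.
    rgb = sampleRgb((uv.x - 0.5f) * 2f, uv.y * 3f);
    return 0.439f * rgb.x - 0.368f * rgb.y - 0.071f * rgb.z + 128f;
}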
In this way, after the data of the Y area, U area, and V area have all been sampled, the resampling result is written into the RT_YUV texture storage area, where the resampling result is the obtained target image data (that is, the image data in YUV encoding format).
Step S605: Through the above process, the image data in YUV encoding format is obtained, and the subsequent program logic continues: the image data in YUV encoding format is read from RT_YUV, a video stream is obtained by the video encoder based on that image data, and the video stream is played on the client. The client here is a client in the above-mentioned user terminal and may take various forms, which is not limited here. When reading the YUV-encoded image data, according to the layout diagram shown in Figure 3, the data in the Y area can be read by block and the data in the U and V areas can be read by row.
The specific process is as follows. Assuming the obtained YUV image data is stored in RT_YUV as shown in Figure 4, the data in the Y area is read by block, and the data in the U area and V area is read by row. For example, the area where Y1 to Y12 are located constitutes one area block, denoted area block 1, and the area where Y13 to Y24 are located constitutes another, denoted area block 2. The row where U1 to U3 and V1 to V3 are located is denoted row 1, and the row where U4 to U6 and V4 to V6 are located is denoted row 2. Reading area block 1 by block and row 1 by row yields the data of Y1 to Y12 together with the shared UV components, namely U1 to U3 and V1 to V3; reading area block 2 by block and row 2 by row yields the data of Y13 to Y24 together with the shared UV components, namely U4 to U6 and V4 to V6.
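A minimal read-back sketch, assuming Unity's ReadPixels path, is shown below; slicing the returned buffer into the Y block and the U/V rows then follows the layout of Figure 3:

// Step 605 sketch: copy RT_YUV back to the CPU before video encoding.
public static byte[] ReadBackYuv(RenderTexture rtYuv)
{
    var previous = RenderTexture.active;
    RenderTexture.active = rtYuv;
    var tex = new Texture2D(rtYuv.width, rtYuv.height, TextureFormat.R8, false);
    tex.ReadPixels(new Rect(0, 0, rtYuv.width, rtYuv.height), 0, 0);
    tex.Apply();
    RenderTexture.active = previous;
    return tex.GetRawTextureData(); // raw texels: Y plane plus the U/V rows
}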
Through the embodiment of the present application, when converting the original image data of the image to be processed in the source encoding format into the target image data in the target encoding format, texture storage areas are created to store the image data, so that the GPU can call the shader to perform parallel processing on the image data to be converted. This avoids pixel-by-pixel calculation, allows the image conversion to be completed quickly, and improves the processing efficiency of converting the image data of the image to be processed in the source encoding format into the target image data in the target encoding format.
Referring to FIG. 9, FIG. 9 is a schematic structural diagram of an image processing device provided by an embodiment of the present application. The image processing device 1 provided in the embodiment of the present application includes: a source data acquisition module 11, used to obtain the image size of the image to be processed and the original image data of the image to be processed, where the color encoding format corresponding to the original image data is a source encoding format; a first texture processing module 12, configured to create a first texture storage area according to the image size and store the image data of the image to be processed in the first texture storage area; a second texture processing module 13, configured to create, according to the image size and the target encoding format, a second texture storage area for storing the target image data to be generated, where the encoding format corresponding to the target image data is the target encoding format; and a target data acquisition module 14, configured to convert, through a shader invoked by the graphics processing unit (GPU), the encoding format of the original image data stored in the first texture storage area, so as to generate the target image data corresponding to each texture coordinate of the second texture storage area, and to store the target image data corresponding to each texture coordinate in the corresponding storage location in the second texture storage area.
In one embodiment, the target data acquisition module 14 is specifically used to: for any texture coordinate of the second texture storage area, determine, through the shader invoked by the GPU, the first storage location corresponding to that texture coordinate in the second texture storage area; determine, according to the storage location correspondence between the first texture storage area and the second texture storage area, the second storage location corresponding to that texture coordinate in the first texture storage area; and calculate the target image data corresponding to that texture coordinate from the original image data corresponding to the second storage location, storing the result at the first storage location.

In one embodiment, the source encoding format is the red-green-blue RGB encoding format and the target encoding format is the luminance-chrominance YUV encoding format. The second texture storage area includes a first storage area for storing the luminance component of the YUV encoding format and a second storage area for storing the chrominance components of the YUV encoding format. The first storage area and the second storage area are contiguous; each luminance component stored in the first storage area corresponds to one first chrominance component and one second chrominance component stored in the second storage area, and the target image data of the first chrominance component and the target image data of the second chrominance component are stored contiguously in the second storage area.

In one embodiment, the size of the first storage area is the same as the image size; the second storage area includes a first sub-area corresponding to the first chrominance component and a second sub-area corresponding to the second chrominance component; the first sub-area and the second sub-area have the same size; the aspect ratios of the first storage area, the first sub-area, and the second sub-area are the same; and the widths of the first sub-area and the second sub-area are determined by the target encoding format described above.

In one embodiment, the target data acquisition module 14 is specifically used to: determine the target storage area to which the first storage location belongs in the second texture storage area, the target storage area being one of the first storage area and the second storage area; convert the texture coordinate, according to the storage location correspondence and the target storage area, into a texture coordinate corresponding to the first texture storage area, obtaining a converted texture coordinate; and determine the second storage location of the converted texture coordinate in the first texture storage area.

In one embodiment, the target data acquisition module 14 is specifically used to calculate the target image data corresponding to the texture coordinate from the original image data corresponding to the second storage location, using the image data conversion method corresponding to the target storage area.

In one embodiment, the target data acquisition module 14 is specifically used to convert, through the shader invoked by the graphics processing unit GPU and using a parallel computing method, the encoding format of the original image data stored in the first texture storage area, so as to obtain the target image data corresponding to each texture coordinate.
In one embodiment, the above-mentioned image to be processed is a virtual scene image in a game scene, and the device further includes an image display module, which is used to: read the target image data from the second texture storage area; convert the read target image data into image data to be displayed in the source encoding format; and display the virtual scene image based on the image data to be displayed.

In one embodiment, the above-mentioned image to be processed is each image in at least one virtual scene image in a cloud game, and the device further includes a video stream generation module, which is used to: read the target image data from the second texture storage area corresponding to each of the at least one virtual scene image; perform image encoding processing on the read target image data to obtain a video stream; and send the video stream to the user terminal, so that the user terminal plays the video stream.
Through the embodiment of the present application, when converting the original image data of the image to be processed in the source encoding format into the target image data in the target encoding format, texture storage areas are created to store the image data, so that the GPU can call the shader to process the original image data to be converted. This avoids pixel-by-pixel calculation, allows the image conversion to be completed quickly, and greatly improves the processing efficiency of converting the original image data of the image to be processed in the source encoding format into the target image data in the target encoding format.

In specific implementation, the above-mentioned image processing device 1 can implement, through its built-in functional modules, the implementations provided by the steps in FIG. 2; for details, reference can be made to the implementations provided by those steps, which are not repeated here.
The above mainly describes implementing the image processing method of the present application with hardware as the execution subject; however, the execution subject of the image processing method in this application is not limited to hardware and may also be software. The above-mentioned image processing device may be a computer program (including program code) running on a computer device, for example, application software; the device may be used to execute the corresponding steps in the method provided by the embodiments of the present application.
In some embodiments, the image processing device provided in the embodiment of the present application may be realized by a combination of software and hardware. As an example, it may be a processor in the form of a hardware decoding processor, programmed to execute the image processing method provided in the embodiment of the present application; for example, the processor in the form of a hardware decoding processor may adopt one or more application-specific integrated circuits (ASIC, Application Specific Integrated Circuit), digital signal processors (DSP), programmable logic devices (PLD, Programmable Logic Device), complex programmable logic devices (CPLD, Complex Programmable Logic Device), field-programmable gate arrays (FPGA, Field-Programmable Gate Array), or other electronic components.
In other embodiments, the image processing device provided by the embodiment of the present application may be realized by software. The image processing device 1 shown in FIG. 9 may be software in the form of programs, plug-ins, and the like, and includes a series of modules, namely the source data acquisition module 11, the first texture processing module 12, the second texture processing module 13, and the target data acquisition module 14, for realizing the image processing method provided by the embodiment of the present application.
Referring to FIG. 10, FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present application. As shown in FIG. 10, the electronic device 1000 in this embodiment may include a processor 1001, a network interface 1004, and a memory 1005; in addition, the electronic device 1000 may also include a user interface 1003 and at least one communication bus 1002. The communication bus 1002 is used to realize connection and communication between these components. The user interface 1003 may include a display (Display) and a keyboard (Keyboard); optionally, the user interface 1003 may also include standard wired and wireless interfaces. The network interface 1004 may include a standard wired interface and a wireless interface (such as a Wi-Fi interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory; the memory 1005 may also be at least one storage device located away from the aforementioned processor 1001. As shown in FIG. 10, the memory 1005, as a computer-readable storage medium, may include an operating system, a network communication module, a user interface module, and a device control application program. In the electronic device 1000 shown in FIG. 10, the network interface 1004 can provide network communication functions, the user interface 1003 is mainly used to provide an input interface for the user, and the processor 1001 can be used to call the computer program stored in the memory 1005.
It should be understood that, in some feasible implementations, the above-mentioned processor 1001 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The memory, which may include read-only memory and random access memory, provides instructions and data to the processor; a portion of the memory may also include non-volatile random access memory, and the memory may, for example, also store device type information.
In specific implementation, the above-mentioned electronic device 1000 can implement, through its built-in functional modules, the implementations provided by the steps in FIG. 2; for details, reference may be made to the implementations provided by those steps, which are not repeated here.
The embodiment of the present application also provides a computer-readable storage medium. The computer-readable storage medium stores a computer program, which is executed by a processor to implement the method provided by each step in FIG. 2; for details, reference may be made to the implementations provided by the above steps, which are not repeated here.
The foregoing computer-readable storage medium may be an internal storage unit of the image processing apparatus provided in any of the preceding embodiments, such as a hard disk or memory of an electronic device. The computer-readable storage medium may also be an external storage device of the electronic device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the electronic device. The computer-readable storage medium may further include a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like. Further, the computer-readable storage medium may include both an internal storage unit of the electronic device and an external storage device. The computer-readable storage medium is used to store the computer program and other programs and data required by the electronic device, and may also be used to temporarily store data that has been output or will be output.
An embodiment of the present application provides a computer program product or computer program, which includes computer instructions stored in a computer-readable storage medium. The processor of an electronic device reads the computer instructions from the computer-readable storage medium and executes them, so that the electronic device executes the method provided by any one of the implementations in FIG. 2 above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Signal Processing (AREA)
  • Image Generation (AREA)

Abstract

Embodiments of the present application disclose an image processing method and apparatus, an electronic device, a program, and a readable storage medium, relating to the fields of image processing, games, cloud technology, blockchain, and the like. The method includes: obtaining an image size of an image to be processed and original image data of the image to be processed; creating a first texture storage area according to the image size, and storing the image data of the image to be processed in the first texture storage area; creating, according to the image size and a target encoding format, a second texture storage area for storing target image data to be generated, wherein an encoding format corresponding to the target image data is the target encoding format; and converting, through a shader invoked by a graphics processing unit (GPU), the encoding format of the original image data, so as to generate the target image data corresponding to each texture coordinate of the second texture storage area, and storing the target image data corresponding to each texture coordinate in a corresponding storage location in the second texture storage area.

Description

图像处理方法、装置、电子设备、程序及可读存储介质
本申请要求于2021年6月11日提交中国专利局、申请号为202110655426.0、申请名称为“图像处理方法、装置、电子设备及可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及图像处理、游戏、云技术以及区块链等领域,尤其涉及一种图像处理方法、装置、电子设备、程序及可读存储介质。
背景技术
目前,随着图像处理技术的发展,出现了多种颜色空间来表示图像颜色,例如,红绿蓝RGB颜色空间、亮度色度YUV颜色空间等。
在实际的很多应用场景中,经常会存在需要将一种颜色空间下编码的图像转换为另一种颜色空间下编码的图像,相关技术中,在进行不同颜色编码格式的图像数据转换时,通常都是采用对源编码格式的图像中的每个像素点进行逐个遍历的方式,逐个计算得到另一种编码格式的像素值。虽然采用目前的方式能够完成转换,但是存在转换效率低的问题。
发明内容
本申请实施例提供了一种图像处理方法、装置、电子设备、程序及可读存储介质,提高了将源编码格式的图像数据转换为目标编码格式的目标图像数据的效率。
一方面,本申请实施例提供一种图像处理方法,在电子设备中执行,该方法包括:
获取待处理图像的图像尺寸和待处理图像的原始图像数据,所述原始图像数据对应的颜色编码格式为源编码格式;
根据所述图像尺寸创建第一纹理存储区域,并将所述待处理图像的图像数据存储至所述第一纹理存储区域中;
根据所述图像尺寸和目标编码格式,创建用于存储待生成的目标图像数据的第二纹理存储区域,其中,所述目标图像数据对应的编码格式为所述目标编码格式;
通过图形处理器GPU调用的着色器,对所述第一纹理存储区域中存储的所述原始图像数据进行编码格式转换,以生成所述第二纹理存储区域的各纹理坐标所对应的目标图像数据,并将所述各纹理坐标对应的目标图像数据存储至所述第二纹理存储区域中相应的存储位置。
一方面,本申请实施例提供了一种图像处理装置,该装置包括:
源数据获取模块,用于获取待处理图像的图像尺寸和待处理图像的原始图像数据,所述原始图像数据对应的颜色编码格式为源编码格式;
第一纹理处理模块,用于根据上述图像尺寸创建第一纹理存储区域,并将上述待处理图像的图像数据存储至第一纹理存储区域中;
第二纹理处理模块,用于根据上述图像尺寸和目标编码格式,创建用于存储待生成的目标图像数据的第二纹理存储区域,其中,所述目标图像数据对应的编码格式为所述目标编码格式;
目标数据获取模块,用于通过图形处理器GPU调用的着色器,对所述第一纹理存储区域中存储的所述原始图像数据进行编码格式转换,以生成所述第二纹理存储区域的各纹理坐标所对应的目标图像数据,并将所述各纹理坐标对应的目标图像数据存储至所述第二纹理存储区域中相应的存储位置。
一方面,本申请实施例提供了一种电子设备,该电子设备包括处理器和存储器,该处理器和存储器相互连接;该存储器用于存储计算机程序;该处理器被配置用于在调用上述计算机程序时,执行上述图像处理方法的任一种可能的实现方式提供的方法。
一方面,本申请实施例提供了一种计算机可读存储介质,该计算机可读存储介质存储有计算机程序,该计算机程序被处理器执行以实现上图像处理方法的任一种可能的实现方式提供的方法。
一方面,本申请实施例提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程 序包括计算机指令,该计算机指令存储在计算机可读存储介质中。电子设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该电子设备执行上述图像处理方法的任一种可能的实现方式提供的方法。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的一种图像处理系统在一种应用场景下的结构示意图;
图2是本申请实施例提供的一种图像处理方法的流程示意图;
图3是本申请实施例提供的一种第二纹理存储区域的布局示意图;
图4是本申请实施例提供的一种存储目标图像数据的示意图;
图5是本申请实施例提供的一种第一纹理存储区域和第二纹理存储区域的纹理坐标的示意图;
图6是本申请实施例提供的一种图像处理方法的流程示意图;
图7是本申请实施例提供的一种图像编码格式转换的原理示意图;
图8是本申请实施例提供的一种图像编码格式转换的流程的示意图;
图9是本申请实施例提供的一种图像处理装置的结构示意图;
图10是本申请实施例提供的一种电子设备的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请实施例提供的图像处理方法涉及云技术的多种领域,如云技术(Cloud technology)中的云计算、云服务、云游戏等。
云技术是指在广域网或局域网内将硬件、软件、网络等系列资源统一起来,实现数据的计算、储存、处理和共享的一种托管技术。本申请实施例所提供的图像处理方法可基于云技术中的云计算(cloud computing)实现。
云计算是指通过网络以按需、易扩展的方式获得所需资源,是网格计算(Grid Computing)、分布式计算(Distributed Computing)、并行计算(Parallel Computing)、效用计算(Utility Computing)、网络存储(Network Storage Technologies)、虚拟化(Virtualization)、负载均衡(Load Balance)等传统计算机和网络技术发展融合的产物。
云游戏(Cloud gaming)又可称为游戏点播(gaming on demand),是一种以云计算技术为基础的在线游戏技术。云游戏技术使图形处理与数据运算能力相对有限的轻端设备(thin client)能运行高品质游戏。在云游戏场景下,游戏逻辑并不在玩家游戏终端,而是在云端服务器中运行,并由云端服务器将游戏场景渲染为视频音频流,通过网络传输给玩家游戏终端。玩家游戏终端无需拥有强大的图形运算与数据处理能力,仅需拥有基本的流媒体播放能力与获取玩家输入指令并发送给云端服务器的能力即可。
其中,如本申请所公开的图像处理方法或装置,其中涉及到的用户终端、服务器(如云游戏服务器)可组成为一区块链,而用户终端、服务器(如云游戏服务器)为区块链上的节点,本申请实施例中图像处理方法或装置中涉及到的数据,如待处理图像的图像数据、目标图像数据可保存于区块链上。
本申请实施例中的图像处理方式的适用场景不作限定,实际应用中,本申请实施例可以适用于需要将一种颜色空间下编码的图像转换为另一种颜色空间下编码的图像的各种场景,包括但不限于游戏类应用场景,例如,在游戏场景中,将RGB颜色空间下编码的图像转换为YUV颜色空间下编码的图像的场景,等等。
对于本申请实施例中的待处理图像具体是什么应用场景下的图像,本申请实施例不作任何限定,例如,可以是游戏类应用场景下的待处理图像。本申请实施例对于游戏类应用具体是什么应用不作限 定,该游戏类应用可以是云游戏,也可以是需要安装客户端的游戏,用户可以通过该用户终端体验网络游戏。该客户端可以为该游戏类应用的web客户端、小程序客户端或者游戏客户端,本申请实施例在此不作限定。
作为一个示例,图1中示出了本申请实施例所适用的一种应用于图像处理系统在一种应用场景下的结构示意图,可以理解的是,本申请实施例所提供的图像处理方法可以适用于但不限于应用于如图1所示的应用场景中。在该示例中,以待处理图像为云游戏场景中至少一张虚拟场景图像中的每一图像为例进行说明。
本申请实施例中的图像处理系统可以包括用户终端和服务器,如图1中所示,该示例中的图像处理系统可以包括但不限于用户终端101、网络102、服务器103。用户终端101(如用户的智能手机)可以通过网络102与服务器103通信,服务器103用于将源编码格式的待处理图像的图像数据转换为目标编码格式的目标图像数据。
以下结合上述应用场景对本申请一可选实施例中的图像处理方法进行说明,该方法的实施过程可以包括以下步骤:
步骤S11,获取待处理图像的图像尺寸和待处理图像的原始图像数据,待处理图像的原始图像数据对应的颜色编码格式为源编码格式。
步骤S12,根据图像尺寸创建第一纹理存储区域,并将待处理图像的图像数据存储至第一纹理存储区域中。
步骤S13,根据图像尺寸和目标编码格式,创建用于存储待生成的目标图像数据的第二纹理存储区域,其中,目标图像数据对应的编码格式为目标编码格式。
步骤S14,通过图形处理器GPU调用的着色器,利用并行计算方式对所述第一纹理存储区域中存储的所述原始图像数据进行编码格式转换,以生成所述第二纹理存储区域的各纹理坐标所对应的目标图像数据,并将所述各纹理坐标对应的目标图像数据存储至所述第二纹理存储区域中相应的存储位置。
换言之,步骤S14将第一纹理存储区域中存储的待处理图像的图像数据作为采样源、将第二纹理存储区域作为待渲染的目标,通过图形处理器GPU所调用的着色器,并行地对第一纹理存储区域中各坐标位置的图像数据进行重采样(即进行编码格式转换),以生成所述第二纹理中各纹理坐标对应的目标图像数据,并将各纹理坐标对应的目标图像数据存储至第二纹理存储区域中相应的存储位置。
步骤S15,从至少一张虚拟场景图像对应的第二纹理存储区域中读取目标图像数据;对所读取的目标图像数据进行图像编码处理,得到视频流;通过网络102向用户终端101发送该视频流。
步骤S16,用户终端101接收服务器103发送的视频流,在该用户终端101中播放该视频流。
其中,上述步骤S11至步骤S14的执行主体为服务器103。
可理解,上述仅为一种示例,本申请实施例在此不作任何限定。
其中,服务器可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或者分布式系统,还可以是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、CDN(Content Delivery Network,内容分发网络)、以及大数据和人工智能平台等基础云计算服务的云服务器或服务器集群。上述网络可以包括但不限于:有线网络,无线网络,其中,该有线网络包括:局域网、城域网和广域网,该无线网络包括:蓝牙、Wi-Fi及其他实现无线通信的网络。用户终端可以是智能手机(如Android手机、iOS手机等)、平板电脑、笔记本电脑、数字广播接收器、MID(Mobile Internet Devices,移动互联网设备)、PDA(个人数字助理)、台式计算机、车载终端(例如车载导航终端)、智能音箱、智能手表等,用户终端以及服务器可以通过有线或无线通信方式进行直接或间接地连接,但并不局限于此。具体也可基于实际应用场景需求确定,在此不作限定。
参见图2,图2是本申请实施例提供的一种图像处理方法的流程示意图。该方法可以在电子设备中执行,电子设备例如为用户终端或者服务器,也可以是包括用户终端和服务器的系统或者区块链。在一个实施例中,电子设备为服务器。如图2所示,本申请实施例提供的图像处理方法包括如下步骤:
步骤S201,获取待处理图像的图像尺寸和待处理图像的原始图像数据,待处理图像的原始图像数据对应的颜色编码格式为源编码格式。
步骤S202,根据图像尺寸创建第一纹理存储区域,并将待处理图像的原始图像数据存储至第一纹理存储区域中。
步骤S203,根据图像尺寸和目标编码格式,创建用于存储待生成的目标图像数据的第二纹理存储区域,其中,目标图像数据对应的编码格式为上述目标编码格式。
步骤S204,通过图形处理器GPU调用的着色器,对第一纹理存储区域中存储的原始图像数据进行编码格式转换,以生成第二纹理存储区域的各纹理坐标所对应的目标图像数据,并将各纹理坐标对应的目标图像数据存储至第二纹理存储区域中相应的存储位置。换言之,将上述第一纹理存储区域中存储的待处理图像的原始图像数据作为采样源(即被转换编码格式的数据)、将第二纹理存储区域作为待渲染的目标,通过图形处理器GPU调用的着色器,计算得到各纹理坐标所对应的目标图像数据,并将各纹理坐标对应的目标图像数据存储至第二纹理存储区域中相应的存储位置。
在一个实施例中,待处理图像可以是各种场景下获取的图像,本申请实施例在此不作任何限定。例如,待处理图像可以是游戏场景下,通过游戏引擎调用虚拟相机所获取到的图像。对于该待处理图像的图像尺寸,可以根据需要设定,本申请实施例在此不作任何限定。例如,该待处理图像的图像尺寸可以为Width*Height,其中,Width表示宽度,Height表示高度。待处理图像的图像数据对应的颜色编码格式为源编码格式,本申请实施例对于该源编码格式具体是什么格式不作限定,可以为各种形式的颜色空间对应的编码格式,例如,该源编码格式可以为RGB编码格式。
其中,纹理存储区域的数据结构(即第一纹理存储区域和第二纹理存储区域)例如可以是一个二维数组,纹理存储区域中存储的元素是一些颜色值,以第一纹理存储区域为例,该第一纹理的元素即为待处理图像的各像素点的颜色值。单个的颜色值被称为纹理元素(texture elements)或纹理像素(texel)。每一个纹理像素在纹理中都有一个唯一的地址。这个地址可以被认为是一个列(column)和行(row)的值,它们分别由U和V来表示。
其中,纹理坐标也就是通常所说的UV坐标(贴图坐标),可以理解为图像的百分比坐标,水平方向的坐标被称为U坐标,垂直方向的坐标叫做V坐标,UV坐标的水平方向的坐标和垂直方向的坐标取值范围均是[0,1],与纹理尺寸无关,与纹理宽高比也无关,是一个相对坐标。当把纹理图像应用于图元时,需要为图元的每个顶点指定纹理坐标,标明该顶点在纹理图像中的位置,从而建立起图元和纹理图像之间的映射关系。本申请实施例中,纹理图像即待处理图像,纹理坐标与待处理图像中的图像坐标相对应。比如,待处理图像的宽和高分别为W和H,以图像的左下角为图像的原点,则纹理坐标(0,0)对应该图像的原点,纹理坐标(0.5,0.5)则对应待处理图像的像素点坐标(0.5W,0.5H),相应的。也就是说,本申请实施例中的各纹理坐标指的是在将待处理图像作为纹理图像时,该图像中所要作用于任意图元的像素值的像素点的坐标所对应的纹理坐标,比如需要将上述像素点坐标(0.5W,0.5H)的像素值作用于一个图元上,该像素点坐标(0.5W,0.5H)对应的纹理坐标就是(0.5,0.5)。
在获取到该待处理图像后,根据该待处理图像的图像尺寸,创建一个和该图像尺寸相同的第一纹理存储区域(还可以称为RGB纹理存储区域),即该第一纹理存储区域的尺寸为Width*Height,该第一纹理存储区域用于存储待处理图像的原始图像数据,可以将待处理图像的原始图像数据写入该第一纹理存储区域中,即将待处理图像的图像数据存储至该第一纹理存储区域中。
然后,根据待处理图像的图像尺寸和目标编码格式,创建第二纹理存储区域,在目标编码格式为YUV编码格式的情况下,该第二纹理存储区域的尺寸的宽度可以与该图像尺寸的宽度相同,该第二纹理存储区域的尺寸的高度可以为该图像尺寸的高度的1.5倍。例如,该第二纹理存储区域的尺寸为Width*1.5Height。其中,该第二纹理存储区域用于存储待生成的目标图像数据,该目标图像数据的编码格式为目标编码格式,本申请实施例对于目标编码格式不作限定,例如,该目标编码格式可以为YUV编码格式。
在一实施例中,通过图形处理器GPU调用的着色器,对第一纹理存储区域中存储的原始图像数据进行编码格式转换,以生成第二纹理存储区域的各纹理坐标所对应的目标图像数据,包括:
通过图形处理器GPU调用的着色器,利用并行计算方式对所述第一纹理存储区域中存储的所述原始图像数据进行编码格式转换,以得到各纹理坐标所对应的目标图像数据。
在创建好第一纹理存储区域和第二纹理存储区域之后,将第一纹理存储区域中存储的待处理图像的原始图像数据作为采样源,采样源的意思也就是将待处理图像的图像数据作为待转换图像格式的图像数据,并将第二纹理存储区域作为待渲染的目标,通过图形处理器GPU调用着色器,以并行计算的方式,得到各纹理坐标对应于目标编码格式的目标图像数据,并将该目标图像数据存储至第二纹理存储区域中相应的存储位置。
其中,着色器:是指游戏引擎中,使用计算机动画(Computer Graphics,简称CG)、高阶着色器语言(High Level Shader Language,简称HLSL)等着色器(Shader)语言编写、用于描述物体渲染方式的代码或模块。大多数主流游戏引擎都支持着色器功能。其中,计算机动画CG是通过计算机软件所绘制的一切图形的总称,随着以计算机为主要工具进行视觉设计和生产的一系列相关产业的形成。高阶着色器语言HLSL的主要作用为将一些复杂的图像处理,快速而又有效率地在显示卡上完成。着色器Shader应用于计算机图形学领域,指一组供计算机图形资源在执行渲染任务时使用的指令,用于计算图像的颜色或明暗。
通过本申请实施例,对于需要转换编码格式的待处理图像,会根据待处理图像的图像尺寸创建用于存储待处理图像的原始图像数据的第一纹理存储区域,并根据该图像尺寸和目标编码格式创建用于存储目标图像数据的第二纹理存储区域。在进行编码格式转换时,将第一纹理存储区域中存储的待处理图像的原始图像数据作为采样源、将第二纹理存储区域作为待渲染的目标,通过图形处理器(Graphics Processing Unit,简称GPU)调用的着色器,并行计算得到各纹理坐标所对应的目标图像数据,并将各纹理坐标对应的目标图像数据存储至第二纹理存储区域中相应的存储位置。采用上述技术方案,在将源编码格式的待处理图像的原始图像数据转换为目标编码格式的目标图像数据时,通过创建纹理来存储原始图像数据的方式,使得可以通过GPU调用着色器的方式对待转换的原始图像数据进行并行处理,避免了采用逐像素计算的方式,可以快速完成图像编码格式转换,提高了将源编码格式的待处理图像的图像数据转换为目标编码格式的目标图像数据的处理效率。
为了便于更加直观的理解第二纹理存储区域,以下结合示例进行详细说明。
在一种实施例中,上述源编码格式为红绿蓝RGB编码格式,上述目标编码格式为亮度色度YUV编码格式,上述第二纹理存储区域包括用于存储YUV编码格式的亮度分量的第一存储区域和用于存储色度分量的第二存储区域,其中,上述第一存储区域和上述第二存储区域连续,上述第一存储区域中所存储的每个亮度分量与上述第二存储区域中所存储的一个第一色度分量和一个第二色度分量相对应,上述第一色度分量的目标图像数据和上述第二色度分量的目标图像数据在上述第二存储区域中连续存储。
在一个实施例中,源编码格式可以为红绿蓝RGB编码格式,在RGB编码格式下,图像的每个像素由红绿蓝三个分量组成。目标编码格式可以为亮度色度YUV编码格式,可以对RGB编码格式的图像进行有损压缩,减小占用空间,可以用于视频编码过程。第二纹理存储区域包括第一存储区域和第二存储区域。如图3所示,第一存储区域是用于YUV编码格式的亮度分量(即Y分量)的存储区域,也就是图3中所示的Y区域,第二存储区域是用于存储YUV编码格式的色度分量(即U分量和V分量)的存储区域,也就是图3中所示的U区域和V区域。
其中,在使用着色器进行编码格式的转换时,需要保证第一存储区域和第二存储区域连续,其中,第一色度分量(即U分量)的目标图像数据和第二色度分量(即V分量)的目标图像数据,在第二存储区域中连续存储。换言之,Y区域、U区域和V区域在存储数据时是连续的。
如图4所示,假设第二纹理中存储的目标图像数据按照图4所示的方式进行存储,可以看出,Y区域用于存储颜色分量Y的颜色值,即Y1至Y24,U区域用于存储颜色分量U的颜色值,即U1至U6,V区域用于存储颜色分量V的颜色值,即V1至V6。其中,每4个Y分量共用一组UV分量,其中,Y1、Y2、Y7和Y8共用U1和V1,Y3、Y4、Y9和Y10共用U2和V2,Y5、Y6、Y11和Y12共用U3和V3,Y13、Y14、Y19和Y20共用U4和V4,Y15、Y16、Y21和Y22共用U5和V5,Y17、Y18、Y23和Y24共用U6和V6。
通过本申请实施例,这种采用第一色度分量的目标图像数据和第二色度分量的目标图像数据在第二存储区域中连续存储的存储方式,能够极大程度与着色器的工作方式相适配,提高了适配性。
在一种实施例中,上述第一存储区域的尺寸与上述图像尺寸相同,上述第二存储区域包括对应于第一色度分量的第一子区域和对应于第二色度分量的第二子区域,上述第一子区域和上述第二子区域尺寸相同,上述第一存储区域、上述第一子区域和上述第二子区域的宽高比相同,且上述第一子区域和上述第二子区域的宽度是由目标编码格式确定的。
在一个实施例中,本申请实施例对第二纹理的第一存储区域的尺寸和第二存储区域的尺寸不作限定。在一示例中,第二纹理的布局如图3所示,第一存储区域即为图中所示的Y区域,该Y区域用于存储亮度分量(即Y分量)对应的颜色值,该第一存储区域的尺寸可以与待处理图像的图像尺寸相同。第二存储区域包括第一子区域和第二子区域,其中,第一子区域即为图中所示的U区域,第二子区域即为图中所示的V区域,U区域用于存储第一色度分量(即U分量)对应的颜色值,V区域用于存储第二色度分量(即V分量)对应的颜色值。U区域和V区域的尺寸相同。在实际应用中,需要保证第一存储区域、第一子区域和第二子区域的高宽比相同,也就是说,需要保证Y区域、U区域和V区域的高宽比相同。在一个实施例中,第一子区域的宽度和第二子区域的宽度均为第一存储区域的宽度的1/2。也就是说,U区域的宽度和V区域的宽度均为Y区域的宽度的1/2。
通过本申请实施例,按照上述方式布局第一存储区域和第二存储区域,能够极大程度与着色器的工作方式相适配,提高了适配性。
在一种实施例中,上述通过图形处理器GPU调用的着色器,对所述第一纹理存储区域中存储的所述原始图像数据进行编码格式转换,以生成所述第二纹理存储区域的各纹理坐标所对应的目标图像数据,并将所述各纹理坐标对应的目标图像数据存储至所述第二纹理存储区域中相应的存储位置,包括:
通过图形处理器GPU调用的着色器,对于所述第二纹理存储区域的任一纹理坐标,确定该纹理坐标在所述第二纹理存储区域中对应的第一存储位置;
根据所述第一纹理存储区域和所述第二纹理存储区域之间的存储位置对应关系,确定该纹理坐标在所述第一纹理存储区域中对应的第二存储位置;
根据所述第二存储位置对应的原始图像数据,计算得到该纹理坐标对应的目标图像数据,并将该目标图像数据存储至所述第一存储位置。
换言之,通过GPU调用着色器执行以下操作,得到各上述纹理坐标所对应的目标图像数据:
对于任一纹理坐标,确定该纹理坐标在上述第二纹理中对应的第一存储位置;
根据上述第一纹理存储区域和上述第二纹理存储区域之间的存储位置对应关系,确定该纹理坐标在上述第一纹理中对应的第二存储位置;
根据上述第二存储位置对应的待处理图像的原始图像数据,计算得到该纹理坐标对应的目标图像数据,并将该目标图像数据存储至上述第一存储位置。
在一个实施例中,可以通过图形处理器GPU调用的着色器,利用并行计算方式对所述第一纹理存储区域中存储的所述原始图像数据进行编码格式转换,以得到各纹理坐标所对应的目标图像数据。换言之,将第一纹理存储区域中存储的待处理图像的原始图像数据作为采样源、将第二纹理存储区域作为待渲染的目标,可以通过GPU调用着色器,通过并行计算的方式一次性的确定出每个纹理坐标所对应的目标图像数据。为便于描述,以下以其中任意一个纹理坐标为例进行说明。
通过图形处理器GPU调用的着色器,对于任一纹理坐标,确定该纹理坐标在第二纹理存储区域中对应的第一存储位置,然后,根据存储位置对应关系,确定出该纹理坐标在第一纹理存储区域中对应的第二存储位置。换言之,确定当前纹理坐标在第二纹理存储区域中所在的第一存储位置,然后在第一纹理存储区域中,确定与该第一存储位置对应的第二存储位置。最后,根据该第二存储位置对应的待处理图像的原始图像数据,计算得到该纹理坐标对应的目标图像数据,并将该目标图像数据存储至第一存储位置。
其中,关于第一存储位置和第二存储位置,以下结合一示例进行说明:
举例来说,假设第一纹理存储区域的尺寸为Width*Height,第二纹理存储区域的尺寸为Width*1.5Height,其中,Width表示图像宽度,Height表示图像高度,图5左边所示的部分为第一纹理存储区域的存储位置的坐标,该坐标使用纹理坐标来表述(即坐标取值范围在[0,1]中的UV坐标),其中,对于第一纹理存储区域,纹理坐标(0,0)表示其实际宽度为0、实际高度为0所在的位置; 纹理坐标(1,0)表示其实际宽度为Width、实际高度为0所在的位置;纹理坐标(0,1)表示实际宽度为0、实际高度为Height所在的位置;纹理坐标(1,1)表示实际宽度为Width、实际高度为Height所在的位置。图5右边所示的部分为第二纹理存储区域的存储位置的坐标,其中,对于第二纹理存储区域,纹理坐标(0,0)表示实际宽度为0、实际高度为0所在的位置;纹理坐标(1,0)表示实际宽度为Width、实际高度为0所在的位置;纹理坐标(0,1)表示实际宽度为0、实际高度为1.5Height所在的位置;纹理坐标(1,1)表示实际宽度为Width、实际高度为1.5Height所在的位置。
以纹理坐标(0,1/3)为例,该纹理坐标(0,1/3)中的0表示当前要渲染的位置的宽度占第二纹理存储区域的宽度的比值为0,该纹理坐标(0,1/3)中的1/3表示当前要渲染的位置的高度占第二纹理存储区域的高度的比值为1/3,也就是说,纹理坐标(0,1/3)表示的是第二纹理存储区域的Y区域的左下角(即图中所示的点1)所在的位置,该位置即为第一存储位置,根据第一纹理存储区域和第二纹理存储区域之间的存储位置对应关系,在第一纹理存储区域中,与该第一存储位置对应的第二存储位置即为第一纹理存储区域的(0,0)所在的位置(即图中所示的点2)。
需要说明的是,上述示例中,在将第一纹理存储区域中存储的原始图像数据作为采样源、将第二纹理存储区域作为待渲染的目标进行处理时,第一纹理存储区域的尺寸与第二纹理存储区域中的第一存储区域的尺寸是相同的。实际应用中,第一纹理存储区域的尺寸与第二纹理存储区域中的第一存储区域的尺寸也可以不相同,此时在处理过程中,需要保证第一纹理存储区域的尺寸和第二纹理存储区域中的第一存储区域的尺寸在同一尺寸比例下进行处理,以保证存储位置的相互对应。
通过本申请实施例,通过GPU调用着色器的方式对待转换的图像数据进行处理,避免了采用逐像素计算的方式,可以快速完成图像转换,提高了将源编码格式的待处理图像的图像数据转换为目标编码格式的目标图像数据的处理效率。
为了更加清楚的解释如何确定第一存储位置以及第二存储位置,以下结合一示例进行详细说明。
在一种实施例中,所述根据所述第一纹理存储区域和所述第二纹理存储区域之间的存储位置对应关系,确定该纹理坐标在所述第一纹理存储区域中对应的第二存储位置,包括:
确定所述第一存储位置在所述第二纹理存储区域中所属的目标存储区域,所述目标存储区域为所述第一存储区域和所述第二存储区域之一;
根据所述存储位置对应关系和所述目标存储区域,将该纹理坐标转换为对应于所述第一纹理存储区域的纹理坐标,得到转换后的纹理坐标;
确定所述转换后的纹理坐标在所述第一纹理存储区域中的第二存储位置。
在一实施例中,根据所述第二存储位置对应的原始图像数据,计算得到该纹理坐标对应的目标图像数据,包括:
根据所述第二存储位置对应的原始图像数据,采用所述目标存储区域所对应的图像数据转换方式,计算得到该纹理坐标对应的目标图像数据。
在一个实施例中,对于任一纹理坐标,确定该纹理坐标在第二纹理存储区域中对应的第一存储位置,以及根据存储位置对应关系,确定该纹理坐标在第一纹理存储区域中对应的第二存储位置,可以按照以下方式实现:
确定该纹理坐标在第二纹理存储区域中对应的第一存储位置,然后确定该第一存储位置应该属于第二纹理存储区域的哪个存储区域,即确定第一存储位置应该属于第一存储区域(即Y区域)、第一子区域(即U区域)或者第二子区域(即V区域)中的哪个区域。将该纹理坐标所对应的第一存储位置所属的存储区域记为目标存储区域。然后根据该目标存储区域所对应的转换方式,以及存储位置对应关系,将该纹理坐标转换为对应于第一纹理存储区域的纹理坐标,得到转换后的纹理坐标,确定该转换后的纹理坐标在第一纹理中的第二存储位置。该转换后的纹理坐标在第一纹理所对应的第二存储位置、与转换前的纹理坐标在第二纹理对应的第一存储位置相对应。
也就是说,当前纹理坐标对应于第二纹理上的第一存储位置,根据第一纹理存储区域和第二纹理存储区域的映射关系(即存储位置对应关系)对当前纹理坐标进行转换,得到转换后的纹理坐标。转换后的纹理坐标对应于第一纹理上的第二存储位置。因此,第一纹理的第二存储位置与第二纹理的第一存储位置是相对应的。
在一个实施例中,各个纹理坐标用于表示当前要渲染的目标(即第二纹理存储区域)的各个位置。在确定第二存储位置时,以下是几种可能出现的情形:
将任一纹理坐标视为当前纹理坐标,其中,将纹理坐标的中用于表示水平方向的轴作为X轴,X轴方向的坐标记为横坐标X,将纹理坐标中用于表示垂直方向轴作为Y轴,Y轴方向的坐标记为纵坐标Y。其中,以下示例中,第一纹理的实际尺寸(即图像尺寸)为Width*Height,第二纹理的实际尺寸为Width*1.5Height。
情形1:本情形中纹理坐标均使用UV坐标表示,UV坐标的范围是[0,1],纹理坐标按照左下角为(0,0)、右上角为(1,1)的标准,根据当前纹理坐标UV1的纵坐标Y与第一阈值(如1/3)判断当前纹理坐标是属于第二纹理存储区域的哪个区域,若纵坐标Y大于或等于第一阈值,则表明该纹理坐标属于Y区域。根据存储位置对应关系、以及该Y区域对应的转换方式,将纵坐标Y减去1/3,再乘以3/2,并保持UV1的横坐标X不变,可以得到转换后的纹理坐标UV2,该转换后的纹理坐标UV2在第一纹理存储区域(RGB纹理存储区域)所对应的第二存储位置、与转换前的纹理坐标UV1在第二纹理存储区域(YUV纹理存储区域)对应的第一存储位置是相对应的。
例如,假设当前纹理坐标为(0.2,0.5),由于当前纹理坐标的纵坐标Y(即0.5)大于第一阈值1/3,表明当前纹理坐标属于YUV纹理中的Y区域,根据存储位置对应关系、以及该Y区域对应的转换方式,将当前纹理坐标的纵坐标0.5减去1/3,再乘以3/2,并保持横坐标0.2不变,可以得到转换后的纹理坐标为(0.2,1/4),纹理坐标(0.2,1/4)在RGB纹理存储区域所对应的存储位置、与纹理坐标(0.2,0.5)在YUV纹理存储区域对应的存储位置是相对应的。
情形2:同样的,本情形中纹理坐标均使用UV坐标表示,UV坐标的范围是[0,1],纹理坐标按照左下角为(0,0)、右上角为(1,1)的标准,根据当前纹理坐标UV1的纵坐标Y与第一阈值(如1/3)判断当前纹理坐标UV1是属于第二纹理存储区域的哪个区域,若纵坐标Y小于第一阈值则该纹理坐标UV1属于第二纹理存储区域的第二存储区域,进一步根据当前纹理坐标UV1的横坐标X与第二阈值(如1/2)进行判断,若横坐标X小于第二阈值则该纹理坐标属于U区域。根据存储位置对应关系、以及该U区域对应的转换方式,将当前纹理坐标的横坐标X乘以2、纵坐标Y乘以3,可以得到转换后的纹理坐标UV2,UV2在第一纹理存储区域所对应的第二存储位置、与UV1在第二纹理存储区域对应的第一存储位置是相对应的。
例如,假设当前纹理坐标为(0.1,0.2),由于其纵坐标Y(即0.2)小于第一阈值1/3,表明当前纹理坐标属于第二纹理存储区域中的第二存储区域,由于横坐标X(即0.1)小于第二阈值1/2,则该纹理坐标属于U区域。根据存储位置对应关系、以及该U区域对应的转换方式,将横坐标0.1乘以2,将纵坐标0.2乘以3,可以得到转换后的纹理坐标为(0.2,0.6),纹理坐标(0.2,0.6)在第一纹理存储区域所对应的存储位置、与纹理坐标(0.1,0.2)在第二纹理存储区域中对应的第一存储位置是相对应的。
情形3:本情形中纹理坐标均使用UV坐标表示,UV坐标的范围是[0,1],纹理坐标按照左下角(0,0)右上角为(1,1)的标准,根据当前纹理坐标UV1的纵坐标Y与第一阈值(如1/3)判断当前纹理坐标UV1是属于第二纹理存储区域的哪个区域,若纵坐标Y小于第一阈值则该纹理坐标属于第二纹理的第二存储区域,进一步根据UV1的横坐标X与第二阈值(如1/2)进行判断,若横坐标X大于或等于第二阈值则该纹理坐标属于V区域。根据存储位置对应关系、以及该V区域对应的转换方式,将横坐标X乘以减去1/2再乘以2,并将纵坐标Y乘以3,可以得到转换后的纹理坐标UV2,UV2在第一纹理存储区域所对应的第二存储位置、与UV1在第二纹理存储区域对应的第一存储位置是相对应的。
例如,假设当前纹理坐标为(0.8,0.2),由于当前纹理坐标的纵坐标Y(即0.2)小于第一阈值1/3,表明当前纹理坐标属于第二纹理存储区域中的第二存储区域,由于横坐标X(即0.1)大于第二阈值1/2,则该纹理坐标属于V区域。根据存储位置对应关系、以及该V区域对应的转换方式,将横坐标0.8减去1/2再乘以2,将纵坐标0.2乘以3,可以得到转换后的纹理坐标为(0.6,0.6),纹理坐标(0.6,0.6)在第一纹理存储区域所对应的存储位置与纹理坐标(0.8,0.2)在第二纹理存储区域对应的第二存储位置是相对应的。
其中,上述三种情形对应的部分代码如下:
//定义三个颜色分量的Flag,标记当前处理的纹理坐标在YUV纹理存储区域(即第二纹理存储区域)中对应的存储区域
int y=0;//Y分量对应的Flag
int u=0;//U分量对应的Flag
int v=0;//V分量对应的Flag
//对纹理坐标进行缩放,采样结果映射到YUV纹理存储区域的指定区域
if(i.uv.y>=1.0/3.0)//如果纹理坐标中的纵坐标Y大于或等于1/3,则将纹理存储区域中上部的2/3作为Y分量的存储区域
Figure PCTCN2022094621-appb-000001
其中,上述i.uv.y表示纹理坐标中的纵坐标Y,上述i.uv.x表示纹理坐标中的横坐标X。
可理解,以上仅为一种示例,本申请实施例在此不作限定。
在根据上述方式确定各纹理坐标在第一纹理中所对应的第二存储位置后,可以获取该第二存储位置所对应的待处理图像的原始图像数据,即获取该第二存储位置所对应的颜色值,采用该纹理坐标所属的目标存储区域对应的图像数据转换公式,计算得到该纹理坐标对应的目标图像数据。
在一示例中,当将待处理图像的图像数据应用于视频编码领域时,需要先将源编码格式(即RGB编码格式)的待处理图像的原始图像数据转换目标编码格式(即YUV编码格式)的目标图像数据,由于视频编码器要求YUV4:2:0格式的视频输入,在对将待处理图像的原始图像数据作为采样源进行采样时,可以采用YCbCr 4:2:0的采样格式。其中,YCbCr是在世界数字组织视频标准研制过程中作为数字电视标准(ITU-R BT.601)建议的一部分,其中的ITU=International Telecommunication Union(联合国)国际电信联盟,R=Radiocommunication Sector无线电部,BT=Broadcasting service(television)广播服务(电视),是YUV经过缩放和偏移的翻版。其中Y与YUV中的Y含义一致,Cb,Cr都指色彩。YCbCr中的Y是指亮度分量,Cb指蓝色色度分量,而Cr指红色色度分量。
其中,图像数据转换公式即为RGB编码格式转换为YUV编码格式的公式,也就是YCbCr与RGB的相互转换的公式,主要包括三个公式,分别为Y公式、U公式和V公式,具体如下:
Y公式为:Y=0.257*R+0.504*G+0.098*B+16。
U公式为:U=-0.148*R-0.291*G+0.439*B+128。
V公式为:V=0.439*R-0.368*G-0.071*B+128。
通过本申请实施例,可以通过坐标转换的方式,确定待渲染的目标(即第二纹理存储区域)在第一纹理存储区域中的各颜色值,并根据相应的原始图像数据转换公式进行相应的转换,在这个过程中, 由于对各个纹理坐标的处理是GPU通过调用着色器,以并行处理的方式进行的,处理速度很快,极大地提高了转换效率。
在一种实施例中,上述待处理图像为游戏场景中的虚拟场景图像,电子设备为用户终端,上述方法还包括:
从上述第二纹理存储区域中读取目标图像数据;
将所读取的上述目标图像数据转换为对应于上述源编码格式的待展示图像数据;
基于上述待展示图像数据展示上述虚拟场景图像。
在一个实施例中,以游戏场景为例,该游戏场景可以为普通游戏场景也可以为云游戏场景,在此不作限定。此时,一张待处理图像即为游戏场景中的一张虚拟场景图像,当存在虚拟场景图像的编码格式为RGB编码格式、以及需要在目标终端显示RGB编码格式的图像的需求时,可以按照以下方式实现在目标终端显示虚拟场景图像。
按照前文描述,可以将源编码格式的待处理图像的原始图像数据(即RGB编码格式的虚拟场景图像数据)转换为目标编码格式的目标图像数据(即YUV编码格式的图像数据),也就是说,可以将游戏场景中的RGB图像转换为YUV图像。然后,从第二纹理存储区域中将目标图像数据读取出来,即从第二纹理中读取YUV图像数据,并将读取的该YUV图像数据转换为对应于源编码格式(即RGB编码格式)的待展示图像数据,基于该待展示图像数据展示该虚拟场景图像(即游戏画面)。
其中,将YUV图像数据转换为RGB图像数据的方法,可以参考前文描述的将RGB图像数据转换为YUV图像数据的方法,即逆向使用将RGB图像数据转换为YUV图像数据的方法,在此不再详述。
通过本申请实施例,可以根据需要将目标编码格式的目标图像数据转换为源编码格式的待展示图像数据,能够根据需要灵活的转换图像的编码格式,满足多种需求,提高了适用性。
在一种实施例中,上述待处理图像为云游戏场景中至少一张虚拟场景图像,电子设备为云游戏服务器,上述方法还包括:
从上述至少一张虚拟场景图像中各图像对应的第二纹理存储区域中读取目标图像数据;
对所读取的目标图像数据进行图像编码处理,得到视频流;
将上述视频流发送至用户终端,以使上述用户终端播放上述视频流。
在一个实施例中,在云游戏场景中,云游戏服务器在将游戏内的场景画面编码成视频流的过程中,例如,该视频流的数字视频压缩格式为H264格式,由于在该过程中需要用到YUV编码格式的图像数据,而游戏内的场景画面是RGB编码格式的原始图像数据,因此,需要将RGB编码格式的原始图像数据转换成YUV编码格式的目标图像数据。
其中,待处理图像为云游戏场景中至少一张虚拟场景图像中的每一图像,每一虚拟场景图像的编码格式为RGB编码格式,按照前文描述的方式,得到各虚拟场景图像在各自对应的第二纹理存储区域存储的目标图像数据,该目标图像数据还可以理解为渲染完毕后的游戏画面,然后,云游戏服务器从各虚拟场景图像各自对应的第二纹理存储区域中读取目标图像数据,并将读取的各目标图像数据进行图像编码处理,即进行视频压缩处理,得到视频流,其中,每一虚拟场景图像对应于视频流中的一帧图像,将该视频流发送至用户终端,该用户终端可以不需要任何高端处理器和显卡,只需要基本的视频解压能力,即用户终端只需将接收到的视频流进行解压播放即可。
通过本申请实施例,云游戏服务器可以基于渲染完毕后的游戏画面压缩得到视频流,将该视频流通过网络传送给用户终端,用户终端可以不需要任何高端处理器和显卡,只需要基本的视频解压能力就可以播放云游戏服务器发送的视频流,在这个过程中,云游戏服务器可以利用本申请实施例中的图像处理方法更高效地转换编码方式,加快游戏视频流的处理速度,降低服务器中的CPU的运算压力,大大提高了云游戏的承载量,优化游戏体验的同时也降低了云游戏服务器的成本。
为了更加清楚的理解本申请实施例中的图像处理方法,以下结合云游戏场景为例进行详细说明。在云游戏场景中,云服务端器在将游戏内的场景画面编码成视频流的过程中,由于在该过程中需要用到YUV编码的图像,而游戏内的场景画面是RGB编码的图像,因此,需要将RGB编码的图像转换 成YUV编码的图像,可以采用本申请实施例中的图像处理方法将RGB编码的图像转换成YUV编码的图像,如图6所示,详细过程如下:
步骤601,创建一张RGB编码格式的图像数据对应的渲染纹理(RenderTexture)存储区域。该纹理是一个存储区域,用RT_RGB表示,该RT_RGB即前文描述的第一纹理存储区域,用于存储游戏引擎的场景相机捕获到的游戏画面,其中,本申请实施例对于游戏引擎具体是什么引擎不作限定,例如该游戏引擎可以为游戏引擎Unity,该游戏画面是RGB编码格式的图像数据,即前文描述的待处理图像的原始图像数据。其中,该存储区域RT_RGB的尺寸与游戏画面的图像尺寸一致,例如,该存储区域RT_RGB的尺寸和游戏画面的图像尺寸均是Width*Height。可理解,本申请实施例对于该游戏画面的图像尺寸不作限定,可以根据实际需要确定,如可以与对应的用户终端的屏幕尺寸适配。在一个实施例中,该游戏画面的图像尺寸为1920*1080。本申请实施例对于该RT_RGB存储区域的格式不作限定,例如该RT_RGB存储区域的格式可以为BGRA32、ARGB32等。
步骤602,创建一张待存储YUV编码格式的目标图像数据的渲染纹理(RenderTexture)存储区域。该纹理是一个存储区域,用RT_YUV表示,该RT_YUV即前文描述的第二纹理存储区域,用于存储由RGB编码格式的图像数据转换成的YUV编码格式的目标图像数据(即前文描述的目标编码格式的目标图像数据)。其中,该存储区域RT_YUV的尺寸为Width*1.5Height,该存储区域RT_YUV的高度是存储区域RT_RGB的高度的1.5倍。假设游戏画面的图像尺寸为1920*1080,那么该存储区域RT_YUV的尺寸为1920*1620。本申请实施例对于该存储区域RT_YUV的格式不作限定,例如该存储区域RT_YUV的格式可以为R8格式(即只包含一个颜色通道)等。
参见图3,图3为存储区域RT_YUV的布局示意图,可以看出,存储区域RT_YUV包含三个区域,即图中所示的Y区域、U区域和V区域,Y区域的高宽比和U区域、V区域的宽高比需要保持一致。其中,Y区域的高度为RT_YUV的高度的2/3,Y区域的宽度和RT_YUV的宽度一致,U区域和V区域的高度均为RT_YUV的高度的1/3,U区域和V区域的宽度均为RT_YUV的宽度的1/2。
步骤603,将游戏引擎Unity的虚拟相机渲染目标设置为RT_RGB,并执行相机的渲染操作,得到一张包含游戏场景内容的RGB编码格式的图像。即得到当前游戏画面,将该当前游戏画面捕获并存储到存储区域RT_RGB内。即图中所示的调用虚拟相机进行渲染,将得到的游戏画面写入RGB纹理(RT_RGB)。
步骤604,游戏引擎Unity通过纹理重采样操作,在显卡GPU中,将RT_RGB作为采样源,RT_YUV作为渲染目标(即目标纹理存储区域,也即第二纹理存储区域),GPU通过调用着色器将RGB图像数据转换成YUV图像数据(即图中所示的使用着色器对纹理重采样)。其中,RT_YUV中的各个纹理点(可以理解为YUV编码格式下的一个图像点)的位置与RT_RGB中的RGB图像数据的各像素点具有映射关系,主要过程为,GPU通过调用着色器,并行地将RT_YUV的各个纹理点对应于RT_RGB中的各像素点、通过转换公式转换为相应的颜色值。
主要过程如图7所示,使用Y公式,将RT_RGB中的各图像数据通过着色器的计算,采样到Y区域。使用U公式,将RT_RGB中的各图像数据通过着色器的计算,采样到U区域。使用V公式,将RT_RGB中的各图像数据通过着色器的计算,采样到V区域。
具体地,将RGB编码格式的游戏画面转换为YUV编码格式的图像的过程如图8所示,为便于描述,可以将游戏画面称为原始图像,渲染目标称为目标图像,具体过程如下:
步骤S1,以RT_RGB为采样源,RT_YUV为渲染目标,GPU调用着色器进行重采样,向着色器输入渲染目标的纹理坐标(即UV坐标)。也就是GPU为着色器输入目标图像的纹理坐标,即UV坐标。
其中,该UV坐标是指GPU当前执行的着色器要渲染的位置是目标图像的哪个位置。其中,UV坐标中的U是水平方向,V是垂直方向,将UV坐标中用于表示水平方向的轴作为X轴,X轴方向的坐标记为横坐标X,将UV坐标中用于表示垂直方向轴作为Y轴,Y轴方向的坐标记为纵坐标Y。
步骤S2,根据UV坐标判断出着色器当前要渲染的是RT_YUV中的哪个位置。
可以根据图3所示中的YUV布局来判断,当前输入的UV坐标所对应的渲染的区域,即判断当前应该渲染Y区域、U区域和V区域中的哪个区域,然后,根据RT_RGB和RT_YUV的映射关系,根 据该映射关系,获取当前UV坐标对应于RT_RGB的图像数据,并使用所对应的区域的转换公式(即Y公式、U公式和V公式)进行计算,得到当前UV坐标所对应的颜色分量的值。以下是几种可能的情形:
情形1:使用将RGB图像数据转换为YUV图像数据中Y分量对应的Y公式,将RT_RGB的各图像数据通过着色器进行计算,采样到Y区域。
在一个实施例中,若UV坐标中的纵坐标Y大于或等于1/3,则表明当前的UV坐标对应于Y区域,需要使用Y公式进行计算。将纵坐标Y缩放到[0,1]的范围,使用UV坐标对RT_RGB进行采样,得到当前UV坐标对应的RGB颜色,即从原始图像中获取当前UV坐标所对应的RGB图像数据。然后,使用RGB图像数据转YUV图像数据中的Y公式进行计算,将当前UV坐标所对应的RGB图像数据,根据该Y公式,得到相应的Y值。
其中,Y公式为:Y=0.257*R+0.504*G+0.098*B+16。
举例来说,在如图5所示的RT_YUV(即图中所示的第二纹理存储区域)的布局图中,左下角为(0,0),向右为UV坐标X轴的正方向,向上为UV坐标Y轴的正方向,以UV坐标(0,1/3)为例,UV坐标(0,1/3)就是RT_YUV的Y区域的0点,该UV坐标(0,1/3)对应于RT_RGB的(0,0)点,此时,需要对RT_RGB的(0,0)点进行采样。
情形2:使用将RGB图像数据转换为YUV图像数据中的U分量对应的U公式,将RT_RGB的各图像数据通过着色器进行计算,采样到U区域。
在一个实施例中,若UV坐标中的纵坐标Y小于1/3,将纵坐标Y缩放到[0,1]的范围,并进一步判断横坐标X,若横坐标X小于或等于1/2,则表明当前的UV坐标对应于U区域,需要使用U公式进行计算。将横坐标X缩放到[0,1]的范围,使用UV坐标对RT_RGB进行采样,得到当前UV坐标对应的RGB颜色,即从原始图像中获取当前UV坐标所对应的RGB图像数据。然后,使用RGB图像数据转YUV图像数据中的U公式进行计算,将当前UV坐标所对应的RGB图像数据,根据该U公式,得到相应的U值。
其中,U公式为:U=-0.148*R-0.291*G+0.439*B+128;
情形3,使用将RGB图像数据转换为YUV图像数据中V分量对应的V公式,将RT_RGB的各图像数据通过着色器进行计算,采样到V区域。
在一个实施例中,若UV坐标中的纵坐标Y小于1/3,将纵坐标Y缩放到[0,1]的范围,并进一步判断横坐标X,若横坐标X大于1/2,则表明当前的UV坐标对应于V区域,需要使用V公式进行计算。将横坐标X缩放到[0,1]的范围,使用UV坐标对RT_RGB进行采样,得到当前UV坐标对应的RGB颜色,即从原始图像中获取当前UV坐标所对应的RGB图像数据。然后,使用RGB图像数据转YUV图像数据中的V公式进行计算,将当前UV坐标所对应的RGB图像数据,根据该V公式,得到相应的V值。
V公式为:V=0.439*R-0.368*G-0.071*B+128。
按照上述方式,当Y区域、U区域和V区域的数据全部采样完毕之后,将重采样结果写入RT_YUV纹理存储区域中,其中,重采样结果即为得到的目标图像数据(即YUV编码格式的图像数据)。
步骤S605,按照上述过程可以得到YUV编码格式的图像数据,继续执行后续的程序逻辑,从RT_RGB中读取出YUV编码格式的图像数据,并通过视频编码器基于YUV编码格式的图像数据,得到视频流,在客户端播放该视频流。其中,该客户端是上述用户终端中的客户端,可以为各种形式的客户端,在此不作限定。
在一个实施例中,从RT_RGB中读取出YUV编码格式的图像数据时,可以按照图3所示的布局图,按块读取Y区域的数据,按行读取U、V区域的数据,具体过程如下:
假设得到的YUV图像数据按照图4所示的方式存储于RT_YUV中,在读取Y区域的数据时,按块读取Y区域的数据,在读取U区域和V区域的数据时,按行读取U区域和V区域的数据。例如,Y1至Y12所在的区域构成一个区域块,记为区域块1,Y13至Y24所在的区域构成一个区域块,记为区域块2。U1至U3和V1至V3所在的行,记为行1,U4至U6和V4至V6所在的行,记为行2。按 块读取区域块1的数据,按行读取行1的数据,可以得到Y1至Y12的数据,以及所共用的UV分量,即U1至U3和V1至V3。按块读取区域块2的数据,按行读取行2的数据,可以得到Y13至Y24的数据,以及所共用的UV分量,即U4至U6和V4至V6。
通过本申请实施例,在将源编码格式的待处理图像的原始图像数据转换为目标编码格式的目标图像数据时,通过创建纹理存储区域来存储图像数据的方式,使得可以通过GPU调用着色器的方式对待转换的图像数据进行并行处理,避免了采用逐像素计算的方式,可以快速完成图像转换,提高了将源编码格式的待处理图像的图像数据转换为目标编码格式的目标图像数据的处理效率。
参见图9,图9是本申请实施例提供的一种图像处理装置的结构示意图。本申请实施例提供的图像处理装置1包括:
源数据获取模块11,用于获取待处理图像的图像尺寸和待处理图像的原始图像数据,所述原始图像数据对应的颜色编码格式为源编码格式;
第一纹理处理模块12,用于根据所述图像尺寸创建第一纹理存储区域,并将所述待处理图像的图像数据存储至所述第一纹理存储区域中;
第二纹理处理模块13,用于根据所述图像尺寸和目标编码格式,创建用于存储待生成的目标图像数据的第二纹理存储区域,其中,所述目标图像数据对应的编码格式为所述目标编码格式;
目标数据获取模块14,用于通过图形处理器GPU调用的着色器,对所述第一纹理存储区域中存储的所述原始图像数据进行编码格式转换,以生成所述第二纹理存储区域的各纹理坐标所对应的目标图像数据,并将所述各纹理坐标对应的目标图像数据存储至所述第二纹理存储区域中相应的存储位置。
在一种实施例中,目标数据获取模块14,具体用于:
通过图形处理器GPU调用的着色器,对于所述第二纹理存储区域的任一纹理坐标,确定该纹理坐标在所述第二纹理存储区域中对应的第一存储位置;
根据所述第一纹理存储区域和所述第二纹理存储区域之间的存储位置对应关系,确定该纹理坐标在所述第一纹理存储区域中对应的第二存储位置;
根据所述第二存储位置对应的原始图像数据,计算得到该纹理坐标对应的目标图像数据,并将该目标图像数据存储至所述第一存储位置。
在一种实施例中,所述源编码格式为红绿蓝RGB编码格式,所述目标编码格式为亮度色度YUV编码格式。所述第二纹理存储区域包括:用于存储YUV编码格式中亮度分量的第一存储区域和用于存储YUV编码格式的色度分量的第二存储区域。其中,所述第一存储区域和所述第二存储区域连续,所述第一存储区域中所存储的每个亮度分量与所述第二存储区域中所存储的一个第一色度分量和一个第二色度分量相对应,所述第一色度分量的目标图像数据和所述第二色度分量的目标图像数据在所述第二存储区域中连续存储。
在一种实施例中,上述第一存储区域的尺寸与上述图像尺寸相同,上述第二存储区域包括对应于第一色度分量的第一子区域和对应于第二色度分量的第二子区域,上述第一子区域和上述第二子区域尺寸相同,上述第一存储区域、上述第一子区域和上述第二子区域的宽高比相同,且上述第一子区域和上述第二子区域的宽度是由上述目标编码格式确定的。
在一种实施例中,目标数据获取模块14,具体用于:
确定所述第一存储位置在所述第二纹理存储区域中所属的目标存储区域,所述目标存储区域为所述第一存储区域和所述第二存储区域之一;
根据所述存储位置对应关系和所述目标存储区域,将该纹理坐标转换为对应于所述第一纹理存储区域的纹理坐标,得到转换后的纹理坐标;
确定所述转换后的纹理坐标在所述第一纹理存储区域中的第二存储位置。
在一实施例中,目标数据获取模块14,具体用于:
根据所述第二存储位置对应的原始图像数据,采用所述目标存储区域所对应的图像数据转换方式,计算得到该纹理坐标对应的目标图像数据。在一种实施例中,目标数据获取模块14,具体用于:
通过图形处理器GPU调用的着色器,利用并行计算方式对所述第一纹理存储区域中存储的所述原 始图像数据进行编码格式转换,以得到各纹理坐标所对应的目标图像数据。
在一种实施例中,上述待处理图像为游戏场景中的虚拟场景图像,该装置还包括图像展示模块,该模块用于:
从上述第二纹理存储区域中读取目标图像数据;
将所读取的上述目标图像数据转换为源编码格式的待展示图像数据;
基于上述待展示图像数据展示上述虚拟场景图像。
在一种实施例中,上述待处理图像为云游戏中至少一张虚拟场景图像中的每一图像,上述装置还包括视频流生成模块,该模块用于:
从上述至少一张虚拟场景图像中各图像对应的第二纹理存储区域中读取目标图像数据;
对所读取的目标图像数据进行图像编码处理,得到视频流;
将上述视频流发送至用户终端,以使该用户终端播放上述视频流。
通过本申请实施例,在将源编码格式的待处理图像的图像数据转换为目标编码格式的目标图像数据时,通过创建纹理存储区域来存储图像数据的方式,使得可以通过GPU调用着色器的方式对待转换的原始图像数据进行处理,避免了采用逐像素计算的方式,可以快速完成图像转换,大大提高了将源编码格式的待处理图像的原始图像数据转换为目标编码格式的目标图像数据的处理效率。
具体实现中,上述图像处理装置1可通过其内置的各个功能模块执行如上述图2中各个步骤所提供的实现方式,体可参见上述各个步骤所提供的实现方式,在此不再赘述。
上文主要介绍说明了执行主体为硬件,来实施本申请中的图像处理方法,但是本申请的图像处理方法的执行主体并不仅限于硬件,本申请中的图像处理方法的执行主体还可以为软件,上述图像处理装置可以是运行于计算机设备中的一个计算机程序(包括程序代码),例如,该图像处理装置为一个应用软件;该装置可以用于执行本申请实施例提供的方法中的相应步骤。
在一些实施例中,本申请实施例提供的图像处理装置可以采用软硬件结合的方式实现,作为示例,本申请实施例提供的图像处理装置可以是采用硬件译码处理器形式的处理器,其被编程以执行本申请实施例提供的图像处理方法,例如,硬件译码处理器形式的处理器可以采用一个或多个应用专用集成电路(ASIC,Application Specific Integrated Circuit)、DSP、可编程逻辑器件(PLD,Programmable Logic Device)、复杂可编程逻辑器件(CPLD,Complex Programmable Logic Device)、现场可编程门阵列(FPGA,Field-Programmable Gate Array)或其他电子元件。
在另一些实施例中,本申请实施例提供的图像处理装置可以采用软件方式实现,图9示出的图像处理装置1,其可以是程序和插件等形式的软件,并包括一系列的模块,包括源数据获取模块11、第一纹理处理模块12、第二纹理处理模块13和目标数据获取模块14,用于实现本申请实施例提供的图像处理方法。
参见图10,图10是本申请实施例提供的一种电子设备的结构示意图。如图10所示,本实施例中的电子设备1000可以包括:处理器1001,网络接口1004和存储器1005,此外,上述电子设备1000还可以包括:用户接口1003,和至少一个通信总线1002。其中,通信总线1002用于实现这些组件之间的连接通信。其中,用户接口1003可以包括显示屏(Display)、键盘(Keyboard),可选用户接口1003还可以包括标准的有线接口、无线接口。网络接口1004可以包括标准的有线接口、无线接口(如WI-FI接口)。存储器1005可以是高速RAM存储器,也可以是非不稳定的存储器(non-volatile memory),例如至少一个磁盘存储器。存储器1005还可以是至少一个位于远离前述处理器1001的存储装置。如图10所示,作为一种计算机可读存储介质的存储器1005中可以包括操作系统、网络通信模块、用户接口模块以及设备控制应用程序。
在图10所示的电子设备1000中,网络接口1004可提供网络通讯功能;而用户接口1003主要用于为用户提供输入的接口;而处理器1001可以用于调用存储器1005中存储的计算机程序。
应当理解,在一些可行的实施方式中,上述处理器1001可以是中央处理单元(central processing unit,CPU),该处理器还可以是其他通用处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现成可编程门阵列(field-programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器 可以是微处理器或者该处理器也可以是任何常规的处理器等。该存储器可以包括只读存储器和随机存取存储器,并向处理器提供指令和数据。存储器的一部分还可以包括非易失性随机存取存储器。例如,存储器还可以存储设备类型的信息。
具体实现中,上述电子设备1000可通过其内置的各个功能模块执行如上述图2中各个步骤所提供的实现方式,具体可参见上述各个步骤所提供的实现方式,在此不再赘述。
本申请实施例还提供一种计算机可读存储介质,该计算机可读存储介质存储有计算机程序,被处理器执行以实现图2中各个步骤所提供的方法,具体可参见上述各个步骤所提供的实现方式,在此不再赘述。
上述计算机可读存储介质可以是前述任一实施例提供的任务处理装置的内部存储单元,例如电子设备的硬盘或内存。该计算机可读存储介质也可以是该电子设备的外部存储设备,例如该电子设备上配备的插接式硬盘,智能存储卡(smart media card,SMC),安全数字(secure digital,SD)卡,闪存卡(flash card)等。上述计算机可读存储介质还可以包括磁碟、光盘、只读存储记忆体(read-only memory,ROM)或随机存储记忆体(random access memory,RAM)等。进一步地,该计算机可读存储介质还可以既包括该电子设备的内部存储单元也包括外部存储设备。该计算机可读存储介质用于存储该计算机程序以及该电子设备所需的其他程序和数据。该计算机可读存储介质还可以用于暂时地存储已经输出或者将要输出的数据。
本申请实施例提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。电子设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述图2中任一种的实施方式所提供的方法。
本申请的权利要求书和说明书及附图中的术语“第一”、“第二”等是用于区别不同对象,而不是用于描述特定顺序。此外,术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其它步骤或单元。在本文中提及“实施例”意味着,结合实施例描述的特定特征、结构或特性可以包含在本申请的至少一个实施例中。在说明书中的各个位置展示该短语并不一定均是指相同的实施例,也不是与其它实施例互斥的独立的或备选的实施例。本领域技术人员显式地和隐式地理解的是,本文所描述的实施例可以与其它实施例相结合。在本申请说明书和所附权利要求书中使用的术语“和/或”是指相关联列出的项中的一个或多个的任何组合以及所有可能组合,并且包括这些组合。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、计算机软件或者二者的结合来实现,为了清楚地说明硬件和软件的可互换性,在上述说明中已经按照功能一般性地描述了各示例的组成及步骤。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
以上所揭露的仅为本申请较佳实施例而已,当然不能以此来限定本申请之权利范围,因此依本申请权利要求所作的等同变化,仍属本申请所涵盖的范围。

Claims (13)

  1. An image processing method, performed in an electronic device, the method comprising:
    obtaining an image size of an image to be processed and original image data of the image to be processed, wherein a color encoding format corresponding to the original image data is a source encoding format;
    creating a first texture storage area according to the image size, and storing the image data of the image to be processed in the first texture storage area;
    creating, according to the image size and a target encoding format, a second texture storage area for storing target image data to be generated, wherein an encoding format corresponding to the target image data is the target encoding format; and
    converting, through a shader invoked by a graphics processing unit (GPU), the encoding format of the original image data stored in the first texture storage area, so as to generate target image data corresponding to each texture coordinate of the second texture storage area, and storing the target image data corresponding to each texture coordinate in a corresponding storage location in the second texture storage area.
  2. The method according to claim 1, wherein the converting, through the shader invoked by the GPU, the encoding format of the original image data stored in the first texture storage area, so as to generate the target image data corresponding to each texture coordinate of the second texture storage area, and the storing of the target image data corresponding to each texture coordinate in the corresponding storage location in the second texture storage area comprise:
    for any texture coordinate of the second texture storage area, determining, through the shader invoked by the GPU, a first storage location corresponding to the texture coordinate in the second texture storage area;
    determining, according to a storage location correspondence between the first texture storage area and the second texture storage area, a second storage location corresponding to the texture coordinate in the first texture storage area; and
    calculating, from the original image data corresponding to the second storage location, the target image data corresponding to the texture coordinate, and storing the target image data at the first storage location.
  3. The method according to claim 2, wherein the source encoding format is a red-green-blue (RGB) encoding format and the target encoding format is a luminance-chrominance (YUV) encoding format; the second texture storage area comprises a first storage area for storing a luminance component of the YUV encoding format and a second storage area for storing chrominance components of the YUV encoding format, wherein the first storage area and the second storage area are contiguous, each luminance component stored in the first storage area corresponds to one first chrominance component and one second chrominance component stored in the second storage area, and target image data of the first chrominance component and target image data of the second chrominance component are stored contiguously in the second storage area.
  4. The method according to claim 3, wherein a size of the first storage area is the same as the image size; the second storage area comprises a first sub-area corresponding to the first chrominance component and a second sub-area corresponding to the second chrominance component; the first sub-area and the second sub-area have the same size; aspect ratios of the first storage area, the first sub-area, and the second sub-area are the same; and widths of the first sub-area and the second sub-area are determined by the target encoding format.
  5. The method according to claim 3 or 4, wherein the determining, according to the storage location correspondence between the first texture storage area and the second texture storage area, the second storage location corresponding to the texture coordinate in the first texture storage area comprises:
    determining a target storage area to which the first storage location belongs in the second texture storage area, the target storage area being one of the first storage area and the second storage area;
    converting the texture coordinate, according to the storage location correspondence and the target storage area, into a texture coordinate corresponding to the first texture storage area, to obtain a converted texture coordinate; and
    determining the second storage location of the converted texture coordinate in the first texture storage area.
  6. The method according to claim 5, wherein the calculating, from the original image data corresponding to the second storage location, the target image data corresponding to the texture coordinate comprises:
    calculating the target image data corresponding to the texture coordinate from the original image data corresponding to the second storage location, using an image data conversion method corresponding to the target storage area.
  7. The method according to any one of claims 1 to 4, wherein the converting, through the shader invoked by the GPU, the encoding format of the original image data stored in the first texture storage area, so as to generate the target image data corresponding to each texture coordinate of the second texture storage area comprises:
    converting, through the shader invoked by the GPU and using a parallel computing method, the encoding format of the original image data stored in the first texture storage area, to obtain the target image data corresponding to each texture coordinate.
  8. The method according to any one of claims 1 to 4, wherein the image to be processed comprises a virtual scene image in a game scene, the electronic device is a user terminal, and the method further comprises:
    reading the target image data from the second texture storage area;
    converting the read target image data into image data to be displayed in the source encoding format; and
    displaying the virtual scene image based on the image data to be displayed.
  9. The method according to any one of claims 1 to 4, wherein the image to be processed comprises at least one virtual scene image in a cloud game, the electronic device is a cloud game server, and the method further comprises:
    reading target image data from the second texture storage area corresponding to each image in the at least one virtual scene image;
    performing image encoding processing on the read target image data to obtain a video stream; and
    sending the video stream to a user terminal, so that the user terminal plays the video stream.
  10. An image processing device, comprising:
    a source data acquisition module, configured to obtain an image size of an image to be processed and original image data of the image to be processed, wherein a color encoding format corresponding to the original image data is a source encoding format;
    a first texture processing module, configured to create a first texture storage area according to the image size, and store the image data of the image to be processed in the first texture storage area;
    a second texture processing module, configured to create, according to the image size and a target encoding format, a second texture storage area for storing target image data to be generated, wherein an encoding format corresponding to the target image data is the target encoding format; and
    a target data acquisition module, configured to convert, through a shader invoked by a graphics processing unit (GPU), the encoding format of the original image data stored in the first texture storage area, so as to generate target image data corresponding to each texture coordinate of the second texture storage area, and store the target image data corresponding to each texture coordinate in a corresponding storage location in the second texture storage area.
  11. An electronic device, comprising a processor and a memory connected to each other, wherein:
    the memory is configured to store a computer program; and
    the processor is configured to execute, when the computer program is called, the method according to any one of claims 1 to 9.
  12. A computer-readable storage medium, storing a computer program that is executed by a processor to implement the method according to any one of claims 1 to 9.
  13. A computer program, comprising computer instructions stored in a computer-readable storage medium; when a processor executes the computer instructions, the processor is caused to execute the image processing method according to any one of claims 1 to 9.
PCT/CN2022/094621 2021-06-11 2022-05-24 图像处理方法、装置、电子设备、程序及可读存储介质 WO2022257750A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/299,157 US20230252758A1 (en) 2021-06-11 2023-04-12 Image processing method and apparatus, electronic device, program, and readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110655426.0 2021-06-11
CN202110655426.0A CN113096233B (zh) 2021-06-11 2021-06-11 图像处理方法、装置、电子设备及可读存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/299,157 Continuation US20230252758A1 (en) 2021-06-11 2023-04-12 Image processing method and apparatus, electronic device, program, and readable storage medium

Publications (1)

Publication Number Publication Date
WO2022257750A1 true WO2022257750A1 (zh) 2022-12-15

Family

ID=76662705

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/094621 WO2022257750A1 (zh) 2021-06-11 2022-05-24 图像处理方法、装置、电子设备、程序及可读存储介质

Country Status (3)

Country Link
US (1) US20230252758A1 (zh)
CN (1) CN113096233B (zh)
WO (1) WO2022257750A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117472592A (zh) * 2023-12-27 2024-01-30 中建三局集团有限公司 基于顶点着色器与纹理映射的三维模型爆炸方法及系统

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113096233B (zh) * 2021-06-11 2021-08-27 腾讯科技(深圳)有限公司 图像处理方法、装置、电子设备及可读存储介质
CN114040246A (zh) * 2021-11-08 2022-02-11 网易(杭州)网络有限公司 图形处理器的图像格式转换方法、装置、设备及存储介质
CN117750025B (zh) * 2024-02-20 2024-05-10 上海励驰半导体有限公司 一种图像数据处理方法、装置、芯片、设备及介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800122A (zh) * 2012-06-20 2012-11-28 广东威创视讯科技股份有限公司 基于Direct3D技术的图像处理方法及其装置
CN106210883A (zh) * 2016-08-11 2016-12-07 浙江大华技术股份有限公司 一种视频渲染的方法、设备
US10467803B1 (en) * 2018-09-11 2019-11-05 Apple Inc. Techniques for providing virtual lighting adjustments utilizing regression analysis and functional lightmaps
CN111093096A (zh) * 2019-12-25 2020-05-01 广州酷狗计算机科技有限公司 视频编码方法及装置、存储介质
CN113096233A (zh) * 2021-06-11 2021-07-09 腾讯科技(深圳)有限公司 图像处理方法、装置、电子设备及可读存储介质

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108154539A (zh) * 2017-12-18 2018-06-12 北京酷我科技有限公司 一种基于Opengl ES的颜色空间数据转化算法
CN110177287A (zh) * 2019-06-11 2019-08-27 广州虎牙科技有限公司 一种图像处理和直播方法、装置、设备和存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800122A (zh) * 2012-06-20 2012-11-28 广东威创视讯科技股份有限公司 基于Direct3D技术的图像处理方法及其装置
CN106210883A (zh) * 2016-08-11 2016-12-07 浙江大华技术股份有限公司 一种视频渲染的方法、设备
US10467803B1 (en) * 2018-09-11 2019-11-05 Apple Inc. Techniques for providing virtual lighting adjustments utilizing regression analysis and functional lightmaps
CN111093096A (zh) * 2019-12-25 2020-05-01 广州酷狗计算机科技有限公司 视频编码方法及装置、存储介质
CN113096233A (zh) * 2021-06-11 2021-07-09 腾讯科技(深圳)有限公司 图像处理方法、装置、电子设备及可读存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117472592A (zh) * 2023-12-27 2024-01-30 中建三局集团有限公司 基于顶点着色器与纹理映射的三维模型爆炸方法及系统
CN117472592B (zh) * 2023-12-27 2024-03-19 中建三局集团有限公司 基于顶点着色器与纹理映射的三维模型爆炸方法及系统

Also Published As

Publication number Publication date
US20230252758A1 (en) 2023-08-10
CN113096233A (zh) 2021-07-09
CN113096233B (zh) 2021-08-27

Similar Documents

Publication Publication Date Title
WO2022257750A1 (zh) 图像处理方法、装置、电子设备、程序及可读存储介质
CN111681167B (zh) 画质调整方法和装置、存储介质及电子设备
CN109983757B (zh) 全景视频回放期间的视图相关操作
WO2021057097A1 (zh) 图像渲染和编码方法及相关装置
CN107665128B (zh) 图像处理方法、系统、服务器及可读存储介质
CN113041617B (zh) 一种游戏画面渲染方法、装置、设备及存储介质
CN111899155A (zh) 视频处理方法、装置、计算机设备及存储介质
CN115695857B (zh) 云应用的视频编码方法及装置
KR101805550B1 (ko) 프리젠테이션 가상화를 위한 화면 부호화 방법 및 서버
WO2023011033A1 (zh) 图像处理方法、装置、计算机设备及存储介质
CN114040246A (zh) 图形处理器的图像格式转换方法、装置、设备及存储介质
CN112316433A (zh) 游戏画面渲染方法、装置、服务器和存储介质
CN114786040B (zh) 数据通信方法、系统、电子设备和存储介质
CN114938408B (zh) 一种云手机的数据传输方法、系统、设备及介质
CN110858388B (zh) 一种增强视频画质的方法和装置
US20180097527A1 (en) 32-bit hdr pixel format with optimum precision
WO2012109582A1 (en) System and method for multistage optimized jpeg output
WO2021147463A1 (zh) 视频处理方法、装置及电子设备
CN112653905B (zh) 图像处理方法、装置、设备及存储介质
CN115278301B (zh) 视频处理方法、系统及设备
WO2021169817A1 (zh) 视频处理方法及电子设备
CN114245137A (zh) 由gpu执行的视频帧处理方法和包括gpu的视频帧处理装置
CN110858389B (zh) 一种增强视频画质的方法、装置、终端及转码设备
WO2023185856A1 (zh) 数据传输方法、装置、电子设备及可读存储介质
CN116193151A (zh) 虚拟礼物特效播放方法及其装置、设备、介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22819348

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE