WO2023169121A1 - Image processing method, game rendering method, apparatus, device, program product and storage medium - Google Patents


Info

Publication number
WO2023169121A1
WO2023169121A1 (application PCT/CN2023/074883)
Authority
WO
WIPO (PCT)
Prior art keywords
pixel block
image
interpolation
resolution
pixel
Prior art date
Application number
PCT/CN2023/074883
Other languages
English (en)
French (fr)
Inventor
连冠荣
昔文博
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2023169121A1
Priority to US 18/379,332 (published as US20240037701A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G06T3/4007 Interpolation-based scaling, e.g. bilinear interpolation
    • G06T3/4023 Decimation- or insertion-based scaling, e.g. pixel or line decimation
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/25 Output arrangements for video game devices
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows

Definitions

  • The present application relates to the field of computer technology, and in particular to an image processing method, a game rendering method, an apparatus, a computer device, a program product and a storage medium.
  • In the related art, image resolution is usually increased through upsampling: a low-resolution image is enlarged to a high resolution using a spatial upscaling algorithm, and the enlargement process does not rely on additional data, so that the low-resolution image obtains a better display effect.
  • This application provides an image processing method, game rendering method, device, equipment, program product and storage medium.
  • the technical solution is as follows:
  • According to one aspect, an image processing method is provided, the method including:
  • acquiring a first image with a first resolution;
  • calculating, according to the first image, an interpolation feature of a first pixel block in the first image, where the interpolation feature is used to describe the image content of the first pixel block;
  • when the interpolation feature of the first pixel block does not satisfy a feature judgment condition, performing a first interpolation on the first pixel block to obtain an interpolated pixel block, wherein the feature judgment condition is a judgment condition on the complexity of the image content of the first pixel block; when the interpolation feature of the first pixel block satisfies the feature judgment condition, performing a second interpolation on the first pixel block to obtain the interpolated pixel block;
  • outputting, based on the interpolated pixel block, a second image with a second resolution greater than the first resolution;
  • the first interpolation and the second interpolation are both used to upsample the first pixel block, and the computing resource consumption of the second interpolation is greater than the computing resource consumption of the first interpolation.
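The selection between the two interpolations described above can be sketched as follows. This is a minimal illustration, not the patent's actual formulas: the complexity score, the threshold, and both interpolators are stand-ins (nearest-neighbour replication for the cheap first interpolation, bilinear for the costlier second one).

```python
def block_complexity(block):
    """Hypothetical interpolation feature: mean absolute difference
    between horizontally and vertically adjacent pixels in the block."""
    h, w = len(block), len(block[0])
    total, count = 0.0, 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                total += abs(block[y][x] - block[y][x + 1])
                count += 1
            if y + 1 < h:
                total += abs(block[y][x] - block[y + 1][x])
                count += 1
    return total / count if count else 0.0

def first_interpolation(block, scale):
    """Cheap path: nearest-neighbour replication."""
    return [[block[y // scale][x // scale]
             for x in range(len(block[0]) * scale)]
            for y in range(len(block) * scale)]

def second_interpolation(block, scale):
    """Costlier path (stand-in): bilinear interpolation."""
    h, w = len(block), len(block[0])
    H, W = h * scale, w * scale
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        sy = y * (h - 1) / (H - 1) if H > 1 else 0.0
        y0 = int(sy)
        y1 = min(y0 + 1, h - 1)
        fy = sy - y0
        for x in range(W):
            sx = x * (w - 1) / (W - 1) if W > 1 else 0.0
            x0 = int(sx)
            x1 = min(x0 + 1, w - 1)
            fx = sx - x0
            top = block[y0][x0] * (1 - fx) + block[y0][x1] * fx
            bot = block[y1][x0] * (1 - fx) + block[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out

def upsample_block(block, scale, threshold):
    """Dispatch: a complex block gets the expensive interpolation,
    a simple block the cheap one."""
    if block_complexity(block) > threshold:  # feature satisfies the condition
        return second_interpolation(block, scale)
    return first_interpolation(block, scale)
```

A flat block takes the cheap path and a high-contrast block the bilinear path, which is the entire point of the claimed method: spend computing resources only where the content warrants it.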
  • a game rendering method is provided, the method is executed by a game device, the method includes:
  • the first resolution being the output resolution of the game engine
  • the second resolution being the display resolution of the game device
  • the image processing method is the above-mentioned image processing method.
  • an image processing device comprising:
  • An acquisition module configured to acquire a first image with a first resolution
  • a calculation module configured to calculate interpolation features of the first pixel block in the first image according to the first image, where the interpolation feature is used to describe the image content of the first pixel block;
  • a processing module, configured to perform a first interpolation on the first pixel block to obtain an interpolated pixel block when the interpolation feature of the first pixel block does not satisfy a feature judgment condition, wherein the feature judgment condition is a judgment condition on the complexity of the image content of the first pixel block;
  • the processing module is further configured to perform a second interpolation on the first pixel block to obtain the interpolated pixel block when the interpolation feature of the first pixel block satisfies the feature judgment condition;
  • An output module configured to output a second image with a second resolution based on the interpolated pixel block, the second resolution being greater than the first resolution
  • the first interpolation and the second interpolation are used to upsample the first pixel block, and the computing resource consumption of the second interpolation is greater than the computing resource consumption of the first interpolation.
  • According to another aspect, a game rendering apparatus is provided, the apparatus being deployed in a game device, and the apparatus including:
  • a determination module, configured to determine a first resolution and a second resolution, the first resolution being the output resolution of the game engine, and the second resolution being the display resolution of the game device;
  • An acquisition module configured to acquire the first image output by the game engine based on the first resolution
  • a processing module configured to use an image processing device to obtain a second image with the second resolution based on the first image for display
  • the image processing device is the above image processing device.
  • According to another aspect, a computer device is provided, including a processor and a memory, at least one program being stored in the memory; the processor is configured to execute the at least one program in the memory to implement the above image processing method or game rendering method.
  • According to another aspect, a computer-readable storage medium is provided, in which executable instructions are stored, and the executable instructions are loaded and executed by a processor to implement the above image processing method or game rendering method.
  • a computer program product includes computer instructions.
  • the computer instructions are stored in a computer-readable storage medium.
  • A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device implements the above image processing method or game rendering method.
  • Figure 1 is a block diagram of a computer system provided by an exemplary embodiment of the present application.
  • Figure 2 is a flow chart of an image processing method provided by an exemplary embodiment of the present application.
  • Figure 3 is a flow chart of an image processing method provided by an exemplary embodiment of the present application.
  • Figure 4 is a schematic diagram of a first image provided by an exemplary embodiment of the present application.
  • Figure 5 is a flow chart of an image processing method provided by an exemplary embodiment of the present application.
  • Figure 6 is a flow chart of an image processing method provided by an exemplary embodiment of the present application.
  • Figure 7 is a schematic diagram of a first image provided by an exemplary embodiment of the present application.
  • Figure 8 is a schematic diagram of a first image provided by an exemplary embodiment of the present application.
  • Figure 9 is a flow chart for performing the first interpolation provided by an exemplary embodiment of the present application.
  • Figure 10 is a flow chart for performing second interpolation provided by an exemplary embodiment of the present application.
  • Figure 11 is a flow chart of an image processing method provided by an exemplary embodiment of the present application.
  • Figure 12 is a flow chart of an image processing method provided by an exemplary embodiment of the present application.
  • Figure 13 is a flow chart of an image processing method provided by an exemplary embodiment of the present application.
  • Figure 14 is a schematic diagram of a first image provided by an exemplary embodiment of the present application.
  • Figure 15 is a schematic diagram of a first image provided by an exemplary embodiment of the present application.
  • Figure 16 is a schematic diagram of a first image provided by an exemplary embodiment of the present application.
  • Figure 17 is a schematic diagram of a first image provided by an exemplary embodiment of the present application.
  • Figure 18 is a flow chart of a game rendering method provided by an exemplary embodiment of the present application.
  • Figure 19 is a schematic diagram of displaying a first image provided by an exemplary embodiment of the present application.
  • Figure 20 is a schematic diagram of displaying a second image provided by an exemplary embodiment of the present application.
  • Figure 21 is a structural block diagram of an image processing device provided by an exemplary embodiment of the present application.
  • Figure 22 is a structural block diagram of a game rendering device provided by an exemplary embodiment of the present application.
  • Figure 23 is a structural block diagram of a server provided by an exemplary embodiment of the present application.
  • It should be noted that the user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data used for analysis, stored data, displayed data, etc.) involved in this application are all information and data authorized by the user or fully authorized by all parties, and the collection, use and processing of the relevant data comply with the relevant laws, regulations and standards of the relevant countries and regions.
  • For example, the first image and the feature judgment conditions involved in this application were obtained with full authorization.
  • Although terms such as first, second, etc. may be used in this disclosure to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from each other.
  • For example, a first parameter may also be called a second parameter, and similarly, the second parameter may also be called a first parameter, without departing from the scope of this disclosure.
  • Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
  • Rendering passes: when creating computer-generated images, the final scenes seen in film and television productions are often produced by rendering multiple "layers" or "passes", which are separate images intended to be brought together via digital compositing to form a complete frame. Pass rendering is rooted in the tradition of motion-control photography that preceded Computer-Generated Imagery (CGI). For example, for a visual-effects shot, the camera could be programmed to make one pass across the physical model of a spaceship to capture the fully illuminated hull, and then repeat exactly the same camera move to capture other elements, such as the lights in the ship's windows or its thrusters. Once all the passes have been photographed, they can be optically printed together to form the complete shot.
  • CGI Computer-Generated Imagery
  • render layers and render passes can be used interchangeably.
  • layered rendering specifically refers to dividing different objects into separate images, such as one layer each for foreground characters, scenery, distance and sky.
  • Pass rendering refers to separating different aspects of a scene (such as shadows, highlights, or reflections) into separate images.
  • Resolution: the resolution of a digital television, computer monitor, or display device is the number of distinct pixels that can be displayed in each dimension, and is controlled by various factors. It is usually quoted as width × height, in pixels: for example, 1024 × 768 means the width is 1024 pixels and the height is 768 pixels, often read as "ten twenty-four by seven sixty-eight".
  • The resolution of a display device corresponds to an aspect ratio; for example, common aspect ratios include but are not limited to 4:3, 16:9 and 8:5;
  • Full High Definition (Full HD) resolution is 1920 × 1080, with an aspect ratio of 16:9;
  • Ultra eXtended Graphics Array (UXGA) resolution is 1600 × 1200, with an aspect ratio of 4:3;
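The aspect ratios quoted above follow directly from reducing width : height by their greatest common divisor; a one-function sketch (illustrative, not part of the patent):

```python
from math import gcd

def aspect_ratio(width, height):
    """Reduce a pixel resolution to its simplest width:height ratio."""
    g = gcd(width, height)
    return width // g, height // g
```

For instance, 1920 × 1080 reduces to 16:9, and 1600 × 1200 to 4:3.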
  • WQXGA Wide Quad eXtended Graphics Array
  • FIG. 1 shows a schematic diagram of a computer system provided by an exemplary embodiment of the present application.
  • The computer system can implement the system architecture of the image processing method and/or the game rendering method.
  • the computer system may include: a terminal 100 and a server 200.
  • The terminal 100 may be an electronic device such as a mobile phone, a tablet computer, a vehicle-mounted terminal (in-car device), a wearable device, a PC (Personal Computer), an unmanned reservation terminal, etc.
  • the terminal 100 may be installed with a client that runs a target application program.
  • the target application program may be an image processing application program or other application programs that provide image processing functions. This application does not limit this.
  • this application does not limit the form of the target application, including but not limited to App (Application, application program) installed in the terminal 100, applet, etc., and may also be in the form of a web page.
  • the server 200 can be an independent physical server, a server cluster or a distributed system composed of multiple physical servers, or a cloud server that provides cloud computing services.
  • the server 200 may be a background server of the above-mentioned target application, and is used to provide background services for clients of the target application.
  • the execution subject of each step may be a computer device.
  • the computer device refers to an electronic device with data calculation, processing and storage capabilities.
  • the image processing method and/or game rendering method can be executed by the terminal 100 (for example, the client of the target application installed and running in the terminal 100 executes the image processing method and/or game rendering).
  • the image processing method and/or game rendering method may also be executed by the server 200, or executed by the terminal 100 and the server 200 in interactive cooperation, which is not limited in this application.
  • the technical solution of this application can be combined with blockchain technology.
  • some of the data involved in the image processing method and/or game rendering method disclosed in this application can be saved on the blockchain.
  • the terminal 100 and the server 200 can communicate through a network, such as a wired or wireless network.
  • Figure 2 shows a flow chart of an image processing method provided by an exemplary embodiment of the present application.
  • the method can be performed by a computer device.
  • the method includes:
  • Step 510 Obtain a first image with a first resolution
  • the first image includes at least two pixel blocks; for example, the first image includes multiple pixel points, and the at least two pixel blocks may include all the pixel points of the first image, or may include only part of the pixel points of the first image;
  • a pixel block includes one or more pixels. There is usually no overlapping portion between at least two pixel blocks included in the first image, but the possibility that overlapping portions may exist is not excluded.
  • Step 520 Calculate the interpolation feature of the first pixel block in the first image according to the first image
  • the first pixel block is any pixel block among at least two pixel blocks
  • the interpolation feature is used to describe the image content of the first pixel block, which is any pixel block among the at least two pixel blocks; for example, the dimensions in which the interpolation feature describes the image content of the first pixel block include but are not limited to at least one of the following: color information of the first pixel block, brightness information of the first pixel block, grayscale information of the first pixel block, and position information of the first pixel block in the first image; it should be noted that the interpolation feature may directly describe at least one of the above items of information, or may indirectly describe at least one of them through the change between the first pixel block and other pixel blocks, or through the convolution result of the first pixel block and other pixel blocks;
  • illustratively, the other pixel blocks are usually pixel blocks adjacent to the first pixel block, although the case where they are not adjacent to the first pixel block is not excluded;
  • illustratively, at least one of the color information, the brightness information, the grayscale information of the first pixel block and the position information of the first pixel block in the first image can be indirectly described through at least one of direction features, gradient features and the Sobel operator.
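Where the Sobel operator is mentioned as an indirect descriptor, a generic textbook formulation looks like this (illustrative only; the patent does not give its Sobel computation in this excerpt):

```python
def sobel_magnitude(img, x, y):
    """Gradient magnitude at interior pixel (x, y) of a 2D grayscale
    image, using the standard 3x3 Sobel kernels; a large magnitude
    suggests an edge, i.e. locally complex content."""
    gx = (img[y - 1][x + 1] + 2 * img[y][x + 1] + img[y + 1][x + 1]
          - img[y - 1][x - 1] - 2 * img[y][x - 1] - img[y + 1][x - 1])
    gy = (img[y + 1][x - 1] + 2 * img[y + 1][x] + img[y + 1][x + 1]
          - img[y - 1][x - 1] - 2 * img[y - 1][x] - img[y - 1][x + 1])
    return (gx * gx + gy * gy) ** 0.5
```

A flat region yields a magnitude of zero, while a vertical edge yields a large horizontal response, which is exactly the kind of complexity signal the feature judgment condition can threshold.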
  • Step 530 When the interpolation feature of the first pixel block does not meet the feature judgment condition, perform the first interpolation on the first pixel block to obtain the interpolated pixel block;
  • the feature judgment condition is used to determine whether the first pixel block is a pixel block with complex image content, that is, it concerns the complexity of the image content of the first pixel block.
  • the feature judgment condition is a judgment condition regarding the complexity of the image content.
  • the feature judgment condition includes that the complexity of the image content of the first pixel block exceeds the target threshold;
  • the feature judgment condition determines the interpolation feature by setting a threshold.
  • the feature judgment conditions are preconfigured and can be adjusted; that is, different feature judgment conditions can be set for different first pixel blocks; for example, when the interpolation feature of the first pixel block does not satisfy the feature judgment condition, the first pixel block is a pixel block with simple image content;
  • the first interpolation is used to upsample the first pixel block, and the upsampling is used to increase the resolution of the first image;
  • Step 540 When the interpolation feature of the first pixel block satisfies the feature judgment condition, perform the second interpolation on the first pixel block to obtain the interpolated pixel block;
  • in this case, the first pixel block is a pixel block with complex image content;
  • the first interpolation and the second interpolation are used to upsample the first pixel block.
  • the computational resource consumption of the second interpolation is greater than that of the first interpolation.
  • the computing resource consumption is used to describe the computational complexity of the interpolation; for example, the computational complexity of an interpolation is positively correlated with its computing resource consumption;
  • Step 550 Based on the interpolated pixel block, output a second image with a second resolution
  • Illustratively, the pixel blocks in the first image are taken as the first pixel block one by one, the corresponding interpolated pixel blocks are calculated in sequence, and the second image is output according to the interpolated pixel blocks; the first interpolation and the second interpolation are used to upsample the first pixel block, and the second image output based on the interpolated pixel blocks has a second resolution greater than the first resolution of the first image.
  • In summary, the method provided by this embodiment calculates the interpolation feature of the first pixel block and performs different interpolations on the first pixel block according to the complexity of the image content in the first pixel block, effectively reducing the computational complexity of upsampling and avoiding the waste of computing resources caused by using interpolation with high computing resource consumption when the image content is simple; it reduces computing resource consumption while ensuring the upsampling effect.
  • In other words, the method of this embodiment of the present application uses an interpolation method whose computational complexity matches the image content complexity of each pixel block, and can select an interpolation method according to the complexity of the image content, which helps to improve the flexibility of the device in adjusting image resolution, and saves the computing resources of the device while ensuring the upsampling effect.
  • FIG 3 shows a flow chart of an image processing method provided by an exemplary embodiment of the present application. The method can be performed by a computer device. Step 520 in Figure 2 can be implemented as the following steps:
  • Step 522 Calculate the interpolation features of the first pixel block in the first image based on the plurality of second pixel blocks;
  • Each second block of pixels is a block of pixels of the first image.
  • Each second pixel block includes one or more pixel points.
  • Illustratively, the first pixel block includes the same number and/or arrangement of pixel points as a second pixel block; further, a second pixel block includes multiple pixel points.
  • the plurality of second pixel blocks are adjacent pixel blocks surrounding the first pixel block.
  • a plurality of second pixel blocks are arranged around the first pixel block.
  • FIG. 4 shows a schematic diagram of the first image.
  • the first image 310 includes 9 pixel blocks; among them, the pixel blocks adjacent to the first pixel block 310a above, below, to the left and to the right are all second pixel blocks 310b.
  • the number of the second pixel blocks 310b is, for example, four, which are respectively adjacent to the first pixel block 310a above, below, to the left and to the right.
  • dirR represents the interpolation feature
  • G represents the grayscale information of the pixel block
  • Red, Green and Blue represent the red channel, green channel and blue channel of the RGB color system
  • A represents the second pixel block adjacent above the first pixel block
  • B represents the second pixel block adjacent to the left of the first pixel block
  • D represents the second pixel block adjacent to the right of the first pixel block
  • E represents the second pixel block adjacent below the first pixel block
  • AH2 represents encapsulation as two-dimensional floating point (Half) data
  • dir2.x represents the component of dir2 in the X direction, that is, the component in the left and right direction
  • dir2.y represents the component of dir2 in the Y direction, that is, the component in the up and down direction
  • In addition, the above formulas include intermediate variables for convenience of expression, such as dir.
  • step 522 can be implemented as the following sub-steps:
  • Sub-step 1 Calculate the directional characteristics of the first pixel block based on the brightness factors of the plurality of second pixel blocks;
  • Sub-step 2 Determine the direction feature as the interpolation feature.
  • the direction feature is used to describe the relationship between the first pixel block and the plurality of second pixel blocks. brightness difference;
  • the color information of the first image includes a brightness factor.
  • For example, when the RGB color system is used to describe the image color information, the green channel has the greatest impact on the brightness of the image; therefore the green channel in the RGB color system is used as the brightness factor.
  • sub-step 1 has at least the following implementation methods:
  • I represents the brightness factor of the pixel block
  • the difference between the brightness factors of the pixel block D and the pixel block B is determined as the brightness difference between the first pixel block and the plurality of second pixel blocks in the first direction
  • dir2 represents the brightness feature of the first pixel block
  • AH2 represents encapsulation into two-dimensional Half data
  • dir is an intermediate variable for convenient expression.
  • dirR represents the directional feature of the first pixel block
  • dir2.x represents the first directional component of the brightness feature in the first image
  • dir2.y represents the second directional component of the brightness feature in the first image
  • the first direction and the second direction are perpendicular to each other.
  • Illustratively, determining, according to the differences in brightness factors between different second pixel blocks, the brightness differences between the first pixel block and the plurality of second pixel blocks in the first direction and the second direction includes:
  • determining the first brightness difference of the first pixel block in the first direction according to the difference in brightness factor between the second pixel block located on the front side of the first pixel block and the second pixel block located on the rear side of the first pixel block in the first direction;
  • determining the second brightness difference of the first pixel block in the second direction according to the difference in brightness factor between the second pixel block located on the front side of the first pixel block and the second pixel block located on the rear side of the first pixel block in the second direction.
  • Illustratively, encapsulating the brightness differences between the first pixel block and the second pixel blocks as two-dimensional floating point data to determine the brightness feature of the first pixel block includes: encapsulating the first brightness difference and the second brightness difference as two-dimensional floating point data to determine the brightness feature of the first pixel block.
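The sub-steps above can be sketched as follows. This is an illustration under stated assumptions: the green channel serves as the brightness factor (as the embodiment describes), while the function names, block labels, and sign conventions are hypothetical; the patent's shader packs the two differences as AH2 (two-component Half) data, which a Python tuple only mimics.

```python
def luma(rgb):
    """Brightness factor: the green channel of an (R, G, B) triple,
    since green contributes most to perceived brightness."""
    return rgb[1]

def direction_feature(above, left, right, below):
    """Pack the first-direction (left/right) and second-direction
    (up/down) brightness differences of the four neighbouring second
    pixel blocks into a 2-component tuple, mirroring the AH2 packing."""
    first = luma(right) - luma(left)    # e.g. I(D) - I(B)
    second = luma(below) - luma(above)  # e.g. I(E) - I(A)
    return (first, second)
```

A large component in either direction indicates a strong brightness change across the first pixel block, which feeds the feature judgment condition.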
  • In summary, the method provided by this embodiment calculates the interpolation feature of the first pixel block in the first image based on the second pixel blocks, expanding the dimensions in which the image content of the first pixel block is described; it performs different interpolations on the first pixel block based on the complexity of the image content in the first pixel block, effectively reducing the computational complexity of upsampling and avoiding the waste of computing resources caused by using interpolation with high computing resource consumption when the image content is simple; it reduces computing resource consumption while ensuring the upsampling effect.
  • In other words, the method of this embodiment of the present application uses an interpolation method whose computational complexity matches the image content complexity of each pixel block, and can select an interpolation method according to the complexity of the image content, which helps to improve the flexibility of the device in adjusting image resolution, and saves the computing resources of the device while ensuring the upsampling effect.
  • FIG. 5 shows a flow chart of an image processing method provided by an exemplary embodiment of the present application.
  • The method may be performed by a computer device. That is, in an optional design, based on the embodiment shown in Figure 2, step 512 is also included, and step 550 can be implemented as step 552:
  • Step 512 Divide the first image into at least two pixel blocks according to the dividing rules
  • This application does not place any restrictions on the number of pixels included in the at least two pixel blocks, the arrangement of the pixels, or the image information of the pixels;
  • the division rule is used to describe the division basis for dividing at least two pixel blocks in the first image; in one example, the division rule includes the pixel block position, and the division rule can directly or indirectly represent the pixel block position;
  • the first image includes 16*16 pixels
  • the division rule indicates that each divided pixel block includes 4*4 pixels, and the pixel blocks are closely arranged on the first image; close arrangement means that there are no gaps between the pixel blocks and the first image is divided into as many pixel blocks as possible; the division rule thus indirectly indicates the pixel block positions by indicating the pixel block size and the close arrangement;
  • In another example, the first image includes 16*16 pixels, and the division rule indicates that two pixel blocks are divided.
  • For example, the position of pixel block 1 runs from the first pixel to the eighth pixel from left to right in the first image, and from the first pixel to the sixteenth pixel from top to bottom; in this case the division rule directly indicates the position of the pixel block.
  • Step 552 Based on the interpolated pixel blocks, splice into a second image with a second resolution according to the combination rules;
  • each interpolated pixel block is determined based on a first pixel block, which is a part of the first image, and the first pixel block is determined in the first image based on the division rule; the combination rule splices the interpolated pixel blocks into the second image according to the inverse of the division rule, that is, the combination rule and the division rule are opposite ordering rules.
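  • The division rule and its inverse combination rule can be sketched as follows (a minimal illustration only; the function names and the NumPy array representation are assumptions, not part of the application):

```python
import numpy as np

def divide(image, block=4):
    """Division rule: closely arranged block*block pixel blocks, no gaps."""
    h, w = image.shape[:2]
    return [image[r:r + block, c:c + block]
            for r in range(0, h, block)
            for c in range(0, w, block)]

def splice(blocks, rows, cols):
    """Combination rule: reassemble the blocks in the reverse order of division."""
    return np.block([[blocks[r * cols + c] for c in range(cols)]
                     for r in range(rows)])

img = np.arange(16 * 16).reshape(16, 16)   # a 16*16 first image
blocks = divide(img, 4)                    # 16 blocks of 4*4 pixels
out = splice(blocks, 4, 4)
assert (out == img).all()                  # splicing exactly inverts division
```

  • because the blocks are closely arranged with no gaps, splicing in the reverse order of division reconstructs the image exactly.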
  • the method provided by this embodiment, by dividing the first image into pixel blocks, lays the foundation for performing different interpolations on each first pixel block according to the complexity of the image content in that block; this effectively reduces the computational complexity of upsampling and avoids the waste of computing resources caused by applying a high-cost interpolation to simple image content; computing resource consumption is reduced while the upsampling effect is ensured.
  • Figure 6 shows a flow chart of an image processing method provided by an exemplary embodiment of the present application.
  • the method can be performed by a computer device. That is, in an optional design, based on the embodiment shown in Figure 2, step 530 can be implemented as step 532; step 540 can be implemented as step 542:
  • Step 532 If the interpolation feature of the first pixel block does not meet the feature judgment condition, perform the first interpolation on the first pixel block based on the third pixel block to obtain the interpolated pixel block;
  • the third pixel block is a neighboring pixel block of the first pixel block, and the third pixel blocks are arranged around the first pixel block in a second arrangement; for example, FIG. 7 shows a schematic diagram of the first image; the first image includes 16 pixel blocks, and the upsampled second image includes 36 pixel blocks; for illustration, the second image is compressed and mapped onto an image of the same size as the first image.
  • the first marks 322 shown in the figure, that is, the 16 circular marks, represent the center positions of the 16 pixel blocks of the first image; the second marks 324, that is, the 36 fork-shaped marks, represent the center positions of the 36 pixel blocks of the second image;
  • the target second mark 324a is the center position of an interpolated pixel block; the first interpolation is performed on the first pixel block to obtain the interpolated pixel block, and the center position of the first pixel block is indicated by the target first mark 322a; the first interpolation is performed on the first pixel block according to the third pixel blocks, which include the adjacent pixel blocks around the first pixel block; the center positions of the plurality of third pixel blocks are indicated by the target first mark 322a and the associated first marks 322b; it can be understood that the plurality of third pixel blocks includes four pixel blocks of the same size as the first pixel block.
  • one of the plurality of third pixel blocks is the first pixel block itself.
  • the above description is only an illustrative example; more or fewer pixel blocks adjacent to the first pixel block may serve as third pixel blocks; in this application, the second arrangement may be the same as, or different from, the first arrangement.
  • Step 542 When the interpolation feature of the first pixel block satisfies the feature judgment condition, perform the second interpolation on the first pixel block according to the fourth pixel block to obtain the interpolated pixel block;
  • the fourth pixel block includes neighboring pixel blocks surrounding the first pixel block.
  • the fourth pixel block is arranged, for example, in a third arrangement around the first pixel block;
  • FIG. 8 shows a schematic diagram of the first image, the first image includes 16 pixel blocks, and the upsampled second image includes 36 pixel blocks.
  • the second image is compressed and mapped onto an image of the same size as the first image.
  • the first marks 332 shown in the figure, that is, the 16 circular marks, represent the center positions of the 16 pixel blocks of the first image; the second marks 334, that is, the 36 fork-shaped marks, represent the center positions of the 36 pixel blocks of the second image; the target second mark 334a is the center position of an interpolated pixel block; the second interpolation is performed on the first pixel block to obtain the interpolated pixel block, and the center position of the first pixel block is indicated by the target first mark 332a; the second interpolation is performed on the first pixel block based on the fourth pixel blocks, which are neighboring pixel blocks of the first pixel block, and the center positions of the fourth pixel blocks are indicated by the target first mark 332a and the associated first marks 332b.
  • the computing resource consumption of the second interpolation is greater than that of the first interpolation
  • the computing resource consumption is used to describe the computational complexity of the interpolation; in an optional implementation, the number of fourth pixel blocks is greater than the number of third pixel blocks; that is, the computational complexity of performing the second interpolation based on the larger number of fourth pixel blocks is greater than the computational complexity of performing the first interpolation based on the smaller number of third pixel blocks.
  • the third pixel blocks and the fourth pixel blocks may not include the first pixel block; for example, the eight pixel blocks adjacent to the first pixel block are used as the third pixel blocks or the fourth pixel blocks.
  • the method provided by this embodiment calculates the interpolation feature of the first pixel block and performs different interpolations on the first pixel block according to the complexity of the image content in the first pixel block; the first interpolation is performed on the first pixel block based on the third pixel blocks, and the second interpolation is performed on the first pixel block based on the fourth pixel blocks, providing different interpolation methods for the first pixel block; this effectively reduces the computational complexity of upsampling and avoids the waste of computing resources caused by applying a high-cost interpolation to simple image content; computing resource consumption is reduced while the upsampling effect is ensured.
  • embodiments of the present application can adopt interpolation methods of different computational complexity for pixel blocks of different image content complexity; selecting the interpolation method according to the complexity of the image content helps improve the flexibility with which the device adjusts the image resolution and saves the device's computing resources while ensuring the upsampling effect.
  • Figure 9 shows a flow chart for performing the first interpolation provided by an exemplary embodiment of the present application; it includes the following steps:
  • Step 610 Interpolate the first pixel block in the first direction
  • the first interpolation is linear interpolation, taken here as an example for explanation; those skilled in the art can understand that the first interpolation can be implemented as other interpolation methods, including but not limited to at least one of the following: nearest-neighbor interpolation and bilinear interpolation.
  • Interpolate the first pixel block in the first direction. Taking the schematic diagram of the first image shown in Figure 7 above as an example, interpolate the first pixel block in the first direction to obtain an interpolation result in the first direction;
  • the first direction is the x-axis direction of the first image; for example, the interpolation result in the first direction is:
  • f(x,y1) = ((x2 − x)/(x2 − x1))·f(Q11) + ((x − x1)/(x2 − x1))·f(Q21)
  • f(x,y2) = ((x2 − x)/(x2 − x1))·f(Q12) + ((x − x1)/(x2 − x1))·f(Q22)
  • f(x,y1) and f(x,y2) represent the interpolation results in the first direction
  • x represents the abscissa of the center position of the interpolated pixel block
  • x1 represents the abscissa of the center position of the pixel blocks located on the left side among the third pixel blocks
  • x2 represents the abscissa of the center position of the pixel blocks located on the right side among the third pixel blocks
  • f(Q12) represents the color information of the pixel block located on the upper left among the third pixel blocks
  • f(Q11) represents the color information of the pixel block located on the lower left among the third pixel blocks
  • f(Q22) represents the color information of the pixel block located on the upper right among the third pixel blocks
  • f(Q21) represents the color information of the pixel block located on the lower right among the third pixel blocks
  • Step 620 Based on the interpolation result in the first direction, interpolate the first pixel block in the second direction to obtain an interpolated pixel block;
  • Interpolate the first pixel block in the second direction: based on the interpolation result in the first direction, interpolate the first pixel block in the second direction to obtain an interpolation result in the second direction;
  • the interpolation result in the second direction is the interpolated pixel block;
  • the second direction is the y-axis direction of the first image; for example, the interpolation result in the second direction is:
  • f(x,y) = ((y2 − y)/(y2 − y1))·f(x,y1) + ((y − y1)/(y2 − y1))·f(x,y2)
  • f(x,y1) and f(x,y2) represent the interpolation results in the first direction
  • f(x,y) represents the interpolation result in the second direction, that is, the color information of the interpolated pixel block
  • y represents the ordinate of the center position of the interpolated pixel block
  • y1 represents the ordinate of the center position of the lower pixel blocks among the third pixel blocks
  • y2 represents the ordinate of the center position of the upper pixel blocks among the third pixel blocks
  • the above second formula is obtained by expanding the interpolation results in the first direction within the first formula; for the meaning of each parameter in the expansion, please refer to step 610 above; those skilled in the art can understand that the above third formula is obtained by simplifying the second formula.
  • the method provided in this embodiment implements the first interpolation as linear interpolation, providing an interpolation method with low computing resource consumption when the first pixel block is a simple pixel block; it effectively reduces the computational complexity of upsampling and avoids the waste of computing resources caused by applying a high-cost interpolation to simple image content; computing resource consumption is reduced while the upsampling effect is ensured.
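  • The two-direction interpolation of steps 610 and 620 can be sketched as follows (an illustrative implementation of standard bilinear interpolation using the notation above; the function name is an assumption):

```python
def bilinear(x, y, x1, x2, y1, y2, fQ11, fQ21, fQ12, fQ22):
    """Linear interpolation in the first (x) direction, then the second (y)."""
    # Step 610: interpolate along x at the two row ordinates y1 and y2.
    f_x_y1 = (x2 - x) / (x2 - x1) * fQ11 + (x - x1) / (x2 - x1) * fQ21
    f_x_y2 = (x2 - x) / (x2 - x1) * fQ12 + (x - x1) / (x2 - x1) * fQ22
    # Step 620: interpolate the two partial results along y.
    return (y2 - y) / (y2 - y1) * f_x_y1 + (y - y1) / (y2 - y1) * f_x_y2

# At the exact center of the 2x2 neighborhood the result is the plain average.
print(bilinear(0.5, 0.5, 0, 1, 0, 1, 10, 20, 30, 40))  # 25.0
```

  • only four neighboring color values and a handful of multiplications are needed, which is why this path suits simple pixel blocks.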
  • Figure 10 shows a flow chart for performing second interpolation provided by an exemplary embodiment of the present application; it includes the following steps:
  • Step 630 Calculate the characteristic length of the first pixel block
  • the second interpolation is Lanczos interpolation as an example for explanation; those skilled in the art can understand that the second interpolation can be implemented as other interpolation methods, including but not limited to cubic interpolation.
  • I represents the brightness factor of the pixel block.
  • the brightness factor is represented by the green channel of the RGB color system;
  • A represents the pixel block adjacent to the top of the first pixel block, B represents the pixel block adjacent to the left of the first pixel block, D represents the pixel block adjacent to the right of the first pixel block,
  • E represents the pixel block adjacent to the bottom of the first pixel block,
  • AH2 represents encapsulation as two-dimensional Half data;
  • dir2.x represents the component of dir2 in the X direction; intermediate variables introduced for convenience of expression, such as dir, also appear in the above formula.
  • Step 640 Calculate the weighting parameters of the first pixel block
  • the weighting parameter of the first pixel block provides the weight of the fourth pixel block adjacent to the first pixel block used when constructing the interpolated pixel block;
  • sqrt represents square root calculation
  • max represents maximum value calculation
  • abs represents absolute value calculation
  • AH1 represents encapsulation as one-dimensional Half data
  • AH2 represents encapsulation as two-dimensional Half data
  • dir.x represents the component of dir in the X direction, that is, the component in the left-right direction
  • dir.y represents the component of dir in the Y direction, that is, the component in the up and down direction
  • the weighting parameters include len2 and clp, where clp represents the clipping point and lob represents the negative lobe strength; intermediate variables introduced for convenience of expression, such as stretch, also appear in the above formula.
  • Step 650 Based on the weighting parameters, perform the second interpolation on the first pixel block to obtain the interpolated pixel block;
  • the second interpolation is performed on the first pixel block based on the fourth pixel blocks, according to the weighting parameters determined in step 640, to obtain the interpolated pixel block;
  • the fourth pixel blocks in this embodiment are the same as those shown in Figure 8, that is, they include 12 pixel blocks;
  • the weight of the fourth pixel block is:
  • x represents the weighting parameter len2 in step 640
  • w represents the weighting parameter clp in step 640
  • L(x) represents the weight of the fourth pixel block, that is, the weight coefficient including 12 pixel blocks.
  • the color information of the interpolated pixel block is the weighted average of the color information of the fourth pixel blocks; that is, the average of the color information of the fourth pixel blocks multiplied by their weight coefficients is determined as the color information of the interpolated pixel block.
  • the method provided by this embodiment implements the second interpolation as Lanczos interpolation, providing an interpolation method with high computing resource consumption when the first pixel block is a complex pixel block, which effectively ensures the upsampling effect on complex pixel blocks; at the same time, it avoids the waste of computing resources caused by applying a high-cost interpolation to simple image content, effectively reducing the computational complexity.
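  • The exact weight formula L(x) is not reproduced in this text; for reference, the classical Lanczos kernel with window a = 2 (an assumption standing in for the weighting function of step 650) has the following shape:

```python
import math

def lanczos_weight(x, a=2):
    """Classical Lanczos kernel: L(x) = a*sin(pi*x)*sin(pi*x/a)/(pi*x)**2
    for 0 < |x| < a, L(0) = 1, and 0 outside the window."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

# Samples in the band 1 < |x| < 2 receive negative weight - the "negative
# lobe" whose strength the parameter lob above refers to.
weights = [lanczos_weight(d) for d in (-1.5, -0.5, 0.0, 0.5, 1.5)]
```

  • a separable 2-D weight for one of the 12 fourth pixel blocks would be the product of the horizontal and vertical kernel values, and the interpolated color is the weight-normalized sum, matching the weighted-average description above.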
  • Figure 11 shows a flow chart of an image processing method provided by an exemplary embodiment of the present application.
  • the method can be performed by a computer device. That is, in an optional design, based on the embodiment shown in Figure 2, the following steps are also included:
  • Step 524 Determine the feature judgment conditions of the first pixel block according to the first image
  • different feature judgment conditions can be set for different first pixel blocks.
  • the feature judgment condition is determined based on the first image; since the computational complexity of the second interpolation is greater than that of the first interpolation, the upsampling effect of the second interpolation is better than that of the first interpolation; the first image is divided into key areas and non-key areas, and the feature judgment condition is determined accordingly; for example, the key areas in the first image have high display requirements, so loose feature judgment conditions are set in the key areas to increase the number of first pixel blocks on which the second interpolation is performed; the non-key areas in the first image have low display requirements, so strict feature judgment conditions are set in the non-key areas to reduce the number of first pixel blocks on which the second interpolation is performed;
  • step 524 can be implemented as step 524a:
  • Step 524a Determine the feature judgment condition of the first pixel block based on the position information of the first pixel block in the first image;
  • a target area is determined in the first image, and the feature judgment condition of the first pixel block is determined based on whether the position of the first pixel block is within the target area; it should be noted that the target area is predetermined, and no restriction is imposed on at least one of the shape, size, and position of the target area; the target area is a partial area of the first image;
  • step 524a can be implemented as:
  • when the position of the first pixel block is within the target area, determining that the feature judgment condition of the first pixel block includes that the complexity of the image content of the first pixel block exceeds the first target threshold;
  • when the position of the first pixel block is outside the target area, determining that the feature judgment condition of the first pixel block includes that the complexity of the image content of the first pixel block exceeds the second target threshold;
  • the first target threshold is smaller than the second target threshold, and the target area is a partial area of the first image.
  • the feature judgment condition is implemented by setting a threshold against which the interpolation feature is judged;
  • when the position of the first pixel block is within the target area, a first threshold is set for the feature judgment condition; when the position of the first pixel block is outside the target area, a second threshold is set for the feature judgment condition;
  • the first threshold is smaller than the second threshold; that is, within the target area the proportion of pixel blocks on which the second interpolation is performed is increased, so the target area obtains a better display effect;
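  • The region-dependent threshold selection can be sketched as follows (the threshold values, the rectangle encoding of the target area, and the function names are illustrative assumptions):

```python
def feature_threshold(block_pos, target_area, t_first=0.2, t_second=0.6):
    """Return the complexity threshold for one first pixel block.

    Inside the target area the looser (smaller) first threshold applies, so
    more blocks exceed it and receive the costlier second interpolation.
    """
    x, y = block_pos
    left, top, right, bottom = target_area
    inside = left <= x < right and top <= y < bottom
    return t_first if inside else t_second

def use_second_interpolation(complexity, block_pos, target_area):
    return complexity > feature_threshold(block_pos, target_area)

# The same medium-complexity block is upgraded inside the key region only.
area = (4, 4, 12, 12)                                  # (left, top, right, bottom)
print(use_second_interpolation(0.4, (6, 6), area))     # True
print(use_second_interpolation(0.4, (0, 0), area))     # False
```

  • lowering the threshold inside the target area is exactly what "loose feature judgment conditions" means above: more blocks satisfy the condition there.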
  • step 524a can be implemented as step 524b:
  • Step 524b Determine the feature judgment conditions of the first pixel block based on the image content of the first image and the position information of the first pixel block in the first image;
  • an image main area is determined in the first image, and the feature judgment condition of the first pixel block is determined based on whether the position of the first pixel block is located in the image main area; it should be noted that no restriction is imposed on at least one of the shape, size, and position of the image main area; the image main area is a partial area of the first image;
  • step 524b can be implemented as:
  • when the position of the first pixel block is within the image main area, determining that the feature judgment condition of the first pixel block includes that the complexity of the image content of the first pixel block exceeds a third target threshold;
  • when the position of the first pixel block is outside the image main area, determining that the feature judgment condition of the first pixel block includes that the complexity of the image content of the first pixel block exceeds a fourth target threshold;
  • the third target threshold is smaller than the fourth target threshold
  • the image main area may be determined directly based on the image content of the first image, or may be determined indirectly based on the image content of the first image; an exemplary description is given below:
  • the image subject area is directly determined based on the image content of the first image
  • for example, the first image recognition model uses the display area of the virtual object in the first image as the image main area, and loose feature judgment conditions are set in the image main area to increase the number of first pixel blocks on which the second interpolation is performed; specifically, Figure 14 shows a schematic diagram of the first image provided by an exemplary embodiment of the present application; the display area 412 of the virtual object in the first image is used as the image main area; when the position of the first pixel block is within the image main area, the feature judgment conditions are loose; when the position of the first pixel block is outside the image main area, for example when the position of the first pixel block is within the display area of a virtual box, a virtual vehicle, or a virtual road, the feature judgment conditions are strict.
  • in another example, the first image recognition model uses the display area of the virtual building in the first image as the image main area, and loose feature judgment conditions are set in the image main area to increase the number of first pixel blocks on which the second interpolation is performed; specifically, Figure 15 shows a schematic diagram of the first image provided by an exemplary embodiment of the present application; the display area 422 of the virtual building in the first image is used as the image main area; when the position of the first pixel block is within the image main area, the feature judgment conditions are loose; when the position of the first pixel block is outside the image main area, for example when the position of the first pixel block is within the display area of a virtual plant, a virtual fence, or a virtual mountain, the feature judgment conditions are strict.
  • the image subject area is determined indirectly based on the image content of the first image
  • the second image recognition model is called to determine the image type of the first image in the first image, and the corresponding image main area is determined according to the image type.
  • for example, the second image recognition model determines that the image type of the first image is the first type, and the corresponding first area in the first image is used as the image main area; specifically, Figure 16 shows a schematic diagram of the first image provided by an exemplary embodiment of the present application; in the first type of image, the trapezoidal area 432 is the area that needs to be focused on; for images of FPS games, there is a large amount of information and game content in the trapezoidal area 432.
  • when the position of the first pixel block is located in the image main area, the feature judgment conditions are loose.
  • in another example, the second image recognition model determines that the image type of the first image is the second type, and the corresponding second area in the first image is used as the image main area; specifically, Figure 17 shows a schematic diagram of the first image provided by an exemplary embodiment of the present application; in the second type of image, the elliptical area 442 is the area of concern; for such images, there is a large amount of information and game content in the elliptical area 442.
  • the first image recognition model and the second image recognition model are different models, with different model structures and/or model parameters.
  • the method provided by this embodiment improves the evaluation of the first pixel block by determining the feature judgment condition of the first pixel block, and provides different interpolation bases for first pixel blocks at different positions; it effectively reduces the computational complexity of upsampling and further avoids the waste of computing resources caused by applying a high-cost interpolation to simple image content; computing resource consumption is reduced while the upsampling effect is ensured.
  • Figure 18 shows a flow chart of a game rendering method provided by an exemplary embodiment of the present application.
  • the method can be performed by a computer device, which is a gaming device that can run a game engine.
  • the method includes:
  • Step 710 Determine the first resolution and the second resolution
  • the first resolution is the output resolution of the game engine
  • the second resolution is the display resolution of the game device; for example, the first resolution is smaller than the second resolution
  • the first resolution is the output resolution of the game engine, that is, the game engine renders the game screen according to the first resolution; those skilled in the art can understand that when the first resolution is small, the computational complexity of game screen rendering is small; that is, the size of the first resolution is positively correlated with the computational complexity of game screen rendering.
  • the second resolution is the display resolution of the game device; the display resolution can be equal to or smaller than the device resolution; taking a smartphone with a device resolution of 1920×1080 as an example, the smartphone can support multiple display modes and display at different resolutions; for example, it can also display at a 1280×720 or 640×360 resolution; when it displays at 640×360, the display resolution is 640×360, which is smaller than the device resolution.
  • determining the first resolution and the second resolution may be independent of each other, or may be related.
  • the second resolution may be determined first, and then the first resolution may be determined based on the second resolution.
  • Step 720 Obtain the first image output by the game engine based on the first resolution
  • the first image is a game screen image rendered by a game engine;
  • Figure 19 shows a schematic diagram of displaying the first image provided by an exemplary embodiment of the present application; since the first resolution is smaller than the second resolution, when the device displays the first image 342 at the first resolution, the first image cannot fill the display, and there is a blank area 344.
  • Step 730 Based on the first image, use an image processing method to obtain a second image with a second resolution for display;
  • the image processing method is obtained according to any of the above embodiments of the image processing method; the second image has the second resolution, and the second resolution is the display resolution of the device; Figure 20 shows a schematic diagram of displaying the second image provided by an exemplary embodiment of the present application; when the device displays the second image 346 at the second resolution, the second image fills the display without any blank area.
  • the method provided by this embodiment determines the first resolution and the second resolution in a game rendering scenario, and performs different interpolations on the first pixel block based on the complexity of the image content in the first pixel block; it effectively improves the quality of game-rendered images and avoids poor rendering effects caused by limits on the computing power of the computer device; it reduces the consumption of computing resources and the computational complexity.
  • for example, determining the first resolution can be implemented as: determining the first resolution according to the attribute information of the game device;
  • the attribute information of the game device includes at least one of the following: computing power of the game device, load condition of the game device, temperature of the game device, and model characteristics of the game device.
  • the above game device usually includes a processor, such as at least one of a central processing unit (Central Processing Unit, CPU) and a graphics processing unit (Graphics Processing Unit, GPU); of course, the game device may also include other components with computing capability.
  • the computing power of the game device is used to describe the number of calculations that the game device can carry out per unit time; the stronger the computing power, the more calculations can be performed in the same time;
  • the load condition of the game device is used to describe the current working status of the game device; for example, when the load on the game device is high, the first resolution is low;
  • the temperature of the game device: for example, when the temperature of the game device is high, the game device is protected and the first resolution is low, to reduce the calculation load of the game device;
  • the model characteristics of the game device are used to describe the specifications of the game device;
  • when the model characteristics of the game device indicate that the game device is a high-specification device, the first resolution is high.
  • when the attribute information of the game device meets the target condition, the first resolution is determined as A1 times B1;
  • when the attribute information of the game device does not meet the target condition, the first resolution is determined as A2 times B2;
  • A1 is greater than A2 and/or B1 is greater than B2;
  • A1, A2, B1 and B2 are all positive integers.
  • the first resolution is represented by the number of horizontal pixels multiplied by the number of vertical pixels, such as: 1920 ⁇ 1080.
  • Target conditions include at least one of the following:
  • the computing power of the game device is greater than the target capability threshold; for example, the target capability threshold is used to describe the number of calculations that the game device can carry out per unit time; for example, the target capability threshold is 100,000 operations per minute; when the computing power of the game device is greater than 100,000 operations per minute, the attribute information of the game device meets the target condition;
  • the load condition of the game device is less than the target load threshold; for example, the target load threshold is used to describe the working status of the game device; for example, the target load threshold is 75%; when the load on the game device is less than 75% of full load, the attribute information of the game device meets the target condition;
  • the temperature of the game device is less than the target temperature threshold; for example, the target temperature threshold is used to describe the operating temperature of the game device; for example, the target temperature threshold is 85 degrees Celsius; when the temperature of the game device is less than 85 degrees Celsius, the attribute information of the game device meets the target condition;
  • the model characteristics of the game device exceed the target model characteristics; for example, the target model characteristics are used to describe the specifications of the game device; for example, the target model characteristic is the first model product of the fourth update; when the model characteristic of the game device is the first model product of the sixth update, the target model characteristic is exceeded, and the attribute information of the game device meets the target condition.
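  • The target-condition check can be sketched as follows (the field names, threshold defaults, and the reading that any one satisfied sub-condition suffices are assumptions made for illustration):

```python
from dataclasses import dataclass

@dataclass
class DeviceAttributes:
    ops_per_minute: int   # computing power per unit time
    load: float           # current load as a fraction of full load
    temperature: float    # degrees Celsius
    model_tier: int       # ordinal stand-in for model characteristics

def meets_target_condition(dev, *, min_ops=100_000, max_load=0.75,
                           max_temp=85.0, min_tier=4):
    # The attribute information meets the target condition when any one of
    # the listed sub-conditions holds (one possible reading of the text).
    return (dev.ops_per_minute > min_ops
            or dev.load < max_load
            or dev.temperature < max_temp
            or dev.model_tier > min_tier)

# A hot, heavily loaded device still qualifies on raw computing power alone.
print(meets_target_condition(DeviceAttributes(120_000, 0.9, 90.0, 3)))  # True
```

  • a stricter all-sub-conditions reading would simply replace the `or` chain with `and`.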
  • when it is determined that the first resolution and the second resolution are associated, step 710 may be implemented as: determining the second resolution, and multiplying the second resolution by a preset multiple to obtain the first resolution;
  • for example, when the game device displays the second image at the second resolution, the second image can fill the display without any blank area.
  • the first resolution is smaller than the second resolution, there is a multiple relationship between the first resolution and the second resolution, and the preset multiple is less than 1.
  • the resolution is usually expressed by the number of horizontal pixels multiplied by the number of vertical pixels, such as: 1920 ⁇ 1080; but it is not excluded that the resolution can be expressed by the total number of pixels and the horizontal and vertical ratio, such as: 2073600, 16:9.
  • Multiplying the second resolution by a preset multiple usually involves multiplying the number of horizontal pixels and the number of vertical pixels by the preset multiple to obtain the first resolution.
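  • Deriving the first resolution from the second resolution and the preset multiple can be sketched as follows (the multiple 0.75 is an illustrative value, not mandated by the application):

```python
def first_resolution(second_resolution, preset_multiple=0.75):
    """Multiply both pixel counts of the display resolution by a preset
    multiple (< 1) to obtain the engine's render resolution."""
    w, h = second_resolution
    return int(w * preset_multiple), int(h * preset_multiple)

print(first_resolution((1920, 1080)))        # (1440, 810)
print(first_resolution((1920, 1080), 0.5))   # (960, 540)
```

  • because the preset multiple is less than 1, the render resolution is always smaller than the display resolution, which is what makes the later upsampling step necessary.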
  • the method provided by this embodiment determines the first resolution and the second resolution in a game rendering scenario, and performs different interpolations on the first pixel block based on the complexity of the image content in the first pixel block; it effectively improves the quality of game-rendered images; determining the first resolution from the attribute information of the computer device effectively ensures that the computing power of the computer device is used fully and reasonably, and lays the foundation for obtaining a high-resolution second image while avoiding poor rendering effects caused by limits on the computing power of the computer device; it reduces the consumption of computing resources and the computational complexity.
  • Figure 21 shows a block diagram of an image processing device provided by an exemplary embodiment of the present application.
  • the device includes:
  • Acquisition module 810 used to acquire a first image with a first resolution
  • Calculation module 820 configured to calculate interpolation features of the first pixel block in the first image according to the first image, where the interpolation features are used to describe the image content of the first pixel block;
  • the processing module 830 is configured to perform a first interpolation on the first pixel block to obtain an interpolated pixel block when the interpolation feature of the first pixel block does not satisfy the feature judgment condition;
  • the processing module 830 is also configured to perform a second interpolation on the first pixel block to obtain the interpolated pixel block when the interpolation feature of the first pixel block satisfies the feature judgment condition,
  • the feature judgment condition is a judgment condition regarding the complexity of the image content of the first pixel block;
  • An output module 840 is configured to output a second image with a second resolution based on the interpolated pixel block, where the second resolution is greater than the first resolution;
  • the first interpolation and the second interpolation are used to upsample the first pixel block, and the computing resource consumption of the second interpolation is greater than the computing resource consumption of the first interpolation.
  • calculation module 820 is also used to:
  • the plurality of second pixel blocks include adjacent pixel blocks surrounding the first pixel block.
  • the color information of the first image includes a brightness factor; the calculation module 820 is also used to:
  • the direction feature is determined as the interpolation feature, and the direction feature is used to describe the brightness difference between the first pixel block and the plurality of second pixel blocks.
  • calculation module 820 is also used to:
  • the first direction and the second direction are perpendicular to each other.
  • the calculation module 820 is used to:
  • a first brightness difference of the first pixel block in the first direction is determined from the difference in brightness factor between the second pixel block located in front of the first pixel block and the second pixel block located behind it in the first direction;
  • a second brightness difference of the first pixel block in the second direction is determined from the difference in brightness factor between the second pixel block located in front of the first pixel block and the second pixel block located behind it in the second direction;
  • encapsulating the brightness differences between the first pixel block and the plurality of second pixel blocks as two-dimensional floating-point data to determine the brightness characteristic of the first pixel block includes: encapsulating the first brightness difference and the second brightness difference as two-dimensional floating-point data to determine the brightness characteristic of the first pixel block.
  • the device further includes:
  • the dividing module 850 is configured to divide the first image into at least two pixel blocks according to the dividing rules, and the first pixel block is any pixel block among the at least two pixel blocks;
  • the output module 840 is further configured to splice the interpolated pixel blocks into the second image with the second resolution according to a combination rule, where the combination rule is the ordering rule inverse to the division rule.
  • the device further includes:
  • Determining module 860 configured to determine the feature determination condition of the first pixel block according to the first image.
  • the determination module 860 is also used to:
  • the feature determination condition of the first pixel block is determined according to the position information of the first pixel block in the first image.
  • the determination module 860 is also used to:
  • when the position of the first pixel block is within a target area, the feature judgment condition of the first pixel block includes that the complexity of the image content of the first pixel block exceeds a first target threshold,
  • where the target area is a partial area of the first image;
  • when the position of the first pixel block is outside the target area, the feature judgment condition of the first pixel block includes that the complexity of the image content of the first pixel block exceeds a second target threshold;
  • the first target threshold is smaller than the second target threshold.
  • the determination module 860 is also used to:
  • the feature determination condition of the first pixel block is determined according to the image content of the first image and the position information of the first pixel block in the first image.
  • the determination module 860 is also used to:
  • when the position of the first pixel block is within the image subject region, the feature judgment condition of the first pixel block includes that the complexity of the image content of the first pixel block exceeds a third target threshold;
  • when the position of the first pixel block is outside the image subject region, the feature judgment condition of the first pixel block includes that the complexity of the image content of the first pixel block exceeds a fourth target threshold, and the third target threshold is smaller than the fourth target threshold.
  • the determination module 860 is also used to:
  • processing module 830 is also used to:
  • the first interpolation is performed on the first pixel block according to third pixel blocks to obtain the interpolated pixel block,
  • where the third pixel blocks include adjacent pixel blocks surrounding the first pixel block;
  • the second interpolation is performed on the first pixel block according to fourth pixel blocks to obtain the interpolated pixel block,
  • where the fourth pixel blocks include adjacent pixel blocks surrounding the first pixel block, and the number of fourth pixel blocks is greater than the number of third pixel blocks.
  • the first interpolation includes linear interpolation
  • the second interpolation includes Lanczos interpolation
  • Figure 22 shows a block diagram of a game rendering device provided by an exemplary embodiment of the present application.
  • the device is executed by the game device, and the device includes:
  • Determining module 870 used to determine a first resolution and a second resolution, the first resolution is the output resolution of the game engine, and the second resolution is the display resolution of the game device;
  • Acquisition module 880 used to acquire the first image output by the game engine based on the first resolution
  • the processing module 890 is configured to use an image processing device to obtain a second image with the second resolution based on the first image for display;
  • the image processing device is the image processing device described in the foregoing embodiments.
  • the determination module 870 is also used to:
  • the attribute information of the game device includes at least one of the following: computing power of the game device, load condition of the game device, temperature of the game device, and model characteristics of the game device.
  • the determination module 870 is also configured to: determine the first resolution as A1 times B1 when the attribute information of the game device meets the target condition;
  • A1 is greater than A2 and/or B1 is greater than B2
  • the target condition includes at least one of the following: the computing power of the game device is greater than the target capability threshold, the load condition of the game device is less than the target load threshold, the temperature of the game device is less than the target temperature threshold, and the model characteristics of the game device exceed the target model characteristics.
  • the determination module 870 is also used to:
  • the product of the second resolution and a preset multiple is determined as the first resolution, and the preset multiple is less than 1.
  • when the device provided in the above embodiments implements its functions, the division into the functional modules described above is only an example; in practical applications, these functions can be allocated to different functional modules as needed, i.e., the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above.
  • the specific manner in which each module performs its operations has been described in detail in the method embodiments; the technical effects achieved by each module are the same as those of the method embodiments and will not be elaborated here.
  • An embodiment of the present application also provides a computer device, which includes a processor and a memory with a computer program stored in it; the processor executes the computer program in the memory to implement the image processing method or the game rendering method provided by the above method embodiments.
  • the computer device is a server.
  • FIG. 23 is a structural block diagram of a server provided by an exemplary embodiment of the present application.
  • the server 2300 includes: a processor 2301 and a memory 2302.
  • the processor 2301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, etc.
  • the processor 2301 can adopt at least one hardware form among digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA).
  • DSP Digital Signal Processing
  • FPGA field-programmable gate array
  • PLA programmable logic array
  • the processor 2301 can also include a main processor and a co-processor.
  • the main processor is a processor used to process data in the wake-up state, also called a central processing unit (Central Processing Unit, CPU); the co-processor is a low-power processor used to process data in standby mode.
  • CPU Central Processing Unit
  • the processor 2301 may be integrated with a graphics processor (Graphics Processing Unit, GPU), and the GPU is responsible for rendering and drawing content that needs to be displayed on the display screen.
  • the processor 2301 may also include an artificial intelligence (Artificial Intelligence, AI) processor, which is used to process computing operations related to machine learning.
  • AI Artificial Intelligence
  • Memory 2302 may include one or more computer-readable storage media, which may be non-transitory. Memory 2302 may also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 2302 stores at least one instruction, which is executed by the processor 2301 to implement the image processing method or the game rendering method provided by the method embodiments of this application.
  • the server 2300 optionally further includes: an input interface 2303 and an output interface 2304.
  • the processor 2301, the memory 2302, the input interface 2303, and the output interface 2304 may be connected through a bus or signal line.
  • Each peripheral device can be connected to the input interface 2303 and the output interface 2304 through a bus, a signal line or a circuit board.
  • the input interface 2303 and the output interface 2304 may be used to connect at least one peripheral device related to input/output (I/O) to the processor 2301 and the memory 2302 .
  • in some embodiments, the processor 2301, the memory 2302, the input interface 2303, and the output interface 2304 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 2301, the memory 2302, the input interface 2303, and the output interface 2304 can be implemented on a separate chip or circuit board, which is not limited in the embodiments of the present application.
  • the structure shown above does not constitute a limitation on the server 2300, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
  • a chip is also provided.
  • the chip includes programmable logic circuits and/or program instructions, and when run on a computer device it is used to implement the image processing method or the game rendering method described in the above aspects.
  • a computer program product including computer instructions stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium and executes them to implement the image processing method or the game rendering method provided by the above method embodiments.
  • a computer-readable storage medium is also provided, and a computer program is stored in the computer-readable storage medium.
  • the computer program is loaded and executed by the processor to implement the image processing method or the game rendering method provided by the above method embodiments.
  • Computer-readable media includes computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
  • Storage media can be any available media that can be accessed by a general purpose or special purpose computer.

Abstract

This application discloses an image processing method, a game rendering method, an apparatus, a device, a program product and a storage medium, belonging to the field of computer technology. The method includes: acquiring a first image with a first resolution; calculating, from the first image, an interpolation feature of a first pixel block in the first image; performing a first interpolation on the first pixel block to obtain an interpolated pixel block when the interpolation feature of the first pixel block does not satisfy a feature judgment condition; performing a second interpolation on the first pixel block to obtain the interpolated pixel block when the interpolation feature satisfies the feature judgment condition; and outputting, based on the interpolated pixel blocks, a second image with a second resolution. By performing different interpolations on the first pixel block according to the complexity of its image content, this application effectively reduces the computational complexity of upsampling and avoids the waste of computing resources caused by applying a high-cost interpolation to simple image content.

Description

Image processing method, game rendering method, apparatus, device, program product and storage medium

This application claims priority to Chinese patent application No. 202210230954.6, entitled "Image processing method, game rendering method, apparatus, device and storage medium", filed with the China Patent Office on March 10, 2022, the entire contents of which are incorporated herein by reference.

Technical Field

This application relates to the field of computer technology, and in particular to an image processing method, a game rendering method, an apparatus, a device, a program product and a storage medium.

Background

With the development of computer technology, ever higher image resolutions are demanded in pursuit of better display quality.

In the related art, a low-resolution image is usually upsampled to a higher resolution with a spatial upscaling algorithm, without relying on any additional data, so that the low-resolution image obtains a better display effect.

However, such upsampling requires a large amount of computation and places high demands on the computing capability of the device; how to reduce the computational complexity is a problem to be solved urgently.
Summary

This application provides an image processing method, a game rendering method, an apparatus, a device, a program product and a storage medium. The technical solutions are as follows:

According to one aspect of this application, an image processing method is provided, the method including:

acquiring a first image with a first resolution;

calculating, from the first image, an interpolation feature of a first pixel block in the first image, the interpolation feature describing the image content of the first pixel block;

performing a first interpolation on the first pixel block to obtain an interpolated pixel block when the interpolation feature of the first pixel block does not satisfy a feature judgment condition, the feature judgment condition being a judgment condition on the complexity of the image content of the first pixel block; performing a second interpolation on the first pixel block to obtain the interpolated pixel block when the interpolation feature satisfies the feature judgment condition;

outputting, based on the interpolated pixel block, a second image with a second resolution, the second resolution being greater than the first resolution;

wherein the first interpolation and the second interpolation are used to upsample the first pixel block, and the computing resource consumption of the second interpolation is greater than that of the first interpolation.

According to another aspect of this application, a game rendering method is provided, executed by a game device, the method including:

determining a first resolution and a second resolution, the first resolution being the output resolution of a game engine and the second resolution being the display resolution of the game device;

acquiring a first image output by the game engine at the first resolution;

obtaining, based on the first image and using an image processing method, a second image with the second resolution for display;

wherein the image processing method is the image processing method described above.

According to another aspect of this application, an image processing apparatus is provided, the apparatus including:

an acquisition module, configured to acquire a first image with a first resolution;

a calculation module, configured to calculate, from the first image, an interpolation feature of a first pixel block in the first image, the interpolation feature describing the image content of the first pixel block;

a processing module, configured to perform a first interpolation on the first pixel block to obtain an interpolated pixel block when the interpolation feature of the first pixel block does not satisfy a feature judgment condition, the feature judgment condition being a judgment condition on the complexity of the image content of the first pixel block;

the processing module being further configured to perform a second interpolation on the first pixel block to obtain the interpolated pixel block when the interpolation feature satisfies the feature judgment condition;

an output module, configured to output, based on the interpolated pixel block, a second image with a second resolution, the second resolution being greater than the first resolution;

wherein the first interpolation and the second interpolation are used to upsample the first pixel block, and the computing resource consumption of the second interpolation is greater than that of the first interpolation.

According to another aspect of this application, a game rendering apparatus is provided, the apparatus being run by a game device and including:

a determination module, configured to determine a first resolution and a second resolution, the first resolution being the output resolution of a game engine and the second resolution being the display resolution of the game device;

an acquisition module, configured to acquire a first image output by the game engine at the first resolution;

a processing module, configured to obtain, based on the first image and using an image processing apparatus, a second image with the second resolution for display;

wherein the image processing apparatus is the image processing apparatus described above.

According to another aspect of this application, a computer device is provided, including a processor and a memory storing at least one program; the processor executes the at least one program in the memory to implement the image processing method or the game rendering method described above.

According to another aspect of this application, a computer-readable storage medium is provided, storing executable instructions that are loaded and executed by a processor to implement the image processing method or the game rendering method described above.

According to another aspect of this application, a computer program product is provided, including computer instructions stored in a computer-readable storage medium; a processor reads and executes the computer instructions from the computer-readable storage medium to implement the image processing method or the game rendering method described above.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.

Figure 1 is a block diagram of a computer system provided by an exemplary embodiment of this application;
Figure 2 is a flowchart of an image processing method provided by an exemplary embodiment of this application;
Figure 3 is a flowchart of an image processing method provided by an exemplary embodiment of this application;
Figure 4 is a schematic diagram of a first image provided by an exemplary embodiment of this application;
Figure 5 is a flowchart of an image processing method provided by an exemplary embodiment of this application;
Figure 6 is a flowchart of an image processing method provided by an exemplary embodiment of this application;
Figure 7 is a schematic diagram of a first image provided by an exemplary embodiment of this application;
Figure 8 is a schematic diagram of a first image provided by an exemplary embodiment of this application;
Figure 9 is a flowchart of performing a first interpolation provided by an exemplary embodiment of this application;
Figure 10 is a flowchart of performing a second interpolation provided by an exemplary embodiment of this application;
Figure 11 is a flowchart of an image processing method provided by an exemplary embodiment of this application;
Figure 12 is a flowchart of an image processing method provided by an exemplary embodiment of this application;
Figure 13 is a flowchart of an image processing method provided by an exemplary embodiment of this application;
Figure 14 is a schematic diagram of a first image provided by an exemplary embodiment of this application;
Figure 15 is a schematic diagram of a first image provided by an exemplary embodiment of this application;
Figure 16 is a schematic diagram of a first image provided by an exemplary embodiment of this application;
Figure 17 is a schematic diagram of a first image provided by an exemplary embodiment of this application;
Figure 18 is a flowchart of a game rendering method provided by an exemplary embodiment of this application;
Figure 19 is a schematic diagram of displaying a first image provided by an exemplary embodiment of this application;
Figure 20 is a schematic diagram of displaying a second image provided by an exemplary embodiment of this application;
Figure 21 is a structural block diagram of an image processing apparatus provided by an exemplary embodiment of this application;
Figure 22 is a structural block diagram of a game rendering apparatus provided by an exemplary embodiment of this application;
Figure 23 is a structural block diagram of a server provided by an exemplary embodiment of this application.

The drawings here are incorporated into and form part of this specification, illustrate embodiments consistent with this application, and together with the specification serve to explain its principles.
Detailed Description

To make the objectives, technical solutions and advantages of this application clearer, the embodiments of this application are further described in detail below with reference to the drawings.

Exemplary embodiments are described in detail here, with examples shown in the drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this application; rather, they are merely examples of apparatuses and methods consistent with some aspects of this application as detailed in the appended claims.

The terms used in this disclosure are for the purpose of describing particular embodiments only and are not intended to limit this disclosure. The singular forms "a", "said" and "the" used in this disclosure and the appended claims are also intended to include the plural forms unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.

It should be noted that the user information (including but not limited to user device information and personal information) and data (including but not limited to data for analysis, stored data and displayed data) involved in this application are all authorized by the user or fully authorized by all parties, and the collection, use and processing of the relevant data must comply with the relevant laws, regulations and standards of the relevant countries and regions. For example, the first image and the feature judgment condition involved in this application are obtained with full authorization.

It should be understood that although the terms first, second, etc. may be used in this disclosure to describe various information, such information should not be limited by these terms, which are only used to distinguish information of the same type from one another. For example, without departing from the scope of this disclosure, a first parameter could also be called a second parameter and, similarly, a second parameter could be called a first parameter. Depending on the context, the word "if" as used herein may be interpreted as "when", "while" or "in response to determining".
First, several terms involved in this application are briefly introduced:

Render pass: when creating computer-generated images, the final scenes appearing in film and television productions are usually produced by rendering multiple "layers" or "passes", i.e. multiple images intended to be combined through digital compositing to form a complete frame. Pass rendering builds on the motion-control photography tradition that preceded computer-generated imagery (CGI). For example, for a visual-effects shot, a camera can be programmed to pass once over a physical model of a spaceship to capture a fully lit pass of the ship, and then to repeat exactly the same camera move over the ship to capture other elements, such as the ship's illuminated windows or its thrusters. After all passes are captured, they can be optically printed together to form the complete shot. In one manner of expression, render layer and render pass are used interchangeably. Layered rendering specifically refers to separating different objects into individual images, e.g. one layer each for foreground characters, set, background and sky. Pass rendering, on the other hand, refers to separating different aspects of the scene (such as shadows, highlights or reflections) into individual images.

Resolution: the resolution of a digital television, computer monitor or display device is the number of distinct pixels that can be displayed in each dimension, and is governed by various factors. It is usually quoted as width × height in pixels: for example, 1024×768 means 1024 pixels wide and 768 pixels high, commonly read as "ten twenty-four by seven sixty-eight". Those skilled in the art will understand that, based on the pixel counts along its width and height, a display device's resolution corresponds to an aspect ratio; common aspect ratios include, without limitation, 4:3, 16:9 and 8:5. For example, Full High Definition (Full HD) is 1920×1080 with a 16:9 aspect ratio; Ultra eXtended Graphics Array (UXGA) is 1600×1200 with a 4:3 aspect ratio; Wide Quad eXtended Graphics Array (WQXGA) is 2560×1600 with an 8:5 aspect ratio.
The embodiments of this application are further described in detail below.

Figure 1 shows a schematic diagram of a computer system provided by an exemplary embodiment of this application. The computer system can be implemented as a system architecture for the image processing method and/or the game rendering method. The computer system may include a terminal 100 and a server 200. The terminal 100 may be an electronic device such as a mobile phone, a tablet computer, an in-vehicle terminal, a wearable device, a PC (Personal Computer), or an unattended kiosk. A client running a target application may be installed on the terminal 100; the target application may be an image processing application or another application that provides an image processing function, which is not limited in this application. This application also does not limit the form of the target application, which includes but is not limited to an app (application) installed on the terminal 100, a mini-program, or a web page. The server 200 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing cloud computing services. The server 200 may be the backend server of the above target application, providing backend services for its client.

Each step of the image processing method and/or the game rendering method provided by the embodiments of this application may be executed by a computer device, i.e. an electronic device with data computing, processing and storage capabilities. Taking the implementation environment shown in Figure 1 as an example, the image processing method and/or the game rendering method may be executed by the terminal 100 (e.g. by the client of the target application installed on the terminal 100), by the server 200, or by the terminal 100 and the server 200 cooperating interactively, which is not limited in this application.

In addition, the technical solutions of this application can be combined with blockchain technology. For example, some of the data involved in the disclosed image processing method and/or game rendering method (such as the first image, first pixel blocks and second pixel blocks) may be stored on a blockchain. The terminal 100 and the server 200 may communicate over a network, such as a wired or wireless network.
Figure 2 shows a flowchart of an image processing method provided by an exemplary embodiment of this application. The method may be executed by a computer device and includes:

Step 510: acquire a first image with a first resolution.

In one embodiment, the first image includes at least two pixel blocks; illustratively, the first image includes multiple pixels, and the at least two pixel blocks may cover all of the pixels of the first image or only a part of them.

Those skilled in the art will understand that a pixel block includes one or more pixels. The at least two pixel blocks of the first image usually do not overlap, but the possibility of overlap is not excluded.

Step 520: calculate, from the first image, an interpolation feature of a first pixel block in the first image.

Illustratively, the first pixel block is any one of the at least two pixel blocks.

The interpolation feature describes the image content of the first pixel block. Illustratively, the dimensions along which it describes the image content include at least one of: the color information of the first pixel block, its brightness information, its grayscale information, and its position information in the first image. The interpolation feature may describe at least one of the above items directly, or indirectly by describing the variation between the first pixel block and other pixel blocks, or the convolution result between them. The other pixel blocks are usually adjacent to the first pixel block, though non-adjacent blocks are not excluded. Illustratively, at least one of the above items may be described indirectly through at least one of a direction feature, a gradient feature, or a Sobel operator.

Step 530: when the interpolation feature of the first pixel block does not satisfy a feature judgment condition, perform a first interpolation on the first pixel block to obtain an interpolated pixel block.

Illustratively, the feature judgment condition is used to judge whether the first pixel block is a pixel block with complex image content, i.e. it describes the complexity of the image content of the first pixel block. In other words, the feature judgment condition is a judgment condition on the complexity of the image content; for example, it includes that the complexity of the image content of the first pixel block exceeds a target threshold.

Illustratively, the feature judgment condition judges the interpolation feature against a set threshold. The feature judgment condition is preconfigured and adjustable, i.e. different feature judgment conditions can be set for different first pixel blocks. When the interpolation feature of the first pixel block does not satisfy the feature judgment condition, the first pixel block is a pixel block with simple image content.

The first interpolation upsamples the first pixel block; upsampling increases the resolution of the first image.

Step 540: when the interpolation feature of the first pixel block satisfies the feature judgment condition, perform a second interpolation on the first pixel block to obtain the interpolated pixel block.

Illustratively, when the interpolation feature of the first pixel block satisfies the feature judgment condition, the first pixel block is a pixel block with complex image content.

The first interpolation and the second interpolation both upsample the first pixel block; the computing resource consumption of the second interpolation is greater than that of the first, where computing resource consumption describes the computational complexity of the interpolation. Illustratively, the computational complexity of an interpolation is positively correlated with its computing resource consumption.

Step 550: output, based on the interpolated pixel blocks, a second image with a second resolution.

Illustratively, in one implementation, the pixel blocks of the first image are taken one by one as the first pixel block, the corresponding interpolated pixel blocks are computed in turn, and the second image is output from the interpolated pixel blocks. Since the first and second interpolations upsample the first pixel block, the second resolution of the second image output from the interpolated pixel blocks is greater than the first resolution of the first image; i.e. the second resolution is greater than the first resolution.

To summarize, by computing the interpolation feature of the first pixel block, the method of this embodiment performs different interpolations on the first pixel block according to the complexity of its image content. This effectively reduces the computational complexity of upsampling, avoids the waste of computing resources caused by applying a high-cost interpolation to simple image content, and reduces computing resource consumption while guaranteeing the upsampling effect. In other words, by adopting interpolation methods of matching computational cost for pixel blocks of different content complexity, the method of the embodiments of this application can choose the interpolation method by content complexity, which helps improve the device's flexibility in adjusting image resolution and saves the device's computing resources while guaranteeing the upsampling effect.
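The per-block dispatch described above can be sketched in Python (a minimal illustrative sketch, not the patented implementation; the block representation, feature values and the two interpolation callables are hypothetical placeholders):

```python
def upsample(blocks, features, threshold, first_interp, second_interp):
    """Dispatch each pixel block to an interpolation by content complexity.

    Blocks whose interpolation feature does not exceed the threshold (the
    feature judgment condition) get the cheap first interpolation; complex
    blocks get the costlier second one. Returns the interpolated blocks in
    order, ready to be stitched into the second image.
    """
    out = []
    for block, feature in zip(blocks, features):
        interp = second_interp if feature > threshold else first_interp
        out.append(interp(block))
    return out
```

In use, `first_interp` and `second_interp` would be, e.g., bilinear and Lanczos upsamplers; only the dispatch logic is shown here.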
Next, the process of calculating the interpolation feature of the first image is introduced through the following embodiment.

Figure 3 shows a flowchart of an image processing method provided by an exemplary embodiment of this application. The method may be executed by a computer device. Step 520 in Figure 2 may be implemented as the following step:

Step 522: calculate the interpolation feature of the first pixel block in the first image from multiple second pixel blocks.

Each second pixel block is a pixel block of the first image and includes one or more pixels. In one implementation, the first pixel block contains the same number and/or arrangement of pixels as a second pixel block; further, a second pixel block may contain multiple pixels.

Illustratively, the multiple second pixel blocks are adjacent pixel blocks around the first pixel block; in other words, they are arranged around the first pixel block. For example, Figure 4 shows a schematic diagram of the first image: the first image 310 includes 9 pixel blocks, and the blocks adjacent to the first pixel block 310a above, below, to the left and to the right are all second pixel blocks 310b, i.e. there are for example 4 second pixel blocks 310b. Those skilled in the art will understand that this is only an example; more or fewer blocks adjacent to the first pixel block may serve as second pixel blocks.

Illustratively, the interpolation feature of the first pixel block is:
dirX=GD-GB
dirY=GE-GA
dir=AH2(dirX,dirY);
dir2=dir*dir;
dirR=dir2.x+dir2.y;
G=0.299*Red+0.587*Green+0.114*Blue;
where dirR denotes the interpolation feature and G the grayscale information of a pixel block; Red, Green and Blue denote the red, green and blue channels of the RGB color system; A denotes the second pixel block adjacent above the first pixel block, B the one adjacent to its left, D the one adjacent to its right, and E the one adjacent below it; AH2 denotes packing into two-dimensional floating-point (half) data; dir2.x denotes the component of dir2 in the X (horizontal) direction and dir2.y its component in the Y (vertical) direction. Intermediate variables introduced for convenience, such as dir, also appear in the above formulas.
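The feature computation above can be sketched in plain Python (an illustrative sketch; the RGB tuples and function names are assumptions, and the luma weights are taken from the formula for G above):

```python
def luminance(rgb):
    """Grayscale factor G = 0.299*R + 0.587*G + 0.114*B (per the formula above)."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def interpolation_feature(above, below, left, right):
    """Interpolation feature dirR of a pixel block from its four neighbours.

    dirX/dirY are luma differences across the block (dirX = G_D - G_B,
    dirY = G_E - G_A); dirR = dirX^2 + dirY^2 measures how strongly the
    local content varies, i.e. the content complexity.
    """
    dir_x = luminance(right) - luminance(left)
    dir_y = luminance(below) - luminance(above)
    return dir_x * dir_x + dir_y * dir_y
```

A flat region yields dirR = 0, while a block sitting on an edge yields a large dirR, which is what the feature judgment condition thresholds.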
In an optional implementation, step 522 may be implemented as the following sub-steps:

Sub-step 1: calculate the direction feature of the first pixel block from the brightness factors of the multiple second pixel blocks.

Sub-step 2: determine the direction feature as the interpolation feature; the direction feature describes the brightness difference between the first pixel block and the multiple second pixel blocks.

The color information of the first image includes a brightness factor. Illustratively, when the RGB color system is used to describe the color information of the image, the green channel has the greatest influence on image brightness, so the green channel of the RGB color system is taken as the brightness factor.

Optionally, sub-step 1 has at least the following implementation:

determining the brightness differences between the first pixel block and the multiple second pixel blocks in a first direction and a second direction from the differences between the brightness factors of different second pixel blocks;

For example:
dirX=ID-IB
dirY=IE-IA
where I denotes the brightness factor of a pixel block; the difference between the brightness factors of block D and block B is determined as the brightness difference between the first pixel block and the multiple second pixel blocks in the first direction, and the difference between the brightness factors of block E and block A as the brightness difference in the second direction; the first direction and the second direction are perpendicular to each other. For the positional relationship between the second pixel blocks and the first pixel block, refer to the description above in this step.

encapsulating the brightness differences between the first pixel block and the multiple second pixel blocks as two-dimensional floating-point data to determine the brightness characteristic of the first pixel block;

For example:
dir=AH2(dirX,dirY);
dir2=dir*dir;
where dir2 denotes the brightness characteristic of the first pixel block, AH2 denotes packing into two-dimensional half data, and dir is an intermediate variable introduced for convenience.

determining the sum of the first-direction component and the second-direction component of the brightness characteristic in the first image as the direction feature of the first pixel block;

For example:
dirR=dir2.x+dir2.y;
where dirR denotes the direction feature of the first pixel block, dir2.x the first-direction component of the brightness characteristic in the first image, and dir2.y its second-direction component.

Illustratively, in the first image, the first direction and the second direction are perpendicular to each other.

In one embodiment, determining the brightness differences between the first pixel block and the multiple second pixel blocks in the first and second directions from the differences between the brightness factors of different second pixel blocks includes:

determining a first brightness difference of the first pixel block in the first direction from the difference between the brightness factors of the second pixel block in front of the first pixel block and the second pixel block behind it in the first direction;

determining a second brightness difference of the first pixel block in the second direction from the difference between the brightness factors of the second pixel block in front of the first pixel block and the second pixel block behind it in the second direction.

In one embodiment, encapsulating the brightness differences between the first pixel block and the second pixel blocks as two-dimensional floating-point data to determine the brightness characteristic of the first pixel block includes: encapsulating the first brightness difference and the second brightness difference as two-dimensional floating-point data to determine the brightness characteristic of the first pixel block.

To summarize, the method of this embodiment calculates the interpolation feature of the first pixel block from the second pixel blocks, extending the dimensions along which the image content of the first pixel block is described, and performs different interpolations on the first pixel block according to the complexity of its image content. This effectively reduces the computational complexity of upsampling, avoids the waste of computing resources caused by applying a high-cost interpolation to simple image content, and reduces computing resource consumption while guaranteeing the upsampling effect, helping improve the device's flexibility in adjusting image resolution.
Next, the process of dividing an image into pixel blocks is introduced through the following embodiment.

Figure 5 shows a flowchart of an image processing method provided by an exemplary embodiment of this application. The method may be executed by a computer device. In an optional design, on the basis of the embodiment shown in Figure 2, step 512 is further included, and step 550 may be implemented as step 552:

Step 512: divide the first image into at least two pixel blocks according to a division rule.

In this embodiment, the division rule places no restriction on the number of pixels in the at least two pixel blocks, the arrangement of those pixels, or their image information.

Illustratively, the division rule describes the basis on which the at least two pixel blocks are carved out of the first image. In one example, the division rule includes pixel-block positions, which it may express directly or indirectly.

For example: the first image contains 16*16 pixels, and the division rule specifies blocks of 4*4 pixels packed tightly over the first image, where "packed tightly" means there are no gaps between blocks and as many blocks as possible are created; the rule thus indicates the block positions indirectly through the block size and the tight packing.

For example: the first image contains 16*16 pixels and the division rule specifies two pixel blocks, with block 1 occupying pixels one through eight from left to right and pixels one through sixteen from top to bottom; the rule thus indicates the block positions directly.
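The tight-packing division rule from the first example can be sketched as follows (an illustrative sketch; the function name and the representation of block positions as top-left origins are assumptions):

```python
def divide_into_blocks(width, height, block):
    """Tile a width x height image into block x block pixel blocks, packed
    tightly from the top-left: no gaps, no overlap, as many blocks as fit.
    Returns the (x, y) origin of each block, i.e. the positions the
    division rule indicates indirectly via block size and tight packing.
    """
    return [(x, y)
            for y in range(0, height - block + 1, block)
            for x in range(0, width - block + 1, block)]
```

The inverse combination rule then simply writes each interpolated block back at the scaled-up counterpart of its origin.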
Step 552: based on the interpolated pixel blocks, stitch together a second image with the second resolution according to a combination rule.

Illustratively, each interpolated pixel block is determined from a first pixel block, which is a part of the first image determined by the division rule; the second image is stitched from the interpolated pixel blocks according to the combination rule inverse to the division rule, i.e. the combination rule and the division rule are mutually inverse ordering rules.

To summarize, by dividing the first image into pixel blocks, the method of this embodiment lays the foundation for performing different interpolations on the first pixel block according to the complexity of its image content; it effectively reduces the computational complexity of upsampling, avoids the waste of computing resources caused by applying a high-cost interpolation to simple image content, and reduces computing resource consumption while guaranteeing the upsampling effect.
Next, the first interpolation and the second interpolation are introduced through the following embodiment.

Figure 6 shows a flowchart of an image processing method provided by an exemplary embodiment of this application. The method may be executed by a computer device. In an optional design, on the basis of the embodiment shown in Figure 2, step 530 may be implemented as step 532 and step 540 as step 542:

Step 532: when the interpolation feature of the first pixel block does not satisfy the feature judgment condition, perform the first interpolation on the first pixel block according to third pixel blocks to obtain the interpolated pixel block.

Illustratively, the third pixel blocks are neighbors of the first pixel block arranged around it. For example, Figure 7 shows a schematic diagram of the first image: the first image includes 16 pixel blocks and the upsampled second image includes 36; the second image is compressed and mapped onto an image of the same size as the first. The first markers 322, i.e. the 16 circular markers, indicate the center positions of the 16 pixel blocks of the first image; the second markers 324, i.e. the 36 cross markers, indicate the center positions of the 36 pixel blocks of the second image. The target second marker 324a is the center position of one interpolated pixel block; the first interpolation is performed on the first pixel block, whose center is indicated by the target first marker 322a, to obtain that interpolated pixel block. The first interpolation is performed according to the third pixel blocks, which include adjacent blocks around the first pixel block; the centers of the multiple third pixel blocks are indicated by the target first marker 322a and the associated first markers 322b. It can be understood that the multiple third pixel blocks include four blocks of the same size as the first pixel block. In one embodiment, one of the third pixel blocks is the first pixel block itself.

Those skilled in the art will understand that this is only an example; more or fewer blocks adjacent to the first pixel block may serve as third pixel blocks, and in this application their arrangements may be the same or different.

Step 542: when the interpolation feature of the first pixel block satisfies the feature judgment condition, perform the second interpolation on the first pixel block according to fourth pixel blocks to obtain the interpolated pixel block.

Illustratively, the fourth pixel blocks include adjacent blocks arranged around the first pixel block. Figure 8 shows a schematic diagram of the first image: the first image includes 16 pixel blocks and the upsampled second image includes 36; the second image is compressed and mapped onto an image of the same size as the first. The first markers 332, i.e. the 16 circular markers, indicate the centers of the 16 blocks of the first image, and the second markers 334, i.e. the 36 cross markers, indicate the centers of the 36 blocks of the second image. The target second marker 334a is the center of the interpolated pixel block; the second interpolation is performed on the first pixel block, whose center is indicated by the target first marker 332a, according to the fourth pixel blocks, which are neighbors of the first pixel block whose centers are indicated by the target first marker 332a and the associated first markers 332b.

Those skilled in the art will understand that, since the computing resource consumption of the second interpolation (which describes its computational complexity) is greater than that of the first, in one optional implementation the number of fourth pixel blocks is greater than the number of third pixel blocks; performing the second interpolation over the larger set of fourth pixel blocks is computationally more complex than performing the first interpolation over the smaller set of third pixel blocks.

Those skilled in the art will also understand that the third and fourth pixel blocks may exclude the first pixel block itself; for example, the eight blocks adjacent to the first pixel block may serve as the third or fourth pixel blocks.

To summarize, the method of this embodiment computes the interpolation feature of the first pixel block and performs different interpolations according to the complexity of its image content: the first interpolation according to the third pixel blocks and the second interpolation according to the fourth pixel blocks, providing different interpolation modes for the first pixel block. This effectively reduces the computational complexity of upsampling, avoids wasting computing resources on high-cost interpolation of simple content, and reduces computing resource consumption while guaranteeing the upsampling effect, helping improve the device's flexibility in adjusting image resolution and saving its computing resources.
Next, the specific forms of the first interpolation and the second interpolation are introduced.

Figure 9 shows a flowchart of performing the first interpolation provided by an exemplary embodiment of this application, including the following steps:

Step 610: interpolate the first pixel block in the first direction.

In this embodiment, the first interpolation is described taking linear interpolation as an example; those skilled in the art will understand that the first interpolation may be implemented as another interpolation method, including but not limited to at least one of nearest-neighbor interpolation and bilinear interpolation.

Taking the schematic diagram of the first image in Figure 7 above as an example, interpolating the first pixel block in the first direction yields the interpolation results in the first direction; the first direction is the x-axis direction of the first image. Illustratively, the interpolation results in the first direction are:

f(x, y1) = ((x2 − x)/(x2 − x1))·f(Q11) + ((x − x1)/(x2 − x1))·f(Q21)
f(x, y2) = ((x2 − x)/(x2 − x1))·f(Q12) + ((x − x1)/(x2 − x1))·f(Q22)

where f(x, y1) and f(x, y2) denote the interpolation results in the first direction and x denotes the horizontal coordinate of the center of the interpolated pixel block; x1 denotes the horizontal coordinate of the centers of the left blocks among the third pixel blocks and x2 that of the right blocks; f(Q12) denotes the color information of the upper-left block among the third pixel blocks, f(Q11) that of the lower-left block, f(Q22) that of the upper-right block, and f(Q21) that of the lower-right block.

Step 620: based on the interpolation results in the first direction, interpolate the first pixel block in the second direction to obtain the interpolated pixel block.

Taking the schematic diagram of the first image in Figure 7 above as an example, interpolating the first pixel block in the second direction yields the interpolation result in the second direction, which is the interpolated pixel block; the second direction is the y-axis direction of the first image. Illustratively, the interpolation result in the second direction is:

f(x, y) = ((y2 − y)/(y2 − y1))·f(x, y1) + ((y − y1)/(y2 − y1))·f(x, y2)

f(x, y) = ((y2 − y)/(y2 − y1))·[((x2 − x)/(x2 − x1))·f(Q11) + ((x − x1)/(x2 − x1))·f(Q21)] + ((y − y1)/(y2 − y1))·[((x2 − x)/(x2 − x1))·f(Q12) + ((x − x1)/(x2 − x1))·f(Q22)]

f(x, y) = (1/((x2 − x1)(y2 − y1)))·[f(Q11)(x2 − x)(y2 − y) + f(Q21)(x − x1)(y2 − y) + f(Q12)(x2 − x)(y − y1) + f(Q22)(x − x1)(y − y1)]

where f(x, y1) and f(x, y2) denote the interpolation results in the first direction and f(x, y) the interpolation result in the second direction, i.e. the color information of the interpolated pixel block; y denotes the vertical coordinate of the center of the interpolated pixel block, y1 the vertical coordinate of the centers of the lower blocks among the third pixel blocks and y2 that of the upper blocks. The second formula above is obtained by expanding the first-direction interpolation results in the first formula (for the meaning of each parameter in the expansion, see step 610 above); those skilled in the art will understand that the third formula is obtained by simplifying the second.
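The two-pass linear interpolation above can be written directly (an illustrative sketch; scalar color values are assumed for simplicity):

```python
def bilinear(x, y, x1, x2, y1, y2, q11, q21, q12, q22):
    """Bilinear interpolation in two passes, matching the formulas above:
    first interpolate along x at rows y1 and y2, then along y between the
    two results. q11/q21 are the lower-left/lower-right samples and
    q12/q22 the upper-left/upper-right samples.
    """
    fx_y1 = (x2 - x) / (x2 - x1) * q11 + (x - x1) / (x2 - x1) * q21
    fx_y2 = (x2 - x) / (x2 - x1) * q12 + (x - x1) / (x2 - x1) * q22
    return (y2 - y) / (y2 - y1) * fx_y1 + (y - y1) / (y2 - y1) * fx_y2
```

Evaluating at a corner reproduces the corner sample exactly, and at the center it returns the average of the four samples.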
To summarize, by implementing the first interpolation as linear interpolation, the method of this embodiment provides a low-cost interpolation mode for the case where the first pixel block is a simple block, effectively reducing the computational complexity of upsampling, avoiding the waste of computing resources caused by applying a high-cost interpolation to simple image content, and reducing computing resource consumption while guaranteeing the upsampling effect.
Figure 10 shows a flowchart of performing the second interpolation provided by an exemplary embodiment of this application, including the following steps:

Step 630: calculate the feature lengths of the first pixel block.

In this embodiment, the second interpolation is described taking Lanczos interpolation as an example; those skilled in the art will understand that the second interpolation may be implemented as another interpolation method, including but not limited to cubic interpolation.

Illustratively, the feature lengths of the first pixel block are calculated as:
dirX=ID-IB
dirY=IE-IA
dir=AH2(dirX,dirY);
dir2=dir*dir;
dirR=dir2.x+dir2.y;
dc=ID-IC
cb=IC-IB
lenX=1/max(abs(dc),abs(cb));
lenX=saturate(abs(dirX)*lenX);
lenX=lenX*lenX;
ec=IE-IC
ca=IC-IA
lenY=1/max(abs(ec),abs(ca));
lenY=saturate(abs(dirY)*lenY);
lenY=lenY*lenY;
where I denotes the brightness factor of a pixel block (illustratively represented by the green channel of the RGB color system); A denotes the block adjacent above the first pixel block, B the one to its left, D the one to its right, and E the one below it; AH2 denotes packing into two-dimensional half data; dir2.x denotes the component of dir2 in the X (horizontal) direction and dir2.y its component in the Y (vertical) direction; saturate denotes the saturation function, max the maximum, and abs the absolute value. Intermediate variables introduced for convenience, such as dir, also appear above.

It should be noted that the eighth through tenth formulas are executed in the order given, and the equals sign "=" in the ninth and tenth formulas is an assignment operator: the computation on its right updates lenX on its left, where lenX denotes the feature length in the X (horizontal) direction. Similarly, the thirteenth through fifteenth formulas are executed in order and update lenY, the feature length in the Y (vertical) direction.
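The lenX branch of the listing above can be sketched as follows (an illustrative sketch; the reciprocal step `1/max(|dc|, |cb|)` mirrors the reconstructed line in the listing, and the small epsilon guarding division by zero is an added assumption):

```python
def feature_length_x(i_b, i_c, i_d):
    """Horizontal feature length lenX of the centre block C from the luma
    of its left (B) and right (D) neighbours, following the listing above."""
    dir_x = i_d - i_b                                  # dirX = I_D - I_B
    dc = i_d - i_c                                     # dc = I_D - I_C
    cb = i_c - i_b                                     # cb = I_C - I_B
    len_x = 1.0 / max(abs(dc), abs(cb), 1e-6)          # reciprocal of local step
    len_x = min(max(abs(dir_x) * len_x, 0.0), 1.0)     # saturate
    return len_x * len_x
```

A monotone edge (B < C < D) saturates to 1, while a symmetric bump (dirX = 0) yields 0, so lenX measures how edge-like the block's horizontal neighbourhood is.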
Step 640: calculate the weighting parameters of the first pixel block.

Illustratively, the weighting parameters of the first pixel block provide the weights of the fourth pixel blocks adjacent to the first pixel block that are used when constructing the interpolated pixel block.

Illustratively, the weighting parameters are:
len=lenX+lenY;
dirR=1/sqrt(dirR);
dir=dir*AH2(dirR);
len=len*AH1(0.5);
len=len*len;
stretch=(dir.x*dir.x+dir.y*dir.y)/max(abs(dir.x),abs(dir.y));
len2=AH2(AH1(1.0)+(stretch-AH1(1.0))*len,AH1(1.0)+AH1(-0.5)*len);
lob=AH1(0.5)+AH1((1.0/4.0-0.04)-0.5)*len;
clp=1.0/lob;
where sqrt denotes the square root, max the maximum and abs the absolute value; AH1 denotes packing into one-dimensional half data and AH2 into two-dimensional half data; dir.x denotes the component of dir in the X (horizontal) direction and dir.y its component in the Y (vertical) direction. The weighting parameters include len2 and clp, where clp denotes the clipping point and lob the negative-lobe strength. Intermediate variables introduced for convenience, such as stretch, also appear above.

It should be noted that the second through fifth formulas are executed in the order given; the equals sign "=" is an assignment operator, i.e. the computation on its right updates dirR, dir and len on its left.
Step 650: based on the weighting parameters, perform the second interpolation on the first pixel block to obtain the interpolated pixel block.

Illustratively, the second interpolation is performed on the first pixel block over the fourth pixel blocks with the weighting parameters determined in step 640; the fourth pixel blocks in this embodiment are the same as those shown in Figure 8, i.e. 12 blocks.

Illustratively, the weight of each fourth pixel block is given by a Lanczos window L(x), where x corresponds to the weighting parameter len2 of step 640 and w to the weighting parameter clp; L(x) denotes the weight of a fourth pixel block, i.e. the weight coefficients of the 12 blocks.

The color information of the interpolated pixel block is a weighted average: the color information of the fourth pixel blocks is multiplied by the weight coefficients and averaged to give the color information of the interpolated pixel block.

To summarize, by implementing the second interpolation as Lanczos interpolation, the method of this embodiment provides a high-cost interpolation mode for the case where the first pixel block is complex, effectively guaranteeing the upsampling quality of complex blocks while avoiding the waste of computing resources that high-cost interpolation would cause on simple content, thereby effectively reducing the computational complexity.
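For reference, the exact Lanczos-2 window that fast shader-style weights approximate can be written as follows (the standard textbook definition, not taken from the listing above):

```python
import math

def lanczos2(x):
    """Classic Lanczos-2 window L(x) = sinc(x) * sinc(x/2) for |x| < 2,
    and 0 outside the support, where sinc(t) = sin(pi*t)/(pi*t)."""
    if x == 0.0:
        return 1.0
    if abs(x) >= 2.0:
        return 0.0
    px = math.pi * x
    # sinc(x) * sinc(x/2) = 2*sin(pi*x)*sin(pi*x/2) / (pi*x)^2
    return 2.0 * math.sin(px) * math.sin(px / 2.0) / (px * px)
```

The kernel is 1 at the sample itself, crosses zero at integer offsets, and has small negative lobes, which is what the clipping point clp and negative-lobe strength lob above tune.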
Next, the feature judgment condition is introduced in further detail through the following embodiments.

Figure 11 shows a flowchart of an image processing method provided by an exemplary embodiment of this application. The method may be executed by a computer device. In an optional design, on the basis of the embodiment shown in Figure 2, the following step is further included:

Step 524: determine the feature judgment condition of the first pixel block according to the first image.

Illustratively, different feature judgment conditions can be set for different first pixel blocks, and the feature judgment condition is determined from the first image. Since the second interpolation is computationally more complex than the first, its upsampling quality is better; the first image can therefore be divided into key regions and non-key regions and the feature judgment condition determined accordingly. For example, a key region of the first image with high display requirements is given a loose feature judgment condition, increasing the number of first pixel blocks that receive the second interpolation, while a non-key region with low display requirements is given a strict condition, reducing that number.

Optionally, as shown in Figure 12, step 524 may be implemented as step 524a:

Step 524a: determine the feature judgment condition of the first pixel block according to its position information in the first image.

Illustratively, a target area is determined in the first image, and the feature judgment condition of the first pixel block is determined according to whether its position lies within the target area. It should be noted that the target area is predetermined, with no restriction on its shape, size or position; the target area is a partial area of the first image.

In an optional implementation, step 524a may be implemented as:

when the position of the first pixel block is within the target area, determining that the feature judgment condition of the first pixel block includes that the complexity of its image content exceeds a first target threshold;

when the position of the first pixel block is outside the target area, determining that the feature judgment condition of the first pixel block includes that the complexity of its image content exceeds a second target threshold;

where the first target threshold is smaller than the second target threshold and the target area is a partial area of the first image.

In a specific example, a target area with 50% of the area of the first image and the same shape as the first image is determined at its center; the feature judgment condition judges the interpolation feature against a set threshold: a first threshold is set when the first pixel block lies within the target area and a second threshold when it lies outside, with the first threshold smaller than the second. The proportion of pixel blocks receiving the second interpolation is thus raised within the target area, which obtains a better display effect.
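The position-dependent threshold selection above can be sketched as follows (an illustrative sketch; the rectangular region representation and function name are assumptions):

```python
def feature_threshold(x, y, region, t_inside, t_outside):
    """Position-dependent feature judgment condition: a looser (smaller)
    threshold inside the target region raises the share of blocks that
    receive the high-quality second interpolation there.
    `region` is (left, top, right, bottom) in pixels.
    """
    left, top, right, bottom = region
    inside = left <= x < right and top <= y < bottom
    return t_inside if inside else t_outside
```

A block's interpolation feature is then compared against the returned threshold to choose between the first and second interpolation.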
Those skilled in the art will understand that the above way of determining the target area is only exemplary; different target areas can be determined on different bases.

Optionally, as shown in Figure 13, step 524a may be implemented as step 524b:

Step 524b: determine the feature judgment condition of the first pixel block according to the image content of the first image and the position information of the first pixel block in the first image.

Illustratively, an image subject region is determined in the first image according to its image content, and the feature judgment condition of the first pixel block is determined according to whether its position lies within the image subject region. No restriction is placed on the shape, size or position of the image subject region, which is a partial area of the first image.

In an optional implementation, step 524b may be implemented as:

determining the image subject region in the first image according to the image content of the first image;

when the position of the first pixel block is within the image subject region, determining that the feature judgment condition of the first pixel block includes that the complexity of its image content exceeds a third target threshold;

when the position of the first pixel block is outside the image subject region, determining that the feature judgment condition of the first pixel block includes that the complexity of its image content exceeds a fourth target threshold;

where the third target threshold is smaller than the fourth target threshold.

It should be noted that the image subject region may be determined from the image content of the first image either directly or indirectly, as illustrated below.

In one embodiment, the image subject region is determined directly from the image content of the first image: a first image recognition model is invoked to recognize a target object in the first image, and the display region of the target object is determined as the image subject region.

For example, when the target object recognized by the first image recognition model is a virtual character, the display region of the virtual character in the first image is taken as the image subject region, and a loose feature judgment condition is set there to increase the number of first pixel blocks receiving the second interpolation. Specifically, Figure 14 shows a schematic diagram of a first image provided by an exemplary embodiment of this application: the display region 412 of the virtual character serves as the image subject region; the feature judgment condition is loose when the first pixel block lies within it and strict when it lies outside, e.g. within the display region of a virtual crate, virtual vehicle or virtual road.

For example, when the target object recognized by the first image recognition model is a virtual building, the display region of the virtual building in the first image is taken as the image subject region, with a loose feature judgment condition set there to increase the number of first pixel blocks receiving the second interpolation. Specifically, Figure 15 shows a schematic diagram of a first image: the display region 422 of the virtual building serves as the image subject region; the feature judgment condition is loose within it and strict outside, e.g. within the display region of virtual plants, a virtual fence or a virtual mountain.

In one embodiment, the image subject region is determined indirectly from the image content of the first image: a second image recognition model is invoked to determine the image type of the first image, and the corresponding image subject region is determined according to the image type.

For example, when the first image is a game image of a first-person shooting game (FPS), the second image recognition model determines its image type as a first type, and the corresponding first region in the first image is taken as the image subject region. Specifically, Figure 16 shows a schematic diagram of a first image: in images of the first type, the trapezoidal region 432 requires close attention, as a large amount of information and game content is present there in FPS game images; the feature judgment condition is loose when the first pixel block lies within the image subject region.

For example, when the first image is a game image of a multiplayer online battle arena (MOBA) game, the second image recognition model determines its image type as a second type, and the corresponding second region is taken as the image subject region. Specifically, Figure 17 shows a schematic diagram of a first image: in images of the second type, the elliptical region 442 requires close attention, as a large amount of information and game content is present there; the feature judgment condition is loose when the first pixel block lies within the image subject region.

It should be noted that the first and second image recognition models above are different models, with different model structures and/or model parameters.

To summarize, by determining the feature judgment condition of the first pixel block, the method of this embodiment improves the condition's ability to evaluate the first pixel block and provides different interpolation bases for first pixel blocks at different positions; it effectively reduces the computational complexity of upsampling, further avoids the waste of computing resources caused by applying high-cost interpolation to simple content, and reduces computing resource consumption while guaranteeing the upsampling effect.
图18示出了本申请一个示例性实施例提供的游戏渲染方法的流程图。该方法可以由计算机设备执行,计算机设备是可以运行游戏引擎的游戏设备。该方法包括:
步骤710:确定第一分辨率和第二分辨率;
示例性的,第一分辨率是游戏引擎的输出分辨率,第二分辨率是游戏设备的显示分辨率;示例性的,第一分辨率小于第二分辨率;
其中,第一分辨率是游戏引擎的输出分辨率,即游戏引擎根据第一分辨率对游戏画面进行渲染;本领域技术人员可以理解,第一分辨率小,游戏画面渲染的计算复杂程度小;即第一分辨率的大小与游戏画面渲染的计算复杂程度呈现正相关关系。需要说明的是,第二分辨率是游戏设备的显示分辨率;显示分辨率可以等于设备分辨率,也可以小于设备分辨率;以游戏设备为智能手机为例,对于分辨率为1920×1080的智能手机,可以支持多种显示方式,按照不同分辨率进行显示;比如,智能手机还支持以1280×720分辨率、640×360分辨率中的任意一种进行显示。在显示器以640×360分辨率进行显示的情况下,显示分辨率为640×360,即小于设备分辨率。
需要说明的是,确定第一分辨率和第二分辨率可以相互独立的,也可以存在关联的,比如:先确定第二分辨率,再基于第二分辨率确定第一分辨率。
步骤720:获取游戏引擎基于第一分辨率输出的第一图像;
第一图像是游戏引擎渲染得到的游戏画面图像;图19示出了本申请一个示例性实施例提供的显示第一图像的示意图;由于第一分辨率小于第二分辨率;设备按照第一分辨率显示第一图像342时无法铺满显示设备,存在空白区域344。
步骤730:基于第一图像,采用图像处理方法获得具有第二分辨率的第二图像进行显示;
其中,所述图像处理方法是根据上述任一的图像处理方法的实施例得到的。由于第二图像具有第二分辨率,第二分辨率为设备显示分辨率;图20示出了本申请一个示例性实施例提供的显示第二图像的示意图;设备显示具有第二分辨率的第二图像346时可以铺满显示设备,不存在空白区域。
综上所述,本实施例提供的方法,通过在游戏渲染场景下确定第一分辨率和第二分辨率,并第一像素块中的图像内容复杂程度对第一像素块执行不同的插值;有效提高了游戏渲染图像的质量,避免了计算机设备计算能力造成的渲染效果低下。降低了计算资源消耗量,降低了计算复杂程度。
接下来,对第一分辨率和第二分辨率进行介绍:
在确定第一分辨率和第二分辨率相互独立的情况下,确定第一分辨率可以实现为:
基于游戏设备的属性信息,确定第一分辨率;
其中，游戏设备的属性信息包括如下至少之一：游戏设备的计算能力、游戏设备的负载情况、游戏设备的温度、游戏设备的型号特征。示例性的，上述游戏设备通常包括处理器，比如：中央处理器（Central Processing Unit，CPU）、图像处理器（Graphics Processing Unit，GPU）中的至少之一；当然，也可以包括其他具有计算能力的器件。
具体的:
游戏设备的计算能力;用于描述游戏设备单位时间可以承载的计算次数;计算能力越强,在相同时间内可以执行更多次计算;
游戏设备的负载情况;用于描述游戏设备当前的工作状态;示例性的,在游戏设备的负载情况高的情况下,第一分辨率低。
游戏设备的温度;示例性的,在游戏设备的温度高的情况下,对游戏设备进行保护,第一分辨率低,减少游戏设备的计算量;
游戏设备的型号特征;用于描述游戏设备的规格,在游戏设备的型号特征指示游戏设备为高规格设备的情况下,第一分辨率高。
在一种可选的实现方式中,在游戏设备的属性信息满足目标条件的情况下,将第一分辨率确定为A1乘B1;
在游戏设备的属性信息不满足目标条件的情况下,将第一分辨率确定为A2乘B2;
其中,A1大于A2和/或B1大于B2;A1、A2、B1和B2均为正整数。
在本实施例中,第一分辨率的表示方式为横向像素点数量乘纵向像素点数量,如:1920×1080。
目标条件包括如下至少之一:
游戏设备的计算能力大于目标能力阈值;示例性的,目标能力阈值用于描述游戏设备单位时间可以承载的计算次数;比如:目标能力阈值为每分钟十万次运算,在游戏设备的计算能力大于每分钟十万次运算的情况下,游戏设备的属性信息满足目标条件;
游戏设备的负载情况小于目标负载阈值;示例性的,目标负载阈值用于描述游戏设备的工作状态,比如:目标负载阈值为75%,在游戏设备的负载情况小于满负载的75%的情况下,游戏设备的属性信息满足目标条件;
游戏设备的温度小于目标温度阈值;示例性的,目标温度阈值用于描述游戏设备的工作温度,比如:目标温度阈值为85摄氏度,在游戏设备的温度小于85摄氏度的情况下,游戏设备的属性信息满足目标条件;
游戏设备的型号特征超过目标型号特征；示例性的，目标型号特征用于描述游戏设备的规格；比如：目标型号特征为第四次更新的第一型号产品；在游戏设备的型号特征为第六次更新的第一型号产品时，超过了目标型号特征，游戏设备的属性信息满足目标条件。
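The A1×B1 / A2×B2 selection above can be sketched as follows. The concrete resolutions, thresholds, and attribute keys are illustrative assumptions; per the text, meeting at least one target condition is enough:

```python
def pick_first_resolution(device, hi=(1920, 1080), lo=(1280, 720),
                          capability_min=1e5, load_max=0.75,
                          temp_max=85, model_min=4):
    """Return A1xB1 when at least one target condition holds, else A2xB2."""
    meets_target = (
        device.get("ops_per_min", 0) > capability_min or      # computing capability
        device.get("load", 1.0) < load_max or                 # load as fraction of full
        device.get("temp", float("inf")) < temp_max or        # temperature in Celsius
        device.get("model_revision", 0) > model_min           # model/revision feature
    )
    return hi if meets_target else lo
```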
在一个实施例中,在确定第一分辨率和第二分辨率存在关联的情况下,步骤710可以实现为:
根据游戏设备的显示分辨率确定第二分辨率;
示例性的,游戏设备显示具有第二分辨率的第二图像时可以铺满显示设备,不存在空白区域。
将第二分辨率与预设倍数的乘积确定为第一分辨率;
第一分辨率小于第二分辨率,第一分辨率与第二分辨率之间存在倍数关系,预设倍数小于1。需要说明的是,分辨率的表示方式通常为横向像素点数量乘纵向像素点数量,如:1920×1080;但也不排除可以通过像素点总数量和横纵比例表示分辨率,如:2073600,16:9。将第二分辨率与预设倍数相乘通常是将横向像素点数量、纵向像素点数量分别与预设倍数相乘得到第一分辨率。
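Deriving the first resolution from the second resolution and a preset multiple, applied to the horizontal and vertical pixel counts separately, can be sketched as:

```python
def derive_resolutions(display_resolution, multiple=0.5):
    """Second resolution = display resolution; first resolution = second
    resolution scaled by a preset multiple (< 1), applied separately to
    the horizontal and vertical pixel counts."""
    assert 0 < multiple < 1
    w, h = display_resolution
    first = (int(w * multiple), int(h * multiple))
    return first, display_resolution
```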
综上所述，本实施例提供的方法，通过在游戏渲染场景下确定第一分辨率和第二分辨率，并根据第一像素块中的图像内容复杂程度对第一像素块执行不同的插值；有效提高了游戏渲染图像的质量；通过计算机设备的属性信息确定第一分辨率，有效保证计算机设备的计算能力得到充分、合理的使用；为获得高分辨率的第二图像奠定了基础，同时避免了因计算机设备计算能力不足造成的渲染效果低下；降低了计算资源消耗量，降低了计算复杂程度。
图21示出了本申请一个示例性实施例提供的图像处理装置的框图。该装置包括:
获取模块810,用于获取具有第一分辨率的第一图像;
计算模块820,用于根据所述第一图像,计算所述第一图像中第一像素块的插值特征,所述插值特征用于描述所述第一像素块的图像内容;
处理模块830,用于在所述第一像素块的所述插值特征不满足特征判断条件的情况下,对所述第一像素块执行第一插值,获得插值像素块;
所述处理模块830,还用于在所述第一像素块的所述插值特征满足所述特征判断条件的情况下,对所述第一像素块执行第二插值,获得所述插值像素块,其中所述特征判断条件为关于所述第一像素块的图像内容的复杂程度的判断条件;
输出模块840,用于基于所述插值像素块,输出具有第二分辨率的第二图像,所述第二分辨率大于所述第一分辨率;
其中,所述第一插值和所述第二插值用于对所述第一像素块进行升采样,所述第二插值的计算资源消耗大于所述第一插值的计算资源消耗。
在本申请的一个可选设计中,所述计算模块820,还用于:
根据多个第二像素块计算所述第一图像中所述第一像素块的所述插值特征;
其中,所述多个第二像素块包括处于所述第一像素块周围的邻近像素块。
在本申请的一个可选设计中,所述第一图像的颜色信息包括亮度因子;所述计算模块820,还用于:
根据所述多个第二像素块的亮度因子,计算所述第一像素块的方向特征;
将所述方向特征确定为所述插值特征,所述方向特征用于描述所述第一像素块与所述多个第二像素块之间的亮度差异。
在本申请的一个可选设计中,所述计算模块820,还用于:
根据不同所述第二像素块之间的亮度因子的差值,确定所述第一像素块在第一方向和第二方向上与所述第二像素块之间的亮度差异;
将所述第一像素块与所述多个第二像素块之间的所述亮度差异封装为二维浮点数据,以确定所述第一像素块的亮度特征;
将所述亮度特征在所述第一图像中的第一方向分量与第二方向分量之和,确定为所述第一像素块的方向特征;
其中,在所述第一图像中,所述第一方向与所述第二方向相互垂直。
在一个实施例中,为了确定所述第一像素块在第一方向和第二方向上与所述多个第二像素块之间的亮度差异,计算模块820用于:
根据在第一方向上处于所述第一像素块前侧的第二像素块和处于所述第一像素块后侧的第二像素块之间的亮度因子的差值,确定所述第一像素块在第一方向上的第一亮度差异;
根据所述第二像素块中在第二方向上处于所述第一像素块前侧的第二像素块和处于所述第一像素块后侧的第二像素块之间的亮度因子的差值,确定所述第一像素块在第二方向上的第二亮度差异;
所述将所述第一像素块与所述多个第二像素块之间的所述亮度差异封装为二维浮点数据,以确定所述第一像素块的亮度特征,包括:将所述第一亮度差异和所述第二亮度差异封装为二维浮点数据,以确定所述第一像素块的亮度特征。
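The direction feature the modules above compute — luminance differences along two perpendicular directions, packed as two-component float data and then summed — can be sketched as follows. The 3×3 neighbourhood indexing is an assumption for illustration:

```python
def direction_feature(luma, x, y):
    """luma: 2D grid of luminance factors, one entry per pixel block.

    The first/second luminance differences come from the second pixel
    blocks in front of and behind the first pixel block along each
    direction; the pair is the two-component float 'luminance feature',
    and the direction feature is the sum of its two components.
    """
    dx = abs(luma[y][x + 1] - luma[y][x - 1])  # first direction (horizontal)
    dy = abs(luma[y + 1][x] - luma[y - 1][x])  # second direction (vertical)
    luminance_feature = (float(dx), float(dy))
    return luminance_feature[0] + luminance_feature[1]
```

A flat region yields 0, while a vertical edge yields a large value, so the feature grows with local content complexity.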
在本申请的一个可选设计中,所述装置还包括:
划分模块850,用于根据划分规则,将所述第一图像划分为至少两个像素块,所述第一像素块是所述至少两个像素块中的任一像素块;
所述输出模块840,还用于:基于所述插值像素块,根据组合规则拼接为具有所述第二分辨率的所述第二图像,所述组合规则与所述划分规则是相逆的排序规则。
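The division rule and its inverse combination rule can be sketched with a row-major ordering; the square block size and row-major order are assumptions, and any pair of mutually inverse orderings would do:

```python
def split_blocks(img, bs):
    """Division rule: cut a 2D image into bs x bs blocks, row-major."""
    h, w = len(img), len(img[0])
    return [[row[x:x + bs] for row in img[y:y + bs]]
            for y in range(0, h, bs)
            for x in range(0, w, bs)]

def join_blocks(blocks, w, bs):
    """Combination rule: stitch blocks back in the inverse row-major order."""
    cols = w // bs
    rows_out = []
    for r in range(0, len(blocks), cols):
        strip = blocks[r:r + cols]
        for i in range(bs):
            rows_out.append([v for b in strip for v in b[i]])
    return rows_out
```

Because the two orderings are inverses, splitting followed by joining reproduces the original image, which is what lets per-block interpolation results be reassembled into the second image.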
在本申请的一个可选设计中,所述装置还包括:
确定模块860,用于根据所述第一图像,确定所述第一像素块的所述特征判断条件。
在本申请的一个可选设计中,所述确定模块860,还用于:
根据所述第一像素块在所述第一图像中的位置信息,确定所述第一像素块的所述特征判断条件。
在本申请的一个可选设计中,所述确定模块860,还用于:
在所述第一像素块的位置在目标区域内的情况下,确定所述第一像素块的所述特征判断条件包括所述第一像素块的图像内容的复杂程度超过第一目标阈值,所述目标区域是所述第一图像的部分区域;
在所述第一像素块的位置在所述目标区域外的情况下,确定所述第一像素块的所述特征判断条件包括所述第一像素块的图像内容的复杂程度超过第二目标阈值,所述第一目标阈值小于所述第二目标阈值。
在本申请的一个可选设计中,所述确定模块860,还用于:
根据所述第一图像的图像内容和所述第一像素块在所述第一图像中的位置信息,确定所述第一像素块的所述特征判断条件。
在本申请的一个可选设计中,所述确定模块860,还用于:
根据所述第一图像的图像内容确定所述第一图像中的图像主体区域;
在所述第一像素块的位置在所述图像主体区域内的情况下,确定所述第一像素块的所述特征判断条件包括所述第一像素块的图像内容的复杂程度超过第三目标阈值;
在所述第一像素块的位置在所述图像主体区域外的情况下,确定所述第一像素块的所述特征判断条件包括所述第一像素块的图像内容的复杂程度超过第四目标阈值,所述第三目标阈值小于所述第四目标阈值。
在本申请的一个可选设计中,所述确定模块860,还用于:
调用第一图像识别模型在所述第一图像中识别目标对象，将所述目标对象的显示区域确定为所述第一图像中的所述图像主体区域；
或,调用第二图像识别模型在所述第一图像中确定第一图像的图像类型,根据所述图像类型确定对应的所述图像主体区域。
在本申请的一个可选设计中,所述处理模块830,还用于:
在所述第一像素块的所述插值特征不满足所述特征判断条件的情况下,根据第三像素块对所述第一像素块执行所述第一插值,获得所述插值像素块,所述第三像素块包括处于所述第一像素块周围的邻近像素块;
在所述第一像素块的所述插值特征满足所述特征判断条件的情况下,根据第四像素块对所述第一像素块执行所述第二插值,获得所述插值像素块,所述第四像素块包括处于所述第一像素块周围的邻近像素块,所述第四像素块的数量大于所述第三像素块的数量。
在本申请的一个可选设计中,所述第一插值包括线性插值,所述第二插值包括兰索斯插值。
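Why the second interpolation (Lanczos) consumes more computing resources than the first (linear): linear interpolation blends two neighbouring samples per axis with trivial weights, whereas Lanczos weights a window of 2a samples per axis with the kernel below. This is the standard Lanczos formulation, shown for reference, not a claim about the embodiments' exact implementation:

```python
import math

def lanczos_kernel(x, a=3):
    """Lanczos window: a*sinc(x)*sinc(x/a) for |x| < a, else 0.

    Each output pixel weights 2*a input samples per axis with this
    kernel (plus a sin/division per weight), versus 2 samples with
    trivial weights for linear interpolation -- hence the higher
    computational cost.
    """
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)
```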
图22示出了本申请一个示例性实施例提供的游戏渲染装置的框图。所述装置由游戏设备执行,该装置包括:
确定模块870,用于确定第一分辨率和第二分辨率,所述第一分辨率是游戏引擎的输出分辨率,所述第二分辨率是所述游戏设备的显示分辨率;
获取模块880,用于获取所述游戏引擎基于所述第一分辨率输出的第一图像;
处理模块890,用于基于所述第一图像,采用图像处理装置获得具有所述第二分辨率的第二图像进行显示;
其中，所述图像处理装置用于实现上述任一实施例提供的图像处理方法。
在本申请的一个可选设计中,所述确定模块870,还用于:
基于所述游戏设备的属性信息,确定所述第一分辨率;
其中,所述游戏设备的属性信息包括如下至少之一:所述游戏设备的计算能力、所述游戏设备的负载情况、所述游戏设备的温度、所述游戏设备的型号特征。
在本申请的一个可选设计中,所述确定模块870,还用于:在所述游戏设备的属性信息满足目标条件的情况下,将所述第一分辨率确定为A1乘B1;
在所述游戏设备的属性信息不满足所述目标条件的情况下,将所述第一分辨率确定为A2乘B2;
其中,A1大于A2和/或B1大于B2,所述目标条件包括如下至少之一:所述游戏设备的计算能力大于目标能力阈值、所述游戏设备的负载情况小于目标负载阈值、所述游戏设备的温度小于目标温度阈值、所述游戏设备的型号特征超过目标型号特征。
在本申请的一个可选设计中,所述确定模块870,还用于:
根据所述游戏设备的显示分辨率确定所述第二分辨率;
将所述第二分辨率与预设倍数的乘积确定为所述第一分辨率,所述预设倍数小于1。
需要说明的一点是，上述实施例提供的装置在实现其功能时，仅以上述各个功能模块的划分进行举例说明，实际应用中，可以根据实际需要而将上述功能分配由不同的功能模块完成，即将设备的内部结构划分成不同的功能模块，以完成以上描述的全部或者部分功能。
关于上述实施例中的装置，其中各个模块执行操作的具体方式已经在有关该方法的实施例中进行了详细描述；各个模块执行操作取得的技术效果与有关该方法的实施例中的技术效果相同，此处不再详细阐述。
本申请实施例还提供了一种计算机设备,该计算机设备包括:处理器和存储器,存储器中存储有计算机程序;所述处理器,用于执行所述存储器中的所述计算机程序以实现上述各方法实施例提供的图像处理方法,或游戏渲染方法。
可选地,该计算机设备为服务器。示例地,图23是本申请一个示例性实施例提供的服务器的结构框图。
通常,服务器2300包括有:处理器2301和存储器2302。
处理器2301可以包括一个或多个处理核心，比如4核心处理器、8核心处理器等。处理器2301可以采用数字信号处理（Digital Signal Processing，DSP）、现场可编程门阵列（Field-Programmable Gate Array，FPGA）、可编程逻辑阵列（Programmable Logic Array，PLA）中的至少一种硬件形式来实现。处理器2301也可以包括主处理器和协处理器，主处理器是用于对在唤醒状态下的数据进行处理的处理器，也称中央处理器（Central Processing Unit，CPU）；协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中，处理器2301中可以集成有图像处理器（Graphics Processing Unit，GPU），GPU用于负责显示屏所需要显示的内容的渲染和绘制。在一些实施例中，处理器2301还可以包括人工智能（Artificial Intelligence，AI）处理器，该AI处理器用于处理有关机器学习的计算操作。
存储器2302可以包括一个或多个计算机可读存储介质，该计算机可读存储介质可以是非暂态的。存储器2302还可包括高速随机存取存储器，以及非易失性存储器，比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中，存储器2302中的非暂态的计算机可读存储介质用于存储至少一个指令，该至少一个指令用于被处理器2301所执行以实现本申请中方法实施例提供的图像处理方法，或游戏渲染方法。
在一些实施例中,服务器2300还可选包括有:输入接口2303和输出接口2304。处理器2301、存储器2302和输入接口2303、输出接口2304之间可以通过总线或信号线相连。各个外围设备可以通过总线、信号线或电路板与输入接口2303、输出接口2304相连。输入接口2303、输出接口2304可被用于将输入/输出(Input/Output,I/O)相关的至少一个外围设备连接到处理器2301和存储器2302。在一些实施例中,处理器2301、存储器2302和输入接口2303、输出接口2304被集成在同一芯片或电路板上;在一些其他实施例中,处理器2301、存储器2302和输入接口2303、输出接口2304中的任意一个或两个可以在单独的芯片或电路板上实现,本申请实施例对此不加以限定。
本领域技术人员可以理解,上述示出的结构并不构成对服务器2300的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
在示例性实施例中,还提供了一种芯片,所述芯片包括可编程逻辑电路和/或程序指令,当所述芯片在计算机设备上运行时,用于实现上述方面所述的图像处理方法,或游戏渲染方法。
在示例性实施例中，还提供了一种计算机程序产品，该计算机程序产品包括计算机指令，该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取并执行该计算机指令，以实现上述各方法实施例提供的图像处理方法，或游戏渲染方法。
在示例性实施例中,还提供了一种计算机可读存储介质,该计算机可读存储介质中存储有计算机程序,所述计算机程序由处理器加载并执行以实现上述各方法实施例提供的图像处理方法,或游戏渲染方法。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
本领域技术人员应该可以意识到,在上述一个或多个示例中,本申请实施例所描述的功能可以用硬件、软件、固件或它们的任意组合来实现。当使用软件实现时,可以将这些功能存储在计算机可读介质中或者作为计算机可读介质上的一个或多个指令或代码进行传输。计算机可读介质包括计算机存储介质和通信介质,其中通信介质包括便于从一个地方向另一个地方传送计算机程序的任何介质。存储介质可以是通用或专用计算机能够存取的任何可用介质。
以上所述仅为本申请的可选实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (21)

  1. 一种图像处理方法,在计算机设备中执行,所述方法包括:
    获取具有第一分辨率的第一图像;
    根据所述第一图像,计算所述第一图像中第一像素块的插值特征,所述插值特征用于描述所述第一像素块的图像内容;
    在所述第一像素块的所述插值特征不满足特征判断条件的情况下,对所述第一像素块执行第一插值,获得插值像素块,其中所述特征判断条件为关于所述第一像素块的图像内容的复杂程度的判断条件;
    在所述第一像素块的所述插值特征满足所述特征判断条件的情况下,对所述第一像素块执行第二插值,获得所述插值像素块;
    基于所述插值像素块,输出具有第二分辨率的第二图像,所述第二分辨率大于所述第一分辨率;
    其中,所述第一插值和所述第二插值用于对所述第一像素块进行升采样,所述第二插值的计算资源消耗大于所述第一插值的计算资源消耗。
  2. 根据权利要求1所述的方法,其中,所述根据所述第一图像,计算所述第一图像中第一像素块的插值特征,包括:
    根据多个第二像素块计算所述第一图像中所述第一像素块的所述插值特征;
    其中,所述多个第二像素块包括处于所述第一像素块周围的邻近像素块。
  3. 根据权利要求2所述的方法,其中,所述第一图像的颜色信息包括亮度因子;所述根据多个第二像素块计算所述第一图像中所述第一像素块的所述插值特征,包括:
    根据所述多个第二像素块的亮度因子,计算所述第一像素块的方向特征;
    将所述方向特征确定为所述插值特征,所述方向特征用于描述所述第一像素块与所述多个第二像素块之间的亮度差异。
  4. 根据权利要求3所述的方法,其中,所述根据所述多个第二像素块的亮度因子,计算所述第一像素块的方向特征,包括:
    根据不同所述第二像素块之间的亮度因子的差值,确定所述第一像素块在第一方向和第二方向上与所述多个第二像素块之间的亮度差异;将所述第一像素块与所述多个第二像素块之间的所述亮度差异封装为二维浮点数据,以确定所述第一像素块的亮度特征;
    将所述亮度特征在所述第一图像中的第一方向分量与第二方向分量之和,确定为所述第一像素块的方向特征;
    其中,在所述第一图像中,所述第一方向与所述第二方向相互垂直。
  5. 根据权利要求4所述的方法,其中,
    所述根据不同所述第二像素块之间的亮度因子的差值,确定所述第一像素块在第一方向和第二方向上与所述多个第二像素块之间的亮度差异,包括:
    根据在第一方向上处于所述第一像素块前侧的第二像素块和处于所述第一像素块后侧的第二像素块之间的亮度因子的差值,确定所述第一像素块在第一方向上的第一亮度差异;
    根据所述多个第二像素块中在第二方向上处于所述第一像素块前侧的第二像素块和处于所述第一像素块后侧的第二像素块之间的亮度因子的差值,确定所述第一像素块在第二方向上的第二亮度差异;
    所述将所述第一像素块与所述多个第二像素块之间的所述亮度差异封装为二维浮点数据,以确定所述第一像素块的亮度特征,包括:将所述第一亮度差异和所述第二亮度差异封装为二维浮点数据,以确定所述第一像素块的亮度特征。
  6. 根据权利要求1至5中任一所述的方法,其中,所述方法还包括:
    根据划分规则,将所述第一图像划分为至少两个像素块,所述第一像素块是所述至少两个像素块中的任一像素块;
    所述基于所述插值像素块,输出具有第二分辨率的第二图像,包括:
    基于所述插值像素块,根据组合规则拼接为具有所述第二分辨率的所述第二图像,所述组合规则与所述划分规则是相逆的排序规则。
  7. 根据权利要求1至5中任一所述的方法,其中,所述方法还包括:
    根据所述第一图像,确定所述第一像素块的所述特征判断条件。
  8. 根据权利要求7所述的方法,其中,所述根据所述第一图像,确定所述第一像素块的所述特征判断条件,包括:
    根据所述第一像素块在所述第一图像中的位置信息,确定所述第一像素块的所述特征判断条件。
  9. 根据权利要求8所述的方法,其中,所述根据所述第一像素块在所述第一图像中的位置信息,确定所述第一像素块的所述特征判断条件,包括:
    在所述第一像素块的位置在目标区域内的情况下,确定所述第一像素块的所述特征判断条件包括所述第一像素块的图像内容的复杂程度超过第一目标阈值,所述目标区域是所述第一图像的部分区域;
    在所述第一像素块的位置在所述目标区域外的情况下,确定所述第一像素块的所述特征判断条件包括所述第一像素块的图像内容的复杂程度超过第二目标阈值,所述第一目标阈值小于所述第二目标阈值。
  10. 根据权利要求8所述的方法,其中,所述根据所述第一像素块在所述第一图像中的位置信息,确定所述第一像素块的所述特征判断条件,包括:
    根据所述第一图像的图像内容和所述第一像素块在所述第一图像中的位置信息,确定所述第一像素块的所述特征判断条件。
  11. 根据权利要求10所述的方法,其中,所述根据所述第一图像的图像内容和所述第一像素块在所述第一图像中的位置信息,确定所述第一像素块的所述特征判断条件,包括:
    根据所述第一图像的图像内容确定所述第一图像中的图像主体区域;
    在所述第一像素块的位置在所述图像主体区域内的情况下,确定所述第一像素块的所述特征判断条件包括所述第一像素块的图像内容的复杂程度超过第三目标阈值;
    在所述第一像素块的位置在所述图像主体区域外的情况下,确定所述第一像素块的所述特征判断条件包括所述第一像素块的图像内容的复杂程度超过第四目标阈值,所述第三目标阈值小于所述第四目标阈值。
  12. 根据权利要求11所述的方法,其中,所述根据所述第一图像的图像内容确定所述第一图像中的图像主体区域,包括:
    调用第一图像识别模型在所述第一图像中识别目标对象，将所述目标对象的显示区域确定为所述第一图像中的所述图像主体区域；
    或,
    调用第二图像识别模型在所述第一图像中确定第一图像的图像类型,根据所述图像类型确定对应的所述图像主体区域。
  13. 根据权利要求1至5任一所述的方法,其中,
    所述在所述第一像素块的所述插值特征不满足特征判断条件的情况下,对所述第一像素块执行第一插值,获得插值像素块,包括:
    在所述第一像素块的所述插值特征不满足所述特征判断条件的情况下,根据第三像素块对所述第一像素块执行所述第一插值,获得所述插值像素块,所述第三像素块包括处于所述第一像素块周围的邻近像素块;
    所述在所述第一像素块的所述插值特征满足所述特征判断条件的情况下,对所述第一像素块执行第二插值,获得所述插值像素块,包括:
    在所述第一像素块的所述插值特征满足所述特征判断条件的情况下,根据第四像素块对所述第一像素块执行所述第二插值,获得所述插值像素块,所述第四像素块包括处于所述第一像素块周围的邻近像素块,所述第四像素块的数量大于所述第三像素块的数量。
  14. 一种游戏渲染方法,所述方法由游戏设备执行,所述方法包括:
    确定第一分辨率和第二分辨率,所述第一分辨率是游戏引擎的输出分辨率,所述第二分辨率是所述游戏设备的显示分辨率;
    获取所述游戏引擎基于所述第一分辨率输出的第一图像;
    基于所述第一图像,采用图像处理方法获得具有所述第二分辨率的第二图像进行显示;
    其中,所述图像处理方法是上述如权利要求1至13任一所述的图像处理方法。
  15. 根据权利要求14所述的方法,其中,所述确定第一分辨率,包括:
    基于所述游戏设备的属性信息,确定所述第一分辨率;
    其中,所述游戏设备的属性信息包括如下至少之一:所述游戏设备的计算能力、所述游戏设备的负载情况、所述游戏设备的温度、所述游戏设备的型号特征。
  16. 根据权利要求15所述的方法,其中,所述基于所述游戏设备的属性信息,确定所述第一分辨率,包括:
    在所述游戏设备的属性信息满足目标条件的情况下,将所述第一分辨率确定为A1乘B1;
    在所述游戏设备的属性信息不满足所述目标条件的情况下,将所述第一分辨率确定为A2乘B2;
    其中,A1大于A2和/或B1大于B2,所述目标条件包括如下至少之一:所述游戏设备的计算能力大于目标能力阈值、所述游戏设备的负载情况小于目标负载阈值、所述游戏设备的温度小于目标温度阈值、所述游戏设备的型号特征超过目标型号特征。
  17. 一种图像处理装置,所述装置包括:
    获取模块,用于获取具有第一分辨率的第一图像;
    计算模块,用于根据所述第一图像,计算所述第一图像中第一像素块的插值特征,所述插值特征用于描述所述第一像素块的图像内容;
    处理模块,用于在所述第一像素块的所述插值特征不满足特征判断条件的情况下,对所述第一像素块执行第一插值,获得插值像素块,其中所述特征判断条件为关于所述第一像素块的图像内容的复杂程度的判断条件;
    所述处理模块,还用于在所述第一像素块的所述插值特征满足所述特征判断条件的情况下,对所述第一像素块执行第二插值,获得所述插值像素块;
    输出模块,用于基于所述插值像素块,输出具有第二分辨率的第二图像,所述第二分辨率大于所述第一分辨率;
    其中,所述第一插值和所述第二插值用于对所述第一像素块进行升采样,所述第二插值的计算资源消耗大于所述第一插值的计算资源消耗。
  18. 一种游戏渲染装置,所述装置由游戏设备执行,所述装置包括:
    确定模块,用于确定第一分辨率和第二分辨率,所述第一分辨率是游戏引擎的输出分辨率,所述第二分辨率是所述游戏设备的显示分辨率;
    获取模块,用于获取所述游戏引擎基于所述第一分辨率输出的第一图像;
    处理模块,用于基于所述第一图像,采用图像处理装置获得具有所述第二分辨率的第二图像进行显示;
    其中，所述图像处理装置是上述如权利要求17所述的图像处理装置。
  19. 一种计算机设备，所述计算机设备包括：处理器和存储器，所述存储器中存储有至少一段程序；所述处理器，用于执行所述存储器中的所述至少一段程序以实现上述如权利要求1至13任一所述的图像处理方法，或如权利要求14至16任一所述的游戏渲染方法。
  20. 一种计算机可读存储介质,所述可读存储介质中存储有可执行指令,所述可执行指令由处理器加载并执行以实现上述如权利要求1至13任一所述的图像处理方法,或如权利要求14至16任一所述的游戏渲染方法。
  21. 一种计算机程序产品,所述计算机程序产品包括计算机指令,所述计算机指令存储在计算机可读存储介质中,处理器从所述计算机可读存储介质读取并执行所述计算机指令,以实现上述如权利要求1至13任一所述的图像处理方法,或如权利要求14至16任一所述的游戏渲染方法。
PCT/CN2023/074883 2022-03-10 2023-02-08 图像处理方法、游戏渲染方法、装置、设备、程序产品及存储介质 WO2023169121A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/379,332 US20240037701A1 (en) 2022-03-10 2023-10-12 Image processing and rendering

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210230954.6 2022-03-10
CN202210230954.6A CN116777739A (zh) 2022-03-10 2022-03-10 图像处理方法、游戏渲染方法、装置、设备及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/379,332 Continuation US20240037701A1 (en) 2022-03-10 2023-10-12 Image processing and rendering

Publications (1)

Publication Number Publication Date
WO2023169121A1 true WO2023169121A1 (zh) 2023-09-14

Family

ID=87937107

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/074883 WO2023169121A1 (zh) 2022-03-10 2023-02-08 图像处理方法、游戏渲染方法、装置、设备、程序产品及存储介质

Country Status (3)

Country Link
US (1) US20240037701A1 (zh)
CN (1) CN116777739A (zh)
WO (1) WO2023169121A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117745531A (zh) * 2024-02-19 2024-03-22 瑞旦微电子技术(上海)有限公司 图像插值方法、设备及可读存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106412592A (zh) * 2016-11-29 2017-02-15 广东欧珀移动通信有限公司 图像处理方法、图像处理装置、成像装置及电子装置
US20170046811A1 (en) * 2015-01-14 2017-02-16 Lucidlogix Technologies Ltd. Method and apparatus for controlling spatial resolution in a computer system
CN112508783A (zh) * 2020-11-19 2021-03-16 西安全志科技有限公司 基于方向插值的图像处理方法、计算机装置及计算机可读存储介质
CN113015021A (zh) * 2021-03-12 2021-06-22 腾讯科技(深圳)有限公司 云游戏的实现方法、装置、介质及电子设备



Also Published As

Publication number Publication date
US20240037701A1 (en) 2024-02-01
CN116777739A (zh) 2023-09-19


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23765699

Country of ref document: EP

Kind code of ref document: A1