CN116934934A - Anti-aliasing method and device for picture, computer readable medium and electronic equipment - Google Patents

Anti-aliasing method and device for picture, computer readable medium and electronic equipment

Info

Publication number
CN116934934A
Authority
CN
China
Prior art keywords
current frame
pixel value
previous frame
coordinates
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310837087.7A
Other languages
Chinese (zh)
Inventor
陈徐悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yoerha Technology Co., Ltd.
Original Assignee
Shanghai Yoerha Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Yoerha Technology Co., Ltd.
Priority to CN202310837087.7A
Publication of CN116934934A
Legal status: Pending

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60 Methods for processing data by generating or executing the game program
    • A63F 2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/04 Indexing scheme for image data processing or generation, in general involving 3D image data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/12 Indexing scheme for image data processing or generation, in general involving antialiasing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The embodiment of the application provides an anti-aliasing method and apparatus for a picture, a computer-readable medium, and an electronic device. The method includes: transforming the clipping coordinates of a fragment in the current frame into the fragment's relative spatial coordinates in the camera-relative space of the current frame; transforming the fragment's relative spatial coordinates in the camera-relative space of the current frame into its relative spatial coordinates in the camera-relative space of the previous frame; transforming the fragment's relative spatial coordinates in the camera-relative space of the previous frame into its clipping coordinates in the previous frame; obtaining the fragment's output sampled pixel value in the previous frame according to its clipping coordinates in the previous frame; mixing the fragment's output sampled pixel value in the previous frame with its sampled pixel value in the current frame to obtain a mixed pixel value; and outputting the color of the pixel corresponding to the fragment in the current frame picture according to the mixed pixel value. The embodiment of the application can alleviate the problem of severe picture jitter.

Description

Anti-aliasing method and device for picture, computer readable medium and electronic equipment
Technical Field
The present application relates to the field of computer graphics, and in particular, to an antialiasing method and apparatus for a picture, a computer readable medium, and an electronic device.
Background
Currently, game developers commonly develop games based on game engines.
However, games developed on existing game engines still use conventional anti-aliasing schemes, which in some game scenes causes severe jitter of the game picture and a poor picture effect.
Disclosure of Invention
The embodiment of the application provides an anti-aliasing method and apparatus for a picture, a computer-readable medium, and an electronic device, which can overcome the problem of severe picture jitter at least to a certain extent and improve the picture effect.
Other features and advantages of the application will be apparent from the following detailed description, or may be learned by the practice of the application.
According to an aspect of an embodiment of the present application, there is provided an anti-aliasing method for a picture, the method including: transforming the clipping coordinates of a fragment in the current frame into the fragment's relative spatial coordinates in the camera-relative space of the current frame, according to the inverse of the rotation part of the view matrix of the camera for the current frame and the inverse of the projection matrix of the camera for the current frame; determining a displacement transformation matrix from the camera position of the current frame to the camera position of the previous frame, and transforming the fragment's relative spatial coordinates in the camera-relative space of the current frame into its relative spatial coordinates in the camera-relative space of the previous frame according to the displacement transformation matrix; transforming the fragment's relative spatial coordinates in the camera-relative space of the previous frame into its clipping coordinates in the previous frame, according to the projection matrix of the camera for the previous frame and the rotation part of the view matrix of the camera for the previous frame; obtaining the fragment's output sampled pixel value in the previous frame according to its clipping coordinates in the previous frame; mixing the fragment's output sampled pixel value in the previous frame with its sampled pixel value in the current frame to obtain a mixed pixel value; and outputting the color of the pixel corresponding to the fragment in the current frame picture according to the mixed pixel value.
According to an aspect of an embodiment of the present application, there is provided an anti-aliasing apparatus for a picture, the apparatus including: a first transformation unit, configured to transform the clipping coordinates of a fragment in the current frame into the fragment's relative spatial coordinates in the camera-relative space of the current frame according to the inverse of the rotation part of the view matrix of the camera for the current frame and the inverse of the projection matrix of the camera for the current frame; a second transformation unit, configured to determine a displacement transformation matrix from the camera position of the current frame to the camera position of the previous frame, and to transform the fragment's relative spatial coordinates in the camera-relative space of the current frame into its relative spatial coordinates in the camera-relative space of the previous frame according to the displacement transformation matrix; a third transformation unit, configured to transform the fragment's relative spatial coordinates in the camera-relative space of the previous frame into its clipping coordinates in the previous frame according to the projection matrix of the camera for the previous frame and the rotation part of the view matrix of the camera for the previous frame; an acquisition unit, configured to obtain the fragment's output sampled pixel value in the previous frame according to its clipping coordinates in the previous frame; a mixing unit, configured to mix the fragment's output sampled pixel value in the previous frame with its sampled pixel value in the current frame to obtain a mixed pixel value; and an output unit, configured to output the color of the pixel corresponding to the fragment in the current frame picture according to the mixed pixel value.
In some embodiments of the application, based on the foregoing scheme, the acquisition unit is configured to: select a pixel region of a predetermined size from the current frame, centered on the current pixel corresponding to the fragment; sample a plurality of pixel values within the pixel region; construct a color space from a statistical model based on the plurality of pixel values; obtain the fragment's original sampled pixel value in the previous frame according to its clipping coordinates in the previous frame; perform a mixing operation on the original sampled pixel value to obtain the fragment's target sampled pixel value in the previous frame; if the target sampled pixel value falls outside the color space, correct it to an output sampled pixel value belonging to the color space; and if the target sampled pixel value does not fall outside the color space, take it as the output sampled pixel value.
In some embodiments of the application, based on the foregoing scheme, the acquisition unit is configured to: construct a line segment connecting the target sampled pixel value and the fragment's sampled pixel value in the current frame; and determine the intersection point of the line segment with the boundary of the color space, taking the intersection point as the output sampled pixel value belonging to the color space.
In some embodiments of the application, based on the foregoing scheme, the output unit is configured to: sharpen the mixed pixel value based on the pixel values already obtained by sampling, and output the color of the pixel corresponding to the fragment in the current frame picture according to the sharpened pixel value.
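A sharpening pass of this kind can reuse the local statistics already computed from the neighborhood samples fetched for the statistical model. A minimal unsharp-mask style sketch in Python; the kernel-free formulation and the strength constant are illustrative assumptions, not the patent's exact operator:

```python
import numpy as np

def sharpen(mixed, neighborhood_mean, strength=0.25):
    """Push the mixed pixel value away from the local mean of the samples
    already fetched for the statistical model, so the sharpening costs no
    extra texture reads. `strength` is an assumed tuning constant."""
    return np.clip(mixed + strength * (mixed - neighborhood_mean), 0.0, 1.0)
```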
In some embodiments of the present application, based on the foregoing solution, the apparatus further includes a projection matrix perturbation unit, configured to perturb the original projection matrix of the camera for each frame according to a two-dimensional Halton sequence, obtaining the projection matrix of the camera for each frame, before the fragment's clipping coordinates in the current frame are transformed into its relative spatial coordinates in the camera-relative space of the current frame.
In some embodiments of the application, based on the foregoing, the mixing unit is configured to: mix the fragment's output sampled pixel value in the previous frame with its sampled pixel value in the current frame based on a mixing weight, obtaining a mixed pixel value.
In some embodiments of the present application, based on the foregoing scheme, before the mixing operation based on the mixing weight is performed, the mixing unit is further configured to: acquire a configured original weight; determine the offset distance of the fragment's coordinates in the current frame relative to its coordinates in the previous frame; and determine the mixing weight from the original weight and the offset distance, such that if a first mixing weight is greater than a second mixing weight, the offset distance from which the first mixing weight is determined is greater than the offset distance from which the second mixing weight is determined.
In some embodiments of the present application, based on the foregoing solution, the apparatus further includes a fragment generation unit, configured to, before the fragment's clipping coordinates in the current frame are transformed into its relative spatial coordinates in the camera-relative space of the current frame: transform the vertex coordinates of the model from model space to world space according to the model transformation matrix for the current frame; transform the vertex coordinates of the model from world space to view space according to the view matrix of the camera for the current frame; transform the vertex coordinates of the model from view space to clipping space according to the projection matrix for the current frame; map the vertex coordinates of the model in clipping space to screen space, obtaining the vertex coordinates of the model's primitives in screen space; and determine a triangle mesh from the vertex coordinates of the model's primitives in screen space, generating a fragment for each target pixel among the screen pixels according to the coverage of the screen pixels by the triangle mesh.
In some embodiments of the application, based on the foregoing scheme, the picture is a game scene picture.
According to an aspect of an embodiment of the present application, there is provided a computer readable medium having stored thereon a computer program which, when executed by a processor, implements an antialiasing method of a picture as described in the above embodiments.
According to an aspect of an embodiment of the present application, there is provided an electronic apparatus including: one or more processors; and a storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the antialiasing method of a picture as described in the embodiments above.
According to an aspect of an embodiment of the present application, there is provided a computer program product including computer instructions stored in a computer-readable storage medium, from which computer instructions a processor of a computer device reads, the processor executing the computer instructions, causing the computer device to perform an antialiasing method as described in the above embodiments.
In some embodiments of the present application, the clipping coordinates of a fragment in the current frame are transformed into the fragment's relative spatial coordinates in the camera-relative space of the current frame using the inverse of the rotation part of the view matrix of the camera for the current frame and the inverse of the projection matrix of the camera for the current frame; a displacement transformation matrix from the camera position of the current frame to the camera position of the previous frame is determined, and the fragment's relative spatial coordinates in the camera-relative space of the current frame are transformed into its relative spatial coordinates in the camera-relative space of the previous frame according to that matrix; those coordinates are then transformed into the fragment's clipping coordinates in the previous frame; the fragment's output sampled pixel value in the previous frame is obtained from those clipping coordinates; and that output sampled pixel value is mixed with the fragment's sampled pixel value in the current frame to obtain the mixed pixel value from which the color of the corresponding pixel is output. Because the inverse of the rotation part of the view matrix of the camera for the current frame, the inverse of the projection matrix of the camera for the current frame, and the fragment's clipping coordinates in the current frame all contain only small-magnitude values, the fragment's relative spatial coordinates in the camera-relative space of the current frame, the displacement transformation matrix between the two camera positions, the fragment's relative spatial coordinates in the camera-relative space of the previous frame, the projection matrix of the camera for the previous frame, and the rotation part of the view matrix of the camera for the previous frame likewise contain only small-magnitude values. The precision loss otherwise caused by matrix transformations during temporal anti-aliasing re-projection in a large world-coordinate scene is thereby avoided, the problem of game picture jitter is effectively alleviated, and the visual effect and user experience of the picture can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. It is evident that the drawings in the following description are only some embodiments of the present application and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art. In the drawings:
FIG. 1 shows a schematic diagram of an exemplary system architecture that may be used to implement aspects of embodiments of the present application;
FIG. 2 shows a flow chart of an antialiasing method of a picture according to an embodiment of the application;
FIG. 3 shows a flowchart of the steps preceding the transformation of a fragment's clipping coordinates in the current frame into its relative spatial coordinates in the camera-relative space of the current frame, in accordance with one embodiment of the present application;
FIG. 4 shows a flow chart of steps preceding step 220 in the embodiment of FIG. 2, according to one embodiment of the application;
FIG. 5 illustrates a flowchart for obtaining a fragment's output sampled pixel value in the previous frame based on its clipping coordinates in the previous frame, in accordance with one embodiment of the application;
FIG. 6 shows a flowchart of details of step 560 in the embodiment of FIG. 5, according to one embodiment of the application;
FIG. 7 shows a flowchart of details of step 260 in the embodiment of FIG. 2, according to one embodiment of the application;
FIG. 8 shows a flowchart of the steps preceding the mixing, based on a mixing weight, of a fragment's output sampled pixel value in the previous frame with its sampled pixel value in the current frame, in accordance with an embodiment of the present application;
FIG. 9 shows a block diagram of an antialiasing apparatus of a screen according to an embodiment of the application;
fig. 10 shows a schematic diagram of a computer system suitable for use in implementing an embodiment of the application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the application may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the application.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
In the related art, temporal anti-aliasing (TAA) is a common anti-aliasing approach. TAA applies sub-pixel-level jitter to the camera's projection matrix before rendering opaque objects and blends the result with historical frame data, so that sub-pixel positions are effectively sampled across frames and anti-aliasing is achieved. In a world scene, however, the coordinate value range is huge, typically reaching thousands to tens of thousands, and the limited precision of floating-point coordinates can cause severe jitter and blurring of the picture when TAA is used.
In particular, the inventors of the present application found that the temporal antialiasing scheme in the related art mainly has the following drawbacks:
1. high light flicker problem: because the sub-pixels sampled for each frame are different, flicker can occur if the highlight region is relatively slender.
2. Problem of floating point number precision loss in world scenes: the world scene in SLG (Game of strategy) is common, the coordinate range of the world scene often reaches thousands or tens of thousands, and the effective precision of the floating point number of the Unity Game engine can be approximately 7 bits, and in the case of large coordinates, the effective bit number of the coordinates in the decimal part is reduced. The temporal antialiasing involves a reprojection step in which a series of matrix transformations are performed to obtain the coordinates of the current pixel of the previous frame, in which the required fractional part of the coordinates is usually at least 0.0001m to ensure that the coordinates deviate in a controllable range after transformation. In the SLG world scene, the accuracy of the fractional part of coordinates is usually about 0.01, and therefore, the accuracy error caused by the re-projection may cause the frame obtained by using the TAA to be severely dithered.
Taking a world scene with a coordinate range of [0, 15000] as an example: when the camera's coordinates on the x-axis and z-axis are (13000, 13000), the effective precision is approximately 7 significant digits, of which 5 are consumed by the integer part, leaving only 2 for the fractional part, i.e., a fractional precision of 0.01 m. In a typical TAA re-projection process, the fragment's world coordinates are first restored by formula (1):
posWS = V⁻¹ · P⁻¹ · posCS (1)
where posCS is the fragment's clipping coordinate, V⁻¹ is the inverse of the view matrix (View) of the camera for the current frame, P⁻¹ is the inverse of the projection matrix (Projection) of the camera for the current frame, and posWS is the fragment's world coordinate. In a large-coordinate scene, for example when the camera's x-axis and z-axis coordinates are (13000, 13000), P⁻¹ and posCS contain only small-magnitude values while V⁻¹ contains large-magnitude values, and multiplying a small-magnitude matrix by a large-magnitude matrix enlarges the coordinate error.
After the fragment's world coordinate posWS is obtained, the fragment's clipping coordinates in the previous frame are restored according to formula (2):
posCS′ = P′ · V′ · posWS (2)
where P′ is the projection matrix of the camera for the previous frame and V′ is the view matrix of the camera for the previous frame. Since V′ · posWS contains large-magnitude values and P′ contains small-magnitude values, multiplying the two further enlarges the coordinate error.
For example, if the fragment's clipping coordinates on the x-axis and z-axis in the current frame are (0.5, 0.5) and its clipping coordinates on those axes in the previous frame should theoretically be (0.6, 0.6), the precision error introduced by the matrix transformations may instead yield (0.7, 0.8), an error of (0.1, 0.2), which results in a re-projection error.
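The scale of this effect is easy to verify numerically. A minimal Python sketch, assuming IEEE-754 single precision (the float type used on GPUs and by default in shaders):

```python
import numpy as np

# Around a coordinate of 13000, adjacent float32 values are ~0.001 apart:
x = np.float32(13000.0)
print(np.spacing(x))          # 0.0009765625 -> fractional resolution ~1 mm

# A 0.0001 m offset, the fractional precision that re-projection needs,
# is rounded away entirely at this magnitude:
print(np.float32(13000.0) + np.float32(0.0001) == np.float32(13000.0))  # True
```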
3. Problem of blurring of picture after antialiasing:
the improper re-projection of TAA and the anti-aliasing nature of TAA, due to loss of world precision, can cause the picture to become blurred.
To this end, the present application provides an anti-aliasing method for a picture. The method provided by the embodiments of the application can overcome the above drawbacks and provides a temporal anti-aliasing scheme suited to world scenes. On the one hand, the re-projection result is computed from the camera's relative position: by avoiding restoring coordinates to world space, the precision loss caused by matrix transformations is reduced, a more accurate result is obtained, and picture blurring is reduced. On the other hand, temporal anti-aliasing tends to blur the picture to a certain extent; by reusing the pixel values already sampled when building the statistical model, a sharpening effect can be achieved at low cost, further reducing the blur caused by TAA and improving the visual effect of the picture.
The embodiments of the application can be applied to the rendering of pictures, for example pictures in games such as SLG games.
FIG. 1 shows a schematic diagram of an exemplary system architecture that may be used to implement the technical solution of an embodiment of the present application. As shown in fig. 1, the system architecture 100 may include a user terminal 110 and a cloud 120 communicatively connected to the user terminal 110, where the user terminal 110 is deployed with an SLG game client capable of communicating with the cloud 120, the cloud 120 includes a background server for a game, and the background server includes a login server 121, a file server 122, a database server 123, a logic server 124, and a chat server 125, where the login server 121 is used to process a login operation of the SLG game client, the file server 122 is used to provide a new version of a game client file to the SLG game client on the user terminal 110 when the SLG game client is updated, the database server 123 is used to store game data and player data, the logic server 124 is used to process game logic of a player, the chat server 125 is used to process chat content between players, and the user terminal 110 is an execution subject of an embodiment of the present application, and when an antialiasing method of a picture provided by the present application is applied in the system architecture shown in fig. 1, a procedure may be as follows: firstly, a user establishes connection with a background server in a cloud 120 through an SLG game client on a user terminal 110 to log in an SLG game; then, the user interacts with the background server through the SLG game client and through the connection established with the background server in the cloud 120, loads game resources, and enters a world scene in the game, wherein the world scene contains an object model to be rendered; finally, according to the movement instruction or the viewing angle switching instruction of the user, the user terminal 110 continuously outputs the game picture through the SLG game client, and when each frame of game picture is output, the time antialiasing is performed based on the antialiasing method of the picture provided by the embodiment of the present application, so that the game picture seen by the user through the user terminal 110 is smoother and clearer.
In one embodiment of the present application, the antialiasing method of pictures provided by embodiments of the present application is used when entering a large coordinate scene (the camera has coordinates in both the x-axis and z-axis that are greater than 7000) in a world scene, while the traditional temporal antialiasing method is used when entering other scenes in the world scene.
In one embodiment of the present application, whether the anti-aliasing method provided by the embodiments of the application is used on entering a large-coordinate scene in the world scene, with the conventional temporal anti-aliasing method used in other scenes of the world scene, is determined according to the user's setting instruction in the SLG game client.
In one embodiment of the application, the game engine employed by the SLG game is a Unity game engine.
It should be understood that the numbers of user terminals, servers in the cloud, and servers of each type in the cloud in fig. 1 are merely illustrative. Depending on implementation requirements, there may be any number of user terminals and any number of servers of each type in the cloud; that is, there may be multiple user terminals, and servers of the same type may form a server cluster composed of multiple servers.
It should be noted that fig. 1 shows only one embodiment of the present application. Although in the embodiment of fig. 1 the user terminal is a desktop computer, in other embodiments the user terminal may be any of various terminal devices such as a notebook computer, a workstation, a tablet computer, a vehicle-mounted terminal, a smart phone, or a portable wearable device. Although in the embodiment of fig. 1 the background server includes only five servers, i.e., a login server, a file server, a database server, a logic server, and a chat server, in other embodiments the background server may also include other types of servers, e.g., a security server and a payment server, where the security server checks the security of the game environment and the payment server completes the recharge and payment functions of the game. Although the embodiment of fig. 1 concerns an SLG game based on the Unity game engine, in other embodiments the scheme may be applied to other types of games, and other game engines such as Unreal may be employed. Although in the embodiment of fig. 1 the anti-aliasing method for a picture is applied to a game, in other embodiments it may also be applied to various scenarios such as movie special effects, animation, virtual reality, augmented reality, and design. The embodiments described here should not impose any limitation on the scope of the application.
It is easy to understand that the antialiasing method of the picture provided by the embodiment of the present application is generally executed by the terminal device, and accordingly, the antialiasing device of the picture is generally disposed in the terminal device. However, in other embodiments of the present application, the server may also have a similar function to the terminal device, so as to perform the antialiasing scheme of the picture provided by the embodiments of the present application.
Therefore, the embodiment of the application can be applied to a terminal or a server. The server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data, and artificial intelligence platforms. The terminal may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, or a smart watch. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited herein.
The implementation details of the technical scheme of the embodiment of the application are described in detail below:
fig. 2 shows a flow chart of an antialiasing method of a picture according to an embodiment of the application. The antialiasing method of the picture may be performed by various devices with computing, processing and picture display functions, such as a user terminal or cloud server, where the user terminal includes, but is not limited to, a mobile phone, a computer, a smart voice interaction device, a smart home appliance, a vehicle-mounted terminal, an aircraft, a portable wearable device, etc. Referring to fig. 2, the antialiasing method of the frame at least includes the following steps:
and 220, converting the clipping coordinates of the slice element in the current frame into the relative space coordinates of the slice element in the relative space of the camera corresponding to the current frame according to the inverse matrix of the rotating part in the observation matrix of the camera corresponding to the current frame and the inverse matrix of the projection matrix of the camera corresponding to the current frame.
Step 230, determining a displacement transformation matrix from the camera position corresponding to the current frame to the camera position corresponding to the previous frame, and transforming the relative spatial coordinates of the slice element in the camera relative space corresponding to the current frame into the relative spatial coordinates of the slice element in the camera relative space corresponding to the previous frame according to the displacement transformation matrix.
Step 240, according to the rotation parts in the projection matrix of the camera corresponding to the previous frame and the observation matrix of the camera corresponding to the previous frame, transforming the relative space coordinates of the slice element in the camera relative space corresponding to the previous frame into clipping coordinates of the slice element in the previous frame.
Step 250, obtaining the output sampling pixel value of the fragment in the previous frame according to the clipping coordinate of the fragment in the previous frame.
Step 260, performing a mixing operation on the output sampled pixel value of the primitive in the previous frame and the sampled pixel value of the primitive in the current frame to obtain a mixed pixel value.
Step 270, outputting the color of the pixel point corresponding to the pixel element in the current frame according to the mixed pixel value.
In the following, it is first described how the elements involved in the above steps are obtained.
Fig. 3 shows a flowchart of the steps performed before a fragment's clipping coordinates in the current frame are transformed into its relative spatial coordinates in the camera-relative space of the current frame, according to one embodiment of the application. Referring to fig. 3, before that transformation, the anti-aliasing method of the picture may include the following steps:
In step 310, the vertex coordinates of the model are transformed from model space to world space according to the model transformation matrix corresponding to the current frame.
The current frame is the current frame picture. Taking a world scene of an SLG game as an example: as the user moves the viewpoint, the SLG game outputs different pictures to the screen, and the current frame is the frame picture that currently needs to be output to the screen.
The scene in model space contains a model with vertex coordinates; the model can be any of various objects such as a person, a building, or a plant. Model space may also be referred to as object space, and the model transformation matrix is used to transform the model's vertex coordinates from model space to world space. Translation, scaling, rotation, and similar operations on the model can be realized through the model transformation.
In step 320, the vertex coordinates of the model are transformed from world space to viewing space according to the viewing matrix of the camera to which the current frame corresponds.
The view space (view space) may also be referred to as camera space (camera space), where the axes may be chosen arbitrarily, in the Unity view space the camera is located at the origin of coordinates, +x-axis pointing to the right, +y-axis pointing upwards, and +z-axis pointing to the rear of the camera. The vertex coordinates of the model can be transformed from world space to viewing space by a viewing matrix (view matrix).
In step 330, the vertex coordinates of the model are transformed from the viewing space to the clipping space according to the projection matrix corresponding to the current frame.
The clipping space (clip space) is also referred to as homogeneous clipping space, and the projection matrix is also referred to as the clip matrix; the model's vertex coordinates are transformed from the viewing space to the clipping space by the projection matrix. The clipping space is used to clip rendering primitives: only primitives located inside the clipping space are retained. Primitives are formed from vertices, such as the triangles making up a model.
In SLG games, the transformation to clipping space may be based on perspective projection or orthographic projection.
In step 340, the vertex coordinates of the model in the clipping space are mapped to the screen space, so as to obtain the vertex coordinates of the primitives of the model in the screen space.
The x and y coordinates of each vertex are transformed into the screen coordinate system by a viewport transformation using the viewport transform matrix, mapping the model's vertex coordinates in clipping space to screen space.
In step 350, a triangle mesh is determined according to the vertex coordinates of the primitives of the model in the screen space, and primitives are generated for the target pixel point in each pixel point according to the coverage condition of the pixel points of the screen by the triangle mesh.
In one embodiment of the present application, generating a primitive for a target pixel in each pixel according to a coverage condition of a pixel of a screen by a triangle mesh includes: traversing the pixel points of the screen, and generating a fragment for the target pixel point if the target pixel point is covered by the triangular mesh.
A triangle mesh in the screen is determined from the primitives' vertex coordinates in screen space. Whether each pixel is covered by the triangle mesh is then checked by judging whether the pixel's center point lies inside the mesh, thereby sampling the mesh; if a pixel is covered, a fragment corresponding to that pixel is obtained, carrying information such as the screen coordinates, depth, normal, and texture coordinates for that pixel.
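The transform chain of steps 310 to 350 can be sketched as follows. This is a schematic Python/numpy version for a single vertex and a single pixel test; the matrix conventions, the perspective divide, and the half-pixel center are common defaults assumed here, not the engine's exact implementation:

```python
import numpy as np

def to_screen(v_model, M, V, P, width, height):
    """Model space -> world -> view -> clip -> screen for one vertex."""
    v = np.append(v_model, 1.0)         # homogeneous coordinate
    pos_ws = M @ v                       # step 310: model transformation matrix
    pos_vs = V @ pos_ws                  # step 320: view matrix
    pos_cs = P @ pos_vs                  # step 330: projection matrix
    ndc = pos_cs[:3] / pos_cs[3]         # perspective divide to NDC in [-1, 1]
    sx = (ndc[0] * 0.5 + 0.5) * width    # step 340: viewport mapping
    sy = (ndc[1] * 0.5 + 0.5) * height
    return np.array([sx, sy, ndc[2]])

def pixel_covered(px, py, tri2d):
    """Step 350: a pixel is a target pixel if its center lies inside the
    screen-space triangle (same-sign edge-function test)."""
    c = np.array([px + 0.5, py + 0.5])
    def edge(p0, p1, p):
        return (p1[0] - p0[0]) * (p[1] - p0[1]) - (p1[1] - p0[1]) * (p[0] - p0[0])
    w = [edge(tri2d[i], tri2d[(i + 1) % 3], c) for i in range(3)]
    return all(v >= 0 for v in w) or all(v <= 0 for v in w)
```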
Next, how the projection matrix of the camera for each frame is obtained is described.
Fig. 4 shows a flow chart of steps preceding step 220 in the embodiment of fig. 2 according to an embodiment of the application. Referring to fig. 4, before transforming the clipping coordinates of the slice element in the current frame into the relative spatial coordinates of the slice element in the camera relative space corresponding to the current frame, the antialiasing method of the picture may further include the following steps:
In step 210, the original projection matrix of the camera for each frame is perturbed according to a two-dimensional Halton sequence, obtaining the projection matrix of the camera for each frame.
The original projection matrix is the unprocessed projection matrix. A two-dimensional Halton sequence of configurable length may be constructed, and its values applied as perturbation offsets in the X-axis and Y-axis dimensions to the camera's original projection matrix; specifically, the offsets may be added at positions [2,0] and [2,1] of the projection matrix. Perturbing the original projection matrix with the two-dimensional Halton sequence, i.e., applying sub-pixel-level jitter to it, changes the positions of vertices in clipping space and therefore the positions of the sampled pixels corresponding to those vertices. Because the Halton sequence is a low-discrepancy sequence, random sampling is avoided and the sample points are distributed uniformly, which improves the anti-aliasing effect.
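A minimal Python sketch of this perturbation follows. The radical-inverse construction is the standard Halton definition; the bases 2 and 3, the sequence length of 8, the sub-pixel scaling by the render-target size, and the row-major reading of the [2,0]/[2,1] positions are conventional assumptions rather than values fixed by the text:

```python
import numpy as np

def halton(index, base):
    """Radical inverse of `index` in `base`: a low-discrepancy value in [0, 1)."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def jittered_projection(P, frame, width, height, length=8):
    """Apply a sub-pixel jitter from a 2D Halton sequence to the camera's
    original 4x4 projection matrix P."""
    i = frame % length + 1
    jx = (halton(i, 2) - 0.5) * 2.0 / width     # offset in NDC, under one pixel
    jy = (halton(i, 3) - 0.5) * 2.0 / height
    Pj = P.copy()
    # The text adds the offsets at projection-matrix positions [2,0] and
    # [2,1] (column-major); in this row-major array they are [0,2] and [1,2].
    Pj[0, 2] += jx
    Pj[1, 2] += jy
    return Pj
```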
The inventor of the present application found that in the conventional temporal anti-aliasing re-projection process, a fragment is transformed to world space through the inverses of the current frame camera's view matrix and projection matrix, after which the previous frame camera's view matrix and projection matrix are left-multiplied to obtain the fragment's position in the previous frame. Under world coordinates, the translation part of the camera's view matrix contains large-magnitude values while the fragment's position in clipping space contains small-magnitude values, and multiplying a matrix containing large-magnitude values by one containing small-magnitude values loses a certain amount of computational precision. The main idea of the embodiments of the present application is therefore to avoid multiplying large-magnitude and small-magnitude matrices, thereby improving temporal anti-aliasing.
The steps of the modified back-projection process shown in fig. 2 are described in detail below.
Step 220: transforming the clipping coordinates of a fragment in the current frame into the fragment's relative spatial coordinates in the camera-relative space of the current frame, according to the inverse of the rotation part of the view matrix of the camera for the current frame and the inverse of the projection matrix of the camera for the current frame.
In this step, the camera-relative space of the current frame is constructed by taking the current frame's camera as the coordinate origin and the xyz axes of world coordinates as the coordinate axes. Specifically, the fragment's clipping coordinates in the current frame are transformed into its relative spatial coordinates in the camera-relative space of the current frame by the following formula (3):
posCWS = V_rot⁻¹ · P⁻¹ · posCS (3)
where posCS is the fragment's clipping coordinate in the current frame, P⁻¹ is the inverse of the projection matrix of the camera for the current frame, V_rot⁻¹ is the inverse of the rotation part of the view matrix of the camera for the current frame, and posCWS is the fragment's relative spatial coordinate in the camera-relative space of the current frame. The camera's view matrix may comprise a rotation part and a translation part.
In formula (3), the three matrices on the right-hand side, V_rot⁻¹, P⁻¹, and posCS, all contain only small-magnitude values, so precision loss is avoided.
In one embodiment of the present application, before the fragment's clipping coordinates in the current frame are transformed into its relative spatial coordinates in the camera-relative space of the current frame, the method further comprises: performing a depth test on the obtained fragments to obtain the fragments that pass the depth test, where the fragments whose coordinates are transformed into the camera-relative space of the current frame are the fragments that pass the depth test.
In step 230, a displacement transformation matrix from the camera position of the current frame to the camera position of the previous frame is determined, and the fragment's relative spatial coordinates in the camera-relative space of the current frame are transformed into its relative spatial coordinates in the camera-relative space of the previous frame according to the displacement transformation matrix.
This transformation is achieved by left-multiplying posCWS by the displacement transformation matrix V_trans from the camera position of the current frame to the camera position of the previous frame, as in formula (4):
posCWS′ = V_trans · posCWS (4)
where posCWS is the fragment's relative spatial coordinate in the camera-relative space of the current frame, V_trans is the displacement transformation matrix from the camera position of the current frame to the camera position of the previous frame, and posCWS′ is the fragment's relative spatial coordinate in the camera-relative space of the previous frame.
In formula (4), both matrices on the right-hand side, V_trans and posCWS, contain only small-magnitude values, so precision loss is avoided.
In step 240, the fragment's relative spatial coordinates in the camera-relative space of the previous frame are transformed into its clipping coordinates in the previous frame, according to the projection matrix of the camera for the previous frame and the rotation part of the view matrix of the camera for the previous frame.
Finally, the result posCWS′ of formula (4) is left-multiplied by the rotation part of the view matrix of the camera for the previous frame and then by the projection matrix of the camera for the previous frame, yielding the fragment's clipping coordinates in clipping space in the previous frame; this completes the improved re-projection process of the embodiment of the application, as shown in formula (5):
posCS′ = P′ · V_rot′ · posCWS′ (5)
where posCWS′ is the fragment's relative spatial coordinate in the camera-relative space of the previous frame, V_rot′ is the rotation part of the view matrix of the camera for the previous frame, P′ is the projection matrix of the camera for the previous frame, and posCS′ is the fragment's clipping coordinate in clipping space in the previous frame.
In formula (5), the three matrices on the right-hand side, posCWS′, V_rot′, and P′, all contain only small-magnitude values, so precision loss is avoided.
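Putting formulas (3) to (5) together, the improved re-projection can be sketched end to end. This is a schematic Python/numpy version of what would normally run in a shader; the names and the representation of V_trans as a plain displacement vector are illustrative assumptions:

```python
import numpy as np

def reproject(pos_cs, P_inv, Vrot_inv, cam_pos, prev_cam_pos, Vrot_prev, P_prev):
    """Map a fragment's current-frame clip coordinate (a homogeneous
    4-vector) to its previous-frame clip coordinate without ever
    reconstructing large world coordinates."""
    # Formula (3): clip space -> camera-relative space of the current frame.
    pos_cws = Vrot_inv @ (P_inv @ pos_cs)

    # Formula (4): apply only the small camera displacement. For a point
    # fixed in the world, posCWS' = posCWS + (camPos - prevCamPos).
    delta = cam_pos - prev_cam_pos
    pos_cws_prev = pos_cws.copy()
    pos_cws_prev[:3] += delta * pos_cws[3]   # homogeneous translation (scaled by w)

    # Formula (5): camera-relative space of the previous frame -> clip space.
    return P_prev @ (Vrot_prev @ pos_cws_prev)
```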
In step 250, the fragment's output sampled pixel value in the previous frame is obtained according to its clipping coordinates in the previous frame.
The following describes how the fragment's output sampled pixel value in the previous frame is obtained. Fig. 5 illustrates a flowchart for obtaining a fragment's output sampled pixel value in the previous frame based on its clipping coordinates in the previous frame, according to one embodiment of the application. As shown in fig. 5, obtaining the output sampled pixel value may specifically include the following steps:
In step 510, a pixel region of a predetermined size is selected from the current frame, centered on the current pixel corresponding to the fragment.
The pixel area of the predetermined size may be set as needed, for example, may be set as an area of 3x3 size centered on the current pixel.
In step 520, a plurality of pixel values are sampled within the pixel region.
The pixel region may be sampled according to a specified rule, for example taking the pixel value at the center of each pixel in the region.
In step 530, a color space is constructed according to a statistical model based on the plurality of pixel values.
Specifically, a color space can be constructed from the statistical model as follows:
CP_min = μ − λσ, CP_max = μ + λσ, where CP_min is the lower limit of the pixel values of the color space, CP_max is the upper limit of the pixel values of the color space, μ is the mean of the plurality of pixel values, σ is their standard deviation, and λ is an adjustment coefficient, which may lie between 0 and 1, for example 0.8.
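A minimal Python sketch of this construction over a 3x3 neighborhood follows; the per-RGB-channel treatment is an assumption consistent with common TAA variance clipping:

```python
import numpy as np

def color_space_bounds(neighborhood, lam=0.8):
    """Build CP_min = mu - lambda*sigma and CP_max = mu + lambda*sigma from
    the pixel values sampled in the region (shape (3, 3, 3) for 3x3 RGB)."""
    pixels = neighborhood.reshape(-1, 3).astype(np.float32)
    mu = pixels.mean(axis=0)
    sigma = pixels.std(axis=0)
    return mu - lam * sigma, mu + lam * sigma
```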
In step 540, the fragment's original sampled pixel value in the previous frame is obtained from its clipping coordinates in the previous frame.
The fragment's original sampled pixel value in the previous frame is the result of sampling the fragment's pixel value according to the clipping coordinates after the model's primitives were projected to screen space in the previous frame.
In step 550, a mixing operation is performed on the original sampled pixel value to obtain the fragment's target sampled pixel value in the previous frame.
That is, the fragment's original sampled pixel value in the previous frame must itself be mixed with the output sampled pixel value from the frame before it, obtained with the same improved re-projection scheme, to yield the fragment's target sampled pixel value in the previous frame.
In step 560, if the target sampled pixel value exceeds the color space, the target sampled pixel value is modified to an output sampled pixel value belonging to the color space.
FIG. 6 shows a flowchart of the details of step 560 in the embodiment of FIG. 5, according to one embodiment of the application. Referring to fig. 6, if the target sampled pixel value exceeds the color space, the method corrects the target sampled pixel value to an output sampled pixel value belonging to the color space, and may specifically include the following steps:
In step 561, a line segment is constructed between the target sampled pixel value and the fragment's sampled pixel value in the current frame.
It is easy to see that the fragment's sampled pixel value in the current frame belongs to this color space.
In step 562, the intersection point of the line segment with the boundary of the color space is determined and taken as the output sampled pixel value belonging to the color space.
Since the target sampled pixel value lies outside the color space while the fragment's sampled pixel value in the current frame lies within it, the intersection of the segment between them with the color space is a point of the color space for which the sum of the distances to the target sampled pixel value and to the fragment's sampled pixel value in the current frame is smallest.
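One way to compute that intersection point, sketched in Python under the assumption that the color space is the per-channel box [CP_min, CP_max] built above:

```python
import numpy as np

def clip_to_color_space(target, current, cp_min, cp_max):
    """Return where the segment from `current` (inside the box) toward
    `target` (outside the box) crosses the boundary [cp_min, cp_max]."""
    direction = target - current
    t = 1.0
    for c in range(3):                              # per RGB channel
        if direction[c] > 1e-6:
            t = min(t, (cp_max[c] - current[c]) / direction[c])
        elif direction[c] < -1e-6:
            t = min(t, (cp_min[c] - current[c]) / direction[c])
    return current + max(t, 0.0) * direction
```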
In step 570, if the fragment's target sampled pixel value in the previous frame does not exceed the color space, the target sampled pixel value is taken as the output sampled pixel value.
In the embodiment of the application, the problem of highlight flicker can be avoided by constructing a color space with a statistical model and constraining the fragment's output sampled pixel value in the previous frame to that color space.
In step 260, the fragment's output sampled pixel value in the previous frame and its sampled pixel value in the current frame are mixed to obtain a mixed pixel value.
Fig. 7 shows a flowchart of the details of step 260 in the embodiment of fig. 2, according to one embodiment of the application. Referring to fig. 7, mixing the fragment's output sampled pixel value in the previous frame with its sampled pixel value in the current frame to obtain a mixed pixel value may specifically include the following step:
In step 260', the fragment's output sampled pixel value in the previous frame and its sampled pixel value in the current frame are mixed based on a mixing weight to obtain the mixed pixel value.
Mixing the fragment's output sampled pixel value in the previous frame with its sampled pixel value in the current frame achieves the anti-aliasing effect.
Specifically, the blended pixel value can be obtained by the following formula:
S(t)=α*x(t)+(1-α)*S(t-1)
wherein α is a mixed weight, S (t) is a mixed pixel value corresponding to the tile in the current frame, that is, an output sampling pixel value of the tile in the current frame, x (t) is a sampling pixel value of the tile in the current frame, and S (t-1) is an output sampling pixel value of the tile in the previous frame.
S(t-1) was itself obtained in the previous frame by fusing the sampled pixel value resampled via reprojection with the output sampled pixel value of the frame before it, so the history accumulates recursively frame by frame.
In this embodiment, the mixing weight is the weight attached to the sampled pixel value of the fragment in the current frame. It is easy to understand that separate weights may instead be set for the sampled pixel value of the fragment in the current frame and the output sampled pixel value of the fragment in the previous frame, provided the two weights sum to 1.
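The formula is an exponential moving average over the frame history; a minimal sketch of the per-fragment update, with illustrative names:

    def mix(alpha, x_t, s_prev):
        # S(t) = alpha * x(t) + (1 - alpha) * S(t-1)
        return alpha * x_t + (1.0 - alpha) * s_prev

    # e.g. alpha = 0.1 keeps 90% of the accumulated history each frame:
    # mix(0.1, 0.8, 0.5) -> 0.53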
FIG. 8 shows a flowchart of the steps performed, according to one embodiment of the present application, before the mixing operation is carried out on the output sampled pixel value of the fragment in the previous frame and the sampled pixel value of the fragment in the current frame based on the mixing weight. Referring to FIG. 8, before this mixing operation, the antialiasing method for a picture may include the following steps:
In step 810, the configured original weight is obtained.
The original weight is the base weight from which the mixing weight is determined.
In step 820, the offset distance of the fragment's coordinates in the current frame relative to the fragment's coordinates in the previous frame is determined.
In step 830, the mixing weight is determined from the original weight and the offset distance, such that if a first mixing weight is greater than a second mixing weight, the offset distance from which the first mixing weight was determined is greater than the offset distance from which the second mixing weight was determined.
In other words, the mixing weight is monotonically non-decreasing in the offset distance: the two may be in a strict positive correlation or in any other relationship, provided that whenever a first mixing weight is greater than a second mixing weight, the first offset distance according to which the first mixing weight is determined is greater than the second offset distance according to which the second mixing weight is determined.
For example, the mixing weight α can be calculated by the following formula:
α=β*min(L/δ, 1)
wherein β is the original weight, L is the offset distance of the coordinates of the fragment in the current frame relative to the coordinates of the fragment in the previous frame, and δ is an offset distance threshold;
For another example, the mixing weight α may also be calculated by the formula α=β*k,
wherein k=A when L>δ and k=B otherwise, A and B are two positive real numbers, and A>B.
This condition means, conversely, that a larger offset distance yields a larger mixing weight. If the coordinates of the same fragment are far apart between the two frames, the object or the camera has moved substantially; the fragment may have become occluded or disoccluded, so the same pixel point may actually draw a different object in the two frames. In this case, making the mixing weight larger, that is, making the weight of the output sampled pixel value of the fragment in the previous frame smaller, reduces the influence of the possibly stale information provided by the previous frame and ultimately improves the accuracy of the determined mixed pixel value.
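A minimal sketch of the two example rules above; the clamped-linear form mirrors the reconstruction given for the first formula and is not confirmed by the source, and all names are illustrative:

    def mixing_weight_linear(beta, L, delta):
        # alpha grows with the offset distance, capped at beta once L >= delta
        return beta * min(L / delta, 1.0)

    def mixing_weight_step(beta, L, delta, A, B):
        # alpha = beta * k, with k = A beyond the threshold and k = B inside it
        assert A > B > 0
        return beta * (A if L > delta else B)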
In one embodiment of the present application, the steps described above for obtaining the mixed pixel value are performed when the world coordinate of the current frame camera on at least one coordinate axis reaches a predetermined threshold. When the world coordinates of the current frame camera on all coordinate axes are below the predetermined threshold, the mixed pixel value is instead obtained as follows: transforming the clipping coordinates of the fragment in the current frame into world coordinates of the fragment in world space according to the inverse matrix of the observation matrix of the camera corresponding to the current frame and the inverse matrix of the projection matrix of the camera corresponding to the current frame; transforming the world coordinates of the fragment into the clipping coordinates of the fragment in the previous frame according to the projection matrix and the observation matrix of the camera corresponding to the previous frame; obtaining the output sampled pixel value of the fragment in the previous frame according to those clipping coordinates; and mixing the output sampled pixel value of the fragment in the previous frame with the sampled pixel value of the fragment in the current frame to obtain the mixed pixel value.
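A minimal sketch of this ordinary world-space fallback, assuming column vectors and 4x4 homogeneous matrices (a convention the patent does not fix); V_cur/V_prev denote the full observation matrices, P_cur/P_prev the projection matrices, and all names are illustrative:

    import numpy as np

    def reproject_world_space(clip_cur, V_cur, P_cur, V_prev, P_prev):
        # current-frame clip coordinates -> world coordinates
        world = np.linalg.inv(V_cur) @ np.linalg.inv(P_cur) @ clip_cur
        # world coordinates -> previous-frame clip coordinates
        clip_prev = P_prev @ V_prev @ world
        return clip_prev / clip_prev[3]   # perspective divide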
The predetermined threshold of world coordinates may be set as desired. Taking a game world scene as an example, the preset threshold may be set to 7000: when the world coordinate of the current frame camera on the x axis or the z axis (or both) reaches 7000 or more, the camera is determined to have entered the large-coordinate scene, and reprojection is performed with the flow shown in FIG. 2 to obtain the mixed pixel value; if the world coordinates of the current frame camera on both the x and z axes are below 7000, it is determined that the large-coordinate scene has not been entered, and the mixed pixel value is obtained with the reprojection given by formula (1) and formula (2).
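A minimal sketch of the scene test, using the game example's threshold; whether negative coordinates are compared by absolute value is an assumption, and all names are illustrative:

    LARGE_COORDINATE_THRESHOLD = 7000.0   # the game example's threshold

    def entered_large_coordinate_scene(camera_world_x, camera_world_z):
        return (abs(camera_world_x) >= LARGE_COORDINATE_THRESHOLD
                or abs(camera_world_z) >= LARGE_COORDINATE_THRESHOLD)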
Referring to FIG. 2, in step 270, the color of the pixel point corresponding to the fragment is output in the current frame picture according to the mixed pixel value.
Outputting a color at the corresponding pixel point according to the mixed pixel value determined for each fragment realizes the display of the picture.
In one embodiment of the present application, outputting the color of the pixel point corresponding to the fragment in the current frame picture according to the mixed pixel value includes: sharpening the mixed pixel value based on the plurality of sampled pixel values, and outputting the color of the pixel point corresponding to the fragment in the current frame picture according to the sharpened pixel value.
In one embodiment of the present application, sharpening the mixed pixel value based on the plurality of sampled pixel values includes: low-pass filtering the plurality of sampled pixel values to obtain the blurred pixel value corresponding to each pixel value; determining the mean of the plurality of sampled pixel values; for each blurred pixel value, determining the product of a preset coefficient and the blurred pixel value and adding the product to the blurred pixel value; and, for each blurred pixel value, averaging that addition result with the mean, the average being taken as the sharpened pixel value corresponding to the blurred pixel value.
The low-pass filtering is in effect a blurring operation. In this embodiment of the application, reusing the pixel values already sampled in the preceding steps to sharpen the mixed pixel value reduces the blurring introduced by temporal antialiasing at little extra cost.
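A minimal sketch of one way to realize this, reusing the already-sampled neighborhood: a box filter serves as the low-pass step and the high-frequency difference is added back scaled by the preset coefficient (an unsharp-mask formulation). The exact combination in the description above may differ, and all names are illustrative:

    import numpy as np

    def sharpen(samples, mixed, k=0.5):
        # Box-filter the already-sampled neighborhood as the low-pass step,
        # then add back the high-frequency detail scaled by k.
        blurred = np.mean(samples, axis=0)
        return mixed + k * (mixed - blurred)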
In summary, the antialiasing method for a picture provided by the embodiments of the application has at least the following advantages:
1. By sampling neighboring pixel information, a color space is constructed from a statistical model, and the historical frame result obtained by sampling is constrained to this color space, greatly reducing the highlight flickering caused by temporal antialiasing.
2. By constructing a camera relative space and relative coordinate system and carrying out the transformation as products of matrices whose entries remain small in magnitude, coordinates are never restored to world space; this yields a more accurate calculation result, avoids the floating-point precision loss of TAA reprojection at the large world coordinates of an SLG game, solves the picture jitter problem, and also alleviates picture blurring.
3. The neighboring pixel information obtained by the earlier sampling is reused for low-cost sharpening after antialiasing, further reducing the picture blurring caused by temporal antialiasing.
The following describes apparatus embodiments of the application, which may be used to perform the antialiasing method for a picture in the above embodiments. For details not disclosed in the apparatus embodiments, please refer to the embodiments of the antialiasing method for a picture described above.
FIG. 9 shows a block diagram of an antialiasing apparatus for a picture according to an embodiment of the application. Referring to FIG. 9, an antialiasing apparatus 900 for a picture according to an embodiment of the application includes: a first transforming unit 910, a second transforming unit 920, a third transforming unit 930, an obtaining unit 940, a mixing unit 950 and an output unit 960. The first transforming unit 910 is configured to transform the clipping coordinates of the fragment in the current frame into the relative space coordinates of the fragment in the camera relative space corresponding to the current frame, according to the inverse matrix of the rotation part in the observation matrix of the camera corresponding to the current frame and the inverse matrix of the projection matrix of the camera corresponding to the current frame. The second transforming unit 920 is configured to determine a displacement transformation matrix from the camera position corresponding to the current frame to the camera position corresponding to the previous frame, and to transform, according to the displacement transformation matrix, the relative space coordinates of the fragment in the camera relative space corresponding to the current frame into the relative space coordinates of the fragment in the camera relative space corresponding to the previous frame. The third transforming unit 930 is configured to transform the relative space coordinates of the fragment in the camera relative space corresponding to the previous frame into the clipping coordinates of the fragment in the previous frame, according to the projection matrix of the camera corresponding to the previous frame and the rotation part in the observation matrix of the camera corresponding to the previous frame. The obtaining unit 940 is configured to obtain the output sampled pixel value of the fragment in the previous frame according to the clipping coordinates of the fragment in the previous frame. The mixing unit 950 is configured to mix the output sampled pixel value of the fragment in the previous frame with the sampled pixel value of the fragment in the current frame to obtain a mixed pixel value. The output unit 960 is configured to output, in the current frame picture, the color of the pixel point corresponding to the fragment according to the mixed pixel value.
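A minimal sketch of the three transforming units in sequence, assuming column vectors and 4x4 homogeneous matrices (a convention the patent does not fix); R_cur/R_prev denote the rotation parts of the observation matrices, P_cur/P_prev the projection matrices, and cam_cur/cam_prev the camera world positions, all names illustrative. Because only the small frame-to-frame camera displacement enters the arithmetic, coordinates never take world-scale magnitudes:

    import numpy as np

    def translation_matrix(offset):
        m = np.eye(4)
        m[:3, 3] = offset
        return m

    def reproject_camera_relative(clip_cur, R_cur, P_cur, R_prev, P_prev,
                                  cam_cur, cam_prev):
        # unit 910: current-frame clip coords -> camera relative space
        rel_cur = np.linalg.inv(R_cur) @ np.linalg.inv(P_cur) @ clip_cur
        # unit 920: displacement transform between the two camera positions;
        # only the small per-frame offset enters the arithmetic, preserving
        # float precision even at large world coordinates
        rel_prev = translation_matrix(cam_cur - cam_prev) @ rel_cur
        # unit 930: camera relative space -> previous-frame clip coords
        clip_prev = P_prev @ R_prev @ rel_prev
        return clip_prev / clip_prev[3]   # perspective divide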
In some embodiments of the present application, based on the foregoing scheme, the obtaining unit 940 is configured to: select a pixel point area of a preset size in the current frame, centered on the current pixel point corresponding to the fragment; sample a plurality of pixel values in the pixel point area; construct a color space from a statistical model based on the plurality of pixel values; obtain the original sampled pixel value of the fragment in the previous frame according to the clipping coordinates of the fragment in the previous frame; perform a mixing operation on the original sampled pixel value to obtain the target sampled pixel value of the fragment in the previous frame; if the target sampled pixel value exceeds the color space, correct the target sampled pixel value to an output sampled pixel value belonging to the color space; and if the original sampled pixel value of the fragment in the previous frame does not exceed the color space, take the target sampled pixel value as the output sampled pixel value.
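One common choice for the statistical model is a per-channel mean plus or minus standard deviation box over the neighborhood samples (so-called variance clipping); the sketch below uses this form, but the box model and the gamma parameter are assumptions, not details fixed by the patent:

    import numpy as np

    def color_space_box(samples, gamma=1.0):
        samples = np.asarray(samples, dtype=np.float32)
        mu = samples.mean(axis=0)       # per-channel mean of the neighborhood
        sigma = samples.std(axis=0)     # per-channel standard deviation
        return mu - gamma * sigma, mu + gamma * sigma   # (box_min, box_max)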
In some embodiments of the present application, based on the foregoing scheme, the obtaining unit 940 is further configured to: establish a connecting line between the target sampled pixel value and the sampled pixel value of the fragment in the current frame; and determine the intersection of the connecting line with the color space, taking the intersection as the output sampled pixel value belonging to the color space.
In some embodiments of the present application, based on the foregoing scheme, the output unit 960 is configured to: sharpen the mixed pixel value based on the plurality of sampled pixel values, and output the color of the pixel point corresponding to the fragment in the current frame picture according to the sharpened pixel value.
In some embodiments of the present application, based on the foregoing solution, the apparatus further includes a projection matrix perturbation unit configured to, before the clipping coordinates of the fragment in the current frame are transformed into the relative space coordinates of the fragment in the camera relative space corresponding to the current frame, perturb the original projection matrix of the camera corresponding to each frame according to a two-dimensional Halton sequence to obtain the projection matrix of the camera corresponding to each frame.
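The perturbation corresponds to the sub-pixel jitter customary in temporal antialiasing. Below is a minimal sketch, assuming a base-2/base-3 two-dimensional Halton sequence, an 8-frame cycle, and a column-vector projection matrix whose [0,2]/[1,2] entries off-center the frustum; the cycle length, the affected matrix entries and all names are assumptions, not details fixed by the patent:

    import numpy as np

    def halton(index, base):
        # Radical inverse of `index` in the given base; halton(i, 2) and
        # halton(i, 3) form the two-dimensional Halton sequence.
        f, r = 1.0, 0.0
        while index > 0:
            f /= base
            r += f * (index % base)
            index //= base
        return r

    def jittered_projection(P, frame_index, width, height, cycle=8):
        i = frame_index % cycle + 1
        jitter_x = (halton(i, 2) - 0.5) * 2.0 / width    # sub-pixel, NDC units
        jitter_y = (halton(i, 3) - 0.5) * 2.0 / height
        P_jittered = P.copy()
        P_jittered[0, 2] += jitter_x   # off-center the projection slightly
        P_jittered[1, 2] += jitter_y
        return P_jittered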
In some embodiments of the application, based on the foregoing scheme, the mixing unit 950 is configured to: mix the output sampled pixel value of the fragment in the previous frame with the sampled pixel value of the fragment in the current frame based on the mixing weight, to obtain the mixed pixel value.
In some embodiments of the present application, based on the foregoing scheme, before the output sampled pixel value of the fragment in the previous frame and the sampled pixel value of the fragment in the current frame are mixed based on the mixing weight, the mixing unit 950 is further configured to: acquire the configured original weight; determine the offset distance of the fragment's coordinates in the current frame relative to the fragment's coordinates in the previous frame; and determine the mixing weight from the original weight and the offset distance, wherein if a first mixing weight is greater than a second mixing weight, the offset distance from which the first mixing weight was determined is greater than the offset distance from which the second mixing weight was determined.
In some embodiments of the present application, based on the foregoing solution, the apparatus further includes a fragment generating unit which, before the clipping coordinates of the fragment in the current frame are transformed into the relative space coordinates of the fragment in the camera relative space corresponding to the current frame, is configured to: transform the vertex coordinates of the model from model space to world space according to the model transformation matrix corresponding to the current frame; transform the vertex coordinates of the model from world space to observation space according to the observation matrix of the camera corresponding to the current frame; transform the vertex coordinates of the model from observation space to clipping space according to the projection matrix corresponding to the current frame; map the vertex coordinates of the model in clipping space to screen space to obtain the vertex coordinates of the model's primitives in screen space; and determine a triangular mesh from the vertex coordinates of the primitives in screen space, generating fragments for target pixel points among all pixel points according to how the triangular mesh covers the screen's pixel points.
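A minimal sketch of this transform chain for one vertex, assuming column vectors, a perspective divide to [-1, 1] NDC, and a direct viewport mapping (the Y direction and viewport conventions are assumptions; all names are illustrative):

    import numpy as np

    def vertex_to_screen(v_model, M, V, P, width, height):
        world = M @ v_model            # model space -> world space
        view = V @ world               # world space -> observation space
        clip = P @ view                # observation space -> clipping space
        ndc = clip[:3] / clip[3]       # perspective divide to [-1, 1]
        screen_x = (ndc[0] * 0.5 + 0.5) * width
        screen_y = (ndc[1] * 0.5 + 0.5) * height
        return np.array([screen_x, screen_y, ndc[2]])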
In some embodiments of the application, based on the foregoing scheme, the picture is a game scene picture.
Fig. 10 shows a schematic diagram of a computer system suitable for use in implementing an embodiment of the application.
It should be noted that, the computer system 1000 of the electronic device shown in fig. 10 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present application.
As shown in fig. 10, the computer system 1000 includes a central processing unit (Central Processing Unit, CPU) 1001 that can perform various appropriate actions and processes, such as performing the method described in the above embodiment, according to a program stored in a Read-Only Memory (ROM) 1002 or a program loaded from a storage section 1008 into a random access Memory (Random Access Memory, RAM) 1003. In the RAM 1003, various programs and data required for system operation are also stored. The CPU 1001, ROM 1002, and RAM 1003 are connected to each other by a bus 1004. An Input/Output (I/O) interface 1005 is also connected to bus 1004.
The following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output portion 1007 including a Cathode Ray Tube (CRT), a liquid crystal display (Liquid Crystal Display, LCD), and a speaker; a storage portion 1008 including a hard disk or the like; and a communication section 1009 including a network interface card such as a LAN (Local Area Network ) card, a modem, or the like. The communication section 1009 performs communication processing via a network such as the internet. The drive 1010 is also connected to the I/O interface 1005 as needed. A removable medium 1011, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is installed as needed in the drive 1010, so that a computer program read out therefrom is installed as needed in the storage section 1008.
In particular, according to embodiments of the present application, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 1009, and/or installed from the removable medium 1011. When executed by a Central Processing Unit (CPU) 1001, the computer program performs various functions defined in the system of the present application.
It should be noted that, the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-Only Memory (ROM), an erasable programmable read-Only Memory (Erasable Programmable Read Only Memory, EPROM), flash Memory, an optical fiber, a portable compact disc read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Where each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented in software or in hardware, and the described units may also be provided in a processor, where in some cases the names of the units do not constitute a limitation on the units themselves.
As an aspect, the present application also provides a computer-readable medium that may be contained in the electronic device described in the above embodiment; or may exist alone without being incorporated into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the methods described in the above embodiments.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the application. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (may be a personal computer, a server, a touch terminal, or a network device, etc.) to perform the method according to the embodiments of the present application.
It will be appreciated that, in the specific embodiments of the present application involving data related to visual rendering, user approval or consent is required when the above embodiments are applied to specific products or technologies, and the collection, use and processing of the relevant data must comply with the relevant laws, regulations and standards of the relevant countries and regions.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (13)

1. An antialiasing method for a picture, the method comprising:
according to the inverse matrix of the rotation part in the observation matrix of the camera corresponding to the current frame and the inverse matrix of the projection matrix of the camera corresponding to the current frame, transforming the clipping coordinates of the fragment in the current frame into the relative space coordinates of the fragment in the camera relative space corresponding to the current frame;
determining a displacement transformation matrix from the camera position corresponding to the current frame to the camera position corresponding to the previous frame, and transforming the relative space coordinates of the fragment in the camera relative space corresponding to the current frame into the relative space coordinates of the fragment in the camera relative space corresponding to the previous frame according to the displacement transformation matrix;
according to the projection matrix of the camera corresponding to the previous frame and the rotation part in the observation matrix of the camera corresponding to the previous frame, transforming the relative space coordinates of the fragment in the camera relative space corresponding to the previous frame into the clipping coordinates of the fragment in the previous frame;
acquiring an output sampling pixel value of the fragment in a previous frame according to the clipping coordinate of the fragment in the previous frame;
mixing the output sampling pixel value of the fragment in the previous frame with the sampling pixel value of the fragment in the current frame to obtain a mixed pixel value;
and outputting the color of the pixel point corresponding to the fragment in the current frame picture according to the mixed pixel value.
2. The antialiasing method for a picture as claimed in claim 1, characterized in that said acquiring an output sampling pixel value of the fragment in the previous frame according to the clipping coordinates of the fragment in the previous frame comprises:
selecting a pixel point area with a preset size from the current frame by taking the current pixel point corresponding to the fragment as a center;
sampling a plurality of pixel values in the pixel point area;
constructing a color space according to a statistical model based on the plurality of pixel values;
acquiring an original sampling pixel value of the fragment in a previous frame according to the clipping coordinate of the fragment in the previous frame;
performing mixing operation according to the original sampling pixel value to obtain a target sampling pixel value of the fragment in a previous frame;
if the target sampling pixel value exceeds the color space, correcting the target sampling pixel value into an output sampling pixel value belonging to the color space;
and if the original sampling pixel value of the fragment in the previous frame does not exceed the color space, taking the target sampling pixel value as an output sampling pixel value.
3. The antialiasing method for a picture as claimed in claim 2, characterized in that said correcting the target sampling pixel value to an output sampling pixel value belonging to the color space comprises:
establishing a connecting line between the target sampling pixel value and the sampling pixel value of the fragment in the current frame;
and determining an intersection point of the connecting line and the color space, and taking the intersection point as an output sampling pixel value belonging to the color space.
4. The antialiasing method for a picture as claimed in claim 2, characterized in that said outputting the color of the pixel point corresponding to the fragment in the current frame picture according to the mixed pixel value comprises:
carrying out sharpening processing on the mixed pixel value based on the plurality of pixel values obtained by sampling, and outputting the color of the pixel point corresponding to the fragment in the current frame picture according to the pixel value obtained by the sharpening processing.
5. The antialiasing method for a picture as claimed in claim 1, characterized in that, before transforming the clipping coordinates of the fragment in the current frame into the relative space coordinates of the fragment in the camera relative space corresponding to the current frame, the method further comprises:
carrying out perturbation processing on the original projection matrix of the camera corresponding to each frame according to a two-dimensional Halton sequence to obtain the projection matrix of the camera corresponding to each frame.
6. The antialiasing method for a picture as claimed in claim 1, characterized in that said mixing the output sampling pixel value of the fragment in the previous frame with the sampling pixel value of the fragment in the current frame to obtain a mixed pixel value comprises:
carrying out a mixing operation on the output sampling pixel value of the fragment in the previous frame and the sampling pixel value of the fragment in the current frame based on a mixing weight, to obtain the mixed pixel value.
7. The antialiasing method for a picture as claimed in claim 6, characterized in that, before the mixing operation is carried out on the output sampling pixel value of the fragment in the previous frame and the sampling pixel value of the fragment in the current frame based on the mixing weight, the method further comprises:
acquiring configured original weights;
determining an offset distance of the coordinates of the fragment in the current frame relative to the coordinates of the fragment in the previous frame;
and determining a mixing weight according to the original weight and the offset distance, wherein if a first mixing weight is greater than a second mixing weight, the offset distance according to which the first mixing weight is determined is greater than the offset distance according to which the second mixing weight is determined.
8. The antialiasing method for a picture as claimed in claim 1, characterized in that, before transforming the clipping coordinates of the fragment in the current frame into the relative space coordinates of the fragment in the camera relative space corresponding to the current frame, the method further comprises:
transforming the vertex coordinates of the model from model space to world space according to the model transformation matrix corresponding to the current frame;
transforming the vertex coordinates of the model from world space to observation space according to the observation matrix of the camera corresponding to the current frame;
transforming the vertex coordinates of the model from the observation space to the clipping space according to the projection matrix corresponding to the current frame;
mapping the vertex coordinates of the model in the clipping space to a screen space to obtain the vertex coordinates of the primitives of the model in the screen space;
and determining a triangular mesh according to the vertex coordinates of the primitives of the model in the screen space, and generating fragments for target pixel points among all pixel points according to the coverage of the pixel points of the screen by the triangular mesh.
9. The antialiasing method for a picture as claimed in any one of claims 1 to 8, characterized in that the picture is a game scene picture.
10. An antialiasing apparatus for a picture, the apparatus comprising:
a first transformation unit, configured to transform the clipping coordinates of the fragment in the current frame into the relative space coordinates of the fragment in the camera relative space corresponding to the current frame according to the inverse matrix of the rotation part in the observation matrix of the camera corresponding to the current frame and the inverse matrix of the projection matrix of the camera corresponding to the current frame;
a second transformation unit, configured to determine a displacement transformation matrix from the camera position corresponding to the current frame to the camera position corresponding to the previous frame, and to transform the relative space coordinates of the fragment in the camera relative space corresponding to the current frame into the relative space coordinates of the fragment in the camera relative space corresponding to the previous frame according to the displacement transformation matrix;
a third transformation unit, configured to transform the relative space coordinates of the fragment in the camera relative space corresponding to the previous frame into the clipping coordinates of the fragment in the previous frame according to the projection matrix of the camera corresponding to the previous frame and the rotation part in the observation matrix of the camera corresponding to the previous frame;
an acquisition unit, configured to acquire an output sampling pixel value of the fragment in the previous frame according to the clipping coordinates of the fragment in the previous frame;
a mixing unit, configured to carry out a mixing operation on the output sampling pixel value of the fragment in the previous frame and the sampling pixel value of the fragment in the current frame to obtain a mixed pixel value;
and an output unit, configured to output the color of the pixel point corresponding to the fragment in the current frame picture according to the mixed pixel value.
11. A computer readable medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the antialiasing method for a picture as claimed in any one of claims 1 to 9.
12. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the antialiasing method for a picture as claimed in any one of claims 1 to 9.
13. A computer program product, characterized in that the computer program product comprises computer instructions stored in a computer readable storage medium, a processor of a computer device reading the computer instructions from the computer readable storage medium and executing them, causing the computer device to perform the antialiasing method for a picture as claimed in any one of claims 1 to 9.
CN202310837087.7A 2023-07-07 2023-07-07 Anti-aliasing method and device for picture, computer readable medium and electronic equipment Pending CN116934934A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310837087.7A CN116934934A (en) 2023-07-07 2023-07-07 Anti-aliasing method and device for picture, computer readable medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN116934934A true CN116934934A (en) 2023-10-24

Family

ID=88393452

Country Status (1)

Country Link
CN (1) CN116934934A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination