CN115779418A - Image rendering method and device, electronic equipment and storage medium - Google Patents

Image rendering method and device, electronic equipment and storage medium

Info

Publication number
CN115779418A
Authority
CN
China
Prior art keywords
dimensional
image
transformation matrix
space
dimensional image
Prior art date
Legal status
Pending
Application number
CN202211609840.9A
Other languages
Chinese (zh)
Inventor
杨杭
Current Assignee
Beijing Pixel Software Technology Co Ltd
Original Assignee
Beijing Pixel Software Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Pixel Software Technology Co Ltd
Priority to CN202211609840.9A
Publication of CN115779418A

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the invention provides an image rendering method, an image rendering apparatus, an electronic device and a storage medium, belonging to the field of image processing. A target transformation matrix for spatial transformation and three-dimensional perspective projection is obtained; a two-dimensional image is transformed with this matrix to obtain a three-dimensional perspective image of the two-dimensional image; and the two-dimensional image is processed based on a set clipping area to determine its display area, so that only the area of the three-dimensional perspective image corresponding to the display area is displayed. The two-dimensional image is thus drawn and displayed in a three-dimensional perspective projection manner, which can greatly improve the visual effect of the image.

Description

Image rendering method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to an image rendering method, an image rendering apparatus, an electronic device, and a storage medium.
Background
The game interface refers to the user interface of game software and comprises buttons, animations, text, sounds, windows and other game design elements that are in direct or indirect contact with the game user in the game picture. The game interface is an important means of conveying information, and the user interaction interface has an important influence on the overall presentation of the game picture and on the user's interactive experience.
At present, a two-dimensional user-interface drawing technique is generally adopted to present the game user interface on the target device screen in a two-dimensional manner, conveying important in-game information to the player through graphics, text and the like. However, the visual effect of this two-dimensional drawing technique is poor.
Disclosure of Invention
In view of the above, an object of the present invention is to provide an image rendering method, an image rendering apparatus, an electronic device and a storage medium, which can solve the problem of poor visual effect of the conventional two-dimensional image rendering technology.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides an image rendering method, where the method includes:
acquiring a target transformation matrix; the target transformation matrix is a transformation matrix used for carrying out space transformation and three-dimensional perspective projection;
based on the target transformation matrix, carrying out transformation processing on the two-dimensional image to obtain a three-dimensional perspective image of the two-dimensional image;
and performing coloring processing on the two-dimensional image based on the set clipping area, determining a display area of the two-dimensional image, and coloring an area corresponding to the display area in the three-dimensional perspective image.
The step of obtaining the target transformation matrix includes:
establishing a three-dimensional reference mapping space based on a camera space;
calculating a first transformation matrix from a two-dimensional plane space of the two-dimensional image to the reference mapping space according to a target projection screen;
acquiring mapping points corresponding to preselected points in the two-dimensional plane space in the reference mapping space, and constructing a second transformation matrix which rotates around the mapping points in the reference mapping space in a three-dimensional manner;
and calculating the target transformation matrix according to the first transformation matrix, the second transformation matrix and the perspective projection matrix.
Further, the step of establishing a three-dimensional reference mapping space based on the camera space includes:
the method comprises the steps of establishing a reference mapping space with a triaxial direction consistent with a triaxial direction of a camera space and a unit size consistent with a unit size of the camera space by taking a position with a set distance from an origin of the camera space in a direction of a camera orientation of the camera space as the origin.
Further, the step of calculating a first transformation matrix from the two-dimensional plane space of the two-dimensional image to the reference mapping space according to the target projection screen includes:
determining a condition point of a two-dimensional plane space of the two-dimensional image based on the width and height of a target projection screen; the abscissa value of the condition point is one half of the width of the target projection screen, and the ordinate of the condition point is one half of the height of the target projection screen;
and determining a first transformation matrix from the two-dimensional plane space of the two-dimensional image to the reference mapping space by using the coincidence of the condition point and the origin of the reference mapping space as a mapping condition.
Further, the target transformation matrix includes:
AMat=ProjectMat*RMat*UMat*LMat
wherein AMat denotes the target transformation matrix, ProjectMat denotes the perspective projection matrix, RMat denotes the second transformation matrix, UMat denotes the first transformation matrix, and LMat denotes the translation-rotation-scaling matrix of the two-dimensional plane space.
Further, the step of determining a display area of the two-dimensional image by performing a rendering process on the two-dimensional image based on the set clipping area includes:
receiving an input two-dimensional transformation matrix, and transmitting the clipping information of the two-dimensional image and the two-dimensional transformation matrix into a vertex coloring stage;
in a vertex coloring stage, calculating the position coordinates of each vertex of the two-dimensional image in a two-dimensional plane space through the two-dimensional transformation matrix, and obtaining the proportion value of each vertex based on the position coordinates and the clipping information;
transmitting the proportional values of all the vertexes into a pixel coloring stage, and calculating the proportional value of each fragment of the two-dimensional image through GPU linear interpolation;
and determining the display area of the two-dimensional image according to the proportion value of each fragment.
Further, the clipping information comprises a starting point, a height and a width of the clipping area;
the step of obtaining the proportion value of each vertex based on the position coordinates and the clipping information includes:
for each vertex, subtracting the starting point of the clipping area from the position coordinate of the vertex to obtain the relative position of the vertex to the clipping area;
and calculating the product between the width and the height of the clipping region, and taking the ratio of the relative position to the product as the proportion value of the vertex.
In a second aspect, an embodiment of the present invention provides an image rendering apparatus, including a spatial transform module, a perspective projection module, and a rendering module:
the space transformation module is used for acquiring a target transformation matrix; the target transformation matrix is a transformation matrix used for carrying out space transformation and three-dimensional perspective projection;
the perspective projection module is used for carrying out transformation processing on the two-dimensional image based on the target transformation matrix to obtain a three-dimensional perspective image of the two-dimensional image;
and the coloring drawing module is used for performing coloring processing on the two-dimensional image based on the set clipping area, determining the display area of the two-dimensional image and coloring the area corresponding to the display area in the three-dimensional perspective image.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a graphics processor and a memory, where the memory stores a computer program that can be executed by the graphics processor, and the graphics processor can execute the computer program to implement the image rendering method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a storage medium on which a computer program is stored, the computer program, when executed by a graphics processor, implementing the image rendering method according to the first aspect.
The image rendering method, image rendering apparatus, electronic device and storage medium provided by the embodiment of the invention acquire a target transformation matrix for performing spatial transformation and three-dimensional perspective projection, transform the two-dimensional image based on the target transformation matrix to obtain a three-dimensional perspective image of the two-dimensional image, and process the two-dimensional image based on the set clipping area to determine its display area, so that only the area of the three-dimensional perspective image corresponding to the display area is displayed. The two-dimensional image is thus drawn and displayed in a perspective projection manner with a three-dimensional sense of space, which can greatly improve the visual effect of the image.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered limiting of the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a block diagram illustrating an image rendering system according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating an image rendering method according to an embodiment of the present invention.
Fig. 3 shows a schematic flow diagram of a part of the sub-steps of step S11 in fig. 2.
Fig. 4 shows a schematic flow chart of a part of the sub-steps of step S112 in fig. 3.
Fig. 5 shows a schematic flow diagram of part of the sub-steps of step S15 in fig. 2.
Fig. 6 shows a block schematic diagram of an image rendering apparatus according to an embodiment of the present invention.
Fig. 7 is a block diagram of an electronic device according to an embodiment of the present invention.
Reference numerals: 100-an image rendering system; 110-a server; 120-a client; 130-image rendering means; 140-a spatial transformation module; 150-a perspective projection module; 160-coloring drawing module; 170-electronic devices.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It is noted that relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
In electronic games, the user interface of the game is usually displayed in a two-dimensional manner on the screen of the client to convey important information in the game to the game player in a graphical or textual manner. The user interface is an important information transfer mode, and the attractive interactive interface also has an important influence on the picture performance and the user interactive experience of the whole game.
Currently, a common user-interface rendering technique is as follows: the clipping area is written into a template buffer (i.e., a stencil buffer), and the user interface is drawn in screen-pixel units onto a projection screen in camera space by orthogonal projection; fragments of the drawn user interface that lie outside the clipping area fail the template testing stage and are not displayed.
However, such user-interface rendering techniques have the following drawbacks: they cannot achieve a three-dimensional perspective visual effect, so the visual effect is poor; and region clipping incurs extra drawing overhead and template testing overhead, consuming considerable resources.
Based on the above consideration, the embodiment of the invention provides an image rendering method, which can draw an image in a three-dimensional perspective manner, improve a visual effect, and reduce resource overhead. Hereinafter, the image rendering method will be described.
The image rendering method provided by the embodiment of the present invention may be applied to the image rendering system 100 shown in fig. 1, where the image rendering system 100 includes a server 110 and a client 120, and the server 110 may be in communication connection with the client 120 through a network.
The server 110 is configured to send the two-dimensional image to the client 120. The two-dimensional image may be a two-dimensional image of a game screen, for example, a game user interface, or a two-dimensional image of any other screen.
The client 120 is configured to receive the two-dimensional image and implement the image rendering method according to the embodiment of the present invention, so as to render and display the two-dimensional image in a perspective projection manner with a three-dimensional sense of space.
The server 110 may be an independent server or a cluster of servers 110, and the client 120 includes but is not limited to: intelligent terminals such as personal computers, notebook computers, tablet computers, smartphones and wearable portable devices. The client 120 includes a graphics processing unit (GPU) and a memory.
In this embodiment, a fragment is the set of information produced when the GPU graphics pipeline rasterizes a triangle in 3D space for the pixels it projects to the screen. A coloring stage is a programmable stage in the GPU graphics pipeline. Camera space is an abstract space in the 3D graphics transformation pipeline: a three-dimensional coordinate system, built as a left-handed or right-handed coordinate system, from the camera origin and orientation. In general, a 3D graphics transformation pipeline may include a local space, a world space, a camera space, and a screen space (projection screen space).
In a possible implementation manner, an embodiment of the present invention provides an image rendering method, and referring to fig. 2, the method may include the following steps. In the present embodiment, the image rendering method is applied to the client 120 in fig. 1 for illustration.
S11, acquiring a target transformation matrix. The target transformation matrix is a transformation matrix used for performing spatial transformation and three-dimensional perspective projection.
S13, transforming the two-dimensional image based on the target transformation matrix to obtain a three-dimensional perspective image of the two-dimensional image.
S15, performing coloring processing on the two-dimensional image based on the set clipping area, determining a display area of the two-dimensional image, and coloring the area corresponding to the display area in the three-dimensional perspective image.
After receiving the two-dimensional image sent by the server 110, the client 120 acquires a target transformation matrix for performing spatial transformation and three-dimensional perspective projection. The GPU on the client 120 transforms the two-dimensional image with this matrix, thereby applying the spatial transformation and three-dimensional perspective projection and drawing a three-dimensional perspective image of the two-dimensional image. Meanwhile, based on the clipping area set on the two-dimensional image, the GPU enters the coloring stage and performs coloring processing on the two-dimensional image to determine its display area, so that the GPU on the client 120 colors only the area of the three-dimensional perspective image corresponding to the display area.
Compared with the traditional user interactive interface rendering technology, the image rendering method provided by the embodiment of the invention realizes that the two-dimensional image is drawn and displayed in a perspective projection mode of three-dimensional sense, generates a three-dimensional effect and greatly improves the visual effect of the image.
Considering that a two-dimensional image (which may be a user interface) needs to be displayed on the screen at all times, the camera space is selected as the principal coordinate system for the two-dimensional user-interface transformation. In addition, in order for the target transformation matrix to render a two-dimensional image in a perspective projection manner with a three-dimensional sense of space, referring to fig. 3, step S11 may be further implemented as the following steps S111 to S114.
S111, establishing a three-dimensional reference mapping space based on the camera space.
The method for establishing the reference mapping space may be flexibly selected, for example, the reference mapping space may be established according to a preset rule, or the reference mapping space may be established as the same as the camera space, and in this embodiment, the method is not particularly limited.
Take, as an example, a camera space constructed as a right-handed three-dimensional coordinate system whose Z-axis is the camera orientation. A position at a set distance from the origin of the camera space, along the camera orientation direction, may be taken as the origin of a reference mapping space whose three-axis directions coincide with those of the camera space and whose unit size coincides with that of the camera space. The resulting reference mapping space is identical to the camera space except for the position of its origin.
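As a minimal sketch of step S111 (the function name and numpy representation are illustrative assumptions, not from the patent), the reference mapping space differs from the camera space only in its origin, which lies at a set signed distance d along the camera orientation; d = -1 mirrors the default value given later in the text:

    import numpy as np

    def reference_space_origin(cam_origin: np.ndarray,
                               cam_forward: np.ndarray,
                               d: float = -1.0) -> np.ndarray:
        """Origin of the reference mapping space: the point at signed
        distance d from the camera origin along the camera orientation.
        Axes and unit size are inherited unchanged from camera space."""
        forward = cam_forward / np.linalg.norm(cam_forward)  # unit camera direction
        return cam_origin + d * forward

    # Example: camera at the origin looking along +Z, default d = -1.
    origin = reference_space_origin(np.zeros(3), np.array([0.0, 0.0, 1.0]))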
S112, calculating a first transformation matrix from the two-dimensional plane space of the two-dimensional image to the reference mapping space according to the target projection screen.
It should be appreciated that the aspect ratio of the target projection screen is consistent with the aspect ratio of the display screen of client 120.
It should be noted that the two-dimensional plane space of the two-dimensional image and the space where the target projection screen is located satisfy the following conditions: the X-axis direction and the Y-axis direction of the two are the same, the unit size is consistent, and the unit is pixel. When the scaling mapping rule of the two-dimensional image to the reference mapping space is different, the corresponding first transformation matrix is different.
S113, obtaining the mapping point in the reference mapping space corresponding to a preselected point in the two-dimensional plane space, and constructing a second transformation matrix that rotates three-dimensionally around the mapping point in the reference mapping space.
It should be noted that the preselected point may be any point in the two-dimensional plane space.
S114, calculating the target transformation matrix from the first transformation matrix, the second transformation matrix and the perspective projection matrix.
Through the steps S111 to S114, the obtained target transformation matrix can transform the two-dimensional image from the two-dimensional plane space to the reference mapping space, and perform standard 3D space transformation on the reference mapping space and then perform perspective projection on the reference mapping space to generate the effect of a three-dimensional perspective image.
Since the two-dimensional image is ultimately projected onto the target projection screen, in one possible embodiment the final three-dimensional perspective image should fit the display screen of the client as well as possible. Based on this, step S112 may be further implemented as: determining a condition point of the two-dimensional plane space of the two-dimensional image based on the width and height of the target projection screen, and determining the first transformation matrix from the two-dimensional plane space of the two-dimensional image to the reference mapping space using the coincidence of the condition point with the origin of the reference mapping space as the mapping condition.
The abscissa value of the condition point is one half of the width of the target projection screen, and the ordinate of the condition point is one half of the height of the target projection screen.
Assuming that the width of the target projection screen is pw, its height is ph, and a = pw/ph, the mapping condition is: the condition point (pw/2, ph/2) coincides with the origin of the reference mapping space. It should be understood that the width of the target projection screen may be the same as the width of the display screen of client 120, and the height of the target projection screen may likewise be the same as the height of the display screen of client 120.
Assume that the distance between the origin of the reference mapping space and the origin of the camera space is d (the default value of d may be -1), that the width of the client display screen is pw and its height is ph, and that the camera field of view of the camera space is FOV radians. Define tan_FOV = tan(FOV/2), i.e. the tangent of half the FOV, and define a = pw/ph as the aspect ratio of the client display screen. Since the aspect ratio of the target projection screen coincides with that of the client display screen, a is also the aspect ratio of the target projection screen.
To further improve how well the three-dimensional perspective image of the two-dimensional image fits the client display screen, the aspect ratio of the target projection screen and the distance between the origin of the reference mapping space and the origin of the camera space are introduced into the calculation of the first transformation matrix from the two-dimensional plane space of the two-dimensional image to the reference mapping space. Referring to fig. 4, determining the first transformation matrix from the two-dimensional plane space of the two-dimensional image to the reference mapping space may be further implemented as the following steps.
S1121, calculating the height scaling from the two-dimensional plane space of the two-dimensional image to the XY plane of the reference mapping space, based on the distance between the origin of the reference mapping space and the origin of the camera space.
S1122, calculating the width scaling from the two-dimensional plane space of the two-dimensional image to the XY plane of the reference mapping space, based on the aspect ratio of the target projection screen.
The height scaling in the above steps can be expressed as h = tan_FOV * d, and the width scaling as w = h * a.
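Expressed as a small sketch (the function name and the sample screen parameters are assumptions), the two scale factors follow directly from the definitions above:

    import math

    def plane_to_reference_scales(pw: float, ph: float,
                                  fov: float, d: float = -1.0):
        """Scale factors of S1121/S1122: h = tan(FOV/2) * d and w = h * a,
        where a = pw / ph is the aspect ratio of the target projection screen."""
        tan_fov = math.tan(fov / 2.0)  # tangent of half the field of view
        h = tan_fov * d                # height scaling (S1121)
        w = h * (pw / ph)              # width scaling  (S1122)
        return w, h

    # Example with a 1920x1080 screen and a 60-degree FOV (assumed values).
    w, h = plane_to_reference_scales(1920.0, 1080.0, math.radians(60.0))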
S1123, calculating the first transformation matrix from the two-dimensional plane space of the two-dimensional image to the reference mapping space based on the width scaling and the height scaling.
Note that the traditional two-dimensional user interface lies in a two-dimensional coordinate system with the X-axis pointing right and the Y-axis pointing down, opposite to the Y-axis of a standard right-handed coordinate system. Thus, the scaling mapping rule may be: divide the coordinates of the two-dimensional image by the width and the height of the target projection screen, scale along the X and Y axes, and after doubling, shift by one unit length in the negative X direction and the positive Y direction, with the Z-axis scaled consistently with the Y-axis.
Since the unit size of the reference mapping space is consistent with that of the camera space, it changes as the camera parameters change. Therefore, during scaling, the unit ratio between the two-dimensional plane space of the two-dimensional image and the reference mapping space is computed dynamically from the camera parameters.
In the case of the above scaling mapping rule, the first transformation matrix may be expressed as:
(The matrix UMat is shown as an image in the original publication and is not reproduced here.)
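Because the matrix itself survives only as an image, the sketch below assembles one plausible UMat from the stated rule: X and Y are divided by the screen width and height respectively, doubled, and shifted one unit along negative X and positive Y (flipping the Y-down interface axis into the Y-up reference space), with Z scaled consistently with Y. The column-vector convention and the exact entries are assumptions of this sketch, not the patented matrix:

    import numpy as np

    def build_umat(pw: float, ph: float, w: float, h: float) -> np.ndarray:
        """Speculative reconstruction of the first transformation matrix UMat
        (column-vector convention): maps x in [0, pw] to w * (2x/pw - 1) and
        y in [0, ph] to h * (1 - 2y/ph), with z scaled like y."""
        return np.array([
            [2.0 * w / pw,  0.0,           0.0,           -w ],
            [0.0,          -2.0 * h / ph,  0.0,            h ],
            [0.0,           0.0,           2.0 * h / ph,   0.0],
            [0.0,           0.0,           0.0,            1.0],
        ])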
in order to realize the three-dimensional effect of the user interface, in step S113, a mode that a preselected point on a two-dimensional plane space of the two-dimensional image is three-dimensionally rotated around a certain point in the three-dimensional pixel space is adopted, and assuming that the preselected point is a p point, the step S113 may be further implemented as follows: and converting the P point in the two-dimensional plane space into a P point of a reference mapping space, and constructing a second transformation matrix which carries out 3D transformation around the point P in the reference mapping space.
When the coordinates of the P point are represented as P (px, py, pz), the second transformation matrix can be represented as:
(The matrix RMat is shown as an image in the original publication and is not reproduced here.)
where BMat denotes the translation-rotation-scaling matrix in the reference mapping space. Since the translation-rotation-scaling matrix of a three-dimensional space is common knowledge, it is not described in this embodiment.
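A standard way to realize such a pivoted transform, and a plausible reading of the figure, is to conjugate BMat by translations to and from P; the sketch below does exactly that (the helper names and the column-vector convention are assumptions):

    import numpy as np

    def translation(v: np.ndarray) -> np.ndarray:
        """4x4 homogeneous translation matrix (column-vector convention)."""
        m = np.eye(4)
        m[:3, 3] = v
        return m

    def build_rmat(p: np.ndarray, bmat: np.ndarray) -> np.ndarray:
        """Second transformation matrix: apply the translate-rotate-scale
        matrix BMat about the pivot point P in the reference mapping space."""
        return translation(p) @ bmat @ translation(-p)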
On the basis of the above, the target transformation matrix in step S114 can be expressed as: AMat = ProjectMat * RMat * UMat * LMat.
wherein AMat denotes the target transformation matrix, ProjectMat denotes the perspective projection matrix, RMat denotes the second transformation matrix, UMat denotes the first transformation matrix, and LMat denotes the translation-rotation-scaling matrix of the two-dimensional plane space. Since the perspective projection matrix and the translation-rotation-scaling matrix of the two-dimensional plane space are common knowledge in spatial transformation, they are not described in this embodiment.
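With all four factors in hand, the composition of step S114 is a plain matrix product. A minimal sketch follows, with identity matrices standing in for the routine ProjectMat and LMat (the stand-ins and sample point are assumptions):

    import numpy as np

    def build_amat(project_mat: np.ndarray, rmat: np.ndarray,
                   umat: np.ndarray, lmat: np.ndarray) -> np.ndarray:
        """Target transformation matrix: AMat = ProjectMat * RMat * UMat * LMat."""
        return project_mat @ rmat @ umat @ lmat

    # A homogeneous point (x, y) of the two-dimensional image is carried
    # through space transformation and perspective projection in one multiply.
    amat = build_amat(np.eye(4), np.eye(4), np.eye(4), np.eye(4))
    clip_pos = amat @ np.array([100.0, 50.0, 0.0, 1.0])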
Current region clipping causes extra drawing overhead and template testing overhead, consumes considerable resources, and is inefficient. To address this, in one possible embodiment, referring to fig. 5, the manner in step S15 of performing coloring processing on the two-dimensional image based on the set clipping area to determine the display area of the two-dimensional image may be further implemented as the following steps.
S151, receiving the input two-dimensional transformation matrix, and transmitting the clipping information of the two-dimensional image and the two-dimensional transformation matrix into a vertex coloring stage.
S152, in the vertex coloring stage, the position coordinates of each vertex of the two-dimensional image in the two-dimensional plane space are calculated through the two-dimensional transformation matrix, and the proportion value of each vertex is obtained based on the position coordinates and the cutting information.
S153, transmitting the proportional values of all the vertexes into a pixel coloring stage, and calculating the proportional value of each fragment of the two-dimensional image through GPU linear interpolation.
S154, determining the display area of the two-dimensional image according to the proportion value of each fragment.
When the client 120 receives the two-dimensional image sent by the server 110, the central processing unit (CPU) of the client 120 transmits the two-dimensional image, including its clipping area, and the two-dimensional transformation matrix to the graphics processor. The two-dimensional transformation matrix is a conventional transformation matrix of the vertex coloring stage and is not further described in this embodiment. The clipping area is a preset area, and the clipping information may include the starting point, height and width of the clipping area.
The graphics processor receives the two-dimensional transformation matrix and the two-dimensional image including the clipping area from the central processing unit, and transmits the clipping information of the two-dimensional image and the two-dimensional transformation matrix to the vertex coloring stage. In the vertex coloring stage, the graphics processor calculates the position coordinates of each vertex of the two-dimensional image in the two-dimensional plane space through the two-dimensional transformation matrix and, for each vertex, calculates the proportion value of the vertex according to the clipping information and the position coordinates of the vertex.
In order to obtain the proportion value of each vertex of the two-dimensional image more accurately, in one possible implementation, obtaining the proportion value of each vertex based on the position coordinates and the clipping information in S152 may be further implemented as: for each vertex, subtract the starting point of the clipping area from the position coordinates of the vertex to obtain the relative position of the vertex with respect to the clipping area; then calculate the product of the width and the height of the clipping area, and take the ratio of the relative position to the product as the proportion value of the vertex. In this way, the proportion values of all the vertices of the two-dimensional image are obtained.
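A sketch of this vertex-stage computation follows. The text's "ratio of the relative position to the product" is read here componentwise, x against the clipping width and y against the clipping height, since that is what makes the later containment test in [0, 1] meaningful; this reading, like the names, is an assumption of the sketch:

    import numpy as np

    def vertex_ratio(vertex_pos: np.ndarray, clip_start: np.ndarray,
                     clip_w: float, clip_h: float) -> np.ndarray:
        """Proportion value of one vertex (S152): its position relative to
        the clipping area's start point, normalized by width and height."""
        rel = vertex_pos - clip_start             # relative position to the clipping area
        return rel / np.array([clip_w, clip_h])   # componentwise proportion value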
The graphics processor passes the proportion values of all vertices of the two-dimensional image into the pixel coloring stage and calculates the proportion value of each fragment of the two-dimensional image through GPU linear interpolation. For each fragment, if its proportion value lies within [0, 1], the fragment is determined to be inside the clipping area; otherwise, it is determined to be outside the clipping area. Fragments not located in the clipping area are not displayed, and the display area is thereby determined. The graphics processor then performs pixel coloring on the area of the three-dimensional perspective image corresponding to the display area, and performs the template test and depth test on each colored fragment for final display on the client display screen.
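On real hardware the per-fragment proportion value arrives through the GPU's built-in linear interpolation of vertex outputs; in the sketch below a barycentric interpolation stands in for it, and a fragment is kept only when both components of its proportion value fall within [0, 1] (all names are assumptions):

    import numpy as np

    def fragment_ratio(bary: np.ndarray, r0: np.ndarray,
                       r1: np.ndarray, r2: np.ndarray) -> np.ndarray:
        """Per-fragment proportion value (S153): barycentric (linear)
        interpolation of the three vertex proportion values."""
        return bary[0] * r0 + bary[1] * r1 + bary[2] * r2

    def in_display_area(ratio: np.ndarray) -> bool:
        """S154: the fragment is displayed only if its proportion value
        lies within [0, 1] in every component."""
        return bool(np.all((ratio >= 0.0) & (ratio <= 1.0)))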
In the conventional image rendering method, all fragments need to undergo the template test, and fragments that fail it are not displayed. Compared with this, in the image rendering method provided by the embodiment of the invention the display area is determined by the vertex coloring and pixel coloring stages of steps S151 to S154, so that in the subsequent template test only the fragments located in the display area need to be tested. This greatly reduces drawing overhead and template testing overhead and improves clipping efficiency.
The image rendering method provided by the embodiment of the invention applies a 3D transformation to the two-dimensional image to be rendered and obtains the three-dimensional perspective image by perspective-projecting the transformed image onto the target projection screen. Meanwhile, the fragments of the two-dimensional image located in the clipping area are identified in the coloring stage to determine the display area, so that only the area of the three-dimensional perspective image corresponding to the display area is colored and only the colored fragments in the display area undergo the subsequent template test. This greatly reduces drawing overhead and template testing overhead, lowers resource consumption, and improves efficiency.
Based on the above inventive concept of the image rendering method, in a possible implementation manner, an embodiment of the present invention further provides an image rendering apparatus 130, where the image rendering apparatus 130 may be applied to the client 120 in fig. 1. Referring to fig. 6, the image rendering apparatus 130 may include a spatial transform module 140, a perspective projection module 150, and a rendering module 160.
The spatial transform module 140 is configured to obtain a target transformation matrix. The target transformation matrix is used for performing spatial transformation and three-dimensional perspective projection.
The perspective projection module 150 is configured to transform the two-dimensional image based on the target transformation matrix to obtain a three-dimensional perspective image of the two-dimensional image.
The coloring and drawing module 160 is configured to perform coloring processing on the two-dimensional image based on the set clipping area, determine the display area of the two-dimensional image, and color the area of the three-dimensional perspective image corresponding to the display area.
In the image rendering apparatus 130, the spatial transformation module 140, the perspective projection module 150, and the coloring and drawing module 160 cooperate to transform the two-dimensional image based on the target transformation matrix for spatial transformation and three-dimensional perspective projection, obtaining a three-dimensional perspective image of the two-dimensional image, and to process the two-dimensional image based on the set clipping area to determine its display area, so that only the area of the three-dimensional perspective image corresponding to the display area is colored. The two-dimensional image is thus drawn and displayed in a perspective projection manner with a three-dimensional sense, greatly improving the visual effect of the image.
For specific limitations of the image rendering apparatus 130, reference may be made to the above limitations of the image rendering method, which are not repeated here. The modules of the image rendering apparatus 130 may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware, independent of the processor of the electronic device 170, or stored in software form in the memory of the electronic device 170, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, an electronic device 170 is provided, and the electronic device 170 may be a client, and the internal structure thereof may be as shown in fig. 7. The electronic device 170 includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the electronic device 170 is configured to provide computing and control capabilities. The memory of the electronic device 170 includes a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the electronic device 170 is used for performing wired or wireless communication with an external terminal, and the wireless communication may be implemented through WIFI, an operator network, near Field Communication (NFC), or other technologies. The computer program, when executed by a processor, implements the image rendering method as provided in the above embodiments.
The configuration shown in fig. 7 is a block diagram of only a portion of the configuration associated with the present invention, and does not constitute a limitation on the electronic device 170 to which the present invention is applied, and a specific electronic device 170 may include more or less components than those shown in fig. 7, or may combine some components, or have a different arrangement of components.
In one embodiment, the image rendering apparatus 130 provided by the present invention can be implemented in the form of a computer program, and the computer program can be run on the electronic device 170 shown in fig. 7. The memory of the electronic device 170 may store various program modules constituting the image rendering apparatus 130, such as the spatial transform module 140, the perspective projection module 150, and the rendering module 160 shown in fig. 6. The computer program constituted by the respective program modules causes the processor to execute the steps in the image rendering method described in this specification.
For example, the electronic device 170 shown in fig. 7 may perform step S11 by the spatial transform module 140 in the image rendering apparatus 130 shown in fig. 6. The electronic device 170 may perform step S13 through the perspective projection module 150. The electronic device 170 may perform step S15 through the coloring drawing module 160.
In one embodiment, an electronic device 170 is provided, comprising a memory storing a computer program and a graphics processor; the graphics processor implements the following steps when executing the computer program: acquiring a target transformation matrix; transforming the two-dimensional image based on the target transformation matrix to obtain a three-dimensional perspective image of the two-dimensional image; and performing coloring processing on the two-dimensional image based on the set clipping area, determining a display area of the two-dimensional image, and coloring the area corresponding to the display area in the three-dimensional perspective image.
In one embodiment, a storage medium is provided, on which a computer program is stored; when executed by a graphics processor, the computer program performs the following steps: acquiring a target transformation matrix; transforming the two-dimensional image based on the target transformation matrix to obtain a three-dimensional perspective image of the two-dimensional image; and performing coloring processing on the two-dimensional image based on the set clipping area, determining a display area of the two-dimensional image, and coloring the area corresponding to the display area in the three-dimensional perspective image.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as separate products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method of image rendering, the method comprising:
acquiring a target transformation matrix; the target transformation matrix is a transformation matrix used for carrying out space transformation and three-dimensional perspective projection;
based on the target transformation matrix, carrying out transformation processing on a two-dimensional image to obtain a three-dimensional perspective image of the two-dimensional image;
and performing coloring processing on the two-dimensional image based on the set clipping area, determining a display area of the two-dimensional image, and coloring an area corresponding to the display area in the three-dimensional perspective image.
2. The image rendering method of claim 1, wherein the step of obtaining the target transformation matrix comprises:
establishing a three-dimensional reference mapping space based on a camera space;
according to a target projection screen, calculating a first transformation matrix from a two-dimensional plane space of the two-dimensional image to the reference mapping space;
acquiring mapping points corresponding to preselected points in the two-dimensional plane space in the reference mapping space, and constructing a second transformation matrix which rotates around the mapping points in the reference mapping space in a three-dimensional manner;
and calculating the target transformation matrix according to the first transformation matrix, the second transformation matrix and the perspective projection matrix.
3. The image rendering method according to claim 2, wherein the step of establishing a three-dimensional reference mapping space based on the camera space comprises:
the method comprises the steps of taking a position with a set distance from a camera orientation direction of a camera space to an original point of the camera space, and establishing a reference mapping space with the same triaxial direction and the same unit size of the camera space.
4. The image rendering method according to claim 2, wherein the step of calculating a first transformation matrix from the two-dimensional plane space of the two-dimensional image to the reference mapping space according to the target projection screen includes:
determining a condition point of a two-dimensional plane space of the two-dimensional image based on the width and height of a target projection screen; the abscissa value of the condition point is one half of the width of the target projection screen, and the ordinate of the condition point is one half of the height of the target projection screen;
and determining a first transformation matrix from the two-dimensional plane space of the two-dimensional image to the reference mapping space by using the coincidence of the condition point and the origin of the reference mapping space as a mapping condition.
5. The image rendering method of claim 2, wherein the target transformation matrix comprises:
AMat=ProjectMat*RMat*UMat*LMat
wherein AMat denotes the target transformation matrix, ProjectMat denotes the perspective projection matrix, RMat denotes the second transformation matrix, UMat denotes the first transformation matrix, and LMat denotes the translation-rotation-scaling matrix of the two-dimensional plane space.
6. The image rendering method according to claim 1, wherein the step of determining the display area of the two-dimensional image by performing a rendering process on the two-dimensional image based on the set clipping area includes:
receiving an input two-dimensional transformation matrix, and transmitting the clipping information of the two-dimensional image and the two-dimensional transformation matrix into a vertex coloring stage;
in a vertex coloring stage, calculating the position coordinates of each vertex of the two-dimensional image in a two-dimensional plane space through the two-dimensional transformation matrix, and obtaining the proportion value of each vertex based on the position coordinates and the clipping information;
transmitting the proportional values of all the vertexes into a pixel coloring stage, and calculating the proportional value of each fragment of the two-dimensional image through GPU linear interpolation;
and determining the display area of the two-dimensional image according to the proportion value of each fragment.
7. The image rendering method of claim 6, wherein the clipping information includes a start point, a height, and a width of a clipping region;
the step of obtaining a proportional value of each vertex based on the position coordinates and the clipping information includes:
for each vertex, subtracting the starting point of the clipping area from the position coordinate of the vertex to obtain the relative position of the vertex to the clipping area;
and calculating the product of the width and the height of the clipping region, and taking the ratio of the relative position to the product as the proportion value of the vertex.
8. An image rendering apparatus, comprising a spatial transform module, a perspective projection module, and a rendering module:
the space transformation module is used for acquiring a target transformation matrix; the target transformation matrix is a transformation matrix used for carrying out space transformation and three-dimensional perspective projection;
the perspective projection module is used for carrying out transformation processing on the two-dimensional image based on the target transformation matrix to obtain a three-dimensional perspective image of the two-dimensional image;
and the coloring drawing module is used for performing coloring processing on the two-dimensional image based on the set clipping area, determining the display area of the two-dimensional image and coloring the area corresponding to the display area in the three-dimensional perspective image.
9. An electronic device comprising a graphics processor and a memory, the memory storing a computer program executable by the graphics processor, the computer program executable by the graphics processor to implement the image rendering method of any of claims 1 to 7.
10. A storage medium on which a computer program is stored, the computer program, when executed by a graphics processor, implementing the image rendering method of any one of claims 1 to 7.
CN202211609840.9A 2022-12-12 2022-12-12 Image rendering method and device, electronic equipment and storage medium Pending CN115779418A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211609840.9A CN115779418A (en) 2022-12-12 2022-12-12 Image rendering method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211609840.9A CN115779418A (en) 2022-12-12 2022-12-12 Image rendering method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115779418A true CN115779418A (en) 2023-03-14

Family

ID=85419362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211609840.9A Pending CN115779418A (en) 2022-12-12 2022-12-12 Image rendering method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115779418A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination