CN116954785A - Image processing apparatus and image processing method for game cycle - Google Patents
- Publication number
- CN116954785A (application number CN202310457927.7A / CN202310457927A)
- Authority
- CN
- China
- Prior art keywords
- image
- game
- image processing
- motion
- interpolated
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/30—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
- A63F2300/308—Details of the user interface
Abstract
An image processing method for a game cycle of a game. The game cycle includes a game rendering module and a motion compensation module, which are executed by one or more processing units to generate an output image. The image processing method comprises: rendering, by the game rendering module, a scene of the game to obtain a first image; rendering, by the game rendering module, a user interface to obtain a second image; motion compensating the first image by the motion compensation module to generate an interpolated first image; and synthesizing, by the motion compensation module, the second image and the interpolated first image for displaying the output image.
Description
Technical Field
The present invention relates to an image processing apparatus and an image processing method for a game cycle (game loop), and more particularly, to an image processing apparatus and an image processing method that achieve an ideal motion estimation and motion compensation (MEMC) effect by decomposing the game cycle.
Background
A game cycle is a series of routines that runs continuously throughout the course of a game. Conventionally, a game cycle may be divided into three distinct phases: processing input, updating the game state, and drawing the game content. The game content to be drawn is typically a combination of a 2D or 3D main scene and a 2D user interface (UI). In the drawing phase, the main scene is generally drawn first, and the 2D user interface is then superimposed on it to generate the final image for display.
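The three phases above can be sketched as a minimal loop. This is an illustrative sketch only; none of the names below come from the patent:

```python
# Toy game cycle: process input, update state, draw scene + UI overlay.
# All function and field names are hypothetical.

def process_input(state, events):
    state["x"] += sum(events)          # apply queued input events
    return state

def update(state, dt):
    state["t"] += dt                   # advance simulation time
    return state

def draw(state):
    scene = f"scene(x={state['x']}, t={state['t']})"   # 2D/3D main scene
    ui = "ui(hud)"                                     # static 2D overlay
    return scene + " + " + ui          # UI is composited over the scene

def game_loop(frames, events_per_frame, dt=1):
    state = {"x": 0, "t": 0}
    out = []
    for f in range(frames):
        state = process_input(state, events_per_frame[f])
        state = update(state, dt)
        out.append(draw(state))
    return out
```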
Motion estimation and motion compensation (MEMC) is a video processing technique that calculates motion vectors for corresponding locations between two consecutive video frames and generates an additional estimated frame to increase the picture update rate. MEMC algorithms are now widely used in video display, game animation, and other fields to create a smoother visual experience. However, current MEMC algorithms perform less than ideally on game animation, particularly on frames that combine a main scene with a user interface. This is because, unlike the main scene of a game, the user interface is typically a static picture without motion and therefore requires no interpolation or extrapolation.
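A toy one-dimensional version of the two MEMC stages may help: motion estimation searches for the shift that best aligns two consecutive frames, and motion compensation applies half of that shift to synthesize the in-between frame. The exhaustive search and all names are illustrative assumptions, not the patent's algorithm:

```python
# Stage 1: estimate the motion vector (integer shift) between frames.
def estimate_motion(prev, curr, max_shift=3):
    best, best_err = 0, float("inf")
    n = len(prev)
    for s in range(-max_shift, max_shift + 1):
        # sum of squared differences after shifting curr by s
        err = sum((curr[(i + s) % n] - prev[i]) ** 2 for i in range(n))
        if err < best_err:
            best, best_err = s, err
    return best

# Stage 2: compensate by moving prev half the vector toward curr.
def compensate(prev, shift):
    n = len(prev)
    half = shift // 2
    return [prev[(i - half) % n] for i in range(n)]

prev = [0, 0, 9, 0, 0, 0]
curr = [0, 0, 0, 0, 9, 0]       # the object moved 2 pixels to the right
mv = estimate_motion(prev, curr)
mid = compensate(prev, mv)      # interpolated frame: object at index 3
```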
As mentioned above, it is difficult to apply the MEMC algorithm to a frame containing both a dynamic background (the main scene) and a fixed user interface. To apply the MEMC algorithm in this case, the user interface needs to be separated from the background in advance, and this approach may involve modifying the game cycle or the game engine. Modifying the game engine would be an arduous task for a game studio with limited development capacity, so a viable solution to this problem is needed.
Disclosure of Invention
Accordingly, it is an object of the present invention to provide a method of modifying a game cycle to separate the main scene from the user interface, thereby enabling the MEMC algorithm to achieve the desired effect on the game screen.
An embodiment of the invention discloses an image processing method for a game cycle of a game. The game cycle includes a game rendering module and a motion compensation module, which are executed by one or more processing units to generate an output image. The image processing method comprises: rendering, by the game rendering module, a scene of the game to obtain a first image; rendering, by the game rendering module, a user interface to obtain a second image; motion compensating the first image by the motion compensation module to generate an interpolated first image; and synthesizing, by the motion compensation module, the second image and the interpolated first image for displaying the output image.
An embodiment of the invention also discloses an image processing apparatus, which comprises one or more processing units and a storage unit. The one or more processing units execute a game cycle of a game and generate an output image for display. The storage unit is coupled to the one or more processing units and stores the program code of the game cycle, wherein the game cycle comprises a game rendering module and a motion compensation module and instructs the one or more processing units to execute an image processing method. The image processing method comprises: rendering, by the game rendering module, a scene of the game to obtain a first image; rendering, by the game rendering module, a user interface to obtain a second image; motion compensating the first image by the motion compensation module to generate an interpolated first image; and synthesizing, by the motion compensation module, the second image and the interpolated first image for displaying the output image.
Drawings
Fig. 1 is a schematic diagram of a system applying the MEMC algorithm to a game cycle according to a conventional method.
FIG. 2 is a schematic illustration of a main scene image, a user interface image, and a blended image.
Fig. 3 is a schematic diagram of an image processing system according to an embodiment of the invention.
Fig. 4 is a flowchart of an image processing method according to an embodiment of the invention.
Fig. 5 is a schematic diagram of a logic architecture of an image processing system according to an embodiment of the invention.
Fig. 6 is a flowchart of an image processing method according to an embodiment of the invention.
Fig. 7 is a schematic diagram of a logic architecture of an image processing system according to an embodiment of the invention.
FIG. 8 is a pseudo-code diagram of an OpenGL example of a conventional method.
Fig. 9 is a pseudo code diagram of an OpenGL example according to an embodiment of the present invention.
Fig. 10 is a pseudo code diagram of an OpenGL example according to an embodiment of the present invention.
Detailed Description
Certain terms are used throughout the description and the following claims to refer to particular components. As will be appreciated by those of ordinary skill in the art, hardware manufacturers may refer to the same component by different names. This specification and the following claims do not distinguish between components that differ in name but not in function. In the following description and claims, the terms "include" and "comprise" are used in an open-ended fashion and should be interpreted to mean "include, but not limited to". In addition, the term "coupled" as used herein includes any direct or indirect electrical connection. Thus, if a first device is coupled to a second device, the connection may be a direct electrical connection, or an indirect electrical connection via other devices and connections.
Referring to fig. 1, fig. 1 is a schematic diagram of a system 1 for processing a game. The system 1 applies the MEMC algorithm to the game by a conventional method and includes a processing unit 10, a graphics processing unit (GPU) 12, a MEMC chip 14, and a display unit 16. The processing unit 10 executes the game cycle and instructs the graphics processor 12 to render the images required by the game cycle. In particular, the processing unit 10 instructs the graphics processor 12 to draw an image of a game scene and an image of the user interface; the graphics processor 12 then renders the two images as instructed and blends the image of the user interface with the image of the scene to generate a blended image. Referring to FIG. 2, a schematic diagram of a main scene image 20, a user interface image 22, and a blended image 24 of a game processed by the system 1 is shown. In fig. 2, the main scene image 20 is obtained by 2D or 3D rendering; it is one of a plurality of consecutive video frames constituting the dynamic scene of the game, and its content is continuously updated as the game progresses. The user interface image 22 is an image with opacity that provides a user interface for adjusting the game configuration; compared with the main scene image 20, the user interface image 22 typically has static content. The blended image 24 is the main scene image 20 and the user interface image 22 synthesized by alpha blending (transparency blending) and is the final image presented to the user. In other words, the graphics processor 12 renders the main scene image 20 and the user interface image 22 according to the instructions of the processing unit 10 and generates the blended image 24 from them.
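The alpha blending step described above can be sketched per pixel as out = α·ui + (1 − α)·scene. A minimal grayscale sketch; all pixel and alpha values are illustrative:

```python
# Per-pixel alpha blending of a UI overlay onto the main scene.
def alpha_blend(scene, ui, alpha):
    # scene, ui: lists of gray values; alpha: per-pixel UI opacity in [0, 1]
    return [
        round(a * u + (1 - a) * s)
        for s, u, a in zip(scene, ui, alpha)
    ]

scene = [100, 100, 100, 100]    # dynamic main scene pixels
ui    = [255, 255,   0,   0]    # UI overlay pixels
alpha = [1.0, 0.5, 0.0, 0.0]    # opaque, translucent, fully transparent
blended = alpha_blend(scene, ui, alpha)
```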
The MEMC chip 14 then performs the MEMC algorithm directly on the blended image 24 to generate an interpolated frame that is output to the display unit 16.
In the system 1, the game cycle is typically run as a game application on the processing unit 10. With the existing development tools provided by game engines, game studios can efficiently develop game applications without concerning themselves with the details of screen display algorithms such as computer graphics and visual optimization. In this case, the MEMC chip 14 can only obtain the blended image 24, not the main scene image 20 and the user interface image 22 separately; as a result, the MEMC algorithm can only be performed on the superimposed image (i.e., the blended image 24) obtained from the graphics processor 12, which distorts the user interface components in the user interface image 22.
In view of this, the present invention proposes an image processing system and an image processing method for executing the MEMC algorithm in the game cycle of a game. The image processing method solves the above problem by making small modifications to the game cycle, so that the main scene image 20 with dynamic content and the user interface image 22 with static content can be processed separately before being synthesized into the blended image 24. The image processing method of the invention not only prevents distortion and deformation of the user interface components after the MEMC algorithm is applied, but also avoids guessing the image content covered by the user interface with artificial intelligence techniques such as machine learning. At the same time, for game studios with limited development capacity, the effect of the MEMC algorithm can be greatly enhanced by such small modifications to the game cycle. The image processing method of the present invention can be applied to various game engines, such as OpenGL or Vulkan; the embodiments of the present invention are described by taking OpenGL as an example, but are not limited thereto.
Referring to fig. 3, fig. 3 is a schematic diagram of an image processing system 3 according to an embodiment of the invention. The image processing system 3 includes a first processing unit 32, a second processing unit 30, a storage unit 34, and a display unit 36. The first processing unit 32 is coupled to the second processing unit 30 and the display unit 36, and the second processing unit 30 is coupled to the first processing unit 32 and the storage unit 34. Either of the first processing unit 32 and the second processing unit 30 may be, but is not limited to, a general-purpose processor, a microprocessor, or an application-specific integrated circuit (ASIC). In particular, the first processing unit 32 may be a graphics processing unit (GPU) for performing computer graphics and image processing. The storage unit 34 may be any type of storage device for storing the program code 340 to be read and executed by the second processing unit 30; for example, it may be a read-only memory (ROM), a flash memory, a random access memory (RAM), a hard disk (HDD), an optical data storage device, or another nonvolatile storage unit, without being limited thereto. The display unit 36 may be a liquid crystal display (LCD), a light-emitting diode (LED) display, or an organic light-emitting diode (OLED) display provided in a television, a personal computer (PC), a notebook computer, a mobile phone, or another mobile device, without being limited thereto. The image processing system 3 only illustrates the components necessary to implement the embodiments of the present invention, and those of ordinary skill in the art may make various modifications and adaptations as required. For example, the first processing unit 32, the second processing unit 30, and the storage unit 34 may together be defined as an image processing apparatus that cooperates with one or more display units.
The image processing method executed by the image processing system 3 applies the MEMC algorithm to a game cycle; it may be compiled into the program code 340 and stored in the storage unit 34 to be executed by the second processing unit 30. In this embodiment, the stage of drawing the game content in the game cycle may be divided into two parts: the game rendering module and the motion compensation module. The game rendering module mainly comprises instructions provided by the game engine and is responsible for rendering the frame content to be displayed according to the game progress; the motion compensation module comprises the instructions for implementing the MEMC algorithm on the first processing unit 32.
The game rendering module instructs the first processing unit 32, via instructions provided by the game engine, to render the game frames, such as the main scene image 20 and the user interface image 22. The motion compensation module is configured to apply the MEMC algorithm to the images rendered by the first processing unit 32. The motion compensation module implements the software logic of the MEMC algorithm and uses the internal format and internal information of the first processing unit 32 to accelerate its processing; for example, the first processing unit 32 performs the motion estimation and motion compensation calculations to generate an interpolated image of the main scene image 20. The motion compensation module may also instruct the first processing unit 32 to synthesize the user interface image 22 with the interpolated image of the main scene image 20. The image processing method can be summarized as flow 4, as shown in fig. 4. Flow 4 comprises the following steps:
step 401: the game rendering module instructs the first processing unit 32 to render the main scene of the game to obtain the main scene image 20.
Step 402: the game rendering module instructs the first processing unit 32 to render the user interface to obtain the user interface image 22.
Step 403: the game rendering module communicates the main scene image 20 and the user interface image 22 to the motion compensation module.
Step 404: the motion compensation module instructs the first processing unit 32 to perform a MEMC algorithm on the main scene image 20 to generate an interpolated main scene image 20'.
Step 405: the motion compensation module instructs the first processing unit 32 to synthesize the user interface image 22 and the interpolated main scene image 20' as output images.
Step 406: the output image is displayed by the display unit 36.
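The steps above can be sketched as a single pipeline. The function names are hypothetical stand-ins for the patent's modules, and the interpolation is reduced to a numeric average purely for illustration:

```python
# Toy model of flow 4: scene and UI go to separate buffers, MEMC runs
# on the scene only, and the untouched UI is composited back on top.

def render_scene(frame):
    return {"kind": "scene", "frame": frame}

def render_ui():
    return {"kind": "ui", "frame": "static"}   # UI content does not move

def memc_interpolate(prev_scene, curr_scene):
    # stand-in for motion estimation + compensation on the main scene
    mid = (prev_scene["frame"] + curr_scene["frame"]) / 2
    return {"kind": "scene", "frame": mid}

def composite(interp_scene, ui):
    return (interp_scene["frame"], ui["frame"])

def flow4(prev_frame, curr_frame):
    prev_scene = render_scene(prev_frame)              # step 401 (previous tick)
    curr_scene = render_scene(curr_frame)              # step 401 (current tick)
    ui = render_ui()                                   # step 402
    interp = memc_interpolate(prev_scene, curr_scene)  # steps 403-404
    return composite(interp, ui)                       # step 405
```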
Referring to fig. 5, fig. 5 is a schematic diagram of a logic architecture 5 of the image processing system 3 according to an embodiment of the invention. In the logic architecture 5, the game rendering module 50 and the motion compensation module 52 are executed by the second processing unit 30 in the game cycle and instruct the graphics processor 54 (i.e., the first processing unit 32 in fig. 3) to perform the game rendering and the MEMC algorithm to generate an interpolated image for display on the display unit 56. According to flow 4, the game rendering module 50 first instructs the graphics processor 54 to render the game scene and the user interface to obtain the main scene image 20 and the user interface image 22 (steps 401, 402). The game rendering module 50 then communicates the obtained main scene image 20 and user interface image 22 to the motion compensation module 52 for subsequent processing (step 403). The motion compensation module 52 instructs the graphics processor 54 to perform motion estimation and motion compensation on the main scene image 20 to generate an interpolated main scene image 20' (step 404), and finally instructs the graphics processor 54 to synthesize the user interface image 22 and the interpolated main scene image 20' into an output image (step 405) and display it on the display unit 56 (step 406). Accordingly, the static content in the user interface image 22 is prevented from being distorted by the execution of the MEMC algorithm.
Generally, the MEMC algorithm can be divided into two stages: in the first stage, motion estimation is performed on two temporally consecutive images to obtain motion data, such as motion vectors or a motion map; in the second stage, motion compensation is performed according to the motion data to generate an interpolated image between the two consecutive images. The interpolated image increases the update rate of the displayed picture and yields a smoother visual experience. Unlike the method of calculating the motion data with the graphics processor 54, this embodiment of the present invention provides a method that utilizes motion data provided by the game rendering module 50. The image processing method of this embodiment can be compiled into the program code 340 and summarized as flow 6, as shown in fig. 6. Flow 6 comprises the following steps:
step 601: the game rendering module generates motion data and instructs the first processing unit 32 to generate a motion vector map from the motion data.
Step 602: the game rendering module instructs the first processing unit 32 to render the main scene of the game to obtain the main scene image 20.
Step 603: the game rendering module instructs the first processing unit 32 to render the user interface to obtain the user interface image 22.
Step 604: the game rendering module communicates the main scene image 20, the user interface image 22, and the motion vector map to the motion compensation module.
Step 605: the motion compensation module instructs the first processing unit 32 to perform a motion compensation algorithm on the main scene image 20 based on the motion vector map to generate an interpolated main scene image 20'.
Step 606: the motion compensation module instructs the first processing unit 32 to synthesize (blend) the user interface image 22 and the interpolated main scene image 20' as output images.
Step 607: the output image is displayed by the display unit 36.
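The key difference from flow 4 can be sketched as follows: because the engine already knows how objects moved, it supplies the motion vector map directly and only the compensation step remains. This toy one-dimensional forward warp is an illustrative assumption, not the patent's implementation:

```python
# Motion compensation with an engine-supplied motion vector map:
# each moving pixel is shifted by half its reported vector to land
# at its midpoint position in the interpolated frame.

def compensate_with_map(scene, motion_map):
    n = len(scene)
    out = [0] * n
    for i in range(n):
        if scene[i]:  # toy forward warp: only move non-background pixels
            out[(i + motion_map[i] // 2) % n] = scene[i]
    return out

scene      = [0, 9, 0, 0, 0]
motion_map = [0, 4, 0, 0, 0]   # engine reports: this object moves 4 right
mid = compensate_with_map(scene, motion_map)   # object lands at index 3
```

No search over candidate shifts is needed here, which is why engine-provided motion data can be both cheaper and more accurate than estimated motion data.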
Referring to fig. 7, fig. 7 is a schematic diagram of a logic architecture 7 of the image processing system 3 according to an embodiment of the invention. In the logic architecture 7, the game rendering module 50 and the motion compensation module 52 are executed by the second processing unit 30 in the game cycle and instruct the graphics processor 54 (i.e., the first processing unit 32 in fig. 3) to perform the game rendering and the MEMC algorithm to generate an interpolated image for display on the display unit 56. According to flow 6, in addition to instructing the graphics processor 54 to render the game scene and the user interface to obtain the main scene image 20 and the user interface image 22 (steps 602, 603), the game rendering module 50 provides motion data (step 601), which may be generated by the game engine. The game rendering module 50 may instruct the graphics processor 54 to convert the motion data into a motion vector map in the format it uses. The game rendering module 50 then transmits the obtained main scene image 20, the user interface image 22, and the motion vector map to the motion compensation module 52 for subsequent processing (step 604). The motion compensation module 52 instructs the graphics processor 54 to perform motion compensation on the main scene image 20 according to the motion vector map to generate an interpolated main scene image 20' (step 605), and finally instructs the graphics processor 54 to synthesize the user interface image 22 and the interpolated main scene image 20' into an output image by alpha blending (step 606) and display it on the display unit 56 (step 607). The motion data provided by the game rendering module 50 is more accurate than motion data obtained by motion estimation performed on the graphics processor 54, so better MEMC results can be obtained.
Accordingly, the present invention solves the problem of distortion and deformation of user interface components caused by the execution of the MEMC algorithm by making small modifications to the game cycle. It should be noted that figs. 3-7 illustrate embodiments of the present invention and show the basic method by which two processing units cooperate to apply the MEMC algorithm to a game cycle. The details of rendering and operation between the two processing units are not limited thereto, and those skilled in the art may make appropriate changes or modifications according to different requirements, software implementations, hardware architectures, or levels of graphics acceleration support. For example, as described above, the image processing method of the present invention can be applied to the OpenGL game engine. Specifically, an example 8 of OpenGL pseudo code is illustrated in fig. 8, which shows the flow of the system 1 applying the MEMC algorithm to the game cycle according to the conventional method.
In the game cycle of FIG. 8, the game application repeatedly instructs the graphics processor 12 to draw and synthesize the main scene image 20 and the user interface image 22 (step 804). In this step, the frame content of the main scene may first be drawn into the FrameBuffer (defined by OpenGL), and the frame content of the user interface may then be drawn directly into the same FrameBuffer. The blended image 24 is then transferred by the graphics processor 12 to the MEMC chip 14, which automatically executes the MEMC algorithm (step 806), and the result is finally displayed on the display unit 16. In this conventional approach, the output screen suffers from distortion of the user interface components in the user interface image 22.
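Why running MEMC on the composite distorts the UI can be shown with a toy one-dimensional example: a motion-compensation shift applied after blending drags the static UI pixel along, while compensating the scene before blending leaves it in place. All values and the wrap-around shift are illustrative:

```python
# Compare the two orderings: blend-then-compensate (conventional)
# versus compensate-then-blend (the separated pipeline).

def shift(img, d):
    # global motion-compensation shift by d pixels (wrap-around)
    n = len(img)
    return [img[(i - d) % n] for i in range(n)]

def blend(scene):
    # paint a static UI pixel (value 99) over the scene at index 0
    out = list(scene)
    out[0] = 99
    return out

scene = [1, 2, 3, 4, 5, 6]

# conventional path: blend first, then compensate the composite
distorted = shift(blend(scene), 1)   # UI pixel 99 is dragged to index 1

# separated path: compensate the scene only, then blend the UI back
correct = blend(shift(scene, 1))     # UI pixel 99 stays at index 0
```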
Unlike the system 1 using the conventional method, the image processing system 3 of the present invention provides a different architecture (as shown in figs. 5 and 7) and method of applying the MEMC algorithm. Specifically, fig. 9 is an example 9 of OpenGL pseudo code, which illustrates a game cycle based on the image processing method of flow 4. In the game cycle of fig. 9, the MEMC algorithm is applied only to the main scene image 20, whose motion data is calculated by the first processing unit 32.
In the game cycle of fig. 9, the game rendering module 50 repeatedly instructs the graphics processor 54 to render the game scene and the user interface to obtain the main scene image 20 and the user interface image 22 (step 906), which are then passed to the motion compensation module 52 (step 908) (i.e., a call to the MEMC function library). Note that in this embodiment the MEMC algorithm is performed only on the main scene image 20; accordingly, separate FrameBuffers (fboMainScene, fboUI) must be allocated for the main scene image 20 and the user interface image 22 (step 902). The motion compensation module 52 is responsible for instructing the graphics processor 54 to perform motion estimation and motion compensation on the main scene image 20 to obtain an interpolated main scene image 20', and for instructing the graphics processor 54 to synthesize the user interface image 22 and the interpolated main scene image 20' (steps 910, 912).
Fig. 10 is another example 10 of OpenGL pseudo code, illustrating a game cycle based on the image processing method of flow 6. In the game cycle of fig. 10, the MEMC algorithm is applied only to the main scene image 20, and its motion data is provided by the second processing unit 30 (i.e., by the game rendering module 50).
In the game cycle of fig. 10, the game rendering module 50 repeatedly instructs the graphics processor 54 to render the game scene and the user interface to obtain the main scene image 20 and the user interface image 22 (step 1008), and further provides the motion data of the main scene image 20 to the graphics processor 54 to generate a motion vector map (step 1006). The main scene image 20, the user interface image 22, and the motion vector map are then transferred to the motion compensation module 52 (step 1010) (i.e., a call to the MEMC function library). As in the previous embodiment, the MEMC algorithm is performed only on the main scene image 20; accordingly, separate FrameBuffers (fboMainScene, fboUI) must be allocated for the main scene image 20 and the user interface image 22 (step 1002). Note that since the game rendering module 50 instructs the graphics processor 54 to generate the motion vector map, another FrameBuffer (fboMotion) is allocated to store the motion data required for subsequent processing. The motion compensation module 52 is responsible for instructing the graphics processor 54 to perform motion compensation on the main scene image 20 according to the motion vector map to obtain an interpolated main scene image 20', and for instructing the graphics processor 54 to synthesize the user interface image 22 and the interpolated main scene image 20' (steps 1012, 1014).
The game cycles in examples 8-10 illustrate how the stage of drawing game content in a game cycle can be decomposed, and how the MEMC algorithm can easily be applied by modifying the game cycle. With the image processing method of the present invention, the game studio only needs to draw the main scene and the user interface into different Textures (objects defined by OpenGL and attached to FrameBuffers) and transmit them to the motion compensation module 52, without considering the logic of applying the MEMC algorithm. The method of modifying the game cycle is not limited to OpenGL; the concept can also be applied to any existing game engine.
In summary, the present invention provides an image processing method and an image processing apparatus that solve the problem of distortion and deformation of the user interface caused by the MEMC algorithm by slightly modifying the game cycle. By separating the user interface from the output image, there is no need to guess the hidden content of the main scene obscured by the user interface, so a more ideal MEMC effect is achieved.
The foregoing description of the preferred embodiments of the invention is merely exemplary of the invention, and it is intended to cover modifications and variations of the invention as fall within the scope of the invention.
Symbol description
1: system and method for controlling a system
10: processing unit
12: graphics processor
14: MEMC chip
16: display unit
20: main scene image
22: user interface image
24: hybrid image
3: image processing system
30: Second processing unit
32: First processing unit
34: storage unit
340: program code
36: display unit
4: process flow
401-406: step (a)
5: logic architecture for image processing system
50: game drawing module
52: dynamic compensation module
54: graphics processor
56: display unit
20': interpolated primary scene images
6: process flow
601-607: step (a)
7: logic architecture for image processing system
Claims (18)
1. An image processing method for a game loop of a game, the game loop comprising a game rendering module and a motion compensation module and being executed by one or more processing units to generate an output image, the image processing method comprising:
rendering, by the game rendering module, a scene of the game to obtain a first image;
rendering, by the game rendering module, a user interface to obtain a second image;
motion compensating, by the motion compensation module, the first image to generate an interpolated first image; and
synthesizing, by the motion compensation module, the second image and the interpolated first image for display as the output image.
2. The image processing method of claim 1, wherein one of the one or more processing units is a graphics processor.
3. The image processing method of claim 1, wherein the first image is obtained by 2D or 3D rendering of the scene of the game.
4. The image processing method of claim 1, wherein the second image is an image having opacity.
5. The image processing method of claim 1, wherein motion compensating, by the motion compensation module, the first image to generate the interpolated first image comprises:
performing, by the motion compensation module, motion compensation according to motion data and the first image to generate the interpolated first image.
6. The image processing method of claim 5, wherein the motion data is obtained by performing motion estimation.
7. The image processing method of claim 5, wherein the motion data is provided by the game rendering module or the motion compensation module.
8. The image processing method of claim 5, wherein the motion data comprises a plurality of motion vectors of the first image.
9. The image processing method of claim 1, wherein synthesizing, by the motion compensation module, the second image and the interpolated first image for display as the output image comprises synthesizing the second image and the interpolated first image by performing transparency blending.
10. An image processing apparatus comprising:
one or more processing units for executing a game loop of a game to generate an output image for display; and
a storage unit, coupled to the one or more processing units, for storing program code of the game loop, wherein the game loop comprises a game rendering module and a motion compensation module for instructing the one or more processing units to execute an image processing method, the image processing method comprising:
rendering, by the game rendering module, a scene of the game to obtain a first image;
rendering, by the game rendering module, a user interface to obtain a second image;
motion compensating, by the motion compensation module, the first image to generate an interpolated first image; and
synthesizing, by the motion compensation module, the second image and the interpolated first image for display as the output image.
11. The image processing apparatus of claim 10, wherein one of the one or more processing units is a graphics processor.
12. The image processing apparatus of claim 10, wherein the first image is obtained by 2D or 3D rendering of the scene of the game.
13. The image processing apparatus of claim 10, wherein the second image is an image having opacity.
14. The image processing apparatus of claim 10, wherein motion compensating, by the motion compensation module, the first image to generate the interpolated first image comprises:
performing, by the motion compensation module, motion compensation according to motion data and the first image to generate the interpolated first image.
15. The image processing apparatus of claim 14, wherein the motion data is obtained by performing motion estimation.
16. The image processing apparatus of claim 14, wherein the motion data is provided by the game rendering module or the motion compensation module.
17. The image processing apparatus of claim 14, wherein the motion data comprises a plurality of motion vectors of the first image.
18. The image processing apparatus of claim 10, wherein synthesizing, by the motion compensation module, the second image and the interpolated first image for display as the output image comprises synthesizing the second image and the interpolated first image by performing transparency blending.
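Claims 9 and 18 specify transparency blending as the synthesis step. A minimal, hypothetical per-pixel sketch of such blending is given below, where the second image (the user interface) carries an alpha channel and is blended over the interpolated first image; the function name and array layout are illustrative only.

```python
import numpy as np

def transparency_blend(interpolated_first, second_rgba):
    """Per-pixel alpha blend: out = alpha * UI_rgb + (1 - alpha) * scene_rgb.

    interpolated_first: (H, W, 3) RGB scene image.
    second_rgba:        (H, W, 4) RGBA user-interface image.
    """
    rgb = second_rgba[..., :3]
    alpha = second_rgba[..., 3:4]      # keep last axis for broadcasting
    return alpha * rgb + (1.0 - alpha) * interpolated_first
```

Where the UI is fully opaque (alpha = 1) the output shows only the UI; where it is fully transparent (alpha = 0) the interpolated scene shows through unchanged.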
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US63/334,312 | 2022-04-25 | ||
US18/135,751 | 2023-04-18 | ||
US18/135,751 US20230338843A1 (en) | 2022-04-25 | 2023-04-18 | Image Processing Device and Image Processing Method for Game Loop |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116954785A true CN116954785A (en) | 2023-10-27 |
Family
ID=88453712
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310457927.7A Pending CN116954785A (en) | 2022-04-25 | 2023-04-25 | Image processing apparatus and image processing method for game cycle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116954785A (en) |
- 2023-04-25 CN CN202310457927.7A patent/CN116954785A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111161392B (en) | Video generation method and device and computer system | |
US20140152676A1 (en) | Low latency image display on multi-display device | |
EP2245598B1 (en) | Multi-buffer support for off-screen surfaces in a graphics processing system | |
US8253722B2 (en) | Method, medium, and system rendering 3D graphics data to minimize power consumption | |
KR20100004119A (en) | Post-render graphics overlays | |
CN112862659B (en) | Method and device for generating a series of frames by means of a synthesizer | |
US20040085310A1 (en) | System and method of extracting 3-D data generated for 2-D display applications for use in 3-D volumetric displays | |
JP4742051B2 (en) | Spatial and temporal motion blur effect generation method | |
WO2017122092A1 (en) | Method and system for high-performance real-time adjustment of one or more elements in a playing video, interactive 360° content or image | |
CN116091329B (en) | Image processing method, device, equipment and storage medium | |
CN115546410A (en) | Window display method and device, electronic equipment and storage medium | |
CN106886974B (en) | Image accelerator apparatus and related methods | |
CN114387914A (en) | System and display device for high dynamic range post-processing | |
CN116954785A (en) | Image processing apparatus and image processing method for game cycle | |
US20230338843A1 (en) | Image Processing Device and Image Processing Method for Game Loop | |
CN117115276B (en) | Picture processing method, device and storage medium | |
CN117453170B (en) | Display control method, device and storage medium | |
WO2024087971A1 (en) | Method and apparatus for image processing, and storage medium | |
Butz et al. | Lean Modeling: The Intelligent Use of Geometrical Abstraction in 3D Animations. | |
CN117156059A (en) | Image processing method, electronic device, and readable storage medium | |
WO2024091613A1 (en) | Method and system for ray tracing | |
JP2007025861A (en) | Virtual reality system and method, and interpolation image generation device and method | |
CN117710548A (en) | Image rendering method and related equipment thereof | |
JP2020135004A (en) | Image processing device and program | |
GB2448717A (en) | Three-dimensional rendering engine |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||