CN111729304B - Method for displaying mass objects - Google Patents

Method for displaying mass objects

Info

Publication number
CN111729304B
CN111729304B CN202010457293.1A
Authority
CN
China
Prior art keywords
image
coordinates
atlas
model
objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010457293.1A
Other languages
Chinese (zh)
Other versions
CN111729304A (en)
Inventor
郭耀琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Zunyou Software Technology Co ltd
Original Assignee
Guangzhou Zunyou Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Zunyou Software Technology Co ltd filed Critical Guangzhou Zunyou Software Technology Co ltd
Priority to CN202010457293.1A priority Critical patent/CN111729304B/en
Publication of CN111729304A publication Critical patent/CN111729304A/en
Application granted granted Critical
Publication of CN111729304B publication Critical patent/CN111729304B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/52Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/53Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5372Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation

Abstract

The invention discloses a method for displaying mass objects, comprising the following steps: S1, outputting an image of the action of the model; S2, outputting the image to an atlas corresponding to the model, and calculating the position information and size information of the image in the atlas; S3, calculating and storing the offset coordinates and texture coordinates of the image in the atlas; S4, counting the objects to be displayed and, for each object to be displayed, acquiring its identification information and the offset coordinates and texture coordinates of its corresponding model in the atlas, and calculating its real coordinates; S5, packaging the identification information, offset coordinates, texture coordinates and real coordinates obtained in step S4 according to the model corresponding to the object, sending them to a shader, and rendering the model with the shader. The method has the advantage that the objects to be displayed and their real coordinates are processed in batches before each frame is rendered, so that all instances of the same model share one DrawCall, reducing the number of DrawCall invocations.

Description

Method for displaying mass objects
Technical Field
The invention relates to the technical field of information, in particular to a method for smoothly displaying mass objects on a mobile terminal.
Background
Currently, in various games, displaying a large number of repeated models on the same screen is a common requirement; for example, characters, buildings and plants must be presented repeatedly to enrich or restore the game content. For mobile devices, the processor can only operate at low power consumption due to device size limitations. Most mobile phones can display at most a few hundred 3D models on screen; beyond that, the excessive computational load causes the device to heat up, the frame rate to drop, models to load slowly, and so on. To display more models, the prior art mainly uses the Unity3D game engine to display 2D pictures, replacing each 3D model with a frame animation of 2D pictures.
On this basis, taking Unity3D as an example, the CPU initiates a DrawCall command, which carries the data to be rendered and notifies the GPU to render, either directly or through a command buffer. As the number of models increases, both the number of DrawCalls and the performance loss of the animation component itself grow.
In order to reduce the number of DrawCalls, the closest patent application (application number 201810085651.3, entitled "Method for storing and rendering primitives in a game engine") proposes hierarchical batch rendering based on the front-to-back relationship among layers, thereby reducing DrawCall invocations during drawing; it also manually sets a large buffer so that, when an object is to be rendered, its vertex information is stored in the buffer, and when the buffer is filled or the texture memory of the texture resource manager is allocated, the vertices in the buffer are submitted to the renderer for drawing in one pass. That scheme achieves its display effect by adjusting the rendering order and the buffering mechanism, avoiding as far as possible the triggering of multiple DrawCalls when one model is rendered; however, it does not address reducing the computation of the program itself, in particular the vertex transformation computation required for real-time rendering of a large number of models.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention optimizes all vertex calculations and texture coordinate calculations of the frame animation, merges the DrawCalls of each model into a single DrawCall, and avoids a large amount of real-time and repeated CPU computation. To this end, the invention provides a method for displaying mass objects.
In order to solve the technical problems, the technical scheme of the invention is as follows:
a method of displaying a mass object, comprising the steps of:
s1, outputting an image of the action of the model;
s2, outputting the image to an atlas corresponding to the model, and calculating position information and size information of the image in the atlas;
s3, calculating and storing offset coordinates and texture coordinates of the image in the atlas;
s4, counting objects to be displayed, acquiring identification information of the objects and offset coordinates and texture coordinates of a model corresponding to the objects in a graph set for each object to be displayed, and calculating real coordinates;
s5, packaging and sending the identification information, the offset coordinates, the texture coordinates and the real coordinates of the object obtained in the step S4 to a shader according to the model corresponding to the object, and rendering the model by the shader.
The principle of the method is as follows: firstly, the model action is decomposed into per-frame images, which are output into an atlas; the images and their position information in the atlas are saved for subsequent reading; on the basis of each image and its position information, the offset coordinates and texture coordinates are calculated and likewise saved, completing the preparation work.
During gameplay, the objects to be displayed are first counted, the offset coordinates and texture coordinates prepared in advance are read directly, and the real coordinates of each object on the game-world map are calculated according to the specific process in the game (such as combat); then, before each frame is displayed, all relevant information of the objects is sent to the shader in one package, and the shader fetches the images from the atlas according to this information and renders each frame of the model's action on the map.
Further, in step S1, an animation of the model's action is first obtained, and the image of each frame in the animation is read and input into the atlas in sequence;
wherein, if an image exceeds the range of the atlas, a new atlas is created and the image is input into it, so that the model corresponds to a plurality of atlases.
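A minimal sketch of the overflow behavior described above, using a simple shelf packer (an illustrative sketch only — a production atlas tool such as TexturePacker packs far more efficiently; all names are assumptions, not part of the patent):

```python
def pack_frames(frame_sizes, atlas_w=1024, atlas_h=1024):
    """Place frames left-to-right in rows; when an atlas overflows,
    open a new one. Returns per-frame (atlas_index, x, y) placements."""
    placements, atlas_idx = [], 0
    x = y = row_h = 0
    for w, h in frame_sizes:
        if x + w > atlas_w:              # row full: start a new row
            x, y, row_h = 0, y + row_h, 0
        if y + h > atlas_h:              # atlas full: open a new atlas
            atlas_idx += 1
            x = y = row_h = 0
        placements.append((atlas_idx, x, y))
        x += w
        row_h = max(row_h, h)
    return placements
```

With five 512x512 frames and a 1024x1024 atlas, the first four frames fill atlas 0 and the fifth spills into atlas 1, matching the "image exceeds the range of the atlas" case above.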
Further, the size and position of each frame's image after it is input into the atlas can be customized.
Further, in step S1, the image is a 2D sequence-frame image of a model action, wherein the horizontal line through the exact center of the image is the ground line on which the model stands.
Further, the position information in step S2 is the coordinates of one corner of the image, and the specific process of calculating the offset coordinates in step S3 is as follows:
acquire the coordinates of one corner of the image, obtain the length and width of the image from the size information, and calculate the coordinates of the opposite corner of the image from its length and width.
Further, texture coordinates are calculated based on the image width, the image height, the atlas width, and the atlas height.
Further, the specific process of step S5 is:
according to the model corresponding to the object, the identification information, the offset coordinates, the texture coordinates and the real coordinates of the object belonging to the same model are output as respective rendering primitives and recorded in the parameters of a single DrawCall command;
when all objects have finished outputting rendering primitives and recording parameters, the DrawCall command and the rendering primitives are sent to the shader, and the shader reads the atlas of the model corresponding to the DrawCall command and renders each object using its corresponding rendering primitives.
Further, in step S5, the shader reads the atlas according to the identification information of the object, acquires the image to be displayed from the atlas according to the offset coordinate obtained in step S4, and renders the image to be displayed according to the real coordinate and the texture coordinate.
Further, the step S3 stores the offset coordinates and texture coordinates in xml or json format.
Further, the step S4 is triggered at every frame update.
Further, the atlas takes the same size, with 1024 pixels wide and 1024 pixels high.
Further, step S4 may be performed as a batch process, and step S5 is then triggered periodically according to a preset frame rate to send the identification information, offset coordinates, texture coordinates and real coordinates of the objects to the shader.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
Firstly, the image of each frame is extracted from the model's animation and input into an atlas, and the offset coordinates and texture coordinates are precomputed, so that the CPU does not need to perform vertex transformation and texture-switching calculations in real time; when the animation is played in real time, the engine's built-in 2D picture display component and animation component can be abandoned in favor of the ordinary model rendering component; meanwhile, the objects to be displayed and their real coordinates are processed in batches before each frame is rendered, so that all instances of the same model share one DrawCall, reducing the number of DrawCall invocations.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method of the present invention for presenting a mass object.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
it will be appreciated by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical scheme of the invention is further described below with reference to the accompanying drawings and examples.
A method for displaying mass objects: as shown in fig. 1, an image of the model's action is output first. An artist makes an animation of the model action, and the 2D sequence-frame images in the animation are input into an atlas in sequence through an atlas tool, where the horizontal line through the exact center of each 2D sequence-frame image is the ground line on which the model stands; if an image exceeds the range of the atlas, a new atlas is created and the image is input into it, so that the model corresponds to multiple atlases. Thus, an action of the model may correspond to one or more atlases, and the artist may set the size and position of each frame's image in the atlas.
In this embodiment, the atlas tool is TexturePacker for Unity3D, but the invention is not limited thereto; similar tools from other tool libraries may be applied as well.
In the present embodiment, the atlases adopt the same size, 1024 pixels wide and 1024 pixels high, but the invention is not limited thereto; using the same size is a preferred way of implementing the invention, and the choice of size and aspect ratio does not affect the effect of the invention.
Then, the image is output to the atlas corresponding to the model, the position information and size information of the image in the atlas are calculated, the offset coordinates and texture coordinates of the image in the atlas are calculated and stored, and the preparation of the atlas is complete.
Specifically, the texture coordinate calculation is exemplified as follows:
Assume that the lower-left corner of a certain action picture in the atlas has coordinates (200, 300), the picture is 20 wide and 30 high, the atlas size is 1024x1024, and the origin of the atlas coordinate system is the lower-left corner of the atlas;
the x coordinate of the picture's lower-left texture coordinate is: lower-left x / atlas width, i.e. 200/1024 = 0.1953;
the x coordinate of the picture's lower-right texture coordinate is: (lower-left x + picture width) / atlas width, i.e. (200+20)/1024 = 0.2148;
the y coordinate of the picture's lower-left texture coordinate is: lower-left y / atlas height, i.e. 300/1024 = 0.293;
the y coordinate of the picture's upper-right texture coordinate is: (lower-left y + picture height) / atlas height, i.e. (300+30)/1024 = 0.3223;
finally, the texture coordinates of the picture's lower-right, upper-right and upper-left corners are obtained: (0.2148, 0.293), (0.2148, 0.3223) and (0.1953, 0.3223), respectively.
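For illustration, the texture-coordinate calculation above can be expressed in a few lines of code (an illustrative sketch, not part of the patent text; the function name and default atlas size are assumptions):

```python
def texture_coords(x, y, w, h, atlas_w=1024, atlas_h=1024):
    """Return the four UV corners (lower-left, lower-right, upper-right,
    upper-left) of a picture whose lower-left corner in the atlas is (x, y),
    with the atlas coordinate origin at its lower-left corner."""
    u0, u1 = x / atlas_w, (x + w) / atlas_w
    v0, v1 = y / atlas_h, (y + h) / atlas_h
    return (u0, v0), (u1, v0), (u1, v1), (u0, v1)
```

Calling `texture_coords(200, 300, 20, 30)` reproduces the worked example: the lower-left UV is about (0.1953, 0.293) and the upper-right about (0.2148, 0.3223).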
In this embodiment, the position information is the coordinates of one corner of the image; the length and width of the image are obtained from the size information, and the coordinates of the opposite corner are calculated from them. The position information of the invention is not limited thereto and may be any other parameter that determines the position of the image, such as its center point; as long as the image remains a parallelogram, the corners can likewise be calculated from the image size.
Specifically, an example of calculating offset coordinates using position information is as follows:
Assume that the original picture is 200 wide and 150 high, the picture coordinate system takes the lower-left corner as the origin, the lower-left corner of the model's effective pixels is (80, 40), and the upper-right corner is (130, 120); the offset coordinates relative to the center of the original picture are obtained as follows:
the lower-left offset coordinate is: (lower-left x - (picture width / 2), lower-left y - (picture height / 2));
taking the above data as an example: 80 - (200/2) = -20 and 40 - (150/2) = -35, i.e. the lower-left offset coordinate is (-20, -35);
the upper-right offset coordinate is: (upper-right x - (picture width / 2), upper-right y - (picture height / 2));
taking the above data as an example: 130 - (200/2) = 30 and 120 - (150/2) = 45, i.e. the upper-right offset coordinate is (30, 45);
having obtained the lower-left and upper-right offset coordinates, the offset coordinates of the other two corners of the model picture are determined.
After the offset coordinates are determined, the image size can be controlled by multiplying them by different coefficients according to actual needs.
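The offset computation above, including the scaling coefficient, can be sketched as follows (an illustrative sketch; the function name and signature are assumptions, not part of the patent):

```python
def offset_coords(ll, ur, pic_w, pic_h, scale=1.0):
    """Offsets of the effective-pixel box corners relative to the picture
    center; `scale` multiplies the offsets to resize the rendered image."""
    cx, cy = pic_w / 2, pic_h / 2
    lo = ((ll[0] - cx) * scale, (ll[1] - cy) * scale)
    hi = ((ur[0] - cx) * scale, (ur[1] - cy) * scale)
    return lo, hi
```

With the example data (picture 200x150, effective pixels from (80, 40) to (130, 120)), this yields (-20, -35) and (30, 45).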
In this embodiment, the offset coordinates and texture coordinates are stored in xml or json format text, and may be stored in other ways, for example, in a database.
When a game is running and the animation of an object is to be played, the objects to be displayed are counted; for each object to be displayed, the offset coordinates and texture coordinates of the corresponding images are acquired before display starts, and the real coordinates are calculated. Before each frame is displayed, the offset coordinates, texture coordinates and real coordinates of each object are packed directly into data, such as an array, and sent to the shader for batch rendering, without real-time dynamic computation.
In this embodiment, each frame of a certain action of an object's model includes a front view and 8 directional views, where the 8 directional views are images obtained by rotating the model's facing angle in 45-degree steps (up, down, left, right, upper-left, lower-left, upper-right and lower-right) relative to the model facing the camera. Since an action is generally set to last no more than 2 seconds, no more than 10 key-frame pictures are output for the action in any one direction.
Finally, according to the model corresponding to each object, the identification information, offset coordinates, texture coordinates and real coordinates of the objects belonging to the same model are output as their respective rendering primitives and recorded in the parameters of a single DrawCall command;
when all objects have finished outputting rendering primitives and recording parameters, the DrawCall command and rendering primitives are sent to the shader; the shader reads the atlas of the model corresponding to the DrawCall command, acquires the image to be displayed from the atlas according to the offset coordinates in the rendering primitives, places it at the corresponding screen position according to the real coordinates, and finally renders the image to be displayed according to the texture coordinates.
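The per-frame batching in steps S4-S5 can be sketched as follows: objects are grouped by model, and each group becomes the parameter block of a single DrawCall (an illustrative sketch; the data shapes and names are assumptions, and the actual engine-side submission would go through Unity3D's rendering API rather than plain Python):

```python
def build_draw_calls(objects, frame_table):
    """Group per-object render data by model so each model needs one DrawCall.

    objects:     iterable of dicts with 'id', 'model', 'frame', 'world_pos'
    frame_table: {model: {frame: (offset, uv)}} precomputed offline
    """
    batches = {}
    for obj in objects:
        offset, uv = frame_table[obj["model"]][obj["frame"]]
        batches.setdefault(obj["model"], []).append(
            {"id": obj["id"], "offset": offset, "uv": uv, "pos": obj["world_pos"]})
    return batches  # len(batches) == number of DrawCalls this frame
```

The key point is that the number of DrawCalls equals the number of distinct models on screen, not the number of objects, which is where the saving over per-object submission comes from.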
The applicant has carried out an experimental comparison between the prior art and the scheme of the present invention, taking as an example the display of 4900 units in total in a 70x70 formation, with shadow special effects enabled, on the same PC; the specifics are as follows:
for the program based on the existing scheme, the 2D frame Animation is realized, a tool is written to process and import the key frame of art output, then the key frame is converted into an Animation Clip of the Unity3D, and the Animation Clip is displayed by a 2D picture display component and an Animation component of the Unity 3D.
When the number of units displayed on the PC reaches 2000, the existing scheme cannot reach 20 fps, placing a heavy burden on the device.
Also based on 2D frame animation but following the procedure of the present scheme, a frame rate of 120 fps is achieved when 4900 soldiers are displayed on the PC, and 40-50 fps on the mobile phone.
It is to be understood that the above examples of the present invention are provided by way of illustration only and not by way of limitation of the embodiments of the present invention. Other variations or modifications of the above teachings will be apparent to those of ordinary skill in the art. It is not necessary here nor is it exhaustive of all embodiments. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the invention are desired to be protected by the following claims.

Claims (9)

1. A method for displaying mass objects, comprising the following steps:
s1, outputting an image of the action of the model;
s2, outputting the image to an atlas corresponding to the model, and calculating position information and size information of the image in the atlas;
s3, calculating and storing offset coordinates and texture coordinates of the image in the atlas;
s4, counting objects to be displayed, acquiring identification information of the objects and offset coordinates and texture coordinates of a model corresponding to the objects in a graph set for each object to be displayed, and calculating real coordinates;
s5, packaging and transmitting the identification information, the offset coordinates, the texture coordinates and the real coordinates of the object obtained in the step S4 to a shader according to the model corresponding to the object, and rendering the model by the shader;
the specific process of step S5 is:
according to the model corresponding to the object, the identification information, the offset coordinates, the texture coordinates and the real coordinates of the object belonging to the same model are output as respective rendering primitives and recorded in the parameters of a single DrawCall command;
when all objects have finished outputting rendering primitives and recording parameters, the DrawCall command and the rendering primitives are sent to the shader, and the shader reads the atlas of the model corresponding to the DrawCall command and renders each object using its corresponding rendering primitives.
2. The method for displaying mass objects according to claim 1, wherein in the step S1, an animation of the motion of the model is first obtained, and an image of each frame in the animation is read and sequentially input into the atlas;
wherein, if an image exceeds the range of the atlas, a new atlas is created and the image is input into it, so that the model corresponds to a plurality of atlases.
3. The method for displaying mass objects according to claim 2, wherein the size and position of each frame's image after it is input into the atlas can be customized.
4. The method for displaying mass objects according to claim 1, wherein in step S1 the image is a 2D sequence-frame image of a model action, and the horizontal line through the exact center of the image is the ground line on which the model stands.
5. The method for displaying a mass object according to claim 1, wherein the position information in step S2 is coordinates of a corner of the image, and the specific process of calculating the offset coordinates in step S3 is as follows:
and acquiring the coordinates of one corner of the image, acquiring the length and the width of the image according to the size information, and calculating the coordinates of the other corner of the image opposite to the coordinates of the one corner of the image according to the length and the width of the image.
6. The method for displaying mass objects according to claim 1, wherein in step S3 the texture coordinates are calculated based on the image width, image height, atlas width and atlas height.
7. A method for displaying a mass object according to claim 1, wherein in step S5, the shader reads an atlas according to the identification information of the object, acquires the image to be displayed from the atlas according to the offset coordinates obtained in step S4, and renders the image to be displayed according to the real coordinates and the texture coordinates.
8. The method for displaying mass objects according to claim 1, wherein step S3 stores the offset coordinates and texture coordinates in xml or json format.
9. The method for displaying mass objects according to claim 1, wherein step S4 is triggered at every frame update.
CN202010457293.1A 2020-05-26 2020-05-26 Method for displaying mass objects Active CN111729304B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010457293.1A CN111729304B (en) 2020-05-26 2020-05-26 Method for displaying mass objects

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010457293.1A CN111729304B (en) 2020-05-26 2020-05-26 Method for displaying mass objects

Publications (2)

Publication Number Publication Date
CN111729304A (en) 2020-10-02
CN111729304B true CN111729304B (en) 2024-04-05

Family

ID=72647691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010457293.1A Active CN111729304B (en) 2020-05-26 2020-05-26 Method for displaying mass objects

Country Status (1)

Country Link
CN (1) CN111729304B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102985937A (en) * 2009-09-08 2013-03-20 文明帝国有限公司 Methods, computer program products, and systems for increasing interest in a massively multiplayer online game
CN104063424A (en) * 2014-05-30 2014-09-24 小米科技有限责任公司 Webpage picture displaying method and device
CN104392410A (en) * 2014-11-28 2015-03-04 北京搜狗科技发展有限公司 Method and equipment for integrating pictures in skin system and skin drawing method
CN106528174A (en) * 2016-11-25 2017-03-22 上海野火网络科技有限公司 Flash rendering method based on cocos2dx and rendering engine
CN106775225A (en) * 2016-12-02 2017-05-31 西安电子科技大学 The method that across document seamless roam browses PDF maps
CN106934397A (en) * 2017-03-13 2017-07-07 北京市商汤科技开发有限公司 Image processing method, device and electronic equipment
CN107085509A (en) * 2017-04-19 2017-08-22 腾讯科技(深圳)有限公司 A kind of processing method and terminal of the foreground picture in virtual scene
CN107789836A (en) * 2016-09-06 2018-03-13 盛趣信息技术(上海)有限公司 Implementation method and client of a kind of people of game on line thousand with screen
CN108196835A (en) * 2018-01-29 2018-06-22 东北大学 Pel storage and the method rendered in a kind of game engine
CN109045691A (en) * 2018-07-10 2018-12-21 网易(杭州)网络有限公司 A kind of the special efficacy implementation method and device of special efficacy object
CN110090440A (en) * 2019-04-30 2019-08-06 腾讯科技(深圳)有限公司 Virtual objects display methods, device, electronic equipment and storage medium
CN110263830A (en) * 2019-06-06 2019-09-20 北京旷视科技有限公司 Image processing method, device and system and storage medium
CN110647377A (en) * 2019-09-29 2020-01-03 上海沣沅星科技有限公司 Picture processing system, device and medium for human-computer interaction interface

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10218793B2 (en) * 2016-06-13 2019-02-26 Disney Enterprises, Inc. System and method for rendering views of a virtual space
CN107945112B (en) * 2017-11-17 2020-12-08 浙江大华技术股份有限公司 Panoramic image splicing method and device


Also Published As

Publication number Publication date
CN111729304A (en) 2020-10-02

Similar Documents

Publication Publication Date Title
WO2022110903A1 (en) Method and system for rendering panoramic video
US20230053462A1 (en) Image rendering method and apparatus, device, medium, and computer program product
US6222551B1 (en) Methods and apparatus for providing 3D viewpoint selection in a server/client arrangement
US6654020B2 (en) Method of rendering motion blur image and apparatus therefor
EP2058768A1 (en) Image viewer, image displaying method and information storage medium
KR960025239A (en) Texture mapping device and method
US20090262139A1 (en) Video image display device and video image display method
EP1732040A1 (en) Image processor, image processing method and information storage medium
CN110706326B (en) Data display method and device
CN110968962B (en) Three-dimensional display method and system based on cloud rendering at mobile terminal or large screen
KR940024617A (en) Image Creation Method, Image Creation Device and Home Game Machine
US7391417B2 (en) Program and image processing system for rendering polygons distributed throughout a game space
JP2004213641A (en) Image processor, image processing method, information processor, information processing system, semiconductor device and computer program
KR100610689B1 (en) Method for inserting moving picture into 3-dimension screen and record medium for the same
CN109636885B (en) Sequential frame animation production method and system for H5 page
EP2065854A1 (en) Image processing device, control method for image processing device and information recording medium
CN114419099A (en) Method for capturing motion trail of virtual object to be rendered
CN111729304B (en) Method for displaying mass objects
JP2008027064A (en) Program, information recording medium, and image forming system
CN112700519A (en) Animation display method and device, electronic equipment and computer readable storage medium
JP2004178036A (en) Device for presenting virtual space accompanied by remote person's picture
JP2005346417A (en) Method for controlling display of object image by virtual three-dimensional coordinate polygon and image display device using the method
JP2010244450A (en) Image processor and image processing method
CN114419226A (en) Panorama rendering method and device, computer equipment and storage medium
JP2007241868A (en) Program, information storage medium, and image generation system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant