CN106296786A - Method and device for determining the visible region of a game scene - Google Patents

Method and device for determining the visible region of a game scene

Info

Publication number
CN106296786A
CN106296786A (application CN201610649889.5A)
Authority
CN
China
Prior art keywords
view
scene
game
camp
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610649889.5A
Other languages
Chinese (zh)
Other versions
CN106296786B (en)
Inventor
刘羽 (Liu Yu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN201610649889.5A priority Critical patent/CN106296786B/en
Publication of CN106296786A publication Critical patent/CN106296786A/en
Application granted granted Critical
Publication of CN106296786B publication Critical patent/CN106296786B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/80 Shading

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a method and device for determining the visible region of a game scene. The method includes: creating a first view from a game scene map, where the first view describes the view-occluding regions in the game scene map; obtaining the field-of-view information of the scene elements in the first view and the visible range of a game character; and determining the visible region in the first view from the field-of-view information and the visible range. The invention solves the technical problem in the related art that the visible region of a game scene under a two-dimensional fog of war is not combined with the terrain of the game scene map, which degrades the user's game experience.

Description

Method and device for determining the visible region of a game scene
Technical field
The present invention relates to the field of computers, and in particular to a method and device for determining the visible region of a game scene.
Background art
Fog of War traditionally refers to the situation in war where, because information about the enemy is unclear, the state of most areas outside the friendly region, such as the distribution and activity of the enemy, cannot be confirmed. Nowadays in games, especially real-time strategy games and multiplayer online battle arena (MOBA) games, the term appears frequently and is known to more and more players. In a Fog of War game, the player can only observe a very small range around his own base and units, while most of the map area is covered by black fog. When friendly units move through a dark area, the regions they pass are automatically revealed, so the map gradually becomes visible; what is revealed can include, but is not limited to, the terrain of the region and the enemy activity in it.
At present, few implementations of Fog of War are documented in the related art. They generally use a screen-mask approach, i.e. a two-dimensional fog image is blended onto the two-dimensional screen. Fig. 1 is a schematic diagram of black fog appearing in a game display according to the related art. As shown in Fig. 1, the fog pattern is usually generated by a map-stitching method: the map is divided into a grid according to its size, the visible region is then computed from the character positions and the view distance, the visible region is stitched into a texture, and the texture is further blended with the map base image. Although this approach is computationally simple and fast, the generated two-dimensional fog lacks depth and its transitions are stiff. Most importantly, the two-dimensional fog is not combined with the map terrain. For example, in a MOBA map, grass, mountains and trees can block the view, and a unit standing in grass can see through that grass. Such features would give players a more complete game experience, but they cannot be represented in current two-dimensional fog.
No effective solution to the above problem has yet been proposed.
Summary of the invention
Embodiments of the present invention provide a method and device for determining the visible region of a game scene, so as at least to solve the technical problem in the related art that the visible region of a game scene under a two-dimensional fog of war is not combined with the terrain of the game scene map, which degrades the user's game experience.
According to one aspect of the embodiments of the present invention, a method for determining the visible region of a game scene is provided, including: creating a first view from a game scene map, where the first view describes the view-occluding regions in the game scene map; obtaining the field-of-view information of the scene elements in the first view and the visible range of a game character; and determining the visible region in the first view from the field-of-view information and the visible range.
Optionally, creating the first view from the game scene map includes: determining the regions occupied by view-occluding models in the game scene map; and setting the parts of the first view corresponding to the regions occupied by the view-occluding models to a first color, and setting the remainder outside those parts to a second color.
Optionally, obtaining the field-of-view information of a scene element includes: obtaining the multiple camps configured in the game scene map; determining, from the multiple camps, the camps to which the scene element belongs; and determining the field-of-view information using the camps to which the scene element belongs.
Optionally, determining the camps to which the scene element belongs from the multiple camps includes: converting the hexadecimal parameter value configured in advance for each of the multiple camps into a binary parameter value; and, when a bitwise logical OR of the binary parameter values corresponding to some or all of the camps yields the binary parameter value of the scene element, determining those camps as the camps to which the scene element belongs.
Optionally, obtaining the visible range of the game character includes: obtaining the position of the game character in the game scene map; and determining the visible range in the first view according to the initial field of view of the game character.
Optionally, after the visible region is determined from the field-of-view information and the visible range, the method further includes: creating a texture identical to the first view, where each pixel of the texture includes a first color channel and a second color channel; alternately sampling the visible regions determined in two adjacent computations into the first color channel and the second color channel, obtaining a second view and a third view respectively; adjusting according to the time difference between generating the second view and generating the third view to obtain a transition view; applying Gaussian blur to the edges of the visible region in the transition view to obtain a view to be rendered; and fusing the view to be rendered with the game scene map to obtain a view to be displayed.
Optionally, fusing the view to be rendered with the game scene map to obtain the view to be displayed includes: converting the spatial coordinates of the part of the view to be rendered corresponding to the current game frame into screen coordinates; converting the screen coordinates into world coordinates; obtaining the visible region and the non-visible region corresponding to the current game frame from the world coordinates; and fusing the obtained visible region and non-visible region into the game scene map to obtain the view to be displayed.
According to another aspect of the embodiments of the present invention, a device for determining the visible region of a game scene is also provided, including: a first creation module, configured to create a first view from a game scene map, where the first view describes the view-occluding regions in the game scene map; an acquisition module, configured to obtain the field-of-view information of the scene elements in the first view and the visible range of a game character; and a determination module, configured to determine the visible region in the first view from the field-of-view information and the visible range.
Optionally, the first creation module includes: a first determination unit, configured to determine the regions occupied by view-occluding models in the game scene map; and a setting unit, configured to set the parts of the first view corresponding to the regions occupied by the view-occluding models to a first color, and to set the remainder outside those parts to a second color.
Optionally, the acquisition module includes: a first acquisition unit, configured to obtain the multiple camps configured in the game scene map; a second determination unit, configured to determine the camps to which a scene element belongs from the multiple camps; and a third determination unit, configured to determine the field-of-view information using the camps to which the scene element belongs.
Optionally, the second determination unit includes: a conversion subunit, configured to convert the hexadecimal parameter value configured in advance for each of the multiple camps into a binary parameter value; and a determination subunit, configured to, when a bitwise logical OR of the binary parameter values corresponding to some or all of the camps yields the binary parameter value of the scene element, determine those camps as the camps to which the scene element belongs.
Optionally, the acquisition module includes: a second acquisition unit, configured to obtain the position of the game character in the game scene map; and a fourth determination unit, configured to determine the visible range in the first view according to the initial field of view of the game character.
Optionally, the device further includes: a second creation module, configured to create a texture identical to the first view, where each pixel of the texture includes a first color channel and a second color channel; a sampling module, configured to alternately sample the visible regions determined in two adjacent computations into the first color channel and the second color channel, obtaining a second view and a third view respectively; an adjustment module, configured to adjust according to the time difference between generating the second view and generating the third view to obtain a transition view; a processing module, configured to apply Gaussian blur to the edges of the visible region in the transition view to obtain a view to be rendered; and a fusion module, configured to fuse the view to be rendered with the game scene map to obtain a view to be displayed.
Optionally, the fusion module includes: a first conversion unit, configured to convert the spatial coordinates of the part of the view to be rendered corresponding to the current game frame into screen coordinates; a second conversion unit, configured to convert the screen coordinates into world coordinates; a third acquisition unit, configured to obtain the visible region and the non-visible region corresponding to the current game frame from the world coordinates; and a fusion unit, configured to fuse the obtained visible region and non-visible region into the game scene map to obtain the view to be displayed.
In the embodiments of the present invention, a first view describing the view-occluding regions in the game scene map is created from the game scene map, and the visible region in the first view is determined from the field-of-view information of the scene elements in the first view and the visible range of the game character. This fully takes the terrain of the game scene map into account when rendering the fog effect of the game scene, thereby enhancing the depth and realism of the fog effect, and thus solves the technical problem in the related art that the visible region of a game scene under a two-dimensional fog of war is not combined with the terrain of the game scene map, which degrades the user's game experience.
Brief description of the drawings
The accompanying drawings described here provide a further understanding of the present invention and constitute a part of this application. The schematic embodiments of the present invention and their description explain the present invention and do not unduly limit it. In the drawings:
Fig. 1 is a schematic diagram of black fog appearing in a game display according to the related art;
Fig. 2 is a flowchart of a method for determining the visible region of a game scene according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the view penetrating grass according to a preferred embodiment of the present invention;
Fig. 4 is a structural block diagram of a device for determining the visible region of a game scene according to an embodiment of the present invention;
Fig. 5 is a structural block diagram of a device for determining the visible region of a game scene according to a preferred embodiment of the present invention.
Detailed description of the embodiments
To help those skilled in the art better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work shall fall within the scope protected by the present invention.
It should be noted that the terms "first", "second", etc. in the description, the claims and the above drawings are used to distinguish similar objects and need not describe a particular order or precedence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments of the invention described here can be implemented in orders other than those illustrated or described here. Moreover, the terms "include" and "have" and any variants of them are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device containing a series of steps or units is not necessarily limited to the steps or units explicitly listed, and may include other steps or units that are not explicitly listed or that are inherent to the process, method, product or device.
According to an embodiment of the present invention, an embodiment of a method for determining the visible region of a game scene is provided. It should be noted that the steps shown in the flowchart of the drawings can be executed in a computer system such as a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from the one here.
Fig. 2 is a flowchart of a method for determining the visible region of a game scene according to an embodiment of the present invention. As shown in Fig. 2, the method includes the following steps:
Step S20: create a first view from the game scene map, where the first view describes the view-occluding regions in the game scene map;
Step S22: obtain the field-of-view information of the scene elements in the first view and the visible range of the game character;
Step S24: determine the visible region in the first view from the field-of-view information and the visible range.
Through the above steps, a first view describing the view-occluding regions in the game scene map is created from the game scene map, and the visible region in the first view is determined from the field-of-view information of the scene elements in the first view and the visible range of the game character. The terrain of the game scene map is thus fully taken into account when rendering the fog effect of the game scene, which enhances the depth and realism of the fog effect and solves the technical problem in the related art that the visible region of a game scene under a two-dimensional fog of war is not combined with the terrain of the game scene map, degrading the user's game experience.
Optionally, in step S20, creating the first view from the game scene map may include the following steps:
Step S201: determine the regions occupied by view-occluding models in the game scene map;
Step S202: set the parts of the first view corresponding to the regions occupied by the view-occluding models to a first color, and set the remainder outside those parts to a second color.
In a preferred embodiment, the game scene map can be divided into a two-dimensional grid. Taking a MOBA scene map as an example, assume the coordinate ranges are x ∈ (-230, 230) and z ∈ (-140, 140), ignoring the y value. The map can include, but is not limited to, models that do not block the view, such as roads, and view-blocking models such as grass, stones and trees, and the view-blocking regions can be marked in the scene. According to the size of the game scene map, a graphical N*M two-dimensional matrix (equivalent to the first view above) can be created, with N = 460 and M = 280, corresponding to the scene map. Then, according to whether each position on the map lies in a view-blocking region, the matrix is assigned values as a 0-1 matrix, where 0 means no blocking and is shown as black in the graphical matrix (equivalent to the first color above), and 1 means blocking and is shown as white (equivalent to the second color above).
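For illustration, the following C++ sketch builds such a 0-1 occlusion matrix. The Rect blocker footprint, the one-unit cell size and the function names are assumptions added for this example, not details given above.

#include <vector>

// Hypothetical axis-aligned footprint of a view-blocking model (grass, stone, tree).
struct Rect { float minX, minZ, maxX, maxZ; };

// Scene bounds from the example: x in (-230, 230), z in (-140, 140),
// sampled at one unit per cell, giving a 460 x 280 grid.
constexpr int N = 460, M = 280;

std::vector<std::vector<int>> BuildOcclusionGrid(const std::vector<Rect>& blockers) {
    // 0 = does not block the view (drawn black), 1 = blocks the view (drawn white).
    std::vector<std::vector<int>> grid(N, std::vector<int>(M, 0));
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < M; ++j) {
            float x = -230.0f + i;   // cell position back in scene coordinates
            float z = -140.0f + j;
            for (const Rect& r : blockers) {
                if (x >= r.minX && x <= r.maxX && z >= r.minZ && z <= r.maxZ) {
                    grid[i][j] = 1;  // cell lies inside a view-blocking model
                    break;
                }
            }
        }
    }
    return grid;
}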
Optionally, in step S22, obtaining the field-of-view information of a scene element may include the following steps:
Step S221: obtain the multiple camps configured in the game scene map;
Step S222: determine the camps to which the scene element belongs from the multiple camps;
Step S223: determine the field-of-view information using the camps to which the scene element belongs.
Assume the game has three camps (respectively: the Sentinel camp, the Scourge camp and the neutral-monster camp). A two-dimensional byte array can be used to record the field-of-view information of each camp. The size of the array is the same as the scene map, likewise 460*280. Each byte has 8 bits, and the value of each bit records whether the cell is visible to the corresponding camp; optionally, 0 means invisible to the camp and 1 means visible. Because each camp occupies 1 bit, the array can support up to 8 camps. Assume the value (in hexadecimal) corresponding to each camp is, in order:
Sentinel: 0x01, i.e. 00000001;
Scourge: 0x02, i.e. 00000010;
Neutral monsters: 0x04, i.e. 00000100.
If the region corresponding to a particular element of the array (equivalent to the scene element above) is within the field of view of the Sentinel camp, 0x01 is added to that element's value; if it is within the field of view of the Scourge camp, 0x02 is added; and if it is within the field of view of the neutral-monster camp, 0x04 is added. The camps in whose field of view the element lies can therefore be determined from the element's value.
Optionally, in step S222, determining the camps to which the scene element belongs from the multiple camps may include the following steps:
Step S2221: convert the hexadecimal parameter value configured in advance for each of the multiple camps into a binary parameter value;
Step S2222: when a bitwise logical OR of the binary parameter values corresponding to some or all of the camps yields the binary parameter value of the scene element, determine those camps as the camps to which the scene element belongs.
Assume the value of an array element is 0x03. First, the hexadecimal values are converted to binary as follows:
0x03=0000000000000011
0x01=0000000000000001
0x02=0000000000000010
Next, a bitwise OR is applied to the above binary results, i.e. 0|0 = 0, 0|1 = 1, 1|0 = 1. Therefore 0x03 = 0x01 | 0x02, which means that the scene region corresponding to this array element can be seen by the Sentinel camp and the Scourge camp, but not by the neutral-monster camp.
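The following C++ sketch illustrates this per-cell camp byte; the camp constants mirror the hexadecimal values above, while the helper names are illustrative assumptions.

#include <cstdint>

constexpr uint8_t CAMP_SENTINEL = 0x01;  // Sentinel camp,        00000001
constexpr uint8_t CAMP_SCOURGE  = 0x02;  // Scourge camp,         00000010
constexpr uint8_t CAMP_NEUTRAL  = 0x04;  // neutral-monster camp, 00000100

// One byte per cell, one bit per camp, so up to 8 camps are supported.
inline void MarkVisibleTo(uint8_t& cell, uint8_t camp) { cell |= camp; }
inline bool IsVisibleTo(uint8_t cell, uint8_t camp) { return (cell & camp) != 0; }

// A cell whose value is 0x03 == CAMP_SENTINEL | CAMP_SCOURGE is therefore
// visible to the Sentinel and Scourge camps but not to the neutral camp:
//   IsVisibleTo(0x03, CAMP_SENTINEL) -> true
//   IsVisibleTo(0x03, CAMP_NEUTRAL)  -> false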
Optionally, in step S22, obtaining the visible range of the game character may include the following steps:
Step S224: obtain the position of the game character in the game scene map;
Step S225: determine the visible range in the first view according to the initial field of view of the game character.
Each unit has a limited field of view, i.e. a maximum distance it can see even when nothing blocks the view. In addition, the field of view is shared: if one friendly unit can see a target, the target is within the field of view of all units of the same camp. Given the position of a particular game unit (for example, a hero) in the game scene map, its visible range is marked with a given shape (for example, a circle); combined with the first view above, the visible area at that position can be computed.
In a preferred embodiment, the field-of-view table is computed with the recursive shadowcasting field-of-view (FOV) algorithm. The algorithm divides the full map into eight octants and computes each octant recursively. Table 1 shows the field-of-view result for one octant of the scene:
Table 1 (field-of-view result for one octant, rows numbered 16 down to 1 toward the starting cell)
@ = starting cell
# = blocking cell
· = non-blocking cell
s = shadowed cell
Here "@" marks the hero's position, "#" marks a view-blocking cell, and the numbers are row numbers. In the computed result, "·" marks a visible cell; observed from the hero's position ("@"), the cells behind a view-blocking cell "#" are invisible and are marked "s". For every visible cell, the value representing the observing camp is added at the corresponding position of the field-of-view matrix.
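A compact C++ port of the classic recursive shadowcasting FOV routine is sketched below. It follows the widely published form of the algorithm (eight octants, each scanned recursively); the grid layout and helper names are assumptions for this example and are not taken from the patent.

#include <vector>

struct FovMap {
    std::vector<std::vector<int>> block;  // 1 = blocks the view, 0 = open
    std::vector<std::vector<int>> vis;    // output: 1 = visible
    int w = 0, h = 0;

    bool Blocked(int x, int y) const {
        return x < 0 || y < 0 || x >= w || y >= h || block[x][y] == 1;
    }
    void SetVisible(int x, int y) {
        if (x >= 0 && y >= 0 && x < w && y < h) vis[x][y] = 1;
    }

    // Scan one octant; (xx, xy, yx, yy) maps octant coordinates to the map.
    void CastLight(int cx, int cy, int row, float start, float end, int radius,
                   int xx, int xy, int yx, int yy) {
        if (start < end) return;
        float newStart = start;
        for (int j = row; j <= radius; ++j) {
            bool blocked = false;
            for (int dx = -j, dy = -j; dx <= 0; ++dx) {
                int X = cx + dx * xx + dy * xy;
                int Y = cy + dx * yx + dy * yy;
                float lSlope = (dx - 0.5f) / (dy + 0.5f);  // slopes bounding the cell
                float rSlope = (dx + 0.5f) / (dy - 0.5f);
                if (start < rSlope) continue;
                if (end > lSlope) break;
                if (dx * dx + dy * dy < radius * radius)
                    SetVisible(X, Y);                      // cell is in the light
                if (blocked) {
                    if (Blocked(X, Y)) {
                        newStart = rSlope;                 // still scanning a wall
                    } else {
                        blocked = false;                   // wall ended
                        start = newStart;
                    }
                } else if (Blocked(X, Y) && j < radius) {
                    blocked = true;                        // wall starts: recurse past it
                    CastLight(cx, cy, j + 1, start, lSlope, radius, xx, xy, yx, yy);
                    newStart = rSlope;
                }
            }
            if (blocked) break;                            // whole remaining row shadowed
        }
    }

    // Compute the FOV around (cx, cy) with the given view radius.
    void ComputeFov(int cx, int cy, int radius) {
        static const int mult[4][8] = {
            {1, 0, 0, -1, -1, 0, 0, 1},
            {0, 1, -1, 0, 0, -1, 1, 0},
            {0, 1, 1, 0, 0, -1, -1, 0},
            {1, 0, 0, 1, -1, 0, 0, -1}};
        SetVisible(cx, cy);
        for (int oct = 0; oct < 8; ++oct)
            CastLight(cx, cy, 1, 1.0f, 0.0f, radius,
                      mult[0][oct], mult[1][oct], mult[2][oct], mult[3][oct]);
    }
};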
In addition, it should be noted that in the above scene model grass essentially behaves like one-way glass: a unit inside a grass region can observe targets outside the grass region, but a unit outside the grass region cannot observe targets inside it. That is, although grass blocks the view, the view can penetrate within the same patch of grass.
Because grass is a special view-blocking unit whose interior view can penetrate, the connected grids corresponding to contiguous grass are stitched together and given a unique grass identifier (ID); the view then penetrates within the same grass ID. Fig. 3 is a schematic diagram of the view penetrating grass according to a preferred embodiment of the present invention. As shown in Fig. 3, the grid cells represent grass regions and are interconnected. When the hero (black dot) is not within the grass, as shown on the left of Fig. 3, the grass region blocks the view (shown shaded); after the hero (black dot) enters the grass region, as shown on the right of Fig. 3, the grass occlusion originally shown shaded is cancelled, and the visible area of the patch is computed.
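Stitching contiguous grass cells into one grass ID is an ordinary connected-component labelling. A minimal C++ flood-fill sketch, assuming a boolean grass mask over the same 460 x 280 grid, is:

#include <queue>
#include <utility>
#include <vector>

// grass[i][j] == true marks a grass cell. Returns per-cell grass IDs:
// 0 for non-grass cells, 1..K for the K connected grass patches.
std::vector<std::vector<int>> LabelGrass(const std::vector<std::vector<bool>>& grass) {
    const int n = static_cast<int>(grass.size());
    const int m = n > 0 ? static_cast<int>(grass[0].size()) : 0;
    std::vector<std::vector<int>> id(n, std::vector<int>(m, 0));
    int nextId = 0;
    for (int si = 0; si < n; ++si) {
        for (int sj = 0; sj < m; ++sj) {
            if (!grass[si][sj] || id[si][sj] != 0) continue;
            ++nextId;  // a new connected patch of grass starts here
            std::queue<std::pair<int, int>> q;
            q.push({si, sj});
            id[si][sj] = nextId;
            while (!q.empty()) {
                const auto [i, j] = q.front();
                q.pop();
                const int di[4] = {1, -1, 0, 0}, dj[4] = {0, 0, 1, -1};
                for (int k = 0; k < 4; ++k) {
                    const int ni = i + di[k], nj = j + dj[k];
                    if (ni >= 0 && nj >= 0 && ni < n && nj < m &&
                        grass[ni][nj] && id[ni][nj] == 0) {
                        id[ni][nj] = nextId;  // same patch, same grass ID
                        q.push({ni, nj});
                    }
                }
            }
        }
    }
    return id;
}

During the field-of-view computation, a grass cell then blocks the view unless the observer stands in a cell with the same grass ID, in which case the view penetrates it.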
It should be noted that both the server and the client need to use the above field-of-view algorithm. The computation result obtained by the server is used to judge visibility, facilitating the transmission of visible-region (AOI) data, while the client performs rendering according to the field-of-view computation result and generates the fog effect.
Optionally, in step S24, after the visible region is determined from the field-of-view information and the visible range, the method may further include the following steps:
Step S25: create a texture identical to the first view, where each pixel of the texture includes a first color channel and a second color channel;
Step S26: alternately sample the visible regions determined in two adjacent computations into the first color channel and the second color channel, obtaining a second view and a third view respectively;
Step S27: adjust according to the time difference between generating the second view and generating the third view, obtaining a transition view;
Step S28: apply Gaussian blur to the edges of the visible region in the transition view, obtaining a view to be rendered;
Step S29: fuse the view to be rendered with the game scene map, obtaining a view to be displayed.
The second view and the third view represent the field-of-view matrix results of the fog computation, sampled alternately into the R channel (equivalent to the first color channel above) and the G channel (equivalent to the second color channel above). The sampling process is as follows: create a texture of the same size as the first view, with a width of 460 pixels and a height of 280 pixels, where each pixel is stored as A8R8G8B8, i.e. it has ARGB channels with 8 bits of data per channel.
If the scene element view_matrix[i][j] in the first view is in the visible state, the R or G channel of texture[i][j] is assigned 255, otherwise 0. That is, the 0th computation writes the R channel, the 1st writes the G channel, the 2nd writes the R channel (overwriting the result of the 0th), the 3rd writes the G channel (overwriting the result of the 1st), and so on. In a preferred implementation, a first color (for example, white) can represent the visible field and a second color (for example, black) the invisible field.
In addition, to improve the efficiency of fog rendering, the fog can be recomputed once per preset interval (0.5 s), but then the fog would visibly jump. To solve this problem, the results of two adjacent computations are written into the R and G channels of the texture and interpolated over time in the shader. For example, a region that is at first entirely invisible (shown entirely black) becomes visible (entirely transparent) after the hero moves; in this process the region must change gradually from black to fully transparent, so the transition must follow time. The computation is as follows:
float c_red = value of the R channel (0-1);
float c_green = value of the G channel (0-1);
float blend_val = blend factor computed from the time (0-1);
float color = (1.0 - blend_val) * c_red + blend_val * c_green;
Color values in the shader are mapped to the interval [0, 1], with 255 mapping to 1 and 0 to 0. c_red and c_green are the sampled values of the R and G channels and represent the results of the two field-of-view computations. Assuming c_red = 0.2 and c_green = 1, the result (equivalent to the transition view above) as blend_val changes is shown in Table 2:
Table 2
blend_val: 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1
color: 0.2, 0.28, 0.36, 0.44, 0.52, 0.6, 0.68, 0.76, 0.84, 0.92, 1.0
This solves the fog jump problem and makes the fog transition smooth and natural.
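A minimal C++ sketch of this double buffering, assuming the A8R8G8B8 texel layout above and illustrative function names, is:

#include <cstdint>
#include <vector>

struct FogTexel { uint8_t a, r, g, b; };  // A8R8G8B8: 8 bits per channel

// Write the latest visibility snapshot into R on even updates, G on odd ones.
void WriteSnapshot(std::vector<FogTexel>& tex,
                   const std::vector<uint8_t>& visible,  // 1 = visible cell
                   int updateIndex) {
    bool toRed = (updateIndex % 2) == 0;
    for (std::size_t i = 0; i < tex.size(); ++i) {
        uint8_t v = visible[i] ? 255 : 0;
        if (toRed) tex[i].r = v; else tex[i].g = v;
    }
}

// Per-pixel blend mirroring the shader formula above; t is the time since
// the last update divided by the 0.5 s interval, clamped to [0, 1]. A full
// implementation would also swap the blend direction with the write target.
float BlendFog(float c_red, float c_green, float t) {
    return (1.0f - t) * c_red + t * c_green;
}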
The view to be rendered is the texture obtained by applying an M×N (for example, 3×3) Gaussian blur to the transition view. To achieve a blurred edge transition, the edges are further processed with Gaussian blur. Because most pixels in the fog texture are entirely visible or entirely invisible together with their surrounding pixels, only the edges need to be blurred, which improves the computation rate. Edge detection works as follows:
To detect whether a pixel texture[i][j] lies on the fog edge (the boundary between the visible region and the invisible region), the eight pixels around texture[i][j] can be chosen: texture[i+1][j], texture[i+2][j], texture[i-1][j], texture[i-2][j], texture[i][j+1], texture[i][j+2], texture[i][j-1] and texture[i][j-2]. If the values of these nine pixels (including texture[i][j] itself) are all identical, pixel texture[i][j] is not on the fog edge; otherwise pixel texture[i][j] is on the fog edge and must be Gaussian-blurred.
It should be noted that the eight pixels chosen around texture[i][j] could instead be four pixels or more than eight pixels. However, choosing too many pixels reduces detection efficiency, while choosing too few reduces detection accuracy. Preferably, the embodiment of the present invention chooses 8 pixels, which requires no resampling.
Assume the Gaussian blur parameter is blurweight[9], an array of length 9 whose entries are the weights of the 9 pixel values above; the specific parameter value "9" can be adjusted in real time according to the desired blur effect. The blur equations are:
blurcolor = texture[i][j] * blurweight[0];
blurcolor += texture[i-1][j-1] * blurweight[1];
blurcolor += texture[i-1][j] * blurweight[2];
blurcolor += texture[i-1][j+1] * blurweight[3];
blurcolor += texture[i][j-1] * blurweight[4];
blurcolor += texture[i][j+1] * blurweight[5];
blurcolor += texture[i+1][j-1] * blurweight[6];
blurcolor += texture[i+1][j] * blurweight[7];
blurcolor += texture[i+1][j+1] * blurweight[8];
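The edge test and the edge-only blur can be sketched in C++ as follows; the clamp-to-edge sampling and the single-channel texture type are assumptions for this example.

#include <algorithm>
#include <vector>

using Tex = std::vector<std::vector<float>>;  // one fog channel, values in [0, 1]

// Clamp-to-edge sampling so border pixels can be tested like any other.
static float At(const Tex& t, int i, int j) {
    i = std::max(0, std::min(static_cast<int>(t.size()) - 1, i));
    j = std::max(0, std::min(static_cast<int>(t[0].size()) - 1, j));
    return t[i][j];
}

// A pixel lies on the fog edge if any of the eight cross-shaped neighbours
// at distances 1 and 2 differs from it (the nine-pixel test above).
bool OnFogEdge(const Tex& t, int i, int j) {
    const float c = At(t, i, j);
    const int off[8][2] = {{1, 0}, {2, 0}, {-1, 0}, {-2, 0},
                           {0, 1}, {0, 2}, {0, -1}, {0, -2}};
    for (const auto& o : off)
        if (At(t, i + o[0], j + o[1]) != c) return true;
    return false;
}

// 3x3 Gaussian blur using the nine weights blurweight[0..8], in the same
// order as the equations above (centre first, then the eight neighbours).
// Only edge pixels are blurred, matching the optimisation in the text.
float BlurEdgePixel(const Tex& t, int i, int j, const float blurweight[9]) {
    if (!OnFogEdge(t, i, j)) return At(t, i, j);
    float c = At(t, i, j) * blurweight[0];
    int k = 1;
    for (int di = -1; di <= 1; ++di)
        for (int dj = -1; dj <= 1; ++dj) {
            if (di == 0 && dj == 0) continue;
            c += At(t, i + di, j + dj) * blurweight[k++];
        }
    return c;
}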
In the post-process stage of rendering, the three-dimensional scene rendered under the current screen view is obtained, and the fog texture (equivalent to the view to be rendered above) is fused with the game scene map, yielding the final Fog of War effect (equivalent to the view to be displayed above).
Optionally, in step S29, fusing the view to be rendered with the game scene map to obtain the view to be displayed may include the following steps:
Step S291: convert the spatial coordinates of the part of the view to be rendered corresponding to the current game frame into screen coordinates;
Step S292: convert the screen coordinates into world coordinates;
Step S293: obtain the visible region and the non-visible region corresponding to the current game frame from the world coordinates;
Step S294: fuse the obtained visible region and non-visible region into the game scene map to obtain the view to be displayed.
The fog texture is created for the whole three-dimensional scene, while the screen shows only a small part of the scene, so the fog texture region corresponding to the screen scene must be computed. This is a relatively complex computation, and performing it on the central processing unit (CPU) would lengthen the whole render process. In a preferred embodiment this process is therefore moved to the graphics processor (GPU). The computation from screen coordinates to world coordinates to fog coordinates is as follows:
A game model is shown on screen only after the complete render pipeline, most of which is performed automatically by the graphics card, mainly by the vertex shader and the pixel shader. The most important operation in the vertex shader is the vertex transform, which converts the model-space coordinates of the model into screen coordinates. This process is called the world-view-projection matrix transform, i.e. the following equation converts the model coordinates (x, y, z) into the screen coordinates (x1, y1):
(x1, y1, z1) = (x, y, z) * world * view * project;
where (x1, y1) are the screen coordinates and z1 is the depth coordinate, which is stored into the depth texture (depthtexture); world is the world transform matrix, view is the view matrix and project is the projection matrix, all preset by the game engine.
Next, in the post-process stage, the depth value z1 of the coordinate is fetched from depthtexture.
Then the world coordinates are computed by the following formulas:
float z = texture2D(depthtexture, xy).x;
float4 sPos = vec4(xy * 2.0 - 1.0, z, 1.0);
float4 pos = inv_proj_view_mat * sPos;
where texture2D is the sampling function, xy is the screen coordinate, depthtexture is the depth texture and sPos is the screen coordinate with depth added. According to the above formulas, obtaining world coordinates from screen coordinates requires multiplying by the inverse of the projection-view matrix (inv_proj_view_mat), that is:
(x1, y1, z1) = (x, y, z) * world * view * project;
(x1, y1, z1) * inv_proj_view = (x, y, z) * world * view * project * inv_proj_view;
(x1, y1, z1) * inv_proj_view = (x, y, z) * world;
where pos = (x, y, z) * world is the world coordinate. The fog coordinate corresponding to the screen coordinate is then obtained by the following equations:
float x = (pos.x + width / 2) / width;
float y = (pos.y + height / 2) / height;
where width is the scene width 460 and height is the scene height 280.
Thus the fog coordinate corresponding to the screen coordinate is obtained, and the fog texture processed by the Gaussian blur can be fused with the scene image to obtain the final game display.
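A CPU-side C++ sketch of the same screen-to-world-to-fog chain is given below for reference. The minimal matrix types are stand-ins, and the perspective divide is an assumption added here (the shader snippet above omits it).

#include <array>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<std::array<float, 4>, 4>;  // row-vector convention: v * M

static Vec4 Mul(const Vec4& v, const Mat4& m) {
    Vec4 r{0.0f, 0.0f, 0.0f, 0.0f};
    for (int c = 0; c < 4; ++c)
        for (int k = 0; k < 4; ++k) r[c] += v[k] * m[k][c];
    return r;
}

// u, v: screen coordinate in [0, 1]; z: depth sampled from the depth texture;
// invProjView: precomputed inverse of the view * projection matrix.
void FogCoord(float u, float v, float z, const Mat4& invProjView,
              float& fogU, float& fogV) {
    // Rebuild the clip-space position: xy * 2 - 1 maps [0, 1] uv to [-1, 1].
    Vec4 sPos{u * 2.0f - 1.0f, v * 2.0f - 1.0f, z, 1.0f};
    Vec4 pos = Mul(sPos, invProjView);  // back to world space
    const float w = pos[3];
    for (float& c : pos) c /= w;        // perspective divide (assumed here)
    // Map the world position onto the 460 x 280 fog texture, as above;
    // pos[1] stands for the map's second ground axis (z in the earlier example).
    const float width = 460.0f, height = 280.0f;
    fogU = (pos[0] + width / 2.0f) / width;
    fogV = (pos[1] + height / 2.0f) / height;
}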
According to an embodiment of the present invention, an embodiment of a device for determining the visible region of a game scene is provided. Fig. 4 is a structural block diagram of the device for determining the visible region of a game scene according to an embodiment of the present invention. As shown in Fig. 4, the device may include: a first creation module 10, configured to create a first view from a game scene map, where the first view describes the view-occluding regions in the game scene map; an acquisition module 20, configured to obtain the field-of-view information of the scene elements in the first view and the visible range of the game character; and a determination module 30, configured to determine the visible region in the first view from the field-of-view information and the visible range.
Optionally, Fig. 5 is a structural block diagram of the device for determining the visible region of a game scene according to a preferred embodiment of the present invention. As shown in Fig. 5, the first creation module 10 may include: a first determination unit 100, configured to determine the regions occupied by view-occluding models in the game scene map; and a setting unit 102, configured to set the parts of the first view corresponding to the regions occupied by the view-occluding models to a first color, and to set the remainder outside those parts to a second color.
Optionally, as shown in Fig. 5, the acquisition module 20 may include: a first acquisition unit 200, configured to obtain the multiple camps configured in the game scene map; a second determination unit 202, configured to determine the camps to which a scene element belongs from the multiple camps; and a third determination unit 204, configured to determine the field-of-view information using the camps to which the scene element belongs.
Optionally, the second determination unit 202 may include: a conversion subunit (not shown), configured to convert the hexadecimal parameter value configured in advance for each of the multiple camps into a binary parameter value; and a determination subunit (not shown), configured to, when a bitwise logical OR of the binary parameter values corresponding to some or all of the camps yields the binary parameter value of the scene element, determine those camps as the camps to which the scene element belongs.
Optionally, the acquisition module 20 may include: a second acquisition unit 206, configured to obtain the position of the game character in the game scene map; and a fourth determination unit 208, configured to determine the visible range in the first view according to the initial field of view of the game character.
Optionally, as shown in Fig. 5, the device may further include: a second creation module 40, configured to create a texture identical to the first view, where each pixel of the texture includes a first color channel and a second color channel; a sampling module 50, configured to alternately sample the visible regions determined in two adjacent computations into the first color channel and the second color channel, obtaining a second view and a third view respectively; an adjustment module 60, configured to adjust according to the time difference between generating the second view and generating the third view to obtain a transition view; a processing module 70, configured to apply Gaussian blur to the edges of the visible region in the transition view to obtain a view to be rendered; and a fusion module 80, configured to fuse the view to be rendered with the game scene map to obtain a view to be displayed.
Optionally, as shown in Fig. 5, the fusion module 80 includes: a first conversion unit 800, configured to convert the spatial coordinates of the part of the view to be rendered corresponding to the current game frame into screen coordinates; a second conversion unit 802, configured to convert the screen coordinates into world coordinates; a third acquisition unit 804, configured to obtain the visible region and the non-visible region corresponding to the current game frame from the world coordinates; and a fusion unit 806, configured to fuse the obtained visible region and non-visible region into the game scene map to obtain the view to be displayed.
The sequence numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, refer to the related description of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content can be implemented in other ways. The device embodiments described above are only schematic. For example, the division into units may be a division by logical function, and other divisions are possible in an actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Moreover, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, units or modules, and may be electrical or take other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or the whole or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for making a computer device (which may be a personal computer, a server, a network device, etc.) execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), a portable hard drive, a magnetic disk or an optical disc.
The above are only preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the present invention, and these improvements and refinements shall also be regarded as falling within the protection scope of the present invention.

Claims (14)

1. A method for determining the visible region of a game scene, characterized by including:
creating a first view from a game scene map, wherein the first view describes the view-occluding regions in the game scene map;
obtaining the field-of-view information of the scene elements in the first view and the visible range of a game character;
determining the visible region in the first view from the field-of-view information and the visible range.
2. The method according to claim 1, characterized in that creating the first view from the game scene map includes:
determining the regions occupied by view-occluding models in the game scene map;
setting the parts of the first view corresponding to the regions occupied by the view-occluding models to a first color, and setting the remainder outside those parts to a second color.
3. The method according to claim 1, characterized in that obtaining the field-of-view information of the scene elements includes:
obtaining the multiple camps configured in the game scene map;
determining, from the multiple camps, the camps to which a scene element belongs;
determining the field-of-view information using the camps to which the scene element belongs.
4. The method according to claim 3, characterized in that determining the camps to which the scene element belongs from the multiple camps includes:
converting the hexadecimal parameter value configured in advance for each of the multiple camps into a binary parameter value;
when a bitwise logical OR of the binary parameter values corresponding to some or all of the multiple camps yields the binary parameter value of the scene element, determining those camps as the camps to which the scene element belongs.
5. The method according to claim 1, characterized in that obtaining the visible range of the game character includes:
obtaining the position of the game character in the game scene map;
determining the visible range in the first view according to the initial field of view of the game character.
6. The method according to claim 1, characterized in that after the visible region is determined from the field-of-view information and the visible range, the method further includes:
creating a texture identical to the first view, wherein each pixel of the texture includes a first color channel and a second color channel;
alternately sampling the visible regions determined in two adjacent computations into the first color channel and the second color channel, obtaining a second view and a third view respectively;
adjusting according to the time difference between generating the second view and generating the third view to obtain a transition view;
applying Gaussian blur to the edges of the visible region in the transition view to obtain a view to be rendered;
fusing the view to be rendered with the game scene map to obtain a view to be displayed.
7. The method according to claim 6, characterized in that fusing the view to be rendered with the game scene map to obtain the view to be displayed includes:
converting the spatial coordinates of the part of the view to be rendered corresponding to the current game frame into screen coordinates;
converting the screen coordinates into world coordinates;
obtaining the visible region and the non-visible region corresponding to the current game frame from the world coordinates;
fusing the obtained visible region and non-visible region into the game scene map to obtain the view to be displayed.
8. A device for determining the visible region of a game scene, characterized by including:
a first creation module, configured to create a first view from a game scene map, wherein the first view describes the view-occluding regions in the game scene map;
an acquisition module, configured to obtain the field-of-view information of the scene elements in the first view and the visible range of a game character;
a determination module, configured to determine the visible region in the first view from the field-of-view information and the visible range.
9. The device according to claim 8, characterized in that the first creation module includes:
a first determination unit, configured to determine the regions occupied by view-occluding models in the game scene map;
a setting unit, configured to set the parts of the first view corresponding to the regions occupied by the view-occluding models to a first color, and to set the remainder outside those parts to a second color.
10. The device according to claim 8, characterized in that the acquisition module includes:
a first acquisition unit, configured to obtain the multiple camps configured in the game scene map;
a second determination unit, configured to determine, from the multiple camps, the camps to which a scene element belongs;
a third determination unit, configured to determine the field-of-view information using the camps to which the scene element belongs.
11. The device according to claim 10, characterized in that the second determination unit includes:
a conversion subunit, configured to convert the hexadecimal parameter value configured in advance for each of the multiple camps into a binary parameter value;
a determination subunit, configured to, when a bitwise logical OR of the binary parameter values corresponding to some or all of the multiple camps yields the binary parameter value of the scene element, determine those camps as the camps to which the scene element belongs.
12. The device according to claim 8, characterized in that the acquisition module includes:
a second acquisition unit, configured to obtain the position of the game character in the game scene map;
a fourth determination unit, configured to determine the visible range in the first view according to the initial field of view of the game character.
13. The device according to claim 8, characterized in that the device further includes:
a second creation module, configured to create a texture identical to the first view, wherein each pixel of the texture includes a first color channel and a second color channel;
a sampling module, configured to alternately sample the visible regions determined in two adjacent computations into the first color channel and the second color channel, obtaining a second view and a third view respectively;
an adjustment module, configured to adjust according to the time difference between generating the second view and generating the third view to obtain a transition view;
a processing module, configured to apply Gaussian blur to the edges of the visible region in the transition view to obtain a view to be rendered;
a fusion module, configured to fuse the view to be rendered with the game scene map to obtain a view to be displayed.
14. The device according to claim 13, characterized in that the fusion module includes:
a first conversion unit, configured to convert the spatial coordinates of the part of the view to be rendered corresponding to the current game frame into screen coordinates;
a second conversion unit, configured to convert the screen coordinates into world coordinates;
a third acquisition unit, configured to obtain the visible region and the non-visible region corresponding to the current game frame from the world coordinates;
a fusion unit, configured to fuse the obtained visible region and non-visible region into the game scene map to obtain the view to be displayed.
CN201610649889.5A 2016-08-09 2016-08-09 Method and device for determining the visible region of a game scene Active CN106296786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610649889.5A CN106296786B (en) 2016-08-09 2016-08-09 Method and device for determining the visible region of a game scene

Publications (2)

Publication Number Publication Date
CN106296786A true CN106296786A (en) 2017-01-04
CN106296786B CN106296786B (en) 2019-02-15

Family

ID=57667535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610649889.5A Active CN106296786B (en) Method and device for determining the visible region of a game scene

Country Status (1)

Country Link
CN (1) CN106296786B (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106730844A (en) * 2017-01-10 2017-05-31 网易(杭州)网络有限公司 A kind of scene run time method of testing and device
CN106975219A (en) * 2017-03-27 2017-07-25 网易(杭州)网络有限公司 Display control method and device, storage medium, the electronic equipment of game picture
CN107198876A (en) * 2017-06-07 2017-09-26 北京小鸟看看科技有限公司 The loading method and device of scene of game
CN107358579A (en) * 2017-06-05 2017-11-17 北京印刷学院 A kind of game war dense fog implementation method
CN107469351A (en) * 2017-06-20 2017-12-15 网易(杭州)网络有限公司 Game picture display methods and device, storage medium, electronic equipment
CN107875630A (en) * 2017-11-17 2018-04-06 杭州电魂网络科技股份有限公司 Render area determination method and device
CN108031117A (en) * 2017-12-06 2018-05-15 北京像素软件科技股份有限公司 Region mist effect implementation method and device
CN108196765A (en) * 2017-12-13 2018-06-22 网易(杭州)网络有限公司 Display control method, electronic equipment and storage medium
CN108257103A (en) * 2018-01-25 2018-07-06 网易(杭州)网络有限公司 Occlusion culling method, apparatus, processor and the terminal of scene of game
CN108389245A (en) * 2018-02-13 2018-08-10 鲸彩在线科技(大连)有限公司 Rendering intent, device, electronic equipment and the readable storage medium storing program for executing of cartoon scene
CN109360263A (en) * 2018-10-09 2019-02-19 温州大学 A kind of the Real-time Soft Shadows generation method and device of resourceoriented restricted movement equipment
CN109658495A (en) * 2017-10-10 2019-04-19 腾讯科技(深圳)有限公司 Rendering method, device and the electronic equipment of environment light screening effect
CN110251940A (en) * 2019-07-10 2019-09-20 网易(杭州)网络有限公司 A kind of method and apparatus that game picture is shown
CN110415320A (en) * 2019-07-25 2019-11-05 上海米哈游网络科技股份有限公司 A kind of scene prebake method, apparatus, storage medium and electronic equipment
CN110719532A (en) * 2018-02-23 2020-01-21 索尼互动娱乐欧洲有限公司 Apparatus and method for mapping virtual environment
CN111111172A (en) * 2019-12-02 2020-05-08 网易(杭州)网络有限公司 Method and device for processing ground surface of game scene, processor and electronic device
CN111111189A (en) * 2019-12-16 2020-05-08 北京像素软件科技股份有限公司 Game AOI management method and device and electronic equipment
CN111841010A (en) * 2019-04-26 2020-10-30 网易(杭州)网络有限公司 Urban road network generation method and device, storage medium, processor and terminal
CN112516592A (en) * 2020-12-15 2021-03-19 网易(杭州)网络有限公司 Method and device for processing view mode in game, storage medium and terminal equipment
CN112717390A (en) * 2021-01-12 2021-04-30 腾讯科技(深圳)有限公司 Virtual scene display method, device, equipment and storage medium
CN112822397A (en) * 2020-12-31 2021-05-18 上海米哈游天命科技有限公司 Game picture shooting method, device, equipment and storage medium
CN113192208A (en) * 2021-04-08 2021-07-30 北京鼎联网络科技有限公司 Three-dimensional roaming method and device
CN113827960A (en) * 2021-09-01 2021-12-24 广州趣丸网络科技有限公司 Game visual field generation method and device, electronic equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080254882A1 (en) * 2006-10-31 2008-10-16 Kabushiki Kaisha Square Enix (Also Trading As Square Enix Co., Ltd.) Network game system, a network game terminal, a method of displaying a game screen, a computer program product and a storage medium
CN105677395A (en) * 2015-12-28 2016-06-15 珠海金山网络游戏科技有限公司 Game scene pixel blanking system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yang Yuanchao: "Design and Implementation of a Real-Time Strategy Game Based on HTML5", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106730844A (en) * 2017-01-10 2017-05-31 Netease (Hangzhou) Network Co., Ltd. Scene runtime testing method and device
CN106975219A (en) * 2017-03-27 2017-07-25 Netease (Hangzhou) Network Co., Ltd. Display control method and device for game images, storage medium, and electronic device
CN107358579A (en) * 2017-06-05 2017-11-17 Beijing Institute of Graphic Communication Game fog-of-war implementation method
CN107198876A (en) * 2017-06-07 2017-09-26 Beijing Pico Technology Co., Ltd. Game scene loading method and device
CN107198876B (en) * 2017-06-07 2021-02-05 Beijing Pico Technology Co., Ltd. Game scene loading method and device
CN107469351A (en) * 2017-06-20 2017-12-15 Netease (Hangzhou) Network Co., Ltd. Game image display method and device, storage medium, and electronic device
CN109658495A (en) * 2017-10-10 2019-04-19 Tencent Technology (Shenzhen) Co., Ltd. Rendering method and device for ambient occlusion effects, and electronic device
CN109658495B (en) * 2017-10-10 2022-09-20 Tencent Technology (Shenzhen) Co., Ltd. Rendering method and device for ambient occlusion effects, and electronic device
CN107875630A (en) * 2017-11-17 2018-04-06 Hangzhou Electric Soul Network Technology Co., Ltd. Rendering area determination method and device
CN107875630B (en) * 2017-11-17 2020-11-24 Hangzhou Electric Soul Network Technology Co., Ltd. Rendering area determination method and device
CN108031117A (en) * 2017-12-06 2018-05-15 Beijing Pixel Software Technology Co., Ltd. Regional fog effect implementation method and device
CN108031117B (en) * 2017-12-06 2021-03-16 Beijing Pixel Software Technology Co., Ltd. Regional fog effect implementation method and device
CN108196765A (en) * 2017-12-13 2018-06-22 Netease (Hangzhou) Network Co., Ltd. Display control method, electronic device, and storage medium
CN108257103A (en) * 2018-01-25 2018-07-06 Netease (Hangzhou) Network Co., Ltd. Occlusion culling method, apparatus, processor, and terminal for game scenes
CN108389245A (en) * 2018-02-13 2018-08-10 Jingcai Online Technology (Dalian) Co., Ltd. Rendering method, device, electronic device, and readable storage medium for animation scenes
CN110719532A (en) * 2018-02-23 2020-01-21 Sony Interactive Entertainment Europe Ltd. Apparatus and method for mapping a virtual environment
CN110719532B (en) * 2018-02-23 2023-10-31 Sony Interactive Entertainment Europe Ltd. Apparatus and method for mapping a virtual environment
CN109360263B (en) * 2018-10-09 2019-07-19 Wenzhou University Real-time soft shadow generation method and device for resource-constrained mobile devices
CN109360263A (en) * 2018-10-09 2019-02-19 Wenzhou University Real-time soft shadow generation method and device for resource-constrained mobile devices
CN111841010A (en) * 2019-04-26 2020-10-30 Netease (Hangzhou) Network Co., Ltd. Urban road network generation method and device, storage medium, processor, and terminal
CN110251940A (en) * 2019-07-10 2019-09-20 Netease (Hangzhou) Network Co., Ltd. Method and apparatus for displaying game images
CN110415320A (en) * 2019-07-25 2019-11-05 Shanghai miHoYo Network Technology Co., Ltd. Scene pre-baking method, apparatus, storage medium, and electronic device
CN111111172A (en) * 2019-12-02 2020-05-08 Netease (Hangzhou) Network Co., Ltd. Method and device for processing the ground surface of a game scene, processor, and electronic device
CN111111172B (en) * 2019-12-02 2023-05-26 Netease (Hangzhou) Network Co., Ltd. Surface processing method and device for game scenes, processor, and electronic device
CN111111189A (en) * 2019-12-16 2020-05-08 Beijing Pixel Software Technology Co., Ltd. Game AOI management method and device, and electronic device
CN112516592A (en) * 2020-12-15 2021-03-19 Netease (Hangzhou) Network Co., Ltd. Method and device for processing view modes in a game, storage medium, and terminal device
CN112822397A (en) * 2020-12-31 2021-05-18 Shanghai miHoYo Tianming Technology Co., Ltd. Game image capture method, device, equipment, and storage medium
CN112822397B (en) * 2020-12-31 2022-07-05 Shanghai miHoYo Tianming Technology Co., Ltd. Game image capture method, device, equipment, and storage medium
CN112717390A (en) * 2021-01-12 2021-04-30 Tencent Technology (Shenzhen) Co., Ltd. Virtual scene display method, device, equipment, and storage medium
CN113192208A (en) * 2021-04-08 2021-07-30 Beijing Dinglian Network Technology Co., Ltd. Three-dimensional roaming method and device
CN113827960A (en) * 2021-09-01 2021-12-24 Guangzhou Quwan Network Technology Co., Ltd. Game field-of-view generation method and device, electronic device, and storage medium
CN113827960B (en) * 2021-09-01 2023-06-02 Guangzhou Quwan Network Technology Co., Ltd. Game field-of-view generation method and device, electronic device, and storage medium

Also Published As

Publication number Publication date
CN106296786B (en) 2019-02-15

Similar Documents

Publication Title
CN106296786B (en) Method and device for determining the visible region of a game scene
CN106780642B (en) Method and device for generating a camouflage cover map
CN107945282A (en) Fast multi-view three-dimensional synthesis and display method and device based on adversarial networks
CN106504339A (en) Virtual-reality-based 3D display method for cultural relics
CN107452048A (en) Computation method and device for global illumination
CN103226830A (en) Automatic matching correction method for video texture projection in a three-dimensional virtual-real fusion environment
CN101246600A (en) Method for real-time generation of augmented reality environments using a spherical panoramic camera
CN108043027B (en) Storage medium, electronic device, and game screen display method and device
CN104318605B (en) Parallel overlay rendering method for vector solid lines and three-dimensional terrain
CN101477701A (en) Embedded true three-dimensional rendering method for AutoCAD and 3DS MAX
CN101477700A (en) True three-dimensional display method for Google Earth and SketchUp
CN108421257A (en) Method, apparatus, storage medium, and electronic device for determining invisible elements
CN107638690A (en) Method, device, server, and medium for realizing augmented reality
CN101477702B (en) Embedded true three-dimensional driving method for computer graphics cards
WO2022055367A1 (en) Method for emulating defocus of sharp rendered images
CN108230430A (en) Processing method and device for cloud layer shadow maps
CN112200899B (en) Method for realizing model service interaction using instanced rendering
CN101488229B (en) Embedded true three-dimensional stereoscopic rendering method for PCI three-dimensional analysis modules
Onrust et al. Ecologically sound procedural generation of natural environments
CN113117334B (en) Method and related device for determining the visible area of a target point
CN115272628A (en) Rendering method and device for three-dimensional models, computer equipment, and storage medium
CN111104851B (en) Automatic generation method and system for defensive areas at basketball scoring moments
CN101488230B (en) True three-dimensional stereoscopic display method for VirtualEarth
CN101482978B (en) Embedded true three-dimensional stereoscopic rendering method for ENVI/IDL
Cheok et al. Humanistic Oriental art created using automated computer processing and non-photorealistic rendering

Legal Events

Code Title
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant