CN106296786B - The determination method and device of scene of game visibility region - Google Patents
- Publication number
- CN106296786B CN106296786B CN201610649889.5A CN201610649889A CN106296786B CN 106296786 B CN106296786 B CN 106296786B CN 201610649889 A CN201610649889 A CN 201610649889A CN 106296786 B CN106296786 B CN 106296786B
- Authority
- CN
- China
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/80—Shading
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Generation (AREA)
Abstract
The invention discloses a method and device for determining the visible region of a game scene. The method comprises: creating a first view according to a game scene map, wherein the first view describes the view-occlusion regions in the game scene map; obtaining the field-of-view information of scene elements in the first view and the visible range of a game character; and determining the visible region in the first view from the field-of-view information and the visible range. The invention solves the technical problem in the related art that the visible region of a game scene shrouded in two-dimensional fog is not combined with the terrain of the game scene map, which degrades the user's game experience.
Description
Technical field
The present invention relates to the computer field, and in particular to a method and device for determining the visible region of a game scene.
Background technique
Fog of War refers, in the traditional sense, to the inability to ascertain most areas outside the friendly region, including the distribution and activity of the enemy, owing to the lack of information about the enemy in war. At present the term appears frequently in games, especially in real-time strategy games and multiplayer online battle arena (MOBA) games, and is familiar to many players. In a game with Fog of War, the player can initially observe only a minimal range around his own base and units, while most of the map is covered by black fog. When a friendly unit moves into a dark area, the regions it passes through are automatically revealed, so the map gradually becomes visible; what is revealed can include, but is not limited to, the terrain of the region and the enemy's activity.
At present, implementations of Fog of War are rarely documented in the related art. A screen-mask approach is generally used, that is, a two-dimensional fog picture is composited onto the two-dimensional screen. Fig. 1 is a schematic diagram of black fog appearing in a game picture according to the related art. As shown in Fig. 1, the fog pattern is generally produced by map tiling: the map is divided into a grid according to its size, the visible region is computed from the character positions and view distances, the visible region is stitched into a texture, and the texture is blended with the map base image. Although the computation used by this approach is relatively simple and fast, the generated two-dimensional fog lacks a three-dimensional feel and its transitions are stiff. Most importantly, the two-dimensional fog is not combined with the map terrain. For example, in a MOBA map, tall grass, mountains, and trees block the view, while a unit standing in tall grass can see through that grass. Such features give players a more complete game experience, but they cannot be represented by current two-dimensional fog.
No effective solution to the above problem has yet been proposed.
Summary of the invention
The embodiments of the present invention provide a method and device for determining the visible region of a game scene, so as to at least solve the technical problem in the related art that the visible region of a game scene shrouded in two-dimensional fog is not combined with the terrain of the game scene map, which degrades the user's game experience.
According to one aspect of the embodiments of the present invention, a method for determining the visible region of a game scene is provided, comprising: creating a first view according to a game scene map, wherein the first view describes the view-occlusion regions in the game scene map; obtaining the field-of-view information of scene elements in the first view and the visible range of a game character; and determining the visible region in the first view from the field-of-view information and the visible range.
Optionally, creating the first view according to the game scene map comprises: determining the regions where view-occlusion models are located in the game scene map; setting the parts of the first view corresponding to the view-occlusion model regions to a first color; and setting the remaining parts outside the view-occlusion model regions to a second color.
Optionally, obtaining the field-of-view information of the scene elements comprises: obtaining the multiple camps set up in the game scene map; determining, from the multiple camps, the camps to which a scene element belongs; and determining the field-of-view information using the camps to which the scene element belongs.
Optionally, determining from the multiple camps the camps to which a scene element belongs comprises: converting the hexadecimal parameter value configured in advance for each of the multiple camps into a binary parameter value; and, when a bitwise logical OR of the binary parameter values of some or all of the camps yields the binary parameter value of the scene element, determining those camps to be the camps to which the scene element belongs.
Optionally, obtaining the visible range of the game character comprises: obtaining the position of the game character in the game scene map; and determining the visible range in the first view according to the initial field of view of the game character.
Optionally, after the visible region is determined from the field-of-view information and the visible range, the method further comprises: creating a texture identical in size to the first view, wherein each pixel of the texture includes a first color channel and a second color channel; alternately sampling the visible regions determined in two adjacent computations into the first color channel and the second color channel, obtaining a second view and a third view respectively; interpolating according to the time difference between generating the second view and generating the third view, obtaining a transition view; applying Gaussian blur to the edges of the visible region in the transition view, obtaining a view to be rendered; and fusing the view to be rendered with the game scene map, obtaining a view to be displayed.
Optionally, fusing the view to be rendered with the game scene map to obtain the view to be displayed comprises: converting the space coordinates of the part of the view to be rendered corresponding to the current game picture into screen coordinates; converting the screen coordinates into world coordinates; obtaining, via the world coordinates, the visible region and the non-visible region corresponding to the current game picture; and fusing the obtained visible region and non-visible region with the game scene map to obtain the view to be displayed.
According to another aspect of the embodiments of the present invention, a device for determining the visible region of a game scene is also provided, comprising: a first creation module, configured to create a first view according to a game scene map, wherein the first view describes the view-occlusion regions in the game scene map; an obtaining module, configured to obtain the field-of-view information of scene elements in the first view and the visible range of a game character; and a determining module, configured to determine the visible region in the first view from the field-of-view information and the visible range.
Optionally, the first creation module comprises: a first determining unit, configured to determine the regions where view-occlusion models are located in the game scene map; and a setting unit, configured to set the parts of the first view corresponding to the view-occlusion model regions to a first color, and to set the remaining parts outside the view-occlusion model regions to a second color.
Optionally, the obtaining module comprises: a first obtaining unit, configured to obtain the multiple camps set up in the game scene map; a second determining unit, configured to determine, from the multiple camps, the camps to which a scene element belongs; and a third determining unit, configured to determine the field-of-view information using the camps to which the scene element belongs.
Optionally, the second determining unit comprises: a conversion subunit, configured to convert the hexadecimal parameter value configured in advance for each of the multiple camps into a binary parameter value; and a determining subunit, configured to determine, when a bitwise logical OR of the binary parameter values of some or all of the camps yields the binary parameter value of a scene element, those camps to be the camps to which the scene element belongs.
Optionally, the obtaining module comprises: a second obtaining unit, configured to obtain the position of the game character in the game scene map; and a fourth determining unit, configured to determine the visible range in the first view according to the initial field of view of the game character.
Optionally, the above device further comprises: a second creation module, configured to create a texture identical in size to the first view, wherein each pixel of the texture includes a first color channel and a second color channel; a sampling module, configured to alternately sample the visible regions determined in two adjacent computations into the first color channel and the second color channel, obtaining a second view and a third view respectively; an adjustment module, configured to interpolate according to the time difference between generating the second view and generating the third view, obtaining a transition view; a processing module, configured to apply Gaussian blur to the edges of the visible region in the transition view, obtaining a view to be rendered; and a fusion module, configured to fuse the view to be rendered with the game scene map, obtaining a view to be displayed.
Optionally, the fusion module comprises: a first conversion unit, configured to convert the space coordinates of the part of the view to be rendered corresponding to the current game picture into screen coordinates; a second conversion unit, configured to convert the screen coordinates into world coordinates; a third obtaining unit, configured to obtain, via the world coordinates, the visible region and the non-visible region corresponding to the current game picture; and a fusion unit, configured to fuse the obtained visible region and non-visible region with the game scene map to obtain the view to be displayed.
In the embodiments of the present invention, a first view describing the view-occlusion regions in the game scene map is created according to the game scene map, and the visible region in the first view is determined from the field-of-view information of the scene elements in the first view and the visible range of the game character. The terrain of the game scene map is thereby fully taken into account while rendering the fog effect of the game scene, achieving the technical effect of enhancing the three-dimensional feel and realism of the fog effect, and thus solving the technical problem in the related art that the visible region of a game scene shrouded in two-dimensional fog is not combined with the terrain of the game scene map, which degrades the user's game experience.
Description of drawings
The drawings described herein are provided for a further understanding of the present invention and constitute a part of this application. The illustrative embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
Fig. 1 is a schematic diagram of black fog appearing in a game picture according to the related art;
Fig. 2 is a flowchart of a method for determining the visible region of a game scene according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the view penetrating tall grass according to a preferred embodiment of the present invention;
Fig. 4 is a structural block diagram of a device for determining the visible region of a game scene according to an embodiment of the present invention;
Fig. 5 is a structural block diagram of a device for determining the visible region of a game scene according to a preferred embodiment of the present invention.
Specific embodiment
In order to enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the drawings of the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the scope of protection of the present invention.
It should be noted that the terms "first", "second", and the like in the description, the claims, and the above drawings are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way are interchangeable under appropriate circumstances, so that the embodiments of the present invention described herein can be implemented in an order other than those illustrated or described herein. In addition, the terms "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that contains a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to the process, method, product, or device.
According to the embodiments of the present invention, an embodiment of a method for determining the visible region of a game scene is provided. It should be noted that the steps shown in the flowchart of the drawings may be executed in a computer system such as a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from that given herein.
Fig. 2 is a flowchart of a method for determining the visible region of a game scene according to an embodiment of the present invention. As shown in Fig. 2, the method comprises the following steps:
Step S20, create a first view according to a game scene map, wherein the first view describes the view-occlusion regions in the game scene map;
Step S22, obtain the field-of-view information of scene elements in the first view and the visible range of a game character;
Step S24, determine the visible region in the first view from the field-of-view information and the visible range.
Through the above steps, a first view describing the view-occlusion regions in the game scene map is created according to the game scene map, and the visible region in the first view is determined from the field-of-view information of the scene elements in the first view and the visible range of the game character. The terrain of the game scene map is thereby fully taken into account while rendering the fog effect of the game scene, achieving the technical effect of enhancing the three-dimensional feel and realism of the fog effect, and thus solving the technical problem in the related art that the visible region of a game scene shrouded in two-dimensional fog is not combined with the terrain of the game scene map, which degrades the user's game experience.
Optionally, in step S20, creating the first view according to the game scene map may include the following steps:
Step S201, determine the regions where view-occlusion models are located in the game scene map;
Step S202, set the parts of the first view corresponding to the view-occlusion model regions to a first color, and set the remaining parts outside the view-occlusion model regions to a second color.
In a preferred embodiment, the game scene map is divided into a two-dimensional grid. Taking a MOBA scene map as an example, assume the coordinate ranges are x ∈ (-230, 230) and z ∈ (-140, 140), ignoring the y value. The scene may include, but is not limited to, models that do not block the view, such as roads, and view-occlusion models, such as tall grass, stones, and trees, so the view-blocking regions can be marked in the scene. According to the size of the game scene map, a graphical N*M two-dimensional matrix (equivalent to the above first view) can be established with N = 460 and M = 280, corresponding to the scene map. In addition, according to whether each cell lies in a view-blocking area of the map, the two-dimensional matrix is assigned values to form a 0-1 matrix, where 0 indicates no blocking and is represented by black in the graphical two-dimensional matrix (equivalent to the above first color), and 1 indicates blocking and is represented by white in the graphical two-dimensional matrix (equivalent to the above second color).
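The gridding step above can be sketched as follows. This is a minimal illustration under assumptions of my own, not the patent's implementation: the blocker list, the helper name `build_occlusion_grid`, and the one-cell-per-world-unit resolution are all hypothetical.

```python
# Sketch of the 0-1 occlusion grid described above (hypothetical helper,
# not the patent's code). World x in (-230, 230) and z in (-140, 140)
# map onto a 460 x 280 grid at one cell per world unit.

def build_occlusion_grid(blockers, n=460, m=280, x_min=-230.0, z_min=-140.0):
    """blockers: axis-aligned rectangles (x0, z0, x1, z1) in world
    coordinates that block the view (grass, stones, trees). Returns an
    m-row by n-column grid: 0 = passable (black), 1 = blocking (white)."""
    grid = [[0] * n for _ in range(m)]
    for (x0, z0, x1, z1) in blockers:
        # Convert the world-space rectangle to grid cell index ranges.
        i0 = max(0, int(z0 - z_min)); i1 = min(m, int(z1 - z_min))
        j0 = max(0, int(x0 - x_min)); j1 = min(n, int(x1 - x_min))
        for i in range(i0, i1):
            for j in range(j0, j1):
                grid[i][j] = 1
    return grid

# One hypothetical patch of trees in the map corner.
grid = build_occlusion_grid([(-230.0, -140.0, -220.0, -130.0)])
```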
Optionally, in step S22, obtaining the field-of-view information of the scene elements may include the following steps:
Step S221, obtain the multiple camps set up in the game scene map;
Step S222, determine, from the multiple camps, the camps to which a scene element belongs;
Step S223, determine the field-of-view information using the camps to which the scene element belongs.
Suppose there are three camps in the game (namely: the Sentinel camp, the Scourge camp, and the wild-monster camp). A two-dimensional byte array may be used here to record their respective field-of-view information. The size of the array is the same as that of the scene map, likewise 460*280. Each byte has 8 bits, and the value of each bit indicates whether the corresponding camp can see the cell; optionally, 0 indicates that the camp cannot see it and 1 indicates that it can. Because each camp occupies 1 bit, the array can support at most 8 camps. Suppose the value corresponding to each camp (in hexadecimal) is, in turn:
Sentinel: 0x01, i.e. 00000001;
Scourge: 0x02, i.e. 00000010;
Wild monster: 0x04, i.e. 00000100.
If the region corresponding to a specific element of the array (equivalent to the above scene element) is within the field of view of the Sentinel camp, 0x01 is added to the value of this element; if it is within the field of view of the Scourge camp, 0x02 is added to the value of this element; if it is within the field of view of the wild-monster camp, 0x04 is added to the value of this element. The camps to whose fields of view a cell belongs can therefore be determined from the array element value.
Optionally, in step S222, determining from the multiple camps the camps to which a scene element belongs may include the following steps:
Step S2221, convert the hexadecimal parameter value configured in advance for each of the multiple camps into a binary parameter value;
Step S2222, when a bitwise logical OR of the binary parameter values of some or all of the camps yields the binary parameter value of the scene element, determine those camps to be the camps to which the scene element belongs.
Suppose the value of an array element is 0x03. First, the hexadecimal values are converted to binary as follows:
0x03 = 0000000000000011
0x01 = 0000000000000001
0x02 = 0000000000000010
Second, a bitwise OR is performed on the above binary values, i.e. 0 | 0 = 0, 0 | 1 = 1, 1 | 0 = 1. Therefore 0x03 = 0x01 | 0x02, which indicates that the scene region corresponding to this array element can be seen by the Sentinel camp and the Scourge camp, but cannot be seen by the wild-monster camp.
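The per-cell camp bookkeeping described above can be restated with ordinary integer bit operations. The camp mask values are the patent's example values; the helper names (`mark_visible`, `visible_camps`) are my own illustration.

```python
# Camp visibility stored as one bit per camp in a per-cell byte.
# Mask values follow the patent's example.
SENTINEL, SCOURGE, NEUTRAL = 0x01, 0x02, 0x04

def mark_visible(cell_value, camp_mask):
    """Set the camp's bit when the cell enters that camp's field of view."""
    return cell_value | camp_mask

def visible_camps(cell_value, camps=(SENTINEL, SCOURGE, NEUTRAL)):
    """Return the camps whose bit is set in the cell value."""
    return [c for c in camps if cell_value & c]

cell = 0
cell = mark_visible(cell, SENTINEL)
cell = mark_visible(cell, SCOURGE)
# cell is now 0x03 = 0x01 | 0x02: visible to Sentinel and Scourge only.
```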
Optionally, in step S22, obtaining the visible range of the game character may include the following steps:
Step S224, obtain the position of the game character in the game scene map;
Step S225, determine the visible range in the first view according to the initial field of view of the game character.
Each unit has a field-of-view limit, that is, the maximum distance the unit can see when nothing blocks its view. In addition, the field of view can be shared: if any friendly unit can see a target, the target is in the field of view of all units of the same camp. By marking the specific position of a particular game unit (such as a hero) in the game scene map with a specific shape (such as a circle), and combining this with the above first view, the visible area of that game unit's position can be computed within the visible range.
In a preferred embodiment, the field-of-view table is computed using the recursive shadowcasting field-of-view (FOV) algorithm. The algorithm divides the full map into eight octants and computes each octant recursively in turn. Table 1 shows the field-of-view result for one octant of the scene view, as follows:
Table 1 (one octant of the recursive shadowcasting result; the numbers 1 to 16 are row numbers counted outward from the starting cell)
@ = starting cell
# = blocking cell
· = non-blocking cell
s = shadowed cell
[Diagram: the starting cell @ lies at the bottom of the octant; the lower rows contain only non-blocking cells (·); blocking cells (#) appear around rows 12 through 15, and the cells lying behind them as seen from @, up through row 16, are marked s, i.e. shadowed.]
Here "@" indicates the hero's position, "#" indicates a view-blocking cell, the numbers indicate row numbers, and "·" in the computed result indicates a visible cell. Observed from the position of the hero ("@"), the positions behind a view-blocking cell "#" are invisible cells, and an invisible cell is indicated by "s". For each visible cell, the value representing the camp is then added at the corresponding position of the field-of-view matrix.
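A full recursive shadowcasting implementation is lengthy, but the core property it computes, that cells lying behind a blocker are shadowed, can be sketched with a per-cell line-of-sight test. This simplification is mine, not the patent's algorithm, and it only approximates the shadowcasting result.

```python
# Simplified field-of-view check: a cell is visible when no blocking cell
# lies strictly between it and the origin along the connecting line.

def line_cells(x0, y0, x1, y1):
    """Integer cells on the segment from (x0, y0) to (x1, y1) (Bresenham)."""
    cells, dx, dy = [], abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            return cells
        e2 = 2 * err
        if e2 >= dy:
            err += dy; x0 += sx
        if e2 <= dx:
            err += dx; y0 += sy

def is_visible(grid, origin, target):
    """grid[y][x] == 1 marks a blocking cell; intermediate blockers shadow."""
    for (x, y) in line_cells(*origin, *target)[1:-1]:
        if grid[y][x] == 1:
            return False
    return True

grid = [[0] * 5 for _ in range(5)]
grid[2][2] = 1  # one blocking cell in the middle of a small map
```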
In addition, it should be noted that in the above scene model, the essence of tall grass can be understood as one-way glass: a unit inside a grass region can observe targets outside the grass region, but a unit outside the grass region cannot observe targets inside it. That is, although tall grass blocks the view, the view can penetrate within the same patch of grass.
Because tall grass is a special view-occlusion unit within which the view can penetrate, the grid cells corresponding to connected tall grass are stitched together and given a unique grass identifier (ID); the view can then penetrate within the same grass ID. Fig. 3 is a schematic diagram of the view penetrating tall grass according to a preferred embodiment of the present invention. As shown in Fig. 3, the grid cells indicate the grass region and are connected to one another. When the hero (black dot) is not within the grass, as shown on the left side of Fig. 3, the grass region blocks the view (indicated by shading); after the hero (black dot) enters the grass region, as shown on the right side of Fig. 3, the grass occlusion originally indicated by shading is cancelled, and the visible area of that patch is then computed.
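Assigning one ID per connected patch of grass, as described above, is connected-component labelling. A flood-fill sketch follows; the function name and the 4-connectivity choice are my illustration, not taken from the patent.

```python
# Label each 4-connected patch of grass cells with a unique grass ID,
# so that visibility can later be shared within one ID.

def label_grass(grass):
    """grass[i][j] is True for grass cells. Returns a same-size matrix of
    patch IDs (0 for non-grass, 1..k for the k connected patches)."""
    m, n = len(grass), len(grass[0])
    ids = [[0] * n for _ in range(m)]
    next_id = 0
    for si in range(m):
        for sj in range(n):
            if grass[si][sj] and ids[si][sj] == 0:
                next_id += 1
                stack = [(si, sj)]
                while stack:  # iterative flood fill over 4-neighbours
                    i, j = stack.pop()
                    if 0 <= i < m and 0 <= j < n and grass[i][j] and ids[i][j] == 0:
                        ids[i][j] = next_id
                        stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    return ids

patches = label_grass([
    [True,  True,  False],
    [False, False, False],
    [False, True,  True],
])
```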
It should be noted that both the server side and the client need to use the above field-of-view algorithm: the result computed on the server side is used to judge visibility, facilitating the transmission of visible-region (AOI) data, while the client performs rendering according to the field-of-view result to generate the fog effect.
Optionally, in step S24, after the visible region is determined from the field-of-view information and the visible range, the following steps may also be performed:
Step S25, create a texture identical in size to the first view, wherein each pixel of the texture includes a first color channel and a second color channel;
Step S26, alternately sample the visible regions determined in two adjacent computations into the first color channel and the second color channel, obtaining a second view and a third view respectively;
Step S27, interpolate according to the time difference between generating the second view and generating the third view, obtaining a transition view;
Step S28, apply Gaussian blur to the edges of the visible region in the transition view, obtaining a view to be rendered;
Step S29, fuse the view to be rendered with the game scene map, obtaining a view to be displayed.
The second view and the third view represent the field-of-view matrix results of the fog computation sampled into two channels, the R channel (equivalent to the above first color channel) and the G channel (equivalent to the above second color channel). The sampling process is as follows: create a texture with the same size as the above first view, with a width of 460 pixels and a height of 280 pixels, where each pixel stores A8R8G8B8, that is, there are ARGB channels and each channel stores 8 bits of data. If a scene element view_matrix[i][j] in the first view is in the visible state, the R channel or the G channel of texture[i][j] is assigned 255, otherwise it is assigned 0. That is, the 0th computation is written into the R channel, the 1st into the G channel, the 2nd into the R channel (overwriting the 0th result), the 3rd into the G channel (overwriting the 1st result), and so on. In a preferred implementation, the visible field of view may be indicated by a first color (such as white) and the invisible field of view by a second color (such as black).
In addition, if, to improve the efficiency of fog rendering, one fog computation is performed only at every preset interval (e.g. 0.5 s), the fog will appear to jump. To solve this problem, the two adjacent computation results can be written into the R and G channels of the texture, and then blended over time in the shader. For example, a region that was originally invisible (i.e. shown entirely in black) becomes visible (shown entirely transparent) after the hero moves; in this process it gradually changes from black to fully transparent. A time-based transition is therefore needed, and the specific computation is as follows:
float c_red = value of the R channel (0-1);
float c_green = value of the G channel (0-1);
float blend_val = fusion value computed from the time (0-1);
float color = (1.0 - blend_val) * c_red + blend_val * c_green;
Color values in the shader are mapped to the interval [0, 1], with 255 corresponding to 1 and 0 corresponding to 0. c_red and c_green are the sampled values of the R and G channels, representing the results of two consecutive field-of-view computations. Suppose c_red = 0.2 and c_green = 1; then the results as blend_val (which produces the above transition view) varies are shown in Table 2:
Table 2
blend_val | 0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1 |
color | 0.2 | 0.28 | 0.36 | 0.44 | 0.52 | 0.6 | 0.68 | 0.76 | 0.84 | 0.92 | 1.0 |
This solves the fog jump problem, making the fog transition smooth and natural.
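The linear blend in the shader formula above can be checked numerically. The following is a plain restatement of that expression; the function name `fog_blend` is my own.

```python
# Time-based blend between two successive fog results, as in the shader
# formula color = (1 - blend_val) * c_red + blend_val * c_green.

def fog_blend(c_red, c_green, blend_val):
    return (1.0 - blend_val) * c_red + blend_val * c_green

# Reproduce the table row for c_red = 0.2 and c_green = 1.
row = [round(fog_blend(0.2, 1.0, b / 10.0), 2) for b in range(11)]
```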
The view to be rendered is the texture obtained after applying M × N (such as 3 × 3) Gaussian blur to the transition view. To achieve an edge-blur transition effect, Gaussian blur is further applied to the edges. Since most pixels in the fog texture are, together with their surrounding pixels, entirely visible or entirely invisible, only the edges need to be blurred, which improves the computation rate. The edge detection method is as follows:
To detect whether a pixel texture[i][j] is located at a fog edge (the boundary between the visible region and the invisible region), the eight pixels around texture[i][j] can be chosen: texture[i+1][j], texture[i+2][j], texture[i-1][j], texture[i-2][j], texture[i][j+1], texture[i][j+2], texture[i][j-1], and texture[i][j-2]. If the values of these nine pixels are all identical, pixel texture[i][j] is not at a fog edge; otherwise, pixel texture[i][j] is at a fog edge, and Gaussian blur needs to be applied to it.
It should be noted that the eight pixels chosen around texture[i][j] above can also be replaced by four points or by more than eight pixels. However, choosing too many pixels reduces detection efficiency, while choosing too few affects detection accuracy. Preferably, the embodiment of the present invention chooses 8 pixels, which requires no resampling.
Suppose the Gaussian blur parameter is blurweight[9], an array of length 9 holding the weights of the 9 pixel values above, where the specific value "9" can be adjusted in real time according to the desired blur effect. The blur formula is:
blurcolor = texture[i][j] * blurweight[0];
blurcolor += texture[i-1][j-1] * blurweight[1];
blurcolor += texture[i-1][j] * blurweight[2];
blurcolor += texture[i-1][j+1] * blurweight[3];
blurcolor += texture[i][j-1] * blurweight[4];
blurcolor += texture[i][j+1] * blurweight[5];
blurcolor += texture[i+1][j-1] * blurweight[6];
blurcolor += texture[i+1][j] * blurweight[7];
blurcolor += texture[i+1][j+1] * blurweight[8];
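The edge test and the 3 × 3 weighted sum above can be sketched together as follows. The uniform 1/9 weights stand in for a real Gaussian kernel, and the helper names are mine; this is an illustration, not the patent's shader code.

```python
# Edge-only blur as described above: a pixel is blurred only when its
# cross-shaped neighbourhood is not uniform.

def at_fog_edge(tex, i, j):
    """True when the 8 cross neighbours at distance 1 and 2 differ from tex[i][j]."""
    neighbours = [tex[i + 1][j], tex[i + 2][j], tex[i - 1][j], tex[i - 2][j],
                  tex[i][j + 1], tex[i][j + 2], tex[i][j - 1], tex[i][j - 2]]
    return any(v != tex[i][j] for v in neighbours)

def blur_pixel(tex, i, j, weights):
    """3 x 3 weighted sum around tex[i][j], mirroring the blur formula."""
    offsets = [(0, 0), (-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    return sum(tex[i + di][j + dj] * w for (di, dj), w in zip(offsets, weights))

# A 10 x 10 fog texture: left half visible (255), right half invisible (0).
tex = [[255] * 5 + [0] * 5 for _ in range(10)]
w = [1.0 / 9] * 9  # placeholder weights for a real Gaussian kernel
```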
In the post-processing (post process) stage of rendering, the three-dimensional scene rendered under the current screen view has been obtained. The fog texture (equivalent to the above view to be rendered) is then fused with the game scene map, and the final Fog of War effect (equivalent to the above view to be displayed) is obtained.
Optionally, in step S29, fusing the view to be rendered with the game scene map to obtain the view to be displayed may include the following steps:
Step S291: converting the space coordinates of the to-be-displayed part of the view to be rendered corresponding to the current game picture into screen coordinates;
Step S292: converting the screen coordinates into world coordinates;
Step S293: obtaining, through the world coordinates, the visible region and non-visible region corresponding to the current game picture;
Step S294: fusing the obtained visible region and non-visible region into the game scene map to obtain the view to be displayed.
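Steps S291 through S294 can be sketched as one per-pixel loop. This is an illustrative sketch only; the helper callables (`screen_to_world`, `fog_alpha`, `scene_color`) are hypothetical stand-ins for the coordinate transforms and fog lookup detailed later in the description, and the multiplicative darkening is one common way of fusing fog with the scene, assumed here.

```python
def fuse(screen_pixels, screen_to_world, fog_alpha, scene_color):
    """Steps S291-S294 as one loop: for each screen pixel, recover the
    world position, look up fog visibility, and blend with the scene map."""
    out = []
    for (sx, sy) in screen_pixels:
        wx, wy = screen_to_world(sx, sy)               # S291-S292
        alpha = fog_alpha(wx, wy)                      # S293: 1 = visible, 0 = fogged
        r, g, b = scene_color(sx, sy)
        out.append((r * alpha, g * alpha, b * alpha))  # S294: darken fogged pixels
    return out
```

In the patent this loop runs per fragment on the GPU, as explained below; the Python version only shows the data flow.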
The dense-fog texture is created from the entire three-dimensional scene, whereas the screen displays only a small portion of that scene, so the fog-texture region corresponding to the on-screen scene must be computed. This is a relatively complex calculation; performing it on the central processing unit (CPU) would lengthen the entire rendering process. For this reason, in a preferred embodiment, the calculation is moved to the graphics processor (GPU). The concrete calculation chain, screen coordinates → world coordinates → fog coordinates, is as follows:
Displaying a game model on screen requires a complete rendering process, most of which is performed automatically by the graphics card, chiefly by the vertex shader and the pixel shader. In the vertex shader, the most important step is the vertex transformation: the model-space coordinates of the model are converted into screen coordinates through a coordinate transform known as the world (world)–observation (view)–projection (project) matrix transform. That is, model coordinates (x, y, z) are converted to screen coordinates (x1, y1) by the following matrix formula:
(x1, y1, z1) = (x, y, z) * world * view * project;
where (x1, y1) are the screen coordinates and z1 is the depth coordinate, which is stored in the depth texture (depthtexture); world is the world transformation matrix, view is the observation matrix, and project is the projection matrix. These matrices are preset by the game engine.
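The row-vector convention of the formula above can be demonstrated with plain 4x4 matrix arithmetic. This sketch is illustrative and not from the patent: the world matrix shown is a hypothetical translation, and identity matrices stand in for the view and projection matrices a real engine would supply.

```python
def mat_mul(a, b):
    """Product of two 4x4 row-major matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(point, m):
    """Row-vector transform (x, y, z, 1) * m, matching the formula above."""
    v = list(point) + [1.0]
    return [sum(v[k] * m[k][j] for k in range(4)) for j in range(4)]

IDENTITY = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

# Hypothetical world matrix: translate by (10, 0, 0). In the row-vector
# convention the translation terms sit in the last row.
world = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [10, 0, 0, 1]]
view, project = IDENTITY, IDENTITY  # stand-ins for engine-preset matrices

wvp = mat_mul(mat_mul(world, view), project)
screen = transform((1.0, 2.0, 3.0), wvp)  # (x1, y1, z1, w)
```

Composing world, view, and project once and reusing the product per vertex is exactly what lets the vertex shader do this transform cheaply.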
Next, in the post process stage, the depth value z1 of the coordinate is obtained from depthtexture;
Then, the world coordinates are calculated by the following formulas:
float z = texture2D(depthtexture, xy).x;
float4 sPos = vec4(xy * 2.0 - 1.0, z, 1.0);
float4 pos = inv_proj_view_mat * sPos;
where texture2D denotes the sampling function, xy denotes the screen coordinates, depthtexture is the depth texture, and sPos is the screen coordinates augmented with depth. From the formula above, it follows that obtaining world coordinates from screen coordinates requires multiplying by the inverse of the projection–observation matrix (inv_proj_view_mat), i.e.:
(x1, y1, z1) = (x, y, z) * world * view * project;
(x1, y1, z1) * inv_proj_view = (x, y, z) * world * view * project * inv_proj_view;
(x1, y1, z1) * inv_proj_view = (x, y, z) * world;
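The cancellation in the derivation above — transforming by view*project and then by its inverse recovers the world-space position — can be checked numerically. This sketch is illustrative only: a diagonal matrix stands in for a real view*project product (whose inverse would need full 4x4 inversion), and the perspective divide a real projection entails is omitted.

```python
def transform(v, m):
    """Row-vector times 4x4 matrix, as in the derivation above."""
    return [sum(v[k] * m[k][j] for k in range(4)) for j in range(4)]

# Illustrative diagonal stand-in for view*project; its inverse is the
# elementwise reciprocal of the diagonal.
proj_view     = [[2.0, 0, 0, 0], [0, 4.0,  0, 0], [0, 0, 0.5, 0], [0, 0, 0, 1.0]]
inv_proj_view = [[0.5, 0, 0, 0], [0, 0.25, 0, 0], [0, 0, 2.0, 0], [0, 0, 0, 1.0]]

world_pos = [3.0, -1.0, 7.0, 1.0]           # (x, y, z) * world, homogeneous
clip = transform(world_pos, proj_view)       # forward: into screen/clip space
recovered = transform(clip, inv_proj_view)   # back: multiply by the inverse
```

This is why the shader only needs inv_proj_view_mat (precomputed once per frame) rather than re-deriving anything per pixel.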
where pos = (x, y, z) * world is the world coordinate. Then the fog coordinates corresponding to the screen coordinates are obtained by the following formulas:
float x = (pos.x + width/2) / width;
float y = (pos.y + height/2) / height;
where width is the scene width (460) and height is the scene height (280).
In this way, the fog coordinates corresponding to the screen coordinates are obtained, and the Gaussian-blurred fog texture can then be fused with the scene map to produce the final displayed game picture.
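The normalization above maps a world-space position centered on the scene origin to [0, 1] fog-texture coordinates. A minimal Python sketch (illustrative, using the 460x280 scene size given above):

```python
def fog_uv(pos_x, pos_y, width=460.0, height=280.0):
    """Map a world-space position to normalized fog-texture coordinates,
    following x = (pos.x + width/2) / width (and likewise for y) above."""
    return (pos_x + width / 2.0) / width, (pos_y + height / 2.0) / height
```

The scene center maps to (0.5, 0.5) and the scene corners to (0, 0) and (1, 1), which is exactly the UV range a texture sampler expects.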
According to an embodiment of the present invention, an embodiment of a device for determining the visible region of a game scene is provided. Fig. 4 is a structural block diagram of the device for determining the visible region of a game scene according to an embodiment of the present invention. As shown in Fig. 4, the device may include: a first creation module 10, for creating a first view according to a game scene map, wherein the first view describes the field-obscuring regions in the game scene map; an acquisition module 20, for acquiring the field-of-view information of scene elements and the visual range of the game role in the first view; and a determining module 30, for determining the visible region in the first view through the field-of-view information and the visual range.
Optionally, Fig. 5 is a structural block diagram of the device for determining the visible region of a game scene according to a preferred embodiment of the present invention. As shown in Fig. 5, the first creation module 10 may include: a first determination unit 100, for determining the regions where field-obscuring models are located in the game scene map; and a setting unit 102, for setting the parts of the first view corresponding to those regions to a first color, and setting the remaining parts outside the corresponding parts to a second color.
Optionally, as shown in Fig. 5, the acquisition module 20 may include: a first acquisition unit 200, for acquiring the multiple camps set in the game scene map; a second determination unit 202, for determining, from the multiple camps, the camp to which a scene element belongs; and a third determination unit 204, for determining the field-of-view information using the camps to which the scene element belongs.
Optionally, the second determination unit 202 may include: a conversion subunit (not shown), for converting the hexadecimal parameter value preconfigured for each of the multiple camps into a binary parameter value; and a determination subunit (not shown), for determining some or all of the camps as the camps to which a scene element belongs when the binary parameter values corresponding to those camps, combined by a bitwise logical OR operation, yield the binary parameter value of the scene element.
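The camp-membership test described above is a bitmask check. A minimal Python sketch, illustrative only: the camp names and one-bit-per-camp flag values are hypothetical, not taken from the patent.

```python
# Hypothetical camp flags: one bit per camp, as configured in hexadecimal.
CAMP_RED, CAMP_BLUE, CAMP_GREEN = 0x1, 0x2, 0x4

def owning_camps(element_mask, camps):
    """Return the camps whose bitwise-OR-ed flags reproduce the element's
    binary parameter value, i.e. the camps the scene element belongs to."""
    owners = [c for c in camps if c & element_mask]
    combined = 0
    for c in owners:
        combined |= c  # the bitwise logical OR operation from the text
    return owners if combined == element_mask else []
```

Encoding camp membership as bit flags makes "belongs to several camps" a single integer comparison, which is why the patent converts the hexadecimal configuration values to binary before testing.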
Optionally, the acquisition module 20 may include: a second acquisition unit 206, for acquiring the position of the game role in the game scene map; and a fourth determination unit 208, for determining the visual range in the first view according to the initial field of view of the game role.
Optionally, as shown in Fig. 5, the device may further include: a second creation module 40, for creating a texture identical to the first view, wherein each pixel in the texture includes a first color channel and a second color channel; a sampling module 50, for alternately sampling the visible regions determined in two adjacent passes into the first color channel and the second color channel, obtaining a second view and a third view respectively; an adjustment module 60, for adjusting according to the time difference between generating the second view and generating the third view, obtaining a transitional view; a processing module 70, for performing Gaussian blur processing on the edges of the visible region in the transitional view, obtaining a view to be rendered; and a fusion module 80, for fusing the view to be rendered with the game scene map, obtaining a view to be displayed.
Optionally, as shown in Fig. 5, the fusion module 80 includes: a first conversion unit 800, for converting the space coordinates of the to-be-displayed part of the view to be rendered corresponding to the current game picture into screen coordinates; a second conversion unit 802, for converting the screen coordinates into world coordinates; a third acquisition unit 804, for obtaining, through the world coordinates, the visible region and non-visible region corresponding to the current game picture; and a fusion unit 806, for fusing the obtained visible region and non-visible region into the game scene map, obtaining the view to be displayed.
The serial numbers of the above embodiments of the invention are for description only and do not represent the relative merits of the embodiments.
In the above embodiments of the invention, the description of each embodiment has its own emphasis; for parts not detailed in a given embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division into units is only a division by logical function; in actual implementation there may be other ways of division — for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention — in essence, the part that contributes beyond the existing technology, or all or part of the technical solution — may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or some of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), a mobile hard disk, a magnetic disk, or an optical disk.
The above are only preferred embodiments of the present invention. It should be noted that, for those of ordinary skill in the art, various improvements and modifications may be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (12)
1. A method for determining the visible region of a game scene, comprising:
creating a first view according to a game scene map, wherein the first view describes the field-obscuring regions in the game scene map;
acquiring the field-of-view information of scene elements and the visual range of a game role in the first view; and
determining the visible region in the first view through the field-of-view information and the visual range;
wherein, after determining the visible region through the field-of-view information and the visual range, the method further comprises: creating a texture identical to the first view, wherein each pixel in the texture comprises a first color channel and a second color channel; alternately sampling the visible regions determined in two adjacent passes into the first color channel and the second color channel, obtaining a second view and a third view respectively; adjusting according to the time difference between generating the second view and generating the third view, obtaining a transitional view; performing Gaussian blur processing on the edges of the visible region in the transitional view, obtaining a view to be rendered; and fusing the view to be rendered with the game scene map, obtaining a view to be displayed.
2. The method according to claim 1, wherein creating the first view according to the game scene map comprises:
determining, in the game scene map, the regions where field-obscuring models are located; and
setting the parts of the first view corresponding to those regions to a first color, and setting the remaining parts outside the corresponding parts to a second color.
3. The method according to claim 1, wherein acquiring the field-of-view information of the scene elements comprises:
acquiring the multiple camps set in the game scene map;
determining, from the multiple camps, the camp to which a scene element belongs; and
determining the field-of-view information using the camps to which the scene element belongs.
4. The method according to claim 3, wherein determining, from the multiple camps, the camp to which the scene element belongs comprises:
converting the hexadecimal parameter value preconfigured for each of the multiple camps into a binary parameter value; and
when the binary parameter values corresponding to some or all of the multiple camps, combined by a bitwise logical OR operation, yield the binary parameter value of the scene element, determining those camps as the camps to which the scene element belongs.
5. The method according to claim 1, wherein acquiring the visual range of the game role comprises:
acquiring the position of the game role in the game scene map; and
determining the visual range in the first view according to the initial field of view of the game role.
6. The method according to claim 1, wherein fusing the view to be rendered with the game scene map to obtain the view to be displayed comprises:
converting the space coordinates of the to-be-displayed part of the view to be rendered corresponding to the current game picture into screen coordinates;
converting the screen coordinates into world coordinates;
obtaining, through the world coordinates, the visible region and non-visible region corresponding to the current game picture; and
fusing the obtained visible region and non-visible region into the game scene map, obtaining the view to be displayed.
7. A device for determining the visible region of a game scene, comprising:
a first creation module, for creating a first view according to a game scene map, wherein the first view describes the field-obscuring regions in the game scene map;
an acquisition module, for acquiring the field-of-view information of scene elements and the visual range of a game role in the first view; and
a determining module, for determining the visible region in the first view through the field-of-view information and the visual range;
wherein the device further comprises: a second creation module, for creating a texture identical to the first view, wherein each pixel in the texture comprises a first color channel and a second color channel; a sampling module, for alternately sampling the visible regions determined in two adjacent passes into the first color channel and the second color channel, obtaining a second view and a third view respectively; an adjustment module, for adjusting according to the time difference between generating the second view and generating the third view, obtaining a transitional view; a processing module, for performing Gaussian blur processing on the edges of the visible region in the transitional view, obtaining a view to be rendered; and a fusion module, for fusing the view to be rendered with the game scene map, obtaining a view to be displayed.
8. The device according to claim 7, wherein the first creation module comprises:
a first determination unit, for determining, in the game scene map, the regions where field-obscuring models are located; and
a setting unit, for setting the parts of the first view corresponding to those regions to a first color, and setting the remaining parts outside the corresponding parts to a second color.
9. The device according to claim 7, wherein the acquisition module comprises:
a first acquisition unit, for acquiring the multiple camps set in the game scene map;
a second determination unit, for determining, from the multiple camps, the camp to which a scene element belongs; and
a third determination unit, for determining the field-of-view information using the camps to which the scene element belongs.
10. The device according to claim 9, wherein the second determination unit comprises:
a conversion subunit, for converting the hexadecimal parameter value preconfigured for each of the multiple camps into a binary parameter value; and
a determination subunit, for determining some or all of the camps as the camps to which the scene element belongs when the binary parameter values corresponding to those camps, combined by a bitwise logical OR operation, yield the binary parameter value of the scene element.
11. The device according to claim 7, wherein the acquisition module comprises:
a second acquisition unit, for acquiring the position of the game role in the game scene map; and
a fourth determination unit, for determining the visual range in the first view according to the initial field of view of the game role.
12. The device according to claim 7, wherein the fusion module comprises:
a first conversion unit, for converting the space coordinates of the to-be-displayed part of the view to be rendered corresponding to the current game picture into screen coordinates;
a second conversion unit, for converting the screen coordinates into world coordinates;
a third acquisition unit, for obtaining, through the world coordinates, the visible region and non-visible region corresponding to the current game picture; and
a fusion unit, for fusing the obtained visible region and non-visible region into the game scene map, obtaining the view to be displayed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610649889.5A CN106296786B (en) | 2016-08-09 | 2016-08-09 | The determination method and device of scene of game visibility region |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610649889.5A CN106296786B (en) | 2016-08-09 | 2016-08-09 | The determination method and device of scene of game visibility region |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106296786A CN106296786A (en) | 2017-01-04 |
CN106296786B true CN106296786B (en) | 2019-02-15 |
Family
ID=57667535
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610649889.5A Active CN106296786B (en) | 2016-08-09 | 2016-08-09 | The determination method and device of scene of game visibility region |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106296786B (en) |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106730844B (en) * | 2017-01-10 | 2019-09-24 | 网易(杭州)网络有限公司 | A kind of scene runing time test method and device |
CN106975219B (en) * | 2017-03-27 | 2019-02-12 | 网易(杭州)网络有限公司 | Display control method and device, storage medium, the electronic equipment of game picture |
CN107358579B (en) * | 2017-06-05 | 2020-10-02 | 北京印刷学院 | Game war fog-lost realization method |
CN107198876B (en) * | 2017-06-07 | 2021-02-05 | 北京小鸟看看科技有限公司 | Game scene loading method and device |
CN107469351B (en) * | 2017-06-20 | 2021-02-09 | 网易(杭州)网络有限公司 | Game picture display method and device, storage medium and electronic equipment |
CN109658495B (en) * | 2017-10-10 | 2022-09-20 | 腾讯科技(深圳)有限公司 | Rendering method and device for ambient light shielding effect and electronic equipment |
CN107875630B (en) * | 2017-11-17 | 2020-11-24 | 杭州电魂网络科技股份有限公司 | Rendering area determination method and device |
CN108031117B (en) * | 2017-12-06 | 2021-03-16 | 北京像素软件科技股份有限公司 | Regional fog effect implementation method and device |
CN108196765A (en) * | 2017-12-13 | 2018-06-22 | 网易(杭州)网络有限公司 | Display control method, electronic equipment and storage medium |
CN108257103B (en) * | 2018-01-25 | 2020-08-25 | 网易(杭州)网络有限公司 | Method and device for eliminating occlusion of game scene, processor and terminal |
CN108389245B (en) * | 2018-02-13 | 2022-11-04 | 鲸彩在线科技(大连)有限公司 | Animation scene rendering method and device, electronic equipment and readable storage medium |
GB2571306A (en) * | 2018-02-23 | 2019-08-28 | Sony Interactive Entertainment Europe Ltd | Video recording and playback systems and methods |
CN109360263B (en) * | 2018-10-09 | 2019-07-19 | 温州大学 | A kind of the Real-time Soft Shadows generation method and device of resourceoriented restricted movement equipment |
CN111841010A (en) * | 2019-04-26 | 2020-10-30 | 网易(杭州)网络有限公司 | Urban road network generation method and device, storage medium, processor and terminal |
CN110251940A (en) * | 2019-07-10 | 2019-09-20 | 网易(杭州)网络有限公司 | A kind of method and apparatus that game picture is shown |
CN110415320A (en) * | 2019-07-25 | 2019-11-05 | 上海米哈游网络科技股份有限公司 | A kind of scene prebake method, apparatus, storage medium and electronic equipment |
CN111111172B (en) * | 2019-12-02 | 2023-05-26 | 网易(杭州)网络有限公司 | Surface processing method and device for game scene, processor and electronic device |
CN111111189A (en) * | 2019-12-16 | 2020-05-08 | 北京像素软件科技股份有限公司 | Game AOI management method and device and electronic equipment |
CN112516592A (en) * | 2020-12-15 | 2021-03-19 | 网易(杭州)网络有限公司 | Method and device for processing view mode in game, storage medium and terminal equipment |
CN112822397B (en) * | 2020-12-31 | 2022-07-05 | 上海米哈游天命科技有限公司 | Game picture shooting method, device, equipment and storage medium |
CN112717390A (en) * | 2021-01-12 | 2021-04-30 | 腾讯科技(深圳)有限公司 | Virtual scene display method, device, equipment and storage medium |
CN113192208A (en) * | 2021-04-08 | 2021-07-30 | 北京鼎联网络科技有限公司 | Three-dimensional roaming method and device |
CN113827960B (en) * | 2021-09-01 | 2023-06-02 | 广州趣丸网络科技有限公司 | Game view generation method and device, electronic equipment and storage medium |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105677395A (en) * | 2015-12-28 | 2016-06-15 | 珠海金山网络游戏科技有限公司 | Game scene pixel blanking system and method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4671196B2 (en) * | 2006-10-31 | 2011-04-13 | 株式会社スクウェア・エニックス | NETWORK GAME SYSTEM, NETWORK GAME TERMINAL DEVICE, GAME SCREEN DISPLAY METHOD, PROGRAM, AND RECORDING MEDIUM |
2016-08-09: CN application CN201610649889.5A filed; granted as CN106296786B (en), status Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105677395A (en) * | 2015-12-28 | 2016-06-15 | 珠海金山网络游戏科技有限公司 | Game scene pixel blanking system and method |
Non-Patent Citations (1)
Title |
---|
Design and Implementation of an HTML5-Based Real-Time Strategy Game; Yang Yuanchao; China Master's Theses Full-text Database, Information Science and Technology; 2016-03-15 (No. 03); pp. I138-5642 |
Also Published As
Publication number | Publication date |
---|---|
CN106296786A (en) | 2017-01-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106296786B (en) | The determination method and device of scene of game visibility region | |
CN106780642B (en) | Generation method and device of camouflage cover map | |
US20190299097A1 (en) | Method and apparatus for enhanced graphics rendering in a video game environment | |
CN106157354B (en) | A kind of three-dimensional scenic switching method and system | |
CN109685909A (en) | Display methods, device, storage medium and the electronic device of image | |
CN107154032B (en) | A kind of image processing method and device | |
CN107638690B (en) | Method, device, server and medium for realizing augmented reality | |
CN106898051A (en) | The visual field elimination method and server of a kind of virtual role | |
CN105913476B (en) | The rendering intent and device of vegetation map picture | |
CN108421257A (en) | Determination method, apparatus, storage medium and the electronic device of invisible element | |
CN112102492B (en) | Game resource manufacturing method and device, storage medium and terminal | |
WO2016098690A1 (en) | Texture generation system | |
CN112669448A (en) | Virtual data set development method, system and storage medium based on three-dimensional reconstruction technology | |
CN112675545A (en) | Method and device for displaying surface simulation picture, storage medium and electronic equipment | |
CN108837510A (en) | Methods of exhibiting and device, storage medium, the electronic device of information | |
CN113457133B (en) | Game display method, game display device, electronic equipment and storage medium | |
CN112200899B (en) | Method for realizing model service interaction by adopting instantiation rendering | |
CN108230430A (en) | The processing method and processing device of cloud layer shade figure | |
CN116402931A (en) | Volume rendering method, apparatus, computer device, and computer-readable storage medium | |
Whelan et al. | Formulated silhouettes for sketching terrain | |
CN115272628A (en) | Rendering method and device of three-dimensional model, computer equipment and storage medium | |
CN113313796B (en) | Scene generation method, device, computer equipment and storage medium | |
CN113117334B (en) | Method and related device for determining visible area of target point | |
CN112037292B (en) | Weather system generation method, device and equipment | |
CN111462343B (en) | Data processing method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |