CN111773685A - Method and device for dynamically generating game role visual field - Google Patents

Method and device for dynamically generating game role visual field

Info

Publication number
CN111773685A
Authority
CN
China
Prior art keywords
sub
rendered
obstacle
block
view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010549889.4A
Other languages
Chinese (zh)
Inventor
Li Yanchun (李艳春)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202010549889.4A
Publication of CN111773685A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525 Changing parameters of virtual cameras
    • A63F13/5258 Changing parameters of virtual cameras by dynamically adapting the position of the virtual camera to keep a game object or game character in its viewing frustum, e.g. for tracking a character or a ball

Abstract

A method and apparatus for dynamically generating a view of a game character, the method comprising the steps of: dividing a field-of-view shape predefined for a game character into a plurality of sub-blocks such that each sub-block is independently rendered in real time; detecting, when each sub-block is rendered in real time, whether an obstacle predefined as a view-blocking obstacle exists within the to-be-rendered range of that sub-block, and limiting the real-time rendering range of at least one sub-block to the area on the side of the obstacle facing the game character when such an obstacle is detected within that sub-block's to-be-rendered range. The invention saves the art production time and cost of view pictures, improves game production efficiency, and reduces the consumption of game resources. Moreover, the method can flexibly and dynamically adjust the real-time rendering range of the view shape according to the actual game scene, and more realistically reflects how a person's field of view changes when it is blocked in the real world, thereby significantly improving the player's game experience.

Description

Method and device for dynamically generating game role visual field
Technical Field
The invention relates to the field of game picture processing, and in particular to a method and a device for dynamically generating a game character's field of view.
Background
For some types of games, characters within the game (e.g., player characters or monsters) have a defined field of view that needs to be displayed to the player. This is typically done by displaying, under the game character's feet, a shape that extends a certain range forward along the character's viewing direction and marks the area currently visible to the character. When an object of interest to the character enters this area, meaning the character is able to see it, the character may react to the object autonomously or under the player's control, depending on the settings of the game program. In an actual game scene the environment may be complicated: various obstacles such as walls, pillars, and wooden boxes may block or partially block the character's view. In that case the character should not see objects behind the obstacles, so the character's field of view cannot be displayed to the player in a way that passes through the obstacles.
Currently, a common way to display a game character's field of view is to have an artist draw a picture of fixed size and shape (e.g., a sector) in advance, and then simply display that picture under the character's feet while the game is running.
Taking a sector-shaped field of view as an example, the specific process of generating the character's view through art production is as follows. The game planner first determines the angle of the sector as required, for example 30 degrees for a narrow field of view, 60 degrees for a medium one, and 90 degrees for a wide one. The artists then produce separate sector pictures of 30 degrees, 60 degrees, 90 degrees, and so on. During the game, the program displays the sector picture matching each character's configured field of view under that character's feet. In addition, the pictures may be scaled as needed to change the actual radius of the field-of-view sector.
The traditional way of generating a game character's field of view has the following drawbacks:
First, this method can only use pictures prepared in advance. As in the example above, if the artists prepared only 30-, 60-, and 90-degree view pictures, a game character can only be given a 30-, 60-, or 90-degree sector-shaped field of view; to obtain, say, a 45-degree field of view, the artists must produce a new picture, which costs considerable art time and is inefficient. Meanwhile, to use these pictures in a game, the pre-made picture files must be stored as game resources, which occupies game resources.
Second, since the character's field of view can only be selected from view pictures made in advance, adjusting and changing the field of view is inflexible when a complicated game scene is encountered. Even if more view pictures are prepared in advance so that the current view angle can be chosen from a larger set, this occupies more game resources and still cannot realistically reflect the dynamic change that occurs when a person's field of view is blocked by an obstacle in the real world. For example, suppose a game character has a 60-degree sector view and an obstacle blocks part of that angle. If the 60-degree sector picture is kept unchanged, part of it passes through the obstacle and is displayed behind it, which does not match a partially blocked view. Conversely, replacing the 60-degree picture with a smaller-angle picture, such as 30 degrees, or shrinking the radius of the 60-degree picture, also fails to reflect the real situation: although the obstacle prevents the character from seeing objects behind it, within the blocked angle the character can still see objects in front of the obstacle, so simply switching to a smaller-angle or smaller-radius picture is inconsistent with how a partially blocked view behaves in reality. Such a display method, dependent on pictures produced in advance, therefore harms the player's game experience.
Disclosure of Invention
The main object of the invention is to provide a method and a device for dynamically generating a game character's field of view, so that the field of view is displayed more flexibly and closer to the real situation, improving the player's experience.
The invention provides a method for dynamically generating a game character visual field, which comprises the following steps:
dividing a field of view shape predefined for a game character into a plurality of sub-blocks such that each sub-block is independently rendered in real-time;
detecting whether an obstacle predefined as a view-blocking obstacle exists within a to-be-rendered range of each sub-block when each sub-block is rendered in real time, and limiting the real-time rendering range of at least one sub-block to an area of a side of the obstacle facing the game character when the obstacle is detected within the to-be-rendered range of the at least one sub-block.
Further, when each sub-block is rendered in real time, the emission trajectory of a virtual scene ray emitted from the origin of the field-of-view shape into the to-be-rendered range of the sub-block is calculated, and it is detected whether this trajectory collides with an obstacle predefined as blocking the field of view; when a collision is detected, the collision position is determined and the sub-block is rendered in real time within a rendering range that does not exceed the collision position.
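Purely as an illustrative sketch (Python, not the claimed implementation), the steps above can be organized as follows; SubBlock, detect_obstacle_distance and render_sub_block are hypothetical placeholders for engine-specific data and code:

```python
from dataclasses import dataclass

@dataclass
class SubBlock:
    centerline_deg: float  # direction of the sub-block's centerline, in degrees
    max_radius: float      # radius of the full, unblocked field of view

def detect_obstacle_distance(origin, sub_block):
    """Hypothetical engine query: distance from origin to the first view-blocking
    obstacle along the sub-block's centerline, or None if nothing is hit."""
    raise NotImplementedError  # placeholder for an engine-specific collision query

def render_sub_block(origin, sub_block, radius):
    """Placeholder: hand the clamped sub-block to the game's mesh/rendering code."""
    print(f"render sub-block at {sub_block.centerline_deg:.1f} deg, radius {radius:.2f}")

def update_field_of_view(origin, sub_blocks):
    """The view shape has already been divided into sub_blocks; every frame,
    each sub-block is clamped against obstacles and rendered independently."""
    for sub_block in sub_blocks:
        distance = detect_obstacle_distance(origin, sub_block)
        render_radius = distance if distance is not None else sub_block.max_radius
        render_sub_block(origin, sub_block, render_radius)
```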
Further, the view shape is a sector, and the sub-blocks are a plurality of sub-sectors divided from the sector.
Further, the plurality of sub-sectors are sub-sectors equally divided by radian from the sector.
Further, the virtual scene ray is emitted into the sub-sector from the vertex of the sub-sector to be rendered that corresponds to the origin; when a collision between the ray's emission trajectory and the obstacle is detected, the collision position is determined, the radius of a new sub-sector is determined from the collision position, and the new sub-sector is rendered within the angular range of the sub-sector to be rendered.
Further, the view shape is an approximate fan shape composed of a plurality of triangles, and the sub-blocks are a plurality of small triangles divided from the approximate fan shape.
Further, the small triangles are small triangles in which the approximate fan shape is equally divided according to angles.
Further, the virtual scene ray is emitted into the small triangle from a first vertex of the small triangle to be rendered that corresponds to the origin; when a collision between the ray's emission trajectory and the obstacle is detected, the collision position is determined, the side of a new small triangle opposite the first vertex is determined from the collision position, and the new small triangle is rendered within the angular range of the small triangle to be rendered.
Further, the virtual scene ray is the median or angle bisector from the apex of the small triangle to be rendered; the length of the median or bisector is determined from the position where its emission trajectory collides with the obstacle, the positions of the other two vertices of the new small triangle are determined from that length, and the side of the new small triangle opposite the first vertex is determined from the positions of those two vertices.
A second aspect of the present invention provides an apparatus for dynamically generating a visual field of a game character, comprising:
a view partitioning module for dividing a view shape predefined for a game character into a plurality of sub-blocks such that each sub-block is independently rendered in real time;
and a detection and rendering module for detecting, when each sub-block is rendered in real time, whether an obstacle predefined as a view-blocking obstacle exists within the to-be-rendered range of that sub-block, and for limiting the real-time rendering range of at least one sub-block to the area on the side of the obstacle facing the game character when the obstacle is detected within that sub-block's to-be-rendered range.
A third aspect of the invention provides an electronic device comprising a computer-readable storage medium and a processor, the computer-readable storage medium storing an executable program that, when executed by the processor, implements the method.
A fourth aspect of the present invention is a computer-readable storage medium storing an executable program that, when executed by a processor, implements the method.
The invention has the following beneficial effects:
the invention provides a method and a device for dynamically generating a visual field of a game character, wherein the visual field shape of the game character is divided into a plurality of sub-blocks to be independently rendered in real time, whether an obstacle blocking the visual field exists in the range to be rendered of each sub-block is detected when each sub-block is rendered in real time, when the obstacle blocking the visual field exists in the range to be rendered of a certain sub-block, the real-time rendering range of the corresponding sub-block is limited to the area of one side, facing the game character, of the obstacle, and the corresponding sub-block is rendered only in the limited range. Therefore, compared with the traditional game role visual field generating mode, firstly, the visual field shape picture is not required to be made in advance by art personnel, so that the time and the personnel cost for making the art are saved, the game making efficiency is improved, the visual field shape picture is not required to be saved by the resource file for use when the game runs, and the occupation and the consumption of game resources are reduced. Furthermore, the invention can flexibly and dynamically adjust the real-time rendering range of each sub-block of the view field shape when the view field of the game role is blocked by the barrier according to the condition of the actual game scene, thereby vividly reflecting the change of the view field range when the view field of the human object is blocked by the barrier in the real world. For example, when an obstacle is detected to appear in the visual field shape and enters the visual field range of one or some sub-blocks, only the part of the affected sub-block positioned in front of the obstacle is rendered in real time, and the parts of the affected sub-block positioned in the obstacle and behind the obstacle are not rendered, so that the situation that the visual field shape passes through the obstacle to be displayed and the visual field shape is displayed in front of the obstacle is avoided, therefore, the dynamic generation mode of the visual field of the game role is consistent with the situation that the visual field of a human object in the real world is blocked, the generated visual field of the game role more vividly reflects the complex change situation when the visual field of the human object in the real world is blocked, and the game experience of a player is remarkably improved.
Drawings
FIG. 1 is a flowchart illustrating a method for dynamically generating a view of a game character according to a first embodiment of the present invention.
FIG. 2 is a flow chart of a method for dynamically generating a view of a game character according to a preferred embodiment of the present invention.
Fig. 3 is a schematic view of a 60 deg. sector field of view.
Fig. 4 is a schematic view of an approximately circular field of view consisting of a plurality of triangles.
FIG. 5 is a diagram of a sub-block of a plurality of small triangles demarcated from an approximate fan field of view in accordance with an embodiment of the present invention.
Fig. 6 is a schematic diagram of a rendering range determined by obstacle detection through a median line or a bisector during rendering of a plurality of small triangle sub-blocks according to an embodiment of the present invention.
FIG. 7 is a rendering effect diagram according to an embodiment of the present invention.
Fig. 8 is a block diagram illustrating a structure of an apparatus for dynamically generating a view of a character in a game according to a second embodiment of the present invention.
Detailed Description
The embodiments of the present invention will be described in detail below. It should be emphasized that the following description is merely exemplary in nature and is not intended to limit the scope of the invention or its application.
Referring to fig. 1, a first embodiment of the present invention provides a method for dynamically generating a view of a game character, comprising the steps of:
step S1, dividing a view shape predefined for the game character into a plurality of sub-blocks such that each sub-block is independently rendered in real time;
step S2, when each sub-block is rendered in real time, detecting whether an obstacle predefined as a view-blocking object exists in the to-be-rendered range of each sub-block, and when the obstacle is detected in the to-be-rendered range of at least one sub-block, limiting the real-time rendering range of the at least one sub-block to an area of a side of the obstacle facing the game character.
The game may run on any electronic device, such as a smartphone, tablet computer, or television, and the method of dynamically generating a game character's field of view is applicable to any game that needs to generate and display a game character's field of view.
The field-of-view shape may be a regular shape, such as a sector or a polygon approximating a sector, or an irregular shape; the method of dynamically generating the game character's field of view according to the present invention applies in either case.
Referring to fig. 2, in a preferred embodiment, in step S2, when each sub-block is rendered in real time, the emission trajectory of a virtual scene ray emitted from the origin of the field-of-view shape (i.e., the starting point of the game character's viewpoint, which may be the character's current foot position) into the to-be-rendered range of the sub-block is calculated. Whether this trajectory collides with an obstacle predefined as blocking the field of view is detected; when a collision is detected, the collision position is determined, and the sub-block is rendered in real time within a rendering range that does not exceed the collision position.
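A minimal sketch of this per-sub-block check, assuming the engine exposes some ray query that is passed in here as scene_raycast (the name and signature are illustrative, not an actual engine API):

```python
import math

def clamped_render_radius(foot_position, centerline_deg, max_radius, scene_raycast):
    """Cast a virtual scene ray from the view-shape origin (here the character's
    foot position) along the sub-block's centerline direction. If the ray hits a
    view-blocking obstacle, the render radius is limited to the collision distance;
    otherwise the sub-block keeps its full radius."""
    direction = (math.cos(math.radians(centerline_deg)),
                 math.sin(math.radians(centerline_deg)))
    hit_distance = scene_raycast(foot_position, direction, max_radius)
    return min(hit_distance, max_radius) if hit_distance is not None else max_radius
```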
It should be understood that, in the present invention, the manner of detecting whether there is an obstacle in the range to be rendered of each sub-block is not limited to the manner of emitting the virtual scene ray for object detection in the above-mentioned embodiment, but may also be any other manner of detecting whether there is a predefined object entering in a predetermined area in the game scene.
In some preferred embodiments, the field-of-view shape of the game character is a sector (it is to be understood that a sector as referred to herein also includes a full circle), and the plurality of sub-blocks are a plurality of sub-sectors divided within the sector. Usually these sub-sectors share the sector's vertex and are arranged side by side in angle. More preferably, the sub-sectors divide the sector into equal arcs. For example, as shown in fig. 3, the field-of-view shape of a game character is a 60° sector, and the sub-blocks may be sub-sectors each with a 10° apex angle. Each sub-sector corresponds to one step of the field-of-view shape; the more steps there are, the finer the control over the dynamic change of the field-of-view shape. Of course, the arcs of the sub-sectors need not be equal.
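For illustration only, dividing such a sector into equal-arc sub-sectors might look like the following sketch (all names are hypothetical; angles are in degrees to match the example above):

```python
def divide_sector(facing_deg, fov_deg, step_deg):
    """Divide a sector centered on facing_deg with total angle fov_deg into
    equal-arc sub-sectors of step_deg each; returns (start, end) angle pairs."""
    start = facing_deg - fov_deg / 2.0
    count = int(round(fov_deg / step_deg))
    return [(start + i * step_deg, start + (i + 1) * step_deg) for i in range(count)]

# Example from the text: a 60-degree field of view split into 10-degree sub-sectors.
sub_sectors = divide_sector(facing_deg=90.0, fov_deg=60.0, step_deg=10.0)
assert len(sub_sectors) == 6
```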
According to a preferred embodiment, in step S2 the virtual scene ray is emitted into the sub-sector from the vertex of the sub-sector to be rendered that corresponds to the origin. When a collision between the ray's trajectory and the obstacle is detected, the collision position is determined, the radius of a new sub-sector is determined from the collision position, and the new sub-sector is rendered with that radius within the angular range of the sub-sector to be rendered. The rendered sub-sector lies between the obstacle and the game character; the rendered portion neither enters the interior of the view-blocking obstacle nor passes through it and extends behind it.
In other preferred embodiments, the field-of-view shape of the game character may be an approximate sector composed of a plurality of triangles (it is to be understood that an approximate sector as referred to herein also includes an approximate circle), and the plurality of sub-blocks are a plurality of small triangles divided within the approximate sector. Fig. 4 shows an approximate circle composed of a plurality of triangles. Generally these small triangles share a common vertex and are arranged side by side in angle. More preferably, the small triangles divide the approximate sector into equal angles. For example, referring to fig. 5, the field-of-view shape of the game character is a 120° approximate sector, and the sub-blocks are small triangles each with a 30° apex angle. Each small triangle corresponds to one step of the field-of-view shape; the more steps there are, the closer the approximate sector is to a true sector and the finer the control over the dynamic change of the field-of-view shape. Of course, the apex angles of the small triangles need not be equal.
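A sketch of this triangle-fan approximation, under the assumption that each small triangle's centerline is kept equal to the nominal radius (which matches the vertex formulas used later in the description); all names are illustrative:

```python
import math

def build_triangle_fan(apex, facing_deg, fov_deg, step_deg, radius):
    """Approximate a sector by small isosceles triangles that share the apex.
    Using side length radius / cos(step/2) makes each triangle's centerline
    (apex-to-base altitude) equal to the nominal radius. Returns
    (apex, p_left, p_right) vertex triples."""
    half = math.radians(step_deg) / 2.0
    side = radius / math.cos(half)
    start = facing_deg - fov_deg / 2.0
    count = int(round(fov_deg / step_deg))
    triangles = []
    for i in range(count):
        a0 = math.radians(start + i * step_deg)
        a1 = math.radians(start + (i + 1) * step_deg)
        p_left = (apex[0] + side * math.cos(a0), apex[1] + side * math.sin(a0))
        p_right = (apex[0] + side * math.cos(a1), apex[1] + side * math.sin(a1))
        triangles.append((apex, p_left, p_right))
    return triangles

# Example from the text: a 120-degree approximate sector made of 30-degree triangles.
fan = build_triangle_fan((0.0, 0.0), facing_deg=0.0, fov_deg=120.0, step_deg=30.0, radius=5.0)
assert len(fan) == 4
```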
Referring to fig. 6, according to a preferred embodiment, in step S2 the virtual scene ray is emitted into the small triangle from the first vertex of the small triangle to be rendered that corresponds to the origin. When a collision between the ray's trajectory and the obstacle is detected, the collision position is determined, the side of a new small triangle opposite that vertex is determined from the collision position, and the new small triangle is rendered within the angular range of the small triangle to be rendered, using that opposite side and the two sides of the original triangle that meet at the apex. The rendered small triangle lies between the obstacle and the game character; the rendered portion neither enters the interior of the view-blocking obstacle nor passes through it and extends behind it.
As further shown in fig. 6, in a preferred embodiment, in step S2 the median or angle bisector from the apex of the small triangle to be rendered is used as the virtual scene ray. Its length is determined from the position where its emission trajectory collides with the obstacle, the positions of the other two vertices of the new small triangle are determined from that length, the side of the new small triangle opposite the first vertex is determined from those two vertices, and the new small triangle is rendered.
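As a quick numerical sanity check of this relationship (an illustrative sketch, not part of the embodiment): for an isosceles triangle with apex angle θ whose equal sides have length L/cos(θ/2), the median from the apex to the base has length L, so clamping the median to the ray's collision distance keeps the whole triangle on the near side of the obstacle.

```python
import math

theta = math.radians(30.0)   # apex angle of the small triangle
ray_length = 3.0             # desired median length (collision distance)
side = ray_length / math.cos(theta / 2.0)

# Outer vertices at +/- theta/2 around the centerline (apex at the origin).
p1 = (side * math.cos(+theta / 2.0), side * math.sin(+theta / 2.0))
p2 = (side * math.cos(-theta / 2.0), side * math.sin(-theta / 2.0))
midpoint = ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)

# The median from the apex to the midpoint of the base equals ray_length.
assert abs(math.hypot(*midpoint) - ray_length) < 1e-9
```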
Compared with the traditional way of generating a game character's field of view, the embodiment of the invention requires no view-shape pictures to be made in advance by artists, which saves art time and personnel cost, improves game production efficiency, and removes the need to store view-shape pictures as resource files for use at run time, reducing the occupation and consumption of game resources. Furthermore, the real-time rendering range of each sub-block of the view shape can be flexibly and dynamically adjusted when the character's view is blocked by an obstacle in the actual game scene, realistically reflecting how a person's field of view changes when it is blocked in the real world. For example, when an obstacle is detected entering the view range of one or more sub-blocks, the embodiment renders in real time only the part of each affected sub-block in front of the obstacle, while the parts inside and behind the obstacle are not rendered. This prevents the view shape from being displayed through the obstacle without affecting its display in front of the obstacle, so the dynamically generated view is consistent with how a person's view is blocked in reality, more realistically reflects the complex changes that occur when a person's view is blocked in the real world, and significantly improves the player's game experience.
Referring to fig. 8, a second embodiment of the present invention provides an apparatus for dynamically generating a view of a game character, including:
a view partitioning module for dividing a view shape predefined for a game character into a plurality of sub-blocks such that each sub-block is independently rendered in real time;
and a detection and rendering module for detecting, when each sub-block is rendered in real time, whether an obstacle predefined as a view-blocking obstacle exists within the to-be-rendered range of that sub-block, and for limiting the real-time rendering range of at least one sub-block to the area on the side of the obstacle facing the game character when the obstacle is detected within that sub-block's to-be-rendered range.
The game may run on any electronic device, such as a smartphone, tablet computer, or television, and the device for dynamically generating a game character's field of view is applicable to any game that needs to generate and display a game character's field of view.
In a preferred embodiment, when rendering each sub-block in real time, the detection and rendering module calculates the emission trajectory of a virtual scene ray emitted from the origin of the field-of-view shape (i.e., the starting point of the game character's viewpoint, generally the ground position where the character currently stands) into the to-be-rendered range of the sub-block, detects whether this trajectory collides with an obstacle predefined as blocking the field of view, determines the collision position when a collision is detected, and renders the sub-block in real time within a rendering range that does not exceed the collision position.
It should be understood that, in the present invention, the manner of detecting whether there is an obstacle in the range to be rendered of each sub-block is not limited to the manner of emitting the virtual scene ray for object detection in the above-mentioned embodiment, but may also be any other manner of detecting whether there is a predefined object entering in a predetermined area in the game scene.
In some preferred embodiments, the field-of-view shape of the game character is a sector (it is to be understood that a sector as referred to herein also includes a full circle), and the plurality of sub-blocks are a plurality of sub-sectors divided within the sector. Usually these sub-sectors share the sector's vertex and are arranged side by side in angle. More preferably, the sub-sectors divide the sector into equal arcs. For example, as shown in fig. 3, the field-of-view shape of a game character is a 60° sector, and the sub-blocks may be sub-sectors each with a 10° apex angle. Each sub-sector corresponds to one step of the field-of-view shape; the more steps there are, the finer the control over the dynamic change of the field-of-view shape. Of course, the arcs of the sub-sectors need not be equal.
According to a preferred embodiment, the detection and rendering module emits the virtual scene ray into the sub-sector from the vertex of the sub-sector to be rendered that corresponds to the origin. When a collision between the ray's trajectory and the obstacle is detected, the module determines the collision position, determines the radius of a new sub-sector from the collision position, and renders the new sub-sector with that radius within the angular range of the sub-sector to be rendered. The rendered sub-sector lies between the obstacle and the game character; the rendered portion neither enters the interior of the view-blocking obstacle nor passes through it and extends behind it.
In other preferred embodiments, the field-of-view shape of the game character may be an approximate sector composed of a plurality of triangles (it is to be understood that an approximate sector as referred to herein also includes an approximate circle), and the plurality of sub-blocks are a plurality of small triangles divided within the approximate sector. Fig. 4 shows an approximate circle composed of a plurality of triangles. Generally these small triangles share a common vertex and are arranged side by side in angle. More preferably, the small triangles divide the approximate sector into equal angles. For example, referring to fig. 5, the field-of-view shape of the game character is a 120° approximate sector, and the sub-blocks are small triangles each with a 30° apex angle. Each small triangle corresponds to one step of the field-of-view shape; the more steps there are, the closer the approximate sector is to a true sector and the finer the control over the dynamic change of the field-of-view shape. Of course, the apex angles of the small triangles need not be equal.
Referring to fig. 6, according to a preferred embodiment, the detection and rendering module emits the virtual scene ray into the small triangle from the first vertex of the small triangle to be rendered that corresponds to the origin. When a collision between the ray's trajectory and the obstacle is detected, the module determines the collision position, determines from it the side of a new small triangle opposite that vertex, and renders the new small triangle within the angular range of the small triangle to be rendered, using that opposite side and the two sides of the original triangle that meet at the apex. The rendered small triangle lies between the obstacle and the game character; the rendered portion neither enters the interior of the view-blocking obstacle nor passes through it and extends behind it.
As further shown in fig. 6, in a preferred embodiment, the detection and rendering module uses the median or angle bisector from the apex of the small triangle to be rendered as the virtual scene ray, determines its length from the position where its emission trajectory collides with the obstacle, determines the positions of the other two vertices of the new small triangle from that length, determines the side of the new small triangle opposite the first vertex from those two vertices, and renders the new small triangle.
Compared with the traditional way of generating a game character's field of view, the embodiment of the invention requires no view-shape pictures to be made in advance by artists, which saves art time and personnel cost, improves game production efficiency, and removes the need to store view-shape pictures as resource files for use at run time, reducing the occupation and consumption of game resources. Furthermore, the real-time rendering range of each sub-block of the view shape can be flexibly and dynamically adjusted when the character's view is blocked by an obstacle in the actual game scene, realistically reflecting how a person's field of view changes when it is blocked in the real world. For example, when an obstacle is detected entering the view range of one or more sub-blocks, the embodiment renders in real time only the part of each affected sub-block in front of the obstacle, while the parts inside and behind the obstacle are not rendered. This prevents the view shape from being displayed through the obstacle without affecting its display in front of the obstacle, so the dynamically generated view is consistent with how a person's view is blocked in reality, more realistically reflects the complex changes that occur when a person's view is blocked in the real world, and significantly improves the player's game experience.
The specific processing procedures of the typical example are further described below with reference to the accompanying drawings.
Referring to fig. 5 and 6, the approximate-sector field of view is divided into n independent small triangles according to a certain step length, and the rendering range of each small triangle is calculated independently according to the positions of obstacles during the game. The median length of each small triangle determines the radius of the approximate sector corresponding to that triangle. For example, a small triangle may be generated every 5 degrees of arc, so that a 30-degree approximate sector is formed by 6 such small triangles. Smaller steps can of course be used to make the approximate sector rounder. The arc and the size of the sector can be changed dynamically by increasing or decreasing the number of small triangles and by changing their size.
To detect obstacles in the game scene, rays are emitted from the origin of the approximate sector (the starting point of the field-of-view calculation) along the direction of the apex-angle centerline of each small triangle. The trajectory of each ray is computed and checked for collisions with view-blocking obstacles, which determines the length of that triangle's apex-angle centerline; this length is also the approximate-sector radius for that triangle. If the ray collides with a view-blocking obstacle, the distance from the apex to the collision point is used as the centerline length. The range of the small triangle to be rendered is then determined by the centerline length together with the two sides of the triangle that meet at the apex, and the triangle is rendered within that range. In this way each small triangle is rendered independently in real time, and the rendered approximate sector adapts to obstacles in the scene, so the approximate-sector field of view is displayed to the player without passing through obstacles, reaching only as far as the obstacle where it is blocked.
Specifically, taking the embodiment shown in fig. 5 and 6 as an example, assume that the center point of the approximate sector is P0 and that the centerline direction of the nth triangle is Dn. Starting from P0 in the direction Dn, with the maximum radius R of the approximate sector as the maximum distance, a scene ray is emitted in the game scene using a physics module (such as PhysX). A scene ray is a line emitted from an origin in one direction in the 3D world; it stops extending when it collides with an object on its emission track. If the ray collides with an obstacle within the distance R, the distance between the collision point and the center point P0 is taken as the median length of the triangle, rayLength. The positions Pn and Pn+1 of the two triangle vertices that do not coincide with the sector center are then calculated from this length. In fig. 5 and 6, the vertex labelled 1 is P1, the vertex labelled 2 is P2, and so on. Suppose the unit vector from P0 to P1 is D1, the unit vector from P0 to P2 is D2, and the step size of the approximate-sector division is the angle θ. Since each small triangle is isosceles with apex angle θ and its centerline bisects that angle, each of the two equal sides has length rayLength/cos(θ/2), so the two vertex positions are calculated as follows:
P1 = D1 * rayLength / cos(θ/2) + P0
P2 = D2 * rayLength / cos(θ/2) + P0
and then, the two calculated points and the fan-shaped top points form a triangle for rendering.
All triangles are calculated in sequence according to the above steps, completing the rendering of the field-of-view shape.
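Putting the worked example together, a sketch of the full per-triangle loop might look as follows; raycast stands in for the physics module's scene-ray query (it is passed in rather than invented here), and the vertex formula is the one given above:

```python
import math

def compute_fov_triangles(p0, facing_deg, fov_deg, step_deg, max_radius, raycast):
    """For each small triangle: cast a scene ray from the sector center P0 along the
    apex-angle centerline Dn; the hit distance (or max_radius if nothing is hit)
    becomes the median length rayLength, and the outer vertices are
    Pk = Dk * rayLength / cos(theta / 2) + P0. Returns (P0, P1, P2) triples."""
    theta = math.radians(step_deg)
    start = facing_deg - fov_deg / 2.0
    count = int(round(fov_deg / step_deg))
    triangles = []
    for n in range(count):
        center = math.radians(start + (n + 0.5) * step_deg)
        dn = (math.cos(center), math.sin(center))
        hit = raycast(p0, dn, max_radius)            # engine-specific query, passed in
        ray_length = hit if hit is not None else max_radius
        side = ray_length / math.cos(theta / 2.0)    # P1 = D1 * rayLength / cos(theta/2) + P0
        a1 = math.radians(start + n * step_deg)
        a2 = math.radians(start + (n + 1) * step_deg)
        p1 = (p0[0] + side * math.cos(a1), p0[1] + side * math.sin(a1))
        p2 = (p0[0] + side * math.cos(a2), p0[1] + side * math.sin(a2))
        triangles.append((p0, p1, p2))               # submit this triangle for rendering
    return triangles

# Usage with no obstacles: every ray misses, so the full approximate sector is produced.
no_obstacles = lambda origin, direction, max_distance: None
tris = compute_fov_triangles((0.0, 0.0), 0.0, 30.0, 5.0, max_radius=8.0, raycast=no_obstacles)
assert len(tris) == 6
```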
FIG. 7 is a rendering effect diagram according to an embodiment of the present invention. Several sub-blocks in the middle of the game character's field-of-view shape are blocked by an obstacle cylinder, so their rendering range is limited to the area in front of the cylinder, while the rendering range of the unblocked sub-blocks on either side extends behind the cylinder, dynamically and realistically reflecting the real-world situation.
A third embodiment of the present invention provides an electronic device, including a computer-readable storage medium and a processor, where the computer-readable storage medium stores an executable program, and the executable program, when executed by the processor, implements the method for dynamically generating a view of a game character.
A fourth embodiment of the present invention provides a computer-readable storage medium storing an executable program that, when executed by a processor, implements the method of dynamically generating a field of view of a game character.
The processor, when executing the executable program, performs the steps in the various above-described method embodiments for dynamically generating a field of view for a game character. Alternatively, the processor implements the functions of each device/module in the above device embodiments when executing the executable program.
Illustratively, the computer program may be partitioned into one or more devices/modules that are stored in the memory and executed by the processor to implement the present invention. The one or more devices/modules may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program in the device for dynamically generating the view of the game character.
The electronic device may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing device. Those skilled in the art will appreciate that the electronic device may include more or fewer components, or combine certain components, or different components, and may also include, for example, input-output devices, network access devices, buses, and the like.
The Processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor; it is the control center of the device, connecting the various parts of the whole device through interfaces and lines.
The memory may be used to store the computer programs and/or modules, and the processor implements the various functions by running or executing the computer programs and/or modules stored in the memory and calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the device (such as audio data, a phone book, etc.). In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, memory, plug-in hard disk, Smart Media Card (SMC), Secure Digital (SD) card, Flash memory card, at least one magnetic disk storage device, Flash memory device, or other non-volatile solid-state storage device.
Wherein, the integrated module/unit of the device for dynamically generating the view of the game character can be stored in a computer readable storage medium if the integrated module/unit is realized in the form of a software functional unit and sold or used as an independent product. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
It should be noted that the above-described device embodiments are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.
The background of the present invention may contain background information related to the problem or environment of the present invention and does not necessarily describe the prior art. Accordingly, the inclusion in the background section is not an admission of prior art by the applicant.

Claims (12)

1. A method for dynamically generating a view of a game character, comprising the steps of:
dividing a field of view shape predefined for a game character into a plurality of sub-blocks such that each sub-block is independently rendered in real-time;
detecting whether an obstacle predefined as a view-blocking obstacle exists within a to-be-rendered range of each sub-block when each sub-block is rendered in real time, and limiting the real-time rendering range of at least one sub-block to an area of a side of the obstacle facing the game character when the obstacle is detected within the to-be-rendered range of the at least one sub-block.
2. The method of claim 1, wherein, when rendering each sub-block in real time, an emission trajectory of a virtual scene ray emitted from an origin of the view shape to a range to be rendered of each sub-block is calculated, and whether the emission trajectory of the virtual scene ray collides with an obstacle predefined to block a view is detected, when the emission trajectory is detected to collide with the obstacle, a position where the collision occurs is determined, and the sub-block is rendered in real time with a rendering range not exceeding the position where the collision occurs.
3. The method of claim 2, wherein the view shape is a sector, and the sub-blocks are sub-sectors divided from the sector.
4. The method of claim 3, wherein the plurality of sub-sectors are sub-sectors that divide the sector equally by arc.
5. The method of claim 3 or 4, wherein the virtual scene ray is emitted into the sub-sector from a vertex of the sub-sector to be rendered corresponding to the origin, when a collision of the emission trajectory of the virtual scene ray with the obstacle is detected, a position of the collision is determined, and a radius of a new sub-sector is determined according to the position of the collision, the new sub-sector being rendered within an angular range of the sub-sector to be rendered.
6. The method of claim 2, wherein the view shape is an approximate fan shape composed of a plurality of triangles, and the plurality of sub-blocks are a plurality of small triangles divided from the approximate fan shape.
7. The method of claim 6, wherein the plurality of small triangles are small triangles in which the approximate fan is equally divided by angle.
8. The method of claim 6 or 7, wherein the virtual scene ray is shot into the small triangle from a first vertex of the small triangle to be rendered, which corresponds to the origin, when a collision of the shot trajectory of the virtual scene ray with the obstacle is detected, the position of the collision is determined, and an opposite side of a new small triangle to the vertex is determined according to the position of the collision, and the new small triangle is rendered within the angular range of the small triangle to be rendered.
9. The method of claim 8, wherein the virtual scene ray is a centerline or an angular bisector on a vertex angle of the small triangle to be rendered, the length of the centerline or the angular bisector is determined according to the position of the collision of the emission locus of the centerline or the angular bisector with the obstacle, the positions of two other vertexes of the new small triangle are determined according to the length of the centerline or the angular bisector, and the opposite side of the new small triangle opposite to the first vertex is determined according to the positions of the two other vertexes.
10. An apparatus for dynamically generating a field of view for a game character, comprising:
a view partitioning module for dividing a view shape predefined for a game character into a plurality of sub-blocks such that each sub-block is independently rendered in real time;
and a detection and rendering module for detecting, when each sub-block is rendered in real time, whether an obstacle predefined as a view-blocking obstacle exists within the to-be-rendered range of that sub-block, and for limiting the real-time rendering range of at least one sub-block to the area on the side of the obstacle facing the game character when the obstacle is detected within the to-be-rendered range of the at least one sub-block.
11. An electronic device comprising a computer readable storage medium and a processor, the computer readable storage medium storing an executable program, wherein the executable program, when executed by the processor, implements the method of any of claims 1 to 9.
12. A computer-readable storage medium storing an executable program, wherein the executable program, when executed by a processor, implements the method of any one of claims 1 to 9.
CN202010549889.4A 2020-06-16 2020-06-16 Method and device for dynamically generating game role visual field Pending CN111773685A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010549889.4A CN111773685A (en) 2020-06-16 2020-06-16 Method and device for dynamically generating game role visual field

Publications (1)

Publication Number Publication Date
CN111773685A true CN111773685A (en) 2020-10-16

Family

ID=72755987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010549889.4A Pending CN111773685A (en) 2020-06-16 2020-06-16 Method and device for dynamically generating game role visual field

Country Status (1)

Country Link
CN (1) CN111773685A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107103639A (en) * 2010-06-30 2017-08-29 巴里·林恩·詹金斯 Determine the method and system of the set of grid polygon or the polygonal segmentation of grid
CN103608850A (en) * 2011-06-23 2014-02-26 英特尔公司 Stochastic rasterization with selective culling
US20170169653A1 (en) * 2015-12-11 2017-06-15 Igt Canada Solutions Ulc Enhanced electronic gaming machine with x-ray vision display
CN107358579A (en) * 2017-06-05 2017-11-17 北京印刷学院 A kind of game war dense fog implementation method
CN107875630A (en) * 2017-11-17 2018-04-06 杭州电魂网络科技股份有限公司 Render area determination method and device
CN108257103A (en) * 2018-01-25 2018-07-06 网易(杭州)网络有限公司 Occlusion culling method, apparatus, processor and the terminal of scene of game
CN109745704A (en) * 2018-11-19 2019-05-14 苏州蜗牛数字科技股份有限公司 A kind of management method of voxel landform

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113117334A (en) * 2021-04-14 2021-07-16 广州虎牙科技有限公司 Method for determining visible area of target point and related device
CN113827960A (en) * 2021-09-01 2021-12-24 广州趣丸网络科技有限公司 Game visual field generation method and device, electronic equipment and storage medium
CN113827960B (en) * 2021-09-01 2023-06-02 广州趣丸网络科技有限公司 Game view generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination