CN117959721B - Game engine design system and method based on three-dimensional modeling - Google Patents

Game engine design system and method based on three-dimensional modeling

Info

Publication number
CN117959721B
CN117959721B CN202410393649.8A
Authority
CN
China
Prior art keywords
view
game
character
point
available
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410393649.8A
Other languages
Chinese (zh)
Other versions
CN117959721A (en)
Inventor
黄耀豪
王卫波
胡广
黄耀曦
贾瑞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Leyi Network Co ltd
Original Assignee
Shenzhen Leyi Network Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Leyi Network Co ltd filed Critical Shenzhen Leyi Network Co ltd
Priority to CN202410393649.8A
Publication of CN117959721A
Application granted
Publication of CN117959721B
Active legal status (current)
Anticipated expiration

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a game engine design system and method based on three-dimensional modeling, and relates to the technical field of video games. The system includes an optimizing component that obtains, from the viewpoint movement paths recorded over multiple traversals of the character movement path, a plurality of viewpoint position points and the view center line corresponding to each position point in the character movement path; calculates, from the view angle of the game character's viewpoint together with the viewpoint position points and view center lines corresponding to each position point, the available field-of-view ranges of each position point in the character movement path and their corresponding priority coefficients; and derives, from those available field-of-view ranges and priority coefficients, the rendering priorities of different positions in the game model as the game character moves to each point in the character movement path. The invention improves the efficiency of rendering frames from the game character's viewpoint under limited computing power.

Description

Game engine design system and method based on three-dimensional modeling
Technical Field
The invention belongs to the technical field of video games, and particularly relates to a game engine design system and method based on three-dimensional modeling.
Background
Three-dimensional modeling is the process of creating or constructing three-dimensional digital objects, and a game engine is a software framework dedicated to game development. Both technologies are widely applied in game development, film, architecture, engineering, virtual reality, and other fields.
Limited by the performance of computing devices, rendering all three-dimensional structures in a game scene in real time easily leads to late rendering and loading in complex scenes, causing screen tearing and stuttering. If only the local view is modeled and rendered, screen tearing and stuttering still occur when the view is switched abruptly.
Disclosure of Invention
The invention aims to provide a game engine design system and method based on three-dimensional modeling that improve the efficiency of rendering frames from the game character's viewpoint under limited computing power by ordering the rendering priorities of different available field-of-view ranges.
In order to solve the technical problems, the invention is realized by the following technical scheme:
the invention provides a game engine design method based on three-dimensional modeling, which comprises the following steps of,
Acquiring a role movement path of a game role in a game model;
acquiring a view angle of a view point of a game character;
acquiring viewpoint moving paths and view neutral lines of viewpoints in the process of moving the game roles along the role moving paths for multiple times;
obtaining a plurality of viewpoint position points and the visual center line corresponding to each position point in the character moving path according to viewpoint moving paths of viewpoints in the process of repeatedly obtaining game characters to travel along the character moving path;
calculating according to the view angle of the view point of the game character, a plurality of view point position points corresponding to each position point in the character moving path and the view center line to obtain the available view field range of each position point in the character moving path and a corresponding priority coefficient;
and obtaining rendering priorities of different positions in the game model when the game character moves to each point in the character moving path according to the available view field range of each position point in the character moving path and the corresponding priority coefficient.
The invention also discloses a game engine design system based on three-dimensional modeling, which includes:
a data collection interface, used for acquiring a character movement path of a game character in the game model;
acquiring the view angle of the viewpoint of the game character;
and acquiring the viewpoint movement path and view center line of the viewpoint during multiple traversals of the game character along the character movement path;
an optimizing component, used for obtaining a plurality of viewpoint position points and the view center lines corresponding to each position point in the character movement path from the viewpoint movement paths acquired during the multiple traversals;
calculating, from the view angle of the viewpoint of the game character and the plurality of viewpoint position points and view center lines corresponding to each position point in the character movement path, the available field-of-view ranges of each position point in the character movement path and the corresponding priority coefficients;
and obtaining the rendering priorities of different positions in the game model when the game character moves to each point in the character movement path, according to the available field-of-view ranges of each position point in the character movement path and the corresponding priority coefficients;
and a rendering engine component, used for rendering the game frames according to the rendering priorities of different positions in the game model when the game character moves to each point in the character movement path.
According to the method, the viewpoint movement paths and view center lines of the game character recorded over multiple play sessions are analyzed to obtain the available field-of-view range and corresponding priority coefficient of each position point in the character movement path, and the different positions of the game model are then rendered in order of priority coefficient. In this way, completion of the game model rendering within the game character's field of view is maintained with the greatest possible certainty, improving the rendering efficiency of the rendering engine component.
Of course, it is not necessary for any one product to practice the invention to achieve all of the advantages set forth above at the same time.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of functional modules and information flow of an embodiment of a game engine design system based on three-dimensional modeling according to the present invention;
FIG. 2 is a schematic diagram of an optimizing component according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of step S5 according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of step S53 according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of step S531 according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of step S533 according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of step S6 according to an embodiment of the present invention;
In the drawings, the list of components represented by the various numbers is as follows:
1-data collection interface, 2-optimization component, 3-rendering engine component, 4-user information collection component.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like herein are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
Three-dimensional game rendering refers to the process of converting the geometry and effects in a three-dimensional scene into a final two-dimensional image displayed on a device such as a screen; it mainly comprises geometric processing, rasterization, lighting and shading, texture mapping, special-effect processing, and final compositing. This process is usually computed and composited by a rendering engine component, but because of the enormous computation required by a three-dimensional game model, it is difficult to fully render the game model in the scene where the game character is located, so stuttering or tearing can occur when the game character moves or the view angle is switched, degrading the game experience. To improve the rendering efficiency of the rendering engine component, the present invention provides the following scheme.
Referring to FIGS. 1 to 2, the present invention provides a game engine design system based on three-dimensional modeling which, divided by functional module, includes a data collection interface 1, an optimizing component 2, a rendering engine component 3, and a user information collection component 4. In a specific implementation, the data collection interface 1 first executes step S1 to acquire the character movement path of a game character in the game model. Step S2 may then be executed to acquire the view angle of the viewpoint of the game character. Step S3 may then be executed multiple times to acquire the viewpoint movement path and view center line of the viewpoint as the game character travels along the character movement path. Because the game experience during this period is still poor, this process typically collects data on players' operation of the game character over a limited range during the closed-beta or open-beta phase of the game.
The optimizing component 2 then executes step S4 to obtain a plurality of viewpoint position points and view center lines corresponding to each position point in the character movement path from the viewpoint movement paths acquired during the multiple traversals. Step S5 may then be executed to calculate the available field-of-view range and corresponding priority coefficient of each position point in the character movement path from the view angle of the viewpoint of the game character and the plurality of viewpoint position points and view center lines corresponding to each position point. Step S6 may then be executed to obtain the rendering priorities of different positions in the game model when the game character moves to each point in the character movement path; that is, the position points of the game model that need rendering are prioritized, and the position points a player is most likely to see are rendered first. The method may also continuously acquire the viewpoint movement path and view center line of the viewpoint as the game character travels along the character movement path, and iteratively update the rendering priorities of different positions in the game model when the game character moves to each point in the character movement path.
Finally, the rendering engine component 3 renders the game frames according to the rendering priorities of different positions in the game model when the game character moves to each point in the character movement path.
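By way of illustration only (this listing is not part of the patent's disclosed source code), the following minimal C++ sketch shows how such precomputed priorities might drive a rendering engine component: renderable positions are sorted by priority and rendered within a per-frame budget. The names RenderItem and renderFrame and the budget parameter are hypothetical.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>
// A renderable position in the game model with its precomputed rendering priority
// (hypothetical type; a higher priority means the player is more likely to see it first).
struct RenderItem {
    int positionId;   // identifier of a position in the game model
    float priority;   // precomputed rendering priority
};
// Render the highest-priority positions first and stop when the per-frame
// budget is exhausted; the remaining positions carry over to later frames.
void renderFrame(std::vector<RenderItem>& items, std::size_t budget) {
    std::sort(items.begin(), items.end(),
              [](const RenderItem& a, const RenderItem& b) { return a.priority > b.priority; });
    for (std::size_t i = 0; i < items.size() && i < budget; ++i) {
        std::cout << "render position " << items[i].positionId << std::endl;
    }
}
int main() {
    std::vector<RenderItem> items = {{1, 0.2f}, {2, 0.9f}, {3, 0.5f}};
    renderFrame(items, 2); // renders positions 2 and 3 in this frame
    return 0;
}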
Since different players have different operating habits, in order to fit the personalized habits of different players, the user information collection component 4 may, with the user's permission, acquire the viewpoint movement path and view center line of the viewpoint as an ordinary player controls the game character along the character movement path. The rendering priorities of different positions in the game model when the game character under the ordinary player's control moves to each point in the character movement path are then iteratively updated by the rendering engine component 3. This process generally takes place after the game's official release, with the aim of continuously improving the personalized rendering experience.
Referring to FIG. 3, when a game character travels to a given position along the character movement path, the viewpoint positions and view center lines of different players manipulating the character in different game rounds also differ, but because of commonalities in operation, each combination of viewpoint position point and view center line generally has a correlated field-of-view range in the game model. This makes the rendering importance of different field-of-view ranges different. To select the available field-of-view ranges from the many field-of-view ranges and to order the priorities of the different available field-of-view ranges, step S51 may first be executed to obtain multiple sets of combinations of viewpoint position points and view center lines corresponding to each position point in the character movement path from the plurality of viewpoint position points and view center lines corresponding to each position point. Step S52 may then be executed to derive the field-of-view range of each combination of viewpoint position point and view center line within the game model from the view angle of the viewpoint of the game character. Finally, step S53 may be executed to calculate, for each position point in the character movement path, the available field-of-view ranges and corresponding priority coefficients, i.e. the computed degree of rendering priority, from each combination and its corresponding field-of-view range. A minimal data-structure sketch of this per-position grouping is given below.
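The grouping performed in step S51 can be pictured as a table keyed by the index of each position point along the character movement path, with each entry holding the viewpoint samples observed there across game rounds. A minimal sketch under assumed names (ViewpointSample, PathSampleTable, addSample), not taken from the patent's disclosed code:
#include <unordered_map>
#include <vector>
// One observed combination of viewpoint position point and view center line.
struct ViewpointSample {
    float px, py, pz;   // viewpoint position point coordinates
    float dx, dy, dz;   // view center line direction (unit vector)
};
// All samples gathered for each position point in the character movement path,
// keyed by the index of the position point along the path.
using PathSampleTable = std::unordered_map<int, std::vector<ViewpointSample>>;
// Record one sample for the position point with the given path index.
inline void addSample(PathSampleTable& table, int pathIndex, const ViewpointSample& s) {
    table[pathIndex].push_back(s);
}
In step S4 such a table would be filled once per recorded traversal, after which step S52 derives a field-of-view range from each stored sample.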
Referring to FIG. 4, for each position point in the character movement path, the operations of different players in different game rounds differ, yet share commonalities; from these commonalities the rendering priorities of different field-of-view ranges can be derived, and the higher the commonality, the larger the priority coefficient. Specifically, for each position point in the character movement path, step S53 above may be implemented by first executing step S531 to divide the available field-of-view ranges into several priority levels according to each combination of viewpoint position point and view center line. Within each priority level, step S532 may then be executed to calculate the coincidence rate between the different available field-of-view ranges. Step S533 may then be executed to derive the priority coefficients of the different available field-of-view ranges within each priority level from the coincidence rates between them. Finally, step S534 may be executed to aggregate the available field-of-view ranges and corresponding priority coefficients of each position point in the character movement path across the different priority levels.
To supplement the procedure of calculating the coincidence rate between the different available field-of-view ranges in step S532, the source code of part of the functional modules is provided below, with explanations in the comments. To avoid leaking data involving trade secrets, portions of the data that do not affect implementation of the scheme have been desensitized. The code is as follows.
#include <iostream>
#include <cmath>
#include <algorithm>
// Structure of a view frustum
struct ViewFrustum {
    float leftPlane;
    float rightPlane;
    float topPlane;
    float bottomPlane;
    // Constructor
    ViewFrustum(float left, float right, float top, float bottom)
        : leftPlane(left), rightPlane(right), topPlane(top), bottomPlane(bottom) {}
};
// Function computing the coincidence rate of two view frustums
float calculateOverlap(const ViewFrustum& frustum1, const ViewFrustum& frustum2) {
    // Calculate the horizontal overlap
    float horizontalOverlap = std::max(0.0f, std::min(frustum1.rightPlane, frustum2.rightPlane) -
                                             std::max(frustum1.leftPlane, frustum2.leftPlane));
    // Calculate the vertical overlap
    float verticalOverlap = std::max(0.0f, std::min(frustum1.topPlane, frustum2.topPlane) -
                                           std::max(frustum1.bottomPlane, frustum2.bottomPlane));
    // Calculate the total area of the two frustum cross-sections
    float area1 = (frustum1.rightPlane - frustum1.leftPlane) * (frustum1.topPlane - frustum1.bottomPlane);
    float area2 = (frustum2.rightPlane - frustum2.leftPlane) * (frustum2.topPlane - frustum2.bottomPlane);
    // Calculate the overlapping area
    float overlapArea = horizontalOverlap * verticalOverlap;
    // Coincidence rate: ratio of the overlapping area to the mean of the two areas
    float overlapRatio = 2 * overlapArea / (area1 + area2);
    return overlapRatio;
}
int main() {
    // Example: two view frustums
    ViewFrustum frustum1(-1, 1, 1, -1);
    ViewFrustum frustum2(-0.5, 0.5, 0.5, -0.5);
    // Calculate the coincidence rate
    float overlapRatio = calculateOverlap(frustum1, frustum2);
    // Output the coincidence rate
    std::cout << "Overlap Ratio: " << overlapRatio << std::endl;
    return 0;
}
Referring to FIG. 5, the operating habits of different players in the game differ, but some players are similar in operation, or a player operates similarly across game rounds, and each combination of viewpoint position point and view center line can be classified into different priority levels according to that similarity. Specifically, in implementing step S531, step S5311 may first be executed to arrange the coordinate data of the viewpoint position point and the angle data of the view center line in each combination in the same order, obtaining a projection feature vector for each combination. Step S5312 may then be executed to select several of all projection feature vectors as reference projection feature vectors. Step S5313 may then be executed to calculate the modulus length of the vector difference between each reference projection feature vector and every other projection feature vector. Step S5314 may be executed to assign each other projection feature vector to the same vector level as the reference projection feature vector with which its vector-difference modulus length is smallest. Step S5315 may be executed to take, within each vector level, the projection feature vector with the smallest vector-difference modulus length from the mean vector as the updated reference projection feature vector. Step S5316 may then be executed to determine whether the updated reference projection feature vectors have changed. If so, steps S5313 to S5316 are repeated to keep updating the vector levels and reference projection feature vectors; if not, step S5317 may be executed to take the field-of-view ranges corresponding to the combinations whose projection feature vectors are contained in each vector level as the available field-of-view ranges. Finally, step S5318 may be executed to divide all the available field-of-view ranges into priority levels according to the available field-of-view ranges corresponding to each vector level.
To supplement the implementation of steps S5311 to S5318 above, the source code of part of the functional modules is provided below, with explanations in the comments.
#include <iostream>
#include <vector>
#include <cmath>
#include <limits>
#include <algorithm>
// Define a three-dimensional vector class
class Vector3 {
public:
    float x, y, z;
    Vector3(float x = 0, float y = 0, float z = 0) : x(x), y(y), z(z) {}
    // Vector subtraction
    Vector3 operator-(const Vector3& other) const {
        return Vector3(x - other.x, y - other.y, z - other.z);
    }
    // Scalar division (needed when averaging cluster members)
    Vector3 operator/(float s) const {
        return Vector3(x / s, y / s, z / s);
    }
    // Modulus length of the vector
    float length() const {
        return std::sqrt(x * x + y * y + z * z);
    }
};
// Projection feature vector class, containing the viewpoint position and view center line direction
class ProjectionFeatureVector {
public:
    Vector3 position;   // viewpoint position
    Vector3 direction;  // view center line direction
    ProjectionFeatureVector(const Vector3& pos, const Vector3& dir) : position(pos), direction(dir) {}
    // Difference from another feature vector
    float distanceTo(const ProjectionFeatureVector& other) const {
        Vector3 posDiff = position - other.position;
        Vector3 dirDiff = direction - other.direction;
        return posDiff.length() + dirDiff.length(); // simplified here; may be more complex in practice
    }
};
// One cluster
class Cluster {
public:
    ProjectionFeatureVector centroid;              // center of the cluster
    std::vector<ProjectionFeatureVector> members;  // members in the cluster
    Cluster(const ProjectionFeatureVector& center) : centroid(center) {}
    // Update the cluster center to the average of its members
    void updateCentroid() {
        if (members.empty()) return;
        Vector3 sumPos(0, 0, 0);
        Vector3 sumDir(0, 0, 0);
        for (const auto& member : members) {
            sumPos.x += member.position.x;
            sumPos.y += member.position.y;
            sumPos.z += member.position.z;
            sumDir.x += member.direction.x;
            sumDir.y += member.direction.y;
            sumDir.z += member.direction.z;
        }
        centroid.position = sumPos / static_cast<float>(members.size());
        centroid.direction = sumDir / static_cast<float>(members.size());
    }
};
// Execute the clustering algorithm
void kMeansClustering(std::vector<ProjectionFeatureVector>& featureVectors, int k) {
    // Select k initial centers
    std::vector<Cluster> clusters;
    for (int i = 0; i < k; ++i) {
        clusters.push_back(Cluster(featureVectors[i])); // simply take the first k as the initial centers
    }
    bool centroidsChanged;
    do {
        // Empty the members of each cluster
        for (auto& cluster : clusters) {
            cluster.members.clear();
        }
        // Assign each feature vector to the nearest cluster
        for (const auto& feature : featureVectors) {
            float minDistance = std::numeric_limits<float>::max();
            std::size_t clusterIndex = 0;
            for (std::size_t i = 0; i < clusters.size(); ++i) {
                float distance = feature.distanceTo(clusters[i].centroid);
                if (distance < minDistance) {
                    minDistance = distance;
                    clusterIndex = i;
                }
            }
            clusters[clusterIndex].members.push_back(feature);
        }
        // Update the cluster centers
        centroidsChanged = false;
        for (auto& cluster : clusters) {
            ProjectionFeatureVector oldCentroid = cluster.centroid;
            cluster.updateCentroid();
            if (cluster.centroid.distanceTo(oldCentroid) > 0.001f) {
                centroidsChanged = true;
            }
        }
    } while (centroidsChanged); // if any center changed, continue iterating
    // Output the members of each cluster
    for (const auto& cluster : clusters) {
        std::cout << "Cluster centroid at position (" << cluster.centroid.position.x
                  << ", " << cluster.centroid.position.y
                  << ", " << cluster.centroid.position.z << "), "
                  << cluster.members.size() << " members" << std::endl;
    }
}
A further example follows, which calculates the coincidence rate of two field-of-view ranges from their view angles and aspect ratios.
#include <iostream>
#include <vector>
#include <algorithm>
#include <cmath>
#include <limits>
// Define a three-dimensional vector class
class Vector3 {
public:
    float x, y, z;
    Vector3(float x = 0, float y = 0, float z = 0) : x(x), y(y), z(z) {}
    // Length of the vector
    float length() const {
        return std::sqrt(x * x + y * y + z * z);
    }
    // Vector normalization
    Vector3 normalize() const {
        float len = length();
        return {x / len, y / len, z / len};
    }
};
// Define viewpoint information
struct Viewpoint {
    Vector3 position;       // viewpoint position
    Vector3 viewDirection;  // view center line direction
    float fov;              // view angle (degrees)
    Viewpoint(Vector3 pos, Vector3 dir, float angle)
        : position(pos), viewDirection(dir.normalize()), fov(angle) {}
};
// Define a field-of-view range
struct ViewFrustum {
    float fov;          // view angle
    float aspectRatio;  // aspect ratio
    float nearPlane;    // near plane distance
    float farPlane;     // far plane distance
    // Construct the frustum parameters from the view angle and aspect ratio
    ViewFrustum(float fov, float aspect, float nearP, float farP)
        : fov(fov), aspectRatio(aspect), nearPlane(nearP), farPlane(farP) {}
};
// Function computing the coincidence rate of two field-of-view ranges
float calculateOverlap(const ViewFrustum& frustum1, const ViewFrustum& frustum2) {
    // Calculate the horizontal and vertical view angle differences
    float horizontalAngleDiff = std::min(frustum1.fov * frustum1.aspectRatio, frustum2.fov * frustum2.aspectRatio);
    float verticalAngleDiff = std::min(frustum1.fov, frustum2.fov);
    // Calculate the size of the coinciding view angles (a simplified example; real coincidence calculations are far more complex)
    float overlapHorizontalAngle = std::max(0.0f, horizontalAngleDiff - std::fabs(frustum1.fov * frustum1.aspectRatio - frustum2.fov * frustum2.aspectRatio));
    float overlapVerticalAngle = std::max(0.0f, verticalAngleDiff - std::fabs(frustum1.fov - frustum2.fov));
    // The coincidence rate is the coinciding view angle divided by the average view angle in each direction
    float averageHorizontalAngle = (frustum1.fov * frustum1.aspectRatio + frustum2.fov * frustum2.aspectRatio) / 2;
    float averageVerticalAngle = (frustum1.fov + frustum2.fov) / 2;
    // Calculate the coincidence rate
    float overlapRate = (overlapHorizontalAngle / averageHorizontalAngle) * (overlapVerticalAngle / averageVerticalAngle);
    return overlapRate;
}
int main() {
    // Example: define two field-of-view ranges
    ViewFrustum frustum1(90, 16.0f / 9.0f, 0.1f, 100.0f);
    ViewFrustum frustum2(90, 4.0f / 3.0f, 0.1f, 100.0f);
    // Calculate the coincidence rate
    float overlap = calculateOverlap(frustum1, frustum2);
    // Output the coincidence rate
    std::cout << "Overlap rate: " << overlap << std::endl;
    return 0;
}
This code defines the structure of a field-of-view range, including the view angle and aspect ratio, and a function that calculates the coincidence rate of two field-of-view ranges. The function computes the view-angle differences between the two frustums in the horizontal and vertical directions and estimates their degree of coincidence; the coincidence rate is based on the ratio of the coinciding view angles to the average view angles. Two example field-of-view ranges are created in the main function, and their coincidence rate is calculated and output.
Referring to FIG. 6, the different field-of-view ranges within the same priority level usually intersect, and a higher degree of intersection indicates higher importance and thus a need for earlier rendering. To quantify the rendering priority of the different available field-of-view ranges within each priority level, step S5331 may first be executed in implementing step S533 to obtain, for each available field-of-view range, the cumulative value of its coincidence rates with the other available field-of-view ranges as its cumulative coincidence rate. Step S5332 may then be executed to take the proportionality coefficients between the cumulative coincidence rates of the available field-of-view ranges as their priority coefficients. Finally, step S5333 may be executed to aggregate the priority coefficients of the different available field-of-view ranges within each priority level.
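A minimal sketch of steps S5331 and S5332 follows, assuming the ViewFrustum structure and calculateOverlap function from the listing above are in scope; normalizing each cumulative coincidence rate by their sum is one possible reading of the proportionality coefficient and is an assumption, not the patent's stated formula:
#include <cstddef>
#include <vector>
// Assumes ViewFrustum and calculateOverlap(...) as defined in the listing above.
// Step S5331: accumulate each available field-of-view range's coincidence rate
// with every other range. Step S5332: normalize the accumulated values so they
// can serve as priority coefficients (assumed normalization).
std::vector<float> priorityCoefficients(const std::vector<ViewFrustum>& ranges) {
    std::vector<float> cumulative(ranges.size(), 0.0f);
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        for (std::size_t j = 0; j < ranges.size(); ++j) {
            if (i != j) {
                cumulative[i] += calculateOverlap(ranges[i], ranges[j]);
            }
        }
    }
    float total = 0.0f;
    for (float c : cumulative) total += c;
    if (total > 0.0f) {
        for (float& c : cumulative) c /= total; // proportionality coefficient
    }
    return cumulative;
}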
Referring to FIG. 7, when ordering the available field-of-view ranges contained in the different priority levels for rendering as the game character moves to each point in the character movement path, step S61 may first be executed to obtain the rendering priority of each priority level by sorting the levels from most to fewest available field-of-view ranges contained. Within each priority level, step S62 may be executed to obtain the rendering priority of each available field-of-view range by sorting from high to low according to its priority coefficient. Step S63 may then be executed to obtain the overall rendering priority of each available field-of-view range from the rendering priority of each priority level and the rendering priority of each available field-of-view range within its level. Step S64 may then be executed to obtain the projection position point of each available field-of-view range within the game model. Finally, step S65 may be executed to obtain the rendering priorities of different positions in the game model when the game character moves to each point in the character movement path, from the rendering priority of each available field-of-view range and its projection position point within the game model.
To enhance the game experience and avoid the situation where rendering the available field-of-view ranges in the lower priority levels leaves the available field-of-view ranges contained in the highest priority level insufficiently rendered, currently available rendering algorithms may be applied preferentially to the available field-of-view ranges contained in the priority level with the highest rendering priority. A sketch of the two-level ordering of steps S61 to S63 is given below.
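The following minimal sketch illustrates the two-level ordering of steps S61 to S63 under assumed names (FieldRange, PriorityLevel, sortForRendering); it is an illustration, not the patent's disclosed code:
#include <algorithm>
#include <vector>
// One available field-of-view range with its priority coefficient (assumed type).
struct FieldRange {
    int id;
    float priorityCoefficient;
};
// One priority level holding several available field-of-view ranges.
struct PriorityLevel {
    std::vector<FieldRange> ranges;
};
// Step S61: levels containing more ranges are rendered first.
// Step S62: within a level, higher priority coefficients are rendered first.
// Step S63: the flattened order is the overall rendering priority.
std::vector<FieldRange> sortForRendering(std::vector<PriorityLevel> levels) {
    std::sort(levels.begin(), levels.end(),
              [](const PriorityLevel& a, const PriorityLevel& b) {
                  return a.ranges.size() > b.ranges.size();
              });
    std::vector<FieldRange> order;
    for (auto& level : levels) {
        std::sort(level.ranges.begin(), level.ranges.end(),
                  [](const FieldRange& a, const FieldRange& b) {
                      return a.priorityCoefficient > b.priorityCoefficient;
                  });
        order.insert(order.end(), level.ranges.begin(), level.ranges.end());
    }
    return order;
}
Flattening after the two sorts yields the overall ordering of step S63; the projection position points of step S64 can then be attached to each entry to rank positions in the game model.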
To supplement the implementation of acquiring the projection position point of each available field-of-view range within the game model, the source code of part of the functional modules is provided below, with explanations in the comments.
#include <iostream>
#include <cmath>
// Define a three-dimensional vector class
class Vector3 {
public:
    float x, y, z;
    Vector3(float x = 0.0f, float y = 0.0f, float z = 0.0f) : x(x), y(y), z(z) {}
    // Vector addition (needed to offset the viewpoint position)
    Vector3 operator+(const Vector3& other) const {
        return Vector3(x + other.x, y + other.y, z + other.z);
    }
    // Scalar multiplication (needed to scale the view direction)
    Vector3 operator*(float s) const {
        return Vector3(x * s, y * s, z * s);
    }
    // Vector normalization
    Vector3 normalized() const {
        float len = std::sqrt(x * x + y * y + z * z);
        return {x / len, y / len, z / len};
    }
};
// Define viewpoint information
struct Viewpoint {
    Vector3 position;       // viewpoint position
    Vector3 viewDirection;  // view center line direction
    float fov;              // view angle (degrees)
    float aspectRatio;      // aspect ratio
    float nearClip;         // near clipping plane distance
    float farClip;          // far clipping plane distance
    Viewpoint(const Vector3& pos, const Vector3& dir, float angle, float ratio, float nearC, float farC)
        : position(pos), viewDirection(dir.normalized()), fov(angle), aspectRatio(ratio), nearClip(nearC), farClip(farC) {}
};
// Calculate the projection position point of the field-of-view range
Vector3 calculateProjection(const Viewpoint& viewpoint) {
    // Calculate the projection point from the viewpoint position and view direction,
    // projecting here onto the near clipping plane.
    // Calculate the width and height of the near clipping plane
    float tanFov = std::tan(viewpoint.fov * 0.5f * (M_PI / 180.0f));
    float height = 2.0f * tanFov * viewpoint.nearClip;
    float width = height * viewpoint.aspectRatio;
    (void)width; // width and height delimit the plane; only its center is needed here
    // Calculate the center point of the projection on the near clipping plane
    Vector3 center = viewpoint.position + viewpoint.viewDirection.normalized() * viewpoint.nearClip;
    // Return the center point of the near clipping plane as the projection position point
    return center;
}
int main() {
    // Example: define a viewpoint
    Viewpoint viewpoint(Vector3(0.0f, 0.0f, 0.0f), Vector3(0.0f, 0.0f, -1.0f), 90.0f, 16.0f / 9.0f, 0.1f, 100.0f);
    // Obtain the projection position point
    Vector3 projectionPoint = calculateProjection(viewpoint);
    // Output the projection position point
    std::cout << "Projection Point: (" << projectionPoint.x << ", " << projectionPoint.y << ", " << projectionPoint.z << ")" << std::endl;
    return 0;
}
This code defines a three-dimensional vector class and a viewpoint information structure including the viewpoint position, view direction, view angle, aspect ratio, and near and far clipping plane distances. The calculateProjection function calculates the projection position point of the field-of-view range within the game model for the given viewpoint information. The example main function creates a viewpoint, calculates its projection position point using calculateProjection, and outputs the result.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by hardware that performs the corresponding functions or acts, such as circuits or ASICs (application-specific integrated circuits), or by combinations of hardware and software, such as firmware.
Although the invention is described herein in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
The foregoing description of embodiments of the application has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A game engine design method based on three-dimensional modeling, characterized by comprising the following steps:
acquiring a character movement path of a game character in a game model;
acquiring the view angle of the viewpoint of the game character;
acquiring the viewpoint movement path and view center line of the viewpoint during multiple traversals of the game character along the character movement path;
obtaining a plurality of viewpoint position points and the view center lines corresponding to each position point in the character movement path from the viewpoint movement paths acquired during the multiple traversals;
calculating, from the view angle of the viewpoint of the game character and the plurality of viewpoint position points and view center lines corresponding to each position point in the character movement path, the available field-of-view ranges of each position point in the character movement path and the corresponding priority coefficients;
and obtaining the rendering priorities of different positions in the game model when the game character moves to each point in the character movement path, according to the available field-of-view ranges of each position point in the character movement path and the corresponding priority coefficients.
2. The method of claim 1, wherein the step of calculating, from the view angle of the viewpoint of the game character and the plurality of viewpoint position points and view center lines corresponding to each position point in the character movement path, the available field-of-view ranges of each position point in the character movement path and the corresponding priority coefficients comprises:
obtaining multiple sets of combinations of the viewpoint position points and the view center lines corresponding to each position point in the character movement path from the plurality of viewpoint position points and view center lines corresponding to each position point in the character movement path;
obtaining the field-of-view range of each set of combinations of the viewpoint position points and the view center lines within the game model from the view angle of the viewpoint of the game character;
and, for each position point in the character movement path, calculating the available field-of-view ranges and the corresponding priority coefficients of each position point in the character movement path from each set of combinations of the viewpoint position points and the view center lines and the corresponding field-of-view ranges.
3. The method of claim 2, wherein the step of calculating, for each position point in the character movement path, the available field-of-view ranges and the corresponding priority coefficients from each set of combinations of the viewpoint position points and the view center lines and the corresponding field-of-view ranges comprises:
for each position point in the character movement path,
dividing the available field-of-view ranges into several priority levels according to each set of combinations of the viewpoint position points and the view center lines,
calculating, within each of the priority levels, the coincidence rate between the different available field-of-view ranges,
and obtaining the priority coefficients of the different available field-of-view ranges within each priority level from the coincidence rates between the different available field-of-view ranges within each priority level;
and summarizing the available field-of-view ranges and corresponding priority coefficients of each position point in the character movement path across the different priority levels.
4. The method of claim 3, wherein the step of dividing the available field-of-view ranges into several priority levels according to each set of combinations of the viewpoint position points and the view center lines comprises:
arranging the coordinate data of the viewpoint position points and the angle data of the view center lines in each set of combinations in the same order to obtain a projection feature vector for each set of combinations;
selecting several projection feature vectors from all the projection feature vectors as reference projection feature vectors;
calculating the modulus length of the vector difference between each reference projection feature vector and every other projection feature vector;
assigning each other projection feature vector to the same vector level as the reference projection feature vector with which its vector-difference modulus length is smallest;
acquiring, within each vector level, the projection feature vector with the smallest vector-difference modulus length from the mean vector as the updated reference projection feature vector;
determining whether the updated reference projection feature vectors have changed;
if so, continuing to update the vector levels and the reference projection feature vectors;
if not, taking the field-of-view ranges corresponding to the sets of combinations of the viewpoint position points and the view center lines whose projection feature vectors are contained in each vector level as the available field-of-view ranges;
and dividing all the available field-of-view ranges into several priority levels according to the available field-of-view ranges corresponding to each vector level.
5. The method of claim 3, wherein the step of obtaining the priority coefficients of the different available field-of-view ranges within each priority level from the coincidence rates between the different available field-of-view ranges within each priority level comprises:
within each of the priority levels,
obtaining, for each available field-of-view range, the cumulative value of its coincidence rates with the other available field-of-view ranges as the cumulative coincidence rate of that available field-of-view range,
and taking the proportionality coefficients between the cumulative coincidence rates of the available field-of-view ranges as the priority coefficients of the available field-of-view ranges;
and summarizing the priority coefficients of the different available field-of-view ranges within each priority level.
6. The method of claim 3, wherein the step of obtaining the rendering priorities of different positions in the game model when the game character moves to each point in the character movement path, according to the available field-of-view ranges of each position point in the character movement path and the corresponding priority coefficients, comprises:
when the game character moves to each point in the character movement path,
obtaining the rendering priority of each priority level by sorting the priority levels from most to fewest available field-of-view ranges contained,
obtaining, within each priority level, the rendering priority of each available field-of-view range by sorting from high to low according to the corresponding priority coefficient,
and obtaining the rendering priority of each available field-of-view range from the rendering priority of each priority level and the rendering priority of each available field-of-view range within each priority level;
acquiring the projection position point of each available field-of-view range within the game model;
and obtaining the rendering priorities of different positions in the game model when the game character moves to each point in the character movement path, from the rendering priority of each available field-of-view range when the game character moves to each point in the character movement path and the projection position point of each available field-of-view range within the game model.
7. The method of claim 6, wherein the step of obtaining the rendering priorities of different positions in the game model when the game character moves to each point in the character movement path, according to the available field-of-view ranges of each position point in the character movement path and the corresponding priority coefficients, further comprises:
applying currently available rendering algorithms preferentially to the available field-of-view ranges contained in the priority level with the highest rendering priority.
8. The method of claim 1, further comprising:
continuously acquiring the viewpoint movement path and view center line of the viewpoint as the game character travels along the character movement path, and iteratively updating the rendering priorities of different positions in the game model when the game character moves to each point in the character movement path.
9. A game engine design system based on three-dimensional modeling, characterized by comprising:
a data collection interface, used for acquiring a character movement path of a game character in a game model;
acquiring the view angle of the viewpoint of the game character;
and acquiring the viewpoint movement path and view center line of the viewpoint during multiple traversals of the game character along the character movement path;
an optimizing component, used for obtaining a plurality of viewpoint position points and the view center lines corresponding to each position point in the character movement path from the viewpoint movement paths acquired during the multiple traversals;
calculating, from the view angle of the viewpoint of the game character and the plurality of viewpoint position points and view center lines corresponding to each position point in the character movement path, the available field-of-view ranges of each position point in the character movement path and the corresponding priority coefficients;
and obtaining the rendering priorities of different positions in the game model when the game character moves to each point in the character movement path, according to the available field-of-view ranges of each position point in the character movement path and the corresponding priority coefficients;
and a rendering engine component, used for rendering the game frames according to the rendering priorities of different positions in the game model when the game character moves to each point in the character movement path.
10. The system of claim 9, further comprising:
a user information collection component, used for acquiring, with the user's permission, the viewpoint movement path and view center line of the viewpoint as an ordinary game player controls the game character along the character movement path;
wherein the rendering engine component is further used for iteratively updating the rendering priorities of different positions in the game model when the game character under the control of the ordinary game player moves to each point in the character movement path.
CN202410393649.8A 2024-04-02 2024-04-02 Game engine design system and method based on three-dimensional modeling Active CN117959721B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410393649.8A CN117959721B (en) 2024-04-02 2024-04-02 Game engine design system and method based on three-dimensional modeling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410393649.8A CN117959721B (en) 2024-04-02 2024-04-02 Game engine design system and method based on three-dimensional modeling

Publications (2)

Publication Number Publication Date
CN117959721A CN117959721A (en) 2024-05-03
CN117959721B (en) 2024-05-31

Family

ID=90864821

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410393649.8A Active CN117959721B (en) 2024-04-02 2024-04-02 Game engine design system and method based on three-dimensional modeling

Country Status (1)

Country Link
CN (1) CN117959721B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007259990A (en) * 2006-03-27 2007-10-11 Konami Digital Entertainment:Kk Game apparatus, method of controlling thereof, and program
CN113034656A (en) * 2021-03-30 2021-06-25 完美世界(北京)软件科技发展有限公司 Rendering method, device and equipment for illumination information in game scene
CN113633971A (en) * 2021-08-31 2021-11-12 腾讯科技(深圳)有限公司 Video frame rendering method, device, equipment and storage medium
WO2022242854A1 (en) * 2021-05-19 2022-11-24 Telefonaktiebolaget Lm Ericsson (Publ) Prioritizing rendering by extended reality rendering device responsive to rendering prioritization rules
CN116832441A (en) * 2022-03-22 2023-10-03 网易(杭州)网络有限公司 Game display control method and device, storage medium and electronic equipment
KR20240002444A (en) * 2022-06-29 2024-01-05 (주)휴버 Service provding method for contents creating using rendering of object based on artificial intelligence and apparatus therefor

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9256896B2 (en) * 2009-08-27 2016-02-09 International Business Machines Corporation Virtual universe rendering based on prioritized metadata terms
US9897805B2 (en) * 2013-06-07 2018-02-20 Sony Interactive Entertainment Inc. Image rendering responsive to user actions in head mounted display
US10969486B2 (en) * 2018-04-26 2021-04-06 SCRRD, Inc. Augmented reality platform and method for use of same


Also Published As

Publication number Publication date
CN117959721A (en) 2024-05-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant