CN114219924A - Method, apparatus, device, medium, and program product for adaptive display of virtual scene - Google Patents

Method, apparatus, device, medium, and program product for adaptive display of virtual scene

Info

Publication number
CN114219924A
Authority
CN
China
Prior art keywords
virtual scene
detail
model
level
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111671860.4A
Other languages
Chinese (zh)
Other versions
CN114219924B (en)
Inventor
张道明
朱光育
李振
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Publication of CN114219924A publication Critical patent/CN114219924A/en
Application granted granted Critical
Publication of CN114219924B publication Critical patent/CN114219924B/en
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/60 Memory management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application provides a method and an apparatus for adaptive display of a virtual scene, an electronic device, a computer-readable storage medium, and a computer program product. The method includes: determining a plurality of level of detail models adapted to a first parameter of the virtual scene, where the first parameter is either a camera position parameter or a graphics processing performance parameter; determining, from the plurality of level of detail models, a target level of detail model adapted to a second parameter of the virtual scene, where the second parameter is the other of the camera distance parameter and the graphics processing performance parameter, i.e., whichever was not used as the first parameter; and displaying a virtual object located in the field of view of the camera in the virtual scene according to the target detail level model. With the method and the apparatus, virtual objects are switched and displayed smoothly based on an accurate detail level model.

Description

Method, apparatus, device, medium, and program product for adaptive display of virtual scene
This application claims priority to Chinese patent application No. 202111209465.4, filed on October 18, 2021, and entitled "Method, apparatus, device, medium and program product for adaptive display of virtual scenes".
Technical Field
The present application relates to human-computer interaction technology in computing, and in particular, to a method and an apparatus for adaptive display of a virtual scene, an electronic device, a computer-readable storage medium, and a computer program product.
Background
Display technology based on graphics processing hardware has expanded the channels for perceiving the environment and acquiring information. In particular, the display technology of virtual scenes can realize diversified interaction between virtual objects controlled by users or by artificial intelligence according to actual application requirements, and has various typical application scenarios; for example, in a virtual scene such as a game, a real battle process between virtual objects can be simulated.
In a virtual scene, when a virtual object in the field of view of the camera is far from the camera, the amount of detail of the virtual object that can be seen decreases; when the virtual object is very close to the camera, the amount of visible detail increases.
In the related art, the amount of detail of a virtual object is adjusted according to the distance between the virtual object and the camera. This scheme easily causes unsmooth switching between different levels of detail of the virtual object and a poor human-computer interaction effect of the virtual scene, thereby degrading the user experience.
Disclosure of Invention
Embodiments of the present application provide a method and an apparatus for adaptive display of a virtual scene, an electronic device, a computer-readable storage medium, and a computer program product, which are capable of smoothly switching and displaying a virtual object based on an accurate level-of-detail model.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides an adaptive display method of a virtual scene, which comprises the following steps:
determining a plurality of level of detail models adapted to a first parameter of the virtual scene; wherein the first parameter is either a camera position parameter or a graphics processing performance parameter;
determining, from the plurality of level of detail models, a target level of detail model adapted to a second parameter of the virtual scene; wherein the second parameter is the other of the camera distance parameter and the graphics processing performance parameter, i.e., whichever is not the first parameter;
and displaying a virtual object located in the field of view of the camera in the virtual scene according to the target detail level model.
The embodiment of the application provides an adaptive display method of a virtual scene, which comprises the following steps:
displaying a virtual object in the virtual scene that is in a field of view of a camera based on a first level of detail model;
updating a display of a virtual object in the virtual scene that is in a field of view of the camera based on a second level of detail model in response to a change in a vertical distance of the camera relative to a plane of the virtual scene;
wherein the level of detail of the second level of detail model is inversely related to the vertical distance after the change.
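As a minimal sketch of this inverse relation, the level of detail selected for the second model can decrease as the vertical camera distance increases; the band edges and LOD names below are assumptions, not values from the disclosure:

```python
# Hypothetical illustration: the level of detail of the second detail level model is
# inversely related to the changed vertical distance (band edges are assumed).
LOD_BY_HEIGHT = [
    (10.0, "lod0_high_detail"),
    (30.0, "lod1_medium_detail"),
    (float("inf"), "lod2_low_detail"),
]

def second_lod_for_vertical_distance(vertical_distance):
    for max_height, lod_name in LOD_BY_HEIGHT:
        if vertical_distance <= max_height:
            return lod_name

print(second_lod_for_vertical_distance(5.0))   # close to the plane  -> high detail
print(second_lod_for_vertical_distance(50.0))  # far above the plane -> low detail
```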
An embodiment of the present application provides an apparatus for adaptive display of a virtual scene, including:
a first determining module, configured to determine a plurality of level of detail models adapted to a first parameter of the virtual scene; wherein the first parameter is either a camera position parameter or a graphics processing performance parameter;
a second determining module, configured to determine, from the plurality of level of detail models, a target level of detail model adapted to a second parameter of the virtual scene; wherein the second parameter is the other of the camera distance parameter and the graphics processing performance parameter, i.e., whichever is not the first parameter;
and a first display module, configured to display a virtual object located in the field of view of the camera in the virtual scene according to the target detail level model.
In the above technical solution, when the first parameter is the graphics processing performance parameter, the first determining module is further configured to obtain a performance parameter configuration file, where the performance parameter configuration file includes association relationships between different graphics processing performance parameters and different detail level models;
and query the performance parameter configuration file based on the graphics processing performance parameter of the virtual scene to obtain a plurality of detail level models adapted to the graphics processing performance parameter of the virtual scene.
In the above technical solution, the performance parameter configuration file includes a model configuration table for different virtual objects, and the model configuration table includes association relations between different graphics processing performance parameters and different detail level models for the virtual objects;
the first determination module is further configured to query the performance parameter configuration file based on a virtual object in the virtual scene to obtain a model configuration table for the virtual object;
and inquiring a model configuration table aiming at the virtual object based on the graphic processing performance parameters of the virtual scene to obtain a plurality of detail level models which are matched with the graphic processing performance parameters of the virtual scene and the virtual object.
In the above technical solution, the performance parameter configuration file includes association relations between different graphics processing performance parameters and different detail level models for different virtual objects;
the first determining module is further configured to query the performance parameter configuration file based on a virtual object in the virtual scene and a graphics processing performance parameter of the virtual scene to obtain a plurality of detail level models adapted to the graphics processing performance parameter of the virtual scene and the virtual object.
In the above technical solution, the graphics processing performance parameters include graphics processing hardware parameters, and the performance parameter configuration file includes association relations between different graphics processing hardware parameters and different detail level models;
the first determining module is further configured to query the performance parameter configuration file based on the graphics processing hardware parameters of the virtual scene before the virtual scene ends to obtain a plurality of detail level models adapted to the graphics processing hardware parameters of the virtual scene;
the graphics processing hardware parameters are inquired according to the model of the electronic equipment displaying the virtual scene, and include at least one of the following: processor model, memory capacity.
In the above technical solution, the graphics processing performance parameters include graphics processing software parameters, and the performance parameter configuration file includes association relations between different graphics processing software parameters and different detail level models;
the first determining module is further configured to query the performance parameter configuration file based on the graphics processing software parameters during the virtual scene operation to obtain a plurality of detail level models adapted to the graphics processing software parameters during the virtual scene operation;
wherein the graphics processing software parameters include at least one of: memory free capacity, processor free computing power.
In the above technical solution, the graphics processing performance parameters include graphics processing software parameters and graphics processing hardware parameters, and the performance parameter configuration file includes first association relations between different graphics processing hardware parameters and different level of detail models, and second association relations between different graphics processing software parameters and different level of detail models;
the first determining module is further configured to, when the graphics processing software parameter of the virtual scene is greater than a performance threshold, query a first association relation included in the performance parameter configuration file based on the graphics processing hardware parameter of the virtual scene to obtain a plurality of detail level models adapted to the graphics processing hardware parameter of the virtual scene;
and when the graphics processing software parameter of the virtual scene is less than or equal to the performance threshold, query a second association relationship included in the performance parameter configuration file based on the graphics processing software parameters during operation of the virtual scene to obtain a plurality of detail level models adapted to the graphics processing software parameters during operation of the virtual scene.
In the above technical solution, when the second parameter is the camera distance parameter, the second determining module is further configured to obtain a distance parameter configuration file, where the distance parameter configuration file includes association relationships between different camera distance parameters and different detail level models;
and query the distance parameter configuration file based on the camera distance parameter of the virtual scene to obtain a target detail level model adapted to the camera distance parameter of the virtual scene.
In the above technical solution, the distance parameter configuration file includes association relationships between different camera distance intervals and different detail level models, and the camera distance intervals are obtained by dividing the value range of the camera distance parameter;
the second determining module is further configured to query the camera distance intervals included in the distance parameter configuration file based on the camera distance parameter of the virtual scene to obtain the camera distance interval corresponding to the camera distance parameter of the virtual scene;
and query, based on the camera distance interval corresponding to the camera distance parameter of the virtual scene, the association relationships between different camera distance intervals and different detail level models included in the distance parameter configuration file to obtain a target detail level model adapted to the camera distance parameter of the virtual scene.
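The interval lookup described above can be illustrated with a minimal Python sketch; the interval boundaries and LOD names below are assumptions for illustration only, not values from the disclosure:

```python
import bisect

# Hypothetical distance parameter configuration file: camera distance intervals
# (given by their upper bounds) associated with detail level models of decreasing detail.
INTERVAL_UPPER_BOUNDS = [15.0, 40.0, 100.0]          # assumed interval boundaries
LOD_PER_INTERVAL = ["lod0", "lod1", "lod2", "lod3"]  # one extra LOD beyond the last bound

def target_lod_for_distance(camera_distance):
    """Find the camera distance interval containing the distance, then return
    the target detail level model associated with that interval."""
    interval_index = bisect.bisect_left(INTERVAL_UPPER_BOUNDS, camera_distance)
    return LOD_PER_INTERVAL[interval_index]

print(target_lod_for_distance(25.0))  # falls in the second interval -> "lod1"
```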
In the above technical solution, the camera distance parameter is one of the following:
a distance between a camera of the virtual scene and a virtual object in a field of view of the camera;
a vertical distance between the camera and a plane of the virtual scene;
the difference between the vertical distance from the camera to the plane of the virtual scene and the height of the virtual object.
In the above technical solution, the first display module is further configured to obtain model parameters of the target detail level model, where the model parameters include a model mesh and a model material;
and displaying a virtual object in the virtual scene, which is positioned in the visual field of the camera, according to the model parameters.
In the above technical solution, before the virtual object in the visual field of the camera in the virtual scene is displayed according to the target level-of-detail model, the second determining module is further configured to obtain model parameters of each level-of-detail model, where the model parameters include model meshes and model materials;
storing the model parameters of each detail level model into a cache space;
obtaining model parameters of the target detail level model from the cache space;
and displaying a virtual object in the visual field of the camera in the virtual scene according to the model parameters of the target detail level model.
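The caching step described above can be sketched as follows; the mesh and material values are placeholders standing in for real model parameters, and the cache layout is an assumption, not the patent's data structure:

```python
# Hypothetical cache of model parameters for every detail level model.
def build_lod_cache(lod_models):
    """Store the model parameters (model mesh, model material) of each detail level model."""
    return {name: {"mesh": f"{name}_mesh", "material": f"{name}_material"}
            for name in lod_models}

lod_cache = build_lod_cache(["lod1", "lod2", "lod3"])

def display_from_cache(target_lod):
    params = lod_cache[target_lod]  # fetch from the cache space instead of reloading from disk
    return f"rendering with {params['mesh']} and {params['material']}"

print(display_from_cache("lod2"))
```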
In the above technical solution, when there are a plurality of detail level models adapted to the second parameter of the virtual scene, the second determining module is further configured to obtain a user portrait corresponding to the account controlling a virtual object in the virtual scene;
and call a detail level preference model based on the plurality of detail level models adapted to the second parameter of the virtual scene and the user portrait, to obtain the target detail level model.
In the above technical solution, when there are a plurality of detail level models adapted to the second parameter of the virtual scene, the second determining module is further configured to obtain the complexity of a virtual object located in the field of view of the camera in the virtual scene;
and call a detail level prediction model based on the plurality of detail level models adapted to the second parameter of the virtual scene and the complexity, to obtain the target detail level model.
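Both refinements reduce to picking one model from the candidate set using an extra signal (a user portrait preference or the object's complexity). The ranking function below is only a placeholder standing in for the preference/prediction model, which the disclosure does not specify:

```python
# Hypothetical stand-in for the detail level preference/prediction model:
# it maps an extra signal onto the candidate set and returns the target LOD.
def pick_target_lod(candidate_lods, preference_for_detail):
    """candidate_lods: detail level models adapted to the second parameter, ordered
    from most to least detailed; preference_for_detail: a score in [0, 1] produced
    by the preference/prediction model (values here are illustrative)."""
    index = round((1.0 - preference_for_detail) * (len(candidate_lods) - 1))
    return candidate_lods[index]

print(pick_target_lod(["lod1", "lod2", "lod3"], preference_for_detail=0.9))  # -> "lod1"
print(pick_target_lod(["lod1", "lod2", "lod3"], preference_for_detail=0.1))  # -> "lod3"
```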
An embodiment of the present application further provides an apparatus for adaptive display of a virtual scene, including:
a second display module for displaying a virtual object in the virtual scene in the field of view of the camera based on the first level of detail model;
a third display module for updating and displaying a virtual object in the virtual scene that is in the field of view of the camera based on the second level of detail model in response to a change in the vertical distance of the camera relative to the plane of the virtual scene;
wherein the level of detail of the second level of detail model is inversely related to the vertical distance after the change.
In the above technical solution, a vertical distance of the camera with respect to a plane of the virtual scene is one of:
a vertical distance between the camera and a plane of the virtual scene;
the difference between the vertical distance from the camera to the plane of the virtual scene and the height of the virtual object.
In the above technical solution, before the virtual object in the field of view of the camera in the virtual scene is updated and displayed based on the second level of detail model, the third display module is further configured to determine a plurality of level of detail models adapted to graphics processing performance parameters of the virtual scene;
determining a second level of detail model from the plurality of level of detail models that fits the changed vertical distance.
In the above technical solution, before the virtual object in the field of view of the camera in the virtual scene is updated and displayed based on the second level of detail model, the third display module is further configured to determine a plurality of level of detail models adapted to the changed vertical distance;
determining a second level of detail model from the plurality of level of detail models that is adapted to graphics processing performance parameters of the virtual scene.
An embodiment of the present application provides an electronic device for adaptive display, the electronic device includes:
a memory for storing executable instructions;
and the processor is used for realizing the adaptive display method of the virtual scene provided by the embodiment of the application when the executable instructions stored in the memory are executed.
An embodiment of the present application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, cause the processor to implement the method for adaptive display of a virtual scene provided in the embodiments of the present application.
The embodiment of the present application provides a computer program product, which includes a computer program or an instruction, and when the computer program or the instruction is executed by a processor, the method for adaptively displaying a virtual scene provided in the embodiment of the present application is implemented.
The embodiment of the application has the following beneficial effects:
the accurate and adaptive target detail level model is determined by combining two parameters, namely the camera position parameter and the graphic processing performance parameter, of the virtual scene, so that smooth switching of virtual objects is performed based on the accurate and adaptive target detail level model, the human-computer interaction effect of the virtual scene is improved, and related communication resources and computing resources are saved.
Drawings
FIG. 1A is a schematic diagram of an application mode of a method for adaptive display of a virtual scene according to an embodiment of the present application;
FIG. 1B is a schematic diagram of an application mode of a method for adaptive display of a virtual scene according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an electronic device for adaptive display according to an embodiment of the present application;
FIGS. 3A-3C are schematic flow diagrams of a method for adaptive display of a virtual scene according to an embodiment of the present application;
FIG. 4A is a schematic interface diagram of a low viewing angle on a high-end machine according to an embodiment of the present application;
FIG. 4B is a schematic interface diagram of a medium viewing angle on a high-end machine according to an embodiment of the present application;
FIG. 4C is a schematic interface diagram of a high viewing angle on a high-end machine according to an embodiment of the present application;
FIG. 4D is a schematic interface diagram of a low viewing angle on a low-end machine according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a configuration interface of a model extension according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a configuration interface of a model extension according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an interface for configuration saving according to an embodiment of the present application;
FIG. 8 is a schematic diagram of configuration item partitioning according to an embodiment of the present application;
FIG. 9 is a schematic flowchart of a method for adaptive display of a virtual scene according to an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, the terms "first", "second", and the like are used only to distinguish similar objects and do not denote a particular order or importance. Where permitted, the specific order or sequence may be interchanged, so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) In response to: indicates the condition or state on which a performed operation depends. When the condition or state on which an operation depends is satisfied, the one or more operations may be performed in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which the operations are performed.
2) Client: an application program running on a terminal to provide various services, such as a video playing client or a game client.
3) Virtual scene: the application program displays (or provides) a virtual scene when running on the terminal. The virtual scene may be a simulation environment of a real world, a semi-simulation semi-fictional virtual environment, or a pure fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene, and the dimension of the virtual scene is not limited in the embodiment of the present application. For example, a virtual scene may include sky, land, ocean, etc., the land may include environmental elements such as deserts, cities, etc., and a user may control a virtual object to move in the virtual scene.
4) Virtual object: the image of various people and objects that can interact in the virtual scene, or the movable objects in the virtual scene. The movable object may be a virtual character, a virtual animal, an animation character, etc., such as a character, animal, etc., displayed in a virtual scene. The virtual object may be an avatar in a virtual scene that is virtual to represent the user. The virtual scene may include a plurality of virtual objects, each virtual object having its own shape and volume in the virtual scene and occupying a portion of the space in the virtual scene.
5) Scene data: the characteristic data representing the virtual scene may be, for example, the area of a building area in the virtual scene, the current architectural style of the virtual scene, and the like; the position of the virtual building in the virtual scene, the floor space of the virtual building, and the like may also be included.
6) Multi-level-of-detail model (LOD, Level of Detail): when a virtual object in the virtual scene is far from the camera, the amount of visible detail is greatly reduced. The amount of detail (i.e., the degree of detail) of the virtual object, that is, the number of triangular meshes, can be adjusted according to the distance from the virtual object to the camera, which is also referred to as distance LOD. Detail level models of different precisions are enabled according to different distances, so that the virtual objects in the virtual scene are presented based on LOD models of corresponding precision.
7) Multi-level-of-detail model component (LODGroup): a component provided by the game engine (e.g., Unity) for managing multi-level-of-detail models. Detail level models can be added to the LODGroup, which decides which detail level model to use based on the fraction of the screen occupied by the virtual object.
8) Device-model level of detail (model LOD): different device models are provided with different detail level models for adaptation, so that each device model is adapted to detail level models of corresponding precision and the game runs smoothly.
9) Prefab: a resource type in the game engine and game development tool (Unity3D) that stores a combination of project resources and resource information. For example, a Prefab is a reusable game object stored in the Project view; it can be placed in multiple scenes, or placed multiple times in the same scene, and when a Prefab is placed in a scene, a corresponding instance is created.
10) Matrix4x4: a standard 4x4 transformation matrix, which can perform arbitrary linear 3D transformations (i.e., translation, rotation, scaling, shearing, etc.) and perspective transformations using homogeneous coordinates.
11) GPU instancing (GPU Instance): multiple copies of the same mesh can be rendered at once using GPU instancing with only a small number of rendering instructions (draw calls); it is suitable for rendering objects that appear repeatedly in a scene, such as buildings, trees, grass, etc.
Embodiments of the present application provide a method and an apparatus for adaptive display of a virtual scene, an electronic device, a computer-readable storage medium, and a computer program product, which are capable of smoothly switching and displaying a virtual object based on an accurate level-of-detail model. In order to facilitate easier understanding of the adaptive display method for a virtual scene provided in the embodiments of the present application, an exemplary implementation scenario of the adaptive display method for a virtual scene provided in the embodiments of the present application is first described.
In some embodiments, the virtual scene may be an environment in which game characters interact, for example, in which game characters battle each other. Two-way interaction can be performed in the virtual scene by controlling the actions of the game characters, allowing the user to relieve the stress of daily life during the game.
In an implementation scenario, referring to fig. 1A, fig. 1A is an application mode schematic diagram of an adaptive display method for a virtual scene provided in an embodiment of the present application, and is applicable to application modes that can complete calculation of related data of the virtual scene 100 completely depending on a computing capability of graphics processing hardware of a terminal 400, such as a game in a single-computer/offline mode, and output of the virtual scene is completed through various different types of terminals 400, such as a smart phone, a tablet computer, and a virtual reality/augmented reality device.
As an example, types of Graphics Processing hardware include a Central Processing Unit (CPU) and a Graphics Processing Unit (GPU).
When the visual perception of the virtual scene 100 is formed, the terminal 400 calculates and displays required data through the graphic computing hardware, completes the loading, analysis and rendering of the display data, and outputs a video frame capable of forming the visual perception on the virtual scene at the graphic output hardware, for example, a two-dimensional video frame is displayed on a display screen of a smart phone, or a video frame realizing a three-dimensional display effect is projected on a lens of an augmented reality/virtual reality glasses; in addition, the terminal 400 may also form one or more of auditory perception, tactile perception, motion perception, and taste perception by means of different hardware in order to enrich the perception effect.
As an example, the terminal 400 runs a client 410 (e.g. a standalone version of a game application), and outputs a virtual scene including role play during the running process of the client 410, wherein the virtual scene may be an environment for game role interaction, such as a plain, a street, a valley, and the like for game role battle; taking the example of displaying the virtual scene 100 from a first-person perspective, in the virtual scene 100, the virtual object 101 and the plane 102 are located in the field of view of the camera, the virtual object 101 may be a game character controlled by a user (or a player), that is, the virtual object 101 is controlled by a real user (an enemy), and will operate in the virtual scene in response to the real user's operation of buttons (including a rocker button, an attack button, a defense button, and the like), for example, when the real user moves the rocker button to the left, the virtual object will move to the left in the virtual scene, and may also remain stationary in place, jump, and use various functions (such as skills and props); the virtual object 101 may also be Artificial Intelligence (AI) set in a virtual scene fight by training; the virtual object 101 may also be a Non-user Character (NPC) set in the virtual scene interaction; the virtual object 101 may also be an immovable object or a movable object in the virtual scene 100.
For example, taking a top-down view of the first-person perspective for displaying the virtual scene 100 as an example, the virtual ground and the virtual object 101 within the field of view of the camera are displayed in the virtual scene. When the player-controlled virtual object dives in the virtual scene, that is, when the camera dives, the virtual object 101 and the virtual ground in the field of view of the camera become closer to the camera. An accurate and adapted detail level model is determined by combining two parameters of the virtual scene 100, namely the camera position parameter and the graphics processing performance parameter, so that the virtual object 101 and the virtual ground are switched smoothly based on this model and more appropriate details of the virtual object 101 and the virtual ground can be seen, which improves the human-computer interaction effect of the virtual scene and the user experience.
In another implementation scenario, referring to FIG. 1B, FIG. 1B is a schematic diagram of an application mode of the method for adaptive display of a virtual scene provided in this embodiment, applied to a terminal 400 and a server 200, and suitable for application modes that rely on the computing capability of the server 200 to complete the calculation of the virtual scene and output the virtual scene at the terminal 400.
Taking the formation of visual perception of the virtual scene 100 as an example, the server 200 performs calculation of display data (e.g., scene data) related to the virtual scene and sends the calculated display data to the terminal 400 through the network 300; the terminal 400 relies on graphics computing hardware to complete the loading, parsing, and rendering of the calculated display data, and relies on graphics output hardware to output the virtual scene to form visual perception, for example, a two-dimensional video frame may be presented on a display screen of a smartphone, or a video frame realizing a three-dimensional display effect may be projected on a lens of augmented reality/virtual reality glasses. As for other forms of perception of the virtual scene, auditory perception can be formed by means of the corresponding hardware output of the terminal 400, for example using a speaker, and tactile perception can be formed using a vibrator, etc.
As an example, the terminal 400 runs a client 410 (e.g., a network-based game application) thereon, and performs game interaction with other users by connecting to the server 200 (e.g., a game server), the terminal 400 outputs the virtual scene 100 of the client 410, displays the virtual scene 100 in a first-person perspective as an example, a virtual object 101 and a plane 102 in the virtual scene 100, which are located in the field of view of the camera, the virtual object 101 may be a game character controlled by a user (or player), i.e., the virtual object 101 is under the control of a real user (adversary), will operate in a virtual scene in response to the real user's operation of buttons (including rocker buttons, attack buttons, defense buttons, etc.), for example, when a real user moves the rocker button to the left, the virtual object will move to the left in the virtual scene, and may also remain stationary, jump, and use various functions (such as skills and props); the virtual object 101 may also be Artificial Intelligence (AI) set in a virtual scene fight by training; the virtual object 101 may also be a Non-user Character (NPC) set in the virtual scene interaction; the virtual object 101 may also be an immovable object or a movable object in the virtual scene 100.
For example, taking a top-down view of the first-person perspective for displaying the virtual scene 100 as an example, the virtual ground and the virtual object 101 within the field of view of the camera are displayed in the virtual scene. When the player-controlled virtual object dives in the virtual scene, that is, when the camera dives, the virtual object 101 and the virtual ground in the field of view of the camera become closer to the camera. An accurate and adapted detail level model is determined by combining two parameters of the virtual scene 100, namely the camera position parameter and the graphics processing performance parameter, so that the virtual object 101 and the virtual ground are switched smoothly based on this model and more appropriate details of the virtual object 101 and the virtual ground can be seen, which improves the human-computer interaction effect of the virtual scene and the user experience.
In some embodiments, the terminal 400 may implement the method for adaptive display of a virtual scene provided in the embodiments of the present application by running a computer program. For example, the computer program may be a native program or a software module in an operating system; it may be a native application (APP), i.e., a program that needs to be installed in an operating system to run, such as a strategy game (SLG) APP (i.e., the client 410 described above); it may be an applet, i.e., a program that only needs to be downloaded into a browser environment to run; it may also be a game applet that can be embedded in any APP. In general, the computer program may be any form of application, module, or plug-in.
Taking the computer program being an application program as an example, in actual implementation, the terminal 400 has installed and runs an application program that supports virtual scenes. The application program may be any one of a first-person shooting game (FPS), a third-person shooting game, a virtual reality application program, a three-dimensional map program, or a multiplayer gunfight survival game. The user uses the terminal 400 to operate a virtual object located in the virtual scene to perform activities, including but not limited to at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, throwing, and building a virtual building. Illustratively, the virtual object may be a virtual character, such as a simulated character or an animated character.
In some embodiments, the embodiments of the present application may also be implemented by means of Cloud Technology (Cloud Technology), which refers to a hosting Technology for unifying resources of hardware, software, network, and the like in a wide area network or a local area network to implement computation, storage, processing, and sharing of data.
Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, application technology, and the like that are applied based on a cloud computing business model; it can form a resource pool that is used on demand and is flexible and convenient. Cloud computing technology will become an important support, because the background services of a technical network system require a large amount of computing and storage resources.
For example, the server 200 in fig. 1B may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform. The terminal 400 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present application is not limited thereto.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device for adaptive display provided in an embodiment of the present application, and the electronic device is taken as a terminal 400 for example for explanation, where the electronic device 400 shown in fig. 2 includes: at least one processor 420, memory 460, at least one network interface 430, and a user interface 440. The various components in the terminal 400 are coupled together by a bus system 450. It is understood that the bus system 450 is used to enable connected communication between these components. The bus system 450 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 450 in fig. 2.
The Processor 420 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 440 includes one or more output devices 441, including one or more speakers and/or one or more visual display screens, that enable the presentation of media content. The user interface 440 also includes one or more input devices 442 including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display screen, camera, other input buttons and controls.
The memory 460 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 460 may optionally include one or more storage devices physically located remote from processor 420.
The memory 460 may include volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 460 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 460 may be capable of storing data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 461 comprising system programs for handling various basic system services and performing hardware related tasks, such as framework layer, core library layer, driver layer, etc., for implementing various basic services and handling hardware based tasks;
a network communication module 462 for reaching other computing devices via one or more (wired or wireless) network interfaces 430, exemplary network interfaces 430 including: Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), etc.;
a presentation module 463 for enabling presentation of information (e.g., user interfaces for operating peripherals and displaying content and information) via one or more output devices 441 (e.g., display screens, speakers, etc.) associated with user interface 440;
an input processing module 464 for detecting one or more user inputs or interactions from one of the one or more input devices 442 and translating the detected inputs or interactions.
In some embodiments, the adaptive display device for a virtual scene provided in the embodiments of the present application may be implemented in software, and fig. 2 illustrates an adaptive display device 465 for a virtual scene stored in a memory 460, which may be software in the form of programs and plug-ins, and includes the following software modules: the first determining module 4651, the second determining module 4652 and the first displaying module 4653, or the second displaying module 4654 and the third displaying module 4655 are logical modules, and thus may be arbitrarily combined or further split according to the implemented functions. It should be noted that all the above modules are shown once in fig. 2 for convenience of expression, but should not be construed as excluding the implementation of the adaptive display device 465 in a virtual scene that may include only the first determining module 4651, the second determining module 4652 and the first display module 4653, or the second display module 4654 and the third display module 4655, the functions of each of which will be explained below.
In other embodiments, the adaptive display Device of the virtual scene provided in this embodiment may be implemented in hardware, and for example, the adaptive display Device of the virtual scene provided in this embodiment may be a processor in the form of a hardware decoding processor, which is programmed to execute the adaptive display method of the virtual scene provided in this embodiment, for example, the processor in the form of the hardware decoding processor may employ one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic elements.
The following describes a method for adaptive display of a virtual scene according to an embodiment of the present application with reference to the accompanying drawings. The method for adaptively displaying a virtual scene provided in the embodiment of the present application may be executed by the terminal 400 in fig. 1A alone, or may be executed by the terminal 400 and the server 200 in fig. 1B in cooperation.
Next, a description will be given taking an example in which the terminal 400 in fig. 1A alone performs the adaptive display method of the virtual scene provided in the embodiment of the present application. Referring to fig. 3A, fig. 3A is a schematic flowchart of an adaptive display method of a virtual scene provided in an embodiment of the present application, and will be described with reference to the steps shown in fig. 3A.
It should be noted that the method shown in fig. 3A can be executed by various forms of computer programs running on the terminal 400, and is not limited to the client 410 described above, but may also be the operating system 461, software modules and scripts described above, so that the client should not be considered as limiting the embodiments of the present application.
In step 101, a plurality of level of detail models adapted to a first parameter of a virtual scene are determined; wherein the first parameter is either a camera position parameter or a graphics processing performance parameter.
For example, when the first parameter is the graphics processing performance parameter, a plurality of detail level models adapted to the graphics processing performance parameter of the virtual scene are determined, then a target detail level model adapted to the camera position parameter of the virtual scene is determined from the plurality of detail level models, and a virtual object located in the field of view of the camera in the virtual scene is displayed according to the target detail level model. When the first parameter is the camera position parameter, a plurality of detail level models adapted to the camera position parameter of the virtual scene are determined, then a target detail level model adapted to the graphics processing performance parameter of the virtual scene is determined from the plurality of detail level models, and a virtual object located in the field of view of the camera in the virtual scene is displayed according to the target detail level model.
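The two-stage selection just described can be illustrated with a minimal Python sketch; the performance tiers, distance bands, and LOD names are hypothetical and not taken from the disclosure:

```python
# Hypothetical sketch of the two-stage selection: first narrow the candidates by
# one parameter, then pick the target model with the other parameter.
PERFORMANCE_TO_LODS = {          # first parameter: graphics processing performance
    "high":   ["lod0", "lod1", "lod2"],
    "medium": ["lod1", "lod2"],
    "low":    ["lod2"],
}

def candidates_for_performance(performance_tier):
    """Step 1: determine the plurality of detail level models adapted to the first parameter."""
    return PERFORMANCE_TO_LODS[performance_tier]

def target_for_camera_distance(candidates, camera_distance):
    """Step 2: pick the target detail level model adapted to the second parameter
    (camera distance); larger distances map to coarser models in the candidate set."""
    index = min(int(camera_distance // 10), len(candidates) - 1)  # 10 units per band (assumed)
    return candidates[index]

candidates = candidates_for_performance("high")
print(target_for_camera_distance(candidates, camera_distance=25.0))  # -> "lod2" here
```

The same structure works with the roles reversed (camera position first, then graphics processing performance), as the paragraph above notes.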
In some embodiments, the camera distance parameter is one of: a distance between a camera of the virtual scene and a virtual object in a field of view of the camera; the vertical distance between the camera and the plane of the virtual scene (e.g., reference plane such as virtual ground, virtual water surface, virtual sky, etc.); the difference in the vertical distance between the camera and the plane of the virtual scene and the height of the virtual object.
For example, when the height of a virtual object in the virtual scene is small, the height of the virtual object may be ignored when calculating the camera distance parameter, thereby speeding up the calculation of the camera distance parameter.
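The three variants of the camera distance parameter listed above can be computed directly from the camera and object positions. The sketch below is only an illustration, assuming positions given as 3D tuples with a y-up convention (not the patent's data types):

```python
import math

def object_distance(camera_pos, object_pos):
    """Variant 1: straight-line distance between the camera and a virtual object."""
    return math.dist(camera_pos, object_pos)

def vertical_distance(camera_pos, plane_height=0.0):
    """Variant 2: vertical distance between the camera and a reference plane of the
    virtual scene (e.g., the virtual ground), assumed to lie at plane_height."""
    return camera_pos[1] - plane_height  # y-up convention assumed

def vertical_distance_minus_height(camera_pos, object_height, plane_height=0.0):
    """Variant 3: difference between the vertical camera distance and the object's
    height; when the object is short, its height can be ignored, as noted above."""
    return vertical_distance(camera_pos, plane_height) - object_height
```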
The following description takes the first parameter as the graphics processing performance parameter as an example:
referring to fig. 3B, fig. 3B is an optional flowchart of a method for adaptively displaying a virtual scene according to an embodiment of the present application, and fig. 3B illustrates that step 101 in fig. 3A may be implemented by steps 1011 to 1012: in step 1011, when the first parameter is a graphic processing performance parameter, a performance parameter configuration file is obtained, wherein the performance parameter configuration file includes an association relationship between different graphic processing performance parameters and different detail level models; in step 1012, the performance parameter configuration file is queried based on the graphics processing performance parameters of the virtual scene to obtain a plurality of detail level models adapted to the graphics processing performance parameters of the virtual scene.
For example, the association relationships between different graphics processing performance parameters and different detail level models are configured in advance and stored in a performance parameter configuration file. After the graphics processing performance parameter of the virtual scene is obtained, the association relationships included in the performance parameter configuration file are queried based on this parameter to find the plurality of detail level models adapted to the graphics processing performance parameter of the virtual scene. The plurality of detail level models are then further filtered in combination with the camera distance parameter to obtain a target detail level model adapted to both the camera distance parameter and the graphics processing performance parameter, and all virtual objects in the virtual scene are displayed based on the target detail level model.
As an example, when the graphics processing performance parameter of the virtual scene indicates high performance, querying the performance parameter configuration file based on this parameter yields lod1, lod2, and lod3 as the plurality of detail level models adapted to the graphics processing performance parameter of the virtual scene.
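A minimal sketch of such a performance parameter configuration file and its query is shown below; the JSON layout and tier names are assumptions for illustration, not the patent's actual file format:

```python
import json

# Hypothetical performance parameter configuration file: each graphics processing
# performance tier is associated with a set of detail level models.
PERF_CONFIG_JSON = """
{
    "high":   ["lod1", "lod2", "lod3"],
    "medium": ["lod2", "lod3"],
    "low":    ["lod3"]
}
"""

def query_performance_config(performance_parameter):
    """Return the plurality of detail level models adapted to the given performance parameter."""
    config = json.loads(PERF_CONFIG_JSON)
    return config[performance_parameter]

print(query_performance_config("high"))  # ['lod1', 'lod2', 'lod3'], matching the example above
```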
In some embodiments, the performance parameter profile includes model configuration tables for different virtual objects, the model configuration tables including associations of different graphics processing performance parameters to different level of detail models for the virtual objects; inquiring a performance parameter configuration file based on the graphic processing performance parameters of the virtual scene to obtain a plurality of detail level models adapted to the graphic processing performance parameters of the virtual scene, wherein the detail level models comprise: inquiring a performance parameter configuration file based on a virtual object in a virtual scene to obtain a model configuration table aiming at the virtual object; and inquiring a model configuration table aiming at the virtual object based on the graphic processing performance parameters of the virtual scene to obtain a plurality of detail level models which are matched with the graphic processing performance parameters of the virtual scene and the virtual object.
For example, in the embodiments of the present application, all virtual objects in the virtual scene may be displayed based on the determined target level of detail model, e.g., the degree of detail of virtual object 1 is lod1 and the degree of detail of virtual object 2 is also lod1. The embodiments of the present application may also set virtual objects in the virtual scene individually, that is, different virtual objects may be displayed with different degrees of detail, e.g., the degree of detail of virtual object 1 is lod1 and the degree of detail of virtual object 2 is lod2.
For the case where virtual objects in the virtual scene are set individually, a model configuration table including the association relationships between different graphics processing performance parameters and different detail level models is configured in advance for each virtual object and stored in the performance parameter configuration file. After the graphics processing performance parameter of the virtual scene is obtained, the performance parameter configuration file is queried based on a given virtual object in the virtual scene to obtain the model configuration table for that virtual object, and the model configuration table is then queried based on the graphics processing performance parameter of the virtual scene to find the plurality of detail level models adapted to the graphics processing performance parameter of the virtual scene and to the virtual object. The plurality of detail level models are subsequently filtered further in combination with the camera distance parameter to obtain a target detail level model adapted to both the camera distance parameter and the graphics processing performance parameter, and the virtual object in the virtual scene is displayed based on the target detail level model.
It should be noted that configuring a separate configuration table per virtual object allows multiple configuration tables to be configured at the same time, which speeds up configuration and makes it easy to look up the configuration information of the detail level models.
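As a sketch of this per-object variant, the configuration file can nest one model configuration table under each virtual object; the object names and tiers below are hypothetical:

```python
# Hypothetical performance parameter configuration file containing one model
# configuration table per virtual object (object names and tiers are illustrative).
PER_OBJECT_CONFIG = {
    "virtual_object_1": {"high": ["lod1", "lod2"], "low": ["lod2"]},
    "virtual_object_2": {"high": ["lod2", "lod3"], "low": ["lod3"]},
}

def query_lods_for_object(object_name, performance_parameter):
    """Look up the model configuration table for the object, then query it with the
    graphics processing performance parameter of the virtual scene."""
    model_table = PER_OBJECT_CONFIG[object_name]
    return model_table[performance_parameter]

print(query_lods_for_object("virtual_object_1", "high"))  # ['lod1', 'lod2']
```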
In some embodiments, the performance parameter profile includes associations of different graphics processing performance parameters to different level of detail models for different virtual objects; inquiring a performance parameter configuration file based on the graphic processing performance parameters of the virtual scene to obtain a plurality of detail level models adapted to the graphic processing performance parameters of the virtual scene, wherein the detail level models comprise: and inquiring the performance parameter configuration file based on the virtual object in the virtual scene and the graphic processing performance parameter of the virtual scene to obtain a plurality of detail level models which are matched with the graphic processing performance parameter and the virtual object of the virtual scene.
For the case where virtual objects in the virtual scene are set individually, the association relationships between different graphics processing performance parameters and different detail level models for different virtual objects are configured in advance and stored in the performance parameter configuration file. After the graphics processing performance parameter of the virtual scene is obtained, the performance parameter configuration file is queried based on a virtual object in the virtual scene and the graphics processing performance parameter of the virtual scene to find the plurality of detail level models adapted to the graphics processing performance parameter of the virtual scene and to the virtual object. The plurality of detail level models are subsequently filtered further in combination with the camera distance parameter to obtain a target detail level model adapted to both the camera distance parameter and the graphics processing performance parameter, and the virtual object in the virtual scene is displayed based on the target detail level model.
In some embodiments, the graphics processing performance parameters include graphics processing hardware parameters, and the performance parameter configuration file includes associations between different graphics processing hardware parameters and different level of detail models; querying the performance parameter configuration file based on the graphics processing performance parameters of the virtual scene to obtain a plurality of level of detail models adapted to the graphics processing performance parameters of the virtual scene includes: before the virtual scene finishes running, querying the performance parameter configuration file based on the graphics processing hardware parameters of the virtual scene to obtain a plurality of level of detail models adapted to the graphics processing hardware parameters of the virtual scene. The graphics processing hardware parameters are queried according to the model of the electronic device displaying the virtual scene and include at least one of the following: processor model, memory capacity.
For example, the processor includes a central processing unit, a graphics processing unit, and the like; processors of different models and memories of different capacities run software at different speeds. When the virtual scene starts running or while it is running, the performance parameter configuration file is queried based on the graphics processing hardware parameters of the virtual scene to obtain a plurality of level of detail models adapted to those hardware parameters. For example, when the electronic device displaying the virtual scene has a large memory capacity and a powerful processor, the levels of detail of the plurality of adapted level of detail models are high.
In some embodiments, the graphics processing performance parameters include graphics processing software parameters, and the performance parameter configuration file includes associations between different graphics processing software parameters and different level of detail models; querying the performance parameter configuration file based on the graphics processing performance parameters of the virtual scene to obtain a plurality of level of detail models adapted to the graphics processing performance parameters of the virtual scene includes: querying the performance parameter configuration file based on the graphics processing software parameters while the virtual scene is running, to obtain a plurality of level of detail models adapted to the graphics processing software parameters while the virtual scene is running; wherein the graphics processing software parameters include at least one of the following: memory free capacity, processor free computing power.
For example, the processor includes a central processing unit, a graphics processing unit, and the like, and the speed at which the electronic device runs software varies over time. While the virtual scene is running, the performance parameter configuration file is queried based on the graphics processing software parameters of the virtual scene to obtain, in real time, a plurality of level of detail models adapted to the graphics processing software parameters of the virtual scene. The free computing power depends on the number of cores, the core frequency, the amount of work a core completes in a single clock cycle, and the like.
In some embodiments, the graphics processing performance parameters include graphics processing software parameters and graphics processing hardware parameters, and the performance parameter configuration file includes first associations between different graphics processing hardware parameters and different level of detail models, and second associations between different graphics processing software parameters and different level of detail models; querying the performance parameter configuration file based on the graphics processing performance parameters of the virtual scene to obtain a plurality of level of detail models adapted to the graphics processing performance parameters of the virtual scene includes: when the graphics processing software parameters of the virtual scene are greater than a performance threshold, querying the first associations included in the performance parameter configuration file based on the graphics processing hardware parameters of the virtual scene to obtain a plurality of level of detail models adapted to the graphics processing hardware parameters of the virtual scene; and when the graphics processing software parameters of the virtual scene are less than or equal to the performance threshold, querying the second associations included in the performance parameter configuration file based on the graphics processing software parameters while the virtual scene is running to obtain a plurality of level of detail models adapted to the graphics processing software parameters while the virtual scene is running.
For example, when the graphics processing software parameters of the virtual scene are greater than the performance threshold, the software is running fast enough, and the plurality of level of detail models adapted to the graphics processing hardware parameters of the virtual scene can be obtained directly from the first associations between different graphics processing hardware parameters and different level of detail models; these models are sufficient to keep the virtual scene running smoothly. When the graphics processing software parameters of the virtual scene are less than or equal to the performance threshold, the software is running slowly, and the plurality of level of detail models adapted in real time to the graphics processing software parameters of the virtual scene need to be queried from the second associations between different graphics processing software parameters and different level of detail models, so that the selected level of detail models still keep the virtual scene running smoothly.
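A minimal sketch of this threshold-based choice is given below (Python); the association tables, tier labels, and the 0-to-1 sw_headroom score are assumptions made for illustration, not the patent's data format.

```python
# Assumed associations: hardware tier -> candidate lods, software tier -> candidate lods.
HARDWARE_ASSOC = {"flagship": ["lod0", "lod1", "lod2"], "mid": ["lod1", "lod2"], "entry": ["lod2"]}
SOFTWARE_ASSOC = {"tight": ["lod1", "lod2"], "scarce": ["lod2"]}

def select_candidates(hw_tier: str, sw_headroom: float, threshold: float = 0.5) -> list:
    """sw_headroom: assumed 0..1 score built from free memory and idle computing power."""
    if sw_headroom > threshold:
        # Software parameters above the performance threshold: the first association
        # (hardware parameters -> lod models) is sufficient.
        return HARDWARE_ASSOC[hw_tier]
    # Otherwise query the second association (runtime software parameters -> lod models).
    sw_tier = "tight" if sw_headroom > 0.25 else "scarce"
    return SOFTWARE_ASSOC[sw_tier]
```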
In step 102, a target level of detail model which is adapted to a second parameter of the virtual scene is determined from the plurality of level of detail models; wherein the second parameter is the other of the camera distance parameter and the graphics processing performance parameter other than the first parameter.
Taking the second parameter as the camera distance parameter as an example, after a plurality of level of detail models adapted to the graphics processing performance parameters of the virtual scene are determined, a target level of detail model adapted to the camera distance parameter of the virtual scene is determined from the plurality of level of detail models, and the virtual object in the field of view of the camera in the virtual scene is displayed according to the target level of detail model, so that the level of detail presented by the virtual object meets the actual requirement while the virtual scene is ensured to run smoothly.
The following description will be given taking the second parameter as the camera distance parameter as an example:
in some embodiments, when the second parameter is a camera distance parameter, determining a target level of detail model adapted to the second parameter of the virtual scene from the plurality of level of detail models comprises: acquiring a distance parameter configuration file, wherein the distance parameter configuration file comprises incidence relations between different camera distance parameters and different detail level models; and inquiring a distance parameter configuration file based on the camera distance parameter of the virtual scene to obtain a target detail level model matched with the camera distance parameter of the virtual scene.
For example, the associations between different camera distance parameters and different level of detail models are configured in advance and stored in the distance parameter configuration file. After the camera distance parameter of the virtual scene is obtained, these associations are queried based on the camera distance parameter of the virtual scene to find the target level of detail model adapted to it; the target level of detail model adapted to both the camera distance parameter and the graphics processing performance parameters is thus obtained, and all virtual objects in the virtual scene are displayed based on the target level of detail model.
In some embodiments, the distance parameter configuration file includes association relations between different camera distance intervals and different detail level models, and the camera distance intervals are obtained by dividing value intervals of camera distance parameters; inquiring a distance parameter configuration file based on the camera distance parameter of the virtual scene to obtain a target detail level model adaptive to the camera distance parameter of the virtual scene, wherein the target detail level model comprises the following steps: inquiring a camera distance interval included in the distance parameter configuration file based on the camera distance parameter of the virtual scene to obtain a camera distance interval corresponding to the camera distance parameter of the virtual scene; based on the camera distance interval corresponding to the camera distance of the virtual scene, the incidence relation between different camera distance intervals and different detail level models included in the distance parameter configuration file is inquired, and a target detail level model adaptive to the camera distance parameters of the virtual scene is obtained.
For example, according to the actual display requirements of the virtual objects, the value range of the camera distance parameter is divided into a plurality of camera distance intervals, and the associations between different camera distance intervals and different level of detail models are configured in advance and stored in the distance parameter configuration file. After the camera distance parameter of the virtual scene is obtained, the camera distance intervals included in the distance parameter configuration file are first queried based on the camera distance parameter of the virtual scene to find the camera distance interval corresponding to it; then, based on that camera distance interval, the associations between different camera distance intervals and different level of detail models included in the distance parameter configuration file are queried to find the target level of detail model adapted to the camera distance parameter of the virtual scene.
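The interval lookup can be pictured with the following minimal sketch (Python); the interval boundaries and lod names are illustrative assumptions, not values from the patent.

```python
import bisect

# Upper bounds of the camera distance intervals and the lod associated with each interval.
DISTANCE_BREAKS = [50.0, 150.0, 400.0]
LOD_PER_INTERVAL = ["lod0", "lod1", "lod2", "lod3"]   # one entry per interval, last is "beyond 400"

def target_lod_for_distance(camera_distance: float) -> str:
    """Find the interval containing the camera distance, then return its associated lod."""
    interval_index = bisect.bisect_left(DISTANCE_BREAKS, camera_distance)
    return LOD_PER_INTERVAL[interval_index]
```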
In step 103, virtual objects in the virtual scene that are in the field of view of the camera are displayed according to the target level of detail model.
For example, after a target detail level model adapted to a camera distance parameter and a graphic processing performance parameter is determined, all virtual objects in a virtual scene are displayed based on the target detail level model, so that the detail degree presented by the virtual objects meets the actual requirement under the condition of ensuring smooth operation of the virtual scene.
In some embodiments, when there are a plurality of level of detail models adapted to the second parameter of the virtual scene, a user portrait corresponding to the account controlling the virtual object in the virtual scene is acquired; and a level of detail preference model is called based on the plurality of level of detail models adapted to the second parameter of the virtual scene and the user portrait, so as to obtain the target level of detail model.
For example, the user portrait corresponding to the account controlling the virtual object in the virtual scene may be obtained from statistics on historical data. The level of detail preference model performs prediction processing on the plurality of level of detail models adapted to the second parameter of the virtual scene together with the user portrait, so as to obtain the level of detail model that the account is likely to prefer (namely the target level of detail model). The level of detail preference model may be a trained neural network model, which may be obtained by training on a plurality of level of detail model samples, user portrait samples, and target level of detail model samples.
In some embodiments, when there are a plurality of level of detail models adapted to the second parameter of the virtual scene, acquiring a complexity level of a virtual object in the virtual scene that is located in a field of view of the camera; and calling a detail level prediction model based on a plurality of detail level models which are adaptive to the second parameter of the virtual scene and the complexity degree to obtain a target detail level model.
For example, the level of detail prediction model is called with the plurality of level of detail models adapted to the second parameter of the virtual scene and the complexity as inputs to perform prediction processing, so as to obtain an appropriate level of detail model (i.e., the target level of detail model). The level of detail prediction model may be a trained neural network model, which may be obtained by training on a plurality of level of detail model samples, virtual object samples, and target level of detail model samples.
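Since the prediction model is only described as a trainable model, the sketch below replaces it with a stand-in heuristic purely to show the calling shape (Python); predict_lod, the 0-to-1 complexity scale, and the assumed lod ordering are all illustrative assumptions, not the patent's method.

```python
def predict_lod(candidate_lods: list, complexity: float) -> str:
    """Stand-in for the trained level of detail prediction model (illustrative only)."""
    ranked = sorted(candidate_lods)                  # assumes lod0 is the most detailed
    return ranked[0] if complexity > 0.5 else ranked[-1]

# Example: a complex object in the camera's field of view gets the richest candidate.
print(predict_lod(["lod0", "lod1", "lod2"], complexity=0.8))   # -> "lod0"
```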
In some embodiments, displaying a virtual object in a virtual scene that is in a field of view of a camera according to a target level of detail model includes: obtaining model parameters of a target detail level model, wherein the model parameters comprise model meshes and model materials; and displaying a virtual object positioned in the visual field of the camera in the virtual scene according to the model parameters.
For example, after the target level of detail model is determined, its model parameters are loaded on demand, so that the virtual object in the field of view of the camera in the virtual scene is displayed according to model parameters acquired in real time; the parameters are ready for immediate use, and the storage space that would be occupied by loading the model parameters in advance is avoided.
In some embodiments, before displaying a virtual object in a field of view of a camera in a virtual scene according to a target level of detail model, obtaining model parameters for each level of detail model, wherein the model parameters include a model mesh and a model material; storing the model parameters of each detail level model to a cache space; displaying a virtual object in a virtual scene that is in a field of view of a camera according to a target level of detail model, comprising: obtaining model parameters of a target detail level model from a cache space; and displaying a virtual object in the virtual scene, which is positioned in the visual field of the camera, according to the model parameters of the target detail level model.
For example, the model parameters of each detail level model are loaded in advance, and after the target detail level model is determined, the model parameters of the target detail level model are obtained from the cache space in time; and displaying the virtual object positioned in the visual field of the camera in the virtual scene according to the model parameters of the target detail level model, thereby saving the loading time of the model parameters when the virtual object needs to be displayed.
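A minimal sketch of this pre-caching embodiment, contrasted with the on-demand path of the previous embodiment, might look as follows (Python); load_model_parameters is a placeholder for whatever asset-loading call the engine actually provides, and is an assumption.

```python
def load_model_parameters(lod: str) -> dict:
    """Placeholder for the engine's asset-loading call (assumption)."""
    return {"mesh": f"{lod}.mesh", "material": f"{lod}.mat"}

_cache = {}

def preload(candidate_lods: list) -> None:
    """Load the mesh and material of every candidate lod into the cache space in advance."""
    for lod in candidate_lods:
        _cache[lod] = load_model_parameters(lod)

def parameters_for_display(target_lod: str) -> dict:
    """Fetch the target lod's parameters from the cache, falling back to on-demand loading."""
    params = _cache.get(target_lod)
    if params is None:
        params = load_model_parameters(target_lod)   # on-demand path
    return params
```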
In summary, the adaptive display method for the virtual scene provided in the embodiment of the present application determines the accurate and adaptive target detail level model by combining the two parameters, i.e., the camera position parameter and the graphics processing performance parameter, of the virtual scene, so that the smooth switching of the virtual object is performed based on the accurate and adaptive target detail level model, the human-computer interaction effect of the virtual scene is improved, and related communication resources and computing resources are saved.
The following describes a method for adaptive display of a virtual scene according to an embodiment of the present application with reference to the accompanying drawings. The method for adaptively displaying a virtual scene provided in the embodiment of the present application may be executed by the terminal 400 in fig. 1A alone, or may be executed by the terminal 400 and the server 200 in fig. 1B in cooperation.
Next, a description will be given taking an example in which the terminal 400 in fig. 1A alone performs the adaptive display method of the virtual scene provided in the embodiment of the present application. Referring to fig. 3C, fig. 3C is a schematic flowchart of an adaptive display method of a virtual scene provided in an embodiment of the present application, and will be described with reference to the steps shown in fig. 3C.
It should be noted that the method shown in fig. 3C may be executed by various forms of computer programs running on the terminal 400, and is not limited to the client 410 described above, but may also be the operating system 461, software modules and scripts described above, so that the client should not be considered as limiting the embodiments of the present application.
In step 201, a virtual object in the virtual scene that is in the field of view of the camera is displayed based on the first level of detail model.
For example, the first level of detail model is adapted to the current vertical distance, and virtual objects in the virtual scene that are in the field of view of the camera are displayed by the level of detail model adapted to the current vertical distance.
As shown in fig. 4B, when the current vertical distance is a medium distance, the camera is closer to the mountain 401 (i.e., the virtual object), and the details of the mountain 401 can be clearly seen.
In step 202, in response to a change in the vertical distance of the camera relative to the plane of the virtual scene, updating a virtual object in the virtual scene that is in the field of view of the camera based on the second level of detail model; wherein the level of detail of the second level of detail model is inversely related to the changed vertical distance.
For example, when the vertical distance of the camera relative to the plane of the virtual scene becomes shorter, the virtual object in the field of view of the camera in the virtual scene is updated and displayed based on the second level of detail model, and the level of detail of the updated display is higher. As shown in fig. 4A, the camera is very close to the mountain 401 and a great amount of detail of the mountain 401 can be clearly seen; compared with fig. 4B, the number of model faces of the mountain in fig. 4A is increased and the model material is richer. When the vertical distance of the camera relative to the plane of the virtual scene becomes longer, the virtual object in the field of view of the camera in the virtual scene is updated and displayed based on the second level of detail model, and the level of detail of the updated display is lower. As shown in fig. 4C, the camera is far away from the mountain 401 and only part of the detail of the mountain 401 can be seen; compared with fig. 4B, the number of model faces of the mountain in fig. 4C is reduced and the model material is also simplified.
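A minimal sketch of the inverse relation in step 202 (Python); the distance thresholds and lod names are illustrative assumptions, not values from the patent.

```python
def lod_for_vertical_distance(vertical_distance: float) -> str:
    """Pick a coarser lod as the camera's vertical distance grows (inverse relation)."""
    if vertical_distance < 50.0:
        return "lod0"   # closest view: most model faces, richest material (cf. fig. 4A)
    if vertical_distance < 150.0:
        return "lod1"   # medium view (cf. fig. 4B)
    return "lod2"       # farthest view: fewest model faces, simplified material (cf. fig. 4C)
```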
In some embodiments, the vertical distance of the camera relative to the plane of the virtual scene is one of: a vertical distance between the camera and a plane of the virtual scene; the difference in the vertical distance between the camera and the plane of the virtual scene and the height of the virtual object.
For example, when the height of a virtual object in the virtual scene is small, the height of the virtual object may be ignored when calculating the camera distance parameter, which speeds up the calculation of the camera distance parameter.
In some embodiments, prior to updating the virtual object in the field of view of the camera in the display virtual scene based on the second level of detail model, determining a plurality of level of detail models that are adapted to graphics processing performance parameters of the virtual scene; from the plurality of level of detail models, a second level of detail model is determined that fits the changed vertical distance.
For example, a plurality of level of detail models adapted to the graphics processing performance parameters of the virtual scene are determined first, a second level of detail model adapted to the changed vertical distance is then determined from them, and the virtual object in the field of view of the camera in the virtual scene is updated and displayed according to the second level of detail model. In this way, an accurate second level of detail model is selected by combining the camera distance parameter with the graphics processing performance parameters, and the details of the virtual object in the virtual scene are presented on the basis of an appropriate second level of detail model.
In some embodiments, prior to updating the virtual object in the field of view of the camera in the displayed virtual scene based on the second level of detail model, determining a plurality of level of detail models that fit the changed vertical distance; from the plurality of level of detail models, a second level of detail model is determined that is adapted to graphics processing performance parameters of the virtual scene.
For example, a plurality of level of detail models adapted to the changed vertical distance are determined first, a second level of detail model adapted to the graphics processing performance parameters of the virtual scene is then determined from them, and the virtual object in the field of view of the camera in the virtual scene is updated and displayed according to the second level of detail model. In this way, an accurate second level of detail model is likewise selected by combining the camera distance parameter with the graphics processing performance parameters, and the details of the virtual object in the virtual scene are presented on the basis of an appropriate second level of detail model.
It should be noted that the process of adapting to the graphics processing performance parameters of the virtual scene and the process of adapting to the vertical distance are similar to the adaptation processes described in the foregoing embodiments.
In summary, the adaptive display method for the virtual scene provided in the embodiment of the present application determines the adaptive detail level model quickly by using the parameter of the vertical distance between the camera and the plane of the virtual scene, so that the smooth switching of the virtual object is performed based on the adaptive detail level model, the human-computer interaction effect of the virtual scene is improved, and the related communication resources and the computing resources are saved.
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
The following description takes a game as an example of the virtual scene:
in the related art, the load solution provided by Unity loads all model meshes (mesh) and model materials at the beginning of a game, and the overhead on a memory is increased. In addition, the Lodgroup scheme also fails to solve the problem of model lod.
In order to solve the above problem, an embodiment of the present application provides an adaptive display method for a virtual scene that handles both device-model lod and distance lod: it adapts to different device models and loads models with different levels of detail (i.e., different lod layers) at different distances, so as to improve the running performance of the game. The adaptive display method provided in the embodiment of the present application can load the model mesh and the model material included in an lod only when that lod is needed and switch the display after the lod is loaded, and it can also preload them in advance. In the embodiment of the present application, when the game runs, the corresponding configuration file is determined according to the device model, and the configuration file is then queried according to the vertical distance to switch between different lods.
It should be noted that more and more SLG games use a top-down view, in which the whole game world is observed by changing the position and height of the camera. Taking a mountain in the scene as an example: as shown in fig. 4A, on a high-end machine at a low viewing angle, the camera is close to the mountain 401, and a great amount of detail of the mountain 401 can be clearly seen, that is, the mountain contains a great number of triangular meshes; as shown in fig. 4B, on a high-end machine at a medium viewing angle, the camera is still fairly close to the mountain 401, and the details of the mountain 401 can still be clearly seen; as shown in fig. 4C, on a high-end machine at a high viewing angle, the camera is far from the mountain 401 and only part of the details of the mountain 401 can be seen; compared with fig. 4A, the number of model faces of the mountain in fig. 4C is reduced and the model material is also simplified; as shown in fig. 4D, on a low-end machine at a low viewing angle, the camera is close to the mountain 401 and a lot of detail of the mountain 401 can still be clearly seen; compared with fig. 4A, the number of model faces of the mountain in fig. 4D is reduced, but to the naked eye the detail looks about as good as the effect on the high-end machine.
It should be noted that the art designer creates lods of different levels of detail for a model and names them LOD01, LOD02, LOD03, LOD04, and so on; the model mesh (mesh) and model material are named in the same way, such as xxx_01.mesh, xxx_02.mesh, xxx_03.mesh, xxx_01.mat, xxx_02.mat, xxx_03.mat, and so on. The lods are ordered from high detail to low detail and exist in the form of prefabs, and each prefab is associated with the corresponding mesh and model material.
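The naming convention can be captured in a small helper, sketched below (Python); the function name and the exact suffix format are assumptions based on the examples above.

```python
def asset_names(prefix: str, lod_index: int) -> dict:
    """Derive prefab, mesh, and material names from a shared prefix and a lod index."""
    suffix = f"{lod_index:02d}"
    return {
        "prefab":   f"{prefix}_{suffix}",          # e.g. MC_Hav01_01
        "mesh":     f"{prefix}_{suffix}.mesh",     # e.g. xxx_01.mesh
        "material": f"{prefix}_{suffix}.mat",      # e.g. xxx_01.mat
    }
```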
In addition, the embodiment of the present application provides a configuration tool, which can be used by an art designer to configure the model lod and the distance lod, for example, configuration items shown in table 1 and table 2 are provided.
TABLE 1
[Table 1 is provided as an image in the original publication; it lists the lod configuration items for the MainCity module.]
TABLE 2
[Table 2 is provided as an image in the original publication; it lists the lod configuration items for the Hill module.]
The above configuration is explained below:
1) MainCity in Table 1 and Hill in Table 2 are module names, and MC_Hav01_01 is the prefix of the prefabs inside the module, placed in the first column (the resource column) shown in Table 1; the suffix 01, 02, 03, 04 of names such as MC_Hav01_01 is the index of the lod.
2) S, A, B and C are device-model gears, which respectively represent extreme, high, medium and low; S1, S2 and S3 represent different height levels (divided according to the distance in actual use), respectively low, medium and high. Different device-model lods are configured, and the device-model lod can be flexibly expanded to N gears, where the maximum number of gears matches the maximum device-model gear of the client. As shown in fig. 5, the device-model lod is expanded to an S gear 501, an A gear 502, a B gear 503 and a C gear 504.
3) As shown in Table 1, there are three distance lod items under S1 (namely lod1, lod2 and lod3) and one distance lod item under each of S2 and S3. As shown in fig. 6, "+" 601 is used to add a distance lod item and "x" 602 is used to delete one; the "resource" column is the prefab prefix of the lod, and the "distance value" column is the maximum display distance of the object from the camera. Beyond this distance the object is not displayed, and if there is a next layer, the display automatically switches to the lod of that layer. The same lod may be reused across different entries. A minimal sketch of the resulting configuration structure is given below.
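The following sketch shows one plausible shape of such a module entry (Python); every field name, prefab prefix and distance value is an assumption made for illustration, since the original tables are only available as images.

```python
# Assumed structure of one module entry produced by the configuration tool:
# one block per device-model gear, and within each height level a list of
# (resource prefix, maximum display distance) pairs, i.e. the distance lod items.
HILL_CONFIG = {
    "module": "Hill",
    "gears": {
        "S": {                                   # extreme gear
            "S1": [("Hill_01_01", 60.0),         # lod1: shown while the distance is <= 60
                   ("Hill_01_02", 180.0),        # lod2
                   ("Hill_01_03", 500.0)],       # lod3: beyond 500 nothing is shown
            "S2": [("Hill_01_02", 500.0)],
            "S3": [("Hill_01_03", 500.0)],
        },
        # the "A", "B" and "C" gears are configured analogously
    },
}
```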
It should be noted that, because the game uses a top-down view, the actual distance of the camera is reduced to the vertical distance between the object (i.e., the virtual object in the game) and the camera, which reduces the computing overhead of the central processing unit. Of course, for non-top-down games, the vertical distance can simply be replaced with the actual distance, at the cost of slightly more computation and code complexity.
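A minimal sketch of this simplified distance computation (Python); the negligible-height cutoff is an assumed value.

```python
def camera_distance(camera_vertical_height: float, object_height: float,
                    negligible_height: float = 1.0) -> float:
    """Reduce the actual camera-to-object distance to a vertical difference."""
    if object_height <= negligible_height:
        return camera_vertical_height            # small objects: ignore their height entirely
    return camera_vertical_height - object_height
```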
As shown in fig. 7, after the art designer completes the configuration, clicking the "save LOD configuration" button 701 saves the LOD configuration and generates two configuration files: one holds the art designer's configuration tool information, and the other is the lod configuration file, divided by device-model gear, that is provided to the client.
Regarding the art designer's configuration tool information: as shown in fig. 8, the configuration items are stored in configuration files divided by module, so that art designers can edit different modules at the same time and each module is easy to look up.
Regarding the gear configuration (LodDeviceCfg): different files are divided according to the device-model gears, namely LodDeviceCfg-S, LodDeviceCfg-A, LodDeviceCfg-B and LodDeviceCfg-C.
When the game runs, the corresponding configuration file is selected according to the device model, and different lods are switched according to the vertical distance. The specific flow is shown in fig. 9:
and step 11, judging the model after the game is started.
And step 12, reading the lod configuration of the corresponding model.
And step 13, loading the model configuration of the corresponding land parcel in the visual field after the camera moves. For example, when a camera is moved to a particular location, the corresponding model configuration is loaded based on the model information stored in the parcel.
And step 14, reading the lod configuration according to the model ID to acquire a corresponding configuration item.
And step 15, loading the model of the corresponding layer according to the current height of the camera.
And step 16, judging whether the model needs to be switched or not when the height of the camera (namely the vertical distance between the object and the camera) changes, and switching lod when the height is not in the current height range.
For example, when the level corresponding to the camera height is greater than or equal to the current level, the model of the previous layer is loaded and displayed and the current layer is hidden; when the level corresponding to the camera height is less than the current level and less than or equal to the next level, the model of the next layer is loaded and displayed and the current layer is hidden.
Because loading is asynchronous, after loading finishes it is necessary to judge again whether the current vertical distance is still within the distance range of the loaded lod; if it is not, switching must continue. A minimal sketch of this switching check is given below.
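The sketch illustrates steps 15 and 16 in simplified, synchronous form (Python); the data layout and function names are assumptions, and the asynchronous loading plus re-check described above is only indicated by a comment.

```python
from typing import Optional

def lod_for_height(height_ranges: dict, camera_height: float) -> Optional[str]:
    """height_ranges: lod layer name -> (minimum height, maximum height)."""
    for lod, (low, high) in height_ranges.items():
        if low <= camera_height < high:
            return lod
    return None  # outside every configured range: the object is not displayed

def on_camera_height_changed(current_lod: Optional[str], camera_height: float,
                             height_ranges: dict) -> Optional[str]:
    target = lod_for_height(height_ranges, camera_height)
    if target == current_lod:
        return current_lod  # step 16: the height is still within the current range
    # In the real flow the target lod is loaded asynchronously, and after loading
    # the height is checked again before switching (see the note above).
    return target
```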
Note that, when attribute information (transform) such as coordinates, scaling, and rotation differs for the same model, the corresponding model can be acquired from the lod system, and the transform can be processed as follows.
a. If the object is not suitable for GPU instancing (gpu instance) by the graphics processor, the prefab needs to be instantiated and the transform information of the instance is modified;
b. if the object is suitable for GPU instancing, instantiation is not required, and the transform information can be converted into the object's Matrix4x4 for processing.
In summary, the adaptive display method for the virtual scene provided by the embodiment of the application has the following beneficial effects:
1) The distance between the object model (the model corresponding to the object) and the camera is calculated simply: the actual distance is reduced to the vertical height of the camera minus the vertical height of the object model, and when the object model is small its height can be ignored. The distance calculation of the embodiment of the present application is therefore simple and efficient, and is suitable for most SLG games.
2) According to the method and the device, the model mesh and the model material included by the lod are loaded when the lod is needed, and seamless switching is performed after the lod is loaded, so that the memory overhead is reduced.
3) The embodiment of the present application can handle both device-model lod and distance lod, both of which can be conveniently expanded, so the actual project requirements are better met.
So far, the adaptive display method of the virtual scene provided in the embodiment of the present application has been described in conjunction with the exemplary application and implementation of the terminal provided in the embodiment of the present application, and the following continues to describe an adaptive display scheme for realizing the virtual scene by matching each module in the adaptive display device 465 of the virtual scene provided in the embodiment of the present application.
A first determining module 4651, configured to determine a plurality of level of detail models adapted to first parameters of the virtual scene; wherein the first parameter is any one of a camera position parameter and a graphic processing performance parameter; a second determining module 4652, configured to determine a target level of detail model adapted to a second parameter of the virtual scene from the plurality of level of detail models; wherein the second parameter is the other of the camera distance parameter and the graphics processing performance parameter, excluding the first parameter; a first display module 4653, configured to display a virtual object in the virtual scene in the field of view of the camera according to the target level of detail model.
In some embodiments, when the first parameter is the graphics processing performance parameter, the first determining module 4651 is further configured to obtain a performance parameter profile, where the performance parameter profile includes association relationships between different graphics processing performance parameters and different level of detail models; and to query the performance parameter configuration file based on the graphics processing performance parameters of the virtual scene to obtain a plurality of detail level models adapted to the graphics processing performance parameters of the virtual scene.
In some embodiments, the performance parameter profile comprises a model configuration table for different virtual objects, the model configuration table comprising associations of different ones of the graphics processing performance parameters to different level of detail models for the virtual objects; the first determining module 4651 is further configured to query the performance parameter configuration file based on a virtual object in the virtual scene to obtain a model configuration table for the virtual object; and inquiring a model configuration table aiming at the virtual object based on the graphic processing performance parameters of the virtual scene to obtain a plurality of detail level models which are matched with the graphic processing performance parameters of the virtual scene and the virtual object.
In some embodiments, the performance parameter profile includes associations of different ones of the graphics processing performance parameters to different level of detail models for different virtual objects; the first determining module 4651 is further configured to query the performance parameter configuration file based on a virtual object in the virtual scene and a graphics processing performance parameter of the virtual scene, so as to obtain a plurality of detail level models adapted to the graphics processing performance parameter of the virtual scene and the virtual object.
In some embodiments, the graphics processing performance parameters comprise graphics processing hardware parameters, the performance parameter profile comprising associations of different ones of the graphics processing hardware parameters to different level of detail models; the first determining module 4651 is further configured to query the performance parameter configuration file based on the graphics processing hardware parameters of the virtual scene before the virtual scene ends running, so as to obtain a plurality of detail level models adapted to the graphics processing hardware parameters of the virtual scene; the graphics processing hardware parameters are inquired according to the model of the electronic equipment displaying the virtual scene, and include at least one of the following: processor model, memory capacity.
In some embodiments, the graphics processing performance parameters comprise graphics processing software parameters, the performance parameter profile comprising associations of different ones of the graphics processing software parameters to different level of detail models; the first determining module 4651 is further configured to query the performance parameter configuration file based on the graphics processing software parameters during the virtual scene runtime, so as to obtain a plurality of detail level models adapted to the graphics processing software parameters during the virtual scene runtime; wherein the graphics processing software parameters include at least one of: memory free capacity, processor free computing power.
In some embodiments, the graphics processing performance parameters include graphics processing software parameters and graphics processing hardware parameters, the performance parameter profile includes first associations of different ones of the graphics processing hardware parameters to different levels of detail models, and second associations of different ones of the graphics processing software parameters to different levels of detail models; the first determining module 4651 is further configured to, when the graphics processing software parameter of the virtual scene is greater than the performance threshold, query a first association included in the performance parameter configuration file based on the graphics processing hardware parameter of the virtual scene to obtain a plurality of detail level models adapted to the graphics processing hardware parameter of the virtual scene; and when the graphics processing software parameters of the virtual scene are smaller than or equal to the performance threshold, query a second association relation included in the performance parameter configuration file based on the graphics processing software parameters during the virtual scene operation to obtain a plurality of detail level models adapted to the graphics processing software parameters during the virtual scene operation.
In some embodiments, when the second parameter is the camera distance parameter, the second determining module 4652 is further configured to obtain a distance parameter profile, where the distance parameter profile includes association relationships between different camera distance parameters and different level of detail models; and inquiring the distance parameter configuration file based on the camera distance parameter of the virtual scene to obtain a target detail level model adaptive to the camera distance parameter of the virtual scene.
In some embodiments, the distance parameter configuration file includes association relationships between different camera distance intervals and different detail level models, and the camera distance intervals are obtained by dividing value intervals of the camera distance parameters; the second determining module 4652 is further configured to query, based on the camera distance parameter of the virtual scene, a camera distance interval included in the distance parameter configuration file, so as to obtain a camera distance interval corresponding to the camera distance parameter of the virtual scene; and inquiring the incidence relation between different camera distance intervals and different detail level models included in the distance parameter configuration file based on the camera distance interval corresponding to the camera distance of the virtual scene to obtain a target detail level model adaptive to the camera distance parameter of the virtual scene.
In some embodiments, the camera distance parameter is one of: a distance between a camera of the virtual scene and a virtual object in a field of view of the camera; a vertical distance between the camera and a plane of the virtual scene; a difference in a vertical distance between the camera and a plane of the virtual scene and a height of the virtual object.
In some embodiments, the first display module 4653 is further configured to obtain model parameters of the target level of detail model, wherein the model parameters include a model mesh and a model material; and displaying a virtual object in the virtual scene, which is positioned in the visual field of the camera, according to the model parameters.
In some embodiments, before displaying the virtual object in the field of view of the camera in the virtual scene according to the target level of detail model, the second determining module 4652 is further configured to obtain model parameters of each of the level of detail models, wherein the model parameters include a model mesh and a model material; storing the model parameters of each detail level model into a cache space; obtaining model parameters of the target detail level model from the cache space; and displaying a virtual object in the visual field of the camera in the virtual scene according to the model parameters of the target detail level model.
In some embodiments, when there are multiple level of detail models adapted to the second parameter of the virtual scene, the second determining module 4652 is further configured to obtain a user representation corresponding to an account controlling a virtual object in the virtual scene; and obtaining the target detail level model based on a plurality of detail level models which are adaptive to the second parameter of the virtual scene and a detail level preference model which is called by the user portrait.
In some embodiments, when there are multiple level of detail models adapted to the second parameter of the virtual scene, the second determining module 4652 is further configured to obtain the complexity of a virtual object in the virtual scene that is in the field of view of the camera; and calling a detail level prediction model based on a plurality of detail level models which are adaptive to the second parameter of the virtual scene and the complexity degree to obtain the target detail level model.
A second display module 4654 configured to display a virtual object in the virtual scene that is in the field of view of the camera based on the first level of detail model; a third display module 4655, configured to update and display a virtual object in the virtual scene that is in the field of view of the camera based on the second level of detail model in response to a change in the vertical distance of the camera relative to the plane of the virtual scene; wherein the level of detail of the second level of detail model is inversely related to the vertical distance after the change.
In some embodiments, the vertical distance of the camera relative to the plane of the virtual scene is one of: a vertical distance between the camera and a plane of the virtual scene; a difference in a vertical distance between the camera and a plane of the virtual scene and a height of the virtual object.
In some embodiments, prior to said updating displaying a virtual object in the virtual scene that is in the field of view of the camera based on the second level of detail model, the third display module 4655 is further configured to determine a plurality of level of detail models that are adapted to graphics processing performance parameters of the virtual scene; determining a second level of detail model from the plurality of level of detail models that fits the changed vertical distance.
In some embodiments, the third display module 4655 is further configured to determine a plurality of level of detail models adapted to the vertical distance after the change, prior to said updating displaying a virtual object in the virtual scene located in the field of view of the camera based on the second level of detail model; determining a second level of detail model from the plurality of level of detail models that is adapted to graphics processing performance parameters of the virtual scene.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the method for adaptive display of a virtual scene in the embodiment of the present application.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions, which when executed by a processor, cause the processor to perform an adaptive display method of a virtual scene provided by embodiments of the present application, for example, the adaptive display method of a virtual scene shown in fig. 3A to 3C.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (23)

1. An adaptive display method of a virtual scene, comprising:
determining a plurality of level of detail models adapted to first parameters of the virtual scene; wherein the first parameter is any one of a camera position parameter and a graphic processing performance parameter;
determining a target level of detail model adapted to a second parameter of the virtual scene from the plurality of level of detail models; wherein the second parameter is the other of the camera distance parameter and the graphics processing performance parameter, excluding the first parameter;
and displaying a virtual object in the visual field of the camera in the virtual scene according to the target detail level model.
2. The method of claim 1, wherein when the first parameter is the graphics processing performance parameter, the determining a plurality of level of detail models adapted to the first parameter of the virtual scene comprises:
acquiring a performance parameter configuration file, wherein the performance parameter configuration file comprises incidence relations between different graphic processing performance parameters and different detail level models;
and inquiring the performance parameter configuration file based on the graphics processing performance parameters of the virtual scene to obtain a plurality of detail level models adapted to the graphics processing performance parameters of the virtual scene.
3. The method of claim 2,
the performance parameter configuration file comprises a model configuration table for different virtual objects, the model configuration table comprising associations of different graphics processing performance parameters to different level of detail models for the virtual objects;
the querying the performance parameter configuration file based on the graphics processing performance parameters of the virtual scene to obtain a plurality of detail level models adapted to the graphics processing performance parameters of the virtual scene includes:
inquiring the performance parameter configuration file based on a virtual object in the virtual scene to obtain a model configuration table aiming at the virtual object;
and inquiring a model configuration table aiming at the virtual object based on the graphic processing performance parameters of the virtual scene to obtain a plurality of detail level models which are matched with the graphic processing performance parameters of the virtual scene and the virtual object.
4. The method of claim 2,
the performance parameter configuration file comprises incidence relations between different graphics processing performance parameters and different detail level models aiming at different virtual objects;
the querying the performance parameter configuration file based on the graphics processing performance parameters of the virtual scene to obtain a plurality of detail level models adapted to the graphics processing performance parameters of the virtual scene includes:
and inquiring the performance parameter configuration file based on the virtual object in the virtual scene and the graphics processing performance parameter of the virtual scene to obtain a plurality of detail level models which are matched with the graphics processing performance parameter of the virtual scene and the virtual object.
5. The method of claim 2,
the graphics processing performance parameters comprise graphics processing hardware parameters, and the performance parameter configuration file comprises incidence relations between different graphics processing hardware parameters and different detail level models;
the querying the performance parameter configuration file based on the graphics processing performance parameters of the virtual scene to obtain a plurality of detail level models adapted to the graphics processing performance parameters of the virtual scene includes:
before the virtual scene operation is finished, inquiring the performance parameter configuration file based on the graphic processing hardware parameters of the virtual scene to obtain a plurality of detail level models adapted to the graphic processing hardware parameters of the virtual scene;
the graphics processing hardware parameters are inquired according to the model of the electronic equipment displaying the virtual scene, and include at least one of the following: processor model, memory capacity.
6. The method of claim 2,
the graphics processing performance parameters comprise graphics processing software parameters, and the performance parameter configuration file comprises incidence relations between different graphics processing software parameters and different detail level models;
the querying the performance parameter configuration file based on the graphics processing performance parameters of the virtual scene to obtain a plurality of detail level models adapted to the graphics processing performance parameters of the virtual scene includes:
inquiring the performance parameter configuration file based on the graphic processing software parameters during the virtual scene operation to obtain a plurality of detail level models adapted to the graphic processing software parameters during the virtual scene operation;
wherein the graphics processing software parameters include at least one of: memory free capacity, processor free computing power.
7. The method of claim 2,
the graphics processing performance parameters comprise graphics processing software parameters and graphics processing hardware parameters, and the performance parameter configuration file comprises first incidence relations between different graphics processing hardware parameters and different detail level models and second incidence relations between different graphics processing software parameters and different detail level models;
the querying the performance parameter configuration file based on the graphics processing performance parameters of the virtual scene to obtain a plurality of detail level models adapted to the graphics processing performance parameters of the virtual scene includes:
when the graphic processing software parameters of the virtual scene are larger than a performance threshold, inquiring a first incidence relation included in the performance parameter configuration file based on the graphic processing hardware parameters of the virtual scene to obtain a plurality of detail level models adapted to the graphic processing hardware parameters of the virtual scene;
when the graphic processing software parameters of the virtual scene are smaller than or equal to the performance threshold, querying a second association relation included in the performance parameter configuration file based on the graphic processing hardware parameters during the virtual scene operation to obtain a plurality of detail level models adapted to the graphic processing software parameters during the virtual scene operation.
8. The method according to claim 1, wherein when the second parameter is the camera distance parameter, the determining a target level of detail model from the plurality of level of detail models that is adapted to the second parameter of the virtual scene comprises:
acquiring a distance parameter configuration file, wherein the distance parameter configuration file comprises incidence relations between different camera distance parameters and different detail level models;
and inquiring the distance parameter configuration file based on the camera distance parameter of the virtual scene to obtain a target detail level model adaptive to the camera distance parameter of the virtual scene.
9. The method of claim 8,
the distance parameter configuration file comprises incidence relations between different camera distance intervals and different detail level models, and the camera distance intervals are obtained by dividing value intervals of the camera distance parameters;
the querying the distance parameter configuration file based on the camera distance parameter of the virtual scene to obtain a target detail level model adapted to the camera distance parameter of the virtual scene includes:
inquiring a camera distance interval included in the distance parameter configuration file based on the camera distance parameter of the virtual scene to obtain a camera distance interval corresponding to the camera distance parameter of the virtual scene;
and inquiring the incidence relation between different camera distance intervals and different detail level models included in the distance parameter configuration file based on the camera distance interval corresponding to the camera distance of the virtual scene to obtain a target detail level model adaptive to the camera distance parameter of the virtual scene.
10. The method of claim 1,
the camera distance parameter is one of:
a distance between a camera of the virtual scene and a virtual object in a field of view of the camera;
a vertical distance between the camera and a plane of the virtual scene;
a difference between a vertical distance from the camera to the plane of the virtual scene and a height of the virtual object.
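The three distance measures listed in claim 10 reduce to simple vector arithmetic; the Python sketch below uses an illustrative Y-up coordinate convention and hypothetical function names.

    # Hypothetical sketch of the three camera distance measures.
    import math

    def camera_to_object_distance(camera_pos, object_pos):
        # Straight-line distance between the camera and the virtual object.
        return math.dist(camera_pos, object_pos)

    def vertical_distance_to_plane(camera_pos, plane_height=0.0):
        # Height of the camera above the scene plane (Y-up convention assumed).
        return camera_pos[1] - plane_height

    def vertical_distance_minus_object_height(camera_pos, object_height, plane_height=0.0):
        return vertical_distance_to_plane(camera_pos, plane_height) - object_height

    camera = (0.0, 30.0, 0.0)
    obj = (4.0, 2.0, 3.0)
    print(camera_to_object_distance(camera, obj))              # ~28.44
    print(vertical_distance_to_plane(camera))                  # 30.0
    print(vertical_distance_minus_object_height(camera, 2.0))  # 28.0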
11. The method of claim 1, wherein displaying virtual objects in the virtual scene in the field of view of a camera according to the target level of detail model comprises:
obtaining model parameters of the target detail level model, wherein the model parameters comprise model meshes and model materials;
and displaying, according to the model parameters, a virtual object in the virtual scene that is in the field of view of the camera.
12. The method of claim 1,
before the displaying, according to the target level of detail model, a virtual object in the virtual scene that is in a field of view of a camera, the method further comprises:
obtaining model parameters of each detail level model, wherein the model parameters comprise model meshes and model materials;
storing the model parameters of each detail level model into a cache space;
the displaying a virtual object in the virtual scene in the field of view of the camera according to the target level of detail model includes:
obtaining model parameters of the target detail level model from the cache space;
and displaying, according to the model parameters of the target detail level model, a virtual object in the virtual scene that is in the field of view of the camera.
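Claim 12's pre-loading of model parameters into a cache space can be sketched as follows in Python; the asset-loading call and data layout are placeholders, not the patent's implementation.

    # Hypothetical sketch: warm a cache with mesh and material parameters for
    # every detail level model, so switching the displayed LOD reads from the
    # cache instead of loading from disk.
    from dataclasses import dataclass

    @dataclass
    class ModelParameters:
        mesh: str       # stand-in for mesh data
        material: str   # stand-in for material data

    def load_from_disk(lod_name):
        # Placeholder for an engine asset load.
        return ModelParameters(mesh=f"{lod_name}_mesh", material=f"{lod_name}_material")

    class LodCache:
        def __init__(self, lod_names):
            # Warm the cache up front for all detail levels of the object.
            self._cache = {name: load_from_disk(name) for name in lod_names}

        def get(self, lod_name):
            return self._cache[lod_name]   # no disk access at switch time

    cache = LodCache(["LOD0", "LOD1", "LOD2", "LOD3"])
    params = cache.get("LOD1")
    print(params.mesh, params.material)    # LOD1_mesh LOD1_material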
13. The method according to claim 1, wherein when there are a plurality of level of detail models adapted to the second parameter of the virtual scene, the method further comprises:
acquiring a user profile corresponding to an account that controls a virtual object in the virtual scene;
and calling a detail level preference model based on the plurality of detail level models adapted to the second parameter of the virtual scene and the user profile, to obtain the target detail level model.
14. The method according to claim 1, wherein when there are a plurality of level of detail models adapted to the second parameter of the virtual scene, the method further comprises:
acquiring a complexity of a virtual object in the virtual scene that is in the field of view of the camera;
and calling a detail level prediction model based on the plurality of detail level models adapted to the second parameter of the virtual scene and the complexity, to obtain the target detail level model.
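When several candidate models remain, claims 13 and 14 call a preference or prediction model to break the tie; the Python sketch below substitutes a simple hand-written rule for that model, purely to illustrate the data flow (all names and thresholds are hypothetical).

    # Hypothetical sketch: choose one target LOD from several candidates using
    # object complexity and a user preference flag as stand-ins for the
    # preference/prediction model of claims 13-14.
    def pick_target_lod(candidate_lods, object_complexity, prefers_detail=True):
        # candidate_lods is ordered from most to least detailed, e.g. ["LOD0", "LOD1", "LOD2"].
        # object_complexity is normalised to [0, 1]; very complex objects are
        # demoted one level unless the user profile prefers detail.
        if prefers_detail or object_complexity < 0.7:
            return candidate_lods[0]
        return candidate_lods[min(1, len(candidate_lods) - 1)]

    print(pick_target_lod(["LOD0", "LOD1", "LOD2"], 0.9, prefers_detail=False))  # LOD1
    print(pick_target_lod(["LOD0", "LOD1", "LOD2"], 0.9, prefers_detail=True))   # LOD0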
15. An adaptive display method of a virtual scene, comprising:
displaying a virtual object in the virtual scene that is in a field of view of a camera based on a first level of detail model;
updating a display of a virtual object in the virtual scene that is in a field of view of the camera based on a second level of detail model in response to a change in a vertical distance of the camera relative to a plane of the virtual scene;
wherein the level of detail of the second level of detail model is inversely related to the vertical distance after the change.
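A small Python sketch of the negative correlation in claim 15 follows, switching the displayed model only when the vertical distance crosses an interval boundary; the breakpoints and class names are illustrative assumptions.

    # Hypothetical sketch: detail level drops as the camera's vertical distance grows.
    def lod_for_vertical_distance(distance):
        if distance < 15.0:
            return "LOD0"       # closest -> most detail
        if distance < 40.0:
            return "LOD1"
        return "LOD2"           # farthest -> least detail

    class VirtualObjectView:
        def __init__(self, initial_distance):
            self.current_lod = lod_for_vertical_distance(initial_distance)

        def on_vertical_distance_changed(self, new_distance):
            new_lod = lod_for_vertical_distance(new_distance)
            if new_lod != self.current_lod:   # re-display only when the LOD actually changes
                self.current_lod = new_lod
                print(f"switch display to {new_lod}")

    view = VirtualObjectView(initial_distance=10.0)   # starts at LOD0
    view.on_vertical_distance_changed(50.0)           # prints: switch display to LOD2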
16. The method of claim 15, wherein the vertical distance of the camera relative to the plane of the virtual scene is one of:
a vertical distance between the camera and a plane of the virtual scene;
a difference in a vertical distance between the camera and a plane of the virtual scene and a height of the virtual object.
17. The method of claim 15, wherein prior to said updating the display of the virtual object in the field of view of the camera in the virtual scene based on the second level of detail model, the method further comprises:
determining a plurality of level of detail models adapted to graphics processing performance parameters of the virtual scene;
determining, from the plurality of level of detail models, a second level of detail model adapted to the changed vertical distance.
18. The method of claim 15, wherein prior to said updating the display of the virtual object in the field of view of the camera in the virtual scene based on the second level of detail model, the method further comprises:
determining a plurality of level of detail models adapted to the changed vertical distance;
determining a second level of detail model from the plurality of level of detail models that is adapted to graphics processing performance parameters of the virtual scene.
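Claims 17 and 18 apply the same two filters in either order; the Python sketch below shows both orders arriving at the same target model under illustrative filter rules (the predicates and tier names are assumptions, not the patent's criteria).

    # Hypothetical sketch: filter candidate LODs by performance then distance
    # (claim 17 order) or by distance then performance (claim 18 order).
    ALL_LODS = ["LOD0", "LOD1", "LOD2", "LOD3"]

    def fits_performance(lod, gpu_tier):
        # Low-end GPUs are only offered the two coarsest models.
        return gpu_tier == "high" or lod in ("LOD2", "LOD3")

    def fits_distance(lod, vertical_distance):
        # Near cameras accept any LOD; far cameras accept only LOD2 or coarser.
        min_required = "LOD0" if vertical_distance < 20.0 else "LOD2"
        return lod >= min_required    # string comparison works with the LODn naming

    def performance_then_distance(gpu_tier, distance):    # claim 17 order
        candidates = [l for l in ALL_LODS if fits_performance(l, gpu_tier)]
        return next(l for l in candidates if fits_distance(l, distance))

    def distance_then_performance(gpu_tier, distance):    # claim 18 order
        candidates = [l for l in ALL_LODS if fits_distance(l, distance)]
        return next(l for l in candidates if fits_performance(l, gpu_tier))

    print(performance_then_distance("low", 10.0))   # LOD2
    print(distance_then_performance("low", 10.0))   # LOD2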
19. An apparatus for adaptive display of a virtual scene, the apparatus comprising:
a first determining module, configured to determine a plurality of level of detail models adapted to a first parameter of the virtual scene; wherein the first parameter is any one of a camera distance parameter and a graphics processing performance parameter;
a second determining module, configured to determine, from the multiple level of detail models, a target level of detail model adapted to a second parameter of the virtual scene; wherein the second parameter is the other of the camera distance parameter and the graphics processing performance parameter, excluding the first parameter;
and a first display module, configured to display, according to the target detail level model, a virtual object in the virtual scene that is in the field of view of the camera.
20. An apparatus for adaptive display of a virtual scene, the apparatus comprising:
a second display module for displaying a virtual object in the virtual scene in the field of view of the camera based on the first level of detail model;
a third display module, configured to update a display of a virtual object in the virtual scene that is in the field of view of the camera based on a second level of detail model, in response to a change in a vertical distance of the camera relative to a plane of the virtual scene;
wherein the level of detail of the second level of detail model is inversely related to the vertical distance after the change.
21. An electronic device, characterized in that the electronic device comprises:
a memory for storing executable instructions;
a processor, configured to execute the executable instructions stored in the memory to implement the method for adaptive display of a virtual scene according to any one of claims 1 to 18.
22. A computer-readable storage medium storing executable instructions for implementing a method for adaptive display of a virtual scene according to any one of claims 1 to 18 when executed by a processor.
23. A computer program product comprising a computer program or instructions, characterized in that the computer program or instructions, when executed by a processor, implement the method for adaptive display of a virtual scene according to any one of claims 1 to 18.
CN202111671860.4A 2021-10-18 2021-12-31 Adaptive display method, device, equipment, medium and program product for virtual scene Active CN114219924B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111209465.4A CN113902881A (en) 2021-10-18 2021-10-18 Method, apparatus, device, medium, and program product for adaptive display of virtual scene
CN2021112094654 2021-10-18

Publications (2)

Publication Number Publication Date
CN114219924A true CN114219924A (en) 2022-03-22
CN114219924B CN114219924B (en) 2023-06-13

Family

ID=79192410

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202111209465.4A Withdrawn CN113902881A (en) 2021-10-18 2021-10-18 Method, apparatus, device, medium, and program product for adaptive display of virtual scene
CN202111671860.4A Active CN114219924B (en) 2021-10-18 2021-12-31 Adaptive display method, device, equipment, medium and program product for virtual scene

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202111209465.4A Withdrawn CN113902881A (en) 2021-10-18 2021-10-18 Method, apparatus, device, medium, and program product for adaptive display of virtual scene

Country Status (1)

Country Link
CN (2) CN113902881A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101996242A (en) * 2010-11-02 2011-03-30 江西师范大学 Three-dimensional R-tree index expansion structure-based three-dimensional city model adaptive method
CN105631925A (en) * 2015-12-29 2016-06-01 北京航天测控技术有限公司 Three-dimensional scene generation method based on OSG three-dimensional rendering engine preprocessing and device thereof
DE102016116582A1 (en) * 2016-09-05 2018-03-08 BBIT-Solutions UG (haftungsbeschränkt) Method and apparatus for displaying augmented reality
CN109045694A (en) * 2018-08-17 2018-12-21 腾讯科技(深圳)有限公司 Virtual scene display method, apparatus, terminal and storage medium
CN112052097A (en) * 2020-10-15 2020-12-08 腾讯科技(深圳)有限公司 Rendering resource processing method, device and equipment for virtual scene and storage medium
CN112370783A (en) * 2020-12-02 2021-02-19 网易(杭州)网络有限公司 Virtual object rendering method and device, computer equipment and storage medium
CN112843735A (en) * 2020-12-31 2021-05-28 上海米哈游天命科技有限公司 Game picture shooting method, device, equipment and storage medium
CN112370784A (en) * 2021-01-15 2021-02-19 腾讯科技(深圳)有限公司 Virtual scene display method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIANG TAO; LI FANG: "A Stereoscopic Parallax Generation Algorithm in a Virtual Reality Scene That Takes the Viewpoint-Related LOD Quadtree into Account", 2018 International Conference on Smart Grid and Electrical Automation (ICSGEA) *
官巍; 蔡晓琳; 陈海: "Application of Level of Detail Technology in Scene Modeling" (细节层次技术在场景建模中的应用), Journal of System Simulation (系统仿真学报), no. 2 *

Also Published As

Publication number Publication date
CN113902881A (en) 2022-01-07
CN114219924B (en) 2023-06-13

Similar Documents

Publication Publication Date Title
CN111659115B (en) Virtual role control method and device, computer equipment and storage medium
CN112691377B (en) Control method and device of virtual role, electronic equipment and storage medium
CN112569599B (en) Control method and device for virtual object in virtual scene and electronic equipment
TWI818343B (en) Method of presenting virtual scene, device, electrical equipment, storage medium, and computer program product
US20240037839A1 (en) Image rendering
CN112669194B (en) Animation processing method, device, equipment and storage medium in virtual scene
CN114067042A (en) Image rendering method, device, equipment, storage medium and program product
WO2023005522A1 (en) Virtual skill control method and apparatus, device, storage medium, and program product
CN112711458A (en) Method and device for displaying prop resources in virtual scene
TW202217541A (en) Location adjusting method, device, equipment, storage medium, and program product for virtual buttons
CN114359458A (en) Image rendering method, device, equipment, storage medium and program product
US8992330B1 (en) System and method for facilitating data model substitutions for pre-existing data objects
CN114344896A (en) Virtual scene-based snap-shot processing method, device, equipment and storage medium
CN114219924B (en) Adaptive display method, device, equipment, medium and program product for virtual scene
CN113313796B (en) Scene generation method, device, computer equipment and storage medium
CN114130022A (en) Method, apparatus, device, medium, and program product for displaying screen of virtual scene
CN114146414A (en) Virtual skill control method, device, equipment, storage medium and program product
CN114425159A (en) Motion processing method, device and equipment in virtual scene and storage medium
CN114210057B (en) Method, device, equipment, medium and program product for picking up and processing virtual prop
CN114247132B (en) Control processing method, device, equipment, medium and program product for virtual object
CN116271830B (en) Behavior control method, device, equipment and storage medium for virtual game object
WO2024032104A1 (en) Data processing method and apparatus in virtual scene, and device, storage medium and program product
WO2024037139A1 (en) Method and apparatus for prompting information in virtual scene, electronic device, storage medium, and program product
CN114887325B (en) Data processing method, display method, device and storage medium
WO2024032176A1 (en) Virtual item processing method and apparatus, electronic device, storage medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant