CN116785703A - Information display method and device and computing equipment - Google Patents

Information display method and device and computing equipment

Info

Publication number
CN116785703A
CN116785703A (application CN202310820250.9A)
Authority
CN
China
Prior art keywords
information
display
displayed
display area
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310820250.9A
Other languages
Chinese (zh)
Inventor
施润丰
梁波
王汝豫
杨林
杜晓荣
梁延研
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Kingsoft Digital Network Technology Co Ltd
Original Assignee
Zhuhai Kingsoft Digital Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Kingsoft Digital Network Technology Co Ltd filed Critical Zhuhai Kingsoft Digital Network Technology Co Ltd
Priority to CN202310820250.9A
Publication of CN116785703A
Pending legal-status Critical Current

Links

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application provides an information display method and device and a computing device. The method includes: obtaining information to be displayed that is associated with a target object model, model parameters of the target object model, and first position information of a virtual camera, wherein the target object model is displayed in a display area and the virtual camera corresponds to a scene displayed in the display area; constructing a reference plane based on region information of the display area and the first position information; determining a display position of the information to be displayed in the display area based on the model parameters and the reference plane; and displaying the information to be displayed based on the display position. The information display method can improve the information display effect.

Description

Information display method and device and computing equipment
Technical Field
The application relates to the technical field of electronics, in particular to an information display method. The application also relates to an information display device, a computing device and a computer readable storage medium.
Background
With the development of electronic technology, three-dimensional game applications have been developed in large numbers.
The terminal may run a three-dimensional game application to display a game scene in which the user manipulates a game character. The user can also make the game character interact with objects in the game scene, and such interaction can trigger the display of corresponding interaction information around the objects.
However, in the related art, there are cases where the interaction information cannot be displayed in the game scene, so the information display effect in the game scene of a three-dimensional application is poor.
Disclosure of Invention
In view of this, the embodiment of the application provides an information display method, which can improve the information display effect. The embodiment of the application also provides an information display device, a computing device and a computer readable storage medium.
According to an aspect of an embodiment of the present application, there is provided an information display method including:
obtaining information to be displayed that is associated with a target object model, model parameters of the target object model, and first position information of a virtual camera, wherein the target object model is displayed in a display area, and the virtual camera corresponds to a scene displayed in the display area;
constructing a reference plane based on the region information of the display region and the first position information;
determining a display position of the information to be displayed in the display area based on the model parameters and the reference plane;
and displaying the information to be displayed based on the display position.
Optionally, constructing a reference plane based on the region information of the display region and the first position information includes:
determining an orientation vector of the virtual camera relative to the display area based on the area information of the display area and the first position information;
determining a region vector of the display region based on the region information of the display region;
and constructing the reference plane according to the orientation vector and the area vector.
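The plane construction in the steps above can be sketched as follows. This is a hedged illustration, not the patent's prescribed implementation: the reference plane is represented as a normal plus an anchor point, with the normal taken as the cross product of the orientation vector and the area vector (so the plane contains both directions); the function names are hypothetical.

```python
def cross(u, v):
    """Cross product of two 3D vectors given as (x, y, z) tuples."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def build_reference_plane(orientation_vec, area_vec, anchor_point):
    """Plane spanned by the orientation vector and the area vector,
    passing through anchor_point; returned as (normal, point)."""
    return cross(orientation_vec, area_vec), anchor_point
```

Because the normal is perpendicular to both input vectors, the resulting plane contains both the camera's orientation direction and the display-area direction, which is one plausible reading of "according to the orientation vector and the area vector".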
Optionally, determining an orientation vector of the virtual camera relative to the display area based on the area information of the display area and the first position information includes:
determining second position information of each endpoint of the display area based on the area information of the display area;
determining third position information of the center point of the display area according to the second position information of each endpoint;
and determining an orientation vector of the virtual camera relative to the display area based on the first position information and the third position information.
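As a concrete illustration of these sub-steps, the sketch below takes the center point of the display area (the third position information) as the centroid of its endpoints and the orientation vector as the direction from the camera position (the first position information) to that center. The tuple representation and function names are assumptions; the patent does not fix them.

```python
def center_of(points):
    """Third position information: centroid of the display area's endpoints."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def orientation_vector(camera_pos, endpoints):
    """Vector from the virtual camera toward the center point of the display area."""
    cx, cy, cz = center_of(endpoints)
    ox, oy, oz = camera_pos
    return (cx - ox, cy - oy, cz - oz)
```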
Optionally, determining the area vector of the display area based on the area information of the display area includes:
determining two target edges of the display area based on the area information;
determining fourth position information of reference points on each target edge;
and determining the area vector of the display area according to the fourth position information of each reference point.
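One way to realize these sub-steps (a hedged sketch, since the patent leaves the choice of reference points open) is to take the midpoint of each target edge as its reference point (the fourth position information) and use the vector between the two midpoints as the area vector:

```python
def midpoint(p, q):
    """Midpoint of two 3D points, used here as an edge's reference point."""
    return tuple((a + b) / 2 for a, b in zip(p, q))

def area_vector(edge_a, edge_b):
    """Area vector from the reference point of edge_a to that of edge_b.
    Each edge is a (start, end) pair of 3D points; choosing midpoints as
    reference points is one possible choice, not mandated by the patent."""
    ra = midpoint(*edge_a)
    rb = midpoint(*edge_b)
    return tuple(b - a for a, b in zip(ra, rb))
```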
Optionally, determining the area vector of the display area based on the area information of the display area includes:
determining an auxiliary line intersecting a target direction in the display area based on the area information and the target direction, wherein the target direction is a designated direction, relative to the target object model, in which the information to be displayed is placed when displayed;
and determining the area vector of the display area based on the auxiliary line.
Optionally, determining the display position of the information to be displayed in the display area based on the model parameters and the reference plane includes:
determining an intersection point of the target object model and the reference plane in a target direction based on the model parameters, wherein the target direction is a designated direction, relative to the target object model, in which the information to be displayed is placed when displayed;
and determining the display position of the information to be displayed in the display area based on the intersection point.
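The intersection step can be illustrated as a standard ray-plane intersection, where the ray starts at a point of the target object model and runs along the target direction (e.g. the model's "up" direction). This is a generic sketch under those assumptions, not the patent's mandated formula:

```python
def ray_plane_intersection(origin, direction, plane_normal, plane_point):
    """Intersection of the ray origin + t * direction (t >= 0 implied)
    with the plane through plane_point with normal plane_normal.
    Returns None when the ray is parallel to the plane."""
    denom = sum(n * d for n, d in zip(plane_normal, direction))
    if abs(denom) < 1e-9:
        return None
    diff = tuple(p - o for p, o in zip(plane_point, origin))
    t = sum(n * d for n, d in zip(plane_normal, diff)) / denom
    return tuple(o + t * d for o, d in zip(origin, direction))
```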
Optionally, determining the display position of the information to be displayed in the display area based on the intersection point includes:
mapping the intersection point to the display area to obtain a mapping point in the display area;
and determining the display position based on the mapping point.
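Mapping a 3D intersection point into the display area can be illustrated with a pinhole projection onto the near cross-section, assuming camera space with the camera at the origin looking down +z. Real engines would use a full view-projection matrix; this simplified form is an assumption for illustration:

```python
def project_to_display(point, near):
    """Project a camera-space point onto the near plane z = near.
    Returns 2D display-plane coordinates, or None for points at or
    behind the camera, which cannot be projected this way."""
    x, y, z = point
    if z <= 0:
        return None
    return (x * near / z, y * near / z)
```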
Optionally, determining the display position based on the mapping point includes:
shifting the mapping point to a position outside the mapping area of the target object model in the display area;
and determining a display position that contains the shifted mapping point.
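The shifting step can be sketched as nudging the mapped 2D point out of the rectangle the target object model occupies on screen, so the displayed information does not overlap the model. The choice to push the point upward, and the axis-aligned rectangle representation, are assumptions for illustration:

```python
def shift_outside(mapped_point, model_rect):
    """Move a 2D mapped point above the model's mapped rectangle
    (x_min, y_min, x_max, y_max) when it falls inside; points already
    outside are returned unchanged. Pushing upward is one possible
    choice; the patent only requires the point to end up outside."""
    x, y = mapped_point
    x_min, y_min, x_max, y_max = model_rect
    if x_min <= x <= x_max and y_min <= y <= y_max:
        return (x, y_max)  # place the point on the top edge of the rectangle
    return (x, y)
```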
Optionally, before constructing the reference plane based on the region information of the display region and the first position information, the method further includes:
determining a default display position of the information to be displayed; wherein the default display position and the target object model satisfy a set relative position relationship;
constructing a reference plane based on the region information of the display region and the first position information, including:
and constructing the reference plane based on the region information of the display region and the first position information when the mapping position of the default display position with respect to the display region is located outside the display region.
Optionally, after determining the default display position of the information to be displayed, the method further includes:
and displaying the information to be displayed based on the mapping position when the mapping position of the default display position with respect to the display area is located within the display area.
According to another aspect of the embodiments of the present application, there is provided an information display apparatus including:
an acquisition module configured to acquire information to be displayed that is associated with a target object model, model parameters of the target object model, and first position information of a virtual camera, wherein the target object model is displayed in a display area, and the virtual camera corresponds to a scene displayed in the display area;
a construction module, configured to construct a reference plane based on the region information of the display region and the first position information;
a determining module configured to determine a display position of the information to be displayed in the display area based on the model parameters and the reference plane;
and a display module configured to display the information to be displayed based on the display position.
According to yet another aspect of an embodiment of the present application, there is provided a computing device including: a memory and a processor; the memory is used for storing computer executable instructions, and the processor implements the steps in the above method when executing the computer executable instructions.
According to yet another aspect of embodiments of the present application, there is provided a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the steps in the above-described method.
According to a further aspect of embodiments of the present application, there is provided a chip storing a computer program which, when executed, performs the steps of the above method.
The information display method provided by the application has at least the following beneficial effects:
in the application, when the target object model in the scene displayed in the display area is associated with information to be displayed, a reference plane can be constructed based on the region information of the display area and the position information of the virtual camera corresponding to the scene. Then, based on the model parameters of the target object model and the reference plane, a display position in the display area is determined for the information to be displayed. In this way, the information associated with the target object model is guaranteed to be displayed within the display area, the problem that information the user needs cannot be displayed normally is avoided, and the information display effect of the scene shown in the display area is improved.
Drawings
FIG. 1 is a schematic diagram of an information display system according to an embodiment of the present application;
FIG. 2 is a flowchart of an information display method according to an embodiment of the present application;
FIG. 3 is a flowchart of another information display method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of information acquisition performed by a camera according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a display interface of a display area according to an embodiment of the application;
FIG. 6 is a flow chart of a method for constructing a reference plane according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a display interface of another display area according to an embodiment of the application;
fig. 8 is a schematic structural diagram of an information display device according to an embodiment of the present application;
FIG. 9 is a block diagram of a computing device according to one embodiment of the application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The present application may be embodied in many other forms than those herein described, and those skilled in the art will readily appreciate that the present application may be similarly embodied without departing from the spirit or essential characteristics thereof, and therefore the present application is not limited to the specific embodiments disclosed below.
The terminology used in the one or more embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the application. As used in one or more embodiments of the application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any or all possible combinations of one or more of the associated listed items. The term "at least one" in embodiments of the present application refers to "one or more" and "a plurality" refers to "two or more". The term "comprising" is an open description and should be understood as "including but not limited to" and may include other content in addition to what has been described.
It should be understood that although the terms "first," "second," etc. may be used in one or more embodiments of the application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of one or more embodiments of the present description, "first" may also be referred to as "second" and, similarly, "second" may also be referred to as "first". The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
With the development of electronic technology, the functions of the terminal are more and more powerful, and the screen of the terminal can realize the presentation of various scenes (such as two-dimensional scenes and three-dimensional scenes). There is also an increasing demand for presentation effects of scenes and display effects of various information in scenes.
In the related art, when a user interacts with an object in a scene displayed on a screen, display of prompt information associated with the object is triggered so that the user can learn the interaction result. Typically, the prompt information is displayed at a designated position around the object. If the object is located near the edge of the scene, the designated position may fall outside the scene, so the prompt information associated with the object cannot be displayed on the terminal's screen. As a result, the information display effect in the scene is poor.
For example, in a three-dimensional game scene, the object may be a non-player character (NPC). When the user controls a player character to fight the NPC, the damage value of the NPC can be displayed above the NPC; this damage value is prompt information associated with the NPC. The scene displayed on the screen may correspond to a virtual camera, and the scene may be considered to be presented according to the picture captured by the virtual camera. The user may adjust the viewing angle of the virtual camera (e.g., zoom or rotate its lens) to adjust how the scene is displayed on the screen. When the user adjusts the viewing angle of the virtual camera, or when the NPC model is particularly large, the damage value of the NPC may fail to be displayed, so the user cannot accurately follow the fight, which degrades the user's game experience.
The embodiment of the application provides an information display method, which can ensure that information related to an object can be displayed on a screen in a scene displayed on the screen, and improve the information display effect. The embodiment of the application also relates to an information display device, a computing device and a computer readable storage medium.
Fig. 1 is a schematic structural diagram of an information display system according to an embodiment of the application. As shown in fig. 1, the information display system is a terminal, which may be a device with a display function such as a smartphone, desktop computer, or tablet computer. The terminal includes a front end, which may include a display screen, and a back end, which may include a processor.
In the embodiment of the application, the display area of the terminal can display a scene, and the scene can be regarded as the scene captured by the virtual camera. The user can operate the terminal to adjust the display view angle of the scene, and can also control a character in the scene to interact with a target object in the scene, so that the terminal adjusts the displayed picture accordingly.
Optionally, a communication connection may be established between the terminal and a server side. The server side may be a single server or a server cluster composed of a plurality of servers. For example, an application (such as a game application) may be installed on the terminal, and the server side may support the running of the application. The terminal communicates with the server side while the application runs, sending instructions to it and receiving data from it. An instruction may be triggered by a user operation on the terminal, and the terminal can update its display based on the data sent by the server side. Alternatively, the terminal may interact with the server side through a web page instead of an installed application. When the user operates the terminal, the terminal generates a corresponding instruction and sends it to the server side. The server side can generate information based on the instruction so that the terminal can adjust the displayed picture based on that information.
Fig. 2 is a flowchart of an information display method according to an embodiment of the present application, and the information display system shown in fig. 1 may implement the information display method. As shown in fig. 2, the method may include:
step 201, obtaining information to be displayed associated with a target object model, model parameters of the target object model, and first position information of a virtual camera, wherein the target object model is displayed in a display area, and the virtual camera corresponds to a scene displayed in the display area.
Illustratively, step 201 may be performed in response to a user operation on the terminal. The operation may be an operation in which a user manipulates a character in a scene displayed by the terminal to interact with a target object in the scene.
Step 203, constructing a reference plane based on the area information of the display area and the first position information.
Step 205, determining a display position of the information to be displayed in the display area based on the model parameters and the reference plane.
Step 207, displaying the information to be displayed based on the display position.
Steps 201 to 207 may be performed by the back end in the information display system shown in fig. 1. In step 207, the back end may control the front end to display the information to be displayed based on the display position. Alternatively, the back end may send the information of the display position to the front end after step 205, and step 207 may be performed by the front end, that is, the front end may display the information to be displayed based on the information of the display position.
In summary, in the information display method provided by the embodiment of the present application, when information to be displayed is associated with the target object model in the scene displayed in the display area, a reference plane may be constructed based on the region information of the display area and the position information of the virtual camera corresponding to the scene. Then, based on the model parameters of the target object model and the reference plane, a display position in the display area is determined for the information to be displayed. In this way, the information associated with the target object model is guaranteed to be displayed within the display area, the problem that information the user needs cannot be displayed normally is avoided, and the information display effect of the scene shown in the display area is improved.
Fig. 3 is a flowchart of another information display method according to an embodiment of the present application, which may be used in the information display system shown in fig. 1, for example, in a back end of the information display system. As shown in fig. 3, the method includes:
step 301, obtaining information to be displayed associated with a target object model, model parameters of the target object model, and first position information of a virtual camera; the target object model is displayed in the display area, and the virtual camera corresponds to a scene displayed in the display area.
The scene in the embodiment of the present application may be a three-dimensional scene, a two-dimensional scene, or another kind of scene; the following description takes a three-dimensional scene as an example. The target object model is the constructed model of an object in the scene. For example, the target object may be a person, an animal, a plant, a vehicle, or another object. Model parameters of the target object model may include shape information, appearance information, function information, and other parameters. In the embodiment of the application, the target object model can have thickness parameters in all directions.
In the embodiment of the present application, the display area refers to an area of the display screen of the terminal, where the scene is displayed, and the display area may be all or part of the display screen. The scene corresponds to a virtual camera whose position is a hypothetical position outside the display area. The scene may be considered to be presented in accordance with the captured view of the virtual camera, the position of the virtual camera determining the specific display of the scene on the display screen. The target object model is located in the scene, and the position of the virtual camera can determine the specific display condition of the target object model. The position of the virtual camera in the embodiment of the present application may refer to a point, where the point is the position of the optical center of the virtual camera.
Fig. 4 is a schematic diagram of information acquisition performed by a camera according to an embodiment of the application. Point O represents the position of the virtual camera, and the frustum with points A, B, C, D, A', B', C', D' as vertices represents the view frustum of the virtual camera. The region inside the frustum is the visual range of the virtual camera, and the content inside it can be rendered for display in the display area. For example, the content inside the frustum may be the three-dimensional scene to be displayed in the display area in the embodiment of the application. The position of the virtual camera determines the view frustum, and therefore the displayable content of the three-dimensional scene.
The view frustum has a near cross-section and a far cross-section parallel to each other; in fig. 4, the rectangle enclosed by points A', B', C', D' represents the near cross-section, and the rectangle enclosed by points A, B, C, D represents the far cross-section. The near cross-section may correspond to the display area of the terminal, and the picture displayed in the display area may be obtained by mapping the content inside the frustum onto the near cross-section, that is, by projecting points inside the frustum onto the near cross-section. The embodiment of the application takes a quadrilateral frustum as an example, so the near and far cross-sections are quadrilaterals; in some alternative cases the frustum may have another shape, and the near and far cross-sections may be pentagonal, hexagonal, or other shapes.
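The relationship between the frustum and the displayable content can be illustrated with a rough containment test for a symmetric frustum looking down +z: a point is visible when it lies between the near and far cross-sections and its projection onto the near cross-section falls inside that rectangle. The parameterization below is an assumption for illustration, not the patent's formulation:

```python
def in_frustum(point, near, far, half_w, half_h):
    """Rough visibility test for a symmetric frustum along +z.
    half_w and half_h are the half-extents of the near cross-section."""
    x, y, z = point
    if not (near <= z <= far):
        return False  # in front of the near or beyond the far cross-section
    # Pinhole projection of the point onto the near cross-section.
    px, py = x * near / z, y * near / z
    return abs(px) <= half_w and abs(py) <= half_h
```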
To distinguish it from other position information, the position information of the virtual camera is referred to as first position information in the embodiment of the present application. Each piece of position information described in the embodiments of the present application may be a position coordinate in a target coordinate system, which may be the coordinate system on which position coordinates in the scene (e.g., a three-dimensional scene) displayed in the display area are based. The first position information is determined in the same coordinate system as the position information in the three-dimensional scene. Optionally, the target coordinate system may be the camera coordinate system corresponding to the virtual camera.
For the information display system shown in fig. 1, the front end may transmit operation information of a user to the back end in real time. The back end may determine location information of the virtual camera based on the received operation information, determine information of a scene to be displayed in the display area at the location based on the location information of the virtual camera, and transmit the information to the front end. The front end may adjust the displayed scene based on this information. Alternatively, the process of determining the position information of the virtual camera and determining the information of the scene to be displayed based on the position information may also be performed by the server connected to the terminal, which is not limited in the embodiment of the present application.
In the embodiment of the application, the information to be displayed associated with the target object model includes information to be displayed based on the position of the target object model in the scene. The information to be displayed may include at least one of text information and pattern information. Text information may include words, numerals, symbols, and the like; pattern information may include pictures, emoticons, and the like. For convenience of description, the information to be displayed associated with the target object model is hereinafter simply referred to as the information to be displayed of the target object model.
For example, the scene in the embodiment of the present application is a three-dimensional game scene. An application program of a three-dimensional game can be installed in the terminal, and the basic object models in the application program (such as models of characters, buildings, and terrain) can be implemented as three-dimensional models. The terminal may display a three-dimensional scene while running the application, and the three-dimensional scene may include at least one object model. The target object model in the embodiment of the application can be an NPC model in the three-dimensional scene, and its associated information to be displayed can be damage value prompt information of the NPC, or dialogue information between the NPC and a player character. The target object model may also be a player character model, a prop model, or a building model.
Step 303, determining a default display position of the information to be displayed; the default display position and the target object model meet the set relative position relation.
For the information to be displayed associated with the target object model, a default display position can be determined based on the model parameters of the target object model and a set relative positional relationship. The default display position refers to a position in the scene described above. For example, the back end may determine the position information of the target object model in the scene based on its model parameters, and then determine the default display position based on that position information and the relative positional relationship.
Illustratively, the relative positional relationship includes being outside the target object model at a set distance from the target object in a set direction. For example, if the target object model is a character model, the relative positional relationship may be: two unit distances above the top of the character model; in this example, the set direction is the direction from the feet to the head of the character model. Alternatively, the relative positional relationship may also be a relationship located at a specified position in the target object model, or another positional relationship, which is not limited by the embodiment of the present application.
Alternatively, the default display position may be a point or an area. If it is an area, the default display position may also be related to the content of the information to be displayed; for example, the size of the default display position may be positively correlated with the amount of content of the information to be displayed.
Step 305, determining whether the mapping position of the default display position with respect to the display area is located outside the display area. In case the mapping position is located within the display area, step 307 is performed; in case the mapping position is outside the display area, step 309 is performed.
After determining the default display position of the information to be displayed, the back end can map the default display position to the display area to obtain a mapping position, and then determine whether the mapping position is located in the display area. If the mapping position is within the display area, it can be determined that the information to be displayed associated with the target object model can be displayed normally within the display area based on the mapping position, and step 307 may be performed. If the mapping position is outside the display area, it can be determined that the information cannot be displayed in the display area based on the mapping position, and step 309 may be performed to determine the display position of the information to be displayed in another way.
Alternatively, in step 305, it may also be determined directly whether the default display position is within the view frustum, without mapping the default display position. In the case that the default display position is within the view frustum, step 307 is performed; in the case that the default display position is outside the view frustum, step 309 is performed.
Step 307, based on the mapping position, controlling the front end to display the information to be displayed associated with the target object model.
In the case that the information to be displayed associated with the target object model can be displayed normally in the display area based on the mapping position, the back end can directly control the front end to display the information to be displayed based on the mapping position. For example, the back end may determine the mapping position as the display position of the information to be displayed and transmit the information of the display position to the front end; the front end then displays the information to be displayed at that display position based on the received information.
Fig. 5 is a schematic diagram of a display interface of a display area according to an embodiment of the application. As shown in fig. 5, the target object model is an NPC model, the information to be displayed is damage value prompt information in combat, and the relative positional relationship satisfied by the default display position and the NPC model is: two unit distances above the top of the NPC's head. As shown in fig. 5, the mapping position of the default display position in the display area is area A, and the information to be displayed is "damage value +100".
In an alternative implementation, steps 303 to 307 may be skipped, and steps 309 to 315 used to determine a display position in the display area for any target object model, so that the information to be displayed can always be presented normally to the user in the display area.
Step 309, constructing a reference plane based on the region information of the display region and the first position information of the virtual camera.
In the embodiment of the application, the area information of the display area can indicate the position of the display area. The region information of the display region may include position coordinates of a region covered by the display region. Alternatively, the position coordinates may be coordinates in the above-described target coordinate system, in other words, the region information of the display region, the first position information, and the position information in the three-dimensional scene are determined based on the same coordinate system. The first position information of the virtual camera indicates a position of the virtual camera, such as a position of an optical center of the virtual camera. The reference plane constructed based on the region information and the first position information may also be represented based on coordinates in the target coordinate system.
FIG. 6 is a flowchart of a method for constructing a reference plane according to an embodiment of the present application, and the back end may implement step 309 using the method provided in FIG. 6. As shown in fig. 6, step 309 includes:
Step 3091, determining an orientation vector of the virtual camera relative to the display area based on the area information of the display area and the first position information of the virtual camera.
The orientation vector of the virtual camera with respect to the display area is used to indicate the orientation of the virtual camera, i.e. the information gathering direction of the virtual camera. For example, the back end may determine a center point of the display area based on the area information of the display area, and further determine the orientation vector based on the center point and the first position information of the virtual camera. The orientation vector takes the position of the virtual camera as an origin and passes through the center point of the display area. Alternatively, it is also possible to set that the orientation vector passes through other points than the center point of the display area, without passing through the center point.
First, the back end may determine position information of each endpoint of the display area based on the area information of the display area. The position information indicates the position of the endpoint; for example, it includes the coordinates of the endpoint. For ease of distinction, the position information of an endpoint is referred to as second position information in the embodiment of the present application. Referring to fig. 4, the display area corresponds to the near cross-section of the view frustum, and the back end may determine second position information of each endpoint A', B', C', D' of the display area.
Second, the back end may determine third position information of the center point of the display area according to the second position information of each endpoint. The third position information indicates the position of the center point; for example, it includes the coordinates of the center point. For example, the back end may determine the intersection of the line connecting A' and C' with the line connecting B' and D' as the center point E of the display area. For a display area of another shape, the center point may be the centroid of the display area.
Then, the back end determines the orientation vector of the virtual camera relative to the display area based on the first position information and the third position information. The back end determines, as the orientation vector, a vector that takes the point indicated by the first position information as its origin and passes through the point indicated by the third position information. For example, with continued reference to fig. 4, the orientation vector determined by the back end may be the vector from the camera position to the center point E.
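The two determinations above (center point E from the endpoints, then the orientation vector from the camera position through E) can be sketched as follows; the function names and the averaging shortcut for the diagonal intersection are illustrative assumptions, not taken from the embodiment:

```python
def center_of_display_area(a, b, c, d):
    # Third position information: center point E, computed as the average of
    # the four endpoints A', B', C', D' (for a rectangular display area this
    # coincides with the intersection of diagonals A'C' and B'D').
    return tuple(sum(p[i] for p in (a, b, c, d)) / 4.0 for i in range(3))

def orientation_vector(camera_pos, center):
    # Vector taking the virtual camera position (first position information)
    # as origin and passing through the center point E (third position
    # information).
    return tuple(e - o for o, e in zip(camera_pos, center))
```

With the camera at the origin and a rectangular near cross-section centered on the z-axis, the orientation vector points straight down the camera axis through E.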
Step 3092, determining a region vector of the display region based on the region information of the display region.
Wherein the region vector corresponds to a line segment in the display region.
In an alternative implementation, the back end may determine two target edges of the display area based on the area information of the display area, then determine fourth position information of a reference point on each target edge, and determine the area vector of the display area according to the fourth position information of each reference point. The fourth position information of a reference point indicates the position of the reference point; for example, it includes the coordinates of the reference point.
For example, the two target edges may be a pair of opposite edges of the display area, and the reference point on a target edge may be its midpoint. With continued reference to fig. 4, the two target edges may be the edges A'B' and C'D': the reference point on edge A'B' is its midpoint F', and the reference point on edge C'D' is its midpoint G'. The back end can determine the vector connecting the reference points as the area vector of the display area; for example, the area vector of the display area can be the vector F'G'. Alternatively, the two target edges need not be opposite edges; for example, they may be adjacent edges. The reference point need not be the midpoint of the target edge; for example, it may lie at one third or one quarter of the target edge. The embodiment of the present application does not limit this.
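Under the particular choice described here (target edges A'B' and C'D', reference points at their midpoints), the area vector F'G' can be sketched as follows; the function names are illustrative assumptions:

```python
def midpoint(p, q):
    # Fourth position information: the reference point as the midpoint of an edge.
    return tuple((pi + qi) / 2.0 for pi, qi in zip(p, q))

def area_vector(a, b, c, d):
    # F' is the midpoint of edge A'B', G' the midpoint of edge C'D';
    # the area vector of the display area is the vector from F' to G'.
    f = midpoint(a, b)
    g = midpoint(c, d)
    return tuple(gi - fi for fi, gi in zip(f, g))
```

For a rectangle whose edges A'B' and C'D' are the left and right vertical edges, the resulting vector runs through the center of the display area.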
In another optional implementation, the back end may further acquire a target direction, where the target direction is a designated direction of the target object model when the information to be displayed is displayed. The back end determines an auxiliary line intersecting the target direction in the display area based on the area information of the display area and the target direction, and then determines the area vector of the display area based on the auxiliary line.
Illustratively, the target direction is the vertical direction in the three-dimensional scene, which may be parallel to edge A'B' in fig. 4; for example, the target direction is the direction from B' to A'. The back end can determine the line segment F'G', which is perpendicular to the target direction, as the auxiliary line, and then determine the vector F'G' corresponding to the auxiliary line as the area vector of the display area. Alternatively, the back end may determine a portion of the line segment F'G' as the auxiliary line; the auxiliary line need not be perpendicular to the target direction, and the target direction may be another direction. Alternatively, the back end may select a line segment that does not pass through the center point of the display area as the auxiliary line.
Alternatively, the target direction may be determined based on an object arrangement direction set in the three-dimensional scene. The target direction may intersect the object arrangement direction; for example, it may be perpendicular to it. For example, if a plurality of objects exist in the three-dimensional scene and are arranged in a direction parallel to A'D', the target direction may be parallel to A'B'.
It should be noted that the determination of the orientation vector in step 3091 and the determination of the area vector in step 3092 may be performed in parallel, and the back end may determine the orientation vector and the area vector based on a set constraint condition. The constraint condition is used to require that the orientation vector intersect the area vector.
Illustratively, the constraint condition on which the above examples are based may include: the orientation vector intersects the area vector at the center point of the display area. Therefore, in step 3091, the third position information of the center point of the display area needs to be determined to ensure that the obtained orientation vector passes through the center point; correspondingly, in step 3092 the target edges are two opposite edges and the reference points are their midpoints, ensuring that the obtained area vector also passes through the center point. If the constraint condition is adjusted, the position information of a point other than the center point of the display area may need to be determined in step 3091, and the target edges and reference points determined in step 3092 adjusted accordingly, which is not described in detail in the embodiment of the present application.
Step 3093, constructing a reference plane according to the orientation vector and the region vector.
The back end may take the plane in which the orientation vector and the area vector lie as the reference plane. For example, with continued reference to fig. 4, the back end may take the plane M in which the orientation vector and the area vector F'G' lie as the reference plane.
Optionally, the back end may also offset the auxiliary plane in which the orientation vector and the area vector lie by a certain distance along a certain direction, and use the offset auxiliary plane as the reference plane; for example, the auxiliary plane can be shifted along the target direction (see the related description of the target direction above, which is not repeated here). Alternatively, the back end may rotate the auxiliary plane by a certain angle about a certain direction and use the rotated auxiliary plane as the reference plane.
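Putting steps 3091 to 3093 together, one possible sketch represents the reference plane by its normal (the cross product of the orientation vector and the area vector) and an anchor point such as the center point E; the (normal, d) representation and the function names are assumptions for illustration:

```python
def cross(u, v):
    # Normal of the plane spanned by vectors u and v.
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def reference_plane(anchor, orientation_vec, area_vec):
    # Plane through `anchor` containing both vectors, returned as (normal, d)
    # with every plane point x satisfying dot(normal, x) == d.
    n = cross(orientation_vec, area_vec)
    d = sum(ni * ai for ni, ai in zip(n, anchor))
    return n, d
```

With the orientation vector pointing down the camera axis and a vertical area vector, the resulting plane is the vertical plane containing both, matching plane M in the example of fig. 4.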
Step 311, determining an intersection point of the target object model and the reference plane in the target direction based on the model parameters of the target object model.
The target direction is a designated direction of the target object model when the information to be displayed is displayed, and reference is made to the above description about the target direction, which is not repeated herein.
The reference plane constructed by the back end necessarily includes a target portion located in the scene. Since the target object model is located in the scene displayed in the display area, the back end determining the intersection point of the target object model and the reference plane in the target direction amounts to determining the intersection point of the target object model and the target portion in the target direction.
In the first case, the target object model directly intersects the reference plane. The target object model may have thickness in all directions, so directly intersecting the reference plane may yield a plurality of intersection points. The back end may perform the subsequent steps directly based on the plurality of intersection points, or may determine at least one intersection point among them and perform the subsequent steps based on that at least one intersection point. For example, with continued reference to fig. 4, if the target object model is object X1 in the view frustum, the target object model may directly intersect the reference plane M.
In the second case, the target object model does not directly intersect the reference plane. In this case, the back end may extend the entire target object model along the target direction and determine a plurality of intersection points of the extended target object model with the reference plane. The back end may perform the subsequent steps based on all or part of these intersection points.
Alternatively, the back end may determine a reference line passing through the target object model and parallel to the target direction, and determine the intersection point of the reference line and the reference plane to perform the subsequent steps based on that intersection point. The reference line may pass through the center of the target object model, or through another location of the target object model. For example, referring to fig. 4, if the target object model is object X2 in the view frustum, the back end may determine a reference line L for object X2 and then determine the intersection point of the reference line L and the reference plane M.
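The reference-line variant amounts to a standard line-plane intersection. A sketch, assuming the plane is given as (normal, d) with plane points x satisfying dot(normal, x) == d:

```python
def line_plane_intersection(p0, direction, normal, d, eps=1e-9):
    # Reference line: p0 + t * direction, with `direction` parallel to the
    # target direction; plane: dot(normal, x) == d.
    denom = sum(n * di for n, di in zip(normal, direction))
    if abs(denom) < eps:
        return None  # line parallel to the plane: no single intersection
    t = (d - sum(n * p for n, p in zip(normal, p0))) / denom
    return tuple(p + t * di for p, di in zip(p0, direction))
```

Returning `None` for the parallel case mirrors the situation where the reference line never meets the reference plane and another intersection strategy would be needed.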
After determining the intersection point of the target object model and the reference plane in the target direction, the back end can determine the display position of the information to be displayed based on the intersection point. The display position may be determined based on the following steps 313 and 315, which are described taking a single intersection point as an example.
Step 313, mapping the intersection point to a display area to obtain a mapping point in the display area.
In both step 309 and step 311, the back end performs calculations on information in the scene displayed in the display area. If the scene is a three-dimensional scene, the position information involved includes coordinate information in a three-dimensional coordinate system. Because the scene is ultimately displayed in the display area, which may be a two-dimensional display surface, the back end needs to map the information obtained in step 311 to the display area in order to determine the actual display position of the information to be displayed.
The back end may map the intersection point of the target object model obtained in step 311 and the reference plane in the target direction to the display area to obtain a mapping point in the display area. The mapping process may be a projection process from information in a three-dimensional coordinate system to a two-dimensional coordinate system.
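This mapping can be sketched as a simple pinhole projection from camera-space coordinates to pixel coordinates in the display area; the focal length and axis conventions here are illustrative assumptions — a real engine would typically apply its projection and viewport transforms instead:

```python
def project_to_display_area(point_cam, focal, width, height):
    # Project a camera-space point (z pointing into the scene) onto the
    # display area, with the pixel origin at the top-left corner.
    x, y, z = point_cam
    u = focal * x / z + width / 2.0
    v = -focal * y / z + height / 2.0  # flip y: screen y grows downward
    return (u, v)
```

A point on the camera axis maps to the center of the display area, which is consistent with an orientation vector constructed through the center point E.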
Step 315, determining the display position of the information to be displayed in the display area based on the mapping point.
In one example, the back end determines a position containing the mapping point as the display position of the information to be displayed. For example, the back end determines an area of a certain size centered on the mapping point as the display position of the information to be displayed. Alternatively, the size of the area may be adjusted according to the content of the information to be displayed.
In another example, the back end shifts the mapping point outside the mapping region of the target object model in the display area, and determines a position containing the shifted mapping point as the display position of the information to be displayed. In this way, the display position can be located outside the target object model as far as possible, reducing overlap between the displayed information and the target object model and ensuring a good display effect for the information to be displayed.
For example, the back end may shift the mapping point in a certain direction by a certain distance, which may be a preset value. For another example, the back end can also shift the mapping point by a certain angle in a certain direction, taking a specified position on the target object model as the center of rotation. Alternatively, the direction may be selected randomly by the back end.
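The first kind of shift described here can be sketched as a 2D offset by a preset distance in a chosen direction; the angle convention and the function name are assumptions for illustration:

```python
import math

def shift_mapping_point(point, angle_deg, distance):
    # Shift the 2D mapping point `distance` pixels in direction `angle_deg`
    # (0 degrees pointing along +x, measured counter-clockwise).
    rad = math.radians(angle_deg)
    return (point[0] + distance * math.cos(rad),
            point[1] + distance * math.sin(rad))
```

In practice the shift would be repeated or enlarged until the result lies outside the mapping region of the target object model.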
Optionally, the back end may determine whether prompt information of the target object model is displayed at the position of the mapping point. The prompt information and the information to be displayed can be the same type of information, such as damage value information. In the case that prompt information is displayed at the mapping point, the mapping point is shifted, for example to outside the area where the prompt information is located. In this way, superimposed display of different pieces of information can be avoided, and the display clarity of each piece of information ensured.
Step 317, based on the display position, the front end is controlled to display the information to be displayed of the target object model in the display area.
After determining the display position of the information to be displayed in the display area, the terminal can display the information to be displayed based on the display position: the back end may control the front end to display the information to be displayed directly at the display position in the display area. Fig. 7 is a schematic view of a display interface of another display area according to an embodiment of the present application. As shown in fig. 7, for object X1 in the first case of step 311, at least part of the information displayed by the terminal may overlap with the target object model. For object X2 in the second case of step 311, the information displayed by the terminal may be located outside object X2.
Optionally, the back end may further perform a certain calculation based on the display position to determine another position different from the display position, and then control the front end to display the information to be displayed at the other position.
In the embodiment of the application, the display position is determined by constructing the reference plane and determining the intersection point of the target object model and the reference plane in the target direction. Because the intersection point can be mapped into the display area, the display position determined based on the intersection point is highly likely to lie within the display area, so that the information to be displayed of the target object model can largely be presented to the user, improving the information display effect.
Alternatively, the target direction may be determined based on the object arrangement direction set in the scene. In this way, the obtained intersection point can lie outside other objects as far as possible, ensuring that the information displayed at the display position determined based on the intersection point remains strongly associated with the target object model, so that information display remains well targeted to its object.
In summary, in the information display method provided by the embodiment of the present application, when information to be displayed is associated with the target object model in the scene displayed in the display area, the reference plane may be constructed based on the display information of the display area and the position information of the virtual camera corresponding to the scene. Then, based on the model parameters of the target object model and the reference plane, a display position in the display area is determined for display of the information to be displayed. In this way, the information to be displayed associated with the target object model can be ensured to be displayed in the display area, the problem that the information to be displayed by a user cannot be displayed normally is avoided, and the information display effect in the scene displayed in the display area is improved.
Corresponding to the method embodiment, the present application further provides an embodiment of an information display device, and fig. 8 is a schematic structural diagram of an information display device according to an embodiment of the present application. As shown in fig. 8, the apparatus includes:
The obtaining module 801 is configured to obtain information to be displayed associated with a target object model, model parameters of the target object model, and first position information of a virtual camera, where the target object model is displayed in a display area, and the virtual camera corresponds to a scene displayed in the display area.
A construction module 802 is configured to construct a reference plane based on the region information and the first position information of the display region.
A first determining module 803, configured to determine a display position of information to be displayed in the display area based on the model parameter and the reference plane.
The first display module 804 is configured to display information to be displayed based on the display position.
In summary, in the information display device provided by the embodiment of the present application, when information to be displayed is associated with the target object model in the scene displayed in the display area, the reference plane may be constructed based on the display information of the display area and the position information of the virtual camera corresponding to the scene. Then, based on the model parameters of the target object model and the reference plane, a display position in the display area is determined for display of the information to be displayed. In this way, the information to be displayed associated with the target object model can be ensured to be displayed in the display area, the problem that the information to be displayed by a user cannot be displayed normally is avoided, and the information display effect in the scene displayed in the display area is improved.
Optionally, the building module 802 may include:
the first determining sub-module is used for determining an orientation vector of the virtual camera relative to the display area based on the area information of the display area and the first position information.
And the second determining submodule is used for determining the area vector of the display area based on the area information of the display area.
And the construction submodule is used for constructing a reference plane according to the orientation vector and the region vector.
Optionally, the first determining submodule is configured to:
determining second position information of each endpoint of the display area based on the area information of the display area;
determining third position information of a center point of the display area according to the second position information of each endpoint;
an orientation vector of the virtual camera relative to the display area is determined based on the first location information and the third location information.
Optionally, the second determining submodule is configured to:
determining two target edges of the display area based on the area information;
determining fourth position information of reference points on each target edge;
and determining the area vector of the display area according to the fourth position information of each reference point.
Optionally, the second determining submodule is configured to:
determining an auxiliary line intersecting a target direction in the display area based on the area information and the target direction, wherein the target direction is a designated direction of the target object model when the information to be displayed is displayed;
based on the auxiliary line, a region vector of the display region is determined.
Optionally, the first determining module 803 includes:
the third determining submodule is used for determining an intersection point of the target object model and the reference plane in the target direction based on the model parameters, wherein the target direction is a designated direction of the target object model when the information to be displayed is displayed;
and the fourth determination submodule is used for determining the display position of the information to be displayed in the display area based on the intersection point.
Optionally, the fourth determining submodule is configured to:
mapping the intersection points to a display area to obtain mapping points in the display area;
the display position is determined based on the mapping points.
Optionally, the fourth determining submodule is configured to:
shifting the mapping points to the position outside the mapping area of the target object model in the display area;
a display position including the shifted mapping points is determined.
Optionally, the information display device further includes:
the second determining module is used for determining a default display position of the information to be displayed; the default display position and the target object model meet the set relative position relation;
the construction module 802 is configured to:
when the mapping position of the default display position for the display area is located outside the display area, a reference plane is constructed based on the area information of the display area and the first position information.
Optionally, the information display device further includes:
and the second display module is used for displaying the information to be displayed based on the mapping position when the mapping position of the default display position for the display area is positioned in the display area.
The above is a schematic scheme of an information display device of the present embodiment. It should be noted that the technical solution of the information display device and the technical solution of the information display method belong to the same concept; for details of the technical solution of the information display device not described in detail, reference can be made to the description of the technical solution of the information display method. Furthermore, the components in the apparatus embodiments should be understood as functional modules established to implement the steps of the program flow or the method, not as actual functional divisions or separate physical structures. Device claims defined by such a set of functional modules should be understood as a functional module architecture implementing the solution primarily by means of the computer program described in the specification, not as a physical device implementing the solution primarily by means of hardware.
FIG. 9 is a block diagram of a computing device according to one embodiment of the application. The components of computing device 900 include, but are not limited to, memory 910 and processor 920. Processor 920 is coupled to memory 910 via bus 930 with database 950 configured to hold data.
Computing device 900 also includes an access device 940, access device 940 enabling computing device 900 to communicate via one or more networks 960. Examples of such networks include public switched telephone networks (PSTN, public Switched Telephone Network), local area networks (LAN, local Area Network), wide area networks (WAN, wide Area Network), personal area networks (PAN, personal Area Network), or combinations of communication networks such as the internet. Access device 940 may include one or more of any type of network interface, wired or wireless, such as a network interface card (NIC, network Interface Controller), such as an IEEE802.11 wireless local area network (WLAN, wireless Local Area Networks) wireless interface, a worldwide interoperability for microwave access (Wi-MAX, worldwide Interoperability for Microwave Access) interface, an ethernet interface, a universal serial bus (USB, universal Serial Bus) interface, a cellular network interface, a bluetooth interface, a near field communication (NFC, near Field Communication) interface, and so forth.
In one embodiment of the application, the above-described components of computing device 900, as well as other components not shown in FIG. 9, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device illustrated in FIG. 9 is for exemplary purposes only and is not intended to limit the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 900 may be any type of stationary or mobile computing device including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smart phone), wearable computing device (e.g., smart watch, smart glasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 900 may also be a mobile or stationary server.
Wherein the processor 920 is configured to execute computer-executable instructions of the information display method.
The foregoing is a schematic illustration of a computing device of this embodiment. It should be noted that, the technical solution of the computing device and the technical solution of the information display method belong to the same concept, and details of the technical solution of the computing device, which are not described in detail, can be referred to the description of the technical solution of the information display method.
An embodiment of the present application also provides a computer readable storage medium storing computer instructions that, when executed by a processor, are operable to implement the above-described information display method.
The above is an exemplary version of a computer-readable storage medium of the present embodiment. It should be noted that, the technical solution of the storage medium and the technical solution of the information display method belong to the same concept, and details of the technical solution of the storage medium, which are not described in detail, can be referred to the description of the technical solution of the information display method.
The computer instructions include computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content included in the computer-readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer-readable storage medium does not include electrical carrier signals and telecommunications signals.
An embodiment of the present application further provides a chip storing a computer program that, when executed, implements the steps of the above information display method.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of combinations of actions. However, those skilled in the art should understand that the present application is not limited by the order of actions described, as some steps may be performed in another order or simultaneously. Furthermore, those skilled in the art will appreciate that the embodiments described in the specification are all preferred embodiments, and that the actions and modules involved are not necessarily all required by the present application.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The preferred embodiments of the application disclosed above are intended only to assist in explaining the application. The alternative embodiments are not described exhaustively, nor is the application limited to the precise forms disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and its practical application, thereby enabling others skilled in the art to understand and use the application. The application is limited only by the claims and their full scope and equivalents.

Claims (13)

1. An information display method, comprising:
obtaining information to be displayed associated with a target object model, model parameters of the target object model and first position information of a virtual camera; the target object model is displayed in a display area, and the virtual camera corresponds to a scene displayed in the display area;
constructing a reference plane based on the region information of the display region and the first position information;
determining a display position of the information to be displayed in the display area based on the model parameters and the reference plane;
and displaying the information to be displayed based on the display position.
2. The method of claim 1, wherein constructing a reference plane based on the region information of the display region and the first position information comprises:
determining an orientation vector of the virtual camera relative to the display area based on the area information of the display area and the first position information;
determining a region vector of the display region based on the region information of the display region;
and constructing the reference plane according to the orientation vector and the area vector.
3. The method of claim 2, wherein determining an orientation vector of the virtual camera relative to the display area based on the area information of the display area and the first location information comprises:
determining second position information of each endpoint of the display area based on the area information of the display area;
determining third position information of the center point of the display area according to the second position information of each endpoint;
an orientation vector of the virtual camera relative to the display area is determined based on the first location information and the third location information.
4. The method of claim 2, wherein determining the region vector for the display region based on the region information for the display region comprises:
determining two target edges of the display area based on the area information;
determining fourth position information of reference points on each target edge;
and determining the area vector of the display area according to the fourth position information of each reference point.
5. The method of claim 2, wherein determining the region vector for the display region based on the region information for the display region comprises:
determining an auxiliary line intersecting a target direction in the display area based on the area information and the target direction, wherein the target direction is a designated direction of the target object model when the information to be displayed is displayed;
based on the auxiliary line, a region vector of the display region is determined.
6. The method according to any one of claims 1 to 5, wherein determining a display position of the information to be displayed in the display area based on the model parameters and the reference plane comprises:
determining an intersection point of the target object model and the reference plane in a target direction based on the model parameters, wherein the target direction is a designated direction of the target object model when the information to be displayed is displayed;
and determining the display position of the information to be displayed in the display area based on the intersection point.
7. The method of claim 6, wherein determining a display location of the information to be displayed in the display area based on the intersection comprises:
mapping the intersection points to the display area to obtain mapping points in the display area;
and determining the display position based on the mapping point.
8. The method of claim 7, wherein determining the display location based on the mapping points comprises:
shifting the mapping point to a position outside a mapping area of the target object model in the display area;
determining the display position as a position containing the shifted mapping point.
9. The method according to any one of claims 1 to 5, wherein before constructing a reference plane based on the region information of the display region and the first position information, the method further comprises:
determining a default display position of the information to be displayed; wherein the default display position and the target object model satisfy a set relative position relationship;
constructing a reference plane based on the region information of the display region and the first position information, including:
and constructing a reference plane based on the region information of the display region and the first position information when a mapping position of the default display position with respect to the display area is located outside the display area.
10. The method of claim 9, wherein after determining a default display position of the information to be displayed, the method further comprises:
and displaying the information to be displayed based on the mapping position when the mapping position of the default display position with respect to the display area is located within the display area.
11. An information display device, characterized in that the information display device comprises:
an acquisition module, configured to acquire information to be displayed associated with a target object model, model parameters of the target object model, and first position information of a virtual camera, wherein the target object model is displayed in a display area and the virtual camera corresponds to a scene displayed in the display area;
a construction module, configured to construct a reference plane based on the region information of the display region and the first position information;
a determining module, configured to determine a display position of the information to be displayed in the display area based on the model parameters and the reference plane; and
a display module, configured to display the information to be displayed based on the display position.
12. A computing device, comprising: a memory and a processor; the memory is for storing computer executable instructions, the processor being for executing the computer executable instructions to implement the steps of the method of any one of claims 1 to 10.
13. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 10.
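The geometric steps recited in claims 2, 3, and 6 — deriving an orientation vector from the virtual camera to the center of the display area, deriving a region vector from reference points on the area's edges, spanning a reference plane from those two vectors, and intersecting the model's target direction with that plane — can be sketched roughly as follows. This is a minimal illustrative interpretation, not the patented implementation: the corner ordering, the choice of edge midpoints as reference points, and the use of a cross-product normal for the plane are all assumptions.

```python
import numpy as np

def reference_plane(camera_pos, corners):
    """Build a reference plane from the camera position and the
    display area's four corner points (claims 2-4, as interpreted)."""
    # Claim 3: center point of the display area from its endpoints;
    # orientation vector = center - camera position.
    center = np.mean(corners, axis=0)
    orientation = center - camera_pos
    # Claim 4: region vector from reference points on two target edges
    # (assumed here: midpoints of the left and right edges).
    left_mid = (corners[0] + corners[3]) / 2.0
    right_mid = (corners[1] + corners[2]) / 2.0
    region = right_mid - left_mid
    # Claim 2: the plane is spanned by the two vectors; represent it
    # by a point (the camera position) and a unit normal.
    normal = np.cross(orientation, region)
    return camera_pos, normal / np.linalg.norm(normal)

def intersect(plane_point, plane_normal, model_pos, target_dir):
    """Claim 6: intersect a ray from the model along its designated
    target direction (e.g. its 'up' axis) with the reference plane."""
    d = np.asarray(target_dir, dtype=float)
    denom = np.dot(plane_normal, d)
    if abs(denom) < 1e-9:
        return None  # direction parallel to the plane: no intersection
    t = np.dot(plane_normal, plane_point - model_pos) / denom
    return model_pos + t * d
```

In the claimed method, the resulting intersection point would then be mapped to the display area to obtain a mapping point (claim 7) and, if needed, shifted outside the model's own mapping area (claim 8) so that the displayed information does not occlude the model.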
CN202310820250.9A 2023-07-05 2023-07-05 Information display method and device and computing equipment Pending CN116785703A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310820250.9A CN116785703A (en) 2023-07-05 2023-07-05 Information display method and device and computing equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310820250.9A CN116785703A (en) 2023-07-05 2023-07-05 Information display method and device and computing equipment

Publications (1)

Publication Number Publication Date
CN116785703A true CN116785703A (en) 2023-09-22

Family

ID=88043720

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310820250.9A Pending CN116785703A (en) 2023-07-05 2023-07-05 Information display method and device and computing equipment

Country Status (1)

Country Link
CN (1) CN116785703A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination