CN115531877B - Method and system for measuring distance in virtual engine - Google Patents

Method and system for measuring distance in virtual engine Download PDF

Info

Publication number
CN115531877B
CN115531877B (application CN202211455236.5A)
Authority
CN
China
Prior art keywords
distance
point
position information
invisible
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211455236.5A
Other languages
Chinese (zh)
Other versions
CN115531877A (en)
Inventor
李仕林
王峥
郭建君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Weiling Times Technology Co Ltd
Original Assignee
Beijing Weiling Times Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Weiling Times Technology Co Ltd filed Critical Beijing Weiling Times Technology Co Ltd
Priority to CN202211455236.5A priority Critical patent/CN115531877B/en
Publication of CN115531877A publication Critical patent/CN115531877A/en
Application granted granted Critical
Publication of CN115531877B publication Critical patent/CN115531877B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images

Abstract

The application discloses a method and a system for measuring distance in a virtual engine, wherein the method for measuring distance in the virtual engine comprises the following steps: acquiring position information; determining an invisible distance between the position information; and converting the invisible distance between the position information into a visible distance. The method and the system can be used for measurement in various game scenes and virtual 3D scenes under the Unreal Engine, and distances can be generated from freely chosen click points. Double floating-point calculation is adopted in the application, so that points can be clicked and drawn precisely anywhere in the world space of the virtual engine. The approach is easy to use and supports click-and-measure operation, finally making distance measurement in a virtual 3D scene faster and more convenient.

Description

Method and system for measuring distance in virtual engine
Technical Field
The present application relates to the field of virtualization technologies, and in particular, to a method and a system for distance measurement in a virtual engine.
Background
The Unreal Engine is a complete game development platform oriented to next-generation game consoles and personal computers, and contains a large amount of core technology, data generation tools, and basic support required by game developers. The Unreal Engine is also a full-featured game production engine, covering scene production, light rendering, action shots, particle effects, and material blueprints; it can help studios of all sizes efficiently produce different types of games and construct large numbers of virtual 3D scenes.
Therefore, how to provide a method capable of quickly and conveniently performing distance measurement in a virtual 3D scene is an urgent problem to be solved by those skilled in the art.
Disclosure of Invention
The application provides a method for measuring distance in a virtual engine, which is characterized by comprising the following steps: acquiring position information; determining an invisible distance between the position information; and converting the invisible distance between the position information into a visible distance.
As above, wherein the position information is one or more sets of position information, and each acquired set of position information includes the start point and end point position information acquired in world space.
As above, wherein the start point position information and the end point position information in world space are determined based on a ray detection method.
As above, wherein determining the start point position information and the end point position information in world space specifically includes determining, by the ray detection method, whether an object is collided; if an object is collided, the collision position is taken as the start point position, and if an object is collided again, that collision position is taken as the end point position.
As above, wherein calculating the invisible distance between the position information is determining the distance between the start point and end point positions in each of the one or more sets of position information.
As above, wherein determining the invisible distance between the position information specifically includes: determining a generation point from the start point and end point position information, the generation point making it possible to confirm the order of the start point and the end point in each set of position information.
As above, wherein, in response to determining the generation point, the coordinates of the start point and the end point are placed into a preset Vector type array, yielding the invisible distance between the start point and the end point.
As above, wherein a string variable in the blueprint is called through blueprint communication, and the invisible distance is converted into the visible distance.
A distance measuring system in a virtual engine specifically comprises a position information acquisition unit, an invisible distance determining unit and a visible distance determining unit; the acquisition unit is used for acquiring the position information; an invisible distance determining unit for determining an invisible distance between the position information; and the visual distance determining unit is used for converting the invisible distance between the position information into the visual distance.
As above, the invisible distance determining unit specifically includes the following modules: a generation point determining module and an invisible distance obtaining module; the generating point determining module is used for determining a generating point according to the position information of the starting point and the end point; and the invisible distance acquisition module is used for putting the coordinates of the starting point and the end point into a preset Vector type array to obtain the invisible distance between the starting point and the end point.
The application has the following beneficial effects:
the method and the system can be used for measurement in various game scenes and virtual 3D scenes under the Unreal Engine, and distances can be generated from freely chosen click points. Double floating-point calculation is adopted in the application, so that points can be clicked and drawn precisely anywhere in the world space of the virtual engine. The approach is easy to use and supports click-and-measure operation, finally making distance measurement in a virtual 3D scene faster and more convenient.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments described in the present application, and other drawings can be obtained by those skilled in the art from these drawings.
FIG. 1 is a flow chart of a method for distance measurement in a virtual engine provided according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a distance measurement system in a virtual engine according to an embodiment of the present disclosure;
fig. 3 is another schematic structural diagram of a distance measuring system in a virtual engine according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application are clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Example one
As shown in fig. 1, a method for distance measurement in a virtual engine is provided, where a blueprint and the virtual engine are required before the distance measurement method is performed.
The blueprint is a visual program editor packaged in C++ that can be used to implement various program requirements in the Unreal Engine; in the simplest terms, the blueprint can be regarded as a programming language built into the UE engine.
Specifically, the data in the blueprint are of types such as array, float, and so on. The blueprint editor used in the present embodiment is a blueprint editor in the prior art. It is worth noting, however, that different blueprint types are used for different usage scenarios; therefore, the present embodiment mainly uses the Character blueprint type to build the blueprint and implement distance measurement.
In addition, the blueprint editor provides a large number of nodes and functions to conveniently create the blueprint.
Therefore, the blueprint assets provided by this embodiment and their functions are as follows:
BP_ThirdPersonCharacter: final mode of operation
BP_Point: marker point generated at the clicked position
BP_Distance: invokes the UI
BP_UI: generates UI parameters according to the length
BP_UI_total: total length of the drawn measurement lines
Based on the above blueprint assets, the data types used by the built-in variables in the blueprints are shown in Table 1 below:
TABLE 1
[Table 1 lists the data types of the built-in blueprint variables; it appears only as an image in the original publication and is not reproduced here.]
After the setup is completed, distance measurement is carried out, wherein the method for measuring distance in the virtual engine specifically comprises the following steps:
step S110: position information is acquired.
The acquiring of the position information includes acquiring position information of a start point and an end point, and specifically includes: acquiring character position information, converting the screen-space position of the mouse into world space according to the character position information, and determining start point position coordinate information and end point position coordinate information in world space based on a ray detection method.
Wherein the conversion of the screen-space position of the mouse into world space can be performed using methods customary in the art; one possible approach is sketched below.
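As one possibility, Unreal Engine's player controller exposes a deprojection call for exactly this conversion. The following is a minimal sketch, assuming an APlayerController is available; the helper name GetMouseWorldRay is illustrative and not part of the engine.

```cpp
#include "GameFramework/PlayerController.h"

// Hypothetical helper (the name is ours): convert the current mouse position
// into a world-space ray. DeprojectMousePositionToWorld is the engine call
// that maps the mouse's screen-space position to a world-space origin and
// direction, which the ray detection step can then extend into the scene.
bool GetMouseWorldRay(APlayerController* PC, FVector& OutOrigin, FVector& OutDirection)
{
    return PC != nullptr && PC->DeprojectMousePositionToWorld(OutOrigin, OutDirection);
}
```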
The start point and end point position coordinates are explained in detail below. These coordinates ultimately derive from normalized device coordinates: the x, y, and z coordinates of each vertex should lie between -1.0 and 1.0, and vertices outside this range will not be visible. The transformation of coordinates into normalized device coordinates and then into screen coordinates is usually done in steps, similar to a pipeline. In this pipeline, the vertices of an object are transformed through several coordinate systems (Coordinate System) before finally becoming screen coordinates. The advantage of transforming the coordinates of an object into several intermediate coordinate systems (Intermediate Coordinate System) is that some operations or calculations are more convenient and easier in a particular coordinate system.
There are a total of 5 different coordinate systems that are important to us: local space (Local Space, or Object Space), world space (World Space), view space (View Space), clip space (Clip Space), and screen space (Screen Space). These are all the different states a vertex passes through before it is finally converted into a fragment.
In order to transform coordinates from one coordinate system to another, we need several transformation matrices, the most important of which are the model (Model) matrix, the view (View) matrix, and the projection (Projection) matrix. Together they form what is commonly called the MVP matrix.
Our vertex coordinates start in local space (Local Space) as local coordinates (Local Coordinate), which are then transformed into world coordinates (World Coordinate), view coordinates (View Coordinate), and clip coordinates (Clip Coordinate), and finally end as screen coordinates (Screen Coordinate).
The local coordinates are the coordinates of the object relative to its local origin, and are also the coordinates the object starts with.
World space coordinates exist in a larger spatial range. These coordinates are relative to the global origin of the world, and the object is placed relative to the world origin together with other objects. The world coordinates are then transformed into view space coordinates, so that each coordinate is seen from the perspective of the camera or viewer.
After the coordinates arrive in view space, we need to project them to clip coordinates. The clip coordinates are processed into the range -1.0 to 1.0 and determine which vertices will appear on the screen. Finally, to transform the clip coordinates into screen coordinates, we use a process called the viewport transformation (Viewport Transform).
The viewport transformation maps coordinates in the range -1.0 to 1.0 to the coordinate range defined by the glViewport function. The transformed coordinates are then sent to the rasterizer, which converts them into fragments. The full chain of transformations is summarized below.
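In standard graphics notation (not notation taken from this application itself), the pipeline just described can be written as:

```latex
v_{\text{clip}} = M_{\text{projection}} \, M_{\text{view}} \, M_{\text{model}} \, v_{\text{local}}, \qquad
v_{\text{ndc}} = \frac{(x, y, z)_{\text{clip}}}{w_{\text{clip}}}, \qquad
x_{\text{screen}} = \frac{x_{\text{ndc}} + 1}{2} \, w_{\text{vp}} + x_{\text{vp}}
```

where w_vp and x_vp are the viewport width and origin set by glViewport, and y_screen is computed analogously from the viewport height.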
The local space may be understood through the example of taking a group photo: each person first tidies his or her own appearance, and that adjustment happens in local space. The local space is the coordinate space in which the object is located, i.e., where the object is at the very beginning. When a cube is created in a piece of software, its origin may be located at (0, 0, 0), even though it may end up at a completely different position in the program. Possibly all the models created have (0, 0, 0) as their initial position (translator's note: yet they will eventually appear at different locations in the world). All vertices of the model are therefore in local space: they are all local to the object.
The world space/model matrix described above can be understood as follows: for the group photo, the people first need to gather, and then each adjusts standing position and angle. That collective adjustment is the model matrix, and the resulting space is world space. If we import all our objects into the program, they may all be crowded at the world origin (0, 0, 0), which is not the result we want. We want to define a position for each object so that it can be placed in the larger world. The coordinates in world space are exactly what the name says: the coordinates of the vertices relative to the (game) world. If the objects are to be distributed around the world, world space is the space we want the objects to be transformed into.
The coordinates of the object are transformed from local space to world space; the transformation is realized by the model matrix (Model Matrix). The model matrix is a transformation matrix that places an object at the position or orientation it should have by translating, scaling, and rotating it. The view space/view matrix described above can be understood as follows: once the group photo is arranged, the camera is aimed. The camera defines the viewing volume, and the process of adjusting the camera is the view matrix. The view space is often referred to as the camera of OpenGL (and is therefore sometimes also called camera space (Camera Space) or eye space (Eye Space)). The view space is the result of translating world space coordinates into coordinates in front of the user's field of view; the view space is thus the space observed from the camera's point of view.
This is usually done by a combination of a series of translations and rotations that move and rotate the scene so that particular objects are transformed to lie in front of the camera. These combined transformations are typically stored in a view matrix (View Matrix), which transforms world coordinates into view space.
Specifically, the method for acquiring the start point and end point position information determines whether an object is collided based on the ray detection function in the UE blueprint; if an object is collided, the collision position is taken as the start point position, and if an object is collided again, that collision position is taken as the end point position. The start point position and the end point position form one set of position information; if the mouse is clicked again, further start point and end point positions continue to be generated. Step S120 is performed in response to determining one or more sets of start point and end point information.
The determination of the start point and the end point based on whether an object is collided may be understood as follows: when the mouse is clicked, if an object is collided, a start point or an end point is generated at that click.
The position coordinates of the object when it is collided in the ray detection function are world-position coordinates in the graphics sense, i.e., coordinates in the UE engine. These coordinates can be used to determine the specific start point and end point coordinates.
Since one start point coordinate and the corresponding end point coordinate are regarded as one set of position information, the present embodiment can detect the distance between multiple sets of position information; that is, the present embodiment can acquire one or more sets of position information and thereby detect the distance between the start point and the end point in each set of position information. A sketch of this click handling follows.
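As a concrete illustration (a sketch under assumptions, not the application's exact implementation), the per-click logic of step S110 could look as follows in engine-side C++; the function and array names are ours:

```cpp
#include "GameFramework/PlayerController.h"

// Hypothetical per-click handler: each blocking hit under the cursor appends
// one point, so points alternate start, end, start, end, ... forming the sets
// of position information described above.
void HandleMeasureClick(APlayerController* PC, TArray<FVector>& ClickedPoints)
{
    FHitResult Hit;
    // Ray detection under the mouse cursor on the visibility channel; returns
    // true when "an object is collided".
    if (PC->GetHitResultUnderCursor(ECC_Visibility, /*bTraceComplex=*/false, Hit))
    {
        ClickedPoints.Add(Hit.ImpactPoint);
    }
}
```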
Specifically, the determination of whether a ray hits an object may be understood as using a ray to perform a hit determination, where a hit is a collision.
The following situations may arise in a game: it is necessary to determine whether the player character is looking at an object and, if so, to adjust the game state in some way (e.g., highlight the object the player is looking at); or to determine whether an enemy can see the player character and, if so, initiate a shot or other attack. Both cases "emit" an invisible ray to detect geometry between two points, which can be achieved using tracing (or ray casting); if geometry is hit, the hit is returned so it can be operated on.
There are several different options available when running traces. You can run a trace that checks collision against any object (the hit object will be returned), or run the trace on a trace channel, in which case only objects set to respond to that particular trace channel (configurable through the collision settings) return hit information.
Besides running a trace by object or by trace channel, a trace can also be run to detect a single hit or multiple hits: a single trace returns only a single hit result, while a multi trace returns the multiple hits caused by the trace. The shape used for the trace may also be specified: lines, boxes, capsules, or spheres, as sketched below.
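For the shaped variants, the engine provides sweep queries. The sketch below (assumed parameters; sphere case only) shows the idea, with FCollisionShape::MakeBox and MakeCapsule covering the other shapes:

```cpp
#include "Engine/World.h"

// Sketch: sweep a sphere of the given radius from Start to End instead of
// tracing a bare line; returns true on the first blocking hit.
bool SphereTrace(UWorld* World, const FVector& Start, const FVector& End,
                 float Radius, FHitResult& OutHit)
{
    return World->SweepSingleByChannel(
        OutHit, Start, End, FQuat::Identity, ECC_Visibility,
        FCollisionShape::MakeSphere(Radius));
}
```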
Tracing provides a way to get feedback in the level about the content present along a line segment. The method used is to provide two end points (a start and an end position); the physics system then "traces" a line segment between the two points and reports any Actor (with collision) that it hits. Essentially, tracing is the same as ray casting or ray tracing in other software packages.
Whether you need to know whether one Actor can "see" another Actor, determine the normal of a particular polygon, simulate a high-speed weapon, or know whether an Actor has entered a space, tracing offers a reliable, computationally inexpensive solution. The present embodiment describes the basic set of tracing functions in Unreal Engine 5 (UE5).
This includes tracing by channel or by object type: because tracing uses the physics system, you can define the classes of objects to be traced. Two broad categories are available: channels and object types. Channels are used for visibility, camera, and the like, and are almost exclusively relevant to tracing. Object types are the physics types of Actors with collision in the scene, such as Pawn, Vehicle, Destructible, and so on. More channels and object types can be added as needed; for details, refer to adding custom object types to the project.
During tracing, you can choose to return only the first item matching the condition that the trace hits, or to return all matching items. Pay particular attention to the distinction between multi trace by channel (Multi Trace by Channel) and multi trace for objects (Multi Trace For Objects). When using multi trace by channel, the trace returns all overlaps (Overlaps) up to and including the first block (Block); imagine a bullet passing through tall grass and then hitting a wall. Multi trace for objects returns all objects matching the object types the trace looks for, assuming the components are set to return trace queries; it is well suited to counting the number of objects between the start and the end of the trace, as in the sketch below.
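A minimal sketch of such a counting multi trace (channel-based variant shown; the helper name is ours):

```cpp
#include "Engine/World.h"

// Sketch: a multi trace returns every overlap up to and including the first
// blocking hit, so the number of results approximates the number of objects
// between the start and end of the trace.
int32 CountObjectsAlongTrace(UWorld* World, const FVector& Start, const FVector& End)
{
    TArray<FHitResult> Hits;
    World->LineTraceMultiByChannel(Hits, Start, End, ECC_Visibility);
    return Hits.Num();
}
```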
UV coordinates can be obtained from a trace: if Trace Complex is used, the trace can return the UV coordinates of the Actor it hits. As of version 4.14, this function works only on static mesh components, procedural mesh components, and BSP. It does not work on skeletal mesh components, because there the trace runs against the physics asset, which has no UV coordinates (even if Trace Complex is selected). Using this feature increases CPU memory usage, because the Unreal Engine needs to keep an additional copy of vertex positions and UV coordinates in main memory.
Traces also have some small features that can be used to limit what they return, which simplifies debugging. They can trace against complex collision (Complex Collision) if a static or procedural mesh enables it. If they are called from an Actor, they can be told to ignore all attached components by having the Actor ignore itself in the trace. Finally, there is the option of drawing the trace as red or green lines, with a larger box representing hits.
The above is a way of obtaining coordinates by tracing. In addition, the tracing may use a single line trace method, which specifically includes the following substeps:
step D1: a new project is created and opened in the virtual engine.
Specifically, a new project is created and opened in the virtual engine using the Blueprint First Person template with Starter Content included.
In the FirstPerson/Blueprints folder, the BP_FirstPersonCharacter blueprint is opened. Click in the graph, then search for and add an Event Tick node; this causes the trace to run every frame. Drag off the execution pin and add a LineTraceByChannel node.
Step D2: tracing is performed based on the opened project.
The camera determines where the trace starts: drag the FirstPersonCamera component into the graph and begin building the trace.
Specifically, drag off the FirstPersonCamera node and add a GetWorldLocation node, then connect it to the Start pin of the trace. Then drag off the FirstPersonCamera node again and add a GetWorldRotation node.
We start the trace from the position of FirstPersonCamera and then obtain the rotation of FirstPersonCamera. Drag off the GetWorldRotation node and add a GetForwardVector node, then drag off that node and add a vector * float node with the value 1500.
This obtains the direction the player is facing and extends it forward by 1500 units.
Drag off the GetWorldLocation node, add a vector + vector node, and connect the result to the End pin of the trace node.
Here we take the FirstPersonCamera position and extend it 1500 units forward, based on its rotation and forward vector.
On the trace node, the Draw Debug Type is set to For One Frame.
This way, the debug line can be inspected while the line trace runs in the game.
Drag off the execution output of the trace and add a Print String node.
Search for Break Hit Result and add it, connecting it to the Out Hit pin of the trace.
Drag off the Hit Actor pin of the Break Hit Result node, add a To String (Object) conversion node, and connect it to the Print String node. This allows us to print and debug the object the trace hits.
Step D3: complete the trace and obtain the hit result.
Click the Compile button, then play in the editor and aim at the blocks in the level. Whether a final hit is made can be seen from whether the ray hits a block; if a block is hit, it is considered a hit. A rough C++ equivalent of these steps is sketched below.
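The blueprint graph built in steps D1 to D3 corresponds roughly to the following C++ (a sketch under assumptions: it is expected to be called every frame, and Camera stands in for the FirstPersonCamera component):

```cpp
#include "Camera/CameraComponent.h"
#include "DrawDebugHelpers.h"
#include "Engine/World.h"

// Per-frame trace from the camera: start at the camera position, extend 1500
// units along its forward vector, draw a one-frame debug line, and log the hit.
void TraceFromCamera(UWorld* World, UCameraComponent* Camera)
{
    const FVector Start = Camera->GetComponentLocation();              // GetWorldLocation
    const FVector End = Start + Camera->GetForwardVector() * 1500.0f;  // forward * 1500
    FHitResult Hit;
    const bool bHit = World->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility);
    // Equivalent of "Draw Debug Type: For One Frame": green on hit, red otherwise.
    DrawDebugLine(World, Start, End, bHit ? FColor::Green : FColor::Red,
                  /*bPersistentLines=*/false, /*LifeTime=*/0.0f);
    if (bHit && Hit.GetActor())
    {
        // Equivalent of the Print String node: report which object was hit.
        UE_LOG(LogTemp, Log, TEXT("Trace hit: %s"), *Hit.GetActor()->GetName());
    }
}
```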
Step S120: an invisible distance between the location information is determined.
The step of calculating the invisible distance between the position information specifically includes obtaining the position distance between the start point and the end point in the one or more sets of position information: if an object is collided, the current position is calculated, and the specific ray collision position can be visualized through a debug mode. It should be noted that the distance between the start point and end point position information calculated in this step is not itself visualized.
Specifically, the step S120 specifically includes the following sub-steps:
step S1201: and determining a generation point according to the starting point and the end point position information.
The generation point can be understood from the coordinates of the position at which the object is collided: it is a point located between the start point and the end point.
Further, the generation points are invisible, so this embodiment designates a generation point in the blueprint by clicking the mouse and generates a visualized sphere as a reference, thereby defining the position coordinates of the generation point.
Since the position information may comprise multiple sets of position information, the order of the start point and the end point in each set of position information can be confirmed through the generation point; for example, in one set of position information, the first mouse click generates the start point and the second click generates the end point, thereby avoiding confusion of the start point and the end point across the sets of position information.
Step S1202: in response to determining the generation point, the coordinates of the start point and the end point are placed into a preset Vector type array to obtain the invisible distance between the start point and the end point.
Specifically, after the order of the start point and the end point in each of the one or more sets is determined, the coordinates of the start point and the end point in each set of position information are placed into a Vector type array. Since the Vector type is an automatically growing object array (as in Java), the vector between the start point and end point coordinates can be obtained through the array, and the length is obtained from the vector, giving the invisible distance between the start point and the end point; see the sketch below.
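A minimal sketch of this distance computation (assumed types; in Unreal's C++ API the analogous container would be TArray<FVector>, and the function name is ours):

```cpp
// Sketch of step S1202: put one group's start and end coordinates into an
// array and take the length of the vector between them.
float ComputeInvisibleDistance(const FVector& StartPoint, const FVector& EndPoint)
{
    TArray<FVector> Pair;
    Pair.Add(StartPoint);
    Pair.Add(EndPoint);
    // FVector::Dist is the length of (Pair[1] - Pair[0]); this value is the
    // "invisible distance": it exists as data but is not yet displayed.
    return FVector::Dist(Pair[0], Pair[1]);
}
```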
Step S130: the invisible distance between the position information is converted into a visible distance.
In this step, a UI is generated to visualize the invisible distance obtained in step S120.
Specifically, a UI is generated in the UI blueprint through blueprint communication, and the string variable Distance is called to generate the length data measured in the scene; that is, the invisible distance is converted into a visible distance. A minimal sketch of this conversion follows.
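The conversion amounts to formatting the measured length into the string the UI consumes; a sketch under assumptions (the function name is ours, and the default Unreal world unit of centimeters is assumed):

```cpp
// Sketch of step S130: format the invisible distance as the Distance string
// that the UI blueprint displays, making the measurement visible.
FString MakeDistanceString(float InvisibleDistance)
{
    return FString::Printf(TEXT("Distance: %.2f"), InvisibleDistance);
}
```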
Example two
As shown in fig. 2, the present embodiment provides a distance measuring system in a virtual engine, including a position information obtaining unit 210, an invisible distance determining unit 220, and a visible distance determining unit 230.
The obtaining unit 210 is used for obtaining the position information.
The invisible distance determining unit 220 is connected to the obtaining unit 210 for determining the invisible distance between the position information.
The invisible distance determining unit 220 specifically includes the following modules: a generation point determining module 310 and an invisible distance acquiring module 320.
Wherein the generation point determination module 310 is configured to determine a generation point according to the start point and the end point position information.
The invisible distance acquiring module 320 is connected to the generation point determining module 310, and is configured to put the coordinates of the start point and the end point into a preset Vector type array, so as to obtain the invisible distance between the start point and the end point.
The visible distance determining unit 230 is connected to the invisible distance determining unit 220 for converting the invisible distance between the position information into a visible distance.
The application has the following beneficial effects:
the method and the system can be used for measurement in various game scenes and virtual 3D scenes under the Unreal Engine, and distances can be generated from freely chosen click points. Double floating-point calculation is adopted in the application, so that points can be clicked and drawn precisely anywhere in the world space of the virtual engine. The approach is easy to use and supports click-and-measure operation, finally making distance measurement in a virtual 3D scene faster and more convenient.
Although the present application has been described with reference to examples, which are intended to be illustrative only and not to be limiting of the application, changes, additions and/or deletions may be made to the embodiments without departing from the scope of the application.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (7)

1. A method for measuring distance in a virtual engine is characterized by comprising the following substeps:
acquiring position information comprising starting point position coordinate information and end point position coordinate information;
determining an invisible distance between the position information;
converting the invisible distance between the position information into a visible distance;
wherein, the determining of the invisible distance between the position information specifically comprises: determining a generation point according to the position information of the starting point and the end point, wherein the generation point is an invisible point between the starting point and the end point, the position information is a plurality of groups of position information, and the sequence of the starting point and the end point in each group of position information can be confirmed through the generation point; after the generation point is determined, the coordinates of the starting point and the end point are placed in a preset Vector type array, a Vector between the coordinates of the starting point and the end point can be obtained through the array, and the length is obtained through the Vector so as to obtain the invisible distance between the starting point and the end point;
the conversion is to call a character string variable in the blueprint through blueprint communication, specifically, to generate a UI in the UI blueprint through the blueprint communication, and call a character string variable Distance to generate length data measured in a scene, that is, to convert an invisible Distance into a visible Distance.
2. The method of claim 1, wherein the location information is one or more sets of location information, and each set of location information obtained comprises obtaining location information of a start point and an end point in world space.
3. The method of distance measurement in a virtual engine according to claim 2, wherein the start point position information and the end point position information in the world space are determined based on a ray detection method.
4. The method of distance measurement in a virtual engine of claim 3, wherein determining the start point position information and the end point position information in world space specifically includes determining whether an object is collided by the ray detection method; if an object is collided, the collision position is taken as the start point position, and if an object is collided again, that collision position is taken as the end point position.
5. The method of distance measurement in a virtual engine of claim 1, wherein calculating the invisible distance between location information is determining a starting point and ending point location distance in each of one or more sets of location information.
6. A distance measuring system in a virtual engine is characterized by specifically comprising a position information acquisition unit, an invisible distance determining unit and a visible distance determining unit;
the acquisition unit is used for acquiring position information, including start position coordinate information and end position coordinate information;
an invisible distance determining unit for determining an invisible distance between the position information;
a visible distance determining unit for converting the invisible distance between the position information into a visible distance;
wherein, the determining of the invisible distance between the position information specifically comprises: determining a generation point according to the position information of the starting point and the end point, wherein the generation point is an invisible point between the starting point and the end point, the position information is a plurality of groups of position information, and the order of the starting point and the end point in each group of position information can be confirmed through the generation point; after the generation point is determined, the coordinates of the starting point and the end point are placed in a preset Vector type array, a Vector between the coordinates of the starting point and the end point can be obtained through the array, and the length is obtained through the Vector so as to obtain the invisible distance between the starting point and the end point;
the conversion is to call a character string variable in the blueprint through blueprint communication, specifically, to generate a UI in the UI blueprint through the blueprint communication, and call a character string variable Distance to generate length data measured in a scene, that is, to convert an invisible Distance into a visible Distance.
7. The system for measuring distance in a virtual engine according to claim 6, wherein the invisible distance determining unit specifically includes the following modules: a generation point determining module and an invisible distance obtaining module;
the generating point determining module is used for determining a generating point according to the position information of the starting point and the end point;
and the invisible distance acquisition module is used for putting the coordinates of the starting point and the end point into a preset Vector type array to obtain the invisible distance between the starting point and the end point.
CN202211455236.5A 2022-11-21 2022-11-21 Method and system for measuring distance in virtual engine Active CN115531877B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211455236.5A CN115531877B (en) 2022-11-21 2022-11-21 Method and system for measuring distance in virtual engine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211455236.5A CN115531877B (en) 2022-11-21 2022-11-21 Method and system for measuring distance in virtual engine

Publications (2)

Publication Number Publication Date
CN115531877A CN115531877A (en) 2022-12-30
CN115531877B true CN115531877B (en) 2023-03-07

Family

ID=84721004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211455236.5A Active CN115531877B (en) 2022-11-21 2022-11-21 Method and system for measuring distance in virtual engine

Country Status (1)

Country Link
CN (1) CN115531877B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017092303A1 (en) * 2015-12-01 2017-06-08 乐视控股(北京)有限公司 Virtual reality scenario model establishing method and device
CN107392888A (en) * 2017-06-16 2017-11-24 福建天晴数码有限公司 A kind of distance test method and system based on Unity engines
CN107423688A (en) * 2017-06-16 2017-12-01 福建天晴数码有限公司 A kind of method and system of the remote testing distance based on Unity engines
CN112245923A (en) * 2020-10-20 2021-01-22 珠海天燕科技有限公司 Collision detection method and device in game scene
CN113975812A (en) * 2021-10-21 2022-01-28 网易(杭州)网络有限公司 Game image processing method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN115531877A (en) 2022-12-30

Similar Documents

Publication Publication Date Title
Langlotz et al. Sketching up the world: in situ authoring for mobile augmented reality
CN113781626B (en) Techniques to traverse data used in ray tracing
CN101751690B (en) System and method for photorealistic imaging using ambient occlusion
CN112933597B (en) Image processing method, image processing device, computer equipment and storage medium
US20080231631A1 (en) Image processing apparatus and method of controlling operation of same
US20230206565A1 (en) Providing augmented reality in a web browser
JP2012528376A (en) Ray tracing apparatus and method
US20110109628A1 (en) Method for producing an effect on virtual objects
CN112419499B (en) Immersive situation scene simulation system
CN110559660B (en) Method and medium for mouse-to-object drag in Unity3D scene
CN109255749A (en) From the map structuring optimization in non-autonomous platform of advocating peace
Grootjans XNA 3.0 Game Programming Recipes: A Problem-Solution Approach
GB2406252A (en) Generation of texture maps for use in 3D computer graphics
KR102317182B1 (en) Apparatus for generating composite image using 3d object and 2d background
US20210142511A1 (en) Method of generating 3-dimensional model data
CN111161398A (en) Image generation method, device, equipment and storage medium
US20220215581A1 (en) Method for displaying three-dimensional augmented reality
CN112230765A (en) AR display method, AR display device, and computer-readable storage medium
CN112215964A (en) Scene navigation method and device based on AR
CN115512025A (en) Method and device for detecting model rendering performance, electronic device and storage medium
CN111142967A (en) Augmented reality display method and device, electronic equipment and storage medium
US7116341B2 (en) Information presentation apparatus and method in three-dimensional virtual space and computer program therefor
US10909752B2 (en) All-around spherical light field rendering method
US20210241539A1 (en) Broker For Instancing
CN115531877B (en) Method and system for measuring distance in virtual engine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant