CN115661359A - Method, device, equipment and medium for generating air wall in virtual environment


Info

Publication number
CN115661359A
CN115661359A (application number CN202211415182.XA)
Authority
CN
China
Prior art keywords
target
wall
collision
model
air wall
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211415182.XA
Other languages
Chinese (zh)
Inventor
武毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xintang Sichuang Educational Technology Co Ltd
Original Assignee
Beijing Xintang Sichuang Educational Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xintang Sichuang Educational Technology Co Ltd filed Critical Beijing Xintang Sichuang Educational Technology Co Ltd
Priority to CN202211415182.XA priority Critical patent/CN115661359A/en
Publication of CN115661359A publication Critical patent/CN115661359A/en
Pending legal-status Critical Current


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The present disclosure relates to a method, an apparatus, a device and a medium for generating an air wall in a virtual environment. The method comprises: in response to a click operation of a user, emitting a virtual ray perpendicular to the user interface from the click position of the click operation; detecting the positions where the virtual rays intersect the boundary region of the virtual scene to obtain a plurality of intersection positions; for every two adjacent intersection positions, configuring model parameters of a preset cube model according to the intersection positions to generate a collision body; and forming a closed air wall from a plurality of collision bodies whose intersection positions are consecutive. The method and apparatus can improve the efficiency of creating air walls.

Description

Method, device, equipment and medium for generating air wall in virtual environment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a medium for generating an air wall in a virtual environment.
Background
In many virtual scenes, such as games and online classrooms, transparent air walls with collision volumes are often used to block map boundaries and prevent players from walking into inaccessible areas. Currently, configuring an air wall in a virtual scene requires many manual steps: placing a cube model, setting parameters such as its position, angle and scaling factor, and configuring the model to be invisible. This manual configuration process is complex and time-consuming, which greatly reduces the efficiency of building air walls.
Disclosure of Invention
In order to solve the technical problems or at least partially solve the technical problems, the present disclosure provides a method, an apparatus, a device, and a medium for generating an air wall in a virtual environment.
According to an aspect of the present disclosure, a method for generating an air wall in a virtual environment is provided, including:
in response to a click operation of a user, emitting a virtual ray perpendicular to a user interface from the click position of the click operation;
detecting the positions where the virtual rays intersect the boundary region of the virtual scene to obtain a plurality of intersection positions;
for every two adjacent intersection positions, configuring model parameters of a preset cube model according to the intersection positions to generate a collision body;
and forming a closed air wall from a plurality of collision bodies whose intersection positions are consecutive.
According to another aspect of the present disclosure, there is provided an apparatus for generating an air wall in a virtual environment, including:
a ray emission module, configured to emit, in response to a click operation of a user, a virtual ray perpendicular to a user interface from the click position of the click operation;
an intersection point detection module, configured to detect the positions where the virtual rays intersect the boundary region of the virtual scene, to obtain a plurality of intersection positions;
a parameter configuration module, configured to configure, for every two adjacent intersection positions, model parameters of a preset cube model according to the intersection positions, to generate a collision body;
and a combination module, configured to combine a plurality of collision bodies with consecutive intersection positions into a closed air wall.
According to another aspect of the present disclosure, there is provided an electronic device including:
a processor;
a memory for storing the processor-executable instructions;
and the processor is configured to read the executable instructions from the memory and execute the instructions to implement the method for generating an air wall in a virtual environment.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions which, when run on a terminal device, cause the terminal device to implement a method of generating an air wall in a virtual environment.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
the method, the device, the equipment and the medium for generating the air wall in the virtual environment provided by the embodiment of the disclosure comprise the following steps: responding to the clicking operation of the user, and emitting a virtual ray vertical to the user interface from the clicking position of the clicking operation; detecting intersection point positions of the virtual rays and the boundary regions in the virtual scene to obtain a plurality of intersection point positions; configuring model parameters of a preset cubic model between every two adjacent intersection points according to the intersection points to generate a collision body; and forming a closed air wall by a plurality of collision bodies with continuous intersection points. The air wall creation method and the air wall creation device can improve the creation efficiency of the air wall.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below; those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of a method for generating an air wall in a virtual environment according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a location of an intersection provided by an embodiment of the present disclosure;
FIG. 3 is a schematic view of an air wall provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a collision effect provided by an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an apparatus for generating an air wall in a virtual environment according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description. It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The existing approach of manually configuring air walls is complex and time-consuming, and greatly reduces the efficiency of building air walls. In view of this, the embodiments of the present disclosure provide a method, an apparatus, a device, and a medium for generating an air wall in a virtual environment. For ease of understanding, the embodiments of the present disclosure are described below.
Fig. 1 is a flowchart of a method for generating an air wall in a virtual environment according to an embodiment of the present disclosure, where the method may be performed by an apparatus for generating an air wall in a virtual environment, and the apparatus may be implemented by software and/or hardware. Referring to fig. 1, a method for generating an air wall in a virtual environment may include the following steps.
Step S102: in response to a click operation of the user, emit a virtual ray perpendicular to the user interface from the click position of the click operation.
It will be appreciated that rendering a virtual scene requires a viewpoint, from whose position the user observes the scene. A three-dimensional graphics system generally represents the user's viewpoint with a camera, which defines the position and orientation of the user relative to the virtual scene. Along the camera's viewing direction, from near to far, lie a near clipping plane and a far clipping plane; the near clipping plane corresponds to the user interface, and the two planes bound a three-dimensional viewing volume in which the three-dimensional virtual scene is rendered.
In this embodiment, in response to the user's click operation, a virtual ray perpendicular to the user interface (i.e. the near clipping plane) can be emitted from the click position into the three-dimensional viewing volume. The click operation is generally a mouse click, a touch click, or the like performed on the user interface.
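As an illustration of how such a ray can be formed from a click, the sketch below maps a click point on the near clipping plane to a world-space ray whose direction is the camera's forward vector, i.e. perpendicular to the user interface. The function name, the normalized-device-coordinate convention and all parameters are assumptions made for this example and do not correspond to any particular engine's API.

```python
import numpy as np

def click_to_ray(click_ndc, cam_pos, cam_forward, cam_right, cam_up,
                 near, half_width, half_height):
    """Map a click (in assumed normalized device coordinates, -1..1) on the
    near clipping plane to a world-space ray perpendicular to the user interface."""
    cx, cy = click_ndc
    # The ray starts at the clicked point on the near clipping plane.
    origin = (np.asarray(cam_pos, float)
              + near * np.asarray(cam_forward, float)
              + cx * half_width * np.asarray(cam_right, float)
              + cy * half_height * np.asarray(cam_up, float))
    # Its direction is the camera forward vector, i.e. the normal of the near plane.
    direction = np.asarray(cam_forward, float)
    return origin, direction / np.linalg.norm(direction)
```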
Step S104: detect the positions where the virtual rays intersect the boundary region of the virtual scene to obtain a plurality of intersection positions.
As the virtual ray travels from the user interface toward the far clipping plane, it may intersect the boundary region of the virtual scene within the three-dimensional viewing volume. Based on this, the positions where the virtual rays intersect the boundary region of the virtual scene are detected.
Following the above embodiment, a plurality of intersection positions are obtained from multiple click operations; in the example of fig. 2, four intersection positions are illustrated, each of which may be represented by a three-dimensional position vector.
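To make the intersection detection of step S104 concrete, the following minimal sketch intersects one such ray with a horizontal ground plane standing in for the boundary region; the plane y = 0 and the function name are assumptions for the example, whereas a real scene would ray-cast against the actual boundary geometry.

```python
import numpy as np

def ray_ground_intersection(origin, direction, ground_y=0.0, eps=1e-8):
    """Return the intersection position of the ray with the plane y = ground_y,
    or None if the ray is parallel to the plane or points away from it."""
    o = np.asarray(origin, float)
    d = np.asarray(direction, float)
    if abs(d[1]) < eps:          # ray parallel to the plane
        return None
    t = (ground_y - o[1]) / d[1]
    if t < 0:                    # intersection lies behind the ray origin
        return None
    return o + t * d             # three-dimensional position vector of the intersection
```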
Step S106: for every two adjacent intersection positions, configure model parameters of a preset cube model according to the intersection positions to generate a collision body.
The cube model in this embodiment may be understood as a unit cube with initialized parameters such as length, width, height and angle; for convenience of configuration, its initial position is usually set to (0,0,0) and its initial angle to (0,0,0). For any two adjacent intersection positions, model parameters such as the length, width, height, placement position and rotation angle of the cube model can be configured according to those two positions, so that the cube model becomes a collision body, and the collision bodies are used to form the air wall.
In one embodiment, to ensure that the air wall composed of collision volumes cannot be passed through, a collision volume component may be added to the cube model. Specifically, a collider component can be configured for the cube model by calling the Unity physics engine, so that the collision volume and the air wall generated from the cube model cannot be penetrated. After the collider component is added to the cube model, the collision body configured from the cube model, and the air wall composed of such collision bodies, block the map boundary and prevent a player from walking into a restricted area.
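The check that such a collision volume performs can be pictured with the conceptual sketch below: a candidate position inside the box is rejected, so the player cannot step through the wall. This only illustrates the idea; in practice the collider component attached through the physics engine performs the test, and the box extents here are made-up example values.

```python
def blocks_movement(box_min, box_max, candidate_pos):
    """Conceptual collision-volume test: is the candidate position inside the box?
    If so, the movement is blocked (illustrative stand-in for an engine collider)."""
    return all(lo <= p <= hi for lo, hi, p in zip(box_min, box_max, candidate_pos))

# A character trying to step into a thin, tall wall volume is blocked.
assert blocks_movement((-0.1, 0.0, -5.0), (0.1, 3.0, 5.0), (0.0, 1.0, 0.0))
```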
In step S108, a plurality of collision bodies having consecutive intersection positions form a closed air wall.
According to the above embodiment, on the map of the virtual scene, one collision volume is generated between every two adjacent intersection positions, and a plurality of collision volumes with consecutive intersection positions are combined to obtain the closed air wall.
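A minimal, self-contained sketch of this pairing is shown below: assumed example intersection points are grouped into consecutive segments, with the final segment wrapping back to the first point so that the resulting air wall is closed.

```python
# Assumed example intersection positions (three-dimensional position vectors).
points = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (10.0, 0.0, 8.0), (0.0, 0.0, 8.0)]

# One collision body is configured between every two adjacent intersection positions;
# the modulo index closes the loop so the air wall is sealed.
segments = [(points[i], points[(i + 1) % len(points)]) for i in range(len(points))]
for start, end in segments:
    print("collision body between", start, "and", end)
```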
In the method for generating an air wall in a virtual environment provided by this embodiment, a virtual ray perpendicular to the user interface is emitted from the click position in response to the user's click operation; the positions where the virtual rays intersect the boundary region of the virtual scene are detected to obtain a plurality of intersection positions; for every two adjacent intersection positions, the model parameters of a preset cube model are then configured according to those positions to generate a collision body; and finally a closed air wall is formed from a plurality of collision bodies with consecutive intersection positions. With this technical solution, the intersection positions on the boundary region of the three-dimensional world can be located quickly by ray collision detection, and the model parameters of the cube model can be configured automatically from the intersection positions, thereby producing the collision bodies and the air wall they form. Throughout the process, the user does not need to manually create or adjust the cube model, which effectively improves the efficiency of creating air walls.
With respect to step S106, a detailed description is given below of an embodiment in which model parameters of a preset cube model are configured according to the intersection point position to generate a collision volume.
In this embodiment, the model parameters include the placement position of the cube model; correspondingly, configuring the placement position of the preset cube model according to the intersection positions may include:
configuring the center of two adjacent intersection positions as the placement position of the cube model, and placing the cube model at that placement position.
Specifically, the placement position Pos of the cube model in three-dimensional space is half the sum of the three-dimensional position vectors of two adjacent intersection positions p_i and p_{i+1}:

Pos = (p_i + p_{i+1}) / 2

In three-dimensional space, the x axis of the coordinate system represents the wall thickness, the y axis represents the wall height, and the z axis represents the wall length. In the formula above, p_i(x) and p_{i+1}(x) denote the x-axis coordinates of the intersection positions p_i and p_{i+1}, p_i(y) and p_{i+1}(y) denote their y-axis coordinates, and p_i(z) and p_{i+1}(z) denote their z-axis coordinates; Pos is obtained component-wise as the average of these coordinates.
When the cube model is placed at the placement position Pos, an anchor point position (such as a central point) preset by the cube model can be used as a reference, so that the anchor point position coincides with the placement position Pos.
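A short sketch of the placement-position calculation, directly following the formula above (the coordinates are assumed example values):

```python
import numpy as np

def placement_position(p_i, p_next):
    """Placement position Pos of the cube model: half the sum (i.e. the midpoint)
    of the three-dimensional position vectors of two adjacent intersection positions."""
    return (np.asarray(p_i, float) + np.asarray(p_next, float)) / 2.0

print(placement_position((0, 0, 0), (4, 0, 2)))   # -> [2. 0. 1.]
```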
In this embodiment, the model parameters include the rotation angle of the cube model; correspondingly, configuring the rotation angle of the preset cube model according to the intersection positions may include:
determining the placement direction of the cube model in the virtual scene according to the three-dimensional position vectors of the two adjacent intersection positions.
Specifically, the placement direction Dir of the cube model in the three-dimensional virtual scene is the difference of the three-dimensional position vectors of two adjacent intersection positions p_i and p_{i+1}:

Dir = p_{i+1} − p_i
A rotation angle of the cube model is then generated from a quaternion and the placement direction; the central axis of the cube model in the height direction is determined as a first rotation axis; and the cube model is rotated about the first rotation axis from a preset original angle to the rotation angle.
That is, the rotation angle needed by the cube model in three-dimensional space is calculated from the quaternion and the placement direction Dir of the cube model, and the cube model is rotated about its height direction by this rotation angle, while its width direction and length direction remain unchanged.
Quaternions can express rotation about any vector axis, and compared with Euler-angle rotation and rotation-matrix methods they are more efficient and more flexible to operate with. A quaternion consists of a real part and three imaginary units i, j and k satisfying i² = j² = k² = −1; every quaternion is a linear combination of 1, i, j and k, i.e. a quaternion can generally be expressed as a + bi + cj + dk, where a, b, c and d are real numbers. Geometrically, i, j and k can each be understood as a rotation: the i rotation rotates the Z axis toward the positive Y axis in the plane containing the Z and Y axes, the j rotation rotates the X axis toward the positive Z axis in the plane containing the X and Z axes, the k rotation rotates the Y axis toward the positive X axis in the plane containing the Y and X axes, and −i, −j and −k denote the respective reverse rotations.
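As one common way to realize the rotation described above, the sketch below derives a yaw angle from Dir and packs it into a unit quaternion about the height (y) axis; the (w, x, y, z) component order and the assumption that the wall rotates only about the vertical axis are conventions chosen for this example.

```python
import math

def yaw_quaternion_from_direction(p_i, p_next):
    """Quaternion rotating the cube model about its height (y) axis so that its
    length direction lines up with Dir = p_{i+1} - p_i (vertical component ignored)."""
    dx = p_next[0] - p_i[0]
    dz = p_next[2] - p_i[2]
    yaw = math.atan2(dx, dz)              # angle from the +z axis toward the +x axis
    half = yaw / 2.0
    # Unit quaternion (w, x, y, z) for a rotation of `yaw` radians about the y axis.
    return (math.cos(half), 0.0, math.sin(half), 0.0)
```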
In this embodiment, the model parameters include the wall size of the cube model, which comprises the wall length, the wall thickness and the wall height; correspondingly, when the wall size of the preset cube model is configured according to the intersection positions, the wall length, the wall thickness and the wall height all need to be configured by scaling, as described below.
In the length direction, the cube model needs to span the distance between two intersection positions, and this distance is not fixed for an arbitrary pair of intersection positions. Therefore, this embodiment scales the wall length so that the scaled wall length matches the distance between the two intersection positions.
In this embodiment, the preset original length of the cube model is obtained; a scaling ratio is determined from the distance between two adjacent intersection positions and the original length, where the distance between the two intersection positions is the modulus of the placement-direction vector Dir. The cube model is then scaled from the original length to the wall length according to this ratio; the scaled wall length equals the distance between the two intersection positions, so the cube model exactly fills the space between them.
In this embodiment, the wall height may be a preset height value, to which the cube model is configured in the height direction. The wall thickness may likewise be a preset thickness value, to which the cube model is configured in the thickness direction; alternatively, a thickness scaling ratio may be preset, and the initialized original wall thickness of the cube model is scaled according to this preset ratio.
Configuring the wall size of the cube model in this way yields a collision body with the configured wall length, wall thickness and wall height. So that the collision bodies together form a complete air wall, the wall thickness and wall height are equal across different collision bodies, while the wall length matches the distance between intersection positions and may therefore differ from one collision body to another.
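The wall-size configuration can be summarized in the sketch below, which assumes the preset cube model is a unit cube so that the preset height and thickness values can be used directly as scale factors; the default values are assumptions made for the example.

```python
import math

def wall_scale(p_i, p_next, original_length=1.0, wall_height=3.0, wall_thickness=0.2):
    """Scale factors (thickness, height, length) for a unit cube model: the length
    axis is scaled so the wall exactly spans the two intersection positions, while
    height and thickness use preset values as described above."""
    dir_vec = [b - a for a, b in zip(p_i, p_next)]
    length = math.sqrt(sum(c * c for c in dir_vec))   # modulus of the direction vector Dir
    # Order follows the text: x = wall thickness, y = wall height, z = wall length.
    return (wall_thickness, wall_height, length / original_length)
```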
It can be understood that, in the above embodiments in which the model parameters of the cube model are configured, the configuration order of the model parameters such as the placement position, the wall size (length, thickness, and height), and the rotation angle is not limited, and different configuration orders do not affect the collision volume generated after configuration.
According to the above embodiment, a collision volume is created between each pair of adjacent intersection positions, yielding an air wall spanning the consecutive intersection positions that a player cannot pass through; an example of such an air wall is shown in fig. 3.
In a virtual scene, when a player collides with an air wall, a collision effect may be displayed at the collision location to indicate that the area ahead is impassable. Based on this, the present embodiment may provide a method including:
(I) in response to a click operation of the user, emitting a virtual ray perpendicular to the user interface from the click position of the click operation;
(II) detecting the positions where the virtual rays intersect the boundary region of the virtual scene to obtain a plurality of intersection positions;
(III) for every two adjacent intersection positions, configuring model parameters of a preset cube model according to the intersection positions to generate a collision body;
(IV) forming a closed air wall from a plurality of collision bodies with consecutive intersection positions;
(V) when a target object collides with the air wall in the virtual scene, displaying a preset collision effect at the corresponding collision position on the air wall.
An embodiment of displaying the preset collision effect at the corresponding collision position on the air wall is described below.
First, the target collision body in the air wall that collides with the target object is acquired, together with its anchor point position and the first direction in which it faces; the target collision body is a cube having a target wall length, a target wall thickness and a target wall height. When a target object collides with the air wall in the virtual scene, the target collision body at the collision position in the air wall is determined, along with its anchor point position and the first direction of its orientation; the anchor point position is the placement position Pos of the target collision body, and the first direction is its placement direction Dir. The target object is, for example, a character controlled by a player, or a virtual object that can move on the ground, such as a vehicle.
A second direction, in which the target object moves toward the target collision body, is then determined from the object position of the target object and the anchor point position. Specifically, the difference between the three-dimensional position vector of the anchor point position and that of the object position is taken as a reference direction from the target object toward the air wall (i.e. the target collision body); since the target object does not move in the height direction when approaching the air wall, the height component of the reference direction is set to 0, giving the second direction in which the target object moves toward the target collision body.
An angle is formed between the first direction, in which the target collision body faces, and the second direction, in which the target object moves toward it; a first included angle between the first direction and the second direction is thus determined, and this first included angle is acute. The two directions form a pair of supplementary included angles, and the acute one is taken as the first included angle in this embodiment.
A second included angle is determined from the tangent formed by the target wall thickness and the target wall length on the bottom surface of the target collision body. Specifically, the ratio of the target wall thickness to the target wall length on the rectangular bottom surface may first be computed as a tangent value, and the second included angle is then obtained from this tangent value.
Next, a patch pre-configured on the target collision body is rotated according to the first included angle and the second included angle; the patch is used to bind the collision effect and lies completely flush against the air wall.
In this embodiment, when the first included angle is larger than the second included angle, the central axis of the target collision body along the target wall thickness direction is determined as a second rotation axis, and the patch pre-configured on the target collision body is rotated perpendicularly about the second rotation axis; the central axis of the target collision body along the target wall length direction is then determined as a third rotation axis, and the patch is rotated perpendicularly about the third rotation axis.
When the first included angle is not larger than the second included angle, the patch pre-configured on the target collision body is rotated perpendicularly about the second rotation axis only.
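The angle comparison above can be sketched as follows; the helper works in the horizontal plane, and the function name, parameters and returned descriptions are assumptions made for this example rather than part of the described scheme.

```python
import math

def patch_rotation_case(anchor_pos, object_pos, wall_dir, wall_length, wall_thickness):
    """Decide how to rotate the effect patch when a target object hits a collision
    body, following the first/second included-angle comparison described above."""
    # Second direction: from the target object toward the anchor point, height set to 0.
    to_wall = (anchor_pos[0] - object_pos[0], 0.0, anchor_pos[2] - object_pos[2])
    # First included angle: acute angle between the wall's facing direction and to_wall.
    dot = wall_dir[0] * to_wall[0] + wall_dir[2] * to_wall[2]
    norm = math.hypot(wall_dir[0], wall_dir[2]) * math.hypot(to_wall[0], to_wall[2])
    first_angle = math.acos(max(-1.0, min(1.0, abs(dot) / norm)))
    # Second included angle: from the tangent of wall thickness over wall length.
    second_angle = math.atan(wall_thickness / wall_length)
    if first_angle > second_angle:
        return "rotate patch about the thickness axis, then about the length axis"
    return "rotate patch about the thickness axis only"
```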
After the patch is rotated in this manner, the collision effect bound to the patch rotates with it, and the collision effect bound to the rotated patch is then displayed. In the example of fig. 4, the collision effect is, for example, a grid presented as a particle effect.
In summary, with the method for generating an air wall in a virtual environment provided by this embodiment, model parameters of the cube model such as the placement position, rotation angle and wall size can be configured automatically from the intersection positions; the user does not need to manually create or adjust the cube model, the creation is fully automated, and the efficiency of creating air walls is effectively improved. When a target object collides with the air wall in the virtual scene, the preset collision effect is displayed at the corresponding collision position, which both prompts the player in time and adds visual interest.
Fig. 5 is a schematic structural diagram of an apparatus for generating an air wall in a virtual environment according to an embodiment of the present disclosure, where the apparatus may be implemented by software and/or hardware for implementing a method for generating an air wall in a virtual environment. Referring to fig. 5, an apparatus 500 for generating an air wall in a virtual environment includes:
a ray emitting module 502, configured to emit, in response to a click operation of a user, a virtual ray perpendicular to the user interface from the click position of the click operation;
an intersection point detection module 504, configured to detect an intersection point position where the virtual ray intersects with a boundary region in a virtual scene, to obtain multiple intersection point positions;
a parameter configuration module 506, configured to configure a model parameter of a preset cube model between every two adjacent intersection points according to the intersection points, so as to generate a collision volume;
and the combination module 508 is used for combining a plurality of collision bodies with continuous intersection points into a closed air wall.
In one embodiment, the model parameters include a placement position of the cube model, and the parameter configuration module 506 is further configured to:
configuring the center of two adjacent intersection positions as the placement position of the cube model;
placing the cube model at the placement position,
wherein the model parameters further include a wall length of the cube model, and the parameter configuration module 506 is further configured to:
acquiring an original length preset by the cube model;
determining a scaling ratio according to the length between two adjacent intersection points and the original length;
scaling the cube model from the original length to the wall length according to the scaling.
In one embodiment, the model parameters include a rotation angle of the cube model, and the parameter configuration module 506 is further configured to:
determining the placing direction of the cube model in the virtual scene according to the three-dimensional position vectors of the two adjacent intersection points;
generating a rotation angle of the cube model according to the quaternion and the placing direction;
determining a central axis of the cube model in a height direction as a first rotation axis;
and rotating the cube model about the first rotation axis from a preset original angle to the rotation angle.
In one embodiment, the apparatus 500 for generating an air wall in a virtual environment further includes:
and the collision display module is used for displaying a preset collision effect at a corresponding collision position on the air wall when the target object collides with the air wall in the virtual scene.
In one embodiment, the collision display module is further configured to:
acquiring the target collision body in the air wall that collides with the target object, together with the anchor point position of the target collision body and the first direction in which it faces; wherein the target collision body is a cube having a target wall length, a target wall thickness, and a target wall height;
determining a second direction in which the target object moves toward the target collision volume based on the object position of the target object and the anchor point position;
determining a first included angle between the first direction and the second direction; wherein the first included angle is an acute angle;
determining a second included angle according to the tangent value between the thickness of the target wall body and the length of the target wall body in the bottom surface of the target collision body;
rotating a patch preset and configured on the target collision body according to the first included angle and the second included angle; wherein the patch is used for binding the collision effect;
and displaying the collision effect bound to the rotated patch.
In one embodiment, the collision display module is further configured to:
determining a central axis of the target collision body along the target wall thickness direction as a second rotation axis when the first included angle is larger than the second included angle, and rotating the patch pre-configured on the target collision body perpendicularly about the second rotation axis;
determining a central axis of the target collision body along the target wall length direction as a third rotation axis, and rotating the patch perpendicularly about the third rotation axis;
and, when the first included angle is not larger than the second included angle, rotating the patch pre-configured on the target collision body perpendicularly about the second rotation axis.
In one embodiment, the apparatus 500 for generating an air wall in a virtual environment further includes:
and the component configuration module is used for configuring a collider component for the cube model by calling the Unity physics engine, so that the collision body and the air wall generated from the cube model cannot be passed through.
The apparatus provided by this embodiment has the same implementation principle and technical effect as the method embodiments; for brevity, where the apparatus embodiment does not mention a detail, reference may be made to the corresponding content in the method embodiments.
An exemplary embodiment of the present disclosure also provides an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor. The memory stores a computer program executable by the at least one processor; when executed by the at least one processor, the computer program causes the electronic device to perform a method according to an embodiment of the present disclosure.
The exemplary embodiments of the present disclosure also provide a computer program product comprising a computer program, wherein the computer program, when executed by a processor of a computer, is adapted to cause the computer to perform a method according to an embodiment of the present disclosure.
Referring to fig. 6, a block diagram of an electronic device 600, which may be a server or a client of the present disclosure and is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. The electronic device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the electronic device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 can also be stored. The calculation unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Various components in the electronic device 600 are connected to the I/O interface 605, including: an input unit 606, an output unit 607, a storage unit 608, and a communication unit 609. The input unit 606 may be any type of device capable of inputting information to the electronic device 600, and the input unit 606 may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device. Output unit 607 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 608 may include, but is not limited to, a magnetic disk, an optical disk. The communication unit 609 allows the electronic device 600 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication transceiver, and/or a chipset, such as a bluetooth (TM) device, a WiFi device, a WiMax device, a cellular communication device, and/or the like.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 601 performs the respective methods and processes described above. For example, in some embodiments, the method of air wall generation in a virtual environment may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 600 via the ROM 602 and/or the communication unit 609. In some embodiments, the computing unit 601 may be configured to perform the method of generating an air wall in a virtual environment by any other suitable means (e.g., by means of firmware).
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/acts specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As used in this disclosure, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The foregoing are merely exemplary embodiments of the present disclosure, which enable those skilled in the art to understand or practice the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for generating an air wall in a virtual environment is characterized by comprising the following steps:
in response to a click operation of a user, emitting a virtual ray perpendicular to a user interface from the click position of the click operation;
detecting positions where the virtual rays intersect a boundary region of a virtual scene to obtain a plurality of intersection positions;
for every two adjacent intersection positions, configuring model parameters of a preset cube model according to the intersection positions to generate a collision body;
and forming a closed air wall from a plurality of collision bodies whose intersection positions are consecutive.
2. The method according to claim 1, wherein the model parameters include a placement position of the cube model, and the configuring the model parameters of the preset cube model according to the intersection position includes:
configuring the central position of two adjacent intersection points as the placing position of the cube model;
placing the cube model at the placement position,
the model parameters further include the wall length of the cube model, and the configuration of the preset model parameters of the cube model according to the intersection point position includes:
acquiring an original length preset by the cube model;
determining a scaling ratio according to the length between two adjacent intersection points and the original length;
scaling the cube model from the original length to the wall length according to the scaling.
3. The method according to claim 1 or 2, wherein the model parameters comprise a rotation angle of the cube model, and the configuring of the parameters of the preset cube model according to the intersection positions comprises:
determining the placing direction of the cube model in the virtual scene according to the three-dimensional position vectors of the two adjacent intersection points;
generating a rotation angle of the cube model according to the quaternion and the placing direction;
determining a central axis of the cube model in a height direction as a first rotation axis;
and rotating the cube model about the first rotation axis from a preset original angle to the rotation angle.
4. The method of claim 1, further comprising:
when the target object collides with the air wall in the virtual scene, a preset collision effect is displayed at a corresponding collision position on the air wall.
5. The method of claim 4, wherein displaying the preset collision effect at the corresponding collision position on the air wall comprises:
acquiring a target collision body which collides with the target object in the air wall, and the anchor point position of the target collision body and the first direction of the target collision body; wherein the target collision volume is a cube having a target wall length, a target wall thickness, and a target wall height;
determining a second direction in which the target object moves toward the target collision volume based on the object position of the target object and the anchor point position;
determining a first included angle between the first direction and the second direction; wherein the first included angle is an acute angle;
determining a second included angle according to a tangent value between the thickness of the target wall and the length of the target wall in the bottom surface of the target collision body;
rotating a patch preset and configured on the target collision body according to the first included angle and the second included angle; wherein the patch is used for binding the collision effect;
and displaying the collision effect bound to the rotated patch.
6. The method of claim 5, wherein rotating the patch pre-configured on the target collision body according to the first included angle and the second included angle comprises:
determining a central axis of the target collision body along the target wall thickness direction as a second rotation axis when the first included angle is larger than the second included angle, and rotating the patch pre-configured on the target collision body perpendicularly about the second rotation axis;
determining a central axis of the target collision body along the target wall length direction as a third rotation axis, and rotating the patch perpendicularly about the third rotation axis;
and, when the first included angle is not larger than the second included angle, rotating the patch pre-configured on the target collision body perpendicularly about the second rotation axis.
7. The method of claim 1, further comprising:
configuring a collider component for the cube model by invoking the Unity physics engine, such that the collision volume and the air wall generated from the cube model cannot be passed through.
8. An apparatus for generating an air wall in a virtual environment, comprising:
a ray emission module, configured to emit, in response to a click operation of a user, a virtual ray perpendicular to a user interface from the click position of the click operation;
an intersection point detection module, configured to detect the positions where the virtual rays intersect the boundary region of the virtual scene, to obtain a plurality of intersection positions;
a parameter configuration module, configured to configure, for every two adjacent intersection positions, model parameters of a preset cube model according to the intersection positions, to generate a collision body;
and a combination module, configured to combine a plurality of collision bodies with consecutive intersection positions into a closed air wall.
9. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the method of any one of claims 1-7.
10. A non-transitory computer readable storage medium having stored thereon computer instructions, which, when run on a terminal device, cause the terminal device to implement the method of any one of claims 1-7.
CN202211415182.XA 2022-11-11 2022-11-11 Method, device, equipment and medium for generating air wall in virtual environment Pending CN115661359A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211415182.XA CN115661359A (en) 2022-11-11 2022-11-11 Method, device, equipment and medium for generating air wall in virtual environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211415182.XA CN115661359A (en) 2022-11-11 2022-11-11 Method, device, equipment and medium for generating air wall in virtual environment

Publications (1)

Publication Number Publication Date
CN115661359A 2023-01-31

Family

ID=85020831

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211415182.XA Pending CN115661359A (en) 2022-11-11 2022-11-11 Method, device, equipment and medium for generating air wall in virtual environment

Country Status (1)

Country Link
CN (1) CN115661359A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination