WO2024074016A1 - Deformation control method and apparatus for a virtual model, and electronic device - Google Patents

Deformation control method and apparatus for a virtual model, and electronic device

Info

Publication number
WO2024074016A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
particle
target
deformation
virtual
Prior art date
Application number
PCT/CN2023/082823
Other languages
English (en)
French (fr)
Inventor
刘忠源
Original Assignee
网易(杭州)网络有限公司
Priority date
Filing date
Publication date
Application filed by 网易(杭州)网络有限公司
Publication of WO2024074016A1

Classifications

    • G06F 30/20: Computer-aided design [CAD]; design optimisation, verification or simulation
    • G06T 15/20: 3D [three-dimensional] image rendering; geometric effects; perspective computation
    • G06T 17/20: Three-dimensional [3D] modelling; finite element generation, e.g. wire-frame surface description, tesselation
    • G06T 19/20: Manipulating 3D models or images for computer graphics; editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Definitions

  • the present disclosure relates to the technical field of model rendering, and in particular to a deformation control method, device and electronic device for a virtual model.
  • a rigid body collision body is usually used to simulate the state of the model after the collision.
  • the model will produce position changes and posture changes after the collision, such as rolling, rotating, etc.; but it is difficult to simulate the deformation of the model after the collision, such as the collapse, bulge, and fragmentation of the model, resulting in a low degree of realism in this simulation method.
  • the model can also be divided into finite elements to obtain a large number of voxel sets.
  • the position of each voxel in the model is calculated according to various parameters such as the collision force and direction, thereby obtaining the deformed model.
  • this method has a huge amount of calculation, requires a lot of computing time and computing resources, and is difficult to apply to real-time rendered virtual scenes.
  • an embodiment of the present disclosure provides a deformation control method for a virtual model, the method comprising: generating a target model located in a virtual scene and a particle model corresponding to the target model; wherein the particle model and the target model are overlapped and arranged in the virtual scene; the particle model comprises a plurality of particles, and the plurality of particles are connected by virtual springs; a shape composed of the plurality of particles matches the shape of the target model; the number of particles in the particle model is less than the number of mesh vertices in the target model; a preset mapping relationship exists between the particles and the mesh vertices; in response to the particle model detecting a collision event, determining a first position of the particles in the particle model after the collision event occurs based on collision parameters of the collision event and a deformation threshold of the virtual spring; determining a first rendering parameter of the mesh vertices in the target model based on the first position of the particles in the particle model and the mapping relationship between the particles and the mesh vertices; and rendering the mesh vertices in the target model using the first rendering parameter to obtain the deformed target model.
  • the method further includes: in response to the target model being located in the virtual scene, detecting, through the particles in the particle model, whether the particles collide; and determining that a collision event is detected if at least one particle in the particle model collides.
  • the step of determining the first position of the particle in the particle model after the collision event occurs based on the collision parameters of the collision event and the deformation threshold of the virtual spring includes: in response to the particle model detecting a collision event, obtaining the collision parameters of the collision event; wherein the collision parameters include: a target particle that collides, a collision direction, and a plurality of collision forces; based on the collision parameters, controlling the displacement of each particle in the particle model, and during the displacement of the particle, monitoring the deformation amount of the virtual spring connected between the particles; and determining the first position of the particle based on the deformation amount and the deformation threshold.
  • the step of determining the first position of the mass point based on the deformation amount and the deformation threshold includes: if the deformation amount does not exceed the deformation threshold of the virtual spring, controlling the virtual spring to rebound, and determining the first position of the mass point based on the length of the virtual spring after rebound; if the deformation amount exceeds the deformation threshold of the virtual spring, determining the length of the virtual spring after deformation, and determining the first position of the mass point based on the length of the virtual spring after deformation.
  • the method further includes: if the deformation exceeds a breaking threshold of the virtual spring, removing the virtual spring to control the mass points at both ends of the virtual spring to separate from each other, thereby obtaining a first position of the mass points.
  • the mapping relationship between the particle and the mesh vertices is obtained in the following manner: the target model and the particle model are overlapped and placed in a preset world coordinate system; for the mesh vertices in the target model, a specified number of target particle points are determined from the particle model, a local coordinate system is established based on the specified number of target particle points, and a first transformation relationship between the local coordinate system and the world coordinate system is determined; based on the first transformation relationship, the initial rendering parameters of the mesh vertices in the world coordinate system are transformed into the local coordinate system to obtain local rendering parameters; wherein the initial rendering parameters include: multiple types of position parameters, normal parameters, and tangent parameters of the mesh vertices; the mesh vertices, target particle points, and local rendering parameters are determined as the mapping relationship between the particle and the mesh vertices.
  • the step of determining a specified number of target particles from the particle model includes: for mesh vertices in the target model, calculating the Euclidean distance between the mesh vertices and at least some of the particles in the particle model; sorting at least some of the particles in order of Euclidean distance from small to large to obtain a particle sequence; and determining the first three particles in the particle sequence as target particles.
  • the step of establishing a local coordinate system based on a specified number of target particles includes: taking the first particle among the target particles as the origin of the local coordinate system; taking the direction of the line connecting the first particle and the second particle among the target particles as the first axis of the local coordinate system; taking the direction corresponding to the vector product of the direction of the line connecting the third particle and the first particle among the target particles and the first axis as the second axis of the local coordinate system.
  • the direction perpendicular to the first and second axes is taken as the third axis of the local coordinate system to obtain the local coordinate system.
  • the step of determining the first rendering parameters of the mesh vertices in the target model includes: obtaining a specified number of target particles corresponding to the mesh vertices from the mapping relationship; establishing a deformation coordinate system based on the first positions of the specified number of target particles, and determining a second transformation relationship between the deformation coordinate system and a preset world coordinate system; wherein the world coordinate system is established in a virtual scene where the target model and the particle model are located; obtaining local rendering parameters corresponding to the mesh vertices from the mapping relationship; and determining the first rendering parameters of the mesh vertices in the world coordinate system based on the second transformation relationship and the local rendering parameters.
  • a deformation control device for a virtual model comprising: a model generation module, used to generate a target model located in a virtual scene and a particle model corresponding to the target model, wherein the particle model and the target model are overlapped and arranged in the virtual scene; the particle model comprises a plurality of particles, and the plurality of particles are connected by virtual springs; the shape composed of the plurality of particles matches the shape of the target model; the number of particles in the particle model is less than the number of mesh vertices in the target model; there is a preset mapping relationship between the particles and the mesh vertices; a position determination module, used to determine, in response to the particle model detecting a collision event, a first position of the particles in the particle model after the collision event occurs based on the collision parameters of the collision event and the deformation threshold of the virtual spring; a parameter determination module, used to determine a first rendering parameter of the mesh vertices in the target model based on the first position of the particles in the particle model and the mapping relationship between the particles and the mesh vertices, and to render the mesh vertices in the target model using the first rendering parameter to obtain the deformed target model.
  • an electronic device including a processor and a memory, wherein the memory stores machine executable instructions that can be executed by the processor, and the processor executes the machine executable instructions to implement the above-mentioned virtual model deformation control method.
  • a machine-readable storage medium which stores machine-executable instructions.
  • the machine-executable instructions When the machine-executable instructions are called and executed by a processor, the machine-executable instructions prompt the processor to implement the above-mentioned virtual model deformation control method.
  • the deformation control method, device and electronic device of the virtual model of the embodiment of the present disclosure generate a target model located in a virtual scene and a particle model corresponding to the target model; wherein the particle model and the target model are overlapped and arranged in the virtual scene; the particle model includes multiple particles, and the multiple particles are connected by virtual springs; the shape composed of the multiple particles matches the shape of the target model; the number of particles in the particle model is less than the number of mesh vertices in the target model; there is a preset mapping relationship between the particles and the mesh vertices; in response to the particle model detecting a collision event, based on the collision parameters of the collision event and the deformation threshold of the virtual spring, the first position of the particles in the particle model after the collision event occurs is determined; based on the first position of the particles in the particle model and the mapping relationship between the particles and the mesh vertices, the first rendering parameters of the mesh vertices in the target model are determined; the mesh vertices in the target model are rendered by the first rendering parameters to obtain the deformed target model.
  • a particle model is set for the target model, which includes a small number of particles, and the particles are connected by virtual springs.
  • the collision event is detected by the particle model. If a collision occurs, the displacement between the particles is calculated, and the rendering parameters of the deformed target model are determined by the deformation of the virtual springs, so as to render the deformed target model.
  • This method has a small amount of calculation for calculating the deformation, is efficient, and the deformation effect is realistic, so it is suitable for real-time rendering of virtual scenes.
  • FIG1 is a flow chart of a method for controlling deformation of a virtual model provided by an embodiment of the present disclosure
  • FIG2 is a schematic diagram of a mass point model of a vehicle model provided by an embodiment of the present disclosure
  • FIG3 is a schematic diagram of a vehicle model provided by an embodiment of the present disclosure.
  • FIG4 is a schematic structural diagram of a deformation control device for a virtual model provided by an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of an electronic device provided in an embodiment of the present disclosure.
  • the deformation simulation of virtual models can enrich the visual effects of virtual models and improve the user's interactive experience.
  • most virtual models are non-deformable rigid bodies. Virtual models can be blocked by obstacles in the virtual scene, but the virtual models themselves do not deform.
  • a rigid body collision body will be configured for the vehicle model.
  • when the game is running, if a vehicle collides, then when performing physical simulation on the vehicle model, the vehicle model only participates in the rigid body simulation process to obtain the position and rotation of the rigid body, which are then used to render and control the position and rotation of the vehicle model; the deformation effect of the vehicle model caused by the collision cannot be rendered.
  • a finite element method can be used to simulate the deformation of a virtual model.
  • the virtual model is segmented by finite elements to obtain a set of tetrahedral or hexahedral voxels.
  • the elastic matrix is assembled for each voxel in the model to obtain the elastic equation, and the elastic equation is solved to obtain the position of each voxel, and then the deformation data of the virtual model is obtained.
  • a virtual model is usually divided into a large number of voxels. Calculation for each voxel requires a lot of computing resources and time, which makes it difficult to apply to real-time rendering scenarios such as games.
  • the embodiments of the present disclosure provide a method, device and electronic device for controlling the deformation of a virtual model.
  • the technology can be applied to game scenes or other types of virtual scenes, to the deformation control of virtual models, and in particular to real-time deformation rendering of virtual models.
  • the deformation control method of the virtual model can be applied to a server, a terminal device, or a cloud server.
  • the method includes the following steps:
  • Step S102 generating a target model located in a virtual scene and a particle model corresponding to the target model; wherein the particle model and the target model are overlapped and set in the virtual scene; the particle model includes a plurality of particles, and the plurality of particles are connected by virtual springs; the shape composed of the plurality of particles matches the shape of the target model; the number of particles in the particle model is less than the number of mesh vertices in the target model; and there is a preset mapping relationship between the particles and the mesh vertices;
  • the target model can be a model of a person, a vehicle, a still life, an animal, or any other model with deformable properties.
  • a particle model that matches the target model also needs to be produced.
  • the order in which the particle model and the target model are produced is not limited.
  • particles are connected by virtual springs.
  • a particle is usually connected to one or more surrounding particles.
  • the particles and virtual springs need to be constructed to obtain a relatively stable structure. Since a triangle is a relatively stable shape, the particles and virtual springs are usually arranged to form triangles, so the above-mentioned particle model will include multiple triangles.
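  • As a non-limiting illustration, the particle-and-spring structure described above could be represented roughly as follows; the class names, fields and connection logic are assumptions made for this sketch rather than part of the disclosure.

```python
# Illustrative sketch (not from the disclosure): particles connected by virtual
# springs, where each spring records a rest length plus the deformation and
# fracture thresholds discussed below.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Particle:
    position: np.ndarray          # current position in world space
    inv_mass: float = 1.0         # 0.0 would make the particle immovable

@dataclass
class Spring:
    i: int                        # index of the first connected particle
    j: int                        # index of the second connected particle
    rest_length: float            # static (unstressed) length
    deform_threshold: float       # plastic/deformation threshold
    break_threshold: float        # fracture threshold
    broken: bool = False

@dataclass
class ParticleModel:
    particles: list
    springs: list = field(default_factory=list)

    def connect(self, i, j, deform_threshold, break_threshold):
        # The rest length is the current distance between the two particles.
        rest = float(np.linalg.norm(self.particles[i].position -
                                    self.particles[j].position))
        self.springs.append(Spring(i, j, rest, deform_threshold, break_threshold))
```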
  • for a given mass point, which other mass points it is connected to by virtual springs is usually determined according to the shape of the mass point model or the shape of the target model. If a virtual spring is connected between two mass points, the two mass points will maintain a certain elastic distance.
  • the virtual spring can be implemented as a plastic spring, and a deformation threshold and a fracture threshold can be set for the virtual spring. When the mass point model collides, all or part of the mass points in the mass point model will be displaced.
  • when the two mass points connected by a spring are both displaced, or one of them is displaced, the virtual spring will be deformed, and the deformation may specifically be stretching or compression.
  • when the deformation does not reach the deformation threshold, the virtual spring will rebound, for example to a state in which it is neither stretched nor compressed, and the positions of the two mass points remain unchanged; when the deformation reaches the deformation threshold, the virtual spring will not rebound, and the positions of the two mass points will change, thereby simulating the deformation of the model; further, when the deformation reaches the fracture threshold, the virtual spring can be regarded as broken, and the two mass points will separate from each other, thereby simulating the fragmentation of the model.
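  • A minimal sketch of the rebound / plastic-deformation / fracture rules just described, written as a single helper that decides what happens to one spring; the return convention is an illustrative assumption, not the patent's exact algorithm.

```python
def resolve_spring(rest_length, current_length, deform_threshold, break_threshold):
    """Apply the rebound / plastic / fracture rules to one virtual spring.

    Returns (new_rest_length, broken): the spring's static length after this
    step and whether the spring has broken.
    """
    deformation = abs(current_length - rest_length)
    if deformation >= break_threshold:
        # Fracture: the spring is removed and the two mass points separate.
        return rest_length, True
    if deformation >= deform_threshold:
        # Plastic yield: the static length changes permanently, so the mass
        # points keep their displaced positions.
        return current_length, False
    # Elastic case: the spring rebounds, pulling the mass points back toward
    # their original relative positions.
    return rest_length, False
```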
  • the number of particles in the particle model is smaller than the number of mesh vertices in the target model; it is understandable that the more particles in the particle model, the more delicate and realistic the deformation of the target model will be when a collision occurs, but it requires greater computing resources and computing time to calculate the position of each particle and the rendering parameters of each mesh vertex in the target model.
  • This embodiment aims to achieve real-time rendering of simulated deformation. Based on this, the number of particles in the particle model is smaller than the number of mesh vertices in the target model. When a collision event occurs, the positions of a smaller number of particles are calculated to determine the rendering parameters of each mesh vertex in the target model, which can reduce the amount of calculation and calculation time.
  • Figures 2 and 3 are taken as an example, where Figure 2 is a particle model of a vehicle model, and Figure 3 is an example where the target model is a vehicle model.
  • the particle model includes multiple particles. According to the shape of the vehicle model, the particles are connected by virtual springs to obtain a particle model of a vehicle shape with a relatively stable structure. It can be seen that the size and shape of the particle model and the target model match each other. In the virtual scene, the particle model and the target model are set to overlap with each other, but only the target model is displayed, and the particle model is used to detect collision events.
  • the mesh vertices of the vehicle model are about 20,000, and there are about 900 particles in the particle model.
  • the mapping relationship between the mass point and the mesh vertices is pre-set.
  • the rendering parameters of the mesh vertices in the target model are established based on the world coordinate system in the virtual scene. Through this mapping relationship, the rendering parameters of the mesh vertices relative to the mass points can be obtained. After the collision, the position of the mass point changes, but the mapping relationship remains unchanged, that is, the rendering parameters of the mesh vertices relative to the mass point remain unchanged. Based on this, the rendering parameters of the mesh vertices after the position of the mass point changes can be obtained, thereby rendering the deformed target model.
  • the target model is located in the virtual scene.
  • the mass point model overlaps with the target model and is also set in the virtual scene, but the mass point model is not displayed.
  • the mass point model also moves at the same time, that is, the mass point model and the target model are set to overlap in real time.
  • the particle model is used to detect collision events.
  • the target model and the particle model may collide with obstacles and other models in the virtual scene.
  • the size of the particle model can be the same as the size of the target model, or the particle model can be slightly larger than the target model, in which case the detection of collision events will be more sensitive; or the particle model can be slightly smaller than the target model, in which case the detection of collision events will be slower.
  • Step S104 in response to the particle model detecting a collision event, determining a first position of a particle in the particle model after the collision event occurs based on a collision parameter of the collision event and a deformation threshold of the virtual spring;
  • the collision parameters of the above collision events may include the position of the target model where the collision occurs, the collision direction, the collision force, the relative speed between the models where the collision occurs, etc.
  • the displacement of each particle can be calculated through XPBD (Extended Position-Based Dynamics) or other algorithms based on dynamic principles.
  • the displacement of the particles will cause the virtual spring to deform. If the displacement is small, the deformation of the virtual spring does not exceed the deformation threshold. At this time, the virtual spring will rebound, and the first position of the particle will be the same as the initial position before the collision, or will change slightly. If the deformation of the virtual spring exceeds the deformation threshold, the virtual spring will not rebound, and the first position of the particle will be significantly different from the initial position before the collision.
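  • The displacement computation mentioned above could, for example, use an XPBD-style distance-constraint projection per virtual spring; the sketch below is a generic XPBD step with placeholder compliance and time-step values, not the specific solver of the disclosure.

```python
import numpy as np

def project_distance_constraint(p_i, p_j, w_i, w_j, rest_length,
                                lagrange, compliance, dt):
    """One XPBD projection of a distance constraint between two particles.

    p_i, p_j: positions as (3,) arrays; w_i, w_j: inverse masses.
    Returns the corrected positions and the updated Lagrange multiplier.
    """
    delta = p_i - p_j
    length = float(np.linalg.norm(delta))
    if length < 1e-8 or (w_i + w_j) == 0.0:
        return p_i, p_j, lagrange
    n = delta / length
    c = length - rest_length                      # constraint violation
    alpha_tilde = compliance / (dt * dt)          # time-step scaled compliance
    d_lambda = (-c - alpha_tilde * lagrange) / (w_i + w_j + alpha_tilde)
    lagrange += d_lambda
    return p_i + w_i * d_lambda * n, p_j - w_j * d_lambda * n, lagrange
```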
  • Step S106 based on the first position of the particle in the particle model and the mapping relationship between the particle and the mesh vertices, determine the first rendering parameters of the mesh vertices in the target model; render the mesh vertices in the target model using the first rendering parameters to obtain the deformed target model.
  • after the collision, the mass point arrives at the first position. Since there is a relatively fixed mapping relationship between the mass point and the mesh vertex, the first rendering parameters of the mesh vertex after the collision can be obtained based on the first position of the mass point.
  • the first rendering parameters may include the position parameters, normal parameters, tangent parameters, etc. of the mesh vertex after the collision. Then, the mesh vertices in the target model are rendered using the first rendering parameters to obtain the target model deformed by the collision.
  • the deformation control method of the above-mentioned virtual model generates a target model located in a virtual scene and a particle model corresponding to the target model; wherein the particle model and the target model are overlapped and set in the virtual scene; the particle model includes multiple particles, and the multiple particles are connected by virtual springs; the shape composed of the multiple particles matches the shape of the target model; the number of particles in the particle model is less than the number of mesh vertices in the target model; there is a preset mapping relationship between the particles and the mesh vertices; in response to the particle model detecting a collision event, based on the collision parameters of the collision event and the deformation threshold of the virtual spring, determining the first position of the particles in the particle model after the collision event occurs; based on the first position of the particles in the particle model and the mapping relationship between the particles and the mesh vertices, determining the first rendering parameters of the mesh vertices in the target model; rendering the mesh vertices in the target model by the first rendering parameters to obtain the deformed target model
  • a particle model is set for the target model, which includes a small number of particles, and the particles are connected by virtual springs.
  • the collision event is detected by the particle model. If a collision occurs, the displacement between the particles is calculated, and the rendering parameters of the deformed target model are determined by the deformation of the virtual springs, so as to render the deformed target model.
  • This method has a small amount of calculation for calculating the deformation, is efficient, and the deformation effect is realistic, so it is suitable for real-time rendering of virtual scenes.
  • collision events are detected by particles in the particle model. All or part of the particles in the particle model have the function of collision detection. For example, detection rays can be set for these particles to detect whether the particles themselves are in contact or collision with other models in the scene. When the target model is in the virtual scene, these particles detect whether they collide with other models in real time or at regular intervals. When the collision model in the virtual scene is small, it may collide with only one particle in the particle model. At this time, the particle detects a collision event; when the collision model is large, it may collide with multiple particles in the particle model. At this time, multiple particles are detected together to obtain a collision event.
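  • For illustration only, per-particle collision detection could be approximated by testing each particle against simple scene colliders; the sphere-overlap test below is a simplified stand-in for the detection rays mentioned above, with an assumed particle radius.

```python
import numpy as np

def detect_collision_event(particle_positions, colliders, particle_radius=0.05):
    """particle_positions: (N, 3) array; colliders: list of (center, radius) spheres.

    Returns the indices of colliding particles; a non-empty result means a
    collision event has been detected.
    """
    hit_particles = []
    for idx, pos in enumerate(particle_positions):
        for center, radius in colliders:
            if np.linalg.norm(pos - center) <= radius + particle_radius:
                hit_particles.append(idx)   # this particle has collided
                break
    return hit_particles
```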
  • in response to the particle model detecting a collision event, the collision parameters of the collision event are obtained; wherein the collision parameters include: a target particle that collides, a collision direction, and a collision force; based on the collision parameters, each particle in the particle model is controlled to be displaced, and during the displacement of the particles, the deformation amount of the virtual spring connected between the particles is monitored; based on the deformation amount and the deformation threshold, the first position of the particle is determined.
  • the target particle that collides as described above usually has a large displacement.
  • the target particle is connected to the surrounding particles through a virtual spring. Therefore, under the action of the virtual spring, the surrounding particles will also be displaced, and then, other particles connected to the surrounding particles may also be displaced.
  • the displacement of each particle can be calculated by an algorithm based on XPBD or other dynamic principles.
  • the above-mentioned collision direction may affect the range of particles that are displaced in the particle model. The particles far away from the collision direction may not be displaced, or only a small amount of displacement may occur.
  • the above-mentioned collision force may affect the displacement size of the particles and the number of displaced particles. The greater the collision force, the greater the displacement of the particles and the greater the number of displaced particles.
  • the displacement of each mass point is calculated using the principle of dynamics, and the mass point is controlled to move.
  • the movement of the mass point will cause the connected virtual spring to deform.
  • when the deformation of the spring does not exceed the deformation threshold, the virtual spring can rebound completely or partially; when the deformation exceeds the deformation threshold, the virtual spring will not rebound. Therefore, after the collision, the first position of the mass point is related to the deformation and the deformation threshold.
  • the displacement of the mass point and the deformation of the virtual spring affect each other: the greater the displacement of the mass point, the larger the deformation of the virtual spring; and the larger the deformation of the virtual spring, the greater the displacement of the mass point.
  • when the deformation exceeds the deformation threshold, the position of the mass point will not return to the initial position. Generally, the closer a mass point is to the target mass point where the collision occurs, the greater its displacement will be.
  • the virtual springs connected to the colliding target mass point, or those closer to the target mass point, will also have larger deformations.
  • if the deformation amount does not exceed the deformation threshold of the virtual spring, the virtual spring is controlled to rebound, and the first position of the mass point is determined based on the length of the virtual spring after rebound; if the deformation amount exceeds the deformation threshold of the virtual spring, the length of the virtual spring after deformation is determined, and the first position of the mass point is determined based on the length of the virtual spring after deformation.
  • the virtual spring here is also called a plastic spring, because squeezing or stretching will cause the spring to yield and permanently deform; the deformation threshold can also be called a plastic threshold.
  • the static length of the virtual spring changes, and thus, the position of the mass point will also change permanently.
  • the static length can be understood as the length of the virtual spring when no external force is applied.
  • the position of the mass point will change permanently.
  • the mass point will drive the position of the model vertices in the target model to change, thereby controlling the deformation of the target model.
  • the target model may partially break or fragment.
  • if the deformation exceeds the fracture threshold, the virtual spring is removed to control the particles at both ends of the virtual spring to separate from each other, thereby obtaining the first position of the particles.
  • the fracture threshold is usually greater than the aforementioned deformation threshold.
  • the virtual spring is stretched under the drive of the displacement of the particles, generating a deformation. During the continuous stretching process, the deformation will reach the deformation threshold. During the continued stretching process, the deformation will reach the fracture threshold, at which point the virtual spring will break.
  • the virtual spring is compressed, it also has a deformation threshold and a fracture threshold. During the compression process, the deformation of the virtual spring will first reach the deformation threshold. When the compression continues, the deformation will reach the fracture threshold.
  • the relevant algorithms of the aforementioned dynamic principles can also be used to calculate the first position of the mass point when it is not constrained by the virtual spring.
  • This method can simulate the effect of the target model being broken and fractured in the case of a severe collision, for example, the effect of a door falling off in a vehicle model.
  • the first position of each mass point can be saved in a map. Since the number of mass points is small, the data volume of the map is also small.
  • This embodiment aims to control the deformation of the target model through the mass point model. Therefore, in the process of making the target model, it is necessary to set the mapping relationship between the mass points and the mesh vertices.
  • the specific setting method of the mapping relationship is provided below, including the following steps 21 to 24;
  • Step 21 placing the target model and the mass point model in a preset world coordinate system
  • the mesh vertices in the target model and the particles in the particle model all have their own world coordinates.
  • Step 22 for the mesh vertices in the target model, determine a specified number of target particles from the particle model, establish a local coordinate system based on the specified number of target particles, and determine a first conversion relationship between the local coordinate system and the world coordinate system;
  • the above step 22 can be performed for each mesh vertex in the target model. Specifically, a specified number of target mass points can be determined from the surroundings of the mesh vertices. The target mass points are used to establish a local coordinate system to determine the relative relationship between the mesh vertices and the target mass points through the local coordinate system. The specified number of target mass points can be determined according to requirements, for example, two, three or other numbers.
  • the Euclidean distances between the mesh vertices and at least some of the particles in the particle model are calculated; at least some of the particles are sorted in order of Euclidean distance from small to large to obtain a particle sequence; and the first three particles in the particle sequence are determined as target particles.
  • the Euclidean distance between the mesh vertex and each particle can be calculated.
  • the local coordinate system is a three-dimensional coordinate system, three target particles need to be determined. Therefore, the three particles with the smallest Euclidean distance are selected as the target particles of the current mesh vertex.
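  • A brief sketch of the nearest-particle selection described above (an illustrative implementation; the array layout is an assumption):

```python
import numpy as np

def nearest_three_particles(vertex_pos, particle_positions):
    """Return the indices of the three particles closest to a mesh vertex.

    vertex_pos: (3,) array; particle_positions: (N, 3) array of particle
    positions in the world coordinate system.
    """
    distances = np.linalg.norm(particle_positions - vertex_pos, axis=1)
    order = np.argsort(distances)       # ascending Euclidean distance
    return order[:3]                    # target particles n0, n1, n2
```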
  • the local coordinate system usually includes a coordinate origin and multiple axial directions;
  • the coordinate origin is usually a coordinate point in the world coordinate system, based on which the transformation relationship between the origin of the local coordinate system and the world coordinate system can be obtained;
  • the axial direction of the local coordinate system is determined according to the relative position relationship between multiple target particles, and the relative position relationship between the target particles can be determined by the difference vector between the coordinate points of the target particles in the world coordinate system; therefore, the above-mentioned first transformation relationship can include the transformation relationship of the origin between the local coordinate system and the world coordinate system, as well as the transformation relationship of the axial direction.
  • the local coordinate system can be established in the following manner: the first particle among the target particles is used as the origin of the local coordinate system; the direction of the line connecting the first particle and the second particle among the target particles is used as the first axis of the local coordinate system; the direction corresponding to the vector product of the direction of the line connecting the third particle and the first particle among the target particles and the first axis is used as the second axis of the local coordinate system; the direction perpendicular to both the first axis and the second axis is used as the third axis of the local coordinate system to obtain the local coordinate system.
  • three target particles are represented as particle n0, particle n1 and particle n2 respectively;
  • particle n0 is taken as the origin of the local coordinate system;
  • the first axis, the second axis and the third axis are respectively the X axis, the Y axis and the Z axis;
  • the X axis is expressed as Normalize(n1-n0), that is, the coordinate value of particle n0 in the world coordinate system is subtracted from the coordinate value of particle n1 in the world coordinate system to obtain a vector pointing from n0 to n1.
  • the vector is then normalized by the normalize function to obtain a unit vector indicating only the direction, which represents the X-axis direction mentioned above.
  • the Y-axis is expressed as normalize(cross(n2-n0, x)), that is, the vector from n0 to n2 is obtained by n2-n0, the vector product of this vector and the X-axis is calculated by the cross function, and the vector product is then normalized by the normalize function to obtain the Y-axis; the Z-axis is determined as the direction perpendicular to the plane formed by the X-axis and the Y-axis.
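  • The construction just described can be sketched as follows, returning the origin and the three axes of the local coordinate system (an illustrative implementation of the formulas above):

```python
import numpy as np

def build_local_frame(n0, n1, n2):
    """Build a local frame from three target particles n0, n1, n2 ((3,) arrays).

    Returns the origin (n0) and a 3x3 matrix whose columns are the X, Y and Z
    axes: X = normalize(n1 - n0), Y = normalize(cross(n2 - n0, X)), and Z
    perpendicular to both.
    """
    x = n1 - n0
    x = x / np.linalg.norm(x)
    y = np.cross(n2 - n0, x)
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)                  # unit length, since x and y are orthonormal
    return n0, np.column_stack((x, y, z))
```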
  • Step 23 based on the first conversion relationship, convert the initial rendering parameters of the mesh vertex in the world coordinate system into the local coordinate system to obtain local rendering parameters; wherein the initial rendering parameters include: multiple of the position parameters, normal parameters, and tangent parameters of the mesh vertex;
  • the initial rendering parameters here are used to determine the rendering method of the mesh vertices, where the position parameters are used to determine where the mesh vertices are rendered in the world coordinate system, and the normal parameters and tangent parameters are used to render the orientation of the mesh vertices, thereby affecting the lighting effect or other rendering effects of the mesh vertices.
  • the initial rendering parameters are defined in the world coordinate system. In this step, these rendering parameters need to be converted to the local coordinate system so that after the mass point is displaced, the rendering parameters of the mesh vertices change with the position of the mass point, thereby controlling the deformation effect of the target model.
  • the first conversion relationship can be represented in the form of a matrix or an inverse matrix.
  • the local rendering parameters in the local coordinate system can be obtained by multiplying the initial rendering parameters by the first conversion relationship.
  • Step 24 determining the mesh vertices, target mass points and local rendering parameters as a mapping relationship between mass points and mesh vertices.
  • for each mesh vertex in the target model, the target particles corresponding to the mesh vertex, the local coordinate system established from those target particles, and the first transformation relationship between the local coordinate system and the world coordinate system can be obtained, and the initial rendering parameters in the world coordinate system are then transformed based on the first transformation relationship to obtain the local rendering parameters in the local coordinate system. Therefore, each mesh vertex has corresponding target particles and local rendering parameters, which are used as the mapping relationship corresponding to that mesh vertex.
  • the first position of the particle in the particle model is determined, and the mapping relationship between the particle and the mesh vertex is also predetermined.
  • the first rendering parameters of the mesh vertex in the world coordinate system after the collision can be obtained, which is specifically achieved through the following steps 31 to 34.
  • Step 31 obtaining a specified number of target particles corresponding to the mesh vertices from the mapping relationship
  • the target mass point can be obtained from the mapping relationship for each mesh vertex. It should be noted that the target mass points corresponding to adjacent mesh vertices may be the same or partially the same.
  • Step 32 based on the first positions of the specified number of target mass points, a deformation coordinate system is established, and a second transformation relationship between the deformation coordinate system and a preset world coordinate system is determined; wherein the world coordinate system is established in a virtual scene where the target model and the mass point model are located;
  • the position of the target mass point may change.
  • the above-mentioned first position is the first position of the target mass point after the collision event occurs. If the target mass point is displaced during the collision process, the first position is different from the initial position of the target mass point before the collision event occurs; if the target mass point does not change position during the collision process, the first position is the same as the initial position of the target mass point before the collision event occurs.
  • the deformation coordinate system is established using the target particle in the same way as the aforementioned local coordinate system. Considering that the target particle may be displaced, the relative position between the target particles will also change. Based on this, the axis of the deformation coordinate system may be different from the axis of the local coordinate system; and the origin of the deformation coordinate system can use the same particle as the origin of the local coordinate system, such as the aforementioned particle n0 as the origin.
  • the above-mentioned second transformation relationship includes the transformation relationship between the origin of the deformation coordinate system and the world coordinate system, and also includes the axial transformation relationship. Since the relative positions between the target particles will change after the collision, the axial transformation relationship may be different from the transformation relationship corresponding to the local coordinate system.
  • Step 33 obtaining local rendering parameters corresponding to the mesh vertices from the mapping relationship
  • the local rendering parameters here indicate the rendering parameters of the mesh vertices relative to the mass points, which are calculated by the first transformation relationship corresponding to the aforementioned local coordinate system. No matter how the position of the target mass point changes, the local rendering parameters will not change.
  • Step 34 Determine first rendering parameters of mesh vertices in the world coordinate system based on the second conversion relationship and the local rendering parameters.
  • the local rendering parameters can be multiplied by the second transformation relationship, or the local rendering parameters can be multiplied by the inverse matrix of the second transformation relationship to obtain the first rendering parameters.
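  • Steps 31 to 34 can be illustrated with the frame helper sketched earlier: local rendering parameters are computed once against the rest-pose frame and, after the collision, mapped back to world space through the frame rebuilt from the first positions of the target particles. Treating the frame change as a rotation plus translation (rotation only for normals and tangents) is an assumption of this sketch.

```python
import numpy as np

def to_local(vertex_pos, normal, tangent, origin, axes):
    """World-space parameters -> local rendering parameters (axes: orthonormal columns)."""
    rot_inv = axes.T
    return (rot_inv @ (vertex_pos - origin),  # local position parameter
            rot_inv @ normal,                 # local normal parameter
            rot_inv @ tangent)                # local tangent parameter

def to_world(local_pos, local_normal, local_tangent, origin, axes):
    """Local rendering parameters -> first rendering parameters in world space."""
    return (axes @ local_pos + origin,
            axes @ local_normal,
            axes @ local_tangent)

# Usage sketch: bind once against the rest-pose frame, then rebuild the frame
# from the particles' first positions after the collision and transform back.
# origin0, axes0 = build_local_frame(n0, n1, n2)              # before collision
# local = to_local(v_pos, v_normal, v_tangent, origin0, axes0)
# origin1, axes1 = build_local_frame(n0_new, n1_new, n2_new)  # first positions
# world_pos, world_normal, world_tangent = to_world(*local, origin1, axes1)
```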
  • the deformation effect of the target model can be obtained in the virtual scene rendering.
  • the above embodiment can achieve that after the vehicle model hits an obstacle, the vehicle model is deformed. In a game scene, the deformed vehicle model can be rendered in real time.
  • the above embodiment uses a plastic spring mass-point model to simulate the deformation of the vehicle model, so that the rendering parameters of the model vertices can be updated efficiently and the shape of the rendered model can be updated in the vertex shader (VertexShader).
  • the technology calculates the position of particles after collision deformation in real time.
  • the position of the rendered vertices after deformation can be interpolated in parallel on the GPU.
  • the device includes:
  • the model generation module 40 is used to generate a target model located in a virtual scene and a mass point model corresponding to the target model, wherein the mass point model and the target model are overlapped and set in the virtual scene; the mass point model includes a plurality of mass points, and the plurality of mass points are connected by virtual springs; the shape composed of the plurality of mass points matches the shape of the target model; the number of mass points in the mass point model is less than the number of mesh vertices in the target model; and there is a preset mapping relationship between the mass points and the mesh vertices;
  • a position determination module 42 configured to determine, in response to the mass point model detecting a collision event, a first position of a mass point in the mass point model after the collision event occurs based on a collision parameter of the collision event and a deformation threshold of a virtual spring;
  • the parameter determination module 44 is used to determine the first rendering parameters of the mesh vertices in the target model based on the first positions of the particles in the particle model and the mapping relationship between the particles and the mesh vertices; the mesh vertices in the target model are rendered using the first rendering parameters to obtain the deformed target model.
  • the deformation control device of the above-mentioned virtual model generates a target model located in a virtual scene and a particle model corresponding to the target model, wherein the particle model and the target model are overlapped and set in the virtual scene; the particle model includes multiple particles, and the multiple particles are connected by virtual springs; the shape composed of the multiple particles matches the shape of the target model; the number of particles in the particle model is less than the number of mesh vertices in the target model; there is a preset mapping relationship between the particles and the mesh vertices; in response to the particle model detecting a collision event, based on the collision parameters of the collision event and the deformation threshold of the virtual spring, the first position of the particle in the particle model after the collision event occurs is determined; based on the first position of the particle in the particle model and the mapping relationship between the particle and the mesh vertices, the first rendering parameters of the mesh vertices in the target model are determined; the mesh vertices in the target model are rendered by the first rendering parameters to obtain the deformed target model.
  • a particle model is set for the target model, which includes a small number of particles, and the particles are connected by virtual springs.
  • the collision event is detected by the particle model. If a collision occurs, the displacement between the particles is calculated, and the rendering parameters of the deformed target model are determined by the deformation of the virtual springs, so as to render the deformed target model.
  • This method requires less calculation for deformation, and the deformation effect is realistic, so it is suitable for real-time rendering of virtual scenes.
  • the deformation control device of the virtual model also includes a collision detection module, which is used to: in response to the target model being located in the virtual scene, detect whether the particles collide through the particles in the particle model; if at least one particle in the particle model collides, determine that a collision event is detected.
  • the position determination module is also used to: obtain collision parameters of the collision event in response to the particle model detecting a collision event; wherein the collision parameters include: a target particle that collides, a collision direction, and a collision force; control the displacement of each particle in the particle model based on the collision parameters, and monitor the deformation of the virtual spring connected between the particles during the displacement of the particles; determine the first position of the particle based on the deformation and the deformation threshold.
  • the position determination module is also used to: if the deformation amount does not exceed the deformation threshold of the virtual spring, control the virtual spring to rebound, and determine the first position of the mass point based on the length of the virtual spring after rebound; if the deformation amount exceeds the deformation threshold of the virtual spring, determine the length of the virtual spring after deformation, and determine the first position of the mass point based on the length of the virtual spring after deformation.
  • the deformation control device of the virtual model also includes a removal module, which is used to: if the deformation amount exceeds the fracture threshold of the virtual spring, remove the virtual spring to control the mass points at both ends of the virtual spring to separate from each other and obtain the first position of the mass points.
  • the deformation control device of the virtual model also includes a mapping relationship determination module, which is used to: overlap the target model and the particle model and place them in a preset world coordinate system; for the mesh vertices in the target model, determine a specified number of target particles from the particle model, establish a local coordinate system based on the specified number of target particles, and determine a first transformation relationship between the local coordinate system and the world coordinate system; based on the first transformation relationship, transform the initial rendering parameters of the mesh vertices in the world coordinate system into the local coordinate system to obtain local rendering parameters; wherein the initial rendering parameters include: multiple types of position parameters, normal parameters, and tangent parameters of the mesh vertices; and determine the mesh vertices, target particles, and local rendering parameters as a mapping relationship between particles and mesh vertices.
  • the mapping relationship determination module is also used to: calculate the Euclidean distance between the mesh vertices in the target model and at least some of the particles in the particle model; sort at least some of the particles in order of Euclidean distance from small to large to obtain a particle sequence; and determine the first three particles in the particle sequence as target particles.
  • the mapping relationship determination module is used to: use the first particle among the target particles as the origin of the local coordinate system; use the direction of the line connecting the first particle and the second particle among the target particles as the first axis of the local coordinate system; use the direction corresponding to the vector product of the direction of the line connecting the third particle and the first particle among the target particles and the first axis as the second axis of the local coordinate system; use the direction perpendicular to both the first axis and the second axis as the third axis of the local coordinate system to obtain the local coordinate system.
  • the parameter determination module is further used to: obtain a specified number of target mass points corresponding to the mesh vertices from the mapping relationship; establish a deformation coordinate system based on the first positions of the specified number of target mass points, and determine a second transformation relationship between the deformation coordinate system and the preset world coordinate system; wherein the world coordinate system is established in the virtual scene where the target model and the particle model are located; obtain the local rendering parameters corresponding to the mesh vertices from the mapping relationship; and determine the first rendering parameters of the mesh vertices in the world coordinate system based on the second transformation relationship and the local rendering parameters.
  • This embodiment also provides an electronic device, including a processor and a memory, wherein the memory stores machine executable instructions that can be executed by the processor, and the processor executes the machine executable instructions to implement the above virtual model deformation control method.
  • the electronic device can be a server or a touch terminal device.
  • the electronic device includes a processor 100 and a memory 101 .
  • the memory 101 stores machine executable instructions that can be executed by the processor 100 .
  • the processor 100 executes the machine executable instructions to implement the above-mentioned virtual model deformation control method.
  • the electronic device shown in FIG. 5 further includes a bus 102 and a communication interface 103 , and the processor 100 , the communication interface 103 and the memory 101 are connected via the bus 102 .
  • the memory 101 may include a high-speed random access memory (RAM), and may also include a non-volatile memory (non-volatile memory), such as at least one disk storage.
  • the communication connection between the system network element and at least one other network element is realized through at least one communication interface 103 (which can be wired or wireless), and the Internet, wide area network, local area network, metropolitan area network, etc. can be used.
  • the bus 102 can be an ISA bus, a PCI bus or an EISA bus, etc.
  • the bus can be divided into an address bus, a data bus, a control bus, etc. For ease of representation, only one bidirectional arrow is used in Figure 5, but it does not mean that there is only one bus or one type of bus.
  • the processor 100 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the above method can be completed by the hardware integrated logic circuit in the processor 100 or the instruction in the form of software.
  • the above processor 100 can be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it can also be a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components.
  • the methods, steps and logical block diagrams disclosed in the embodiments of the present disclosure can be implemented or executed.
  • the general-purpose processor can be a microprocessor or the processor can also be any conventional processor, etc.
  • the steps of the method disclosed in conjunction with the embodiments of the present disclosure can be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, or an electrically erasable programmable memory, a register, etc.
  • the storage medium is located in the memory 101, and the processor 100 reads the information in the memory 101 and completes the steps of the method of the above embodiment in combination with its hardware.
  • the processor in the electronic device can implement the following operations in the deformation control method of the virtual model by executing the machine executable instructions:
  • generating a target model located in a virtual scene and a particle model corresponding to the target model, wherein the particle model and the target model are overlapped and arranged in the virtual scene; the particle model includes multiple particles, and the multiple particles are connected by virtual springs; the shape composed of the multiple particles matches the shape of the target model; the number of particles in the particle model is less than the number of mesh vertices in the target model; there is a preset mapping relationship between the particles and the mesh vertices; in response to the particle model detecting a collision event, based on the collision parameters of the collision event and the deformation threshold of the virtual spring, determine the first position of the particles in the particle model after the collision event occurs; based on the first position of the particles in the particle model and the mapping relationship between the particles and the mesh vertices, determine the first rendering parameters of the mesh vertices in the target model; render the mesh vertices in the target model by the first rendering parameters to obtain the deformed target model.
  • in response to the target model being located in the virtual scene, whether the particles collide is detected by means of the particles in the particle model; if at least one particle in the particle model collides, it is determined that a collision event is detected.
  • collision parameters of the collision event are obtained, wherein the collision parameters include two or more of: the target particle that collides, a collision direction, and a collision force; based on the collision parameters, each particle in the particle model is controlled to be displaced, and during the displacement of the particles, the deformation amount of the virtual springs connected between the particles is monitored; based on the deformation amount and the deformation threshold, the first positions of the particles are determined (a simplified sketch of this displacement step is given after this list).
  • if the deformation amount does not exceed the deformation threshold of the virtual spring, the virtual spring is controlled to rebound, and the first position of the particle is determined based on the length of the virtual spring after rebound; if the deformation amount exceeds the deformation threshold of the virtual spring, the length of the virtual spring after deformation is determined, and the first position of the particle is determined based on the length of the virtual spring after deformation.
  • if the deformation amount exceeds the fracture threshold of the virtual spring, the virtual spring is removed to control the particles at both ends of the virtual spring to separate from each other, thereby obtaining the first positions of the particles (a sketch of this plastic-spring behaviour is also given after this list).
  • the mapping relationship between the particles and the mesh vertices is obtained as follows: the target model and the particle model are placed, overlapping each other, in a preset world coordinate system; for a mesh vertex in the target model, a specified number of target particles are determined from the particle model, a local coordinate system is established based on the specified number of target particles, and a first transformation relationship between the local coordinate system and the world coordinate system is determined; based on the first transformation relationship, the initial rendering parameters of the mesh vertex in the world coordinate system are transformed into the local coordinate system to obtain local rendering parameters;
  • wherein the initial rendering parameters include two or more of the position parameters, normal parameters, and tangent parameters of the mesh vertex; the mesh vertex, the target particles, and the local rendering parameters are determined as the mapping relationship between the particles and the mesh vertices.
  • the Euclidean distances between the mesh vertex and at least some of the particles in the particle model are calculated; the at least some particles are sorted in ascending order of Euclidean distance to obtain a particle sequence; and the first three particles in the particle sequence are determined as the target particles (see the nearest-particle selection sketch after this list).
  • the first particle among the target particles is used as the origin of the local coordinate system; the direction of the line connecting the first particle and the second particle among the target particles is used as the first axis of the local coordinate system; the direction corresponding to the vector product of the direction of the line connecting the third particle and the first particle and the first axis is used as the second axis of the local coordinate system; and the direction perpendicular to both the first axis and the second axis is used as the third axis, to obtain the local coordinate system (a sketch of this frame construction also follows the list).
  • a specified number of target particles corresponding to the mesh vertex are obtained from the mapping relationship; based on the first positions of the specified number of target particles, a deformation coordinate system is established, and a second transformation relationship between the deformation coordinate system and a preset world coordinate system is determined, wherein the world coordinate system is established in the virtual scene where the target model and the particle model are located; the local rendering parameters corresponding to the mesh vertex are obtained from the mapping relationship; and based on the second transformation relationship and the local rendering parameters, the first rendering parameters of the mesh vertex in the world coordinate system are determined (a sketch of this re-mapping step is given after this list).
  • a particle model is set for the target model, which includes a small number of particles, and the particles are connected by virtual springs.
  • the collision event is detected by the particle model. If a collision occurs, the displacement between the particles is calculated, and the rendering parameters of the deformed target model are determined by the deformation of the virtual springs, so as to render the deformed target model.
  • This method requires less calculation for deformation, and the deformation effect is realistic, so it is suitable for real-time rendering of virtual scenes.
  • This embodiment also provides a machine-readable storage medium, which stores machine-executable instructions.
  • When the machine-executable instructions are called and executed by a processor, they cause the processor to implement the above-mentioned virtual model deformation control method.
  • By executing the machine-executable instructions stored in the machine-readable storage medium, the following operations of the deformation control method for the virtual model can be implemented:
  • generating a target model located in a virtual scene and a particle model corresponding to the target model, wherein the particle model and the target model are overlapped in the virtual scene; the particle model includes multiple particles, and the multiple particles are connected by virtual springs; the shape composed of the multiple particles matches the shape of the target model; the number of particles in the particle model is less than the number of mesh vertices in the target model; there is a preset mapping relationship between the particles and the mesh vertices; in response to the particle model detecting a collision event, determining, based on the collision parameters of the collision event and the deformation threshold of the virtual spring, the first positions of the particles in the particle model after the collision event occurs; determining, based on the first positions of the particles in the particle model and the mapping relationship between the particles and the mesh vertices, the first rendering parameters of the mesh vertices in the target model; and rendering the mesh vertices in the target model with the first rendering parameters to obtain the deformed target model.
  • in response to the target model being located in the virtual scene, whether the particles collide is detected by means of the particles in the particle model; if at least one particle in the particle model collides, it is determined that a collision event is detected.
  • collision parameters of the collision event are obtained; wherein the collision parameters include: a target particle that collides, a collision direction, and a collision force; based on the collision parameters, each particle in the particle model is controlled to be displaced, and during the displacement of the particles, the deformation amount of the virtual spring connected between the particles is monitored; based on the deformation amount and the deformation threshold, the first position of the particle is determined.
  • if the deformation amount does not exceed the deformation threshold of the virtual spring, the virtual spring is controlled to rebound, and the first position of the particle is determined based on the length of the virtual spring after rebound; if the deformation amount exceeds the deformation threshold of the virtual spring, the length of the virtual spring after deformation is determined, and the first position of the particle is determined based on the length of the virtual spring after deformation.
  • if the deformation amount exceeds the fracture threshold of the virtual spring, the virtual spring is removed to control the particles at both ends of the virtual spring to separate from each other, thereby obtaining the first positions of the particles.
  • the mapping relationship between the particles and the mesh vertices is obtained in the following manner: the target model and the particle model are overlapped and placed in a preset world coordinate system; for a mesh vertex in the target model, a specified number of target particles are determined from the particle model, a local coordinate system is established based on the specified number of target particles, and a first transformation relationship between the local coordinate system and the world coordinate system is determined; based on the first transformation relationship, the initial rendering parameters of the mesh vertex in the world coordinate system are transformed into the local coordinate system to obtain local rendering parameters; wherein the initial rendering parameters include two or more of the position parameters, normal parameters, and tangent parameters of the mesh vertex; the mesh vertex, the target particles, and the local rendering parameters are determined as the mapping relationship between the particles and the mesh vertices.
  • the Euclidean distances between the mesh vertices and at least some of the particles in the particle model are calculated; at least some of the particles are sorted in order of Euclidean distance from small to large to obtain a particle sequence; and the first three particles in the particle sequence are determined as target particles.
  • the first particle among the target particles is used as the origin of the local coordinate system; the direction of the line connecting the first particle and the second particle among the target particles is used as the first axis of the local coordinate system; the direction corresponding to the vector product of the direction of the line connecting the third particle and the first particle among the target particles and the first axis is used as the second axis of the local coordinate system; the direction perpendicular to both the first axis and the second axis is used as the third axis of the local coordinate system, to obtain the local coordinate system.
  • a specified number of target particles corresponding to the mesh vertices are obtained from the mapping relationship; based on the first positions of the specified number of target particles, a deformation coordinate system is established, and a second transformation relationship between the deformation coordinate system and a preset world coordinate system is determined; wherein the world coordinate system is established in a virtual scene where the target model and the particle model are located; local rendering parameters corresponding to the mesh vertices are obtained from the mapping relationship; based on the second transformation relationship and the local rendering parameters, the first rendering parameters of the mesh vertices in the world coordinate system are determined.
  • a particle model is set for the target model, which includes a small number of particles, and the particles are connected by virtual springs.
  • the collision event is detected by the particle model. If a collision occurs, the displacement between the particles is calculated, and the rendering parameters of the deformed target model are determined by the deformation of the virtual springs, so as to render the deformed target model.
  • This method requires less calculation for deformation, and the deformation effect is realistic, so it is suitable for real-time rendering of virtual scenes.
  • the computer program product of the deformation control method, device and electronic device of the virtual model provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code.
  • the instructions included in the program code can be used to execute the methods described in the previous method embodiments. The specific implementation can be found in the method embodiments, which will not be repeated here.
  • the terms “installed”, “connected to”, and “connected” should be understood in a broad sense; for example, a connection can be a fixed connection, a detachable connection, or an integral connection; it can be a mechanical connection or an electrical connection; it can be a direct connection, an indirect connection through an intermediate medium, or internal communication between two components.
  • If the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the computer software product is stored in a storage medium, including several instructions for a computer device (which can be a personal computer, server, or network device, etc.) to perform all or part of the steps of the method described in each embodiment of the present disclosure.
  • the aforementioned storage media include: U disk, mobile hard disk, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), disk or optical disk, and other media that can store program codes.
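
The following is a minimal, illustrative sketch of the particle-displacement step referred to above. The publication states that the particle motion is solved with position-based dynamics (XPBD); the single Jacobi-style relaxation pass below is only a simplified stand-in for such a solver, and all names (`displace_and_relax`, the stiffness value, the use of NumPy) are assumptions made for illustration rather than anything prescribed by the source.

```python
import numpy as np

def displace_and_relax(positions, springs, rest_lengths, hit_index,
                       collision_dir, collision_force, stiffness=0.5, iterations=10):
    """Push the collided particle along the collision direction, then relax the spring
    constraints so neighbouring particles follow; return the new positions and the
    per-spring deformation amounts to be checked against the thresholds."""
    p = positions.copy()
    # simplistic impulse: displacement proportional to collision force along the collision direction
    p[hit_index] += collision_force * np.asarray(collision_dir)
    for _ in range(iterations):                       # crude constraint relaxation
        for k, (i, j) in enumerate(springs):
            delta = p[j] - p[i]
            length = np.linalg.norm(delta)
            if length == 0.0:
                continue
            correction = stiffness * (length - rest_lengths[k]) * delta / length
            p[i] += 0.5 * correction                  # move both endpoints toward the rest length
            p[j] -= 0.5 * correction
    deformation = np.array([abs(np.linalg.norm(p[j] - p[i]) - rest_lengths[k])
                            for k, (i, j) in enumerate(springs)])
    return p, deformation
```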
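
Next, a sketch of the plastic "virtual spring" behaviour described above: below the deformation threshold the spring rebounds and the particle positions recover; beyond it the static (rest) length yields permanently; beyond the fracture threshold the spring is removed so the particles at its two ends separate. The class and field names are illustrative assumptions, not names used by the publication.

```python
from dataclasses import dataclass

@dataclass
class VirtualSpring:
    rest_length: float        # static length with no external force applied
    deform_threshold: float   # plastic (yield) threshold
    break_threshold: float    # fracture threshold, larger than deform_threshold
    broken: bool = False

    def update(self, current_length: float) -> None:
        deformation = abs(current_length - self.rest_length)
        if deformation > self.break_threshold:
            # fracture: remove the spring so the particles at both ends separate
            self.broken = True
        elif deformation > self.deform_threshold:
            # plastic deformation: the rest length changes permanently,
            # so the displaced particle positions do not recover
            self.rest_length = current_length
        # otherwise the spring is elastic: it rebounds and the positions recover
```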
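
A sketch of the target-particle selection described above: for each mesh vertex, compute the Euclidean distances to the particles, sort them in ascending order, and keep the first three. NumPy and the function name are assumptions made for illustration.

```python
import numpy as np

def select_target_particles(vertex, particle_positions, k=3):
    """Return the indices of the k particles nearest (by Euclidean distance) to a mesh vertex."""
    distances = np.linalg.norm(particle_positions - vertex, axis=1)
    return np.argsort(distances)[:k]
```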
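
A sketch of the local coordinate system construction described above: with target particles n0, n1, n2, the origin is n0, the first axis is normalize(n1 - n0), the second axis is normalize(cross(n2 - n0, first axis)), and the third axis is perpendicular to both. The same routine can be reused after a collision, with the particles' first positions, to build the deformation coordinate system. The function name is illustrative.

```python
import numpy as np

def build_frame(n0, n1, n2):
    """Build the orthonormal frame described in the text from three target particles."""
    x = n1 - n0
    x = x / np.linalg.norm(x)
    y = np.cross(n2 - n0, x)
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)                  # perpendicular to both x and y
    axes = np.stack([x, y, z], axis=1)  # columns are the local axes, expressed in world space
    return n0, axes                     # origin and axis matrix of the frame
```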
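
Finally, a sketch of how the mapping relationship is used: at binding time the vertex's world-space position is expressed in the undeformed local frame; after a collision the frame is rebuilt from the target particles' first positions, and the stored local parameters are mapped back to world space to obtain the first rendering parameters. Positions use the full transform, while normals and tangents would be transformed by the rotation part only. The publication performs this interpolation per vertex in the vertex shader on the GPU; the CPU-side NumPy form below is purely an illustrative assumption.

```python
import numpy as np

def bind_vertex(vertex_world, origin, axes):
    """Binding step: store the vertex position in the local frame of its target particles."""
    return axes.T @ (vertex_world - origin)   # axes is orthonormal, so transpose == inverse

def deformed_vertex(local_position, origin_deformed, axes_deformed):
    """After a collision: map the stored local position back to world space using the
    frame rebuilt from the target particles' first positions."""
    return origin_deformed + axes_deformed @ local_position
```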

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Architecture (AREA)
  • Evolutionary Computation (AREA)
  • Image Generation (AREA)

Abstract

A deformation control method for a virtual model, including: generating a target model located in a virtual scene and a particle model corresponding to the target model (S102); in response to the particle model detecting a collision event, determining, based on collision parameters of the collision event and a deformation threshold of a virtual spring, first positions of particles in the particle model after the collision event occurs (S104); determining, based on the first positions of the particles in the particle model and a mapping relationship between the particles and mesh vertices, first rendering parameters of the mesh vertices in the target model; and rendering the mesh vertices in the target model with the first rendering parameters to obtain the deformed target model (S106). This approach can efficiently render the deformation effect of the model and is suitable for virtual scenes rendered in real time. (FIG. 1)

Description

虚拟模型的形变控制方法、装置和电子设备
相关申请的交叉引用
本申请要求于2022年10月8日提交的申请号为202211222119.4、名称为“虚拟模型的形变控制方法、装置和电子设备”的中国专利申请的优先权,该中国专利申请的全部内容通过引用全部并入全文。
技术领域
本公开涉及模型渲染技术领域,尤其是涉及一种虚拟模型的形变控制方法、装置和电子设备。
背景技术
在虚拟场景中,模型发生碰撞后,通常使用刚体碰撞体模拟模型碰撞后的状态。通过刚体碰撞体模拟模型碰撞后的状态时,模型碰撞后会产生位置变化和姿势变化,例如,翻滚、旋转等;但难以模拟模型被碰撞后的形变,例如,模型的塌陷、隆起、碎裂等形变,导致这种模拟方式的逼真程度较低。相关技术中,还可以将模型进行有限元切分,得到大量的体素集合,当模型发生碰撞时,根据碰撞力度、方向等各项参数,计算模型中每个体素的位置,从而得到形变后的模型,但该方式的计算量巨大,需要大量的计算时间和计算资源,难以应用于实时渲染的虚拟场景中。
发明内容
根据本公开的一个方面,本公开实施例提供了一种虚拟模型的形变控制方法,方法包括:生成位于虚拟场景的目标模型以及与目标模型对应的质点模型;其中,质点模型与目标模型重合设置在虚拟场景中;质点模型包括多个质点,多个质点之间通过虚拟弹簧连接;多个质点组成的形状与目标模型的形状相匹配;质点模型中质点的数量小于目标模型中网格顶点的数量;质点与网格顶点之间具有预设的映射关系;响应于质点模型检测到碰撞事件,基于碰撞事件的碰撞参数以及虚拟弹簧的形变阈值,确定碰撞事件发生后质点模型中质点的第一位置;基于质点模型中质点的第一位置,以及质点与网格顶点之间的映射关系,确定目标模型中网格顶点的第一渲染参数;通过第一渲染参数渲染目标模型中的网格顶点,得到形变后的目标模型。
可选地,生成位于虚拟场景的目标模型以及与目标模型对应的质点模型的步骤之后,该方法还包括:响应于目标模型位于虚拟场景中,通过质点模型中的质点,检测质点是否发生碰撞;如果质点模型中的至少一个质点发生碰撞,确定检测到碰撞事件。
可选地,响应于质点模型检测到碰撞事件,基于碰撞事件的碰撞参数以及虚拟弹簧的形变阈值,确定碰撞事件发生后质点模型中质点的第一位置的步骤,包括:响应于质点模型检测到碰撞事件,获取碰撞事件的碰撞参数;其中,碰撞参数包括:发生碰撞的目标质点、碰撞方向、碰撞力度中的多种;基于碰撞参数控制质点模型中的各个质点发生位移,在质点发生位移的过程中,监听质点之间连接的虚拟弹簧的形变量;基于形变量和形变阈值,确定质点的第一位置。
可选地,基于形变量和形变阈值,确定质点的第一位置的步骤,包括:如果形变量没有超出虚拟弹簧的形变阈值,控制虚拟弹簧回弹,基于回弹后的虚拟弹簧的长度确定质点的第一位置;如果形变量超出虚拟弹簧的形变阈值,确定虚拟弹簧形变后的长度,基于虚拟弹簧形变后的长度确定质点的第一位置。
可选地,该方法还包括:如果形变量超出虚拟弹簧的断裂阈值,移除虚拟弹簧,以控制虚拟弹簧两端的质点相互分离,得到质点的第一位置。
可选地,质点与网格顶点之间的映射关系,通过下述方式得到:将目标模型和质点模型重合放置在预设的世界坐标系中;针对目标模型中的网格顶点,从质点模型中确定指定数量的目标质点,基于指定数量的目标质点建立局部坐标系,并确定局部坐标系与世界坐标系的第一转换关系;基于第一转换关系,将网格顶点在世界坐标系中的初始渲染参数,转换至局部坐标系中,得到局部渲染参数;其中,初始渲染参数包括:网格顶点的位置参数、法线参数、切线参数中的多种;将网格顶点、目标质点以及局部渲染参数,确定为质点与网格顶点之间的映射关系。
可选地,针对目标模型中的网格顶点,从质点模型中确定指定数量的目标质点的步骤,包括:针对目标模型中的网格顶点,计算网格顶点与质点模型中至少部分质点之间的欧式距离;按照欧式距离由小到大的顺序,对至少部分质点进行排序,得到质点序列;将质点序列中前三个质点确定为目标质点。
可选地,基于指定数量的目标质点建立局部坐标系的步骤,包括:将目标质点中的第一质点作为局部坐标系的原点;将目标质点中第一质点和第二质点的连线方向,作为局部坐标系的第一轴向;将目标质点中,第三质点与第一质点的连线方向与第一轴向的向量积对应的方向,作为局部坐标系的第 二轴向;将与第一轴向和第二轴向均垂直的方向,作为局部坐标系的第三轴向,得到局部坐标系。
可选地,基于质点模型中质点的第一位置,以及质点与网格顶点之间的映射关系,确定目标模型中网格顶点的第一渲染参数的步骤,包括:从映射关系中获取网格顶点对应的指定数量的目标质点;基于指定数量的目标质点的第一位置,建立形变坐标系,并确定形变坐标系与预设的世界坐标系的第二转换关系;其中,世界坐标系建立在目标模型和质点模型所处的虚拟场景中;从映射关系中获取网格顶点对应的局部渲染参数;基于第二转换关系和局部渲染参数,确定网格顶点在世界坐标系中的第一渲染参数。
根据本公开的一个方面,还提供了一种虚拟模型的形变控制装置,装置包括:模型生成模块,用于生成位于虚拟场景的目标模型以及与目标模型对应的质点模型,其中,质点模型与目标模型重合设置在虚拟场景中;质点模型包括多个质点,多个质点之间通过虚拟弹簧连接;多个质点组成的形状与目标模型的形状相匹配;质点模型中质点的数量小于目标模型中网格顶点的数量;质点与网格顶点之间具有预设的映射关系;位置确定模块,用于响应于质点模型检测到碰撞事件,基于碰撞事件的碰撞参数以及虚拟弹簧的形变阈值,确定碰撞事件发生后质点模型中质点的第一位置;参数确定模块,用于基于质点模型中质点的第一位置,以及质点与网格顶点之间的映射关系,确定目标模型中网格顶点的第一渲染参数;通过第一渲染参数渲染目标模型中的网格顶点,得到形变后的目标模型。
根据本公开的一个方面,还提供了一种电子设备,包括处理器和存储器,存储器存储有能够被处理器执行的机器可执行指令,处理器执行机器可执行指令以实现上述虚拟模型的形变控制方法。
根据本公开的一个方面,还提供了一种机器可读存储介质,机器可读存储介质存储有机器可执行指令,机器可执行指令在被处理器调用和执行时,机器可执行指令促使处理器实现上述虚拟模型的形变控制方法。
本公开实施例的虚拟模型的形变控制方法、装置和电子设备,生成位于虚拟场景的目标模型以及与所述目标模型对应的质点模型;其中,质点模型与目标模型重合设置在虚拟场景中;质点模型包括多个质点,多个质点之间通过虚拟弹簧连接;多个质点组成的形状与目标模型的形状相匹配;质点模型中质点的数量小于目标模型中网格顶点的数量;质点与网格顶点之间具有预设的映射关系;响应于质点模型检测到碰撞事件,基于碰撞事件的碰撞参数以及虚拟弹簧的形变阈值,确定碰撞事件发生后质点模型中质点的第一位置;基于质点模型中质点的第一位置,以及质点与网格顶点之间的映射关系,确定目标模型中网格顶点的第一渲染参数;通过第一渲染参数渲染目标模型中的网格顶点,得到形变后的目标模型。该方式中,为目标模型设置质点模型,该质点模型中包括数量较少的质点,且质点之间通过虚拟弹簧连接,通过质点模型检测碰撞事件,如果发生碰撞,计算质点之间的位移,并通过虚拟弹簧的形变确定目标模型形变后的渲染参数,从而渲染得到形变的目标模型,该方式计算形变的计算量较少,高效且形变效果逼真,适用于实时渲染的虚拟场景。
附图说明
图1为本公开实施例提供的一种虚拟模型的形变控制方法的流程图;
图2为本公开实施例提供的一种车辆模型的质点模型的示意图;
图3为本公开实施例提供的一种车辆模型的示意图;
图4为本公开实施例提供的一种虚拟模型的形变控制装置的结构示意图;
图5为本公开实施例提供的一种电子设备的示意图。
具体实施方式
为使本公开实施例的目的、技术方案和优点更加清楚,下面将结合附图对本公开的技术方案进行清楚、完整地描述,显然,所描述的实施例是本公开一部分实施例,而不是全部的实施例。基于本公开中的实施例,本领域技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本公开保护的范围。
在游戏场景或其他虚拟场景中,虚拟模型的变形模拟可以丰富虚拟模型的视觉效果,提高用户的交互体验。但目前的虚拟场景中的,大多数虚拟模型为不可形变的刚体,虚拟模型可以被虚拟场景中的障碍物遮挡,但虚拟模型本身不发生形变。
以游戏场景中的车辆模型为例,在场景制作阶段,会为车辆模型配置刚体碰撞体。在游戏运行时,如果车辆发生碰撞,针对车辆模型进行物理模拟时,车辆模型仅参与刚体模拟流程,得到刚体的位置和旋转,从而渲染并控制车辆模型的位置和旋转,并不能渲染出车辆模型因为碰撞导致的形变效果。
相关技术中,对虚拟模型模拟形变,可以使用有限元方案。在该方案中,将虚拟模型进行有限元切分,得到四面体或六面体体素集合。虚拟模型发生碰撞时,对模型中的每个体素进行弹性矩阵的组装,得到弹性方程,求解该弹性方程,从而得到每个体素的位置,进而得到虚拟模型的形变数据。但 是一个虚拟模型通常会切分出大量的体素,针对每个体素进行计算,需要消耗大量的计算资源和计算时间,难以应用于游戏等实时渲染场景中。
基于上述,本公开实施例提供一种虚拟模型的形变控制方法、装置和电子设备,该技术可以应用于游戏场景或其他各类虚拟场景中,虚拟模型的形变控制中,尤其可以应用于对虚拟模型进行实时的形变渲染中。
为便于对本实施例进行理解,首先对本公开实施例所公开的一种虚拟模型的形变控制方法进行详细介绍,如图1所示,该虚拟模型的形变控制方法可以应用于服务器、终端设备,也可以应用于云服务器。该本方法包括如下步骤:
步骤S102,生成位于虚拟场景的目标模型以及与目标模型对应的质点模型;其中,质点模型与目标模型重合设置在虚拟场景中;质点模型包括多个质点,多个质点之间通过虚拟弹簧连接;多个质点组成的形状与目标模型的形状相匹配;质点模型中质点的数量小于目标模型中网格顶点的数量;质点与网格顶点之间具有预设的映射关系;
该目标模型可以为人物模型、交通工具模型、静物模型、动物模型等各类具有可形变属性的模型。该目标模型在制作过程中,除制作目标模型本身以外,还需要制作与目标模型匹配的质点模型。质点模型和目标模型制作的先后顺序不限定。在质点模型中,质点之间通过虚拟弹簧连接,一个质点通常与一个或多个周围的质点连接,质点与虚拟弹簧需要搭建得到相对稳固的结构。由于三角形是相对稳定的形状,通常,质点和虚拟弹簧之间会组成三角形,因而上述质点模型中会包括多个三角形。
在实际实现时,对于某一个质点,与哪些质点之间设置虚拟弹簧,通常是根据质点模型的形状,或者目标模型的形状确定的。如果两个质点之间连接有虚拟弹簧,则这两个质点会保持一定的弹性距离。虚拟弹簧可以通过塑形弹簧实现,该虚拟弹簧可以设置形变阈值和断裂阈值。当质点模型发生碰撞时,质点模型中的全部或部分质点会发生位移,对于某个虚拟弹簧而言,如果弹簧连接的两个质点发生位移,或者其中一个质点发生位移,则该虚拟弹簧会发生形变,形变具体可以是拉伸或者压缩。
当形变量没有达到形变阈值时,虚拟弹簧会回弹,例如回弹至虚拟弹簧没有拉伸也没有压缩的情况,此时,两个质点的位置不变;当形变量达到形变阈值,则虚拟弹簧就不会回弹,此时,两个质点的位置会发生变化,从而模拟模型的形变;进一步的,当形变量达到断裂阈值,可以视为虚拟弹簧断乱,此时,两个质点会相互分离,从而模拟模型的碎裂。
由于目标模型中的网格顶点数量较大,当发生碰撞事件时,如果直接计算网格顶点的位置,则会带来巨大的计算开销,导致场景画面出现卡顿,难以适用于实时渲染场景。为了降低运算量,质点模型中质点的数量小于目标模型中网格顶点的数量;可以理解的是,质点模型的质点数量越多,发生碰撞时,目标模型的形变就越细腻、逼真,但需要较大的运算资源和运算时长,才能计算得到每个质点的位置,以及目标模型中每个网格顶点的渲染参数。
本实施例旨在实现模拟形变的实时渲染,基于此,质点模型中质点的数量小于目标模型中网格顶点的数量,当发生碰撞事件时,计算较少数数量的质点的位置,从而确定目标模型中各个网格顶点的渲染参数,可以降低计算量和计算时间。
图2和图3作为一个示例,其中的图2是车辆模型的质点模型,图3是目标模型为车辆模型的示例。质点模型包括多个质点,根据车辆模型的形状,质点之间通过虚拟弹簧进行连接,得到结构相对稳固的车辆形状的质点模型。可知,质点模型和目标模型的大小、形状相互匹配,在虚拟场景中,质点模型和目标模型相互重合设置,但仅显示目标模型,而质点模型用于检测碰撞事件。通常,车辆模型的网格顶点为20000个左右,而质点模型中共有900个左右的质点。
另外,为了使质点模型与目标模型产生关联,预先设置质点与网格顶点之间的映射关系。在虚拟场景中,目标模型中网格顶点的渲染参数是基于虚拟场景中的世界坐标系建立的,通过该映射关系,可以得到网格顶点相对于质点的渲染参数。发生碰撞后,质点的位置发生变化,而映射关系不变,即网格顶点相对于质点的渲染参数不变,基于此,可以得到质点的位置变化后,网格顶点的渲染参数,从而渲染得到形变后的目标模型。
上述虚拟场景在运行过程中,目标模型位于虚拟场景中,同时,质点模型与目标模型重合,也设置在虚拟场景中,但质点模型不显示。目标模型在虚拟场景中移动时,质点模型同时也移动,即质点模型与目标模型实时重合设置。
该质点模型用于检测碰撞事件,例如,目标模型和质点模型可能和虚拟场景中障碍物、其他模型均可能发生碰撞。质点模型的大小可以与目标模型的大小相同,或者质点模型略大于目标模型,此时,碰撞事件的检测会比较灵敏;也可以质点模型略小于目标模型,此时,碰撞事件的检测会比较迟钝。
步骤S104,响应于质点模型检测到碰撞事件,基于碰撞事件的碰撞参数以及虚拟弹簧的形变阈值,确定碰撞事件发生后质点模型中质点的第一位置;
上述碰撞事件的碰撞参数可以包括,目标模型发生碰撞的位置、碰撞方向、碰撞力度、发生碰撞的模型之间的相对速度等参数。通过这些碰撞参数,可以通过XPBD(Extended Position-Based Dynamics,扩展的基于位置的动力学)或其他动力学原理的算法,计算各个质点的位移。
又由于质点之间连接有虚拟弹簧,质点的位移会导致虚拟弹簧发生形变,如果位移较小,则虚拟弹簧的形变量不超过形变阈值,此时,虚拟弹簧会回弹,质点的第一位置与发生碰撞之前的初始位置相同,或发生微小变化;如果虚拟弹簧的形变量超过了形变阈值,则虚拟弹簧就不会回弹了,质点的第一位置与发生碰撞之前的初始位置就会差异较大。
步骤S106,基于质点模型中质点的第一位置,以及质点与网格顶点之间的映射关系,确定目标模型中网格顶点的第一渲染参数;通过第一渲染参数渲染目标模型中的网格顶点,得到形变后的目标模型。
发生碰撞之后,质点到达第一位置,由于质点与网格顶点之间具有相对固定的映射关系,因此,基于质点的第一位置,可以得到网格顶点在发生碰撞后的第一渲染参数,该第一渲染参数可以包括发生碰撞后网格顶点的位置参数、法线参数、切线参数等。然后,通过第一渲染参数渲染目标模型中的网格顶点,即可得因碰撞导致的形变后的目标模型。
上述虚拟模型的形变控制方法,生成位于虚拟场景的目标模型以及与目标模型对应的质点模型;其中,质点模型与目标模型重合设置在虚拟场景中;质点模型包括多个质点,多个质点之间通过虚拟弹簧连接;多个质点组成的形状与目标模型的形状相匹配;质点模型中质点的数量小于目标模型中网格顶点的数量;质点与网格顶点之间具有预设的映射关系;响应于质点模型检测到碰撞事件,基于碰撞事件的碰撞参数以及虚拟弹簧的形变阈值,确定碰撞事件发生后质点模型中质点的第一位置;基于质点模型中质点的第一位置,以及质点与网格顶点之间的映射关系,确定目标模型中网格顶点的第一渲染参数;通过第一渲染参数渲染目标模型中的网格顶点,得到形变后的目标模型。该方式中,为目标模型设置质点模型,该质点模型中包括数量较少的质点,且质点之间通过虚拟弹簧连接,通过质点模型检测碰撞事件,如果发生碰撞,计算质点之间的位移,并通过虚拟弹簧的形变确定目标模型形变后的渲染参数,从而渲染得到形变的目标模型,该方式计算形变的计算量较少,高效且形变效果逼真,适用于实时渲染的虚拟场景。
下面描述碰撞事件的检测方式。
具体的,响应于目标模型位于虚拟场景中,通过质点模型中的质点,检测质点是否发生碰撞;如果质点模型中的至少一个质点发生碰撞,确定检测到碰撞事件。
该方式中,通过质点模型中的质点检测碰撞事件。质点模型中的全部质点或部分质点,具有碰撞的检测功能,例如,可以对这些质点设置检测射线,通过射线检测质点本身是否与场景中的其他模型相互接触或相互碰撞。当目标模型位于虚拟场景中,这些质点实时或定时地检测自身是否与其他模型发生碰撞。当虚拟场景中的碰撞模型较小时,则可能仅与质点模型中的一个质点发生碰撞,此时,该质点检测到碰撞事件;当碰撞模型较大时,则可能与质点模型中的多个质点发生碰撞,此时,多个质点共同检测得到碰撞事件。
下面描述发生碰撞事件后,质点模型中各个质点的位置确定方式。
具体的,响应于质点模型检测到碰撞事件,获取碰撞事件的碰撞参数;其中,该碰撞参数包括:发生碰撞的目标质点、碰撞方向、碰撞力度中的多种;基于碰撞参数控制质点模型中的各个质点发生位移,在质点发生位移的过程中,监听质点之间连接的虚拟弹簧的形变量;基于形变量和形变阈值,确定质点的第一位置。
可以理解的是,上述发生碰撞的目标质点通常具有较大的位移,该目标质点与周围质点通过虚拟弹簧连接,因而,在虚拟弹簧的作用下,周围质点也会发生位移,进而,与周围质点连接的其他质点也可能会发生位移。在实际实现时,可以通过或XPBD其他动力学原理的算法,计算各个质点的位移。上述碰撞方向可能会影响质点模型中发生位移的质点范围,远离碰撞方向的质点可能不发生位移,或者仅发生少量的位置。上述碰撞力度可能会影响质点的位移大小,以及发生位移的质点的数量,碰撞力度越大,质点的位移也就越大,产生位移的质点数量也就越大。
通过碰撞参数,使用动力学原理计算各个质点的位移,并控制质点发生移动。质点的移动会带动连接的虚拟弹簧产生形变。根据弹簧的性质,当弹簧的形变量没有超过形变阈值,则虚拟弹簧可以全部回弹或者部分回弹,而当形变量超过形变阈值,则虚拟弹簧则不会发生回弹,因此,在发生碰撞后,质点的第一位置与形变量和形变阈值有关。
需要说明的是,在发生碰撞后,质点的位移和虚拟弹簧的形变相互影响,例如,质点的位移越大, 虚拟弹簧的形变量也就越大,形变量超出形变阈值时,质点的位置就不会恢复至初始位置。通常,与发生碰撞的目标质点距离越近的质点,位移会越大,与发生碰撞的目标质点连接的虚拟弹簧,或者距离目标质点较近的虚拟弹簧,形变量也会越大。
一种具体的实现方式中,如果形变量没有超出虚拟弹簧的形变阈值,控制虚拟弹簧回弹,基于回弹后的虚拟弹簧的长度确定质点的第一位置;如果形变量超出虚拟弹簧的形变阈值,确定虚拟弹簧形变后的长度,基于虚拟弹簧形变后的长度确定质点的第一位置。这里的虚拟弹簧也称为塑性弹簧,因为挤压或拉伸会导致弹簧屈服,发生永久性的形变;形变阈值也可以称为塑性阈值,当虚拟弹簧的形变量超出形变阈值时,虚拟弹簧的静态长度发生变化,因而,质点的位置也会发生永久性的变化。静态长度可以理解为没有外力施加时,虚拟弹簧的长度。
当形变量超出形变阈值时,质点的位置会发生永久性的变化,此时,质点会带动目标模型中的模型顶点的位置发生变化,从而控制目标模型发生形变。
进一步的,当碰撞比较剧烈时,目标模型可能会发生局部断裂或碎裂,为了模拟该效果,一种方式中,如果形变量超出虚拟弹簧的断裂阈值,移除虚拟弹簧,以控制虚拟弹簧两端的质点相互分离,得到质点的第一位置。断裂阈值通常会大于前述形变阈值。以拉伸为例,虚拟弹簧在质点发生位移的带动下进行拉伸,产生形变量,持续拉伸过程中,形变量会达到形变阈值,继续拉伸过程中,形变量会达到断裂阈值,此时,虚拟弹簧会断裂。虚拟弹簧被压缩时,同样具有形变阈值和断裂阈值,虚拟弹簧在被压缩过程中,形变量首先会达到形变阈值,继续压缩时,形变量会达到断裂阈值。
当虚拟弹簧达到断裂阈值时,为了模拟弹簧断裂时质点的效果,此时会移除虚拟弹簧,则质点不再连接该虚拟弹簧,因而,该质点的移动不再受到虚拟弹簧的影响,该情况下,也可以使用前述动力学原理的相关算法,计算质点不被该虚拟弹簧约束的情况下的第一位置。
该方式可以模拟碰撞剧烈的情况下,目标模型发生破碎断裂的效果,例如,车辆模型中,车门脱落的效果。
得到质点的第一位置之后,可以将每个质点的第一位置保存在一张贴图中,由于质点的数量较少,因而,该贴图的数据量也较少。
本实施例旨在通过质点模型控制目标模型的形变,因此,在目标模型的制作过程中,需要设置质点与网格顶点之间的映射关系,下面提供映射关系的具体设置方式,包括下述步骤21-步骤24;
步骤21,将目标模型和质点模型重合放置在预设的世界坐标系中;
此时,目标模型中的网格顶点和质点模型中的质点均具有各自的世界坐标。
步骤22,针对目标模型中的网格顶点,从质点模型中确定指定数量的目标质点,基于指定数量的目标质点建立局部坐标系,并确定局部坐标系与世界坐标系的第一转换关系;
为了使网格顶点与质点产生关联,可以针对目标模型中的每个网格顶点,执行上述步骤22。具体的,可以从网格顶点的周围,确定指定数量的目标质点。目标质点用于建立局部坐标系,以通过该局部坐标系,确定网格顶点与目标质点的相对关系。目标质点的指定数量可以根据需求确定,例如,两个、三个或其他数量。
一种具体的实现方式中,针对目标模型中的网格顶点,计算网格顶点与质点模型中至少部分质点之间的欧式距离;按照欧式距离由小到大的顺序,对至少部分质点进行排序,得到质点序列;将质点序列中前三个质点确定为目标质点。
在实际实现时,针对每个网格顶点,可以计算该网格顶点与每个质点之间的欧式距离,欧氏距离越小,则质点距离网格顶点越近。当局部坐标系为三维坐标系中,则需要确定三个目标质点,因此,选择欧式距离最小的三个质点作为当前网格顶点的目标质点。
可以理解的是,局部坐标系通常包括坐标原点和多个轴向;坐标原点通常为世界坐标系中的一个坐标点,基于该坐标点,可以得到局部坐标系和世界坐标系之间原点的转换关系;另外,局部坐标系的轴向根据多个目标质点之间的相对位置关系确定,而目标质点之间的相对位置关系可以通过世界坐标系中目标质点的坐标点之间的差向量确定;因而,上述第一转换关系中可以包括局部坐标系和世界坐标系之间原点的转换关系,以及轴向的转换关系。
一种具体的实现方式中,局部坐标系可以通过下述方式建立:将目标质点中的第一质点作为局部坐标系的原点;将目标质点中第一质点和第二质点的连线方向,作为局部坐标系的第一轴向;将目标质点中,第三质点与第一质点的连线方向与第一轴向的向量积对应的方向,作为局部坐标系的第二轴向;将与第一轴向和第二轴向均垂直的方向,作为局部坐标系的第三轴向,得到局部坐标系。
作为示例,三个目标质点分别表示为质点n0、质点n1和质点n2;其中,将质点n0作为局部坐标系的原点;上述第一轴向、第二轴向和第三轴向分别为X轴向、Y轴向和Z轴向;其中,X轴向表示 为normalize(n1-n0),即将质点n1在世界坐标系中的坐标值,减去质点n0在世界坐标系中的坐标值,得到从n0指向n1的向量,通过normolize函数将该向量归一化,得到仅指示方向的单位向量,表示上述X轴向。
同理,Y轴向表示为normalize(cross(n2-n0,x)),即,通过n2-n0的得到从n0指向n2的向量,通过cross函数计算该向量与X轴向的向量积,再通过normolize函数对该向量积归一化,得到Y轴向;对于Z轴向而言,需要与X轴向和Y轴向组成的平面垂直,从而确定Z轴向。
步骤23,基于第一转换关系,将网格顶点在世界坐标系中的初始渲染参数,转换至局部坐标系中,得到局部渲染参数;其中,初始渲染参数包括:网格顶点的位置参数、法线参数、切线参数中的多种;
这里的初始渲染参数用于确定网格顶点的渲染方式,其中,位置参数用于确定网格顶点渲染至世界坐标系中的哪个位置,法线参数和切线参数用于渲染该网格顶点的朝向,从而影响该网格顶点的光照效果或其他渲染效果。初始渲染参数是在世界坐标系下定义的,该步骤中,需要将这些渲染参数转换至局部坐标系中,从而使质点在发生位移后,网格顶点的渲染参数随着质点的位置进行变化,从而控制目标模型的形变效果。
上述第一转换关系可以通过矩阵或者逆矩阵的形式表征。将初始渲染参数与第一转换关系相乘,即可得到局部坐标系下的局部渲染参数。
步骤24,将网格顶点、目标质点以及局部渲染参数,确定为质点与网格顶点之间的映射关系。
针对目标模型中的每个网格顶点,都可以得到该网格顶点对应的目标质点,该目标质点建立的局部坐标系,以及该局部坐标系与世界坐标系的第一转换关系,进而基于该第一转换关系将世界坐标系下的初始渲染参数转换得到局部坐标系下的局部渲染参数。因此,每个网格顶点就具有了目标质点和局部渲染参数,将这些作为该网格顶点对应的映射关系。
当目标模型在虚拟场景中运行,且发生碰撞事件后,质点模型中质点的第一位置确定,且质点与网格顶点之间的映射关系也已预先确定,该情况下,就可以得到在发生碰撞后,网格顶点在世界坐标系的第一渲染参数,具体通过下述步骤31-步骤34实现。
步骤31,从映射关系中获取网格顶点对应的指定数量的目标质点;
由于目标模型中的每个网格顶点均预设有映射关系,因此,针对每个网格顶点,均可以从该映射关系中得到目标质点。需要说明的是,相邻的网格顶点对应的目标质点可能相同或者部分相同。
步骤32,基于指定数量的目标质点的第一位置,建立形变坐标系,并确定形变坐标系与预设的世界坐标系的第二转换关系;其中,该世界坐标系建立在目标模型和质点模型所处的虚拟场景中;
当碰撞事件发生后,目标质点的位置可能会发生变化,上述第一位置为发生碰撞事件后目标质点的第一位置,如果目标质点在碰撞过程中发生位移,则第一位置与目标质点在发生碰撞事件之前的初始位置不同;如果目标质点在碰撞过程中没有发生位置,则第一位置与目标质点在发生碰撞事件之前的初始位置相同。
通过与前述局部坐标系相同的建立方式,使用目标质点建立形变坐标系,考虑到目标质点可能会发生位移,因而目标质点之间的相对位置也会变化,基于此,形变坐标系的轴向可能会与局部坐标系的轴向不同;而形变坐标系的原点可以与局部坐标系的原点采用同一质点,如前述质点n0作为原点。
上述第二转换关系中包括形变坐标系和世界坐标系的原点之间的转换关系,还包括轴向的转换关系,由于目标质点之间的相对位置在碰撞后会发生变化,因此,轴向的转换关系与局部坐标系对应的转换关系可能会不同。
步骤33,从映射关系中获取网格顶点对应的局部渲染参数;
这里的局部渲染参数指示了网格顶点相对于质点的渲染参数,是通过前述局部坐标系对应的第一转换关系计算得到的,无论目标质点的位置如何变化,该局部渲染参数不会改变。
步骤34,基于第二转换关系和局部渲染参数,确定网格顶点在世界坐标系中的第一渲染参数。
由于目标质点在世界坐标系中的相对位置发生了变化,为了得到网格顶点在发生碰撞事件后在世界坐标系中的渲染参数,需要基于第二转换关系,将网格顶点相对于目标质点的局部渲染参数,再转换至世界坐标系中。在实际实现时,可以将局部渲染参数与第二转换关系相乘,或者将局部渲染参数与第二转换关系的逆矩阵相乘,得到第一渲染参数。通过该第一渲染参数,可以在虚拟场景渲染得到目标模型的形变效果。
当目标模型为车辆模型时,上述实施例可以实现在车辆模型撞击障碍物之后,车辆模型发生变形,在游戏场景下,可以实时高校的渲染得到变形的车辆模型。
在上述实施例中,质点模组使用了塑性弹簧质点模型,用于模拟车辆模型的变形,从而可以高效的更新模型顶点的渲染参数,在VertexShader中更新渲染模型的形状。在实际实现时,可以通过XPBD 技术实时的解算出碰撞变形后的质点位置,在利用渲染管线中的VertexShader阶段,可以在GPU上并行的插值出变形后渲染顶点的位置。
对应于上述方法实施例,参见图4所示的一种虚拟模型的形变控制装置的结构示意图,该装置包括:
模型生成模块40,用于生成位于虚拟场景的目标模型以及与目标模型对应的质点模型,其中,质点模型与目标模型重合设置在虚拟场景中;质点模型包括多个质点,多个质点之间通过虚拟弹簧连接;多个质点组成的形状与目标模型的形状相匹配;质点模型中质点的数量小于目标模型中网格顶点的数量;质点与网格顶点之间具有预设的映射关系;
位置确定模块42,用于响应于质点模型检测到碰撞事件,基于碰撞事件的碰撞参数以及虚拟弹簧的形变阈值,确定碰撞事件发生后质点模型中质点的第一位置;
参数确定模块44,用于基于质点模型中质点的第一位置,以及质点与网格顶点之间的映射关系,确定目标模型中网格顶点的第一渲染参数;通过第一渲染参数渲染目标模型中的网格顶点,得到形变后的目标模型。
上述虚拟模型的形变控制装置,生成位于虚拟场景的目标模型以及与目标模型对应的质点模型,其中,质点模型与目标模型重合设置在虚拟场景中;质点模型包括多个质点,多个质点之间通过虚拟弹簧连接;多个质点组成的形状与目标模型的形状相匹配;质点模型中质点的数量小于目标模型中网格顶点的数量;质点与网格顶点之间具有预设的映射关系;响应于质点模型检测到碰撞事件,基于碰撞事件的碰撞参数以及虚拟弹簧的形变阈值,确定碰撞事件发生后质点模型中质点的第一位置;基于质点模型中质点的第一位置,以及质点与网格顶点之间的映射关系,确定目标模型中网格顶点的第一渲染参数;通过第一渲染参数渲染目标模型中的网格顶点,得到形变后的目标模型。该方式中,为目标模型设置质点模型,该质点模型中包括数量较少的质点,且质点之间通过虚拟弹簧连接,通过质点模型检测碰撞事件,如果发生碰撞,计算质点之间的位移,并通过虚拟弹簧的形变确定目标模型形变后的渲染参数,从而渲染得到形变的目标模型,该方式计算形变的计算量较少,且形变效果逼真,适用于实时渲染的虚拟场景。
可选地,虚拟模型的形变控制装置还包括碰撞检测模块,用于:响应于目标模型位于虚拟场景中,通过质点模型中的质点,检测质点是否发生碰撞;如果质点模型中的至少一个质点发生碰撞,确定检测到碰撞事件。
可选地,位置确定模块,还用于:响应于质点模型检测到碰撞事件,获取碰撞事件的碰撞参数;其中,碰撞参数包括:发生碰撞的目标质点、碰撞方向、碰撞力度中的多种;基于碰撞参数控制质点模型中的各个质点发生位移,在质点发生位移的过程中,监听质点之间连接的虚拟弹簧的形变量;基于形变量和形变阈值,确定质点的第一位置。
可选地,位置确定模块,还用于:如果形变量没有超出虚拟弹簧的形变阈值,控制虚拟弹簧回弹,基于回弹后的虚拟弹簧的长度确定质点的第一位置;如果形变量超出虚拟弹簧的形变阈值,确定虚拟弹簧形变后的长度,基于虚拟弹簧形变后的长度确定质点的第一位置。
可选地,虚拟模型的形变控制装置还包括移除模块,用于:如果形变量超出虚拟弹簧的断裂阈值,移除虚拟弹簧,以控制虚拟弹簧两端的质点相互分离,得到质点的第一位置。
可选地,虚拟模型的形变控制装置还包括映射关系确定模块,用于:将目标模型和质点模型重合放置在预设的世界坐标系中;针对目标模型中的网格顶点,从质点模型中确定指定数量的目标质点,基于指定数量的目标质点建立局部坐标系,并确定局部坐标系与世界坐标系的第一转换关系;基于第一转换关系,将网格顶点在世界坐标系中的初始渲染参数,转换至局部坐标系中,得到局部渲染参数;其中,初始渲染参数包括:网格顶点的位置参数、法线参数、切线参数中的多种;将网格顶点、目标质点以及局部渲染参数,确定为质点与网格顶点之间的映射关系。
可选地,映射关系确定模块,还用于:针对目标模型中的网格顶点,计算网格顶点与质点模型中至少部分质点之间的欧式距离;按照欧式距离由小到大的顺序,对至少部分质点进行排序,得到质点序列;将质点序列中前三个质点确定为目标质点。
可选地,映射关系确定模块,用于:将目标质点中的第一质点作为局部坐标系的原点;将目标质点中第一质点和第二质点的连线方向,作为局部坐标系的第一轴向;将目标质点中,第三质点与第一质点的连线方向与第一轴向的向量积对应的方向,作为局部坐标系的第二轴向;将与第一轴向和第二轴向均垂直的方向,作为局部坐标系的第三轴向,得到局部坐标系。
可选地,参数确定模块,还用于:从映射关系中获取网格顶点对应的指定数量的目标质点;基于指定数量的目标质点的第一位置,建立形变坐标系,并确定形变坐标系与预设的世界坐标系的第二转 换关系;其中,世界坐标系建立在目标模型和质点模型所处的虚拟场景中;从映射关系中获取网格顶点对应的局部渲染参数;基于第二转换关系和局部渲染参数,确定网格顶点在世界坐标系中的第一渲染参数。
本实施例还提供一种电子设备,包括处理器和存储器,存储器存储有能够被处理器执行的机器可执行指令,处理器执行机器可执行指令以实现上述虚拟模型的形变控制方法。该电子设备可以是服务器,也可以是触控终端设备。
参见图5所示,该电子设备包括处理器100和存储器101,该存储器101存储有能够被处理器100执行的机器可执行指令,该处理器100执行机器可执行指令以实现上述虚拟模型的形变控制方法。
进一步地,图5所示的电子设备还包括总线102和通信接口103,处理器100、通信接口103和存储器101通过总线102连接。
其中,存储器101可能包含高速随机存取存储器(RAM,Random Access Memory),也可能还包括非不稳定的存储器(non-volatile memory),例如至少一个磁盘存储器。通过至少一个通信接口103(可以是有线或者无线)实现该系统网元与至少一个其他网元之间的通信连接,可以使用互联网,广域网,本地网,城域网等。总线102可以是ISA总线、PCI总线或EISA总线等。所述总线可以分为地址总线、数据总线、控制总线等。为便于表示,图5中仅用一个双向箭头表示,但并不表示仅有一根总线或一种类型的总线。
处理器100可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器100中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器100可以是通用处理器,包括中央处理器(Central Processing Unit,简称CPU)、网络处理器(Network Processor,简称NP)等;还可以是数字信号处理器(Digital Signal Processor,简称DSP)、专用集成电路(Application Specific Integrated Circuit,简称ASIC)、现场可编程门阵列(Field-Programmable Gate Array,简称FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本公开实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本公开实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器101,处理器100读取存储器101中的信息,结合其硬件完成前述实施例的方法的步骤。
上述电子设备中的处理器,通过执行机器可执行指令,可以实现上述虚拟模型的形变控制方法中的下述操作:
生成位于虚拟场景的目标模型以及与所述目标模型对应的质点模型,其中,质点模型与目标模型重合设置在虚拟场景中;质点模型包括多个质点,多个质点之间通过虚拟弹簧连接;多个质点组成的形状与目标模型的形状相匹配;质点模型中质点的数量小于目标模型中网格顶点的数量;质点与网格顶点之间具有预设的映射关系;响应于质点模型检测到碰撞事件,基于碰撞事件的碰撞参数以及虚拟弹簧的形变阈值,确定碰撞事件发生后质点模型中质点的第一位置;基于质点模型中质点的第一位置,以及质点与网格顶点之间的映射关系,确定目标模型中网格顶点的第一渲染参数;通过第一渲染参数渲染目标模型中的网格顶点,得到形变后的目标模型。
可选地,响应于目标模型位于虚拟场景中,通过质点模型中的质点,检测质点是否发生碰撞;如果质点模型中的至少一个质点发生碰撞,确定检测到碰撞事件。
可选地,响应于质点模型检测到碰撞事件,获取碰撞事件的碰撞参数;其中,碰撞参数包括:发生碰撞的目标质点、碰撞方向、碰撞力度中的多种;基于碰撞参数控制质点模型中的各个质点发生位移,在质点发生位移的过程中,监听质点之间连接的虚拟弹簧的形变量;基于形变量和形变阈值,确定质点的第一位置。
可选地,如果形变量没有超出虚拟弹簧的形变阈值,控制虚拟弹簧回弹,基于回弹后的虚拟弹簧的长度确定质点的第一位置;如果形变量超出虚拟弹簧的形变阈值,确定虚拟弹簧形变后的长度,基于虚拟弹簧形变后的长度确定质点的第一位置。
可选地,如果形变量超出虚拟弹簧的断裂阈值,移除虚拟弹簧,以控制虚拟弹簧两端的质点相互分离,得到质点的第一位置。
可选地,质点与网格顶点之间的映射关系,通过下述方式得到:将目标模型和质点模型重合放置在预设的世界坐标系中;针对目标模型中的网格顶点,从质点模型中确定指定数量的目标质点,基于指定数量的目标质点建立局部坐标系,并确定局部坐标系与世界坐标系的第一转换关系;基于第一转换关系,将网格顶点在世界坐标系中的初始渲染参数,转换至局部坐标系中,得到局部渲染参数;其 中,初始渲染参数包括:网格顶点的位置参数、法线参数、切线参数中的多种;将网格顶点、目标质点以及局部渲染参数,确定为质点与网格顶点之间的映射关系。
可选地,针对目标模型中的网格顶点,计算网格顶点与质点模型中至少部分质点之间的欧式距离;按照欧式距离由小到大的顺序,对至少部分质点进行排序,得到质点序列;将质点序列中前三个质点确定为目标质点。
可选地,将目标质点中的第一质点作为局部坐标系的原点;将目标质点中第一质点和第二质点的连线方向,作为局部坐标系的第一轴向;将目标质点中,第三质点与第一质点的连线方向与第一轴向的向量积对应的方向,作为局部坐标系的第二轴向;将与第一轴向和第二轴向均垂直的方向,作为局部坐标系的第三轴向,得到局部坐标系。
可选地,从映射关系中获取网格顶点对应的指定数量的目标质点;基于指定数量的目标质点的第一位置,建立形变坐标系,并确定形变坐标系与预设的世界坐标系的第二转换关系;其中,世界坐标系建立在目标模型和质点模型所处的虚拟场景中;从映射关系中获取网格顶点对应的局部渲染参数;基于第二转换关系和局部渲染参数,确定网格顶点在世界坐标系中的第一渲染参数。
该方式中,为目标模型设置质点模型,该质点模型中包括数量较少的质点,且质点之间通过虚拟弹簧连接,通过质点模型检测碰撞事件,如果发生碰撞,计算质点之间的位移,并通过虚拟弹簧的形变确定目标模型形变后的渲染参数,从而渲染得到形变的目标模型,该方式计算形变的计算量较少,且形变效果逼真,适用于实时渲染的虚拟场景。
本实施例还提供一种机器可读存储介质,机器可读存储介质存储有机器可执行指令,机器可执行指令在被处理器调用和执行时,机器可执行指令促使处理器实现上述虚拟模型的形变控制方法。
上述机器可读存储介质存储中的机器可执行指令,通过执行该机器可执行指令,可以实现上述虚拟模型的形变控制方法中的下述操作:
生成位于虚拟场景的目标模型以及与所述目标模型对应的质点模型,其中,质点模型与目标模型重合设置在虚拟场景中;质点模型包括多个质点,多个质点之间通过虚拟弹簧连接;多个质点组成的形状与目标模型的形状相匹配;质点模型中质点的数量小于目标模型中网格顶点的数量;质点与网格顶点之间具有预设的映射关系;响应于质点模型检测到碰撞事件,基于碰撞事件的碰撞参数以及虚拟弹簧的形变阈值,确定碰撞事件发生后质点模型中质点的第一位置;基于质点模型中质点的第一位置,以及质点与网格顶点之间的映射关系,确定目标模型中网格顶点的第一渲染参数;通过第一渲染参数渲染目标模型中的网格顶点,得到形变后的目标模型。
可选地,响应于目标模型位于虚拟场景中,通过质点模型中的质点,检测质点是否发生碰撞;如果质点模型中的至少一个质点发生碰撞,确定检测到碰撞事件。
可选地,响应于质点模型检测到碰撞事件,获取碰撞事件的碰撞参数;其中,碰撞参数包括:发生碰撞的目标质点、碰撞方向、碰撞力度中的多种;基于碰撞参数控制质点模型中的各个质点发生位移,在质点发生位移的过程中,监听质点之间连接的虚拟弹簧的形变量;基于形变量和形变阈值,确定质点的第一位置。
可选地,如果形变量没有超出虚拟弹簧的形变阈值,控制虚拟弹簧回弹,基于回弹后的虚拟弹簧的长度确定质点的第一位置;如果形变量超出虚拟弹簧的形变阈值,确定虚拟弹簧形变后的长度,基于虚拟弹簧形变后的长度确定质点的第一位置。
可选地,如果形变量超出虚拟弹簧的断裂阈值,移除虚拟弹簧,以控制虚拟弹簧两端的质点相互分离,得到质点的第一位置。
可选地,质点与网格顶点之间的映射关系,通过下述方式得到:将目标模型和质点模型重合放置在预设的世界坐标系中;针对目标模型中的网格顶点,从质点模型中确定指定数量的目标质点,基于指定数量的目标质点建立局部坐标系,并确定局部坐标系与世界坐标系的第一转换关系;基于第一转换关系,将网格顶点在世界坐标系中的初始渲染参数,转换至局部坐标系中,得到局部渲染参数;其中,初始渲染参数包括:网格顶点的位置参数、法线参数、切线参数中的多种;将网格顶点、目标质点以及局部渲染参数,确定为质点与网格顶点之间的映射关系。
可选地,针对目标模型中的网格顶点,计算网格顶点与质点模型中至少部分质点之间的欧式距离;按照欧式距离由小到大的顺序,对至少部分质点进行排序,得到质点序列;将质点序列中前三个质点确定为目标质点。
可选地,将目标质点中的第一质点作为局部坐标系的原点;将目标质点中第一质点和第二质点的连线方向,作为局部坐标系的第一轴向;将目标质点中,第三质点与第一质点的连线方向与第一轴向的向量积对应的方向,作为局部坐标系的第二轴向;将与第一轴向和第二轴向均垂直的方向,作为局 部坐标系的第三轴向,得到局部坐标系。
可选地,从映射关系中获取网格顶点对应的指定数量的目标质点;基于指定数量的目标质点的第一位置,建立形变坐标系,并确定形变坐标系与预设的世界坐标系的第二转换关系;其中,世界坐标系建立在目标模型和质点模型所处的虚拟场景中;从映射关系中获取网格顶点对应的局部渲染参数;基于第二转换关系和局部渲染参数,确定网格顶点在世界坐标系中的第一渲染参数。
该方式中,为目标模型设置质点模型,该质点模型中包括数量较少的质点,且质点之间通过虚拟弹簧连接,通过质点模型检测碰撞事件,如果发生碰撞,计算质点之间的位移,并通过虚拟弹簧的形变确定目标模型形变后的渲染参数,从而渲染得到形变的目标模型,该方式计算形变的计算量较少,且形变效果逼真,适用于实时渲染的虚拟场景。
本公开实施例所提供的虚拟模型的形变控制方法、装置和电子设备的计算机程序产品,包括存储了程序代码的计算机可读存储介质,所述程序代码包括的指令可用于执行前面方法实施例中所述的方法,具体实现可参见方法实施例,在此不再赘述。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统和装置的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
另外,在本公开实施例的描述中,除非另有明确的规定和限定,术语“安装”、“相连”、“连接”应做广义理解,例如,可以是固定连接,也可以是可拆卸连接,或一体地连接;可以是机械连接,也可以是电连接;可以是直接相连,也可以通过中间媒介间接相连,可以是两个元件内部的连通。对于本领域技术人员而言,可以具体情况理解上述术语在本公开中的具体含义。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
在本公开的描述中,需要说明的是,术语“中心”、“上”、“下”、“左”、“右”、“竖直”、“水平”、“内”、“外”等指示的方位或位置关系为基于附图所示的方位或位置关系,仅是为了便于描述本公开和简化描述,而不是指示或暗示所指的装置或元件必须具有特定的方位、以特定的方位构造和操作,因此不能理解为对本公开的限制。此外,术语“第一”、“第二”、“第三”仅用于描述目的,而不能理解为指示或暗示相对重要性。
最后应说明的是:以上实施例,仅为本公开的具体实施方式,用以说明本公开的技术方案,而非对其限制,本公开的保护范围并不局限于此,尽管参照前述实施例对本公开进行了详细的说明,本领域技术人员应当理解:任何熟悉本技术领域的技术人员在本公开揭露的技术范围内,其依然可以对前述实施例所记载的技术方案进行修改或可轻易想到变化,或者对其中部分技术特征进行等同替换;而这些修改、变化或者替换,并不使相应技术方案的本质脱离本公开实施例技术方案的精神和范围,都应涵盖在本公开的保护范围之内。因此,本公开的保护范围应以权利要求的保护范围为准。

Claims (12)

  1. 一种虚拟模型的形变控制方法,包括:
    生成位于虚拟场景的目标模型以及与所述目标模型对应的质点模型,其中,所述质点模型与所述目标模型重合设置在所述虚拟场景中;所述质点模型包括多个质点,所述多个质点之间通过虚拟弹簧连接;所述多个质点组成的形状与所述目标模型的形状相匹配;所述质点模型中质点的数量小于所述目标模型中网格顶点的数量;所述质点与所述网格顶点之间具有预设的映射关系;
    响应于所述质点模型检测到碰撞事件,基于所述碰撞事件的碰撞参数以及所述虚拟弹簧的形变阈值,确定所述碰撞事件发生后所述质点模型中质点的第一位置;
    基于所述质点模型中质点的第一位置,以及所述质点与所述网格顶点之间的映射关系,确定所述目标模型中网格顶点的第一渲染参数;通过所述第一渲染参数渲染所述目标模型中的网格顶点,得到形变后的所述目标模型。
  2. 根据权利要求1所述的方法,其中,生成位于虚拟场景的目标模型以及与所述目标模型对应的质点模型的步骤之后,所述方法还包括:
    响应于所述目标模型位于所述虚拟场景中,通过所述质点模型中的质点,检测所述质点是否发生碰撞;
    如果所述质点模型中的至少一个质点发生碰撞,确定检测到碰撞事件。
  3. 根据权利要求1所述的方法,其中,响应于所述质点模型检测到碰撞事件,基于所述碰撞事件的碰撞参数以及所述虚拟弹簧的形变阈值,确定所述碰撞事件发生后所述质点模型中质点的第一位置的步骤,包括:
    响应于所述质点模型检测到碰撞事件,获取所述碰撞事件的碰撞参数;其中,所述碰撞参数包括:发生碰撞的目标质点、碰撞方向、碰撞力度中的多种;
    基于所述碰撞参数控制所述质点模型中的各个质点发生位移,在所述质点发生位移的过程中,监听所述质点之间连接的虚拟弹簧的形变量;
    基于所述形变量和所述形变阈值,确定所述质点的第一位置。
  4. 根据权利要求3所述的方法,其中,基于所述形变量和所述形变阈值,确定所述质点的第一位置的步骤,包括:
    如果所述形变量没有超出所述虚拟弹簧的形变阈值,控制所述虚拟弹簧回弹,基于回弹后的所述虚拟弹簧的长度确定所述质点的第一位置;
    如果所述形变量超出所述虚拟弹簧的形变阈值,确定所述虚拟弹簧形变后的长度,基于所述虚拟弹簧形变后的长度确定所述质点的第一位置。
  5. 根据权利要求3所述的方法,其中,所述方法还包括:
    如果所述形变量超出所述虚拟弹簧的断裂阈值,移除所述虚拟弹簧,以控制所述虚拟弹簧两端的质点相互分离,得到所述质点的第一位置。
  6. 根据权利要求1所述的方法,其中,所述质点与所述网格顶点之间的映射关系,通过下述方式得到:
    将所述目标模型和所述质点模型重合放置在预设的世界坐标系中;
    针对所述目标模型中的网格顶点,从所述质点模型中确定指定数量的目标质点,基于所述指定数量的目标质点建立局部坐标系,并确定所述局部坐标系与所述世界坐标系的第一转换关系;
    基于所述第一转换关系,将所述网格顶点在所述世界坐标系中的初始渲染参数,转换至所述局部坐标系中,得到局部渲染参数;其中,所述初始渲染参数包括:所述网格顶点的位置参数、法线参数、切线参数中的多种;
    将所述网格顶点、所述目标质点以及所述局部渲染参数,确定为所述质点与所述网格顶点之间的映射关系。
  7. 根据权利要求6所述的方法,其中,针对所述目标模型中的网格顶点,从所述质点模型中确定指定数量的目标质点的步骤,包括:
    针对所述目标模型中的网格顶点,计算所述网格顶点与所述质点模型中至少部分质点之间的欧式距离;
    按照所述欧式距离由小到大的顺序,对所述至少部分质点进行排序,得到质点序列;将所述质点序列中前三个质点确定为所述目标质点。
  8. 根据权利要求6所述的方法,其中,基于所述指定数量的目标质点建立局部坐标系的步骤,包括:
    将所述目标质点中的第一质点作为局部坐标系的原点;
    将所述目标质点中所述第一质点和第二质点的连线方向,作为所述局部坐标系的第一轴向;
    将所述目标质点中,第三质点与所述第一质点的连线方向与所述第一轴向的向量积对应的方向,作为所述局部坐标系的第二轴向;
    将与所述第一轴向和所述第二轴向均垂直的方向,作为所述局部坐标系的第三轴向,得到所述局部坐标系。
  9. 根据权利要求1所述的方法,其中,基于所述质点模型中质点的第一位置,以及所述质点与所述网格顶点之间的映射关系,确定所述目标模型中网格顶点的第一渲染参数的步骤,包括:
    从所述映射关系中获取所述网格顶点对应的指定数量的目标质点;
    基于所述指定数量的目标质点的第一位置,建立形变坐标系,并确定所述形变坐标系与预设的世界坐标系的第二转换关系;其中,所述世界坐标系建立在所述目标模型和所述质点模型所处的虚拟场景中;
    从所述映射关系中获取所述网格顶点对应的局部渲染参数;
    基于所述第二转换关系和所述局部渲染参数,确定所述网格顶点在所述世界坐标系中的第一渲染参数。
  10. 一种虚拟模型的形变控制装置,其中,包括:
    模型生成模块,用于生成位于虚拟场景的目标模型以及与所述目标模型对应的质点模型,其中,所述质点模型与所述目标模型重合设置在所述虚拟场景中;所述质点模型包括多个质点,所述多个质点之间通过虚拟弹簧连接;所述多个质点组成的形状与所述目标模型的形状相匹配;所述质点模型中质点的数量小于所述目标模型中网格顶点的数量;所述质点与所述网格顶点之间具有预设的映射关系;
    位置确定模块,用于响应于所述质点模型检测到碰撞事件,基于所述碰撞事件的碰撞参数以及所述虚拟弹簧的形变阈值,确定所述碰撞事件发生后所述质点模型中质点的第一位置;
    参数确定模块,用于基于所述质点模型中质点的第一位置,以及所述质点与所述网格顶点之间的映射关系,确定所述目标模型中网格顶点的第一渲染参数;通过所述第一渲染参数渲染所述目标模型中的网格顶点,得到形变后的所述目标模型。
  11. 一种电子设备,其中,包括处理器和存储器,所述存储器存储有能够被所述处理器执行的机器可执行指令,所述处理器执行所述机器可执行指令以实现权利要求1-9任一项所述的虚拟模型的形变控制方法。
  12. 一种机器可读存储介质,其中,所述机器可读存储介质存储有机器可执行指令,所述机器可执行指令在被处理器调用和执行时,所述机器可执行指令促使所述处理器实现权利要求1-9任一项所述的虚拟模型的形变控制方法。
PCT/CN2023/082823 2022-10-08 2023-03-21 虚拟模型的形变控制方法、装置和电子设备 WO2024074016A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211222119.4A CN115700779A (zh) 2022-10-08 2022-10-08 虚拟模型的形变控制方法、装置和电子设备
CN202211222119.4 2022-10-08

Publications (1)

Publication Number Publication Date
WO2024074016A1 true WO2024074016A1 (zh) 2024-04-11

Family

ID=85121017

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/082823 WO2024074016A1 (zh) 2022-10-08 2023-03-21 虚拟模型的形变控制方法、装置和电子设备

Country Status (2)

Country Link
CN (1) CN115700779A (zh)
WO (1) WO2024074016A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115700779A (zh) * 2022-10-08 2023-02-07 网易(杭州)网络有限公司 虚拟模型的形变控制方法、装置和电子设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030085896A1 (en) * 2001-11-07 2003-05-08 Freeman Kyle G. Method for rendering realistic terrain simulation
CN102207997A (zh) * 2011-06-07 2011-10-05 哈尔滨工业大学 基于力反馈的机器人微创手术仿真系统
CN111167120A (zh) * 2019-12-31 2020-05-19 网易(杭州)网络有限公司 游戏中虚拟模型的处理方法和装置
CN111773707A (zh) * 2020-08-11 2020-10-16 网易(杭州)网络有限公司 一种渲染处理的方法及装置、电子设备、存储介质
CN115700779A (zh) * 2022-10-08 2023-02-07 网易(杭州)网络有限公司 虚拟模型的形变控制方法、装置和电子设备


Also Published As

Publication number Publication date
CN115700779A (zh) 2023-02-07

Similar Documents

Publication Publication Date Title
KR101690917B1 (ko) 가상 시나리오에서 사운드를 시뮬레이션하는 방법 및 장치, 및 단말기
CN110073352B (zh) 用于自主车辆仿真的方法、系统及计算机可读介质
WO2024074016A1 (zh) 虚拟模型的形变控制方法、装置和电子设备
US9251618B2 (en) Skin and flesh simulation using finite elements, biphasic materials, and rest state retargeting
CN112915542B (zh) 一种碰撞数据处理方法、装置、计算机设备及存储介质
US8886501B2 (en) Method of simulating deformable object using geometrically motivated model
WO2017088361A1 (zh) 一种基于虚拟现实设备的视锥体裁剪方法及装置
US10410431B2 (en) Skinning a cluster based simulation with a visual mesh using interpolated orientation and position
CN111063032A (zh) 模型渲染方法、系统及电子装置
TWI412948B (zh) 三維物件的碰撞模擬方法
CN111080762A (zh) 虚拟模型渲染方法及装置
US20210406432A1 (en) Calculation method, medium and system for real-time physical engine enhancement based on neural network
CN107342009A (zh) 牙科备牙手术模拟方法及装置
US9111391B2 (en) Image generating device, image generating method, and non-transitory information storage medium
CN114288656A (zh) 虚拟音源物体设置方法、装置、电子设备及介质
CN107050848B (zh) 基于体域网的体感游戏实现方法以及装置
JP6253838B1 (ja) 行動ログ生成方法、行動ログ生成装置及び行動ログ生成プログラム
CN107930124B (zh) 娃娃模型之间配合运动的方法、装置、终端设备及存储介质
CN115293018B (zh) 柔性体的碰撞检测方法、装置、计算机设备及存储介质
CN116384271A (zh) 液滴模型的流动效果的生成方法、装置及电子设备
CN115544411B (zh) 一种网页端bim模型的快速加载与显示方法及系统
CN117152327B (zh) 一种参数调节方法和相关装置
WO2020246508A1 (ja) 物理演算装置、物理演算方法およびプログラム
CN116402985A (zh) 触觉处理方法、装置、存储介质及电子设备
KR20160004467A (ko) 모바일 광고 엔진 및 그의 삼차원 변형물체 모델링 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23874211

Country of ref document: EP

Kind code of ref document: A1