CN109903384B - Model setting method and device, computing equipment and storage medium - Google Patents


Info

Publication number
CN109903384B
CN109903384B (application CN201910305736.2A)
Authority
CN
China
Prior art keywords
determining
drawing control
grid
cube
target
Prior art date
Legal status
Active
Application number
CN201910305736.2A
Other languages
Chinese (zh)
Other versions
CN109903384A (en)
Inventor
Li Jingjing (李晶晶)
Tan Xianliang (谭贤亮)
Yu Zan (喻赞)
Current Assignee
Zhuhai Xishanju Digital Technology Co ltd
Zhuhai Kingsoft Digital Network Technology Co Ltd
Original Assignee
Zhuhai Xishanju Digital Technology Co ltd
Zhuhai Kingsoft Digital Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Xishanju Digital Technology Co., Ltd. and Zhuhai Kingsoft Digital Network Technology Co., Ltd.
Priority to CN201910305736.2A
Publication of CN109903384A
Application granted
Publication of CN109903384B
Legal status: Active
Anticipated expiration

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The application provides a model setting method and device, a computing device, and a storage medium. The method comprises the following steps: determining a selection point according to an input instruction, and determining a target ray according to the selection point and a virtual camera; determining a first intersection point of the target ray and the model, and taking the first intersection point as the center point of a drawing control; and determining the model attribute values within the range of the drawing control according to the center point of the drawing control and the attribute value corresponding to the drawing control. Model attribute values can thus be set by selecting any position in the model through the drawing control, making the operation simple, convenient, and efficient.

Description

Model setting method and device, computing equipment and storage medium
Technical Field
The present disclosure relates to the field of animation technologies, and in particular, to a method and apparatus for setting a model, a computing device, and a storage medium.
Background
For the production of a three-dimensional model in an animation scene, after the three-dimensional model is drawn, its attributes need to be further set. Taking a character model as an example, properties such as the color of the model, the softness of its clothing, and the hardness of its armor must be specified.
In the prior art, the attributes of a three-dimensional model can only be set block by block: blocks are selected, and attributes are then set for each block in turn. Taking coloring as an example, the prior-art coloring of a three-dimensional model is mostly done in software editing tools. When such a tool selects part of a model, it typically supports only block-wise selection, or the selection of a complete object or a predefined part of an object in the model; the selected content is then filled with color, and color editing at an arbitrary size or position is not possible.
Therefore, model setting in the prior art is laborious and inefficient, which affects the production progress of three-dimensional models.
Disclosure of Invention
In view of the foregoing, embodiments of the present application provide a method and apparatus for setting a model, a computing device and a storage medium, so as to solve the technical drawbacks in the prior art.
The embodiment of the application discloses a method for setting a model, which is used in a virtual scene, wherein the virtual scene is provided with a virtual camera, and the method comprises the following steps:
determining a selection point according to an input instruction, and determining a target ray according to the selection point and a virtual camera;
determining a first intersection point of the target ray and the model, and taking the first intersection point as a center point of a drawing control;
and determining a model attribute value in the drawing control range according to the center point of the drawing control and the attribute value corresponding to the drawing control.
Optionally, determining the selection point according to the input instruction includes: determining a selection point according to an input instruction generated by the mouse.
Optionally, determining a first intersection point of the target ray with the model includes:
dividing the model into a plurality of grids, and forming multi-level spatial cubes from the grids, wherein the i-th level spatial cube comprises at least two (i+1)-th level spatial cubes, and i is a positive integer greater than or equal to 1;
intersecting the target ray with the spatial cubes level by level, and determining the first finest-level spatial cube that intersects the target ray as the target spatial cube;
intersecting each grid in the target spatial cube with the target ray to obtain the first grid that intersects the target ray;
and determining the intersection point of the target ray with that grid.
Optionally, intersecting the target ray with the spatial cubes level by level, and determining the first finest-level spatial cube that intersects the target ray as the target spatial cube, comprises the following steps:
S1: intersecting the target ray with the i-th level spatial cubes, and determining the first i-th level spatial cube that intersects the target ray, where 1 ≤ i ≤ n;
S2: intersecting the target ray with the (i+1)-th level spatial cubes contained in that first i-th level spatial cube, and determining the first (i+1)-th level spatial cube that intersects the target ray;
S3: judging whether i is less than n; if so, executing step S4; if not, executing step S5;
S4: incrementing i by 1, then executing step S2;
S5: determining the first n-th level spatial cube that intersects the target ray as the target spatial cube.
Optionally, determining an intersection point of the target ray and the grid includes:
obtaining the coordinates of the intersection point of the target ray and the grid through a predefined function.
Optionally, determining the model attribute value in the drawing control range according to the center point of the drawing control and the attribute value corresponding to the drawing control includes:
determining a drawing control range according to the center point of the drawing control;
traversing all grid vertexes of the model, and determining grid vertexes positioned in a drawing control range;
determining attribute values of grid vertices positioned in the drawing control range according to the attribute values corresponding to the drawing control;
and determining the attribute value of the grid according to the attribute value of the vertex of the grid.
Optionally, determining the attribute value of the grid vertex within the drawing control according to the attribute value corresponding to the drawing control includes:
and determining the attribute value of the grid vertex in the drawing control range according to the product of the attribute value corresponding to the drawing control, the weight coefficient and the distance between the grid vertex and the center point of the drawing control.
Optionally, the attribute value includes one or more of a color value, a weight value, a bone weight value, and a hardness value.
The embodiment of the application discloses a device of model setting for in the virtual scene, the virtual scene is provided with virtual camera, the device includes:
the ray determination module is configured to determine a selection point according to an input instruction, and determine a target ray according to the selection point and the virtual camera;
the center point determining module is configured to determine a first intersection point of the target ray and the model, and the first intersection point is used as a center point of a drawing control;
and the model attribute value determining module is configured to determine the model attribute value in the drawing control range according to the center point of the drawing control and the attribute value corresponding to the drawing control.
Optionally, the ray determination module is specifically configured to determine the selection point according to an input instruction generated by the mouse.
Optionally, the center point determination module is specifically configured to:
dividing the model into a plurality of grids, and forming multi-level spatial cubes from the grids, wherein the i-th level spatial cube comprises at least two (i+1)-th level spatial cubes, and i is a positive integer greater than or equal to 1;
intersecting the target ray with the spatial cubes level by level, and determining the first finest-level spatial cube that intersects the target ray as the target spatial cube;
each grid in the target space cube is intersected with the target ray to obtain a first grid intersected with the target ray;
an intersection of the target ray with the grid is determined.
Optionally, the center point determination module specifically includes:
a first intersection module configured to intersect the target ray with the i-th level spatial cubes and determine the first i-th level spatial cube that intersects the target ray, where 1 ≤ i ≤ n;
a second intersection module configured to intersect the target ray with the (i+1)-th level spatial cubes contained in that first i-th level spatial cube, and determine the first (i+1)-th level spatial cube that intersects the target ray;
a judging module configured to judge whether i is less than n; if so, the incrementing module is executed; if not, the determining module is executed;
an incrementing module configured to increment i by 1 and then execute the second intersection module;
a determining module configured to determine the first n-th level spatial cube that intersects the target ray as the target spatial cube.
Optionally, the center point determination module is specifically configured to obtain the coordinates of the intersection point of the target ray and the grid through a predefined function.
Optionally, the center point determination module is specifically configured to:
determining a drawing control range according to the center point of the drawing control;
traversing all grid vertexes of the model, and determining grid vertexes positioned in a drawing control range;
determining attribute values of grid vertices positioned in the drawing control range according to the attribute values corresponding to the drawing control;
and determining the attribute value of the grid according to the attribute value of the vertex of the grid.
Optionally, the center point determination module is specifically configured to determine the attribute value of each grid vertex within the drawing control range as the product of the attribute value corresponding to the drawing control, a weight coefficient, and the distance between the grid vertex and the center point of the drawing control.
Optionally, the attribute value includes one or more of a color value, a weight value, a bone weight value, and a hardness value.
The embodiment of the application discloses a computing device, which comprises a memory, a processor and computer instructions stored on the memory and capable of running on the processor, wherein the processor executes the instructions to realize the steps of the method for setting a model as described above.
The embodiment of the application discloses a computer readable storage medium storing computer instructions, which are characterized in that the instructions, when executed by a processor, implement the steps of the method for setting a model as described above.
According to the method and device for setting a model provided by the embodiments of the present application, the target ray is determined according to the selection point and the virtual camera, the first intersection point of the target ray and the model is taken as the center point of the drawing control, and the model attribute values within the range of the drawing control are determined according to the center point of the drawing control and the attribute value corresponding to the drawing control, so that model attribute values can be set at any position selected in the model through the drawing control.
Drawings
FIG. 1 is a schematic structural diagram of a computing device of an embodiment of the present application;
FIG. 2 is a flow chart of a method of model setup according to an embodiment of the present application;
FIG. 3 is a flow chart of a method of model setup according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a binary tree of an embodiment of the present application;
FIG. 5 is a flow chart of a method of model setup according to an embodiment of the present application;
FIG. 6 is a flow chart of a method of model setup according to an embodiment of the present application;
FIG. 7 is a flow chart of a method of model setup according to another embodiment of the present application;
fig. 8 is a schematic structural view of a device for model setting in the embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The application may, however, be embodied in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from its spirit; the application is therefore not limited to the specific embodiments disclosed below.
The terminology used in one or more embodiments of the specification is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the specification. As used in this specification, in one or more embodiments, and in the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used in one or more embodiments of this specification to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of one or more embodiments of the present description, a first may also be referred to as a second and, similarly, a second may also be referred to as a first. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
In the present application, a method and apparatus for setting a model, a computing device, and a storage medium are provided, and detailed descriptions are given in the following embodiments.
Fig. 1 is a block diagram illustrating a configuration of a computing device 100 according to an embodiment of the present description. The components of the computing device 100 include, but are not limited to, a memory 110 and a processor 120. The processor 120 is coupled to the memory 110 via a bus 130, and a database 150 is used to store data.
Computing device 100 also includes an access device 140 that enables computing device 100 to communicate via one or more networks 160. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the Internet. The access device 140 may include one or more of any type of wired or wireless network interface (e.g., a Network Interface Card (NIC)), such as an IEEE 802.11 Wireless Local Area Network (WLAN) interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, or a Near Field Communication (NFC) interface.
In one embodiment of the present description, the above-described components of computing device 100, as well as other components not shown in FIG. 1, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device shown in FIG. 1 is for exemplary purposes only and is not intended to limit the scope of the present description. Those skilled in the art may add or replace other components as desired.
Computing device 100 may be any type of stationary or mobile computing device including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smart phone), wearable computing device (e.g., smart watch, smart glasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 100 may also be a mobile or stationary server.
Wherein the processor 120 may perform the steps of the method shown in fig. 2. Fig. 2 is a schematic flow chart illustrating a method of model setup according to an embodiment of the present application, including steps 201 to 203.
201. Determine a selection point according to the input instruction, and determine a target ray according to the selection point and the virtual camera.
It should be explained that the virtual camera is set in a virtual scene, for example a 3D scene, and the corresponding 3D window shows the content captured by the virtual camera. The position of the camera is defined by the system and serves as the starting point of the target ray, i.e., the ray emitted from that origin toward the selection point. Each selection point therefore corresponds to exactly one ray.
The input instruction may take various forms, for example, an instruction input via a mouse or a keyboard.
Taking mouse input as an example, in this step, determining a selection point according to an input instruction includes: determining a selection point according to an input instruction generated by the mouse.
Specifically, the input instruction generated by the mouse may be a mouse movement or click instruction. For example, the mouse is moved to a point and then clicked, whereupon that point is determined to be the selection point.
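As an illustration of how such a ray can be derived, the following Python sketch unprojects a screen-space selection point into a world-space ray using the camera's view and projection matrices; this is a generic pinhole-camera formulation with illustrative names, not code from the patent.

import numpy as np

def picking_ray(mouse_x, mouse_y, viewport_w, viewport_h, view, proj):
    # Unproject a screen-space selection point into a world-space ray.
    # `view` and `proj` are 4x4 camera matrices; all names here are
    # illustrative assumptions, not taken from the patent.
    ndc_x = 2.0 * mouse_x / viewport_w - 1.0   # normalized device coords
    ndc_y = 1.0 - 2.0 * mouse_y / viewport_h   # y is flipped in screen space
    inv = np.linalg.inv(proj @ view)
    near = inv @ np.array([ndc_x, ndc_y, -1.0, 1.0])  # point on near plane
    far = inv @ np.array([ndc_x, ndc_y, 1.0, 1.0])    # point on far plane
    near, far = near[:3] / near[3], far[:3] / far[3]
    direction = far - near
    direction /= np.linalg.norm(direction)
    return near, direction  # ray origin (camera side) and unit direction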
202. Determine a first intersection point of the target ray and the model, and take the first intersection point as the center point of the drawing control.
The drawing control may take various forms, such as a brush.
Specifically, referring to fig. 3, determining the first intersection point of the target ray and the model in this step includes:
301. Divide the model into a plurality of grids, and form multi-level spatial cubes from the grids, wherein the i-th level spatial cube comprises at least two (i+1)-th level spatial cubes, and i is a positive integer greater than or equal to 1.
Optionally, the grids may take various forms, for example triangular meshes.
Forming multi-level spatial cubes from the grids specifically includes: arranging the plurality of grids into a multi-level grid hierarchy, and then forming the multi-level spatial cubes from that hierarchy.
For example, a model is divided into a plurality of grids, which are then organized into a three-level hierarchy: the model comprises at least two level-1 grid groups, each level-1 group comprises at least two level-2 groups, and each level-2 group comprises at least two level-3 groups.
Three levels of spatial cubes are formed from this hierarchy, wherein each level-1 spatial cube comprises at least two level-2 spatial cubes and each level-2 spatial cube comprises at least two level-3 spatial cubes.
Specifically, there are various methods for organizing a plurality of grids into a multi-level hierarchy, such as binary-tree space partitioning, quadtree space partitioning, and octree space partitioning.
The number of levels n may be set according to practical requirements, for example 10, 20, or 30.
302. Intersect the target ray with the spatial cubes level by level, and determine the first finest-level spatial cube that intersects the target ray as the target spatial cube.
For example, suppose the model is divided into three levels of spatial cubes, where each level-1 spatial cube comprises at least two level-2 spatial cubes and each level-2 spatial cube comprises at least two level-3 spatial cubes. Then, in step 302, the first level-3 spatial cube that intersects the target ray is determined as the target spatial cube.
303. Intersect each grid in the target spatial cube with the target ray to obtain the first grid that intersects the target ray.
Specifically, in this step all grids in the target spatial cube are traversed and each is intersected with the target ray; among the grids that intersect the target ray, the first one is then determined.
304. Determine the intersection point of the target ray and that grid.
Specifically, the coordinates of the intersection point of the target ray and the grid are obtained through a predefined function, such as a function provided directly by the system's underlying Direct3D layer.
Alternatively, the intersection can be computed as follows: first calculate the intersection point of the target ray with the plane in which the grid lies, then judge whether that point lies inside the grid; if so, it is the intersection point of the target ray and the grid.
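One concrete way to carry out this plane-intersection-plus-inside-test is the standard Möller–Trumbore ray-triangle routine, sketched below for a triangular grid; the patent does not mandate this particular algorithm, and the names are illustrative.

import numpy as np

def ray_triangle_intersection(origin, direction, v0, v1, v2, eps=1e-9):
    # Returns the intersection point of the ray with triangle (v0, v1, v2),
    # or None if they do not intersect. The barycentric coordinates u, v
    # double as the "inside the grid" test described above.
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:          # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    t_vec = origin - v0
    u = np.dot(t_vec, p) * inv_det
    if u < 0.0 or u > 1.0:      # outside the triangle
        return None
    q = np.cross(t_vec, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:  # outside the triangle
        return None
    t = np.dot(e2, q) * inv_det
    if t < 0.0:                 # intersection lies behind the ray origin
        return None
    return origin + t * direction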
The binary-tree space partitioning method, as one way of forming multi-level spatial cubes from a plurality of grids, is explained below with reference to FIG. 4.
Each model consists of thousands of grids. Referring to FIG. 4, all grids corresponding to the model may be repeatedly divided in two: first, all grids of the model are divided into two classes, forming two level-1 spatial cubes (a41). Classification then continues: the grids corresponding to each level-1 spatial cube are divided into two classes to generate two corresponding level-2 spatial cubes (a42), the grids corresponding to each level-2 spatial cube are divided into two classes to generate two corresponding level-3 spatial cubes (a43), and so on, until the final-level spatial cubes are generated.
Eventually, all grids of the 3D model form a binary tree, and each grid has its own node in the tree. When the intersection point of the ray and the model is sought, the computation proceeds along this binary tree. The resulting binary tree is shown in FIG. 4, where the numbers in the circles indicate the grid indices.
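A minimal sketch of such a binary-tree partition follows: each node's axis-aligned box plays the role of a spatial cube, and the grid set is halved at every level until the final-level cubes are reached. The median split along the widest centroid axis is an assumption; the patent does not specify the split rule.

import numpy as np

class Node:
    def __init__(self, box_min, box_max, grids, left=None, right=None):
        self.box_min, self.box_max = box_min, box_max  # the "spatial cube"
        self.grids = grids            # triangle list; only used at leaves
        self.left, self.right = left, right

def build_tree(grids, centroids, depth=0, max_depth=20, leaf_size=8):
    # grids: list of triangles; centroids: (N, 3) array of their centers.
    box_min = centroids.min(axis=0)
    box_max = centroids.max(axis=0)
    if depth >= max_depth or len(grids) <= leaf_size:
        return Node(box_min, box_max, grids)      # final-level cube
    axis = np.argmax(box_max - box_min)           # widest axis (assumption)
    order = np.argsort(centroids[:, axis])        # sort, then halve in two
    half = len(order) // 2
    l, r = order[:half], order[half:]
    return Node(box_min, box_max, None,
                build_tree([grids[i] for i in l], centroids[l], depth + 1),
                build_tree([grids[i] for i in r], centroids[r], depth + 1))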
Specifically, referring to fig. 5, step 302 includes:
501. Intersect the target ray with the i-th level spatial cubes, and determine the first i-th level spatial cube that intersects the target ray, where 1 ≤ i ≤ n.
Here n is the number of levels, e.g., 20 or 30.
502. Intersect the target ray with the (i+1)-th level spatial cubes contained in that first i-th level spatial cube, and determine the first (i+1)-th level spatial cube that intersects the target ray.
503. Judge whether i is less than n; if so, execute step 504; if not, execute step 505.
504. Increment i by 1, then execute step 502.
505. Determine the first n-th level spatial cube that intersects the target ray as the target spatial cube.
Through steps 501-505, the target ray is intersected with the spatial cubes level by level without traversing all of them: at each level, only the (i+1)-th level spatial cubes contained in the first intersecting i-th level cube are tested, until the first n-th level spatial cube is obtained as the target spatial cube. In this way the intersection search converges level by level, saving computation and improving speed.
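The descent of steps 501-505 can be written as a loop over the binary tree sketched above: at each level only the first child cube hit by the ray is entered, which is what makes the search converge. This is an interpretive sketch, not code from the patent; ray_hits_box is the standard slab test.

def find_target_cube(root, ray_origin, ray_dir):
    # Walk down from the root, keeping only the first child cube hit by
    # the ray at each level, so sibling subtrees are never visited.
    node = root
    while node.left or node.right:
        for child in (node.left, node.right):
            if child is not None and ray_hits_box(
                    ray_origin, ray_dir, child.box_min, child.box_max):
                node = child          # first intersecting (i+1)-level cube
                break
        else:
            return None               # ray misses every child cube
    return node                       # the final-level target cube

def ray_hits_box(origin, direction, box_min, box_max, eps=1e-12):
    # Slab test for a ray against an axis-aligned box.
    t_near, t_far = -float("inf"), float("inf")
    for a in range(3):
        d = direction[a] if abs(direction[a]) > eps else eps
        t1 = (box_min[a] - origin[a]) / d
        t2 = (box_max[a] - origin[a]) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far and t_far >= 0.0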
203. Determine the model attribute values within the drawing control range according to the center point of the drawing control and the attribute value corresponding to the drawing control.
Specifically, referring to fig. 6, step 203 includes:
601. Determine the drawing control range according to the center point of the drawing control.
Specifically, the drawing control range may be determined according to the center point of the drawing control and the outer contour of the drawing control.
Here, the brush shape may be customized, for example a spherical brush, a square brush, or a conical brush.
Taking a spherical brush as an example, once the center point of the brush is determined, the brush range can be determined from the brush's spherical radius.
Taking a square brush as an example, once the center point of the brush is determined, the brush range can be determined from the side length of the square.
602. Traverse all grid vertices of the model, and determine the grid vertices located within the drawing control range.
Taking a triangular mesh as an example, each grid has three vertices. By traversing all grids of the model, the grid vertices of the model can be obtained; the vertices located within the drawing control range can then be determined from that range.
Taking a spherical brush as an example: if the distance between a grid vertex and the sphere center is greater than the radius of the brush, the vertex is ignored; if the distance is less than the radius, the vertex is determined to be within the brush range.
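For spherical and square brushes, the in-range test of step 602 reduces to a distance or per-axis comparison, as in this sketch; the (N, 3) vertex array and the function names are illustrative.

import numpy as np

def vertices_in_spherical_brush(vertices, center, radius):
    # Keep the grid vertices whose distance to the brush center point is
    # within the brush radius.
    dist = np.linalg.norm(vertices - center, axis=1)
    return np.nonzero(dist <= radius)[0]   # indices of vertices in range

def vertices_in_square_brush(vertices, center, side):
    # Square (cubic) brush variant: a vertex is in range if every
    # coordinate offset from the center is within half the side length.
    return np.nonzero(
        np.all(np.abs(vertices - center) <= side / 2.0, axis=1))[0]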
603. Determine the attribute values of the grid vertices located within the drawing control range according to the attribute value corresponding to the drawing control.
Specifically, step 603 includes: determining the attribute value of each grid vertex within the drawing control range as the product of the attribute value corresponding to the drawing control, a weight coefficient, and the distance between the grid vertex and the center point of the drawing control.
Through this step, the closer a grid vertex is to the center point of the drawing control, the smaller its attribute value.
Taking the color attribute as an example, a smaller attribute value may be set to produce a darker color and a larger attribute value a lighter color. Thus, the closer a grid vertex is to the center point of the drawing control, the darker its color; the farther away, the lighter.
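Read literally, step 603 computes each in-range vertex attribute as the product named above, so values shrink toward the brush center; a minimal sketch of that rule, with the weight coefficient left as a free parameter:

import numpy as np

def vertex_attribute(brush_value, weight, vertex, center):
    # attribute = brush value * weight coefficient * distance from the
    # vertex to the brush center point, so vertices nearer the center
    # receive smaller values (e.g. darker colors).
    distance = np.linalg.norm(np.asarray(vertex) - np.asarray(center))
    return brush_value * weight * distance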
Of course, the attribute values of the grid vertices within the drawing control range may also be set in other ways; for example, they may be uniformly set to a single attribute value without distinction.
604. Determine the attribute value of each grid according to the attribute values of its vertices.
Taking a triangular mesh as an example, the attribute value of a grid is determined from the attribute values of its three vertices.
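The patent does not fix how the three vertex values combine into a single per-grid value; the sketch below assumes a plain average.

def grid_attribute(v0, v1, v2):
    # Combine the attribute values of a triangle's three vertices into one
    # per-grid value. An unweighted average is an assumption: the patent
    # leaves the exact combination rule open.
    return (v0 + v1 + v2) / 3.0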
When the drawing control moves to the next position, steps 601-604 are re-executed, so that the model attribute values within the drawing control range are set continuously. The user therefore only needs to define the attribute value corresponding to the drawing control and then operate the mouse to move the control; model attribute values can be set flexibly at any position selected in the model, giving a high degree of operational freedom and fast processing.
The attribute values may be of various kinds; for example, they may include one or more of a color value, a weight value, a bone weight value, and a hardness value.
Taking color values as an example, the method of this embodiment can be used to edit model colors.
Taking weight values as an example, in a 3D material distribution system, a drawing control can be used to set material properties such as the weight of the material.
Taking skeletal weight values as an example, in a 3D animation system, the skeletal weights of an animation, such as the weights governing the magnitude of grid vertex offsets during motion, can be set with the drawing control.
In addition, attribute values can be identified by the color values drawn onto the grid vertices. Taking a character model as an example, blue may represent silk, with deeper blue indicating softer silk; red may represent metal armor, with deeper red indicating harder armor.
These properties can thus be represented by color values on the grid vertices via the drawing control; in actual use, the user can give these color values other customized special meanings. For example, the hardness of the armor may be defined to range from 0 to 100, with red = 0 indicating a hardness of 0 and red = 255 indicating a hardness of 100.
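As arithmetic, this user-defined mapping is a linear rescaling between the 0-100 hardness range and the 0-255 red channel; a small sketch of the example:

def hardness_to_red(hardness):
    # hardness in [0, 100] stored linearly in the red channel:
    # red = 0 means hardness 0, red = 255 means hardness 100.
    return round(hardness / 100.0 * 255.0)

def red_to_hardness(red):
    return red / 255.0 * 100.0

assert hardness_to_red(100) == 255 and red_to_hardness(0) == 0.0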
According to the method for setting the model of this embodiment, the target ray is determined according to the selection point and the virtual camera, the first intersection point of the target ray and the model is taken as the center point of the drawing control, and the model attribute values within the range of the drawing control are determined according to the center point of the drawing control and the attribute value corresponding to the drawing control, so that model attribute values can be set at any position selected in the model through the drawing control.
The method of setting the model of the present embodiment will be schematically described below taking the setting of the color attribute of the model as an example. Referring to fig. 7, the embodiment of the application further discloses a method for setting a model color, which includes:
701. Determine a selection point according to an input instruction generated by the mouse, and determine a target ray according to the selection point and the virtual camera.
702. Divide the model into a plurality of grids, and form multi-level spatial cubes from the grids, wherein the i-th level spatial cube comprises at least two (i+1)-th level spatial cubes, and i is a positive integer greater than or equal to 1.
703. Intersect the target ray with the i-th level spatial cubes, and determine the first i-th level spatial cube that intersects the target ray, where 1 ≤ i ≤ n.
704. Intersect the target ray with the (i+1)-th level spatial cubes contained in that first i-th level spatial cube, and determine the first (i+1)-th level spatial cube that intersects the target ray.
705. Judge whether i is less than n; if so, execute step 706; if not, execute step 707.
706. Increment i by 1, then execute step 704.
707. Determine the first n-th level spatial cube that intersects the target ray as the target spatial cube.
708. Intersect each grid in the target spatial cube with the target ray to obtain the first grid that intersects the target ray.
709. Determine the intersection point of the target ray and that grid, and take the intersection point as the center point of the brush.
710. Determine the brush range according to the center point of the brush.
711. Traverse all grid vertices of the model, and determine the grid vertices located within the brush range.
712. Determine the color values of the grid vertices located within the brush range according to the color value corresponding to the brush.
713. Determine the color value of each grid according to the color values of its vertices.
It should be noted that each grid vertex has an initial value. For a grid, taking a triangular mesh as an example, even if only one of its vertices lies within the brush range and has its color value changed, the color value of that grid must still be determined from the color values of all three of its vertices.
A grid may have a single color value, in which case it presents a single color, such as red or blue.
A grid may also have multiple color values, in which case it presents a gradient. The gradient colors are automatically rendered by the system according to the color values of the grid vertices.
According to the method for setting the model color of this embodiment, the target ray is determined according to the selection point and the virtual camera, the first intersection point of the target ray and the model is taken as the center point of the brush, and the model colors within the brush range are determined according to the center point of the brush and the color value corresponding to the brush, so that the model color can be set at any position selected in the model through the brush.
The embodiment of the application also discloses a device for setting a model, see fig. 8, which is used in a virtual scene, wherein the virtual scene is provided with a virtual camera, and the device comprises:
a ray determination module 801 configured to determine a selected point according to an input instruction, and determine a target ray according to the selected point and a virtual camera;
a center point determining module 802 configured to determine a first intersection point of the target ray and the model, and take the first intersection point as a center point of a drawing control;
the model attribute value determining module 803 is configured to determine a model attribute value within the scope of the drawing control according to the center point of the drawing control and the attribute value corresponding to the drawing control.
Optionally, the ray determination module 801 is specifically configured to determine the selection point according to an input instruction generated by the mouse.
Optionally, the center point determination module 802 is specifically configured to:
dividing the model into a plurality of grids, and forming multi-level spatial cubes from the grids, wherein the i-th level spatial cube comprises at least two (i+1)-th level spatial cubes, and i is a positive integer greater than or equal to 1;
intersecting the target ray with the spatial cubes level by level, and determining the first finest-level spatial cube that intersects the target ray as the target spatial cube;
each grid in the target space cube is intersected with the target ray to obtain a first grid intersected with the target ray;
an intersection of the target ray with the grid is determined.
Optionally, the center point determination module 802 specifically includes:
a first intersection module configured to intersect the target ray with the i-th level spatial cubes and determine the first i-th level spatial cube that intersects the target ray, where 1 ≤ i ≤ n;
a second intersection module configured to intersect the target ray with the (i+1)-th level spatial cubes contained in that first i-th level spatial cube, and determine the first (i+1)-th level spatial cube that intersects the target ray;
a judging module configured to judge whether i is less than n; if so, the incrementing module is executed; if not, the determining module is executed;
an incrementing module configured to increment i by 1 and then execute the second intersection module;
a determining module configured to determine the first n-th level spatial cube that intersects the target ray as the target spatial cube.
Optionally, the center point determination module 802 is specifically configured to obtain the coordinates of the intersection point of the target ray and the grid through a predefined function.
Optionally, the center point determination module 802 is specifically configured to:
determining a drawing control range according to the center point of the drawing control;
traversing all grid vertexes of the model, and determining grid vertexes positioned in a drawing control range;
determining attribute values of grid vertices positioned in the drawing control range according to the attribute values corresponding to the drawing control;
and determining the attribute value of the grid according to the attribute value of the vertex of the grid.
Optionally, the center point determination module 802 is specifically configured to determine the attribute value of each grid vertex within the drawing control range as the product of the attribute value corresponding to the drawing control, a weight coefficient, and the distance between the grid vertex and the center point of the drawing control.
Wherein the attribute values include one or more of a color value, a weight value, a bone weight value, and a hardness value.
According to the device for setting a model of this embodiment, the target ray is determined according to the selection point and the virtual camera, the first intersection point of the target ray and the model is taken as the center point of the drawing control, and the model attribute values within the range of the drawing control are determined according to the center point of the drawing control and the attribute value corresponding to the drawing control, so that model attribute values can be set at any position selected in the model through the drawing control.
The above is a schematic version of the device for model setting of this embodiment. It should be noted that the technical solution of the device and the technical solution of the method for setting a model belong to the same concept; for details not described in the technical solution of the device, reference may be made to the description of the technical solution of the method for setting a model.
An embodiment of the present application also provides a computer-readable storage medium storing computer instructions that, when executed by a processor, implement the steps of a method of model setup as described above.
The above is an exemplary version of the computer-readable storage medium of this embodiment. It should be noted that the technical solution of the storage medium and the technical solution of the method for setting a model belong to the same concept; for details not described in the technical solution of the storage medium, reference may be made to the description of the technical solution of the method for setting a model.
The computer instructions include computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as series of action combinations, but those skilled in the art should understand that the present application is not limited by the order of actions described, as some steps may be performed in another order or simultaneously. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily all required by the present application.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are provided only to aid in elucidating the present application. The alternative embodiments are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and its practical application, thereby enabling others skilled in the art to understand and utilize the application. This application is to be limited only by the claims and their full scope and equivalents.

Claims (16)

1. A method of model setting, for use in a virtual scene, the virtual scene being provided with a virtual camera, the method comprising:
determining a selection point according to an input instruction, and determining a target ray according to the selection point and a virtual camera;
after the grids contained in the model are subjected to tree-structure space division, determining multi-level spatial cubes of the target model based on the multi-level grids corresponding to the tree nodes in the tree structure obtained by the division; determining a first intersection point of the target ray and the multi-level spatial cubes, and taking the first intersection point as the center point of a drawing control, wherein the grids of the model correspond to nodes in the tree structure obtained by the division;
and determining the model attribute values located within the drawing control range according to the center point of the drawing control and the attribute value corresponding to the drawing control, wherein the attribute value comprises one or more of a color value, a weight value, a skeleton weight value, and a hardness value.
2. The method of claim 1, wherein determining the selection point based on the input instruction comprises: determining a selection point according to an input instruction generated by the mouse.
3. The method of claim 1, wherein determining a first intersection of the target ray with a model comprises:
dividing the model into a plurality of grids, and forming multi-level spatial cubes from the grids, wherein the i-th level spatial cube comprises at least two (i+1)-th level spatial cubes, and i is a positive integer greater than or equal to 1;
intersecting the target ray with the spatial cubes level by level, and determining the first finest-level spatial cube that intersects the target ray as the target spatial cube;
intersecting each grid in the target spatial cube with the target ray to obtain the first grid that intersects the target ray;
and determining the intersection point of the target ray with that grid.
4. The method of claim 3, wherein intersecting the target ray with the spatial cubes level by level and determining the first finest-level spatial cube that intersects the target ray as the target spatial cube comprises:
S1: intersecting the target ray with the i-th level spatial cubes, and determining the first i-th level spatial cube that intersects the target ray, where 1 ≤ i ≤ n;
S2: intersecting the target ray with the (i+1)-th level spatial cubes contained in that first i-th level spatial cube, and determining the first (i+1)-th level spatial cube that intersects the target ray;
S3: judging whether i is less than n; if so, executing step S4; if not, executing step S5;
S4: incrementing i by 1, then executing step S2;
S5: determining the first n-th level spatial cube that intersects the target ray as the target spatial cube.
5. The method of claim 3, wherein determining the intersection of the target ray with the grid comprises:
obtaining the coordinates of the intersection point of the target ray and the grid through a predefined function.
6. The method of claim 3, wherein determining model attribute values within the scope of the drawing control based on a center point of the drawing control and attribute values corresponding to the drawing control comprises:
determining a drawing control range according to the center point of the drawing control;
traversing all grid vertexes of the model, and determining grid vertexes positioned in a drawing control range;
determining attribute values of grid vertices positioned in the drawing control range according to the attribute values corresponding to the drawing control;
and determining the attribute value of the grid according to the attribute value of the vertex of the grid.
7. The method of claim 6, wherein determining attribute values for mesh vertices within the scope of a drawing control from attribute values corresponding to the drawing control comprises:
determining the attribute value of each grid vertex within the drawing control range as the product of the attribute value corresponding to the drawing control, a weight coefficient, and the distance between the grid vertex and the center point of the drawing control.
8. An apparatus for model setting, for use in a virtual scene, the virtual scene being provided with a virtual camera, the apparatus comprising:
the ray determination module is configured to determine a selection point according to an input instruction, and determine a target ray according to the selection point and the virtual camera;
the center point determining module is configured to: after the grids contained in the model are subjected to tree-structure space division, determine multi-level spatial cubes of the target model based on the multi-level grids corresponding to the tree nodes in the tree structure obtained by the division; and determine a first intersection point of the target ray and the multi-level spatial cubes, taking the first intersection point as the center point of a drawing control, wherein the grids of the model correspond to nodes in the tree structure obtained by the division;
the model attribute value determining module is configured to determine a model attribute value in the range of the drawing control according to the center point of the drawing control and an attribute value corresponding to the drawing control, wherein the attribute value comprises a color value, a weight value, a skeleton weight value and a hardness value.
9. The apparatus of claim 8, wherein the ray determination module is specifically configured to: and determining a selection point according to an input instruction generated by the mouse.
10. The apparatus of claim 8, wherein the center point determination module is specifically configured to:
dividing the model into a plurality of grids, and forming multi-level spatial cubes from the grids, wherein the i-th level spatial cube comprises at least two (i+1)-th level spatial cubes, and i is a positive integer greater than or equal to 1;
intersecting the target ray with the spatial cubes level by level, and determining the first finest-level spatial cube that intersects the target ray as the target spatial cube;
each grid in the target space cube is intersected with the target ray to obtain a first grid intersected with the target ray;
an intersection of the target ray with the grid is determined.
11. The apparatus of claim 10, wherein the center point determination module is specifically configured to:
a first intersection module configured to intersect the target ray with the i-th level spatial cubes and determine the first i-th level spatial cube that intersects the target ray, where 1 ≤ i ≤ n;
a second intersection module configured to intersect the target ray with the (i+1)-th level spatial cubes contained in that first i-th level spatial cube, and determine the first (i+1)-th level spatial cube that intersects the target ray;
a judging module configured to judge whether i is less than n; if so, the incrementing module is executed; if not, the determining module is executed;
an incrementing module configured to increment i by 1 and then execute the second intersection module;
a determining module configured to determine the first n-th level spatial cube that intersects the target ray as the target spatial cube.
12. The apparatus of claim 10, wherein the center point determination module is specifically configured to obtain the coordinates of the intersection point of the target ray and the grid through a predefined function.
13. The apparatus of claim 10, wherein the center point determination module is specifically configured to:
determining a drawing control range according to the center point of the drawing control;
traversing all grid vertexes of the model, and determining grid vertexes positioned in a drawing control range;
determining attribute values of grid vertices positioned in the drawing control range according to the attribute values corresponding to the drawing control;
and determining the attribute value of the grid according to the attribute value of the vertex of the grid.
14. The apparatus of claim 13, wherein the center point determination module is specifically configured to determine the attribute value of each grid vertex within the drawing control range as the product of the attribute value corresponding to the drawing control, a weight coefficient, and the distance between the grid vertex and the center point of the drawing control.
15. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor, when executing the instructions, implements the steps of the method of any of claims 1-7.
16. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 7.
CN201910305736.2A 2019-04-16 2019-04-16 Model setting method and device, computing equipment and storage medium Active CN109903384B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910305736.2A CN109903384B (en) 2019-04-16 2019-04-16 Model setting method and device, computing equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910305736.2A CN109903384B (en) 2019-04-16 2019-04-16 Model setting method and device, computing equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109903384A CN109903384A (en) 2019-06-18
CN109903384B (en) 2023-12-26

Family

Family ID: 66955951

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910305736.2A Active CN109903384B (en) 2019-04-16 2019-04-16 Model setting method and device, computing equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109903384B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113838164A (en) * 2021-08-18 2021-12-24 北京商询科技有限公司 Grid drawing method and device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101789083A * 2008-10-30 2010-07-28 Adobe Inc. Actual real-time simulation of brush behaviour
CN103903293A * 2012-12-27 2014-07-02 Tencent Technology (Shenzhen) Co., Ltd. Generation method and device for brush-stroke-feel art image
CN104123747A (en) * 2014-07-17 2014-10-29 北京毛豆科技有限公司 Method and system for multimode touch three-dimensional modeling
CN107085862A (en) * 2017-05-19 2017-08-22 东华大学 A kind of stereo clipping method of three-dimensional virtual garment
CN107093200A (en) * 2017-03-29 2017-08-25 珠海金山网络游戏科技有限公司 A kind of method of Skeletal Skinned Animation surface mesh additional model

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7248259B2 (en) * 2001-12-12 2007-07-24 Technoguide As Three dimensional geological model construction

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101789083A * 2008-10-30 2010-07-28 Adobe Inc. Actual real-time simulation of brush behaviour
CN103903293A * 2012-12-27 2014-07-02 Tencent Technology (Shenzhen) Co., Ltd. Generation method and device for brush-stroke-feel art image
CN104123747A (en) * 2014-07-17 2014-10-29 北京毛豆科技有限公司 Method and system for multimode touch three-dimensional modeling
CN107093200A (en) * 2017-03-29 2017-08-25 珠海金山网络游戏科技有限公司 A kind of method of Skeletal Skinned Animation surface mesh additional model
CN107085862A (en) * 2017-05-19 2017-08-22 东华大学 A kind of stereo clipping method of three-dimensional virtual garment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Interactive selection method of three-dimensional spatial points based on DEM raster data structure; Pan Deji et al.; Science of Surveying and Mapping (《测绘科学》); 2009-11-30; pp. 198-200 *

Also Published As

Publication number Publication date
CN109903384A (en) 2019-06-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 519000 Room 102, 202, 302 and 402, No. 325, Qiandao Ring Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, Room 102 and 202, No. 327 and Room 302, No. 329

Applicant after: Zhuhai Jinshan Digital Network Technology Co.,Ltd.

Applicant after: Zhuhai Xishanju Digital Technology Co.,Ltd.

Address before: 519000 Room 102, 202, 302 and 402, No. 325, Qiandao Ring Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, Room 102 and 202, No. 327 and Room 302, No. 329

Applicant before: ZHUHAI KINGSOFT ONLINE GAME TECHNOLOGY Co.,Ltd.

Applicant before: ZHUHAI SEASUN MOBILE GAME TECHNOLOGY Co.,Ltd.

GR01 Patent grant