CN109448137B - Interaction method, interaction device, electronic equipment and storage medium - Google Patents
- Publication number: CN109448137B (application CN201811235940.3A)
- Authority
- CN
- China
- Prior art keywords
- interaction
- model
- interactive
- channel height
- local coordinate
- Legal status: Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
Abstract
The embodiment of the invention provides an interaction method and device, electronic equipment and a storage medium, and relates to the technical field of computer graphics. The method comprises the following steps: responding to interactive operation on an interactive model, and acquiring an interactive position of the interactive operation by taking a frame as a unit; acquiring texture coordinates of corresponding local coordinate points on the interactive model according to the interactive position; modifying the color of the corresponding position on the single-channel height map of the interaction model based on the texture coordinates of the local coordinate points to generate a corresponding interaction track on the single-channel height map of the interaction model; and acquiring a corresponding normal map according to the single-channel height map of the generated interaction track so as to present an interaction result of the interaction operation on the interaction model. The technical scheme of the embodiment of the invention can modify the model resources in real time according to the user interaction, and improve the sense of reality of the three-dimensional scene.
Description
Technical Field
The present invention relates to the field of computer graphics technologies, and in particular, to an interaction method, an interaction apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of internet technology, three-dimensional modeling technology is widely applied to various fields such as game modeling, movie production, architectural design and the like. In order for a user to have a better experience in a three-dimensional scene, it is desirable for the user to be able to interact with the model.
At present, a user interacts with an object in a three-dimensional scene by clicking, and the processing performed on the model during interaction is generally limited to modifying its material parameters: for example, when the model is selected, it is highlighted as a whole, or contour light or edge outlining is added to indicate that the object is selected. In such schemes, it is difficult to modify the model itself in real time according to the user's interactive operation, which affects the user's interactive experience.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the invention and therefore may include information that does not constitute prior art that is already known to a person of ordinary skill in the art.
Disclosure of Invention
Embodiments of the present invention provide an interaction method, an interaction apparatus, an electronic device, and a computer-readable storage medium, so as to overcome, at least to a certain extent, the problem that interactive use experience of a user is affected because a model cannot be modified in real time according to an interactive operation of the user due to limitations and defects of related technologies.
According to a first aspect of embodiments of the present invention, there is provided an interaction method, including: responding to interactive operation on an interactive model, and acquiring an interactive position of the interactive operation by taking a frame as a unit; acquiring texture coordinates of corresponding local coordinate points on the interactive model according to the interactive position; modifying the color of the corresponding position on the single-channel height map of the interaction model based on the texture coordinates of the local coordinate points to generate a corresponding interaction track on the single-channel height map of the interaction model; and acquiring a corresponding normal map according to the single-channel height map of the generated interaction track so as to present an interaction result of the interaction operation on the interaction model.
In some example embodiments of the present invention, based on the foregoing solution, obtaining texture coordinates of a corresponding local coordinate point on the interaction model according to the interaction position includes: acquiring a screen coordinate of the interaction position, and determining a ray corresponding to the screen coordinate in a world coordinate system of the three-dimensional scene; determining world coordinates of an intersection point of the ray and the interaction model in the world coordinate system; determining a local coordinate point of the interaction position projected on the interaction model according to the world coordinates of the intersection point; and determining the texture coordinate of the local coordinate point based on the mapping relation between the local coordinate point and the texture coordinate of the vertex of the corresponding triangular patch.
In some example embodiments of the present invention, based on the foregoing solution, determining the local coordinate point of the interaction position projected on the interaction model according to the world coordinates of the intersection point includes: determining a corresponding inverse matrix according to the world transformation matrix of the interaction model; and determining the local coordinate point of the interaction position projected on the interaction model based on the world coordinates of the intersection point and the inverse matrix.
In some example embodiments of the present invention, based on the foregoing solution, determining texture coordinates of the local coordinate point based on a mapping relationship between the local coordinate point and texture coordinates of a corresponding vertex of the triangle patch includes: acquiring local coordinates and texture coordinates of the vertex of the triangular patch corresponding to the world coordinates of the intersection point; determining a transformation matrix according to the local coordinates and the texture coordinates of the vertex of the triangular patch; and determining texture coordinates corresponding to the local coordinate points on the interactive model based on the transformation matrix and the local coordinate points on the interactive model.
In some example embodiments of the present invention, based on the foregoing solution, after the texture coordinates of the corresponding local coordinate point on the interaction model are obtained, the interaction method further includes: saving the obtained texture coordinates of the local coordinate points in units of frames.
In some example embodiments of the present invention, based on the foregoing scheme, modifying the color of the corresponding position on the single-channel height map of the interaction model based on the texture coordinates of the local coordinate points includes: determining the corresponding position on the single-channel height map of the interaction model in a signed distance field manner based on the texture coordinates of the local coordinate points; and modifying the color of the corresponding pixel at that position.
In some example embodiments of the invention, based on the foregoing scheme, determining the corresponding position on the single-channel height map of the interaction model using the signed distance field includes: determining the corresponding position on the single-channel height map of the interaction model using the signed distance field and an axis-aligned bounding box.
In some example embodiments of the present invention, based on the foregoing scheme, generating a corresponding interaction track on the single-channel height map of the interaction model includes: generating the corresponding interaction track on the single-channel height map of the interaction model in a Bresenham manner based on the texture coordinates of the local coordinate points.
In some example embodiments of the present invention, based on the foregoing solution, obtaining a corresponding normal map according to the single-channel height map generating the interaction trajectory to present an interaction result of the interaction operation on the interaction model, includes: converting the single-channel height map into a corresponding normal map by using a Sobel operator; and drawing the interaction model by using a rendering pipeline based on physics based on the normal map so as to present an interaction result of the interaction operation on the interaction model.
In some example embodiments of the present invention, based on the foregoing scheme, the interaction method further includes: and in response to the end of the interactive operation, serializing and uploading the stored series of texture coordinates to a server.
In some example embodiments of the present invention, based on the foregoing scheme, the interaction method further includes: and restoring the single-channel height map into the original single-channel height map in response to the operation of clearing all the interaction tracks.
According to a second aspect of the embodiments of the present invention, there is provided an interaction apparatus, including: the acquisition unit is used for responding to the interactive operation on the interactive model and acquiring the interactive position of the interactive operation by taking a frame as a unit; the texture coordinate acquisition unit is used for acquiring the texture coordinates of corresponding local coordinate points on the interaction model according to the interaction position; the modifying unit is used for modifying the color of the corresponding position on the single-channel height map of the interaction model based on the texture coordinates of the local coordinate points so as to generate a corresponding interaction track on the single-channel height map of the interaction model; and the display unit is used for acquiring a corresponding normal map according to the single-channel height map of the generated interaction track so as to present an interaction result of the interaction operation on the interaction model.
According to a third aspect of the present invention, there is provided an electronic apparatus comprising: a processor; and a memory having computer readable instructions stored thereon which, when executed by the processor, implement the interaction method according to any one of the above.
According to a fourth aspect of the invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements an interaction method according to any one of the above.
In the interaction method in the example embodiment of the invention, when the interaction operation of the user is detected, a local coordinate point of the interaction position of the interaction operation on the screen projected on the interaction model is obtained by taking a frame as a unit; determining texture coordinates of the local coordinate points according to the obtained mapping relation of the local coordinate points and the texture coordinates of the vertexes of the corresponding triangular patches; generating a corresponding interaction track on a single-channel height map of the interaction model according to texture coordinates of local coordinate points of a previous frame and a current frame; and calculating the modified single-channel height map to generate a normal map, and displaying an interaction result of the interaction operation on the interaction model according to the normal map. On one hand, local coordinate points of interaction operation acting on the interaction model are obtained by taking a frame as a unit, and acting points of the interaction operation of a user on the model can be obtained in real time; on the other hand, generating a corresponding interaction track on a single-channel height map of the interaction model based on texture coordinates of local coordinate points of the current frame and the previous frame, and recording the interaction track of the interaction operation on the model in real time; on the other hand, the normal map is generated according to the modified single-channel height map, the interaction effect is displayed through the normal map, the model can be modified in real time according to the interaction operation of the user, and the reality of the scene and the interaction use experience of the user are improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 schematically illustrates a schematic diagram of an interaction method flow, in accordance with some embodiments of the invention;
FIG. 2 schematically illustrates a schematic diagram of the Bresenham line drawing method principle, in accordance with some embodiments of the present invention;
FIG. 3 schematically illustrates a schematic diagram of the principle of signed distance fields according to some embodiments of the invention;
FIG. 4 schematically illustrates a schematic diagram of the principles of an axis-aligned bounding box according to some embodiments of the present invention;
FIG. 5 schematically illustrates a diagram of calculating a weighted representation of pixel colors by a Sobel convolution factor, according to some embodiments of the invention;
FIG. 6 schematically illustrates a schematic diagram of applying a normal map model, according to some embodiments of the invention;
FIG. 7 schematically illustrates a schematic diagram of a single-channel height map generated by user interaction, in accordance with some embodiments of the invention;
FIG. 8 schematically illustrates a schematic diagram of converting a single-channel height map to a normal map using a Sobel operator, in accordance with some embodiments of the invention;
FIG. 9 schematically illustrates a schematic view of superimposing a computed normal map with a model native normal map, in accordance with some embodiments of the invention;
FIG. 10 is a schematic diagram that schematically illustrates the computation of a final model rendering effect by a physics-based rendering technique by adding other maps of a model, in accordance with some embodiments of the present invention;
FIG. 11 schematically illustrates the application of the present invention to a three-dimensional scene, in accordance with some embodiments of the present invention;
FIG. 12 schematically illustrates a schematic diagram of an interaction device, according to some embodiments of the invention;
FIG. 13 schematically illustrates a structural diagram of a computer system of an electronic device, in accordance with some embodiments of the present invention;
FIG. 14 schematically illustrates a schematic diagram of a computer-readable storage medium according to some embodiments of the invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals denote the same or similar parts in the drawings, and thus, a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations or operations have not been shown or described in detail to avoid obscuring aspects of the invention.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
In the present exemplary embodiment, an interaction method is first provided, and fig. 1 schematically illustrates a schematic diagram of a flow of the interaction method according to some embodiments of the present invention. Referring to fig. 1, the interactive method may include the steps of:
step S110, responding to the interactive operation on the interactive model, and acquiring the interactive position of the interactive operation by taking a frame as a unit;
step S120, acquiring texture coordinates of corresponding local coordinates on the interaction model according to the interaction position;
step S130, modifying the color of the corresponding position on the single-channel height map of the interaction model based on the texture coordinates of the local coordinate points so as to generate a corresponding interaction track on the single-channel height map of the interaction model;
and step S140, acquiring a corresponding normal map according to the single-channel height map of the generated interaction track so as to present an interaction result of the interaction operation on the interaction model.
According to the interaction method in the present exemplary embodiment, on one hand, the local coordinate point of the interaction operation acting on the interaction model is obtained in units of frames, and the acting point of the user interaction operation on the model can be obtained in real time; on the other hand, generating a corresponding interaction track on a single-channel height map of the interaction model based on texture coordinates of local coordinate points of the current frame and the previous frame, and recording the interaction track of the interaction operation on the model in real time; on the other hand, the normal map is generated according to the modified single-channel height map, the interaction effect is displayed through the normal map, the model can be modified in real time according to the interaction operation of the user, and the reality of the scene and the interaction use experience of the user are improved.
Next, the interaction method in the present exemplary embodiment will be further explained.
In step S110, in response to an interactive operation on an interactive model, an interactive position of the interactive operation is acquired in units of frames.
In an example embodiment of the present invention, when it is detected that a user performs an interaction operation, an interaction position of the interaction operation on a screen is detected, for example, when the user performs a dragging operation using a finger, a dragging track of the finger on the screen is an interaction position of the user on the screen, information of the interaction position is obtained in units of frames, and a local coordinate point projected on an interaction model is determined according to the interaction position.
In an exemplary embodiment of the invention, the screen coordinates corresponding to the interaction position are determined from the user's interaction position on the screen, where the screen coordinates treat the screen of the terminal device as a bounded plane, and the ray corresponding to the interaction position in the world coordinate system of the three-dimensional scene is calculated from those screen coordinates. Through ray detection (Raycast), the first model intersected by the ray in world coordinates and the intersection point of the ray with the model surface are obtained, yielding the information of the triangular patch in whose local coordinates the intersection lies and the world coordinates P_world of the intersection point. A triangular patch is a face composing an irregular object model in three-dimensional modeling software; that is, an irregular object model is formed by connecting a large number of triangular patches.
Further, the world transformation matrix M_world of the interaction model is pre-computed, and the corresponding inverse matrix M_inverse is calculated from M_world. The local coordinate point P_local of the intersection on the model is then computed from the world coordinates P_world of the intersection, as in formula (1):

P_local = P_world × M_inverse    (1)

where P_local is the local coordinate on the model corresponding to the world coordinates of the intersection of the ray with the model, P_world is the world coordinate of that intersection, and M_inverse is the inverse of the model's world transformation matrix M_world.
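As an illustration, formula (1) can be sketched in a few lines of Python. This is not part of the patent: the `mat4_mul_point` and `translation` helpers, and the example model placement at (10, 0, 5), are assumptions invented for the demonstration; a real engine would use its own math library.

```python
def mat4_mul_point(m, p):
    """Multiply a 4x4 row-major matrix by a 3D point (homogeneous w = 1)."""
    x, y, z = p
    out = [m[r][0] * x + m[r][1] * y + m[r][2] * z + m[r][3] for r in range(4)]
    w = out[3] if out[3] != 0 else 1.0
    return [out[0] / w, out[1] / w, out[2] / w]

def translation(tx, ty, tz):
    """Build a 4x4 row-major translation matrix."""
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

# Suppose the model sits at (10, 0, 5) in the world; the inverse of its world
# matrix is then a translation by (-10, 0, -5).
M_inverse = translation(-10.0, 0.0, -5.0)

P_world = [11.0, 2.0, 6.0]                    # ray/model intersection in world space
P_local = mat4_mul_point(M_inverse, P_world)  # formula (1): P_local = P_world x M_inverse
print(P_local)                                # [1.0, 2.0, 1.0]
```

For non-trivial world matrices (rotation and scale), only M_inverse changes; the multiplication stays the same.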
In step S120, texture coordinates of a corresponding local coordinate point on the interactive model are obtained according to the interactive position.
In an exemplary embodiment of the present invention, the local coordinates and texture coordinates (UV coordinates) of the vertices of the intersected triangular patch corresponding to the world coordinates of the intersection point are acquired, a transformation matrix M_UV is calculated from the local coordinates and texture coordinates of those vertices, and the texture coordinate P_UV corresponding to the local coordinate point on the interaction model is calculated from the transformation matrix M_UV and the local coordinate point, as in formula (2):

P_UV = M_UV × P_local    (2)

where P_UV is the texture coordinate corresponding to the local coordinate point of the interaction model, M_UV is the coordinate transformation matrix, and P_local is the coordinate of the local coordinate point of the interaction model.
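One common way to realize the vertex-to-UV mapping behind formula (2) is barycentric interpolation over the hit triangle. The Python sketch below is a hypothetical illustration (shown in 2D local coordinates for brevity); the vertex positions, UVs, and function name are invented for the example and are not specified by the patent.

```python
def barycentric_uv(p, verts, uvs):
    """Interpolate the UV at 2D point p inside triangle (a, b, c)."""
    (ax, ay), (bx, by), (cx, cy) = verts
    px, py = p
    # Twice the signed area of the triangle; barycentric weights w0, w1, w2
    # express p as a convex combination of the three vertices.
    d = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w0 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / d
    w1 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / d
    w2 = 1.0 - w0 - w1
    u = w0 * uvs[0][0] + w1 * uvs[1][0] + w2 * uvs[2][0]
    v = w0 * uvs[0][1] + w1 * uvs[1][1] + w2 * uvs[2][1]
    return (u, v)

# Made-up triangle whose UVs happen to match its local coordinates:
tri    = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
tri_uv = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(barycentric_uv((0.25, 0.25), tri, tri_uv))  # (0.25, 0.25)
```

The same weights apply unchanged when the vertices carry 3D local coordinates: the weights are computed from the hit point, then used to blend the three vertex UVs.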
In step S130, the color of the corresponding position on the single-channel height map of the interaction model is modified based on the texture coordinates of the local coordinate points to generate a corresponding interaction trajectory on the single-channel height map of the interaction model.
In an exemplary embodiment of the present invention, a white single-channel height map (height map) is initialized in advance for the material channel of the interaction model, with heights normalized to 0-1, where 1 represents white and 0 represents black. A white single-channel height map means the whole map starts at the highest point, so during interaction the object can only be engraved, i.e., recessed; conversely, if the whole map starts at the lowest point (black), the object can only be raised, i.e., embossed. The texture coordinate P_UV1 of the current frame is calculated by the method of step S120 for computing the texture coordinate corresponding to the local coordinate point, the texture coordinate P_UV2 of the previous frame is obtained, and the line segment (P_UV1, P_UV2) is drawn on the single-channel height map according to P_UV1 and P_UV2.
In an exemplary embodiment of the present invention, a Bresenham line-drawing method is adopted to generate the corresponding interaction track on the single-channel height map of the interaction model from the texture coordinates of the local coordinate points of the current frame and the previous frame. The Bresenham line-drawing method is an accurate and efficient raster line generation algorithm proposed by Bresenham that uses only incremental integer calculations: as shown in fig. 2, within the traversal range of x, the value of y is adjusted according to the slope, with no floating-point calculation needed. However, the lines generated by this method show noticeable aliasing (jagged edges).
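The integer-only traversal described above can be sketched as follows. This is an illustrative Python version of the classic algorithm, not code from the patent; in practice the two UV endpoints would first be scaled to pixel coordinates on the height map.

```python
def bresenham(x0, y0, x1, y1):
    """Return all integer pixels on the line from (x0, y0) to (x1, y1)."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy  # combined error term; only integer add/compare below
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:      # step in x
            err += dy
            x0 += sx
        if e2 <= dx:      # step in y
            err += dx
            y0 += sy
    return points

# e.g. a short per-frame segment between two successive UV samples,
# already mapped to pixel coordinates:
print(bresenham(0, 0, 4, 2))  # [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2)]
```

Each returned pixel would then have its height-map value darkened to record the stroke.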
In another exemplary embodiment of the present invention, an interaction area on a single-channel height map of an interaction model is determined by using a Signed Distance Function (SDF) manner according to texture coordinates of local coordinate points of a current frame and a previous frame, and a corresponding interaction track is generated by changing colors of corresponding pixels on the interaction area, for example, changing the area corresponding to the texture coordinates to black. By using signed distance fields, the trajectory of the user's interaction can be made smoother.
Referring to fig. 3, taking the SDF of a circle as an example: in the circle of the left diagram of fig. 3, the SDF value at each pixel position represents the distance to the circle's boundary; the SDF value inside the circle is negative, and the SDF value outside the circle is positive. The right diagram in fig. 3 shows the distribution of the signed distance field of the circle in 2D, with a uniform transition at the circular boundary, which makes the line on the single-channel height map smoother and provides an anti-aliasing effect.
Further, the SDF of the line segment (P_UV1, P_UV2) can be combined with the Axis-Aligned Bounding Box (AABB) method: the CPU simulates the supersampling (SuperSampling) process of GPU rasterization, determines the interaction area on the single-channel height map of the interaction model, and generates the corresponding interaction track by changing the colors of the pixels in that area. With the axis-aligned bounding box, only the pixel colors inside the AABB on the single-channel height-map texture need to be updated, which reduces the per-frame pixel computation and improves the computational efficiency of the system.
Referring to fig. 4, since the distance that the user interaction trajectory can move is often short in each frame time, for example, 0.033s, assuming that the user interaction position is from a to b, the calculation of the frame can be finished by only updating the color of the pixel inside the AABB on the height map texture, and compared with the recalculation of the entire texture, the amount of calculation can be reduced.
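The SDF-plus-AABB update might look roughly like the Python sketch below. This is a hypothetical illustration: the map size, brush radius, and falloff rule (height = distance / radius inside the brush) are invented example choices, not taken from the patent.

```python
def segment_sdf(px, py, ax, ay, bx, by):
    """Distance from point (px, py) to the segment (ax, ay)-(bx, by)."""
    abx, aby = bx - ax, by - ay
    apx, apy = px - ax, py - ay
    denom = abx * abx + aby * aby
    # Project p onto the segment, clamping t to [0, 1].
    t = 0.0 if denom == 0 else max(0.0, min(1.0, (apx * abx + apy * aby) / denom))
    cx, cy = ax + t * abx, ay + t * aby
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def carve_segment(height, a, b, radius):
    """Lower (darken) pixels within `radius` of segment a-b, visiting only the AABB."""
    h, w = len(height), len(height[0])
    x0 = max(0, int(min(a[0], b[0]) - radius)); x1 = min(w - 1, int(max(a[0], b[0]) + radius) + 1)
    y0 = max(0, int(min(a[1], b[1]) - radius)); y1 = min(h - 1, int(max(a[1], b[1]) + radius) + 1)
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            d = segment_sdf(x, y, a[0], a[1], b[0], b[1])
            if d <= radius:
                # Smooth falloff: 0 (black, deepest) on the stroke centre,
                # rising back toward the surrounding height at the brush edge.
                height[y][x] = min(height[y][x], d / radius)
    return height

height_map = [[1.0] * 16 for _ in range(16)]   # all-white single-channel height map
carve_segment(height_map, (3, 8), (12, 8), 2.0)
```

Pixels outside the padded AABB are never touched, which is the per-frame saving fig. 4 illustrates.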
In step S140, according to the single-channel height map of the generated interaction trajectory, a corresponding normal map is obtained to present an interaction result of the interaction operation on the interaction model.
In an exemplary embodiment of the present invention, a Sobel operator is used in the GPU to convert the single-channel height map modified according to the user's interaction position into the corresponding normal map. The Sobel operator is mainly used for edge detection; it is a discrete difference operator that computes an approximation of the gradient of the image brightness function. When the Sobel convolution factors are used to calculate the value at each pixel, the nine pixels in the 3×3 neighborhood around that position are sampled; the weights are shown in fig. 5, and the detailed calculation formulas are given in formulas (3) and (4):
G_x = (-1)·f(x-1,y-1) + 0·f(x,y-1) + 1·f(x+1,y-1)
    + (-2)·f(x-1,y)   + 0·f(x,y)   + 2·f(x+1,y)
    + (-1)·f(x-1,y+1) + 0·f(x,y+1) + 1·f(x+1,y+1)
    = [f(x+1,y-1) + 2·f(x+1,y) + f(x+1,y+1)] - [f(x-1,y-1) + 2·f(x-1,y) + f(x-1,y+1)]    (3)

G_y = 1·f(x-1,y-1)    + 2·f(x,y-1)    + 1·f(x+1,y-1)
    + 0·f(x-1,y)      + 0·f(x,y)      + 0·f(x+1,y)
    + (-1)·f(x-1,y+1) + (-2)·f(x,y+1) + (-1)·f(x+1,y+1)
    = [f(x-1,y-1) + 2·f(x,y-1) + f(x+1,y-1)] - [f(x-1,y+1) + 2·f(x,y+1) + f(x+1,y+1)]    (4)
where f(x, y) represents the value of the single-channel height map at point (x, y), and G_x, G_y give the color value of the normal map at (x, y). The interaction model is then drawn with the original PBR (Physically Based Rendering) pipeline according to the calculated normal map, so as to display the interaction result of the interactive operation on the interaction model. For example, referring to fig. 6, which schematically illustrates a model with the normal map applied, it can be observed that modifying the normal map makes the concave-convex appearance of the model surface closer to reality, improving the realism of the model.
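Formulas (3) and (4), followed by packing the gradients into a tangent-space normal color, might look like the following Python sketch. The clamped border lookup, the `strength` parameter, and the normal-packing convention (normalize (-G_x, -G_y, 1) and remap to [0, 1]) are common conventions assumed for illustration; the patent itself only specifies the Sobel weights.

```python
def sobel_normal(height, x, y, strength=1.0):
    """Compute the packed normal-map color at pixel (x, y) of a height map."""
    h, w = len(height), len(height[0])
    def f(i, j):  # height lookup with clamped borders
        return height[min(max(j, 0), h - 1)][min(max(i, 0), w - 1)]
    # Formulas (3) and (4): horizontal and vertical Sobel gradients.
    gx = (f(x+1, y-1) + 2*f(x+1, y) + f(x+1, y+1)) - (f(x-1, y-1) + 2*f(x-1, y) + f(x-1, y+1))
    gy = (f(x-1, y-1) + 2*f(x, y-1) + f(x+1, y-1)) - (f(x-1, y+1) + 2*f(x, y+1) + f(x+1, y+1))
    # Pack into a unit normal, then remap from [-1, 1] to [0, 1] color space.
    nx, ny, nz = -gx * strength, -gy * strength, 1.0
    length = (nx*nx + ny*ny + nz*nz) ** 0.5
    return (nx/length * 0.5 + 0.5, ny/length * 0.5 + 0.5, nz/length * 0.5 + 0.5)

flat = [[0.5] * 4 for _ in range(4)]
print(sobel_normal(flat, 1, 1))  # (0.5, 0.5, 1.0): a flat region points straight up
```

Running this over every pixel of the carved height map yields the normal map that fig. 8 shows being produced on the GPU.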
It should be noted that, a part of the above steps may be executed in the CPU, a part of the steps may be executed in the GPU, or all of the steps may be executed in the GPU, which is not particularly limited in the present invention.
Referring to fig. 7, fig. 7 is a diagram of the single-channel height map after modification according to the user's interaction track; referring to fig. 8, the single-channel height map obtained in fig. 7 is converted into a normal map by the Sobel operator; referring to fig. 9, fig. 9 is obtained by superimposing the normal map calculated in fig. 8 onto the model's original normal map; referring to fig. 10, the other maps of the model, such as the color map (Albedo Map) and the mix map (Mix Map), are added to the superimposed normal map of fig. 9 and the PBR calculation is performed to obtain fig. 10, the final model drawing effect; referring to fig. 11, fig. 11 schematically illustrates the effect of interacting with "sand" in a three-dimensional scene by applying an exemplary embodiment of the present invention.
Further, in an exemplary embodiment of the present invention, when it is detected that the user stops the dragging operation, i.e., no further texture modification is required, the series of texture coordinates obtained up to this point, for example (P UV1 , P UV2 , P UV3 , ……), is uploaded to a server or stored locally in the form of a file, saving the user's interaction result so that the last interaction result can still be seen when the user enters the scene next time.
In contrast to saving the entire image, the exemplary embodiment of the present invention employs a track-saving method: the user's dragging track is recorded once per frame, for example in the form [ <0.1,0.2> <0.3,0.2> <0.4,0.3> …… <-1,-1> ], where the texture coordinate <X,Y> of each point on the map is recorded, X and Y range from 0 to 1, and <-1,-1> represents the end of a stroke. This method can save more complex interaction results with fewer memory resources and achieves complete storage and effect synchronization. For example, for a 512×512 map, the track-saving method can save more than 99% of the space.
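A minimal sketch of this track format, with the <-1,-1> end-of-stroke marker (function names are assumptions, not the patent's API):

```python
END = (-1.0, -1.0)  # sentinel marking the end of one stroke

def serialize_strokes(strokes):
    """Flatten a list of strokes (each a list of (u, v) texture
    coordinates in [0, 1]) into one track, appending the <-1,-1>
    end-of-stroke marker after each stroke."""
    track = []
    for stroke in strokes:
        track.extend(stroke)
        track.append(END)
    return track

def deserialize_strokes(track):
    """Split a flat track back into strokes at each <-1,-1> marker."""
    strokes, current = [], []
    for point in track:
        if point == END:
            strokes.append(current)
            current = []
        else:
            current.append(point)
    return strokes
```

For scale: a 512×512 single-channel 8-bit map occupies 262,144 bytes, while a track of a few hundred coordinate pairs is only a few kilobytes, consistent with the more-than-99% saving claimed above.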
In addition, in an example embodiment of the present invention, when it is detected that the user executes an operation instruction to clear all interaction tracks, the single-channel height map modified according to the user's interaction track is restored to the original white single-channel height map, that is, the height over the whole map is restored to 1, undoing the regions modified according to the user's interaction track.
Further, with the white single-channel height map initialized in the above steps, the user interaction accomplished is mainly interaction that makes the model surface concave, such as carving or sand painting. If the surface of the model needs to be raised instead, a black single-channel height map needs to be generated by pre-calculation, and the texture coordinates corresponding to the user's interaction track are changed to white during interaction.
It is noted that although the steps of the methods of the present invention are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken into multiple step executions, etc.
Furthermore, in the present exemplary embodiment, an interactive apparatus is also provided. Referring to fig. 12, the interactive apparatus includes: an acquisition unit 1210, a texture coordinate acquisition unit 1220, a modification unit 1230, and a display unit 1240. Wherein: the obtaining unit 1210 is configured to obtain, in response to an interactive operation on an interactive model, an interactive position of the interactive operation in units of frames; the texture coordinate obtaining unit 1220 is configured to obtain a texture coordinate of a corresponding local coordinate point on the interaction model according to the interaction position; the modifying unit 1230 is configured to modify, based on the texture coordinates of the local coordinate points, the color of the corresponding position on the single-channel height map of the interaction model to generate a corresponding interaction trajectory on the single-channel height map of the interaction model; the display unit 1240 is configured to obtain a corresponding normal map according to the single-channel height map of the generated interaction trajectory, so as to present an interaction result of the interaction operation on the interaction model.
In an exemplary embodiment of the present invention, based on the foregoing scheme, the texture coordinate obtaining unit 1220 includes: the screen coordinate acquisition unit is used for acquiring the screen coordinate of the interaction position and determining the corresponding ray of the screen coordinate in a world coordinate system of the three-dimensional scene; a world coordinate determination unit, configured to determine a world coordinate of an intersection point of the ray and the interaction model in the world coordinate system; the local coordinate point determining unit is used for determining a local coordinate point projected on the interactive model by the interactive position according to the world coordinates of the intersection point; and the texture coordinate determination unit is used for determining the texture coordinate of the local coordinate point based on the mapping relation between the local coordinate point and the texture coordinate of the vertex of the corresponding triangular patch.
In an exemplary embodiment of the present invention, based on the foregoing, the local coordinate point determination unit is configured to: determining a corresponding inverse matrix according to the world transformation matrix of the interaction model; and determining a local coordinate point of the interaction position projected on the interaction model based on the world coordinate of the intersection point and the inverse matrix.
In an exemplary embodiment of the present invention, based on the foregoing, the texture coordinate determination unit is configured to: obtaining local coordinates and texture coordinates of the vertex of the intersected triangular patch corresponding to the world coordinates of the intersection point; determining a transformation matrix according to the local coordinates and the texture coordinates of the vertex of the triangular patch; and determining texture coordinates corresponding to the local coordinate points on the interactive model based on the transformation matrix and the local coordinate points on the interactive model.
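The two coordinate transformations described by these units can be sketched as follows. This is an illustrative reconstruction, not the patent's code: it uses barycentric interpolation as one way to realize the mapping from the local coordinate point to the texture coordinates of the triangular patch's vertices, and all names are assumptions:

```python
import numpy as np

def world_to_local(p_world, world_matrix):
    """Transform the ray/model intersection point back to model-local
    space using the inverse of the model's 4x4 world transformation
    matrix, as described for the local coordinate point determining unit."""
    inv = np.linalg.inv(world_matrix)
    p = inv @ np.append(p_world, 1.0)  # homogeneous coordinates
    return p[:3]

def local_to_uv(p_local, tri_local, tri_uv):
    """Map a local point lying on a triangular patch to texture
    coordinates by barycentric interpolation of the vertex UVs."""
    a, b, c = (np.asarray(v, dtype=np.float64) for v in tri_local)
    v0, v1, v2 = b - a, c - a, np.asarray(p_local, dtype=np.float64) - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    u = 1.0 - v - w
    uv = np.asarray(tri_uv, dtype=np.float64)
    return u * uv[0] + v * uv[1] + w * uv[2]
```

For example, a point at the centroid of a triangle maps to the average of the three vertex texture coordinates.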
In an exemplary embodiment of the present invention, based on the foregoing scheme, the modifying unit 1230 includes: an interactive region determining unit, configured to determine, based on texture coordinates of the local coordinate points, a corresponding position on a single-channel height map of the interactive model in a manner of a signed distance field; and the changing unit is used for modifying the color of the corresponding pixel at the corresponding position.
In an exemplary embodiment of the present invention, based on the foregoing, the interaction region determining unit is configured to: an interaction region on a single-channel height map of the interaction model is determined using a signed distance field and an axis-aligned bounding box.
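A sketch of this combination, under the assumption of a circular brush in texel space (the patent does not fix the brush shape; function and parameter names are illustrative): the axis-aligned bounding box limits which texels are tested at all, and the signed distance field then decides which of those fall inside the brush.

```python
def stamp_brush(height, cx, cy, radius, value=0.0):
    """Modify the texels of a single-channel height map that fall inside
    a circular brush centered at (cx, cy): an axis-aligned bounding box
    restricts the texels tested, and a circle signed-distance field
    (negative inside, positive outside) selects those to recolor."""
    h, w = len(height), len(height[0])
    # AABB around the brush, clamped to the map bounds.
    x0, x1 = max(0, int(cx - radius)), min(w - 1, int(cx + radius) + 1)
    y0, y1 = max(0, int(cy - radius)), min(h - 1, int(cy + radius) + 1)
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            sdf = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 - radius
            if sdf <= 0.0:
                height[y][x] = value
    return height
```

Only the O(radius²) texels inside the bounding box are visited per frame, instead of the full map, which is the point of combining the SDF with the AABB.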
In an exemplary embodiment of the present invention, based on the foregoing scheme, the modifying unit 1230 is configured to: and generating a corresponding interaction track on the single-channel height map of the interaction model by adopting a Bresenham mode based on the texture coordinates of the local coordinate points.
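Since the interaction position is only sampled once per frame, consecutive samples can be far apart on the texture; Bresenham's line algorithm joins them into a continuous trajectory. A standard integer implementation (names are illustrative):

```python
def bresenham(x0, y0, x1, y1):
    """Return the integer texel coordinates on the segment from
    (x0, y0) to (x1, y1) using Bresenham's line algorithm, so that
    per-frame sample points are joined into a gap-free track."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:   # step in x
            err += dy
            x0 += sx
        if e2 <= dx:   # step in y
            err += dx
            y0 += sy
    return points
```

Each texel returned would then be passed to the height-map modification step above, so the stroke appears solid even when the cursor moves quickly between frames.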
In an exemplary embodiment of the present invention, based on the foregoing scheme, the display unit 1240 is configured to: converting the single-channel height map into a corresponding normal map by using a Sobel operator; and drawing the interaction model by using a rendering pipeline based on physics based on the normal map so as to display an interaction result of the interaction operation on the interaction model.
In an exemplary embodiment of the present invention, based on the foregoing scheme, the interaction apparatus is configured to: and in response to the end of the interactive operation, serializing and uploading the stored series of texture coordinates to a server.
In an exemplary embodiment of the present invention, based on the foregoing scheme, the interaction apparatus is configured to: and restoring the single-channel height map to the original single-channel height map in response to the operation of clearing all the interaction tracks.
The specific details of each module of the above-mentioned interaction device have been described in detail in the corresponding interaction method, and therefore are not described herein again.
It should be noted that although in the above detailed description several modules or units of the interaction means are mentioned, this division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit according to an embodiment of the invention. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
In addition, in an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above interaction method is also provided.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Accordingly, various aspects of the present invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module" or "system."
An electronic device 1300 according to such an embodiment of the invention is described below with reference to fig. 13. The electronic device 1300 shown in fig. 13 is only an example and should not bring any limitations to the function and scope of use of the embodiments of the present invention.
As shown in fig. 13, the electronic device 1300 is in the form of a general purpose computing device. The components of the electronic device 1300 may include, but are not limited to: the at least one processing unit 1310, the at least one memory unit 1320, the bus 1330 connecting the various system components (including the memory unit 1320 and the processing unit 1310), the display unit 1340.
Wherein the memory unit stores program code that is executable by the processing unit 1310 to cause the processing unit 1310 to perform steps according to various exemplary embodiments of the present invention as described in the "exemplary methods" section above in this specification. For example, the processing unit 1310 may execute step S110 shown in fig. 1, in response to an interactive operation on an interactive model, acquiring an interactive position of the interactive operation in units of frames; step S120, acquiring texture coordinates of corresponding local coordinate points on the interactive model according to the interactive position; step S130, modifying the color of the corresponding position on the single-channel height map of the interaction model based on the texture coordinate of the local coordinate point so as to generate a corresponding interaction track on the single-channel height map of the interaction model; step S140, obtaining a corresponding normal map according to the single-channel height map of the generated interaction track so as to present an interaction result of the interaction operation on the interaction model.
The storage unit 1320 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 1321 and/or a cache memory unit 1322, and may further include a read only memory unit (ROM) 1323.
The electronic device 1300 may also communicate with one or more external devices 1370 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1300, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 1300 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 1350. Also, the electronic device 1300 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) through the network adapter 1360. As shown, the network adapter 1360 communicates with other modules of the electronic device 1300 via the bus 1330. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 1300, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the interaction method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium 1400 having a program product stored thereon that is capable of implementing the above-described method of the present specification is also provided. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above section "exemplary method" of this description, when said program product is run on the terminal device.
Referring to fig. 14, a program product 1400 for implementing the above-described interaction method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this respect, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In situations involving remote computing devices, the remote computing devices may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to external computing devices (e.g., through the internet using an internet service provider).
Furthermore, the above-described drawings are only schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (14)
1. An interaction method, comprising:
responding to interactive operation on an interactive model, and acquiring an interactive position of the interactive operation by taking a frame as a unit;
acquiring texture coordinates of corresponding local coordinate points on the interactive model according to the interactive position;
modifying the color of the corresponding position on the single-channel height map of the interaction model based on the texture coordinates of the local coordinate points to generate a corresponding interaction track on the single-channel height map of the interaction model;
and acquiring a corresponding normal map according to the single-channel height map of the generated interaction track so as to present an interaction result of the interaction operation on the interaction model.
2. The interaction method according to claim 1, wherein obtaining texture coordinates of corresponding local coordinate points on the interaction model according to the interaction position comprises:
acquiring a screen coordinate of the interaction position, and determining a ray corresponding to the screen coordinate in a world coordinate system of the three-dimensional scene;
determining world coordinates of an intersection point of the ray and the interaction model in the world coordinate system;
determining a local coordinate point of the interaction position projected on the interaction model according to the world coordinates of the intersection point;
and determining the texture coordinate of the local coordinate point based on the mapping relation between the local coordinate point and the texture coordinate of the vertex of the corresponding triangular patch.
3. The interaction method of claim 2, wherein determining the local coordinate point of the interaction location projected on the interaction model according to the world coordinates of the intersection point comprises:
determining a corresponding inverse matrix according to the world transformation matrix of the interaction model;
and determining a local coordinate point of the interaction position projected on the interaction model based on the world coordinate of the intersection point and the inverse matrix.
4. The interaction method according to claim 2, wherein determining the texture coordinates of the local coordinate points based on the mapping relationship between the local coordinate points and the texture coordinates of the corresponding vertices of the triangle patch comprises:
obtaining local coordinates and texture coordinates of the vertex of the triangular patch corresponding to the world coordinates of the intersection point;
determining a transformation matrix according to the local coordinates and the texture coordinates of the vertex of the triangular patch;
and determining texture coordinates corresponding to the local coordinate points on the interactive model based on the transformation matrix and the local coordinate points on the interactive model.
5. The interaction method according to claim 1, wherein, upon obtaining texture coordinates of a corresponding local coordinate point on the interaction model, the interaction method comprises:
and saving the obtained texture coordinates of the local coordinate points by taking a frame as a unit.
6. The interaction method according to claim 1, wherein modifying the color of the corresponding location on the single-channel height map of the interaction model based on the texture coordinates of the local coordinate points comprises:
determining a corresponding position on a single-channel height map of the interaction model by adopting a signed distance field mode based on texture coordinates of the local coordinate points;
and modifying the color of the corresponding pixel at the corresponding position.
7. The interaction method of claim 6, wherein determining the corresponding location on the single-channel height map of the interaction model using the signed distance field comprises:
the corresponding position on the single-channel height map of the interaction model is determined using the signed distance field and the axis-aligned bounding box.
8. The interaction method of claim 1, wherein generating a corresponding interaction trajectory on a single-channel height map of the interaction model comprises:
and generating a corresponding interaction track on the single-channel height map of the interaction model by adopting a Bresenham mode based on the texture coordinates of the local coordinate points.
9. The interaction method according to claim 1, wherein obtaining a corresponding normal map for presenting the interaction result of the interaction operation on the interaction model according to the single-channel height map for generating the interaction trajectory comprises:
converting the single-channel height map into a corresponding normal map by using a Sobel operator;
and drawing the interaction model by using a rendering pipeline based on physics based on the normal map so as to present an interaction result of the interaction operation on the interaction model.
10. The interaction method according to claim 1, further comprising:
and in response to the end of the interactive operation, serializing the stored series of data of the texture coordinates and uploading the serialized data to a server.
11. The interaction method according to claim 1, further comprising:
and restoring the single-channel height map to the original single-channel height map in response to the operation of clearing all the interaction tracks.
12. An interaction device, comprising:
the acquisition unit is used for responding to the interactive operation on the interactive model and acquiring the interactive position of the interactive operation by taking a frame as a unit;
the texture coordinate acquisition unit is used for acquiring the texture coordinate of a corresponding local coordinate point on the interactive model according to the interactive position;
the modifying unit is used for modifying the color of the corresponding position on the single-channel height map of the interaction model based on the texture coordinates of the local coordinate points so as to generate a corresponding interaction track on the single-channel height map of the interaction model;
and the display unit is used for acquiring a corresponding normal map according to the single-channel height map of the generated interaction track so as to present an interaction result of the interaction operation on the interaction model.
13. An electronic device, comprising:
a processor; and
a memory having stored thereon computer readable instructions which, when executed by the processor, implement the interaction method of any one of claims 1 to 11.
14. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the interaction method according to any one of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811235940.3A CN109448137B (en) | 2018-10-23 | 2018-10-23 | Interaction method, interaction device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811235940.3A CN109448137B (en) | 2018-10-23 | 2018-10-23 | Interaction method, interaction device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109448137A CN109448137A (en) | 2019-03-08 |
CN109448137B true CN109448137B (en) | 2023-01-10 |
Family
ID=65547899
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811235940.3A Active CN109448137B (en) | 2018-10-23 | 2018-10-23 | Interaction method, interaction device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109448137B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110196746B (en) * | 2019-05-30 | 2022-09-30 | 网易(杭州)网络有限公司 | Interactive interface rendering method and device, electronic equipment and storage medium |
CN111340598B (en) * | 2020-03-20 | 2024-01-16 | 北京爱笔科技有限公司 | Method and device for adding interactive labels |
CN111787081B (en) * | 2020-06-21 | 2021-03-23 | 江苏永鼎通信有限公司 | Information processing method based on Internet of things interaction and intelligent communication and cloud computing platform |
CN112435304B (en) * | 2020-07-20 | 2023-03-14 | 上海哔哩哔哩科技有限公司 | Water body interactive mapping method and system |
CN112435312B (en) * | 2020-09-04 | 2023-04-11 | 上海哔哩哔哩科技有限公司 | Motion trajectory generation method and device, computer equipment and readable storage medium |
CN113064539B (en) * | 2021-03-04 | 2022-07-29 | 北京达佳互联信息技术有限公司 | Special effect control method and device, electronic equipment and storage medium |
CN114429518B (en) * | 2021-12-28 | 2024-07-23 | 清华大学 | Face model reconstruction method, device, equipment and storage medium |
CN115270032B (en) * | 2022-08-10 | 2023-04-25 | 上海图客科技有限公司 | Dynamic high-definition text display method and system based on WebGL |
CN116152444B (en) * | 2023-04-04 | 2023-06-30 | 山东捷瑞信息技术产业研究院有限公司 | Automatic adsorption method, device and medium for three-dimensional scene model based on digital twin |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106815881A (en) * | 2017-04-13 | 2017-06-09 | 腾讯科技(深圳)有限公司 | The color control method and device of a kind of actor model |
CN107103638A (en) * | 2017-05-27 | 2017-08-29 | 杭州万维镜像科技有限公司 | A kind of Fast rendering method of virtual scene and model |
CN108062784A (en) * | 2018-02-05 | 2018-05-22 | 深圳市易尚展示股份有限公司 | Threedimensional model texture mapping conversion method and device |
CN108564646A (en) * | 2018-03-28 | 2018-09-21 | 腾讯科技(深圳)有限公司 | Rendering intent and device, storage medium, the electronic device of object |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8570320B2 (en) * | 2011-01-31 | 2013-10-29 | Microsoft Corporation | Using a three-dimensional environment model in gameplay |
- 2018-10-23 CN CN201811235940.3A patent/CN109448137B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106815881A (en) * | 2017-04-13 | 2017-06-09 | 腾讯科技(深圳)有限公司 | The color control method and device of a kind of actor model |
CN107103638A (en) * | 2017-05-27 | 2017-08-29 | 杭州万维镜像科技有限公司 | A kind of Fast rendering method of virtual scene and model |
CN108062784A (en) * | 2018-02-05 | 2018-05-22 | 深圳市易尚展示股份有限公司 | Threedimensional model texture mapping conversion method and device |
CN108564646A (en) * | 2018-03-28 | 2018-09-21 | 腾讯科技(深圳)有限公司 | Rendering intent and device, storage medium, the electronic device of object |
Also Published As
Publication number | Publication date |
---|---|
CN109448137A (en) | 2019-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109448137B (en) | Interaction method, interaction device, electronic equipment and storage medium | |
CN110196746B (en) | Interactive interface rendering method and device, electronic equipment and storage medium | |
CN111508052B (en) | Rendering method and device of three-dimensional grid body | |
Klawonn | Introduction to computer graphics: using Java 2D and 3D | |
JP5232358B2 (en) | Rendering outline fonts | |
KR101145260B1 (en) | Apparatus and method for mapping textures to object model | |
US9275493B2 (en) | Rendering vector maps in a geographic information system | |
US20130063460A1 (en) | Visual shader designer | |
US20130063472A1 (en) | Customized image filters | |
CN111968216A (en) | Volume cloud shadow rendering method and device, electronic equipment and storage medium | |
KR20240001021A (en) | Image rendering method and apparatus, electronic device, and storage medium | |
CN112734896B (en) | Environment shielding rendering method and device, storage medium and electronic equipment | |
JP2015515059A (en) | Method for estimating opacity level in a scene and corresponding apparatus | |
CN113034662A (en) | Virtual scene rendering method and device, storage medium and electronic equipment | |
CN109544674B (en) | Method and device for realizing volume light | |
US20080043023A1 (en) | Approximating subdivision surfaces with bezier patches | |
RU2422902C2 (en) | Two-dimensional/three-dimensional combined display | |
WO2014015206A1 (en) | Customized image filters | |
CN115937389A (en) | Shadow rendering method, device, storage medium and electronic equipment | |
CN110930492B (en) | Model rendering method, device, computer readable medium and electronic equipment | |
CN115713584A (en) | Method, system, device and storage medium for rendering volume cloud based on directed distance field | |
Sinenko et al. | Automation of visualization process for organizational and technological design solutions | |
CN114375464A (en) | Ray tracing dynamic cells in virtual space using bounding volume representations | |
CN110502305B (en) | Method and device for realizing dynamic interface and related equipment | |
Döllner | Geovisualization and real-time 3D computer graphics |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |