CN115641399A - Image-based multi-layer grid picking method and system - Google Patents

Image-based multi-layer grid picking method and system

Info

Publication number
CN115641399A
CN115641399A (application CN202211092199.6A)
Authority
CN
China
Prior art keywords
grid, data table, rendering data, mesh, selection area
Prior art date
Legal status
Granted
Application number
CN202211092199.6A
Other languages
Chinese (zh)
Other versions
CN115641399B (en)
Inventor
钱行
彭维
Current Assignee
New Dimension Systems Co ltd
Original Assignee
New Dimension Systems Co ltd
Priority date
Filing date
Publication date
Application filed by New Dimension Systems Co ltd filed Critical New Dimension Systems Co ltd
Priority to CN202211092199.6A
Publication of CN115641399A
Application granted
Publication of CN115641399B
Status: Active

Landscapes

  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides an image-based multi-layer mesh picking method and system, belonging to the technical field of model picking. The picking method stores the scene numbers of the mesh surfaces rendered under a mouse selection area in a rendering data table, and any mesh body selected under the mouse selection area can be determined quickly from the information stored in that table, which optimizes the picking effect. Furthermore, when the projection view angle is unchanged and a newly determined mouse selection area falls within a previously determined one, the selected mesh body can be determined by reusing the previous rendering data table rather than building a new one, which improves the picking efficiency.

Description

Image-based multi-layer grid picking method and system
Technical Field
The invention relates to the technical field of mesh picking, and in particular to an image-based multi-layer mesh picking method and system.
Background
In 3D modeling display, 3D mesh bodies need to be picked quickly. Two picking methods are in common use today. The first is ray intersection: a ray pointing perpendicular to the screen is generated from the mouse position, intersected with the mesh surfaces in the 3D scene, and the mesh body containing each intersected surface is then found. Although this method can pick every mesh body the ray intersects, its computation time grows with the complexity of the 3D scene, so it suffers efficiency problems on large data sets and is unsuitable for area picking. The second is a rendering method: each mesh body in the scene is given an ID number, the ID is mapped to a color value, the whole scene is rendered to an off-screen image, the pixel color at the mouse position is read back, and the pixel data is decoded into an ID number, thereby identifying the mesh body under the mouse.
Disclosure of Invention
The invention aims to provide an image-based multi-layer mesh picking method and system that effectively optimize the picking of mesh bodies.
In order to achieve the purpose, the invention provides the following scheme:
The image-based multi-layer mesh picking method operates on a 3D scene containing a plurality of mesh bodies, each composed of a plurality of mesh surfaces, and comprises the following steps:
setting scene numbers for each grid surface in a 3D scene; the scene number of the grid surface comprises a body number of a grid body to which the grid surface belongs and a surface number of the grid surface in the grid body to which the grid surface belongs;
determining a mouse selection area under any projection view angle; the mouse selection area is a single pixel or an area with an arbitrary number of rows and columns;
establishing a rendering data table according to the size of the mouse selection area; columns in the rendering data table correspond to pixels in the mouse selection area one by one, and the row number of the rendering data table is determined according to the number of the picked grid surfaces in the mouse selection area;
storing scene numbers of a plurality of grid surfaces picked up under each pixel of the mouse selection area into the rendering data table by using a graphics rendering tool; each cell of the first row in the rendering data table is used for storing the number of the picked grid surfaces under the corresponding pixel, and the cells except the cell of the first row in the rendering data table are used for storing the scene number of the picked grid surfaces;
and determining a plurality of grid bodies with all depths under the mouse selection area according to the data information in the rendering data table.
Optionally, setting scene numbers for the mesh surfaces of each mesh body in the 3D scene specifically includes:
in any grid body, setting surface numbers for a plurality of grid surfaces of the grid body;
in the 3D scene, setting a body number for each grid body;
setting scene numbers for each grid surface according to the body number of the grid body to which the grid surface belongs and the surface number of the grid surface in the grid body to which the grid surface belongs.
Optionally, the creating a rendering data table according to the size of the mouse selection area specifically includes:
determining the pixel row number and the pixel column number of the mouse selection area;
determining the column number of the rendering data table according to the pixel row number and the pixel column number;
determining the number of rows of the rendering data table according to the number of the selected grid surfaces under the mouse selection area;
and establishing a rendering data table according to the row number of the rendering data table and the column number of the rendering data table.
Optionally, storing, by using a graphics rendering tool, the scene numbers of the plurality of mesh surfaces picked up under each pixel in the mouse selection area into the rendering data table specifically includes:
for any pixel in the mouse selection area:
storing the number of grid surface layers rendered under the pixel in a first cell of a cell column corresponding to the pixel in the rendering data table;
sequentially storing the scene numbers of the grid surfaces rendered under the pixels in blank cells of a cell column corresponding to the pixels in the rendering data table according to the depth values of the grid surfaces; the depth value represents a distance of the mesh plane from the viewpoint.
Optionally, the determining, according to the data information in the rendering data table, a plurality of grid volumes at all depths in the mouse selection area specifically includes:
determining the picked grid surface in the mouse selection area according to the scene number stored in each cell in the rendering data table;
and determining the picked grid body according to the picked grid surface in the mouse selection area.
Optionally, after determining a plurality of mesh volumes at all depths under the mouse selection area according to the data information in the rendering data table, the multi-layer mesh picking method further includes:
listing the body numbers of a plurality of grid bodies at all depths under the mouse selection area;
removing the body number of the mesh body which does not need to be rendered from the list;
and rendering the rest grid bodies in the list in sequence.
Corresponding to the aforementioned image-based multi-layer mesh picking method, the invention also provides an image-based multi-layer mesh picking system for a 3D scene containing a plurality of mesh bodies, each composed of a plurality of mesh surfaces, the system comprising:
the scene number setting module is used for setting scene numbers for all the grid surfaces in the 3D scene; the scene number of the grid surface comprises a body number of a grid body to which the grid surface belongs and a surface number of the grid surface in the grid body to which the grid surface belongs;
the selection area determining module is used for determining a mouse selection area under any projection view angle; the mouse selection area is a single pixel or an area with an arbitrary number of rows and columns;
the rendering data table establishing module is used for establishing a rendering data table according to the size of the mouse selection area; columns in the rendering data table correspond to pixels in the mouse selection area one by one, and the row number of the rendering data table is determined according to the number of the picked grid surfaces in the mouse selection area;
the data rendering module is used for storing the scene numbers of the plurality of grid surfaces picked up under each pixel of the mouse selection area into the rendering data table by using a graphics rendering tool; each cell of the first row in the rendering data table is used for storing the number of the picked grid surfaces under the corresponding pixel, and the cells except the cells of the first row in the rendering data table are used for storing the scene numbers of the picked grid surfaces;
and the grid body determining module is used for determining a plurality of grid bodies with all depths under the mouse selection area according to the data information in the rendering data table.
Optionally, the scene number setting module includes:
a surface number setting unit configured to set surface numbers for a plurality of mesh surfaces of any mesh body;
a volume number setting unit, configured to set a volume number for each mesh volume in the 3D scene;
and a scene number setting unit for setting a scene number for each mesh plane according to the body number of the mesh body to which the mesh plane belongs and the plane number of the mesh plane in the mesh body to which the mesh plane belongs, for any mesh plane.
Optionally, the rendering data table building module includes:
the area row and column determining unit is used for determining the pixel row number and the pixel column number of the mouse selection area;
the data table line number determining unit is used for determining the line number of the rendering data table according to the number of the selected grid surfaces in the mouse selection area;
the data table column number determining unit is used for determining the column number of the rendering data table according to the pixel row number and the pixel column number;
and the data table establishing unit is used for establishing a rendering data table according to the row number of the rendering data table and the column number of the rendering data table.
Optionally, the data rendering module comprises:
the layer number storage unit is used for storing, for any pixel in the mouse selection area, the number of mesh surface layers rendered under the pixel in the first cell of the cell column corresponding to that pixel in the rendering data table;
a scene number storage unit, configured to store, for any pixel in the mouse selection area, a scene number of each grid surface rendered by the pixel in sequence in a blank cell of a cell column corresponding to the pixel in the rendering data table according to a depth value of each grid surface; the depth value represents a distance of the mesh plane from the viewpoint.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention provides a multilayer grid picking method and a system based on images, wherein the multilayer grid picking method comprises the following steps: setting scene numbers for each grid surface in a 3D scene; determining a mouse selection area under any projection visual angle; establishing a rendering data table according to the size of the mouse selection area; storing the scene numbers of the grid surfaces picked up under each pixel of the mouse selection area into a rendering data table by using a graphics rendering tool; and determining a plurality of grid bodies at all depths under the mouse selection area according to the data information in the rendering data table. The image-based multilayer grid picking method stores the scene number of the grid surface rendered by the graph in the mouse selection area as a rendering data table, and can quickly determine any grid body selected in the mouse selection area according to the information stored in the rendering data table, thereby optimizing the picking effect of the grid body; and when the projection visual angle is not changed and the newly determined mouse selection area is in the previously determined mouse selection area, the selected grid body can be determined by repeatedly using the previous rendering data table without establishing a new rendering data table, so that the picking efficiency of the grid body is improved.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an image-based multi-layer mesh pickup method provided in embodiment 1 of the present invention;
fig. 2 is a schematic diagram of a point-surface relationship in the multi-layer grid picking method provided in embodiment 1 of the present invention;
fig. 3 is a schematic diagram of transferring an object from a world coordinate system to a clipping space in the multi-layer mesh picking method according to embodiment 1 of the present invention;
fig. 4 is a schematic diagram illustrating the creation of a rendering data table in the multi-layer mesh picking method according to embodiment 1 of the present invention;
fig. 5 is a schematic diagram illustrating data stored in a rendering data table in the multi-layer mesh picking method according to embodiment 1 of the present invention;
fig. 6 is a schematic diagram of index data tables and linked lists storing data in the image-based multi-layer mesh pickup method according to embodiment 2 of the present invention;
fig. 7 is a block diagram of an image-based multi-layer mesh pickup system provided in embodiment 3 of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In 3D modeling display, it is often desirable to quickly pick up 3D geometric objects in a scene. The following two methods are commonly used for picking up:
ray intersection method: and calculating a ray which vertically points to the screen from the position of the mouse under a world coordinate system corresponding to the 3D geometric object, carrying out intersection test on the ray and a geometric primitive in the 3D scene, and finding the geometric object containing the primitive from the intersected geometric primitive. The method can pick up all geometric objects intersected with the rays, but the calculation speed of the method is in direct proportion to the number of geometric primitives in a 3D scene, the efficiency problem exists when the data size is large, and the method is not suitable for picking up mouse areas.
Rendering-based pixel picking: each triangular mesh of a geometric object is given an ID number, the ID is mapped to a color value, the whole scene is rendered to an off-screen image, the pixel color at the mouse position is read back, and the pixel data is decoded into the ID number of the triangular mesh of the corresponding geometric object. The rendering method is much more efficient than geometric intersection, but it can only pick the frontmost object; occluded objects cannot be picked.
In view of the above problems in the prior art, an object of the present invention is to provide a method and a system for picking up multiple layers of grids based on images, which effectively optimize the effect of picking up the grids.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Example 1:
This embodiment provides an image-based multi-layer mesh picking method for a 3D scene containing a plurality of mesh bodies, each composed of a plurality of mesh surfaces. As shown in the flowchart of fig. 1, the multi-layer mesh picking method includes:
a1, setting scene numbers for each grid surface in a 3D scene; the scene number of the grid surface comprises a body number of a grid body to which the grid surface belongs and a surface number of the grid surface in the grid body to which the grid surface belongs; in this embodiment, step A1 specifically includes:
A11, in the 3D scene, setting a body number for each mesh body. For example, in one example the 3D scene includes a teacup model in which the cup body, the handle and the lid are each a mesh body, and the three together form the teacup; the cup body is assigned the number BT, the handle the number BB, and the lid the number BG.
A12, in any mesh body, setting face numbers for the mesh surfaces of that body. Following the above example, if the cup body is a cuboid it has 5 faces: front, left, back, right and bottom (in this embodiment, when the wall thickness of the model is below a set threshold, the inner and outer surfaces are treated as one face). Each mesh surface is then assigned its face number within the mesh body, e.g. the front, left, back, right and bottom faces are assigned face numbers 1, 2, 3, 4 and 5, respectively; the face numbers of the surfaces on the handle and the lid likewise start at 1. Of course, starting the face index from 1 is only an example; in other embodiments the index of the mesh surfaces may start from 0. It should be further noted that when a mesh surface is rendered by a fragment shader, the surface is composed of a plurality of triangular faces which are rendered one by one. As shown in fig. 2, triangular face 1 and triangular face 2 together compose a mesh surface; their indices within the surface are 0 and 1, and the index values of their vertices on the mesh surface are 0 to 3.
A13, for any mesh surface, setting a scene number according to the body number of the mesh body to which the surface belongs and the face number of the surface within that body. In this embodiment, a CGPB (Custom GPU Point Buffer) is allocated to each mesh surface of the mesh body, representing the ID of the 3D mesh body plus the face index ID of the surface within the 3D model group. Continuing with the teacup example, the scene number of the front surface of the cup body is BT1 and that of the left surface is BT2; the other surfaces are not listed.
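As a minimal illustration of steps A11 to A13, the following sketch composes scene numbers from a body number and 1-based face numbers. The function and dictionary names are illustrative, not from the patent; only the BT/BB/BG numbering and the "body number + face number" concatenation come from the text above.

```python
def assign_scene_numbers(bodies):
    """bodies: dict mapping body number -> ordered list of face names.
    Returns dict mapping (body number, face number) -> scene number."""
    scene = {}
    for body_no, faces in bodies.items():
        # Face numbers start at 1 in this embodiment (0-based is also possible).
        for face_no, _face in enumerate(faces, start=1):
            scene[(body_no, face_no)] = f"{body_no}{face_no}"
    return scene

# The teacup example: the cup body BT is a cuboid with 5 faces.
cup = {"BT": ["front", "left", "back", "right", "bottom"]}
numbers = assign_scene_numbers(cup)
print(numbers[("BT", 1)])  # BT1: front face of the cup body
print(numbers[("BT", 2)])  # BT2: left face of the cup body
```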
A2, determining a mouse selection area under any projection view angle; the mouse selection area is a single pixel or an area with an arbitrary number of rows and columns. In this embodiment, step A2 is followed by:
Pre-computing a transformation matrix (the projection matrix multiplied by the view matrix), which converts the 3D model from world coordinate space to the mouse area in the clipping space. The pre-processing of the projection matrix depends on the mouse picking area: the projection range of the matrix is dynamically adjusted according to the size of the picking area. An object is converted from the world coordinate system to clipping space, the matrix-transformed point is divided through by its w component (transformedP.xyz /= transformedP.w), which maps the xyz coordinates to the range -1 to 1, and a point is clipped if any of its xyz values falls outside that range. In fig. 3, the sphere is within the mouse area (the red range) and only part of it is clipped away, while the cube is clipped entirely.
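The clip test described above can be sketched as follows. This is a minimal illustration of the perspective divide and the [-1, 1] range check only; the function name is an assumption, and the full matrix multiplication is omitted for brevity.

```python
def inside_clip_space(transformed_p):
    """transformed_p: (x, y, z, w), a point already multiplied by the
    combined projection*view matrix. A point survives clipping only if
    all of its normalized coordinates lie in [-1, 1]."""
    x, y, z, w = transformed_p
    # Perspective divide: transformedP.xyz /= transformedP.w
    ndc = (x / w, y / w, z / w)
    return all(-1.0 <= c <= 1.0 for c in ndc)

print(inside_clip_space((0.5, -0.5, 0.2, 1.0)))  # True: point is kept
print(inside_clip_space((2.0, 0.0, 0.0, 1.0)))   # False: point is clipped
```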
A3, establishing a rendering data table according to the size of the mouse selection area; as shown in fig. 4, columns in the rendering data table correspond to pixels in the mouse selection area one to one, and the number of rows in the rendering data table is determined according to the number of the grid surfaces picked up under the mouse selection area; in this embodiment, step A3 specifically includes:
and A31, determining the pixel row number and the pixel column number of the mouse selection area.
A32, determining the column number of the rendering data table according to the pixel row and column numbers; if the mouse selection area is 5 x 6 pixels, i.e. 30 pixels, the column number of the rendering data table is 30.
A33, determining the row number of the rendering data table according to the number of mesh surfaces selected under each pixel of the mouse selection area; since this number may differ from pixel to pixel, the largest such count is taken as the row number of the rendering data table. For example, if 10 mesh surfaces are selected under some pixel, the row number of the rendering data table is determined to be 10.
And A34, establishing a rendering data table according to the row number of the rendering data table and the column number of the rendering data table.
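Steps A31 to A34 can be sketched as below. One assumption is labeled explicitly: because the first row of the table holds per-pixel layer counts (step A4) while A33 sizes the rows by the maximum layer count, this sketch allocates one extra counter row on top of the layer rows; the patent text leaves that detail ambiguous. Names are illustrative.

```python
def make_rendering_table(rows_px, cols_px, max_layers):
    """Build the empty rendering data table for a rows_px x cols_px
    mouse selection area whose deepest pixel has max_layers surfaces."""
    n_cols = rows_px * cols_px   # one column per pixel in the area (A32)
    n_rows = 1 + max_layers      # assumed: 1 counter row + one row per layer
    return [[None] * n_cols for _ in range(n_rows)]

table = make_rendering_table(5, 6, 10)
print(len(table[0]))  # 30 columns for a 5 x 6 selection area
print(len(table))     # 11 rows: counter row plus up to 10 layers
```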
A4, storing scene numbers of a plurality of grid surfaces picked up under each pixel of the mouse selection area into the rendering data table by using a graphics rendering tool; each cell of the first row in the rendering data table is used for storing the number of the picked grid surfaces under the corresponding pixel, and the cells except the cell of the first row in the rendering data table are used for storing the scene number of the picked grid surfaces; in this embodiment, step A4 specifically includes:
for any pixel in the mouse selection area:
A41, storing the number of mesh surface layers rendered under the pixel in the first cell of the cell column corresponding to that pixel in the rendering data table. Put differently, the first row of cells stores the number of fragment-shader executions under each pixel. As shown in fig. 5, if the pixel with coordinates (3, 3) is drawn 5 times, 5 is stored in the cell at coordinates (0, 24) of the rendering data table.
A42, sequentially storing the scene numbers of the mesh surfaces rendered under the pixel in the blank cells of that pixel's cell column, ordered by the depth value of each surface; the depth value represents the distance of the mesh surface from the viewpoint. The scene number and depth value of each mesh surface are stored in the cell column corresponding to the pixel, in order of depth. As shown in fig. 5, the cells at coordinates (1, 24) to (5, 24) of the rendering data table store the scene numbers of the mesh surfaces in depth order.
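Steps A41 and A42 for a single pixel column can be sketched as follows, under the assumption that smaller depth means closer to the viewpoint. Function and variable names are illustrative.

```python
def fill_pixel_column(fragments):
    """fragments: list of (scene_number, depth) pairs rendered under one
    pixel. Returns the cell column: layer count first (A41), then the
    scene numbers sorted nearest-to-farthest by depth (A42)."""
    ordered = sorted(fragments, key=lambda f: f[1])  # sort by depth value
    return [len(ordered)] + [scene for scene, _depth in ordered]

# Three surfaces overlap under this pixel; BT1 is nearest to the viewpoint.
column = fill_pixel_column([("BT3", 0.9), ("BT1", 0.2), ("BG1", 0.5)])
print(column)  # [3, 'BT1', 'BG1', 'BT3']
```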
A5, determining a plurality of grid bodies with all depths in the mouse selection area according to the data information in the rendering data table; in this embodiment, step A5 specifically includes:
and A51, determining the picked grid surface in the mouse selection area according to the scene number stored in each cell in the rendering data table.
And A52, determining the picked grid body according to the picked grid surface in the mouse selection area.
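Steps A51 and A52 can be sketched as below. The parsing of a scene number into its body-number prefix assumes the two-letter prefixes of the teacup example (BT, BB, BG); a real implementation would encode the body/face split explicitly rather than rely on string length.

```python
def picked_bodies(table):
    """table: rendering data table whose first row holds per-pixel layer
    counts and whose remaining cells hold scene numbers (or None).
    Returns the set of body numbers of all picked mesh bodies."""
    bodies = set()
    for row in table[1:]:                 # skip the counter row
        for scene_no in row:
            if scene_no is not None:
                bodies.add(scene_no[:2])  # assumed 2-char body prefix, e.g. "BT" from "BT1"
    return bodies

# Two pixels: one with 2 picked surfaces, one with 1.
table = [[2, 1], ["BT1", "BG1"], ["BB1", None]]
print(sorted(picked_bodies(table)))  # ['BB', 'BG', 'BT']
```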
In this embodiment, after determining a plurality of mesh volumes at all depths under the mouse selection area according to the data information in the rendering data table in step A5, the image-based multi-layer mesh pickup method further includes:
a6, listing the body numbers of a plurality of grid bodies in all depths under the mouse selection area;
a7, removing the body number of the mesh body which does not need to be rendered from the list;
and A8, rendering the rest grid bodies in the list in sequence.
In the image-based multi-layer mesh picking method provided by this embodiment, the scene numbers of the mesh surfaces rendered under the mouse selection area are stored in a rendering data table, and any mesh body selected under the area can be determined quickly from the stored information, which optimizes the picking effect. Moreover, when the projection view angle is unchanged and a newly determined mouse selection area falls within a previously determined one, the selected mesh body can be determined by reusing the previous rendering data table rather than building a new one, which improves the picking efficiency.
Example 2:
This embodiment provides another image-based multi-layer mesh picking method. It differs from embodiment 1 in that the scene numbers of the mesh surfaces selected under the mouse selection area are stored in the form of a linked list. The method specifically includes the following steps:
b1, setting scene numbers for each grid surface in a 3D scene; the scene number of the grid surface comprises a body number of a grid body to which the grid surface belongs and a surface number of the grid surface in the grid body to which the grid surface belongs; in this embodiment, step B1 specifically includes:
B11, setting a body number for each mesh body in the 3D scene. For example, the 3D scene includes a teacup model in which the cup body, the handle and the lid are each a mesh body, and the three together form the teacup; the cup body is assigned the number BT, the handle the number BB, and the lid the number BG.
B12, setting face numbers for the mesh surfaces of any mesh body. Following the above example, if the cup body is a cuboid it has 5 faces: front, left, back, right and bottom (in this embodiment, when the wall thickness of the model is below a set threshold, the inner and outer surfaces are treated as one face). Each mesh surface is then assigned its face number within the mesh body, e.g. the front, left, back, right and bottom faces are assigned face numbers 1, 2, 3, 4 and 5, respectively; the face numbers of the surfaces on the handle and the lid likewise start at 1. Of course, starting the face index from 1 is only an example; in other embodiments the index of the mesh surfaces may start from 0.
B13, for any mesh surface, setting a scene number according to the body number of the mesh body to which the surface belongs and the face number of the surface within that body. In this embodiment, a CGPB (Custom GPU Point Buffer) is allocated to each mesh surface of the mesh body, representing the ID of the 3D mesh body plus the face index ID of the surface within the 3D model group. Continuing with the teacup example, the scene number of the front surface of the cup body is BT1 and that of the left surface is BT2; the other surfaces are not listed.
B2, determining a mouse selection area under any projection view angle; the mouse selection area is a single pixel or an area with an arbitrary number of rows and columns.
B3, establishing an index data table according to the size of the mouse selection area; the row and column size in the index data table corresponds to the pixel row and column in the mouse selection area one by one; in this embodiment, step B3 specifically includes:
and B31, determining the pixel row number and the pixel column number of the mouse selection area.
B32, determining the numbers of rows and columns of the index data table according to the pixel row and column numbers; if the mouse selection area is 5 x 6 pixels, the index data table is likewise 5 x 6.
And B33, establishing an index data table according to the row number of the index data table and the column number of the index data table.
B4, shading the objects under the mouse selection area by using a graphics application program interface (API); the shading index of the fragment shader under each pixel is stored in the corresponding cell of the index data table, and the index values, mesh numbers and depth values of all shaded points are stored in a linked list in order of index value. Connections are established in the linked list between the index values of the points under the same pixel; each cell of the index data table stores only the index of the last shaded point. In this embodiment, step B4 specifically includes:
B41, the fragment shader draws triangular faces in the mouse selection area, and each point of a triangular face has a unique index value; a plurality of triangular faces compose a mesh surface, and the index value of the first point of the next triangular face continues from the index value of the last point of the previous one;
In this embodiment, as shown in fig. 6, the first triangular face A1 drawn by the fragment shader occupies 4 pixels, whose index values are 1 to 4; if the second triangular face A2 occupies 6 pixels, their index values are 5 to 10, and by the same rule the 6 pixels of the third triangular face A3 receive index values 11 to 16. In the linked-list space, the first 4 entries store the scene number and depth of the points with index values 1 to 4; because A1 is the first triangular face drawn and no other point's index is covered at those pixels, the reference index of each of these points is zero.
B42: fill the corresponding cells of the index data table according to the index values of the triangular faces drawn under each pixel of the mouse selection area, and store the index value, mesh number, and depth value of every point in the linked list, ordered by index value.
For example, suppose that under the pixel with coordinates (3, 3) the points of 4 triangular faces overlap, with index values 5, 13, 19, and 26. The cell in row 4, column 4 of the index data table then stores the index value 26. At this point the table records only the single index value 26, yet all shaded faces under the pixel must be recovered. To that end, the linked list chains the index values of the points under the same pixel: the reference index of the point with index value 26 is 19, that of the point with index value 19 is 13, that of the point with index value 13 is 5, and that of the point with index value 5 is 0, at which point traversal stops. In total, 4 points were drawn at pixel (3, 3); reading the scene numbers of the points behind these 4 index values yields the numbers of all shaded faces under that pixel.
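The bookkeeping of steps B41 and B42 is essentially a per-pixel linked list (an A-buffer-style structure). The following Python sketch illustrates it on the CPU; the class name `LinkedEntry`, the dictionary-based storage, and the example scene numbers are illustrative assumptions, not part of the patent:

```python
# Minimal CPU sketch of the per-pixel linked-list storage of step B4.
# Each drawn point appends one entry; the index table keeps only the index of
# the last point drawn at each pixel, and each entry references the previously
# drawn index at that pixel (0 means "no earlier point").

class LinkedEntry:
    def __init__(self, index, scene_number, depth, ref_index):
        self.index = index              # unique, monotonically increasing point index
        self.scene_number = scene_number
        self.depth = depth
        self.ref_index = ref_index      # index of the previous point at this pixel

def draw_point(index_table, linked_list, pixel, index, scene_number, depth):
    prev = index_table.get(pixel, 0)                # 0: no earlier point here
    linked_list[index] = LinkedEntry(index, scene_number, depth, prev)
    index_table[pixel] = index                      # keep only the last index

def faces_under_pixel(index_table, linked_list, pixel):
    """Follow reference indices back to 0, collecting all shaded points."""
    result = []
    idx = index_table.get(pixel, 0)
    while idx != 0:
        entry = linked_list[idx]
        result.append(entry.scene_number)
        idx = entry.ref_index
    return result

# Reproduce the example: 4 overlapping points at pixel (3, 3), indices 5, 13, 19, 26.
index_table, linked_list = {}, {}
for idx, scene in [(5, "01001"), (13, "02003"), (19, "03002"), (26, "04001")]:
    draw_point(index_table, linked_list, (3, 3), idx, scene, depth=idx * 0.1)

print(faces_under_pixel(index_table, linked_list, (3, 3)))
# last-drawn point first: ['04001', '03002', '02003', '01001']
```

On the GPU the same structure is typically built with an atomic counter and image load/store, but the traversal logic is identical.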
B5: determine the mesh bodies at all depths under each pixel of the mouse selection area according to the index values in the index data table and the data stored in the linked list. In this embodiment, step B5 specifically includes:
B51: according to the index value stored in each cell of the index data table, determine the index of the last point drawn at each pixel of the mouse selection area.
B52: from the index of the last point drawn at each pixel of the mouse selection area, determine the other points connected to it in the linked list.
B53: obtain the scene number of each point, and determine the mesh face and mesh body corresponding to each point from its scene number.
In this embodiment, after step B5 determines the mesh bodies at all depths under each pixel of the mouse selection area from the index values in the index data table and the data stored in the linked list, the multi-layer mesh picking method further includes:
B6: list the body numbers of the mesh bodies at all depths under every pixel of the mouse selection area;
B7: remove the body numbers of the mesh bodies that should not be selected, and render the mesh bodies corresponding to the removed body numbers transparently.
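Steps B6 and B7 amount to collecting body numbers and filtering out the ones that should not be selected. A minimal sketch, assuming (purely for illustration) that the scene number is a string whose leading digits are the body number:

```python
def body_numbers(scene_numbers, body_digits=2):
    """Collect the distinct body numbers from a list of scene numbers,
    assuming the body number occupies the leading `body_digits` digits."""
    return sorted({s[:body_digits] for s in scene_numbers})

def filter_bodies(bodies, unwanted):
    """Split the listed bodies into those kept selected and those removed;
    the removed ones would be rendered transparently by the caller."""
    kept = [b for b in bodies if b not in unwanted]
    transparent = [b for b in bodies if b in unwanted]
    return kept, transparent

scenes = ["0101", "0102", "0201", "0303"]   # faces picked under the selection area
bodies = body_numbers(scenes)               # ['01', '02', '03']
kept, transparent = filter_bodies(bodies, unwanted={"02"})
print(kept, transparent)                    # ['01', '03'] ['02']
```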
The image-based multi-layer mesh picking method of this embodiment stores the information of each point produced by the fragment shader sequentially in a linked list during drawing, and stores the index value of the last-drawn point in the index data table. From the index value held in each cell, the top-layer drawn point of the corresponding pixel can be obtained, and by following the index references between storage units in the linked list, the index values of the points drawn beneath the same pixel can be found quickly, so that the multiple mesh bodies selected under each pixel of the mouse selection area are determined rapidly. Compared with embodiment 1, in which the scene numbers of the mesh faces under each pixel of the mouse selection area are stored in a single two-dimensional rendering data table, where the number of rows is difficult to determine in advance and some cells remain empty because the number of mesh faces drawn differs from pixel to pixel, this embodiment avoids wasting storage space.
Embodiment 3:
This embodiment provides an image-based multi-layer mesh picking system corresponding to the image-based multi-layer mesh picking method of embodiment 1. The system operates on a plurality of mesh bodies in a 3D scene, each composed of a plurality of mesh faces. As shown in Fig. 7, the multi-layer mesh picking system includes:
a scene number setting module, configured to set a scene number for each mesh face in the 3D scene, the scene number of a mesh face comprising the body number of the mesh body to which the face belongs and the face number of the face within that mesh body;
a selection area determining module, configured to determine the mouse selection area under any projection view angle, the mouse selection area being a single pixel or an area with an arbitrary number of pixel rows and columns;
a rendering data table establishing module, configured to establish a rendering data table according to the size of the mouse selection area, the columns of the rendering data table corresponding one-to-one to the pixels of the mouse selection area, and the number of rows being determined by the number of mesh faces picked in the mouse selection area;
a data rendering module, configured to store, using a graphics rendering tool, the scene numbers of the mesh faces picked under each pixel of the mouse selection area into the rendering data table, each cell of the first row storing the number of mesh faces picked under the corresponding pixel, and the cells other than the first row storing the scene numbers of the picked mesh faces; and
a mesh body determining module, configured to determine the mesh bodies at all depths under the mouse selection area according to the data in the rendering data table.
In this embodiment, the scene number setting module includes:
a face number setting unit, configured to set face numbers for the mesh faces of any mesh body;
a body number setting unit, configured to set a body number for each mesh body in the 3D scene; and
a scene number setting unit, configured to set, for any mesh face, its scene number according to the body number of the mesh body to which the face belongs and its face number within that mesh body.
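One simple way to realize the scene numbering these units describe is to pack the body number and the face number into a single integer. The bit layout below is an illustrative assumption; the patent does not prescribe a concrete encoding:

```python
FACE_BITS = 16  # assumption: up to 65535 faces per mesh body

def make_scene_number(body_number, face_number):
    """Pack a body number and a face number into one scene number."""
    return (body_number << FACE_BITS) | face_number

def split_scene_number(scene_number):
    """Recover (body_number, face_number) from a scene number."""
    return scene_number >> FACE_BITS, scene_number & ((1 << FACE_BITS) - 1)

s = make_scene_number(3, 42)
print(split_scene_number(s))  # (3, 42)
```

Packing into one integer is convenient here because the number can be written by a shader into an integer render target and split back on the CPU when picking results are read.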
In this embodiment, the rendering data table establishing module includes:
an area row and column determining unit, configured to determine the number of pixel rows and pixel columns of the mouse selection area;
a data table row number determining unit, configured to determine the number of rows of the rendering data table according to the number of mesh faces selected under the mouse selection area;
a data table column number determining unit, configured to determine the number of columns of the rendering data table according to the number of pixel rows and pixel columns; and
a data table establishing unit, configured to establish the rendering data table according to its number of rows and columns.
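Under these conventions (one column per pixel, one header row for the layer count), dimensioning the rendering data table can be sketched as follows; the list-of-lists representation is an assumption for illustration:

```python
def build_rendering_data_table(pixel_rows, pixel_cols, max_faces_per_pixel):
    """Columns: one per pixel of the selection area (row-major order).
    Rows: one header row for the layer count plus one row per face that
    may be picked under a pixel."""
    n_cols = pixel_rows * pixel_cols
    # First row holds layer counts (initially 0); remaining cells are empty.
    return [[0] * n_cols] + [[None] * n_cols for _ in range(max_faces_per_pixel)]

# A 2x3-pixel selection area with at most 4 faces under any pixel.
table = build_rendering_data_table(pixel_rows=2, pixel_cols=3, max_faces_per_pixel=4)
print(len(table), len(table[0]))  # 5 6
```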
In this embodiment, the data rendering module includes:
a layer number storage unit, configured to store, in the first cell of the cell column corresponding to a pixel of the mouse selection area in the rendering data table, the number of mesh face layers rendered under that pixel; and
a scene number storage unit, configured to store, for any pixel of the mouse selection area, the scene numbers of the mesh faces rendered at that pixel, in order of their depth values, into the blank cells of the cell column corresponding to that pixel in the rendering data table; the depth value represents the distance of a mesh face from the viewpoint.
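The two storage units can be sketched together: sort the faces rendered at a pixel by depth, write the layer count into the first row, and fill the scene numbers below it. The function name and the list-of-lists table are illustrative assumptions:

```python
def store_pixel_faces(table, col, faces):
    """Store the scene numbers of the faces rendered at one pixel into table
    column `col`, nearest face first, with the layer count in the first row.
    `faces` is a list of (scene_number, depth) pairs."""
    faces_sorted = sorted(faces, key=lambda f: f[1])  # ascending depth: nearest first
    table[0][col] = len(faces_sorted)
    for row, (scene, _depth) in enumerate(faces_sorted, start=1):
        table[row][col] = scene

# A 2-column table (header row + 3 data rows); fill column 0 with 3 faces.
table = [[0] * 2, [None] * 2, [None] * 2, [None] * 2]
store_pixel_faces(table, 0, [("0203", 0.8), ("0101", 0.2), ("0302", 0.5)])
print([r[0] for r in table])  # [3, '0101', '0302', '0203']
```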
Portions of the technology may be considered "articles of manufacture" in the form of executable code and/or associated data embodied in or carried on a computer-readable medium. Tangible, non-transitory storage media include the memory or storage of any computer, processor, or similar device, or of an associated module, for example various semiconductor memories, tape drives, disk drives, or any similar device capable of storing software.
All or part of the software may at times communicate over a network, such as the Internet or another communication network. Such communication loads software from one computer device or processor into another: for example, from a server or host computer onto the hardware platform of a computing environment, or into another computing environment implementing the system, or into a system providing similar functions. Accordingly, other media capable of carrying software elements, such as optical, electrical, or electromagnetic waves propagating through cables, optical fibres, or the air, may also serve as physical connections between local devices. The physical media used for such carrier waves, such as electrical, wireless, or optical cables, may likewise be regarded as media carrying the software. As used herein, unless restricted to a tangible "storage" medium, terms such as computer- or machine-"readable medium" refer to any medium that participates in providing instructions to a processor for execution.
The principles and embodiments of the present invention have been described herein using specific examples; these descriptions are provided only to aid understanding of the method and its core concept. Those skilled in the art will appreciate that the modules or steps of the invention described above can be implemented with general-purpose computing apparatus, or alternatively with program code executable by computing apparatus, so that the code may be stored in a storage device and executed by the computing apparatus, or separately fabricated into integrated circuit modules, or multiple of its modules or steps may be fabricated into a single integrated circuit module. The present invention is not limited to any specific combination of hardware and software.
Meanwhile, for those skilled in the art, the specific embodiments and the scope of application may vary according to the idea of the present invention. In view of the above, the contents of this specification should not be construed as limiting the invention.

Claims (10)

1. An image-based multi-layer mesh picking method, applied to a plurality of mesh bodies in a 3D scene, each mesh body being composed of a plurality of mesh surfaces, characterized by comprising the following steps:
setting a scene number for each mesh surface in the 3D scene; the scene number of a mesh surface comprises the body number of the mesh body to which the surface belongs and the surface number of the surface within that mesh body;
determining a mouse selection area under any projection view angle; the mouse selection area is a single pixel or an area with an arbitrary number of pixel rows and columns;
establishing a rendering data table according to the size of the mouse selection area; the columns of the rendering data table correspond one-to-one to the pixels of the mouse selection area, and the number of rows of the rendering data table is determined by the number of mesh surfaces picked in the mouse selection area;
storing, by using a graphics rendering tool, the scene numbers of the mesh surfaces picked under each pixel of the mouse selection area into the rendering data table; each cell of the first row of the rendering data table stores the number of mesh surfaces picked under the corresponding pixel, and the cells other than the first row store the scene numbers of the picked mesh surfaces;
determining the mesh bodies at all depths under the mouse selection area according to the data in the rendering data table.
2. The multi-layer mesh picking method according to claim 1, wherein setting a scene number for each mesh surface in the 3D scene specifically comprises:
for any mesh body, setting surface numbers for the plurality of mesh surfaces of that mesh body;
in the 3D scene, setting a body number for each mesh body;
setting the scene number of each mesh surface according to the body number of the mesh body to which the surface belongs and the surface number of the surface within that mesh body.
3. The multi-layer mesh picking method according to claim 1, wherein establishing a rendering data table according to the size of the mouse selection area specifically comprises:
determining the number of pixel rows and pixel columns of the mouse selection area;
determining the number of columns of the rendering data table according to the number of pixel rows and pixel columns;
determining the number of rows of the rendering data table according to the number of mesh surfaces selected under the mouse selection area;
establishing the rendering data table according to its number of rows and columns.
4. The multi-layer mesh picking method according to claim 1, wherein storing the scene numbers of the mesh surfaces picked under each pixel of the mouse selection area into the rendering data table by using a graphics rendering tool specifically comprises:
for any pixel in the mouse selection area:
storing the number of mesh surface layers rendered under the pixel in the first cell of the cell column corresponding to the pixel in the rendering data table;
sequentially storing, according to the depth value of each mesh surface, the scene number of each mesh surface rendered under the pixel in the blank cells of the cell column corresponding to the pixel in the rendering data table; the depth value represents the distance of the mesh surface from the viewpoint.
5. The multi-layer mesh picking method according to claim 1, wherein determining the mesh bodies at all depths under the mouse selection area according to the data in the rendering data table specifically comprises:
determining the picked mesh surfaces in the mouse selection area according to the scene numbers stored in the cells of the rendering data table;
determining the picked mesh bodies according to the picked mesh surfaces in the mouse selection area.
6. The multi-layer mesh picking method according to claim 1, further comprising, after determining the mesh bodies at all depths under the mouse selection area according to the data in the rendering data table:
listing the body numbers of the mesh bodies at all depths under the mouse selection area;
removing from the list the body numbers of the mesh bodies that do not need to be rendered;
rendering the remaining mesh bodies in the list in sequence.
7. An image-based multi-layer mesh picking system, applied to a plurality of mesh bodies in a 3D scene, each mesh body being composed of a plurality of mesh surfaces, characterized in that the multi-layer mesh picking system comprises:
a scene number setting module, configured to set a scene number for each mesh surface in the 3D scene, the scene number of a mesh surface comprising the body number of the mesh body to which the surface belongs and the surface number of the surface within that mesh body;
a selection area determining module, configured to determine a mouse selection area under any projection view angle, the mouse selection area being a single pixel or an area with an arbitrary number of pixel rows and columns;
a rendering data table establishing module, configured to establish a rendering data table according to the size of the mouse selection area, the columns of the rendering data table corresponding one-to-one to the pixels of the mouse selection area, and the number of rows of the rendering data table being determined by the number of mesh surfaces picked in the mouse selection area;
a data rendering module, configured to store, by using a graphics rendering tool, the scene numbers of the mesh surfaces picked under each pixel of the mouse selection area into the rendering data table, each cell of the first row of the rendering data table storing the number of mesh surfaces picked under the corresponding pixel, and the cells other than the first row storing the scene numbers of the picked mesh surfaces; and
a mesh body determining module, configured to determine the mesh bodies at all depths under the mouse selection area according to the data in the rendering data table.
8. The multi-layer mesh picking system according to claim 7, wherein the scene number setting module comprises:
a surface number setting unit, configured to set surface numbers for the plurality of mesh surfaces of any mesh body;
a body number setting unit, configured to set a body number for each mesh body in the 3D scene;
a scene number setting unit, configured to set, for any mesh surface, its scene number according to the body number of the mesh body to which the surface belongs and the surface number of the surface within that mesh body.
9. The multi-layer mesh picking system according to claim 7, wherein the rendering data table establishing module comprises:
an area row and column determining unit, configured to determine the number of pixel rows and pixel columns of the mouse selection area;
a data table row number determining unit, configured to determine the number of rows of the rendering data table according to the number of mesh surfaces selected under the mouse selection area;
a data table column number determining unit, configured to determine the number of columns of the rendering data table according to the number of pixel rows and pixel columns;
a data table establishing unit, configured to establish the rendering data table according to its number of rows and columns.
10. The multi-layer mesh picking system according to claim 7, wherein the data rendering module comprises:
a layer number storage unit, configured to store, in the first cell of the cell column corresponding to a pixel of the mouse selection area in the rendering data table, the number of mesh surface layers rendered under that pixel;
a scene number storage unit, configured to store, for any pixel of the mouse selection area, the scene numbers of the mesh surfaces rendered at that pixel, in order of their depth values, into the blank cells of the cell column corresponding to that pixel in the rendering data table; the depth value represents the distance of a mesh surface from the viewpoint.
CN202211092199.6A 2022-09-08 2022-09-08 Multi-layer grid pickup method and system based on image Active CN115641399B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211092199.6A CN115641399B (en) 2022-09-08 2022-09-08 Multi-layer grid pickup method and system based on image


Publications (2)

Publication Number Publication Date
CN115641399A true CN115641399A (en) 2023-01-24
CN115641399B CN115641399B (en) 2024-05-17

Family

ID=84940979


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150287158A1 (en) * 2014-04-05 2015-10-08 Sony Computer Entertainment America Llc Method for efficient re-rendering objects to vary viewports and under varying rendering and rasterization parameters
CN110400372A (en) * 2019-08-07 2019-11-01 网易(杭州)网络有限公司 A kind of method and device of image procossing, electronic equipment, storage medium
US20200366838A1 (en) * 2017-08-03 2020-11-19 Hangzhou Hikvision Digital Technology Co., Ltd. Panoramic image generation method and device
CN112153408A (en) * 2020-09-28 2020-12-29 广州虎牙科技有限公司 Live broadcast rendering method and device, electronic equipment and storage medium
CN114596423A (en) * 2022-02-16 2022-06-07 南方电网数字电网研究院有限公司 Model rendering method and device based on virtual scene gridding and computer equipment
CN114693851A (en) * 2022-03-24 2022-07-01 华南理工大学 Real-time grid contour vectorization and rendering system based on GPU


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
NANYANG WANG等: "Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images", 《PROCEEDINGS OF THE EUROPEAN CONFERENCE ON COMPUTER VISION(ECCV)》, 3 August 2018 (2018-08-03), pages 52 - 67 *
张嘉华等: "GPU 三维图元拾取", 《工程图学学报》, vol. 30, no. 01, 15 February 2009 (2009-02-15), pages 46 - 52 *
张湘玉等: "Catmull-Clark细分网格数据点拾取", 《计算机应用》, vol. 35, no. 5, 10 May 2015 (2015-05-10), pages 1454 - 1458 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 801, Building 2, No. 2570 Hechuan Road, Minhang District, Shanghai, 201101

Applicant after: Hangzhou New Dimension Systems Co.,Ltd.

Address before: Room 3008-1, No. 391, Wener Road, Xihu District, Hangzhou, Zhejiang 310000

Applicant before: NEW DIMENSION SYSTEMS Co.,Ltd.

GR01 Patent grant