CN115641399B - Multi-layer grid pickup method and system based on image - Google Patents

Multi-layer grid pickup method and system based on image

Info

Publication number
CN115641399B
CN115641399B · CN202211092199.6A
Authority
CN
China
Prior art keywords
grid
data table
rendering data
scene
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211092199.6A
Other languages
Chinese (zh)
Other versions
CN115641399A (en)
Inventor
钱行
彭维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou New Dimension Systems Co ltd
Original Assignee
Hangzhou New Dimension Systems Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou New Dimension Systems Co ltd filed Critical Hangzhou New Dimension Systems Co ltd
Priority to CN202211092199.6A priority Critical patent/CN115641399B/en
Publication of CN115641399A publication Critical patent/CN115641399A/en
Application granted granted Critical
Publication of CN115641399B publication Critical patent/CN115641399B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The invention provides an image-based multi-layer grid picking method and system, belonging to the technical field of model picking. The multi-layer grid picking method stores the scene numbers of the grid surfaces rendered under a mouse selection area in a rendering data table, so that any grid body selected under the mouse selection area can be rapidly determined from the information stored in the rendering data table, which optimizes the effect of grid body picking. Moreover, when the projection view angle is unchanged and a newly determined mouse selection area lies within a previously determined one, no new rendering data table needs to be established: the previous rendering data table can be reused to determine the selected grid bodies, which improves grid body picking efficiency.

Description

Multi-layer grid pickup method and system based on image
Technical Field
The invention relates to the technical field of grid pickup, in particular to a multi-layer grid pickup method and system based on images.
Background
In 3D modeling and display, 3D meshes often need to be picked up quickly. Two pickup methods are commonly used at present. The first is ray intersection: a ray pointing perpendicular to the screen is generated from the mouse position, the ray is intersection-tested against the grid surfaces in the 3D scene, and the grid body containing an intersected grid surface is then identified. Although this method can pick up every grid body intersected by the ray, its computation time grows with the complexity of the 3D scene, so it suffers efficiency problems when the data volume is large and is not suitable for area picking. The second is a rendering method: each grid body in the 3D scene is given an ID number, the ID is mapped to a color value, the whole 3D scene is rendered to an off-screen image, the pixel color at the mouse position is read back, and the pixel data is decoded into the ID number, thereby determining the grid body at the mouse position.
Disclosure of Invention
The invention aims to provide a multi-layer grid picking method and system based on images, which effectively optimize the effect of grid body picking.
In order to achieve the above object, the present invention provides the following solutions:
An image-based multi-layer grid pickup method, applied to a 3D scene containing a plurality of grid bodies, each grid body consisting of a plurality of grid surfaces, comprises the following steps:
Setting a scene number for each grid surface in the 3D scene; the scene number of the grid surface comprises the body number of the grid body to which the grid surface belongs and the surface number of the grid surface in the grid body to which the grid surface belongs;
Determining a mouse selection area under any projection view angle; the mouse selection area is a single pixel or an area with an arbitrary number of pixel rows and columns;
Establishing a rendering data table according to the size of the mouse selection area; columns in the rendering data table are in one-to-one correspondence with pixels in the mouse selection area, and the number of rows of the rendering data table is determined according to the number of grid surfaces picked up under the mouse selection area;
Storing, by using a graphics rendering tool, the scene numbers of the grid surfaces picked up under each pixel of the mouse selection area into the rendering data table; each cell of the first row of the rendering data table stores the number of grid surfaces picked up under the corresponding pixel, and the cells other than the first row store the scene numbers of the picked-up grid surfaces;
and determining a plurality of grid bodies of all depths under the mouse selection area according to the data information in the rendering data table.
Optionally, the setting of a scene number for each grid surface in the 3D scene specifically includes:
setting a surface number for a plurality of grid surfaces of any grid body;
Setting a body number for each grid body in the 3D scene;
For any grid surface, setting a scene number for each grid surface according to the body number of the grid body to which the grid surface belongs and the surface number of the grid surface in the grid body to which the grid surface belongs.
Optionally, the creating a rendering data table according to the size of the mouse selection area specifically includes:
determining the pixel row number and the pixel column number of the mouse selection area;
Determining the column number of the rendering data table according to the pixel row number and the pixel column number;
determining the number of lines of the rendering data table according to the number of the selected grid surfaces in the mouse selection area;
and establishing a rendering data table according to the number of rows of the rendering data table and the number of columns of the rendering data table.
Optionally, the storing, by using a graphics rendering tool, of the scene numbers of the picked-up grid surfaces under each pixel of the mouse selection area into the rendering data table specifically includes:
For any pixel in the mouse selection area:
The number of layers of the grid surface rendered under the pixel is stored in a first cell of a cell column corresponding to the pixel in the rendering data table;
sequentially storing scene numbers of all grid planes rendered under the pixels in blank cells of a cell column corresponding to the pixels in the rendering data table according to the depth values of all grid planes; the depth value represents a distance of the mesh surface from the viewpoint.
Optionally, the determining, according to the data information in the rendering data table, a plurality of grid bodies of all depths under the mouse selection area specifically includes:
Determining a grid surface picked up in the mouse selection area according to scene numbers stored in each cell in the rendering data table;
and determining the picked grid body according to the picked grid surface in the mouse selection area.
Optionally, after determining a plurality of grid bodies of all depths under the mouse selection area according to the data information in the rendering data table, the multi-layer grid pick-up method further includes:
Listing the body numbers of a plurality of grid bodies at all depths under the mouse selection area;
Removing from the list the body numbers of the grid bodies that do not need to be rendered;
and rendering the rest grid bodies in the list in turn.
Corresponding to the aforementioned image-based multi-layer grid pickup method, the present invention also provides an image-based multi-layer grid pickup system, applied to a 3D scene containing a plurality of grid bodies, each grid body consisting of a plurality of grid surfaces, the system comprising:
The scene number setting module is used for setting scene numbers for all grid surfaces in the 3D scene; the scene number of the grid surface comprises the body number of the grid body to which the grid surface belongs and the surface number of the grid surface in the grid body to which the grid surface belongs;
The selection area determining module is used for determining a mouse selection area under any projection view angle; the mouse selection area is a single pixel or an area with an arbitrary number of pixel rows and columns;
The rendering data table establishing module is used for establishing a rendering data table according to the size of the mouse selection area; columns in the rendering data table are in one-to-one correspondence with pixels in the mouse selection area, and the number of rows of the rendering data table is determined according to the number of grid surfaces picked up under the mouse selection area;
The data rendering module is used for storing scene numbers of a plurality of picked grid surfaces under each pixel of the mouse selection area into the rendering data table by using a graphics rendering tool; each cell of the first row in the rendering data table is used for storing the number of the picked grid surfaces under the corresponding pixel, and cells except the first row cell in the rendering data table are used for storing the scene numbers of the picked grid surfaces;
And the grid body determining module is used for determining a plurality of grid bodies of all depths under the mouse selection area according to the data information in the rendering data table.
Optionally, the scene number setting module includes:
a face number setting unit configured to set a face number for a plurality of mesh faces of any one mesh body;
A body number setting unit, configured to set a body number for each mesh body in the 3D scene;
The scene number setting unit is used for setting scene numbers for all the grid surfaces according to the body numbers of the grid bodies to which the grid surfaces belong and the surface numbers of the grid surfaces in the grid bodies to which the grid surfaces belong.
Optionally, the rendering data table building module includes:
the region row and column determining unit is used for determining the pixel row number and the pixel column number of the mouse selection region;
The data table line number determining unit is used for determining the line number of the rendering data table according to the number of the selected grid surfaces in the mouse selection area;
the data table column number determining unit is used for determining the column number of the rendering data table according to the pixel row number and the pixel column number;
And the data table establishing unit is used for establishing a rendering data table according to the number of rows of the rendering data table and the number of columns of the rendering data table.
Optionally, the data rendering module includes:
The layer number storage unit is used for storing, for any pixel in the mouse selection area, the number of grid-surface layers rendered under the pixel in the first cell of the cell column corresponding to the pixel in the rendering data table;
a scene number storage unit, configured to sequentially store, for any pixel in the mouse selection area, a scene number of each grid surface rendered under the pixel, according to a depth value of each grid surface, in a blank cell of a cell column corresponding to the pixel in the rendering data table; the depth value represents a distance of the mesh surface from the viewpoint.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
The invention provides an image-based multi-layer grid picking method and system. The multi-layer grid picking method comprises: setting a scene number for each grid surface in the 3D scene; determining a mouse selection area under any projection view angle; establishing a rendering data table according to the size of the mouse selection area; storing, by using a graphics rendering tool, the scene numbers of the picked-up grid surfaces under each pixel of the mouse selection area into the rendering data table; and determining the grid bodies of all depths under the mouse selection area according to the data in the rendering data table. Because the scene numbers of the grid surfaces rendered under the mouse selection area are stored in a rendering data table, any grid body selected under the mouse selection area can be rapidly determined from the stored information, which optimizes the effect of grid body picking. Moreover, when the projection view angle is unchanged and a newly determined mouse selection area lies within a previously determined one, no new rendering data table needs to be established: the previous rendering data table can be reused to determine the selected grid bodies, which improves grid body picking efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a multi-layer image-based grid pick-up method provided in embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of a point-to-plane relationship in a multi-layer grid pick-up method according to embodiment 1 of the present invention;
FIG. 3 is a schematic diagram of transferring an object from a world coordinate system to a clipping space in the multi-layer grid pick-up method according to embodiment 1 of the present invention;
fig. 4 is a schematic diagram of a rendering data table established in the multi-layer grid pick-up method according to embodiment 1 of the present invention;
Fig. 5 is a schematic diagram of storing data in a rendering data table in the multi-layer grid picking method according to embodiment 1 of the present invention;
fig. 6 is a schematic diagram of index data table and linked list storage data in the multi-layer grid picking method based on image provided in embodiment 2 of the present invention;
fig. 7 is a block diagram of a multi-layer image-based grid pickup system according to embodiment 3 of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In 3D modeling and display, geometric objects in a scene often need to be picked up quickly. The following two pickup methods are commonly used:
Ray intersection method: a ray pointing perpendicular to the screen is computed from the mouse position in the world coordinate system of the 3D geometric objects, the ray is intersection-tested against the geometric primitives in the 3D scene, and the geometric object containing an intersected primitive is identified. This method can pick up all geometric objects intersected by the ray, but its computation time is proportional to the number of geometric primitives in the 3D scene, so it suffers efficiency problems when the data volume is large and is not suitable for mouse-area picking.
Rendering-based pixel picking method: each triangular mesh corresponding to a geometric object is given an ID number, the ID is mapped to a color value, the whole scene is rendered to an off-screen image, the pixel color at the mouse position is read back, and the pixel data is decoded into the triangular-mesh ID of the geometric object. The rendering method is much more efficient than geometric intersection, but it can only pick the frontmost object; occluded objects cannot be picked.
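For context, the ID-to-color encoding used by this conventional rendering-based picking is often implemented as in the sketch below. This is illustrative background only; the function names are hypothetical and the 24-bit packing is an assumption, not something specified in this patent.

```python
def id_to_rgb(mesh_id):
    """Pack a mesh ID (< 2**24) into an 8-bit-per-channel RGB color."""
    return ((mesh_id >> 16) & 0xFF, (mesh_id >> 8) & 0xFF, mesh_id & 0xFF)

def rgb_to_id(r, g, b):
    """Decode the pixel color read back from the off-screen image."""
    return (r << 16) | (g << 8) | b

assert rgb_to_id(*id_to_rgb(123456)) == 123456
```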
Aiming at the problems in the prior art, the invention aims to provide a multi-layer grid picking method and a multi-layer grid picking system based on images, which effectively optimize the effect of grid body picking.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
Example 1:
This embodiment provides an image-based multi-layer grid pickup method, applied to a 3D scene containing a plurality of grid bodies, each grid body consisting of a plurality of grid surfaces. As shown in the flowchart of fig. 1, the multi-layer grid pickup method comprises:
A1, setting a scene number for each grid surface in the 3D scene; the scene number of a grid surface comprises the body number of the grid body to which the grid surface belongs and the surface number of the grid surface within that grid body. In this embodiment, step A1 specifically includes:
A11, setting a body number for each grid body in the 3D scene. In one example, the 3D scene contains a teacup model whose cup body, handle and lid are each a grid body, so the teacup is composed of these three components; the cup body is assigned the body number BT, the handle the body number BB, and the lid the body number BG.
A12, setting surface numbers for the grid surfaces of each grid body. Continuing the example, if the cup body of the teacup is a rectangular box, it has 5 faces: front, left, back, right and bottom (in this embodiment, for model walls whose thickness is below a set threshold, the inner and outer surfaces are treated as a single face). Each grid surface is assigned its surface number within the grid body, for example surface numbers 1, 2, 3, 4 and 5 for the front, left, back, right and bottom faces respectively; the numbering of the faces on the handle and the lid also starts at 1. Of course, starting the surface index at 1 is only an example; in other embodiments the grid-surface index may start at 0. It should also be noted that when a grid surface is rendered by the fragment shader, the grid surface is composed of several triangular faces that are rendered one by one. As shown in fig. 2, triangular face 1 and triangular face 2 together form a grid surface; their indices within the grid surface are 0 and 1, and the vertices of triangular faces 1 and 2 have index values 0 to 3 within the grid surface.
A13, setting a scene number for each grid surface according to the body number of the grid body to which it belongs and its surface number within that grid body. In this embodiment, each grid surface of a grid body is allocated a CGPB (Custom GPU Point Buffer) value, which represents the ID of the 3D grid body plus the surface index ID of the grid surface within the 3D model group. Continuing the teacup example, the scene number of the grid surface on the front of the cup body is BT1, the scene number of the grid surface on its left is BT2, and so on for the other surfaces.
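As an illustrative sketch of the numbering scheme of steps A11-A13 (not part of the patent text), the scene numbers of the teacup example can be generated as below; the class name GridBody and its fields are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class GridBody:
    body_number: str   # e.g. "BT" for the cup body
    face_count: int    # number of grid surfaces in this grid body

    def scene_numbers(self):
        # Scene number = body number + surface number; surface numbering starts at 1 here.
        return [f"{self.body_number}{i}" for i in range(1, self.face_count + 1)]

# Teacup example: cup body, handle and lid are separate grid bodies.
for body in (GridBody("BT", 5), GridBody("BB", 1), GridBody("BG", 1)):
    print(body.body_number, body.scene_numbers())
# BT ['BT1', 'BT2', 'BT3', 'BT4', 'BT5'], BB ['BB1'], BG ['BG1']
```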
A2, determining a mouse selection area under any projection view angle; the mouse selection area is a single pixel or an area with an arbitrary number of pixel rows and columns. In this embodiment, after step A2, the method further includes:
Pre-computing a transformation matrix (the projection matrix multiplied by the view matrix), which maps the 3D model from world-coordinate space to the clipping region corresponding to the mouse selection area. The construction of the projection matrix depends on the mouse pickup region, and its projection range is dynamically adjusted according to the size of the mouse pickup region. A point is transformed as transformedP = projectionMatrix × viewMatrix × worldPoint, which transfers the object from the world coordinate system into clip space; after the division transformedP.xyz /= transformedP.w the components are mapped into the range −1 to 1, and if any component of xyz falls outside this range the point is clipped. As shown in fig. 3, the sphere lies partly inside the mouse region and only a portion of it is clipped, while the cube is clipped completely.
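A minimal sketch of the clip-space test described above, assuming column-vector 4×4 view and projection matrices supplied by the caller; numpy and the function name is_clipped are illustration choices, not part of the patent.

```python
import numpy as np

def is_clipped(world_point, view_matrix, projection_matrix):
    """Return True if the world-space point falls outside the adjusted projection range."""
    p = np.append(np.asarray(world_point, dtype=float), 1.0)  # homogeneous coordinates
    transformed = projection_matrix @ view_matrix @ p          # world space -> clip space
    ndc = transformed[:3] / transformed[3]                     # transformedP.xyz /= transformedP.w
    return bool(np.any(ndc < -1.0) or np.any(ndc > 1.0))       # any component outside [-1, 1]
```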
A3, establishing a rendering data table according to the size of the mouse selection area. As shown in fig. 4, the columns of the rendering data table correspond one-to-one with the pixels in the mouse selection area, and the number of rows is determined according to the number of grid surfaces picked up under the mouse selection area. In this embodiment, step A3 specifically includes:
A31, determining the number of pixel rows and pixel columns of the mouse selection area.
A32, determining the number of columns of the rendering data table according to the number of pixel rows and columns; if the mouse selection area is 5×6, i.e. 30 pixels, the number of columns of the rendering data table is determined to be 30.
A33, determining the number of rows of the rendering data table according to the number of grid surfaces selected under each pixel of the mouse selection area. Since the number of selected grid surfaces may differ from pixel to pixel, the largest such count is taken as the number of rows of the rendering data table; for example, if at most 10 grid surfaces are selected under any single pixel, the number of rows of the rendering data table is determined to be 10.
A34, establishing the rendering data table according to its number of rows and number of columns.
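A sketch of steps A31-A34 for the 5×6 example above. The helper name, the use of a dense 2-D Python list, and the extra header row reserved for the per-pixel face count (reconciling step A33 with step A41) are assumptions for illustration.

```python
def build_rendering_table(pixel_rows, pixel_cols, max_faces_per_pixel):
    """One column per pixel of the selection area; row 0 will hold the per-pixel
    face count and the remaining rows the scene numbers (None marks an unused cell)."""
    num_cols = pixel_rows * pixel_cols
    num_rows = 1 + max_faces_per_pixel
    return [[None] * num_cols for _ in range(num_rows)]

table = build_rendering_table(5, 6, 10)   # 5x6 selection area, at most 10 faces under a pixel
```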
A4, storing, by using a graphics rendering tool, the scene numbers of the grid surfaces picked up under each pixel of the mouse selection area into the rendering data table; each cell of the first row of the rendering data table stores the number of grid surfaces picked up under the corresponding pixel, and the cells other than the first row store the scene numbers of the picked-up grid surfaces. In this embodiment, step A4 specifically includes, for any pixel in the mouse selection area:
A41, storing the number of grid-surface layers rendered under the pixel in the first cell of the cell column corresponding to the pixel in the rendering data table. In other words, the number of fragment-shader executions under each pixel is stored in the first row of the rendering data table. As shown in fig. 5, if the pixel at coordinates (3, 3) is drawn 5 times, the value 5 is stored in the cell at coordinates (0, 24) of the rendering data table.
A42, sequentially storing the scene numbers of the grid surfaces rendered under the pixel in the blank cells of the cell column corresponding to the pixel, ordered by the depth values of the grid surfaces; the depth value represents the distance of the grid surface from the viewpoint. That is, the cell column corresponding to the pixel stores the scene numbers of the rendered grid surfaces in order of their depth values. As shown in fig. 5, the cells at coordinates (1, 24) to (5, 24) of the rendering data table store the scene numbers of the grid surfaces in depth order.
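Continuing the sketch above, one column of the table is filled as in steps A41-A42. The fragment list format (scene number, depth) and the linear column index column = pixel_row * width + pixel_col are assumptions and may differ from the cell numbering used in fig. 5.

```python
def store_pixel_fragments(table, pixel_row, pixel_col, width, fragments):
    """fragments: list of (scene_number, depth) pairs rendered under this pixel."""
    column = pixel_row * width + pixel_col     # one table column per pixel (assumed mapping)
    table[0][column] = len(fragments)          # first row: number of picked-up grid surfaces
    # Remaining rows: scene numbers ordered by distance from the viewpoint (nearest first).
    ordered = sorted(fragments, key=lambda f: f[1])
    for row, (scene_number, _depth) in enumerate(ordered, start=1):
        table[row][column] = scene_number

store_pixel_fragments(table, 3, 3, 6, [("BT1", 0.2), ("BG1", 0.7)])
```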
A5, determining the grid bodies of all depths under the mouse selection area according to the data in the rendering data table. In this embodiment, step A5 specifically includes:
A51, determining the grid surfaces picked up in the mouse selection area according to the scene numbers stored in the cells of the rendering data table.
A52, determining the picked-up grid bodies according to the grid surfaces picked up in the mouse selection area.
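Continuing the sketch, step A5 can be read off the filled table as below. Splitting a scene number such as "BT1" back into its body number assumes the letters-then-digits format of the teacup example.

```python
import re

def picked_bodies(table):
    """Return the body numbers of all grid bodies picked up under the selection area."""
    bodies = set()
    for column in range(len(table[0])):
        count = table[0][column] or 0            # grid surfaces rendered under this pixel
        for row in range(1, count + 1):
            scene_number = table[row][column]    # e.g. "BT3"
            bodies.add(re.match(r"[A-Za-z]+", scene_number).group(0))
    return bodies

print(picked_bodies(table))   # with the example above: {'BT', 'BG'}
```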
In this embodiment, after the grid bodies of all depths under the mouse selection area have been determined in step A5 according to the data in the rendering data table, the image-based multi-layer grid pickup method further includes:
A6, listing the body numbers of the grid bodies of all depths under the mouse selection area;
A7, removing from the list the body numbers of the grid bodies that do not need to be rendered;
A8, rendering the remaining grid bodies in the list in turn.
In the image-based multi-layer grid pickup method provided by this embodiment, the scene numbers of the grid surfaces rendered under the mouse selection area are stored in a rendering data table, so that any grid body selected under the mouse selection area can be rapidly determined from the stored information, which optimizes the effect of grid body picking. Moreover, when the projection view angle is unchanged and a newly determined mouse selection area lies within a previously determined one, no new rendering data table needs to be established: the previous rendering data table can be reused to determine the selected grid bodies, which improves grid body picking efficiency.
Example 2:
The image-based multi-layer grid picking method provided in this embodiment differs from embodiment 1 in that the scene numbers of the grid surfaces selected under the mouse selection area are stored in the form of a linked list. The method specifically includes the following steps:
B1, setting scene numbers for all grid surfaces in a 3D scene; the scene number of the grid surface comprises the body number of the grid body to which the grid surface belongs and the surface number of the grid surface in the grid body to which the grid surface belongs; in this embodiment, the step B1 specifically includes:
B11, setting a body number for each grid body in the 3D scene. In one example, the 3D scene contains a teacup model whose cup body, handle and lid are each a grid body, so the teacup is composed of these three components; the cup body is assigned the body number BT, the handle the body number BB, and the lid the body number BG.
B12, setting surface numbers for the grid surfaces of each grid body. Continuing the example, if the cup body of the teacup is a rectangular box, it has 5 faces: front, left, back, right and bottom (in this embodiment, for model walls whose thickness is below a set threshold, the inner and outer surfaces are treated as a single face). Each grid surface is assigned its surface number within the grid body, for example surface numbers 1, 2, 3, 4 and 5 for the front, left, back, right and bottom faces respectively; the numbering of the faces on the handle and the lid also starts at 1. Of course, starting the surface index at 1 is only an example; in other embodiments the grid-surface index may start at 0.
B13, setting a scene number for each grid surface according to the body number of the grid body to which it belongs and its surface number within that grid body. In this embodiment, each grid surface of a grid body is allocated a CGPB (Custom GPU Point Buffer) value, which represents the ID of the 3D grid body plus the surface index ID of the grid surface within the 3D model group. Continuing the teacup example, the scene number of the grid surface on the front of the cup body is BT1, the scene number of the grid surface on its left is BT2, and so on for the other surfaces.
B2, determining a mouse selection area under any projection view angle; the mouse selection area is a single pixel or an area with an arbitrary number of pixel rows and columns.
B3, establishing an index data table according to the size of the mouse selection area; the rows and columns of the index data table correspond one-to-one with the pixel rows and columns of the mouse selection area. In this embodiment, step B3 specifically includes:
B31, determining the number of pixel rows and pixel columns of the mouse selection area.
B32, determining the number of rows and columns of the index data table according to the number of pixel rows and columns; if the mouse selection area is 5×6 pixels, the index data table is determined to be 5×6.
B33, establishing the index data table according to its number of rows and number of columns.
B4, shading the objects under the mouse selection area by using the graphics API, storing in the corresponding cell of the index data table the shading index produced by the fragment shader under each pixel, and storing the index value, scene number and depth value of each shaded point in a linked list in order of index value; links are established in the linked list between the index values of the points under the same pixel, and each cell of the index data table stores only the index of the last-shaded point. In this embodiment, step B4 specifically includes:
B41, the fragment shader draws the triangular faces under the mouse selection area, and each drawn point of a triangular face has a unique index value; several triangular faces form a grid surface, and the index value of the first point of the next triangular face continues from the index value of the last point of the previous triangular face.
In this embodiment, as shown in fig. 6, the first triangular face A1 drawn by the fragment shader covers 4 pixels, so the index values of those 4 points are 1 to 4; the second triangular face A2 covers 6 pixels, whose index values are 5 to 10; by the same rule the 6 points of the third triangular face A3 have index values 11 to 16. In the linked-list storage, the first 4 entries store the scene numbers and depths of the points with index values 1 to 4; since triangular face A1 is the first face drawn, no earlier point is covered at any of its pixels, so the reference index of each of the points with index values 1 to 4 is zero.
B42, filling the corresponding cells of the index data table according to the index values of the triangular faces drawn under each pixel of the mouse selection area, and storing the index value, scene number and depth value of each point in the linked list in order of index value.
For example, suppose that under the pixel at coordinates (3, 3) there are 4 mutually covering triangular-face points with index values 5, 13, 19 and 26. The index value stored in the corresponding cell of the index data table (row 4, column 4) is then 26; at this moment the index data table holds only the single index value 26, yet all the surfaces shaded under that pixel must be recoverable. Links are therefore established in the linked list between the index values of the points under the same pixel: the reference index of the point with index value 26 is 19, that of 19 is 13, that of 13 is 5, and that of 5 is 0, at which the chain stops. In other words, when the pixel at coordinates (3, 3) is queried, 4 points in total were drawn there; by obtaining the scene numbers of the points corresponding to these 4 index values, the numbers of all the surfaces shaded under that pixel are obtained.
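The index data table plus linked list of steps B41-B42 behaves like the CPU-side sketch below; the class name FragmentLinkedList, the head_table/nodes field names, and storing the list as a Python list of tuples are assumptions made for illustration.

```python
class FragmentLinkedList:
    """Index data table + linked list: each cell keeps only the index of the last-drawn
    point at that pixel; every stored point records the index of the point it covered
    (a reference index of 0 means nothing lies below it)."""

    def __init__(self, rows, cols):
        self.head_table = [[0] * cols for _ in range(rows)]  # index data table, 0 = empty
        self.nodes = [None]                                  # slot 0 unused so indices start at 1

    def shade(self, row, col, scene_number, depth):
        previous = self.head_table[row][col]                 # point previously on top at this pixel
        self.nodes.append((scene_number, depth, previous))   # (scene number, depth, reference index)
        self.head_table[row][col] = len(self.nodes) - 1      # newly drawn point becomes the head

buf = FragmentLinkedList(5, 6)
buf.shade(3, 3, "BT1", 0.4)
buf.shade(3, 3, "BG1", 0.2)   # covers the previous point; its reference index links back to it
```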
B5, determining the grid bodies of all depths under each pixel of the mouse selection area according to the index values in the index data table and the data stored in the linked list. In this embodiment, step B5 specifically includes:
B51, determining, from the index value stored in each cell of the index data table, the index of the point last drawn at the corresponding pixel of the mouse selection area.
B52, determining, from the index of the point last drawn at each pixel, the other points linked to it in the linked list.
B53, obtaining the scene numbers of these points, and determining the grid surfaces and grid bodies corresponding to the points from their scene numbers.
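Traversing that structure to recover every shaded surface under one pixel (step B5) might look like the following; walk_pixel is a hypothetical helper name, not taken from the patent.

```python
def walk_pixel(buf, row, col):
    """Yield the scene numbers of all points drawn at (row, col), last-drawn first."""
    index = buf.head_table[row][col]
    while index != 0:                                # a reference index of 0 ends the chain
        scene_number, _depth, index = buf.nodes[index]
        yield scene_number

print(list(walk_pixel(buf, 3, 3)))   # with the example above: ['BG1', 'BT1']
```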
In this embodiment, after the grid bodies of all depths under each pixel of the mouse selection area have been determined in step B5 according to the index values in the index data table and the data stored in the linked list, the multi-layer grid pick-up method further includes:
B6, listing the body numbers of the grid bodies of all depths under each pixel of the mouse selection area;
B7, removing the body numbers of the grid bodies that do not need to be selected, and applying transparency processing to the grid bodies corresponding to the removed body numbers.
In the image-based multi-layer grid picking method provided by this embodiment, the information of every point processed by the fragment shader during drawing is stored sequentially in a linked list, while the index data table stores, for each pixel, the index value of the last-drawn point. From the index value in each cell, the topmost drawn point at the corresponding pixel can be obtained, and by following the reference indices between entries of the linked list, the index values of all drawn points located under the same pixel can be found quickly, so the grid bodies selected under every pixel of the mouse selection area can be determined rapidly. Embodiment 1 uses a single two-dimensional rendering data table to store the scene numbers of the grid surfaces under each pixel of the mouse selection area; there the number of rows is hard to determine in advance and, because the number of grid surfaces drawn under each pixel differs, some cells store nothing. Compared with that, this embodiment avoids wasting storage space.
Example 3:
This embodiment, corresponding to the image-based multi-layer grid pickup method provided in embodiment 1, provides an image-based multi-layer grid pickup system, applied to a 3D scene containing a plurality of grid bodies, each grid body consisting of a plurality of grid surfaces. As shown in fig. 7, the multi-layer grid pickup system includes:
The scene number setting module is used for setting scene numbers for all grid surfaces in the 3D scene; the scene number of the grid surface comprises the body number of the grid body to which the grid surface belongs and the surface number of the grid surface in the grid body to which the grid surface belongs;
The selection area determining module is used for determining a mouse selection area under any projection view angle; the mouse selection area is a single pixel or an area with an arbitrary number of pixel rows and columns;
The rendering data table establishing module is used for establishing a rendering data table according to the size of the mouse selection area; columns in the rendering data table are in one-to-one correspondence with pixels in the mouse selection area, and the number of rows of the rendering data table is determined according to the number of grid surfaces picked up under the mouse selection area;
The data rendering module is used for storing scene numbers of a plurality of picked grid surfaces under each pixel of the mouse selection area into the rendering data table by using a graphics rendering tool; each cell of the first row in the rendering data table is used for storing the number of the picked grid surfaces under the corresponding pixel, and cells except the first row cell in the rendering data table are used for storing the scene numbers of the picked grid surfaces;
And the grid body determining module is used for determining a plurality of grid bodies of all depths under the mouse selection area according to the data information in the rendering data table.
In this embodiment, the scene number setting module includes:
a face number setting unit configured to set a face number for a plurality of mesh faces of any one mesh body;
A body number setting unit, configured to set a body number for each mesh body in the 3D scene;
The scene number setting unit is used for setting scene numbers for all the grid surfaces according to the body numbers of the grid bodies to which the grid surfaces belong and the surface numbers of the grid surfaces in the grid bodies to which the grid surfaces belong.
In this embodiment, the rendering data table building module includes:
the region row and column determining unit is used for determining the pixel row number and the pixel column number of the mouse selection region;
The data table line number determining unit is used for determining the line number of the rendering data table according to the number of the selected grid surfaces in the mouse selection area;
the data table column number determining unit is used for determining the column number of the rendering data table according to the pixel row number and the pixel column number;
And the data table establishing unit is used for establishing a rendering data table according to the number of rows of the rendering data table and the number of columns of the rendering data table.
In this embodiment, the data rendering module includes:
The layer number storage unit is used for storing, for any pixel in the mouse selection area, the number of grid-surface layers rendered under the pixel in the first cell of the cell column corresponding to the pixel in the rendering data table;
a scene number storage unit, configured to sequentially store, for any pixel in the mouse selection area, a scene number of each grid surface rendered under the pixel, according to a depth value of each grid surface, in a blank cell of a cell column corresponding to the pixel in the rendering data table; the depth value represents a distance of the mesh surface from the viewpoint.
Program portions of the technology may be considered to be "products" or "articles of manufacture" in the form of executable code and/or associated data, embodied or carried out by a computer readable medium. A tangible, persistent storage medium may include any memory or storage used by a computer, processor, or similar device or related module. Such as various semiconductor memories, tape drives, disk drives, or the like, capable of providing storage functionality for software.
All or a portion of the software may sometimes be communicated over a network, such as the Internet or another communication network. Such communication may load software from one computer device or processor into another, for example from a server or host computer onto the hardware platform of a computing environment that implements the system described herein, or into another system providing similar functionality. Thus, another medium capable of carrying software elements may also serve as a physical connection between local devices, such as optical, electrical or electromagnetic waves propagating through cables, optical fibre or the air. Physical media used for such carrier waves, such as electrical, wireless or optical links, may also be considered media bearing the software. Unless limited to a tangible "storage" medium, other terms used herein to refer to a computer or machine "readable medium" mean any medium that participates in the execution of any instructions by a processor.
Specific examples are employed herein, but the above description is merely illustrative of the principles and embodiments of the present invention, which are presented solely to aid in the understanding of the method of the present invention and its core ideas; it will be appreciated by those skilled in the art that the modules or steps of the invention described above may be implemented by general-purpose computer means, alternatively they may be implemented by program code executable by computing means, whereby they may be stored in storage means for execution by computing means, or they may be made into individual integrated circuit modules separately, or a plurality of modules or steps in them may be made into a single integrated circuit module. The present invention is not limited to any specific combination of hardware and software.
Also, it is within the scope of the present invention to be modified by those of ordinary skill in the art in light of the present teachings. In view of the foregoing, this description should not be construed as limiting the invention.

Claims (8)

1. An image-based multi-layer grid pickup method, applied to a 3D scene containing a plurality of grid bodies, each grid body consisting of a plurality of grid surfaces, characterized in that the multi-layer grid pickup method comprises the following steps:
Setting a scene number for each grid surface in the 3D scene; the scene number of the grid surface comprises the body number of the grid body to which the grid surface belongs and the surface number of the grid surface in the grid body to which the grid surface belongs;
Determining a mouse selection area under any projection view angle; the mouse selection area is a single pixel or an area with an arbitrary number of pixel rows and columns;
Establishing a rendering data table according to the size of the mouse selection area; columns in the rendering data table are in one-to-one correspondence with pixels in the mouse selection area, and the number of rows of the rendering data table is determined according to the number of grid surfaces picked up under the mouse selection area;
Storing scene numbers of a plurality of picked grid surfaces under each pixel of the mouse selection area into the rendering data table by using a graphics rendering tool; each cell of the first row in the rendering data table is used for storing the number of the picked grid surfaces under the corresponding pixel, and cells except the first row cell in the rendering data table are used for storing the scene numbers of the picked grid surfaces; storing scene numbers of a plurality of picked grid surfaces under each pixel of the mouse selection area into the rendering data table by using a graphics rendering tool, wherein the method specifically comprises the following steps:
For any pixel in the mouse selection area:
The number of layers of the grid surface rendered under the pixel is stored in a first cell of a cell column corresponding to the pixel in the rendering data table;
sequentially storing scene numbers of all grid planes rendered under the pixels in blank cells of a cell column corresponding to the pixels in the rendering data table according to the depth values of all grid planes; the depth value represents the distance between the grid surface and the viewpoint;
Determining a plurality of grid bodies of all depths under the mouse selection area according to the data information in the rendering data table; the method specifically comprises the following steps:
Determining a grid surface picked up in the mouse selection area according to scene numbers stored in each cell in the rendering data table;
and determining the picked grid body according to the picked grid surface in the mouse selection area.
2. The multi-layer grid pickup method according to claim 1, wherein the setting of a scene number for each grid surface in the 3D scene specifically includes:
setting a surface number for a plurality of grid surfaces of any grid body;
Setting a body number for each grid body in the 3D scene;
For any grid surface, setting a scene number for each grid surface according to the body number of the grid body to which the grid surface belongs and the surface number of the grid surface in the grid body to which the grid surface belongs.
3. The multi-layer grid pickup method according to claim 1, wherein the creating a rendering data table according to the size of the mouse selection area specifically comprises:
determining the pixel row number and the pixel column number of the mouse selection area;
Determining the column number of the rendering data table according to the pixel row number and the pixel column number;
determining the number of lines of the rendering data table according to the number of the selected grid surfaces in the mouse selection area;
and establishing a rendering data table according to the number of rows of the rendering data table and the number of columns of the rendering data table.
4. The multi-layered grid pickup method according to claim 1, wherein after the determining of the number of grid bodies of all depths under the mouse selection area based on the data information in the rendering data table, the multi-layered grid pickup method further comprises:
Listing the body numbers of a plurality of grid bodies at all depths under the mouse selection area;
Removing from the list the body numbers of the grid bodies that do not need to be rendered;
and rendering the rest grid bodies in the list in turn.
5. An image-based multi-layer grid pick-up system, applied to a 3D scene containing a plurality of grid bodies, each grid body consisting of a plurality of grid surfaces, characterized in that the image-based multi-layer grid pick-up system is adapted to implement the image-based multi-layer grid pick-up method as claimed in any one of claims 1 to 4, the image-based multi-layer grid pick-up system comprising:
The scene number setting module is used for setting scene numbers for all grid surfaces in the 3D scene; the scene number of the grid surface comprises the body number of the grid body to which the grid surface belongs and the surface number of the grid surface in the grid body to which the grid surface belongs;
The selection area determining module is used for determining a mouse selection area under any projection view angle; the mouse selection area is a single pixel or an area with an arbitrary number of pixel rows and columns;
The rendering data table establishing module is used for establishing a rendering data table according to the size of the mouse selection area; columns in the rendering data table are in one-to-one correspondence with pixels in the mouse selection area, and the number of rows of the rendering data table is determined according to the number of grid surfaces picked up under the mouse selection area;
The data rendering module is used for storing scene numbers of a plurality of picked grid surfaces under each pixel of the mouse selection area into the rendering data table by using a graphics rendering tool; each cell of the first row in the rendering data table is used for storing the number of the picked grid surfaces under the corresponding pixel, and cells except the first row cell in the rendering data table are used for storing the scene numbers of the picked grid surfaces;
And the grid body determining module is used for determining a plurality of grid bodies of all depths under the mouse selection area according to the data information in the rendering data table.
6. The multi-layered grid pickup system according to claim 5, wherein the scene number setting module comprises:
a face number setting unit configured to set a face number for a plurality of mesh faces of any one mesh body;
A body number setting unit, configured to set a body number for each mesh body in the 3D scene;
The scene number setting unit is used for setting scene numbers for all the grid surfaces according to the body numbers of the grid bodies to which the grid surfaces belong and the surface numbers of the grid surfaces in the grid bodies to which the grid surfaces belong.
7. The multi-layered grid pickup system according to claim 5, wherein the rendering data table creation module comprises:
the region row and column determining unit is used for determining the pixel row number and the pixel column number of the mouse selection region;
The data table line number determining unit is used for determining the line number of the rendering data table according to the number of the selected grid surfaces in the mouse selection area;
the data table column number determining unit is used for determining the column number of the rendering data table according to the pixel row number and the pixel column number;
And the data table establishing unit is used for establishing a rendering data table according to the number of rows of the rendering data table and the number of columns of the rendering data table.
8. The multi-layer grid pick-up system of claim 5, wherein the data rendering module comprises:
The layer number storage unit is used for storing, for any pixel in the mouse selection area, the number of grid-surface layers rendered under the pixel in the first cell of the cell column corresponding to the pixel in the rendering data table;
a scene number storage unit, configured to sequentially store, for any pixel in the mouse selection area, a scene number of each grid surface rendered under the pixel, according to a depth value of each grid surface, in a blank cell of a cell column corresponding to the pixel in the rendering data table; the depth value represents a distance of the mesh surface from the viewpoint.
CN202211092199.6A 2022-09-08 2022-09-08 Multi-layer grid pickup method and system based on image Active CN115641399B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211092199.6A CN115641399B (en) 2022-09-08 2022-09-08 Multi-layer grid pickup method and system based on image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211092199.6A CN115641399B (en) 2022-09-08 2022-09-08 Multi-layer grid pickup method and system based on image

Publications (2)

Publication Number Publication Date
CN115641399A CN115641399A (en) 2023-01-24
CN115641399B true CN115641399B (en) 2024-05-17

Family

ID=84940979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211092199.6A Active CN115641399B (en) 2022-09-08 2022-09-08 Multi-layer grid pickup method and system based on image

Country Status (1)

Country Link
CN (1) CN115641399B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110400372A (en) * 2019-08-07 2019-11-01 网易(杭州)网络有限公司 A kind of method and device of image procossing, electronic equipment, storage medium
CN112153408A (en) * 2020-09-28 2020-12-29 广州虎牙科技有限公司 Live broadcast rendering method and device, electronic equipment and storage medium
CN114596423A (en) * 2022-02-16 2022-06-07 南方电网数字电网研究院有限公司 Model rendering method and device based on virtual scene gridding and computer equipment
CN114693851A (en) * 2022-03-24 2022-07-01 华南理工大学 Real-time grid contour vectorization and rendering system based on GPU

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015154004A1 (en) * 2014-04-05 2015-10-08 Sony Computer Entertainment America Llc Method for efficient re-rendering objects to vary viewports and under varying rendering and rasterization parameters
CN109547766B (en) * 2017-08-03 2020-08-14 杭州海康威视数字技术股份有限公司 Panoramic image generation method and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110400372A (en) * 2019-08-07 2019-11-01 网易(杭州)网络有限公司 A kind of method and device of image procossing, electronic equipment, storage medium
CN112153408A (en) * 2020-09-28 2020-12-29 广州虎牙科技有限公司 Live broadcast rendering method and device, electronic equipment and storage medium
CN114596423A (en) * 2022-02-16 2022-06-07 南方电网数字电网研究院有限公司 Model rendering method and device based on virtual scene gridding and computer equipment
CN114693851A (en) * 2022-03-24 2022-07-01 华南理工大学 Real-time grid contour vectorization and rendering system based on GPU

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Data point picking on Catmull-Clark subdivision meshes; Zhang Xiangyu et al.; Journal of Computer Applications; 2015-05-10; vol. 35, no. 5; pp. 1454-1458 *
GPU-based 3D primitive picking; Zhang Jiahua et al.; Journal of Engineering Graphics; 2009-02-15; vol. 30, no. 1; pp. 46-52 *
Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images; Nanyang Wang et al.; Proceedings of the European Conference on Computer Vision (ECCV); 2018-08-03; pp. 52-67 *

Also Published As

Publication number Publication date
CN115641399A (en) 2023-01-24

Similar Documents

Publication Publication Date Title
CN115082639B (en) Image generation method, device, electronic equipment and storage medium
US8760450B2 (en) Real-time mesh simplification using the graphics processing unit
CN115100339B (en) Image generation method, device, electronic equipment and storage medium
US9218686B2 (en) Image processing device
CN108564652B (en) High-precision three-dimensional reconstruction method, system and equipment for efficiently utilizing memory
CN109887030A (en) Texture-free metal parts image position and posture detection method based on the sparse template of CAD
KR20140139553A (en) Visibility-based state updates in graphical processing units
CN110349225B (en) BIM model external contour rapid extraction method
CN104331918A (en) Occlusion culling and acceleration method for drawing outdoor ground surface in real time based on depth map
US7158133B2 (en) System and method for shadow rendering
CN105894551B (en) Image drawing method and device
US11348303B2 (en) Methods, devices, and computer program products for 3D texturing
CN111161387A (en) Method and system for synthesizing image in stacked scene, storage medium and terminal equipment
CN113614735A (en) Dense 6-DoF gesture object detector
CN111415420A (en) Spatial information determination method and device and electronic equipment
US11107278B2 (en) Polygon model generating apparatus, polygon model generation method, and program
CN115641399B (en) Multi-layer grid pickup method and system based on image
CN116310060B (en) Method, device, equipment and storage medium for rendering data
TWI731604B (en) Three-dimensional point cloud data processing method
Steinbach et al. 3-D reconstruction of real-world objects using extended voxels
CN116912515A (en) LoD-based VSLAM feature point detection method
US11367262B2 (en) Multi-dimensional acceleration structure
CN113516751B (en) Method and device for displaying cloud in game and electronic terminal
EP3504684A1 (en) Hybrid render with preferred primitive batch binning and sorting
CN114418952A (en) Goods counting method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: Room 801, Building 2, No. 2570 Hechuan Road, Minhang District, Shanghai, 201101

Applicant after: Hangzhou New Dimension Systems Co.,Ltd.

Address before: Room 3008-1, No. 391, Wener Road, Xihu District, Hangzhou, Zhejiang 310000

Applicant before: NEW DIMENSION SYSTEMS Co.,Ltd.

GR01 Patent grant
GR01 Patent grant