CN117876555A - Efficient rendering method of three-dimensional model data based on POI retrieval - Google Patents
Efficient rendering method of three-dimensional model data based on POI retrieval
- Publication number
- CN117876555A CN117876555A CN202410278302.9A CN202410278302A CN117876555A CN 117876555 A CN117876555 A CN 117876555A CN 202410278302 A CN202410278302 A CN 202410278302A CN 117876555 A CN117876555 A CN 117876555A
- Authority
- CN
- China
- Legal status
- Granted
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T7/00—Image analysis; G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
- G06T7/13—Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
- G06T7/143—Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
- G06T7/168—Segmentation; Edge detection involving transform domain methods
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
- G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/20—Special algorithmic details; G06T2207/20048—Transform domain processing; G06T2207/20061—Hough transform
Abstract
The invention belongs to the technical field of image processing and specifically relates to an efficient rendering method for three-dimensional model data based on POI retrieval. The method comprises: acquiring all three-dimensional pixel points on the three-dimensional model of a planning design drawing; determining boundary pixel points according to the probability that each three-dimensional pixel point belongs to a region boundary; performing straight-line detection on all boundary pixel points to obtain dividing lines; partitioning all three-dimensional pixel points into a number of three-dimensional blocks by the dividing lines; and storing, as preloaded rendering information, the spatial position information of the three-dimensional pixel points at the two endpoints of every dividing line together with the representative color feature of every three-dimensional block. When the planning design drawing is displayed, the three-dimensional model is first rendered coarsely from the stored preloaded rendering information and then rendered in detail from the stored complete rendering information. The invention improves the efficiency of the three-dimensional model rendering method, so that the rendering effect is presented more smoothly to the user.
Description
Technical Field
The invention relates to the technical field of image processing, and more particularly to an efficient rendering method for three-dimensional model data based on POI retrieval.
Background
A three-dimensional model of a planning and design drawing constructed with computer technology is widely used in many fields because it faithfully reproduces real scenes and details.
Conventionally, color rendering is performed on a three-dimensional model according to stored color information of each three-dimensional pixel point, and for a complex three-dimensional model, the number of three-dimensional pixel points to be rendered may be very large, resulting in a slow rendering speed.
To present the rendering effect of the three-dimensional model smoothly while the user interacts with it and thereby provide a better user experience, the efficiency of the three-dimensional model rendering method needs to be improved.
Disclosure of Invention
To solve one or more of the above-described technical problems, the present invention provides aspects as follows.
A three-dimensional model data efficient rendering method based on POI (Point of Interest) retrieval, comprising:
acquiring all three-dimensional pixel points on a three-dimensional model of a planning design drawing, wherein the three-dimensional pixel points have space position information and color rendering information;
determining a representative region of each three-dimensional pixel point by a region growing method according to the color rendering information and the spatial position information of each three-dimensional pixel point;
determining the probability that each three-dimensional pixel belongs to the region boundary according to the overlapping condition between the edge pixel points of the representative region of each three-dimensional pixel;
taking three-dimensional pixel points with probability of belonging to the regional boundary larger than a preset first threshold value as boundary pixel points; performing straight line detection on all boundary pixel points to obtain a plurality of line segments; determining the preference of each line segment as a dividing line according to the probability that all three-dimensional pixel points on each line segment belong to the region boundary;
taking a line segment with the preference larger than a preset second threshold value as a dividing line to be determined; obtaining a plurality of dividing lines from all dividing lines to be determined according to the similarity between every two dividing lines to be determined;
dividing all three-dimensional pixel points into a plurality of three-dimensional blocks through dividing lines, taking the spatial position information of the three-dimensional pixel points corresponding to two endpoints of all dividing lines and the representative color characteristics of all three-dimensional blocks as preloaded rendering information, storing the preloaded rendering information, and taking the color rendering information of all three-dimensional pixel points on a three-dimensional model of a planning design drawing as complete rendering information, and storing the complete rendering information; when the planning and design diagram is displayed, the three-dimensional model is subjected to primary rendering according to the stored preloaded rendering information, and detail rendering is performed through the stored complete rendering information.
In one embodiment, the determining the representative region of each voxel by the region growing method includes:
combining the three-dimensional pixel points in the neighborhood of the seed point, which accords with the combination condition, into a growth area represented by the seed point, and continuing to combine the three-dimensional pixel points as new seed points until the new three-dimensional pixel points which accord with the combination condition do not exist, so as to obtain the growth area represented by the seed point, and taking the growth area as a representative area of the three-dimensional pixel points;
the size of the neighborhood is 3 × 3 × 3;
the merging condition is that the color rendering information of the three-dimensional pixel point lies within the MacAdam ellipse corresponding to the color rendering information of the seed point.
In one embodiment, the probability that each voxel belongs to a region boundary satisfies the relationship:

P_k = n_k / max_i(n_i)

where P_k represents the probability that the kth three-dimensional pixel point belongs to a region boundary; k represents the sequence number of the three-dimensional pixel point and takes integer values in [1, N]; N represents the number of all three-dimensional pixel points; n_k represents the number of three-dimensional pixel points, among all three-dimensional pixel points, whose representative region's edge pixel points include the kth three-dimensional pixel point; n_i represents the corresponding number for the ith three-dimensional pixel point, i being the sequence number of a three-dimensional pixel point; and max() represents the maximum function.
In one embodiment, the performing straight line detection on all boundary pixel points to obtain a plurality of line segments includes:
performing line detection on all boundary pixel points through a Hough transformation line detection algorithm to obtain a plurality of lines;
dividing each straight line into a plurality of adjacent line segments according to boundary pixel points and non-boundary pixel points on the straight line, wherein two endpoints of the line segments are required to be boundary pixel points, the number of continuous non-boundary pixel points on the line segments is smaller than a number threshold value, meanwhile, no boundary pixel points exist between the two adjacent line segments, and the number of non-boundary pixel points between the two adjacent line segments is not smaller than the number threshold value;
the method for acquiring the non-boundary pixel point between the two adjacent line segments comprises the following steps: two endpoints of a first line segment in two adjacent line segments are respectively marked as an endpoint R1 and an endpoint R2, two endpoints of a second line segment are respectively marked as an endpoint W1 and an endpoint W2, the distance between the endpoint R1 and the endpoint W1, the distance between the endpoint R1 and the endpoint W2, the distance between the endpoint R2 and the endpoint W1 and the distance between the endpoint R2 and the endpoint W2 are respectively calculated, and all non-boundary pixel points between two endpoints corresponding to the minimum value in the four distances are used as non-boundary pixel points between the two adjacent line segments;
the non-boundary pixel points refer to three-dimensional pixel points with probability of belonging to the region boundary smaller than or equal to a preset first threshold value.
In one embodiment, the preference of each line segment as a dividing line satisfies the expression:

Y = Σ_{j=1}^{M} p_j

where Y represents the preference of the line segment as a dividing line, p_j represents the probability that the jth boundary pixel point forming the line segment belongs to a region boundary, j represents the sequence number of the boundary pixel point, and M represents the number of boundary pixel points forming the line segment.
In one embodiment, the similarity between two dividing lines to be determined satisfies the expression:

D = exp(−(Δα + Δβ + Δρ + |Δx|/l + |Δy|/w + |Δz|/h))

where D represents the similarity between the two dividing lines to be determined; Δα represents the difference of their horizontal angles in the polar coordinate system; Δβ represents the difference of their vertical angles in the polar coordinate system; Δρ represents the difference of their distances in the polar coordinate system; Δx, Δy and Δz represent the differences of the x-, y- and z-coordinates of their midpoints; exp() represents the exponential function with the natural constant as base; and l, w and h represent the length, width and height, respectively, of the three-dimensional model of the planning design drawing.
In one embodiment, the obtaining a plurality of dividing lines from all dividing lines to be determined according to the similarity between every two dividing lines to be determined includes:
For each of the A dividing lines to be determined, where A is the total number of dividing lines to be determined, take as its comprehensive similarity the number of other dividing lines to be determined whose similarity with it is larger than a preset third threshold; among the two dividing lines to be determined with the largest mutual similarity, remove the one with the larger comprehensive similarity, leaving A-1 dividing lines to be determined;
repeat the same step on the remaining A-1 dividing lines to be determined to leave A-2, then on the remaining A-2, and so on;
stop iterating when the largest similarity between any two of the remaining A-a dividing lines to be determined is smaller than a preset fourth threshold, and take the remaining A-a dividing lines to be determined as the dividing lines.
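The iterative screening described above can be sketched as follows. This is a minimal illustration only: the similarity function, the thresholds and the line identifiers in the example are assumptions for demonstration, not values from the patent.

```python
def screen_dividing_lines(candidates, sim, third_threshold, fourth_threshold):
    """Iteratively thin out the dividing lines to be determined.

    `candidates` is a list of line identifiers and `sim(a, b)` their
    pairwise similarity. While the most similar remaining pair exceeds
    the fourth threshold, drop from that pair the line with the larger
    comprehensive similarity, i.e. its count of other remaining lines
    whose similarity to it exceeds the third threshold.
    """
    lines = list(candidates)
    while len(lines) > 1:
        # Most similar pair among the remaining lines.
        pair = max(
            ((a, b) for i, a in enumerate(lines) for b in lines[i + 1:]),
            key=lambda p: sim(*p),
        )
        if sim(*pair) < fourth_threshold:
            break  # remaining lines are mutually dissimilar enough

        def comprehensive(x):
            return sum(1 for y in lines if y != x and sim(x, y) > third_threshold)

        lines.remove(max(pair, key=comprehensive))
    return lines
```

For example, if lines 'a' and 'b' and lines 'a' and 'c' are highly similar while 'b' and 'c' are not, 'a' has the larger comprehensive similarity and is removed first.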
In one embodiment, the representative color feature of each three-dimensional block refers to a mean value of color rendering information of all three-dimensional pixel points in each three-dimensional block.
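The representative color feature of a block, as defined above, is a simple mean. A sketch, assuming color rendering information is stored as (cx, cy) chromaticity pairs (an assumed representation):

```python
def representative_color(block_colors):
    """Representative color feature of a three-dimensional block: the mean
    of the color rendering information of all voxels in the block.

    `block_colors` is a non-empty list of (cx, cy) chromaticity pairs.
    """
    n = len(block_colors)
    return (sum(c[0] for c in block_colors) / n,
            sum(c[1] for c in block_colors) / n)
```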
The invention has the following beneficial effects. All three-dimensional pixel points are divided into a number of three-dimensional blocks by the dividing lines; the spatial position information of the three-dimensional pixel points at the two endpoints of every dividing line, together with the representative color feature of every three-dimensional block, is stored as preloaded rendering information; and when the planning design drawing is displayed, the three-dimensional model is first rendered coarsely from the stored preloaded rendering information. In this way, the very large number of three-dimensional pixel points that a complex three-dimensional model would otherwise require rendering is reduced to a small number of three-dimensional blocks, which improves the efficiency of the rendering method, presents the rendering effect more smoothly to the user, and provides a better user experience.
Furthermore, the invention stores the color rendering information of all three-dimensional pixel points on the three-dimensional model of the planning design drawing as complete rendering information; when the user retrieves a point of interest, detail rendering is performed from the stored complete rendering information, so that the rendering effect of the three-dimensional model is presented more smoothly and the user experience is improved.
Further, the probability that each three-dimensional pixel point belongs to a region boundary is determined from the overlap between the edge pixel points of the representative regions of the three-dimensional pixel points; straight-line detection is performed on all boundary pixel points, and the preference of each resulting line segment as a dividing line is determined from the probabilities that the three-dimensional pixel points on it belong to a region boundary; line segments whose preference exceeds a preset second threshold become dividing lines to be determined, from which a number of dividing lines is obtained according to the pairwise similarity between them; and the dividing lines partition all three-dimensional pixel points into a number of three-dimensional blocks. Because the dividing lines are determined from the distribution of similarly colored three-dimensional pixel points around each pixel point, pixel points of similar color are grouped into the same block, so that the partition result better matches the actual planning design drawing and provides a better user experience.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. In the drawings, embodiments of the invention are illustrated by way of example and not by way of limitation, and like reference numerals refer to similar or corresponding parts and in which:
FIG. 1 schematically illustrates a flow chart of a method for efficiently rendering three-dimensional model data based on POI retrieval in the present invention;
fig. 2 schematically shows a neighborhood diagram in the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The embodiment of the invention discloses a three-dimensional model data efficient rendering method based on POI retrieval, which comprises the following steps of S1-S6 with reference to FIG. 1:
s1, acquiring all three-dimensional pixel points on a three-dimensional model of the planning design drawing.
Specifically, the three-dimensional model of the planning and design drawing is provided with a plurality of three-dimensional pixel points, and each three-dimensional pixel point is provided with space position information and color rendering information.
The spatial position information refers to the coordinates of the three-dimensional pixel point, namely its coordinates on the x-axis, y-axis and z-axis of a Cartesian coordinate system.
The color rendering information corresponds to a color point in the CIE chromaticity diagram and is used for rendering the three-dimensional pixel point.
It should be noted that, the CIE chromaticity diagram is a color representation, which is a known technology and will not be described herein.
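For illustration, a three-dimensional pixel point carrying spatial position information and a chromaticity point could be sketched as the following small structure; the class and field names are assumptions made for the example, not part of the patent:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Voxel:
    # Spatial position information: Cartesian x, y, z coordinates.
    x: int
    y: int
    z: int
    # Color rendering information: a color point in the CIE chromaticity
    # diagram, here stored as assumed (cx, cy) chromaticity coordinates.
    cx: float
    cy: float
```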
S2, determining the representative area of each three-dimensional pixel point by an area growth method according to the color rendering information and the spatial position information of each three-dimensional pixel point.
In order to divide all three-dimensional pixel points into a plurality of three-dimensional blocks with similar color distribution, the invention determines the representative area of each three-dimensional pixel point by an area growth method according to the color rendering information and the spatial position information of each three-dimensional pixel point.
Specifically, each three-dimensional pixel point is taken as a seed point; the three-dimensional pixel points in the neighborhood of the seed point that meet the merging condition are merged into the growing region represented by the seed point, and the merged points act as new seed points for further merging, until no new three-dimensional pixel point meets the merging condition; the resulting growing region is taken as the representative region of the original three-dimensional pixel point.
The size of the neighborhood is 3 × 3 × 3, which includes the 26 voxels adjacent to the central voxel. Fig. 2 shows a neighborhood schematic in which the black dot represents the central voxel and the white dots represent the voxels adjacent to it.
The merging condition is that the color rendering information of the three-dimensional pixel point lies within the MacAdam ellipse corresponding to the color rendering information of the seed point.
The MacAdam ellipse corresponding to a piece of color rendering information is the MacAdam ellipse of the color point corresponding to that information in the CIE chromaticity diagram.
The chromaticity diagram and the MacAdam ellipse are well-known techniques and will not be described in detail herein.
It should be noted that a MacAdam ellipse contains color points that the ordinary human eye cannot distinguish; this is a known technique and will not be described herein.
The three-dimensional pixel points located on the boundary of the representative region of each three-dimensional pixel point are taken as the edge pixel points of that representative region.
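The region growing step can be sketched as follows. Note the simplifying assumption: where the patent merges a neighbor whose color falls inside the MacAdam ellipse of the seed color, this sketch substitutes a plain Euclidean chromaticity-distance threshold, since evaluating a true MacAdam ellipse requires tabulated colorimetric data.

```python
from collections import deque


def grow_region(seed, voxels, color, just_noticeable=0.004):
    """Region growing from `seed` over a set of voxel coordinates.

    `voxels` is a set of (x, y, z) tuples; `color` maps each voxel to an
    assumed (cx, cy) chromaticity pair. Merged voxels act as new seed
    points until no neighbor meets the merging condition.
    """
    def mergeable(a, b):
        # Stand-in for the MacAdam-ellipse test: Euclidean chromaticity
        # distance below an assumed just-noticeable threshold.
        (ax, ay), (bx, by) = color[a], color[b]
        return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 <= just_noticeable

    region = {seed}
    queue = deque([seed])
    while queue:
        x, y, z = p = queue.popleft()
        # 3 x 3 x 3 neighborhood: the 26 voxels adjacent to p.
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    if dx == dy == dz == 0:
                        continue
                    q = (x + dx, y + dy, z + dz)
                    if q in voxels and q not in region and mergeable(p, q):
                        region.add(q)
                        queue.append(q)  # merged voxel becomes a new seed
    return region
```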
And S3, determining the probability that each three-dimensional pixel belongs to the boundary of the region according to the overlapping condition of the edge pixel points of the representative region of each three-dimensional pixel.
Specifically, the probability that a voxel belongs to a region boundary satisfies the expression:

P_k = n_k / max_i(n_i)

where P_k represents the probability that the kth three-dimensional pixel point belongs to a region boundary; k represents the sequence number of the three-dimensional pixel point and takes integer values in [1, N]; N represents the number of all three-dimensional pixel points; n_k represents the number of three-dimensional pixel points, among all three-dimensional pixel points, whose representative region's edge pixel points include the kth three-dimensional pixel point; n_i represents the corresponding number for the ith three-dimensional pixel point, i being the sequence number of a three-dimensional pixel point; and max() represents the maximum function.
The more representative regions whose edge pixel points include the kth three-dimensional pixel point, the more region boundaries overlap at that point and the better the kth point distinguishes the representative regions of different three-dimensional pixel points; accordingly, the larger the probability that the kth three-dimensional pixel point belongs to a region boundary.
S4, carrying out straight line detection on all boundary pixel points to obtain a plurality of line segments; and determining the preference of each line segment as a dividing line according to the probability that all the three-dimensional pixel points on each line segment belong to the region boundary.
Taking three-dimensional pixel points with probability of belonging to the regional boundary larger than a preset first threshold value as boundary pixel points; and taking the three-dimensional pixel points with the probability of belonging to the regional boundary smaller than or equal to a preset first threshold value as non-boundary pixel points.
The specific value of the first threshold value can be set according to the actual application scene and the requirement, and the first threshold value is set to be 0.5.
And carrying out straight line detection on all boundary pixel points through a Hough transform straight line detection algorithm to obtain a plurality of straight lines.
It should be noted that the detected straight lines may contain non-boundary pixel points in addition to boundary pixel points, so the boundary pixel points on a straight line are not necessarily continuous; the invention therefore divides each straight line into a plurality of line segments according to the non-boundary pixel points on it.
Specifically, each straight line is divided into a plurality of adjacent line segments according to boundary pixel points and non-boundary pixel points on the straight line, two endpoints of the line segments are required to be boundary pixel points, the number of continuous non-boundary pixel points on the line segments is smaller than a number threshold, meanwhile, no boundary pixel points exist between the two adjacent line segments, and the number of non-boundary pixel points between the two adjacent line segments is not smaller than the number threshold; the method for acquiring the non-boundary pixel point between the two adjacent line segments comprises the following steps: two endpoints of a first line segment in two adjacent line segments are respectively marked as an endpoint R1 and an endpoint R2, two endpoints of a second line segment are respectively marked as an endpoint W1 and an endpoint W2, the distance between the endpoint R1 and the endpoint W1, the distance between the endpoint R1 and the endpoint W2, the distance between the endpoint R2 and the endpoint W1 and the distance between the endpoint R2 and the endpoint W2 are respectively calculated, and all non-boundary pixel points between two endpoints corresponding to the minimum value in the four distances are used as non-boundary pixel points between the two adjacent line segments.
The specific value of the quantity threshold value can be set according to the actual application scene and the requirement, and the quantity threshold value is set to be 2.
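The segment-splitting rule above can be sketched as follows: walking along a detected straight line, a run of at least the number threshold of consecutive non-boundary pixel points closes the current segment, shorter gaps are tolerated inside a segment, and every segment begins and ends on a boundary pixel point. The index-based representation is an assumption for the example.

```python
def split_line(flags, count_threshold=2):
    """Split a detected straight line into line segments.

    `flags` lists, in order along the line, whether each pixel point is a
    boundary pixel point. Returns (start, end) index pairs; both indices
    fall on boundary pixel points.
    """
    segments, start, gap, last_boundary = [], None, 0, None
    for i, is_boundary in enumerate(flags):
        if is_boundary:
            if start is None:
                start = i             # a segment must start on a boundary pixel
            last_boundary = i
            gap = 0
        else:
            gap += 1
            # A gap of `count_threshold` or more non-boundary pixels
            # separates two adjacent segments.
            if start is not None and gap >= count_threshold:
                segments.append((start, last_boundary))
                start = None
    if start is not None:
        segments.append((start, last_boundary))
    return segments
```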
In Hough-transform straight-line detection, the three-dimensional pixel points need to be converted from the three-dimensional rectangular coordinate system to a polar coordinate system, in which the spatial position information of each three-dimensional pixel point is (α, β, ρ), where α represents the horizontal angle, β represents the vertical angle, and ρ represents the distance.
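The rectangular-to-polar conversion can be sketched as a standard spherical-coordinate transform; the exact angle conventions (α measured in the horizontal plane, β from it) are assumptions, as the patent does not specify them.

```python
import math


def to_polar(x, y, z):
    """Cartesian (x, y, z) -> (horizontal angle, vertical angle, distance)."""
    rho = math.sqrt(x * x + y * y + z * z)       # distance from the origin
    alpha = math.atan2(y, x)                     # horizontal angle
    beta = math.atan2(z, math.hypot(x, y))       # vertical (elevation) angle
    return alpha, beta, rho
```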
The preference of each line segment as a dividing line satisfies the expression:

Y = Σ_{j=1}^{M} p_j

where Y represents the preference of the line segment as a dividing line, p_j represents the probability that the jth boundary pixel point forming the line segment belongs to a region boundary, j represents the sequence number of the boundary pixel point, and M represents the number of boundary pixel points forming the line segment.
When judging whether a line segment should serve as a dividing line: the more boundary pixel points the segment contains, and the larger their probabilities of belonging to a region boundary, the more likely the segment is an edge of the representative regions of many three-dimensional pixel points. Therefore, the larger the sum of the probabilities that the boundary pixel points forming the segment belong to a region boundary, the larger the preference Y of the segment as a dividing line.
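Since Y is simply the sum of the boundary probabilities of a segment's pixels, selecting the dividing lines to be determined against the second threshold (the text later sets it to 20) reduces to a one-line filter; the dictionary representation is an assumption for the example:

```python
def candidate_dividing_lines(segments, second_threshold=20):
    """Keep segments whose preference Y exceeds the second threshold.

    `segments` maps a segment id to the list of region-boundary
    probabilities of the boundary pixel points forming it; Y is their sum.
    """
    return [sid for sid, probs in segments.items()
            if sum(probs) > second_threshold]
```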
S5, obtaining dividing lines to be determined from all the line segments according to the preference of each line segment as the dividing line, and obtaining a plurality of dividing lines from all the dividing lines to be determined according to the similarity between every two dividing lines to be determined.
And taking a line segment with the preference larger than a preset second threshold value as a dividing line to be determined.
The specific value of the second threshold value can be set according to the actual application scene and the requirement, and the second threshold value is set to 20.
The similarity between every two dividing lines to be determined is calculated, and the similarity between two dividing lines to be determined satisfies the expression:

D = exp(−(Δα + Δβ + Δρ + |Δx|/l + |Δy|/w + |Δz|/h))

where D represents the similarity between the two dividing lines to be determined; Δα represents the difference of their horizontal angles in the polar coordinate system; Δβ represents the difference of their vertical angles in the polar coordinate system; Δρ represents the difference of their distances in the polar coordinate system; Δx, Δy and Δz represent the differences of the x-, y- and z-coordinates of their midpoints; exp() represents the exponential function with the natural constant as base; and l, w and h represent the length, width and height, respectively, of the three-dimensional model of the planning design drawing.
It should be noted that, for two dividing lines to be determined, the smaller the differences of their spatial position information in the polar coordinate system (horizontal angle, vertical angle and distance), the more similar they are, and the larger the similarity D between them is. The purpose of determining the similarity between dividing lines to be determined is to screen, from all dividing lines to be determined, those that differ from the others according to their mutual similarity. Since a dividing line to be determined is in essence a line segment, two dividing lines to be determined may belong to the same straight line. Dividing lines to be determined that belong to the same straight line are located at different positions in the three-dimensional model, so both are genuine dividing lines of regions; however, the differences of their spatial position information in the polar coordinate system are 0, so the similarity D between them is large, and screening by that similarity alone would make the division result inaccurate. Therefore, the differences of the midpoint coordinates of the two dividing lines to be determined are also used in determining their similarity: the smaller the differences of the midpoint coordinates, the more similar the two dividing lines to be determined are, and the larger the similarity D between them is.
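Based on the stated behavior of D (smaller differences of the polar parameters and of the normalized midpoint coordinates give larger similarity), a minimal Python sketch follows; the names, data representation and exact form of the exponent are assumptions, not the patent's implementation:

```python
import math

def similarity(l1, l2, size):
    """Similarity D between two candidate dividing lines.

    Each line is given as (horizontal angle, vertical angle, distance,
    midpoint), the first three in the polar coordinate system; `size` is
    the (length, width, height) of the model, used to normalize the
    midpoint differences. Smaller differences -> D closer to 1.
    """
    (a1, b1, r1, m1), (a2, b2, r2, m2) = l1, l2
    l, w, h = size
    d = (abs(a1 - a2) + abs(b1 - b2) + abs(r1 - r2)
         + abs(m1[0] - m2[0]) / l
         + abs(m1[1] - m2[1]) / w
         + abs(m1[2] - m2[2]) / h)
    return math.exp(-d)

# Identical lines have similarity 1; collinear segments at different
# positions (same polar parameters, different midpoints) score lower,
# which is exactly why the midpoint terms are included.
line_a = (0.0, 0.0, 5.0, (10.0, 10.0, 10.0))
line_b = (0.0, 0.0, 5.0, (50.0, 10.0, 10.0))
assert similarity(line_a, line_a, (100, 100, 100)) == 1.0
assert similarity(line_a, line_b, (100, 100, 100)) < 1.0
```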
According to the similarity between every two dividing lines to be determined, a plurality of dividing lines are obtained from all the dividing lines to be determined, and the method comprises the following steps:
for any one of all the dividing lines to be determined, the number of dividing lines to be determined whose similarity to it is larger than a preset third threshold is taken as its comprehensive similarity; of the two dividing lines to be determined with the largest mutual similarity among all the dividing lines to be determined, the one with the larger comprehensive similarity is removed, wherein A represents the number of all dividing lines to be determined;
for any one of the remaining A-1 dividing lines to be determined, the number of dividing lines to be determined whose similarity to it is larger than the preset third threshold is taken as its comprehensive similarity; of the two dividing lines to be determined with the largest mutual similarity among the remaining A-1 dividing lines to be determined, the one with the larger comprehensive similarity is removed;
for any one of the remaining A-2 dividing lines to be determined, the number of dividing lines to be determined whose similarity to it is larger than the preset third threshold is taken as its comprehensive similarity; of the two dividing lines to be determined with the largest mutual similarity among the remaining A-2 dividing lines to be determined, the one with the larger comprehensive similarity is removed;
and so on; iteration stops when the maximum similarity between every two of the remaining A-a dividing lines to be determined is smaller than a preset fourth threshold, and the remaining A-a dividing lines to be determined are taken as the dividing lines, wherein a represents the number of removal iterations performed.
Specific values of the third threshold and the fourth threshold can be set according to actual application scenes and requirements, and the invention sets the third threshold to 0.8 and the fourth threshold to 0.4.
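The iterative screening above can be sketched as follows; the data representation, the tie-breaking rule when two candidates have equal comprehensive similarity, and all names are assumptions for illustration:

```python
import math

def screen_dividing_lines(lines, sim, third=0.8, fourth=0.4):
    """Greedy screening of candidate dividing lines.

    `lines` is a list of candidates; `sim(a, b)` returns their
    similarity D. Repeatedly: find the most similar pair among the
    remaining candidates; of the two, remove the one whose comprehensive
    similarity (number of other candidates with similarity > third
    threshold) is larger (ties remove the first of the pair). Stop when
    the largest pairwise similarity falls below the fourth threshold.
    """
    lines = list(lines)
    while len(lines) > 1:
        pairs = [(sim(a, b), a, b) for i, a in enumerate(lines)
                 for b in lines[i + 1:]]
        best, a, b = max(pairs, key=lambda p: p[0])
        if best < fourth:
            break
        def comprehensive(x):
            return sum(1 for y in lines if y is not x and sim(x, y) > third)
        lines.remove(a if comprehensive(a) >= comprehensive(b) else b)
    return lines

# 1-D stand-ins for line parameters: two near-duplicates collapse to
# one, the clearly distinct candidate survives.
kept = screen_dividing_lines([0.0, 0.01, 5.0],
                             lambda a, b: math.exp(-abs(a - b)))
assert kept == [0.01, 5.0]
```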
According to the overlap between the edge pixel points of the representative regions of the three-dimensional pixel points, the probability that each three-dimensional pixel point belongs to a region boundary is determined; straight-line detection is performed on all boundary pixel points, and the preference of each obtained line segment as a dividing line is determined according to the probabilities that all three-dimensional pixel points on the line segment belong to a region boundary; line segments whose preference is larger than the preset second threshold are taken as dividing lines to be determined; a plurality of dividing lines are obtained from all dividing lines to be determined according to the similarity between every two of them; and all three-dimensional pixel points are divided into a plurality of three-dimensional blocks by the dividing lines. Because the dividing lines are determined based on the distribution of similarly colored three-dimensional pixel points around each three-dimensional pixel point, three-dimensional pixel points with similar colors can be divided into the same three-dimensional block, so that the division into three-dimensional blocks better matches the actual planning design drawing and provides a better user experience.
S6, dividing all three-dimensional pixel points into a plurality of three-dimensional blocks through dividing lines, taking the space position information of the three-dimensional pixel points corresponding to two end points of all dividing lines and the representative color characteristics of all three-dimensional blocks as preloaded rendering information and storing, and taking the color rendering information of all three-dimensional pixel points on a three-dimensional model of a planning design drawing as complete rendering information and storing; when the planning and design diagram is displayed, the three-dimensional model is subjected to primary rendering according to the stored preloaded rendering information, and detail rendering is performed through the stored complete rendering information.
According to the dividing line, all three-dimensional pixel points are divided into a plurality of areas, and each area is used as a three-dimensional block.
And taking the average value of the color rendering information of all the three-dimensional pixel points in each three-dimensional block as the representative color characteristic of each three-dimensional block.
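As a minimal sketch of this step (pure Python, hypothetical names), the representative color feature is the per-channel mean of the voxel colors in the block:

```python
def representative_color(voxel_colors):
    """Representative color feature of a three-dimensional block: the
    per-channel mean of the color rendering information (e.g. RGB
    tuples) of all three-dimensional pixel points in the block."""
    n = len(voxel_colors)
    return tuple(sum(c[i] for c in voxel_colors) / n
                 for i in range(len(voxel_colors[0])))

block = [(200, 10, 10), (210, 20, 10), (190, 0, 10)]
assert representative_color(block) == (200.0, 10.0, 10.0)
```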
Taking the space position information of all three-dimensional pixel points on the three-dimensional model of the planning design drawing as basic information and storing the basic information; three-dimensional pixel points corresponding to two endpoints of all dividing lines and representative color features of all three-dimensional blocks are used as preloaded rendering information and stored; and taking the color rendering information of all three-dimensional pixel points on the three-dimensional model of the planning design drawing as complete rendering information and storing the complete rendering information.
When the planning and design drawing is displayed, firstly, a three-dimensional model of the planning and design drawing is constructed according to the stored basic information; primary rendering is then performed on the three-dimensional model according to the stored preloaded rendering information; when the user performs a point-of-interest (Point of Interest, POI) search, the point of interest is rendered in detail through the stored complete rendering information.
All three-dimensional pixel points are divided into a plurality of three-dimensional blocks by the dividing lines; the spatial position information of the three-dimensional pixel points corresponding to the two endpoints of all dividing lines, together with the representative color features of all three-dimensional blocks, is stored as preloaded rendering information; when the planning design drawing is displayed, primary rendering is performed on the three-dimensional model according to the stored preloaded rendering information. In this way, the large number of three-dimensional pixel points that would need to be rendered in a complex three-dimensional model is reduced to a small number of three-dimensional blocks, which improves the efficiency of the rendering method, allows the rendering effect of the three-dimensional model to be presented more smoothly during use, and provides a better user experience.
According to the invention, the color rendering information of all three-dimensional pixel points on the three-dimensional model of the planning design drawing is stored as complete rendering information; when the user searches for a point of interest, detail rendering is carried out through the stored complete rendering information, so that the rendering effect of the three-dimensional model is presented more smoothly and the user experience is improved.
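The overall storage and two-stage rendering flow can be sketched as follows; all names and data structures here are illustrative assumptions, not the patent's implementation:

```python
def build_render_store(blocks, voxels):
    """Split the model data into the two stores described in S6.

    blocks: {block_id: {"endpoints": [...], "color": (r, g, b)}} —
            dividing-line endpoints and representative block colors.
    voxels: {voxel_id: {"pos": (x, y, z), "color": (r, g, b)}} —
            full per-voxel color rendering information.
    """
    preload = {bid: (b["endpoints"], b["color"]) for bid, b in blocks.items()}
    complete = {vid: v["color"] for vid, v in voxels.items()}
    return preload, complete

def render(preload, complete, poi_voxels=None):
    """First pass paints each block with its representative color; a
    POI query then refines only the queried voxels with full colors."""
    frame = {bid: color for bid, (_, color) in preload.items()}
    detail = {vid: complete[vid] for vid in (poi_voxels or [])}
    return frame, detail

blocks = {"b1": {"endpoints": [(0, 0, 0), (8, 0, 0)], "color": (255, 0, 0)}}
voxels = {"v1": {"pos": (0, 0, 0), "color": (250, 5, 5)}}
preload, complete = build_render_store(blocks, voxels)
frame, detail = render(preload, complete, poi_voxels=["v1"])
assert frame == {"b1": (255, 0, 0)}
assert detail == {"v1": (250, 5, 5)}
```

The design point is that `preload` is small (one entry per block) and can be shipped and rendered immediately, while `complete` (one entry per voxel) is only consulted for the voxels a POI query actually touches.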
In the description of the present specification, the meaning of "a plurality", "a number" or "a plurality" is at least two, for example, two, three or more, etc., unless explicitly defined otherwise.
While various embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Many modifications, changes, and substitutions will now occur to those skilled in the art without departing from the spirit and scope of the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention.
Claims (8)
1. An efficient rendering method of three-dimensional model data based on POI retrieval, characterized by comprising the following steps:
acquiring all three-dimensional pixel points on a three-dimensional model of a planning design drawing, wherein the three-dimensional pixel points have space position information and color rendering information;
determining a representative region of each three-dimensional pixel point by a region growing method according to the color rendering information and the spatial position information of each three-dimensional pixel point;
determining the probability that each three-dimensional pixel belongs to the region boundary according to the overlapping condition between the edge pixel points of the representative region of each three-dimensional pixel;
taking three-dimensional pixel points with probability of belonging to the regional boundary larger than a preset first threshold value as boundary pixel points; performing straight line detection on all boundary pixel points to obtain a plurality of line segments; determining the preference of each line segment as a dividing line according to the probability that all three-dimensional pixel points on each line segment belong to the region boundary;
taking a line segment with the preference larger than a preset second threshold value as a dividing line to be determined; obtaining a plurality of dividing lines from all dividing lines to be determined according to the similarity between every two dividing lines to be determined;
dividing all three-dimensional pixel points into a plurality of three-dimensional blocks through dividing lines, taking the spatial position information of the three-dimensional pixel points corresponding to two endpoints of all dividing lines and the representative color characteristics of all three-dimensional blocks as preloaded rendering information, storing the preloaded rendering information, and taking the color rendering information of all three-dimensional pixel points on a three-dimensional model of a planning design drawing as complete rendering information, and storing the complete rendering information; when the planning and design diagram is displayed, the three-dimensional model is subjected to primary rendering according to the stored preloaded rendering information, and detail rendering is performed through the stored complete rendering information.
2. The efficient rendering method of three-dimensional model data based on POI retrieval as defined in claim 1, wherein the determining the representative region of each three-dimensional pixel point by the region growing method comprises:
combining the three-dimensional pixel points in the neighborhood of the seed point, which accords with the combination condition, into a growth area represented by the seed point, and continuing to combine the three-dimensional pixel points as new seed points until the new three-dimensional pixel points which accord with the combination condition do not exist, so as to obtain the growth area represented by the seed point, and taking the growth area as a representative area of the three-dimensional pixel points;
the size of the neighborhood is 3 × 3;
the merging condition is that the color rendering information of the three-dimensional pixel point lies within the MacAdam ellipse corresponding to the color rendering information of the seed point.
3. The efficient rendering method of three-dimensional model data based on POI retrieval as defined in claim 1, wherein the probability that each three-dimensional pixel belongs to a region boundary satisfies a relation:
P_k = n_k / max(n_1, n_2, …, n_N)

wherein P_k represents the probability that the kth three-dimensional pixel point belongs to a region boundary; k represents the sequence number of the three-dimensional pixel point and takes integer values in the range [1, N]; N represents the number of all three-dimensional pixel points; n_k represents the number of representative regions, among the representative regions of all three-dimensional pixel points, whose edge pixel points include the kth three-dimensional pixel point; n_i represents the number of representative regions whose edge pixel points include the ith three-dimensional pixel point; i represents the sequence number of the three-dimensional pixel point; and max() represents a maximum function.
4. The efficient rendering method of three-dimensional model data based on POI retrieval as defined in claim 1, wherein the performing straight line detection on all boundary pixels to obtain a plurality of line segments comprises:
performing line detection on all boundary pixel points through a Hough transformation line detection algorithm to obtain a plurality of lines;
dividing each straight line into a plurality of adjacent line segments according to boundary pixel points and non-boundary pixel points on the straight line, wherein two endpoints of the line segments are required to be boundary pixel points, the number of continuous non-boundary pixel points on the line segments is smaller than a number threshold value, meanwhile, no boundary pixel points exist between the two adjacent line segments, and the number of non-boundary pixel points between the two adjacent line segments is not smaller than the number threshold value;
the method for acquiring the non-boundary pixel point between the two adjacent line segments comprises the following steps: two endpoints of a first line segment in two adjacent line segments are respectively marked as an endpoint R1 and an endpoint R2, two endpoints of a second line segment are respectively marked as an endpoint W1 and an endpoint W2, the distance between the endpoint R1 and the endpoint W1, the distance between the endpoint R1 and the endpoint W2, the distance between the endpoint R2 and the endpoint W1 and the distance between the endpoint R2 and the endpoint W2 are respectively calculated, and all non-boundary pixel points between two endpoints corresponding to the minimum value in the four distances are used as non-boundary pixel points between the two adjacent line segments;
the non-boundary pixel points refer to three-dimensional pixel points with probability of belonging to the region boundary smaller than or equal to a preset first threshold value.
5. The efficient rendering method of three-dimensional model data based on POI retrieval as claimed in claim 1, wherein the preference degree of each line segment as a dividing line satisfies the expression:
Y = p_1 + p_2 + … + p_M

wherein Y represents the preference of the line segment as a dividing line; p_j represents the probability that the jth boundary pixel point constituting the line segment belongs to a region boundary; j represents the sequence number of the boundary pixel point; and M represents the number of boundary pixel points constituting the line segment.
6. The efficient rendering method of three-dimensional model data based on POI retrieval as defined in claim 1, wherein the similarity between the two dividing lines to be determined satisfies the expression:
D = exp(−(|Δα| + |Δβ| + |Δρ| + |Δx|/l + |Δy|/w + |Δz|/h))

wherein D represents the similarity between the two dividing lines to be determined; Δα represents the difference of the horizontal angles of the two dividing lines to be determined in the polar coordinate system; Δβ represents the difference of the vertical angles of the two dividing lines to be determined in the polar coordinate system; Δρ represents the difference of the distances of the two dividing lines to be determined in the polar coordinate system; Δx, Δy and Δz represent the differences of the abscissas, the ordinates and the vertical coordinates, respectively, of the midpoints of the two dividing lines to be determined; exp() represents an exponential function with the natural constant as its base; and l, w and h represent the length, width and height, respectively, of the three-dimensional model of the planning design drawing.
7. The efficient rendering method of three-dimensional model data based on POI retrieval as defined in claim 6, wherein the obtaining a plurality of dividing lines from all dividing lines to be determined according to the similarity between every two dividing lines to be determined comprises:
for any one of all the dividing lines to be determined, the number of dividing lines to be determined whose similarity to it is larger than a preset third threshold is taken as its comprehensive similarity; of the two dividing lines to be determined with the largest mutual similarity among all the dividing lines to be determined, the one with the larger comprehensive similarity is removed, wherein A represents the number of all dividing lines to be determined;
for any one of the remaining A-1 dividing lines to be determined, the number of dividing lines to be determined whose similarity to it is larger than the preset third threshold is taken as its comprehensive similarity; of the two dividing lines to be determined with the largest mutual similarity among the remaining A-1 dividing lines to be determined, the one with the larger comprehensive similarity is removed;
for any one of the remaining A-2 dividing lines to be determined, the number of dividing lines to be determined whose similarity to it is larger than the preset third threshold is taken as its comprehensive similarity; of the two dividing lines to be determined with the largest mutual similarity among the remaining A-2 dividing lines to be determined, the one with the larger comprehensive similarity is removed;
and so on; iteration stops when the maximum similarity between every two of the remaining A-a dividing lines to be determined is smaller than a preset fourth threshold, and the remaining A-a dividing lines to be determined are taken as the dividing lines, wherein a represents the number of removal iterations performed.
8. The efficient rendering method of three-dimensional model data based on POI retrieval as defined in claim 1, wherein the representative color characteristic of each three-dimensional block refers to a mean value of color rendering information of all three-dimensional pixel points in each three-dimensional block.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410278302.9A CN117876555B (en) | 2024-03-12 | 2024-03-12 | Efficient rendering method of three-dimensional model data based on POI retrieval |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117876555A true CN117876555A (en) | 2024-04-12 |
CN117876555B CN117876555B (en) | 2024-05-31 |
Family
ID=90595278
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410278302.9A Active CN117876555B (en) | 2024-03-12 | 2024-03-12 | Efficient rendering method of three-dimensional model data based on POI retrieval |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117876555B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005104042A1 (en) * | 2004-04-20 | 2005-11-03 | The Chinese University Of Hong Kong | Block-based fragment filtration with feasible multi-gpu acceleration for real-time volume rendering on standard pc |
WO2022095714A1 (en) * | 2020-11-09 | 2022-05-12 | 中兴通讯股份有限公司 | Image rendering processing method and apparatus, storage medium, and electronic device |
CN115358919A (en) * | 2022-08-17 | 2022-11-18 | 北京字跳网络技术有限公司 | Image processing method, device, equipment and storage medium |
CN116310056A (en) * | 2023-03-02 | 2023-06-23 | 网易(杭州)网络有限公司 | Rendering method, rendering device, equipment and medium for three-dimensional model |
CN116681860A (en) * | 2023-06-09 | 2023-09-01 | 不鸣科技(杭州)有限公司 | Feature line rendering method and device, electronic equipment and storage medium |
CN116740249A (en) * | 2023-08-15 | 2023-09-12 | 湖南马栏山视频先进技术研究院有限公司 | Distributed three-dimensional scene rendering system |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005104042A1 (en) * | 2004-04-20 | 2005-11-03 | The Chinese University Of Hong Kong | Block-based fragment filtration with feasible multi-gpu acceleration for real-time volume rendering on standard pc |
WO2022095714A1 (en) * | 2020-11-09 | 2022-05-12 | 中兴通讯股份有限公司 | Image rendering processing method and apparatus, storage medium, and electronic device |
CN115358919A (en) * | 2022-08-17 | 2022-11-18 | 北京字跳网络技术有限公司 | Image processing method, device, equipment and storage medium |
CN116310056A (en) * | 2023-03-02 | 2023-06-23 | 网易(杭州)网络有限公司 | Rendering method, rendering device, equipment and medium for three-dimensional model |
CN116681860A (en) * | 2023-06-09 | 2023-09-01 | 不鸣科技(杭州)有限公司 | Feature line rendering method and device, electronic equipment and storage medium |
CN116740249A (en) * | 2023-08-15 | 2023-09-12 | 湖南马栏山视频先进技术研究院有限公司 | Distributed three-dimensional scene rendering system |
Non-Patent Citations (1)
Title |
---|
LIU Zhi; PAN Xiaobin: "Three-dimensional model retrieval method based on angle structure features of rendered images", Computer Science (计算机科学), no. 2, 15 November 2018 (2018-11-15) *
Also Published As
Publication number | Publication date |
---|---|
CN117876555B (en) | 2024-05-31 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||