CN113140022B - Digital mapping method, system and computer readable storage medium - Google Patents

Digital mapping method, system and computer readable storage medium

Info

Publication number: CN113140022B (application number CN202011567550.3A)
Authority: CN (China)
Prior art keywords: data, dimensional, dimensional model, feature, point
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN113140022A
Inventors: 何玉生, 杨江川, 储飞龙
Current Assignee (the listed assignee may be inaccurate): Hangzhou Jinao Information Technology Co ltd
Original Assignee: Hangzhou Jinao Information Technology Co ltd
Application filed by Hangzhou Jinao Information Technology Co ltd; priority to CN202011567550.3A; publication of CN113140022A; application granted; publication of CN113140022B; anticipated expiration tracked


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/20: Drawing from basic elements, e.g. lines or circles
    • G06T 11/203: Drawing of straight lines or curves
    • G06T 11/206: Drawing of charts or graphs
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts


Abstract

The invention discloses a digital mapping method, a digital mapping system and a computer-readable storage medium. The method comprises the following steps: acquiring and displaying two-dimensional image data and three-dimensional model data; determining a region to be drawn, and performing feature extraction on the three-dimensional model data based on that region to obtain the corresponding spatial feature data; and, after projecting the spatial feature data onto the two-dimensional image data, collecting the drawing operations performed by a user on the two-dimensional image data or the three-dimensional model data, generating the corresponding drawing request data, and performing vectorized drawing based on the drawing request data to obtain a vectorized image. By projecting the spatial feature data onto the corresponding positions of the two-dimensional image data, surveyors can perform vectorized drawing with the aid of the three-dimensional model data in a scene where the two-dimensional image data is overlaid with the spatial feature data; combining the convenience of two-dimensional drawing with the intuitiveness of three-dimensional drawing effectively improves mapping efficiency.

Description

Digital mapping method, system and computer readable storage medium
Technical Field
The present invention relates to the field of surveying and mapping, and in particular to a digital mapping method, system and computer-readable storage medium.
Background
When vectorized drawing is performed on a live-action three-dimensional model, the exact positions of building surfaces are usually judged manually and the corresponding line drawings traced by hand; to guarantee drawing precision, the operator must therefore have considerable drawing experience.
During vectorized drawing, a surveyor also has to perform a large number of rotation, zoom-in, zoom-out and pan operations on the live-action three-dimensional model to ensure the drawn nodes fit the building closely, so working efficiency is low.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a digital mapping method, system and computer-readable storage medium for vectorized drawing based on two-dimensional image data and three-dimensional model data. By superimposing spatial feature data on the two-dimensional image data, two-dimensional drawing becomes feasible, the drawing operations are simplified, and mapping efficiency is effectively improved.
To solve the above technical problems, the invention adopts the following technical solution:
a digital mapping method, comprising the steps of:
acquiring and displaying two-dimensional image data and three-dimensional model data;
determining a region to be drawn, and performing feature extraction on the three-dimensional model data based on the region to be drawn to obtain corresponding spatial feature data;
and after the spatial feature data is projected onto the two-dimensional image data, collecting the drawing operations performed by a user on the two-dimensional image data or the three-dimensional model data, generating the corresponding drawing request data, and performing vectorized drawing based on the drawing request data to obtain a vectorized image.
As an implementable embodiment:
drawing an extraction range on the two-dimensional image data to generate a region to be drawn;
acquiring preset height data, or picking a point on the three-dimensional model data and extracting its height, to obtain the corresponding height data;
and performing feature extraction on the three-dimensional model data based on the area to be drawn and the height data to obtain corresponding spatial feature data.
As an implementable manner, the spatial feature data comprises feature point data and/or cutting point data;
the acquisition mode of the cutting point data is as follows:
cutting the three-dimensional model data based on the area to be drawn to obtain a cutting model, wherein the cutting model corresponds to the area to be drawn;
and performing horizontal plane cutting on the cutting model based on the height data, and extracting all intersection points of the horizontal section and the model to obtain cutting point data.
As an implementation manner, the feature point data is obtained by:
after the three-dimensional model data are divided into a plurality of areas, respectively extracting the features of each area to obtain a plurality of sub-feature libraries;
the sub-feature library comprises area coordinate data and a plurality of feature points, wherein the area coordinate data is used for indicating the areas of the feature points distributed in the three-dimensional model data;
extracting a sub-feature library matched with the area to be drawn based on the area coordinate data;
and extracting corresponding feature points from each sub-feature library based on the height data to obtain feature point data.
As an implementable embodiment:
vectorization drawing is carried out on two-dimensional image data or three-dimensional model data based on drawing request data to obtain two-dimensional vector data, and a corresponding vectorization image is generated according to the two-dimensional vector data;
the drawing request data includes reference baseline fitting request data and line construction request data;
the reference baseline fitting request data comprises a fitting area; fitting is performed based on the spatial feature data within the fitting area, and the resulting fit line is used as the reference baseline;
the line construction request data includes construction operation information, construction position information and a specified reference baseline, where the construction operation information indicates whether parallel or perpendicular lines of the reference baseline are to be constructed and the construction position information indicates where the constructed parallel or perpendicular lines are placed.
As one possible implementation:
the drawing request data also comprises graph drawing request data and information configuration request data;
the graphic drawing request data comprises drawing operation information and drawing position information, where the drawing operation information includes surrounding-surface, hollowing-out, merging and segmentation operations, and the drawing position information indicates the position where the drawing operation is performed;
the information configuration request data comprises configuration information and assignment areas, wherein the configuration information is used for indicating attributes of the assignment areas, and the configuration information comprises structure information and floor information.
As an implementable mode, after the two-dimensional image data and the three-dimensional model data are acquired and displayed, the method further comprises a two-dimensional/three-dimensional linkage display step, specifically:
acquiring scene adjustment request data, and adjusting two-dimensional image data and/or three-dimensional model data based on the scene adjustment request data;
the scene adjustment request data comprises adjustment request information, position information and scene information;
the adjustment request information comprises zooming-in, zooming-out, positioning, rotating and translating;
the scene information is used for indicating the adjustment of the two-dimensional image data and/or the three-dimensional model data;
the position information is used for indicating a reference point when the two-dimensional image data or the three-dimensional model data is adjusted.
The invention also provides a digital mapping system, which comprises:
the scene display module is used for acquiring and displaying two-dimensional image data and three-dimensional model data;
the characteristic extraction module is used for determining a region to be drawn, and extracting the characteristics of the three-dimensional model data based on the region to be drawn to obtain corresponding spatial characteristic data;
and the drawing module is used for projecting the spatial characteristic data to the two-dimensional image data, then collecting drawing operation performed by a user on the two-dimensional image data or the three-dimensional model data, generating corresponding drawing request data, and performing vectorization drawing based on the drawing request data to obtain a vectorized image.
As an implementable mode, the feature extraction module includes a region drawing unit, a height acquisition unit, and an extraction unit:
the region drawing unit is used for drawing the extraction range on the two-dimensional image data to generate a region to be drawn;
the height acquisition unit is used for acquiring preset height data, or picking a point on the three-dimensional model data and extracting its height, to obtain the corresponding height data;
the extraction unit is used for extracting the characteristics of the three-dimensional model data based on the area to be drawn and the height data to obtain corresponding spatial characteristic data.
A computer-readable storage medium, storing a computer program which, when executed by a processor, performs the steps of any of the methods described above.
Owing to the adoption of the above technical solution, the invention has the following significant technical effects:
by determining the region to be drawn, extracting from the three-dimensional model data the spatial feature data of the terrain and feature elements corresponding to that region, and projecting the spatial feature data onto the corresponding positions of the two-dimensional image data, a surveyor can perform vectorized drawing with the aid of the three-dimensional model data in a scene where the two-dimensional image data is overlaid with the spatial feature data, which simplifies the drawing work and improves mapping efficiency.
Drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for their description are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of the digital mapping method of the present invention;
FIG. 2 is a block diagram of a digital mapping system of the present invention.
Detailed Description
The present invention will be further described in detail with reference to the following examples, which are illustrative of the present invention and are not intended to limit the present invention thereto.
Embodiment 1, a digital mapping method, as shown in fig. 1, comprising the steps of:
s100, acquiring and displaying two-dimensional image data and three-dimensional model data;
the two-dimensional image data is orthoimage data which is displayed as a two-dimensional scene;
the three-dimensional model data is live-action three-dimensional model data which is displayed as a three-dimensional scene.
The two-dimensional image data corresponds to the three-dimensional model data.
In this embodiment, the ortho-image data and the live-action three-dimensional model data are imported and displayed in a split screen manner.
S200, determining a region to be drawn, and extracting the characteristics of the three-dimensional model data based on the region to be drawn to obtain corresponding spatial characteristic data;
s300, after the spatial feature data are projected to the two-dimensional image data, collecting drawing operation performed by a user on the two-dimensional image data or the three-dimensional model data to generate corresponding drawing request data, and performing vectorization drawing based on the drawing request data to obtain a vectorized image.
Because buildings have eaves and may be occluded by other buildings or trees, and because the shape of each storey of a multi-storey structure cannot be determined from a two-dimensional scene alone, the usual technical scheme of existing digital mapping is for surveyors to pick points and draw directly on the live-action three-dimensional model; the point-picking process requires constantly rotating the model, and the surveyor must identify the drawing nodes from experience.
In this embodiment, the region to be drawn is determined, only the spatial feature data corresponding to that region is extracted, and the spatial feature data is projected onto the corresponding position of the two-dimensional image data, so that a surveyor can perform vectorized drawing with the aid of the three-dimensional model data in a scene where the two-dimensional image data is overlaid with the spatial feature data; the vectorized image of the region to be drawn can thus be produced without the surveyor continually adjusting the angle, size and orientation of the corresponding three-dimensional model.
Because the spatial feature data is projected onto the corresponding position of the two-dimensional image data, the outline of the region to be drawn is shown to the surveyor intuitively, and the surveyor can draw directly in the two-dimensional scene based on the spatial feature data, which simplifies the traditional three-dimensional drawing mode and improves mapping efficiency. As is well known, drawing in a two-dimensional scene is more convenient than in a three-dimensional one, and the demands on the surveyor are lower, so staffing requirements can be reduced and even inexperienced surveyors can complete the drawing work.
In conclusion, superimposing the spatial feature data corresponding to the three-dimensional model data onto the two-dimensional image data assists surveyors in two-dimensional scene mapping, overcomes the insufficient precision of two-dimensional scene mapping, and makes two-dimensional mapping practical.
To make it more convenient for surveyors to inspect the three-dimensional scene, this embodiment presets the pitch angle of the three-dimensional scene display, i.e., the pitch angle from the camera to the center point of the three-dimensional scene; when the three-dimensional model is displayed, the inclination of the three-dimensional scene is adjusted according to this pitch angle.
Those skilled in the art can set the pitch angle as actually needed so that surveyors can view the three-dimensional scene more intuitively and completely; in this embodiment the pitch angle is 60°.
Further, the specific steps of determining a region to be drawn in step S200, performing feature extraction on the three-dimensional model data based on the region to be drawn, and obtaining corresponding spatial feature data are as follows:
s210, drawing an extraction range on the two-dimensional image data to generate a region to be drawn;
the way of drawing the extraction range may include, for example, the following three ways:
the first method is a two-point method, namely, two points are selected on a two-dimensional scene to serve as opposite angles of a quadrangle, so that a rectangular area to be drawn is generated;
the second method is a three-point method, namely, three points are selected on a two-dimensional scene, and a fourth point is generated to form a quadrangle as a region to be drawn;
the third method is a frame selection method, namely, a mouse is manually dragged to form a quadrilateral drawing area.
A person skilled in the art can design a drawing manner of the extraction range according to actual situations, and the drawing manner is not specifically limited in this embodiment.
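As an illustration of how the two-point and three-point modes reduce to simple rectangle construction, here is a minimal Python sketch; the function names and tuple-based geometry are assumptions for illustration, not part of the patent:

```python
# Hypothetical sketch of the two-point and three-point extraction-range modes.
# Points are (x, y) tuples in two-dimensional scene coordinates.

def two_point_region(p1, p2):
    """Two-point method: p1 and p2 are opposite corners of the rectangle."""
    (x1, y1), (x2, y2) = p1, p2
    return [(min(x1, x2), min(y1, y2)), (max(x1, x2), max(y1, y2))]

def three_point_region(p1, p2, p3):
    """Three-point method: generate the fourth vertex completing the
    parallelogram p1-p2-p3-p4 that encloses the region to be drawn."""
    return [p1, p2, p3, (p1[0] + p3[0] - p2[0], p1[1] + p3[1] - p2[1])]

if __name__ == "__main__":
    print(two_point_region((2, 5), (7, 1)))            # [(2, 1), (7, 5)]
    print(three_point_region((0, 0), (4, 0), (4, 3)))  # fourth vertex (0, 3)
```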
S220, acquiring preset height data, or picking a point on the three-dimensional model data to extract its height as the corresponding height data;
specifically, picking a height on the three-dimensional model data means clicking a point on the model and using the height of that point as the height data.
And S230, performing feature extraction on the three-dimensional model data based on the area to be drawn and the height data to obtain corresponding spatial feature data.
In this embodiment, the region to be drawn is extracted at the corresponding height and the spatial feature data of that region is superimposed onto the corresponding position of the two-dimensional image data; when vectorized drawing is then performed on the two-dimensional image data, the influence of eaves or upper-storey structures can be effectively eliminated.
Further, in step S200, the spatial feature data is feature point data and/or cutting point data;
the feature point data is obtained in the following manner:
after the three-dimensional model data are divided into a plurality of areas, respectively extracting the characteristics of each area to obtain a plurality of sub-characteristic libraries; wherein the sub-feature library corresponds to the segmented regions one to one.
The sub-feature library comprises area coordinate data and a plurality of feature points, wherein the area coordinate data is used for indicating the distribution area of each feature point in the three-dimensional model data;
extracting a sub-feature library matched with the region to be drawn based on the region coordinate data;
and extracting corresponding feature points from each sub-feature library based on the height data to obtain feature point data.
Those skilled in the art can set the rule for segmenting the three-dimensional model data as needed; in this embodiment, the three-dimensional model data is segmented into a plurality of regions based on the hierarchical structure partitioned in its directory.
In this embodiment, semantic segmentation is performed on the three-dimensional model data using deep learning to obtain the corresponding point cloud data, which is used as the feature data; obtaining point cloud data by semantic segmentation is prior art that those skilled in the art can readily reproduce, so it is not detailed here. In the existing drawing scheme, a surveyor draws in the three-dimensional scene based on all feature points; because the volume of feature point data is extremely large, querying and extracting the feature points inside a region to be drawn is time-consuming. This embodiment therefore partitions the feature points into multiple sub-feature libraries, which effectively improves the efficiency of querying and extracting the relevant feature points.
The distribution of the extracted feature point data follows the outline of the corresponding building, so the feature point data can be fitted and a vectorized graphic generated automatically from the fitting result. The feature point data can also be used to fit feature lines drawn by the surveyor and thereby improve drawing accuracy: for example, when a surveyor connects two feature points to draw a feature line, the feature points whose perpendicular distance to that line is within a preset distance threshold can be extracted and fitted, and the resulting feature-point fit line used as the drawn vector data to update the vectorized image.
Note: fitting a line to a number of known data points is prior art and is not described in detail in this embodiment.
The acquisition mode of the cutting point data is as follows:
cutting the three-dimensional model data based on the area to be drawn to obtain a cutting model, wherein the cutting model corresponds to the area to be drawn;
and performing horizontal plane cutting on the cutting model based on the height data, and extracting all intersection points of the horizontal section and the model to obtain cutting point data.
In this embodiment, the terrain and feature elements inside the region to be drawn, such as a particular building, are cut out of the three-dimensional model data based on that region;
a horizontal section is then taken through the extracted terrain and feature elements at the given height, the corresponding cross section is obtained, and the intersection points of the horizontal section with the model are extracted; the intersection points indicate the outline of the cross section. Extracting the intersection points of a cross section is prior art, so its details are omitted in this embodiment.
Because the volume of feature point data is extremely large, automatically generating a vectorized image from feature point data involves long loading and fitting times; the number of intersection points in the same region to be drawn is far smaller than the number of feature points, so the intersection-point design greatly reduces the data volume needed for vectorized drawing and further improves mapping efficiency.
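Taking a horizontal section through a mesh and collecting the edge crossings is, as noted above, standard; the sketch below shows one way to do it, assuming (purely for illustration) that the clipped model is given as a list of triangles with (x, y, z) vertices:

```python
def slice_triangle(tri, h):
    """Intersect one triangle (three (x, y, z) vertices) with the plane
    z = h; returns the 0, 1 or 2 points where its edges cross the plane
    (vertices lying exactly on the plane are ignored in this sketch)."""
    crossings = []
    for i in range(3):
        (x1, y1, z1), (x2, y2, z2) = tri[i], tri[(i + 1) % 3]
        if (z1 - h) * (z2 - h) < 0:           # this edge crosses the plane
            t = (h - z1) / (z2 - z1)
            crossings.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1), h))
    return crossings

def cutting_points(mesh, h):
    """All intersection points of the horizontal section z = h with the
    (already region-clipped) model, i.e. the cutting point data above."""
    points = []
    for tri in mesh:
        points.extend(slice_triangle(tri, h))
    return points
```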
In actual use, those skilled in the art may adopt the cutting point data alone, the feature point data alone, or the two in combination;
when the spatial feature data contains feature point data, the method further comprises, before the vectorized image is obtained, an optimization step based on the feature point data, specifically:
the feature points whose perpendicular distance to the drawn line is within a preset distance threshold are extracted from the feature point data, the extracted feature points are fitted, and the corresponding vectorized image is generated based on the resulting feature-point fit line.
The drawn lines include, but are not limited to, lines generated by fitting the cutting point data and lines generated from the user's drawing operations.
For example, the drawn line can be repaired intelligently by substituting it for the trend line in the house wall line identification method (CN 2020110487599), and the corresponding vector graphic generated from the repaired vector line.
This embodiment uses the feature point data to refine both automatically generated and manually drawn lines: for inexperienced surveyors it effectively guarantees that the final vectorized image reaches the required precision, and for experienced surveyors it removes the need to redraw repeatedly to guarantee precision, improving drawing efficiency.
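A minimal sketch of this optimization step, assuming 2D points at the extraction height and a simple least-squares line fit (the threshold value and all names are illustrative, not the patent's):

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from 2D point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

def fit_line(points):
    """Ordinary least-squares fit y = k*x + m (assumes the wall is not
    vertical; a production version would fit in a rotated frame)."""
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return k, (sy - k * sx) / n

def refine_drawn_line(a, b, feature_points, threshold=0.1):
    """Replace the drawn segment a-b by a line fitted through the feature
    points whose perpendicular distance to it is within `threshold`, per
    the optimization step above; None when too few points qualify."""
    near = [p for p in feature_points if point_line_distance(p, a, b) <= threshold]
    return fit_line(near) if len(near) >= 2 else None
```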
Further, in step S300, vectorized drawing is performed based on the drawing request data, and the vectorized image is obtained as follows:
vectorized drawing is performed on the two-dimensional image data or the three-dimensional model data based on the drawing request data to obtain two-dimensional vector data, and the corresponding vectorized image is generated from the two-dimensional vector data;
that is, a surveyor may draw on the two-dimensional image data, or pick points and draw on the three-dimensional model data according to the existing drawing method; after point-picking on the three-dimensional model data, the drawn three-dimensional data is automatically converted into the corresponding two-dimensional vector data to update the vectorized image on the two-dimensional image data.
In this embodiment, drawing request information is generated from the surveyor's operations, and the surveyor can draw in either the two-dimensional or the three-dimensional scene as the actual situation requires; combining the convenience of two-dimensional drawing with the intuitiveness of three-dimensional drawing further improves drawing efficiency and lowers the staffing requirements for surveyors.
The drawing request data includes reference baseline fitting request data, line construction request data, graphic drawing request data and information configuration request data.
(1) The reference baseline fitting request data comprises a fitting area; fitting is performed based on the spatial feature data within the fitting area, and the resulting fit line is used as the reference baseline.
A surveyor can select the spatial feature data corresponding to a single wall for single-wall fitting, or the spatial feature data corresponding to several walls for batch fitting.
(1.1) When the spatial feature data is feature point data alone, the corresponding feature points are extracted based on the area of the corresponding wall and fitted to obtain the reference baseline. For example, in this embodiment the extracted feature points are fitted by least squares to obtain a first fit line; the feature points are then screened according to a preset noise-reduction rule and fitted again, and the resulting second fit line is used as the reference baseline.
(1.2) when the spatial feature data adopts cutting point data and feature point data:
(1.2.1) A first point set is constructed from the cutting point data, such that the distance between adjacent points in the first point set lies within a preset distance-range threshold;
preset first, second, third and fourth distance thresholds are acquired, where the first distance threshold is less than or equal to the second, the second is less than the third, and the fourth equals twice the second; the distance-range threshold is determined from the first and fourth distance thresholds;
points are removed from the cutting point data to obtain a number of valid points, such that the distance between adjacent valid points is greater than or equal to the first distance threshold;
when the distance between adjacent valid points is greater than or equal to the fourth distance threshold, corresponding auxiliary points are inserted between them, such that the distance between a valid point and its adjacent auxiliary point, and/or between adjacent auxiliary points, is greater than or equal to the second distance threshold and less than or equal to the third;
the valid points and the auxiliary points together form the first point set.
For example, with a first distance threshold of 25 cm, a second of 30 cm and a third of 45 cm, the distance-range threshold is 25 cm to 60 cm, and the first point set may be constructed as follows:
traverse the cutting point data, taking the first intersection point as the initial valid point; sequentially compute the distance from the current valid point to each subsequent intersection point until it is greater than or equal to 25 cm, mark that intersection point as the next valid point, and repeat to extract further valid points.
When the distance between adjacent valid points reaches 60 cm, at least one auxiliary point is inserted between them such that the distance between adjacent points is between 30 cm and 45 cm inclusive; the maximum spacing of adjacent points in the constructed first point set therefore never exceeds 60 cm, and the spacing of adjacent points lies in [25, 60] cm.
If the distance between two valid points is 100 cm, auxiliary points must be inserted; trisecting the gap gives an equal division length of 33.33 cm, which lies between 30 cm and 45 cm and satisfies the condition, so an auxiliary point is inserted every 33.33 cm starting from the first valid point.
When adjacent intersection points are very close together, the wall surface is uneven and using such point pairs would introduce large errors; when adjacent intersection points are far apart, the wall is straight, but overly long spacing would prevent some areas from being grouped in the subsequent step.
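The construction of the first point set can be sketched as follows, assuming the cutting points arrive ordered along the section outline (distances in metres; all names are hypothetical). With the embodiment's thresholds, where the fourth threshold is 2 * d2 and d3 = 1.5 * d2, dividing any gap of at least the fourth threshold into ceil(gap / d3) equal parts always yields a spacing in [d2, d3], matching the 100 cm example above:

```python
import math

def build_first_point_set(cut_pts, d1=0.25, d2=0.30, d3=0.45):
    """Sketch of step (1.2.1): d1/d2/d3 are the first/second/third distance
    thresholds; the fourth threshold is 2 * d2. Points are (x, y) tuples
    assumed ordered along the cross-section outline (non-empty list)."""
    d4 = 2 * d2
    dist = lambda p, q: math.hypot(q[0] - p[0], q[1] - p[1])

    # Thin the cutting points: adjacent valid points must be >= d1 apart,
    # starting from the first intersection point.
    valid = [cut_pts[0]]
    for p in cut_pts[1:]:
        if dist(valid[-1], p) >= d1:
            valid.append(p)

    # Where adjacent valid points are >= d4 apart, insert auxiliary points
    # so every sub-spacing lands in [d2, d3] (e.g. a 1.00 m gap is trisected
    # into 33.33 cm steps, as in the worked example above).
    first_set = [valid[0]]
    for p in valid[1:]:
        prev, gap = first_set[-1], dist(first_set[-1], p)
        if gap >= d4:
            n = math.ceil(gap / d3)          # smallest count with step <= d3
            for k in range(1, n):
                t = k / n
                first_set.append((prev[0] + t * (p[0] - prev[0]),
                                  prev[1] + t * (p[1] - prev[1])))
        first_set.append(p)
    return first_set
```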
(1.2.2) grouping the first point set based on the variation trend of each point in the first point set to obtain a corresponding second point set;
sequentially carrying out segmentation point judgment on each point in the first point set, and grouping the first point set based on the obtained segmentation points to obtain a corresponding second point set;
the segmentation point judgment step comprises the following steps:
after the current point, n further points are extracted in sequence from the first point set as trend judgment points, with 3 ≤ n ≤ 6; alternatively, the two points adjacent to the current point are extracted as the trend judgment points;
after connecting the current point to each trend judgment point, the angle between each resulting line segment and the x axis is calculated;
a trend-change value is generated from these angle values, and whether the current point is a segmentation point is judged from the trend-change value according to a preset segmentation judgment rule.
And segmenting the first point set based on the segmentation points to obtain a corresponding second point set.
Those skilled in the art can choose how the trend judgment points are extracted and set the value of n as needed;
the specific steps for judging the segmentation points are as follows:
A. When the number of points remaining to be judged in the first point set is greater than or equal to 5, the 3 consecutive points following the current point are extracted as trend judgment points, i.e., n = 3.
The trend-change value is calculated as:
Δθ_i = (θ_(i,i+3) - θ_(i,i+1)) + (θ_(i,i+2) - θ_(i,i+1)) = θ_(i,i+3) + θ_(i,i+2) - 2*θ_(i,i+1)
where Δθ_i denotes the trend-change value of the i-th point in the first point set, and θ_(i,j) denotes the angle between the x axis and the line segment formed by the i-th and j-th points of the first point set, with j taking the values i+1, i+2 and i+3;
the segmentation judgment rule in the embodiment is as follows:
when |. DELTA theta i | a > a, and | Δ θ i ∣-∣△θ i-1 | b > b, judging the i-th point as a segmentation point, otherwise, judging the i-th point as a non-segmentation point, wherein a and b are both threshold parameters, in the embodiment, the values of a and b are two groups, when a takes 29, the value of b is 10, and when a takes 40, the value of b is 100.
B. When the number of points remaining to be judged in the first point set is 3 or 4, the two points adjacent to the current point are extracted as trend judgment points.
The trend-change value is calculated as:
Δθ_i = θ_(i,i+1) - θ_(i-1,i)
where Δθ_i denotes the trend-change value of the i-th point in the first point set, θ_(i,i+1) denotes the angle between the x axis and the line segment formed by the i-th and (i+1)-th points, and likewise θ_(i-1,i) denotes the angle between the x axis and the line segment formed by the (i-1)-th and i-th points.
The segmentation judgment rule is:
when |Δθ_i| > c, the i-th point is judged to be a segmentation point; otherwise it is a non-segmentation point, where c is a threshold parameter, set to 3° in this embodiment.
C. When the number of points remaining to be judged in the first point set is 2, the two points are directly taken as one group, i.e., one second point set.
D. When the number of points remaining to be judged in the first point set is 1, the point is discarded.
Note:
non-segmentation points are rejected when the |Δθ| values of 3 or more consecutive non-segmentation points fall within a preset distribution range (0° to 25°); such points are distributed along a circular arc and belong to the rounded corner between two walls, which is why they are removed.
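A sketch of the segmentation-point judgment for case A (n = 3), using the trend-change formula and the (a, b) rule above; only one of the two parameter pairs is wired in, and boundary handling is simplified, so this is an illustration rather than the patent's exact procedure:

```python
import math

def angle_deg(p, q):
    """Angle in degrees between the segment p -> q and the x axis."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

def trend_values(pts):
    """Case A: dtheta_i = theta(i,i+3) + theta(i,i+2) - 2 * theta(i,i+1)."""
    return {i: angle_deg(pts[i], pts[i + 3]) + angle_deg(pts[i], pts[i + 2])
               - 2 * angle_deg(pts[i], pts[i + 1])
            for i in range(len(pts) - 3)}

def segmentation_points(pts, a=29.0, b=10.0):
    """Flag point i when |dtheta_i| > a and |dtheta_i| - |dtheta_{i-1}| > b;
    points without a predecessor trend value are skipped in this sketch."""
    dt = trend_values(pts)
    return [i for i in dt
            if i - 1 in dt and abs(dt[i]) > a
            and abs(dt[i]) - abs(dt[i - 1]) > b]
```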
(1.2.3) fitting based on the second set of points to generate a corresponding fitted line segment;
the valid points in the second point set are extracted;
when the number of valid points is less than 2, the second point set is discarded;
when the number of valid points is 2, the valid points are connected to obtain the corresponding fitted line segment;
when the number of valid points is greater than 2:
the valid points and/or the midpoints of adjacent valid points are taken as fitting points, and a first line segment is obtained by fitting them; the fitting points are then screened against this first fitted line segment, the screened points are fitted again, and the resulting second line segment is taken as the corresponding fitted line segment.
The auxiliary points are artificially inserted, so they are excluded during fitting to guarantee the accuracy of the fitting result;
in this embodiment, the midpoints of adjacent valid points are additionally used for fitting, further improving the accuracy of the fitting result.
Specifically, when the number of valid points is greater than 2 and at most 4, valid points are scarce, so both the valid points and the midpoints of adjacent valid points are taken as fitting points; when the number of valid points is greater than 4, only the midpoints of adjacent valid points are taken as fitting points.
In this embodiment, the fitting points are screened against the first fitted line segment as follows:
when the number of fitting points equals 2, no point needs to be removed;
when the number of fitting points is greater than 2 and at most 5, the single point farthest from the first fitted line segment is removed;
when the number of fitting points is greater than 5, fitting points are removed at a preset proportion of 20%, the number of removed points being rounded to an integer.
And (1.2.4) merging the fitted line segments, and generating a corresponding vector line based on a merging result.
When only the cut point data is employed, the resultant vector line is taken as a reference baseline.
The merging method comprises the following specific steps:
(1.2.4.1) each fitted line segment is normalized to obtain a corresponding normalized line segment that is parallel to the X axis or the Y axis;
the fitted line segments are normalized by the rotation:
xnew=x*cos(θ)-y*sin(θ);
ynew=x*sin(θ)+y*cos(θ);
znew=z;
where x, y and z are the coordinates of a valid point on the fitted line segment, θ is the angle between the fitted line segment and the X axis, and xnew, ynew and znew are the coordinates of the point after rotation;
(1.2.4.2) the normalized line segments are classified by their start and end coordinates to obtain at least one merged-line set, where the normalized segments in a merged-line set lie on the same straight line and the spacing between adjacent normalized segments is below a preset spacing threshold. That is, the end-to-start distance between adjacent normalized segments is calculated; a spacing below the preset threshold (1 m) indicates foreign objects in front of the wall or an uneven wall surface, so the segments can be merged to generate the corresponding merged-line set.
(1.2.4.3) the fitted line segments corresponding to the normalized segments in the same merged-line set are combined to obtain the corresponding vector line:
the normalized segments in the merged-line set are traversed, and the start point of the fitted segment corresponding to the first normalized segment and the end point of the fitted segment corresponding to the last are acquired;
and connecting the starting point and the end point to obtain a corresponding vector line.
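The normalization-and-merge step can be sketched as below; the grouping criterion is simplified to near-collinearity of consecutive fitted segments plus the 1 m end-to-start spacing threshold, which is one reading of steps (1.2.4.2)-(1.2.4.3) rather than the patent's exact classification:

```python
import math

def seg_angle(seg):
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1)

def normalize(seg):
    """Step (1.2.4.1): rotate a fitted segment so it is parallel to the
    x axis, using the rotation formulas above with theta taken as minus
    the segment's angle (an interpretation; the patent states only the
    rotation matrix). Shown for illustration; merge_segments below works
    on angles directly."""
    th = -seg_angle(seg)
    rot = lambda p: (p[0] * math.cos(th) - p[1] * math.sin(th),
                     p[0] * math.sin(th) + p[1] * math.cos(th))
    return rot(seg[0]), rot(seg[1])

def merge_segments(fitted, gap=1.0, ang_tol=math.radians(5)):
    """Chain consecutive fitted segments that are nearly collinear and whose
    end-to-start spacing is under `gap` (the 1 m threshold above), then
    connect the start of the first to the end of the last segment in each
    chain to form the vector line."""
    vectors, chain = [], [fitted[0]]
    for prev, cur in zip(fitted, fitted[1:]):
        collinear = abs(seg_angle(prev) - seg_angle(cur)) < ang_tol
        if collinear and math.dist(prev[1], cur[0]) < gap:
            chain.append(cur)
        else:
            vectors.append((chain[0][0], chain[-1][1]))
            chain = [cur]
    vectors.append((chain[0][0], chain[-1][1]))
    return vectors
```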
(1.2.5) the vector line is optimized based on the feature point data to obtain the corresponding reference baseline;
that is, the vector line is substituted for the trend line in the house wall line identification method (CN 2020110487599) for intelligent repair, and the repaired vector line is used as the reference baseline.
(2) The line construction request data includes construction operation information indicating whether parallel or perpendicular lines of the reference baseline are to be constructed, construction position information indicating where the constructed parallel or perpendicular lines are placed, and a specified reference baseline.
Because adjacent walls are usually perpendicular to each other and opposite walls usually parallel, once the reference baseline of one wall has been obtained, the corresponding rooms can be drawn quickly by constructing the parallels or perpendiculars of that baseline. This not only effectively improves drawing efficiency; when the spatial feature data on the other faces of the house is cluttered, the corresponding walls can still be drawn by constructing parallel or perpendicular lines, ensuring the precision of the resulting vectorized image.
In this embodiment, the spatial feature data is projected onto both the two-dimensional image data and the three-dimensional model data, so that, for a selected reference baseline, a surveyor can compare the two projections and pick a suitable point in the spatial feature data to indicate the construction position, where the parallel or perpendicular of the reference baseline is then constructed.
When the spatial feature data of the relevant wall surface is too cluttered to pick from, the surveyor can instead mark a suitable point on the three-dimensional model data to indicate the construction position, and the parallel or perpendicular of the reference baseline is constructed at that point.
(3) The graphic drawing request data comprises drawing operation information and drawing position information, where the drawing operation information includes surrounding-surface, hollowing-out, merging and segmentation operations, and the drawing position information indicates the position where the drawing operation is performed;
the drawing operation information includes, but is not limited to, operations of drawing points, drawing lines, drawing surfaces, segmenting, deleting, hollowing, enclosing surfaces and the like;
line drawing includes, but is not limited to, two-point line drawing, line-segment merging, extended line drawing, and merged line drawing (i.e., connecting and integrating at least two drawn line segments, used to repair wall lines);
the drawing surfaces include, but are not limited to, two-point drawing surfaces (diagonal points), multi-point drawing surfaces (three points and more);
segmentation includes, but is not limited to, segmenting drawn lines and segmenting drawn surfaces; the segmentation modes include, but are not limited to, selecting a drawn line as the dividing line, extending a drawn line as the dividing line, and constructing a dividing line;
hollowing-out means cutting out the area corresponding to a given surface so that it is not counted in the mapping result; for example, a courtyard's area is not counted when mapping a house, but its shape still needs to be drawn and recorded, so the area corresponding to the courtyard can be removed by hollowing-out.
The drawing position information corresponding to the surrounding-surface operation indicates the vector lines used to enclose a surface: the specified independent vector lines are extended to obtain their intersection points, a complete planar vector is constructed, and the vectorized drawing is finally completed, as in the sketch below;
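For the surrounding-surface operation, extending the specified vector lines to their mutual intersections can be sketched as a 2D line-line intersection computation (the names and the wrap-around ordering of the lines are assumptions):

```python
def line_intersection(a, b, eps=1e-12):
    """Intersection of the infinite lines through segments a and b
    (each a pair of (x, y) points); None when they are parallel."""
    (x1, y1), (x2, y2) = a
    (x3, y3), (x4, y4) = b
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < eps:
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def surround_surface(vector_lines):
    """Extend each consecutive pair of independent vector lines (wrapping
    around) to its intersection, yielding the closed polygon's vertices."""
    n = len(vector_lines)
    return [line_intersection(vector_lines[i], vector_lines[(i + 1) % n])
            for i in range(n)]
```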
(4) The information configuration request data comprises configuration information and assignment regions, where the configuration information indicates the attributes of an assignment region and includes structure information (the building structure) and floor information.
Further, after the two-dimensional image data and the three-dimensional model data are acquired and displayed in step S100, the method also comprises a two-dimensional/three-dimensional linkage display step, specifically:
acquiring scene adjustment request data, and adjusting two-dimensional image data and/or three-dimensional model data based on the scene adjustment request data;
the scene adjustment request data comprises adjustment request information, position information and scene information;
the adjustment request information includes, but is not limited to, zoom in, zoom out, position, rotation, and translation;
the scene information is used for indicating the adjustment of the two-dimensional image data and/or the three-dimensional model data;
the position information indicates the reference point used when adjusting the two-dimensional image data or the three-dimensional model data; because the two-dimensional image data and the three-dimensional model data are mapped to each other, once the position information of one scene is determined, the other scene and its associated position information can be obtained from the mapping relation.
In this embodiment, the two-dimensional/three-dimensional linkage display step lets the two-dimensional image data and the three-dimensional model data be displayed in linkage, so that during drawing surveyors need not adjust them separately, further improving mapping efficiency.
The drawn vectorized graphic is also projected onto the corresponding position of the two-dimensional image data, so it is adjusted synchronously whenever the two-dimensional image data is adjusted.
For example:
the scene information corresponding to zoom-in, zoom-out, positioning and translation is preset to adjust the two-dimensional image data and the three-dimensional model data synchronously;
when a surveyor zooms in or out, the coordinates of a preset fixed point, or of an operation point selected by the user, are acquired as the position information, and the two-dimensional scene and the three-dimensional model data are zoomed synchronously based on it;
when a surveyor selects a point/line/surface of the vectorized image for positioning, its coordinates are acquired as the position information, and the two-dimensional image data and the three-dimensional model data are adjusted so that the corresponding spatial position becomes the center point of both, with the three-dimensional model data rotated so that this position faces the viewer. For example, when a surveyor selects the drawn line corresponding to a wall in the vectorized image, that line becomes the center of the two-dimensional image data and the corresponding wall surface becomes the center of the three-dimensional model data; the three-dimensional model data is rotated automatically to show that wall surface to the surveyor, and its inclination is adjusted automatically to the best viewing angle.
When a surveyor translates the two-dimensional image data, the coordinates of its center point during the translation are acquired in real time as the position information, and the three-dimensional model data is translated synchronously based on it.
The scene information corresponding to three-dimensional scene rotation adjusts the three-dimensional model data only:
when a surveyor rotates the three-dimensional model data, it is rotated about a preset fixed point, or about an operation point selected by the user, taken as the position information, and its inclination is adjusted automatically to the best viewing angle for display.
Note: the above are only examples of two-dimensional/three-dimensional linkage display; those skilled in the art can configure the adjustment request information, position information and scene information as actually needed, and this embodiment is not limited in this respect.
Further, after the vectorized image is obtained, the method also comprises a result output step, specifically:
the vectorized images are summarized, and the corresponding line drawings are output layer by layer according to preset mapping parameters.
That is, the above steps are repeated, and for the completed vectorized images the line drawing of each layer is output separately according to the preset mapping parameters and the layer numbers configured during drawing.
Further, before the results are output, the method also comprises a quality inspection step:
quality inspection is performed on the drawn vectorized image, which is then repaired based on the quality inspection result;
the quality inspection covers pseudo-node inspection, short-line-segment inspection, graphic-gap inspection, graphic-overlap inspection, and the like;
the quality inspection result comprises error category information and error position information; a preset repair rule is retrieved based on the error category information, and automatic repair is performed according to that rule and the error position information;
through the quality inspection step this embodiment effectively improves the precision of the final mapping result, and the automatic inspection-and-repair approach preserves mapping efficiency.
Taking an integrated house-and-land digital mapping scenario as an example, mapping was performed according to the mapping method described in embodiment 1; here the spatial feature data was feature point data, the extracted feature point data was projected onto the corresponding positions of the two-dimensional image data, the surveyors' operations were collected, and the corresponding scene adjustment request data or drawing request data was acquired and responded to on that basis.
Statistical analysis of the drawing results of the surveyors showed that:
each surveyor completed about 250 households per day, compared with about 60 households per day achieved previously under the traditional mapping scheme, a roughly four-fold improvement in mapping efficiency.
The error between the obtained mapping results and the actual measured points reached a high precision of 3.5 cm to 4 cm, about 2 cm lower than the error of the previous conventional mapping scheme; in addition, fewer than 10% of points in this case exceeded the 10 cm tolerance, meeting the precision requirement.
In summary, the digital mapping method provided by this embodiment greatly improves surveyors' drawing efficiency while improving drawing precision; and because it combines the convenience of two-dimensional mapping with the intuitiveness of three-dimensional mapping and automatically refines the drawn lines, it also lowers the working threshold for surveyors and reduces training costs.
Embodiment 2, a digital mapping system, as shown in fig. 2, includes:
a scene display module 100, configured to obtain and display two-dimensional image data and three-dimensional model data;
the feature extraction module 200 is configured to determine a region to be drawn, perform feature extraction on the three-dimensional model data based on the region to be drawn, and obtain corresponding spatial feature data;
the drawing module 300 is configured to, after the spatial feature data is projected to the two-dimensional image data, collect drawing operations performed by a user on the two-dimensional image data or the three-dimensional model data, generate corresponding drawing request data, and perform vectorization drawing based on the drawing request data to obtain a vectorized image.
The quality inspection module 400 is used for performing quality inspection on the vectorized image drawn by the drawing module 300 and repairing the vectorized image based on a quality inspection result;
and the result output module 500 is used for summarizing the vectorized images that have passed quality inspection by the quality inspection module 400, and outputting the corresponding line drawings layer by layer according to preset mapping parameters.
Further, the scene display module 100 includes a two-dimensional/three-dimensional linkage display unit configured to:
acquiring scene adjustment request data, and adjusting two-dimensional image data and/or three-dimensional model data based on the scene adjustment request data;
the scene adjustment request data comprises adjustment request information, position information and scene information;
the adjustment request information includes, but is not limited to, zoom in, zoom out, position, rotation, and translation;
the scene information is used for indicating the adjustment of the two-dimensional image data and/or the three-dimensional model data;
the position information is used for indicating a reference point when the two-dimensional image data or the three-dimensional model data is adjusted.
Further, the feature extraction module includes a region drawing unit, a height acquisition unit, and an extraction unit:
the region drawing unit is used for drawing the extraction range on the two-dimensional image data to generate a region to be drawn;
the height acquisition unit is used for acquiring preset height data, or picking a point on the three-dimensional model data and extracting its height, to obtain the corresponding height data;
the extraction unit is used for extracting the characteristics of the three-dimensional model data based on the area to be drawn and the height data to obtain corresponding spatial characteristic data.
The extraction unit comprises a characteristic point extraction subunit and a cutting point extraction subunit;
the clipping point extracting subunit is configured to:
cutting the three-dimensional model data based on the area to be drawn to obtain a cutting model, wherein the cutting model corresponds to the area to be drawn;
and performing horizontal plane cutting on the cutting model based on the height data, and extracting all intersection points of the horizontal section and the model to obtain cutting point data.
The feature point extraction subunit is configured to:
after the three-dimensional model data is divided into a plurality of areas, respectively extracting the characteristics of each area to obtain a plurality of sub-characteristic libraries;
the sub-feature library comprises area coordinate data and a plurality of feature points, wherein the area coordinate data is used for indicating the distribution area of each feature point in the three-dimensional model data;
extracting a sub-feature library matched with the area to be drawn based on the area coordinate data;
and extracting corresponding feature points from each sub-feature library based on the height data to obtain feature point data.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
Embodiment 3: a computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of the method of Embodiment 1.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for the parts that are the same or similar, the embodiments may be referred to one another.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that:
reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Thus, the appearances of the phrase "one embodiment" or "an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
In addition, it should be noted that the specific embodiments described in this specification may differ in the shapes and names of their components. All equivalent or straightforward changes to the structure, features, and principles described in the inventive concept of this patent fall within the scope of protection of this patent. Those skilled in the art may make various modifications, additions, and substitutions to the specific embodiments described without departing from the scope of the invention as defined in the accompanying claims.

Claims (9)

1. A method for digital mapping, comprising the steps of:
acquiring and displaying two-dimensional image data and three-dimensional model data;
determining a region to be drawn, and performing feature extraction on the three-dimensional model data based on the region to be drawn to obtain corresponding spatial feature data;
after the spatial feature data are projected onto the two-dimensional image data, collecting a drawing operation performed by a user on the two-dimensional image data or the three-dimensional model data, generating corresponding drawing request data, and performing vectorization drawing based on the drawing request data to obtain a vectorized image;
wherein performing vectorization drawing based on the drawing request data to obtain a vectorized image specifically comprises:
performing vectorization drawing on the two-dimensional image data or the three-dimensional model data based on the drawing request data to obtain two-dimensional vector data, and generating a corresponding vectorized image from the two-dimensional vector data;
the drawing request data includes reference baseline fitting request data and line construction request data;
the reference baseline fitting request data comprises a fitting area; fitting is carried out based on the spatial feature data in the fitting area, and the resulting fitted line is used as the reference baseline;
the line construction request data includes construction operation information indicating whether a parallel line or a perpendicular line of the reference baseline is to be constructed, construction position information indicating the position of the constructed parallel or perpendicular line, and a specified reference baseline.
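To make the fitting and construction steps of claim 1 concrete, the sketch below fits a reference baseline to the 2D projections of the spatial feature points in the fitting area by a total-least-squares (principal component) fit, then constructs a parallel or perpendicular line through a requested position. The choice of fitting algorithm is an editorial assumption; the claim does not name one.

    import numpy as np

    def fit_reference_baseline(points):
        """points: (N, 2) array; returns (centroid, unit direction) of the fit line."""
        pts = np.asarray(points, dtype=float)
        centroid = pts.mean(axis=0)
        # Principal direction of the centred points = direction of best-fit line.
        _, _, vt = np.linalg.svd(pts - centroid)
        return centroid, vt[0]

    def construct_line(baseline, position, operation):
        """Return (point, direction) of a line through `position`."""
        _, direction = baseline
        if operation == "parallel":
            return np.asarray(position, float), direction
        if operation == "perpendicular":
            return np.asarray(position, float), np.array([-direction[1], direction[0]])
        raise ValueError(operation)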
2. The digital mapping method of claim 1, wherein:
drawing an extraction range on the two-dimensional image data to generate a region to be drawn;
acquiring preset height data, or obtaining corresponding height data through a click-to-extract-height operation on the three-dimensional model data;
and performing feature extraction on the three-dimensional model data based on the region to be drawn and the height data to obtain corresponding spatial feature data.
3. The digital mapping method of claim 2, wherein the spatial feature data includes feature point data and/or clipping point data;
the clipping point data is obtained as follows:
clipping the three-dimensional model data based on the region to be drawn to obtain a clipped model, wherein the clipped model corresponds to the region to be drawn;
and sectioning the clipped model with a horizontal plane at the height given by the height data, and extracting all intersection points of the horizontal section and the model to obtain the clipping point data.
4. The digital mapping method of claim 3, wherein the feature point data is obtained by:
dividing the three-dimensional model data into a plurality of regions and performing feature extraction on each region to obtain a plurality of sub-feature libraries;
each sub-feature library comprises region coordinate data and a plurality of feature points, the region coordinate data indicating the region of the three-dimensional model data over which those feature points are distributed;
extracting the sub-feature libraries matched with the region to be drawn based on the region coordinate data;
and extracting corresponding feature points from each matched sub-feature library based on the height data to obtain the feature point data.
5. The digital mapping method of any of claims 1 to 4, wherein:
the drawing request data further comprises graph drawing request data and information configuration request data;
the graph drawing request data comprises drawing operation information and drawing position information, wherein the drawing operation information comprises a surrounding-surface operation, a hollowing operation, a merging operation, and a segmentation operation, and the drawing position information indicates the position at which the drawing operation is performed;
the information configuration request data comprises configuration information and an assignment area, wherein the configuration information indicates attributes of the assignment area and comprises structure information and floor information.
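As one possible realisation of the four drawing operations in claim 5, the sketch below maps them onto standard 2D geometry primitives using the shapely library; the patent does not prescribe a geometry engine, so this pairing is illustrative only.

    from shapely.geometry import Polygon, LineString
    from shapely.ops import unary_union, split

    def surround(ring_coords):                 # surrounding-surface operation
        return Polygon(ring_coords)

    def hollow(polygon, hole_coords):          # hollowing operation: punch a hole
        return polygon.difference(Polygon(hole_coords))

    def merge(polygons):                       # merging operation
        return unary_union(polygons)

    def divide(polygon, cut_coords):           # segmentation operation
        return split(polygon, LineString(cut_coords))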
6. The digital mapping method of claim 5, wherein, after the two-dimensional image data and the three-dimensional model data are acquired and displayed, the method further comprises a two-three-dimensional linkage display step, specifically:
acquiring scene adjustment request data, and adjusting two-dimensional image data and/or three-dimensional model data based on the scene adjustment request data;
the scene adjustment request data comprises adjustment request information, position information and scene information;
the adjustment request information comprises zooming-in, zooming-out, positioning, rotating and translating;
the scene information is used for indicating the adjustment of the two-dimensional image data and/or the three-dimensional model data;
the position information is used for indicating a reference point when the two-dimensional image data or the three-dimensional model data is adjusted.
7. A digital mapping system, comprising:
the scene display module is used for acquiring and displaying two-dimensional image data and three-dimensional model data;
the characteristic extraction module is used for determining a region to be drawn, and extracting the characteristics of the three-dimensional model data based on the region to be drawn to obtain corresponding spatial characteristic data;
a drawing module configured to:
collect, after the spatial feature data is projected onto the two-dimensional image data, drawing operations performed by a user on the two-dimensional image data or the three-dimensional model data, and generate corresponding drawing request data;
and further configured to perform vectorization drawing based on the drawing request data to obtain a vectorized image, specifically to perform vectorization drawing on the two-dimensional image data or the three-dimensional model data based on the drawing request data to obtain two-dimensional vector data, and to generate a corresponding vectorized image from the two-dimensional vector data;
the drawing request data includes reference baseline fitting request data and line construction request data;
the reference baseline fitting request data comprises a fitting area; fitting is carried out based on the spatial feature data in the fitting area, and the resulting fitted line is used as the reference baseline;
the line construction request data includes construction operation information indicating whether a parallel line or a perpendicular line of the reference baseline is to be constructed, construction position information indicating the position of the constructed parallel or perpendicular line, and a specified reference baseline.
8. The digital mapping system of claim 7, wherein the feature extraction module comprises a region drawing unit, a height acquisition unit, and an extraction unit:
the region drawing unit is used for drawing the extraction range on the two-dimensional image data to generate a region to be drawn;
the height acquisition unit is used for acquiring preset height data, or for obtaining corresponding height data through a click-to-extract-height operation on the three-dimensional model data;
the extraction unit is used for extracting the characteristics of the three-dimensional model data based on the area to be drawn and the height data to obtain corresponding spatial characteristic data.
9. A computer-readable storage medium in which a computer program is stored, the computer program, when executed by a processor, carrying out the steps of the method according to any one of claims 1 to 5.
CN202011567550.3A 2020-12-25 2020-12-25 Digital mapping method, system and computer readable storage medium Active CN113140022B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011567550.3A CN113140022B (en) 2020-12-25 2020-12-25 Digital mapping method, system and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113140022A CN113140022A (en) 2021-07-20
CN113140022B (en) 2022-11-11

Family

ID=76809843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011567550.3A Active CN113140022B (en) 2020-12-25 2020-12-25 Digital mapping method, system and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113140022B (en)

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090237396A1 (en) * 2008-03-24 2009-09-24 Harris Corporation, Corporation Of The State Of Delaware System and method for correlating and synchronizing a three-dimensional site model and two-dimensional imagery
CN102436669A (en) * 2011-10-13 2012-05-02 中国民用航空总局第二研究所 Two-dimensional vector map drawing method
JP2012150823A (en) * 2012-02-28 2012-08-09 Geo Technical Laboratory Co Ltd Three-dimensional map drawing system
CN102831626B (en) * 2012-06-18 2014-11-26 清华大学 Visualization method for multivariable spatio-temporal data under polar region projection mode
CN103065357B (en) * 2013-01-10 2015-08-05 电子科技大学 Based on the figure for shadow-play model production method of common three-dimensional model
CN105261052B (en) * 2015-11-03 2018-09-18 沈阳东软医疗系统有限公司 Method for drafting and device is unfolded in lumen image
US10635777B2 (en) * 2016-02-16 2020-04-28 Bayerische Motoren Werke Aktiengesellschaft Method for generating and using a two-dimensional drawing having three-dimensional orientation information
CN107356230B (en) * 2017-07-12 2020-10-27 深圳市武测空间信息有限公司 Digital mapping method and system based on live-action three-dimensional model
CN109727255B (en) * 2018-11-29 2022-11-18 广东中达规谷地信科技有限公司 Building three-dimensional model segmentation method
CN109949899B (en) * 2019-02-28 2021-05-28 未艾医疗技术(深圳)有限公司 Image three-dimensional measurement method, electronic device, storage medium, and program product
CN110175366A (en) * 2019-04-26 2019-08-27 南京友谱信息科技有限公司 Integral perspective threedimensional model modeling method is constructed and built to region class
CN111768498A (en) * 2020-07-09 2020-10-13 中国科学院自动化研究所 Visual positioning method and system based on dense semantic three-dimensional map and mixed features
CN112037318A (en) * 2020-07-22 2020-12-04 山东大学 Construction method and system of three-dimensional rock mass structure model and application of model



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Digital mapping methods, systems, and computer-readable storage media

Effective date of registration: 20230815

Granted publication date: 20221111

Pledgee: Bank of Jiangsu Limited by Share Ltd. Hangzhou branch

Pledgor: Hangzhou Jinao Information Technology Co.,Ltd.

Registration number: Y2023980052148
