CN117593465A - Virtual display method and system for realizing smart city in three-dimensional visualization mode - Google Patents


Info

Publication number
CN117593465A
Authority
CN
China
Prior art keywords
image
city
dimensional
coordinate
smart city
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311718450.XA
Other languages
Chinese (zh)
Inventor
林国登
李林松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jishi Chuangzhi Shenzhen Technology Co ltd
Original Assignee
Jishi Chuangzhi Shenzhen Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jishi Chuangzhi Shenzhen Technology Co ltd filed Critical Jishi Chuangzhi Shenzhen Technology Co ltd
Priority to CN202311718450.XA priority Critical patent/CN117593465A/en
Publication of CN117593465A publication Critical patent/CN117593465A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the technical field of virtual display, and discloses a virtual display method and system for realizing a smart city in a three-dimensional visualization manner. The method comprises the following steps: acquiring a smart city image and performing image preprocessing to obtain a preprocessed image; detecting and removing a moving target in the image, and performing image restoration on the preprocessed image to obtain a repaired image; based on the smart city coordinates, performing overlap segmentation and fusion on the repaired images to obtain a fused image; identifying a city center point in the fused image, constructing the spatial three-dimensional coordinates of the smart city based on the city center point, performing coordinate connection to obtain a spatial three-dimensional structure, and performing color and texture rendering to obtain a three-dimensional visual model; and marking city attributes in the three-dimensional visual model, loading the model and the attributes into a visualization platform, and performing three-dimensional virtual display of the smart city to obtain a three-dimensional virtual display result. The invention increases the amount of information displayed for the smart city.

Description

Virtual display method and system for realizing smart city in three-dimensional visualization mode
Technical Field
The invention relates to the technical field of virtual display, in particular to a virtual display method and a virtual display system for realizing a smart city in a three-dimensional visualization mode.
Background
The virtual display of the three-dimensional city can display the data related to the smart city in an intuitive way by using data visualization tools such as charts, maps and the like.
At present, the virtual display of the smart city is mainly based on smartphone applications or websites, so that users can conveniently and rapidly obtain various city services and information, such as bus inquiry, parking space reservation and express delivery inquiry, at any time and in any place.
Disclosure of Invention
In order to solve the above problems, the invention provides a virtual display method and system for realizing a smart city in a three-dimensional visualization manner, which can increase the amount of information displayed for the smart city.
In a first aspect, the present invention provides a virtual display method for implementing three-dimensional visualization of a smart city, including:
extracting a flight track of an unmanned aerial vehicle for acquiring a remote sensing image of a smart city, selecting a target flight track from the flight track based on the texture quality of the remote sensing image, acquiring a target remote sensing image of the smart city and city coordinates of the target remote sensing image by using the unmanned aerial vehicle based on the target flight track, and performing image preprocessing on the target remote sensing image to obtain a preprocessed image;
Detecting a moving target in the preprocessed image, and after the moving target is removed from the preprocessed image, performing image restoration on an image module corresponding to the moving target in the preprocessed image to obtain a restored image;
based on the city coordinates, overlapping and segmenting the overlapped buildings in the repair image to obtain segmented images, and fusing each segmented image in the segmented images to obtain a fused image;
identifying a city center point in the fused image, constructing a space three-dimensional coordinate of the smart city based on a first mapping relation between the city center point and the city coordinate, and carrying out coordinate connection on each space three-dimensional coordinate in the space three-dimensional coordinate by utilizing a second mapping relation between a pixel point coordinate in the fused image and the space three-dimensional coordinate to obtain a space three-dimensional structure;
performing color rendering on the space three-dimensional structure to obtain a color rendering structure, performing texture rendering on the color rendering structure to obtain a texture rendering structure, and taking the texture rendering structure as a three-dimensional visual model of the smart city;
marking city attributes in the three-dimensional visual model, loading the three-dimensional visual model and the city attributes into a pre-constructed visual platform, and completing three-dimensional virtual display of the smart city by using the visual platform to obtain a three-dimensional virtual display result of the smart city.
In a possible implementation manner of the first aspect, the performing image preprocessing on the target remote sensing image to obtain a preprocessed image includes:
and performing image filtering on the target remote sensing image by using the following formula to obtain a filtered image:
wherein ω represents the filtered image, M represents the total number of pixels in the target remote sensing image, arg represents the window function, (i, j) represents the pixel to be filtered in the target remote sensing image, and Q(i, j) represents the gray value of the pixel to be filtered,
and performing inclination correction on the filtered image to obtain a preprocessed image.
In a possible implementation manner of the first aspect, the performing tilt correction on the filtered image to obtain a preprocessed image includes:
taking the image center point of the filtered image as the coordinate origin, and taking the diagonal corner points of the filtered image as a plane;
based on the origin of coordinates and the plane, constructing a space coordinate system of the filtered image, and carrying out coordinate rotation on the filtered image by using the following formula to obtain a rotation coordinate point:
wherein A2, B2, C2 represent the rotated coordinate point after coordinate rotation, A1, B1, C1 represent the original spatial coordinate point of the filtered image in the spatial coordinate system, and εA, εB, εC represent the rotation angles in the rotation matrix,
and reversely rotating the inclination angle of the filtered image based on the rotation coordinate point to obtain an inclination correction image.
In a possible implementation manner of the first aspect, the detecting a moving target in the preprocessed image includes:
inputting the preprocessed image into a preconfigured convolutional neural network to identify image features of the preprocessed image using the convolutional neural network in combination with the following formula:
where θ represents the vector feature, f represents the activation function, A, B, C represent the length, width and height of the convolution kernel cube in the convolution model, the first factor denotes the calculated value at the (a, b, c) position on the image with the k-th convolution kernel of the i-th layer, the second factor denotes the value at the (α+o, β+p, γ+q) position of the image with the m-th convolution kernel of the (u-1)-th layer, C_ku represents a hyper-parameter of the u-th layer of the convolution kernel, and α, β, γ represent the offsets of the (a, b, c) position,
and identifying image information of the inclination correction image according to the image characteristics, and identifying a corresponding moving target in the image information.
In one possible implementation manner of the first aspect, the performing image restoration on the image module corresponding to the moving target in the preprocessed image to obtain a restored image includes:
Inquiring a historical image corresponding to the preprocessed image, and extracting an area corresponding to the missing part of the preprocessed image in the historical image to obtain a compensation image;
calculating pixel differences between the compensation map and the preprocessed image, and carrying out pixel correction on the compensation map based on the pixel differences to obtain a correction map;
and performing image deletion repair on the missing part in the preprocessed image by using the correction map to obtain a repaired image.
In a possible implementation manner of the first aspect, the performing, based on the city coordinates, overlap segmentation on the overlapping buildings in the repair image to obtain a segmented image includes:
converting the repair image into a gray level image, and performing feature matching on the gray level image to obtain a feature matching result;
and determining overlapping buildings in the repair image based on the feature matching result, and performing overlapping segmentation on the overlapping buildings to obtain segmented images.
In a possible implementation manner of the first aspect, the fusing each of the segmented images to obtain a fused image includes:
marking the feature points of the segmented image to obtain marked points;
And constructing a mark conversion matrix of the segmented image, and performing image fusion on the segmented image by using the mark conversion matrix and the mark points to obtain a fusion image.
In a possible implementation manner of the first aspect, the performing image fusion on the segmented image using the label transformation matrix and the label point to obtain a fused image includes:
and performing label conversion on the label points by using the label conversion matrix and combining the following formula to obtain conversion labels:
wherein α1 and β1 represent the marker point before conversion, α2 and β2 represent the converted marker point, μ1, μ2 represent the rotation transformation parameters, μ3, μ4 the translation transformation parameters, μ5, μ6 the scale transformation parameters, μ7, μ8 the perspective transformation parameters, and the matrix formed by μ1 to μ8 represents the mark conversion matrix,
and carrying out image fusion on the segmented image by utilizing the conversion mark to obtain a fusion image.
In a second aspect, the present invention provides a virtual display system for implementing a smart city in three-dimensional visualization, the system comprising:
the image preprocessing module is used for extracting the flight track of an unmanned aerial vehicle for acquiring the remote sensing image of the smart city, selecting a target flight track from the flight tracks based on the texture quality of the remote sensing image, acquiring the target remote sensing image of the smart city and the city coordinates of the target remote sensing image by using the unmanned aerial vehicle based on the target flight track, and carrying out image preprocessing on the target remote sensing image to obtain a preprocessed image;
The image restoration module is used for detecting a moving target in the preprocessed image, and after the moving target is removed from the preprocessed image, carrying out image restoration on an image module corresponding to the moving target in the preprocessed image to obtain a restored image;
the image fusion module is used for carrying out overlapping segmentation on the overlapping buildings in the repair image based on the city coordinates to obtain segmented images, and fusing each segmented image in the segmented images to obtain a fused image;
the three-dimensional structure construction module is used for identifying a city center point in the fused image, constructing a space three-dimensional coordinate of the smart city based on a first mapping relation between the city center point and the city coordinate, and carrying out coordinate connection on each space three-dimensional coordinate in the space three-dimensional coordinate by utilizing a second mapping relation between the pixel point coordinate and the space three-dimensional coordinate in the fused image to obtain a space three-dimensional structure;
the model rendering module is used for performing color rendering on the space three-dimensional structure to obtain a color rendering structure, performing texture rendering on the color rendering structure to obtain a texture rendering structure, and taking the texture rendering structure as a three-dimensional visual model of the smart city.
The three-dimensional display module is used for marking the city attribute in the three-dimensional visual model, loading the three-dimensional visual model and the city attribute into a pre-constructed visual platform, and completing three-dimensional virtual display of the smart city by utilizing the visual platform to obtain a three-dimensional virtual display result of the smart city.
Compared with the prior art, the technical principles and beneficial effects of this scheme are as follows:
the embodiment of the invention can provide a large number of unmanned aerial vehicle flight schemes for urban information acquisition of the smart city by extracting the flight locus of the unmanned aerial vehicle for acquiring the remote sensing image of the smart city, and can select a more reasonable scheme, and further, the embodiment of the invention can obtain a more excellent aerial photographing path by selecting the target flight locus from the flight locus based on the texture quality of the remote sensing image, the embodiment of the invention can identify the target which is easy to cause interference in the image by detecting the moving target in the preprocessing image, the moving target refers to a non-fixed building such as pedestrians, vehicles and the like in the corresponding urban geography in the image, the embodiment of the invention carries out overlapping segmentation on the overlapping building in the repairing image based on the urban coordinates, the segmentation image can remove the part with overlapped content in the aerial image when the smart city is aerial, further, the embodiment of the invention can provide basic construction points for the smart city when the smart city is three-dimensionally displayed by identifying the city center point in the fusion image, so that the distribution of corresponding city buildings in the three-dimensional image is more uniform, the embodiment of the invention can construct a basic frame for the virtual display of the smart city by constructing the space three-dimensional coordinates of the smart city based on the first mapping relation between the city center point and the city coordinates, wherein the first mapping relation refers to the corresponding relation between the two-dimensional coordinates of the image and the three-dimensional coordinates of the real space, further, the embodiment of the invention can obtain the color rendering structure by performing color rendering on the space 
three-dimensional structure, so that the color of the space three-dimensional structure is richer and plump, the visual effect is better, wherein the color rendering means that the sensory expressive force of the image is enhanced by changing the color and saturation of the image and applying various visual effects, and the detailed information of each region such as building name, street name, river, lake and the like can be known in the process of viewing the city model by marking the city attribute in the three-dimensional visual model. The city attribute refers to information used for describing various data in a city, such as names of city roads, distances, landmark building names, areas and the like. Therefore, the virtual display method and the virtual display system for realizing the smart city in three-dimensional visualization can improve the display information quantity of the smart city.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flow chart of a method for implementing virtual display of a smart city in three-dimensional visualization according to an embodiment of the present invention;
fig. 2 is a schematic block diagram of a virtual display system for implementing a smart city in three-dimensional visualization according to an embodiment of the present invention.
Detailed Description
It should be understood that the detailed description is presented by way of example only and is not intended to limit the invention.
The embodiment of the invention provides a virtual display method for realizing a smart city by three-dimensional visualization, wherein an execution subject of the virtual display method for realizing the smart city by the three-dimensional visualization comprises, but is not limited to, at least one of a server, a terminal and the like which can be configured to execute the method provided by the embodiment of the invention. In other words, the virtual display method for realizing the smart city by three-dimensional visualization may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The service end includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like. The server may be an independent server, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), and basic cloud computing services such as big data and artificial intelligence platforms.
Referring to fig. 1, a flow chart of a virtual display method for realizing a smart city in three-dimensional visualization according to an embodiment of the invention is shown. The virtual display method for realizing the smart city by three-dimensional visualization depicted in fig. 1 comprises the following steps:
s1, extracting a flight track of an unmanned aerial vehicle for acquiring a remote sensing image of a smart city, selecting a target flight track from the flight tracks based on the texture quality of the remote sensing image, acquiring a target remote sensing image of the smart city and city coordinates of the target remote sensing image by using the unmanned aerial vehicle based on the target flight track, and performing image preprocessing on the target remote sensing image to obtain a preprocessed image.
According to the embodiment of the invention, a large number of unmanned aerial vehicle flight schemes can be provided for urban information acquisition of the smart city by extracting the flight track of the unmanned aerial vehicle for acquiring the remote sensing image of the smart city, so that a more reasonable scheme can be selected. The flight track refers to a path of unmanned aerial vehicle flight for urban aerial photography.
Optionally, the flight track of the unmanned aerial vehicle of the remote sensing image of the smart city is obtained by querying a database of the unmanned aerial vehicle shooting plan.
Furthermore, according to the embodiment of the invention, the target flight path is selected from the flight paths based on the texture quality of the remote sensing image, so that a relatively excellent aerial photographing path can be obtained. The texture quality refers to the definition and the richness of texture information existing in an image.
Optionally, the process of selecting the target flight track from the flight tracks based on the texture quality of the remote sensing images is as follows: identify the image resolution of each remote sensing image, determine its texture quality based on the image resolution, select the remote sensing images whose texture quality reaches a preset requirement, query the shooting sequence of the selected images, and determine the target flight track based on that shooting sequence. The resolution of a remote sensing image can be calculated using image processing software such as Adobe Photoshop.
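For illustration only (the patent does not specify a concrete algorithm), the resolution-based selection described above can be sketched as follows; the function name, the threshold parameter and the index-based representation of the frames are assumptions of this sketch:

```python
def select_target_trajectory(resolutions, shot_order, min_resolution):
    """Select the frames whose resolution (used here as a proxy for texture
    quality) reaches the preset requirement, then order them by their
    original shooting sequence; that ordering defines the target flight track."""
    selected = [i for i, r in enumerate(resolutions) if r >= min_resolution]
    # The shooting sequence of the surviving frames determines the track.
    selected.sort(key=lambda i: shot_order[i])
    return selected
```

For example, `select_target_trajectory([2.0, 0.5, 1.5], [1, 0, 2], 1.0)` keeps frames 0 and 2 and orders them by their shooting sequence.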
According to the embodiment of the invention, the smart city can be subjected to geographic positioning by utilizing the unmanned aerial vehicle to acquire the target remote sensing image of the smart city and the city coordinates of the target remote sensing image based on the target flight track, so that the geographic information of the smart city can be known, and basic data support can be provided for city datamation. The city coordinates refer to coordinates of the unmanned aerial vehicle when the target remote sensing image is shot, which are obtained by positioning through a satellite navigation system, and the coordinates comprise Z-axis height, X-axis coordinates and Y-axis coordinates.
According to the embodiment of the invention, image preprocessing of the target remote sensing image yields a preprocessed image, so that a clearer image can subsequently be recognized more accurately. The preprocessing refers to operations such as screening, filtering and tilt correction performed before the image is used.
As an embodiment of the present invention, the performing image preprocessing on the target remote sensing image to obtain a preprocessed image includes: and performing image filtering on the target remote sensing image by using the following formula to obtain a filtered image:
wherein ω represents the filtered image, M represents the total number of pixels in the target remote sensing image, arg represents the window function, (i, j) represents the pixel to be filtered in the target remote sensing image, and Q(i, j) represents the gray value of the pixel to be filtered,
and performing inclination correction on the filtered image to obtain a preprocessed image.
Optionally, image filtering of the target remote sensing image removes the influence of noise in the image, and tilt correction of the filtered image corrects the distortion of partial areas of the image caused by the shooting angle.
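The patent's exact filter formula is not reproduced legibly in this text, so the following is only a generic windowed filter consistent with the symbols described (a window over gray values Q(i, j)); the window size and the use of a mean are assumptions of this sketch:

```python
import numpy as np

def window_mean_filter(img, k=3):
    """Sliding-window mean filter over a grayscale image.
    Each output pixel is the mean of the k-by-k window of gray values
    Q(i, j) around it, with edge padding at the image border."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out
```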
Further, in yet another optional embodiment of the present invention, the performing tilt correction on the filtered image to obtain a preprocessed image includes: taking an image center point of the filtered image as a coordinate origin, taking an image diagonal point of the filtered image as a plane, constructing a space coordinate system of the filtered image based on the coordinate origin and the plane, and carrying out coordinate rotation on the filtered image by using the following formula to obtain a rotation coordinate point:
wherein A2, B2, C2 represent the rotated coordinate point after coordinate rotation, A1, B1, C1 represent the original spatial coordinate point of the filtered image in the spatial coordinate system, and εA, εB, εC represent the rotation angles in the rotation matrix,
and reversely rotating the inclination angle of the filtered image based on the rotation coordinate point to obtain an inclination correction image.
The spatial coordinate system describes the position and orientation of an object in three-dimensional space. It should be noted that, because of the distortion phenomenon, the image is not a plane, and three-dimensional coordinates are needed to describe the parts that lie beyond the plane.
Optionally, the spatial coordinate system of the filtered image is constructed using the SQL language.
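As a sketch of the coordinate rotation step, the following applies a 3-D rotation built from the three angles εA, εB, εC to a spatial point (A1, B1, C1), giving the rotated point (A2, B2, C2); inverting this rotation then undoes the image tilt. The X-Y-Z Euler convention is an assumption, since the patent's rotation matrix is not reproduced here:

```python
import numpy as np

def rotate_point(p, eps_a, eps_b, eps_c):
    """Rotate the spatial point p = (A1, B1, C1) by the Euler angles
    (eps_A, eps_B, eps_C) about the X, Y and Z axes in turn, returning
    the rotated coordinate point (A2, B2, C2)."""
    ca, sa = np.cos(eps_a), np.sin(eps_a)
    cb, sb = np.cos(eps_b), np.sin(eps_b)
    cc, sc = np.cos(eps_c), np.sin(eps_c)
    rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])   # rotation about X
    ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])   # rotation about Y
    rz = np.array([[cc, -sc, 0], [sc, cc, 0], [0, 0, 1]])   # rotation about Z
    return rz @ ry @ rx @ np.asarray(p, dtype=float)
```

Rotating by zero angles returns the point unchanged, and applying the transpose of the composed matrix reverses the rotation, which is the "reverse rotation of the inclination angle" step.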
S2, detecting a moving target in the preprocessed image, and after the moving target is removed from the preprocessed image, performing image restoration on an image module corresponding to the moving target in the preprocessed image to obtain a restored image.
According to the embodiment of the invention, the targets which are easy to interfere in the image can be identified by detecting the moving targets in the preprocessed image, wherein the moving targets refer to non-fixed buildings such as pedestrians, vehicles and the like in the corresponding urban geography in the image.
As an embodiment of the present invention, the detecting a moving object in the preprocessed image includes: inputting the preprocessed image into a preconfigured convolutional neural network to identify image features of the preprocessed image using the convolutional neural network in combination with the following formula:
where θ represents the vector feature, f represents the activation function, A, B, C represent the length, width and height of the convolution kernel cube in the convolution model, the first factor denotes the calculated value at the (a, b, c) position on the image with the k-th convolution kernel of the i-th layer, the second factor denotes the value at the (α+o, β+p, γ+q) position of the image with the m-th convolution kernel of the (u-1)-th layer, C_ku represents a hyper-parameter of the u-th layer of the convolution kernel, and α, β, γ represent the offsets of the (a, b, c) position,
and identifying image information of the inclination correction image according to the image characteristics, and identifying a corresponding moving target in the image information.
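For a single kernel, the layer formula above reduces to an ordinary 3-D convolution followed by an activation f. A minimal sketch (one kernel, valid padding, ReLU as the assumed activation; the full network of course stacks many such kernels and layers):

```python
import numpy as np

def conv3d_single(volume, kernel, bias=0.0):
    """Valid-mode 3-D convolution of a feature volume with one kernel of
    size (A, B, C), followed by a ReLU activation, mirroring the per-kernel
    term of the convolution formula described in the text."""
    A, B, C = kernel.shape
    da, db, dc = (s - t + 1 for s, t in zip(volume.shape, kernel.shape))
    out = np.empty((da, db, dc))
    for a in range(da):
        for b in range(db):
            for c in range(dc):
                # Sum over the (A, B, C) neighbourhood at offset (a, b, c).
                patch = volume[a:a + A, b:b + B, c:c + C]
                out[a, b, c] = np.sum(patch * kernel) + bias
    return np.maximum(out, 0.0)  # ReLU activation
```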
As an embodiment of the present invention, the performing image restoration on the image module corresponding to the moving object in the preprocessed image to obtain a restored image includes: inquiring a historical image corresponding to the preprocessed image, extracting an area corresponding to the missing part of the preprocessed image in the historical image to obtain a compensation image, calculating pixel difference between the compensation image and the preprocessed image, correcting pixels of the compensation image based on the pixel difference to obtain a corrected image, and performing image missing repair on the missing part in the preprocessed image by using the corrected image to obtain a repair image.
The history image refers to image data of the history aerial photograph of the preprocessed image, and the compensation image refers to an image used for compensating the missing information of the preprocessed image.
Optionally, the historical image corresponding to the preprocessed image is obtained by accessing the aerial photography database corresponding to the preprocessed image. The area corresponding to the missing part of the preprocessed image is extracted from the historical image to obtain the compensation image: the historical image is overlapped with the preprocessed image, and the missing area of the preprocessed image is traced to determine the range of the compensation image. The pixel difference between the compensation image and the preprocessed image is calculated by converting both into gray images, obtaining a compensation gray image and a preprocessed gray image, and computing the pixel difference between the two.
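A minimal sketch of this compensation-based repair, assuming the pixel correction uses the mean gray-level difference over the valid (non-missing) area (the patent does not fix the correction rule); all names are illustrative:

```python
import numpy as np

def repair_with_history(pre, history, missing_mask):
    """Fill the missing region of the preprocessed image from the
    historical aerial image, after shifting the history's gray levels by
    the mean pixel difference measured over the non-missing area."""
    valid = ~missing_mask
    # Pixel difference between the preprocessed image and the compensation map.
    diff = float(np.mean(pre[valid] - history[valid]))
    corrected = history + diff          # pixel correction of the compensation map
    repaired = pre.copy()
    repaired[missing_mask] = corrected[missing_mask]
    return repaired
```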
And S3, based on the city coordinates, performing overlapping segmentation on the overlapping buildings in the repair image to obtain segmented images, and fusing each segmented image in the segmented images to obtain a fused image.
According to the embodiment of the invention, the overlapping buildings in the repair image are subjected to overlapping segmentation based on the city coordinates to obtain the segmented images, so that the parts whose contents overlap between aerial images of the smart city can be removed.
As one embodiment of the present invention, the performing, based on the city coordinates, overlapping segmentation on the overlapping buildings in the repair image to obtain a segmented image includes: converting the repair image into a gray level image, performing feature matching on the gray level image to obtain a feature matching result, determining overlapping buildings in the repair image based on the feature matching result, and performing overlapping segmentation on the overlapping buildings to obtain segmented images.
The gray level image is an image in which the pixel information of a color image is converted into gray levels. Optionally, the feature matching of the gray level image matches the features of two images; a nearest neighbor search algorithm (such as brute-force matching or a kd-Tree) can be used to find, for each feature point, the best matching point in the other image. The overlapping building is subjected to overlapping segmentation to obtain the segmented image by frame-selecting the overlapping building with a matlab tool to obtain a frame selection image, and then cutting the frame selection image with a cropping function in the matlab tool.
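The nearest-neighbour matching mentioned above can be sketched in a few lines of numpy. This is the brute-force variant (a kd-tree would avoid the full distance table for large descriptor sets); function and variable names are illustrative.

```python
import numpy as np

def brute_force_match(desc_a, desc_b):
    """For each descriptor row in `desc_a`, return the index of its nearest
    neighbour in `desc_b` by Euclidean distance -- exhaustive (brute-force)
    nearest-neighbour search over all pairs."""
    # pairwise squared distances, shape (len(desc_a), len(desc_b))
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)
```

With real feature descriptors (e.g. ORB or SIFT vectors) a ratio test on the two nearest distances would normally be added to reject ambiguous matches.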
Furthermore, the embodiment of the invention obtains the fused image by fusing each of the divided images, which can combine scattered aerial images into a complete city image.
As an embodiment of the present invention, the fusing each of the segmented images to obtain a fused image includes: and marking the characteristic points of the segmented image to obtain marking points, constructing a marking conversion matrix of the segmented image, and performing image fusion on the segmented image by using the marking conversion matrix and the marking points to obtain a fusion image.
The feature points refer to the highest point and the lowest point of a segmented region after image segmentation, and the mark conversion matrix refers to a mathematical matrix for realizing fusion between image points.
Further, as another optional embodiment of the present invention, the performing image fusion on the segmented image using the label transformation matrix and the label point to obtain a fused image includes: and performing label conversion on the label points by using the label conversion matrix and combining the following formula to obtain conversion labels:
(α₂', β₂', w)ᵀ = T · (α₁, β₁, 1)ᵀ,  α₂ = α₂'/w,  β₂ = β₂'/w,

T = | μ₁ μ₂ μ₃ |
    | μ₄ μ₅ μ₆ |
    | μ₇ μ₈ 1  |

wherein (α₂, β₂) represents the conversion mark, α₁ represents the first marker point before conversion, β₁ represents the second marker point before conversion, α₂ represents the first marker point after conversion, β₂ represents the second marker point after conversion, μ₁, μ₂ represent the rotation transformation parameters, μ₃, μ₄ represent the translation transformation parameters, μ₅, μ₆ represent the scale transformation parameters, μ₇, μ₈ represent the perspective transformation parameters, and T represents the mark conversion matrix.
And carrying out image fusion on the segmented image by utilizing the conversion mark to obtain a fusion image.
The conversion marks are the fusion mark points used when the images are fused, so as to avoid mismatched correspondence during fusion.
Optionally, the feature point marking of the segmented image is performed with markers constructed from binary codes to obtain the marking points; the mark conversion matrix of the segmented image is constructed in Java; and the image fusion of the segmented images using the conversion marks puts the fusion mark points in the segmented images into one-to-one correspondence to obtain the accurate fusion positions, after which the segmented images are spliced with the opencv tool based on the accurate fusion positions to obtain the fused image.
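The mark conversion step above can be sketched as applying a 3×3 perspective (homography) matrix to the marker points in homogeneous coordinates. The 3×3 form with eight free parameters and a fixed 1 in the last entry is a standard assumption for such a transform, not taken verbatim from the patent.

```python
import numpy as np

def convert_markers(H, points):
    """Apply a 3x3 mark-conversion (homography) matrix H to an (N, 2) array
    of marker points, returning the converted 2-D marker coordinates.
    Homogeneous coordinates are normalised after the multiply, so the last
    row of H may carry perspective parameters."""
    pts = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                 # back to 2-D
```

With four or more matched marker pairs, H itself could be estimated by least squares before being applied this way.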
And S4, identifying a city center point in the fused image, constructing a space three-dimensional coordinate of the smart city based on a first mapping relation between the city center point and the city coordinate, and carrying out coordinate connection on each space three-dimensional coordinate in the space three-dimensional coordinate by utilizing a second mapping relation between the pixel point coordinate and the space three-dimensional coordinate in the fused image to obtain a space three-dimensional structure.
According to the embodiment of the invention, the urban center point in the fusion image is identified, so that foundation construction points can be provided for the smart city in three-dimensional display, and the distribution of corresponding city buildings in the three-dimensional map is more uniform.
As an embodiment of the present invention, the identifying the city center point in the fused image includes: detecting image edge points of the fused image, connecting the image edge points to obtain a closed graph, and calculating city center points of the closed graph by using the following formula:
x' = (1 / s_D) ∬_D x · u(x, y) dx dy,  y' = (1 / s_D) ∬_D y · u(x, y) dx dy,

wherein (x', y') represents the city center point, x' represents the abscissa of the city center point, y' represents the ordinate of the city center point, x represents the variable on the horizontal axis, y represents the variable on the vertical axis, D represents the plane area where the closed figure is located, s_D represents the area of the closed figure, and u(x, y) represents the density.
Optionally, the image edge points of the fused image are detected by the Canny image edge detection technique.
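A discrete numpy version of the centre-point computation described above: with the closed figure given as a boolean mask, the centroid is the (optionally density-weighted) mean of the region's pixel coordinates. This is a sketch under that discretisation assumption; the names are illustrative.

```python
import numpy as np

def region_centroid(mask, density=None):
    """Discrete form of the centre-point formula: the centroid of the closed
    region (True pixels of `mask`), optionally weighted by a density image
    u(x, y). With uniform density this reduces to the mean pixel coordinate
    of the region."""
    ys, xs = np.nonzero(mask)
    u = np.ones(len(xs)) if density is None else density[ys, xs]
    s = u.sum()                     # discrete analogue of the region weight
    return xs @ u / s, ys @ u / s   # (x', y')
```

In a real pipeline the mask would come from the closed graph formed by connecting the Canny edge points.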
Further, in the embodiment of the present invention, the building of the spatial three-dimensional coordinates of the smart city may build a basic frame for the virtual display of the smart city by the first mapping relationship between the city center point and the city coordinates, where the first mapping relationship refers to a corresponding relationship between the two-dimensional coordinates of the image and the three-dimensional coordinates of the real space.
As one embodiment of the present invention, the constructing the spatial three-dimensional coordinates of the smart city based on the first mapping relationship between the city center point and the city coordinates includes: calculating a coordinate center point of the city coordinates by using the following formula:
q̄ = (q₁ + q₂ + … + q_n) / n,

wherein q̄ represents the coordinate center point, q₁, q₂, …, q_n represent the city coordinates, and n represents the total number of city coordinates;
calculating a first mapping matrix corresponding to a first mapping relationship between the city center point and the coordinate center point by using the following formula:
the saidA coordinate center point;
converting pixel points in the fused image into three-dimensional space points by using the first mapping matrix; and carrying out equal proportion reduction on the three-dimensional space points to obtain the space three-dimensional coordinates.
Further, in the embodiment of the present invention, coordinate connection is performed on each spatial three-dimensional coordinate in the spatial three-dimensional coordinates by using the second mapping relationship between the pixel point coordinates and the spatial three-dimensional coordinates in the fused image, so as to obtain a spatial three-dimensional structure, which can construct a basic display layout for the three-dimensional model of the smart city. The second mapping relationship refers to a coordinate correspondence between a spatial three-dimensional coordinate of the smart city and a pixel point coordinate in the fused image.
Optionally, the coordinate connection is performed on each spatial three-dimensional coordinate in the spatial three-dimensional coordinates by using a second mapping relationship between the pixel point coordinates in the fused image and the spatial three-dimensional coordinates, so that when the pixel point coordinates in the fused image belong to the coordinates of the same building, the spatial three-dimensional coordinates corresponding to the pixel point coordinates in the fused image are connected to form the connection of the three-dimensional space coordinates of the same building.
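A hypothetical numpy sketch of the two mapping steps just described: lifting fused-image pixel coordinates into space with a first mapping matrix and shrinking them in equal proportion, then connecting the coordinates that belong to the same building. The 3×3 matrix form and the polyline-style connection are assumptions made for illustration.

```python
import numpy as np

def pixels_to_space(P, pixels, scale):
    """Lift (N, 2) pixel coordinates into 3-D space points with a 3x3 first
    mapping matrix P (a stand-in for the mapping between image coordinates
    and real-space coordinates), then reduce them in equal proportion by
    `scale` to obtain the spatial three-dimensional coordinates."""
    pts = np.hstack([pixels, np.ones((len(pixels), 1))])  # homogeneous 2-D
    return (pts @ P.T) * scale

def connect_same_building(building_ids):
    """Second-mapping step: points labelled with the same building id have
    their 3-D coordinates connected -- returned here as index pairs forming
    one polyline per building."""
    edges = []
    for bid in sorted(set(building_ids)):
        idx = [i for i, b in enumerate(building_ids) if b == bid]
        edges.extend(zip(idx, idx[1:]))
    return edges
```

The edges returned by `connect_same_building` would then be drawn as the wireframe of the spatial three-dimensional structure.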
And S5, performing color rendering on the space three-dimensional structure to obtain a color rendering structure, performing texture rendering on the color rendering structure to obtain a texture rendering structure, and taking the texture rendering structure as a three-dimensional visual model of the smart city.
According to the embodiment of the invention, the color rendering is carried out on the space three-dimensional structure to obtain the color rendering structure, so that the space three-dimensional structure is richer and more full in color and better in visual effect, wherein the color rendering is to enhance the sensory expressive force of the image by changing the color and saturation of the image and applying various visual effects.
Optionally, the color rendering of the spatial three-dimensional structure to obtain the color rendering structure is performed through the filter application, tone adjustment, color adjustment and color equalization functions in the gimp tool.
Furthermore, in the embodiment of the invention, the texture rendering structure is obtained by performing texture rendering on the color rendering structure, so that the space three-dimensional structure can clearly display details and structures, and a texture area and a background area can be clearly separated.
As an embodiment of the present invention, the performing texture rendering on the color rendering structure to obtain a texture rendering structure includes: and carrying out gradual mapping on the color rendering structure to obtain a gradual structure, carrying out concave-convex mapping on the gradual structure to obtain a concave-convex structure, carrying out normal mapping on the concave-convex structure to obtain a normal structure, and carrying out texture projection on the normal structure to obtain a texture rendering structure.
Optionally, performing gradual mapping on the color rendering structure to obtain the gradual structure refers to a rendering method that applies gradient textures to the model surface. Performing concave-convex mapping on the gradual structure to obtain the concave-convex structure refers to a rendering technique that simulates tiny concave-convex details by changing the normal vector of the model surface, and is realized through the mapping tool of 3dmax. Performing normal mapping on the concave-convex structure to obtain the normal structure is a technique for enhancing the model surface details, and is realized through the ps tool. Performing texture projection on the normal structure adds shadow, reflection and other effects to the model surface, thereby enhancing the realism of the rendering, and is realized through the projection function of the threejs tool.
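One algorithmic piece of the texture pipeline above that is easy to show in code is the bump-to-normal step: deriving a unit normal map from a height (concave-convex) map via its gradients. This numpy sketch is tool-agnostic; the function name and `strength` parameter are assumptions.

```python
import numpy as np

def height_to_normal_map(height, strength=1.0):
    """Derive a per-pixel normal map from a bump/height map. The gradients
    of the height field tilt the surface normal away from (0, 0, 1), and the
    result is normalised to unit length."""
    gy, gx = np.gradient(height.astype(float))       # slopes along y and x
    n = np.dstack([-gx * strength, -gy * strength,
                   np.ones_like(height, dtype=float)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)    # unit-length normals
    return n  # shape (H, W, 3)
```

A renderer would remap these vectors from [-1, 1] into RGB to store them as a conventional normal-map texture.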
S6, marking city attributes in the three-dimensional visual model, loading the three-dimensional visual model and the city attributes into a pre-constructed visual platform, and completing three-dimensional virtual display of the smart city by using the visual platform to obtain a three-dimensional virtual display result of the smart city.
According to the embodiment of the invention, by marking the city attributes in the three-dimensional visual model, detailed information of each region, such as building names, street names, rivers and lakes, can be viewed while browsing the city model. The city attributes refer to information describing various data in a city, such as city road names, distances, landmark building names and areas.
As an embodiment of the present invention, the marking city attributes in the three-dimensional visualization model includes: inquiring a geographic space data system of a city corresponding to the three-dimensional visual model, identifying the area name of a corresponding area in the three-dimensional visual model by utilizing the geographic space data system, and marking the city attribute in the three-dimensional visual model through a geographic information system based on the area name.
Wherein, the geospatial data system refers to a big data system for counting geographic information.
Furthermore, according to the embodiment of the invention, loading the three-dimensional visual model and the city attributes into the pre-constructed visual platform allows the data to be displayed in the visual platform. The visual platform refers to a software tool or system for displaying and presenting data, information or knowledge, which can convert complex data into intuitive visual forms such as graphics, charts and maps, so as to help users better understand the data.
Optionally, the loading of the three-dimensional visual model and the city attribute into a pre-built visual platform is realized by connecting a local data system corresponding to the three-dimensional visual model and the city attribute with the pre-built visual platform for local data transmission.
According to the embodiment of the invention, the three-dimensional virtual display of the smart city is completed by using the visual platform, so that a complete visual three-dimensional model of the smart city is obtained from the three-dimensional virtual display result, and various information of the city, such as building layout, city construction progress and city scenery, can be observed from the model.
It can be seen that the embodiment of the invention, by extracting the flight trajectories of the unmanned aerial vehicle for acquiring the remote sensing images of the smart city, can provide a large number of unmanned aerial vehicle flight schemes for the city information acquisition of the smart city, from which a more reasonable scheme can be selected. By selecting the target flight trajectory from the flight trajectories based on the texture quality of the remote sensing images, a better aerial photographing path can be obtained. By detecting the moving targets in the preprocessed image, targets that easily cause interference can be identified; the moving targets refer to non-fixed objects in the corresponding city geography, such as pedestrians and vehicles. By performing overlapping segmentation on the overlapping buildings in the repair image based on the city coordinates, the parts whose contents overlap between aerial images of the smart city can be removed. Further, by identifying the city center point in the fused image, foundation construction points can be provided for the three-dimensional display of the smart city, so that the distribution of corresponding city buildings in the three-dimensional map is more uniform. By constructing the spatial three-dimensional coordinates of the smart city based on the first mapping relationship between the city center point and the city coordinates, a basic frame can be constructed for the virtual display of the smart city, where the first mapping relationship refers to the correspondence between the two-dimensional coordinates of the image and the three-dimensional coordinates of the real space. Further, by performing color rendering on the spatial three-dimensional structure to obtain the color rendering structure, the color of the spatial three-dimensional structure becomes richer and fuller and the visual effect is better; the color rendering enhances the sensory expressiveness of the image by changing its color and saturation and applying various visual effects. By marking the city attributes in the three-dimensional visual model, detailed information of each region, such as building names, street names, rivers and lakes, can be viewed while browsing the city model; the city attributes refer to information describing various data in a city, such as city road names, distances, landmark building names and areas. Therefore, the virtual display method and system for realizing the smart city in a three-dimensional visualization mode provided by the invention can increase the amount of information displayed for the smart city.
FIG. 2 is a functional block diagram of a virtual display system for realizing a smart city in three-dimensional visualization according to the present invention.
The virtual display system 200 for realizing the three-dimensional visualization of the smart city can be installed in an electronic device. Depending on the implemented functions, the virtual display system for implementing the smart city by three-dimensional visualization may include an image preprocessing module 201, an image restoration module 202, an image fusion module 203, a three-dimensional structure construction module 204, a model rendering module 205, and a three-dimensional display module 206. The module of the invention, which may also be referred to as a unit, refers to a series of computer program segments, which are stored in the memory of the electronic device, capable of being executed by the processor of the electronic device and of performing a fixed function.
In the embodiment of the present invention, the functions of each module/unit are as follows:
the image preprocessing module 201 is configured to extract a flight trajectory of an unmanned aerial vehicle for acquiring a remote sensing image of a smart city, select a target flight trajectory from the flight trajectories based on texture quality of the remote sensing image, acquire a target remote sensing image of the smart city and city coordinates of the target remote sensing image by using the unmanned aerial vehicle based on the target flight trajectory, and perform image preprocessing on the target remote sensing image to obtain a preprocessed image;
The image restoration module 202 is configured to detect a moving target in the preprocessed image, and perform image restoration on an image module corresponding to the moving target in the preprocessed image after the moving target is removed from the preprocessed image, so as to obtain a restored image;
the image fusion module 203 is configured to perform overlapping segmentation on the overlapping buildings in the repair image based on the city coordinates to obtain segmented images, and fuse each segmented image in the segmented images to obtain a fused image;
the three-dimensional structure construction module 204 is configured to identify a city center point in the fused image, construct a spatial three-dimensional coordinate of the smart city based on a first mapping relationship between the city center point and the city coordinates, and coordinate each spatial three-dimensional coordinate in the spatial three-dimensional coordinate by using a second mapping relationship between a pixel point coordinate in the fused image and the spatial three-dimensional coordinate to obtain a spatial three-dimensional structure;
the model rendering module 205 is configured to perform color rendering on the spatial three-dimensional structure to obtain a color rendering structure, perform texture rendering on the color rendering structure to obtain a texture rendering structure, and use the texture rendering structure as a three-dimensional visual model of the smart city;
The three-dimensional display module 206 is configured to mark city attributes in the three-dimensional visual model, load the three-dimensional visual model and the city attributes into a pre-constructed visual platform, and complete three-dimensional virtual display of the smart city by using the visual platform to obtain a three-dimensional virtual display result of the smart city.
In detail, the modules in the virtual display system 200 for implementing the three-dimensional visualization in the embodiment of the present invention use the same technical means as the virtual display method for implementing the three-dimensional visualization in the smart city described in fig. 1, and can produce the same technical effects, which are not described herein.
The present invention also provides a storage medium storing a computer program which, when executed by a processor of an electronic device, can implement:
extracting a flight track of an unmanned aerial vehicle for acquiring a remote sensing image of a smart city, selecting a target flight track from the flight track based on the texture quality of the remote sensing image, acquiring a target remote sensing image of the smart city and city coordinates of the target remote sensing image by using the unmanned aerial vehicle based on the target flight track, and performing image preprocessing on the target remote sensing image to obtain a preprocessed image;
Detecting a moving target in the preprocessed image, and after the moving target is removed from the preprocessed image, performing image restoration on an image module corresponding to the moving target in the preprocessed image to obtain a restored image;
based on the city coordinates, overlapping and segmenting the overlapped buildings in the repair image to obtain segmented images, and fusing each segmented image in the segmented images to obtain a fused image;
identifying a city center point in the fused image, constructing a space three-dimensional coordinate of the smart city based on a first mapping relation between the city center point and the city coordinate, and carrying out coordinate connection on each space three-dimensional coordinate in the space three-dimensional coordinate by utilizing a second mapping relation between a pixel point coordinate in the fused image and the space three-dimensional coordinate to obtain a space three-dimensional structure;
performing color rendering on the space three-dimensional structure to obtain a color rendering structure, performing texture rendering on the color rendering structure to obtain a texture rendering structure, and taking the texture rendering structure as a three-dimensional visual model of the smart city;
marking city attributes in the three-dimensional visual model, loading the three-dimensional visual model and the city attributes into a pre-constructed visual platform, and completing three-dimensional virtual display of the smart city by using the visual platform to obtain a three-dimensional virtual display result of the smart city.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus, system and method may be implemented in other manners. For example, the system embodiments described above are merely illustrative, e.g., the division of the modules is merely a logical function division, and other manners of division may be implemented in practice.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units can be realized in the form of hardware, or in the form of hardware together with software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is only a specific embodiment of the invention to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A virtual display method for realizing three-dimensional visualization of a smart city, the method comprising:
extracting a flight track of an unmanned aerial vehicle for acquiring a remote sensing image of a smart city, selecting a target flight track from the flight track based on the texture quality of the remote sensing image, acquiring a target remote sensing image of the smart city and city coordinates of the target remote sensing image by using the unmanned aerial vehicle based on the target flight track, and performing image preprocessing on the target remote sensing image to obtain a preprocessed image;
detecting a moving target in the preprocessed image, and after the moving target is removed from the preprocessed image, performing image restoration on an image module corresponding to the moving target in the preprocessed image to obtain a restored image;
Based on the city coordinates, overlapping and segmenting the overlapped buildings in the repair image to obtain segmented images, and fusing each segmented image in the segmented images to obtain a fused image;
identifying a city center point in the fused image, constructing a space three-dimensional coordinate of the smart city based on a first mapping relation between the city center point and the city coordinate, and carrying out coordinate connection on each space three-dimensional coordinate in the space three-dimensional coordinate by utilizing a second mapping relation between a pixel point coordinate in the fused image and the space three-dimensional coordinate to obtain a space three-dimensional structure;
performing color rendering on the space three-dimensional structure to obtain a color rendering structure, performing texture rendering on the color rendering structure to obtain a texture rendering structure, and taking the texture rendering structure as a three-dimensional visual model of the smart city;
marking city attributes in the three-dimensional visual model, loading the three-dimensional visual model and the city attributes into a pre-constructed visual platform, and completing three-dimensional virtual display of the smart city by using the visual platform to obtain a three-dimensional virtual display result of the smart city.
2. The method of claim 1, wherein the performing image preprocessing on the target remote sensing image to obtain a preprocessed image comprises:
and performing image filtering on the target remote sensing image by using the following formula to obtain a filtered image:
ω = (1 / M) Σ_{(i, j) ∈ arg} Q(i, j),

wherein ω represents the filtered image, M represents the total number of pixel points of the target remote sensing image, arg represents the window function, (i, j) represents the pixel point to be filtered in the target remote sensing image, and Q(i, j) represents the gray value of the pixel to be filtered,
and performing inclination correction on the filtered image to obtain a preprocessed image.
3. The method of claim 1, wherein said tilt correcting the filtered image to obtain a preprocessed image comprises:
taking the image center point of the filtered image as the coordinate origin, and taking the diagonal corner points of the filtered image as a plane;
based on the origin of coordinates and the plane, constructing a space coordinate system of the filtered image, and carrying out coordinate rotation on the filtered image by using the following formula to obtain a rotation coordinate point:
(A₂, B₂, C₂)ᵀ = R(ε_A) · R(ε_B) · R(ε_C) · (A₁, B₁, C₁)ᵀ,

wherein A₂, B₂, C₂ represent the rotated coordinate point after coordinate rotation, A₁, B₁, C₁ represent the original spatial coordinate point of the filtered image in the spatial coordinate system, and ε_A, ε_B, ε_C represent the rotation angles in the rotation matrix,
and reversely rotating the inclination angle of the filtered image based on the rotation coordinate point to obtain an inclination correction image.
4. The method of claim 1, wherein the detecting the moving object in the preprocessed image comprises:
inputting the preprocessed image into a preconfigured convolutional neural network to identify image features of the preprocessed image using the convolutional neural network in combination with the following formula:
θ_ku(a, b, c) = f( Σ_m Σ_{o=1..A} Σ_{p=1..B} Σ_{q=1..C} θ_{m(u-1)}(α+o, β+p, γ+q) + C_ku ),

where θ represents the vector features, f represents the activation function, A, B, C represent the length, width and height of the convolution kernel cube in the convolution model, θ_ku(a, b, c) is the calculated value of the (a, b, c) position on the image with the k-th convolution kernel of the u-th layer, θ_{m(u-1)}(α+o, β+p, γ+q) is the calculated value at the (α+o, β+p, γ+q) position of the image with the m-th convolution kernel of the (u-1)-th layer, C_ku represents a hyper-parameter of the k-th convolution kernel of the u-th layer, and α, β, γ represent the offset of the (a, b, c) position,
and identifying image information of the inclination correction image according to the image characteristics, and identifying a corresponding moving target in the image information.
5. The method according to claim 1, wherein performing image restoration on the image module corresponding to the moving object in the preprocessed image to obtain a restored image comprises:
querying a historical image corresponding to the preprocessed image, and extracting the region of the historical image corresponding to the missing part of the preprocessed image to obtain a compensation image;
calculating the pixel difference between the compensation image and the preprocessed image, and performing pixel correction on the compensation image based on the pixel difference to obtain a correction image;
and repairing the missing part in the preprocessed image by using the correction image to obtain a repaired image.
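The history-based repair step can be sketched as follows; a minimal NumPy version assuming missing pixels are marked as NaN and that "pixel correction" means offsetting the historical patch by the mean difference measured on pixels both images share. The names and the NaN-mask convention are assumptions for illustration.

```python
import numpy as np

def repair_with_history(preprocessed, historical):
    """Fill missing (NaN) pixels of `preprocessed` from `historical`,
    after correcting the historical image by the mean pixel difference
    on the region where both images are defined."""
    missing = np.isnan(preprocessed)
    valid = ~missing
    # pixel difference between the two images where both are defined
    diff = np.mean(preprocessed[valid] - historical[valid])
    corrected = historical + diff  # pixel-corrected compensation image
    repaired = preprocessed.copy()
    repaired[missing] = corrected[missing]
    return repaired
```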
6. The method of claim 1, wherein performing overlap segmentation on the overlapping buildings in the repaired image based on the city coordinates to obtain segmented images comprises:
converting the repaired image into a grayscale image, and performing feature matching on the grayscale image to obtain a feature matching result;
and determining the overlapping buildings in the repaired image based on the feature matching result, and performing overlap segmentation on the overlapping buildings to obtain the segmented images.
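A minimal sketch of the grayscale conversion and a simple matching criterion: the patent does not specify its feature-matching method, so sum-of-squared-differences patch matching stands in for it here; the weights are the standard Rec. 601 luminance coefficients, and all names are illustrative.

```python
import numpy as np

def to_gray(rgb):
    """Luminance-weighted grayscale conversion (Rec. 601 weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def match_patch(gray, patch):
    """Locate `patch` in `gray` by minimum sum of squared differences;
    a stand-in for the feature matching used to find overlapping regions."""
    ph, pw = patch.shape
    best, best_pos = np.inf, (0, 0)
    for i in range(gray.shape[0] - ph + 1):
        for j in range(gray.shape[1] - pw + 1):
            ssd = np.sum((gray[i:i + ph, j:j + pw] - patch) ** 2)
            if ssd < best:
                best, best_pos = ssd, (i, j)
    return best_pos
```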
7. The method of claim 1, wherein fusing each of the segmented images to obtain a fused image comprises:
marking the feature points of the segmented images to obtain marker points;
and constructing a mark conversion matrix of the segmented images, and performing image fusion on the segmented images by using the mark conversion matrix and the marker points to obtain a fused image.
8. The method of claim 7, wherein performing image fusion on the segmented images by using the mark conversion matrix and the marker points to obtain the fused image comprises:
performing mark conversion on the marker points by using the mark conversion matrix in combination with the following formula to obtain conversion marks:
wherein α₁ represents the first marker point before conversion, β₁ represents the second marker point before conversion, α₂ represents the first marker point after conversion, β₂ represents the second marker point after conversion, μ₁, μ₂ represent the rotation transformation parameters, μ₃, μ₄ represent the translation transformation parameters, μ₅, μ₆ represent the scale transformation parameters, and μ₇, μ₈ represent the perspective transformation parameters of the mark conversion matrix,
and performing image fusion on the segmented images by using the conversion marks to obtain the fused image.
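The eight parameters μ₁–μ₈ match a planar perspective (homography) transform. The sketch below applies one such transform to a marker point; the patent's matrix layout is not legible in this text, so the assignment of the rotation, translation, scale, and perspective parameters to matrix entries is an assumption.

```python
import numpy as np

def convert_mark(point, mu):
    """Apply an 8-parameter perspective transform to a marker point
    (alpha1, beta1) -> (alpha2, beta2). mu = (mu1..mu8): mu1, mu2 act as
    rotation entries, mu3, mu4 as translations, mu5, mu6 as scales,
    mu7, mu8 as perspective terms (assumed layout)."""
    m1, m2, m3, m4, m5, m6, m7, m8 = mu
    H = np.array([[m5 * m1, -m5 * m2, m3],
                  [m6 * m2,  m6 * m1, m4],
                  [m7,       m8,      1.0]])
    a1, b1 = point
    v = H @ np.array([a1, b1, 1.0])
    return v[0] / v[2], v[1] / v[2]  # homogeneous divide
```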
9. The method of claim 1, wherein identifying a city center point in the fused image comprises:
detecting image edge points of the fused image, and connecting the image edge points to obtain a closed figure;
calculating the city center point of the closed figure by using the following formula:
wherein (x′, y′) represents the city center point, x′ represents the abscissa of the city center point, y′ represents the ordinate of the city center point, x and y represent the variables on the horizontal and vertical axes, D represents the plane region occupied by the closed figure, s_D represents the area of the closed figure, and u(x, y) represents the density.
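The formula itself is not legible in the source, but the quantities defined above describe a density-weighted centroid over the region D. A discrete NumPy analogue, x′ = Σ x·u / Σ u and y′ = Σ y·u / Σ u over the figure's pixels (names and the boolean-mask convention are illustrative):

```python
import numpy as np

def city_center(mask, density=None):
    """Density-weighted centroid of the closed figure marked by the
    boolean `mask`; uniform density u = 1 when no density map is given."""
    ys, xs = np.nonzero(mask)
    u = np.ones_like(xs, dtype=float) if density is None else density[ys, xs]
    return np.sum(xs * u) / np.sum(u), np.sum(ys * u) / np.sum(u)
```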
10. A virtual display system for realizing a smart city in a three-dimensional visualization mode, the system comprising:
the image preprocessing module is used for extracting flight tracks of an unmanned aerial vehicle used for acquiring remote sensing images of the smart city, selecting a target flight track from the flight tracks based on the texture quality of the remote sensing images, acquiring a target remote sensing image of the smart city and the city coordinates of the target remote sensing image by using the unmanned aerial vehicle based on the target flight track, and performing image preprocessing on the target remote sensing image to obtain a preprocessed image;
the image restoration module is used for detecting a moving target in the preprocessed image and, after removing the moving target from the preprocessed image, performing image restoration on the image block corresponding to the moving target in the preprocessed image to obtain a repaired image;
the image fusion module is used for performing overlap segmentation on the overlapping buildings in the repaired image based on the city coordinates to obtain segmented images, and fusing each of the segmented images to obtain a fused image;
the three-dimensional structure construction module is used for identifying a city center point in the fused image, constructing spatial three-dimensional coordinates of the smart city based on a first mapping relation between the city center point and the city coordinates, and connecting each of the spatial three-dimensional coordinates by using a second mapping relation between the pixel point coordinates in the fused image and the spatial three-dimensional coordinates to obtain a spatial three-dimensional structure;
the model rendering module is used for performing color rendering on the spatial three-dimensional structure to obtain a color rendering structure, performing texture rendering on the color rendering structure to obtain a texture rendering structure, and taking the texture rendering structure as a three-dimensional visual model of the smart city; and
the three-dimensional display module is used for marking the city attributes in the three-dimensional visual model, loading the three-dimensional visual model and the city attributes into a pre-constructed visualization platform, and completing the three-dimensional virtual display of the smart city by using the visualization platform to obtain a three-dimensional virtual display result of the smart city.
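The six modules of claim 10 form a linear pipeline in which each module consumes the previous module's output. A minimal skeleton of that ordering (class and method names are illustrative, not from the patent):

```python
class SmartCityPipeline:
    """Chain the system's modules in claim order: preprocessing, restoration,
    fusion, 3D structure construction, rendering, display."""

    def __init__(self, modules):
        # each module is a callable taking the previous stage's output
        self.modules = modules

    def run(self, remote_sensing_image):
        result = remote_sensing_image
        for module in self.modules:
            result = module(result)
        return result
```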
CN202311718450.XA 2023-12-13 2023-12-13 Virtual display method and system for realizing smart city in three-dimensional visualization mode Pending CN117593465A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311718450.XA CN117593465A (en) 2023-12-13 2023-12-13 Virtual display method and system for realizing smart city in three-dimensional visualization mode


Publications (1)

Publication Number Publication Date
CN117593465A true CN117593465A (en) 2024-02-23

Family

ID=89921946

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311718450.XA Pending CN117593465A (en) 2023-12-13 2023-12-13 Virtual display method and system for realizing smart city in three-dimensional visualization mode

Country Status (1)

Country Link
CN (1) CN117593465A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020211430A1 (en) * 2019-04-16 2020-10-22 广东康云科技有限公司 Smart city system and implementation method therefor
CN113963113A (en) * 2021-10-19 2022-01-21 西安东方宏业科技股份有限公司 Three-dimensional visualization method for urban building
CN115359223A (en) * 2022-08-24 2022-11-18 武汉柏汇达科技有限公司 Real-scene three-dimensional city development display system and method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Yueling, "Research on Urban Planning System Based on Three-Dimensional Virtual Reality Technology" (三维虚拟现实技术的城市规划系统研究), Modern Electronics Technique (现代电子技术), no. 19, 25 September 2020 (2020-09-25) *

Similar Documents

Publication Publication Date Title
Zhou et al. A comprehensive study on urban true orthorectification
US8427505B2 (en) Geospatial modeling system for images and related methods
CN109520500B (en) Accurate positioning and street view library acquisition method based on terminal shooting image matching
CA2662355C (en) Mosaic oblique images and methods of making and using same
US7983474B2 (en) Geospatial modeling system and related method using multiple sources of geographic information
CN110033475B (en) Aerial photograph moving object detection and elimination method based on high-resolution texture generation
CN109598794A (en) The construction method of three-dimension GIS dynamic model
GB2557398A (en) Method and system for creating images
Zhang et al. 3D building modelling with digital map, lidar data and video image sequences
EP2195757A2 (en) Geospatial modeling system providing building generation based upon user input on 3d model and related methods
Javadnejad et al. Dense point cloud quality factor as proxy for accuracy assessment of image-based 3D reconstruction
KR100904078B1 (en) A system and a method for generating 3-dimensional spatial information using aerial photographs of image matching
Pardo-García et al. Measurement of visual parameters of landscape using projections of photographs in GIS
CN115375857B (en) Three-dimensional scene reconstruction method, device, equipment and storage medium
CN108629742B (en) True ortho image shadow detection and compensation method, device and storage medium
CN114782824A (en) Wetland boundary defining method and device based on interpretation mark and readable storage medium
CN117115243B (en) Building group outer facade window positioning method and device based on street view picture
Hu et al. Building modeling from LiDAR and aerial imagery
Yoo et al. True orthoimage generation by mutual recovery of occlusion areas
CN105631849B (en) The change detecting method and device of target polygon
Deng et al. Automatic true orthophoto generation based on three-dimensional building model using multiview urban aerial images
CN115631317B (en) Tunnel lining ortho-image generation method and device, storage medium and terminal
Zhou et al. True orthoimage generation in urban areas with very tall buildings
CN117593465A (en) Virtual display method and system for realizing smart city in three-dimensional visualization mode
CN113487741B (en) Dense three-dimensional map updating method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination