CN108766123A - A kind of urban plan model methods of exhibiting based on virtual reality - Google Patents

A kind of urban plan model methods of exhibiting based on virtual reality Download PDF

Info

Publication number
CN108766123A
Authority
CN
China
Prior art keywords
image
images
layer
virtual reality
exposure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810416086.4A
Other languages
Chinese (zh)
Inventor
陶颖
李淑
贾海艳
于泉城
马鹏超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huanggang Polytechnic College
Original Assignee
Huanggang Polytechnic College
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huanggang Polytechnic College filed Critical Huanggang Polytechnic College
Priority to CN201810416086.4A priority Critical patent/CN108766123A/en
Publication of CN108766123A publication Critical patent/CN108766123A/en
Legal status: Pending

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00: Teaching not covered by other main groups of this subclass
    • G09B19/18: Book-keeping or economics

Abstract

The invention belongs to the technical field of three-dimensional model display and discloses a virtual-reality-based method for exhibiting urban planning models. The system is provided with a projection platform connected to the top of the device; a projection head is installed below the projection platform and is connected to a computer terminal by a data line, and the computer terminal is connected to a control panel by a data line. The processing routine of the computer terminal comprises: a user interface layer, a data processing and analysis layer, a model construction layer, a model treatment layer, and an image processing layer; the model construction layer includes AutoCAD three-dimensional modeling and the ObjectARX data processing package. The invention can explicitly analyze and process the information input by the user, analyze the input plane geometric figures, assemble the corresponding three-dimensional model, and display the three-dimensional model on the projection platform through the projection head.

Description

A kind of urban plan model methods of exhibiting based on virtual reality
Technical field
The invention belongs to the technical field of three-dimensional model display, and more particularly to a virtual-reality-based urban planning model display method.
Background art
Urban planning is the planning of city construction: it studies the future development of the city, its rational layout, and the comprehensive arrangement of every engineering project within it. It is the blueprint for urban development over a given period, an important component of city management, the basis for construction management, and the premise of the three stages of urban planning, urban construction, and city operation.
Urban planning takes a development-oriented view, scientific appraisal, and expert decision-making as its premise, and plans the development of the urban economic structure, spatial structure, and social structure, usually including district-level plans of the city. It plays an important role in guiding and standardizing urban construction, constitutes the preparatory work for the overall management of the urban area, and is the starting point of city management. Traditional display devices use two-dimensional display, which to a certain degree imposes visual limitations on the observer and cannot achieve the effect of scene simulation.
In conclusion, the problem with the existing technology is that two-dimensional display cannot achieve the simulation effect of a real scene.
Summary of the invention
In view of the problems in the existing technology, the present invention provides a virtual-reality-based urban planning model exhibition method.
The invention is realized as follows: a virtual-reality-based urban planning model exhibition system comprises a control panel, a projection head, a holder, a computer terminal, a projection platform, a user interface layer, a data management and analysis layer, a model construction layer, a model treatment layer, and an image processing layer.
The projection platform is installed at the top of the device, and the projection head is attached below it; the computer terminal is embedded in the front end of the device. The user interface layer connects to the data analysis and processing layer, which connects to the model construction layer; the model construction layer connects to the model treatment layer, which connects to the image processing layer. The model construction layer includes AutoCAD three-dimensional modeling and the ObjectARX data processing package.
The method by which the projection platform obtains and fuses two images with different exposure times includes the following steps:
S1: label the image with the longer exposure time as the H image and the image with the shorter exposure time as the L image;
S2: obtain the YCbCr three-channel components of the H image and the L image respectively, and compute the gradient of each component to obtain the three-channel gradient value at each pixel position of the H image and the L image;
S3: compare, one pixel position at a time, the gradients of the same component in the H image and the L image obtained in step S2, and modify the weights accordingly to obtain the weight matrices corresponding to the H and L images. The comparison matches GYH(m, n) with GYL(m, n), GCbH(m, n) with GCbL(m, n), and GCrH(m, n) with GCrL(m, n) for identical m and n, where m denotes the m-th row and n the n-th column of image H or image L. When modifying the weights: if the gradient difference between the two images is within 1/3 of the maximum gradient difference, both take the same weight of 0.5; conversely, when the gradient difference exceeds 1/3 of the maximum gradient difference, the larger gradient value is assigned a weight greater than 0.5 and the smaller gradient value a weight less than 0.5. Finally, the weight matrices YA(m, n), CbA(m, n), CrA(m, n) and YB(m, n), CbB(m, n), CrB(m, n) corresponding to the two images are obtained;
S4: multiply the pixel value of each YCbCr channel component of the H image and the L image at each pixel position by its corresponding weight;
S5: sum the products obtained in S4 to yield the fused three-channel components, which are combined into the new image.
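Steps S1 to S5 can be sketched in Python with NumPy as follows. This is a minimal illustration, not the patented implementation: the inputs are assumed to already be float YCbCr arrays, and the off-threshold weights 0.75/0.25 are an illustrative choice, since the text only requires them to lie above and below 0.5.

```python
import numpy as np

def fuse_exposures_ycbcr(h_img, l_img):
    """Gradient-weighted fusion of a long-exposure (H) and short-exposure (L)
    image, both float arrays of shape (rows, cols, 3) in YCbCr.
    Sketch of steps S2-S5; the 1/3 threshold and the 0.5 base weight follow
    the text, the 0.75/0.25 off-threshold weights are illustrative."""
    fused = np.empty_like(h_img, dtype=np.float64)
    for c in range(3):                      # Y, Cb, Cr channels
        # S2: per-pixel gradient magnitude of each channel
        gh = np.hypot(*np.gradient(h_img[..., c].astype(np.float64)))
        gl = np.hypot(*np.gradient(l_img[..., c].astype(np.float64)))
        diff = np.abs(gh - gl)
        max_diff = diff.max() if diff.max() > 0 else 1.0
        # S3: equal weights (0.5) where the gradient difference is small;
        # otherwise favour the image with the larger gradient
        wh = np.where(diff <= max_diff / 3.0, 0.5,
                      np.where(gh > gl, 0.75, 0.25))
        wl = 1.0 - wh
        # S4 + S5: weighted sum of the two images, channel by channel
        fused[..., c] = wh * h_img[..., c] + wl * l_img[..., c]
    return fused
```

Because the two weights at each position sum to 1, the fused pixel always lies between the corresponding H and L pixel values.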
Further, the control panel is embedded in the front end of the device.
Further, the computer terminal is connected to the projection head by a data line.
Further, the enhancement of the two-dimensional image includes: gray level correction, grayscale transformation, and low-pass filtering.
Further, the gray level correction is as follows:
Assume that the gray level of the image under uniform exposure is f(x, y), and the gray level of the image under non-uniform exposure is:
g(x, y) = e(x, y) × f(x, y);
where e(x, y) describes the non-uniformity of the exposure. To determine e(x, y), an image of a uniform surface of known brightness can be used to calibrate the image recording system. If the gray level of this uniform scene under uniform exposure is the constant C, and the recorded image of this uniform surface is g(x, y), then:
e(x, y) = g(x, y)/C;
In this way, any image acquired by the system can be corrected according to e(x, y).
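A minimal sketch of this flat-field correction: the exposure non-uniformity e(x, y) is estimated from the recorded image of the uniform surface, then divided out. The function name and the default C = 255.0 are illustrative choices not fixed by the text.

```python
import numpy as np

def correct_nonuniform_exposure(g, flat_field, C=255.0):
    """Gray level correction via flat-field calibration.
    g: observed image, g(x, y) = e(x, y) * f(x, y).
    flat_field: recorded image of a uniform surface whose true gray level is C.
    Returns the corrected image f(x, y) = g(x, y) / e(x, y)."""
    e = flat_field.astype(np.float64) / C      # e(x, y) = g_flat(x, y) / C
    e = np.where(e == 0, 1e-12, e)             # guard against division by zero
    return g.astype(np.float64) / e
```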
Further, the grayscale transformation includes:
If the gray level at pixel (x, y) in the original image is f(x, y), then through a mapping function T the gray level of the generated image is g(x, y), i.e.:
g(x, y) = T[f(x, y)];
Linear grayscale transformation:
Assume that the grayscale range of the original image f(x, y) is [a, b] and the grayscale range of the transformed image g(x, y) is [c, d]; then:
g(x, y) = [(d - c)/(b - a)] × [f(x, y) - a] + c.
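The linear grayscale transformation maps the input range [a, b] onto the output range [c, d]; a minimal NumPy sketch, assuming the standard linear stretch g = (d - c)/(b - a) × (f - a) + c (the function name is an illustrative choice):

```python
import numpy as np

def linear_gray_transform(f, a, b, c, d):
    """Linear grayscale transformation: maps input gray range [a, b]
    onto output gray range [c, d] via
        g(x, y) = (d - c) / (b - a) * (f(x, y) - a) + c."""
    return (d - c) / (b - a) * (f.astype(np.float64) - a) + c
```

For example, stretching an image whose gray levels span [50, 200] to the full [0, 255] range maps 50 to 0, 125 to 127.5, and 200 to 255.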
Nonlinear grayscale transformation:
When certain nonlinear functions, such as the logarithm or the exponential function, are used as the mapping function, a nonlinear transformation of the image grayscale can be achieved. The general formula of the logarithmic transformation is:
g(x, y) = a + ln[f(x, y) + 1]/(b × ln c);
where a, b, and c are adjustable parameters. This transformation can be used when it is desired to apply a larger stretch to the low-gray region of the image while compressing the high-gray region.
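The logarithmic transformation can be sketched as follows, assuming the common general form g = a + ln(f + 1)/(b × ln c); the default parameter values are illustrative, since the text leaves a, b, and c adjustable:

```python
import numpy as np

def log_gray_transform(f, a=0.0, b=1.0, c=np.e):
    """General logarithmic grayscale transformation
        g(x, y) = a + ln(f(x, y) + 1) / (b * ln(c)),
    which stretches low gray levels and compresses high ones.
    The +1 keeps the logarithm defined at f = 0."""
    return a + np.log(f.astype(np.float64) + 1.0) / (b * np.log(c))
```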
Further, a low-pass filtering method is used to smooth the image; its mathematical expression is:
G(u, v) = H(u, v) F(u, v);
where:
F(u, v) is the Fourier spectrum of the image;
H(u, v) is the transfer function, i.e. the spectral response, of the low-pass filter.
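The frequency-domain relation G(u, v) = H(u, v) F(u, v) can be demonstrated with an ideal low-pass transfer function; both the ideal filter shape and the cutoff value are illustrative choices, since the text does not fix a particular H(u, v):

```python
import numpy as np

def ideal_lowpass(img, cutoff=0.1):
    """Frequency-domain smoothing G(u, v) = H(u, v) * F(u, v).
    img: 2-D grayscale image; cutoff: fraction of the maximum frequency
    radius passed by the ideal low-pass transfer function H."""
    F = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))  # F(u, v)
    rows, cols = img.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    radius = np.hypot(u[:, None], v[None, :])                 # distance from DC
    H = (radius <= cutoff * radius.max()).astype(np.float64)  # H(u, v)
    G = H * F                                                 # G(u, v)
    return np.real(np.fft.ifft2(np.fft.ifftshift(G)))
```

Because the filter passes the DC component unchanged, a constant image comes back unmodified, while high-frequency detail is suppressed.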
The advantages and positive effects of the present invention are as follows. The invention can display an urban planning three-dimensional model comprehensively: when a model of the city needs to be shown, only the data corresponding to the two-dimensional drawings need to be input into the system, and a lifelike three-dimensional model of the city can be built. The invention combines CAD, 3D, and VR technology to construct a three-dimensional stereo model, using object-oriented three-dimensional modeling technology and virtual reality technology, which greatly improves design efficiency and supports good spatial relationship analysis; the animation output by the image processing layer and the projection head expresses the designer's intent, thereby advancing urban planning from the two-dimensional drawing stage to the three-dimensional simulation stage. By using gray level correction, grayscale transformation, and low-pass filtering, the invention can improve image quality and make the images easier for the computer to process.
Description of the drawings
Fig. 1 is a structural schematic diagram of the virtual-reality-based urban planning model provided by an embodiment of the present invention.
Fig. 2 is a flow chart of data processing by the computer terminal of the virtual-reality-based urban planning model exhibition method provided by an embodiment of the present invention.
In the figures: 1, control panel; 2, projection head; 3, holder; 4, computer terminal; 5, projection platform; 6, user interface layer; 7, data management and analysis layer; 8, model construction layer; 9, model treatment layer; 10, image processing layer.
Detailed description of the embodiments
In order to further explain the content, features, and effects of the present invention, the following embodiments are given and described in detail with reference to the accompanying drawings.
The structure of the present invention is explained in detail below with reference to the drawings.
As shown in Fig. 1, the virtual-reality-based urban planning model structure provided by an embodiment of the present invention includes: control panel 1, projection head 2, holder 3, computer terminal 4, projection platform 5, user interface layer 6, data management and analysis layer 7, model construction layer 8, model treatment layer 9, and image processing layer 10.
The projection platform 5 is installed at the top of the device, and projection head 2 is attached below it; computer terminal 4 is embedded in the front end of the device. User interface layer 6 connects to data analysis and processing layer 7, which connects to model construction layer 8; model construction layer 8 connects to model treatment layer 9, which connects to image processing layer 10. The model construction layer includes AutoCAD three-dimensional modeling and the ObjectARX data processing package.
The method by which the projection platform obtains and fuses two images with different exposure times includes the following steps:
S1: label the image with the longer exposure time as the H image and the image with the shorter exposure time as the L image;
S2: obtain the YCbCr three-channel components of the H image and the L image respectively, and compute the gradient of each component to obtain the three-channel gradient value at each pixel position of the H image and the L image;
S3: compare, one pixel position at a time, the gradients of the same component in the H image and the L image obtained in step S2, and modify the weights accordingly to obtain the weight matrices corresponding to the H and L images. The comparison matches GYH(m, n) with GYL(m, n), GCbH(m, n) with GCbL(m, n), and GCrH(m, n) with GCrL(m, n) for identical m and n, where m denotes the m-th row and n the n-th column of image H or image L. When modifying the weights: if the gradient difference between the two images is within 1/3 of the maximum gradient difference, both take the same weight of 0.5; conversely, when the gradient difference exceeds 1/3 of the maximum gradient difference, the larger gradient value is assigned a weight greater than 0.5 and the smaller gradient value a weight less than 0.5. Finally, the weight matrices YA(m, n), CbA(m, n), CrA(m, n) and YB(m, n), CbB(m, n), CrB(m, n) corresponding to the two images are obtained;
S4: multiply the pixel value of each YCbCr channel component of the H image and the L image at each pixel position by its corresponding weight;
S5: sum the products obtained in S4 to yield the fused three-channel components, which are combined into the new image.
The enhancement of the two-dimensional image includes: gray level correction, grayscale transformation, and low-pass filtering.
The gray level correction is as follows:
Assume that the gray level of the image under uniform exposure is f(x, y), and the gray level of the image under non-uniform exposure is:
g(x, y) = e(x, y) × f(x, y);
where e(x, y) describes the non-uniformity of the exposure. To determine e(x, y), an image of a uniform surface of known brightness can be used to calibrate the image recording system. If the gray level of this uniform scene under uniform exposure is the constant C, and the recorded image of this uniform surface is g(x, y), then:
e(x, y) = g(x, y)/C;
In this way, any image acquired by the system can be corrected according to e(x, y).
The grayscale transformation includes:
If the gray level at pixel (x, y) in the original image is f(x, y), then through a mapping function T the gray level of the generated image is g(x, y), i.e.:
g(x, y) = T[f(x, y)];
Linear grayscale transformation:
Assume that the grayscale range of the original image f(x, y) is [a, b] and the grayscale range of the transformed image g(x, y) is [c, d]; then:
g(x, y) = [(d - c)/(b - a)] × [f(x, y) - a] + c.
Nonlinear grayscale transformation:
When certain nonlinear functions, such as the logarithm or the exponential function, are used as the mapping function, a nonlinear transformation of the image grayscale can be achieved. The general formula of the logarithmic transformation is:
g(x, y) = a + ln[f(x, y) + 1]/(b × ln c);
where a, b, and c are adjustable parameters. This transformation can be used when it is desired to apply a larger stretch to the low-gray region of the image while compressing the high-gray region.
A low-pass filtering method is used to smooth the image; its mathematical expression is:
G(u, v) = H(u, v) F(u, v);
where:
F(u, v) is the Fourier spectrum of the image;
H(u, v) is the transfer function, i.e. the spectral response, of the low-pass filter.
In use, the designer inputs the design data into computer terminal 4 through control panel 1. The computer begins processing the data, analyzes it through data management and analysis layer 7, constructs the corresponding mathematical logic and geometric figures, and produces the corresponding three-dimensional model through AutoCAD three-dimensional modeling and ObjectARX data processing. After the preliminary model is made, it enters model treatment layer 9, which further modifies and processes it; once modification is complete, the model is transferred to image processing layer 10, which converts it into the corresponding image and projects it onto projection platform 5 through projection head 2. The whole process can be monitored on the display screen of computer terminal 4, and manual modification is possible at any point.
The above is only a preferred embodiment of the present invention and is not intended to limit the present invention in any form. Any simple modification, equivalent variation, or alteration made to the above embodiment according to the technical essence of the invention falls within the scope of the technical solution of the present invention.

Claims (7)

1. A virtual-reality-based urban planning model display system, characterized in that the virtual-reality-based urban planning model display system is provided with:
a projection platform;
the projection platform is installed at the top of the device, and a projection head is attached below the projection platform; the computer terminal is embedded in the front end of the device; the user interface layer connects to the data analysis and processing layer, the data analysis and processing layer connects to the model construction layer, the model construction layer connects to the model treatment layer, and the model treatment layer connects to the image processing layer; the model construction layer includes AutoCAD three-dimensional modeling and the ObjectARX data processing package;
the method by which the projection platform obtains and fuses two images with different exposure times includes the following steps:
S1: label the image with the longer exposure time as the H image and the image with the shorter exposure time as the L image;
S2: obtain the YCbCr three-channel components of the H image and the L image respectively, and compute the gradient of each component to obtain the three-channel gradient value at each pixel position of the H image and the L image;
S3: compare, one pixel position at a time, the gradients of the same component in the H image and the L image obtained in step S2, and modify the weights accordingly to obtain the weight matrices corresponding to the H and L images; the comparison matches GYH(m, n) with GYL(m, n), GCbH(m, n) with GCbL(m, n), and GCrH(m, n) with GCrL(m, n) for identical m and n, where m denotes the m-th row and n the n-th column of image H or image L; when modifying the weights, if the gradient difference between the two images is within 1/3 of the maximum gradient difference, both take the same weight of 0.5; conversely, when the gradient difference exceeds 1/3 of the maximum gradient difference, the larger gradient value is assigned a weight greater than 0.5 and the smaller gradient value a weight less than 0.5; finally, the weight matrices YA(m, n), CbA(m, n), CrA(m, n) and YB(m, n), CbB(m, n), CrB(m, n) corresponding to the two images are obtained;
S4: multiply the pixel value of each YCbCr channel component of the H image and the L image at each pixel position by its corresponding weight;
S5: sum the products obtained in S4 to yield the fused three-channel components, which are combined into the new image.
2. The virtual-reality-based urban planning model display system of claim 1, characterized in that the control panel is embedded in the front end of the device.
3. The virtual-reality-based urban planning model display system of claim 1, characterized in that the computer terminal is connected to the projection head by a data line.
4. The virtual-reality-based urban planning model display system of claim 1, characterized in that the enhancement of the two-dimensional image includes: gray level correction, grayscale transformation, and low-pass filtering.
5. The virtual-reality-based urban planning model display system of claim 4, characterized in that the gray level correction is as follows:
assume that the gray level of the image under uniform exposure is f(x, y), and the gray level of the image under non-uniform exposure is:
g(x, y) = e(x, y) × f(x, y);
where e(x, y) describes the non-uniformity of the exposure; to determine e(x, y), an image of a uniform surface of known brightness can be used to calibrate the image recording system; if the gray level of this uniform scene under uniform exposure is the constant C, and the recorded image of this uniform surface is g(x, y), then:
e(x, y) = g(x, y)/C;
in this way, any image acquired by the system can be corrected according to e(x, y).
6. The virtual-reality-based urban planning model display system of claim 4, characterized in that the grayscale transformation includes:
if the gray level at pixel (x, y) in the original image is f(x, y), then through a mapping function T the gray level of the generated image is g(x, y), i.e.:
g(x, y) = T[f(x, y)];
linear grayscale transformation:
assume that the grayscale range of the original image f(x, y) is [a, b] and the grayscale range of the transformed image g(x, y) is [c, d]; then:
g(x, y) = [(d - c)/(b - a)] × [f(x, y) - a] + c;
nonlinear grayscale transformation:
when nonlinear functions such as the logarithm or the exponential function are used as the mapping function, a nonlinear transformation of the image grayscale is realized; the general formula of the logarithmic transformation is:
g(x, y) = a + ln[f(x, y) + 1]/(b × ln c);
where a, b, and c are adjustable parameters; this transformation can be used when it is desired to apply a larger stretch to the low-gray region of the image while compressing the high-gray region.
7. The virtual-reality-based urban planning model display system of claim 4, characterized in that a low-pass filtering method is used to smooth the image, whose mathematical expression is:
G(u, v) = H(u, v) F(u, v);
where:
F(u, v) is the Fourier spectrum of the image;
H(u, v) is the transfer function, i.e. the spectral response, of the low-pass filter.
CN201810416086.4A 2018-05-03 2018-05-03 A kind of urban plan model methods of exhibiting based on virtual reality Pending CN108766123A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810416086.4A CN108766123A (en) 2018-05-03 2018-05-03 A kind of urban plan model methods of exhibiting based on virtual reality


Publications (1)

Publication Number Publication Date
CN108766123A true CN108766123A (en) 2018-11-06

Family

ID=64009630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810416086.4A Pending CN108766123A (en) 2018-05-03 2018-05-03 A kind of urban plan model methods of exhibiting based on virtual reality

Country Status (1)

Country Link
CN (1) CN108766123A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003096155A2 (en) * 2002-05-07 2003-11-20 Paul Resnick Children's computer banking system
US20070042329A1 (en) * 2005-08-18 2007-02-22 Diane Curtin Personal organizer method and system
CN104301636A (en) * 2014-10-30 2015-01-21 西安电子科技大学 Low-complexity and high-efficiency synthesis method for high-dynamic digital image
CN104834379A (en) * 2015-05-05 2015-08-12 江苏卡罗卡国际动漫城有限公司 Repair guide system based on AR (augmented reality) technology


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李苏旻: "Research on the Application of Virtual Reality Technology in Architecture and Urban Planning" (虚拟现实技术在建筑与城市规划中的应用研究), China Master's Theses Full-text Database, Information Science and Technology Series *
王晓红: "Research and Implementation of a Multi-channel Projection System for Urban Planning and Architecture Virtual Reality" (城市规划与建筑虚拟现实多通道投影系统的研究与实现), Anhui Architecture (《安徽建筑》) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20181106)