CN113377892A - Dynamic visual perception information acquisition method for urban space form evaluation - Google Patents

Dynamic visual perception information acquisition method for urban space form evaluation

Info

Publication number
CN113377892A
Authority
CN
China
Prior art keywords
viewpoint
urban
data point
city
point set
Prior art date
Legal status
Granted
Application number
CN202110766541.5A
Other languages
Chinese (zh)
Other versions
CN113377892B (en)
Inventor
王建国 (Wang Jianguo)
金欣 (Jin Xin)
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University
Priority to CN202110766541.5A
Publication of CN113377892A
Application granted
Publication of CN113377892B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; database structures therefor; file system structures therefor
    • G06F16/20 Information retrieval of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G06F16/21 Design, administration or maintenance of databases
    • G06F16/211 Schema design and management
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q50/00 ICT specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/26 Government or public services
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models


Abstract

The invention discloses a dynamic visual perception information acquisition method for urban spatial form evaluation. Dynamic visual perception information at the urban spatial form level is characterized as the set of vector data points of the urban contour lines observed from continuous viewpoints along a target path under different modes of motion. The method comprises the following steps: first, a three-dimensional spatial model of the current urban situation is constructed; next, continuous viewpoints are created along the target path; the field of view at each viewpoint is then obtained from the direction of travel and the motion-state elements of a person moving along the path; the vector data point set corresponding to the urban contour line within the field of view of the viewpoint is then extracted; finally, the dynamic visual perception information at the urban spatial form level along the path is acquired by cyclic processing. By simulating the working state of the human eye under different modes of motion, the method obtains the vector data point set corresponding to the urban contour line observed from continuous viewpoints along the target path, providing an objective data basis and effective scientific support for urban spatial form evaluation.

Description

Dynamic visual perception information acquisition method for urban space form evaluation
Technical Field
The invention relates to the technical field of urban planning, in particular to a dynamic visual perception information acquisition method for urban spatial form evaluation.
Background
Urban spatial form is the three-dimensional geometry of the urban physical spatial environment, comprising buildings and the external open spaces they enclose. Vision is an important channel through which people directly perceive urban spatial form, and dynamic viewing is a common way of experiencing and perceiving it. Compared with static visual perception, dynamic visual perception emphasizes a continuous visual cognitive process; moreover, dynamic visual perception differs somewhat under the guidance of different types of motion.
The dynamic visual perception information at the urban spatial form level is characterized as the set of vector data points of the urban contour line observed from continuous viewpoints along a target path under different modes of motion [1]. This vector data point set is the core informational basis for urban spatial form evaluation; it is characterized by large sample sizes, a complex computation process, and difficulty of repeated acquisition.
Existing quantitative methods for dynamic visual perception mostly rely on evaluation of intrinsic characteristics [2], expert scoring [3], multi-participant scoring [4], and the like. These methods evaluate urban spatial form through the visual perception described at one or several static viewpoints, and the corresponding information is acquired mainly by extracting feature information from three-dimensional models or photographs [5]. However, such methods rely on visual perception information acquired at only a limited number of viewpoints; they ignore the continuity and dynamic process of visual perception in a state of motion and cannot fully reflect the visual perception provided by continuous dynamic viewpoints during dynamic viewing. On the other hand, although some research touches on acquiring dynamic visual perception information under the guidance of different motions [6], it depends too heavily on the subjective evaluation of researchers and lacks scientific rigor, universality, and repeatability.
With the continuing strengthening and development of human-centered planning and design concepts, human dynamic visual perception is gradually becoming one of the important factors considered in urban spatial planning, and a dynamic visual perception information acquisition method oriented to urban spatial form evaluation is urgently needed to provide basic vector information for such evaluation.
References:
[1] Panerai P, Demorgon M, Depaule J-C. Analyse urbaine [M]. Marseilles: Parenthèses, 1999.
[2] Stamps A E. Fractals, skylines, nature and beauty [J]. Landscape and Urban Planning, 2002, 60(3): 163-184.
[3] Moore T, Hunt W. Ecosystem service provision by stormwater wetlands and ponds - a means for evaluation? [J]. Water Research, 2012, 46(20): 6811-6823.
[4] de Vries S, de Groot M, Boers J. Eyesores in sight: Quantifying the impact of man-made elements on the scenic beauty of Dutch landscapes [J]. Landscape and Urban Planning, 2012, 105(1-2): 118-127.
[5] Min W, Mei S, Liu L, et al. Multi-Task Deep Relative Attribute Learning for Visual Urban Perception [J]. IEEE Transactions on Image Processing, 2020, 29: 657-669.
[6] Lai Y, Kontokosta C E. Quantifying place: Analyzing the drivers of pedestrian activity in dense urban environments [J]. Landscape and Urban Planning, 2018, 180: 166-178.
Disclosure of the Invention
The purpose of the invention is as follows: aiming at the defects of the prior art, the invention provides a dynamic visual perception information acquisition method oriented to urban spatial form evaluation, which represents the dynamic visual perception of continuous viewpoints along a target path as information by simulating the working state of the human eye under different modes of motion, acquires the vector data point set of the urban contour lines observed from the continuous viewpoints, and provides an objective data basis and effective scientific support for urban spatial form evaluation.
The technical scheme is as follows: the invention provides a dynamic visual perception information acquisition method oriented to urban spatial form evaluation, which specifically comprises the following steps:
step 1, collecting three-dimensional space form basic information of the current city situation, and establishing a city space form database, wherein the method specifically comprises the following steps:
step 1.1, acquiring three-dimensional space form basic information of the current city situation according to a mapping file, data published by a national resource department and open source data, and constructing a city digital terrain model, a city building model polyhedron layer and a path layer;
and step 1.2, importing the constructed urban digital terrain model, the polyhedral layer of the urban building model and the path layer into geographic information system software (ArcGIS) and unifying a coordinate system.
Step 2, create continuous viewpoints along arbitrary paths under the guidance of different motions, specifically as follows:
Step 2.1, obtain the continuous viewpoint spacing S from the following formula:
S = V * Ts
where V is the speed of movement in the given mode of motion and Ts is the effective sampling period of the continuous viewpoints, based on the persistence-of-vision effect of the human eye;
Step 2.2, use geographic information system software to create N two-dimensional point elements at equal intervals of S along the path, convert them into three-dimensional point elements according to the viewpoint height h, and form the continuous three-dimensional viewpoint set OP = {(x_i, y_i, h), i ∈ N}, where (x_i, y_i) are the coordinates of the i-th two-dimensional point element. The number N of two-dimensional point elements created at equal intervals along the path is determined by:
N = [L / S]
where L is the path length and [ ] denotes the rounding operation; the calculation of N takes the integer part.
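As a concrete illustration, the two formulas above can be sketched in Python. This is a minimal sketch, not part of the patent: the function name `make_viewpoints` and the polyline-walking logic are illustrative assumptions, and the path is assumed to be a list of (x, y) vertices in a projected metric coordinate system.

```python
# Sketch: compute the viewpoint spacing S = V * Ts and place N = [L / S]
# equally spaced 3D viewpoints along a polyline path at viewpoint height h.
import math

def make_viewpoints(path_xy, v_mps, ts_s, h):
    """Return (S, N, OP) where OP = [(x_i, y_i, h), ...]."""
    s = v_mps * ts_s                                  # spacing S = V * Ts
    seg = [math.dist(a, b) for a, b in zip(path_xy, path_xy[1:])]
    length = sum(seg)                                 # path length L
    n = int(length // s)                              # N = [L / S], integer part
    points = []
    for i in range(n):
        d = i * s                                     # distance along the path
        # walk the polyline to find the segment containing distance d
        for (a, b), sl in zip(zip(path_xy, path_xy[1:]), seg):
            if d <= sl:
                t = d / sl
                points.append((a[0] + t * (b[0] - a[0]),
                               a[1] + t * (b[1] - a[1]), h))
                break
            d -= sl
    return s, n, points
```

In practice the patent performs this step with GIS software; the sketch only shows the arithmetic of the two formulas.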
Step 3, obtain the field of view of each viewpoint from its direction of travel and motion-state elements, as follows:
Step 3.1, from the current viewpoint OP_i and the next viewpoint OP_{i+1}, obtain the travel azimuth Az, which serves as the reference azimuth of the central line of sight while travelling;
Step 3.2, determine the central line-of-sight offset angle β to simulate a person turning their gaze while travelling, and determine the field of view [Azlow, Azup] at the viewpoint from the horizontal visual angle α:
Azlow = Az + β - α/2
Azup = Az + β + α/2
where Azlow is the azimuth of the lower boundary of the field of view and Azup the azimuth of its upper boundary; the central line-of-sight offset angle β is the angle between the line of sight and the direction of travel, i.e. the central-sight reference azimuth Az, positive when the gaze is offset to the right and negative when offset to the left; and the horizontal visual angle α is the angular width of a person's effective field of view, set according to the state of motion.
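The travel azimuth Az and the field-of-view bounds can be sketched as follows. The helper names `travel_azimuth` and `field_of_view` are illustrative assumptions; the azimuth convention (0 degrees = north, 90 degrees = east, measured clockwise) follows step 4.2 of this document.

```python
# Sketch: travel azimuth from OP_i to OP_{i+1}, and the FOV bounds
# Azlow = Az + beta - alpha/2, Azup = Az + beta + alpha/2.
import math

def travel_azimuth(p, q):
    """Azimuth in degrees from p to q, clockwise from north (x = easting, y = northing)."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def field_of_view(az, beta, alpha):
    """Return (Azlow, Azup); Azlow may come out negative, signalling a 0-degree wraparound."""
    return az + beta - alpha / 2, az + beta + alpha / 2
```

A negative Azlow is handled later by the 360-degree shift of step 5.3.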
Step 4, use geographic information system software to extract the full-circle urban contour line observed at each viewpoint and its corresponding data point set, as follows:
Step 4.1, extract the full-circle urban contour line observed at the viewpoint: for a viewpoint OP_i of the continuous three-dimensional viewpoint set OP, input the urban digital terrain model information, urban building model information, and viewpoint information, and obtain the contour line elements at the viewpoint with the Skyline tool of the geographic information system software;
Step 4.2, extract the data point set corresponding to the contour line: with the Skyline Graph tool, generate the set of vector data points D_i = {(A_j, Z_j), i ∈ N, j a positive integer} of the azimuth A and vertical angle Z from viewpoint OP_i to each break point of the contour line elements. The azimuth A ranges over 0-360 degrees, with 0 degrees due north and 90 degrees due east; the vertical angle Z ranges over 0-90 degrees, with 0 degrees horizontal and 90 degrees vertical. Connecting these break points re-draws the 360-degree contour line at the viewpoint with azimuth on the horizontal axis and vertical angle on the vertical axis.
Step 5, process the information according to the field of view at the viewpoint to obtain the vector data point set corresponding to the urban contour line within the field of view, as follows:
Step 5.1, sort the vector data point set D_i generated in step 4 in ascending order of azimuth to generate the data point set D_i';
Step 5.2, according to the field of view [Azlow, Azup] determined at viewpoint OP_i, search the data point set D_i' and extract the break-point vector data within that range to form the in-view urban contour vector data point set Ds_i;
Step 5.3, judge whether the lower boundary Azlow of the field of view is less than 0; if so, the field of view spans the 0-degree azimuth: copy the data point set D_i, add 360 degrees to the azimuth of each copied data point while keeping the vertical angle unchanged, merge the new points into D_i, re-sort in ascending order of azimuth to generate a new vector data point set D_i', then search and extract the break-point vector data within [Azlow + 360, Azup + 360] to obtain the in-view urban contour vector data point set Ds_i;
Step 5.4, judge whether the vertical angle of the break point p' in D_i' closest to each boundary of the field of view is greater than 0; if so, search D_i' for the point closest to p' outside the field of view, solve the linear equation through the two points, obtain the value Zb at the boundary azimuth Ab from that equation, and add the data point pb(Ab, Zb) to the data point set Ds_i.
Step 6, judge whether data acquisition has been completed for all viewpoints; if not, repeat steps 3, 4, and 5 in a programmed loop in Python, processing each viewpoint of the continuous three-dimensional viewpoint set OP in turn until all viewpoints are finished, and obtain the vector data point set Ds = {Ds_1, Ds_2, …, Ds_N} of the urban contour line observed from the continuous viewpoints along the path, thereby acquiring the dynamic visual perception information for urban spatial form evaluation.
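Steps 5.1-5.4 can be sketched as a single clipping routine. This is an illustrative sketch: the function name `clip_to_fov` and the exact tie-breaking choices (nearest point by azimuth) are assumptions, not the patent's implementation.

```python
# Sketch: clip a skyline point set D_i = [(azimuth, vertical_angle), ...]
# to the field of view [azlow, azup], with the 0-degree wraparound of
# step 5.3 and the boundary interpolation of step 5.4.

def clip_to_fov(points, azlow, azup):
    pts = sorted(points)                          # D_i' in ascending azimuth
    if azlow < 0:                                 # FOV spans 0 degrees (step 5.3)
        pts = sorted(pts + [(a + 360.0, z) for a, z in pts])
        azlow, azup = azlow + 360.0, azup + 360.0
    ds = [(a, z) for a, z in pts if azlow <= a <= azup]
    # Step 5.4: if the contour is still above the horizon at a boundary,
    # linearly interpolate a point at the boundary azimuth from the nearest
    # in-view break point and the nearest break point beyond that boundary.
    for ab, beyond in ((azlow, [p for p in pts if p[0] < azlow]),
                       (azup, [p for p in pts if p[0] > azup])):
        if not ds or not beyond:
            continue
        inside = min(ds, key=lambda p: abs(p[0] - ab))
        if inside[1] > 0:
            near = min(beyond, key=lambda p: abs(p[0] - ab))
            t = (ab - inside[0]) / (near[0] - inside[0])
            ds.append((ab, inside[1] + t * (near[1] - inside[1])))
    return sorted(ds)
```

The returned list is the in-view set Ds_i for one viewpoint.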
Beneficial effects: compared with the prior art, the invention has the following beneficial effects. 1. The invention overcomes the inability of existing dynamic visual perception information acquisition methods to describe the continuity and integrity of dynamic viewing. By extracting the visual perception features of different motion types, it obtains a digital representation of the real scene observed from continuous viewpoints along an arbitrary path under different states of motion. Perceptual dynamic visual perception is converted into an objective vector data point set, so that a person's continuous visual perception during movement is simulated and a data basis and scientific support are provided for urban spatial form evaluation. 2. The method is repeatable and extensible, facilitating comparison across multiple modes of motion, paths, and schemes. It can be used to evaluate the current state of an urban spatial form and can also provide data and information support for decision prediction among different planning schemes.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 shows the study area along the Grand Canal within the Hangzhou municipal area;
FIG. 3 is a model diagram of the three-dimensional spatial form of the city along the Grand Canal within the Hangzhou municipal area;
FIG. 4 is a schematic diagram of the motion paths and continuous viewpoints of different motion modes;
FIG. 5 is a schematic diagram of the viewing direction at continuous viewpoints;
FIG. 6 is a diagram of the different motion modes;
FIG. 7 is an urban contour line graph, drawn from the vector data point set, as observed from an arbitrary viewpoint along a path.
Detailed Description
The invention is further elucidated below with reference to the drawings and an embodiment. It should be understood that the following example merely illustrates the present invention and does not limit its scope. After reading this disclosure, various equivalent modifications of the invention by persons skilled in the art fall within the scope of the appended claims of this application.
Aiming at the shortcomings of quantitative methods for dynamic visual perception, the invention provides a dynamic visual perception information acquisition method oriented to urban spatial form evaluation. First, a basic three-dimensional spatial form model of the current urban situation is constructed from geographic information and building model data, and continuous viewpoints are created along the target path. The field of view is then determined from the direction of travel and the motion-state elements at each viewpoint. Next, the urban contour vector data point set observed at the viewpoint is extracted with the skyline tool of the geographic information system software, and the subset within the viewpoint's field of view is extracted to obtain the visual perception information at that viewpoint. Finally, the urban contour vector data point set within the field of view of each viewpoint created along the path is extracted by cyclic processing to obtain the dynamic visual perception information of the continuous viewpoints on the path.
The method is described in detail below with a specific case: the land area along the Grand Canal within the Hangzhou municipal area, about 94 km². The flow of the dynamic visual perception information acquisition method is shown in fig. 1, and the specific operation steps are as follows:
Step 1, collect the basic three-dimensional spatial form information of the current urban situation, establish an urban spatial form database, and construct the basic three-dimensional spatial form model of the city along the Grand Canal within the Hangzhou municipal area (shown in figs. 2-3), specifically as follows:
Step 1.1, acquire the basic three-dimensional spatial form information of the current urban situation from surveying and mapping files, data published by natural resources departments, and open-source data, and construct a TIN-format urban digital terrain model, an shp-format urban building model polyhedron layer, and shp-format path layers corresponding to three motion modes: walking, bicycle, and pleasure boat;
Step 1.2, import the constructed urban digital terrain model, urban building model polyhedron layer, and path layers into geographic information system software (ArcGIS) and unify the coordinate system to WGS 84.
Step 2, creating continuous viewpoints in random paths under different motion guides, and representing the dynamic visual perception information of the urban space morphological aspect as a vector data point set (fig. 4) of the urban contour line observed by the continuous viewpoints, specifically as follows:
step 2.1 obtains the continuous viewpoint distance S from the formula S ═ V × Ts. The moving speed V is 6km/h, 15km/h and 30km/h respectively under three moving modes of walking, bicycle and boat walking. Based on the persistence effect of human eyes, the effective sampling period Ts of the continuous viewpoints is 0.1 second, and the effective sampling intervals S of three motion types of walking, bicycle and ship are respectively 0.17m, 0.42m and 0.83 m.
Step 2.2, creating N two-dimensional point elements at equal intervals along a path according to the continuous viewpoint distance S by using geographic information system software, converting the two-dimensional point elements into three-dimensional point elements according to the viewpoint height h, and forming a continuous three-dimensional viewpoint set OP { (x)i,yi,h),i∈N}。
The three-dimensional viewpoint number N can be obtained by the formula N ═ L/S. L represents the path length, [ ] represents the rounding operation, and the result of N is the integer part. In the case of the scheme, the path length L is about 54km, and the three-dimensional viewpoints corresponding to three motion types of walking, bicycle and ship running are 317647, 128571 and 65060 respectively. While the viewpoint heights h are all set to 1.7m and the terrain heights are superimposed.
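The quoted numbers can be reproduced with a minimal Python check, assuming (as the stated values imply) that the spacings are rounded to two decimals before computing N; the dictionary keys are illustrative labels, not from the patent.

```python
# Sketch reproducing this case: spacing S = V * Ts per motion mode and the
# viewpoint count N = [L / S] on the roughly 54 km path.
modes = {"walking": 6.0, "bicycle": 15.0, "boat": 30.0}   # speed V in km/h
ts = 0.1                                                  # sampling period Ts, seconds
length_m = 54_000                                         # path length L, metres

# convert km/h to m/s, multiply by Ts, round to two decimals as in the text
spacing = {m: round(v * 1000 / 3600 * ts, 2) for m, v in modes.items()}
viewpoints = {m: int(length_m // s) for m, s in spacing.items()}
```

Running this yields the spacings 0.17 m, 0.42 m, 0.83 m and the viewpoint counts 317647, 128571, 65060 stated above.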
Step 3, obtain the field of view of each viewpoint from its direction of travel and motion-state elements, as follows:
Step 3.1, from the positional relationship between the current viewpoint OP_i and the next viewpoint OP_{i+1}, obtain the travel azimuth Az, which serves as the reference azimuth of the central line of sight while travelling.
Step 3.2, determine the central line-of-sight offset angle β to simulate a person turning their gaze while travelling, and determine the field of view [Azlow, Azup] at the viewpoint from the horizontal visual angle α:
Azlow = Az + β - α/2
Azup = Az + β + α/2
where the central line-of-sight offset angle β is the angle between the line of sight and the direction of travel, i.e. the central-sight reference azimuth Az; β is positive when the gaze is offset to the right and negative when offset to the left (fig. 5). The horizontal visual angle α is the angular width of a person's effective field of view (fig. 6).
For walking on the east bank of the canal: when travelling from south to north, the field of view points 30 degrees to the left of the direction of travel Az (β = -30°) and spans a horizontal visual angle α of 120 degrees; when travelling from north to south, it points 30 degrees to the right (β = 30°) with α = 120 degrees. For walking on the west bank: when travelling from south to north, the field of view points 30 degrees to the right (β = 30°) with α = 120 degrees; when travelling from north to south, it points 30 degrees to the left (β = -30°) with α = 120 degrees. Cycling is fast and belongs to the pass-through viewing mode with a narrow field of view; the gaze follows the direction of travel (β = 0°) with a horizontal visual angle α of 60 degrees.
The boat is also fast but belongs to the touring mode with a wide field of view; the gaze follows the direction of travel (β = 0°) with a horizontal visual angle α of 120 degrees.
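The field-of-view parameters of this case can be collected in a small table. The mode keys and the helper name `fov` are illustrative assumptions; the (β, α) values are those stated in the case description.

```python
# Sketch: central-sight offset beta and horizontal visual angle alpha per
# motion mode of this case, and the resulting field of view [Azlow, Azup].
fov_params = {
    # mode: (beta in degrees, alpha in degrees)
    "walk_east_bank_northbound": (-30.0, 120.0),
    "walk_east_bank_southbound": ( 30.0, 120.0),
    "walk_west_bank_northbound": ( 30.0, 120.0),
    "walk_west_bank_southbound": (-30.0, 120.0),
    "bicycle":                   (  0.0,  60.0),
    "boat":                      (  0.0, 120.0),
}

def fov(az, mode):
    """Field of view [Azlow, Azup] for a given travel azimuth and mode."""
    beta, alpha = fov_params[mode]
    return az + beta - alpha / 2, az + beta + alpha / 2
```

For example, walking north on the east bank with Az = 0° gives a field of view of [-90°, 30°], whose negative lower bound triggers the wraparound handling of step 5.3.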
Step 4, extracting the global city contour line observed at the viewpoint and the corresponding data point set by using geographic information system software, wherein the steps are as follows:
step 4.1, extracting the global city contour line observed at the viewpoint: inputting urban digital terrain model information, urban building model information and viewpoint information aiming at one viewpoint OPi in a continuous three-dimensional viewpoint set OP, and acquiring contour line elements at the viewpoint through a Skyline tool in geographic information system software;
step 4.2, extracting a data point set corresponding to the contour line: generating a vector data point set D of an azimuth angle A and a vertical angle Z from a viewpoint OPi to each break point in the contour line elements by a Skyline Graph tooli{(Aj,Zj) I belongs to N, and j is a positive integer }; the interval of the azimuth angle A is 0-360 degrees, 0 degree represents the north direction, and 90 degrees represents the east direction; the vertical angle Z interval is 0-90 degrees, 0 degree represents the level, and 90 degrees represents the vertical direction; connecting these break points allows a 360 degree contour of the viewpoint to be re-delineated with azimuth as the horizontal axis and vertical as the vertical axis.
Step 5, performing information processing according to the view range at the viewpoint to obtain a vector data point set (fig. 7) corresponding to the city contour in the view range, which is specifically as follows:
step 5.1, arranging the vector data point sets Di generated in the step 4 in ascending order of the azimuth angles to generate data point sets Di';
step 5.2, searching and extracting the breakpoint vector data in the range from the data point set Di' according to the view range [ Azlow, Azup ] determined at the viewpoint OPi to form a view city contour vector data point set Dsi;
step 5.3, judging whether the lower boundary Azlow of the visual field range is less than 0, if so, copying a data point set Di, increasing the azimuth angle of each data point by 360 degrees, keeping the vertical angle data unchanged, forming a new data point set, merging the new data point set into the Di, still arranging the new data point set Di 'in an ascending order according to the azimuth angle to generate a new vector data point set Di', searching and extracting the breakpoint vector data of the visual field range [ Azlow +360, Azu p +360], and acquiring a visual field city contour vector data point set Dsi;
and 5.4, judging whether the vertical angle of a folding point p 'closest to the upper and lower boundaries of the visual field range in the data point set Di' is larger than 0, searching a point closest to p 'outside the visual field range in the data point set Di' if the vertical angle is larger than 0, solving a linear equation between two points, acquiring a value Zb of the boundary value Ab in the linear equation, and further adding data pb (Ab, Zb) in the data point set Dsi.
Step 6, judge whether information acquisition has been completed for all viewpoints; if not, repeat steps 3, 4, and 5 in a programmed loop in Python, processing each viewpoint of the continuous three-dimensional viewpoint set OP formed in step 2 in turn until all viewpoints are finished, and obtain the vector data point set Ds = {Ds_1, Ds_2, …, Ds_N} of the urban contour line observed from the continuous viewpoints along the path, thereby acquiring the dynamic visual perception information for urban spatial form evaluation.
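The step-6 loop can be sketched as follows. The three callables stand in for steps 3-5 (in practice the skyline extraction is performed by the ArcGIS tools named above); their names are assumptions for illustration only.

```python
# Sketch of the step-6 loop: process every viewpoint of the continuous
# viewpoint set OP in turn and assemble Ds = [Ds_1, ..., Ds_N].
def acquire_dynamic_perception(op, field_of_view_at, skyline_points, clip):
    ds = []
    for i in range(len(op) - 1):                           # step 3 needs OP_{i+1}
        azlow, azup = field_of_view_at(op[i], op[i + 1])   # step 3: FOV at OP_i
        d_i = skyline_points(op[i])                        # step 4: skyline point set
        ds.append(clip(d_i, azlow, azup))                  # step 5: clip to FOV
    return ds
```

With stub callables this shows the control flow: one in-view point set Ds_i per viewpoint pair, collected into the path-level set Ds.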

Claims (7)

1. A dynamic visual perception information acquisition method for urban spatial form evaluation, characterized by comprising the following steps:
step 1, acquiring the basic three-dimensional spatial form information of the current urban situation and establishing an urban spatial form database;
step 2, creating continuous viewpoints along arbitrary paths under the guidance of different motions, and representing the dynamic visual perception information at the urban spatial form level as the vector data point set of the urban contour line observed from the continuous viewpoints;
step 3, obtaining the field of view of each viewpoint from its direction of travel and motion-state elements;
step 4, extracting, with geographic information system software, the full-circle urban contour line observed at the viewpoint and the corresponding data point set;
step 5, processing the information according to the field of view at the viewpoint to obtain the vector data point set corresponding to the urban contour line within the field of view;
step 6, judging whether information acquisition has been completed at all viewpoints; if not, repeating steps 3, 4, and 5 in a programmed loop in Python until information acquisition at all viewpoints is finished, generating the vector data point set of the urban contour line observed from the continuous viewpoints along the path, and realizing dynamic visual perception information acquisition oriented to urban spatial form evaluation.
2. The method for acquiring dynamic visual perception information oriented to urban spatial form evaluation according to claim 1, wherein step 1 comprises the following steps:
step 1.1, acquiring the basic three-dimensional spatial form information of the current urban situation from surveying and mapping files, data published by natural resources departments, and open-source data, and constructing an urban digital terrain model, an urban building model polyhedron layer, and a path layer;
step 1.2, importing the constructed urban digital terrain model, urban building model polyhedron layer, and path layer into geographic information system software and unifying the coordinate system.
3. The method for acquiring dynamic visual perception information oriented to urban spatial form evaluation according to claim 1, wherein step 2 comprises the following steps:
step 2.1, obtaining the spacing S between consecutive viewpoints according to formula 1:
S = V × Ts (formula 1)
in formula 1, V represents the motion speed in a specific motion mode, and Ts is the effective sampling period of the continuous viewpoints, based on the persistence-of-vision effect of the human eye;
step 2.2, using geographic information system software, creating N two-dimensional point elements at equal spacing S along the path, converting them into three-dimensional point elements according to the viewpoint height h, and forming the continuous three-dimensional viewpoint set OP = {(xi, yi, h), i ∈ N}, where xi, yi are the coordinate values of the i-th two-dimensional point element created, and N is the number of two-dimensional point elements created along the path, obtained by formula 2:
N = [L/S] (formula 2)
in formula 2, L represents the path length, S represents the viewpoint spacing, [ ] represents the rounding operation, and the integer part of the result is taken as N.
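Formulas 1 and 2 can be sketched directly in Python; the walking speed of 1.5 m/s and the 2 s sampling period below are illustrative values only, not taken from the patent.

```python
import math

def viewpoint_spacing(v, ts):
    """Formula 1: spacing between consecutive viewpoints, S = V * Ts."""
    return v * ts

def viewpoint_count(path_length, s):
    """Formula 2: N = [L / S], taking the integer part of the result."""
    return math.floor(path_length / s)

# Example: walking at 1.5 m/s with a 2 s sampling period on a 1000 m path.
s = viewpoint_spacing(1.5, 2.0)   # 3.0 m between viewpoints
n = viewpoint_count(1000.0, s)    # 333 viewpoints along the path
```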
4. The method for acquiring dynamic visual perception information oriented to urban spatial form evaluation according to claim 1, wherein step 3 comprises the following steps:
step 3.1, obtaining the travel azimuth Az from the current viewpoint OPi and the next viewpoint OPi+1 as the reference azimuth of the central sight line during travel;
step 3.2, determining a central sight line offset angle beta to simulate the turning of a person's sight line while travelling, and determining the visual field range [Azlow, Azup] at the viewpoint according to the horizontal visual angle alpha:
Azlow = Az + beta - alpha/2 (formula 3)
Azup = Az + beta + alpha/2 (formula 4)
in the formulas, Azlow represents the azimuth of the lower boundary of the visual field range, and Azup the azimuth of its upper boundary; the central sight line offset angle beta is the included angle between the sight line and the travel direction, i.e. the central sight line reference azimuth Az, taking a positive value when the sight line is offset to the right and a negative value when offset to the left; the horizontal visual angle alpha represents the angular width of a person's effective visual field and is set according to the different motion states.
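Formulas 3 and 4 reduce to two additions; the sketch below uses illustrative values (heading due east, a 10-degree rightward sight offset, a 120-degree effective visual angle) that are assumptions, not values fixed by the patent. Note that a small Az with beta = 0 can produce a negative Azlow, which is the wrap-around case handled in step 5.3.

```python
def field_of_view(az, beta, alpha):
    """Formulas 3-4: Azlow = Az + beta - alpha/2, Azup = Az + beta + alpha/2.
    beta > 0 offsets the sight line to the right of the travel direction."""
    return az + beta - alpha / 2, az + beta + alpha / 2

lo, up = field_of_view(90.0, 10.0, 120.0)   # (40.0, 160.0)
lo2, up2 = field_of_view(10.0, 0.0, 60.0)   # (-20.0, 40.0): crosses 0 degrees
```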
5. The method for acquiring dynamic visual perception information oriented to urban spatial form evaluation according to claim 1, wherein step 4 comprises the following steps:
step 4.1, extracting the global city contour line observed at the viewpoint: for a viewpoint OPi of the continuous three-dimensional viewpoint set OP, inputting the city digital terrain model information, city building model information and viewpoint information, and acquiring the contour line elements at the viewpoint through the Skyline tool of the geographic information system software;
step 4.2, extracting the data point set corresponding to the contour line: through the SkylineGraph tool of Skyline, generating the vector data point set Di = {(Aj, Zj), i ∈ N, j is a positive integer} of the azimuth angle A and vertical angle Z from the viewpoint OPi to each break point in the contour line elements; the azimuth angle A ranges from 0 to 360 degrees, where 0 degrees is due north and 90 degrees is due east; the vertical angle Z ranges from 0 to 90 degrees, where 0 degrees is horizontal and 90 degrees is vertical; connecting these break points re-delineates the 360-degree contour line at the viewpoint, with the azimuth angle as the horizontal axis and the vertical angle as the vertical axis.
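The re-delineation described in step 4.2 can be illustrated as follows: sorting the (azimuth, vertical angle) break points by azimuth and connecting consecutive points yields a piecewise-linear contour profile. The three sample points and the interpolation helper are illustrative assumptions, not the actual SkylineGraph output format.

```python
def redraw_contour(points):
    """Sort (azimuth, vertical angle) break points in ascending azimuth;
    connecting consecutive points re-draws the 360-degree contour with
    azimuth on the horizontal axis and vertical angle on the vertical axis."""
    return sorted(points)

def vertical_at(contour, az):
    """Linear interpolation between the two break points bracketing azimuth az."""
    for (a0, z0), (a1, z1) in zip(contour, contour[1:]):
        if a0 <= az <= a1:
            t = (az - a0) / (a1 - a0)
            return z0 + t * (z1 - z0)
    return None

contour = redraw_contour([(180.0, 12.0), (0.0, 5.0), (90.0, 20.0)])
z = vertical_at(contour, 45.0)   # halfway between the 5- and 20-degree points
```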
6. The method for acquiring dynamic visual perception information oriented to urban spatial form evaluation according to claim 1 or 5, wherein step 5 comprises the following steps:
step 5.1, sorting the generated vector data point set Di in ascending order of azimuth angle to generate the data point set Di';
step 5.2, according to the visual field range [Azlow, Azup] determined at the viewpoint OPi, searching and extracting from the data point set Di' the break-point vector data within the range, forming the in-view city contour vector data point set Dsi;
step 5.3, determining whether the lower boundary Azlow of the visual field range is less than 0; if so, the visual field range crosses the 0-degree azimuth, so copying the data set Di, increasing the azimuth angle of each data point by 360 degrees while keeping the vertical angle data unchanged, merging the resulting points into Di, and generating a new vector data point set Di' still in ascending azimuth order, in which the break-point vector data within the range [Azlow + 360, Azup + 360] are searched and extracted to obtain the in-view city contour vector data point set Dsi;
step 5.4, determining whether the vertical angle of the break point p' in the data point set Di' closest to the upper or lower boundary of the visual field is greater than 0; if so, searching in the data point set Di' for the point closest to p' outside the visual field range, solving the linear equation through the two points, substituting the boundary value Ab into the linear equation to obtain the value Zb, and adding the data point pb(Ab, Zb) to the data point set Dsi.
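The two trickier operations of step 5 — the 0-degree wrap-around of step 5.3 and the boundary interpolation of step 5.4 — can be sketched as below. The function names and sample break points are illustrative assumptions; the patent only specifies the logic.

```python
def clip_with_wrap(points_sorted, az_low, az_up):
    """Steps 5.2-5.3: extract break points inside [az_low, az_up]; if
    az_low < 0 the field of view crosses 0-degree azimuth, so a +360-degree
    shifted copy of every point is merged in and the shifted window
    [az_low + 360, az_up + 360] is searched instead."""
    if az_low < 0:
        shifted = [(a + 360.0, z) for a, z in points_sorted]
        points_sorted = sorted(points_sorted + shifted)
        az_low, az_up = az_low + 360.0, az_up + 360.0
    return [(a, z) for a, z in points_sorted if az_low <= a <= az_up]

def boundary_point(p_in, p_out, az_boundary):
    """Step 5.4: solve the line through the in-view break point nearest the
    boundary and the nearest out-of-view point, and evaluate it at the
    boundary azimuth Ab to obtain the extra data point pb(Ab, Zb)."""
    (a0, z0), (a1, z1) = p_in, p_out
    t = (az_boundary - a0) / (a1 - a0)
    return az_boundary, z0 + t * (z1 - z0)

# A view of [-20, 30] degrees keeps the points at 350 and 10 degrees,
# the latter appearing at 370 degrees in the shifted window.
pts = sorted([(10.0, 8.0), (350.0, 6.0), (180.0, 4.0)])
inside = clip_with_wrap(pts, -20.0, 30.0)
```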
7. The method for acquiring dynamic visual perception information oriented to urban spatial form evaluation according to claim 1 or 3, wherein in step 6, it is determined whether information acquisition at all viewpoints is complete; if not, steps 3, 4 and 5 are repeated through a Python program loop, processing the viewpoints of the formed continuous three-dimensional viewpoint set OP in sequence until information acquisition at all viewpoints is complete.
CN202110766541.5A 2021-07-07 2021-07-07 Dynamic visual perception information acquisition method for urban space form evaluation Active CN113377892B (en)

Publications (2)

Publication Number Publication Date
CN113377892A 2021-09-10
CN113377892B 2022-11-01

Family ID: 77581298

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107679229A (en) * 2017-10-20 2018-02-09 东南大学 The synthetical collection and analysis method of city three-dimensional building high-precision spatial big data
CN109285177A (en) * 2018-08-24 2019-01-29 西安建筑科技大学 A kind of digital city skyline extracting method
CN112084916A (en) * 2020-08-31 2020-12-15 东南大学 Automatic generation and diagnosis method for urban three-dimensional skyline contour line based on shielding rate


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QU BING ET AL.: "Research on Quantitative Description and Evaluation Methods of the Street Spatial Form of Urban Central Business Districts", New Architecture *
JIANG HAIYAN ET AL.: "Information Technology Integration for Dynamic Visual Landscape Analysis Based on a Data Sharing Platform and Its Preliminary Application: the Guangzhou-Zhuhai Light Rail as an Example", Chinese Landscape Architecture *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant