WO2006134962A1 - Projection diagram generation system - Google Patents

Projection diagram generation system

Info

Publication number
WO2006134962A1
WO2006134962A1 PCT/JP2006/311918 JP2006311918W
Authority
WO
WIPO (PCT)
Prior art keywords
projection
feature points
dimensional
viewpoint
point
Prior art date
Application number
PCT/JP2006/311918
Other languages
French (fr)
Japanese (ja)
Inventor
Shigeo Takahashi
Kenichi Yoshida
Tomoyuki Nishita
Kenji Shimada
Original Assignee
The University Of Tokyo
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The University Of Tokyo filed Critical The University Of Tokyo
Priority to JP2007521319A priority Critical patent/JPWO2006134962A1/en
Publication of WO2006134962A1 publication Critical patent/WO2006134962A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects

Definitions

  • The present invention relates to a projection map generation system.
  • Car navigation systems have made remarkable progress in recent years and now provide three-dimensional bird's-eye-view display as a standard function, making it easy for users to relate the display to the actual landscape.
  • However, when the 3D terrain shape is displayed stereoscopically using perspective projection, the searched route may be shielded by buildings or by the undulations of the terrain, and the information necessary for driving cannot be conveyed.
  • Several non-perspective projection techniques have been proposed: a method that distorts a perspective-projection image with two-dimensional transformations to combine the effects of projections from different viewpoints [Non-Patent Documents 4 and 8]; a method that breaks perspective projection with curved projection lines [Non-Patent Document 2]; a method that creates a panoramic image, of the kind used for the background art of cel animation, as seen from a specified camera path [Non-Patent Document 7]; and a method that renders individual independent objects from separate viewpoints and composites them on a two-dimensional screen with depth taken into account [Non-Patent Document 1].
  • Among these, techniques that improve the expressive power of non-perspective projection by deforming the three-dimensional model have been devised in recent years [Non-Patent Documents 3, 5, 6]. By deforming the 3D model, a wide variety of non-perspective projections can be expressed.
  • However, even with the techniques of Non-Patent Documents 3, 5, and 6, it is difficult to grasp directly, from the deformation of the model in three-dimensional space, the resulting behavior on the two-dimensional projection plane. It is therefore hard to confirm whether shielding of the route has actually been avoided in the obtained 2D projection. If recalculation is performed whenever occlusion occurs, the amount of computation and the computation time increase. This problem becomes especially noticeable when an animation is generated by displaying successive 2D projection planes as frames, because a large number of images must be generated continuously.
  • Non-Patent Document 1: M. Agrawala, D. Zorin, and T. Munzner. Artistic multiprojection rendering. In Eurographics Rendering Workshop 2000, pages 125-136, 2000.
  • Non-Patent Document 2: Y. Kurzion and R. Yagel. Interactive space deformation with hardware-assisted rendering. IEEE Computer Graphics & Applications, 17(5): 66-77, 1997.
  • Non-Patent Document 3: P. Rademacher. View-dependent geometry. In Computer Graphics (Proceedings of Siggraph '99), pages 439-446, 1999.
  • Non-Patent Document 4: S. M. Seitz and C. R. Dyer. View morphing. In Computer Graphics (Proceedings of Siggraph '96), pages 21-30, 1996.
  • Non-Patent Document 5: K. Singh. A fresh perspective. In Proceedings of Graphics Interface 2002, pages 17-24, 2002.
  • Non-Patent Document 6: S. Takahashi, N. Ohta, H. Nakamura, Y. Takeshima, and I. Fujishiro. Modeling surperspective projection of landscapes for geographical guide-map generation. Computer Graphics Forum, 21(3): 259-268, 2002.
  • Non-Patent Document 7: D. N. Wood, A. Finkelstein, J. F. Hughes, S. E. Thayer, and D. H. Salesin. Multiperspective panoramas for cel animation. In Computer Graphics (Proceedings of Siggraph '97), pages 243-250, 1997.
  • Non-Patent Document 8: D. Zorin and A. H. Barr. Correction of geometric perceptual distortions in pictures. In Computer Graphics (Proceedings of Siggraph '95), pages 257-264, 1995.
  • The present invention has been made in view of the above circumstances, and its object is to provide a system, method, or computer program capable of generating, at comparatively high speed, a projection map in which the shielding of geographical features such as a route is avoided. Another object of the present invention is to provide a system for displaying the generated two-dimensional projection map.
  • A projection map generation system according to the present invention includes a processing unit.
  • The processing unit is configured to perform the following processing: (1) extracting, from a three-dimensional terrain model, feature points that represent geographical features in portions that may be involved in occlusion; (2) determining the optimal arrangement of the feature points on the two-dimensional projection plane as seen from a certain viewpoint; (3) generating a three-dimensional terrain shape that satisfies the optimal arrangement of the feature points on the two-dimensional projection plane; and (4) generating a two-dimensional projection map in which the generated three-dimensional terrain shape is viewed from the viewpoint or from its vicinity.
  • The projection map generation system of the present invention may further include a storage unit.
  • The storage unit stores the three-dimensional terrain model.
  • The processing unit is configured to acquire the three-dimensional terrain model from the storage unit. Further, the processing unit is configured to store the generated two-dimensional projection view in the storage unit.
  • A projection map display system according to the present invention includes the above-described projection map generation system and a display unit.
  • The display unit is configured to display the two-dimensional projection view.
  • In the projection map generation system, the processing unit may generate two-dimensional projection maps at different points in time by performing the processes (1) to (4) in response to movement of the viewpoint. Further, when generating the two-dimensional projection map at a certain point in time, the processing unit may use, as feature points in the process of (1), feature points that satisfy the following condition:
  • feature points that were used in a 2D projection map generated before that point in time and that exist in the view volume of the 2D projection map at that point in time.
  • The optimal arrangement of feature points in the projection map generation system is, for example, an arrangement in which, on the two-dimensional projection plane as seen from the certain viewpoint, shielding of the lines connecting the feature points on the road is avoided and the relative positional relationships of the feature points are maintained.
  • A projection map generation method according to the present invention includes steps corresponding to the processes (1) to (4) described above.
  • A computer program according to the present invention causes a computer to execute the steps of the above-described projection map generation method.
  • The projection map generation system includes a processing unit 1, a storage unit 2, a display unit 3, and a communication path 4.
  • The processing unit 1 can be configured by a CPU, for example.
  • The processing unit 1 is configured to perform the following processing.
  • The storage unit 2 can store the data (information) necessary for the system, such as the 3D terrain model used for processing in the processing unit 1, the computer software necessary for operating the processing unit 1, and the 3D terrain shapes and 2D projection maps generated by the processing unit 1.
  • The storage unit 2 can be configured by an internal storage device or an external storage device.
  • The processing unit 1 is configured to acquire the 3D terrain model from the storage unit 2.
  • The processing unit 1 is configured to store the generated 3D terrain shape and 2D projection map in the storage unit 2.
  • The display unit 3 is configured to display the two-dimensional projection view generated by the processing unit 1.
  • An example of the display unit 3 is a display.
  • The communication path 4 is a medium that enables exchange of information among the processing unit 1, the storage unit 2, and the display unit 3.
  • The communication path 4 may be, for example, a bus line inside a computer, or a network such as a LAN or the Internet. That is, the processing unit 1, the storage unit 2, and the display unit 3 may be physically separated from each other and connected by a network. Since the means for connecting to a network, such as the interface configuration and the protocol, are well known, a detailed description is omitted.
  • The 3D terrain model used in this system is represented by a single-valued (monovalent) function.
  • A single-valued function here means a function in which, for example, the altitude is uniquely determined once the latitude and longitude are determined.
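As an illustration, such a single-valued terrain (a heightfield) can be modelled as a function that returns exactly one elevation per horizontal coordinate. The Gaussian bump below is a synthetic stand-in, not data from the patent:

```python
import math

def terrain_height(x: float, y: float) -> float:
    """Single-valued (monovalent) terrain function: each horizontal
    coordinate (x, y) maps to exactly one elevation z, so the surface
    is a heightfield with no overhangs or caves."""
    # Synthetic "mountain": a Gaussian bump 100 units high centred at (3, 2).
    return 100.0 * math.exp(-((x - 3.0) ** 2 + (y - 2.0) ** 2) / 4.0)

peak = terrain_height(3.0, 2.0)  # one and only one elevation per (x, y)
```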
  • When a route to be travelled is given, the system of this embodiment generates a bird's-eye-view animation as shown in Fig. 3.
  • The depression angle of the viewpoint can take a value from 20 degrees to 70 degrees, as in general car navigation systems; here an example is shown in which the depression angle is set to 30 degrees so that the effect of shielding avoidance can be confirmed most clearly.
  • FIG. 3 is a snapshot from the implemented system, where the sphere is the current location, the dark (thick) lines are the roads on the route, and the light lines are roads not on the route (that is, not the way to go).
  • Fig. (a) is a perspective projection in which a part of the route is shielded, and Fig. (b) is a display example of a non-perspective projection in which shielding of the route is avoided.
  • The processing unit 1 extracts, from the 3D terrain model, feature points that represent geographical features in the portions that may be involved in occlusion.
  • The number of feature points to extract is determined by the accuracy and intended use of the 2D projection to be obtained, and by the limitations of the computer.
  • For example, points near the tops of mountains and points that reproduce the shape of a road are selected.
  • Points that reproduce the outline of a building (vertices or bending points) may also be used as feature points.
  • From the contour lines of the terrain, the intersections F between contour lines (see Fig. 5 (a)) are extracted as feature points.
  • From the roads, the points F at which the curvature is maximal, the end points F, and the intersections F are extracted; by connecting these points with straight lines, the road shape can be roughly approximated.
  • A typical car navigation system often holds road information as two-dimensional information, separately from the terrain-surface information.
  • In this embodiment, this two-dimensional information is also treated as part of the three-dimensional terrain model; that is, the 3D terrain model may be a collection of several kinds of information, including 2D information.
  • It is not necessary to use the feature points on the terrain and roads of the entire 3D terrain model; only the feature points inside the view volume may be selected and used.
  • The feature points on the roads illustrated in Fig. 5 (b) may be limited to feature points on the driving route and feature points on the roads around intersections. To avoid occlusion, only the feature points on the road constituting the route (points whose straight-line connection can approximate the road) are sufficient; however, by also extracting feature points on the roads around an intersection, the road shape around the intersection can be prevented from being distorted.
  • The black points (square points) in the figure are the feature points.
  • Each feature point is specified by coordinate information.
  • Information on the extracted feature points is stored in the storage unit 2.
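The road feature-point selection described above (end points, intersections, and points of maximal curvature, chosen so that straight segments between them roughly reproduce the road) can be sketched as follows. The discrete turning-angle measure of curvature and the single-pick policy are illustrative assumptions, not details from the patent:

```python
import numpy as np

def road_feature_points(polyline: np.ndarray) -> np.ndarray:
    """Pick feature points from a road polyline: both endpoints plus
    the interior vertex with the largest discrete turning angle, so
    that connecting the picks with straight lines roughly reproduces
    the road shape."""
    picks = [0, len(polyline) - 1]            # endpoints are always kept
    best, best_turn = None, 0.0
    for i in range(1, len(polyline) - 1):
        a = polyline[i] - polyline[i - 1]
        b = polyline[i + 1] - polyline[i]
        cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        turn = np.arccos(np.clip(cosang, -1.0, 1.0))  # turning angle ~ curvature
        if turn > best_turn:
            best, best_turn = i, turn
    if best is not None:                      # vertex of maximal curvature
        picks.append(best)
    return polyline[sorted(set(picks))]
```

Intersections between roads would be added to the pick set in the same way; they are omitted here to keep the sketch short.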
  • The optimal arrangement of the feature points on the two-dimensional projection plane as seen from a certain viewpoint is determined using a spring model.
  • The optimal arrangement here means an arrangement that preserves the original perspective projection as much as possible while avoiding the shielding of the road. In this embodiment, avoiding the shielding of the road means avoiding the shielding of the lines connecting the feature points on the road, and preserving the original perspective projection as much as possible means maintaining the relative positional relationships of the feature points.
  • Such an optimal arrangement can be obtained by first finding a positional relationship of the feature points in which occlusion is completely avoided, and then bringing the arrangement closer to that of the perspective projection using, for example, a spring model.
  • The processing in this step is performed with the display state on the two-dimensional projection plane in mind.
  • The spring force acting on each feature point is formulated as follows.
  • In Equation (1), the left-hand side is the force acting on the feature point with index i; A is the set of indices of the feature points adjacent to i in the Delaunay triangulation; R is the set of indices of the feature points on the roads; and T is the set of indices of the feature points on the terrain.
  • X is the position of the feature point; when the spring model reaches its equilibrium state, this value converges to r in Fig. 7 (b).
  • The first term on the right-hand side is a force that pulls the point toward its perspective-projected position, and the second term is a force that maintains the relative positional relationships.
  • The third term acts as a repulsive force between the feature points on the roads and the feature points on the terrain. This repulsive force is introduced so that geographical features such as roads and mountains are kept apart from each other and do not interfere on the projection plane.
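A minimal sketch of one evaluation of such a spring force is given below. The three terms mirror the description above (attraction toward the perspective-projected position, preservation of relative offsets between Delaunay neighbours, and road/terrain repulsion); the spring constants, the repulsion radius `d0`, and the exact functional forms are assumptions, since Equation (1) itself is not reproduced here:

```python
import numpy as np

def spring_forces(x, p, adj, road, terrain,
                  k_persp=0.1, k_rel=0.5, k_rep=2.0, d0=0.05):
    """One evaluation of the spring forces on the 2-D projection plane.
    x: current 2-D positions, p: original perspective-projected
    positions, adj: Delaunay adjacency lists, road/terrain: index sets
    of road and terrain feature points (constants are assumptions)."""
    f = np.zeros_like(x)
    for i in range(len(x)):
        # Term 1: pull back toward the perspective-projected position.
        f[i] += k_persp * (p[i] - x[i])
        # Term 2: keep the relative offsets to Delaunay neighbours.
        for j in adj[i]:
            f[i] += k_rel * ((x[j] - x[i]) - (p[j] - p[i]))
    # Term 3: repel road points from terrain points closer than d0.
    for i in road:
        for j in terrain:
            d = x[i] - x[j]
            dist = np.linalg.norm(d)
            if 1e-12 < dist < d0:
                push = k_rep * (d0 - dist) * d / dist
                f[i] += push
                f[j] -= push
    return f
```

Iterating `x += dt * spring_forces(...)` until the forces vanish would drive the layout to the equilibrium (optimal) arrangement described in the text.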
  • Next, the positions of the feature points in three-dimensional space are determined so as to satisfy the optimal arrangement of the feature points on the two-dimensional projection plane obtained in step Sa-2, and the 3D terrain model is deformed accordingly. As a result, a three-dimensional terrain shape that satisfies the feature-point arrangement obtained on the two-dimensional projection plane can be generated.
  • The position of a feature point in the depth direction of three-dimensional space cannot be determined from the feature-point arrangement on the two-dimensional projection plane alone.
  • Therefore, the deformation is performed under the constraint that the distance from the screen S (projection plane) to each feature point F on the 3D terrain model does not change before and after the deformation.
  • To do this, the difference between the terrain shapes before and after the deformation is expressed as a linear sum of unimodal basis functions such as Gaussians, and the difference shape is obtained by solving the simultaneous equations derived from the positional constraints of the feature points on the terrain. By working with the difference shape in this way, the deformation can be performed while preserving the high-frequency undulations originally present on the terrain surface. Such deformation operations using basis functions are well known as such and will not be described in detail.
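The difference-shape computation can be sketched as follows: the height change is written as a linear sum of Gaussian bumps centred at the feature points, and the bump weights are found by solving the simultaneous (linear) equations that make the sum reproduce the required displacement at each feature point. The basis width `sigma` is an assumption:

```python
import numpy as np

def gaussian_difference_shape(centers, target_disp, sigma=1.0):
    """Express the terrain-height difference before/after deformation
    as a linear sum of Gaussian bumps centred at the feature points,
    solving for weights so the sum matches the required displacement
    exactly at each feature point."""
    c = np.asarray(centers, float)       # (n, 2) feature-point positions
    d = np.asarray(target_disp, float)   # (n,)  required height change
    # G[i, j] = value of basis bump j evaluated at feature point i.
    diff = c[:, None, :] - c[None, :, :]
    G = np.exp(-np.sum(diff ** 2, axis=2) / (2.0 * sigma ** 2))
    w = np.linalg.solve(G, d)            # simultaneous equations for the weights

    def delta_h(x, y):
        """Height difference to add to the original terrain at (x, y)."""
        r2 = (c[:, 0] - x) ** 2 + (c[:, 1] - y) ** 2
        return float(np.exp(-r2 / (2.0 * sigma ** 2)) @ w)

    return delta_h
```

Because only the smooth difference shape is added to the original heightfield, the high-frequency undulations of the terrain survive the deformation, as the text notes.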
  • The processing unit 1 can generate two-dimensional projection maps at different points in time by performing the processing of the steps described above in response to movement of the viewpoint. If these maps are displayed in succession, an animation (movie) can be shown in which the display changes along the route (that is, in response to changes of the viewpoint). In this embodiment, this display can be performed by the display unit 3.
  • When generating a two-dimensional projection map at a certain point in time, feature points that satisfy the following condition, if any, are reused as feature points in the processing of step Sa-1: feature points that were used in a 2D projection map generated before that point in time and that exist in the view volume of the 2D projection map at that point in time.
  • FIG. 11 shows how feature points are selected in consecutive frames; the time axis runs from top to bottom in the figure.
  • Figure (a) shows the case without reuse of feature points: it can be confirmed that completely different feature points are selected each time the frame advances by one.
  • Figure (b) shows that reusing feature points eliminates the differences in deformation caused by non-uniform feature-point selection.
  • Inter-frame coherence can be maintained further by smoothing: for example, each frame can be smoothed over the four frames before and after it, with the weight of each frame obtained from a Gaussian function centered on the current frame.
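The Gaussian-weighted smoothing over neighbouring frames can be sketched as below; the window of four frames on each side follows the text, while `sigma` and the clamping at the sequence ends are assumptions:

```python
import numpy as np

def smooth_positions(frames, radius=4, sigma=2.0):
    """Smooth a per-frame quantity (e.g. a feature point's optimised
    2-D position) over the `radius` frames before and after each
    frame, with weights from a Gaussian centred on the current frame.
    The window is clamped at the ends of the sequence."""
    frames = np.asarray(frames, float)
    out = np.empty_like(frames)
    for t in range(len(frames)):
        lo, hi = max(0, t - radius), min(len(frames), t + radius + 1)
        idx = np.arange(lo, hi)
        w = np.exp(-((idx - t) ** 2) / (2.0 * sigma ** 2))
        w = w.reshape((-1,) + (1,) * (frames.ndim - 1))
        out[t] = (w * frames[lo:hi]).sum(axis=0) / w.sum()
    return out
```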
  • The computation time per frame is about 3 seconds on a 3.0 GHz Pentium (registered trademark) 4 CPU with 2 GB of RAM.
  • FIG. 3 (b) shows a part of the results.
  • FIG. 4A is a perspective projection view.
  • Each of the above embodiments is merely an example and does not indicate a configuration essential to the present invention.
  • The configuration of each part is not limited to the above as long as the gist of the present invention can be achieved.
  • In step Sa-2, the optimal arrangement of the feature points is obtained.
  • A spring model was used as the method for this.
  • However, various other methods can be used, such as a particle method that approximates the forces acting between feature points to inter-atomic forces.
  • Particle methods themselves are well known.
  • The specific method is not limited as long as it can obtain the optimal arrangement of the feature points.
  • In the above description, a method for preventing the shielding of a road has been described.
  • However, occlusion of an object other than a road can be prevented in the same way by placing the feature points on the contour of that object.
  • In step Sa-1 (see Fig. 2) of the first embodiment described above, the peaks F of mountains, the bottoms of valleys, their end points F, and the intersections F of contour lines were extracted as feature points from the contour lines of the terrain surface (see Fig. 5 (a)).
  • Extraction of feature points in the second embodiment is performed as follows. Extracting the peaks of mountains, the bottoms of valleys, their end points, and the intersections of contour lines (see FIG. 12) as feature points from the contour lines of the terrain surface is basically the same as in the first embodiment. However, in the second embodiment the feature points are extracted at a plurality of viewpoint positions obtained by moving the viewpoint P so as to circle the target terrain surface; the symbol R denotes the movement trajectory of the viewpoint P. The union of the feature-point sets obtained at the individual viewpoint positions is then used as the feature-point set on the terrain surface, as a set of feature points independent of the viewpoint position.
  • More specifically, feature point extraction according to the present embodiment is performed by the following processing:
  • (a) a process of placing the viewpoint P in a direction inclined at a certain angle with respect to a horizontal plane passing through the reference point.
  • The certain angle is not particularly limited, but in the case of a general car navigation system it is in the range of 20° to 70°.
  • The reference point is a point that is always referenced from the viewpoint P; for example, it is the position of the user's own vehicle.
  • Here, a vertex means a vertex of the polyhedron that constitutes the 3D terrain model.
  • The relative rotation of the viewpoint around the reference point generally covers the entire circumference (360°) (see Fig. 12).
  • As for the relative movement of the viewpoint, it may be moved continuously, but usually it is preferable to move it discretely in steps of 15° to 30°. In short, any method of moving the viewpoint may be used as long as substantially all the necessary vertices can be extracted.
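The viewpoint sweep of the second embodiment can be sketched as follows: the viewpoint is stepped around the full circle at a fixed depression angle, and the feature sets extracted at the stops are unioned into one viewpoint-independent set. The 20-degree step lies in the preferred 15-30 degree range; `extract_features`, `radius`, and `ref` are hypothetical stand-ins, not details from the patent:

```python
import math

def sweep_viewpoints(step_deg=20.0, depression_deg=30.0,
                     radius=10.0, ref=(0.0, 0.0, 0.0)):
    """Place the viewpoint at a fixed depression angle above the
    horizontal plane through the reference point, stepping it
    discretely around the full 360 degrees."""
    rx, ry, rz = ref
    dep = math.radians(depression_deg)
    viewpoints, angle = [], 0.0
    while angle < 360.0:
        a = math.radians(angle)
        viewpoints.append((rx + radius * math.cos(dep) * math.cos(a),
                           ry + radius * math.cos(dep) * math.sin(a),
                           rz + radius * math.sin(dep)))
        angle += step_deg
    return viewpoints

def union_features(viewpoints, extract_features):
    """Union of the feature-point sets seen from every viewpoint:
    a set of feature points independent of the viewpoint position."""
    feats = set()
    for v in viewpoints:
        feats |= set(extract_features(v))
    return feats
```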
  • From the roads, the points of maximal curvature, the end points, and the intersections are extracted as feature points.
  • The black points (square points) in the figure are the feature points.
  • In the first embodiment, a spring model was used to determine the optimal arrangement. This point is basically the same in the second embodiment, but the second embodiment uses the following method for obtaining the difference shape.
  • In step Sa-3 of the first embodiment described above, the difference in terrain shape before and after deformation was expressed as a linear sum of unimodal basis functions such as Gaussians, and the difference shape was obtained by solving the resulting simultaneous equations.
  • In the second embodiment, the terrain-shape difference before and after deformation is represented hierarchically by a linear sum of unimodal basis functions based on B-spline functions, and the positional constraints of the feature points on the terrain are applied first by a coarse B-spline approximation and then by progressively finer B-spline approximations.
  • In this way, a difference shape that satisfies the positional constraints of the feature points can be obtained at high speed.
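The hierarchical coarse-to-fine idea can be sketched with two levels: a wide-basis least-squares pass approximates the displacements coarsely, and a narrow-basis pass then fits the remaining residual exactly at the feature points. Gaussian bumps stand in for the B-spline basis here, and the two widths are assumptions:

```python
import numpy as np

def two_level_fit(centers, target, sigma_coarse=4.0, sigma_fine=1.0):
    """Hierarchical coarse-to-fine fit of the difference shape:
    a coarse wide-basis approximation followed by a fine narrow-basis
    correction of the residual at the feature points."""
    c = np.asarray(centers, float)
    d = np.asarray(target, float)

    def gram(sigma):
        # G[i, j] = value of bump j (width sigma) at feature point i.
        diff = c[:, None, :] - c[None, :, :]
        return np.exp(-np.sum(diff ** 2, axis=2) / (2.0 * sigma ** 2))

    Gc = gram(sigma_coarse)
    w_coarse, *_ = np.linalg.lstsq(Gc, d, rcond=None)   # coarse approximation
    residual = d - Gc @ w_coarse
    Gf = gram(sigma_fine)
    w_fine = np.linalg.solve(Gf, residual)              # fine correction

    def delta_h(x, y):
        r2 = (c[:, 0] - x) ** 2 + (c[:, 1] - y) ** 2
        return float(np.exp(-r2 / (2.0 * sigma_coarse ** 2)) @ w_coarse
                     + np.exp(-r2 / (2.0 * sigma_fine ** 2)) @ w_fine)

    return delta_h
```

Each level only has to correct what the previous, coarser level missed, which is what makes the multilevel scheme fast in practice.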
  • In step Sa-4 of the first embodiment, processing such as selecting the same feature points as much as possible between frames was performed in order to maintain temporal coherence. In the second embodiment, such processing is unnecessary.
  • A non-perspective projection animation was generated by the method of the second embodiment described above.
  • The experimental conditions were the same as in Experimental Example 1: a 3.0 GHz Pentium (registered trademark) 4 CPU with 2 GB of RAM.
  • The computation time per frame was approximately 0.5-1.0 seconds; the method of the second embodiment thus increases the processing speed.
  • In the second embodiment, the feature points can be extracted without depending on the viewpoint, so no recalculation for feature-point extraction is needed when the viewpoint moves. This is considered to be the reason the processing is faster.
  • Each unit of the system, including the functional blocks, can be implemented by hardware, software, a network, a combination thereof, or any other means; this is obvious to those skilled in the art. It is also possible to combine functional blocks into a single functional block, and a functional block may be realized by the cooperation of a plurality of pieces of hardware or software.
  • FIG. 1 is a block diagram showing a schematic configuration of a projection map generation system according to a first embodiment of the present invention.
  • FIG. 2 is a flowchart for explaining a projection map generation method according to the first embodiment of the present invention.
  • Fig. 3 (a) is an example of a perspective projection view in which a part of the route is hidden by a mountain.
  • Figure (b) is an example of a non-perspective projection view in which the shielding of the route is avoided.
  • FIG. 4 (a) is a diagram schematically showing the arrangement of feature points where a route (road) is blocked.
  • Figure (b) is a diagram schematically showing a state in which shielding is avoided by changing the arrangement of feature points.
  • FIG. 5 is an explanatory diagram showing feature points on the topography.
  • Figure (b) is an explanatory diagram showing feature points on the road.
  • FIG. 6 An example of feature points extracted by the system is shown.
  • Fig. (A) shows feature points on the terrain
  • Fig. (B) shows feature points on the road.
  • FIG. 7 An explanatory diagram for explaining the arrangement of feature points.
  • Fig. (a) shows the arrangement on the perspective view,
  • Fig. (b) shows the optimal arrangement, and
  • Fig. (c) shows the arrangement projected onto the horizontal plane.
  • FIG. 8 An explanatory diagram for explaining Delaunay triangulation of feature points.
  • Fig. (A) shows the state before optimization
  • Fig. (B) shows the state after optimization.
  • FIG. 9 is an explanatory diagram for explaining the movement of the feature points in the three-dimensional space so as to satisfy the optimal arrangement on the projection plane.
  • FIG. 10 An explanatory diagram showing the limited range of the deformation area (range represented by black dots), where FIG. (A) is an example of wire frame display and FIG. (B) is an example of surface display.
  • FIG. 11 An explanatory diagram for explaining how feature points are taken.
  • Fig. (A) shows no reuse of feature points
  • Fig. (B) shows an example of reuse.
  • FIG. 12 is an explanatory diagram showing feature points on the topography in the second embodiment.
  • FIG. 13 (a) is a diagram showing an example of feature points on the terrain extracted by the system of the second embodiment.
  • FIG. 13 (b) is a diagram showing an example of feature points on the road extracted by the system of the second embodiment.
  • FIG. 14 (a) is an explanatory diagram of the Delaunay triangulation of feature points, showing the state before optimization.
  • FIG. 14 (b) is an enlarged view of FIG. 14 (a).
  • FIG. 15 (a) is an explanatory diagram of the Delaunay triangulation of feature points, showing an example of calculating the optimal arrangement of the feature points on the roads.
  • FIG. 15 (b) is an enlarged view of FIG. 15 (a).
  • FIG. 16 (a) is an explanatory diagram of the Delaunay triangulation of feature points, showing the state after optimization.
  • FIG. 16 (b) is an enlarged view of FIG. 16 (a).

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)
  • Instructional Devices (AREA)
  • Image Generation (AREA)

Abstract

[PROBLEMS] To provide a system capable of comparatively rapidly generating a projection diagram in which the shielding of geographical features such as a route is avoided. [MEANS FOR SOLVING PROBLEMS] A processing unit (1) performs the following processes: (1) a process of extracting, from a 3D terrain model, feature points expressing geographical features in parts which may be associated with occlusion; (2) a process of calculating an optimal arrangement of the feature points on a 2D projection plane when viewed from a certain viewpoint; (3) a process of generating a 3D terrain shape satisfying the optimal arrangement of the feature points on the 2D projection plane; (4) a process of generating a 2D projection diagram in which the 3D shape is viewed from the viewpoint or from its vicinity.

Description

Specification

Projection diagram generation system

Technical field

[0001] The present invention relates to a projection map generation system.

Background art
[0002] Car navigation systems have made remarkable progress in recent years and now provide three-dimensional bird's-eye-view display as a standard function, making it easy for users to relate the display to the actual landscape. On the other hand, when the 3D terrain shape is displayed stereoscopically using perspective projection, the searched route may be shielded by buildings or by the undulations of the terrain, and the information necessary for driving cannot be conveyed.

[0003] Commercial car navigation systems cope with this problem by, for example, displaying the occluding objects transparently. In situations where occlusions overlap several times, however, the route and the geographical features of interest also overlap, and their relative positional relationships are difficult to convey. To solve this occlusion problem fundamentally, it is necessary to break the law of perspective projection, arrange the roads and geographical features appropriately on the projection, and present them to the user. A projection that breaks perspective projection in this way is here called a non-perspective projection.

[0004] In research on non-perspective projection, several methods have been proposed: a method that distorts a perspective-projection image with two-dimensional transformations to combine the effects of projections from different viewpoints [Non-Patent Documents 4 and 8]; a method that breaks perspective projection with curved projection lines [Non-Patent Document 2]; a method that creates a panoramic image, of the kind used for the background art of cel animation, as seen from a specified camera path [Non-Patent Document 7]; and a method that renders individual independent objects from separate viewpoints and composites them on a two-dimensional screen with depth taken into account [Non-Patent Document 1]. Among these, techniques that improve the expressive power of non-perspective projection by deforming the three-dimensional model have been devised in recent years [Non-Patent Documents 3, 5, 6]. By deforming the 3D model, a wide variety of non-perspective projections can be expressed.

[0005] However, even with the techniques described in Non-Patent Documents 3, 5, and 6, it is difficult to grasp directly, from the deformation of the model in three-dimensional space, the resulting behavior on the two-dimensional projection plane. It is therefore hard to confirm whether shielding of the route has actually been avoided in the obtained 2D projection plane. If recalculation is performed whenever occlusion occurs, the amount of computation and the computation time increase. This problem becomes especially noticeable when an animation is generated by displaying successive 2D projection planes as frames, because a large number of images must be generated continuously.
非特許乂 ffl^l : M. Agrawala, D. Zorin, and T. unzner. Artistic multiprojection rena ering. In Eurographics Rendering Workshop 2000, pages 125-136, 2000. Non-patent 乂 ffl ^ l: M. Agrawala, D. Zorin, and T. unzner. Artistic multiprojection rena ering. In Eurographics Rendering Workshop 2000, pages 125-136, 2000.
非特許文献 2: ϊ". Kurzion and R. Yagel. Interactive space deformation with hardware -assisted rendering. IEEE Computer Graphics & Applications, 17(5): 66-77, 1997. 非特許文献 3: P. Rademacher. View-dependent geometry. In Computer Graphics (Pr oceedings of Siggraph '99), pages 439-446, 1999. Non-patent document 2: ϊ ". Kurzion and R. Yagel. Interactive space deformation with hardware -assisted rendering. IEEE Computer Graphics & Applications, 17 (5): 66-77, 1997. Non-patent document 3: P. Rademacher. View -dependent geometry.In Computer Graphics (Pr oceedings of Siggraph '99), pages 439-446, 1999.
Non-Patent Document 4: S. M. Seitz and C. R. Dyer. View morphing. In Computer Graphics (Proceedings of Siggraph '96), pages 21-30, 1996.
Non-Patent Document 5: K. Singh. A fresh perspective. In Proceedings of Graphics Interface 2002, pages 17-24, 2002.
Non-Patent Document 6: S. Takahashi, N. Ohta, H. Nakamura, Y. Takeshima, and I. Fujishiro. Modeling surperspective projection of landscapes for geographical guide-map generation. Computer Graphics Forum, 21(3):259-268, 2002.
Non-Patent Document 7: D. N. Wood, A. Finkelstein, J. F. Hughes, S. E. Thayer, and D. H. Salesin. Multiperspective panoramas for cel animation. In Computer Graphics (Proceedings of Siggraph '97), pages 243-250, 1997.
Non-Patent Document 8: D. Zorin and A. H. Barr. Correction of geometric perceptual distortions in pictures. In Computer Graphics (Proceedings of Siggraph '95), pages 257-264, 1995.
Disclosure of the Invention
Problems to Be Solved by the Invention
The present invention has been made in view of the above circumstances, and seeks to provide a system, method, or computer program capable of generating, at relatively high speed, a projection in which occlusion of geographical features such as a route is avoided. Another object of the present invention is to provide a system for displaying the generated two-dimensional projection.
Means for Solving the Problems
[0007] A projection generation system according to the present invention comprises a processing unit. The processing unit is configured to perform the following processes:
(1) a process of extracting, from a three-dimensional terrain model, feature points representing geographical features in portions that may be involved in occlusion;
(2) a process of computing an optimal arrangement of the feature points on a two-dimensional projection plane as seen from a certain viewpoint;
(3) a process of generating a three-dimensional terrain shape that satisfies the optimal arrangement of the feature points on the two-dimensional projection plane;
(4) a process of generating a two-dimensional projection of the three-dimensional shape as seen from the viewpoint or its vicinity.
[0008] The projection generation system of the present invention may further comprise a storage unit. The storage unit stores the three-dimensional terrain model. In this case, the processing unit is configured to acquire the three-dimensional terrain model from the storage unit. The processing unit is further configured to store the generated two-dimensional projection in the storage unit.
[0009] A projection display system according to the present invention comprises the above projection generation system and a display unit. The display unit is configured to display the two-dimensional projection.
[0010] In the projection generation system, the processing unit may be configured to generate two-dimensional projections at different points in time by performing the processes (1) to (4) in response to movement of the viewpoint. Further, when generating the two-dimensional projection at a certain point in time, the processing unit may be configured to use, as a feature point in the process (1), any feature point that satisfies the following condition:
(Condition)
The feature point was used in a two-dimensional projection generated before the certain point in time, and it lies within the view volume of the two-dimensional projection at the certain point in time.
[0011] The optimal arrangement of the feature points in the projection generation system is, for example, a state in which, on the two-dimensional projection plane as seen from the certain viewpoint, occlusion of the lines connecting the feature points on a road is avoided and the relative positional relationships among the feature points are preserved.
[0012] In the process (1), the feature points can be extracted, for example, by the following processes:
(a) a process of placing the viewpoint in a direction inclined at a fixed angle with respect to a horizontal plane passing through a reference point;
(b) a process of extracting, as feature points, all vertices that come to lie on the silhouette lines of the three-dimensional terrain shape obtained from the three-dimensional terrain model, while the position of the viewpoint is rotated horizontally, relative to the terrain shape, about the reference point.
[0013] A projection generation method according to the present invention comprises the following steps:
(1) extracting, from a three-dimensional terrain model, feature points representing geographical features in portions that may be involved in occlusion;
(2) computing an optimal arrangement of the feature points on a two-dimensional projection plane as seen from a certain viewpoint;
(3) generating a three-dimensional terrain shape that satisfies the optimal arrangement of the feature points on the two-dimensional projection plane;
(4) generating a two-dimensional projection of the three-dimensional shape as seen from the viewpoint or its vicinity.
[0014] A computer program according to the present invention causes a computer to execute the steps of the above projection generation method.
Effects of the Invention
[0015] According to the present invention, it is possible to provide a system, method, or computer program capable of generating, at relatively high speed, a projection in which occlusion of geographical features such as a route is avoided.

Best Mode for Carrying Out the Invention
[0016] A first embodiment of the present invention will now be described with reference to the accompanying drawings.
[0017] (Configuration of the First Embodiment)
As shown in FIG. 1, the projection generation system according to this embodiment comprises a processing unit 1, a storage unit 2, a display unit 3, and a communication path 4.
[0018] The processing unit 1 can be implemented, for example, by a CPU.
[0019] The processing unit 1 is configured to perform the following processes:
(1) a process of extracting, from a three-dimensional terrain model, feature points representing geographical features in portions that may be involved in occlusion;
(2) a process of computing an optimal arrangement of the feature points on a two-dimensional projection plane as seen from a certain viewpoint;
(3) a process of generating a three-dimensional terrain shape that satisfies the optimal arrangement of the feature points on the two-dimensional projection plane;
(4) a process of generating a two-dimensional projection of the three-dimensional shape as seen from the viewpoint or its vicinity.
[0020] The details of these processes will be described later as the operation of this embodiment. These processes can be implemented by computer software executed on a computer (or CPU).
[0021] The storage unit 2 can store the necessary data (information), such as the three-dimensional terrain model used in the processing by the processing unit 1, the computer software required to operate the processing unit 1, and the three-dimensional terrain shapes and two-dimensional projections generated by the processing unit 1. The storage unit 2 can be implemented by an internal or external storage device.
[0022] The processing unit 1 is configured to acquire the three-dimensional terrain model from the storage unit 2, and to store the generated three-dimensional terrain shapes and two-dimensional projections in the storage unit 2.
[0023] The display unit 3 is configured to display the two-dimensional projections generated by the processing unit 1. The display unit 3 is, for example, a display.
[0024] The communication path 4 is a medium that enables information to be exchanged among the processing unit 1, the storage unit 2, and the display unit 3. The communication path 4 may be, for example, a bus line inside a computer, or a network such as a LAN or the Internet. That is, the processing unit 1, the storage unit 2, and the display unit 3 may be physically separated from one another and connected by a network. Since the means for connecting to a network, such as interface configurations and protocols, are well known, a detailed description is omitted.
[0025] (Operation of the First Embodiment)
Next, a method of generating a two-dimensional projection using the above system will be described with reference to the flowchart of FIG. 2. The system is implemented as a car navigation system. The three-dimensional terrain model used in this system is assumed to be represented by a single-valued function. In this embodiment, a single-valued function is one in which, for example, the elevation is uniquely determined once the latitude and longitude are given.
[0026] When a route to be travelled is given, the system of this embodiment generates a bird's-eye-view animation such as that shown in FIG. 3. In this system, the depression angle of the viewpoint can take values from 20 to 70 degrees, as in typical car navigation systems; here, an example with the depression angle set to 30 degrees is shown, so that the effect of occlusion avoidance can be observed most clearly.
[0027] In this system, in each generated frame, the projection is transformed so that any occlusion of the route is avoided, while otherwise keeping the state of the perspective projection as much as possible; a non-perspective projection is thus created. FIG. 3 shows snapshots obtained with the implemented system, in which the sphere represents the current position, the dark (thick) lines represent the road serving as the route, and the light lines represent roads that are not part of the route (that is, roads not to be travelled). FIG. 3(a) is a perspective projection in which part of the route is occluded, and FIG. 3(b) is a display example of a non-perspective projection in which the occlusion of the route is avoided.
[0028] Briefly stated, the idea behind avoiding occlusion of the driving route in the method of this embodiment is as follows. If, on the projection plane, a mountain silhouette line 5 lies over a road 6 and occlusion occurs as in FIG. 4(a), a two-dimensional projection is generated in which the silhouette line 5 does not cross the road 6, as in FIG. 4(b). Such a change can be made by extracting feature points from the geographical features such as mountains and roads, changing their arrangement on the projection plane, and reflecting that transformation in the terrain shape. This is described in detail below. In FIG. 4, reference sign F1 denotes a road feature point and reference sign F2 denotes a terrain feature point.
[0029] (Step Sa-1 in FIG. 2)
In this step, the processing unit 1 extracts, from the three-dimensional terrain model, feature points representing geographical features in portions that may be involved in occlusion. Which points are chosen as feature points, and how many are extracted, depend on the required accuracy of the two-dimensional projection, the computer used, and other constraints. In general, for a car navigation system, points near mountain peaks and points for reproducing road shapes are selected. Points for reproducing building outlines (vertices and bend points) may also be used as feature points.
[0030] In this embodiment, mountain peaks F21, valley bottoms, their end points F22, and intersections between silhouette lines F23 (see FIG. 5(a)) are extracted as feature points from the silhouette lines of the terrain surface. This extraction can be computed easily using the viewpoint information and the elevation information of the three-dimensional terrain model. Since such computation methods are conventionally known, a detailed description is omitted.
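As a concrete illustration of finding terrain silhouette features, the following is a minimal sketch in Python. The tiny triangulated height field, the orthographic view direction, and the silhouette test used here (a sign change of the normal/view dot product across the two faces sharing an edge) are illustrative assumptions, not the implementation of this embodiment.

```python
# Silhouette-edge detection on a small triangulated height field.
# Vertices on the returned edges are candidates for terrain feature points.

def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def tri_normal(p0, p1, p2):
    return cross(sub(p1, p0), sub(p2, p0))

def silhouette_edges(verts, tris, view_dir):
    """Edges whose two adjacent triangles face opposite ways w.r.t. view_dir."""
    normals = [tri_normal(*(verts[i] for i in t)) for t in tris]
    edge_faces = {}
    for f, t in enumerate(tris):
        for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
            edge_faces.setdefault(tuple(sorted(e)), []).append(f)
    sil = []
    for e, faces in edge_faces.items():
        if len(faces) == 2:
            d0 = dot(normals[faces[0]], view_dir)
            d1 = dot(normals[faces[1]], view_dir)
            if d0 * d1 < 0:   # one face toward the view, one away: silhouette
                sil.append(e)
    return sil

# Tiny ridge: two rows of vertices (x, y, z=height) with a peak in the middle.
verts = [(0, 0, 0), (1, 0, 2), (2, 0, 0),
         (0, 1, 0), (1, 1, 2), (2, 1, 0)]
tris = [(0, 1, 4), (0, 4, 3), (1, 2, 5), (1, 5, 4)]
edges = silhouette_edges(verts, tris, view_dir=(1, 0, 0))  # ridge edge (1, 4)
```

Viewed along the +x axis, only the ridge crest separates front-facing from back-facing triangles, so the single returned edge runs through the two peak vertices.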
[0031] Furthermore, in this embodiment, points of maximal curvature F11, end points F12, intersections F13, and inflection points F14 (FIG. 5(b)) are also extracted from the roads as feature points. By connecting these feature points, the road shape can be approximated roughly. Note that a typical car navigation system often holds road information as two-dimensional information, separately from the information on the terrain surface; in this embodiment, this two-dimensional information is also treated as conceptually included in the three-dimensional terrain model. In other words, the three-dimensional terrain model may be a collection of several pieces of information, including two-dimensional information.
[0032] In this embodiment, it is not necessary to use the feature points on the terrain and roads of the entire three-dimensional terrain model; it suffices to select and use only the feature points inside the view volume.
[0033] Furthermore, the road feature points illustrated in FIG. 5(b) may be limited to feature points on the driving route and feature points on the roads around intersections. For occlusion avoidance, the feature points on the road serving as the route (points that approximate the road when connected by straight lines) are sufficient by themselves. Extracting feature points on the roads around intersections, however, prevents the road shapes around the intersections from being distorted.
[0034] FIG. 6 shows an example of feature points extracted automatically by the system. The black (square) points in the figure are the feature points. In this embodiment, a feature point is specified by its coordinate information. The information on the extracted feature points is also stored in the storage unit 2.
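The road feature points described in paragraph [0031] can be sketched as follows, assuming the road is given as a list of (x, y) vertices. Using the discrete turning angle at each vertex as a stand-in for curvature, and the 30-degree threshold, are illustrative choices; intersection and inflection detection are omitted for brevity.

```python
# Extract end points and sharp bends (curvature maxima) from a road polyline.
import math

def turning_angle(a, b, c):
    """Absolute change of heading at vertex b of polyline a-b-c."""
    h1 = math.atan2(b[1]-a[1], b[0]-a[0])
    h2 = math.atan2(c[1]-b[1], c[0]-b[0])
    d = (h2 - h1 + math.pi) % (2*math.pi) - math.pi  # wrap to [-pi, pi)
    return abs(d)

def road_feature_points(polyline, angle_threshold=math.radians(30)):
    pts = [polyline[0], polyline[-1]]                # end points
    for i in range(1, len(polyline) - 1):
        if turning_angle(polyline[i-1], polyline[i], polyline[i+1]) >= angle_threshold:
            pts.append(polyline[i])                  # sharp bend ~ curvature maximum
    return pts

road = [(0, 0), (1, 0), (2, 0.05), (3, 1.2), (4, 1.25), (5, 1.3)]
features = road_feature_points(road)  # two end points plus the two sharp bends
```

Connecting the returned points with straight lines gives the rough approximation of the road shape mentioned above.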
[0035] (Step Sa-2 in FIG. 2)
In this step, the optimal arrangement of the feature points on the two-dimensional projection plane as seen from a certain viewpoint is obtained using a spring model. The optimal arrangement here means an arrangement that avoids occlusion of the road while preserving the original perspective projection as much as possible. In this embodiment, avoiding occlusion of the road means avoiding occlusion of the lines connecting the feature points on the road, and preserving the original perspective projection as much as possible means preserving the relative positional relationships of the feature points.
[0036] Such an optimal arrangement can be obtained by first finding a positional relationship of the feature points in which occlusion is completely avoided, and then moving from that relationship toward the feature-point arrangement of the perspective projection, for example using a spring model. The processing in this step is carried out with respect to the display state on the two-dimensional projection plane.
[0037] First, as a positional relationship in which occlusion is completely avoided, the positions {q_i} (i = 1, ..., n) obtained by projecting the feature points onto a horizontal plane are used (n is the number of feature points; see FIG. 7(c)). Because the terrain shape is assumed to be a single-valued function, projection onto the horizontal plane is guaranteed never to produce occlusion.
[0038] Next, in order to bring that arrangement as close as possible to the arrangement {p_i} (i = 1, ..., n) of the feature points under perspective projection (see FIG. 7(a)), a spring model is constructed as follows. The springs used in this model are purely virtual entities for computation.
[0039] First, a Delaunay triangulation is applied to the feature-point arrangement of FIG. 7(c) (see FIG. 8(a)), and a spring that preserves the relative positional relationship of the feature points is attached to the end points of each edge. In addition, a spring is attached between p_i and q_i so that each feature point q_i is drawn toward the corresponding perspective-projection position p_i. The optimal arrangement of the feature points (FIG. 7(b)) is obtained as the equilibrium state of this spring model. By checking during the optimization that the edges connecting feature points do not cross, the relative positional relationships between the road feature points and the terrain feature points connected to them are kept unchanged, and a feature-point arrangement free of occlusion is obtained (see FIG. 8(b)).
[0040] 本実施形態の実装では、各特徴点に働くばねの力を以下のように定式化した。
\[
\mathbf{f}_i = \alpha\,(\mathbf{p}_i - \mathbf{x}_i)
+ \beta \sum_{j \in A_i} \bigl( \lvert \mathbf{x}_i - \mathbf{x}_j \rvert - \lvert \mathbf{q}_i - \mathbf{q}_j \rvert \bigr)\,
\frac{\mathbf{x}_j - \mathbf{x}_i}{\lvert \mathbf{x}_j - \mathbf{x}_i \rvert}
+ \gamma \sum_{i \in R,\; j \in T} \frac{\mathbf{x}_i - \mathbf{x}_j}{\lvert \mathbf{x}_i - \mathbf{x}_j \rvert^{2}}
\tag{1}
\]
[0041] In equation (1), f_i is the force acting on the feature point with index i, A_i is the set of indices of the feature points adjacent to it in the Delaunay triangulation, and R and T are the sets of indices of the feature points on the roads and on the terrain, respectively. Also in equation (1), x_i is the position of the feature point; when the spring model reaches equilibrium, this value converges to r_i in FIG. 7(b). The first term on the right-hand side is the force drawing the point toward the perspective projection, and the second term is the force preserving the relative positional relationships. The third term acts as a repulsive force between the road feature points and the terrain feature points; this repulsion is introduced so that the individual geographical features, such as roads and mountains, are placed as far apart as possible on the projection without interfering with one another.
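A toy numerical relaxation of the spring model above can be sketched as follows. It keeps the first two kinds of force described for equation (1) — the pull toward the perspective positions p_i, and springs along the triangulation edges whose rest lengths come from the occlusion-free layout q_i — and omits the road/terrain repulsion and the edge-crossing check for brevity. The coefficients, the explicit edge list standing in for a Delaunay triangulation, and the plain fixed-step iteration are illustrative assumptions.

```python
# Relax feature-point positions x_i between the occlusion-free layout q
# and the perspective layout p, as the equilibrium of a spring system.

def relax(p, q, edges, alpha=0.1, beta=0.2, iters=500, step=0.5):
    x = [list(pt) for pt in q]               # start from the occlusion-free layout
    for _ in range(iters):
        forces = [[0.0, 0.0] for _ in x]
        for i, (px, py) in enumerate(p):     # pull toward perspective positions
            forces[i][0] += alpha * (px - x[i][0])
            forces[i][1] += alpha * (py - x[i][1])
        for i, j in edges:                   # springs keeping the relative layout
            dx = x[j][0] - x[i][0]
            dy = x[j][1] - x[i][1]
            dist = (dx*dx + dy*dy) ** 0.5 or 1e-9
            rest = ((q[j][0]-q[i][0])**2 + (q[j][1]-q[i][1])**2) ** 0.5
            f = beta * (dist - rest) / dist  # stretched -> pull ends together
            forces[i][0] += f*dx; forces[i][1] += f*dy
            forces[j][0] -= f*dx; forces[j][1] -= f*dy
        for i in range(len(x)):
            x[i][0] += step * forces[i][0]
            x[i][1] += step * forces[i][1]
    return x

p = [(0.0, 0.0), (1.0, 0.0)]   # perspective-projected positions
q = [(0.0, 0.0), (2.0, 0.0)]   # occlusion-free (horizontal-plane) positions
x = relax(p, q, edges=[(0, 1)])
```

With one spring of rest length 2 and anchors 1 apart, the equilibrium is a compromise: the two points settle 1.8 apart, symmetric about the anchors, at about (-0.4, 0) and (1.4, 0).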
[0042] (Step Sa-3)
In this step, the positions of the feature points in three-dimensional space are determined so as to satisfy the optimal arrangement of the feature points on the two-dimensional projection plane obtained in step Sa-2. Using these feature-point positions as constraints in three-dimensional space, the three-dimensional terrain model is deformed. In this way, a three-dimensional terrain shape satisfying the feature-point arrangement obtained on the two-dimensional projection plane can be generated.
[0043] However, the depth-direction position of a feature point in three-dimensional space cannot be determined from the feature-point arrangement on the two-dimensional projection plane alone. In this embodiment, as shown in FIG. 9, the transformation is performed so that the distance from the screen S (projection plane) to the feature point F24 on the three-dimensional terrain model does not change between before and after the deformation. This determines the position F241 of the terrain feature point after deformation (its position in three-dimensional space), and the terrain model is deformed with that position as a geometric constraint. Here, the feature points projected onto the screen S are denoted by F24S and F241S.

[0044] More specifically, in this embodiment, the difference between the terrain shapes before and after deformation is expressed as a linear combination of unimodal basis functions such as Gaussian functions, and the difference shape can be obtained by solving the simultaneous equations derived from the positional constraints on the terrain feature points. By working with the difference shape in this way, the deformation can be applied while preserving the high-frequency undulations inherent in the terrain surface. Since deformation operations using such basis functions are themselves well known, a detailed description is omitted.
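The difference-shape computation can be sketched as a small interpolation problem: write the height change as a weighted sum of Gaussian bumps centred on the constrained feature points, then solve the linear system that makes the sum match the prescribed offsets at those points. The Gaussian width and the naive elimination solver below are illustrative assumptions.

```python
# Fit a difference shape (height-change field) through feature-point constraints
# using Gaussian basis functions.
import math

def gaussian(p, c, sigma=1.0):
    d2 = (p[0]-c[0])**2 + (p[1]-c[1])**2
    return math.exp(-d2 / (2*sigma*sigma))

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting (n is small)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_difference_shape(centres, offsets, sigma=1.0):
    A = [[gaussian(ci, cj, sigma) for cj in centres] for ci in centres]
    return solve(A, offsets)

def difference_height(p, centres, weights, sigma=1.0):
    return sum(w * gaussian(p, c, sigma) for c, w in zip(centres, weights))

centres = [(0.0, 0.0), (3.0, 0.0)]  # constrained terrain feature points (x, y)
offsets = [1.0, -0.5]               # required height changes at those points
w = fit_difference_shape(centres, offsets)
```

Adding `difference_height` to the original elevation applies the deformation while the terrain's own high-frequency detail is carried along unchanged, since only the smooth difference field is modified.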
[0045] In this step as well, as with the feature-point extraction, efficiency can be improved by limiting the region of deformation. In practice, to cope with sudden camera pans, it is preferable to apply the deformation over the range of the view volume obtained when the projection plane is enlarged to roughly twice its size. FIG. 10 is a view from farther away than the normal viewpoint, and the region marked by black points indicates the area that is actually deformed.
[0046] (Step Sa-4)
By viewing the three-dimensional terrain shape generated as described above from the viewpoint, an individual static non-perspective projection (two-dimensional projection) can be generated; that is, a two-dimensional projection in which occlusion is avoided is obtained. Since occlusion is in principle already avoided in the resulting two-dimensional projection, recalculation for occlusion avoidance is in principle unnecessary. This embodiment can therefore obtain an occlusion-free two-dimensional projection relatively quickly. The obtained static non-perspective projection (two-dimensional projection) can also be displayed on the display unit 3.
[0047] Furthermore, by performing the processing of each of the steps described above in response to movement of the viewpoint, the processing unit 1 can generate two-dimensional projections at different points in time. If these projections are displayed in succession, an animation (moving image) whose display changes along the route (that is, in response to changes of the viewpoint) can be shown. In the example of this embodiment, this display can be performed by the display unit 3.
[0048] However, simply playing back the projections generated in the above steps frame by frame does not produce an animation that maintains temporal coherence. To maintain such coherence, two measures are applied in this embodiment.
[0049] First, the same feature points are selected between frames whenever possible. This is achieved by continuing to use a geographical feature point that was used in the previous frame, as it is, whenever it still lies within the view volume in the current frame.
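The feature-point reuse rule can be sketched as follows. The rectangular view-volume test and the merge policy (keep surviving points first, then append newly extracted ones) are illustrative assumptions.

```python
# Keep previous-frame feature points that are still visible, then add new ones.

def select_frame_features(previous, fresh, in_view):
    kept = [f for f in previous if in_view(f)]
    seen = set(kept)
    return kept + [f for f in fresh if in_view(f) and f not in seen]

def make_view_test(xmin, xmax, ymin, ymax):
    return lambda f: xmin <= f[0] <= xmax and ymin <= f[1] <= ymax

in_view = make_view_test(0, 10, 0, 10)
prev_pts = [(1, 1), (12, 5)]    # (12, 5) has left the view volume
new_pts = [(1, 1), (4, 7)]      # freshly extracted in the current frame
pts = select_frame_features(prev_pts, new_pts, in_view)
```

Because (1, 1) is carried over from the previous frame rather than re-derived, the deformation computed from it stays consistent between frames.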
[0050] That is, when generating the two-dimensional projection at a certain point in time, if a feature point satisfying the following condition exists, it is used as a feature point in the processing of step Sa-1.
(Condition)
The feature point was used in a two-dimensional projection generated before the certain point in time, and it lies within the view volume of the two-dimensional projection at the certain point in time.
[0051] FIG. 11 shows how feature points are chosen in consecutive frames. FIG. 11(a) shows the result without feature-point reuse; it can be seen that completely different feature points are selected when the frame advances by one. In this figure, the time axis runs from top to bottom. As can be seen from FIG. 11(b), reusing feature points also eliminates the differences in deformation caused by non-uniform feature points.
[0052] Furthermore, inter-frame coherence can be maintained better by smoothing the difference shapes that represent the deformations obtained in several preceding and following frames. For example, smoothing can be performed over the four frames before and after the current frame, with each frame weighted by a Gaussian function centred on the current frame.
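The inter-frame smoothing can be sketched as follows, with a single scalar per frame standing in for the difference shape. The Gaussian width is an illustrative assumption; in practice the same weighted average would be applied to the whole difference-shape field of each frame.

```python
# Gaussian-weighted temporal smoothing over the 4 frames before and after.
import math

def smooth_frames(values, radius=4, sigma=2.0):
    out = []
    for t in range(len(values)):
        acc = wsum = 0.0
        for k in range(-radius, radius + 1):
            if 0 <= t + k < len(values):
                w = math.exp(-(k * k) / (2 * sigma * sigma))
                acc += w * values[t + k]
                wsum += w
        out.append(acc / wsum)   # normalise so truncated windows keep scale
    return out

deforms = [0, 0, 0, 10, 0, 0, 0]   # a one-frame spike in the deformation
smoothed = smooth_frames(deforms)
```

The one-frame spike is spread over its neighbours, so the deformation no longer jumps abruptly between consecutive frames.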
[0053] (Experimental Example 1)
Based on actual terrain and road data, a non-perspective-projection animation was generated by the method of this embodiment described above. The computation time per frame is approximately 3 seconds on a 3.0 GHz Pentium (registered trademark) 4 CPU with 2 GB of RAM.
[0054] Fig. 3(b) shows an excerpt of the results; Fig. 3(a) is the corresponding perspective projection.
According to the present embodiment, unnecessary occlusion can be avoided by appropriately arranging the route on the projection.
[0055] なお、前記各実施形態の記載は単なる一例に過ぎず、本発明に必須の構成を示し たものではない。各部の構成は、本発明の趣旨を達成できるものであれば、上記に 限らない。  It should be noted that the description of each of the above embodiments is merely an example, and does not indicate a configuration essential to the present invention. The configuration of each part is not limited to the above as long as the gist of the present invention can be achieved.
[0056] 例えば、前記実施形態では、ステップ Sa_ 2において、特徴点の最適配置を求め る手法としてばねモデルを用いた。し力 ながら、このような手法としては、他に、特徴 点間の間に働く力を原子間の力に近似させるパーティクルなど、種々のものが存在 する。パーティクル自体はよく知られた方法である。要するに、特徴点の最適配置を 求めることが可能な手法であれば具体的な内容は限定されない。 [0056] For example, in the embodiment, in step Sa_2, the optimum arrangement of feature points is obtained. The spring model was used as a method for this. However, there are various other methods such as particles that approximate the force acting between feature points to the force between atoms. The particles themselves are a well-known method. In short, the specific content is not limited as long as it is a method capable of obtaining the optimum arrangement of feature points.
[0057] また、前記実施形態では、道路の遮蔽を防ぐ手法を説明したが、道路以外の特徴 、例えば建物や地理的目印の遮蔽を防止することもできる。この場合は、遮蔽を防ぎ たレ、対象物の輪郭上に特徴点を配置すればょレ、。  In the above embodiment, the method for preventing the shielding of the road has been described. However, it is also possible to prevent the shielding of features other than the road, such as buildings and geographical landmarks. In this case, the feature points should be placed on the contour of the object, preventing occlusion.
[0058] (第 2実施形態の構成及び動作)  (Configuration and operation of the second embodiment)
In step Sa-1 of the first embodiment described above (see Fig. 2), mountain peaks F21, valley bottoms, their end points F22, and intersections of contour lines F23 (see Fig. 5(a)) were extracted as feature points from the contour lines of the terrain surface. This extraction was performed using the viewpoint information and the elevation information of the three-dimensional terrain model.
[0059] Feature point extraction in the second embodiment is performed as follows. As in the first embodiment, mountain peaks, valley bottoms, their end points, and intersections of contour lines (see Fig. 12) are extracted as feature points from the contour lines of the terrain surface. In the second embodiment, however, the feature points are extracted at a plurality of viewpoint positions, moving the viewpoint P so as to surround the target terrain surface. The movement trajectory of the viewpoint P is denoted by the symbol R. The union of the feature point sets obtained at the respective viewpoint positions is then taken, and this union as a whole is used as a viewpoint-independent set of feature points of the terrain surface. Since methods for extracting contour lines and the feature points on them from a terrain surface are conventionally known, a detailed description is omitted. Such feature points on the terrain surface can also be extracted in advance and read as input together with the terrain surface data.
[0060] すなわち、本実施形態による特徴点の抽出は、以下の処理により行われる:  That is, feature point extraction according to the present embodiment is performed by the following processing:
(a) A process of placing the viewpoint P in a direction inclined at a certain fixed angle with respect to a horizontal plane passing through a reference point. The fixed angle is not particularly limited, but in the case of a typical car navigation system it is in the range of 20° to 70°. The reference point is a point that is always viewed from the viewpoint P; in the case of a car navigation system, for example, it is the position of the vehicle.
(b) A process of extracting, as feature points, all vertices that lie on a contour line of the three-dimensional terrain shape obtained from the three-dimensional terrain model when the viewpoint position is rotated relative to the terrain shape about the reference point in the horizontal direction. Here, a vertex means a vertex of the polyhedron constituting the three-dimensional terrain model. The relative rotation angle of the viewpoint about the reference point is generally a full revolution (360°) (see Fig. 12). The viewpoint may be moved continuously, but it is usually preferable to move it discretely, in steps of about 15° to 30°. In short, any viewpoint movement method may be used as long as it can extract substantially all the necessary vertices.
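Processes (a) and (b) can be sketched as follows: a mesh vertex is treated as lying on a contour (silhouette) line when it is shared by a front-facing and a back-facing triangle, and the union of these vertex sets is taken over viewpoints rotated about the reference point. The mesh representation, the consistent outward orientation of the triangles, and the default elevation angle, radius, and 20° step are illustrative assumptions, not the patent's implementation:

```python
from collections import defaultdict
import numpy as np

def silhouette_vertices(vertices, faces, eye):
    """Vertices shared by a front-facing and a back-facing triangle, seen from `eye`."""
    v = np.asarray(vertices, float)
    faces = np.asarray(faces, int)
    normals = np.cross(v[faces[:, 1]] - v[faces[:, 0]],
                       v[faces[:, 2]] - v[faces[:, 0]])
    centers = v[faces].mean(axis=1)
    front = np.einsum('ij,ij->i', normals, eye - centers) > 0.0
    edge_faces = defaultdict(list)           # edge -> adjacent face indices
    for fi, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_faces[tuple(sorted(e))].append(fi)
    sil = set()
    for (a, b), fs in edge_faces.items():
        if len(fs) == 2 and front[fs[0]] != front[fs[1]]:
            sil.update((int(a), int(b)))     # edge separates front from back
    return sil

def viewpoint_independent_features(vertices, faces, ref, elev_deg=45.0,
                                   radius=10.0, step_deg=20.0):
    """Union of silhouette vertices over viewpoints rotated about the reference point."""
    feats = set()
    for az in np.arange(0.0, 360.0, step_deg):
        a, e = np.radians(az), np.radians(elev_deg)
        eye = ref + radius * np.array([np.cos(e) * np.cos(a),
                                       np.cos(e) * np.sin(a),
                                       np.sin(e)])
        feats |= silhouette_vertices(vertices, faces, eye)
    return feats
```

On a terrain mesh, this union is what makes the feature set viewpoint-independent, so it can be precomputed and loaded together with the terrain data, as the text notes.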
[0061] Also in the present embodiment, points of locally maximal curvature, end points, and intersections (Fig. 5(b)) are extracted from the roads as feature points.
[0062] Fig. 13 shows an example of feature points automatically extracted by the system of the second embodiment. The black (square) points in the figure are the feature points.
[0063] In step Sa-2 of the first embodiment described above, a spring model was used to bring the positions {qi} (i=1,...,n) of the feature points projected onto the horizontal plane (see Fig. 7(c)) as close as possible to the arrangement {pi} (i=1,...,n) of the feature points under perspective projection (see Fig. 7(a)). This point is basically the same in the second embodiment, but the second embodiment uses the following method.
[0064] In the second embodiment as well, a spring model is constructed to bring the positions {qi} (i=1,...,n) of the feature points projected onto the horizontal plane (see Fig. 7(c)) as close as possible to the perspective-projection arrangement {pi} (i=1,...,n) (see Fig. 7(a)) while avoiding occlusion of the roads. In the second embodiment, however, in the process of obtaining the optimal arrangement of the feature points, each feature point is constrained to move on the straight line connecting {qi} and {pi} on the projection plane, which simplifies the computation.
[0065] The optimal arrangement of the feature points is computed in two stages. First, Delaunay triangulation is applied to the feature point arrangement of Fig. 7(c) (see Figs. 14(a) and (b)), and a spring that preserves the relative positional relationship of the feature points is attached to the end points of each edge. In addition, a spring is also attached between pi and qi so that each feature point qi approaches the corresponding perspective-projection position pi. The optimal arrangement of the feature points (Fig. 7(b)) is obtained as the equilibrium state of this spring model. At this stage, in the second embodiment, only the optimal arrangement of the feature points on the roads is determined first, while still permitting intersections of the edges connecting feature points that cause occlusion of the roads (see Figs. 15(a) and (b)).
[0066] After that, with the positions of the road feature points fixed, the positions of the feature points on the terrain surface are moved so as to avoid intersections of the edges connecting the feature points, and the spring-model computation is performed again so as to avoid those intersections, yielding the optimal arrangement of the feature points on the terrain surface (see Figs. 16(a) and (b)). As a result, a feature point arrangement in which occlusion is avoided can be obtained.
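The constrained relaxation of paragraphs [0064] to [0066] can be sketched as a spring system in which each feature point i is parameterized by a scalar t_i on the segment from its flattened position q_i to its perspective position p_i: springs along the triangulation edges preserve relative positions, a second set of springs pulls each point toward p_i, and the gradient is projected onto each point's line. The gradient-descent solver, the stiffness constants, and passing in the edge list directly (instead of computing the Delaunay triangulation here) are simplifications, not the patent's implementation:

```python
import numpy as np

def optimize_on_lines(q, p, edges, k_shape=1.0, k_target=0.2,
                      iters=500, lr=0.05):
    """Relax feature points, each constrained to its segment q_i -> p_i.

    q, p  : (n, 2) arrays of flattened and perspective positions.
    edges : index pairs, e.g. from a Delaunay triangulation of q.
    """
    q, p = np.asarray(q, float), np.asarray(p, float)
    d = p - q                          # allowed movement direction per point
    t = np.zeros(len(q))               # scalar position along each segment
    rest = {(i, j): np.linalg.norm(q[i] - q[j]) for i, j in edges}
    for _ in range(iters):
        x = q + t[:, None] * d
        g = 2.0 * k_target * (x - p)   # pull toward the perspective position
        for (i, j), r in rest.items():
            v = x[i] - x[j]
            length = np.linalg.norm(v) + 1e-12
            f = 2.0 * k_shape * (length - r) * (v / length)
            g[i] += f                  # shape-preserving spring force
            g[j] -= f
        # project the gradient onto each point's line and take a step
        t -= lr * np.einsum('ij,ij->i', g, d) / (np.einsum('ij,ij->i', d, d) + 1e-12)
        t = np.clip(t, 0.0, 1.0)       # stay between q_i and p_i
    return q + t[:, None] * d
```

Running this once with only the road feature points, then again with the road points frozen and the terrain points free, mirrors the two stages described above.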
[0067] Furthermore, in step Sa-3 of the first embodiment described above, the difference between the terrain shapes before and after deformation is expressed as a linear sum of unimodal basis functions such as Gaussian functions, and the difference shape is obtained by solving the simultaneous equations derived from the positional constraints of the feature points on the terrain.
[0068] In the second embodiment, the difference between the terrain shapes before and after deformation is expressed hierarchically as a linear sum of unimodal basis functions using B-spline functions, and B-spline approximations of the positional constraints of the feature points on the terrain are applied successively from coarse to fine. This makes it possible to obtain, at high speed, a difference shape that satisfies the positional constraints of the feature points. By considering the difference shape in this way, the deformation can be applied while preserving the high-frequency undulations originally present in the terrain surface. Such deformation operations using basis functions are already known, as shown in the following paper, and a detailed description is therefore omitted.
S. Lee, G. Wolberg, and S. Y. Shin, "Scattered Data Interpolation with Multilevel B-Splines," IEEE Transactions on Visualization and Computer Graphics, Vol. 3, No. 3, pp. 228–244, 1997.
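The coarse-to-fine residual scheme of Lee et al. can be sketched as follows. For brevity this sketch fits a bilinear lattice at each level rather than their cubic B-spline lattice, so it illustrates the hierarchy, not the cited algorithm in full; the lattice sizes are arbitrary choices:

```python
import numpy as np

def ba_level(xy, r, n):
    """Fit one (n+1) x (n+1) bilinear lattice over [0,1]^2 to scattered residuals r."""
    num = np.zeros((n + 1, n + 1))
    den = np.zeros((n + 1, n + 1))
    g = np.clip(np.asarray(xy, float) * n, 0.0, n - 1e-9)
    ij = g.astype(int)
    u, v = (g - ij).T
    w = np.stack([(1 - u) * (1 - v), u * (1 - v), (1 - u) * v, u * v], axis=1)
    sw = (w ** 2).sum(axis=1)
    for k, (di, dj) in enumerate(((0, 0), (1, 0), (0, 1), (1, 1))):
        phi_c = w[:, k] * r / sw       # per-point estimate of the node value
        np.add.at(num, (ij[:, 0] + di, ij[:, 1] + dj), w[:, k] ** 2 * phi_c)
        np.add.at(den, (ij[:, 0] + di, ij[:, 1] + dj), w[:, k] ** 2)
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0.0)

def eval_lattice(phi, xy):
    """Bilinear evaluation of a lattice at scattered points."""
    n = phi.shape[0] - 1
    g = np.clip(np.asarray(xy, float) * n, 0.0, n - 1e-9)
    ij = g.astype(int)
    u, v = (g - ij).T
    i, j = ij.T
    return ((1 - u) * (1 - v) * phi[i, j] + u * (1 - v) * phi[i + 1, j]
            + (1 - u) * v * phi[i, j + 1] + u * v * phi[i + 1, j + 1])

def multilevel_fit(xy, z, levels=(2, 4, 8, 16)):
    """Fit each level to the residual left by the coarser levels."""
    lattices, r = [], np.asarray(z, float).copy()
    for n in levels:
        phi = ba_level(xy, r, n)
        lattices.append(phi)
        r = r - eval_lattice(phi, xy)  # what the next, finer level must explain
    return lattices

def evaluate(lattices, xy):
    """Sum the contributions of all levels."""
    return sum(eval_lattice(phi, xy) for phi in lattices)
```

Because each level only has to approximate the residual of the previous one, the positional constraints are satisfied progressively, which is the source of the speed advantage the text describes.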
[0069] Furthermore, in step Sa-4 of the first embodiment, processing such as selecting the same feature points as far as possible between frames was performed in order to preserve temporal coherence. In the second embodiment, however, such processing is not necessary.
[0070] The configuration and advantages of the second embodiment other than those described above are the same as in the first embodiment, and their description is therefore omitted.
[0071] (Experimental Example 2)
Based on actual terrain and road data, a non-perspective projection animation was generated by the method of the second embodiment described above. The experimental conditions were the same as in Experimental Example 1: a 3.0 GHz Pentium (registered trademark) 4 CPU with 2 GB of RAM. In this case, the computation time per frame was approximately 0.5–1.0 seconds, which shows that the method of the second embodiment achieves faster processing. Unlike the first embodiment, the second embodiment can extract the feature points in a viewpoint-independent manner, so no recomputation for feature point extraction is needed when the viewpoint moves. This is considered to be the reason for the speedup.
[0072] なお、各実施形態を実現するための各部 (機能ブロックを含む)の具体的手段は、 ハードウェア、ソフトウェア、ネットワーク、これらの組み合わせ、その他の任意の手段 を用いることができ、このこと自体は当業者において自明である。また、機能ブロック どうしが複合して一つの機能ブロックに集約されても良レ、。さらに、機能ブロックが複 数のハードウェアまたはソフトウェアの協働によって実現されても良い。 [0072] As specific means of each unit (including functional blocks) for realizing each embodiment, hardware, software, network, a combination thereof, and any other means can be used. This is obvious to those skilled in the art. It is also possible to combine functional blocks into a single functional block. Further, the functional block may be realized by cooperation of a plurality of hardware or software.
Brief Description of the Drawings
[0073] [Fig. 1] A block diagram showing the schematic configuration of a projection diagram generation system according to the first embodiment of the present invention.
[Fig. 2] A flowchart for explaining a projection diagram generation method according to the first embodiment of the present invention.
[Fig. 3] Fig. 3(a) is an example of a perspective projection in which part of the route is hidden by a mountain. Fig. 3(b) is an example of a non-perspective projection in which occlusion of the route is avoided.
[Fig. 4] Fig. 4(a) schematically shows an arrangement of feature points in which occlusion of the route (road) occurs. Fig. 4(b) schematically shows a state in which occlusion is avoided by transforming the arrangement of the feature points.
[Fig. 5] Fig. 5(a) is an explanatory diagram showing feature points on the terrain. Fig. 5(b) is an explanatory diagram showing feature points on a road.
[Fig. 6] Examples of feature points extracted by the system: Fig. 6(a) shows feature points on the terrain, and Fig. 6(b) shows feature points on a road.
[Fig. 7] An explanatory diagram of feature point arrangements: Fig. 7(a) shows the arrangement on the perspective projection, Fig. 7(b) the optimal arrangement, and Fig. 7(c) the arrangement projected onto the horizontal plane.
[Fig. 8] An explanatory diagram of the Delaunay triangulation of the feature points: Fig. 8(a) shows the state before optimization, and Fig. 8(b) the state after optimization.
[Fig. 9] An explanatory diagram of the movement of the feature points in three-dimensional space so as to satisfy the optimal arrangement on the projection plane.
[Fig. 10] An explanatory diagram showing the restricted range of the deformation region (the range represented by black dots): Fig. 10(a) is an example of a wireframe display, and Fig. 10(b) an example of a surface display.
[Fig. 11] An explanatory diagram of how feature points are selected: Fig. 11(a) is an example without reuse of feature points, and Fig. 11(b) an example with reuse.
[Fig. 12] An explanatory diagram showing feature points on the terrain in the second embodiment.
[Fig. 13(a)] A diagram showing an example of feature points on the terrain extracted by the system of the second embodiment.
[Fig. 13(b)] A diagram showing an example of feature points on a road extracted by the system of the second embodiment.
[Fig. 14(a)] An explanatory diagram of the Delaunay triangulation of the feature points; Fig. 14(a) shows the state before optimization.
[Fig. 14(b)] An explanatory diagram of the Delaunay triangulation of the feature points; Fig. 14(b) is an enlarged view of Fig. 14(a).
[Fig. 15(a)] An explanatory diagram of the Delaunay triangulation of the feature points; Fig. 15(a) shows an example of computing the optimal arrangement of the feature points on the roads.
[Fig. 15(b)] An explanatory diagram of the Delaunay triangulation of the feature points; Fig. 15(b) is an enlarged view of Fig. 15(a).
[Fig. 16(a)] An explanatory diagram of the Delaunay triangulation of the feature points; Fig. 16(a) shows the state after optimization.
[Fig. 16(b)] An explanatory diagram of the Delaunay triangulation of the feature points; Fig. 16(b) is an enlarged view of Fig. 16(a).
Explanation of Reference Signs
1 Processing unit
2 Storage unit
3 Display unit
4 Communication path
Contour line of a mountain
Road

Claims

[1] A projection diagram generation system comprising a processing unit, wherein the processing unit performs the following processing:
(1) a process of extracting, from a three-dimensional terrain model, feature points representing geographic features in portions that may be involved in occlusion;
(2) a process of computing an optimal arrangement of the feature points on a two-dimensional projection plane as viewed from a certain viewpoint;
(3) a process of generating a three-dimensional terrain shape that satisfies the optimal arrangement of the feature points on the two-dimensional projection plane;
(4) a process of generating a two-dimensional projection of the three-dimensional shape as viewed from the viewpoint or its vicinity.
[2] The projection diagram generation system according to claim 1, further comprising a storage unit, wherein the storage unit stores the three-dimensional terrain model, the processing unit is configured to acquire the three-dimensional terrain model from the storage unit, and the processing unit is configured to store the generated two-dimensional projection in the storage unit.
[3] A projection diagram display system comprising the projection diagram generation system according to claim 1 or 2 and a display unit, wherein the display unit is configured to display the two-dimensional projection.
[4] The projection diagram generation system according to any one of claims 1 to 3, wherein the processing unit is configured to generate two-dimensional projections at different points in time by performing the processes (1) to (4) in response to movement of the viewpoint, and wherein, when generating a two-dimensional projection at a given point in time, if a feature point satisfying the following condition exists, it is used as a feature point in the process (1):
(Condition)
A feature point that was used in a two-dimensional projection generated before the given point in time, and that lies within the view volume of the two-dimensional projection at the given point in time.
[5] The projection diagram generation system according to any one of claims 1 to 4, wherein the optimal arrangement of the feature points is a state in which, on the two-dimensional projection plane as viewed from the certain viewpoint, occlusion of lines connecting the feature points on a road is avoided and the relative positional relationship of the feature points is preserved.
[6] The projection diagram generation system according to any one of claims 1 to 5, wherein the extraction of the feature points in the process (1) is performed by the following processing:
(a) a process of placing a viewpoint in a direction inclined at a certain fixed angle with respect to a horizontal plane passing through a reference point;
(b) a process of extracting, as feature points, all vertices that lie on a contour line of the three-dimensional terrain shape obtained from the three-dimensional terrain model when the position of the viewpoint is rotated relative to the terrain shape about the reference point in a horizontal direction.
[7] A projection diagram generation method comprising the following steps:
(1) a step of extracting, from a three-dimensional terrain model, feature points representing geographic features in portions that may be involved in occlusion;
(2) a step of computing an optimal arrangement of the feature points on a two-dimensional projection plane as viewed from a certain viewpoint;
(3) a step of generating a three-dimensional terrain shape that satisfies the optimal arrangement of the feature points on the two-dimensional projection plane;
(4) a step of generating a two-dimensional projection of the three-dimensional shape as viewed from the viewpoint or its vicinity.
[8] A computer program for causing a computer to execute the steps according to claim 7.