WO2011082651A1 - 空间实体遮挡类型的判定方法及装置 (Method and device for determining the occlusion type of a spatial entity) - Google Patents

空间实体遮挡类型的判定方法及装置 (Method and device for determining the occlusion type of a spatial entity)

Info

Publication number
WO2011082651A1
WO2011082651A1 PCT/CN2010/080583 CN2010080583W WO2011082651A1 WO 2011082651 A1 WO2011082651 A1 WO 2011082651A1 CN 2010080583 W CN2010080583 W CN 2010080583W WO 2011082651 A1 WO2011082651 A1 WO 2011082651A1
Authority
WO
WIPO (PCT)
Prior art keywords
view
spatial
entity
spatial entity
pixel
Prior art date
Application number
PCT/CN2010/080583
Other languages
English (en)
French (fr)
Inventor
董福田
Original Assignee
Dong futian
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201010144131A external-priority patent/CN101814094A/zh
Application filed by Dong Futian
Publication of WO2011082651A1 publication Critical patent/WO2011082651A1/zh

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/40 Hidden part removal

Definitions

  • This application claims priority to Chinese patent application No. 201010017269.2, filed with the Chinese Patent Office on January 7, 2010 and entitled "Method for selecting spatial entities based on a spatial entity view model", and to Chinese patent application No. 201010144131.9, filed with the Chinese Patent Office on March 21, 2010 and entitled "Method for selecting spatial entities based on a spatial entity view model", both of which are incorporated herein by reference in their entirety.
  • the present invention relates to the field of spatial information technology, computer graphics, and computer operating systems, and more particularly to a method and apparatus for determining a spatial entity occlusion type.
  • Spatial entities are chiefly presented through electronic maps, i.e. visual maps that display spatial entities on an electronic screen by means of certain hardware and software.
  • Displaying an electronic map is the process of rasterizing spatial entities onto the view window of the electronic screen.
  • the attributes and graphical information assigned to the spatial entity for display on the electronic map are referred to as elements.
  • the point entity corresponds to the point element
  • the line entity corresponds to the line element
  • the surface entity corresponds to the surface element.
  • Regarding the graphic information of spatial entities, the graphic information of a point element generally includes the symbol type, symbol size, and symbol color of the point;
  • the graphic information of a line element includes the type, width, and color of the linear symbol;
  • the graphic information of a surface element includes the filling type of the surface (for example whether it is transparent), the type of the surface symbol, and the filling color of the surface.
  • The above rasterization refers to converting spatial data that represents graphics in a vector format into a raster image; each pixel value of the raster image generally represents a color value, used in processes such as on-screen display, printing on paper, and output to image files.
  • In the course of processing spatial entities, it is sometimes necessary to determine the occlusion type of a spatial entity, i.e. to analyze whether the spatial entity is occluded by other spatial entities. In the prior art the amount of spatial entity data to be judged is large, the judgment process is inefficient, the spatial entities obtained after the judgment are not guaranteed to be displayable in the view window, and the judgment result is therefore poor.
  • The present invention provides a method and a device for determining the occlusion type of a spatial entity, so as to solve the prior-art problems that the occlusion-type determination process for massive spatial data is complicated, the processing efficiency is low, and the determination result is poor.
  • A method for determining the occlusion type of a spatial entity includes: selecting a current spatial entity to be analyzed from the spatial entities to be analyzed that are sorted according to a preset sorting rule; transforming, according to preset view control parameters, the original coordinates of the original spatial data of the current spatial entity into view coordinates of a view window; and analyzing the occlusion type of the spatial entity in the view window.
  • The view window is represented by a data structure according to the view control parameters. Specifically, the pixels of the view window are represented by a raster data structure according to the view control parameters, where a pixel is a uniform grid cell into which the view window plane is divided and is the basic information storage unit of the raster data; the coordinate position of a pixel is determined by its corresponding row number and column number in the view window, and the initial values of the raster data representing the pixels are all set to 0.
  • The process of analyzing the occlusion type of the spatial entity in the view window includes: judging whether the values of the pixels to be drawn when the spatial entity is displayed in the view window are all 1; if all are 1, the occlusion type is full occlusion; if all are 0, the occlusion type is no occlusion; if only some are 1, the occlusion type is partial occlusion.
  • When the occlusion type of the spatial entity is unoccluded and/or partially occluded, the pixels with a pixel value of 0 among the pixels to be drawn when the spatial entity is displayed on the view window are assigned the value 1.
  • It is determined whether the occlusion type of the spatial entity meets a preset effective-spatial-entity condition; if yes, the spatial entity is determined to be an effective spatial entity, and if not, it is determined to be an invalid spatial entity.
  • The preset effective-spatial-entity condition includes: the occlusion type is unoccluded, or the occlusion type is unoccluded or partially occluded.
  • The view control parameters include the view mode and the bounding-rectangle parameters of the view window;
  • the view mode is either a two-dimensional mode or a three-dimensional mode, and the bounding-rectangle parameters of the view window include the width and the height of the bounding rectangle of the view window;
  • when the view mode is the two-dimensional mode, the view control parameters further include the center coordinate point of the spatial entity in the view window and the magnification scale of the spatial entities in the view, or the rectangular query range of the spatial entities and the magnification scale of the spatial entities in the view;
  • when the view mode is the three-dimensional mode, the view control parameters further include viewpoint parameters and projection parameters;
  • the viewpoint parameters include the position of the viewpoint in the world coordinate system, the target position observed from the viewpoint, and the up vector of the virtual camera;
  • the projection parameters include orthographic projection and perspective projection.
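  • As an illustration of how these parameters might be grouped in code, the following C sketch defines a hypothetical ViewControlParams structure; the type and field names (view_mode, view_width, center_x, scale, eye, target, up, and so on) are assumptions for illustration and do not appear in the patent text.

        typedef struct { double x, y, z; } Vec3;      /* a point or vector in world coordinates */

        typedef enum { VIEW_MODE_2D, VIEW_MODE_3D } ViewMode;
        typedef enum { PROJ_ORTHOGRAPHIC, PROJ_PERSPECTIVE } ProjectionType;

        typedef struct {
            ViewMode view_mode;          /* two-dimensional or three-dimensional mode            */
            int      view_width;         /* width of the bounding rectangle of the view window    */
            int      view_height;        /* height of the bounding rectangle of the view window   */

            /* 2D mode: either a query rectangle or a center point, plus a magnification scale */
            double   query_rect[4];      /* xmin, ymin, xmax, ymax of the rectangular query range */
            double   center_x, center_y; /* center coordinate point of the entity in the view     */
            double   scale;              /* magnification scale of spatial entities in the view   */

            /* 3D mode: viewpoint parameters and projection parameters */
            Vec3           eye;          /* position O of the viewpoint in the world coordinate system */
            Vec3           target;       /* target position A observed from the viewpoint              */
            Vec3           up;           /* up vector U of the virtual camera                           */
            ProjectionType projection;   /* orthographic or perspective projection                      */
        } ViewControlParams;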
  • When the view mode in the view control parameters is the two-dimensional mode, the preset sorting rule is to sort the spatial entities in the reverse order of their drawing order.
  • When the view mode in the view control parameters is the three-dimensional mode, the preset sorting rule is to sort the spatial entities from near to far according to their distance from the viewpoint.
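  • A minimal sketch of these two sorting rules, assuming each entity carries a draw_order index and a representative world-space position; the SpatialEntity fields, the global viewpoint, and the use of qsort are illustrative assumptions only.

        #include <stdlib.h>
        #include <math.h>

        typedef struct { int draw_order; double x, y, z; } SpatialEntity;

        static double g_eye[3];  /* viewpoint position used by the 3D comparator */

        /* 2D mode: reverse of the drawing order (the entity drawn last is analyzed first). */
        static int cmp_reverse_draw_order(const void *a, const void *b) {
            const SpatialEntity *ea = a, *eb = b;
            return eb->draw_order - ea->draw_order;
        }

        /* 3D mode: near-to-far by distance from the viewpoint. */
        static int cmp_near_to_far(const void *a, const void *b) {
            const SpatialEntity *ea = a, *eb = b;
            double da = hypot(hypot(ea->x - g_eye[0], ea->y - g_eye[1]), ea->z - g_eye[2]);
            double db = hypot(hypot(eb->x - g_eye[0], eb->y - g_eye[1]), eb->z - g_eye[2]);
            return (da > db) - (da < db);
        }

        /* Usage: qsort(entities, n, sizeof(SpatialEntity),
                        is_2d ? cmp_reverse_draw_order : cmp_near_to_far); */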
  • the method further includes: selecting the effective space entity.
  • A device for determining the occlusion type of a spatial entity includes:
  • a selecting unit, configured to select a current spatial entity to be analyzed from the spatial entities to be analyzed sorted according to a preset sorting rule;
  • a coordinate conversion unit, configured to transform, according to preset view control parameters, the original coordinates of the original spatial data of the current spatial entity to be analyzed into view coordinates of the view window;
  • An occlusion type analysis unit is configured to analyze an occlusion type of the spatial entity in the view window.
  • The method for determining the occlusion type of a spatial entity disclosed in the embodiments of the present invention transforms the original coordinates of the original spatial data of a spatial entity into view coordinates of the view window according to preset view control parameters.
  • The occlusion situation of the spatial entity when it is displayed is determined by analyzing those view coordinates, which simplifies the prior-art occlusion calculation between spatial entities, reduces the amount of computation, improves the efficiency of occlusion-type determination, and solves the problem that real-time judgment of occlusion types for massive spatial entities is complicated and difficult.
  • FIG. 1 is a flowchart of a method for determining a spatial entity occlusion type according to an embodiment of the present invention
  • FIG. 2 is a flowchart of a method for determining an effective space entity according to an embodiment of the present invention
  • FIG. 3 is a schematic structural diagram of a device for determining a spatial entity occlusion type according to an embodiment of the present invention.
  • The invention studies spatial data from the perspective of view display. From this perspective, once the resolution of the view window is fixed, the maximum amount of effective spatial data needed to display the view window is constant no matter how massive or how detailed the spatial data is: it is the spatial data needed to fill all the pixels of the view window, because the total number of pixels the view window can display is limited. If a spatial entity drawn earlier is covered by a spatial entity drawn later, the later entity occludes the earlier one; if the earlier entity is completely occluded, then from the perspective of view display it does not need to be read, transmitted, or drawn on the view window at all.
  • The display process of a spatial entity is generally as follows: first, the spatial entities that satisfy a given spatial condition are retrieved through a spatial data index and transmitted over a transmission medium to the user of the spatial entities, i.e. the client; the spatial data of each entity then undergoes a series of geometric transformations and processing and is converted into coordinate points on a two-dimensional image; finally, according to the display parameters, the spatial entity is rasterized into image pixels by a drawing algorithm and drawn into a two-dimensional raster image, which is displayed on the screen or output, for example as a computer screen display, a printout on paper, or an image file. The drawing of a spatial entity is ultimately reduced by the drawing algorithm to operations on individual pixels.
  • Based on the above spatial data display process, the invention transforms, according to preset view control parameters, the original coordinates of the original spatial data of a spatial entity into view coordinates of a view window that is represented by a data structure according to those view control parameters.
  • By analyzing and processing the pixels that need to be drawn when the spatial entity is displayed on the view window, the display situation of the spatial entity on the actually displayed view window, and hence the occlusion between spatial entities, is analyzed, so that subsequent processing can selectively pick out or transmit spatial entities according to the analysis result. This simplifies the process and the amount of computation of occlusion-type calculation in the prior art and improves the efficiency and accuracy of the determination.
  • For convenience of description, in this application the spatial data of a spatial entity to be processed is referred to as original spatial data,
  • the coordinates of the spatial data to be processed are referred to as the original coordinates of the original spatial data,
  • and the coordinate points of the spatial data to be processed are referred to as the original coordinate points of the original spatial data, or simply as original coordinate points.
  • the specific implementation manner is as follows:
  • the process of determining a spatial entity occlusion type disclosed in the embodiment of the present invention is as shown in FIG. 1 , and includes:
  • Step S11 Select a current spatial entity to be analyzed from the spatial entities to be analyzed sorted according to a preset sorting rule
  • Step S12 Convert the original coordinate of the spatial data of the current to-be-analyzed spatial entity to the view coordinate of the view window according to the preset view control parameter;
  • The view control parameters in this embodiment include the view mode and the bounding-rectangle parameters of the view window.
  • The view mode specifies in advance, according to the actual view window, whether the view window is in the two-dimensional mode or the three-dimensional mode.
  • The bounding-rectangle parameters of the view window describe the view window range (0, 0, ViewWidth, ViewHeight) in which spatial entities are displayed, such as the range of a computer-screen map display window, and include the width ViewWidth and the height ViewHeight of the bounding rectangle of the view window.
  • With these two parameters, the size range of the window used to display the image within the actual view window can be determined, and the size of the raster data used to represent the view window can be obtained at the same time.
  • For example, if one pixel value is represented by m bytes, the size of the raster data representing the view window is (ViewWidth * ViewHeight * m) bytes, and the initial values of the raster data used to represent the view window are assigned 0.
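  • A minimal sketch of allocating and zero-initializing this raster data in C, with the buffer addressed row by row; the function names create_view_raster and pixel_at are illustrative assumptions.

        #include <stdlib.h>
        #include <string.h>
        #include <stdint.h>

        /* Allocate ViewWidth * ViewHeight * m bytes and set every pixel to the initial value 0. */
        static uint8_t *create_view_raster(int view_width, int view_height, int m) {
            size_t size = (size_t)view_width * (size_t)view_height * (size_t)m;
            uint8_t *raster = malloc(size);
            if (raster != NULL)
                memset(raster, 0, size);   /* initial value of every pixel is 0 */
            return raster;
        }

        /* A pixel's storage location follows from its row and column number in the view window. */
        static uint8_t *pixel_at(uint8_t *raster, int view_width, int m, int row, int col) {
            return raster + ((size_t)row * view_width + col) * (size_t)m;
        }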
  • In addition to the view mode and the bounding-rectangle parameters of the view window, the specific contents of the view control parameters differ with the view mode.
  • When the view mode is the two-dimensional mode, the parameters further include the rectangular query range of the spatial entities and the magnification scale of the spatial entities in the view; the rectangular query range may also be replaced by the center coordinate point of the spatial entity in the view window, as long as the original coordinates of the original spatial data can be transformed into view coordinates of the view window.
  • The rectangular query range of the spatial entities refers to the range whose spatial entities are displayed in the view window, i.e. the bounding rectangle of the spatial entities that can be shown in the view window; its specific value is set according to the actual display situation.
  • When the view mode is the three-dimensional mode, the view control parameters further include viewpoint parameters and projection parameters. The viewpoint parameters include the position O(x0, y0, z0) of the viewpoint in a preset world coordinate system (x0, y0, z0 being its three components in that system), the target position A(xa, ya, za) observed from the viewpoint, and the up vector U(xu, yu, zu) of the virtual camera; the projection parameters include orthographic projection and perspective projection. Alternatively, a view matrix and a projection matrix obtained from the above parameters may be used, and the coordinate transformation is performed with the view matrix and the projection matrix. According to the different view control parameters, the original coordinates of the original spatial data are transformed into the view coordinates of the view window in the corresponding mode.
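  • A minimal sketch of the two-dimensional case, assuming the view control parameters supply a center point and a magnification scale; the specific mapping (scale and translate the original coordinates so that the center point lands in the middle of the view window) is a plausible reading, not a formula stated in the patent. In the three-dimensional mode the same role would be played by multiplying the original coordinates by the view matrix and the projection matrix and mapping the result to the viewport.

        /* Transform an original coordinate (x, y) into integer view (pixel) coordinates. */
        static void world_to_view_2d(double x, double y,
                                     double center_x, double center_y, double scale,
                                     int view_width, int view_height,
                                     int *col, int *row) {
            *col = (int)((x - center_x) * scale + view_width  / 2.0);
            /* screen rows usually grow downward, hence the sign flip on y */
            *row = (int)((center_y - y) * scale + view_height / 2.0);
        }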
  • the data corresponding to the view coordinate is view data.
  • After the view control parameters are determined, the view window is represented by a data structure according to the view control parameters.
  • The view window referred to here may be a physical view window that can actually be displayed, or a logical view-window environment generated for analysis.
  • When a raster data structure is used to represent the view window, the raster data represents a two-dimensional raster image: the display view-window plane is divided into uniform grid cells, each grid cell is called a pixel, and the raster data structure is the pixel array.
  • Each pixel in the raster is the most basic information storage unit in the raster data, and its coordinate position can be determined by its row number and column number. Since raster data is arranged according to fixed rules, the positional relationships of the represented spatial entities are implicit in the row and column numbers.
  • Each pixel value represents an attribute of a spatial entity or the encoding of such an attribute.
  • According to the preset view control parameters, the original coordinates of the received original spatial data are transformed into view coordinates in the view-window coordinate system; each original coordinate point corresponds to a view coordinate point in the view-window coordinate system, and each view coordinate point corresponds to a pixel of the view window represented by the raster data according to the view control parameters.
  • By analyzing the pixels that need to be drawn when the spatial entity is displayed on the view window, the occlusion situation of the spatial entity when it is displayed on the actually displayed view window is analyzed.
  • Step S13: analyze the occlusion type of the spatial entity in the view window. The occlusion types include: full occlusion, meaning the spatial entity is completely covered by other spatial entities; partial occlusion, meaning the spatial entity is partially covered by other spatial entities; and no occlusion, meaning the spatial entity is not covered by other spatial entities.
  • the specific judging process can obtain the pixels that need to be drawn when the spatial entity is displayed in the view window by simulating the display process of the spatial entity on the actual view window.
  • The display process of a spatial entity is generally as follows: after a series of geometric transformations and processing of its spatial data, the entity is converted into coordinate points on a two-dimensional image; according to the display parameters, it is then rasterized into image pixels by a drawing algorithm and drawn into a two-dimensional raster image that is displayed on the screen or output.
  • Based on the above process, the present invention obtains, from the view coordinates and according to the drawing algorithm (for example the Bresenham algorithm for line segments), the pixels that need to be drawn when the spatial entity is displayed on the view window, and then examines the values of those pixels. If all of them are 1, the occlusion type is full occlusion; if all of them are 0, the occlusion type is no occlusion; if only some of them are 1, the occlusion type is partial occlusion. In other words, as long as any pixel to be drawn already has the value 1, the spatial entity is occluded by other spatial entities, and as long as any pixel to be drawn has the value 0, the spatial entity is not completely occluded by other spatial entities.
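  • A minimal sketch of this classification over the list of pixels a spatial entity would draw, assuming a one-byte-per-pixel raster in which a nonzero value marks an already drawn pixel; the enum and function names are illustrative assumptions.

        #include <stdint.h>
        #include <stddef.h>

        typedef enum { OCCLUSION_FULL, OCCLUSION_PARTIAL, OCCLUSION_NONE } OcclusionType;

        /* rows[i], cols[i] are the pixels the entity needs to draw in the view window. */
        static OcclusionType classify_occlusion(const uint8_t *raster, int view_width,
                                                const int *rows, const int *cols, int n) {
            int ones = 0, zeros = 0;
            for (int i = 0; i < n; i++) {
                if (raster[(size_t)rows[i] * view_width + cols[i]] != 0)
                    ones++;     /* pixel already covered by a previously analyzed entity */
                else
                    zeros++;    /* pixel still free */
            }
            if (zeros == 0) return OCCLUSION_FULL;     /* all pixel values are 1 */
            if (ones  == 0) return OCCLUSION_NONE;     /* all pixel values are 0 */
            return OCCLUSION_PARTIAL;                  /* only some pixel values are 1 */
        }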
  • The pixels to be drawn are obtained with the drawing algorithm appropriate to each geometry type, for example the Bresenham algorithm for line segments.
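  • A minimal sketch of the Bresenham line algorithm mentioned above, visiting every pixel a line-segment entity would cover in view coordinates; the callback-style interface is an illustrative assumption.

        #include <stdlib.h>

        /* Visit every pixel of the line from (x0, y0) to (x1, y1) in view coordinates. */
        static void bresenham_line(int x0, int y0, int x1, int y1,
                                   void (*visit)(int col, int row, void *ctx), void *ctx) {
            int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
            int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
            int err = dx + dy;                       /* error term of the integer algorithm */
            for (;;) {
                visit(x0, y0, ctx);                  /* this pixel would be drawn by the line */
                if (x0 == x1 && y0 == y1) break;
                int e2 = 2 * err;
                if (e2 >= dy) { err += dy; x0 += sx; }
                if (e2 <= dx) { err += dx; y0 += sy; }
            }
        }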
  • When the spatial entity is not completely occluded, i.e. when its occlusion type is no occlusion and/or partial occlusion, and the entity needs to be marked as drawn on those pixels, the pixels whose value is 0 among the pixels to be drawn when the spatial entity is displayed on the view window are assigned the value 1. This ensures that, in the subsequent determination of occlusion types, any other spatial entity that would also be displayed on those pixels is judged to be occluded, guaranteeing the accuracy of the determination process.
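  • A minimal sketch of this marking step, under the same one-byte-per-pixel assumption as above: every pixel the entity would draw that still holds 0 is set to 1.

        #include <stdint.h>
        #include <stddef.h>

        static void mark_drawn_pixels(uint8_t *raster, int view_width,
                                      const int *rows, const int *cols, int n) {
            for (int i = 0; i < n; i++) {
                uint8_t *p = &raster[(size_t)rows[i] * view_width + cols[i]];
                if (*p == 0)
                    *p = 1;   /* record that a spatial entity is now drawn on this pixel */
            }
        }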
  • In this embodiment, the analysis or processing of the view coordinates is performed on single pixels or on combinations of several pixels, and the specific processing method can be set flexibly according to actual needs.
  • The specific operations on pixels include assigning values to pixels, i.e. rasterizing the spatial data, as well as reading pixels and judging pixel values.
  • When a pixel is represented by several bits, assigning a value to the pixel may mean assigning a value to the pixel as a whole or to any one or more of the bits representing the pixel; likewise, reading a pixel may mean reading the whole value of the pixel or the value of one or several of its bits.
  • Similarly, judging a pixel value means judging the meaning represented by the whole value of the pixel or by the value of one or several of its bits.
  • For example, 4 bits of data may be used to represent one pixel of the simulated view window: the first bit indicates whether a point feature has been rasterized on the pixel, the second bit indicates whether a line feature has been rasterized on the pixel, the third bit indicates whether a polygon feature has been rasterized on the pixel, and the fourth bit is used for the simplification of the spatial vector data. Several constants are first defined: point = 0x0001, line = 0x0002, region = 0x0004, simple = 0x0008.
  • For example, the pixel operations corresponding to a line spatial entity are as follows:
  • Pixel assignment: a pixel is assigned by OR-ing its value with the defined constant line, which rasterizes the original spatial data; for example, rasterizing the line onto pixel P(x, y) is P(x, y) = P(x, y) | line, and clearing the rasterization is P(x, y) = P(x, y) & ~line.
  • Reading a pixel value: the value of the raster data at P(x, y) is the value of the pixel P(x, y).
  • Judging a pixel value: for example, whether the pixel has been rasterized by the original spatial data is judged by AND-ing the defined constant line with the pixel value. To judge whether pixel P(x, y) has been rasterized by a line, check whether the value of P(x, y) & line is greater than 0: if it is greater than 0, the pixel has been rasterized by the line spatial entity, and if it equals 0, it has not. The pixel operations corresponding to other types of spatial entities are performed in the same way.
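  • The constants and bit operations described above, written out as C macros; the constant values follow the definitions given later in the original description, while the accessor macros and the one-byte pixel type are illustrative assumptions.

        #include <stdint.h>
        #include <stddef.h>

        /* One pixel of the simulated view window uses 4 bits, one per feature type. */
        #define point  0x0001   /* a point feature has been rasterized on this pixel   */
        #define line   0x0002   /* a line feature has been rasterized on this pixel    */
        #define region 0x0004   /* a polygon feature has been rasterized on this pixel */
        #define simple 0x0008   /* used for simplification of the spatial vector data  */

        /* P(raster, w, x, y) is the raster value of the pixel in column x, row y. */
        #define P(raster, w, x, y)           ((raster)[(size_t)(y) * (w) + (x)])

        /* Rasterize a line onto the pixel:     P(x,y) = P(x,y) | line   */
        #define SET_LINE(raster, w, x, y)    (P(raster, w, x, y) |= line)
        /* Clear the line rasterization:        P(x,y) = P(x,y) & ~line  */
        #define CLEAR_LINE(raster, w, x, y)  (P(raster, w, x, y) &= (uint8_t)~line)
        /* Judge whether the pixel was rasterized by a line: P(x,y) & line > 0 */
        #define HAS_LINE(raster, w, x, y)    ((P(raster, w, x, y) & line) != 0)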
  • After this step, the method may further include: Step S14: judge whether there is any unprocessed spatial entity to be analyzed; if yes, perform step S15, and if not, end.
  • Step S15: judge whether there is any unprocessed pixel in the view window; if yes, return to step S11, and if not, end.
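  • Putting steps S11 to S15 together, a minimal sketch of the overall loop in C; the ViewWindow structure, the free_pixels counter, and the analyze_one callback (assumed to perform S12, S13, and the optional marking, decrementing free_pixels for every pixel it newly sets to 1) are illustrative assumptions.

        typedef struct SpatialEntity SpatialEntity;
        typedef struct { unsigned char *raster; int width, height; long free_pixels; } ViewWindow;

        static void determine_occlusion_types(ViewWindow *vw, SpatialEntity **sorted, int n,
                                              void (*analyze_one)(ViewWindow *, SpatialEntity *)) {
            for (int i = 0; i < n; i++) {          /* S11: take the next entity in the preset order */
                analyze_one(vw, sorted[i]);        /* S12 + S13 (+ optional marking)                */
                if (i == n - 1) return;            /* S14: no unprocessed entity remains, so end    */
                if (vw->free_pixels == 0) return;  /* S15: no unprocessed pixel remains, so end     */
            }
        }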
  • The invention also discloses a method, based on the above occlusion-type determination method, for determining whether a spatial entity is an effective spatial entity, an effective spatial entity being a spatial entity that can still be displayed after the pixels of the view window have been drawn.
  • the specific process is shown in Figure 2.
  • the method disclosed in this embodiment is applied to the server.
  • the two-dimensional view mode is used as an example.
  • Step S21 Select a current spatial entity to be analyzed from the spatial entities sorted in the reverse order of the spatial entity drawing order.
  • Step S22: transform, according to the preset view control parameters, the original coordinates of the spatial data of the current spatial entity to be analyzed into view coordinates of the view window.
  • the view control parameter in this embodiment is the view control parameter of the view window actually displayed by the client.
  • Step S23: judge whether the values of the pixels that need to be drawn when the spatial entity is displayed in the view window are all 1; if yes, perform step S24a; if they are all 0, perform step S24b; if only some are 1, perform step S24c. Step S24a: determine that the occlusion type is full occlusion. Step S24b: determine that the occlusion type is no occlusion. Step S24c: determine that the occlusion type is partial occlusion. Step S25: judge whether the occlusion type meets the preset effective-spatial-entity condition;
  • if yes, perform step S26a, and if not, perform step S26b. The preset effective-spatial-entity condition in this embodiment is that the occlusion type is no occlusion.
  • Step S26a determining that the space entity is a valid space entity, performing step S27;
  • Step S26b determining that the space entity is an invalid space entity, performing step S28;
  • Step S27: assign the value 1 to the pixels that need to be drawn when the effective spatial entity is displayed on the view window, to mark that a spatial entity has already been drawn on those pixels.
  • When it is necessary to mark that the effective spatial entity is drawn on those pixels, the pixels that need to be drawn when the effective spatial entity is displayed on the view window may likewise be assigned the value 1, indicating that the pixels have already been used for display.
  • This ensures that, in the subsequent determination of occlusion types, the pixel values already used to display effective spatial entities serve as the basis of judgment: if another spatial entity is to be displayed on those pixels, it will be judged as occluded, guaranteeing the accuracy of the determination process.
  • Meanwhile, if a spatial entity has graphic information, the graphic information must be taken into account in the analysis and processing.
  • For example, if the graphic information of a polygon entity is transparent or semi-transparent, then even if the polygon entity is an effective spatial entity, the pixels it would draw are not assigned the value 1.
  • Step S28: judge whether there is any unprocessed spatial entity to be analyzed; if yes, perform step S29, and if not, end.
  • Step S29 Determine whether there are unprocessed pixels in the view window, and if yes, return to step S21, and if no, end.
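  • A minimal sketch of the per-entity decision of steps S23 to S27 under the simplest condition (only unoccluded entities are effective), including the exception for transparent or semi-transparent polygon entities; the EntityPixels representation and the function name are illustrative assumptions.

        #include <stdint.h>
        #include <stddef.h>

        typedef struct { int *rows, *cols; int n; int transparent; } EntityPixels; /* pixels step S22 would draw */

        /* Returns 1 if the entity is an effective spatial entity (S23-S26), and performs S27. */
        static int select_if_effective(uint8_t *raster, int view_width, const EntityPixels *e) {
            int ones = 0;
            for (int i = 0; i < e->n; i++)
                if (raster[(size_t)e->rows[i] * view_width + e->cols[i]] != 0) ones++;

            if (ones != 0)                      /* S24a/S24c: fully or partially occluded   */
                return 0;                       /* S26b: not effective under this condition */

            /* S24b + S26a: unoccluded, hence effective.  S27: mark its pixels with 1,
               unless the entity's graphic information is transparent or semi-transparent. */
            if (!e->transparent)
                for (int i = 0; i < e->n; i++)
                    raster[(size_t)e->rows[i] * view_width + e->cols[i]] = 1;
            return 1;
        }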
  • The method for determining effective spatial entities disclosed in this embodiment pre-processes the spatial data requested by the client and, according to the view control parameters of the client's actual display, determines the spatial entities that can be effectively displayed in the actual view window to be effective spatial entities.
  • A subsequent step of selecting and transmitting the effective spatial entities can then be added accordingly, which guarantees lossless display of the data while greatly reducing the amount of data transmitted and improving data transmission efficiency and display efficiency.
  • The method for determining effective spatial entities disclosed in this embodiment is not limited to a specific application scenario. It may also be deployed on the client side to determine the occlusion types of the massive spatial entity data to be displayed and, according to the currently set effective-spatial-entity condition, to determine the effective spatial entities to display,
  • so that data that cannot be seen is not displayed during the display process and display efficiency is improved.
  • When applied in that scenario, the above steps may further include a step of displaying the determined effective spatial entities.
  • This embodiment is not limited to the above two application scenarios; the usage scenario of the method can be set according to the actual application, and the method may also be applied on both the client and the server at the same time.
  • After the effective spatial entities have been determined, subsequent processing of their data, such as simplification and progressive transmission, may also be performed; these are not enumerated here one by one.
  • Any process that uses the above scheme for determining the occlusion type of spatial entities to determine effective spatial entities falls within the protection scope of this embodiment.
  • The present invention also discloses a device for determining the occlusion type of a spatial entity, which has the structure shown in FIG. 3 and includes a selecting unit 31, a coordinate conversion unit 32, and an occlusion type analysis unit 33. The selecting unit 31 is configured to select the current spatial entity to be analyzed from the spatial entities to be analyzed sorted according to a preset sorting rule; the coordinate conversion unit 32 is configured to transform, according to preset view control parameters, the original coordinates of the original spatial data of the current spatial entity to be analyzed into
  • view coordinates of the view window; the occlusion type analysis unit 33 is configured to analyze the occlusion type of the spatial entity in the view window.
  • the processing of each unit is as follows:
  • The selecting unit selects the current spatial entity to be analyzed from the spatial entities to be analyzed sorted according to the preset sorting rule: in the two-dimensional mode the preset sorting rule sorts the spatial entities in the reverse order of their drawing order, and in the three-dimensional mode it sorts them from near to far according to their distance from the viewpoint; the sorted spatial entities are then selected in order.
  • The coordinate conversion unit transforms the original coordinates of the spatial data of the selected spatial entity according to the view control parameters into view coordinates of the view window represented by the data structure according to the view control parameters, and the occlusion type analysis unit analyzes the occlusion type of the spatial entity.
  • The specific process includes judging whether the values of the pixels that the spatial entity needs to draw in the view window are all 1: if all are 1, the occlusion type is full occlusion; if all are 0, the occlusion type is no occlusion; if only some are 1, the occlusion type is partial occlusion, thereby determining whether the occlusion type of the currently analyzed spatial entity is full occlusion, partial occlusion, or no occlusion.
  • After the current spatial entity has been processed, it is judged whether it is the last spatial entity; if so, the process ends, and if not, it is judged whether there are still unprocessed pixels in the view window represented by the data structure. If there are, the next spatial entity is selected for processing in the above order; if there are no unprocessed pixels left in the view window, the process ends. This continues until all spatial entities have been processed or no unprocessed pixels remain in the view window represented by the data structure.
  • The device may further include an effective-spatial-entity determining unit, in which the condition for an effective spatial entity is preset; the condition may be that the spatial entity is unoccluded, that it is partially occluded, or a combination of the two. The unit judges whether the occlusion type of a spatial entity meets the effective-spatial-entity condition and, if so, determines that the spatial entity is an effective spatial entity; in subsequent processing, the pixels that need to be drawn when the effective spatial entity is displayed on the view window may be assigned the value 1 to mark that a spatial entity has already been drawn on those pixels.
  • The execution process of the device for determining the spatial entity occlusion type disclosed in this embodiment corresponds to the flow of the method embodiments of the present invention disclosed above and is a preferred device embodiment; for its specific execution process, reference may be made to the method embodiments.
  • The device for determining the spatial entity occlusion type disclosed in the present invention may be installed in a computer, in a mobile phone or other device in which the present invention can be used, or in other smart devices.
  • It may be deployed on the server side, where the occlusion types of spatial entities are determined before the data requested by the client is sent; it may be deployed on the client side, where the occlusion types are determined before the spatial entities are sent to the actual view window; or it may be deployed on both the server and the client, with either side or both sides chosen to perform the processing according to the actual situation.
  • the steps of a method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two.
  • The software module can be placed in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Description

空间实体遮挡类型的判定方法及装置 本申请要求于 2010年 1月 7 日 提交中 国专利局、 申请号为 201010017269.2、 发明名称为 "基于空间实体视图模型的空间实体的选取 方法"的中国专利申请的优先权和于 2010年 3月 21日提交中国专利局、 申请 号为 201010144131.9、 发明名称为 "基于空间实体视图模型的空间实体的 选取方法" 的中国专利申请的优先权, 其全部内容通过引用结合在本申请 中。
技术领域 本发明涉及空间信息技术、 计算机图形学和计算机操作系统领域, 尤 其涉及空间实体遮挡类型的判定方法及装置。
背景技术
随着空间信息技术的快速发展, 获取的高分辨率、 高精度的空间数据 呈爆炸式增长, 但随之产生了一系列的问题需要解决, 特别突出的是高精 细地图的海量空间数据的实时快速传输和显示的问题。
空间实体主要表示方式是通过电子地图来展示的, 电子地图是将空间 实体通过一定的硬件和软件在电子屏幕上显示的可视地图, 是空间实体在 电子屏幕的视图窗口上栅格化显示的过程。 给空间实体赋予的用于在电子 地图上显示的属性和图示化信息, 称之为要素。 点实体对应点要素, 线实 体对应线要素, 面实体对应面要素。 其中空间实体的图示化信息, 点要素 的图示信息一般包括: 点的符号类型, 符号大小, 符号颜色; 线的图示信 息包括: 线状符号的类型, 线状符号的宽度, 线状符号的颜色; 面的图示 信息包括: 面的填充类型 (如是否透明), 面符号的类型, 面的填充颜色。 有的空间实体本身记录空间实体的图示化信息,有的是在电子地图显示时, 按照图层, 给同一类空间实体设置统一的图示信息。
上述栅格化指的是将矢量图形格式表示图形的空间数据转换成栅格图 像, 栅格图像的每个像素值通常代表颜色值, 以用于显示器显示、 在纸上 打印输出及生成图像文件输出等过程。
在对空间实体的处理过程中, 有些情况下, 需要对空间实体的遮挡类 型进行判定, 也就是, 分析空间实体有没有被其他的空间实体遮挡, 现有 技术中需要进行判定的空间实体的数据量大, 判定过程效率低, 判定后的 空间实体也不能保证能够在视图窗口中进行显示, 判定效杲也较差。
发明内容
有鉴于此, 本发明提供一种空间实体遮挡类型的判定方法及装置, 以 解决现有技术中对海量空间数据的遮挡类型判定过程复杂, 处理效率低, 判定效果差的问题, 其具体方案如下:
一种空间实体遮挡类型的判定方法, 包括:
从按照预设排序规则进行排序的待分析空间实体中选取当前待分析空 间实体;
依据预先设定的视图控制参数, 将所述当前待分析空间实体的原始空 间数据的原始坐标变换得到视图窗口的视图坐标;
分析所述空间实体在所述视图窗口中的遮挡类型。
优选的,所述视图窗口利用数据结构依据所述视图控制参数进行表示, 具体为: 依据所述视图控制参数用所述栅格数据结构来表示所述视图窗口 的像素, 所述像素为所述视图窗口平面划分成的均匀网格单元, 所述像素 为所述栅格数据中的基本信息存储单元, 所述像素的坐标位置依据所述像 素在所述视图窗口中对应的行号和列号确定, 设定表示所述像素的栅格数 据的初始值全部为 0。 优选的, 所述分析所述空间实体在所述视图窗口中的遮挡类型的过程 包括:
判断所述空间实体在所述视图窗口中显示时需要绘制的像素的值是否 全部为 1 , 若全部为 1 , 则所述遮挡类型为完全遮挡; 若全部为 0, 则所述 遮挡类型为未被遮挡; 若部分为 1 , 则所述遮挡类型为部分遮挡。
优选的, 当所述空间实体的遮挡类型为未被遮挡和 /或部分遮挡时, 将 所述空间实体在所述视图窗口上显示时需要绘制的像素中像素值为 0的像 素赋值为 1。
优选的, 判断所述空间实体的遮挡类型是否符合预设有效空间实体条 件, 若是, 则确定所述空间实体为有效空间实体, 若否, 则确定所述空间 实体为无效空间实体。
优选的, 所述预设有效空间实体的条件包括: 遮挡类型为未被遮挡的 空间实体或遮挡类型为未被遮挡和部分被遮挡的空间实体。
优选的,所视图控制参数包括: 视图模式和视图窗口的外包矩形参数; 所述视图模式包括: 二维模式和三维模式, 所述视图窗口的外包矩形参数 包括: 视图窗口的外包矩形的宽度和视图窗口的外包矩形的高度;
当所述视图模式为二维模式时, 所述视图控制参数还包括: 空间实体 在所述视图窗口中的中心坐标点和视图中空间实体的放大比例, 或者查询 空间实体的矩形范围和视图中空间实体的放大比例;
当所述视图模式为三维模式时, 所述视图控制参数还包括: 视点参数 和投影参数, 所述视点参数包括视点在世界坐标系中的位置、 视点所观察 的目标位置和虚拟照相机向上的向量; 所述投影参数包括: 正交投影和透 视投影。
优选的, 当所述视图控制参数中的视图模式为二维模式时, 所述预设 排序规则为按照空间实体绘制顺序的逆序对空间实体进行排序。
优选的, 当所述视图控制参数中的视图模式为三维模式时, 所述预设 排序规则为按照空间实体离视点由近及远的顺序对空间实体进行排序。 优选的, 还包括: 选取所述有效空间实体。
一种空间实体遮挡类型的判定装置, 包括:
选取单元, 用于从按照预设排序规则进行排序的待分析空间实体中选 取当前待分析空间实体;
坐标转换单元, 依据预先设定的视图控制参数, 将所述当前待分析空 间实体的原始空间数据的原始坐标变换得到视图窗口的视图坐标;
遮挡类型分析单元, 用于分析所述空间实体在所述视图窗口中的遮挡 类型。
从上述的技术方案可以看出, 本发明实施例公开的空间实体遮挡类型 的判定方法, 根据预先设定的视图控制参数将空间实体的原始空间数据的 原始坐标变换得到视图窗口中的视图坐标, 通过分析所述视图坐标判定所 述空间实体显示时的遮挡情况, 简化了现有技术中对空间实体之间进行遮 挡计算的计算过程, 减小了计算量, 提高了对遮挡类型的判定效率, 解决 了海量空间实体遮挡类型的实时判断过程复杂困难的问题。
附图说明 为了更清楚地说明本发明实施例或现有技术中的技术方案, 下面将对 实施例或现有技术描述中所需要使用的附图作筒单地介绍, 显而易见地, 下面描述中的附图仅仅是本发明的一些实施例 , 对于本领域普通技术人员 来讲, 在不付出创造性劳动的前提下, 还可以根据这些附图获得其他的附 图。
图 1为本发明实施例公开的空间实体遮挡类型的判定方法的流程图; 图 2为本发明实施例公开的有效空间实体的判定方法的流程图; 图 3为本发明实施例公开的空间实体遮挡类型的判定装置的结构示意 图。 具体实施方式
下面将结合本发明实施例中的附图, 对本发明实施例中的技术方案进 行清楚、 完整地描述, 显然, 所描述的实施例仅仅是本发明一部分实施例, 而不是全部的实施例。 基于本发明中的实施例, 本领域普通技术人员在没 有做出创造性劳动前提下所获得的所有其他实施例, 都属于本发明保护的 范围。
本发明以视图显示的角度来研究空间数据, 从视图的角度, 在视图窗 口的分辨率确定的情况下, 无论多海量、 多精细的空间数据, 用于视图窗 口显示所需要的最大有效空间数据是恒定的, 就是用于填充完视图窗口的 全部像素所需的空间数据, 因为视图窗口能显示的像素总数是有限的, 无 论空间数据的量有多大, 我们能够看到的像素是确定的, 先绘制的空间实 体如果被后绘制的空间实体压盖, 则相当于后绘制的空间实体遮挡了先绘 制的空间实体, 如果是完全遮挡, 从视图显示的角度来看被完全遮挡的空 间实体是不需要读取、 传输或者在视图窗口上绘制的。
具体来说, 空间实体的显示过程一般是: 首先通过空间数据索引将符 合给定空间条件的空间实体取出来经过传输介质传给空间实体使用者即客 户端, 然后对空间实体的空间数据进行一系列的几何变换和处理之后, 变 换为二维图像上的坐标点; 根据显示参数, 空间实体最终通过绘图算法栅 格化成图像像素, 绘制成一幅二维栅格图像, 在屏幕上显示或输出, 如计 算机屏幕显示、 在纸上打印输出及生成图像文件输出等。 其中空间实体的 绘制, 最终被绘图算法归结为对一个个像素的操作, 本发明就是在基于上 述空间数据显示过程的基础上, 依据预先设定的视图控制参数, 将空间实 体的原始空间数据的原始坐标变换到利用数据结构依据视图控制参数表示 的视图窗口的视图坐标, 通过分析和处理空间实体在所述视图窗口上显示 时需要绘制的像素来分析空间实体在实际进行显示的视图窗口上的显示情 况, 进而分析空间实体之间的遮挡情况, 以便于后续处理过程可以依据分 析的结果而有针对性的选取或传输空间实体, 简化了现有技术中遮挡类型 计算的过程及计算量, 提高了判定的效率及准确性。 为了方便描述, 本申 请文件中将需要处理的空间实体的空间数据称之为原始空间数据, 需要处 理的空间数据的坐标称之为原始空间数据的原始坐标, 需要处理的空间数 据的坐标点称之为原始空间数据的原始坐标点,或直接称之为原始坐标点。 其具体的实施方式如下所述: 本发明实施例公开的一种空间实体遮挡类型的判定方法的流程如图 1 所示, 包括:
步骤 Sll、 从按照预设排序规则进行排序的待分析空间实体中选取当 前待分析空间实体;
当所述视图控制参数中的视图模式为二维模式时, 所述预设排序规则 为按照空间实体绘制顺序的逆序对空间实体进行排序。 当所述视图控制参 数中的视图模式为三维模式时, 所述预设排序规则为按照空间实体离视点 由近及远的顺序对空间实体进行排序。 步骤 S12、 依据预先设定的视图控制参数, 将所述当前待分析空间实 体的空间数据的原始坐标变换得到视图窗口的视图坐标;
本实施例中的视图控制参数包括: 视图模式和视图窗口的外包矩形参 数。 视图模式即根据实际的视图窗口预先设定视图窗口为二维模式还是三 维模式。 视图窗口的外包矩形参数是显示空间实体的视图窗口范围 (0, 0, ViewWidth, ViewHeight), 如计算机屏幕地图显示窗口的范围, 包括: 视 图窗口的外包矩形的宽度 ViewWidth 和视图窗口的外包矩形的高度 ViewHeight, 通过这两个参数, 可以确定实际视图窗口中用于显示图像的 窗口的大小范围。 同时可以得到用于表示视图窗口的栅格数据的大小。 如 用 m 个字节表示一个像素值, 则表示视图窗口的栅格数据的大小为: ( ViewWidth *ViewHeight* m )。 并且将用于表示视图窗口的栅格数据的 初始值赋值为 0。
除包括视图模式和视图窗口的外包矩形参数外,根据视图模式的不同, 视图控制参数的具体内容也不尽相同。 当视图模式为二维模式时, 还包括 查询空间实体的矩形范围和视图中空间实体的放大比例, 还可以利用所述 空间实体在所述视图窗口下的中心坐标点替换查询空间实体的矩形范围, 只要能实现将原始空间数据的原始坐标变换得到视图窗口的视图坐标即 可。 查询空间实体的矩形范围是指将此范围内的空间实体显示在视图窗口 中, 也就是在视图窗口中能显示出来的空间实体的外包矩形, 其具体的范 围值根据实际的显示情况而设定。 当视图模式为三维模式时, 还包括视点 参数和投影参数, 所述视点参数包括视点在预先设定的世界坐标系中的位 置 0(x。,;y。, , X。,U。表示视点在世界坐标系中的三个分量,视点所观察的 目标位置 A(。, ,Z。)和虚拟照相机向上的向量 "^'^'^); 所述投影参数 包括: 正交投影和透视投影。 或者是依据上述参数获得的视图矩阵和投影 矩阵, 利用视图矩阵和投影矩阵进行坐标变换。根据不同的视图控制参数, 将原始空间数据的原始坐标变换到对应模式下的视图窗口的视图坐标。
所述视图坐标对应的数据为视图数据, 确定视图控制参数后, 利用数 据结构依据视图控制参数表示视图窗口。 此处所述的表示视图窗口可以为 实际可以进行显示的物理视图窗口, 也可以是为了进行分析而生成的逻辑 视图窗口环境。
当利用栅格数据结构表示视图窗口时,用栅格数据表示二维栅格图像, 把显示视图窗口平面划分成均匀的网格, 每个网格单元称为像素, 栅格数 据结构就是像素阵列, 栅格中的每个像素是栅格数据中最基本的信息存储 单元, 其坐标位置可以用行号和列号确定。 由于栅格数据是按一定规则排 列的, 所以表示的空间实体位置关系是隐含在行号、 列号之中的。 每个像 素值用于代表空间实体的属性或属性的编码。
依据预先设定的视图控制参数, 将接收的原始空间数据的原始坐标变 换得到视图窗口坐标系下的视图坐标, 原始空间数据的原始坐标点对应视 图窗口坐标系下的视图坐标点, 每个视图坐标点与用栅格数据依据视图控 制参数所表示的视图窗口的像素相对应, 通过分析空间实体在所述视图窗 口上显示时需要绘制的像素来分析空间实体在实际进行显示的视图窗口上 显示时的遮挡情况。
步骤 S13、 分析所述空间实体在所述视图窗口中的遮挡类型; 遮挡类型包括: 完全遮挡, 即表示空间实体已被其它空间实体完全遮 挡; 部分遮挡, 即表示所述空间实体被其它空间实体部分遮挡; 未被遮挡, 即表示空间实体未被其它空间实体遮挡。
具体的判断过程可以通过模拟空间实体在实际视图窗口上的显示过程 来获得空间实体在视图窗口显示时需要绘制的像素, 空间实体的显示过程 一般是: 对空间实体的空间数据进行一系列的几何变换和处理之后, 变换 为二维图像上的坐标点; 根据显示参数, 空间实体最终通过绘图算法栅格 化成图像像素, 绘制成一幅二维栅格图像, 在屏幕上显示或输出。 本发明 基于上述过程, 通过所述视图坐标依据绘图算法 (如线段的绘图算法 Bresenham算法)得到空间实体在所述视图窗口上显示时需要绘制的像素, 然后判断所述需要绘制的像素的值, 如果全部为 1 , 则所述遮挡类型为完 全遮挡; 若全部为 0; 则所述遮挡类型为未被遮挡; 若部分为 1, 则所述遮 挡类型为部分遮挡, 也就是说, 只要所述需要绘制的像素中有像素值为 1 , 所述空间实体就被其他空间实体所遮挡了。 只要所述需要绘制的像素中有 像素值为 0, 所述空间实体就没有被其他空间实体完全遮挡。 当所述空间实体没有被其他空间实体完全遮挡, 即当所述空间实体的 遮挡类型为未被遮挡和 /或部分遮挡时, 如果需要标示所述空间实体在所述 像素上进行绘制, 则可将所述空间实体在所述视图窗口上显示时需要绘制 的像素中像素值为 0的像素赋值为 1 , 以保证后续判定空间实体遮挡类型 的过程中, 如果有其他的空间实体也要在上述像素上显示, 就会被判定为 被遮挡, 以保证判定过程的准确性。
本实施例中对视图坐标进行的分析或处理是以单个像素或者将多个像 素进行组合后进行的处理, 可以根据实际情况的需要, 灵活的设定具体的 处理方式。 其针对像素的具体操作除包括给像素赋值, 即将空间数据进行 栅格化外, 还包括读取像素和对像素值进行判定, 当像素以多个比特位来 进行表示时, 对像素的赋值可以表现为对一个像素整体赋值或者对表示像 素的多个比特位中的任意一个或多个进行赋值; 读取像素的操作也可以表 现为对一个像素的整体值进行读取和读取像素中某个或某几个比特位的 值; 同理, 对像素值的判定也为对一个像素的整体值或某个或某几个比特 位的值所代表的含义进行判定。
如用 4个比特位数据表示模拟的视图窗口的一个像素, 其中用第一个 比特位表示是否有点要素在此像素上栅格化, 第二个比特位表示是否有线 要素在此像素上栅格化, 第三个比特位是否有面要素在此像素上栅格化, 第四个比特位用于空间矢量数据的化筒。 首先定义几个常量:
#define point 0x0001
#define line 0x0002
#define region 0x0004
#define simple 0x0008
例如, 对线空间实体所对应的像素操作方法如下所示:
像素的赋值操作: 用定义的常量 line 同像素值的或操作来对像素进行 赋值, 实现原始空间数据的栅格化。如给 P(x,y)像素线栅格化操作, P(x,y)= P(x,y) I line; 清除原始空间数据栅格化操作, 用定义的常量 line进行取反 后同像素值的与操作来清除,如清除 P(x,y)像素线栅格化操作, P(x,y)= P(x,y) & ~liri6。
读取像素值: P(x,y)的栅格数据的值就是 P(x,y)像素的值;
像素值判定操作: 例如, 判定像素是否被原始空间数据栅格化操作, 用定义的常量 line 同像素值的与操作来判定。 如判定 P(x,y)像素是否被线 栅格化操作, 则判定 P(x,y)& line的值是否大于 0, 如果大于 0, 则 P(x,y) 像素被线空间实体栅格化,如果等于 0, 则 P(x,y)像素没有被线空间实体栅 格化。 对于其它空间实体所对应的像素操作同样可以按照上述方法进行操 作。 此步骤之后, 还可以包括: 步骤 S14、 判断是否存在未处理待分析空间实体, 若是, 则执行步骤
S15, 若否, 则结束; 步骤 S15、 判断所述视图窗口中是否存在未处理像素, 若是, 则返回 步骤 S11 , 若否, 则结束。 本实施例公开的空间实体的遮挡类型的判定方法中, 通过分析空间实 体在所述视图窗口上显示时需要绘制的像素来分析空间实体在实际进行显 示的视图窗口上的显示时的遮挡情况, 保证空间实体之间遮挡计算的计算 量小、 算法筒单高效, 解决了海量空间实体遮挡类型的实时判断复杂困难 的问题, 提高了判定效率和判定结果的准确性。
本发明同时公开了一种依据上述空间实体遮挡类型的判定方法, 判定 空间实体为有效空间实体的方法, 所述有效空间实体为对视图窗口中的像 素绘制后能被显示出来的空间实体。 具体流程如图 2所示, 本实施例公开 的方法应用于服务器端, 以二维视图模式为例, 包括:
步骤 S21、 从按照空间实体绘制顺序的逆序进行排序的空间实体中选 取当前待分析空间实体; 步骤 S22、 依据预先设定的视图控制参数, 将所述当前待分析空间实 体的空间数据的原始坐标变换得到视图窗口的视图坐标; 本实施例中的视图控制参数为客户端的实际显示的视图窗口的视图控 制参数。 步骤 S23、 判断所述空间实体在所述视图窗口中显示时所需要绘制的 像素的值是否全部为 1 , 若是, 则执行步骤 S24a, 若全部为 0, 则执行步 骤 S24b, 若部分为 1, 则执行步骤 S24c; 步驟 S24a、 确定其遮挡类型为完全遮挡; 步骤 S24b、 确定其遮挡类型为未被遮挡; 步骤 S24c、 确定遮挡类型为部分遮挡; 步骤 S25、 判断其遮挡类型是否符合预设有效空间实体条件, 若是, 则执行步驟 S26a, 若否, 则执行步骤 S26b; 本实施例中的预设有效空间实体条件为遮挡类型为未被遮挡的空间实 体。 步骤 S26a、 确定所述空间实体为有效空间实体, 执行步骤 S27; 步骤 S26b、 确定所述空间实体为无效空间实体, 执行步骤 S28; 步骤 S27、 将所述有效空间实体在所述视图窗口上显示时需要绘制的 像素赋值为 1 , 以标示已经有空间实体在所述像素上进行绘制。
当需要标示所述有效空间实体在所述像素上进行绘制时, 同样可以将 所述有效空间实体在所述视图窗口上显示时需要绘制的像素赋值为 1 , 用 于表示该像素已经用于显示所述有效空间实体, 以保证后续判定空间实体 遮挡类型的过程中将已经用于显示有效空间实体的像素值作为判别基础, 如果有其他的空间实体也要在上述像素上显示, 就会被遮挡, 以保证判定 过程的准确性。 同时, 如果空间实体有图示化信息, 在分析和处理时要考 虑图示化信息, 如面实体的图示化信息为透明或半透明, 若此面实体为有 效空间实体, 但此面实体所有要绘制的像素不进行赋值为 1的操作。 步骤 S28、 判断是否存在未处理待分析空间实体, 若是, 则执行步骤
S29, 若否, 则结束;
步骤 S29、 判断所述视图窗口中是否存在未处理像素, 若是, 则返回 步骤 S21 , 若否, 则结束。
本实施例公开的判定有效空间实体的方法, 对客户端请求的空间数据 进行了预先处理, 根据客户端的实际显示的视图控制参数, 确定能够在实 际的视图窗口中进行有效显示的空间实体为有效空间实体, 其后续可相应 的增加将有效空间实体进行选取并进行传输的步骤, 保证了数据的无损显 示的同时, 大大缩减了数据传输量, 提高了数据传输效率和显示效率。
将该方法应用于服务器端, 对客户端请求的空间实体的海量数据依据 客户端进行显示的视图窗口的视图控制参数进行处理, 分析空间实体在所 述视图窗口上的遮挡类型, 进而判断该遮挡类型是否符合预设的有效空间 实体条件, 从而获得可以在客户端进行无损显示的有效数据进行传输, 而 无需传输在客户端的显示界面上无法看到的数据, 缩减了数据传输量, 提 高了传输效率。
本实施例公开的判定有效空间实体的方法并不限定其具体的应用场 景, 其同样可以设置在客户端, 对进行显示的海量空间实体数据的遮挡类 型进行判定, 并根据当前设定的有效空间实体条件, 以确定出有效空间实 体进行显示, 从而保证了显示过程中不需显示无法看到的数据, 提高了显 示效率, 如应用在该场景下, 上述步骤中还可以包括, 将确定的有效空间 实体进行显示的步骤。 当然, 本实施例并不限定上述两种应用场景, 可以根据实际的应用情 况而自行设定该方法的使用场景, 同样也可以同时应用于客户端和服务器 端。 而在将有效空间实体进行判定后, 同样可以包括将有效空间实体的数 据进行化筒、 渐进传输等后续处理, 在此不再一一列举, 凡是利用上述对 空间实体遮挡类型进行判定的方案来进行有效空间实体判定的过程, 都属 于本实施例的保护范围。
本发明同时公开了一种空间实体遮挡类型的判定装置, 其结构如图 3 所示, 包括: 选取单元 31、 坐标转换单元 32、 遮挡类型分析单元 33 , 其 中,选取单元 31用于从按照预设排序规则进行排序的待分析空间实体中选 取当前待分析空间实体;坐标转换单元 32用于依据预先设定的视图控制参 数, 将所述当前待分析空间实体的原始空间数据的原始坐标变换得到视图 窗口的视图坐标;遮挡类型分析单元 33用于分析所述空间实体在所述视图 窗口中的遮挡类型。 其中, 各单元的处理过程如下所述:
选取单元从按照预设排序规则进行排序的待分析空间实体中选取当前 待分析空间实体, 如果是二维模式, 则预设排序规则为按照空间实体绘制 顺序的逆序进行排序, 如果是三维模式, 则预设排序规则为按照空间实体 离视点由近及远的顺序进行排序, 对排序后的空间实体进行按序选取。 坐 标变换单元将选取的空间实体的空间数据的原始坐标按照视图控制参数进 行坐标变换, 变换到利用数据结构依据视图控制参数进行表示的视图窗口 下的视图坐标, 利用遮挡类型分析单元分析空间实体的遮挡类型, 具体过 程包括判断所述空间实体在所述视图窗口中所需要绘制的像素的值是否全 部为 1 , 若全部为 1 , 则所述遮挡类型为完全遮挡; 若全部为 0, 则所述遮 挡类型为未被遮挡; 若部分为 1, 则所述遮挡类型为部分遮挡, 从而确定 出当前被分析的空间实体的遮挡类型是完全遮挡、部分遮挡还是未被遮挡。
对当前的空间实体处理完后, 判断该空间实体是否是最后一个空间实 体, 如果是, 则结束, 如果不是, 则判断利用数据结构表示的视图窗口中, 是否还有未处理的像素, 如果是, 则按照上述的顺序, 选取下一个空间实 体, 进行处理, 如果视图窗口中没有未处理的像素, 则结束, 直到将全部 空间实体处理完, 或者利用数据结构表示的视图窗口中不存在未处理的像 素为止。
本装置中还可以包括有效空间实体确定单元, 其预先设定了有效空间 实体的条件, 其可以是未被遮挡的空间实体, 也可以是部分遮挡的空间实 体, 或者是两者的结合, 判断空间实体的遮挡类型是否符合有效空间实体 的条件, 如果符合, 则确定所述空间实体为有效空间实体, 后续过程中还 可将所述有效空间实体在所述视图窗口上显示时需要绘制的像素赋值为 1 , 以标示已经有空间实体在所述像素上进行绘制。
本实施例公开的空间实体遮挡类型的判定装置的执行过程为对应于上 述本发明实施例所公开的方法实施例流程, 为较佳的装置实施例, 其具体 执行过程可参见上述方法实施例, 在此不再赘述。 本发明公开的空间实体遮挡类型的判定装置可以设置在计算机内, 也 可以设置在手机或其他可以使用本发明的设备内, 或者是其他智能设备。 其既可以设置在服务器端, 在将客户端请求的数据发送之前, 首先对空间 实体的遮挡类型进行判定, 也可将其设置在客户端, 在将空间实体发送到 实际的视图窗口前, 对空间实体的遮挡类型进行判定, 或者同时设置在服 务器和客户端, 根据实际情况选择由哪一方或者双方共同进行处理。
本说明书中各个实施例采用递进的方式描述, 每个实施例重点说明的 都是与其他实施例的不同之处, 各个实施例之间相同相似部分互相参见即 可。 对于实施例公开的装置而言, 由于其与实施例公开的方法相对应, 所 以描述的比较筒单, 相关之处参见方法部分说明即可。
专业人员还可以进一步意识到, 结合本文中所公开的实施例描述的各 示例的单元及算法步骤, 能够以电子硬件、 计算机软件或者二者的结合来 实现, 为了清楚地说明硬件和软件的可互换性, 在上述说明中已经按照功 能一般性地描述了各示例的组成及步骤。 这些功能究竟以硬件还是软件方 式来执行, 取决于技术方案的特定应用和设计约束条件。 专业技术人员可 以对每个特定的应用来使用不同方法来实现所描述的功能, 但是这种实现 不应认为超出本发明的范围。
结合本文中所公开的实施例描述的方法或算法的步骤可以直接用硬 件、 处理器执行的软件模块, 或者二者的结合来实施。 软件模块可以置于 随机存储器(RAM )、 内存、 只读存储器 (ROM )、 电可编程 ROM、 电可 擦除可编程 ROM、 寄存器、 硬盘、 可移动磁盘、 CD-ROM, 或技术领域内 所公知的任意其它形式的存储介质中。
对所公开的实施例的上述说明, 使本领域专业技术人员能够实现或使 用本发明。 对这些实施例的多种修改对本领域的专业技术人员来说将是显 而易见的, 本文中所定义的一般原理可以在不脱离本发明的精神或范围的 情况下, 在其它实施例中实现。 因此, 本发明将不会被限制于本文所示的 这些实施例, 而是要符合与本文所公开的原理和新颖特点相一致的最宽的 范围。

Claims

权 利 要 求
1、 一种空间实体遮挡类型的判定方法, 其特征在于, 包括: 从按照预设排序规则进行排序的待分析空间实体中选取当前待分析空 间实体;
依据预先设定的视图控制参数, 将所述当前待分析空间实体的原始空 间数据的原始坐标变换得到视图窗口的视图坐标;
分析所述空间实体在所述视图窗口中的遮挡类型。
2、 根据权利要求 1所述的方法, 其特征在于, 所述视图窗口利用数据 结构依据所述视图控制参数进行表示, 具体为: 依据所述视图控制参数用 所述栅格数据结构来表示所述视图窗口的像素, 所述像素为所述视图窗口 平面划分成的均匀网格单元, 所述像素为所述栅格数据中的基本信息存储 单元, 所述像素的坐标位置依据所述像素在所述视图窗口中对应的行号和 列号确定, 设定表示所述像素的栅格数据的初始值全部为 0。
3、 根据权利要求 2所述的方法, 其特征在于, 所述分析所述空间实体 在所述视图窗口中的遮挡类型的过程包括:
判断所述空间实体在所述视图窗口中显示时需要绘制的像素的值是否 全部为 1, 若全部为 1, 则所述遮挡类型为完全遮挡; 若全部为 0, 则所述 遮挡类型为未被遮挡; 若部分为 1 , 则所述遮挡类型为部分遮挡。
4、 根据权利要求 3所述的方法, 其特征在于, 还包括: 当所述空间实 体的遮挡类型为未被遮挡和 /或部分遮挡时,将所述空间实体在所述视图窗 口上显示时需要绘制的像素中像素值为 0的像素赋值为 1。
5、 根据权利要求 3所述的方法, 其特征在于, 还包括: 判断所述空间 实体的遮挡类型是否符合预设有效空间实体条件, 若是, 则确定所述空间 实体为有效空间实体, 若否, 则确定所述空间实体为无效空间实体。
6、 根据权利要求 5所述的方法, 其特征在于, 所述预设有效空间实体 的条件包括: 遮挡类型为未被遮挡的空间实体或遮挡类型为未被遮挡和部 分被遮挡的空间实体。
7、 根据权利要求 6所述的方法, 其特征在于, 所视图控制参数包括: 视图模式和视图窗口的外包矩形参数; 所述视图模式包括: 二维模式和三 维模式, 所述视图窗口的外包矩形参数包括: 视图窗口的外包矩形的宽度 和视图窗口的外包矩形的高度;
当所述视图模式为二维模式时, 所述视图控制参数还包括: 空间实体 在所述视图窗口中的中心坐标点和视图中空间实体的放大比例, 或者查询 空间实体的矩形范围和视图中空间实体的放大比例;
当所述视图模式为三维模式时, 所述视图控制参数还包括: 视点参数 和投影参数, 所述视点参数包括视点在世界坐标系中的位置、 视点所观察 的目标位置和虚拟照相机向上的向量; 所述投影参数包括: 正交投影和透 视投影。
8、 根据权利要求 7所述的方法, 其特征在于, 当所述视图控制参数中 的视图模式为二维模式时, 所述预设排序规则为按照空间实体绘制顺序的 逆序对空间实体进行排序。
9、 根据权利要求 7所述的方法, 其特征在于, 当所述视图控制参数中 的视图模式为三维模式时, 所述预设排序规则为按照空间实体离视点由近 及远的顺序对空间实体进行排序。
10、 根据权利要求 5-9中任意一项所述的方法, 其特征在于, 还包括: 选取所述有效空间实体。
11、 一种空间实体遮挡类型的判定装置, 其特征在于, 包括: 选取单元, 用于从按照预设排序规则进行排序的待分析空间实体中选 取当前待分析空间实体;
坐标转换单元, 依据预先设定的视图控制参数, 将所述当前待分析空 间实体的原始空间数据的原始坐标变换得到视图窗口的视图坐标;
遮挡类型分析单元, 用于分析所述空间实体在所述视图窗口中的遮挡 类型。
PCT/CN2010/080583 2010-01-07 2010-12-31 空间实体遮挡类型的判定方法及装置 WO2011082651A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201010017269 2010-01-07
CN201010017269.2 2010-01-07
CN201010144131.9 2010-03-21
CN201010144131A CN101814094A (zh) 2010-01-07 2010-03-21 基于空间实体视图模型的空间实体的选取方法

Publications (1)

Publication Number Publication Date
WO2011082651A1 true WO2011082651A1 (zh) 2011-07-14

Family

ID=44032534

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2010/080583 WO2011082651A1 (zh) 2010-01-07 2010-12-31 空间实体遮挡类型的判定方法及装置

Country Status (2)

Country Link
CN (1) CN102074004B (zh)
WO (1) WO2011082651A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103092892A (zh) * 2011-11-08 2013-05-08 董福田 一种矢量数据的处理方法及装置

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106898051B (zh) * 2017-04-14 2019-02-19 腾讯科技(深圳)有限公司 一种虚拟角色的视野剔除方法和服务器
CN109377552B (zh) * 2018-10-19 2023-06-13 珠海金山数字网络科技有限公司 图像遮挡计算方法、装置、计算设备及存储介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5619433A (en) * 1991-09-17 1997-04-08 General Physics International Engineering Simulation Inc. Real-time analysis of power plant thermohydraulic phenomena
CN101515372A (zh) * 2009-02-04 2009-08-26 北京石油化工学院 基于虚拟地质模型的可视化分析预测方法
CN101814094A (zh) * 2010-01-07 2010-08-25 董福田 基于空间实体视图模型的空间实体的选取方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6480205B1 (en) * 1998-07-22 2002-11-12 Nvidia Corporation Method and apparatus for occlusion culling in graphics systems
US6924801B1 (en) * 1999-02-09 2005-08-02 Microsoft Corporation Method and apparatus for early culling of occluded objects
KR100814424B1 (ko) * 2006-10-23 2008-03-18 삼성전자주식회사 폐색영역 검출장치 및 검출방법
CN100576934C (zh) * 2008-07-03 2009-12-30 浙江大学 基于深度和遮挡信息的虚拟视点合成方法

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5619433A (en) * 1991-09-17 1997-04-08 General Physics International Engineering Simulation Inc. Real-time analysis of power plant thermohydraulic phenomena
CN101515372A (zh) * 2009-02-04 2009-08-26 北京石油化工学院 基于虚拟地质模型的可视化分析预测方法
CN101814094A (zh) * 2010-01-07 2010-08-25 董福田 基于空间实体视图模型的空间实体的选取方法

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103092892A (zh) * 2011-11-08 2013-05-08 董福田 一种矢量数据的处理方法及装置

Also Published As

Publication number Publication date
CN102074004A (zh) 2011-05-25
CN102074004B (zh) 2013-09-25

Similar Documents

Publication Publication Date Title
JP5562439B2 (ja) 空間データ処理方法及び装置
US11348308B2 (en) Hybrid frustum traced shadows systems and methods
US10540789B2 (en) Line stylization through graphics processor unit (GPU) textures
JP2012230689A (ja) ピクセルマスクを用いたグラフィックシステム
CN102096945B (zh) 空间数据渐进传输方法及装置
US9576381B2 (en) Method and device for simplifying space data
US8438199B1 (en) System and method for identifying and highlighting a graphic element
US10950044B2 (en) Methods and apparatus to facilitate 3D object visualization and manipulation across multiple devices
WO2011082651A1 (zh) 空间实体遮挡类型的判定方法及装置
CN106445445B (zh) 一种矢量数据的处理方法及装置
US9754384B2 (en) Relevant method and device for compression, decompression and progressive transmission of spatial data
CN102053837B (zh) 空间实体要素标注的冲突检测与避让方法及装置
CN112634431A (zh) 一种三维纹理贴图转化成三维点云的方法及装置
US7127118B2 (en) System and method for compressing image files while preserving visually significant aspects
CN101814094A (zh) 基于空间实体视图模型的空间实体的选取方法
CN102956028B (zh) 图形数据跨平台加速传输与显示的方法与装置
US20230196674A1 (en) Method and apparatus for processing three dimentional graphic data, device, storage medium and product
CN117557711B (zh) 可视域的确定方法、装置、计算机设备、存储介质
CN118474421B (zh) 一种基于WebGL的三维视频融合方法及系统
JPH08123980A (ja) 三次元図形描画装置
CN103678587B (zh) 空间数据渐进传输方法及装置
CN117788641A (zh) 一种实体绘制方法、装置、计算机设备和存储介质
CN118247467A (zh) 一种数字孪生数据渲染方法、系统、电子设备及存储介质
CN115830282A (zh) 图像的转换方法及装置、存储介质
JPH11306393A (ja) 仮想空間表示システム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10841994

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10841994

Country of ref document: EP

Kind code of ref document: A1