WO2012037862A1 - Method for distributed drawing of 3D model data and corresponding device - Google Patents


Info

Publication number
WO2012037862A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2011/079726
Other languages
English (en)
Chinese (zh)
Inventor
董福田
Original Assignee
Dong Futian
Application filed by Dong Futian
Publication of WO2012037862A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T15/00: 3D [Three Dimensional] image rendering

Definitions

  • This application claims priority to Chinese Patent Application No. 201010287543.8, filed with the Chinese Patent Office on September 20, 2010 and entitled "A three-dimensional model adaptive tube, progressive transmission and efficient rendering",
  • the entire contents of which are incorporated herein by reference.
  • TECHNICAL FIELD: The present invention relates to the fields of spatial information technology, computer graphics, virtual reality, and computer operating systems, and more particularly to a method and apparatus for distributed drawing of three-dimensional model data. BACKGROUND
  • A model is an abstraction or simulation of various laws or processes in the objective world.
  • A spatial data model is an abstraction of spatial entities and their interrelationships in the real world; it is the basis for describing and organizing spatial data and for designing spatial database schemas.
  • Three-dimensional objects are usually described by a three-dimensional spatial data model (a 3D model) and then displayed by a computer or other video device.
  • The displayed 3D model may represent a real-world entity or a fictional thing, from something as small as an atom to something extremely large; anything that exists in the physical and natural world can be represented by a three-dimensional model.
  • A 3D model consists of two parts: geometry and texture.
  • The geometric part mainly falls into three types: vector-based vector models, voxel-based solid models, and mixed models.
  • A texture is an image that makes the model more detailed and look more realistic.
  • The Internet has become a basic platform for data sharing, distributed storage, distributed computing, transmission, visualization, and so on, so the 3D models that a system needs to draw may be stored distributively in a network environment, or even in a heterogeneous environment.
  • How to display on a client the three-dimensional models that are stored distributively, that is, how to realize distributed drawing of three-dimensional model data, has therefore become an urgent problem to be solved.
  • An embodiment of the present invention provides a method and an apparatus for distributed drawing of three-dimensional model data; the technical solutions are as follows:
  • A three-dimensional model data distributed drawing method includes: sending a drawing data request carrying a view control parameter to a plurality of request receiving ends;
  • receiving the post-rendered data corresponding to the drawing data request fed back by each request receiving end, where the post-rendered data includes an image obtained by the request receiving end drawing the three-dimensional model data according to the view control parameter, and a composite identification amount corresponding to each pixel on the image; and
  • synthesizing the received post-rendered data according to the composite identification amount of each pixel, and determining that the synthesized image is the distributed-drawn image of the three-dimensional model data.
  • An embodiment of the present invention provides a three-dimensional model data distributed drawing device, including: a request sending module, configured to send a drawing data request to a plurality of request receiving ends, where the drawing data request carries a view control parameter;
  • a data receiving module, configured to receive the post-rendered data corresponding to the drawing data request fed back by the request receiving ends, where the post-rendered data includes an image obtained by the request receiving end drawing the three-dimensional model data according to the view control parameter, and a composite identification amount corresponding to each pixel on the image; and
  • a merging processing module, configured to synthesize the received post-rendered data according to the composite identification amount of each pixel in the post-rendered data.
  • An embodiment of the present invention further provides a three-dimensional model data distributed drawing method, including: receiving a drawing data request sent by a request sending end, the drawing data request carrying a view control parameter, where the view control parameter includes a bounding rectangle of the view window, viewpoint parameters, and projection parameters; drawing the three-dimensional model data according to the view control parameter, and generating an image of the same size as the bounding rectangle of the view window corresponding to the view control parameter; acquiring the depth of each pixel on the image to form the composite identification amount of the corresponding pixel; and sending the drawn image and the composite identification amount corresponding to each pixel, as the post-rendered data, to the request sending end.
  • An embodiment of the present invention provides a three-dimensional model data distributed drawing apparatus, including: a request receiving module, configured to receive a drawing data request sent by a request sending end, where the drawing data request carries a view control parameter, the view control parameter including a bounding rectangle of the view window, viewpoint parameters, and projection parameters;
  • an image generating module, configured to draw the three-dimensional model data according to the view control parameter and generate an image of the same size as the bounding rectangle of the view window corresponding to the view control parameter;
  • an identification amount generating module, configured to acquire the depth of each pixel on the image and use the pixel depth to form the composite identification amount of the corresponding pixel, where the pixel depth determines the distance of the three-dimensional model data corresponding to each pixel from the viewpoint determined by the view control parameter; and
  • a data sending module, configured to send the drawn image and the composite identification amount corresponding to each pixel to the request sending end.
  • In the solutions provided by the embodiments of the present invention, when the three-dimensional model data needs to be drawn, the request sending end sends drawing data requests carrying the same view control parameter to a plurality of request receiving ends. After receiving, from each request receiving end, the post-rendered data corresponding to the request, including the image obtained by drawing the three-dimensional model data according to the view control parameter and the composite identification amount of each pixel on the image, the request sending end synthesizes the images according to the composite identification amount of each pixel, and determines that the synthesized image is the distributed-drawn image of the three-dimensional model data.
  • FIG. 1 is a first flowchart of a method for distributed drawing of three-dimensional model data according to an embodiment of the present invention
  • FIG. 2 is a second flowchart of a method for distributed drawing of three-dimensional model data according to an embodiment of the present invention
  • FIG. 3 is a third flowchart of a method for distributed drawing of three-dimensional model data according to an embodiment of the present invention
  • FIG. 4 is a fourth flowchart of a method for distributed drawing of three-dimensional model data according to an embodiment of the present invention.
  • FIG. 5 is a first schematic diagram (coordinate system) of a Z-buffer algorithm according to an embodiment of the present invention;
  • FIG. 6 is a second schematic diagram of a Z-buffer algorithm according to an embodiment of the present invention;
  • FIG. 7 is a third schematic diagram of a Z-buffer algorithm according to an embodiment of the present invention;
  • FIG. 8 is a schematic diagram of a first structure of a three-dimensional model data distributed drawing device according to an embodiment of the present invention
  • FIG. 9 is a second structural diagram of a three-dimensional model data distributed drawing apparatus according to an embodiment of the present invention.
  • An embodiment of the present invention provides a 3D model data distributed drawing method and device.
  • The basic idea of the three-dimensional model data distributed drawing method is as follows:
  • The request sending end sends corresponding drawing data requests containing the same view control parameter to a plurality of request receiving ends; each request receiving end feeds back the corresponding post-rendered data after receiving its drawing data request; after receiving the post-rendered data fed back by each request receiving end, the request sending end synthesizes the plurality of post-rendered data and determines that the synthesized image is the distributed-drawn image of the three-dimensional model data. The solution provided by the embodiment of the present invention can therefore effectively implement distributed drawing of three-dimensional model data.
  • For ease of understanding, the display process of a three-dimensional model in a view window is first introduced.
  • The display process of a 3D model in a view window is generally as follows. First, the 3D models meeting a given spatial condition are retrieved through the spatial data index and transmitted to the 3D model user (such as a client) through a transmission medium. Then, after a series of coordinate transformations and processing, the 3D model data is transformed into coordinate points on a 2D image. Finally, according to the display parameters, the 3D model is rasterized into image pixels by a drawing algorithm, drawn into a raster image, and displayed or output on the client (such as displayed on a computer screen, printed on paper, or output as an image file). The drawing of a 3D model is ultimately reduced by the drawing algorithm to operations on individual pixels; among all the 3D model data drawn on the same pixel, only the pixel drawn by the data closest to the observation point is finally displayed.
  • In view of the above, the present invention proposes to control the distributed rendering of the three-dimensional model data using the same view control parameter, while recording for each pixel on the image its depth, which determines the distance of the three-dimensional model data corresponding to the pixel from the viewpoint determined by the view control parameter; the depth constitutes the composite identification amount of the corresponding pixel. Finally, the plurality of drawn images are synthesized according to the composite identification amount of each pixel, and the synthesized image is determined to be the distributed-drawn image of the three-dimensional model data.
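As a concrete illustration of this idea, the per-pixel synthesis can be sketched as follows. This is a minimal sketch, not code from the patent; it assumes images are flat arrays of color values with parallel depth arrays, and that a smaller depth value means closer to the viewpoint.

```python
# Minimal sketch of depth-keyed image composition. Assumptions (not from the
# patent): images are flat lists of color values, depths are parallel lists,
# and a smaller depth means closer to the viewpoint.

UNDRAWN = float("inf")  # initial depth: pixel not yet drawn by any model

def composite(prior_img, prior_depth, cur_img, cur_depth):
    """Merge the current drawn data into the prior drawn data per pixel."""
    out_img, out_depth = list(prior_img), list(prior_depth)
    for i, z in enumerate(cur_depth):
        if z == UNDRAWN:
            continue                      # current pixel carries no model data
        if z < out_depth[i]:              # current pixel is closer to viewpoint
            out_img[i], out_depth[i] = cur_img[i], z
    return out_img, out_depth
```

Synthesizing the images from several request receiving ends is then simply a left fold of `composite` over the replies, in whatever order they arrive.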
  • The request sending end may be a computer device, a mobile phone device, or the like.
  • As shown in FIG. 1, a three-dimensional model data distributed drawing method may include:
  • Step S101: the request sending end sends a drawing data request to a plurality of request receiving ends, where the drawing data request carries a view control parameter;
  • The request receiving end can be an image processing server or another device having image processing functions.
  • The request sending end generates a corresponding drawing data request based on the view control parameter determined by the view window in which the 3D model data is displayed, and sends the generated drawing data request to each corresponding request receiving end.
  • The view control parameter may include: a bounding rectangle of the view window; viewpoint parameters, from which a view matrix transforms the coordinates of vertices from the original coordinate system to the viewpoint coordinate system; and projection parameters, which specify an orthographic projection or a perspective projection. Equivalently, the view matrix and projection matrix obtained from the above parameters may be carried directly.
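For illustration, such a parameter set might be carried in a structure like the following. The field names and the orthographic construction are assumptions made for this sketch, not taken from the patent.

```python
# Hypothetical container for a view control parameter set; all names are
# illustrative. The matrices are plain 4x4 row-major nested lists.
from dataclasses import dataclass
from typing import List, Tuple

Matrix4 = List[List[float]]

@dataclass
class ViewControlParams:
    window_rect: Tuple[int, int, int, int]  # bounding rectangle of the view window: x, y, width, height
    view_matrix: Matrix4                    # transforms original coordinates into the viewpoint coordinate system
    proj_matrix: Matrix4                    # orthographic or perspective projection

def orthographic(left, right, bottom, top, near, far) -> Matrix4:
    """One conventional orthographic projection matrix (OpenGL-style)."""
    return [
        [2.0 / (right - left), 0.0, 0.0, -(right + left) / (right - left)],
        [0.0, 2.0 / (top - bottom), 0.0, -(top + bottom) / (top - bottom)],
        [0.0, 0.0, -2.0 / (far - near), -(far + near) / (far - near)],
        [0.0, 0.0, 0.0, 1.0],
    ]
```

Carrying the two matrices directly, as the text notes, spares each request receiving end from re-deriving them from the raw viewpoint and projection parameters.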
  • Step S102: receive the post-rendered data corresponding to the drawing data request fed back by the request receiving ends;
  • The post-rendered data includes an image obtained by the request receiving end drawing the three-dimensional model data according to the view control parameter, and a composite identification amount corresponding to each pixel on the image.
  • After receiving the corresponding drawing data request, the request receiving end draws the three-dimensional model data according to the view control parameter carried in the request and generates an image of the same size as the bounding rectangle of the view window corresponding to the view control parameter, while recording the composite identification amount corresponding to each pixel on the image. The composite identification amount includes the depth of the pixel, which determines the distance of the three-dimensional model data corresponding to the pixel from the viewpoint determined by the view control parameter. Finally, the request receiving end feeds back the image and the composite identification amount of each pixel, as the post-rendered data, to the request sending end. It can be understood that the composite identification amount may further include information such as the identification number of the three-dimensional model, the transparency of the pixel, and the like.
  • The pixels of the view window are represented by a raster data structure according to the view control parameter: the pixels are uniform grid cells into which the view window plane is divided, and each pixel is the basic information storage unit of the raster data; the coordinate position of a pixel is determined by its row number and column number in the view window.
  • The z value of a pixel expresses its depth (i.e., the z coordinate).
  • If a smaller depth value means closer to the viewpoint, the raster data representing pixel depth is initialized to the maximum value; if a larger depth value means closer to the viewpoint, it is initialized to the minimum value.
  • When the depth of a pixel still holds the initial value, the pixel has not been drawn by any three-dimensional model.
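The two initialization conventions can be sketched as follows; this is illustrative only, and the concrete sentinel values are an assumption of the sketch rather than something the patent prescribes.

```python
# Sketch of the two depth-initialization conventions described above.
# smaller_is_closer=True: a smaller depth means closer, so init to the maximum.
# smaller_is_closer=False: a larger depth means closer, so init to the minimum.

def init_depth_raster(width, height, smaller_is_closer=True):
    init = float("inf") if smaller_is_closer else float("-inf")
    return [[init] * width for _ in range(height)]

def is_undrawn(depth_value, smaller_is_closer=True):
    """A pixel still holding the initial value was never drawn by any model."""
    init = float("inf") if smaller_is_closer else float("-inf")
    return depth_value == init
```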
  • Step S103: synthesize the received post-rendered data according to the composite identification amount of each pixel, and determine that the synthesized image is the distributed-drawn image of the three-dimensional model data.
  • After receiving the post-rendered data fed back by the request receiving ends, the request sending end synthesizes the plurality of received post-rendered data using the composite identification amounts of the pixels, that is, it performs per-pixel synthesis of the multiple images.
  • The synthesis ensures that, at the pixel level, a far object is replaced by a near object, independent of the order in which the post-rendered data arrives.
  • Specifically, synthesizing the received post-rendered data according to the composite identification amount of each pixel may proceed as follows:
  • After receiving post-rendered data fed back by a request receiving end, the request sending end determines whether previously drawn data exists. If so, the currently received post-rendered data is taken as the current post-rendered data, and the current post-rendered data and the previously drawn data are synthesized according to the composite identification amounts of their corresponding pixels; the synthesized data then serves as the new previously drawn data. Otherwise, the currently received data is taken as the first previously drawn data.
  • This processing is repeated for each received post-rendered data, and the final synthesized image is determined to be the distributed-drawn image of the three-dimensional model data.
  • That is, the first post-rendered data received by the request sending end that satisfies the specific requirement is taken as the initial previously drawn data; each subsequently received post-rendered data that satisfies the requirement is taken in turn as the current post-rendered data, subjected to the synthesis processing with the currently existing previously drawn data, and the synthesized data is used as the new previously drawn data, until all qualifying post-rendered data have been combined with the previously drawn data, forming the distributed-drawn image of the three-dimensional model data.
  • As shown in FIG. 2, according to the composite identification amount of each pixel, the synthesis of the current post-rendered data and the previously drawn data may include:
  • Step S201: determine an unanalyzed pixel on the view window corresponding to the view control parameter as the current pixel to be analyzed, Pi;
  • The image in the post-rendered data is generated by the request receiving end, after receiving the corresponding drawing data request, by drawing the three-dimensional model data according to the view control parameter carried in the request, with the same size as the bounding rectangle of the view window corresponding to the view control parameter; that is, the pixels on the view window correspond one-to-one with the pixels of the image in the current post-rendered data and with those of the image in the previously drawn data.
  • Step S202: determine the composite identification amount corresponding to the current pixel to be analyzed in the current post-rendered data as the current composite identification amount to be analyzed, Zi;
  • Step S203: determine whether the pixel depth recorded in the current composite identification amount Zi equals the initial value; if yes, perform step S207; otherwise, perform step S204;
  • The initial pixel depth value in this embodiment is the maximum value. It can be understood that, depending on the system, if a smaller depth value means closer to the viewpoint, the initial depth value is the maximum value; if a larger depth value means closer to the viewpoint, the initial depth value is the minimum value. When the depth of a pixel is the initial value, the pixel has not been drawn by any three-dimensional model.
  • Step S204: obtain the composite identification amount Zi' corresponding to the current pixel to be analyzed Pi in the previously drawn data;
  • Step S205: determine whether the pixel depth recorded in the current composite identification amount Zi is smaller than the pixel depth recorded in the composite identification amount Zi' corresponding to Pi in the previously drawn data; if yes, perform step S206; otherwise, perform step S207;
  • Step S206: replace the data corresponding to Pi in the previously drawn data with the data corresponding to Pi in the current post-rendered data, and perform step S208;
  • Step S207: retain the data corresponding to Pi in the previously drawn data;
  • Step S208: determine whether unanalyzed pixels remain in the view window; if yes, perform step S201; otherwise, end.
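Steps S201 to S208 amount to a single pass over the view-window pixels. The sketch below is a hedged illustration, assuming flat color/depth arrays and the maximum-value initialization of this embodiment; it is not code from the patent.

```python
# Sketch of steps S201-S208: a smaller depth means closer to the viewpoint,
# initial depth = +infinity ("not drawn by any model"). Each "drawn data"
# is assumed to be a (color array, depth array) pair over the window pixels.

INIT = float("inf")

def merge_drawn_data(prior, current):
    prior_color, prior_depth = prior
    cur_color, cur_depth = current
    for pi in range(len(cur_depth)):          # S201/S208: visit each pixel once
        zi = cur_depth[pi]                    # S202: current composite identifier
        if zi == INIT:                        # S203: nothing drawn here, so
            continue                          # S207: retain the prior data
        if zi < prior_depth[pi]:              # S204/S205: compare pixel depths
            prior_color[pi] = cur_color[pi]   # S206: current data is closer
            prior_depth[pi] = zi
        # otherwise S207: retain the prior data
    return prior_color, prior_depth
```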
  • The synthesis of the current post-rendered data and the previously drawn data can be completed in the above manner.
  • Alternatively, as shown in FIG. 3, according to the composite identification amount of each pixel, synthesizing the current post-rendered data and the previously drawn data may include: Step S301: determine an unanalyzed pixel on the view window corresponding to the view control parameter as the current pixel to be analyzed, Pi;
  • After receiving the corresponding drawing data request, the request receiving end draws the three-dimensional model data according to the view control parameter carried in the request and generates an image of the same size as the bounding rectangle of the view window corresponding to the view control parameter; that is, the pixels on the view window correspond one-to-one with the pixels of the image in the current post-rendered data and with those of the image in the previously drawn data.
  • Step S302: determine the composite identification amount corresponding to the current pixel to be analyzed in the current post-rendered data as the current composite identification amount to be analyzed, Zi;
  • Step S303: determine whether the pixel depth recorded in the current composite identification amount Zi equals the initial value; if yes, perform step S307; otherwise, perform step S304;
  • The initial pixel depth value in this embodiment is the minimum value. It can be understood that, depending on the system, if a smaller depth value means closer to the viewpoint, the initial depth value is the maximum value; if a larger depth value means closer to the viewpoint, the initial depth value is the minimum value. When the depth of a pixel is the initial value, the pixel has not been drawn by any three-dimensional model.
  • Step S304: obtain the composite identification amount Zi' corresponding to the current pixel to be analyzed Pi in the previously drawn data;
  • Step S305: determine whether the pixel depth recorded in the current composite identification amount Zi is greater than the pixel depth recorded in the composite identification amount Zi' corresponding to Pi in the previously drawn data; if yes, perform step S306; otherwise, perform step S307;
  • Step S306: replace the data corresponding to Pi in the previously drawn data with the data corresponding to Pi in the current post-rendered data, and perform step S308;
  • Step S307: retain the data corresponding to Pi in the previously drawn data;
  • Step S308: determine whether unanalyzed pixels remain in the view window; if yes, perform step S301; otherwise, end.
  • As shown in FIG. 4, from the side of the request receiving end, a three-dimensional model data distributed drawing method may include:
  • Step S401: receive a drawing data request sent by the request sending end, where the drawing data request carries a view control parameter;
  • The view control parameter includes: a bounding rectangle of the view window, viewpoint parameters, and projection parameters.
  • Step S402: draw the three-dimensional model data according to the view control parameter, and generate an image of the same size as the bounding rectangle of the view window corresponding to the view control parameter;
  • Step S403: acquire the depth of each pixel on the image, and use the pixel depth to form the composite identification amount of the corresponding pixel, where the pixel depth determines the distance of the three-dimensional model data corresponding to each pixel from the viewpoint determined by the view control parameter;
  • The composite identification amount may further include information such as the identification number of the three-dimensional model, pixel transparency, and the like.
  • The depth of each pixel on the image may be calculated by different algorithms depending on the actual situation, for example the Z-buffer algorithm.
  • Taking the Z-buffer algorithm as an example, the calculation of pixel depth is introduced below:
  • The Z-buffer algorithm, also called the depth buffer algorithm, is an image-space hidden-surface removal algorithm. It first transforms the original coordinates of the three-dimensional model data into view-window coordinates according to the view control parameters, and then performs analysis and calculation.
  • The depth buffer algorithm uses two buffers, a depth buffer and a frame buffer, corresponding to two arrays: a depth array depth(x, y) and an attribute array intensity(x, y).
  • The former stores the z coordinate of each visible pixel in image space; the latter stores the attribute (light intensity or color) value of each visible pixel in image space.
  • The algorithm usually calculates the depth of each object surface from the observation plane along the Z-axis of the observation coordinate system. It processes each object surface in the scene separately, point by point on each patch. After the description of the object is transformed into the projected coordinate system, each point (x, y, z) on a polygon surface corresponds to the orthographic projection point (x, y) on the observation plane. Thus, for each pixel (x, y) on the viewing plane, the depths of the surfaces can be compared by comparing their z values. For a right-handed coordinate system, the point with the largest z value is visible. As shown in Fig. 5, on the observation plane, among the surfaces s1, s2 and s3, surface s3 is closest to the viewpoint, so it is visible at the position (x, y).
  • Initially, each unit of the frame buffer is set to the background color (and each unit of the depth buffer to the initial depth value); then each patch in the polygon table is processed one by one.
  • For each scan line, the depth value z(x, y) corresponding to each pixel (x, y) on the line is calculated, and the result is compared with the depth value depth(x, y) stored for that pixel in the depth buffer.
  • The depth values can be updated incrementally. For a planar patch satisfying Ax + By + Cz + D = 0, the depth at a point is z = -(Ax + By + D)/C. Let m denote the slope of the polygon edge along which the scan lines descend; then stepping down one scan line along that edge gives the depth z' = z + (A/m + B)/C (formula (3)), and stepping one pixel along a scan line gives z' = z - A/C (formula (4)).
  • For each scan line, the depth value corresponding to the leftmost intersection of the polygon with the scan line is first calculated according to formula (3), and the depths of all subsequent points on the scan line are then obtained by formula (4).
  • After hidden-surface removal, the image is obtained together with the depth corresponding to each pixel on it, which constitutes the composite identification amount of the pixel.
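The incremental depth evaluation used by the scan-line form of the algorithm can be sketched as follows. For a planar patch Ax + By + Cz + D = 0, the depth at a point is z = -(Ax + By + D)/C, so each step reduces to one addition. This is a sketch of the standard textbook recurrences, offered here as an illustration of the patent's formulas (3) and (4) rather than a reproduction of them.

```python
# Incremental depth update for a planar patch A*x + B*y + C*z + D = 0.
# depth_at solves the plane equation directly; the two step functions
# reproduce the same values with a single addition per pixel / scan line.

def depth_at(x, y, plane):
    A, B, C, D = plane
    return -(A * x + B * y + D) / C

def step_along_scanline(z, plane):
    """Depth at (x + 1, y) given the depth z at (x, y)."""
    A, _, C, _ = plane
    return z - A / C

def step_down_edge(z, m, plane):
    """Depth one scan line down along a polygon edge of slope m."""
    A, B, C, _ = plane
    return z + (A / m + B) / C
```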
  • Step S404: send the drawn image and the composite identification amount corresponding to each pixel, as the post-rendered data, to the request sending end.
  • In the solution provided by the embodiments of the present invention, when the three-dimensional model data needs to be drawn, the request sending end sends drawing data requests carrying the view control parameter to a plurality of request receiving ends; after receiving from each request receiving end the post-rendered data corresponding to the request, including the image obtained by drawing the three-dimensional model data according to the view control parameter and the composite identification amount of each pixel on the image, it synthesizes the images according to the composite identification amounts, and determines that the synthesized image is the distributed-drawn image of the three-dimensional model data.
  • By sending corresponding drawing data requests to a plurality of request receiving ends and synthesizing the post-rendered data fed back by them, distributed drawing of the three-dimensional model data is realized.
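The overall flow can be summarized in a short sketch. All names and the stub renderer below are illustrative assumptions; a real request receiving end would rasterize its own share of the 3D model data before replying.

```python
# End-to-end sketch: the request sending end broadcasts one view control
# parameter set, each request receiving end renders its own model data,
# and the replies are folded together by per-pixel depth.
from functools import reduce

UNDRAWN = float("inf")

def render_stub(precomputed, params):
    """Stand-in for a request receiving end; returns (colors, depths)."""
    return precomputed  # a real receiver would draw its 3D model data here

def merge(prior, current):
    colors, depths = list(prior[0]), list(prior[1])
    for i, z in enumerate(current[1]):
        if z != UNDRAWN and z < depths[i]:
            colors[i], depths[i] = current[0][i], z
    return (colors, depths)

def distributed_draw(receivers, params):
    """Request/response loop collapsed into a map followed by a fold."""
    replies = [render_stub(r, params) for r in receivers]
    return reduce(merge, replies)
```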
  • Those skilled in the art can clearly understand that the present invention can be implemented by software plus a necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • Based on such understanding, the technical solution of the present invention, or the part that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present invention.
  • The foregoing storage medium includes any medium that can store program code, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • An embodiment of the present invention further provides a three-dimensional model data distributed drawing device serving as the request sending end; as shown in FIG. 8, the device may include:
  • a request sending module 110, configured to send a drawing data request to a plurality of request receiving ends, where the drawing data request carries a view control parameter;
  • a data receiving module 120, configured to receive the post-rendered data corresponding to the drawing data request fed back by the request receiving ends, where the post-rendered data includes an image obtained by the request receiving end drawing the three-dimensional model data according to the view control parameter, and a composite identification amount corresponding to each pixel on the image; and
  • a merging processing module 130, configured to synthesize the received post-rendered data according to the composite identification amount of each pixel in the post-rendered data.
  • The three-dimensional model data distributed drawing device serving as the request sending end can be a computer device, a mobile phone device, or the like.
  • The merging processing module 130 may include:
  • a first determining unit, configured to, after post-rendered data fed back by a request receiving end is received, determine whether previously drawn data exists; if yes, trigger the merging processing unit; otherwise, trigger the prior data determining unit;
  • a merging processing unit, configured to take the currently received post-rendered data as the current post-rendered data, synthesize the current post-rendered data and the previously drawn data according to the composite identification amount of each pixel in both, and use the synthesized data as the new previously drawn data; and
  • a prior data determining unit, configured to take the currently received post-rendered data as the previously drawn data.
  • an embodiment of the present invention further provides a distributed three-dimensional model data drawing device serving as a request receiving end, as shown in FIG.
  • the request receiving module 210 is configured to receive a drawing data request sent by the request sending end, where the drawing data request carries view control parameters including: the bounding rectangle of the view window, a viewpoint parameter, and a projection parameter;
  • the image generation module 220 is configured to draw the three-dimensional model data according to the view control parameters and generate an image of the same size as the bounding rectangle of the view window corresponding to the view control parameters;
  • the identifier quantity generating module 230 is configured to acquire the depth of each pixel of the image and use the pixel depth to form the composite identification quantity of the corresponding pixel, where the pixel depth represents the distance from the three-dimensional model data corresponding to each pixel to the viewpoint determined by the view control parameters;
  • the data sending module 240 is configured to send the drawn image and the composite identification quantity corresponding to each pixel to the request sending end.
  • the three-dimensional model data distributed drawing device serving as the request receiving end may be an image processing server or another device having image processing functions.
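On the receiving end, the composite identification quantity of a pixel is its depth: the distance from the 3D point rendered into that pixel to the viewpoint given by the view control parameters. A hypothetical sketch of that computation (names and sample data are illustrative, not from the patent):

```python
import math

def pixel_depths(points, viewpoint):
    """For each pixel's rendered 3D point, compute its distance to the
    viewpoint; this distance serves as the pixel's composite identifier."""
    return [[math.dist(viewpoint, p) for p in row] for row in points]

# A one-row "image" whose two pixels map to two 3D model points.
points = [[(0.0, 0.0, 5.0), (3.0, 4.0, 0.0)]]
print(pixel_depths(points, (0.0, 0.0, 0.0)))  # [[5.0, 5.0]]
```

The image and this per-pixel depth table together form the post-rendering data that module 240 would send back to the request sending end.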
  • for a device or system embodiment, since it substantially corresponds to a method embodiment, reference may be made to the relevant description of the method embodiment.
  • the apparatus or system embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
  • the disclosed system, apparatus, and method may be implemented in other manners without departing from the spirit and scope of the present application.
  • the embodiments described are merely exemplary and should not be taken as limiting; the specific content given does not limit the purpose of the application.
  • the division into units or subunits is only a logical functional division; in actual implementation there may be other ways of dividing, for example, multiple units or subunits may be combined. In addition, multiple units may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the described systems, apparatus, and methods, as well as the schematic diagrams of the various embodiments, may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present application.
  • the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The invention relates to a method for distributed rendering of 3D model data and to a corresponding device. The method comprises: sending to multiple request receiving ends drawing data requests that carry view control parameters; receiving from the request receiving ends feedback of post-rendering data corresponding to the drawing data requests, said post-rendering data containing an image corresponding to the 3D model data rendered by the request receiving ends on the basis of the view control parameters, together with composite identification values corresponding to all the pixels of the image; and, according to the composite identification values of all the pixels in the post-rendering data, compositing all the received post-rendering data so that the composited image is the image of the 3D model data rendered in a distributed manner. The described solution implements distributed rendering of 3D model data by sending corresponding drawing data requests to multiple request receiving ends and then compositing and processing the feedback from those request receiving ends.
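The request sending end's loop described in the abstract, where the first feedback becomes the prior data and each later feedback is folded in by per-pixel depth comparison, can be sketched as follows (the helper names and scalar "pixel" data are hypothetical, chosen only to keep the example small):

```python
def fold_feedback(feedbacks, merge):
    """Composite a stream of feedbacks from request receiving ends
    into one distributed rendering."""
    prior = None
    for fb in feedbacks:
        # First feedback becomes the prior data; later ones are merged in.
        prior = fb if prior is None else merge(prior, fb)
    return prior

def merge_pixel(a, b):
    # Each feedback is (color, depth); keep the fragment nearer the viewpoint.
    return a if a[1] <= b[1] else b

result = fold_feedback([("red", 3.0), ("blue", 1.0), ("green", 2.0)], merge_pixel)
print(result)  # ('blue', 1.0)
```

In a full implementation `merge` would apply the depth comparison to every pixel of the returned images, as the merge processing unit does; the order in which feedbacks arrive does not change the final composite.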
PCT/CN2011/079726 2010-09-20 2011-09-16 Procédé de dessin réparti de données de modèle 3d et dispositif correspondant WO2012037862A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201010287543.8 2010-09-20
CN201010287543 2010-09-20
CN201010597111.7 2010-12-06
CN2010105971117A CN102044089A (zh) 2010-09-20 2010-12-06 一种三维模型的自适应化简、渐进传输和快速绘制的方法

Publications (1)

Publication Number Publication Date
WO2012037862A1 true WO2012037862A1 (fr) 2012-03-29

Family

ID=43910201

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2011/079726 WO2012037862A1 (fr) 2010-09-20 2011-09-16 Procédé de dessin réparti de données de modèle 3d et dispositif correspondant
PCT/CN2011/079727 WO2012037863A1 (fr) 2010-09-20 2011-09-16 Procédé de simplification et de transmission progressive de données de modèle 3d et dispositif correspondant

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/079727 WO2012037863A1 (fr) 2010-09-20 2011-09-16 Procédé de simplification et de transmission progressive de données de modèle 3d et dispositif correspondant

Country Status (2)

Country Link
CN (3) CN102044089A (fr)
WO (2) WO2012037862A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116958332A (zh) * 2023-09-20 2023-10-27 南京竹影数字科技有限公司 基于图像识别的纸张绘图实时映射3d模型的方法及系统
CN117369633A (zh) * 2023-10-07 2024-01-09 上海铱奇科技有限公司 一种基于ar的信息交互方法及系统

Families Citing this family (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102044089A (zh) * 2010-09-20 2011-05-04 董福田 一种三维模型的自适应化简、渐进传输和快速绘制的方法
CN102184572B (zh) * 2011-05-19 2017-07-21 威盛电子股份有限公司 三维图形裁剪方法、呈现方法及其图形处理装置
CN102364439B (zh) * 2011-10-19 2014-03-26 广东威创视讯科技股份有限公司 分布式处理系统的窗口快速移动方法及装置
CN106503171B (zh) * 2011-11-08 2020-02-07 苏州超擎图形软件科技发展有限公司 一种矢量数据的处理方法及装置
CN103378863B (zh) * 2012-04-18 2017-11-10 苏州超擎图形软件科技发展有限公司 空间数据压缩、解压与渐进传输的有关方法与装置
CN103035164B (zh) * 2012-12-24 2015-03-25 广东威创视讯科技股份有限公司 一种地理信息系统渲染方法及系统
CN103337090B (zh) * 2013-06-17 2016-07-13 清华大学 月球模型远程交互浏览可视化方法、客户端及系统
CN103455970A (zh) * 2013-08-30 2013-12-18 天津市测绘院 利用三维数字城市系统模型非可见部分的加速显示方法
EP2881918B1 (fr) 2013-12-06 2018-02-07 My Virtual Reality Software AS Procédé pour visualiser des données tridimensionnelles
CN103678587B (zh) * 2013-12-12 2017-10-13 中国神华能源股份有限公司 空间数据渐进传输方法及装置
JP6087301B2 (ja) * 2014-02-13 2017-03-01 株式会社ジオ技術研究所 3次元地図表示システム
CN103927396B (zh) * 2014-05-05 2018-02-02 曾志明 利用辅助数据在三维渲染中获得三维空间信息的查找方法
CA2963159C (fr) * 2014-09-30 2021-06-15 Cae Inc. Restitution d'images endommagees-ameliorees lors d'une simulation informatique
CN104658041A (zh) * 2015-02-12 2015-05-27 中国人民解放军装甲兵工程学院 一种分布式三维虚拟环境的实体模型动态调度方法
CN106600679B (zh) * 2015-10-20 2019-11-08 星际空间(天津)科技发展有限公司 一种三维模型数据简化的方法
CN106600700B (zh) * 2015-10-20 2020-01-17 星际空间(天津)科技发展有限公司 一种三维模型数据处理系统
CN105303607B (zh) * 2015-10-28 2018-09-18 沈阳黎明航空发动机(集团)有限责任公司 一种保持精度的三维模型简化方法
CN105513118B (zh) * 2015-11-26 2018-07-10 北京像素软件科技股份有限公司 一种体素化游戏世界的渲染方法
CN105817031A (zh) * 2016-03-16 2016-08-03 小天才科技有限公司 游戏地图的物体绘制方法及装置
CN105894551B (zh) * 2016-03-31 2020-02-14 百度在线网络技术(北京)有限公司 图像绘制方法及装置
US10841557B2 (en) * 2016-05-12 2020-11-17 Samsung Electronics Co., Ltd. Content navigation
CN107545222A (zh) * 2016-06-29 2018-01-05 中国园林博物馆北京筹备办公室 在虚拟现实场景中显示目标图像的方法及其系统
CN107657530A (zh) * 2016-07-25 2018-02-02 武汉票据交易中心有限公司 一种业务流程的处理方法及系统
CN106557620A (zh) * 2016-11-08 2017-04-05 广东柳道热流道系统有限公司 热流道标准化产品的辅助出图方法
CN106776020B (zh) * 2016-12-07 2020-02-21 长春理工大学 大型三维场景的计算机集群分布式路径跟踪绘制方法
CN107422952B (zh) * 2017-05-04 2019-04-09 广州视源电子科技股份有限公司 一种立体图形显示的方法、装置及设备
CN107967716B (zh) * 2017-11-01 2021-08-06 深圳依偎控股有限公司 一种基于立体图片的缩略图显示控制方法及系统
CN108055351B (zh) * 2017-12-29 2021-04-16 深圳市毕美科技有限公司 三维文件的处理方法及装置
CN108090305A (zh) * 2018-01-10 2018-05-29 安徽极光照明工程有限公司 一种基于光线追踪技术的舞台灯光控制系统
CN108267154B (zh) * 2018-02-09 2020-08-14 城市生活(北京)资讯有限公司 一种地图显示方法及装置
CN108776995A (zh) * 2018-06-06 2018-11-09 广东您好科技有限公司 基于像素合成技术的虚拟机器人定制系统
CN109522381B (zh) * 2018-11-02 2021-05-04 长江空间信息技术工程有限公司(武汉) 基于3dgis+bim的建筑物隐蔽设施安全检测方法
CN116156164B (zh) * 2018-12-30 2023-11-28 北京达佳互联信息技术有限公司 用于对视频进行解码的方法、设备和可读存储介质
CN109925715B (zh) * 2019-01-29 2021-11-16 腾讯科技(深圳)有限公司 一种虚拟水域生成方法、装置及终端
CN110084870B (zh) * 2019-05-13 2023-03-24 武汉轻工大学 平面方程的绘图区域的确定方法、装置、设备及存储介质
CN110368694B (zh) * 2019-08-22 2023-05-16 网易(杭州)网络有限公司 游戏场景的数据处理方法、装置、设备及可读存储介质
CN110647515A (zh) * 2019-08-29 2020-01-03 北京浪潮数据技术有限公司 分布式绘图方法及装置
CN110889901B (zh) * 2019-11-19 2023-08-08 北京航空航天大学青岛研究院 基于分布式系统的大场景稀疏点云ba优化方法
CN112396682B (zh) * 2020-11-17 2021-06-22 重庆市地理信息和遥感应用中心 一种三维场景下视觉递进的模型浏览方法
CN113256784B (zh) * 2021-07-02 2021-09-28 武大吉奥信息技术有限公司 一种基于gpu进行超高效绘制gis空间三维体素数据的方法
CN115657855A (zh) * 2022-11-10 2023-01-31 北京有竹居网络技术有限公司 人机交互的方法、装置、设备和存储介质
CN115994410B (zh) * 2023-03-22 2023-05-30 中国人民解放军国防科技大学 基于八叉树细化四面体网格的飞行器仿真驱动设计方法
CN116883469B (zh) * 2023-07-20 2024-01-19 中国矿业大学 平面特征约束下基于eiv模型描述的点云配准方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6151621A (en) * 1997-04-10 2000-11-21 International Business Machines Corp. Personal conferencing system
CN1441363A (zh) * 2002-02-27 2003-09-10 惠普公司 分布式资源结构和系统
CN1802668A (zh) * 2002-09-06 2006-07-12 索尼计算机娱乐公司 用于表现三维对象的方法和设备
CN101334891A (zh) * 2008-08-04 2008-12-31 北京理工大学 一种多通道的分布式绘制系统与方法
CN102044089A (zh) * 2010-09-20 2011-05-04 董福田 一种三维模型的自适应化简、渐进传输和快速绘制的方法

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6438576B1 (en) * 1999-03-29 2002-08-20 International Business Machines Corporation Method and apparatus of a collaborative proxy system for distributed deployment of object rendering
CN100428218C (zh) * 2002-11-13 2008-10-22 北京航空航天大学 一种实现通用虚拟环境漫游引擎的方法
KR100810294B1 (ko) * 2006-09-12 2008-03-06 삼성전자주식회사 3차원 메쉬 데이터의 특징-유지 간략화 방법
US8760450B2 (en) * 2007-10-30 2014-06-24 Advanced Micro Devices, Inc. Real-time mesh simplification using the graphics processing unit
US7983487B2 (en) * 2007-11-07 2011-07-19 Mitsubishi Electric Research Laboratories, Inc. Method and system for locating and picking objects using active illumination
CN101226640B (zh) * 2007-12-21 2010-08-18 西北工业大学 基于多双目立体视觉的运动捕获方法
CN101587583A (zh) * 2009-06-23 2009-11-25 长春理工大学 基于gpu集群的渲染农场


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116958332A (zh) * 2023-09-20 2023-10-27 南京竹影数字科技有限公司 基于图像识别的纸张绘图实时映射3d模型的方法及系统
CN116958332B (zh) * 2023-09-20 2023-12-22 南京竹影数字科技有限公司 基于图像识别的纸张绘图实时映射3d模型的方法及系统
CN117369633A (zh) * 2023-10-07 2024-01-09 上海铱奇科技有限公司 一种基于ar的信息交互方法及系统

Also Published As

Publication number Publication date
CN102044089A (zh) 2011-05-04
WO2012037863A1 (fr) 2012-03-29
CN102306395A (zh) 2012-01-04
CN102332179A (zh) 2012-01-25
CN102332179B (zh) 2015-03-25
CN102306395B (zh) 2014-03-26

Similar Documents

Publication Publication Date Title
WO2012037862A1 (fr) Procédé de dessin réparti de données de modèle 3d et dispositif correspondant
JP6260924B2 (ja) レーザスキャンデータの画像レンダリング
EP1703470B1 (fr) Procédé et appareil de modélisation basés sur la profondeur d'image
CN103810744B (zh) 在点云中回填点
JPH0757117A (ja) テクスチャマップへの索引を生成する方法及びコンピュータ制御表示システム
CN102834849A (zh) 进行立体视图像的描绘的图像描绘装置、图像描绘方法、图像描绘程序
KR100967296B1 (ko) 그래픽 인터페이스 및 스테레오스코픽 디스플레이용 그래픽데이터를 래스터라이즈하는 방법
CN101236662A (zh) 生成用于3d显示的cg图像的装置和方法
WO2011082650A1 (fr) Procédé et dispositif destinés au traitement de données spatiales
GB2406252A (en) Generation of texture maps for use in 3D computer graphics
KR20130012504A (ko) 다시점 렌더링 장치 및 방법
JP2003091745A (ja) 三次元シーンでイメージベースのレンダリング情報を表現するための方法
US9401044B1 (en) Method for conformal visualization
Tredinnick et al. Experiencing interior environments: New approaches for the immersive display of large-scale point cloud data
JP5846373B2 (ja) 画像処理装置、画像処理方法、画像処理プログラム、および、画像処理システム
JP2017215706A (ja) 映像合成方法、映像取得装置、映像合成装置、映像合成システム及びコンピュータプログラム。
JP2012003520A (ja) 立体印刷物制作支援装置、プラグインプログラム、立体印刷物制作方法および立体印刷物
JP2023527438A (ja) リアルタイム深度マップを用いたジオメトリ認識拡張現実効果
JP4114385B2 (ja) 仮想3次元空間画像管理システム及び方法、並びにコンピュータ・プログラム
JPH1027268A (ja) 画像処理方法及び画像処理装置
JP2022162653A (ja) 描画装置及びプログラム
JP2003323636A (ja) 三次元モデル供給装置およびその方法、画像合成装置およびその方法、ユーザーインターフェース装置
JP2952585B1 (ja) 画像生成方法
JP2007299080A (ja) 画像生成方法及び画像生成装置
Aliaga Automatically reducing and bounding geometric complexity by using images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11826389

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11826389

Country of ref document: EP

Kind code of ref document: A1