CN109155074B - System and method for seamlessly rendering points - Google Patents


Info

Publication number
CN109155074B
CN109155074B
Authority
CN
China
Prior art keywords
point
image
pixel
channel
image space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201780032052.9A
Other languages
Chinese (zh)
Other versions
CN109155074A (en)
Inventor
Graham Fyffe
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Sony Pictures Entertainment Inc
Original Assignee
Sony Corp
Sony Pictures Entertainment Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp and Sony Pictures Entertainment Inc
Publication of CN109155074A
Application granted
Publication of CN109155074B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/40: Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G06T11/001: Texturing; Colouring; Generation of texture or colour


Abstract

Various systems and methods disclosed herein relate to rendering point-based graphics on a computing device, in which the spaces between points are filled with color to produce a seamless, hole-free surface. In one method, one or more rasterization passes are performed over an image space. One or more fill passes are then performed on the pixels in the image space, filling the gaps between pixels with color to form a continuous surface in a new image plane. After the fill passes, one or more blend passes are performed over the image space, blending the colors of groups of pixels together. A new image space is then rendered from the image space held in the image buffer.

Description

System and method for seamlessly rendering points
Copyright description
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the patent and trademark office patent files or records, but otherwise reserves all copyright rights whatsoever.
Cross reference
The present application claims the benefit of U.S. provisional patent application US62/314,334, filed March 28, 2016, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates generally to computer graphics, and more particularly to a system and method for seamlessly rendering points.
Background
In computer graphics, point-based representations have been used for model acquisition, processing, animation, and rendering. One particular challenge in point-based rendering is how to produce a continuous, hole-free surface as the point of view changes. Various point-based rendering techniques have been developed to address this problem; although they can fill holes in an image, in practice they either fall short of a complete solution or degrade image quality.
Disclosure of Invention
Various systems and methods disclosed herein are directed to rendering point-based graphics on a computing device, in which the spaces between points are filled with color to create the appearance of a seamless, hole-free surface. In one method, one or more rasterization passes are performed over an image space. Based on a first camera image plane, color attributes and color effects of the pixels in the image space are stored in an image buffer. One or more fill passes are performed on the pixels in the image space, filling the gaps between pixels with color to form a continuous surface in a new image plane. After the fill passes, one or more blend passes are performed over the image space, blending the colors of groups of pixels together. A new image space is rendered from the image space in the image buffer, where the new image plane is based on a second camera image plane that differs from the first.
Drawings
FIG. 1 is a flow chart of an exemplary method for rendering points.
FIG. 2 is a schematic diagram of one embodiment of an image buffer.
Detailed Description
Briefly, the various systems and methods disclosed herein are directed to rendering point-based graphics on a computing device, in which the spaces between points are filled with color to create the appearance of a seamless, hole-free surface. By filling these gaps, the disclosed systems and methods improve rendering performance for large numbers of points while delivering consistent results for points of varying sizes.
As shown in FIG. 1, an exemplary method for rendering point-based graphics includes the following steps: executing one or more rasterization passes (110); executing one or more fill passes (120); and executing one or more blend passes (130).
In one embodiment, the system includes a Graphics Processing Unit (GPU) running software code. During one or more of the rasterization passes 110, the software stores point attributes, including point colors and point geometric parameters, in an image buffer at locations determined by projecting the 3D point positions into the image-plane coordinates of a camera model. A GPU vertex shader and a GPU fragment shader may be used to project and store the point colors and geometric parameters. The geometric parameters may include the 2D center of the point in image-plane coordinates, the image-space radius in the case of a spherical point primitive, and additional parameters defining an ellipse in the case of an ellipsoidal point primitive.
In an alternative approach, the rasterization step may store UV texture coordinates instead of, or in addition to, color. The geometric parameters stored in the image buffer may comprise the original 3D parameters of the 3D point primitive, or parameters derived from the projection of the point primitive onto the camera image plane. In a variant embodiment, the point attributes, including the color and geometric parameters, may be stored in a linear array, with the array indices stored in the image buffer in place of the attributes themselves. In another variant embodiment, the rasterization step stores 48-bit half-float red, green, and blue values in the image buffer.
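As a rough illustration of the rasterization pass described above, the following Python sketch projects 3D points through a camera matrix and stores color, 2D center, image-space radius, and depth at the covered pixel. All names are illustrative, not from the patent; a real implementation would run in GPU vertex and fragment shaders, and the NDC-to-pixel mapping here is deliberately simplified (no y-flip, no viewport offset).

```python
import numpy as np

def rasterize_points(points, colors, radii, view_proj, width, height):
    """Sketch of a rasterization pass: project 3D points and store
    point attributes (color, 2D center, image-space radius) plus depth
    at the nearest pixel, keeping the closest point per pixel."""
    depth = np.full((height, width), np.inf)       # closest-point test
    color_buf = np.zeros((height, width, 3))
    center_buf = np.full((height, width, 2), -1.0)  # 2D point center
    radius_buf = np.zeros((height, width))          # image-space radius

    for p, c, r in zip(points, colors, radii):
        clip = view_proj @ np.append(p, 1.0)
        if clip[3] <= 0:
            continue  # behind the camera
        ndc = clip[:3] / clip[3]
        x = (ndc[0] * 0.5 + 0.5) * width   # simplified NDC-to-pixel map
        y = (ndc[1] * 0.5 + 0.5) * height
        px, py = int(x), int(y)
        if 0 <= px < width and 0 <= py < height and ndc[2] < depth[py, px]:
            depth[py, px] = ndc[2]
            color_buf[py, px] = c
            center_buf[py, px] = (x, y)
            radius_buf[py, px] = r
    return depth, color_buf, center_buf, radius_buf
```

On the GPU, the depth test and attribute writes would be handled by the depth buffer and multiple render targets rather than explicit loops.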
After the rasterization passes, the GPU executes software that performs one or more fill passes 120 in image space. In one embodiment, a GPU fragment shader is used for the fill passes. Each fill pass visits every pixel location 210 in the image buffer 220 shown in FIG. 2, computes updated geometric parameters at each pixel location, and stores the results in a second image buffer. The first and second image buffers may swap roles after each fill pass. The geometric parameters at a pixel location are updated by accessing the geometric parameters of that location and its 8 neighboring pixel locations on grid 230, each of which may contain no geometry (i.e., zero radius), the same geometric parameters as the central pixel location, or different geometric parameters. The grid spacing may be one grid point per pixel, one grid point per two pixels, or one grid point per N pixels, where N is some integer. The grid spacing N in the first fill pass is a power of 2 greater than the image-space radius of the largest point primitive expected to be rendered, and the spacing is halved after each fill pass, so that geometric parameters propagate a long distance through the image buffer in the first fill pass and progressively shorter distances in subsequent passes, until only a single-pixel distance is propagated in the last fill pass. The geometric parameters at the central pixel location and its 8 neighboring pixel locations (as shown in FIG. 2) are tested for ray intersection using a ray through the central pixel location based on the camera model. The updated geometry at the pixel location is that of the intersection closest to the camera model's position (if any), that is, the point with the smallest depth value.
The intersections may be computed approximately; for example, a spherical point primitive may be treated as a cone, a disk, or a truncated paraboloid closely approximating the sphere. Upon completion of the fill passes, each pixel location in the image buffer approximately contains the geometric parameters of the point primitive visible at that location that is closest to the camera model, that is, the one with the smallest depth value.
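A minimal sketch of one such fill pass is shown below, in the spirit of jump-flooding-style propagation. For simplicity it approximates the ray-intersection test with a flat-disk coverage test (one of the approximations the text allows) and operates on CPU arrays; the names and buffer layout are illustrative, not from the patent.

```python
import numpy as np

def fill_pass(center_buf, radius_buf, depth_buf, spacing):
    """One fill pass (sketch): each pixel examines the 3x3 grid of
    neighbors at the given spacing and adopts the geometric parameters
    of the neighbor whose point footprint covers this pixel with the
    smallest depth. Results go to separate output (ping-pong) buffers."""
    h, w = radius_buf.shape
    out_center = center_buf.copy()
    out_radius = radius_buf.copy()
    out_depth = depth_buf.copy()
    for y in range(h):
        for x in range(w):
            best_depth = depth_buf[y, x] if radius_buf[y, x] > 0 else np.inf
            for dy in (-spacing, 0, spacing):
                for dx in (-spacing, 0, spacing):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w):
                        continue
                    r = radius_buf[ny, nx]
                    if r <= 0:
                        continue  # no point stored at this neighbor
                    cx, cy = center_buf[ny, nx]
                    # Disk approximation: does the point's image-space
                    # footprint cover pixel (x, y)?
                    if (cx - x) ** 2 + (cy - y) ** 2 <= r * r:
                        if depth_buf[ny, nx] < best_depth:
                            best_depth = depth_buf[ny, nx]
                            out_center[y, x] = (cx, cy)
                            out_radius[y, x] = r
                            out_depth[y, x] = best_depth
    return out_center, out_radius, out_depth
```

Running this repeatedly with halving spacings propagates each point's parameters outward while always keeping the nearest (smallest-depth) point at each pixel.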
In an alternative approach, the fill passes may propagate indices into a linear array of point attributes rather than the attributes themselves. In another variant embodiment, the fill passes may propagate none, some, or all of the point attributes, in addition to propagating the image-buffer coordinates of the pixel location that contains the remaining, non-propagated attributes. In yet another variant embodiment, the image-buffer coordinates coincide with the point-center attribute.
In the fill passes, neighborhoods other than the regular grid of the central pixel location and its 8 neighbors may be accessed, and spacing schemes other than powers of 2 may be employed. In one variant embodiment, a larger 5×5 grid comprising the central pixel location and its 24 neighbors may be used, together with a power-of-4 spacing scheme. In another variant embodiment, the fill passes may alternate between 1×3 horizontal-grid passes and 3×1 vertical-grid passes, using power-of-2 spacings that are halved only after every two passes. Similar variants with even larger grids may also be employed.
After the last fill pass, the GPU executes the blend pass 130. In one embodiment, a GPU fragment shader is used for the blend pass. The blend pass visits every pixel location 210 in the image buffer 220. The geometric parameters stored at the pixel location during the last fill pass 120 may be accessed, and the point center (if any) may be used to look up the point color and other point attributes stored at a second pixel location in the image buffer, namely the one containing the point center. In this way, point colors and other point attributes need not be propagated through the multiple fill passes, which improves the method's efficiency. The point colors and attributes of the pixel location's 8 immediate neighbors may be accessed in the same manner. The final rendered color may be computed as a weighted blend of the up to 9 accessed colors associated with the up to 9 point centers. The weight of a color in the blend may be inversely proportional to a monotonic function of the distance from the associated point center to the center of the pixel location, such as the square of the distance plus a small constant.
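The blend pass above can be sketched as follows. For brevity this sketch assumes the color lives in the same buffer pixel where the point was rasterized (the patent's indirect lookup through the point center reduces to this in the single-buffer case); the names and the `epsilon` constant are illustrative.

```python
import numpy as np

def blend_pass(center_buf, color_buf, epsilon=0.25):
    """Sketch of the blend pass: the final color at each pixel is a
    weighted mix of the colors of the points found at the pixel and its
    8 immediate neighbors, with weight 1 / (distance^2 + epsilon),
    i.e. inversely proportional to squared distance plus a small constant."""
    h, w = center_buf.shape[:2]
    out = np.zeros((h, w, 3))
    for y in range(h):
        for x in range(w):
            total_w = 0.0
            mixed = np.zeros(3)
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w):
                        continue
                    cx, cy = center_buf[ny, nx]
                    if cx < 0:
                        continue  # no point center recorded here
                    d2 = (cx - x) ** 2 + (cy - y) ** 2
                    wgt = 1.0 / (d2 + epsilon)
                    mixed += wgt * color_buf[ny, nx]
                    total_w += wgt
            if total_w > 0:
                out[y, x] = mixed / total_w  # normalized weighted blend
    return out
```

Normalizing by the total weight keeps the result within the range of the contributing colors, so isolated points render at full color while gaps between points receive smooth mixtures.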
In an alternative approach, the blend pass may smoothly interpolate colors or UV texture coordinates between the point centers. Based on the blended UV coordinates, colors may be fetched from one or more texture maps. In a variant embodiment, color effects are applied to the colors during the blend pass rather than the rasterization pass.
In another approach, all passes run on the GPU using vertex and fragment shaders. In the rasterization pass, colors are input as 48-bit half-float red, green, and blue values, which are then processed for color effects such as gamma, brightness, contrast, or a color look-up table. The color data is then stored in the image buffer as 24-bit red, green, and blue values. The fill passes use power-of-2 grid spacings, starting from a user-specified maximum power of 2 and halving after each pass until the last pass uses a spacing of 1. In the fill passes, the only geometric parameter propagated is the 2D point center, expressed in image-buffer coordinates; other geometric parameters are accessed from the pixel location corresponding to the point center as needed. In the blend pass, the monotonic weight function is the square of the distance from the associated point center to the center of the pixel location, plus a small constant.
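The power-of-2 spacing schedule described above can be computed with a few lines of code; `fill_pass_spacings` is a hypothetical helper name, not from the patent.

```python
def fill_pass_spacings(max_radius):
    """Return the grid spacings for the fill passes: the smallest power
    of 2 strictly greater than the largest expected image-space point
    radius, halved after each pass down to 1."""
    n = 1
    while n <= max_radius:
        n *= 2  # first spacing must exceed the largest point radius
    spacings = []
    while n >= 1:
        spacings.append(n)
        n //= 2
    return spacings
```

For a maximum point radius of 5 pixels this yields spacings 8, 4, 2, 1, that is, four fill passes; doubling the maximum radius adds only one more pass, which is why the scheme scales well to large points.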
These various methods and embodiments may be implemented on an exemplary system including at least one host processor coupled to a communication bus. The system includes a main memory. The main memory may store software (control logic) and data. In one embodiment, the main memory may be in the form of Random Access Memory (RAM).
The system may also include a graphics processor and a display (e.g., a computer monitor). In an embodiment, the graphics processor includes one or more shader modules, rasterizer modules, or other modules known or developed in the art. Each of these modules may be disposed on a single semiconductor platform to form a Graphics Processing Unit (GPU).
Those skilled in the art will appreciate that a single semiconductor platform may refer to a sole unitary semiconductor-based integrated circuit or chip. The term may also refer to multi-chip modules with increased connectivity that simulate on-chip operation and offer substantial improvements over conventional central-processor-and-bus implementations. Alternatively, the various modules may be situated separately or in various combinations of semiconductor platforms. The system may also be realized with reconfigurable logic, which may include, but is not limited to, field-programmable gate arrays (FPGAs).
The system may also include a secondary memory including, for example, a hard disk drive and/or a removable storage drive representing a floppy disk drive, a magnetic tape drive, or an optical disk drive. The removable storage drive reads from and/or writes to a removable storage unit in a manner known in the art.
The computer program or computer control logic algorithm may be stored in the main memory and/or the secondary memory. The computer programs, when executed, cause the system to perform various functions. Main memory, secondary memory, volatile or non-volatile memory, and/or any other type of memory are all available examples of non-transitory computer-readable media.
In an embodiment, the architecture and/or functionality of the various systems disclosed may be implemented in the context of a host processor, a graphics processor, an integrated circuit having at least some of the capabilities of both a host processor and a graphics processor, a chipset (i.e., a group of integrated circuits designed to work and be sold as a unit for performing related functions), and/or any other integrated circuit for that purpose.
In another embodiment, the architecture and/or functionality of the various systems disclosed may be implemented in the context of a general-purpose computer system, a circuit-board system, a game-console system dedicated to entertainment, a special-purpose system, and/or any other desired system. For example, the system may take the form of a desktop computer, a portable computer, and/or other types of logic. The system may also take the form of various other devices including, but not limited to, a personal digital assistant (PDA), a mobile phone, or other similar devices.
In the various embodiments disclosed, the system may be connected to a network (e.g., a communication network, a Local Area Network (LAN), a wireless network, a Wide Area Network (WAN) such as the internet, a peer-to-peer network, or a wired network) for communication purposes.
The various embodiments described above are for illustrative purposes only and should not be construed as limiting the application. Various modifications and alterations of this application will become apparent to those skilled in the art without departing from the true spirit and scope of this application, and the application is not to be limited to the embodiments and applications illustrated and described herein.

Claims (8)

1. A method for rendering point-based graphics on a computer, comprising:
storing, in an image buffer during one or more rasterization passes, point attributes of an image space at locations determined by projecting three-dimensional point positions into the image-plane coordinates of a camera image plane;
executing one or more fill passes on the point attributes of the image space, wherein each fill pass accesses each pixel location in the image buffer, computes updated geometric parameters at each pixel location, and stores the updated geometric parameters in a second image buffer;
after performing the fill passes for each pixel location in the buffer, executing a blend pass, wherein the updated geometric parameters from each pixel of the second image buffer are applied to each pixel; and
after the fill passes and the blend pass, rendering a new image space based on the image buffer.
2. The method of claim 1, wherein the point attribute is a point color.
3. The method of claim 1, wherein the point attribute is UV texture coordinates.
4. The method of claim 1, wherein the point attribute is a geometric parameter of a point.
5. The method of claim 1, wherein the geometric parameter is a 2D center of a point in image-plane coordinates, an image-space radius in the case of a spherical point primitive, or a parameter defining an ellipse in the case of an ellipsoidal point primitive.
6. The method of claim 1, wherein calculating the updated geometric parameters comprises: accessing the geometric parameters of the 8 immediately adjacent pixels of each pixel location, and determining whether a ray intersection exists between each pixel location and any of the 8 immediately adjacent pixels.
7. The method of claim 6, wherein the blend pass further comprises blending colors from each pixel location and its 8 immediately adjacent pixels.
8. The method of claim 6, further comprising applying a color effect to the pixels of the image space stored in the image buffer.
CN201780032052.9A 2016-03-28 2017-03-28 System and method for seamlessly rendering points Active CN109155074B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201662314334P 2016-03-28 2016-03-28
US62/314,334 2016-03-28
PCT/US2017/024636 WO2017172842A1 (en) 2016-03-28 2017-03-28 System and method for rendering points without gaps

Publications (2)

Publication Number Publication Date
CN109155074A CN109155074A (en) 2019-01-04
CN109155074B true CN109155074B (en) 2023-09-08

Family

Family ID: 59897337

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780032052.9A Active CN109155074B (en) 2016-03-28 2017-03-28 System and method for seamlessly rendering points

Country Status (4)

Country Link
US (1) US10062191B2 (en)
EP (1) EP3437072B1 (en)
CN (1) CN109155074B (en)
WO (1) WO2017172842A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110533742B (en) * 2019-09-03 2021-05-11 广州视源电子科技股份有限公司 Image color filling method, device, equipment and storage medium
US20240104874A1 (en) * 2022-09-26 2024-03-28 Faro Technologies, Inc. Gap filling for three-dimensional data visualization

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5594854A (en) * 1995-03-24 1997-01-14 3Dlabs Inc. Ltd. Graphics subsystem with coarse subpixel correction
US6275622B1 (en) * 1998-06-30 2001-08-14 Canon Kabushiki Kaisha Image rotation system
CN1795468A (en) * 2003-06-26 2006-06-28 佳能株式会社 A method for tracking depths in a scanline based raster image processor
CN101719154A (en) * 2009-12-24 2010-06-02 中国科学院计算技术研究所 Grid structure-based spatial index establishing method and grid structure-based spatial index establishing system
CN101840585A (en) * 2009-03-18 2010-09-22 乐大山 Method for rendering three-dimensional object into two-dimensional image
CN102301401A (en) * 2009-01-29 2011-12-28 微软公司 Single-pass bounding box calculation

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11195132A (en) * 1997-10-31 1999-07-21 Hewlett Packard Co <Hp> Buffer for texture mapping and three-dimensional graphics processor and system therefor and method therefor and storage medium for storing processing program
EP1027682B1 (en) * 1997-10-31 2003-07-16 Hewlett-Packard Company, A Delaware Corporation Method and apparatus for rapidly rendering an image in response to three-dimensional graphics data in a data rate limited environment
AU2001239926A1 (en) 2000-02-25 2001-09-03 The Research Foundation Of State University Of New York Apparatus and method for volume processing and rendering
US8259106B2 (en) * 2002-05-15 2012-09-04 Mental Images Gmbh Low-dimensional rank-1 lattices in computer image synthesis
US7215340B2 (en) * 2002-07-19 2007-05-08 Mitsubishi Electric Research Laboratories, Inc. Object space EWA splatting of point-based 3D models
US7231084B2 (en) * 2002-09-27 2007-06-12 Motorola, Inc. Color data image acquistion and processing
KR101556593B1 (en) * 2008-07-15 2015-10-02 삼성전자주식회사 Method for Image Processing
JP5259449B2 (en) * 2009-02-17 2013-08-07 オリンパス株式会社 Image processing apparatus and method, and program
US20110216065A1 (en) * 2009-12-31 2011-09-08 Industrial Technology Research Institute Method and System for Rendering Multi-View Image
US20110285736A1 (en) * 2010-05-21 2011-11-24 Kilgard Mark J Decomposing cubic bèzier segments for tessellation-free stencil filling
US8830249B2 (en) * 2011-09-12 2014-09-09 Sony Computer Entertainment Inc. Accelerated texture lookups using texture coordinate derivatives
US20150302115A1 (en) * 2012-11-30 2015-10-22 Thomson Licensing Method and apparatus for creating 3d model
US9563962B2 (en) * 2015-05-19 2017-02-07 Personify, Inc. Methods and systems for assigning pixels distance-cost values using a flood fill technique


Also Published As

Publication number Publication date
US10062191B2 (en) 2018-08-28
EP3437072B1 (en) 2020-10-14
WO2017172842A1 (en) 2017-10-05
EP3437072A4 (en) 2019-08-14
EP3437072A1 (en) 2019-02-06
CN109155074A (en) 2019-01-04
US20170278285A1 (en) 2017-09-28

Similar Documents

Publication Publication Date Title
JP6678209B2 (en) Gradient adjustment for texture mapping to non-orthonormal grid
JP6563048B2 (en) Tilt adjustment of texture mapping for multiple rendering targets with different resolutions depending on screen position
CN101116111B (en) 2d/3d line rendering using 3d rasterization algorithms
US9633469B2 (en) Conservative rasterization of primitives using an error term
CN103946895B (en) The method for embedding in presentation and equipment based on tiling block
US8659589B2 (en) Leveraging graphics processors to optimize rendering 2-D objects
US9558586B2 (en) Method for estimating the opacity level in a scene and corresponding device
US20070229503A1 (en) Method of and system for non-uniform image enhancement
US10078911B2 (en) System, method, and computer program product for executing processes involving at least one primitive in a graphics processor, utilizing a data structure
US8854392B2 (en) Circular scratch shader
US20180232915A1 (en) Line stylization through graphics processor unit (gpu) textures
US10134171B2 (en) Graphics processing systems
KR20190080274A (en) Graphic processor performing sampling-based rendering and Operating method thereof
US11037357B2 (en) Pixelation optimized delta color compression
CN109155074B (en) System and method for seamlessly rendering points
JP6944304B2 (en) Texture processing method and its device
JPWO2014020801A1 (en) Image processing apparatus and image processing method
Xu et al. Visualization methods of vector data on a Digital Earth System
Ma et al. Rasterization of geometric primitive in graphics based on FPGA
US20180005432A1 (en) Shading Using Multiple Texture Maps
WO2012114386A1 (en) Image vectorization device, image vectorization method, and image vectorization program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20200311
Address after: Tokyo, Japan
Applicant after: Sony Corp.
Applicant after: SONY PICTURES ENTERTAINMENT Inc.
Address before: California, USA
Applicant before: NURULIZE, Inc.
GR01 Patent grant