CN111047666B - Unified digital content selection system for vector graphics and grid graphics - Google Patents
- Publication number
- CN111047666B CN111047666B CN201910684659.6A CN201910684659A CN111047666B CN 111047666 B CN111047666 B CN 111047666B CN 201910684659 A CN201910684659 A CN 201910684659A CN 111047666 B CN111047666 B CN 111047666B
- Authority
- CN
- China
- Prior art keywords
- grid
- vector
- selection representation
- digital image
- graphic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/20—Drawing from basic elements, e.g. lines or circles
- G06T11/203—Drawing of straight lines or curves
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/048—Indexing scheme relating to G06F3/048
- G06F2203/04806—Zoom, i.e. interaction techniques or interactors for controlling the zooming operation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/56—Particle system, point based geometry or rendering
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Processing Or Creating Images (AREA)
Abstract
Embodiments of the present disclosure relate to a unified digital content selection system for vector graphics and grid graphics. A unified digital image selection system for selecting and editing both vector graphics and grid graphics together is described. In one example, a first user input is received selecting a region of a digital image. In response to the first user input, a vector selection representation of at least a portion of the vector graphic included in the selected region is generated. In response to the first user input, a grid selection representation of at least a portion of the grid graphic included in the selected region is also generated. A second user input specifying a digital image editing operation is also received. In response to the second user input, both the vector selection representation and the grid selection representation are edited using the digital editing operation. The digital image is then displayed with the edited vector selection representation and the edited grid selection representation.
Description
Cross Reference to Related Applications
The present application claims priority to U.S. provisional patent application No. 62/745,121, entitled "Unified Digital Content Selection System for Vector and Raster Graphics," filed October 12, 2018, and to U.S. non-provisional patent application No. 16/379,252, entitled "Unified Selection Model for Vector and Raster Graphics," filed April 9, 2019.
Technical Field
The present disclosure relates to digital media image editing, and more particularly to a unified digital content selection system for vector graphics and grid graphics.
Background
The digital image creation application may be configured to generate a wide variety of graphical elements as part of creating the digital image, examples of which include vector graphics and grid graphics. Vector graphics, for example, may be used to support shapes that have smooth edges when rendered in a user interface, regardless of the degree of scaling applied to the shape. To this end, a vector graphic is mathematically defined (e.g., using Bezier curves) and then drawn for a particular zoom level of the user interface. A grid graphic, on the other hand, is defined using a matrix representing a substantially rectangular grid of pixels, for example as a bitmap. Grid graphics are commonly used for digital photographs and to create visual effects such as mimicking the look of pencil strokes, brush strokes, paint, and so forth.
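By way of a non-limiting illustration, the following Python sketch evaluates a cubic Bezier curve and samples it for a particular zoom level; the function names, the sample count, and the rounding to pixel coordinates are assumptions made for this example and are not taken from the described implementation.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def cubic_bezier(p0: Point, p1: Point, p2: Point, p3: Point, t: float) -> Point:
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

def rasterize_curve(p0, p1, p2, p3, zoom: float, samples: int = 256) -> List[Tuple[int, int]]:
    """Draw the mathematically defined curve for a particular zoom level.

    The curve itself is resolution independent; only this sampling step
    depends on the zoom factor, which is why vector edges stay smooth.
    """
    pixels = []
    for i in range(samples + 1):
        x, y = cubic_bezier(p0, p1, p2, p3, i / samples)
        pixels.append((round(x * zoom), round(y * zoom)))
    return pixels

# Example: the same curve rendered at 1x and 8x zoom.
curve = ((0, 0), (30, 90), (70, 90), (100, 0))
print(len(set(rasterize_curve(*curve, zoom=1.0))), "distinct pixels at 1x")
print(len(set(rasterize_curve(*curve, zoom=8.0))), "distinct pixels at 8x")
```

Because the curve is re-sampled from its mathematical definition at each zoom level, its edges remain smooth regardless of the scale applied.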
However, conventional digital image creation applications do not address vector and grid graphic editing functionality together. Instead, conventional digital image creation applications implement separate tool sets to select and edit vector and grid graphics. For vector graphics, conventional applications rely on selection tools that select and modify the underlying mathematical structure of the vector graphic (e.g., to select and move control points of a Bezier curve) and are not usable for directly selecting pixels. For grid graphics, on the other hand, conventional applications employ selection tools that directly select pixels but are not usable for vector graphics. Thus, conventional digital image creation applications require users to interact with and learn many separate tools, which is inefficient both computationally and for the user and leads to user frustration.
Disclosure of Invention
A unified digital image selection system for selecting and editing vector graphics and grid graphics together is described that overcomes the limitations of conventional techniques that rely on separate tools and applications. In one example, a first user input is received selecting a region of a digital image. In response to the first user input, a vector selection representation of at least a portion of the vector graphic included in the selected region is generated. In response to the first user input, a grid selection representation of at least a portion of the grid graphic included in the selected region is also generated.
A second user input specifying a digital image editing operation is also received. In response to the second user input, both the vector selection representation and the grid selection representation are edited using the digital editing operation. The digital image is then displayed with the edited vector selection representation and the edited grid selection representation.
This summary presents some concepts in a simplified form that are further described below in the detailed description. Accordingly, this summary is not intended to identify essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Drawings
The specific embodiments are described with reference to the accompanying drawings. The entities represented in the figures may indicate one or more entities, and thus single or plural forms of entities may be referred to interchangeably in the discussion.
FIG. 1 is an illustration of an environment in an example implementation that is operable to employ the unified digital image selection and editing techniques for vector graphics and grid graphics described herein.
FIG. 2 depicts a system in an example implementation that illustrates operation of the graphics selection module of FIG. 1 in greater detail.
FIG. 3 is a flow chart depicting a procedure in an example implementation in which a single user input selecting an area of a digital image is used as a basis for selecting and editing vector graphics and grid graphics included in the selected area.
FIG. 4 depicts an example implementation of generating a grid pattern as part of a digital image using the digital image creation application of FIG. 1.
FIG. 5 depicts an example implementation of generating vector graphics as part of a digital image using the digital image creation application of FIG. 1.
FIG. 6 depicts an example implementation of a selected region that includes a portion of the grid pattern of FIG. 4.
FIG. 7 depicts an example implementation in which a selected region of a portion of the grid pattern of FIG. 6 is deleted and another selected region includes a portion of the vector pattern.
FIG. 8 depicts an example implementation in which the selected region includes both a portion of a grid pattern and a portion of a vector pattern, and in which the visualization of the selected region corresponds to the vector pattern.
FIG. 9 depicts an example implementation in which the selected region includes both a portion of a grid pattern and a portion of a vector pattern, and in which the visualization of the selected region corresponds to the grid pattern.
FIG. 10 depicts an example implementation in which the selected region includes both a portion of a grid graphic and a portion of a vector graphic that are deleted together using a digital editing operation.
FIG. 11 depicts an example implementation in which the selected region includes a portion of a grid pattern that is moved using a digital editing operation.
FIG. 12 depicts an example implementation in which the selected region includes a portion of a vector graphic that is moved using a digital editing operation.
FIG. 13 depicts an example implementation in which selected regions are used to transform grid graphics and vector graphics.
FIG. 14 depicts an example implementation in which a selected region is used to constrain a location at which a digital editing operation is to be performed to add vector graphics and/or grid graphics to a digital image.
FIG. 15 depicts an example implementation in which the selected region is specified based on color values of the pixels.
Fig. 16 illustrates an example system including various components of an example device that can be implemented as any type of computing device as described and/or utilized with reference to fig. 1-15 to implement embodiments of the techniques described herein.
Detailed Description
Overview
Conventional digital image creation applications use separate functionality to address vector graphics and grid graphics. Vector graphics are defined mathematically, for example, using control points connected by curves to form shapes, polygons, and the like. Each of these control points is defined on an X/Y axis and is used to determine the direction of the path through the use of handles. The curves may also have defined properties including stroke color, shape, curvature, thickness, fill, and so forth. Bezier curves are an example of the type of parametric curve used to define vector graphics. For example, a Bezier curve may be used to model a smooth curve that can be scaled indefinitely. Curves may be joined together to form what is referred to as a path. Vector graphics may be found in a variety of graphics file formats, examples of which include Scalable Vector Graphics (SVG), Encapsulated PostScript (EPS), and Portable Document Format (PDF).
A grid graphic, on the other hand, is implemented as a bitmap having a lattice data structure that represents a substantially rectangular grid of pixels and corresponds bit-for-bit to the graphic displayed by the display device. A grid graphic is typically characterized by the width and height of the graphic in pixels and by the number of bits per pixel or the color depth, which determines the number of colors that can be represented. Grid (raster) graphics may be found in a variety of graphics file formats, examples of which include Joint Photographic Experts Group (JPEG), Portable Network Graphics (PNG), Animated Portable Network Graphics (APNG), Graphics Interchange Format (GIF), MPEG-4, and so forth.
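To make the structural difference concrete, the following Python sketch shows simplified, hypothetical data structures for the two kinds of graphics; the class names and fields are illustrative assumptions rather than structures drawn from the described system.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Color = Tuple[int, int, int, int]  # RGBA, 8 bits per channel

@dataclass
class ControlPoint:
    """A two-dimensional anchor with optional Bezier handles (tangents)."""
    x: float
    y: float
    handle_in: Optional[Tuple[float, float]] = None   # incoming tangent
    handle_out: Optional[Tuple[float, float]] = None  # outgoing tangent

@dataclass
class VectorPath:
    """A vector graphic: control points joined by curves plus drawing attributes."""
    points: List[ControlPoint] = field(default_factory=list)
    stroke_color: Color = (0, 0, 0, 255)
    stroke_width: float = 1.0
    fill_color: Optional[Color] = None
    closed: bool = False

@dataclass
class RasterGraphic:
    """A grid (raster) graphic: a rectangular grid of pixels stored row-major."""
    width: int
    height: int
    pixels: List[Color] = field(default_factory=list)  # len == width * height

    def pixel(self, x: int, y: int) -> Color:
        return self.pixels[y * self.width + x]
```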
It is therefore apparent that the underlying structure of a vector graphic is quite different from that of a grid graphic. In view of this, conventional digital image creation applications support different functionality for selecting and editing vector graphics than for grid graphics. For example, for vector graphics, conventional applications rely on path selection tools that select and modify the underlying mathematical structure of the vector graphic (e.g., select and move control points of a Bezier curve) and are not usable for directly selecting pixels. For grid graphics, on the other hand, conventional applications employ direct selection tools that select pixels directly but are not usable for vector graphics. Thus, conventional digital image creation applications require users to interact with and learn many separate tools, which is inefficient both computationally and for the user and leads to user frustration.
Accordingly, techniques and systems are described that support unified selection and editing of vector and grid graphics by a digital image creation application. In one example, the digital image creation application receives a user selection of a representation of a graphical selection tool, e.g., a "lasso" to specify a region within a drawn boundary, a "magic wand" to select adjacent pixels having colors within a threshold amount of a selected pixel, a bounding box, and so forth. The digital image creation application also receives another user input to select a region within a digital image displayed by a display device in a user interface, e.g., a region less than the entire digital image.
In response to selection of the region, the digital image creation application generates a vector selection representation and a separate grid selection representation. For example, a vector selection module may be employed by the digital image creation system to generate and store (e.g., in a computer-readable storage medium) a vector selection representation of a portion of any vector graphic included within the selected region. The vector selection representation may be formed to include the control points and curves of the vector graphic that are included within the selected region, as well as the defined attributes of those curves such as stroke color, shape, curvature, thickness, fill, and so forth.
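The following Python sketch illustrates, under simplifying assumptions, one way control points falling within a selected region could be collected; the ray-casting test and the helper names are assumptions for this example, and clipping of curve segments that cross the boundary is intentionally omitted.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def point_in_polygon(p: Point, polygon: List[Point]) -> bool:
    """Ray-casting test: is point p inside the selection boundary?"""
    x, y = p
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def vector_selection(path_points: List[Point], selection: List[Point]) -> List[Point]:
    """Collect the control points of a path that fall inside the selected region.

    A full implementation would also clip curve segments that cross the
    boundary and fit new end points; that step is omitted from this sketch.
    """
    return [p for p in path_points if point_in_polygon(p, selection)]
```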
A grid selection module is also employed to generate a grid selection representation that includes the pixels of a grid graphic included in the selected region. The grid selection module may, for example, generate a grayscale mask in which white pixels indicate pixels that are included in their entirety as part of the selected region of the digital image, black pixels indicate pixels that are not included in the selected region, and gray pixels indicate the proportion of the corresponding pixel (i.e., of its color value) that is included in the selected region, in order to support blended regions.
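A minimal sketch of such a grayscale mask is shown below, assuming a polygonal (lasso) boundary and simple supersampling to approximate partial pixel coverage; the function names and the supersampling factor are assumptions made for illustration.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def _inside(p: Point, poly: List[Point]) -> bool:
    # Ray-casting point-in-polygon test (same idea as in the vector sketch above).
    x, y = p
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

def selection_mask(width: int, height: int, poly: List[Point], ss: int = 4) -> List[List[int]]:
    """Build a grayscale selection mask for a lasso polygon.

    255 (white): the pixel is entirely inside the selected region.
    0   (black): the pixel is entirely outside.
    Gray levels: partial coverage along the boundary, which is what supports
    blended (soft-edged) selections.
    """
    mask = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            hits = 0
            for sy in range(ss):
                for sx in range(ss):
                    if _inside((x + (sx + 0.5) / ss, y + (sy + 0.5) / ss), poly):
                        hits += 1
            mask[y][x] = round(255 * hits / (ss * ss))
    return mask
```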
In this way, the digital image creation application responds to a single user input made with a single selection tool and maintains separate representations of the graphics included in the selected region, which may then be edited using a digital editing operation. For example, the digital image creation application may support moving or deleting both the vector selection representation and the grid selection representation within the digital image together using a single operation. In another example, the digital image creation application also supports masking operations, in which the selected region is used to contain the effects of a digital image operation within the region, for example when drawing vector and grid graphics.
In yet another example, the digital image creation application also supports transformation of the vector and grid selection representations, e.g., to change color, size, rotation, and so forth. For example, a user input may be received to increase the scale of the selected region. In response, the digital image creation application scales the portion of the vector graphic within the selected region using its underlying mathematical representation, and upsamples the portion of the grid graphic included in the grid selection representation. In this way, by using a unified selection structure that maintains the underlying functionality of both vector and grid graphics, the digital image creation application can synchronize digital editing operations across the two representations as needed, which is not possible with conventional techniques.
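The following sketch illustrates, under stated assumptions, how a single scale operation could be applied to both representations in a synchronized manner: the vector selection is scaled mathematically while the grid selection is upsampled (nearest neighbour here, chosen only for brevity); the function names are illustrative only.

```python
from typing import List, Tuple

def scale_vector_points(points: List[Tuple[float, float]], factor: float) -> List[Tuple[float, float]]:
    """Scale the vector selection: control points are adjusted mathematically,
    so the result stays resolution independent."""
    return [(x * factor, y * factor) for (x, y) in points]

def upsample_raster(pixels: List[List[int]], factor: int) -> List[List[int]]:
    """Scale the grid selection: pixels are upsampled (nearest neighbour here),
    since a bitmap has no underlying mathematical structure to rescale."""
    out = []
    for row in pixels:
        new_row = [v for v in row for _ in range(factor)]
        out.extend([new_row[:] for _ in range(factor)])
    return out

def apply_scale(vector_sel, raster_sel, factor: int):
    """Apply one digital editing operation to both selection representations
    so the two stay synchronized."""
    return scale_vector_points(vector_sel, factor), upsample_raster(raster_sel, factor)

# Example: scale both representations of a selected region by 2x.
vec, ras = apply_scale([(1.0, 1.0), (4.0, 2.0)], [[10, 20], [30, 40]], 2)
```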
In one implementation, the digital image creation application supports a grid-based preview of vector graphics to support real-time performance before committing to the vector representation. For example, input may be received to draw a vector graphic that is initially displayed as a grid graphic until the stroke is completed, at which point it is converted to control points of the vector graphic. This supports real-time output of the graphic and improves computational efficiency, especially when multiple curves and paths are combined as part of a union or subtraction operation. Further discussion of these and other examples is included in the following sections and shown in the corresponding figures.
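A minimal sketch of such a raster-first preview is shown below; the class name, the decimation used in place of a true curve fit, and the commit step are assumptions made for this example.

```python
from typing import List, Tuple

Point = Tuple[float, float]

class StrokePreview:
    """Raster-first preview of a vector stroke.

    While the stroke is in progress only raw pixel positions are recorded and
    painted, which keeps drawing responsive; when the stroke is committed the
    samples are converted to control points. Real curve fitting (e.g., a
    least-squares Bezier fit) is replaced here by simple decimation.
    """

    def __init__(self):
        self.preview_pixels: List[Tuple[int, int]] = []
        self.samples: List[Point] = []

    def add_sample(self, x: float, y: float) -> None:
        self.samples.append((x, y))
        self.preview_pixels.append((round(x), round(y)))  # shown immediately

    def commit(self, keep_every: int = 8) -> List[Point]:
        """End of stroke: replace the pixel preview with fitted control points."""
        control_points = self.samples[::keep_every]
        if self.samples and control_points[-1] != self.samples[-1]:
            control_points.append(self.samples[-1])
        self.preview_pixels.clear()
        return control_points
```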
In the following discussion, an example environment is first described in which the techniques described herein may be employed. Example processes are then described that may be performed in the example environment, as well as other environments. Accordingly, execution of the example process is not limited to the example environment, and the example environment is not limited to execution of the example process.
Example Environment
FIG. 1 is an illustration of a digital media environment 100 in an example implementation that is operable to employ techniques described herein. The illustrated environment 100 includes a computing device 102 that may be configured in various ways.
For example, computing device 102 may be configured as a desktop computer, a laptop computer, a mobile device (e.g., assuming a handheld configuration such as a tablet computer or mobile phone as illustrated), and so forth. Thus, computing device 102 may range from a full resource device (e.g., personal computer, game console) with substantial memory and processor resources to a low resource device (e.g., mobile device) with limited memory and/or processing resources. Additionally, while a single computing device 102 is shown, the computing device 102 may represent a plurality of different devices, such as a plurality of servers utilized by an enterprise to perform operations "on the cloud," as described in fig. 16.
Computing device 102 is illustrated as including a digital image creation application 104. Digital image creation application 104 is at least partially implemented in hardware of computing device 102 to process and transform digital image 106, digital image 106 being illustrated as maintained in storage 108 of computing device 102. Such processing includes creating the digital image 106, modifying the digital image 106, and rendering the digital image 106 in the user interface 110 for output by, for example, the display device 112. Although illustrated as being implemented locally at computing device 102, the functionality of digital image creation application 104 may also be implemented in whole or in part via functionality available via the network 114, such as part of a web service or "in the cloud."
An example of functionality incorporated by digital image creation application 104 to process image 106 is illustrated as graphics selection module 116. Graphics selection module 116 supports unified digital image selection and editing of vector graphics 118 and grid graphics 120 included within digital image 106 through the use of respective vector selection module 122 and grid selection module 124.
Vector graphic 118 is mathematically defined using two-dimensional points (e.g., control points) connected by curves to form shapes, polygons, and the like. Each of these control points is defined on an X/Y axis and is used to determine the direction of the path through the use of handles. The curves may also have defined properties including stroke color, shape, curvature, thickness, fill, and so forth. An example vector graphic 126 is illustrated as being displayed by the display device 112 in the user interface 110.
Grid graphic 120, on the other hand, is implemented as a bitmap having a lattice data structure that represents a substantially rectangular grid of pixels. Grid graphics are typically used to capture photographs with a digital camera, and an example grid graphic 128 is illustrated as being displayed by the display device 112 in the user interface 110. Grid graphic 120 is typically characterized by the width and height of the graphic in pixels and by the number of bits per pixel or the color depth, which determines the number of colors that can be represented.
Due to the differences between vector graphic 118 and grid graphic 120, conventional digital image creation applications that support both vector and grid graphic editing do not support the two types of graphics equally and do not support a single selection model. Instead, conventional applications force users to edit vector and grid graphics using separate selection and drawing tools. In one conventional example in which the application is pixel based (e.g., for editing digital photographs), a selection tool selects pixels on a grid layer, whereas a path selection tool is used to select control points of the Bezier curves of a vector graphic. In this conventional example, the grid selection cannot be used on the vector graphic, and the operations available for editing the grid selection differ from those available for editing the vector selection. In another conventional example, an application utilizes vector graphics to generate artwork such as logos, clip art, and the like. In this conventional example, the application does not support selection and editing commands at the pixel level, and thus forces the user to switch to a different application to do so. Conventional digital image creation applications are therefore disjointed and inefficient: computationally inefficient because multiple applications and tools must be used, and inefficient for users, who can become frustrated when required to learn and operate these separate tools.
Thus, in the techniques described herein, graphics selection module 116 supports a unified digital image selection system that supports selecting and editing vector graphics 118 and grid graphics 120 together. For example, a user input may be received by the graphics selection module 116 that selects a representation of the graphic selection tool 130, e.g., a "lasso" in the illustrated example. Another user input is then received to select a region within the digital image, which causes vector selection module 122 and grid selection module 124 to generate separate representations that are maintained in memory of the computing device 102. The selected region may be visualized in a variety of ways, including "marching ants" (e.g., a moving dashed boundary), a colored overlay, an "onion skin," and so forth.
The representation of vector graphic 118 maintains the underlying mathematical structure and thus has infinite resolution, whereas the representation of the grid graphic includes the pixels from the selected region and thus has the same resolution as the underlying graphic. Digital editing operations implemented by editing the selected region using tools or commands are applied to the two representations in parallel by the graphics selection module 116, and thus editing of both types of graphics is synchronized. The selected region may also act as a mask to support rendering within the region, with the grid graphic masked using a blend mode and the vector graphic masked using a plan view. Graphics selection module 116 may also support functionality to convert vector graphics 118 into grid graphics 120 via rasterization, or to convert grid graphics 120 into vector graphics 118 via curve fitting. Further discussion of these and other examples is included in the following section.
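The vector-to-grid conversion direction may be illustrated with the following hedged sketch, which rasterizes a closed vector path into a bitmap; the reverse (grid-to-vector) direction via curve fitting is not shown, and the function names are assumptions made for this example.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def rasterize_polygon(poly: List[Point], width: int, height: int) -> List[List[int]]:
    """Convert a closed vector path to a grid graphic (1 = filled, 0 = empty).

    This is the vector-to-grid direction; the reverse conversion would trace
    the bitmap boundary and fit curves to it, which is not shown here.
    """
    def inside(x: float, y: float) -> bool:
        hit = False
        for i in range(len(poly)):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % len(poly)]
            if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                hit = not hit
        return hit

    return [[1 if inside(x + 0.5, y + 0.5) else 0 for x in range(width)]
            for y in range(height)]

# Example: rasterize a triangle into an 8x8 bitmap.
bitmap = rasterize_polygon([(1, 1), (7, 1), (4, 7)], 8, 8)
```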
In general, the functionality, features, and concepts described with respect to the examples above and below may be employed in the context of the example processes described below. Furthermore, the functionality, features, and concepts described with respect to different figures and examples in this document may be interchanged with one another and are not limited to implementation in the context of a particular figure or process. Furthermore, blocks associated with different representations and corresponding figures herein may be applied and/or combined together in different ways. Thus, the various functionalities, features, and concepts described herein with respect to the different example environments, devices, components, figures, and processes may be used in any suitable combination, and are not limited to the specific combinations represented by the examples listed in this specification.
Unified selection and editing of vector graphics and grid graphics
FIG. 2 depicts a system in an example implementation that illustrates operation of the graphics selection module of FIG. 1 in greater detail. FIG. 3 depicts a procedure 300 in an example implementation in which a single user input selecting a region of a digital image is used as a basis for selecting and editing vector and/or grid graphics included in the selected region.
The following discussion describes techniques that may be implemented utilizing the previously described systems and devices. Aspects of each of the procedures may be implemented in hardware, firmware, software, or a combination thereof. These processes are illustrated as a set of blocks that specify operations performed by one or more devices and are not necessarily limited to the orders shown to perform the operations by the respective blocks. In the sections discussed below, reference will be made to fig. 1-15.
The example begins by creating a digital image 106 by digital image creation application 104, the digital image 106 including vector graphics 118 and grid graphics 120. As shown in the example implementation 400 of FIG. 4, user input is received by the digital image creation application 104 that selects a representation of the grid drawing tool 402 and then draws the grid graphic 120 (e.g., by gesture or via a cursor control device), the grid graphic 120 being maintained in a dedicated grid layer 404 in the digital image 106.
As shown in the example implementation 500 of FIG. 5, the digital image creation application 104 also receives user input selecting a representation of the vector drawing tool 502, and then draws the vector graphic 118 (e.g., by gesture or via a cursor control device), the vector graphic 118 being maintained in a dedicated vector layer 504 in the digital image 106. Thus, at this point digital image 106 includes vector graphics 118 and grid graphics 120 within a single digital image 106.
To initiate the selection, a user input is received selecting a representation of the graphic selection tool 130, e.g., the "lasso" in the example 600 illustrated in FIG. 6. The selection detection module 202 then receives the first user input 204 selecting the region 206 of the digital image 106 (block 302). In the illustrated lasso example, for instance, a free-form line is used to define the outer boundary of the selected region 206. Once the input is completed, the selection detection module 202 is configured to indicate the selected region, e.g., using "marching ants" (a moving dashed border), a colored overlay, an "onion skin," and so forth, as illustrated. Similar functionality may also be used to define a bounding box, for example through a click-and-drag operation performed using a cursor control device or a gesture. Other examples are also contemplated, including color-based selection using a "magic wand" and intelligent object selection as described further below with respect to FIG. 15, as well as using machine learning to detect objects or object edges. In this manner, the selected region 206 may include the portion of the digital image 106 to which digital editing operations are to be applied.
As previously described, graphics selection module 116 supports unified selection of vector graphics 118 and grid graphics 120, such that a single tool may be used to select and edit vector graphics 118, grid graphics 120, or both. Thus, in FIG. 6, the selected region 206 includes a portion of the grid graphic 120, which is then deleted by a digital editing operation as shown in the example 700 of FIG. 7. As also shown in the example 700 of FIG. 7, another selected region 206 includes a portion of the vector graphic 118. Thus, the graphic selection tool 130 may be used for vector graphics 118 or grid graphics 120.
The graphic selection tool 130 may also be used to select vector and grid graphics within the selected region 206 simultaneously. As shown in the example 800 of FIG. 8, for instance, the first user input 204 defines a selected region 206 that includes at least a portion of the vector graphic 118 and at least a portion of the grid graphic 120. In FIG. 8, the user interface 110 depicts the visualization of the selected region 206 as smooth, in accordance with the vector graphic, whereas in the example 900 of FIG. 9 the visualization of the selected region 206 follows the grid of pixels.
Returning again to FIG. 2, in response to first user input 204, vector selection representation 208 of at least a portion of vector graphic 118 included in selected region 206 is generated by vector selection module 122 (block 304). For example, vector selection module 122 may identify control points and curves of vector graphics 118 that are included within selected region 206. The curve fit may be used to recreate the portion of the curve that was "cut" at the edge of the selected region 206.
Vector selection representation 208 also includes the defined properties of the vector graphic 118 within the selected region 206, including stroke color, shape, curvature, thickness, fill, and so forth. In this way, vector selection representation 208 maintains the underlying mathematical structure, and is therefore infinitely scalable and maintains the functionality of the vector graphic 118, e.g., to change control points, curvature, defined properties, and so forth.
In response to the first user input, a grid selection representation 210 of at least a portion of the grid graphic 120 included in the selected region is also generated by the grid selection module 124 (block 306). For example, the grid selection representation 210 may include the pixels from the bitmap that lie within the boundaries of the selected region 206. In one example, grid selection representation 210 is generated as a grayscale mask, in which white pixels indicate pixels that are included in their entirety as part of the selected region of the digital image, black pixels indicate pixels that are not included in the selected region, and gray pixels indicate the proportion of the corresponding pixel (i.e., of its color value) that is included in the selected region 206, in order to support blended regions.
The vector selection representation 208 and the grid selection representation 210 are stored to a computer-readable storage medium (e.g., memory) of the computing device 102 as a basis for performing digital editing operations. For example, the digital image editing module 212 may receive a second user input 214 specifying a digital image editing operation (block 308). In response to the second user input 214, both the vector selection representation 208 and the grid selection representation 210 are edited together using a digital editing operation (block 310), thereby generating an edited vector selection representation 216 and an edited grid selection representation 218. The digital image 106 is then displayed with the edited vector selection representation 216 and the edited grid selection representation 218 (block 312).
Various different types of digital editing operations utilizing the selected regions may be performed, examples of which are represented by digital content transformation module 220, digital content masking module 222, and digital content preview module 224. The digital content transformation module 220 may support various transformations. As shown in the example implementation 1000 of fig. 10, for example, in a first stage 1002, the selected region 206 includes portions of the vector graphic 118 and the grid graphic 120 that are deleted together in a single operation as shown in a second stage 1004.
In the example implementation 1100 of fig. 11, in a first stage 1102, the selected region 206 includes a portion of the grid pattern 120. In the second stage 1104, the selected region 206 is moved within the digital image, for example, via a gesture, a click and drag operation, or the like. In the example implementation 1200 of fig. 12, in a first stage 1202, the selected region 206 includes a portion of the vector graphic 118. In the second stage 1204, the selected region 206 is moved within the digital image.
In the example implementation 1300 of FIG. 13, resizing and rotation transformations are shown as being applied to the selected region 206. In a first stage 1302, the digital image includes vector graphic 118 and grid graphic 120. In a second stage 1304, the selected region 206 includes a portion of the grid graphic 120 that is resized, in the illustrated example, by upsampling the pixels included in the grid selection representation 210 that correspond to the selected region. In a third stage 1306, the selected region 206 includes a portion of the vector graphic 118. The portion of the vector graphic 118 is resized (e.g., by proportionally scaling up the spacing of the control points and curves) based on the mathematical structure included in the vector selection representation 208 corresponding to the selected region. In this example, the vector selection representation 208 is also rotated relative to the digital image 106. Scaling down the selected region 206 may be performed in a similar manner by downsampling and proportionally reducing the selected region 206. Other transformations that may be implemented by the digital content transformation module 220, such as re-coloring, are also contemplated.
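The following sketch illustrates the asymmetry between the two representations for a rotation transformation, under the assumption of nearest-neighbour resampling for the grid case; the function names are illustrative only.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def rotate_vector_points(points: List[Point], angle_deg: float, center: Point) -> List[Point]:
    """Rotate the vector selection exactly by transforming its control points."""
    a = math.radians(angle_deg)
    cx, cy = center
    out = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        out.append((cx + dx * math.cos(a) - dy * math.sin(a),
                    cy + dx * math.sin(a) + dy * math.cos(a)))
    return out

def rotate_raster(pixels: List[List[int]], angle_deg: float) -> List[List[int]]:
    """Rotate the grid selection by resampling: each destination pixel is mapped
    back into the source grid (nearest neighbour), so some detail may be lost."""
    a = math.radians(-angle_deg)  # inverse mapping
    h, w = len(pixels), len(pixels[0])
    cx, cy = (w - 1) / 2, (h - 1) / 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx = round(cx + (x - cx) * math.cos(a) - (y - cy) * math.sin(a))
            sy = round(cy + (x - cx) * math.sin(a) + (y - cy) * math.cos(a))
            if 0 <= sx < w and 0 <= sy < h:
                out[y][x] = pixels[sy][sx]
    return out
```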
FIG. 14 depicts an example implementation 1400 in which the selected region 206 is used to constrain the locations at which a digital editing operation is performed to add vector and/or grid graphics to the digital image 106. In this example, the digital content masking module 222 receives the first user input 204 defining the selected region 206 as previously described, which may be performed using one or more strokes to form a union, intersection, etc. of the selected region 206.
The second user input 214 is then used to define the vector graphic 118 or the grid graphic 120 included within the selected region 206, which is "masked" outside of the region within the digital image 106. In this manner, a user may freely draw within user interface 110 on digital image 106, wherein input that appears within selected region 206 is added to digital image 106, and wherein input that appears outside selected region 206 is not added.
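A minimal sketch of such mask-constrained drawing is shown below, assuming the grayscale selection mask described earlier; the helper names and the blend formula are assumptions made for this example.

```python
from typing import List, Tuple

def masked_paint(canvas: List[List[int]], mask: List[List[int]],
                 stroke: List[Tuple[int, int]], value: int = 255) -> None:
    """Apply a raster stroke only where the selection mask allows it.

    Pixels outside the selected region (mask == 0) are left untouched; pixels
    on a soft boundary (0 < mask < 255) receive a proportional blend.
    """
    for x, y in stroke:
        if 0 <= y < len(canvas) and 0 <= x < len(canvas[0]):
            m = mask[y][x] / 255.0
            canvas[y][x] = round(canvas[y][x] * (1.0 - m) + value * m)

# Example: a stroke crossing the mask boundary only appears inside the region.
canvas = [[0] * 4 for _ in range(2)]
mask = [[255, 255, 0, 0], [255, 128, 0, 0]]
masked_paint(canvas, mask, [(0, 0), (1, 1), (3, 1)])
```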
Fig. 15 depicts an example implementation 1500 in which the selected region 206 is specified based on color values of pixels. In the previous examples, the selected region 206 is formed by drawing a boundary (e.g., a free-form line, a bounding box, etc.). In some instances, however, it may be difficult to manually select complex geometries using these tools. In the first stage 1502 of FIG. 15, for example, a designer may wish to select a tiger's nose, but manually drawing the boundary may be difficult to perform accurately. Thus, the selection may also be configured as a "magic wand," in which the first user input 204 selects a pixel in the digital image 106. Pixels having color values within a threshold amount, which may be user selectable, are included in the selected region 206 by the selection detection module 202. A pixel-based selected region 206 (e.g., a bitmap) may also be used to select portions of vector graphics 118, which may be performed based on color, by curve fitting based on the boundaries of the selected region 206, and so forth. For example, an "intelligent selection" tool may be used to detect edges so that a user may draw loosely and still select along the detected edge. Further, a machine-learning-based selection may be used to select an object of the image. As in the magic wand example above, pixel-based selections from the intelligent selection tool or object selection tool are transformed into equivalent vector representations. In this manner, the graphics selection module 116 may support a variety of techniques to specify the selected region 206.
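The following sketch shows one conventional way a "magic wand" style selection could be computed, assuming a simple flood fill with a Euclidean color distance; the function names and the tolerance metric are assumptions and do not reflect the described implementation.

```python
from collections import deque
from typing import List, Set, Tuple

Color = Tuple[int, int, int]

def magic_wand(pixels: List[List[Color]], seed: Tuple[int, int], tolerance: float) -> Set[Tuple[int, int]]:
    """Flood-fill selection: grow outward from the seed pixel, keeping adjacent
    pixels whose color is within the tolerance of the seed color."""
    h, w = len(pixels), len(pixels[0])
    sx, sy = seed
    target = pixels[sy][sx]

    def close(c: Color) -> bool:
        return sum((a - b) ** 2 for a, b in zip(c, target)) ** 0.5 <= tolerance

    selected = {(sx, sy)}
    queue = deque([(sx, sy)])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in selected and close(pixels[ny][nx]):
                selected.add((nx, ny))
                queue.append((nx, ny))
    return selected
```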
Example systems and apparatus
Fig. 16 illustrates an example system, generally at 1600, that includes an example computing device 1602, the example computing device 1602 representing one or more computing systems and/or devices in which the various techniques described herein can be implemented. This is illustrated through the inclusion of the graphics selection module 116. Computing device 1602 can be, for example, a server of a service provider, a device associated with a client (e.g., a client device), a system-on-chip, and/or any other suitable computing device or computing system.
The example computing device 1602 as illustrated includes a processing system 1604, one or more computer-readable media 1606, and one or more I/O interfaces 1608 communicatively coupled to each other. Although not shown, the computing device 1602 may also include a system bus or other data and command transfer system that couples the various components to one another. The system bus may include any one or combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus that utilizes any of a variety of bus architectures. Various other examples are also contemplated, such as control and data lines.
The processing system 1604 represents functionality that performs one or more operations using hardware. Thus, the processing system 1604 is illustrated as including hardware elements 1610 that can be configured as processors, functional blocks, and the like. This may include implementation in hardware as application specific integrated circuits or other logic devices formed using one or more semiconductors. The hardware elements 1610 are not limited by the materials from which they are formed or the processing mechanisms employed therein. For example, the processor may include semiconductor(s) and/or transistors (e.g., electronic Integrated Circuits (ICs)). In such a context, the processor-executable instructions may be electronically-executable instructions.
The computer-readable storage medium 1606 is illustrated as including memory/storage 1612. Memory/storage 1612 represents memory/storage capacity associated with one or more computer-readable media. Memory/storage component 1612 may include volatile media (such as Random Access Memory (RAM)) and/or nonvolatile media (such as Read Only Memory (ROM), flash memory, optical disks, magnetic disks, and so forth). The memory/storage component 1612 may include fixed media (e.g., RAM, ROM, a fixed hard drive, etc.) and removable media (e.g., a flash memory, a removable hard drive, an optical disk, and so forth). The computer-readable medium 1606 may be configured in a variety of other ways as described further below.
Input/output interface(s) 1608 represent functionality that allows a user to input commands and information to computing device 1602, and that also allows information to be presented to the user and/or other components or devices using various input/output devices. Examples of input devices include a keyboard, a cursor control device (e.g., a mouse), a microphone, a scanner, touch functionality (e.g., a capacitance or other sensor configured to detect physical touches), a camera (e.g., which may employ visible or invisible wavelengths (such as infrared frequencies) to recognize movements from gestures that do not involve touches), and so forth. Examples of output devices include a display device (e.g., a monitor or projector), speakers, a printer, a network card, a haptic response device, and so forth. Accordingly, the computing device 1602 may be configured in a variety of ways as described further below to support user interaction.
Various techniques may be described herein in the general context of software, hardware elements, or program modules. Generally, such modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The terms "module," "functionality," and "component" as used herein generally represent software, firmware, hardware, or a combination thereof. The features of the techniques described herein are platform-independent, meaning that the techniques may be implemented on a variety of commercial computing platforms having a variety of processors.
An implementation of the described modules and techniques may be stored on or transmitted across some form of computer readable media. Computer readable media can include a variety of media that can be accessed by the computing device 1602. By way of example, and not limitation, computer readable media may comprise "computer readable storage media" and "computer readable signal media".
"Computer-readable storage medium" may refer to media and/or devices that can store information permanently and/or non-transitory as compared to mere signal transmission, carrier waves, or signals themselves. Thus, computer-readable storage media refers to non-signal bearing media. Computer-readable storage media include hardware, such as volatile and nonvolatile, removable and non-removable media, and/or storage devices implemented in methods or techniques suitable for storage of information, such as computer-readable instructions, data structures, program modules, logic elements/circuits, or other data. Examples of a computer-readable storage medium may include, but are not limited to RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical storage, hard disk, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other storage devices, tangible media, or articles of manufacture adapted to store the desired information and which may be accessed by a computer.
"Computer-readable signal medium" may refer to a signal bearing medium configured to transmit instructions to hardware of computing device 1602, for example, via a network. Signal media may generally embody computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave, data signal, or other transport mechanism. Signal media also include any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media.
As previously described, the hardware elements 1610 and computer-readable media 1606 represent modules, programmable device logic, and/or fixed device logic implemented in hardware that may be employed in some embodiments to implement at least some aspects of the techniques described herein, such as executing one or more instructions. The hardware may include components of an integrated circuit or system-on-a-chip, application Specific Integrated Circuits (ASICs), field Programmable Gate Arrays (FPGAs), complex Programmable Logic Devices (CPLDs), and other implementations in silicon or other hardware. In this context, the hardware may operate as a processing device executing program tasks defined by instructions and/or logic embodied by the hardware, as well as hardware for storing executing instructions, such as the previously described computer-readable storage medium.
Combinations of the foregoing may also be employed to implement the various techniques described herein. Thus, software, hardware, or executable modules may be implemented as one or more instructions and/or logic embodied on some form of computer readable storage medium and/or by one or more hardware elements 1610. The computing device 1602 may be configured to implement specific instructions and/or functions corresponding to software and/or hardware modules. Thus, implementations of modules executable by the computing device 1602 as software may be implemented at least partially in hardware, such as by using computer-readable storage media of the processing system 1604 and/or hardware elements 1610. The instructions/functions may be executable/operable by one or more articles of manufacture (e.g., one or more computing devices 1602 and/or processing systems 1604) to implement the techniques, modules, and examples described herein.
The techniques described herein may be supported by various configurations of the computing device 1602 and are not limited to the specific examples of techniques described herein. The functionality may also be implemented in whole or in part through the use of a distributed system, such as on a "cloud" 1614 via a platform 1616 as described below.
The cloud 1614 includes and/or represents a platform 1616 for resources 1618. The platform 1616 abstracts underlying functionality of hardware (e.g., servers) and software resources of the cloud 1614. The resources 1618 may include data and/or applications that may be utilized when executing computer processes on servers remote from the computing device 1602. The resources 1618 may also include services provided over the internet and/or over subscriber networks such as cellular or Wi-Fi networks.
The platform 1616 may abstract resources and functions to connect the computing device 1602 with other computing devices. The platform 1616 may also be used to abstract scaling of resources to provide a corresponding level of scaling to meet the demands on the resources 1618 implemented via the platform 1616. Thus, in an interconnected device embodiment, an implementation of the functionality described herein may be distributed throughout the system 1600. For example, the functionality may be implemented in part on the computing device 1602 and via a platform 1616 that abstracts the functionality of the cloud 1614.
Conclusion
Although the invention has been described in language specific to structural features and/or methodological acts, it is to be understood that the invention defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing the claimed invention.
Claims (20)
1. A method implemented by a computing device in a digital media image editing environment, the method comprising:
Displaying, by the computing device, a digital image in a user interface comprising vector graphics and grid graphics;
Detecting, by the computing device, a single user input as selecting a region of the digital image, the region including both a portion of the vector graphic and a portion of the grid graphic;
generating, by the computing device, a vector selection representation of the portion of the vector graphic included in the selected region in response to detecting the single user input;
generating, by the computing device, a grid selection representation of the portion of the grid graphic included in the selected region in response to detecting the single user input;
Receiving, by the computing device, a second user input specifying a digital editing operation;
displaying, by the computing device, in real-time, a grid-based preview of the digital editing operation as applied to the portion of the vector graphic in the selected region of the digital image as the second user input is received;
editing, by the computing device, both the vector selection representation and the grid selection representation using the digital editing operation in response to the second user input, the vector selection representation being edited during display of the grid-based preview;
Replacing, by the computing device, the grid-based preview in the digital image with the edited vector selection representation; and
The digital image is displayed by the computing device, the digital image having the edited vector selection representation and the edited grid selection representation.
2. The method of claim 1, wherein the vector selection representation defines the portion of the vector graphic using two-dimensional points connected by at least one curve.
3. The method of claim 2, wherein the at least one curve is a bezier curve and the two-dimensional point is a control point of the bezier curve.
4. The method of claim 1, wherein the grid selection representation is masked using a gray scale grid mask and the vector selection representation is masked using a plan view.
5. The method of claim 1, wherein the grid selection representation defines the portion of the grid graphic as pixels using a bitmap.
6. The method of claim 1, wherein the edited vector selection representation is mathematically defined using two dimensional points connected by a curve and the edited grid selection representation is defined using a bitmap.
7. The method of claim 1, wherein the digital editing operation causes a change to an underlying mathematical structure of the vector selection representation and to pixels of the grid selection representation.
8. The method of claim 1, wherein the digital editing operation comprises moving the portion of the vector graphic and the portion of the grid graphic within the digital image.
9. The method of claim 1, wherein the digital editing operation comprises a masking operation based on the portion of the vector graphic and the portion of the grid graphic within the digital image.
10. The method of claim 1, wherein the digital editing operation comprises a transformation operation based on the portion of the vector graphic and the portion of the grid graphic within the digital image.
11. The method of claim 10, wherein the transformation operation comprises resizing, rotating, or coloring.
12. The method of claim 1, wherein displaying the grid-based preview comprises initially displaying the vector selection representation as initial pixels, the digital editing operation being applied to the portion of the vector graphic in real-time as the second user input is received, and wherein display of the edited vector selection representation replaces the initially displayed pixels of the grid-based preview of the vector selection representation.
13. A system in a digital media image editing environment, comprising:
means for displaying a digital image in a user interface comprising a vector graphic and a grid graphic;
means for detecting a single user input selecting a region of the digital image, the region comprising both a portion of the vector graphic and a portion of the grid graphic;
means for generating a vector selection representation of the portion of the vector graphic included in the selected region in response to detecting the single user input;
means for generating a grid selection representation of the portion of the grid graphic included in the selected region in response to detecting the single user input;
means for receiving a second user input, the second user input specifying a digital editing operation;
means for displaying, in real-time, a grid-based preview of the digital editing operation as applied to the portion of the vector graphic in the selected region of the digital image as the second user input is received;
means for editing both the vector selection representation and the grid selection representation using the digital editing operation in response to the second user input, the vector selection representation being edited during display of the grid-based preview;
means for replacing the grid-based preview in the digital image with the edited vector selection representation; and
means for displaying the digital image with the edited vector selection representation and the edited grid selection representation.
14. The system of claim 13, wherein the vector selection representation defines the portion of the vector graphic using two-dimensional points connected by at least one curve.
15. The system of claim 14, wherein the at least one curve is a Bezier curve and the two-dimensional points are control points of the Bezier curve.
16. The system of claim 13, wherein the grid selection representation is masked using a gray scale grid mask and the vector selection representation is masked using a plan view.
17. The system of claim 13, wherein the grid selection representation defines the portion of the grid graphic as pixels using a bitmap.
18. A computer-readable storage medium having instructions stored thereon that, in response to execution by a processing system, cause the processing system to perform operations comprising:
displaying a digital image in a user interface comprising a vector graphic and a grid graphic, a dedicated vector layer comprising the vector graphic and a dedicated grid layer comprising the grid graphic;
receiving a single user input, the single user input selecting a region of the digital image, the region comprising both a portion of the vector graphic and a portion of the grid graphic;
generating a vector selection representation of the portion of the vector graphic included in the selected region and a grid selection representation of the portion of the grid graphic included in the selected region;
receiving a second user input, the second user input specifying a digital editing operation;
displaying, in real-time, a grid-based preview of the digital editing operation as applied to the digital image and the dedicated vector layer as the second user input is received;
editing both the vector selection representation and the grid selection representation using the digital editing operation;
replacing the grid-based preview and the dedicated vector layer in the digital image with the edited vector selection representation; and
displaying the digital image with the edited vector selection representation in place of the grid-based preview and with the edited grid selection representation, the dedicated vector layer including the edited vector selection representation in place of the grid-based preview, and the dedicated grid layer including the edited grid selection representation.
19. The computer-readable storage medium of claim 18, wherein the generating is responsive to the receiving of the single user input.
20. The computer readable storage medium of claim 19, wherein the replacing is performed in response to completion of the second user input.
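The claims above describe two parallel selection representations: a vector selection kept as two-dimensional points joined by Bezier curves, and a grid (raster) selection kept as a bitmap or grayscale mask, with a raster preview shown while an edit is in progress and then replaced by the edited vector result. The following is a minimal sketch of that flow, not the patented implementation; all class names, helper functions, and the simple move operation used here are hypothetical illustrations.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]


@dataclass
class VectorSelection:
    """Vector selection representation: two-dimensional points joined by Bezier segments."""
    control_points: List[Point]

    def translate(self, dx: float, dy: float) -> "VectorSelection":
        # The edit changes the underlying mathematical structure (claim 7):
        # every control point moves, and the curves are re-evaluated on render.
        return VectorSelection([(x + dx, y + dy) for x, y in self.control_points])

    def rasterize(self, width: int, height: int) -> List[List[int]]:
        # Grid-based preview (claim 12): the vector content is shown as pixels while
        # the edit is in progress. A real renderer would scan-convert the Bezier
        # outline; this stub only marks the control points.
        pixels = [[0] * width for _ in range(height)]
        for x, y in self.control_points:
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < width and 0 <= yi < height:
                pixels[yi][xi] = 255
        return pixels


@dataclass
class GridSelection:
    """Grid selection representation: a grayscale mask over the image pixels (claim 4)."""
    mask: List[List[int]]  # one 0-255 value per pixel

    def translate(self, dx: int, dy: int) -> "GridSelection":
        height, width = len(self.mask), len(self.mask[0])
        shifted = [[0] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    shifted[ny][nx] = self.mask[y][x]
        return GridSelection(shifted)


def apply_move(vector_sel: VectorSelection,
               grid_sel: GridSelection,
               dx: float, dy: float,
               canvas_size: Tuple[int, int]):
    """Apply a single editing operation (a move) to both selection representations.

    While the second user input is being received, only the rasterized preview of
    the vector portion would be drawn; once the input completes, that preview is
    replaced with the edited vector selection representation (claims 1 and 12).
    In a layered document (claim 18) the edited vector result would return to the
    dedicated vector layer and the edited mask to the dedicated grid layer.
    """
    width, height = canvas_size
    edited_vector = vector_sel.translate(dx, dy)
    preview = edited_vector.rasterize(width, height)   # shown in real time during the drag
    edited_grid = grid_sel.translate(int(round(dx)), int(round(dy)))
    return edited_vector, edited_grid, preview


if __name__ == "__main__":
    vec = VectorSelection(control_points=[(2.0, 2.0), (5.0, 1.0), (8.0, 4.0)])
    msk = GridSelection(mask=[[255 if x < 4 else 0 for x in range(10)] for _ in range(10)])
    new_vec, new_grid, preview = apply_move(vec, msk, dx=1.5, dy=2.0, canvas_size=(10, 10))
    print(new_vec.control_points)          # vector result stays resolution independent
    print(sum(sum(row) for row in new_grid.mask) // 255, "mask pixels after the move")
```

Keeping the vector result as control points rather than committing the preview pixels is what preserves resolution independence after the edit completes, which is the distinction the claims draw between the grid-based preview and the edited vector selection representation.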
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201862745121P | 2018-10-12 | 2018-10-12 | |
US62/745,121 | 2018-10-12 | ||
US16/379,252 | 2019-04-09 | ||
US16/379,252 US11314400B2 (en) | 2018-10-12 | 2019-04-09 | Unified digital content selection system for vector and raster graphics |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111047666A CN111047666A (en) | 2020-04-21 |
CN111047666B true CN111047666B (en) | 2024-05-28 |
Family
ID=70161267
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910684659.6A Active CN111047666B (en) | 2018-10-12 | 2019-07-26 | Unified digital content selection system for vector graphics and grid graphics |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111047666B (en) |
AU (1) | AU2019213404B2 (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7385612B1 (en) * | 2002-05-30 | 2008-06-10 | Adobe Systems Incorporated | Distortion of raster and vector artwork |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6870545B1 (en) * | 1999-07-26 | 2005-03-22 | Microsoft Corporation | Mixed but indistinguishable raster and vector image data types |
US6999101B1 (en) * | 2000-06-06 | 2006-02-14 | Microsoft Corporation | System and method for providing vector editing of bitmap images |
US9025066B2 (en) * | 2012-07-23 | 2015-05-05 | Adobe Systems Incorporated | Fill with camera ink |
CA2927046A1 (en) * | 2016-04-12 | 2017-10-12 | 11 Motion Pictures Limited | Method and system for 360 degree head-mounted display monitoring between software program modules using video or image texture sharing |
- 2019-07-26: CN application CN201910684659.6A filed (granted as CN111047666B, status Active)
- 2019-08-08: AU application AU2019213404A filed (granted as AU2019213404B2, status Active)
Also Published As
Publication number | Publication date |
---|---|
AU2019213404A1 (en) | 2020-04-30 |
AU2019213404B2 (en) | 2021-08-19 |
CN111047666A (en) | 2020-04-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU2017235889B2 (en) | Digitizing physical sculptures with a desired control mesh in 3d | |
US10878604B2 (en) | Generating a triangle mesh for an image represented by curves | |
US11314400B2 (en) | Unified digital content selection system for vector and raster graphics | |
Wu et al. | ViSizer: a visualization resizing framework | |
CN106997613B (en) | 3D model generation from 2D images | |
US10943375B2 (en) | Multi-state vector graphics | |
US10846889B2 (en) | Color handle generation for digital image color gradients using machine learning | |
US20190295217A1 (en) | Digital image transformation environment using spline handles | |
US11455752B2 (en) | Graphical element color diffusion techniques | |
CN104732479A (en) | Resizing An Image | |
US9955065B2 (en) | Dynamic motion path blur user interface | |
US10403040B2 (en) | Vector graphics rendering techniques | |
US9779484B2 (en) | Dynamic motion path blur techniques | |
US20230162413A1 (en) | Stroke-Guided Sketch Vectorization | |
US10573033B2 (en) | Selective editing of brushstrokes in a digital graphical image based on direction | |
US11348287B2 (en) | Rendering of graphic objects with pattern paint using a graphics processing unit | |
CN111047666B (en) | Unified digital content selection system for vector graphics and grid graphics | |
US20230267696A1 (en) | Responsive Video Canvas Generation | |
US20200388082A1 (en) | Editing bezier patch by selecting multiple anchor points | |
US7330183B1 (en) | Techniques for projecting data maps | |
US20240249475A1 (en) | Visualizing vector graphics in three-dimensional scenes | |
US20200272689A1 (en) | Vector-Based Glyph Style Transfer | |
US20220301263A1 (en) | Digital Object Surface Inflation | |
US20240257408A1 (en) | Scene graph structure generation and rendering | |
US20240212242A1 (en) | Digital Representation of Intertwined Vector Objects |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||