GB2341777A - Image processing method - Google Patents

Image processing method

Info

Publication number
GB2341777A
GB2341777A
Authority
GB
United Kingdom
Prior art keywords
line
dataset
objects
rendering
visible
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB9816612A
Other versions
GB9816612D0 (en)
GB2341777B (en)
Inventor
Stuart Green
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lightwork Design Ltd
Original Assignee
Lightwork Design Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lightwork Design Ltd filed Critical Lightwork Design Ltd
Priority to GB9816612A priority Critical patent/GB2341777B/en
Publication of GB9816612D0 publication Critical patent/GB9816612D0/en
Publication of GB2341777A publication Critical patent/GB2341777A/en
Application granted granted Critical
Publication of GB2341777B publication Critical patent/GB2341777B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/02 Non-photorealistic rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An image processing method and apparatus are disclosed, including the steps of receiving 3D object data (30) (Fig 2A), extracting boundary information (11) (Fig 2B), applying modifiers (12) (Fig 2C) to achieve characteristics distinctive of an expressive style, producing 3D line surfaces (13) (Fig 2D) based on the modified data, and optionally rendering (14) (Fig 2E) the 3D line surfaces to produce a 2D output image. Extracting the boundaries of objects allows an expressive non-photorealistic treatment of the 3D objects, resulting in a highly flexible and readily manipulable process for producing 2D images. The applied modifier may be such as to simulate the use of an inky pen.

Description

IMAGE PROCESSING METHOD AND APPARATUS

The present invention relates in general to the field of image processing and in particular to image rendering.
Image rendering methods and apparatus take many forms, one common example being photorealistic rendering. Photorealistic rendering aims to take input data representing three dimensional objects and to produce output data representing a two dimensional image of those three dimensional objects. In photorealistic rendering it is desired that the two dimensional image, as far as possible, appears to the human eye as if it were a photograph of physical objects.
The present invention is concerned with a second area of image processing generally known as non-photorealistic rendering (NPR) or expressive rendering. In this field it is desired to take 3D object data input and to produce a 2D image data output having abstract image qualities. For example, NPR methods and apparatus are used to create images looking as if they had been produced by a human cartoon animator, water colour artist or oil painter.
In the field of cartoon animation it is known to use cell animation techniques whereby key frames in a sequence are drawn by a human animator and intermediate frames are produced automatically. However, this still requires laborious and time consuming skilled effort from the human animators for the key frames.
It is also known to take 3D object data input and to produce photorealistic 2D rendered images, and then to post-process the 2D images to give a non-photorealistic effect. However, in the post-processing step commonly only colour information is available for each pixel of the image. The output colour effect is based on the underlying pixel colour of the photorealistic rendered image, without any knowledge of what that underlying colour represents, i.e. whether it represents a foreground object or a distant background object. Therefore, post-processing methods are generally not readily flexible and adaptable and have a number of limitations.
It is an aim of the present invention to provide a fast and accurate non-photorealistic rendering method and apparatus preferably having a high degree of flexibility over the end results produced.
According to the present invention there is provided an image processing method for non-photorealistic image generation, comprising the steps of:
receiving 3D object data representing 3D objects in 3D space;
extracting visible boundary information from the 3D object data;
modifying the boundary information using one or more modifiers, to produce line elements relating to the extracted boundary information; and
outputting a 3D line dataset including 3D objects representing said line elements.
In the preferred embodiments the 3D object data defines 3D objects in any suitable 3D co-ordinate system using any suitable representation format. For example, the 3D object dataset defines regular polygons by the coordinates of vertices. Alternatively, the objects may be represented in a constructive solid geometry format where complex objects are represented by the sum or difference of several simple objects, such as a cube or sphere.
Objects can be represented in any suitable format, for example in terms of surfaces and edges, or by formula with, for example, a sphere represented by co-ordinates for its centre and a radius. Many other 3D object data formats will be apparent to the skilled person. In general these are referred to as 3D object data in world space.
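For illustration only, such world-space object data might be held in simple record types. The names below are hypothetical; the specification does not mandate any particular format:

    from dataclasses import dataclass
    from typing import List, Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class PolygonMesh:
        vertices: List[Vec3]           # world-space vertex coordinates
        faces: List[Tuple[int, ...]]   # polygons as vertex-index tuples

    @dataclass
    class Sphere:
        centre: Vec3                   # representation by formula:
        radius: float                  # a centre point plus a radius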
Preferably, the step of extracting boundary information applies any suitable technique for determining the boundaries of surfaces of the objects in the input 3D object dataset. The boundaries are preferably defined explicitly as a 3D edge of a surface, i.e. in a manner which is independent of the view point of the surface. Alternatively, boundaries can be defined implicitly using a silhouette of a surface to define an implicit 3D edge, in a manner which is dependent on the point from which the surface is viewed. The term visible edge dataset will be used below and is intended to cover both explicit boundaries and implicit silhouettes.
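By way of illustration only, the implicit (silhouette) case can be made concrete for a triangle mesh: a silhouette edge is an edge shared by one front-facing and one back-facing triangle relative to the view point. The following Python sketch assumes a minimal mesh representation (vertex array plus vertex-index triples) and hypothetical function names; the specification does not prescribe any particular implementation.

    import numpy as np

    def face_normal(vertices, face):
        # Unnormalised normal of a triangle given as a triple of vertex indices.
        a, b, c = (vertices[i] for i in face)
        return np.cross(b - a, c - a)

    def silhouette_edges(vertices, faces, eye):
        # vertices: (N, 3) array of world-space points; faces: (i, j, k) triples;
        # eye: the view point. Returns edges shared by one front-facing and one
        # back-facing triangle, i.e. the implicit silhouette for this view point.
        facing = {}
        for fi, face in enumerate(faces):
            n = face_normal(vertices, face)
            centroid = sum(vertices[i] for i in face) / 3.0
            facing[fi] = np.dot(n, eye - centroid) > 0.0  # True = front-facing
        edge_faces = {}
        for fi, (i, j, k) in enumerate(faces):
            for e in ((i, j), (j, k), (k, i)):
                edge_faces.setdefault(tuple(sorted(e)), []).append(fi)
        return [e for e, fs in edge_faces.items()
                if len(fs) == 2 and facing[fs[0]] != facing[fs[1]]]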
In the preferred embodiment boundary information is extracted to give a visible edge dataset representing visible boundary edges of the surfaces of the 3D objects. The visible edge dataset is preferably represented as both 2D and 3D data. That is, the 3D boundary information is preserved and reference may be made from the 2D data back to the original 3D data. Preferably, a hidden line algorithm is employed. The algorithm may include a projection step specifying the point from which the 3D objects are to be viewed relative to the 3D world space.
The projection is preferably clipped according to the aspect available to the viewer, e.g. to fit a rectangular image field of a known size such as a CRT screen. The clipped projection is commonly known as screen space.
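As a concrete illustration of projection and clipping into screen space, the sketch below uses a minimal pinhole camera model assumed purely for illustration (camera at the origin looking along +z, with a hypothetical focal_length parameter); points projecting outside the rectangular image field are clipped.

    def project_to_screen(point, focal_length, width, height):
        # Minimal pinhole projection: camera at the origin looking along +z.
        # Returns 2D screen coordinates, or None when the point is clipped.
        x, y, z = point
        if z <= 0.0:
            return None  # behind the camera
        sx = focal_length * x / z + width / 2.0
        sy = focal_length * y / z + height / 2.0
        if 0.0 <= sx < width and 0.0 <= sy < height:
            return (sx, sy)
        return None  # outside the rectangular image field (screen space)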
The hidden line algorithm is generally intended to provide information representing object boundaries. In a typical application in the field of engineering drawings, it is desired to represent the object boundaries with uniform width lines for visible boundary lines, and with hidden boundary lines shown dashed or dotted, as will be familiar to the skilled person.
The extracted boundary information is preferably in the form of continuous hidden lines.
Preferably the hidden line algorithm generates line segments relating to the edges of visible polygons, and preferably includes a continuity step for joining adjacent lines together to give data representing a series of polylines.
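A minimal sketch of such a continuity step, assuming the hidden line pass yields unordered 2D segments whose shared endpoints coincide within a tolerance, might read as follows; the greedy matching here is illustrative only, not a method prescribed by the specification.

    def join_segments(segments, tol=1e-6):
        # Chain 2D segments sharing endpoints (within tol) into polylines.
        # Greedy and O(n^2); a production version would index endpoints.
        def close(p, q):
            return abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol

        remaining = list(segments)
        polylines = []
        while remaining:
            a, b = remaining.pop()
            line = [a, b]
            grown = True
            while grown:
                grown = False
                for i, (p, q) in enumerate(remaining):
                    if close(line[-1], p):
                        line.append(q); remaining.pop(i); grown = True; break
                    if close(line[-1], q):
                        line.append(p); remaining.pop(i); grown = True; break
                    if close(line[0], q):
                        line.insert(0, p); remaining.pop(i); grown = True; break
                    if close(line[0], p):
                        line.insert(0, q); remaining.pop(i); grown = True; break
            polylines.append(line)
        return polylines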
The modifying step applies modifiers which modify the boundary information to produce line elements. The line elements produced are preferably an augmented form of the visible edge dataset.
A wide variety of modifiers may be employed, sequentially or in parallel, allowing a high degree of flexibility. The modifiers preferably apply characteristics of the image style desired. Preferably the modifiers apply characteristics relating to an applicator of any suitable type such as a pen, charcoal, paint brush or pencil; a substrate of any suitable type such as paper, canvas or blackboard; and an artistic style of any suitable type such as drawing speed, directional preferences (e.g. left or right handed strokes, or a preference for up or down strokes), accuracy, stroke length, line multiplicity and others.
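One natural reading of this arrangement, though the specification leaves the mechanism open, is a chain of composable functions, each mapping a polyline to a modified polyline. The sketch below illustrates that reading with two hypothetical modifiers, wobble (an unsteady or fast hand) and overshoot (a stroke that overruns its intended end point):

    import random

    def wobble(amplitude):
        # Modifier: perturb each vertex, simulating a fast or unsteady hand.
        def apply(polyline):
            return [(x + random.uniform(-amplitude, amplitude),
                     y + random.uniform(-amplitude, amplitude))
                    for (x, y) in polyline]
        return apply

    def overshoot(length):
        # Modifier: extend the final segment, as a stroke overrunning its end.
        # Assumes the polyline has at least two points.
        def apply(polyline):
            (x0, y0), (x1, y1) = polyline[-2], polyline[-1]
            dx, dy = x1 - x0, y1 - y0
            norm = (dx * dx + dy * dy) ** 0.5 or 1.0
            return polyline + [(x1 + dx / norm * length,
                                y1 + dy / norm * length)]
        return apply

    def apply_modifiers(polyline, modifiers):
        # Modifiers compose sequentially, as the description permits.
        for m in modifiers:
            polyline = m(polyline)
        return polyline

    # Example: a jittery, overrunning stroke from a clean boundary polyline.
    stroke = apply_modifiers([(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)],
                             [wobble(0.5), overshoot(2.0)])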
In each case the modifiers operate on the boundary information to produce a three dimensional dataset with 3D objects, preferably in the form of 3D surfaces, representing the boundary as it would be drawn in the expressive style chosen. The term 3D line surface will be used below to refer to this form of dataset.
Preferably, the method comprises the additional step of rendering the 3D line surfaces to produce 2D image data output.
The rendering step allows additional modification of the 3D line surfaces, such as to apply surface effects. For example, where the applicator is selected to be charcoal, each line surface is rendered with a surface typical of charcoal: an uneven surface, patterned as charcoal is dislodged from the stick and applied to the substrate, which can even result in unmarked areas within the line.
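As an illustration, such a charcoal-like effect might be produced procedurally by randomly leaving texels of the line surface unmarked. The sketch below, with hypothetical parameter names, is one such reading rather than the patent's prescribed technique:

    import random

    def charcoal_mask(n_along, n_across, coverage=0.75, seed=None):
        # Boolean opacity mask for a line surface: True = marked texel.
        # Random dropout leaves unmarked areas within the line, mimicking
        # charcoal dislodged unevenly from the stick onto the substrate.
        rng = random.Random(seed)
        return [[rng.random() < coverage for _ in range(n_across)]
                for _ in range(n_along)]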
Preferably, the method also includes the step of surface rendering, preferably carried out in parallel with the boundary rendering method described above. The surface rendering is used to fill in areas of the 2D image representing the surface of 3D objects between boundaries with, for example, appropriate blocks of colour. That is, the boundary rendering method represents the strokes taken by an artist to draw boundary lines, and the surface rendering method may be used to fill in the surface areas defined by the boundary lines.
Preferably, modifiers are applied in the surface rendering step. In particular, modifiers operate to impart imprecision to the fill regions used to fill in the surface areas. For example, the fill areas can be modified to randomise their boundaries, for example by extending or reducing the fill area compared with a perfect fill. Suitably, the modifiers operate within the surface rendering step and therefore information is available as to the type and material of each object and its surfaces enabling different objects or materials to be given different treatment.
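As an illustration of imparting imprecision to a fill region, the sketch below randomises a 2D fill polygon by displacing each vertex radially from the polygon's centroid, so the fill overruns or falls short of the true boundary. The jitter parameter and the radial scheme are assumptions made for illustration only.

    import random

    def randomise_fill_boundary(polygon, jitter):
        # Displace each vertex of a 2D fill region radially from the centroid,
        # so the fill overruns or falls short of the true boundary.
        cx = sum(x for x, _ in polygon) / len(polygon)
        cy = sum(y for _, y in polygon) / len(polygon)
        out = []
        for x, y in polygon:
            dx, dy = x - cx, y - cy
            norm = (dx * dx + dy * dy) ** 0.5 or 1.0
            r = random.uniform(-jitter, jitter)
            out.append((x + dx / norm * r, y + dy / norm * r))
        return out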
The present invention also extends to an apparatus having means for performing each of the method steps defined above.
The method and apparatus of the present invention have many advantages. Extracting boundary information allows the image processing technique to be performed very quickly and efficiently whilst retaining a high degree of accuracy. Further, a large number of modifiers may be combined to create sophisticated line effects and high level artistic expression from relatively rudimentary 3D object data. Further, the 3D line dataset representing boundaries of objects is easily distinguished from background information. The 2D output image can be drawn accurately simulating, for example, layering techniques distinguishing between distant background and close foreground objects.
For a better understanding of the invention, and to show how embodiments of the same may be carried into effect, reference will now be made, by way of example, to the accompanying diagrammatic drawings, in which:
Figure 1 is a schematic overview of the preferred method;
Figure 2 illustrates the method of Figure 1; and
Figure 3 schematically represents a second preferred method.
Referring to Figure 1 the method and apparatus of the preferred embodiment perform non-photorealistic rendering to produce a 2D output image 20 from 3D object data 30.
The 3D object data is provided in any suitable format following any suitable 3D co-ordinate system, as will be familiar to the skilled person. This is generally referred to as 3D object data in world space. The 3D object data desirably includes information representing the size, shape and position of each object, and ideally also includes information concerning the material from which the object is made, the type and position of any lighting, and the camera positions from which the objects are viewed.
The boundary rendering method 10 operates on a subset of the 3D object data 30 as will be described below.
An extracting step 11 is performed to extract boundary information from the 3D object data 30. Suitably, a hidden-line algorithm is employed to determine visible line segments relating to edges of the objects, i.e. edges of simple or complex polygons or edges of meshes of simple or complex polygons. Typically the hidden line algorithm produces a large number of individual straight line segments which are joined together in a continuity step to give a series of polylines. The polylines at this stage represent the boundaries which should be drawn in order to provide a 2D visual output representing the 3D objects.
Referring to Figure 2, a series of images is shown to illustrate the method as applied to a graphical object, in this example a simple cylinder. Figure 2A shows the 3D cylinder object in 3D world space. Figure 2B shows the cylinder with boundary information extracted to show visible boundaries from a predetermined view point clipped to fit within a predetermined screen space. As shown in Figure 2B the extracted boundary information is represented here as simple lines of uniform width.
Referring again to Figure 1, the boundary information is modified in step 12 using one or more modifiers. The modifiers represent the many individual characteristics desired in the eventual 2D image. Suitably, modifiers apply characteristics relating to the type of applicator with which the image is drawn, selected from, for example, any one or a combination of a pencil, a fountain pen, a drawing pen, a felt tip pen, a paint brush, charcoal or any other implement. Modifiers are also used to represent the way that the applicator is used including, for example, drawing speed, which affects the extent to which each line is drawn accurately or wobbles from the perfect path determined by the boundary information. Other style characteristics include stroke length including, for example, the extent to which any particular stroke falls short of or overruns an intended end position. Further expressive effects are applied by modifiers representing, for example, hints as to the pressure likely to be applied at any point in the stroke, the preferred direction of the stroke, movement from one stroke to another, and whether multiple strokes will be used to represent one boundary.
The method allows a wide variety of modifiers to be employed selected according to the desired expressive effect in the 2D output image.
Figure 2C illustrates the line elements as may be produced following application of the characteristics of, for example, a fountain pen, where a large quantity of ink is deposited as the pen is placed on the paper, represented by a relatively wide line tapering as the line is drawn. Each line might finish with a very thin portion as the pen is raised from the paper at speed, or another thicker portion if the pen is brought to rest before being raised. Many features and characteristics for each type of applicator will be apparent to the skilled person.
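A width profile along the stroke is one simple way to capture this fountain pen behaviour. The sketch below is a hypothetical illustration, not the patent's construction: the first sample is widened by a blob factor for the ink deposited as the pen touches the paper, and widths then taper towards a thin lift-off.

    def fountain_pen_widths(n_points, start_width, end_width, blob=1.6):
        # Width per sample along a stroke: the first sample is widened by
        # `blob` (ink deposited as the pen first touches the paper), then
        # widths taper linearly towards a thin lift-off at the end.
        widths = []
        for i in range(n_points):
            t = i / max(n_points - 1, 1)
            w = start_width + (end_width - start_width) * t
            if i == 0:
                w *= blob
            widths.append(w)
        return widths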
The line elements determined in the modifying step 12 are used to produce 3D line surfaces in step 13 of Figure 1. That is, the line elements determined by the modifying step 12 are used to create new 3D objects in a 3D co-ordinate system. The 3D objects are suitably surfaces represented as infinitesimally thin 3D geometry such as polygons or meshes of polygons. The 3D line surface generation step 13 is suitably the reverse of a projection step commonly used in rendering methods, as will be familiar to the skilled person.
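Continuing the pinhole model assumed in the earlier projection sketch, reversing the projection might lift each screen-space stroke sample back along its viewing ray to the depth recorded for the source boundary, then extrude the lifted polyline sideways into a ribbon of thin quads. All names and the choice of ribbon geometry below are illustrative assumptions, not the patent's prescribed construction.

    import numpy as np

    def lift_stroke(points2d, depths, focal_length, width, height):
        # Reverse the pinhole projection: lift each screen-space sample back
        # along its viewing ray to the recorded depth of the source boundary.
        lifted = []
        for (sx, sy), z in zip(points2d, depths):
            x = (sx - width / 2.0) * z / focal_length
            y = (sy - height / 2.0) * z / focal_length
            lifted.append(np.array([x, y, z]))
        return lifted

    def ribbon_quads(points3d, half_width):
        # Extrude a lifted polyline into infinitesimally thin quads (a ribbon)
        # lying roughly perpendicular to the assumed +z viewing direction.
        quads = []
        for a, b in zip(points3d, points3d[1:]):
            side = np.cross(b - a, np.array([0.0, 0.0, 1.0]))
            n = np.linalg.norm(side) or 1.0
            side = side / n * half_width
            quads.append((a - side, a + side, b + side, b - side))
        return quads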
The generated 3D line surfaces are conveniently stored for future use. Figure 2D illustrates an example of the 3D line surfaces in 3D world space.
In order to produce the final 2D output image 20 of Figure 1 a rendering step 14 is performed.
The rendering step 14 is similar to rendering as performed in a photorealistic rendering method, to give pattern, texture, shading and colour to the 3D line surfaces of the 3D line data and produce the eventual 2D output image. The rendering step includes, for example, determining the surface effect of each line or stroke, combining the characteristics of the applicator, the substrate, and the artistic style.
The output 2D image is illustrated in Figure 2E with fully represented line elements for each stroke making up the final image in an expressive style.
The method is adaptable to provide output images in a wide variety of expressive styles including, for example, recreating any style developed and used by a human artist. However, the method is not restricted to producing or reproducing human artistic styles and is readily adaptable to produce new styles not previously available.
The rendering step 14 operates on the 3D line data which has been created from a knowledge of the boundaries of the 3D objects. Therefore the 2D output image 20 can be created distinguishing the foreground objects represented by the 3D object data from a background scene. In complex 3D object data where objects overlap each other, distant objects are represented first in the output image if desired, the closer objects optionally having a different visual characteristic and being drawn later. This method gives a much better result than the known 2D pixel-colour oriented approach.
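The layering described here amounts to a painter's-algorithm ordering over the 3D line data: because each stroke derives from a boundary with known depth, distant strokes can be drawn first and nearer strokes painted over them. A minimal sketch, assuming each stroke is tagged with a representative camera-space depth and drawn by a hypothetical callback:

    def paint_far_to_near(strokes, draw):
        # strokes: (depth, stroke) pairs, depth being a representative
        # camera-space z for the stroke's source boundary.
        # draw: callback that rasterises one stroke into the output image.
        # Distant strokes are drawn first so nearer ones overpaint them.
        for depth, stroke in sorted(strokes, key=lambda s: s[0], reverse=True):
            draw(stroke)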
Referring to Figure 3, the boundary rendering method described above can optionally be combined with other methods such as a surface rendering method 40, used to fill in surfaces of the 3D objects with colour, texture and so on to provide a complete 2D output image. Optionally, modifiers are used in the surface rendering method 40 similar to the modifiers used in the boundary rendering method 10. For example, modifiers operate to manipulate the area of the 2D image filled in to represent a surface of the 3D objects. This fill area, for example, is modified to impart imprecision, in particular at its edges. For example, the fill area is modified to extend over or fall short of the boundary by adjusting the stroke length, direction, speed or other characteristics. Further, the surface rendering method 40 operates on 3D object data and therefore information is available such that different objects or materials can be given a different treatment. For example, different diffusion properties of an object can be simulated, with perhaps red water colour paint applied over a green base coat.
The reader's attention is directed to all papers and documents which are filed concurrently with or previous to this specification in connection with this application and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference.
All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive.
Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
The invention is not restricted to the details of the foregoing embodiment(s). The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed.

Claims (23)

1. An image processing method, comprising the steps of:
(a) receiving a 3D object dataset representing 3D objects in 3D space;
(b) extracting visible boundary information from the 3D object data to form a visible edge dataset;
(c) modifying the boundary information of the visible edge dataset using one or more modifiers, to produce line elements relating to the extracted boundary information; and
(d) producing a 3D line surface dataset including 3D objects representing the line elements.
2. A method as claimed in claim 1, wherein the step (b) includes the step of determining the boundaries of surfaces of the 3D objects in the 3D object data.
3. A method as claimed in claim 2, wherein the boundaries are defined explicitly as a 3D edge of a surface, in a manner which is independent of the view point of the surface.
4. A method as claimed in claim 3, wherein a subset of the explicitly defined boundaries is selected which are visible from a predetermined view point, to be the visible edge dataset.
5. A method as claimed in claim 2, wherein the boundaries are defined implicitly using a silhouette of a surface, in a manner which is dependent on a predetermined view point, to produce the visible edge dataset.
6. A method as claimed in claim 1, wherein the extracting step (b) employs a hidden line algorithm to produce the visible edge dataset.
7. A method as claimed in claim 1, wherein the modifying step (c) comprises the step of applying one or more modifiers to modify the boundary information of the visible edge dataset to produce a plurality of line elements.
8. A method as claimed in claim 7, wherein each line element relates to at least one of the boundaries represented by the visible edge dataset.
9. A method as claimed in claim 8, wherein the or each modifier applies one or more characteristics of a desired image style.
10. A method as claimed in claim 9, wherein the characteristics are selected from any one or more of an applicator type, a substrate type, an artistic style, and movement preferences.
11. A method as claimed in claim 7, wherein the step (d) comprises the step of producing the 3D line surface dataset comprising a 3D line surface object for each of the line elements.
12. A method as claimed in claim 11, wherein each of the 3D line surface objects is a 3D line surface object representing a surface of minimal thickness.
13. A method as claimed in claim 12, further comprising the step (e) of rendering the 3D line surface objects of the 3D line surface dataset to provide 2D image data.
14. A method as claimed in claim 13, wherein the rendering step (e) comprises the application of surface effects to the 3D line surface objects.
15. A method as claimed in claim 14, further comprising the step (f) of rendering surfaces of the 3D objects in the 3D object data, and combining the rendered surfaces with the rendered 3D line surfaces to form the 2D image data.
16. A method as claimed in claim 15, wherein the surface rendering step (f) is performed in parallel with the boundary rendering step (e).
17. A method as claimed in claim 16, wherein the step (f) comprises the step of applying secondary modifiers to surface information from the 3D object dataset.
18. A method as claimed in claim 17, wherein, where the 3D object data includes information relating to the type and material of each 3D object, the secondary modifiers operate on the type and material information to selectively apply modifying effects.
19. An image processing apparatus, comprising:
means for receiving a 3D object dataset representing 3D objects in 3D space;
means for extracting visible boundary information from the 3D object data to form a visible edge dataset;
means for modifying the boundary information of the visible edge dataset using one or more modifiers, to produce line elements relating to the extracted boundary information; and
means for producing a 3D line surface dataset including 3D objects representing the line elements.
20. A data carrying medium conveying instructions for performing the method as claimed in any of claims 1 to 18.
21. A program for a data processing apparatus for performing the method as claimed in any of claims 1 to 18.
22. An image processing method substantially as hereinbefore described with reference to the accompanying drawings.
23. An image processing apparatus substantially as hereinbefore described with reference to the accompanying drawings.
GB9816612A 1998-07-31 1998-07-31 Image processing method and apparatus Expired - Fee Related GB2341777B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB9816612A GB2341777B (en) 1998-07-31 1998-07-31 Image processing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB9816612A GB2341777B (en) 1998-07-31 1998-07-31 Image processing method and apparatus

Publications (3)

Publication Number Publication Date
GB9816612D0 GB9816612D0 (en) 1998-09-30
GB2341777A true GB2341777A (en) 2000-03-22
GB2341777B GB2341777B (en) 2000-11-08

Family

ID=10836438

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9816612A Expired - Fee Related GB2341777B (en) 1998-07-31 1998-07-31 Image processing method and apparatus

Country Status (1)

Country Link
GB (1) GB2341777B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2374774A (en) * 2001-04-18 2002-10-23 Voxar Ltd Region boundary identification in 2d representation of 3d subject

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ACM SIGGRAPH *
Saito, T. and Takahashi, T., 'Comprehensible Rendering of 3-D Shapes', Computer Graphics, Vol. 24, No. 4, August 1990, pp. 197-206 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2374774A (en) * 2001-04-18 2002-10-23 Voxar Ltd Region boundary identification in 2d representation of 3d subject
GB2374774B (en) * 2001-04-18 2003-05-14 Voxar Ltd A method of displaying selected objects in image processing

Also Published As

Publication number Publication date
GB9816612D0 (en) 1998-09-30
GB2341777B (en) 2000-11-08

Similar Documents

Publication Publication Date Title
Lansdown et al. Expressive rendering: A review of nonphotorealistic techniques
Durand An invitation to discuss computer depiction
US7365744B2 (en) Methods and systems for image modification
EP1271411A3 (en) Hierarchical image-based apparatus and method of representation and rendering of three-dimentional objects
Way et al. The synthesis of trees in Chinese landscape painting using silhouette and texture strokes
Greenfield On the Origins of the Term "Computational Aesthetics"
Gerl et al. Interactive example-based hatching
CN104063888A (en) Pop art style drawing method based on non-photorealistic
CN110610504A (en) Pencil drawing generation method and device based on skeleton and tone
Chen et al. Embroidery modeling and rendering
Bratkova et al. Artistic rendering of mountainous terrain.
Guo et al. "Nijimi" rendering algorithm for creating quality black ink paintings
Gooch Interactive non-photorealistic technical illustration
Arpa et al. Perceptual 3D rendering based on principles of analytical cubism
GB2341777A (en) Image processing method
KR101635992B1 (en) Pencil Rendering Framework based on Noise Generalization
Kwon et al. Texture-based pencil drawings from pictures
Oh A system for image-based modeling and photo editing
Di Fiore et al. Highly stylised animation
Wang et al. IdiotPencil: an interactive system for generating pencil drawings from 3D polygonal models
Sauvaget et al. Stylization of lighting effects for images
Chen Interactive specification and acquisition of depth from single images
Öhrn Different mapping techniques for realistic surfaces
Kalnins WYSIWYG NPR: Interactive stylization for stroke-based rendering of three-dimensional animation
Madono et al. Data‐Driven Ink Painting Brushstroke Rendering

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20060731

728V Application for restoration filed (sect. 28/1977)
728Y Application for restoration allowed (sect. 28/1977)
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20080731