EP0532505A1 - Video image creation - Google Patents

Video image creation

Info

Publication number
EP0532505A1
Authority
EP
European Patent Office
Prior art keywords
data
image
video image
patch
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP19910906949
Other languages
German (de)
French (fr)
Inventor
Michael Joseph Kemp
Gary Michael Sleet
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SPACEWARD GRAPHICS Ltd
Original Assignee
SPACEWARD GRAPHICS Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SPACEWARD GRAPHICS Ltd
Publication of EP0532505A1
Status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/001 Texturing; Colouring; Generation of texture or colour

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

A video image creation system comprises a memory for a video image (44) and a control image memory (46). Manual means (34) provides a set of data signals and these are compared with a corresponding set in the control image memory (46). The contents of the control image memory are modified according to the result of the comparison for each of the pixels forming the set. Image modification means (32) receives the video image data and modifies it in response to the data signals stored in the control image memory. The thus modified video image is displayed on display means (40).

Description

VIDEO IMAGE CREATION
This invention relates to video image systems, and in particular to systems which create an image in a frame buffer under the control of an artist using a tablet and stylus and which mimic an artist's natural drawing and painting techniques.
A known technique for drawing onto an 8-bit frame buffer is described in Alvy Ray Smith's "Paint" paper handed out at Siggraph 1978 under the sub-heading "z-painting".
This technique allows profiled brushes to be applied to a frame buffer so as to achieve a gradation in intensity across the width of the brush and, for repeated applications of a brush, creates a stroke with a profiled cross section. This is achieved without the need for any blending operations, but by the use of a simple Boolean comparison between values in the frame buffer and the brush profile.
Figure 1 shows apparatus for implementing this technique, where an 8-bit frame buffer 3 is arranged to display a monochrome image. In this, an 8-bit value of 255 represents white and a value of 0 represents black. A brush profile store 5 contains a predetermined set of brush pixel values with higher values towards the centre, tapering towards the edge. When a stylus is pressed onto a tablet, the (x,y) co-ordinates designate a location 15 in the frame buffer 3 where the centre of the brush is to be applied. A rectangular area around this pixel, of a size corresponding to the brush profile store, is then processed by performing the following steps for each pixel in the area. The value in the pixel is read to provide an input 21 to a comparator 6. At the same time the matching value in the profile store is read to provide a second input 20 to the comparator. If the result of the comparison is that the value in profile store 5 exceeds that in the frame store 3, the value from the profile store is written to that pixel in the frame store; otherwise the original data 21 is written back. It is of course permissible to omit the write operation in this latter case as the value being written is the same as that read from the frame buffer 3. This process is repeated for each pixel in the rectangular area.
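By way of illustration only, this comparison-based brush application can be sketched as follows in Python; the array names, image size and brush shape here are assumptions for the example, not taken from the patent.

```python
import numpy as np

def apply_brush(frame, profile, cx, cy):
    """Apply a profiled brush centred at (cx, cy): a profile value replaces a
    frame-buffer pixel only if it exceeds the value already stored there."""
    bh, bw = profile.shape
    y0, x0 = cy - bh // 2, cx - bw // 2
    for j in range(bh):
        for i in range(bw):
            y, x = y0 + j, x0 + i
            if 0 <= y < frame.shape[0] and 0 <= x < frame.shape[1]:
                if profile[j, i] > frame[y, x]:      # the comparator test
                    frame[y, x] = profile[j, i]      # write the brush value
                # otherwise the original value is left (or re-written) unchanged

frame = np.zeros((480, 640), dtype=np.uint8)         # monochrome 8-bit frame buffer
yy, xx = np.mgrid[-7:8, -7:8]                        # a 15 x 15 brush matrix
profile = np.clip(255 - 36 * np.hypot(yy, xx), 0, 255).astype(np.uint8)
apply_brush(frame, profile, 100, 100)                # first application copies the brush
```

Repeated calls at the same position leave the frame unchanged, which is the behaviour described in the following paragraphs.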
If frame store 3 starts with values of all zeros in it, the first application of the brush will result in a copy of the brush being placed in the store. If the pen is kept still at this location no further change occurs in frame store 3 although the process continues to operate because the test described above always leads to the same value being written back to the store for all pixels.
If the pen is now moved a little on the tablet, the brush is then applied at a different but overlapping location. In this case as the area is processed, pixels on the leading edge of the stroke will pass the comparison test and new values from the profile will be written to the store in this area. Pixels on the trailing edge will be of greater value in the store than in the brush profile so the pixel values will be written back unchanged, or alternatively not be written at all according to the process described above.
As the pen moves the brush is applied at overlapping positions to form a line corresponding to the positions the pen has moved through. Applying the brush at overlapping positions along the line is visually pleasing to the user since the line appears to grow in a way similar to the way it would look if a real airbrush were being sprayed onto a surface.
A major disadvantage of this method is that the control image value for a pixel may be worked out many times with different positions of the overlapping brush profile. Each time a control image value is worked out, this is used in a calculation to work out a value for the corresponding pixel on the viewed screen.
Another disadvantage arises where the processing speed of the computer lags significantly behind the speed of line drawing. This causes the line appearing on the screen to lag behind the position of the artist's pen on the tablet.
It is necessary for the computer to be able to initialise the whole store with the value zero before drawing commences to create a "clean-canvas" for the artist to paint on.
A development of this process is to allow the computer to initialise the store to the maximum value 255, and to use the brush profile 5 in an inverted sense, only writing values to the store when the "depth" of the brush towards zero exceeds that already in the store. It can be seen that to achieve this the value from the profile store 5 must be subtracted from 255 before application to the comparator 6. This is identical to inverting every bit in the 8-bit value, which is an easier operation. The test in comparator 6 must now be changed so that new values are written only when the inverted data 20 is less than the original pixel value 21. This inverted painting process can be selected by the user at any time, even when he has started from a zeroed store as originally described. This allows him to selectively "rub out" sections of his drawing by drawing over it in reverse.
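A corresponding sketch of the inverted ("rub out") mode, again illustrative only and reusing the conventions of the previous example:

```python
def apply_brush_inverted(frame, profile, cx, cy):
    """Inverted drawing: each profile value is bit-inverted (255 - v) and is
    written only when it is LESS than the value already in the frame store."""
    inv = 255 - profile                          # equivalent to inverting all 8 bits
    bh, bw = profile.shape
    y0, x0 = cy - bh // 2, cx - bw // 2
    for j in range(bh):
        for i in range(bw):
            y, x = y0 + j, x0 + i
            if 0 <= y < frame.shape[0] and 0 <= x < frame.shape[1]:
                if inv[j, i] < frame[y, x]:      # reversed comparator test
                    frame[y, x] = inv[j, i]
```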
The invention is defined in its various aspects in the appended claims, to which reference should now be made.
The invention will now be described in detail by way of example with reference to the figures in which:
Figure 1 shows the prior art arrangement described above;
Figure 2 shows a block diagram of one embodiment of the invention;
Figure 3 shows, schematically, a process for drawing onto an output video image in one embodiment of the invention;
Figure 4 shows an arrangement for refreshing an image or changing the colour with which an image has been drawn;
Figure 5 shows a block diagram of a real-time embodiment of the invention;
Figure 6 shows a block diagram of a further embodiment in which a painting mask is simulated;
Figure 7 shows an embodiment of the invention which enables painting to be made pressure sensitive;
Figure 8 shows the type of interpolator circuit used in the invention;
Figure 9 shows various repeated brush applications for drawing lines; and
Figure 10 shows a further line drawing technique.
The technique can be applied to a full colour video system where a frame buffer consists of 32-bit values per pixel; one 8-bit component each for red, green and blue, and one for a "blending control" value. The method is preferably implemented with two frame buffers of this configuration, although in a simpler embodiment the blending plane is only required in one of the frame buffers. In this simpler embodiment there is no requirement to view the contents of the 32-bit frame buffer, so the technique is fully applicable to a system where a 24-bit frame buffer is available and ordinary computer memory is used for the second, 32-bit frame storage area.
Figure 2 shows a two frame buffer configuration with a partly drawn image represented in a viewable store 30. A computer 32 is also shown which addresses the frame buffer storage by known techniques. A tablet and stylus 34 are shown connected to the computer. Scan circuitry 36 scans the 24 bits of RGB frame buffer 30 to provide separate red, green and blue signals 38 for display on a monitor 40 by means of any necessary digital to analogue conversion. There is an 8 bit blending control plane 42 associated with the store 30, and a second store 44 has a similar blending control plane 46 associated with it. All the stores are addressable by the computer 32.
The drawing process is now described with reference to Figure 3 which shows the operation for one 8-bit colour plane of the 24-bit colour store. The circuitry is replicated for each of the three colours with the same 8-bit blending plane process being common to each of the three colour components.
First of all the partly drawn image is transferred to the second frame buffer 44 and any contents of the corresponding blending plane 46 are erased. The artist commences to draw and the painting algorithm already described with reference to Figure 1 is applied to blending plane 46, with a comparator 48 determining the data 22 written back to the blending plane 46. However, this data is now also fed to a linear interpolator 50 where it is used as the alpha value for a linear interpolation between the original background image data from store 44 at input B, and ink or paint colour data from a store 52 providing the input A of interpolator 50. The ink colour data represents the magnitude of the appropriate R, G or B component of a 24-bit ink colour previously selected by the operator.
The data for each pixel at the output of the interpolator 50 is written into the frame store 30 at the corresponding pixel to enable the artist to view the results as he works. Thus after each brush application the image in the visible frame buffer 30 is a blending of the original background picture stored unchanged in frame buffer 44 and the current ink colour under control of the blending plane 46.
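A minimal sketch of this per-pixel compositing step, assuming 8-bit values throughout (the helper name is illustrative):

```python
def composite_pixel(alpha, ink, background):
    """Blend one colour component: alpha comes from the blending plane (store 46),
    ink from the selected colour (store 52), background from store 44; the result
    is written to the corresponding pixel of the viewed store 30."""
    return (alpha * ink + (255 - alpha) * background) // 255

assert composite_pixel(0, ink=200, background=30) == 30     # no ink deposited
assert composite_pixel(255, ink=200, background=30) == 200  # full ink
print(composite_pixel(128, 200, 30))                        # part-way between the two
```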
Because of the comparison test applied as the artist writes into store 46, when the artist redraws over lines already drawn, or holds the pen still at one location, the edge pixels will not rise above their prescribed values in the blending plane. Thus any soft edged brush data is not corrupted by repeated applications.
The data in store 44 can be periodically updated to that in store 30 and blending plane data in store 46 cleared by a user control operated by the artist. This could also be done automatically, for example when the ink colour is changed.
The operator can at any time choose to reverse the drawing process into the blending plane as described above and "undraw" over areas already drawn on thus creating a wholly new drawing technique at the touch of a button.
A further development of this process is to provide the ability for the addressing circuitry controlling the linear interpolator 50 to perform a "refresh" operation on the viewed store 30. This is shown in Figure 4 where each pixel is processed over a rectangular area (which is usually the entire extent of stores 30, 44 and 46 but may be a more limited area if desired). The original image in store 44 is re-written to store 30 but the blending plane data in store 46 is maintained. The ink colour from store 52 is then re-applied over the whole of the rectangular area under the control of data from store 46.
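The refresh pass might be sketched as below, reusing the compositing rule above; the array names are assumptions:

```python
def refresh(viewed, background, blending, ink):
    """Rewrite the viewed store (30) from the untouched background (44), then
    re-apply the current ink colour under control of the blending plane (46)."""
    h, w = viewed.shape
    for y in range(h):
        for x in range(w):
            a = int(blending[y, x])
            viewed[y, x] = (a * ink + (255 - a) * int(background[y, x])) // 255
```

Changing the ink value before the call recolours the existing drawing, and clearing the blending plane first restores the untouched background, as described below.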
This can be used if for some reason the data in store 30 becomes corrupted and it is necessary to refresh it to show the original background picture together with the current ink drawing from store 46. However, a new effect is achieved if ink store 52 is updated with a new value before this refresh operation is performed. On refreshing the artist's drawing will change colour. This is ideal if the drawing work is felt to be good by the artist but he is not happy with the colour. He can select a new colour and see the drawing in that colour in the time it takes the process of Figure 4 to be applied to the whole screen.
Another use of this redrawing technique is to wipe the store 46 before performing the refresh so the artist regains his untouched background image and can begin painting afresh if he is unhappy with the drawing he has made.
The artist can also choose to fill store 46 with the value 255 and perform a refresh, filling the entire screen with ink. He can then use the reverse drawing, or undraw, operation above to selectively reveal parts of the background image.
When an artist has finished drawing in his desired colour and wishes to choose a new colour without losing what he has already drawn, it is necessary to make the previous drawing permanent. This is done by discarding the non-viewed image in store 44 and making the viewed image in store 30 the new background picture. This new background image is copied to store 44 to act as the basis for the next sequence of drawing operations as described above. The store 46 is cleared at the same time. This can conveniently be done when the artist goes to select a new colour, and the process can be carried out automatically without the artist being aware of it. To achieve the colour changing of an existing drawing as described above he would carry out a different set of operations which would stop this automatic discarding of the necessary original background image.
A real-time refresh version of this algorithm is now described which removes the need for two frame stores and also allows the colour changing of an existing drawing described above to be performed in real-time.
Figure 5 shows a real-time implementation where the linear interpolation circuitry 50 is fast enough to perform its calculations on video data as it is scanned.
The background image is placed into store 44 as previously described. Under computer control (not shown) the artist draws into blending plane 46 using the simple drawing process of Figure 1 or its development in which "undrawing" is permitted.
The scan circuitry 36 scans the three colour components and the blending component at video rates. The same blending data is used for each of the three colours and the process for green only is shown in Figure 5.
Green data is fed to the B input of the linear interpolator 50 and the green component of the current ink colour is fed to the A input. Blending data from store 46 is fed to the control input to cause an interpolation between the original picture and the current ink colour. The output data is fed (via any necessary digital to analogue conversion) to a monitor 54 on which the artist views the result of his drawing.
By using a real-time interpolator and repeatedly scanning the blending plane as the image is viewed, this arrangement relieves the drawing process of the additional step of writing to store 30 as in Figure 3, with a consequent gain in speed and circuit simplicity.
Once the artist is happy with the image he has drawn, the computer can cause the image in store 44 to be updated with the current ink colour.
There is now described an improvement to this drawing process utilising the extra 8-bit blending plane 42 (Figure 2). In this process, plane 42 acts as a protection plane for the image, preventing the visualisation of the drawing already described in selected areas of the store. Figure 6 shows the addition of a data path from the extra 8-bit store 42 providing data for each pixel to the B input of arithmetic function unit F 54, along with data from the blending process of comparator 48 being fed to input A of function unit 54. The resulting value is fed as the control value to the interpolator 34.
Function F is arranged using straightforward techniques to provide an output equal to A - B (if A is greater than B) or 0 (if A is not greater than B).
It can be seen that a large value of data in store 42 will be subtracted from the blending value from comparator 48 to give a much smaller blending value, thus preventing the ink from being applied (or being visible) in the viewed store 30 (or on the real-time monitor of Figure 5 if that implementation is used). A value of 255 in store 42 will always result in no blending, and a value of 0 will always allow ink to appear. Thus any data in store 42 acts as a protection mask with varying degrees of transparency.
Due to the non-iterative nature of this protection process, repeated brush applications cannot "eat into" the edge of the protective shield as is the case with iterative methods. Any soft edge in the protection plane 42 remains a soft edge.
An alternative implementation of function F which yields similar results is for its output 25 to be calculated as
(A * (255 - B))/255 but this is generally more expensive to implement. A divisor of 256 can be used to save substantially on cost, but steps need to be taken to achieve 100% ink application if this is done. The protective drawing in store 42 can be created using all the above drawing techniques while substituting the output of store 42 for the output of store 46. At the end of drawing the mask, instead of keeping the blended image (which would overlay the protective mask as ink), the background image is retained and the output of store 42 is switched back to the configuration of Figure 6.
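Both forms of function F described above can be sketched as follows; the function names are assumptions for illustration:

```python
def protect_subtract(a, b):
    """Function F: output A - B when A is greater than B, otherwise 0."""
    return a - b if a > b else 0

def protect_scale(a, b):
    """Alternative F: (A * (255 - B)) / 255, a proportional but costlier mask."""
    return (a * (255 - b)) // 255

# A mask value of 255 (store 42) always suppresses the ink; 0 leaves it untouched.
assert protect_subtract(200, 255) == 0 and protect_scale(200, 255) == 0
assert protect_subtract(200, 0) == 200 and protect_scale(200, 0) == 200
```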
Circuitry needs to be added to replace the data from store 42 with zero if the artist wishes to disable this protective function.
The system as described is perceived as very "abrupt" to the artist. When he does not press on the pen the drawing process described does not take place, but when he presses, the ink is deposited at its maximum magnitude straight away. This process can be made more natural by making this deposition more gradual.
Figure 7 shows a development of the process of Figure 1 in which a pressure value derived from the pen, in the range 0 (no pressure) to 255 (full pressure), causes a gradual increase in the effect of the drawing implement. The value from the pen 58 is fed to an inverter 36 to generate a value which is 255 for no pressure and 0 for full pressure. This is subtracted from the profile data 5 in a subtractor 56 and the resultant value is fed to the comparator 48.
It can be seen that at no pressure, the value of data 27 is always 255 and when this is subtracted from the value of the profile the result is always negative or zero. Hence the test in comparator 6 always fails and no drawing occurs. As the pressure is increased, data from the inverter 60 decreases allowing more and more values from brush store 5 to result in positive values which can affect the drawing. When full pressure is reached the full drawing process as originally described takes place.
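An illustrative sketch of this pressure-modified comparison, assuming 8-bit pressure and profile values (the function name is not from the patent):

```python
def apply_brush_with_pressure(frame, profile, cx, cy, pressure):
    """Lower the whole profile by (255 - pressure) before the comparison, so
    that light pressure sinks the brush below the "water level" of zero."""
    offset = 255 - pressure                  # inverter: 255 at no pressure, 0 at full
    bh, bw = profile.shape
    y0, x0 = cy - bh // 2, cx - bw // 2
    for j in range(bh):
        for i in range(bw):
            v = int(profile[j, i]) - offset  # subtractor output
            y, x = y0 + j, x0 + i
            if v > 0 and 0 <= y < frame.shape[0] and 0 <= x < frame.shape[1]:
                if v > frame[y, x]:          # same comparator test as before
                    frame[y, x] = v
```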
This causes an effect of lowering the entire profile below a "water level" of zero. A soft shaped profile will thus decrease in the size of its perimeter as well as in altitude.
Brush profiles for all these operations can be a choice of predefined shapes or can be generated by the artist from areas of any drawn image. Some examples with their effects are described here.
Most predetermined shapes can be considered as samples taken at the centre of each pixel of a 3-dimensional profile shape (as for example illustrated in Figure 1).
(i) The cone
This is a sample of a conical profile with a peak of 255 at the centre descending to 0 at a fixed distance from the centre. It creates a good soft edged effect which can be considered an analogue to an airbrush except that with this drawing technique no ink build-up occurs.
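One assumed way of generating such a cone, sampled at pixel centres:

```python
import numpy as np

def cone_profile(radius):
    """255 at the centre, falling linearly to 0 at `radius` pixels out,
    sampled at the centre of each pixel of the brush matrix."""
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    dist = np.hypot(yy, xx)
    return np.clip(255.0 * (1.0 - dist / radius), 0, 255).astype(np.uint8)

brush = cone_profile(8)      # a 17 x 17 matrix with a peak of 255 at the centre
```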
(ii) The plateau
This is a sample of a top-hat like cylinder placed over the sampling matrix. A pure cylinder will result in aliasing as it contains edge frequencies in excess of the sampling matrix. Such a shape results in samples with all the central area pixels at the value of 255. The edge values are all zero. This can be used where aliasing is acceptable or where other techniques are available for removing such aliasing (see for example UK Patent Application No. 9002463.9). In practice better results are obtained if the edges are softened by prefiltering the sampled image in each direction according to normal sampling practice. This results in an edge profile where there are some intermediate values between 255 and zero.
Using the plateau shape described above, better quality lines are obtained if multiple samples of the plateau are made with small (sub-pixel) displacements between the sampling matrix and the shape. When each drawing application is made, higher precision data from the tablet is used to select the appropriate sub-pixel positioned sample so that the deposited circular brush more accurately reflects the location of the pen on the tablet.
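A sketch of how prefiltered, sub-pixel displaced plateau samples might be produced; the filtering by supersampling and averaging, and the quarter-pixel grid, are assumptions for illustration:

```python
import numpy as np

def plateau_profile(radius, size, dx=0.0, dy=0.0, oversample=8):
    """Supersample a cylinder of the given radius, optionally displaced by a
    sub-pixel offset (dx, dy), then average down so that edge pixels receive
    intermediate values between 0 and 255."""
    n = size * oversample
    step = 1.0 / oversample
    coords = (np.arange(n) + 0.5) * step - size / 2.0      # fine sample coordinates
    yy, xx = np.meshgrid(coords - dy, coords - dx, indexing="ij")
    inside = (np.hypot(yy, xx) <= radius).astype(np.float64)
    # average each oversample x oversample cell down to a single pixel sample
    inside = inside.reshape(size, oversample, size, oversample).mean(axis=(1, 3))
    return np.round(inside * 255).astype(np.uint8)

# a bank of quarter-pixel shifted plateaus; the drawing code would pick the one
# whose offset best matches the high-precision position reported by the tablet
bank = {(i, j): plateau_profile(5.0, 15, dx=i / 4.0, dy=j / 4.0)
        for i in range(4) for j in range(4)}
```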
The technique described herein, in which the viewed image is a composite of ink with the background image and in which the background image is preserved, can be used with other drawing operations into the blending plane, for example where geometric shapes are drawn by the computer into this plane, followed by the above described refresh operation to render them visible (not necessary where the real-time option is used). Also it is possible for the artist to shift entire areas of the blending store by means of a raster copy operation if he wishes to shift his drawing with respect to the background before it is composited.
All these techniques can be combined with the type of high resolution drawing processes described in UK Patent Application No.9002463.9 to achieve additional fineness of precision in drawing.
Figure 8 shows an implementation of the linear interpolator used herein in the various examples described.
An Alpha value 70 is multiplied with input video A in multiplier 72. Both input values are in the range 0 - 255. The output is divided by 255 in divider 74 to produce a result 76 in the range 0 - 255.
The Alpha value is also inverted by inverter 78 to produce 255 minus alpha at 80. This value is multiplied by input B in multiplier 82 and divided by 255 in divider 84. The results from dividers 74 and 84 are summed in adder 88.
The output data 86 thus consists of a linear interpolation between input data A and input data B, such that if alpha is 0, the output is B and if alpha is 255 the input is A. Intermediate values of alpha will give linearly interpolated intermediate values at 86.
The dividers 74 and 84 can be more easily made to divide by 256 in which case the arithmetic is slightly wrong but this can be acceptable due to the non-iterative nature of the process. In practice there are well known techniques to correct for this and these should be used to prevent a slight change in the picture as each layer of ink is added.
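The interpolator of Figure 8 can be sketched as below; the `div255` helper shows one well-known shift-and-add substitute for a true divide-by-255 (divide by 256 plus a correction), which gives correctly rounded results for the 16-bit products that arise here:

```python
def div255(x):
    """Rounding division by 255 for 0 <= x <= 255*255, using only adds and
    shifts (a standard correction for a cheap divide-by-256)."""
    x += 128
    return (x + (x >> 8)) >> 8

def interpolate(alpha, a, b):
    """Figure 8: output = A*alpha/255 + B*(255 - alpha)/255, all values 0..255."""
    return div255(alpha * a) + div255((255 - alpha) * b)

assert interpolate(0, a=10, b=200) == 200    # alpha of 0 selects B
assert interpolate(255, a=10, b=200) == 10   # alpha of 255 selects A
```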
A substantial change to the drawing algorithm discussed can be achieved by exchanging the comparator of Figure 1 for a different circuit. This new circuit adds the two incoming values and then checks to see if the sum is higher than 255. If this is the case the value of 255 is fed to the output. Such a circuit is easily achievable using known techniques.
The effect of this change is that while the pen is held still, ink continues to 'add up' until a maximum value of ink is achieved. This can be used to simulate airbrushes where in a traditional airbrush ink continues to flow while the release valve is held pressed.
This technique can be of use but suffers from the disadvantage avoided by the techniques described above in that the softness of edges is destroyed by repeated applications. If the pen is held still for some time aliased results occur. Despite this the effect may be preferred. A system in which the circuits are selected under artist control allows the artist to choose whether to use this effect.
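As an illustration, this saturating-add variant can be written as a drop-in replacement for the earlier `apply_brush` sketch:

```python
def apply_brush_additive(frame, profile, cx, cy):
    """Add the profile to the frame store and clamp at 255, so ink keeps
    building up while the pen is held still (airbrush-like behaviour)."""
    bh, bw = profile.shape
    y0, x0 = cy - bh // 2, cx - bw // 2
    for j in range(bh):
        for i in range(bw):
            y, x = y0 + j, x0 + i
            if 0 <= y < frame.shape[0] and 0 <= x < frame.shape[1]:
                frame[y, x] = min(255, int(frame[y, x]) + int(profile[j, i]))
```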
The problems of repeated calculation of control image values for pixels in overlapping brush strokes and of the painting process lagging behind the pen position are now considered.
When the pen is moved too fast there is a position in the control image where the last brush application was made and a position where the pen is currently being held. Where these two positions are different the brush must be applied to generate a line from the last brush application to the pen position thus filling in the line between the two pen positions. This line may be curved reflecting the path the pen went through to get to the new position but a straight line will give satisfactory results if the brush drawing is not lagging behind the pen position by more than a few pixels.
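A sketch of filling in that gap with evenly spaced brush applications, reusing the earlier `apply_brush` helper (the spacing parameter is an assumption):

```python
import math

def stroke_segment(frame, profile, x0, y0, x1, y1, spacing=1.0):
    """Apply the brush at evenly spaced points on the straight line from the
    last application (x0, y0) to the current pen position (x1, y1)."""
    length = math.hypot(x1 - x0, y1 - y0)
    steps = max(1, int(length / spacing))
    for k in range(steps + 1):
        t = k / steps
        apply_brush(frame, profile,
                    round(x0 + t * (x1 - x0)),
                    round(y0 + t * (y1 - y0)))
```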
If the viewing operation is left until control image values for the whole line are known, this will result in line generation lagging even further behind the actual pen position. The viewed screen needs to be updated as soon as possible to give the greatest interactive user feedback. A solution is to work out the viewed screen pixels after the brush has passed over them and the control image contents are complete, i.e. the calculation which modifies the viewed screen is not done after each brush application. For long lines (greater than the size of the brush itself) this will be a useful speedup.
While in this mode the viewed line will always be at least one half of the brush size behind the pen position. This is because at this distance the final control image contents are known and the viewed screen at this position is then updated accordingly. A combination of this method for long lines and the original method for short lines gives good results.
Where brush applications are excessively lagging behind the pen position there are various possibilities for speeding up the drawing of a straight line segment. The simplest method is to increase the distance between adjacent applications so that with fewer applications required for a given distance the line can be drawn faster. This reduces the quality of the line and in practice the difference can be considerable.
Figure 9 shows the repeated application method and how this is affected by increasing the spacing for a line from A to B. The spacing shown is larger than would ever be used in practice, but looking at a pixel at point P it can be seen that 3 writes are necessary in Figure 9a and only one write in Figure 9b.
For any pixel in the path of a line to be drawn there will be a sequence of control image values applied as the brush profile is passed over the pixel. Viewing this in reverse, a line is traced back through the brush matrix for the control image pixel. Considering the point P in Figure 9a, 3 values were read from different positions within the brush matrix. Figure 9c shows these 3 positions, which form a straight line through the matrix. The only brush control value required for the output control image is the largest value along the line traced back through the brush matrix. This value is then combined with the control image in the normal way and the viewed screen updated.
If the start and end points of the traced line are known then the rest of the line may be interpolated for a straight line. This is achieved by making the distance between position samples along the line correspond to the preferred distance between brush applications and applying the brush at each sample position. The start and end points can be worked out independently of the sample distance and hence independently of the final brushed line quality.
There are various methods for finding the position of the line through the brush matrix. The simplest is a simulation of the repeated application method where the brush matrix positions along the line can be examined in relation to the pixel required and the necessary pixels read from the brush matrix.
There are some refinements to this method. Once the start position of the line is known it is only necessary to move through the matrix in the reverse direction of the original line. It is also possible to generate a data structure containing the line start conditions for every pixel in the line before brush matrices are examined. This preprocessing can be done quickly because the initial conditions for adjacent pixels are very similar.
A more accurate way of finding the line through a brush matrix is shown in Figure 10. Here a new line is constructed going through the pixel P and parallel to the brushed line from A to B. Where this constructed line intersects a brush matrix placed at A, the start and end positions (X and Y) of the traced line through the brush matrix A are given. The brush matrix is shown here as square, as this is the usual implementation for the matrix and it makes the intersection simpler to solve. Of course the brush profile may be any shape within the square matrix holding it. Knowledge of these points (X and Y) enables the highest value in the brush matrix to be determined and then applied at the pixel P as well as at other pixels along the line.
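An illustrative sketch of this per-pixel calculation: construct the line through P parallel to the stroke from A to B, sample it inside the square brush matrix placed at A, and keep the largest brush value met. The fixed sampling step is an assumption; the method described above determines the intersection points X and Y exactly.

```python
import math

def max_brush_value_along_line(profile, ax, ay, bx, by, px, py, step=0.5):
    """Largest profile value met by the line through pixel (px, py) that runs
    parallel to the stroke A->B, sampled inside the square brush matrix at A."""
    size = profile.shape[0]                  # square matrix, brush centred on A
    half = size // 2
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    if length == 0:
        return int(profile[half, half])
    ux, uy = dx / length, dy / length        # unit direction of the stroke
    # distance along the constructed line from P to the point closest to A
    t0 = (ax - px) * ux + (ay - py) * uy
    best = 0
    steps = int(math.ceil(2 * size / step))  # enough to cross the whole matrix
    for k in range(steps + 1):
        t = t0 - size + k * step
        sx, sy = px + t * ux, py + t * uy    # a point on the constructed line
        mx = round(sx - ax) + half           # its position inside the matrix at A
        my = round(sy - ay) + half
        if 0 <= mx < size and 0 <= my < size:
            best = max(best, int(profile[my, mx]))
    return best
```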
Where start and end points are designated and the old repeated application method is to be used, pressure values are linearly interpolated between the start and end pressure values from the pen values. For any one application of the brush only one pressure value is used. The pressure change between adjacent applications can be considerable and this discontinuity is picked out easily by the human eye. In the new method each pixel is considered separately, so the pressure value may be calculated for each pixel from its distance along the line being drawn, making the line more realistic since large discontinuities are not seen on the line.
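The per-pixel pressure can be sketched as a projection of the pixel onto the stroke (names assumed):

```python
def pressure_at(px, py, ax, ay, bx, by, p_start, p_end):
    """Interpolate pressure for the pixel (px, py) from its projected distance
    along the stroke A->B, avoiding visible pressure steps along the line."""
    dx, dy = bx - ax, by - ay
    length_sq = dx * dx + dy * dy
    if length_sq == 0:
        return p_start
    t = ((px - ax) * dx + (py - ay) * dy) / length_sq   # 0 at A, 1 at B
    t = min(1.0, max(0.0, t))
    return round(p_start + t * (p_end - p_start))
```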
Where assumptions can be made about the brush matrix values there is a faster way of finding the largest matrix value along a straight line through the brush matrix. For a line going through a brush matrix with a centred conical profile, the value required from the matrix will be derived from the position where the line is closest to the centre. Another method is to use a binary search along the line. This will work where the cone is distorted, e.g. to accommodate a non-square pixel aspect ratio. In general a binary search will work when the profile has one maximum and no minimum along the line.
To make this method look visually pleasing to the user, the order in which the control image pixels are written needs to be such that the line appears to follow the pen direction. Also the shape of the end of the line as it is being drawn should look something like the profile of the brush itself. A shape such as a 'V' is good enough for this and not very difficult to generate.
Where a straight line segment is very short (less than a brush width) it may be more efficient to use the method of applying whole brush applications since the number of applications required could be as low as 1.

Claims

CLAIMS
1. A video image creation system comprising means for storing a video image, means for storing a control image corresponding to at least a portion of the video image, manually operable means providing data signals pertaining to a patch of image points, means for altering the data in the control image in dependence on the result of a comparison between the data signal in each pixel of the patch and the data stored in each corresponding pixel of the control image, means for receiving the video image and the control image and modifying a component of the video image in response to the data signals stored in the control image, and means for displaying the thus modified video image.
2. A system according to claim 1 in which the modifying means modifies the video image by combining data in the video image with new image data in proportions dependent upon values stored in the control image.
3. A system according to claim 1 and 2 including means for selectively storing the modified video image.
4. A system according to claim 3 in which the modified video image is selectively stored in the same storing means as the first video image.
5. A system according to any preceding claim in which the modified video image is stored in a further storage means.
6. A system according to any of claims 1 or 5 in which the control image is altered in response to pressure applied to a pressure sensitive means in the manually operable means.
7. A video image creation system according to any preceding claim comprising means for determining start and end points of a line drawn with the manually operable means and means for determining a maximum value of the data signals in the patch to be used in altering the data signals in the control image for each pixel in the line.
8. A video image creation system according to claim 7 in which the determining means positions a patch at the start and end points of the line and the maximum value to be used for altering the data signal stored in a pixel in the control image is determined from the maximum value of the data signals in the patch intersected by a line passing through that pixel and the patch.
9. A method for creating a video image comprising the steps of providing a store for video image data, providing a store for control image data corresponding to at least a portion of the video image, providing data signals from a manually operable means, said signals pertaining to a patch of image points, comparing the data signal for each pixel in the patch with the data stored in a corresponding pixel in the control image, altering the data in each pixel of the corresponding patch in the control image in dependence on the result of the comparison, reading out video image data and control image data from the respective stores, modifying a component of the video image data in response to the control image, and displaying the thus modified video image data on a display means.
10. A method according to claim 9 in which the modifying step comprises combining the video image data with new video image data in proportions dependent upon the control image data values.
11. A method according to claim 9 or 10 including the step of selectively storing the modified video image.
12. A method according to any of claims 9 to 11 or 14 in which the step of altering the control image includes the step of providing signals in response to pressure applied to a pressure sensitive means in the manually operable means and altering the control image in response to these signals.
13. A method according to claim 15 in which the data signals pertaining to the patch of image points are provided in response to the signals provided by the pressure sensitive means.
14. A method according to any preceding claim including the steps of determining start and end points of a line drawn with the manually operable means, and determining a maximum value of the data signals in the patch to be used in altering the data signals in the control image for each pixel in the line.
15. A method according to claim 14 including the steps of positioning a patch at the start and end points of the line, simulating a line substantially parallel to the line drawn by the manually operable means and passing through a pixel in the control image and intersecting the patch, and determining the maximum value of the data signals in the patch on the simulated line.
EP19910906949 1990-03-30 1991-03-28 Video image creation Withdrawn EP0532505A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB909007136A GB9007136D0 (en) 1990-03-30 1990-03-30 Video image creation
GB9007136 1990-03-30

Publications (1)

Publication Number Publication Date
EP0532505A1 true EP0532505A1 (en) 1993-03-24

Family

ID=10673538

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19910906949 Withdrawn EP0532505A1 (en) 1990-03-30 1991-03-28 Video image creation

Country Status (3)

Country Link
EP (1) EP0532505A1 (en)
GB (1) GB9007136D0 (en)
WO (1) WO1991015830A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5412402A (en) * 1989-09-01 1995-05-02 Quantel Limited Electronic graphic systems
CA2076687A1 (en) * 1991-11-27 1993-05-28 Thomas A. Pandolfi Photographic filter metaphor for control of digital image processing software
FR2702861B1 (en) * 1993-03-15 1995-06-09 Sunline Method for processing an image in a computerized system.
CN1147822C (en) * 1993-03-25 2004-04-28 Mgi软件公司 Method and system for image processing

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4345313A (en) * 1980-04-28 1982-08-17 Xerox Corporation Image processing method and apparatus having a digital airbrush for touch up
US4514818A (en) * 1980-12-04 1985-04-30 Quantel Limited Video image creation system which simulates drafting tool
EP0202747A3 (en) * 1985-04-20 1989-10-18 Quantel Limited Improvements in or relating to video image creation systems

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO9115830A1 *

Also Published As

Publication number Publication date
GB9007136D0 (en) 1990-05-30
WO1991015830A1 (en) 1991-10-17

Similar Documents

Publication Publication Date Title
EP0202014B1 (en) Improvements in video image creation systems
US4633416A (en) Video image creation system which simulates drafting tool
US4602286A (en) Video processing for composite images
US5459529A (en) Video processing for composite images
US7382378B2 (en) Apparatus and methods for stenciling an image
US5754183A (en) Image processing apparatus and method for producing pixel data in dependence upon the shape of a sectional line extending between boundary lines of an object
US5216755A (en) Video image creation system which proportionally mixes previously created image pixel data with currently created data
US4524421A (en) Computerized graphics system and method using an electronically synthesized palette
US5175806A (en) Method and apparatus for fast surface detail application to an image
US4345313A (en) Image processing method and apparatus having a digital airbrush for touch up
US6496599B1 (en) Facilitating the compositing of video images
US5412402A (en) Electronic graphic systems
US6571012B1 (en) Adjusting a softness region
US5999194A (en) Texture controlled and color synthesized animation process
US5289566A (en) Video image creation
WO1992021096A1 (en) Image synthesis and processing
GB2140257A (en) Video image creation
US5222206A (en) Image color modification in a computer-aided design system
US5412767A (en) Image processing system utilizing brush profile
GB2157122A (en) Image composition system
US5175625A (en) Video image creation systems combining overlapping stamps in a frame period before modifying the image
US4751503A (en) Image processing method with improved digital airbrush touch up
US6456300B1 (en) Method and apparatus for processing image data to produce control data
EP0532505A1 (en) Video image creation
GB1605135A (en) Variable image display apparatus

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19921022

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): BE CH DE ES FR GB LI NL

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 19941001