GB2256109A - Transforming a two-dimensional image video signal on to a three-dimensional surface - Google Patents

Transforming a two-dimensional image video signal on to a three-dimensional surface

Info

Publication number
GB2256109A
GB2256109A
Authority
GB
United Kingdom
Prior art keywords
spot light
signal
image
video signal
light source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB9207660A
Other versions
GB9207660D0 (en)
GB2256109B (en)
Inventor
Masafumi Kurashige
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Publication of GB9207660D0
Publication of GB2256109A
Application granted
Publication of GB2256109B
Anticipated expiration
Current legal status: Expired - Fee Related

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Generation (AREA)
  • Studio Circuits (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Apparatus for transforming a two-dimensional input video signal on to a three-dimensional surface and for depicting illumination thereof by a spot light source transforms (30) the two-dimensional video signal in accordance with a mapping data signal on to the three-dimensional surface. A spot light source signal including first and second data signals is generated, the first data signal representing the direction of the axis of a spot light source and the second data signal representing a radius of the spot light source. A spot light key signal Ko is generated (22) based on the second data signal and a distance signal representing distances between locations on the three-dimensional surface and the axis of the spot light source based on the first data signal and the mapping data signal. At least one of a luminance component Y and hue components U, V of at least one of the two-dimensional video image signal and the transformed two-dimensional video image signal is modified (32) in accordance with the spot light key signal Ko.

Description

TRANSFORMING A TWO-DIMENSIONAL IMAGE VIDEO SIGNAL ON TO A THREE-DIMENSIONAL SURFACE

This invention relates to apparatus and methods for transforming a two-dimensional image input video signal on to a three-dimensional surface.
A previously-proposed image processing apparatus provides the capability of converting video signals representing two-dimensional images into transformed video signals representing cylindrically shaped three-dimensional images. The operation of the previously-proposed apparatus is illustrated with reference to Figures 5(A) and 5(B) of the accompanying drawings. Referring to Figure 5(A), an input video signal representing an image IM1 is divided into a plurality of blocks corresponding to respective portions of the two-dimensional image, such as the illustrated block B1. Referring also to Figure 5(B), the input video signal representing the two-dimensional image depicted in Figure 5(A) is transformed to depict a three-dimensional, cylindrical image by transforming each of the blocks of the video signal representing corresponding portions of the image IM1 (for example, the block B1) into corresponding portions of the three-dimensional, cylindrical image (for example, the block B2 corresponding to the original block B1).
US Patent No. US-A-4 965 844, assigned to the present applicants, describes a further image converting apparatus which provides an image of the type illustrated in Figure 5(B) which is shaded to depict illumination by a diverging beam of light from a desired position, such as the image illustrated in Figure 6(A).
A block diagram of the previously-proposed image converting apparatus is provided in Figure 7, in which a host computer 1, such as a microcomputer, is coupled to a mass memory 2 and an input/output device 3. The mass memory 2 contains programs stored in advance such as a conversion program for converting a video signal representing a two-dimensional image into a transformed video signal representing a three-dimensional cylindrical image, as described above.
When a desired program stored in the mass memory 2 is designated by means of the input/output device 3, the host computer 1 reads the program from the mass memory 2 and, as a result of the program, generates data necessary for carrying out the image conversion, as described below. The generated data is then stored by the host computer 1 in a buffer memory 4. In the image conversion process, a target image is divided into a plurality of blocks and the image transformation process is carried out block by block. In the case of the two-dimensional image as depicted in Figure 5(A), the original image IM1 thereof may comprise 64 by 96 blocks, each block including 8 x 8 pixels. The converted image IM2, as illustrated in Figure 5(B), comprises 128 x 128 blocks, each including 4 x 6 pixels. (Both arrangements contain the same total of 393,216 pixels: 64 x 96 blocks of 64 pixels each, and 128 x 128 blocks of 24 pixels each.) The selected program determines a post-conversion position of a representative point of each block in the original image IM1 in accordance with X, Y and Z three-dimensional coordinates. The results of such determinations are then stored in the buffer memory 4 of Figure 7.
It will be seen that the number of blocks in the original image differs from the number of blocks in the transformed image in the above illustrative operation. Consequently, the blocks of the converted image do not necessarily correspond on a one-to-one basis with the blocks of the original image. Nevertheless, the resulting converted image is determined generally by the block in which the representative point of each given block (for example, block B1) of the original image IM1 is placed (for example, block B2 in Figure 5(B)).
The data for the converted image are obtained in the following manner. With reference to Figure 8(A), four blocks of the original image have respective representative points a, b, c and d. These four blocks, as illustrated in Figure 8(A), surround a central block having a representative point P. Once the conversion has been accomplished, the point P is relocated to a position P' as illustrated in Figure 8(B), while the positions of the representative points a, b, c and d of the surrounding blocks in Figure 8(A) are relocated to the positions indicated by the points A, B, C and D, respectively, in Figure 8(B).
The three-dimensional coordinates of the post-conversion representative points, such as points A, B, C, D and P' as illustrated in Figure 8(B), define the type of surface to be formed by the transformation process. A post-conversion surface is then produced by linear approximation in the vicinity of each representative point as follows.
In order to approximate the portion of the surface containing the point P' by such linear approximation, the direction of that surface is defined as parallel to two segment vectors: the segment vector AC connecting the points A and C in Figure 8(B), and the segment vector DB connecting the points D and B. In Figure 8(B), the linearly approximated surface containing the representative point P' is defined by two unit vectors: a unit vector PX parallel to the vector AC and a unit vector PY parallel to the vector DB. In this fashion, the surface corresponding to each representative point is linearly approximated, until the entire surface of the converted image is thus obtained.
The magnitude of the vector PX and that of the vector PY are each made proportional to the distances between the representative points A, C and D, B, respectively, of the adjacent blocks as follows:

PX = AC/4 ..... (1)
PY = DB/4 ..... (2)

As described above, the buffer memory 4 contains data required for transforming the representative points of the blocks of the original image IM1, for obtaining the post-conversion positions of these representative points, and for converting the difference values involved. Such data in the buffer memory 4 is provided to an image converting circuit 5 which also receives the input image data from an input terminal 6. The input image data is converted by the circuit 5 in accordance with the data received thereby from the buffer memory 4 and the converted data is provided at an output terminal 7.
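A minimal sketch of equations (1) and (2) in Python with NumPy (the function and variable names are illustrative, not taken from the patent):

```python
import numpy as np

def local_surface_basis(A, B, C, D):
    """Basis vectors of the linearly approximated plane through a converted
    representative point P', as in Figure 8(B).

    A, B, C and D are the post-conversion three-dimensional positions of the
    representative points of the four surrounding blocks.
    """
    A, B, C, D = (np.asarray(v, dtype=float) for v in (A, B, C, D))
    PX = (C - A) / 4.0  # equation (1): PX = AC/4, parallel to segment AC
    PY = (B - D) / 4.0  # equation (2): PY = DB/4, parallel to segment DB
    return PX, PY
```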
As a preliminary step, the image converting circuit 5 designates a target region of the input image to be converted using the data from the buffer memory 4. That is, the image converting circuit 5 first specifies to what region a given region (for example, the region B1) in the original image IM1, as illustrated in Figure 5(A), will be transformed (for example, the region B2 in the converted image IM2 as illustrated in Figure 5(B)).
According to this process, read addresses for every pixel in the target region are obtained and placed in an input buffer memory of the image converting circuit 5 of Figure 7. Then, in accordance with each read address thus obtained from the input buffer memory, the corresponding pixel data is obtained and written in an output buffer memory at an address representing the post-conversion position thereof.
The image conversion process is carried out concurrently with a smoothing process whereby jagged edge portions which may appear between the background and the image contour are smoothed out by interpolating sampling points in the image to produce additional sampling points.
The additional sampling points produced by interpolation are likewise written in the output buffer memory.
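The patent does not specify the interpolation kernel; the following minimal sketch assumes simple linear (midpoint) interpolation along one dimension, which suffices to show how additional sampling points double the sampling density:

```python
import numpy as np

def interpolate_midpoints(samples):
    """Insert a linearly interpolated sample between each adjacent pixel pair
    of a one-dimensional run of pixel values (an assumed kernel)."""
    samples = np.asarray(samples, dtype=float)
    mids = 0.5 * (samples[:-1] + samples[1:])  # additional sampling points
    out = np.empty(samples.size + mids.size)
    out[0::2] = samples                        # original sampling points
    out[1::2] = mids                           # interpolated sampling points
    return out
```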
The foregoing describes the basic process for converting a video signal representing a two-dimensional image into a transformed video signal representing a three-dimensional image in accordance with the previous proposal. Such a system may also include a shading coefficient memory 8 which serves to provide the output image with shading to depict illumination by diverging light from a source located at a particular position relative to the converted image. For this purpose, the memory 8 provides shading coefficients to the image converting circuit 5 for weighting the image data output therefrom to produce a shading effect such as that illustrated in Figure 6(A). If the image is monochromatic, the brightness level thereof is changed in accordance with the shading coefficient; if the image is a colour image, the hue and/or brightness level thereof is varied depending upon the degree of shading represented by the shading coefficients.
With reference to Figure 9, an illustrative method for deriving the shading coefficients is shown. Each of the rectangular areas depicted in Figure 9 represents a portion of a three-dimensional image surface approximated by corresponding plane surfaces each containing at least three sampling points. A normal vector i is obtained for each such plane surface and a vector a oriented from each plane surface toward the light source is likewise obtained. The inner product of the vector a and the normal vector i is obtained for each of the plane surfaces and used to find a shading coefficient therefor.
The same shading coefficient typically is applied to all pixels contained in a single plane surface. In the alternative, a different shading coefficient may be obtained for application to each pixel on an individual basis.
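The per-surface rule can be sketched as follows (a minimal NumPy illustration; clamping negative inner products to zero is an assumption, since the text does not state how surfaces facing away from the source are treated at this stage):

```python
import numpy as np

def shading_coefficient(normal_i, surface_point, light_position):
    """Shading coefficient of one plane surface from the inner product of its
    unit normal i and the unit vector a toward the light source (Figure 9)."""
    i = np.asarray(normal_i, dtype=float)
    i = i / np.linalg.norm(i)
    a = np.asarray(light_position, dtype=float) - np.asarray(surface_point, dtype=float)
    a = a / np.linalg.norm(a)
    return max(float(np.dot(i, a)), 0.0)  # assumed clamp for back-facing surfaces
```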
The image conversion apparatus described above is operable to apply a shading effect to an entire three-dimensional image and is, thus, of limited usefulness in creating special three-dimensional video effects.
According to one aspect of the invention there is provided apparatus for transforming an input video signal representing a two-dimensional image into a transformed video signal representing said two-dimensional image transformed on to a three-dimensional surface and for depicting illumination thereof by a spot light source, said input video signal having at least one of a luminance component and hue components, the apparatus comprising: means for defining said three-dimensional surface; means for providing a mapping data signal for transforming said input video signal on to said three-dimensional surface; means for transforming said input video signal in accordance with said mapping data signal to generate said transformed video signal representing an image conforming with said three-dimensional surface, said transformed video signal having at least one of a luminance component and hue components; first means for generating a spot light source signal including first and second data signals, the first data signal representing a direction of an axis of a spot light source and the second data signal representing a radius of said spot light source; second means for generating a spot light key signal based on said second data signal and a distance signal representing distances between locations on said three-dimensional surface and the axis of said spot light source based on said first data signal and said mapping data signal; and means for modifying at least one of said luminance component and said hue components of at least one of said input video signal and said transformed video signal in accordance with said spot light key signal.
According to another aspect of the invention there is provided a method of transforming an input video signal representing a two-dimensional image into a transformed video signal representing said two-dimensional image transformed on to a three-dimensional surface and for depicting illumination thereof by a spot light source, said input video signal having at least one of a luminance component and hue components, the method comprising the steps of: defining said three-dimensional surface; providing a mapping data signal for transforming said input video signal on to said three-dimensional surface; transforming said input video signal in accordance with said mapping data signal to generate said transformed video signal representing an image conforming with said three-dimensional surface, said transformed video signal having at least one of a luminance component and hue components; generating a spot light source signal including first and second data signals, the first data signal representing a direction of an axis of a spot light source and the second data signal representing a radius of said spot light source; generating a spot light key signal based on said second data signal and a distance signal representing distances between locations on said three-dimensional surface and the axis of the spot light source based on said first data signal and said mapping data signal; and modifying at least one of said luminance component and said hue components of at least one of said input video signal and said transformed video signal in accordance with said spot light key signal.
A preferred embodiment of the invention provides an apparatus and method for depicting illumination of only a portion of a transformed two-dimensional video image signal by a source such as a spot light or the like.
Illumination of a limited portion of such a transformed two-dimensional video image signal may be depicted, such as by a spot light or the like, in combination with the depiction of a shading effect representing illumination by a diverging light beam or similar non-directional beam.
The invention will now be described by way of example with reference to the accompanying drawings, throughout which like parts are referred to by like references, and in which:

Figure 1 is a block diagram of an image converting apparatus in accordance with one embodiment of the invention;
Figures 2(A) to 2(C) are respective diagrammatic views of a three-dimensional transformed image for illustrating the manner in which shading is carried out with the use of the embodiment of Figure 1;
Figure 3(A) is a block diagram of a spot light source key data generating section of the Figure 1 embodiment for use in implementing a spot light effect therewith;
Figures 3(B) to 3(D) are diagrams used in explaining the operation of the spot light source key data generating section of Figure 3(A);
Figures 4(A) to 4(F) are schematic representations of two-dimensional images producible by respective input video signals and corresponding transformed three-dimensional images producible by transformed output video signals, for depicting the manner in which a spot light illumination thereof is produced with the use of the embodiment of Figure 1;
Figures 5(A) and 5(B) are schematic views illustrating a previously-proposed technique for transforming a video signal representing a two-dimensional image into a transformed video signal representing an image conforming with a three-dimensional surface;
Figure 6(A) is a schematic illustration of a three-dimensional cylindrical image shaded to depict illumination thereof by a diverging light source;
Figure 6(B) is a schematic illustration of a three-dimensional cylindrical image illuminated by a spot light source in accordance with certain aspects in an embodiment of the present invention;
Figure 7 is a block diagram of a previously-proposed image conversion apparatus;
Figures 8(A) and 8(B) illustrate a previously-proposed technique for mapping a video signal representing a two-dimensional image on to a three-dimensional surface; and
Figure 9 is a schematic diagram illustrating a previously-proposed technique for shading a three-dimensional image produced by a transformed video signal to depict illumination thereof by a diverging light source.
Referring to the drawings, and initially to Figure 1 thereof, a block diagram of a preferred embodiment of the present invention is shown in which a first microprocessor 11 (implemented by a microcomputer or the like) provides a user-machine interface for selecting and initiating a suitable image conversion program based upon input commands. A disc memory 12 coupled to the microprocessor 11 provides mass storage for one or more programs which may be used for transforming a video signal representing a two-dimensional image into a transformed video signal for depicting such two-dimensional image conforming with a three-dimensional surface. The microprocessor 11 is coupled to a keyboard 13 and a joystick 14 to receive user commands therefrom, and to a cathode ray tube (CRT) display 15 serving as an output device.
In operation, a user employs the keyboard 13 to designate a desired type of image conversion, such as the transformation of a signal representing a plane image into a signal representing a three-dimensional, cylindrical image. In response to the input commands, the microprocessor 11 reads a computer program from the disc memory 12 which implements the designated image conversion process and stores the program thus accessed in its main memory. The microprocessor 11 then provides an output to the CRT display 15 for indicating that these read and storage operations have taken place.
The user operates the joystick 14 to select image position, orientation and other pertinent factors for the conversion process, and corresponding parameters of the accessed program are modified accordingly. The modified program is then transferred to a program memory 17M of a second microprocessor 17 (such as a microcomputer) included in a three-dimensional address generating section 16.
The second microprocessor 17 then proceeds to execute the transferred program for determining the post-conversion positions on a block-by-block basis (pursuant to the previously-proposed image transformation process described hereinabove), as well as post-conversion difference values between adjacent blocks based on a linear approximation process, and reciprocal difference values obtained from reciprocal computations, as described below. The data thus obtained are stored in a buffer memory 18 of the three-dimensional address generating section 16.
In the operation of the embodiment of Figure 1, similarly to that of the previously-proposed image conversion process, each block of the original image IM1 includes a representative point whose post-conversion position is determined. Image data representing the image region proximate to the converted representative point is then obtained pursuant to the linear approximation process. From this representative data, the addresses of the original image data corresponding to the region of the transformed image proximate to the converted representative point are obtained. In turn, the addresses thus obtained are converted to effect the image conversion.
It will be appreciated that certain conversions will require suppression of image data representing image portions which are obscured in the transformed, three-dimensional image. This process is carried out as follows. A pointer is created which defines an order in which to process blocks one at a time based on positions thereof along a Z axis extending in the direction of image depth. The pointer is written in a table of the buffer memory 18 and, based thereon, the data transformation is carried out block-by-block beginning with the deepest block (furthest from the observation point) and continuing in sequence with blocks which are progressively shallower. This process is also described in Japanese Patent Laid-Open No. 58-219664, as well as in US Patent No US-A-4 965 844.
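The pointer amounts to a depth sort of the blocks; a minimal sketch (the sign convention, larger Z meaning deeper, is an assumption):

```python
def depth_order_pointer(block_positions):
    """Indices of blocks ordered deepest-first along the Z axis.

    block_positions: sequence of (x, y, z) post-conversion representative
    points; larger z is assumed to lie further from the observation point.
    """
    return sorted(range(len(block_positions)),
                  key=lambda k: block_positions[k][2],
                  reverse=True)
```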
The data thus accumulated in the buffer memory 18 are supplied to a first dedicated hardware circuit 19 which processes the blocks one at a time in accordance with the above-mentioned pointer. For each block, the post-conversion positions and difference values thereof are determined and used to obtain the range of the input block following conversion.
An output block (including 4 x 6 = 24 pixels) covering the input block range is then acquired. Appropriate reciprocal difference values are used to find each point in each output block corresponding to the representative point in the original image IM1. The data thus obtained are provided via a data converting hardware circuit 20 to a second dedicated hardware circuit 30 which carries out the image conversion process.
Using the image data obtained through the image transformation process carried out by the second microprocessor 17, a shading coefficient generating section 21 generates a shading coefficient representing the degree of reflection of light from a diverging light source incident on a given surface of the transformed image, according to the position of that surface relative to the light source. The shading coefficient, used to weight the image data in accordance with such shading, is stored in a memory of the shading coefficient generating section 21.
The embodiment of Figure 1 receives a luminance signal Y and colour difference signals U and V as the components of an input colour video signal at respective input terminals of an analog-to-digital (A/D) converter 31, and outputs corresponding digital signals to respective input terminals of a data modifying circuit 32. A spot light source key data generating section 22 produces a spot light key signal Ko which it supplies to a first input of a coefficient unit 23.
The manner in which the spot light source key data generating section 22 produces the signal Ko is described below in greater detail. The coefficient unit 23 receives the shading coefficient data from the shading coefficient generating section 21 at a second input and is operative to multiply the spot light source key signal Ko with the shading coefficient data from the shading coefficient generating section 21 to produce an output coefficient. The coefficient unit 23 supplies the output coefficient at an output terminal thereof coupled with a further input of the data modifying circuit 32 which serves to weight the digitised video data corresponding to the output image so that the image data when transformed is appropriately modified to depict the desired shading and illumination.
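Functionally, the coefficient unit 23 and the data modifying circuit 32 reduce to a per-pixel multiplication; a simplified sketch (treating the weighting as a plain product, with U and V assumed to be signed, zero-centred values, is an assumption):

```python
def modify_pixel(Y, U, V, Ko, shading_coeff):
    """Weight one pixel's digitised components by the output coefficient.

    Assumes U and V are zero-centred, so scaling them changes saturation
    while scaling Y changes brightness.
    """
    w = Ko * shading_coeff      # output coefficient of the coefficient unit 23
    return Y * w, U * w, V * w  # modified luminance and colour difference data
```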
The data modifying circuit 32 modifies the digitised luminance and colour difference signals in accordance with the output coefficient and supplies the modified data to a digital filter 33 which filters the same in accordance with a pass-band controlled in response to an output from the buffer memory 18. For example, where the original image is contracted by the transformation such that minute details thereof are suppressed in the transformed image, the pass-band is narrowed so as to suppress the resulting noise. If the original image is enlarged and contracted in different respective portions, the pass-band of the filter is suitably selected to accommodate each situation.
The filtered signals are output by the digital filter 33 to respective inputs of an input frame memory 34 of an image converting hardware circuit 30. A read/write address control circuit 37 of the image converting hardware circuit 30 receives conversion data from the data converting hardware circuit 20 and, using the received data, controllably addresses the input frame memory 34 for storing the filtered image data therein. The input frame memory 34 includes respective output terminals for providing the three digitised components read therefrom under the control of read addresses supplied by the address control circuit 37 to an interpolating circuit 35 which serves to produce image data for locations between data sampling points represented by data stored in the input frame memory 34, as may be necessary to produce portions of the transformed image signal. The operation of the interpolating circuit 35 is controlled by the address control circuit 37. More particularly, the blocks of image data stored in the input frame memory are read by the address control circuit 37 for processing one at a time starting with the deepest block according to the pointer as described above. The interpolating circuit 35 produces any required interpolated image data for constructing the output transformed image signal which it then writes, together with uninterpolated image data, in an output frame memory 36, also under the control of the address control circuit 37, pursuant to block addresses provided thereby representing output image locations.
The output frame memory 36, having received the three-dimensional transformed image data in block units, then proceeds to read the three-dimensional transformed image data consecutively to a digital-to-analog (D/A) converter 39 via a filter 38. The D/A converter 39 serves to produce an analog luminance signal Y and colour difference signals U and V from the transformed data supplied thereto. These analog signals are, in turn, supplied to a CRT display (not shown) to reproduce the transformed image.
Following is a description of the manner in which the shading coefficient is produced pursuant to the embodiment of Figure 1. The plane surface of a converted output image block (having 4 x 6 = 24 pixels) provides a model for producing output blocks starting with the deepest block in accordance with the pointer as described above. A flag is added to the given block for indicating whether the block is on a front surface or a back surface of the transformed image. This information is required where an image conforming to a plane surface is transformed on to a surface such as the cylindrical surface depicted in Figures 2(A) to 2(C), since the back surfaces of certain portions of the transformed image thereby become visible to the observer.
In the embodiment described above, flags are provided which afford the necessary ability to distinguish front and back surfaces in order to produce appropriate shading in the resulting image. For example, a flag "1" is included with a front surface block and a flag "-1" is included with a back surface block, as illustrated in Figure 2(A), and a normal vector i for the plane surface of each given block is obtained. Once the normal vector is obtained, a check is made to determine whether the block is a front surface block or a back surface block. If a given block is found to be a front surface block, a normal vector for the front surface is established therefor. If, instead, the given block is found to be a back surface block, the normal vector for the back surface is established therefor. That is, the unit vectors for each block indicating the normal direction of the surface thereof, as shown in Figure 2(B), are multiplied by the flag. Exemplary resulting vectors are depicted in Figure 2(C).
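The flag logic reduces to multiplying each block's unit normal by +1 or -1; a minimal sketch with illustrative names:

```python
import numpy as np

def signed_unit_normal(normal, front_surface_flag):
    """Unit normal of a block, flipped for back surface blocks.

    front_surface_flag is 1 for a front surface block and -1 for a back
    surface block, as in Figure 2(A).
    """
    n = np.asarray(normal, dtype=float)
    return front_surface_flag * (n / np.linalg.norm(n))
```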
With reference also to Figure 9, for each given block a vector a in the direction of a point light source which is assumed to be at a predetermined position is obtained and the inner product of the vector a and the normal vector i is determined (i · a). From the inner product, the shading coefficient is determined for each block and is then stored in the memory of the shading coefficient generating section 21.
When each input data sample corresponding to a given output block is provided by the A/D converter 31, the microprocessor 17 reads the respective shading coefficient from the shading coefficient generating section 21. The shading coefficient is multiplied in the coefficient unit 23 by the spot light key signal Ko from the spot light key data generating section 22, as described above. The resulting signal is provided by the coefficient unit 23 to the data modifying circuit 32 in order to modify the luminance level of the luminance signal as well as the hue of the colour difference signals in order to provide appropriate shading in the corresponding image.
Referring now to Figure 3(A), a functional block diagram is provided therein for illustrating the process by which the spot light key data generating section 22 generates the spot light key signal so as to modify the image data to depict illumination by a light beam from a spot light source. First the respective distances of the converted image data from the centre of the spot light are determined. With reference also to Figure 3(B), a first distance determining processor 41 is provided with coordinate values of a given point or region P (Xn, Yn, Zn) in a converted three-dimensional surface image. The processor 41 is also provided with coordinate values (Xc, Yc, Zc) of the central point Q of a spot light source having a radius r which illuminates the three-dimensional surface image, as illustrated in Figure 3(B), and produces coordinate values X'n, Y'n and Z'n as follows:

X'n = Xn - Xc ..... (3)
Y'n = Yn - Yc ..... (4)
Z'n = Zn - Zc ..... (5)

The coordinate values (X'n, Y'n, Z'n) represent the distance between the centre Q of the spot light source and the given point P in three-dimensional terms. These coordinate values are then provided to a second distance determining processor 42 in which they are combined with direction vector component values lx, ly and lz of the spot light in the three-dimensional coordinate system in accordance with the following relationship:

R = √(X'n² + Y'n² + Z'n² - (lx·X'n + ly·Y'n + lz·Z'n)²) ..... (6)

The value R, as illustrated in Figure 3(C), represents the distance between the given point P (Xn, Yn, Zn) in the transformed image and a centre O of the light spot projected on to the transformed image.
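Equations (3) to (6) together give the perpendicular distance from a surface point to the beam axis. A minimal NumPy sketch, under the assumption (inferred from the surrounding text) that equation (6) is the standard point-to-line distance:

```python
import numpy as np

def distance_to_beam_axis(P, Q, direction):
    """Distance R from surface point P to the spot light beam axis.

    P: point (Xn, Yn, Zn) on the transformed surface.
    Q: centre (Xc, Yc, Zc) of the spot light source.
    direction: beam axis components (lx, ly, lz).
    """
    d = np.asarray(P, dtype=float) - np.asarray(Q, dtype=float)  # equations (3)-(5)
    l = np.asarray(direction, dtype=float)
    l = l / np.linalg.norm(l)            # ensure a unit direction vector
    along = float(np.dot(d, l))          # lx*X'n + ly*Y'n + lz*Z'n
    return float(np.sqrt(max(float(np.dot(d, d)) - along * along, 0.0)))  # equation (6)
```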
When, for example, the spot light beam axis extending from the centre point Q (Xc, Yc, Zc) of the spot light source intercepts the given point P (Xn, Yn, Zn) on the surface of the transformed image, it will be seen that the value R determined in accordance with equation (6) above will be equal to zero due to the coincidence of the point P with the spot light beam axis. If, on the other hand, the point P is spaced some distance from the point of intersection of the spot light beam axis with the surface of the transformed image, it will be seen that the direction of the spot light beam deviates from that of the line joining the point P with the centre point Q of the spot light source, as in the case of the point P illustrated in Figure 3(C). In that event, a non-zero value of R representing the distance from a point O at the intersection of the spot light beam axis with the surface of the transformed image to the point P is provided.
The value R representing the distance of the given point from the centre of the spot light beam on the surface of the transformed image is supplied to an edge determining processor 43, along with the value r representing the radius of the spot light beam irradiated on to the transformed image. The processor 43 produces a key data value K for each given point P of the transformed image corresponding to the difference between the values r and R, which indicates the distance of the point P from the edge of the light spot. If, for example, the key data value K is greater than zero, this indicates that the given point P (Xn, Yn, Zn) is within the range of the light spot on the transformed image surface. If, however, the key data value K is less than zero, the point P is located outside the light spot range.
With reference also to Figure 3(D), the key data values K are plotted therein along a straight line extending in the radial direction of the light spot. The plot of the key data values K provided in Figure 3(D) indicates that light is reflected strongly from image locations where the key data value K is greater than zero (that is, "+" values as shown in Figure 3(D)), and that relatively less light is reflected where K is smaller than zero (that is, "-" values in Figure 3(D)), thus indicating a darkened condition at the corresponding points of the transformed image. In order to smooth the key data values K, which are obtained discretely, a data interpolating processor 44 is provided downstream of the edge determining processor 43.
When a light spot is projected on to a three-dimensional surface, it is natural for the edge of the spot to display a sharp gradient from light to dark areas. So as to present the edge of the light spot in this manner, the disclosed embodiment of the invention provides "softness" data E to a data conversion processor 45 operative to output edge coefficient data which is multiplied with the key data values K, as indicated by the multiplier 46, thereby to provide converted key data values K' as illustrated in Figure 3(D). The converted key data values K' are supplied to a limiter offset processor 47 which limits the converted key data values K' and adds an offset thereto to produce the spot light key data signal Ko having values greater than zero whose relative magnitudes with respect to the radial direction of the light spot are as indicated in Figure 3(D).
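The processors 43 and 45 to 47 can be sketched together as follows. The exact edge coefficient law, limiter bounds and offset are not given in the text, so the values below are placeholders chosen only to reproduce the qualitative shape of Ko in Figure 3(D):

```python
def spot_light_key(R, r, softness_E, limit=1.0, offset=1.0):
    """Spot light key Ko for a point at distance R from the spot centre.

    K = r - R is positive inside the light spot and negative outside it
    (edge determining processor 43); a small softness value E is assumed to
    yield a large edge coefficient and hence a sharp light-to-dark gradient.
    """
    K = r - R                                          # key data value K
    edge_coeff = 1.0 / max(softness_E, 1e-6)           # data conversion processor 45
    K_prime = max(-limit, min(K * edge_coeff, limit))  # multiplier 46 and limiter
    return K_prime + offset                            # offset keeps Ko non-negative
```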
The foregoing operations provide a light spot edge having a fairly sharp gradation, affording a natural appearance of the light spot on a given surface. The light spot key signal Ko output by the limiter offset processor 47, as indicated in Figure 1, is supplied to the coefficient unit 23 in which it is multiplied by the corresponding shading coefficient from the shading coefficient generating section 21.
This multiplication process is carried out so that only the portion of the transformed signal representing the portion of the image illuminated by the spot light is brightened by the output of the coefficient unit 23.
Figures 4(A) to 4(F) schematically illustrate the key data defining the light spot edge relative to the input image as well as the resulting light spot after image transformation. In Figures 4(A) to 4(F), the arrows Sp indicate the direction of spot light illumination.
With reference to Figure 4(A), the key data defines a light spot 99 on an input two-dimensional image 100, together with a resulting image 102 which has been output without transformation by the embodiment of Figure 1. Figures 4(B) and 4(C) depict transformations in which respective input images 104 and 106 are transformed to produce inclined output images 108 and 110, respectively, to depict perspective views.
In the case of the transformed images of Figures 4(B) and 4(C), the spot light Sp is directed perpendicularly to the inclined image, and consequently, the positions of the key data generated for the input image signal remain unchanged.
In Figures 4(D) and 4(E), respective input images 112 and 114 are transformed into corresponding cylindrical images 116 and 118. Unlike the transformations illustrated in Figures 4(A) to 4(C), the positions of the key data generated for the input images 112 and 114 (indicated as 113 and 115, respectively) vary drastically depending on the direction of the spot light source Sp. Figure 4(F) illustrates the positioning of the key data 119 on an input image 120 which is transformed on to a spherical surface, such that the light spot source Sp is above the resulting spherical output image 122.
It will be appreciated that embodiments of the invention may provide the capability, not heretofore realised, to depict localised shading/illumination effects, such as that in which a light spot is projected on to a two-dimensional image mapped in real time on to a three-dimensional surface. The apparatus therefore provides the ability to present converted images in more diverse ways, thereby enhancing the available visual effects. A further benefit afforded by certain advantageous embodiments of the invention is the ability to present the illumination of three-dimensional images by a spot light source wherein the edge of the light spot is smoothed to produce a natural appearance.
It will be appreciated that the image transforming apparatus and method may be implemented in whole or in part using either analog or digital circuitry and all or part of the signal processing functions thereof may be carried out either by hardwired circuits or with the use of a microprocessor, microcomputer or the like.

Claims (11)

1. Apparatus for transforming an input video signal representing a two-dimensional image into a transformed video signal representing said two-dimensional image transformed on to a three-dimensional surface and for depicting illumination thereof by a spot light source, said input video signal having at least one of a luminance component and hue components, the apparatus comprising: means for defining said three-dimensional surface; means for providing a mapping data signal for transforming said input video signal on to said three-dimensional surface; means for transforming said input video signal in accordance with said mapping data signal to generate said transformed video signal representing an image conforming with said three-dimensional surface, said transformed video signal having at least one of a luminance component and hue components; first means for generating a spot light source signal including first and second data signals, the first data signal representing a direction of an axis of a spot light source and the second data signal representing a radius of said spot light source; second means for generating a spot light key signal based on said second data signal and a distance signal representing distances between locations on said three-dimensional surface and the axis of said spot light source based on said first data signal and said mapping data signal; and means for modifying at least one of said luminance component and said hue components of at least one of said input video signal and said transformed video signal in accordance with said spot light key signal.
2. Apparatus according to claim 1, wherein said second means is operative to generate said spot light key signal based on a comparison of said second data signal and said distance signal.
3. Apparatus according to claim 2, wherein said second means is operative to generate said spot light key signal based on differences between said radius of said spot light source as represented by said second data signal and said distances between locations on said three dimensional surface and the axis of the spot light source.
4. Apparatus according to claim 1, claim 2 or claim 3, wherein said modifying means is operative to multiply said at least one of said luminance component and said hue components by a coefficient proportional to said spot light key signal.
5. Apparatus according to claim 4, wherein said second means is operative to generate values of said spot light key signal for a portion of the image represented by said transformed video signal including the axis of said spot light source which uniformly represent a maximum illumination value.
6. Apparatus according to claim 4, wherein said second means is operative to generate values of said spot light key signal for portions of said transformed two-dimensional video image signal adjacent an edge of said spot light source which are proportional to said distance signal multiplied by an edge coefficient selected to increase an illumination gradient represented by said spot light key signal adjacent the edge of said spot light source.
7. Apparatus according to claim 4, claim 5 or claim 6, comprising shading coefficient generating means for producing a shading coefficient signal representing shading of the image corresponding to said transformed video signal, and wherein the modifying means is operative to modify said at least one of said luminance component and said hue components in accordance with said shading coefficient signal.
8. Apparatus according to claim 7, wherein said modifying means is operative to modify said at least one of said luminance component and said hue components based upon a combined illumination signal proportional to a product of said shading coefficient signal and said spot light key signal.
9. Apparatus for transforming an input video signal representing a two-dimensional image into a transformed video signal representing said two-dimensional image transformed on to a three-dimensional surface, the apparatus being substantially as hereinbefore described with reference to Figures 1 to 4 of the accompanying drawings.
10. A method of transforming an input video signal representing a two-dimensional image into a transformed video signal representing said two-dimensional image transformed on to a three-dimensional surface and for depicting illumination thereof by a spot light source, said input video signal having at least one of a luminance component and hue components, the method comprising the steps of: defining said three-dimensional surface; providing a mapping data signal for transforming said input video signal on to said three-dimensional surface; transforming said input video signal in accordance with said mapping data signal to generate said transformed video signal representing an image conforming with said three-dimensional surface, said transformed video signal having at least one of a luminance component and hue components; generating a spot light source signal including first and second data signals, the first data signal representing a direction of an axis of a spot light source and the second data signal representing a radius of said spot light source; generating a spot light key signal based on said second data signal and a distance signal representing distances between locations on said three-dimensional surface and the axis of the spot light source based on said first data signal and said mapping data signal; and modifying at least one of said luminance component and said hue components of at least one of said input video signal and said transformed video signal in accordance with said spot light key signal.
11. A method of transforming an input video signal representing a two-dimensional image into a transformed video signal representing said two-dimensional image transformed on to a three-dimensional surface, the method being substantially as hereinbefore described.
GB9207660A 1991-04-12 1992-04-08 Transforming an input video signal representing a 2-D image into a transformed video signal representing said 2-D image transformed on to a 3-D surface Expired - Fee Related GB2256109B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP3106586A JP2973573B2 (en) 1991-04-12 1991-04-12 Image conversion device

Publications (3)

Publication Number Publication Date
GB9207660D0 GB9207660D0 (en) 1992-05-27
GB2256109A true GB2256109A (en) 1992-11-25
GB2256109B GB2256109B (en) 1994-11-16

Family

ID=14437307

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9207660A Expired - Fee Related GB2256109B (en) 1991-04-12 1992-04-08 Transforming an input video signal representing a 2-D image into a transformed video signal representing said 2-D image transformed on to a 3-D surface

Country Status (3)

Country Link
JP (1) JP2973573B2 (en)
KR (1) KR100227246B1 (en)
GB (1) GB2256109B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2301514B (en) * 1994-12-01 1999-06-09 Namco Ltd Apparatus and method for image synthesization
JP4244391B2 (en) * 1997-04-04 2009-03-25 ソニー株式会社 Image conversion apparatus and image conversion method
GB2329312A (en) * 1997-04-10 1999-03-17 Sony Corp Special effect apparatus and special effect method
US6333742B1 (en) 1997-05-07 2001-12-25 Sega Enterprises, Ltd. Spotlight characteristic forming method and image processor using the same

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4965844A (en) * 1985-04-03 1990-10-23 Sony Corporation Method and system for image transformation

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0567219A1 (en) * 1992-04-24 1993-10-27 Sony United Kingdom Limited Video special effects apparatus and method
EP0574111A1 (en) * 1992-04-24 1993-12-15 Sony United Kingdom Limited Lighting effects for digital video effects system
US5361100A (en) * 1992-04-24 1994-11-01 Sony United Kingdom Limited Apparatus and method for transforming a video image into a three dimensional video image with shadows
GB2267203A (en) * 1992-05-15 1993-11-24 Fujitsu Ltd Three-dimensional graphics drawing apparatus and a memory apparatus to be used in texture mapping
GB2267203B (en) * 1992-05-15 1997-03-19 Fujitsu Ltd Three-dimensional graphics drawing apparatus, and a memory apparatus to be used in texture mapping
US5867727A (en) * 1992-06-24 1999-02-02 Fujitsu Limited System for judging read out transfer word is correct by comparing flag of transfer word and lower bit portion of read destination selection address
GB2287387A (en) * 1994-03-01 1995-09-13 Virtuality Texture mapping
US6014472A (en) * 1995-11-14 2000-01-11 Sony Corporation Special effect device, image processing method, and shadow generating method
GB2313278A (en) * 1996-05-14 1997-11-19 Philip Field Mapping images onto three-dimensional surfaces

Also Published As

Publication number Publication date
JP2973573B2 (en) 1999-11-08
JPH04315274A (en) 1992-11-06
GB9207660D0 (en) 1992-05-27
KR920020979A (en) 1992-11-21
GB2256109B (en) 1994-11-16
KR100227246B1 (en) 1999-11-01

Similar Documents

Publication Publication Date Title
US5282262A (en) Method and apparatus for transforming a two-dimensional video signal onto a three-dimensional surface
US5742749A (en) Method and apparatus for shadow generation through depth mapping
Greene et al. Creating raster omnimax images from multiple perspective views using the elliptical weighted average filter
US4965844A (en) Method and system for image transformation
US5175806A (en) Method and apparatus for fast surface detail application to an image
US5537638A (en) Method and system for image mapping
US5325200A (en) Apparatus and method for transforming a digitized signal of an image into a reflective surface
US5644689A (en) Arbitrary viewpoint three-dimensional imaging method using compressed voxel data constructed by a directed search of voxel data representing an image of an object and an arbitrary viewpoint
US5704024A (en) Method and an apparatus for generating reflection vectors which can be unnormalized and for using these reflection vectors to index locations on an environment map
US6850236B2 (en) Dynamically adjusting a sample-to-pixel filter in response to user input and/or sensor input
EP0228231B1 (en) Method for seismic dip filtering
EP0437074B1 (en) Special effects using polar image coordinates
US5412402A (en) Electronic graphic systems
US6014143A (en) Ray transform method for a fast perspective view volume rendering
JPH01205277A (en) Computer graphic display device
WO1996036011A1 (en) Graphics system utilizing homogeneity values for depth for occlusion mapping and texture mapping
US6157387A (en) Image generating apparatus and method
JP2001512265A (en) Texture mapping in 3D computer graphics
KR960013365B1 (en) Specifying 3d points in 2d graphic displays
US5222206A (en) Image color modification in a computer-aided design system
Chen et al. Manipulation, display, and analysis of three-dimensional biological images
GB2256109A (en) Transforming a two-dimensional image video signal on to a three-dimensional surface
JPH0771936A (en) Device and method for processing image
US4899295A (en) Video signal processing
US4607255A (en) Three dimensional display using a varifocal mirror

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20100408