EP0364455A1 - An improved method of colorizing black and white footage - Google Patents

An improved method of colorizing black and white footage

Info

Publication number
EP0364455A1
Authority
EP
European Patent Office
Prior art keywords
information signal
color
black
image
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP88903009A
Other languages
German (de)
French (fr)
Other versions
EP0364455A4 (en)
Inventor
David Geshwind
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of EP0364455A1
Publication of EP0364455A4

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/43Conversion of monochrome picture signals to colour picture signals for colour picture display

Definitions

  • the instant invention relates to an improved method for adding color to black & white film or videotape motion pictures, through a combination of human determination and computer generation and computer processing of information.
  • the color-only information signal may be of relatively low information density in any combination of three ways: low spatial density, low temporal density and low color choice density. Techniques such as inbetweening, cross-dissolving and low-pass filtering enhance the final results.
  • the entire process may be described as comprising three steps: creation of the color-only information signal; processing of the color-only information signal; and combining the color-only information signal with the original black & white information signal to create a full-color information signal.
  • Figure 1a shows a typical unprocessed confidence signal frame for a circular area.
  • Figure 1b shows that same frame after low-pass filtering.
  • Figure 1c shows a cross section of an unprocessed confidence signal line.
  • Figure 1d shows that same cross section after low-pass filtering.
  • Figure 2aa shows a typical image; a red circle on a blue background.
  • Figure 2a shows a neutral gray zone separating the red circle from the blue background.
  • Figure 2b shows the intersection of adjacent red and blue areas without a separating neutral zone, before low-pass filtering.
  • Figure 2c shows the intersection of adjacent red and blue areas with a separating neutral zone, before low-pass filtering.
  • Figure 2d shows the intersection of adjacent red and blue areas without a separating neutral zone, after low-pass filtering.
  • Figure 2e shows the intersection of adjacent red and blue areas with a separating neutral zone, after low-pass filtering.
  • Figure 3 shows the output of three polygon renderers (or videotape recorders, playing back a recording of one, or more, polygon renderer's output).
  • the rendering devices are displaying a bounded area representing a face and depicted as a circle in the drawing.
  • the first renderer displays the face area as pink for skin tone.
  • the second displays the face as blue and the third as yellow, to be used as modifications to the pink signal for shadows and highlights respectively.
  • the outputs of the three are routed to a color signal mixer which also has an input for a black & white signal to be used as control.
  • the black & white signal is also combined with the output of the color mixer to create a full color signal.
  • Figure 4a shows the use of a digital tape recorder to record separate black & white, color tag and confidence signal components, and the decoding of those components into standard color coordinate systems such as red, green and blue.
  • Figure 4b shows a detail of the decoder in figure 4a. Shown are the hue generation and color coordinate converter subsections.
  • Figure 5a shows a face with a uniform flat pink color overlay.
  • Figure 5b shows that same face with the addition of a bluish tinted bounded area representing a shadow.
  • Figure 5c shows that same face with the addition of a yellowish tinted bounded area representing a sunlit highlight.
  • a first set of improvements relates to reducing the impact of inaccuracies in object boundary outlines created by either a human or a computer.
  • the selection of unsaturated or pastel colors helps hide the inaccuracies of boundaries between the individual image areas to be colored.
  • a set of related hues are chosen so as to further minimize these defects. For example, in a number of colorized films most scenes are composed of pink, purple and blue pastels that have been chosen so as to provide minimal color contrast. However, the resulting colorized film is monotonous and unrealistic.
  • the low-pass filtering that may be intentionally used as part of patent 4,606,625, or may be inherent in the recording or broadcast process, can cause mixing of adjacent color areas at the boundaries. Similar pastels blend unobtrusively.
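The neutral-zone behavior that Figures 2b through 2e illustrate can be sketched numerically. This is a hypothetical 1-D model: the three-tap moving average and the specific RGB values are assumptions, not details from the patent. Filtering directly adjacent saturated red and blue areas mixes them into purple at the seam, while a separating neutral gray zone makes the transition desaturate instead.

```python
def filter_row(row, radius=1):
    """Channel-wise moving average over a row of (r, g, b) pixels."""
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        window = row[lo:hi]
        out.append(tuple(sum(p[c] for p in window) / len(window) for c in range(3)))
    return out

RED, BLUE, GRAY = (255, 0, 0), (0, 0, 255), (128, 128, 128)

without_zone = [RED] * 4 + [BLUE] * 4
with_zone = [RED] * 3 + [GRAY] * 2 + [BLUE] * 3

mixed = filter_row(without_zone)   # seam pixels carry both strong red and strong blue
buffered = filter_row(with_zone)   # seam pixels pass through a desaturated gray instead
```

Measuring the channel spread (max minus min) at the seam shows the buffered transition is roughly half as saturated as the direct red/blue mixture.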
  • a technique that will allow the use of more saturated and contrasting colors, yet reduce the foregoing problems, is to vary the saturation of the color information over the image. Colors will be made less saturated where the probability of error is high, i.e., near the boundaries. Colors will be made more saturated where the probability or confidence is high that the color choice is accurate, i.e., away from the boundaries.
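The saturation-modulation idea can be sketched as follows. In this minimal model (the box filter and the specific numbers are assumptions; the patent does not fix a filter type), a binary confidence cross section, full confidence inside an area and zero at its boundary as in Figures 1c and 1d, is low-pass filtered, and the filtered value then scales the chosen saturation so color fades near boundaries:

```python
def box_filter(signal, radius=2):
    """Moving-average low-pass filter with edge clamping."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# Cross section of a raw confidence line: zero near the boundary (the ends),
# full confidence in the interior of the colored area.
raw = [0.0] * 3 + [1.0] * 8 + [0.0] * 3
confidence = box_filter(raw)

BASE_SATURATION = 0.9   # the operator's chosen saturation for this area
saturation = [BASE_SATURATION * c for c in confidence]
```

Deep inside the area the full chosen saturation survives; near the boundary, where the probability of error is highest, the color becomes progressively paler.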
  • the quality of the lines may be varied based on a number of criteria. Qualities that may be varied include line width, opacity, darkness or hue. Additionally, the amount or type of low-pass (or other) filtering applied to the images containing those lines may be varied.
  • Criteria for varying line or filtering quality may be based upon characteristics of individual areas to be colored, or upon characteristics of the two adjacent areas that border on the line. For example:
  • speed of object - fast moving objects may require thicker lines and/or more or less filtering because object edges may be less distinct and harder to specify; still objects may require thin boundaries because thick boundaries will become noticeable under prolonged scrutiny;
  • prominence of object - objects that are particularly noticeable or important to a scene may require special treatment; also, objects may be accentuated by special treatment;
  • luminance contrast between objects - line and filtering quality may be varied in accordance with the difference in brightness between the two objects separated by the line;
  • chrominance contrast between objects - line and filtering quality may be varied in accordance with the difference in the intended hues between the two objects separated by the line;
  • if the techniques are applied too sparingly, the problems they are meant to alleviate will persist; if the techniques are applied too heavily, they may create their own visual anomalies (e.g., too thick a line with too little filtering could result in a perceivable non-colored border around objects).
  • the parameters will be adjusted to achieve an optimal balance.
  • a next set of techniques relates to creating object boundary outlines more efficiently.
  • the underlying approach is to redefine the division and overlap between the tasks assigned to the human operator and the computer.
  • the operator will be further freed from routine and repetitive tasks. His role in making specific judgements, beyond the computer's ability, will be increased.
  • the interface between operator and computer will be made more flexible and complex to allow the human to more fully communicate his judgements to the machine. The computer will then act on those judgements much more efficiently than the human could.
  • the system will contain two major sub-systems, a collection of algorithms for edge and contour extraction, and a set of line editing functions.
  • the computer has the ability to apply such software 'tools' efficiently and rapidly.
  • computers do not now have the ability to determine how and where to best apply these 'tools' to black & white image frames. Therefore, they cannot create the object boundaries that comprise the color information frames required to colorize black & white frames.
  • the two major sub-systems will be integrated with other functions in an overall interactive structure that will allow the operator to give complex directions to the computer which it will then adapt for use with a series of frames.
  • a number of edge and contour extraction algorithms are applied by the computer to quickly and inexpensively create potential boundary lines between objects.
  • a significant number of these lines will be 'false boundaries'. For example, a shadow overlapping one or more objects may be outlined as a distinct object. Similarly, where adjacent objects are very close in brightness, texture, etc., two objects may not be distinguished completely.
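One such extraction pass can be sketched as below. The text names no specific algorithm, so gradient thresholding is an assumption here: candidate boundary pixels are marked wherever the luminance difference between neighbors is large, and softer transitions such as shadow edges show up as exactly the kind of 'false boundary' the operator must then edit out.

```python
def edge_candidates(row, threshold=40):
    """row: 1-D list of 0..255 luminance values. Returns indices where the
    horizontal luminance step exceeds the threshold (candidate boundaries)."""
    return [i for i in range(1, len(row)) if abs(row[i] - row[i - 1]) > threshold]

# A scanline crossing a true object edge (200 -> 60), then a softer shadow
# edge (60 -> 140) that will also be flagged as a candidate boundary.
scanline = [200, 200, 60, 60, 60, 140, 150, 150]
edges = edge_candidates(scanline)   # both steps exceed the threshold
```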
  • the scene, with the various edges and contours overlaid, is displayed to the operator.
  • the operator will be supplied with a set of software functions, integrated into an interactive system, that will allow for the easy and efficient editing of the total set of potential boundary lines. Functions will be included that allow for some lines to be eliminated, other lines to be connected into a continuous object boundary, lines to be adjusted, to 'hand draw' segments to fill in gaps or missing boundaries and other line editing functions.
  • the operator and computer will interact through another set of software functions, and the operator will direct the computer as to which image processing algorithms to apply for various image areas and how the parameters of each algorithm will be adjusted.
  • the interaction can be accomplished by displaying the resulting extracted lines overlaid upon the image, and changing the results as the operator adjusts various parameters on a menu of functions.
  • This session will then set the rules for applying the image processing algorithms to a number of similar frames in a scene.
  • the computer will contain software to retain the editing functions applied by the operator to the first frame, and to apply those same functions to the next frame.
  • the operator will have the opportunity to review the computer's attempt at applying the correct line creation and editing functions to each subsequent frame. When computer errors result, the operator will have the ability to re-enter the line editing mode and make corrections. These corrections will be retained and used to update the set of editing functions to be automatically applied to the next frame. (Similarly, at any point in the sequence, the operator will be able to enter the mode where the line extraction algorithms are specified and update that set of rules to be applied by the computer.)
  • An additional feature of the software will be that information about object movements will be retained and used to update the set of rules to be applied by the computer when performing line extraction and line editing. For example, assume the operator had indicated to the computer to apply 'line extraction by texture comparison' to a particular area of the image in a first frame. Lines would be created by the extraction algorithm and then edited into an object boundary specification. If, for subsequent frames, the object boundary created by this process were moving in a particular direction, the area over which the 'line extraction by texture comparison' algorithm was applied would be similarly moved.
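The movement-tracking idea in the example above can be sketched as follows. This is illustrative only (the centroid-displacement rule and the rectangular region are assumptions): the frame-to-frame displacement of an edited object boundary is measured, and the region over which an algorithm such as 'line extraction by texture comparison' is applied is shifted by the same amount for the next frame.

```python
def centroid(points):
    """Centroid of a list of (x, y) boundary points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def shift_roi(roi, prev_boundary, next_boundary):
    """roi = (x, y, w, h): move it by the boundary centroid's displacement."""
    (px, py), (nx, ny) = centroid(prev_boundary), centroid(next_boundary)
    x, y, w, h = roi
    return (x + nx - px, y + ny - py, w, h)

prev_pts = [(10, 10), (20, 10), (15, 10)]
next_pts = [(13, 12), (23, 12), (18, 12)]   # the object moved by (+3, +2)
roi = shift_roi((5, 5, 30, 30), prev_pts, next_pts)
```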
  • a related feature of the refined system is a technique to be applied to often encountered image features.
  • a problem mentioned, on a particular colorized film, is that "Ol' Blue Eyes" (a well known nickname for Frank Sinatra) was rendered as "Ol' Brown Eyes". This is due to the fact that it was not practical to separately outline Sinatra's blue eyes as individual objects. The flesh tone applied to the rest of his face was applied to his eyes as well, creating dark pink or 'brown' irises. Outlining each eye for each face in each scene is an impractical burden with current colorization systems. Nor is automatic identification of eyes in a scene practical with current AI techniques. However, a system based on human-machine cooperation as explained above is practical and is described below.
  • the operator would have a number of additional functions on his menu, for example 'eye identification'.
  • the operator would then point out a number of key locations to the computer, for example, left corner, right corner and pupil.
  • Once the computer was directed to the appropriate image area, and based on well defined feature extraction rules, the computer would identify and specify the various parts of the eye to be individually colored.
  • Feature extraction rules could be based on the relative lightness of the sclera (white of the eye) compared to skin tone, the relative darkness of the iris compared to (Caucasian) skin tone, the round shape of the iris, the long and dark shape of the eyebrow, etc.
  • Individual eye features such as the sclera, iris, brow, lashes would then be individually specified, by the computer, with the appropriate colors and other areas, e.g., the eye lids, would be treated similarly to the face skin.
  • a similar function would allow for the cooperative (human and computer) identification and treatment of the lips, teeth and interior, as part of a 'mouth identification' function.
  • Functions for other common objects or image features, such as hands, shoes, ties, hats, hair, etc., would also be made available.
  • Specialized functions for particular films might be incorporated such as 'guns' for a western film, or trees, birds, mountains, etc.
  • An 'object tracking' and/or 'location by inbetweening' mechanism similar to that described above would also be incorporated into the 'feature identification' software functions described here, so that the computer could find such objects on its own for intervening frames.
  • the above examples are meant to be illustrative rather than limiting in nature.
  • the refined system features described above relate to the reduction of the impact of inaccuracies in object boundary outlines, and to creating object boundary outlines more efficiently.
  • the next several refinements relate to improving the look of colorization that has been described as washed-out, over-simple, unsophisticated, unsubtle or timid.
  • the color-only signal may be kept in this 'low color choice density' state, preserving these advantages throughout all stages of the colorization process.
  • during the combination of the color-only information signal with the original black and white signal, the 'low color choice density' color-only information can be modulated or varied based on the high density information present in the black and white image with which it is to be combined. The resulting modifications will enhance the color-only information signal by adding variation and sophistication to produce a more interesting and realistic effect.
  • the video output of these three rendering engines would then be input to a video mixing device.
  • the high-density black & white information signal would also be input to the video mixing device as a control signal. Wherever the control signal were brightest, progressively more of the yellow signal would be added to the pink signal. Wherever the control signal were darkest, progressively more of the blue signal would be added to the pink signal.
  • a composite color-only signal is produced with a full range of skin tones from blueish shadows, through full pink mid-tones, to yellowish highlights. This composite signal would then be combined with the high-density black & white signal to create a more realistic full color image.
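The mixing rule just described can be sketched per pixel. The patent leaves the mixing law unspecified, so the linear blend, the breakpoint at mid-gray and the specific pink, blue and yellow values here are all assumptions:

```python
PINK, BLUE, YELLOW = (255, 180, 170), (90, 110, 255), (255, 240, 120)

def mix(base, other, amount):
    """Linear blend of two RGB triples; amount in 0..1."""
    return tuple(b * (1 - amount) + o * amount for b, o in zip(base, other))

def skin_color(luminance):
    """luminance in 0..255, taken from the high-density black & white signal."""
    if luminance < 128:                                 # darker: shade toward blue
        return mix(PINK, BLUE, (128 - luminance) / 128)
    return mix(PINK, YELLOW, (luminance - 128) / 127)   # brighter: toward yellow

shadow = skin_color(0)      # fully the blue modifier
midtone = skin_color(128)   # plain pink
highlight = skin_color(255) # fully the yellow modifier
```

Sweeping the control signal from dark to bright thus yields the full range of skin tones the text describes, from blueish shadows through pink mid-tones to yellowish highlights.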
  • the mixing device described above could be implemented using well understood, analog video, post-production technology.
  • a single rendering engine could be made to display all three of the color signals, one at a time to be recorded on videotape. This is practical because the rendering engines operate so quickly. In place of three rendering engines, the three videotapes would be played back in synchronization.
  • each bounded area would be rendered as a unique flat color.
  • This output would be input to a mixing device along with the high density black & white information signal.
  • the mixing device would replace that color with another color, at each point, depending on the luminance value range of the black & white information signal at that point.
  • the number of luminance value ranges could be 3, 5 or more depending on requirements and the sophistication of the mixing device. And differences between the ranges could be smoothly and continuously varied for even more color modulation.
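The range-based variant can be sketched as a small lookup. The tag names, palette colors and range breakpoints below are hypothetical; only the mechanism, replacing each flat tag color according to which luminance range the black & white signal falls into at that point, comes from the text:

```python
# Palette per tag color: (shadow, midtone, highlight) replacement colors.
PALETTES = {
    "face": ((120, 100, 160), (255, 180, 170), (255, 230, 160)),
    "sky":  ((30, 40, 90), (90, 130, 220), (180, 210, 255)),
}

def replace_color(tag, luminance):
    """Pick the replacement color for a tagged point from its luminance range."""
    shadow, mid, highlight = PALETTES[tag]
    if luminance < 85:       # three ranges here; the text allows 5 or more
        return shadow
    if luminance < 170:
        return mid
    return highlight

dark_face = replace_color("face", 40)     # shadowed skin
bright_face = replace_color("face", 220)  # highlighted skin
```

Smoothing the boundaries between ranges (e.g., by interpolating between adjacent palette entries) would give the continuous modulation the text mentions.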
  • DTR (Digital Tape Recorder)
  • the DTR system records three independent digital information signals, each of which is a monochrome signal. Designated Y (luminance), R-Y (red minus luminance) and B-Y (blue minus luminance), the first is recorded at twice the density of the other two. (It will either be possible to record each signal independently on a single machine, or to insert each signal independently by shuttling back and forth between two DTRs. Since the DTR is a digital recorder, there is virtually no signal degradation when recording multiple generations.)
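For orientation, the component form the DTR records can be sketched as below. The Rec. 601 luma weights are an assumption (the patent does not give coefficients); the point is that a luminance signal Y plus the two color-difference signals R-Y and B-Y carry the same information as RGB and round-trip losslessly:

```python
def rgb_to_components(r, g, b):
    """RGB -> (Y, R-Y, B-Y), using assumed Rec. 601 luma weights."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, r - y, b - y

def components_to_rgb(y, ry, by):
    """(Y, R-Y, B-Y) -> RGB: recover R and B directly, then solve for G."""
    r, b = y + ry, y + by
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

y, ry, by = rgb_to_components(200, 120, 80)
r, g, b = components_to_rgb(y, ry, by)   # round-trips to the original RGB
```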
  • the DTR is ideal for implementing the system described in 4,606,625.
  • the original black & white signal is to be stored at higher information density and will be stored in the Y channel.
  • the other two channels will be used as follows.
  • One channel will be used to record the output of a frame buffer or, preferably, the output of a single, high speed polygon rendering engine.
  • each individual area would be uniformly displayed as a single, unique, though arbitrary "color". Since the signal, as recorded by the DTR, is a monochrome signal, only shades of gray will register. Therefore, all of the designated colors must be shades of gray. These gray "colors" will be used later to distinguish between the various bounded areas within the image to be separately colored. This signal, once recorded, will be referred to as the "color tag signal".
  • the other channel will be used to store a confidence signal as described earlier. If these three signals were to be played back through the normal display circuitry of the recorder, the result would be unintelligible. However, the output of the three signals will, instead, be routed to a specialized decoding device whose output will be a highly sophisticated and realistic full color signal.
  • the decoding device will itself be a highly sophisticated digital device with its own microprocessor and memory circuits and will operate as follows. The three digital inputs to the decoding device will be the high-information-density original black & white information signal, and the lower density 'color tag' and 'confidence' signals, as played back by the DTR.
  • a color hue will be derived for each point on the screen.
  • Information stored in the memory of the decoding device will designate how the black & white signal will modulate each different color tag.
  • the saturation of the color is modified by the corresponding portion of the 'confidence' signal.
  • the luminance value is derived for each point on the high density black & white signal.
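The per-pixel decoding just described can be sketched with HSV as a stand-in for the color coordinate converter of Figure 4b. The tag-to-hue table and all numeric values below are hypothetical; the mechanism follows the text: the color tag selects a hue from a lookup held in the decoder's memory, the confidence signal sets the saturation, and the black & white signal supplies the value component.

```python
import colorsys

HUE_TABLE = {64: 0.05, 128: 0.33, 192: 0.60}   # hypothetical gray tag -> hue map

def decode_pixel(tag, confidence, bw):
    """tag, confidence, bw all 0..255. Returns (r, g, b) in 0..255."""
    hue = HUE_TABLE[tag]
    saturation = confidence / 255   # low confidence near boundaries -> paler color
    value = bw / 255                # high-density luminance from the B&W signal
    r, g, b = colorsys.hsv_to_rgb(hue, saturation, value)
    return r * 255, g * 255, b * 255

full = decode_pixel(128, 255, 255)   # confident green-tagged highlight: saturated
pale = decode_pixel(128, 40, 255)    # same tag near a boundary: desaturated
```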
  • the enormous advantage that is obtained from this embodiment is that it allows the various information signals that are combined to generate the sophisticated full color signal to be created and/or stored independently. Further, the two color component signals (the color tag and confidence signals) can be created and stored separately, at very low information density, and in the form that is most convenient. There is no need to provide the storage space and processing time in the main computer to combine the three signals on a frame-by-frame basis. The combination is done later as an inexpensive, real time, decoding step.
  • the two low density color component signals can be created and stored at very low information density, and not until the final step need they be combined with the high density black & white signal to create a high density full color signal.
  • the goal of this technique is also to create a more sophisticated color signal, but unlike the previous techniques, it does so by increasing the information content of the color-only signal at the beginning of the process.
  • the amount of additional information is small when compared to the additional color information created at the final stage of the preferred embodiment. Therefore, the additional burden of passing this additional information through all stages of the process (e.g., inbetweening, rendering, processing, storage, etc.) is tolerable.
  • the additional information will be created completely and automatically by the computer and will not increase overhead of human labor at all.
  • the technique involves the automatic identification and specification of sub-area boundaries within a bounded area specified by man-machine interaction or other process. These sub-area boundaries are created from contours derived from density slices or other criteria. For example, returning to the example of the sunlit face, let us assume that the values in the face run from full black (0) to full white (255). Again, we may wish to create yellowish highlights and blueish shadows, to indicate the sunlight effect, rather than a uniformly flat pink face. With this technique, we will create separate boundaries of shadow sub-areas as follows. The computer will be instructed to consider all those pixels (picture elements) within the boundary of the face area below a certain threshold, say 80. The computer will be further instructed to create sub-area boundaries around connected groups of those pixels.
  • the computer may be instructed to ignore groups of pixels that are too small to matter. The result will be islands of shadow outlined within the face boundary. These islands may now be considered as separate bounded areas to be colored blueish-pink. Similarly, islands of highlight (e.g., those pixels above value 210) would be treated as separate bounded areas to be colored yellowish-pink.
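The shadow sub-area step can be sketched with a threshold and a flood fill. The grouping method and minimum size are assumptions (the text says only "connected groups" and "too small to matter"); the threshold of 80 follows the text's example:

```python
def shadow_islands(image, threshold=80, min_size=2):
    """image: 2-D list of 0..255 values. Returns connected sets of (row, col)
    pixels below the threshold, dropping islands smaller than min_size."""
    h, w = len(image), len(image[0])
    seen, islands = set(), []
    for r in range(h):
        for c in range(w):
            if (r, c) in seen or image[r][c] >= threshold:
                continue
            # Flood fill one connected group of below-threshold pixels.
            stack, group = [(r, c)], set()
            while stack:
                pr, pc = stack.pop()
                if (pr, pc) in seen:
                    continue
                seen.add((pr, pc))
                if 0 <= pr < h and 0 <= pc < w and image[pr][pc] < threshold:
                    group.add((pr, pc))
                    stack += [(pr + 1, pc), (pr - 1, pc), (pr, pc + 1), (pr, pc - 1)]
            if len(group) >= min_size:
                islands.append(group)
    return islands

face = [
    [200, 200, 200, 200],
    [200,  40,  40, 200],
    [200, 200, 200,  50],
]
islands = shadow_islands(face)   # one two-pixel shadow island; the lone pixel is dropped
```

Each returned island can then be treated as a separate bounded area to be colored blueish-pink; the same routine with the comparison reversed finds highlight islands.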
  • these sub-area boundaries may be created by extracting a contour line defined by those pixels of only one, or a narrow range of, values.
  • Sub-area boundaries can also be created based on other criteria. For example, textures or patterns can be broken into separately colored sub-areas to add subtlety and sophistication. For example, consider a wall that has been specified, by human interaction, as a single bounded area. If that wall were covered by patterned wallpaper, the computer could be instructed to seek out various repeated elements (e.g., stripes, flowers, etc.) or the background, and create sub-area boundaries around each to be separately colored. A similar technique could be applied to stripes or plaids in fabric, stones or bricks on a wall, flowers in a garden, or any other textured area for which it is impractical to hand outline numerous objects. These examples are meant to be illustrative rather than limiting in nature and variations on this technique are intended to be within the scope of this invention.
  • the operator may notice a scratch, hole or other white defect in a black & white image.
  • the operator will then select a repair function from his menu and then indicate the defect to the computer.
  • the computer will then, in the color-only information signal, create a patch of the appropriate hue but with a darker value to match the area surrounding the scratch.
  • the computer may further soften the edges between the patch and the surrounding area in the color-only signal to hide the fact that there is a "patch".
  • the operator would select a different repair function that would create a patch of lighter color in the color-only information signal.
  • while the darker patch can successfully cover a white hole, the light patch will only be able to diminish, but not completely eliminate, the effect of a dark defect.
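The repair-patch idea can be sketched in one dimension. All numbers here are hypothetical: over the marked defect, the patch value in the color-only signal is darkened relative to the surrounding area, and the darkening ramps in from the edges so the patch blends into its surroundings rather than showing a hard border.

```python
def make_patch(surround_value, darken=0.6, width=5):
    """1-D patch profile: fully darkened in the middle, blended toward the
    surrounding value at both edges to soften the patch boundary."""
    patch = []
    for i in range(width):
        # 0 at the patch edges, 1 at its center.
        edge = min(i, width - 1 - i) / ((width - 1) / 2)
        patch.append(surround_value * (1 - (1 - darken) * edge))
    return patch

patch = make_patch(200.0)   # darker center counteracts a bright white scratch
```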

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

A black and white signal is colored by combining it with a reference signal representing designated colors which are used to fill boundary areas created by a computer to obtain a hue signal. The saturation of the hue signal is then determined in accordance with a reliability signal which indicates the probability of color error, so that the hue is less saturated in areas where it is more likely to be mixed with other colors. The saturation-adjusted hue signal is then mixed with the original black and white signal to obtain the color signal.

Description

AN IMPROVED METHOD OF COLORIZING BLACK & WHITE FOOTAGE
TECHNICAL FIELD
The instant invention relates to an improved method for adding color to black & white film or videotape motion pictures, through a combination of human determination and computer generation and computer processing of information.
BACKGROUND ART
Several computer-assisted film colorization systems now exist, including the systems disclosed in applicant's patent US 4,606,625. The basic principle of that patent is to create, through a combination of human manual specification and computer image processing, a color-only information signal, of relatively low information density, which is then combined with the original black & white information signal which is of relatively high information density.
The color-only information signal may be of relatively low information density in any combination of three ways: low spatial density, low temporal density and low color choice density. Techniques such as inbetweening, cross-dissolving and low-pass filtering enhance the final results.
The entire process may be described as comprising three steps: creation of the color-only information signal; processing of the color-only information signal; and combining the color-only information signal with the original black & white information signal to create a full-color information signal.
The computerized colorization of black & white film and videotape has recently become a viable industry. Several companies now provide commercial colorization services. However, the product delivered by these companies has been extensively criticized for its low quality. Therefore, it is desirable to create a system that will not only make colorization practical, but will produce results of an acceptably high quality while maintaining a reasonable cost. The instant application relates to a number of refinements of, and improvements to, applicant's US patent 4,606,625. The refinements are to the steps of creating, processing and/or combining (with the black & white information) the color information signal, and may be incorporated into the system described in US 4,606,625 or other computer-assisted colorization systems.
Specific complaints about commercial colorization are that the color element of the image is overly simple and washed out, and that the boundaries between colored areas are inaccurate, particularly for fast moving scenes.
These defects are due to insufficiencies in the colorization systems now in use. The design of current systems makes it economically infeasible to create product of high quality.
To understand the trade-offs between costs and quality, one must realize that the popular notion - that an 'artist' colors only one frame of a scene and the 'computer' colors all the rest - is a gross exaggeration. With the current commercial colorization systems, including the system as described in applicant's patent 4,606,625, human interaction is required to identify and specify outlines of the objects to be colored. Artificial Intelligence (AI) and pattern recognition have had limited success in being able to extract known objects in restricted situations. However, tracking and extracting multiple, unfamiliar, moving objects, that mutually overlap, in contexts that change every few seconds with each camera shot, is beyond the current state-of-the-art. AI research may one day result in an affordable, fully automated colorization technology but, for the foreseeable future, continual human judgement and interaction is still a necessary component of computerized colorization systems. Thus, although these processes are highly computer assisted, and much more efficient than hand painting each frame, they are still very labor intensive. The colorization operator must outline each separately colored object in a large proportion of the film's frames. And, since labor is the most expensive element of the process, the fewer distinct objects and the fewer colors in a scene, the less colorization costs. Mimicking the visual complexity inherent in a 'real' color scene is very expensive.
Economy also plays a part in the choice of washed-out pastels. With thousands of frames to paint, colorizers can afford to spend little time getting the details correct on each one. (Further, even if unlimited time were available to colorizers, some inconsistencies in object boundaries, from frame to frame, would still result.) And, the computer process that 'colors' the intervening frames is not always accurate. The greater the movement in the scene, the more these problems are apparent. The inaccuracies in the placement of colors are much less noticeable if the colors are less saturated. In some converted films the colors are so washed-out they can barely be seen - but then neither can the mistakes.
DESCRIPTION OF DRAWINGS
Figure 1a shows a typical unprocessed confidence signal frame for a circular area.
Figure 1b shows that same frame after low-pass filtering.
Figure 1c shows a cross section of an unprocessed confidence signal line.
Figure 1d shows that same cross section after low-pass filtering.
Figure 2aa shows a typical image; a red circle on a blue background.
Figure 2a shows a neutral gray zone separating the red circle from the blue background.
Figure 2b shows the intersection of adjacent red and blue areas without a separating neutral zone, before low-pass filtering.
Figure 2c shows the intersection of adjacent red and blue areas with a separating neutral zone, before low-pass filtering.
Figure 2d shows the intersection of adjacent red and blue areas without a separating neutral zone, after low-pass filtering.
Figure 2e shows the intersection of adjacent red and blue areas with a separating neutral zone, after low-pass filtering.
Figure 3 shows the output of three polygon renderers (or videotape recorders, playing back a recording of one, or more, polygon renderer's output). The rendering devices are displaying a bounded area representing a face and depicted as a circle in the drawing. The first renderer displays the face area as pink for skin tone. The second displays the face as blue and the third as yellow, to be used as modifications to the pink signal for shadows and highlights respectively. The outputs of the three are routed to a color signal mixer which also has an input for a black & white signal to be used as control. The black & white signal is also combined with the output of the color mixer to create a full color signal.
Figure 4a shows the use of a digital tape recorder to record separate black & white, color tag and confidence signal components, and the decoding of those components into standard color coordinate systems such as red, green and blue.
Figure 4b shows a detail of the decoder in figure 4a. Shown are the hue generation and color coordinate converter subsections.
Figure 5a shows a face with a uniform flat pink color overlay.
Figure 5b shows that same face with the addition of a bluish tinted bounded area representing a shadow.
Figure 5c shows that same face with the addition of a yellowish tinted bounded area representing a sunlit highlight.
DISCLOSURE
The instant application is a continuation-in-part of the applicant's US patent 4,606,625 and US application, serial number 601,091. It relates to improvements to colorization systems including the system described in 4,606,625.
A first set of improvements relates to reducing the impact of inaccuracies in object boundary outlines created by either a human or a computer.
As described above, the selection of unsaturated or pastel colors helps hide the inaccuracies of boundaries between the individual image areas to be colored. In addition, often a set of related hues is chosen so as to further minimize these defects. For example, in a number of colorized films most scenes are composed of pink, purple and blue pastels that have been chosen so as to provide minimal color contrast. However, the resulting colorized film is monotonous and unrealistic.
Also, the low-pass filtering that may be intentionally used as part of patent 4,606,625 or may be inherent in the recording or broadcast process can cause mixing of adjacent color areas at the boundaries. Similar pastels blend unobtrusively.
While the selective use of well-saturated and contrasting colors would create a more realistic colorized effect, if such colors are used in adjacent image areas, any inaccuracies in, or color mixing at, the boundary between those areas will be made more obvious.
A technique that will allow the use of more saturated and contrasting colors, yet reduce the foregoing problems, is to vary the saturation of the color information over the image. Colors will be made less saturated where the probability of error is high, i.e., near the boundaries. Colors will be made more saturated where the probability or confidence is high that the color choice is accurate, i.e., away from the boundaries.
There are at least two distinct ways to implement the above concept. One is to create a 'confidence signal' that will indicate what the color saturation is to be at each point on the image. This may be done by first having the computer render the boundaries, between individual areas to be colored, as lines, e.g. black on white. Once these lines are drawn, the resulting image would be low-pass filtered or 'blurred'. This will result in an image with dark lines surrounded by fuzzy gray areas. The areas corresponding to the objects to be colored will be mostly white, shading to gray near their edges. This resulting image would be used as control information when combining the 'color-only' information with the original black & white information. The color saturation of the final full-color image would be additionally varied, on a point-by-point basis, such that, where the control image is lighter, the final image will be made more saturated.
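The confidence-signal idea can be sketched in a few lines. The following is a minimal one-dimensional illustration, assuming a simple moving average stands in for the low-pass filter; the function name `box_blur` and the specific radius and saturation values are hypothetical.

```python
def box_blur(signal, radius):
    # moving-average low-pass filter, standing in for the 'blur' step
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# cross section of one scan line of the rendered boundary image:
# 1.0 = white (interior of an area), 0.0 = black (boundary line)
line = [1.0] * 8 + [0.0] * 3 + [1.0] * 8

# the blurred image serves as the confidence signal
confidence = box_blur(line, radius=2)

# saturation is scaled point-by-point by confidence, so color
# fades out near the (uncertain) boundary
base_saturation = 0.9
saturation = [base_saturation * c for c in confidence]
```

A full implementation would filter in two dimensions, but the principle is the same: the blurred boundary image directly scales saturation, so color is weakest exactly where boundary placement is least certain.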
An alternate approach is to incorporate the 'confidence' information directly into the color-only information component. This would be accomplished by computer rendering the boundaries between individual areas as thick, neutral colored lines (in general, gray, black or white). When the resulting color-only information frame were low-pass filtered, the neutral line would act as a buffer, preventing the colors from two differently colored areas from mixing. Further, it would create a zone of reduced saturation between the two colored areas. This zone would vary smoothly with minimum saturation at the line's center.
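The buffering effect of a neutral line can be illustrated on a single scan line. In this sketch (hypothetical values; `box_blur` is again a simple moving average standing in for the low-pass filter), the red and blue chroma channels are filtered with and without a four-pixel neutral gap, and the pointwise overlap of the two filtered channels measures how much the colors mix.

```python
def box_blur(signal, radius):
    # moving-average low-pass filter
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# chroma channels across one scan line where a red area meets a blue area
red_abrupt = [1.0] * 8 + [0.0] * 8      # no neutral buffer
blue_abrupt = [0.0] * 8 + [1.0] * 8
red_buffered = [1.0] * 6 + [0.0] * 10   # 4-pixel neutral (gray) zone
blue_buffered = [0.0] * 10 + [1.0] * 6

R = 2
# overlap of the two filtered channels = amount of color mixing
mix_abrupt = [min(a, b) for a, b in
              zip(box_blur(red_abrupt, R), box_blur(blue_abrupt, R))]
mix_buffered = [min(a, b) for a, b in
                zip(box_blur(red_buffered, R), box_blur(blue_buffered, R))]
```

With the neutral zone wider than the filter's reach, each color blends only with gray (a saturation dip), never with the opposing color.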
An additional result of either of these implementations would be that, when combining the color-only information with the black & white information, the reduced saturation zones around object edges will leave the black & white object edges less obscured by the color overlay. The high density visual information in the scene, such as that used by the eye/brain to perceive edges, is provided by the black & white information component. Thus, leaving the black & white edges less obscured by color will help reduce the perception that colorized films are soft or fuzzy when compared with the original black & white versions.
When rendering the lines that are used to generate the confidence signal or the neutral boundary lines of the second implementation, the quality of the lines may be varied based on a number of criteria. Qualities that may be varied include line width, opacity, darkness or hue. Additionally, the amount or type of low-pass (or other) filtering applied to the images containing those lines may be varied.
Criteria for varying line or filtering quality may be based upon characteristics of individual areas to be colored, or upon characteristics of the two adjacent areas that border on the line. For example:
size of object - very small objects may need to be bordered by finer lines, or filtered less;
movement of object - fast moving objects may require thicker lines and/or more or less filtering because object edges may be less distinct and harder to specify; still objects may require thin boundaries because thick boundaries will become noticeable under prolonged scrutiny;
prominence of object - objects that are particularly noticeable or important to a scene may require special treatment; also, objects may be accentuated by special treatment;
luminance contrast between objects - line and filtering quality may be varied in accordance with the difference in brightness between the two objects separated by the line;
chrominance contrast between objects - line and filtering quality may be varied in accordance with the difference in the intended hues between the two objects separated by the line;
confidence of object boundary specification - for example, if object boundaries in some frames were specified by human operation and object boundaries in other frames were derived by computer interpolative inbetweening, the accuracy of the two classes of boundaries might not be equal. Thus, the less accurate boundaries might be represented by thicker lines or filtered differently.
The examples above are meant to be illustrative rather than limiting in nature. Other criteria may be considered when adjusting the line and filtering parameters, or other image processing parameters, and are within the scope of this invention. In general, the line and filtering parameters will be adjusted so as to provide the optimal perceived results.
For example, if the techniques are applied too sparingly, the problems they are meant to alleviate will persist; if the techniques are applied too heavily, they may create their own visual anomalies (e.g., too thick a line with too little filtering could result in a perceivable non-colored border around objects). The parameters will be adjusted to achieve an optimal balance.
A next set of techniques relates to creating object boundary outlines more efficiently.
Practical problems that exist in creating object boundary outlines with traditionally computer-assisted colorization systems are:
the human labor component makes the process expensive and 'drawing' enough boundaries for realistic detail is impractical;
humans are not as mechanically precise as computers and cannot apply the same 'edge specification rules' consistently from frame to frame.
On the other hand, while completely automated computer systems do operate very inexpensively and precisely, even with AI, computers cannot identify arbitrary objects or their boundaries. What follows is a description of a system that combines these two approaches to achieve a more effective balance between human and computer effort.
The underlying approach is to redefine the division and overlap between the tasks assigned to the human operator and the computer. The operator will be further freed from routine and repetitive tasks. His role in making specific judgements, beyond the computer's ability, will be increased. The interface between operator and computer will be made more flexible and complex to allow the human to more fully communicate his judgements to the machine. The computer will then act on those judgements much more efficiently than the human could.
The system will contain two major sub-systems, a collection of algorithms for edge and contour extraction, and a set of line editing functions. The computer has the ability to apply such software 'tools' efficiently and rapidly. However, computers do not now have the ability to determine how and where to best apply these 'tools' to black & white image frames. Therefore, they cannot create the object boundaries that comprise the color information frames required to colorize black & white frames. The two major sub-systems will be integrated with other functions in an overall interactive structure that will allow the operator to give complex directions to the computer which it will then adapt for use with a series of frames.
Well known computer image processing techniques exist for extracting contours and edges from digitized images. These may include extraction based on differences in brightness, texture, sharpness or other quality, on connectivity or other criteria. The particulars of these techniques are not within the scope of the instant invention.
In the instant invention a number of edge and contour extraction algorithms are applied by the computer to quickly and inexpensively create potential boundary lines between objects. A significant number of these lines will be 'false boundaries'. For example, a shadow overlapping one or more objects may be outlined as a distinct object. Similarly, where adjacent objects are very close in brightness, texture, etc., two objects may not be distinguished completely.
At this point, the scene, with the various edges and contours overlaid, is displayed to the operator. The operator will be supplied with a set of software functions, integrated into an interactive system, that will allow for the easy and efficient editing of the total set of potential boundary lines. Functions will be included that allow for some lines to be eliminated, other lines to be connected into a continuous object boundary, lines to be adjusted, segments to be 'hand drawn' to fill in gaps or missing boundaries, and other line editing functions.
For the first frame of a given scene, the operator and computer will interact through another set of software functions, and the operator will direct the computer as to which image processing algorithms to apply for various image areas and how the parameters of each algorithm will be adjusted. The interaction can be accomplished by displaying the resulting extracted lines overlaid upon the image, and changing the results as the operator adjusts various parameters on a menu of functions. This session will then set the rules for applying the image processing algorithms to a number of similar frames in a scene.
Learning to operate the system described above will require more training, experience and effort on the part of the operators when compared to 'traditionally computer-assisted' colorization systems. However, the long-term gains in efficiency, accuracy and consistency, when compared with computer-assisted hand drawing of outlines, will more than make up for the added effort.
Once the operator has directed the computer as to how to best apply the various image processing algorithms to a scene, and once the operator has edited the potential boundary lines created for the first image frame of that scene, the second and subsequent frames will be treated in a more automated fashion. There is generally only slight movement of objects from frame to frame. Therefore, while each extracted line of a second frame may be slightly changed with respect to a first frame, the approximate position and shape of each line will be similar. Therefore, the computer will contain software to retain the editing functions applied by the operator to the first frame, and to apply those same functions to the next frame. For example, if on a first frame the operator had edited the boundary lines to "remove the four long lines in the upper-right-hand corner and connect the three short lines on the right side into a continuous boundary line" this same series of operations would be retained and applied to the next frame. Although the exact position and shape of each line may have changed slightly, it is not beyond current AI capabilities to take such changes into account.
The operator will have the opportunity to review the computer's attempt at applying the correct line creation and editing functions to each subsequent frame. When computer errors result, the operator will have the ability to re-enter the line editing mode and make corrections. These corrections will be retained and used to update the set of editing functions to be automatically applied to the next frame. (Similarly, at any point in the sequence, the operator will be able to enter the mode where the line extraction algorithms are specified and update that set of rules to be applied by the computer.)
An additional feature of the software, as described above, will be that information about object movements will be retained and used to update the set of rules to be applied by the computer when performing line extraction and line editing. For example, assume the operator had indicated to the computer to apply 'line extraction by texture comparison' to a particular area of the image in a first frame. Lines would be created by the extraction algorithm and then edited into an object boundary specification. If, for subsequent frames, the object boundary created by this process were moving in a particular direction, the area over which the 'line extraction by texture comparison' algorithm was applied would be similarly moved.
Similarly, for the line editing phase, if an object boundary were created by connecting a number of lines and deleting all other lines contained within the boundary, as that edited object boundary moved, from frame to frame, the computer would automatically be instructed to look for the relevant lines to be edited, in the next frame, at an updated position.
What has been described above involves automatic object tracking. An alternative is to apply computer interpolative inbetweening. The operator would specify the position of objects, areas to apply algorithms, or relevant lines to be edited, in two non-adjacent frames. The computer would calculate intervening positions, to apply the rules, for the intervening frames.
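In its simplest form, interpolative inbetweening of a position between two operator-specified keyframes reduces to linear interpolation. A minimal sketch (the function name and the 2-D point representation are illustrative):

```python
def inbetween(p0, p1, num_intervening):
    # linearly interpolate a 2-D position between two keyframes,
    # producing one position per intervening frame
    frames = []
    for k in range(1, num_intervening + 1):
        t = k / (num_intervening + 1)
        frames.append((p0[0] + t * (p1[0] - p0[0]),
                       p0[1] + t * (p1[1] - p0[1])))
    return frames

# an object specified at (0, 0) in one keyframe and (30, 12) in
# another, with two frames between them
mid = inbetween((0.0, 0.0), (30.0, 12.0), 2)
```

The same interpolation applies whether the quantity being inbetweened is an object position, the corner of an area over which an extraction algorithm is run, or the anchor point of a retained editing operation.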
As AI software techniques become more sophisticated and capable, the adaptive functions described above will become more automatic and accurate. Eventually, a colorization system may be developed which truly operates by having an operator paint and outline only a first frame of a scene (or even no frames). The process as described above will form the basis for such a fully automatic system and is within the intended scope of the instant invention.
A related feature of the refined system is a technique to be applied to often-encountered image features. For example, a problem mentioned, on a particular colorized film, is that "Ol' Blue Eyes" (a well known nickname for Frank Sinatra) was rendered as "Ol' Brown Eyes". This is due to the fact that it was not practical to separately outline Sinatra's blue eyes as individual objects. The flesh tone applied to the rest of his face was applied to his eyes as well, creating dark pink or 'brown' irises. Outlining each eye for each face in each scene is an impractical burden with current colorization systems. Nor is automatic identification of eyes in a scene practical with current AI techniques. However, a system based on human-machine cooperation as explained above is practical and is described below. The operator would have a number of additional functions on his menu, for example 'eye identification'. The operator would then point out a number of key locations to the computer, for example, left corner, right corner and pupil. Once the computer was directed to the appropriate image area, and based on well defined feature extraction rules, the computer would identify and specify the various parts of the eye to be individually colored. Feature extraction rules could be based on the relative lightness of the sclera (white of the eye) compared to skin tone, the relative darkness of the iris compared to (Caucasian) skin tone, the round shape of the iris, the long and dark shape of the eyebrow, etc. Individual eye features such as the sclera, iris, brow and lashes would then be individually specified, by the computer, with the appropriate colors, and other areas, e.g., the eye lids, would be treated similarly to the face skin.
A similar function would allow for the cooperative (human and computer) identification and treatment of the lips, teeth and interior, as part of a 'mouth identification' function. Functions for other common objects or image features, such as hands, shoes, ties, hats, hair, etc., would also be made available. Specialized functions for particular films might be incorporated such as 'guns' for a western film, or trees, birds, mountains, etc. An 'object tracking' and/or 'location by inbetweening' mechanism, similar to that described above, would also be incorporated into the 'feature identification' software functions described here so that the computer could find such objects on its own for intervening frames. The above examples are meant to be illustrative rather than limiting in nature.
The refined system features described above relate to the reduction of the impact of inaccuracies in object boundary outlines, and to creating object boundary outlines more efficiently. The next several refinements relate to improving the look of colorization that has been described as washed-out, over-simple, unsophisticated, unsubtle or timid.
As has been discussed in the aforementioned criticisms, the colorized films currently being produced are uninteresting and unsophisticated. In its most restrictive application in the color choice domain, applicant's US patent 4,606,625 would produce similarly uninteresting results, although it would produce those results much faster and more efficiently. The restriction of the 'color choice density' of the color signal, as taught by '625, allows for the use of time, money, processing and labor saving techniques, such as inbetweening or the use of a high speed polygon rendering engine to display the final version of the color-only signal.
However, it is possible to maintain the color-only signal in this 'low color choice density' state and obtain these advantages throughout all stages of the colorization process. Yet, at the last stage of the process, specifically, the combination of the color-only information signal with the original black and white signal, the 'low color choice density' color-only information can be modulated or varied based on the high density information present in the black and white image with which it is to be combined. The resulting modifications will enhance the color-only information signal by adding variation and sophistication to produce a more interesting and realistic effect.
In its most restrictive embodiment, limiting the 'color choice density' of the color-only information signal to a single color for each bounded area, 4,606,625 will result in a flat and simple colorized film. However, such a representation, as bounded areas, each of a uniform color, is ideally suited as input to a high speed polygon rendering engine. Such devices, developed for flight simulators and CAD/CAM systems, can draw colored polygons at an extremely high rate. Some of these devices would be capable of displaying all of the bounded, colored areas contained in one color-only frame in real- or near-real-time (i.e., 24 or 30 times a second).
The deficiencies inherent in using a polygon rendering engine as described above may be overcome by the following method. Let us restrict the discussion to a single bounded area representing a person's face. As described above the rendering engine would display a bounded area of a single uniform pink color. However, it would be advantageous to vary the color of the face over this area. For example, if the face were lighted by sunlight, this might be indicated by coloring the bright highlight portions of the face a slightly yellowish pink. Similarly, the deeply shadowed portions would be colored a slightly blueish pink. This effect might be obtained by running three rendering engines in parallel. One would display the bounded face area as pink, the second would display the same area as yellow, the third as blue. The video output of these three rendering engines would then be input to a video mixing device. The high-density black & white information signal would also be input to the video mixing device as a control signal. Wherever the control signal were brightest, progressively more of the yellow signal would be added to the pink signal. Wherever the control signal were darkest, progressively more of the blue signal would be added to the pink signal. Thus, a composite color-only signal is produced with a full range of skin tones from blueish shadows, through full pink mid-tones, to yellowish highlights. This composite signal would then be combined with the high-density black & white signal to create a more realistic full color image. The mixing device described above could be implemented using well understood, analog video, post-production technology.
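The three-source mix can be sketched per pixel as follows. The threshold values and the specific pink, yellow and blue colors are illustrative assumptions; the disclosure contemplates analog video mixing equipment rather than software, but the blending logic is the same.

```python
def mix_face(luma, base, highlight, shadow, hi=0.7, lo=0.3):
    # where the control (black & white) signal is brightest, blend
    # toward the highlight color; where darkest, toward the shadow color
    if luma >= hi:
        w = (luma - hi) / (1.0 - hi)
        other = highlight
    elif luma <= lo:
        w = (lo - luma) / lo
        other = shadow
    else:
        return base
    return tuple((1.0 - w) * b + w * o for b, o in zip(base, other))

pink = (1.0, 0.6, 0.6)     # base skin tone (first renderer)
yellow = (1.0, 1.0, 0.3)   # sunlit highlight modifier (second renderer)
blue = (0.4, 0.4, 1.0)     # shadow modifier (third renderer)
```

Applied to every pixel of the face area, this yields the full range of skin tones described above: bluish shadows through full pink mid-tones to yellowish highlights.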
In an alternative embodiment, a single rendering engine could be made to display all three of the color signals, one at a time to be recorded on videotape. This is practical because the rendering engines operate so quickly. In place of three rendering engines, the three videotapes would be played back in synchronization.
It would be possible to use more than three rendering engines (or videotapes) to produce more than three uniform color signals to be mixed. Further, the way these color-only video signals would be mixed need not be as simple and straightforward as described above. Many different and complex mixing schemes could be used to create a final image of great variation and subtlety.
However, if it were desired to create a color mixing scheme so complex as to require many more than three rendering engines (or videotapes) this technique would become unwieldy and an alternate embodiment would be preferred. This alternative embodiment would require a single rendering engine but a more specialized and sophisticated mixing device.
In this embodiment, each bounded area would be rendered as a unique flat color. This output would be input to a mixing device along with the high density black & white information signal. For each unique flat color, the mixing device would replace that color with another color, at each point, depending on the luminance value range of the black & white information signal at that point. The number of luminance value ranges could be 3, 5 or more depending on requirements and the sophistication of the mixing device. And differences between the ranges could be smoothly and continuously varied for even more color modulation.
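Such a mixing device amounts to a two-level lookup: the flat tag color selects a palette entry, and the luminance band of the black & white signal selects the replacement color. A minimal sketch, using the face example with illustrative band thresholds and names (all hypothetical):

```python
# hypothetical palette: one entry per flat tag color, mapping each
# luminance band to a replacement color
palette = {
    "face": {"shadow": "bluish-pink",
             "mid": "pink",
             "highlight": "yellowish-pink"},
}

def band(luma, low=80, high=210):
    # classify an 8-bit luminance value into one of three ranges
    if luma < low:
        return "shadow"
    if luma > high:
        return "highlight"
    return "mid"

def decode(tag, luma):
    # replace the flat tag color according to the luminance range
    return palette[tag][band(luma)]
```

More bands, or smooth interpolation between the bands' colors, gives the continuous modulation described above.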
PREFERRED EMBODIMENT
The recent introduction of a new device, the SONY Digital Tape Recorder (DTR), will facilitate the integration of a number of these refinements into a single system. The DTR system records three independent digital information signals, each of which is a monochrome signal. Designated Y (luminance), R-Y (red minus luminance) and B-Y (blue minus luminance), the first is recorded at twice the density of the other two. (It will either be possible to record each signal independently on a single machine, or to insert each signal independently by shuttling back and forth between two DTRs. Since the DTR is a digital recorder, there is virtually no signal degradation when recording multiple generations.)
Thus, the DTR is ideal for implementing the system described in 4,606,625. The original black & white signal is to be stored at higher information density and will be stored in the Y channel. The other two channels will be used as follows.
One channel will be used to record the output of a frame buffer or, preferably, the output of a single, high speed polygon rendering engine. In this embodiment, each individual area would be uniformly displayed as a single, unique, though arbitrary "color". Since the signal, as recorded by the DTR, is a monochrome signal, only shades of gray will register. Therefore, all of the designated colors must be shades of gray. These gray "colors" will be used later to distinguish between the various bounded areas within the image to be separately colored. This signal, once recorded, will be referred to as the "color tag signal" .
The other channel will be used to store a confidence signal as described earlier. If these three signals were to be played back through the normal display circuitry of the recorder, the result would be unintelligible. However, the output of the three signals will, instead, be routed to a specialized decoding device whose output will be a highly sophisticated and realistic full color signal. The decoding device will itself be a highly sophisticated digital device with its own microprocessor and memory circuits and will operate as follows. The three digital inputs to the decoding device will be the high-information-density original black & white information signal, and the lower density 'color tag' and 'confidence' signals, as played back by the DTR.
From a combination of the color tag signal and the original black & white signal, a color hue will be derived for each point on the screen. Information stored in the memory of the decoding device will designate how the black & white signal will modulate each different color tag. Once the hue is chosen, the saturation of the color is modified by the corresponding portion of the 'confidence' signal. Lastly, the luminance value is derived for each point on the high density black & white signal. Once the hue, saturation and luminance values are determined for a point, the last stage will be to translate those parameters into a form suitable for input to whatever particular recording device is being used. These could be R,G,B; Y, R-Y, B-Y; or any other appropriate color specification parameters.
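The per-pixel decoding step can be sketched as follows, assuming a hue-saturation-value color model and using Python's standard `colorsys` module for the final coordinate conversion. The tag-to-hue table is a stand-in for the lookup information held in the decoder's memory; all specific values are illustrative.

```python
import colorsys

# stand-in for the decoder's memory: gray 'color tag' level -> base hue
hue_for_tag = {0.25: 0.0, 0.50: 0.33, 0.75: 0.66}

def decode_pixel(tag, confidence, luminance, max_saturation=0.8):
    # hue chosen from the color tag signal; saturation scaled by the
    # confidence signal; value taken from the high density black &
    # white signal; finally converted to R, G, B coordinates
    hue = hue_for_tag[tag]
    sat = max_saturation * confidence
    return colorsys.hsv_to_rgb(hue, sat, luminance)
```

Where confidence falls to zero (near object boundaries), the output collapses to the original gray value, leaving the black & white edges unobscured; the same final stage could instead emit Y, R-Y, B-Y or any other appropriate color coordinates.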
The enormous advantage that is obtained from this embodiment is that it allows the various information signals that are combined to generate the sophisticated full color signal to be created and/or stored independently. Further, the two color component signals (the color tag and confidence signals) can be created and stored separately, at very low information density, and in the form that is most convenient. There is no need to provide the storage space and processing time in the main computer to combine the three signals on a frame-by-frame basis. The combination is done later as an inexpensive, real time, decoding step.
The two low density color component signals can be created and stored at very low information density and not until the final step need they be combined with the high density black & white signal to create a high density full color signal. Thus there is no need to store a high density, full-color signal or a high density intermediate signal in computer memory; that signal need only be recorded on film or videotape at the final stage.
ALTERNATE TECHNIQUE
Another, less drastic, refinement to the system as taught by 4,606,625 will also result in film colorization that is superior to the current commercial product. Although less comprehensive than the preferred embodiment, this technique has its own merit and can be applied to '625, the preferred embodiment above or any of the current commercial systems.
The goal of this technique is also to create a more sophisticated color signal, but unlike the previous techniques, it does so by increasing the information content of the color-only signal at the beginning of the process. However, the amount of additional information is small when compared to the additional color information created at the final stage of the preferred embodiment. Therefore, the additional burden of passing this additional information through all stages of the process (e.g., inbetweening, rendering, processing, storage, etc.) is tolerable. The additional information will be created completely and automatically by the computer and will not increase the overhead of human labor at all.
Basically, the technique involves the automatic identification and specification of sub-area boundaries within a bounded area specified by man-machine interaction or other process. These sub-area boundaries are created from contours derived from density slices or other criteria. For example, returning to the example of the sunlit face, let us assume that the values in the face run from full black (0) to full white (255). Again, we may wish to create yellowish highlights and bluish shadows, to indicate the sunlight effect, rather than a uniformly flat pink face. With this technique, we will create separate boundaries of shadow sub-areas as follows. The computer will be instructed to consider all those pixels (picture elements) within the boundary of the face area below a certain threshold, say 80. The computer will be further instructed to create sub-area boundaries around connected groups of those pixels. The computer may be instructed to ignore groups of pixels that are too small to matter. The result will be islands of shadow outlined within the face boundary. These islands may now be considered as separate bounded areas to be colored bluish-pink. Similarly, islands of highlight (e.g., those pixels above value 210) would be treated as separate bounded areas to be colored yellowish-pink.
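The shadow-island step is, in effect, a threshold followed by connected-component labeling. A minimal sketch, assuming 4-connectivity and the illustrative parameter values above (the function name is hypothetical):

```python
from collections import deque

def shadow_islands(pixels, threshold=80, min_size=2):
    # find 4-connected groups of pixels below the threshold,
    # ignoring groups too small to matter
    h, w = len(pixels), len(pixels[0])
    seen = [[False] * w for _ in range(h)]
    islands = []
    for y in range(h):
        for x in range(w):
            if seen[y][x] or pixels[y][x] >= threshold:
                continue
            # breadth-first flood fill of one connected group
            group, queue = [], deque([(y, x)])
            seen[y][x] = True
            while queue:
                cy, cx = queue.popleft()
                group.append((cy, cx))
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not seen[ny][nx]
                            and pixels[ny][nx] < threshold):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if len(group) >= min_size:
                islands.append(group)
    return islands
```

Each returned island becomes a separate bounded sub-area to be colored bluish-pink; running the same routine with an inverted test (pixels above 210) yields the highlight sub-areas.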
Alternately, these sub-area boundaries may be created by extracting a contour line defined by those pixels of only one, or a narrow range of, values.
Sub-area boundaries can also be created based on other criteria. For example, textures or patterns can be broken into separately colored sub-areas to add subtlety and sophistication. For example, consider a wall that has been specified, by human interaction, as a single bounded area. If that wall were covered by patterned wallpaper, the computer could be instructed to seek out various repeated elements (e.g., stripes, flowers, etc.) or the background, and create sub-area boundaries around each to be separately colored. A similar technique could be applied to stripes or plaids in fabric, stones or bricks on a wall, flowers in a garden, or any other textured area for which it is impractical to hand outline numerous objects. These examples are meant to be illustrative rather than limiting in nature and variations on this technique are intended to be within the scope of this invention.
ADDITIONAL IMPROVEMENT
Commercial colorizers have been taking unusual care to obtain clean and restored black & white prints prior to colorization. While cleaning the negative and creating a fresh print are straightforward procedures, scratches or stubborn dirt may create white (or black) spots or lines even on the refurbished print. In several of the embodiments of 4,606,625, although the original black & white image is referenced when creating the color-only information signal, it remains unchanged until a final combination stage. In some embodiments the color-only signal will vary in hue but be of a uniform and moderate luminance value and be used simply as an overlay signal. For those embodiments, or other similarly functioning colorization systems, the following technique may be incorporated.
As the black & white signal is available for reference in creating the color-only signal, the operator may notice a scratch, hole or other white defect in a black & white image. The operator will then select a repair function from his menu and then indicate the defect to the computer. The computer will then, in the color-only information signal, create a patch of the appropriate hue but with a darker value to match the area surrounding the scratch. The computer may further soften the edges between the patch and the surrounding area in the color-only signal to hide the fact that there is a "patch". Similarly, for dust or dark defects on the black & white frame the operator would select a different repair function that would create a patch of lighter color in the color-only information signal. However, whereas the darker patch can successfully cover a white hole, the light patch will only be able to diminish, but not completely eliminate, the effect of a dark defect.
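The repair step above (a darker patch over the defect in the color-only signal, with softened edges) can be sketched roughly as below. The helper name `patch_defect`, the representation of the color-only signal as a 2-D value channel, and the single-pixel feathering ring are all assumptions made for illustration; a real system would work on the full hue/value overlay.

```python
def patch_defect(color_value, defect_pixels, surround_value, feather=0.5):
    """Cover a white defect in the color-only signal: pixels inside the
    defect get a value matching the darker surrounding area; pixels on
    the defect's immediate border get a blend, softening the edge so
    the patch is not visible.  `color_value` is a 2-D list of the
    color-only signal's value (luminance) channel, modified in place."""
    h, w = len(color_value), len(color_value[0])
    edge = set()
    for (y, x) in defect_pixels:
        color_value[y][x] = surround_value        # match the surround
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in defect_pixels:
                edge.add((ny, nx))
    for (y, x) in edge:                            # soften the patch edge
        color_value[y][x] = (1 - feather) * color_value[y][x] + feather * surround_value
    return color_value
```

The same routine with a lighter `surround_value` would approximate the dust repair, though, as the text notes, a light patch can only diminish a dark defect, not fully hide it.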
It will thus be seen that the objects set forth above, among those made apparent from the preceding description, are efficiently obtained and certain changes may be made in carrying out the above method and in the construction set forth. Accordingly, it is intended that all matter contained in the above description or shown in the accompanying figures shall be interpreted as illustrative and not in a limiting sense.
Now that the invention has been described, what is claimed as new and desired to be secured by Letters Patent is:

Claims

1. A method for generating an information signal comprising the steps of:
a. specifying at least one area within a black & white image frame to create at least one specified area;
b. generating a boundary line around at least one of said specified areas;
c. processing said boundary line to generate a gradated information signal;
d. applying said gradated information signal to a color information signal to create a gradated color information signal.
2. A method as in claim 1 where the processing specified in step c is based on characteristics of the specified area.
3. A method as in claim 1, wherein the application of said gradated information signal to create a gradated color information signal is performed as part of a decoding procedure.
4. A method for colorizing a black & white information signal comprising the steps of:
a. identifying at least one area in the black & white information signal;
b. separating said black & white information signal into at least two specified signal parts;
c. creating a color-only signal part for at least two of said specified signal parts;
d. separating at least two adjacent color-only signal parts by a neutral zone;
e. combining said color-only signal with said black & white information signal to create a full-color information signal.
5. A method as in claim 4 with the additional step, between steps d. and e., of:
processing said color-only information signal.
6. A method as in claim 4 where the quality of said neutral zone is varied in accordance with some characteristic of at least one of said signal parts.
7. A method as in claim 4 where the quality of said processing is varied in accordance with some characteristic of at least one of said signal parts.
8. A method for generating boundary information comprising the steps of:
a. interactively adjusting the parameters of image processing algorithms;
b. creating by said image processing algorithms a multiplicity of image lines;
c. editing said image lines to create boundary information.
9. A method as in claim 8 comprising the additional step of applying said adjusted image processing algorithms to multiple frames in an image sequence.
10. A method as in claim 8 comprising the additional step of applying said editing functions to image lines created from multiple frames in an image sequence.
11. A method as in claim 8 comprising the additional step of adaptively adjusting said image processing algorithms based on the movement of some image element.
12. A method as in claim 8 comprising the additional step of adaptively adjusting said editing functions based on the movement of some image element.
13. A method for generating boundary information for an image object comprising the steps of:
a. identifying the object in a black & white image frame;
b. inputting into a computer the position of at least one feature of the identified object;
c. instructing the computer to apply an image processing algorithm to generate boundary information for that image object.
14. A method as in claim 13 wherein the image object is an eye.
15. A method for generating a composite color information signal comprising the steps of:
a. specifying at least one area within a black & white image frame to create at least one specified area;
b. creating at least two color information signals for said specified area;
c. modulating at least one of said color information signals based on at least one image characteristic of said specified area to produce at least one modulated color information signal;
d. combining the said color information signals to produce a composite color information signal.
16. A method as in claim 15 wherein the said image characteristic is luminance information.
17. A method as in claim 15 wherein the said image characteristic is texture information.
18. A method for colorizing a sequence of black & white image frames employing a digital tape recorder to store separate information components comprising a black & white information component, a color tag information component and a confidence information component.
19. A method for generating a full color image comprising the steps of:
a. specifying at least one area within a black & white image frame to create at least one specified area;
b. using a polygon rendering engine to produce a polygonally rendered information signal for said specified area;
c. processing in combination said polygonally rendered information signal and said black & white image to generate a color information signal;
d. processing in combination said color information signal and said black & white image to generate a full color information signal.
20. A method for generating a full color image comprising the steps of:
a. specifying at least one area within a black & white image frame to create at least one specified area;
b. generating a boundary line around at least one of said specified areas;
c. processing said boundary line to generate a gradated information signal;
d. producing a polygonally rendered information signal for said specified area;
e. processing in combination said polygonally rendered information signal and said black & white image to generate a color information signal;
f. applying said gradated information signal to a color information signal to create a gradated color information signal;
g. processing in combination said gradated color information signal and said black & white image to generate a full color information signal.
21. A method for generating a full color image comprising the steps of:
a. specifying at least one area within a black & white image frame to create at least one specified area;
b. using a polygon rendering engine to produce a polygonally rendered information signal for said specified area;
c. processing in combination said polygonally rendered information signal and said black & white image to generate a full color information signal.
22. A method of generating boundary information comprising the steps of:
a. specifying at least one area within a black & white image frame to create at least one specified area;
b. identifying for said specified area, at least two sub- areas;
c. specifying for each sub-area a color designation.
23. A method as in claim 22 wherein the sub-areas are specified based on luminance characteristics of the specified area.
24. A method for colorizing black & white frames substantially as specified in the disclosure and drawings.
25. A method for colorizing a black & white information signal comprising the steps of:
a. identifying at least one area in the black & white information signal;
b. separating said black & white information signal into at least two specified parts;
c. creating a color-only signal for at least two of said specified parts;
d. identifying at least one image defect in said black & white information signal;
e. creating a density variation in the color-only signal to correct for said identified image defect;
f. combining said color-only signal with said black & white information signal to create a full-color information signal.
26. A product created by any of the methods specified in claims 1 through 25.
EP19880903009 1987-02-18 1988-02-18 An improved method of colorizing black and white footage. Withdrawn EP0364455A4 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US1742187A 1987-02-18 1987-02-18
US17421 1987-02-18
US14195988A 1988-01-07 1988-01-07
US141959 1988-01-07

Publications (2)

Publication Number Publication Date
EP0364455A1 true EP0364455A1 (en) 1990-04-25
EP0364455A4 EP0364455A4 (en) 1990-07-03

Family

ID=26689844

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19880903009 Withdrawn EP0364455A4 (en) 1987-02-18 1988-02-18 An improved method of colorizing black and white footage.

Country Status (5)

Country Link
EP (1) EP0364455A4 (en)
JP (1) JPH02502236A (en)
AU (1) AU1497688A (en)
BR (1) BR8807374A (en)
WO (1) WO1988006392A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116823973B (en) * 2023-08-25 2023-11-21 湖南快乐阳光互动娱乐传媒有限公司 Black-white video coloring method, black-white video coloring device and computer readable medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4642676A (en) * 1984-09-10 1987-02-10 Color Systems Technology, Inc. Priority masking techniques for video special effects

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4149185A (en) * 1977-03-04 1979-04-10 Ralph Weinger Apparatus and method for animated conversion of black and white video to color
CA1258118A (en) * 1983-05-05 1989-08-01 Wilson Markle Method of, and apparatus for, colouring a black and white video signal
US4606625A (en) * 1983-05-09 1986-08-19 Geshwind David M Method for colorizing black and white footage
US4608596A (en) * 1983-09-09 1986-08-26 New York Institute Of Technology System for colorizing video with both pseudo-colors and selected colors


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of WO8806392A1 *

Also Published As

Publication number Publication date
EP0364455A4 (en) 1990-07-03
WO1988006392A1 (en) 1988-08-25
AU1497688A (en) 1988-09-14
JPH02502236A (en) 1990-07-19
BR8807374A (en) 1990-03-20


Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19890817

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH DE FR GB IT LI LU NL SE

A4 Supplementary search report drawn up and despatched

Effective date: 19900703

17Q First examination report despatched

Effective date: 19921006

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20000522