EP1587301A2 - Image processing system and method, and medium for the corresponding program
- Publication number
- EP1587301A2 (application EP05076288A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- image
- image data
- luminance
- saturation
- transformation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/40—Picture signal circuits
- H04N1/407—Control or modification of tonal gradation or of extreme levels, e.g. background level
- H04N1/4072—Control or modification of tonal gradation or of extreme levels, e.g. background level dependent on the contents of the original
- H04N1/4074—Control or modification of tonal gradation or of extreme levels, e.g. background level dependent on the contents of the original using histograms
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/46—Colour picture communication systems
- H04N1/56—Processing of colour picture signals
- H04N1/60—Colour correction or control
- H04N1/6027—Correction or control of colour gradation or colour contrast
Definitions
- the present invention relates to an image processing system, an image processing method, and a medium having an image processing control program recorded thereon.
- the invention is concerned with an image processing system and method for executing an optimum image processing with use of a computer, as well as a medium having an image processing control program recorded thereon.
- image data include image data of natural pictures such as photographs and image data of drawing-type unnatural pictures.
- there is software, such as photo-retouching software, for performing various effect processes.
- the image data are displayed on a display or the like, and an operator applies desired processing to them to form nice-looking image data.
- when the gradation data is in the range of 0 to 255, for example, 20 is always added to the gradation data of red to make the red color more vivid, or 20 is always added to the gradation data of blue to make the blue color more vivid.
- the conventional methods merely adopt the technique of providing plural settings different in strength of contrast beforehand and then switching over from one to another.
- it has been impossible to automatically select the most preferred setting corresponding to the actual image data.
- when luminance is changed in accordance with the foregoing expression (1), the result is such that only brightness is highlighted in the case of an image which is bright as a whole, or only darkness is highlighted in the case of an image which is dark as a whole.
- the strength of contrast can be adjusted in television for example, but the parameter "b", namely offset, is uniform and it has been impossible to effect an optimum highlighting for each individual image.
- it is an object of the present invention to provide an image processing system, an image processing method, and a medium having an image processing control program recorded thereon, capable of judging the type of image automatically on the basis of image data and performing an optimum image processing.
- the image processing system comprises a number-of-color detecting means which inputs image data representing information of each of pixels resolved in a dot matrix form from an image and which detects the number of colors used while regarding information corresponding to the luminance of each pixel as color, and an image discriminating means for judging the type of image on the basis of the number of colors thus detected.
- the image processing method performs a predetermined image processing for image data which represents information of each of pixels resolved in a dot matrix form from an image.
- the same method comprises inputting the said image data, detecting the number of colors used while regarding information corresponding to the luminance of each pixel as color, and judging the type of image on the basis of the number of colors thus detected.
- the medium according to the present invention has an image processing control program recorded thereon for inputting to a computer image data which represents information of each of pixels resolved in a dot matrix form from an image and for performing a predetermined processing.
- the said image processing control program includes the step of inputting the said image data and detecting the number of colors used while regarding information corresponding to the luminance of each pixel as color and the step of judging the type of image on the basis of the number of colors thus detected.
- when the number-of-color detecting means inputs image data which represents information of each pixel resolved in a dot matrix form from an image, it detects the number of colors used while regarding information corresponding to the luminance of each pixel as color, and the image discriminating means judges the type of image on the basis of the number of colors thus detected.
- the number of colors used differs depending on the type of image. For example, in the case of a natural picture such as a photograph, even if the object to be photographed is of the same blue color, plural colors are detected due to a shadow and thus a considerable number of colors may be used. On the other hand, in the case of a drawing type data or a business graph, a limit is encountered in the number of colors used because it has been drawn with colors designated by an operator.
- if the number of colors used is large, the image discriminating means judges that the image concerned is a natural picture, while if the number of colors used is small, it judges that the image concerned is a business graph or the like.
- the information for each pixel represents color directly or indirectly. It covers component values of plural elementary colors, coordinate values of a known absolute color space, and the brightness of a monotone.
- each color and luminance are correlated with each other, and although plural colors may correspond to a single luminance value, it does not happen that the number of luminance values is large while the number of colors is small. Besides, it is actually inconceivable that an image is constructed of only colors of the same luminance value. Thus, it can be said that a rough tendency as to whether the number of colors used is large or not can be judged in terms of luminance values.
- the "number of colors used" as referred to herein is of a broad concept corresponding to the number of colors.
- image data sometimes include luminance data directly or may include luminance data only indirectly.
- in the case of direct luminance data it suffices to make transformation thereof, or even in the case of indirect luminance data, it suffices to first transform it into luminance data and thereafter perform a predetermined luminance transformation.
- it is not that the transformation of luminance must be extremely accurate; a rough transformation will do.
- the number-of-color detecting means determines a luminance by weighted integration of the said component values.
- luminance can be determined by weighted integration of the component values without requiring a large number of processings.
- RGB (red, green, blue) component values are a typical example.
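- As a minimal sketch of the weighted integration just described, assuming RGB gradation data and the widely used 0.30/0.59/0.11 weights (an assumption here, not a quotation of the patent's own expression):

```python
def luminance(r: int, g: int, b: int) -> int:
    """Approximate luminance of one pixel from its RGB gradation data (0-255)."""
    # Weighted integration of the component values; the weights are assumed.
    return int(0.30 * r + 0.59 * g + 0.11 * b)
```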
- the number-of-color detecting means makes sampling of pixels substantially uniformly from among all the pixels and the number of colors is detected on the basis of the image data concerning the sampled pixels.
- the image data having been subjected to image processing can be applied not only to display but also to various other purposes, including printing.
- printing it is necessary to make transformation into a color space of printing ink different from that of the original image data.
- the type of image exerts a great influence on the amount of processing in such color transformation.
- This image processing system comprises a pre-gray level transforming means provided with a table in which lattice points in the color-specification space of the data to be transformed are associated with colorimetric gradation data in the color-specification space of the data after transformation, the pre-gray level transforming means performing a gray level transformation of the colorimetric gradation data before transformation into data corresponding to the lattice points in the said table, then referring to the same table and reading the corresponding colorimetric gradation data for color transformation; an interpolating color transformation means capable of making color transformation into the corresponding colorimetric gradation data by interpolating calculation between lattice points in the above table and having a storage area capable of reading data at high speed and storing information on that color transformation, the interpolating color transformation means utilizing a cache for color transformation by an interpolating calculation in the case of information not stored in the said storage area; and a color transformation selection control means which, in the case of a natural picture having a large number of colors used in the image data, makes color transformation with use of the pre-gray level transforming means and, in the case where the number of colors used is small, makes color transformation with use of the interpolating color transformation means.
- a table is provided in which gradation data in the color-specification space after transformation are associated with the lattice points in the corresponding space before transformation, and the corresponding data at predetermined lattice points are read out successively by referring to the said table, whereby transformation is made possible.
- if correlations were provided for all coordinate values, however, the table would become too large.
- the pre-gray level transforming means makes a gray level transformation of the gradation data before transformation into gradation data corresponding to the lattice points in the table to thereby reduce the size of the table.
- transformation can be done by interpolating calculation.
- the gray level transformation executed by error diffusion, for example, is characterized in that the amount of calculation is kept small.
- the interpolating color transformation means utilizing a cache may be advantageous in terms of processing speed, owing to its function of repeating the processing of storing the result of color transformation in the high-speed readable storage area while performing an interpolating calculation, and reading it from the storage area when necessary.
- in the present invention, therefore, in the case of a natural picture using a large number of colors, it is possible to make color transformation with a minimum amount of calculation, in accordance with the error diffusion technique for example, while in the case where the number of colors used is small, the result of transformation can be utilized repeatedly by using a cache. Thus, the amount of processing can be kept to a minimum by adopting a color transformation method in accordance with the type of image.
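- A minimal sketch of this selection between the two color transformation paths; `quantize_to_lattice`, `table_lookup` and `interpolate` are hypothetical helpers standing in for the pre-gray level transforming means and the interpolating color transformation means, and plain rounding is used where the text names error diffusion:

```python
def quantize_to_lattice(rgb, step=16):
    """Snap an (r, g, b) triple to the nearest lattice point of a sparse
    color-transformation table (the lattice spacing `step` is assumed)."""
    return tuple(min(255, step * round(c / step)) for c in rgb)

def transform_pixels(pixels, is_natural_picture, table_lookup, interpolate):
    """Choose the color-transformation path according to the type of image."""
    if is_natural_picture:
        # Many colors used: pre-gray level transformation to a lattice point,
        # then a direct table read for every pixel.
        return [table_lookup(quantize_to_lattice(p)) for p in pixels]
    # Few colors used: interpolate each distinct color once and cache it in a
    # fast storage area so repeated colors are only looked up.
    cache = {}
    result = []
    for p in pixels:
        if p not in cache:
            cache[p] = interpolate(p)  # interpolation between lattice points
        result.append(cache[p])
    return result
```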
- Judgment of the type of image can also be utilized for judgment as to whether edge highlighting is to be performed or not. It is therefore a further object of the present invention to provide an image processing system capable of judging whether edge highlighting is to be performed or not.
- This image processing system comprises a natural picture discriminating means which, in the case where the number of colors judged on the above image data is a predetermined number of colors or more, judges that the image data represents a natural picture, and an edge emphasizing means which, when the image data is judged to be a natural picture by the natural picture discriminating means, determines a low-frequency component on the basis of a marginal pixel distribution with respect to each of the pixels which constitute the image data, and diminishes the low-frequency component, thereby eventually making a modification so as to enhance the edge degree of each pixel.
- the natural picture discriminating means judges that the image data represents a natural picture.
- the edge highlighting means determines a low-frequency component on the basis of a marginal pixel distribution with respect to each of the pixels which constitute the image data, and diminishes the low-frequency component, thereby eventually making a modification so as to enhance the edge degree of each pixel.
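- A minimal sketch of this edge highlighting, assuming a 3×3 mean of the surrounding pixels as the low-frequency component and an assumed strength factor (an unsharp-mask style modification, not the patent's own expression):

```python
def highlight_edges(img, strength=0.5):
    """img: 2-D list of luminance values 0-255; returns an edge-enhanced copy."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Low-frequency component from the marginal (surrounding) pixels.
            low = sum(img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
            # Diminish the low-frequency component, enhancing the edge degree.
            val = img[y][x] + strength * (img[y][x] - low)
            out[y][x] = max(0, min(255, int(round(val))))
    return out
```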
- This image processing system comprises a luminance distribution totaling means which inputs image data representing information of each of pixels resolved in a matrix form from an image and which totals luminance values of the pixels into a luminance distribution as a whole on the basis of the luminance of each pixel, and an image data luminance transforming means which, when the thus-totaled luminance distribution is not widely dispersed in an effective luminance range of the image data, transforms the luminance information of each pixel in the image data so that the above luminance distribution covers widely the said luminance range.
- the image processing method performs a predetermined image processing for image data which represents information of each of pixels resolved in a dot matrix form from an image.
- luminance values of the pixels are totaled into a luminance distribution as a whole on the basis of the luminance of each pixel, and when the thus-totaled luminance distribution is not widely dispersed in an effective luminance range of the image data, the luminance information of each pixel in the image data is transformed so that the luminance distribution widely covers the said luminance range.
- the medium according to the present invention has an image processing control program recorded thereon for inputting to a computer image data which represents information of each of pixels resolved in a dot matrix form from an image and for performing a predetermined image processing.
- the image processing control program comprises the steps of inputting the image data and totaling luminance values of the pixels into a luminance distribution as a whole, and when the thus-totaled luminance distribution is not widely dispersed in an effective luminance range of the image data, transforming the luminance information of each pixel in the image data so that the luminance distribution widely covers the said luminance range.
- the width of contrast in the image data can be digitized to some extent by determining a luminance distribution in the image data, and once it has been digitized, the said distribution is enlarged correspondingly to a reproducible range. The digitization does not always require concrete numerical values; in the course of processing, the data may be treated as a numerical value or as the magnitude of a signal.
- the luminance distribution detecting means detects a luminance distribution of the image data in the unit of pixel. Then, using this detected luminance distribution, the image data luminance transforming means judges an amount of enlargement of the luminance distribution in an effective available range and makes transformation of the image data.
- the image data luminance transforming means compares a statistical width of the luminance distribution detected with the width of the aforesaid luminance range and determines an enlargeable degree as the enlargement ratio. At the same time, the image data luminance transforming means determines an adjustment value to make adjustment so that the upper and lower ends of the thus-enlarged luminance distribution are within the luminance range in question, and modifies the luminance of each pixel.
- both enlargement ratio and adjustment value are determined and the luminance of each pixel is modified on the basis of those values.
- An example is a linear enlargement.
- the transformation in this example is a linear transformation in a narrow sense and no limitation is placed thereon. It is also possible to make a non-linear transformation in a broad sense.
- the transforming expression in question is merely one example and it goes without saying that even other transforming expressions of the same meaning are also employable.
- the image data luminance transforming means determines the maximum distribution luminance of the luminance y before transformation and executes γ correction to change the luminance distribution so that the maximum distribution luminance gives a desired brightness, thereby obtaining the luminance Y after transformation.
- the maximum distribution luminance of luminance y before transformation is utilized, and if the maximum distribution luminance is on the bright side, γ correction is made to render the whole rather dark, while if the maximum distribution luminance is on the dark side, γ correction is made to render the whole rather bright. In this way the entire brightness is corrected automatically, which cannot be attained by only the highlighting of contrast. It is optional whether the maximum distribution luminance of luminance y before transformation is to be obtained in terms of a median or a mean value.
- the degree of brightness which cannot be adjusted by only the highlighting of contrast is also adjustable.
- the luminance distribution totaling means regards positions inside by a predetermined distribution ratio from the actual ends of a luminance distribution as end portions at the time of determining the above luminance distribution.
- the predetermined distribution ratio is not specially limited if only it permits the skirt portion of an extremely reduced distribution to be ignored.
- it may be the number of pixels corresponding to a certain ratio of the total number of pixels, or the positions where the distribution ratio is below a certain ratio may be regarded as end portions.
- a luminance distribution range to be enlarged is set inside the end portions of the actual reproducible range by a predetermined amount. By so doing, an intentional enlargement is no longer performed at both end portions.
- the luminance distribution center of image in such a sense can be grasped in various ways.
- the luminance distribution may be enlarged so that an enlargeable range ratio remaining at each of upper and lower ends of the luminance distribution range before transformation is retained also after transformation.
- it is desirable that the automatic judgment based on image data also be applied to judging the degree of highlighting at the time of highlighting the vividness of a natural picture. It is therefore a further object of the present invention to provide an image processing system, an image processing method and a medium having an image processing control program recorded thereon, capable of adjusting the vividness automatically on the basis of image data.
- This image processing system comprises a saturation distribution totaling means which inputs image data representing information of each of pixels resolved in a dot matrix form from an image and which then totals saturations of the pixels into a saturation distribution as a whole, a saturation transformation degree judging means for judging a saturation transformation degree of image data from the saturation distribution obtained by the saturation distribution totaling means, and an image data saturation transforming means which modifies saturation information contained in the image data on the basis of the transformation degree thus judged and transforms the modified information into a new image data.
- the image processing method is for performing a predetermined image processing for image data which represents information of each of pixels resolved in a dot matrix form from an image.
- the method comprises totaling saturations of the pixels into a saturation distribution as a whole, then judging a saturation transformation degree for the image data from the saturation distribution thus obtained, then modifying saturation information contained in the image data on the basis of the transformation degree thus judged and transforming it into a new image data.
- the medium according to the present invention has an image processing control program recorded thereon for inputting to a computer image data which represents information of each of pixels resolved in a dot matrix form from an image and for performing a predetermined image processing, the program comprising the steps of inputting the image data and totaling saturations of the pixels into a saturation distribution as a whole, judging a saturation transformation degree for the image data from the saturation distribution thus obtained, and modifying saturation information contained in the image data on the basis of the transformation degree thus judged and transforming it into a new image data.
- the saturation transformation degree judging means judges a saturation transformation degree for the image data from the saturation distribution thus obtained, and the image data saturation transforming means transforms the image data in accordance with the transformation degree thus judged. That is, for each image, an optimum transformation degree is judged from a saturation distribution of image data and the image data is transformed on the basis thereof.
- the saturation of each pixel is judged in accordance with the saturation of a warm color hue in color components.
- the human visual characteristic tends to regard the difference between a warm color hue and a non-warm color hue as being vivid, so it is relatively convenient to judge saturation on the basis of such difference.
- the saturation transformation degree judging means strengthens the degree of saturation highlighting when the saturation at a predetermined ratio from the upper end of the totaled saturation distribution is small, and weakens the saturation highlighting degree when the said saturation is large, thereby judging the degree of saturation transformation.
- judging the degree of saturation transformation at a predetermined ratio from the upper end in the totaled saturation distribution facilitates the judging.
- the transformation of saturation may be performed by radial displacement, according to the foregoing degree of transformation, within the Luv space in the standard colorimetric system.
- this parameter may be transformed, but in the Luv space, as the standard colorimetric system having a parameter of luminance or lightness and a parameter of hue in a plane coordinate system with respect to each luminance, the radial direction corresponds to saturation. In the Luv space, therefore, the transformation of saturation is performed by radial displacement.
- the minimum component value is contained in all of the color components, and such minimum component values merely combine together and constitute a saturation-free gray.
- the differential value based on the other colors and exceeding the minimum component value exerts an influence on saturation, and by increasing or decreasing the differential value there is performed transformation of saturation.
- a corresponding value of luminance is subtracted from each component value and the differential value obtained is increased or decreased to perform the transformation of saturation.
- a mere displacement of component values exclusive of a saturation-free component will cause a change in luminance.
- if a corresponding value of luminance is subtracted beforehand from each component value and the transformation of saturation is performed by increasing or decreasing the resulting differential value, it becomes possible to preserve the luminance.
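- A minimal sketch of this luminance-preserving modification, assuming RGB data, the same luminance weights as above, and a scale factor k derived elsewhere from the saturation highlighting degree (for example k = 1 + S/100, an assumption):

```python
def adjust_saturation(r, g, b, k):
    """Scale the difference between each component and the luminance by k."""
    y = 0.30 * r + 0.59 * g + 0.11 * b          # luminance (weights assumed)
    clip = lambda v: max(0, min(255, int(round(v))))
    # Subtract the luminance, scale the differential value, add it back:
    # the luminance of the result stays (approximately) unchanged.
    return clip(y + k * (r - y)), clip(y + k * (g - y)), clip(y + k * (b - y))
```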
- the luminance or lightness and the saturation are in a relation such that the color specification space assumes an inverted cone shape up to a certain range. It can be said that the lower the luminance, the larger the component values of hue. In such a case, if an attempt is made to apply a transformation degree proportional to a small value of saturation, there is a fear of breaking through the conical color specification space. Therefore, when the luminance is low, the transformation degree of saturation may be weakened to prevent the occurrence of such inconvenience.
- the transformation degree of saturation is judged by any of the foregoing various methods. However, if saturation is highlighted to a greater extent than necessary even when it is weak, there will not be obtained good results.
- the image processing system inputs image data which represents information of each of pixels resolved in a dot matrix form from an image and then executes a predetermined image processing.
- the image processing system is provided with a contrast enlarging means for enlarging the luminance distribution in the image data, a saturation highlighting means for highlighting saturation in the image data, and a highlighting process suppressing means for suppressing the luminance distribution enlarging operation and the saturation highlighting operation with respect to each other.
- the image processing method comprises inputting image data which represents information of each of pixels resolved in a dot matrix form from an image, enlarging the luminance distribution in the image data and highlighting saturation in the image data.
- the luminance distribution enlarging operation and the saturation highlighting operation are performed in a correlated manner so as to suppress each other.
- the medium according to the present invention has an image processing control program recorded therein, the said program comprising the steps of inputting image data which represents information of each of pixels resolved in a dot matrix form from an image, enlarging the luminance distribution in the image data, highlighting saturation in the image data, and suppressing the luminance distribution enlarging operation and the saturation highlighting operation with respect to each other.
- the contrast enlarging means enlarges the luminance distribution in the image data, while the saturation highlighting means highlights the saturation of each pixel.
- the highlighting process suppressing means suppresses the luminance distribution enlarging operation and the saturation highlighting operation with respect to each other.
- the highlighting process suppressing means suppresses both highlighting operations with respect to each other to prevent a synergistic effect that would produce a gaudy image. Moreover, even if the other adjustment is made after one adjustment, it is possible to keep the previous adjustment effective.
- the highlighting process suppressing means is not specially limited if only it can suppress both luminance distribution enlarging operation and saturation highlighting operation with respect to each other.
- the luminance distribution enlarging operation can be done by various methods and this is also true of the saturation highlighting operation. Therefore, concrete methods may be selected suitably according to the contrast enlarging means and saturation highlighting means adopted.
- the suppressing method may be such that the suppressing process is applied from one operation to the other but not in the reverse direction. By so doing, it becomes possible to select between the case where a synergistic highlighting is to be prevented and the case where it is allowed.
- as long as the luminance distribution enlarging operation and the saturation highlighting operation can eventually be suppressed with respect to each other, this will do.
- for example, there may be adopted a method wherein, when it is necessary to suppress the enlarging operation of the contrast enlarging means, the enlarging operation itself is not suppressed but a further transformation processing which cancels the result of enlargement is performed on the enlarged image data. It is also possible to obtain the same result by darkening the entire image, though this is different from the enlargement of contrast. The same can be said of the saturation highlighting operation.
- the highlighting process suppressing means sets a correlation so that when one of a parameter which represents the degree of luminance distribution enlargement in the contrast enlarging means and a parameter which represents the degree of saturation highlighting is large, the other is small.
- the contrast enlarging means transforms image data with use of the parameter which represents the degree of luminance distribution enlargement
- the saturation highlighting means also transforms image data with use of the parameter which represents the degree of saturation highlighting. Therefore, the correlation of one being large and the other small made by the highlighting process suppressing means eventually causes the luminance distribution enlarging operation and the saturation highlighting operation to suppress each other.
- the correlation using the parameters representing the degree of contrast enlargement and the degree of saturation highlighting facilitates the processing in the present invention.
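- A minimal sketch of such a correlation, assuming the contrast parameter is the enlargement ratio a and the saturation parameter is the index S; the concrete weighting below is an assumption, chosen only so that when one parameter is large the other is made small:

```python
def suppress_each_other(a, s, a_max=1.5, s_max=50.0):
    """a: luminance-distribution enlargement ratio, s: saturation index."""
    a_excess = min(1.0, max(0.0, (a - 1.0) / (a_max - 1.0)))  # 0..1 contrast boost strength
    s_excess = min(1.0, max(0.0, s / s_max))                  # 0..1 saturation boost strength
    a_out = 1.0 + (a - 1.0) * (1.0 - 0.5 * s_excess)          # strong saturation -> weaker contrast
    s_out = s * (1.0 - 0.5 * a_excess)                        # strong contrast -> weaker saturation
    return a_out, s_out
```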
- the image data transforming operation may be done for each pixel.
- if the image data transformation for contrast enlargement and the image data transformation for saturation highlighting are not performed for each pixel, the causal relation between both processings becomes complicated, and in some cases it is required to conduct a complicated processing for suppressing the two with respect to each other, or a work area may be needed separately.
- the image data transforming operation performed for each pixel is advantageous in that the influence of the image data increasing or decreasing process on contrast and saturation becomes simple and hence the mutually suppressing process also becomes simple.
- one or both of the contrast enlarging means and the saturation highlighting means may analyze image data and set the degree of highlighting. That is, the contrast enlarging means and the saturation highlighting means set the degree of highlighting automatically and in the course of this automation the highlighting process suppressing means performs the foregoing suppressing operation. Therefore, while the contrast enlarging means sets the degree of highlighting, if the degree of highlighting is weakened or a change is made from one processing to another by reference to, for example, the parameter in the saturation highlighting means, such a processing itself leads to constitution of the highlighting process suppressing means.
- in the case of binary image data such as monochrome image data, i.e. when the image data concerned is judged to be binary image data, the enlargement of the luminance distribution and the highlighting of saturation may be omitted.
- in a binary image there is no luminance distribution in a substantial sense, nor saturation, so once image data is determined to be binary from its luminance distribution, neither enlargement of the luminance distribution nor highlighting of saturation is performed. Further, since binary image data may have a certain color, there can be two luminances corresponding respectively to the presence and absence of that color. It is possible to judge whether a luminance is of that color or not, but when there is no suggestive information, the image data may be judged to be binary black-and-white image data when the luminance distribution is concentrated on both ends of the reproducible range. That is, in the case of a black-and-white image, the luminance distribution is concentrated on both ends of the reproducible range, thus permitting judgment.
- the image processing system inputs image data which represents information of each of pixels resolved in a dot matrix form from an image and then executes a predetermined image processing.
- the image processing system is provided with a frame discriminating means which, on the basis of the image data, judges a portion including an extremely large number of certain pixels to be a frame portion, and an image data excluding means which excludes from the image processing the image data of pixels having been judged to be a frame portion.
- a predetermined image processing is performed for image data which represents information of each of pixels resolved in a dot matrix form from an image. According to the same method, if a portion of the image data includes an extremely large number of certain pixels, it is judged to be a frame portion, and the image data of the pixels having been judged to be a frame portion is not subjected to the image processing.
- the medium according to the present invention has an image processing control program recorded thereon for inputting image data which represent information of each of pixels resolved in a dot matrix form from an image and performing a predetermined image processing, which program comprises the steps of judging, on the basis of the image data, a portion to be a frame portion if an extremely large number of certain pixels are included therein, and excluding from the image processing the image data of the pixels having been judged to be a frame portion.
- the frame discriminating means judges, on the basis of the image data, a portion of the image data to be a frame portion if an extremely large number of certain pixels are included therein, and the image data excluding means excludes from the image processing the image data of the pixels having been judged to be a frame portion.
- the pixels which constitute a frame are almost the same pixels and the number thereof is extremely large as compared with the number of the other pixels. Therefore, if they are included in an extremely large number in a certain portion of the image data, this portion is judged to be a frame portion and is not subjected to the image processing.
- the frame which occurs in the image data is black or white in many cases. It is therefore a further object of the present invention to provide an image processing system capable of excluding such a black or white frame efficiently.
- the frame discriminating means regards pixels which take both-end values in an effective range of the image data as a candidate for a frame portion.
- the color of frame is not limited to black and white. It is a further object of the present invention to exclude even other frames than black and white frames.
- the frame determining means totals luminances of the pixels into a luminance distribution as a whole when the image data is a natural picture and regards a prominent luminance portion in the luminance distribution as a candidate for a frame portion.
- when a monocolor frame exists, only the luminance portion corresponding to that color projects as a prominent portion. Therefore, a prominent luminance portion, if any, is judged to be a frame portion of the image data, and once a frame portion is detected, the data of the frame portion need not be used in the enlargement of the luminance distribution and the highlighting of saturation. More particularly, if a prominent luminance portion is used as a criterion in the luminance enlargement, an effective judgment may no longer be feasible, so such a portion is judged to be a frame portion and is not used in the enlargement of the luminance distribution.
- the luminance portions concentrated on both ends in a reproducible range may be judged to be a white or black frame portion.
- a white or black frame is often adopted and can also result from trimming; it corresponds to an end portion of the reproducible range. Therefore, a luminance portion concentrated on such an end portion is judged to be a frame portion.
- there may also be provided a natural picture discriminating means which judges the image data to be not of a natural picture when the luminance distribution exists in the shape of a linear spectrum. It can be said that a natural picture is characterized by a smooth luminance distribution of a certain width. In most cases, therefore, if the luminance distribution is in the shape of a linear spectrum, one can judge that the image data is not of a natural picture.
- in one case the image processing system is used alone, and in another case it may be incorporated in a certain apparatus.
- the idea of the invention is not restricted to only a limited case, but covers various modes. A change may be made as necessary, for example, between software and hardware.
- the image processing system in question may be applied to a printer driver in which inputted image data is transformed into image data corresponding to the printing ink used and the thus-transformed image data is printed using a predetermined color printer.
- the printer driver transforms the inputted image data correspondingly to the printing ink used, and at this time the foregoing processing may be carried out for the image data.
- when a materialized example of the idea of the invention takes the form of software in the image processing system, such software is inevitably present and utilized on the recording medium used in the system. It is therefore a further object of the present invention to provide not only the foregoing image processing system and method but also a medium having recorded thereon an image processing control program which substantially controls the image processing system and method.
- the recording medium may be a magnetic recording medium or a magneto-optic recording medium, or any other recording medium which will be developed in the future.
- the same way of thinking is applicable also to duplicate products, including primary and secondary duplicate products.
- the present invention is applicable also to the case where a communication line is utilized as a software supply line.
- the software supply side using the communication line functions as a software supply system and thus the present invention is also utilized here.
- FIG.1 shows as a block diagram a concrete hardware configuration example of an image processing system according to an embodiment of the present invention.
- a scanner 11, a digital still camera 12 and a video camera 14 are provided as image input devices 10.
- a computer body 21 and a hard disk 22 are provided as image processors 20 which play main roles in image processing including image discrimination.
- a printer 31 and a display 32 are provided as image output devices 30 for the display and output of images after image processing.
- an operating system 21a operates in the interior of the computer body 21, and a printer driver 21b and a video driver 21c are incorporated therein correspondingly to the printer 31 and the display 32, respectively.
- execution of processing by an application 21d is controlled by the operating system 21a, and where required, the application cooperates with the printer driver 21b and the video driver 21c to execute a predetermined image processing.
- the scanner 11 and the digital still camera 12 as image input devices 10 output gradation data of RGB (red, green, blue) as image data.
- the printer 31 requires CMY (cyan, magenta, yellow) or CMYK (CMY + black) as an input of binary colorimetric data
- the display 32 requires gradation data of RGB as an input. Therefore, a concrete role of the computer body 21 as an image processor 20 is to input gradation data of RGB, prepare gradation data of RGB having been subjected to a necessary highlighting process, and cause it to be displayed on the display 32 through the video driver 21c or cause it to be printed by the printer 31 after transformation into binary data of CMY through the printer driver 21b.
- a number-of-color detecting means 21d1 in the application 21d detects the number of colors used in the inputted image data, then an image discriminating means 21d2 judges the type of image, and an image processing means 21d3 performs an appropriate image processing automatically which processing is set beforehand according to the type of image.
- the image data after the image processing are displayed on the display 32 through the video driver 21c and, after confirmation, are transformed into printing data by the printer driver 21b and printed by the printer 31.
- as the image processing set by the image processing means 21d3 according to the type of image, there are provided a contrast enlarging process, a saturation highlighting process and an edge highlighting process, as shown in FIG.3.
- in the printer driver 21b, which transforms image data into printing data as shown in FIG.4, there are provided a rasterizer 21b1 for rasterizing the printing data, a color transforming section 21b2 which performs color transformation by pre-gray level transformation or by a combination of cache and interpolating calculation, and a gray level transforming section 21b3 which performs a binary coding process for the gradation data after color transformation.
- although a computer system is interposed between the image input and output devices to perform the image processing in this example, such a computer system is not always needed.
- an example is shown in FIG. 5, wherein image processors for highlighting contrast are mounted within the digital still camera 12a, and the image data after transformation are displayed on a display 32a or printed by a printer 31a.
- as shown in FIG. 6, in the case of a printer 31b which inputs and prints image data without going through a computer system, the printer can be constructed so that image data inputted through a scanner 11b, a digital still camera 12b or a modem 13b has its contrast highlighted automatically.
- FIG.7 is a flowchart corresponding to the image processing in the application.
- in steps S102 and S104, luminances are totaled into a luminance distribution and the number of colors used is detected.
- in step S102 shown in FIG.7, there is performed a processing for thinning out the pixels concerned.
- since the inputted image is a bit map image, it is formed in a two-dimensional dot matrix shape comprising a predetermined number of dots in the longitudinal direction and a predetermined number of dots in the lateral direction.
- the distribution detecting process in question aims at detecting the number of colors used indirectly from luminance, so it is not always required to be accurate. Therefore, it is possible to effect thinning-out of pixels within a certain range of error.
- an error relative to the number of samples N is expressed generally as 1/√N. Accordingly, for carrying out the processing with an error of about 1%, N = 10000 samples suffice.
- min(width, height) indicates the smaller one of width and height, and A is a constant.
- the sampling cycle ratio indicates the interval, in pixels, at which sampling is performed.
- the reason why the sampling cycle ratio is based on min(width, height) is as follows. For example, when width >> height as in the bit map image shown in FIG.11(a), if the sampling cycle ratio is determined on the basis of the longer width, there may occur a phenomenon such that pixels are sampled longitudinally in only the two upper- and lower-end lines, as in FIG.11(b). However, if the sampling cycle ratio is determined on the basis of the smaller one, min(width, height), it becomes possible to thin out pixels so as to include an intermediate portion also in the direction of the smaller dimension, as in FIG.11(c).
- the thinning-out of pixels is performed at an accurate sampling cycle for pixels in both longitudinal and lateral directions. This is suitable for the case where successively inputted pixels are processed while they are thinned out.
- coordinates may be designated randomly in the longitudinal or lateral direction to select pixels.
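- A minimal sketch of the thinning-out, assuming a target of roughly N = 10000 samples (from the 1/√N error estimate) and a sampling interval derived from min(width, height); the patent's exact expression with the constant A is not reproduced here, so a simple stand-in is used:

```python
import math

def sample_pixels(img, width, height, target_n=10_000):
    """img: 2-D list (height rows of width pixels); returns thinned-out samples."""
    short = min(width, height)
    # Sampling cycle (interval in pixels), based on the shorter side so that
    # intermediate lines of the short dimension are also represented.
    step = max(1, int(short / math.sqrt(target_n)))
    return [img[y][x]
            for y in range(0, height, step)
            for x in range(0, width, step)]
```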
- if the image data of the thus-selected pixels has luminance as a constituent element, a luminance distribution can be obtained directly by using the luminance values.
- even when luminance values are not direct component values, the image data is indirectly provided with component values indicative of luminance. Therefore, if transformation is made from a color specification space wherein luminance values are not direct component values to a color specification space wherein they are, luminance values can be obtained.
- the color transformation between different color specification spaces is not determined in a unitary manner; it is required to first determine a correlation of the color spaces including the component values as coordinates and then perform transformation successively while referring to a color transformation table in which the said correlation is stored. In the table, component values are represented as gradation values, and in the case of 256 gray scales with three-dimensional coordinate axes, the color transformation table would have to cover about 16,700,000 (256 × 256 × 256) elements. In view of an effective utilization of the memory resource, correlations are usually prepared only with respect to sparse lattice points, not all the coordinate values, and an interpolating calculation is utilized. Since this interpolating calculation involves several multiplications and additions per pixel, the calculation volume becomes vast.
- if the processing volume is to be kept small, the table size poses an unrealistic problem, and if the table size is set to a realistic size, the calculation volume becomes unrealistic in many cases.
- after luminances have been totaled by such a thinning-out process, the number of luminance values whose distribution counts are not "0" is counted in step S104, whereby the number of colors used can be detected. In step S102 not only luminance values but also saturation values are totaled to detect a distribution thereof; this totaling process will be described later.
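- A minimal sketch of the number-of-colors detection in step S104, counting the luminance values whose distribution counts are not zero:

```python
def count_colors(hist):
    """hist: 256-entry luminance histogram; the number of populated luminance
    values is used as a rough measure of the number of colors used."""
    return sum(1 for count in hist if count > 0)
```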
- a comparison is made in step S106 between the detected number of colors and a predetermined threshold value, and if the number of colors used is larger, it is judged that the image data in question is of a natural picture.
- the threshold value may be set at "50" colors.
- as image data other than natural pictures, there are mentioned business graphs and drawing-type images, which are read using the scanner 11. In some cases, a computer graphic image is read through the scanner 11 or is inputted from a network through the modem 13b.
- if the image is judged to be a natural picture, a related flag is set in step S108.
- the reason why the flag is set is that it is intended to transmit the result of judgment also to the printer driver 21b in an operation other than the image processing in the application 21d.
- although there is provided only one threshold value to judge whether the image data concerned is of a natural picture or not, there may be performed a more detailed judgment in accordance with the number of colors used. For example, even in the case of a computer graphic image, the number of colors sometimes becomes large depending on gradation, or may become large due to blunt edge portions at the time of reading by the scanner 11 despite the actual number of colors being not so large. In such a case there may be adopted a method wherein the image data is classified between natural and unnatural pictures and only an edge highlighting process is conducted, without conducting the image processing for natural pictures, as will be described later.
- contrast is enlarged in step S110, saturation is highlighted in step S112, and edges are highlighted in step S114.
- the contrast enlarging process is shown in FIGS.12 and 13 in terms of flowcharts. As shown in the same figures, the following points should be taken into consideration before obtaining a luminance distribution of the pixels selected by the above thinning-out processing.
- the first point is whether the image concerned is a binary image such as a black-and-white image. In the case of a binary image, including a black-and-white image, the concept of highlighting contrast is inappropriate.
- a luminance distribution of this image concentrates on both ends in a reproducible range, as shown in FIG. 15. Basically, it concentrates on the gray scales "0" and "255.”
- the judgment in step S124 as to whether the inputted image is a black-and-white image or not can be done on the basis of whether the sum of pixels at the gray scales "0" and "255" coincides with the number of the pixels thinned out and selected. If the image is a black-and-white image, the processing flow shifts to step S126, in which an unenlarging process is executed to stop the processing without going through the processings which follow.
- the processing is broadly classified into the distribution sampling process and the luminance transforming process; in this unenlarging process a flag is raised so that the latter luminance transforming process is not executed, and the distribution sampling process is brought to an end.
- Binary data is not limited to black and white, but there can be even colored binary data. Also in this case it is not necessary to carry out the contrast highlighting process. If a check of the state of distribution shows that distribution is concentrated on only two values (one is usually "0"), the data concerned is regarded as a binary data and the processing may be interrupted.
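- A minimal sketch of this binary-image check against the sampled luminance histogram, assuming 256 gray scales:

```python
def is_binary_image(hist, n_samples):
    """hist: list of 256 pixel counts; n_samples: number of thinned-out pixels."""
    if hist[0] + hist[255] == n_samples:      # all samples at 0 or 255
        return True                           # black-and-white binary image
    nonzero = [count for count in hist if count > 0]
    return len(nonzero) == 2                  # colored binary data (one value usually 0)
```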
- the second point to be taken into account is whether the inputted image is a natural picture or not. This judgment can be done on the basis of the flag set in step S108. If the number of colors is small, the luminance distribution is sparse, and in the case of a business graph or the like it takes the form of a linear spectrum. Whether the distribution is in the form of a linear spectrum or not can be judged in terms of an adjacency ratio among the luminance values whose distribution counts are not "0". More specifically, for each such luminance value, a judgment is made as to whether an adjacent luminance value also has a non-zero distribution count.
- if at least one of the two adjacent luminance values has a non-zero count, nothing is done, while if neither does, the value is counted as isolated. Judgment may then be made on the basis of the ratio between the number of non-zero luminance values and the counted value. For example, if the number of non-zero luminance values is 50 and the number of those not in an adjacent state is also 50, it is seen that the luminance distribution is in the form of a linear spectrum.
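- A minimal sketch of this adjacency check; the ratio threshold is an assumption:

```python
def looks_like_line_spectrum(hist, isolation_ratio=0.8):
    """hist: 256-entry luminance histogram of the sampled pixels."""
    nonzero = [i for i, count in enumerate(hist) if count > 0]
    isolated = 0
    for i in nonzero:
        left = hist[i - 1] if i > 0 else 0
        right = hist[i + 1] if i < 255 else 0
        if left == 0 and right == 0:          # neither adjacent value is populated
            isolated += 1
    return bool(nonzero) and isolated / len(nonzero) >= isolation_ratio
```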
- the judgment as to whether the inputted image is a natural picture or not can also be made using the extension of the image file.
- in the case of a bit map file, particularly one for photographic images, the file is compressed, and an implicit extension representing the compression method is often used.
- for example, the extension "JPG" shows that the compression is made in JPEG format. Since the operating system manages the file name, if an inquiry is issued to the operating system, from the printer driver side for example, the extension of the file is returned, so there may be adopted a procedure of judging the image concerned to be a natural picture on the basis of that extension and then highlighting the contrast. In the case of an extension peculiar to business graphs, such as "XLS", it is possible to determine that contrast highlighting is not to be performed.
- the third point to be taken into account is whether the marginal portion of the image is framed as in FIG.16. If the frame portion is white or black, it influences the luminance distribution and a linear spectral shape appears at each end in a reproducible range as in FIG.17. At the same time, a smooth luminance distribution also appears inside both ends correspondingly to the interior natural picture.
- in step S132 there is performed a frame processing.
- the number of pixels in the gray scales of "0" and "255" in the luminance distribution is set to "0" in order to ignore the frame portion.
- although the frame portion considered in this example is white or black, there is sometimes a frame of a specific color. In that case there appears a linear spectral shape projecting from the otherwise smooth curve of the luminance distribution. Therefore, a linear spectral shape which causes a great difference from the adjacent luminance values is considered to be a frame portion and is not considered in the luminance distribution. Since it is possible that the said color is also used in portions other than the frame, a mean of both adjacent luminance values may be allocated instead.
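- A minimal sketch of this frame processing on the luminance histogram; the spike criterion used to detect a colored frame is an assumption:

```python
def remove_frame_from_histogram(hist, spike_factor=10):
    """Return a copy of the 256-entry histogram with frame contributions removed."""
    hist = hist[:]
    hist[0] = 0                                # ignore a black frame
    hist[255] = 0                              # ignore a white frame
    for i in range(1, 255):
        neighbours = (hist[i - 1] + hist[i + 1]) / 2.0
        if hist[i] > spike_factor * max(neighbours, 1.0):   # prominent linear spectrum
            hist[i] = int(neighbours)          # allot the mean of both adjacent values
    return hist
```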
- portions located respectively inside the highest luminance side and the lowest luminance side in the distribution range by a certain distribution ratio are regarded as both ends.
- the said distribution ratio is set at 0.5%, as shown in FIG.18. Needless to say, this ratio may be changed as necessary.
- although both upper and lower ends are determined through such a processing for the luminance distribution in this example, they may also be determined by a statistical processing. For example, there may be adopted a method wherein a position spaced by a certain percentage from a mean value of the luminance values is regarded as an end portion.
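- A minimal sketch of locating both ends with the 0.5% distribution ratio described above:

```python
def distribution_ends(hist, ratio=0.005):
    """hist: 256-entry luminance histogram; returns (y_min, y_max)."""
    total = sum(hist)
    cut = total * ratio
    acc, y_min = 0, 0
    for i in range(256):                  # walk in from the dark end
        acc += hist[i]
        if acc > cut:
            y_min = i
            break
    acc, y_max = 0, 255
    for i in range(255, -1, -1):          # walk in from the bright end
        acc += hist[i]
        if acc > cut:
            y_max = i
            break
    return y_min, y_max
```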
- the above processing corresponds to the distribution detecting processing.
- a description will be given of a luminance transforming process which performs the transformation of image data on the basis of the luminance values y max and y min thus obtained. If an unenlarging process is executed in step S126, this is detected by reference to a predetermined flag in step S142 and the image processing is ended without performing the processings which follow.
- a basic transformation of luminance is performed in the form Y = a·y + b, where a is the enlargement ratio and b is an offset.
- transformation is not performed in the ranges y < y min and y > y max.
- the area of 5, in terms of luminance value, from each end portion is set as an unenlarged area so as to retain both the highlight portion and the shadow portion.
- where the reproducing power is strong, the above area may be narrowed, or where the reproducing power is weaker, it may be widened. It is not always required to stop the enlargement completely.
- the enlargement ratio may be gradually limited at the border region.
- FIG. 20(a) shows an image with a narrow luminance distribution. If the enlargement ratio of the luminance distribution (corresponding to a) is applied in the manner described above, a very large enlargement ratio relative to the reproducible range is sometimes obtained. In this case, although it is natural that the contrast width from the brightest portion to the darkest portion should be narrow in a dusky state, for example at nightfall, an attempt to enlarge the image contrast to a great extent may result in transformation into an image which looks as if it were a daytime image. Since such a transformation is not desired, a limit is placed on the enlargement ratio so that the parameter a does not exceed 1.5 (at most 2). By so doing, a dusky state is expressed as such.
- transformation is made in such a manner that the ratio of areas (m1:m2) remaining on the upper end side and the lower end side in the reproducible range of the luminance distribution before transformation coincides with the ratio of areas (n1:n2) remaining on the upper and lower end sides after transformation.
- the parameter b is determined in the following manner.
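- A minimal sketch of determining a and b for the basic transformation Y = a·y + b, with the enlargement ratio capped at 1.5 and the offset chosen so that the margins remaining below and above the distribution keep the ratio m1:m2 after transformation; the 5-value protective areas at both ends are omitted for brevity, and the concrete expressions are assumptions consistent with the description rather than quotations of the patent:

```python
def contrast_parameters(y_min, y_max, a_limit=1.5):
    """Enlargement ratio a and offset b from the distribution ends."""
    a = 255.0 / max(1, y_max - y_min)
    a = min(a, a_limit)                        # do not turn a dusk scene into daytime
    m1, m2 = y_min, 255 - y_max                # margins below and above the distribution
    remaining = 255.0 - a * (y_max - y_min)
    n1 = remaining * m1 / (m1 + m2) if (m1 + m2) > 0 else 0.0
    b = n1 - a * y_min                         # keeps m1:m2 == n1:n2
    return a, b

def transform_luminance(y, a, b, y_min, y_max):
    if y < y_min or y > y_max:                 # outside the range: no transformation
        return y
    return max(0, min(255, int(round(a * y + b))))
```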
- with this, the processing of step S144 ends.
- the lower limit of γ is assumed to be 0.7; without such a limitation, a night image would become like a daytime image. Since excessive brightening results in a whitish image of weak contrast as a whole, it is also desirable to highlight saturation at the same time.
- FIG.24 shows the correlation in the case of γ correction. If γ < 1, an upwardly expanded curve is obtained, and if γ > 1, a downwardly expanded curve is obtained. The result of the γ correction may be reflected in the table shown in FIG.22; the γ correction is applied to the table data in step S148. Lastly, in step S150 the image data is transformed. The correlations obtained so far are for the transformation of luminance and are not transforming relations for the component values (Rp, Gp, Bp) on the RGB coordinate axes.
- in step S150, the processing of referring to the transformation tables corresponding to expressions (19) to (21) to obtain the image data (R, G, B) after transformation is repeated for the image data (rr, gg, bb) of all the pixels.
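- A minimal sketch of steps S144 to S150: a 256-entry table is built from the parameters a and b, the γ correction (γ bounded below by 0.7 as described) is folded into the table, and the same table is applied to the R, G and B gradation data of every pixel. Applying the single luminance curve to each channel is an assumption standing in for expressions (19) to (21), which are not reproduced here:

```python
def build_table(a, b, y_min, y_max, gamma=1.0):
    """Transformation table combining contrast enlargement and gamma correction."""
    gamma = max(gamma, 0.7)                    # keep night scenes looking like night
    table = []
    for y in range(256):
        v = y if (y < y_min or y > y_max) else a * y + b
        v = 255.0 * (max(0.0, min(255.0, v)) / 255.0) ** gamma
        table.append(max(0, min(255, int(round(v)))))
    return table

def apply_table(pixels, table):
    """pixels: iterable of (r, g, b) tuples; the curve is applied per channel."""
    return [(table[r], table[g], table[b]) for (r, g, b) in pixels]
```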
- in step S102, after the luminance y is obtained for each of the thinned-out pixels to determine a luminance distribution, a judgment is made in step S124 as to whether the image concerned is a binary image such as a black-and-white image, while in step S128 it is judged whether the image is a natural picture or not.
- in step S130 it is judged whether a frame portion is included in the image data or not, except in the case where the image is a binary image or is not a natural picture; the frame portion, if any, is then removed, and areas of 0.5% are removed from the upper and lower ends of the luminance distribution obtained, whereby both ends ymax and ymin of the distribution are obtained.
- in step S148, γ correction is performed as necessary, and in step S150 the image data of all the pixels is transformed.
- although the enlargement ratio is limited to a predetermined value in the above description, a modification may be made so that it can be selected by the user through a predetermined GUI on the computer 21. It is also possible to let the user designate part of the image data and execute such a contrast highlighting process only within the designated range.
- next, a description will be given of the saturation highlighting in step S112.
- This saturation highlighting process is shown in FIG.25.
- a saturation distribution is obtained beforehand from image data and then a saturation highlighting coefficient S is determined from the saturation distribution.
- Saturation values are obtained and totaled with respect to the pixels thinned out in step S102. If the image data has saturation values as its component factors, a saturation distribution can be determined by using those values directly. Even image data in which saturation is not a direct component factor is indirectly provided with component values representative of saturation.
- saturation values can be obtained by performing a transformation from a color specification space in which saturation is not a direct component factor into a space in which saturation values are direct component factors.
- in the Luv space, the L axis represents luminance (lightness) and the U and V axes represent hue; the distance from the intersection of the U and V axes represents saturation, so saturation is substantially represented by (U² + V²)^(1/2).
- however, the calculation volume required for such a transformation is vast.
- when the saturations of the thinned-out pixels are totaled into a saturation distribution directly from the RGB image data in accordance with expression (5), it is seen that the saturations are distributed in the range from a minimum value of 0 to a maximum value of 511 and that the distribution is substantially like that shown in FIG.26.
- a saturation highlighting index of the image concerned is determined in step S212. If the saturation distribution obtained is as shown in FIG.26, then in this embodiment a range occupied by the higher 16% of the distribution count is determined within the effective range of pixels. Then, on the assumption that the lowest saturation A in the said 16% range represents the saturation of the image, a saturation highlighting index S is determined using the following expressions:
- FIG.29 shows a relation between the saturation A and the saturation highlighting index S.
- the saturation highlighting index S varies gradually in the range from a maximum value of 50 to a minimum value of 0 so as to be large at a small saturation A and small at a large saturation A.
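A sketch of step S212 could look like the following: the lowest saturation A within the top 16% of the totaled distribution is located, and a highlighting index S between 0 and 50 is derived from A. The actual expressions for S are not reproduced in this excerpt, so the simple decreasing mapping used below is a placeholder assumption, not the relation of FIG.29.

```python
def saturation_index(sat_histogram, top_ratio=0.16, s_max=50):
    """Sketch of determining the saturation highlighting index S (step S212).

    `sat_histogram` is a list of pixel counts indexed by saturation (0..511).
    A is the lowest saturation inside the upper `top_ratio` of the counts;
    S is then taken to decrease from s_max (small A) towards 0 (large A).
    """
    total = sum(sat_histogram)
    threshold = total * top_ratio
    accumulated = 0
    A = len(sat_histogram) - 1
    for sat in range(len(sat_histogram) - 1, -1, -1):   # walk down from the top
        accumulated += sat_histogram[sat]
        if accumulated >= threshold:
            A = sat                                      # lowest saturation in the top 16%
            break
    # Placeholder mapping: linear ramp from s_max at A = 0 down to 0 at A = 511.
    S = max(0, int(round(s_max * (1.0 - A / 511.0))))
    return A, S

# Example with a synthetic, mostly low-saturation distribution.
hist = [max(0, 200 - s) for s in range(512)]
print(saturation_index(hist))
```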
- although a saturation range occupied by a certain higher ratio in the totaled saturation distribution is used here, this does not constitute any limitation.
- a mean value or a median may be used in calculating the saturation highlighting index S.
- using a certain higher ratio in the saturation distribution affords a good result as a whole because a sudden error becomes less influential.
- the saturation highlighting can be done by first transforming the image data into the Luv space, which is a standard colorimetric system, and then shifting it radially in the Luv space.
- since the saturation highlighting index S ranges from a maximum value of 50 to a minimum value of 0, (u, v) is multiplied by a factor of up to 1.5 to obtain (u', v').
- since the image data is of RGB, it is first transformed into gradation data in the Luv space; the gradation data (L, u', v') after the highlighting is then re-transformed into RGB, whereby this image transforming process is terminated.
- the saturation highlighting index S is shifted in proportion to the luminance L.
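In code, the radial shift in the Luv space might look like the sketch below. The RGB-to-Luv and Luv-to-RGB conversions themselves are left outside the sketch, and the scale factor 1 + S/100 is an assumption chosen only because it yields the stated maximum multiplication of 1.5 when S = 50.

```python
def highlight_saturation_luv(l, u, v, s_index):
    """Sketch of the radial shift in the Luv space.

    (u, v) is pushed away from the achromatic axis; with the assumed scale
    factor 1 + S/100, an index S of 50 gives the stated maximum of 1.5.
    """
    scale = 1.0 + s_index / 100.0
    return l, u * scale, v * scale

print(highlight_saturation_luv(70.0, 20.0, -10.0, 50))   # -> (70.0, 30.0, -15.0)
```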
- the saturation highlighting is performed by once transforming the RGB image data into image data in the Luv space and, after the saturation highlighting, re-transforming it to RGB, with the result that an increase in the calculation volume is unavoidable.
- a description will be given below of a modification in which the gradation data of RGB is utilized as it is in saturation highlighting.
- whichever of the above methods is used, the operation of obtaining the transformed RGB gradation data (R', G', B') from the RGB gradation data of each pixel is performed for all the pixels.
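The concrete expressions of this RGB-direct modification are not reproduced in this excerpt. Purely as an illustration of highlighting saturation without leaving RGB, one common approach is to push each component away from the pixel's luminance, as in the sketch below; this is an assumed stand-in, not necessarily the relation used in the embodiment.

```python
def _clamp(value):
    """Clamp an intermediate result to the 8-bit gradation range."""
    return int(min(max(round(value), 0), 255))

def highlight_saturation_rgb(r, g, b, s_index):
    """Illustrative RGB-only saturation boost (assumed form, not the patent's).

    Each component is moved away from the pixel luminance y by a factor
    derived from the highlighting index S (S = 50 gives a factor of 1.5).
    """
    y = 0.30 * r + 0.59 * g + 0.11 * b          # approximate luminance
    k = 1.0 + s_index / 100.0
    return (_clamp(y + k * (r - y)),
            _clamp(y + k * (g - y)),
            _clamp(y + k * (b - y)))

print(highlight_saturation_rgb(180, 120, 60, 50))
```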
- Saturation values X of the pixels thinned out in step S102 are determined and totaled into a saturation distribution, then it is judged in step S204 whether the inputted image is a binary image such as a black-and-white image or not, while in step S206 it is judged whether the image is a natural picture or not. Then, except the case where the image is a binary image and the case where it is not a natural picture, it is judged in step S208 whether the image data is framed or not. If there is a frame portion, it is removed and the minimum saturation A is determined in a predetermined higher range of the resulting saturation distribution.
- in step S214, the image data is transformed.
- FIGS.31 to 33 illustrate a contrast enlarging-saturation highlighting interadjusting process, of which FIG.31 shows a main processing in the said interadjusting process, and FIGS.32 and 33 show the contrast enlarging process and the saturation highlighting process each individually in more detail. The details of these processings are the same as those described above and are therefore omitted here.
- in step S310 there are performed thinning-out and totaling of pixels; then in steps S320 and S330, which are the front stages of the contrast enlarging process and the saturation highlighting process, there are obtained a contrast enlarging coefficient "a" and a saturation highlighting coefficient S; thereafter, in step S340, which corresponds to a middle stage, a highlighting suppressing process is executed to determine a formal saturation highlighting coefficient S'.
- the contrast enlarging coefficient "a" is first determined, and with this coefficient fixed, the highlighting suppressing process is performed to determine the formal saturation highlighting coefficient S'.
- the contrast enlarging coefficient "a" is 1 or larger, and (1/a) becomes smaller than 1 as the enlargement tendency becomes stronger. Consequently, the formal saturation highlighting coefficient S' becomes smaller than the temporary saturation highlighting coefficient S determined in the above manner. That is, the parameter used in the saturation highlighting process is made small in accordance with the parameter used in the contrast enlarging process, thereby suppressing the highlighting process.
- as a result, both coefficients may act individually without exerting a synergistic bad influence.
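The text above only states that S' is made smaller as the enlargement tendency grows, with 1/a acting as the reducing factor; a direct reading is the one-liner sketched below, which should be taken as an assumption about the exact relation used in step S340.

```python
def suppress_highlighting(s_index, a):
    """Assumed middle-stage rule of step S340: scale the temporary index S by
    1/a so that a strong contrast enlargement weakens the saturation boost."""
    return s_index / a if a >= 1.0 else s_index

print(suppress_highlighting(40, 1.5))   # -> 26.67 (formal coefficient S')
```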
- in step S350 there is performed the transformation of image data with use of the foregoing highlighting coefficients. More specifically, in correspondence to the rear stage of each of the above processings, data transformation is performed pixel by pixel with respect to the image data of each attentional pixel.
- next, an edge highlighting process is executed in step S114, which process is illustrated in FIG.39.
- in step S404 there is determined an edge highlighting degree proportional to the thus-detected number of pixels.
- the edge highlighting degree depends greatly on the edge highlighting method, so reference will first be made to the edge highlighting method. In this example there is used such an unsharp mask 40 as shown in FIG. 40.
- the central value "100" is used as the weight of an attentional pixel Pij in the matrix-like image data, the marginal pixels are weighted in correspondence to the numerical values described in the squares of the mask, and these weighted values are utilized for integration, which is performed in accordance with the following expression (48) when the unsharp mask 40 is used:
- P'ij stands for the result of adding up the marginal pixels at a low weight relative to the attentional pixel; hence it is unsharp image data, the same as image data having passed through a low-pass filter. Therefore, "Pij - P'ij" represents the result of subtracting the low frequency component from all the components, that is, the same image data as that having passed through a high-pass filter. Then, if this high frequency component from the high-pass filter is multiplied by an edge highlighting coefficient C and the result is added to Pij, the high frequency component is increased in proportion to the edge highlighting coefficient. The edge is highlighted in this way.
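Expression (48) itself is not reproduced here, but the mechanism just described (a weighted low-pass average P'ij subtracted from Pij and amplified by C) can be sketched as follows. The small 3×3 mask is only a stand-in for the 7×7 unsharp mask 40 of FIG.40, whose exact weights are not given in this excerpt.

```python
def unsharp_highlight(image, mask, c):
    """Sketch of edge highlighting: P''ij = Pij + C * (Pij - P'ij).

    `image` is a 2D list of luminance-like values, `mask` a 2D weight table
    whose centre carries the heaviest weight (as in the unsharp mask 40),
    and `c` the edge highlighting coefficient.  Border pixels are left as-is.
    """
    h, w = len(image), len(image[0])
    mh, mw = len(mask), len(mask[0])
    oy, ox = mh // 2, mw // 2
    weight_sum = sum(sum(row) for row in mask)
    out = [row[:] for row in image]
    for i in range(oy, h - oy):
        for j in range(ox, w - ox):
            acc = 0
            for mi in range(mh):
                for mj in range(mw):
                    acc += mask[mi][mj] * image[i + mi - oy][j + mj - ox]
            low_pass = acc / weight_sum              # unsharp (low-pass) value P'ij
            out[i][j] = image[i][j] + c * (image[i][j] - low_pass)
    return out

# Stand-in 3x3 mask with a heavy centre weight (illustrative only).
mask = [[1, 2, 1],
        [2, 8, 2],
        [1, 2, 1]]
img = [[10, 10, 10, 10],
       [10, 50, 50, 10],
       [10, 50, 50, 10],
       [10, 10, 10, 10]]
print(unsharp_highlight(img, mask, c=2))
```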
- E ratio = min(width, height)/640 + 1
- the edge highlighting coefficient C is 1 if the number of pixels on the shorter side is less than 640, C is 2 if the said number is 640 or more and less than 1920, and C is 3 if it is 1920 or more.
- although the edge highlighting coefficient C is set as above in this example, it may instead be varied in a proportional manner, because the actual image size depends on the dot density.
- since the edge highlighting degree varies also according to the size of the unsharp mask, a large unsharp mask can be used for a large number of pixels, or a small unsharp mask for a small number of pixels.
- both edge highlighting coefficient C and unsharp mask 40 may be changed, or the edge highlighting degree may be changed with respect to only one of the two.
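Combining the rules just described, a selection routine could be sketched as follows. The thresholds for C come from the text above, while tying the 7×7 / 5×5 mask choice to the same size measure is an assumption made only for illustration.

```python
def select_edge_parameters(width, height):
    """Sketch of choosing the edge highlighting coefficient C and a mask size
    from the number of pixels along the shorter side of the image."""
    shorter = min(width, height)
    if shorter < 640:
        c = 1
    elif shorter < 1920:
        c = 2
    else:
        c = 3
    mask_size = 7 if shorter >= 640 else 5     # assumed pairing of mask and size
    return c, mask_size

print(select_edge_parameters(1024, 768))   # -> (2, 7)
```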
- the unsharp mask 40 is most heavily weighted at the central portion and gradually becomes smaller in the values of weighting toward the marginal portion. This degree of change is not always fixed but may be changed as necessary.
- in the unsharp mask 40, the weighting value of the outermost peripheral squares is 0 or 1. For a weight of 0 the multiplication is meaningless, and a weight of 1 contributes only very slightly in comparison with the total square value of 632.
- therefore, an unsharp mask 41 of 5×5 as shown in FIG.41 is used as a substitute for the 7×7 unsharp mask 40.
- the unsharp mask 41 corresponds to the mask obtained by omitting the outermost periphery of the 7×7 unsharp mask 40; in the inner 5×5 portion the two masks coincide in weighting. As a result, the concrete processing volume is reduced by half.
- the operation volume is reduced in the following manner.
- each component value corresponds to the luminance (lightness) of the associated color component. Therefore, the calculation of expression (49) should be performed individually for the gradation data of each of R, G and B.
- since multiplication and addition are repeated a number of times corresponding to the number of squares of the unsharp mask 40, such an individual operation for each component inevitably results in a large operation volume.
- the edge highlighting process corresponds to a process wherein the image data (R', G', B') after edge highlighting is calculated using the unsharp mask 40 with respect to each pixel of the matrix-like image data.
- in steps S406 to S412 there is represented a loop processing wherein the edge highlighting process is repeated for each pixel.
- in step S410, which is included in the loop processing, the attentional pixel is shifted successively in both horizontal and vertical directions, and this processing is repeated until the pixel being processed is judged to be the final pixel in step S412.
- in step S406 a comparison is made between adjacent pixels with respect to the image data, and only when the resulting difference is large is the operation using the unsharp mask conducted in step S408.
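Step S406 can be pictured as a cheap pre-test placed before the expensive mask operation; the particular difference measure and threshold below are assumptions, since the excerpt does not specify them.

```python
def needs_edge_processing(image, i, j, threshold=10):
    """Sketch of the step S406 pre-test: compare the attentional pixel with its
    horizontal and vertical neighbours and skip the unsharp-mask operation
    (step S408) when the local difference is small."""
    p = image[i][j]
    diff = max(abs(p - image[i][j - 1]), abs(p - image[i - 1][j]))
    return diff > threshold

img = [[10, 10, 10],
       [10, 12, 80],
       [10, 11, 85]]
print(needs_edge_processing(img, 1, 1))   # small difference -> False
print(needs_edge_processing(img, 1, 2))   # strong edge -> True
```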
- FIG.43 is a flowchart showing a processing corresponding to the printing process.
- in step S502 raster data is formed
- in step S504 the type of image data is judged on the basis of the foregoing flag, and one of steps S506 and S508, which are color transformation processings, is executed.
- binary coding is performed in step S510 and printing data is outputted in step S512.
- as shown in FIG.45, a cube comprising eight lattice points surrounding the coordinates P, whose component values are the RGB gradation data in the color specification space before transformation, is assumed here. Given that the transformation value at the k-th vertex of the cube is Dk and the volume of the cube is V, the transformation value Px at point P can be interpolated from weights based on the ratio of the volumes Vk of the eight small rectangular parallelepipeds shown in the figure, which are obtained by dividing the cube at point P:
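The interpolation described above is the familiar volume-weighted (trilinear) scheme, Px = sum over k of (Vk/V)·Dk. The sketch below assumes a color transformation table addressed by lattice indices at a spacing of `step` gradations and returning CMY-like triples; the table layout and the helper name are assumptions, since only the weighting principle is stated here.

```python
def interpolate_8_points(r, g, b, lut, step=16):
    """Sketch of the 8-point interpolation: Px = sum_k (Vk / V) * Dk.

    `lut[(ri, gi, bi)]` holds the transformed (e.g. CMY) value at the lattice
    point with indices (ri, gi, bi); V is the cube volume and Vk the volume of
    the small box diagonally opposite vertex k, so nearer vertices weigh more.
    """
    r0, g0, b0 = r // step, g // step, b // step
    fr, fg, fb = (r % step) / step, (g % step) / step, (b % step) / step
    result = [0.0, 0.0, 0.0]
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((fr if dr else 1 - fr) *
                     (fg if dg else 1 - fg) *
                     (fb if db else 1 - fb))          # Vk / V for this vertex
                dk = lut[(r0 + dr, g0 + dg, b0 + db)]
                result = [acc + w * c for acc, c in zip(result, dk)]
    return tuple(result)

# Tiny identity-like table for demonstration (lattice value = scaled indices).
lut = {(ri, gi, bi): (ri * 16, gi * 16, bi * 16)
       for ri in range(17) for gi in range(17) for bi in range(17)}
print(interpolate_8_points(100, 150, 200, lut))   # -> (100.0, 150.0, 200.0)
```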
- This caching table is a table of a certain capacity for retaining CMY gradation data obtained by executing the 8-point interpolating operation using the RGB gradation data before transformation. Initially, this table is blank, but the CMY gradation data obtained just after execution of the 8-point interpolating operation in step S606 is added and updated in step S608.
- when the processing routine of "cache + interpolating operation" is executed in step S506 as shown in FIG.43, the interior of the cache table is first retrieved in step S602 using the RGB gradation data before transformation as component values, and upon a cache hit (discovery by the retrieval), reference is made in step S612 to the stored CMY gradation data.
- the processing is repeated until the final pixel is judged in step S610, whereby the RGB gradation data is transformed into CMY gradation data with respect to each of the dot matrix-like pixels.
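The "cache + interpolating operation" routine of steps S602 to S612 can be sketched as a dictionary placed in front of the interpolation shown earlier; the unlimited dictionary used here stands in for the fixed-capacity caching table of the embodiment, and the example reuses `interpolate_8_points` and `lut` from the previous sketch.

```python
def make_cached_transform(interpolate):
    """Sketch of steps S602-S612: consult a cache first and fall back to the
    8-point interpolation only on a cache miss."""
    cache = {}                          # stands in for the fixed-size caching table

    def transform(r, g, b):
        key = (r, g, b)
        if key in cache:                # cache hit: reuse the stored data (S612)
            return cache[key]
        value = interpolate(r, g, b)    # miss: perform the interpolation (S606)
        cache[key] = value              # add/update the caching table (S608)
        return value

    return transform

# Reusing interpolate_8_points and lut from the previous sketch:
transform = make_cached_transform(lambda r, g, b: interpolate_8_points(r, g, b, lut))
print(transform(100, 150, 200))        # computed by interpolation
print(transform(100, 150, 200))        # second call served from the cache
```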
- next, a description will be given of step S508, which is another color transformation processing.
- FIG.47 and the diagram of FIG.48, which shows the distribution of an error to pixels, serve to explain an outline of the pre-gray level transformation. Since the basic expression of the foregoing 8-point interpolation requires multiplication eight times and addition seven times, the consumption of resources and time is large both in hardware implementation and in execution by software. For easier color transformation, therefore, the applicant in the present case has developed a gray level transformation as a substitute for the interpolating operation in Japanese Patent Laid-Open No. 30772/95.
- the pre-gray level transformation disclosed in the above unexamined publication uses, for example, an error diffusing method to transform the gray level of the pixel gradation data into coincidence with the lattice coordinates.
- Lattice coordinates proximate to the pixel to be transformed are searched for (S702), then an error (dg) from the lattice coordinates is calculated (S704), and the error (dg) is distributed to nearby pixels (S706).
- the burden of the operation can be lightened to a great extent in comparison with the repetition of multiplication and addition.
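A sketch of steps S702 to S706, assuming a lattice spacing of `step` gradations and a simple rightward/downward error distribution (the actual diffusion weights are not given in this excerpt), could look as follows; after this pass every pixel sits exactly on a lattice point, so the later color transformation of step S710 becomes a plain table lookup.

```python
def pre_gray_level_transform(channel, step=16, max_value=255):
    """Sketch of the pre-gray level transformation for one colour channel.

    Each pixel is snapped to the nearest lattice coordinate (S702), the
    quantisation error dg is computed (S704), and dg is pushed to the
    right-hand and lower neighbours (S706) with assumed weights of 1/2 each.
    """
    h, w = len(channel), len(channel[0])
    work = [[float(v) for v in row] for row in channel]
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            value = min(max(work[i][j], 0.0), float(max_value))
            lattice = min(int(round(value / step)) * step, max_value)  # nearest lattice point
            dg = value - lattice                                       # error from the lattice
            out[i][j] = lattice
            if j + 1 < w:
                work[i][j + 1] += dg / 2.0                             # distribute to the right
            if i + 1 < h:
                work[i + 1][j] += dg / 2.0                             # distribute downward
    return out

channel = [[100, 101, 103],
           [ 99, 102, 104],
           [ 98, 100, 105]]
print(pre_gray_level_transform(channel))
```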
- a detailed description thereof is here omitted. Since a gray level transformation for binary coding is performed even after color transformation, the gray level transformation conducted first is called herein a pre-gray level transformation.
- the gray level transformation is conducted for each of the rasterized dot matrix-like pixels, and the processing is therefore repeated until the final pixel is judged in step S708.
- in step S710, reference is made to the color transformation table using the RGB gradation data after the gray level transformation. At this time, since the data coincides with lattice points, an interpolating operation is unnecessary and the reading process is very easy.
- by totaling the luminance values of the pixels which have been selected from the image data by a thinning-out processing to obtain a luminance distribution (step S102), it becomes possible to count the number of colors used (step S104). As a result, if the number of colors used is large, the image data concerned can be judged to be a natural picture. Besides, on the basis of the said result of judgment, it becomes possible to automatically select the following processings, which are suitable for application to a natural picture:
- a contrast enlarging process (step S110)
- a saturation highlighting process (step S112)
- an edge highlighting process (step S114)
- a color transformation based on the pre-gray level transformation is performed if the image data concerned is a natural picture (S508), while if it is not a natural picture, the color transformation of the "cache + interpolating" operation is performed (S506); it is thus possible to automatically select a color transformation processing with a small processing volume.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
- Facsimile Image Signal Circuits (AREA)
- Color Image Communication Systems (AREA)
- Closed-Circuit Television Systems (AREA)
- Color, Gradation (AREA)
- Processing Or Creating Images (AREA)
- Television Signal Processing For Recording (AREA)
- Editing Of Facsimile Originals (AREA)
- Control Of Indicators Other Than Cathode Ray Tubes (AREA)
Applications Claiming Priority (9)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP30222396 | 1996-11-13 | ||
JP30222396 | 1996-11-13 | ||
JP30637096 | 1996-11-18 | ||
JP30637196 | 1996-11-18 | ||
JP30637096 | 1996-11-18 | ||
JP30637196 | 1996-11-18 | ||
JP31107096 | 1996-11-21 | ||
JP31107096 | 1996-11-21 | ||
EP97309128A EP0843464B1 (fr) | 1996-11-13 | 1997-11-13 | Système et procédé de traitement d'images |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP97309128A Division EP0843464B1 (fr) | 1996-11-13 | 1997-11-13 | Système et procédé de traitement d'images |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1587301A2 true EP1587301A2 (fr) | 2005-10-19 |
EP1587301A3 EP1587301A3 (fr) | 2005-11-30 |
Family
ID=27479825
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP05076288A Withdrawn EP1587301A3 (fr) | 1996-11-13 | 1997-11-13 | Système et procédé de traitement d'images et support pour le programme correspondant |
EP05076287A Withdrawn EP1587303A3 (fr) | 1996-11-13 | 1997-11-13 | Système et procédé de traitement d'images et support pour le programme correspondant |
EP05076286A Withdrawn EP1587302A1 (fr) | 1996-11-13 | 1997-11-13 | Système et procédé de traitement d'images, et support sur lequel est enregistré un logiciel de commande de traitement d'image |
EP97309128A Expired - Lifetime EP0843464B1 (fr) | 1996-11-13 | 1997-11-13 | Système et procédé de traitement d'images |
EP05076285A Expired - Lifetime EP1587300B1 (fr) | 1996-11-13 | 1997-11-13 | Système et procédé de traitement d'images et support pour le programme correspondant |
Family Applications After (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP05076287A Withdrawn EP1587303A3 (fr) | 1996-11-13 | 1997-11-13 | Système et procédé de traitement d'images et support pour le programme correspondant |
EP05076286A Withdrawn EP1587302A1 (fr) | 1996-11-13 | 1997-11-13 | Système et procédé de traitement d'images, et support sur lequel est enregistré un logiciel de commande de traitement d'image |
EP97309128A Expired - Lifetime EP0843464B1 (fr) | 1996-11-13 | 1997-11-13 | Système et procédé de traitement d'images |
EP05076285A Expired - Lifetime EP1587300B1 (fr) | 1996-11-13 | 1997-11-13 | Système et procédé de traitement d'images et support pour le programme correspondant |
Country Status (4)
Country | Link |
---|---|
US (5) | US6351558B1 (fr) |
EP (5) | EP1587301A3 (fr) |
AT (2) | ATE315875T1 (fr) |
DE (2) | DE69739095D1 (fr) |
Families Citing this family (81)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6351558B1 (en) | 1996-11-13 | 2002-02-26 | Seiko Epson Corporation | Image processing system, image processing method, and medium having an image processing control program recorded thereon |
JPH11289454A (ja) * | 1997-11-28 | 1999-10-19 | Canon Inc | 画像処理方法および画像処理装置およびコンピュータが読み出し可能なプログラムを格納した記憶媒体 |
JPH11266369A (ja) * | 1998-03-17 | 1999-09-28 | Fuji Photo Film Co Ltd | 画像の明るさ調整方法および装置 |
JPH11298736A (ja) * | 1998-04-14 | 1999-10-29 | Minolta Co Ltd | 画像処理方法、画像処理プログラムが記録された可読記録媒体及び画像処理装置 |
JP3492202B2 (ja) * | 1998-06-24 | 2004-02-03 | キヤノン株式会社 | 画像処理方法、装置および記録媒体 |
US6694051B1 (en) * | 1998-06-24 | 2004-02-17 | Canon Kabushiki Kaisha | Image processing method, image processing apparatus and recording medium |
JP3748172B2 (ja) * | 1998-12-09 | 2006-02-22 | 富士通株式会社 | 画像処理装置 |
JP3714657B2 (ja) * | 1999-05-12 | 2005-11-09 | パイオニア株式会社 | 階調補正装置 |
JP2001148776A (ja) * | 1999-11-18 | 2001-05-29 | Canon Inc | 画像処理装置及び方法及び記憶媒体 |
US6650771B1 (en) * | 1999-11-22 | 2003-11-18 | Eastman Kodak Company | Color management system incorporating parameter control channels |
US6980326B2 (en) | 1999-12-15 | 2005-12-27 | Canon Kabushiki Kaisha | Image processing method and apparatus for color correction of an image |
US7006668B2 (en) * | 1999-12-28 | 2006-02-28 | Canon Kabushiki Kaisha | Image processing method and image processing apparatus |
US6771311B1 (en) * | 2000-12-11 | 2004-08-03 | Eastman Kodak Company | Automatic color saturation enhancement |
JP4783985B2 (ja) * | 2001-02-28 | 2011-09-28 | 日本電気株式会社 | 映像処理装置、映像表示装置及びそれに用いる映像処理方法並びにそのプログラム |
US6818013B2 (en) * | 2001-06-14 | 2004-11-16 | Cordis Corporation | Intravascular stent device |
US20040161152A1 (en) * | 2001-06-15 | 2004-08-19 | Matteo Marconi | Automatic natural content detection in video information |
JP3631169B2 (ja) * | 2001-06-19 | 2005-03-23 | 三洋電機株式会社 | ディジタルカメラ |
JP3741212B2 (ja) * | 2001-07-26 | 2006-02-01 | セイコーエプソン株式会社 | 画像処理システム、プロジェクタ、プログラム、情報記憶媒体および白黒伸張処理方法 |
US6947597B2 (en) * | 2001-09-28 | 2005-09-20 | Xerox Corporation | Soft picture/graphics classification system and method |
US6983068B2 (en) * | 2001-09-28 | 2006-01-03 | Xerox Corporation | Picture/graphics classification system and method |
US7119924B2 (en) * | 2001-09-28 | 2006-10-10 | Xerox Corporation | Detection and segmentation of sweeps in color graphics images |
KR100453038B1 (ko) * | 2001-12-24 | 2004-10-15 | 삼성전자주식회사 | 컬러 영상의 채도 조절 장치 및 방법 |
US6996277B2 (en) * | 2002-01-07 | 2006-02-07 | Xerox Corporation | Image type classification using color discreteness features |
US6985628B2 (en) * | 2002-01-07 | 2006-01-10 | Xerox Corporation | Image type classification using edge features |
US7076298B2 (en) * | 2002-06-14 | 2006-07-11 | Medtronic, Inc. | Method and apparatus for prevention of arrhythmia clusters using overdrive pacing |
US6778183B1 (en) * | 2002-07-10 | 2004-08-17 | Genesis Microchip Inc. | Method and system for adaptive color and contrast for display devices |
US7034843B2 (en) * | 2002-07-10 | 2006-04-25 | Genesis Microchip Inc. | Method and system for adaptive color and contrast for display devices |
JP2006501977A (ja) * | 2002-10-07 | 2006-01-19 | コンフォーミス・インコーポレイテッド | 関節表面に適合する3次元外形を伴う最小限侵襲性関節インプラント |
JP2006505366A (ja) * | 2002-11-07 | 2006-02-16 | コンフォーミス・インコーポレイテッド | 半月板サイズおよび形状の決定および工夫した処置の方法 |
JP4167097B2 (ja) * | 2003-03-17 | 2008-10-15 | 株式会社沖データ | 画像処理方法および画像処理装置 |
JP4189654B2 (ja) * | 2003-04-18 | 2008-12-03 | セイコーエプソン株式会社 | 画像処理装置 |
JP4374901B2 (ja) * | 2003-05-16 | 2009-12-02 | セイコーエプソン株式会社 | 画像の明度補正処理 |
KR101027825B1 (ko) * | 2003-09-11 | 2011-04-07 | 파나소닉 주식회사 | 시각 처리 장치, 시각 처리 방법, 시각 처리 프로그램 및반도체 장치 |
WO2005027043A1 (fr) | 2003-09-11 | 2005-03-24 | Matsushita Electric Industrial Co., Ltd. | Dispositif de traitement visuel, procede, programme de traitement visuel, circuit integre, afficheur, imageur, et terminal d'information mobile |
EP1667065B1 (fr) * | 2003-09-11 | 2018-06-06 | Panasonic Intellectual Property Corporation of America | Appareil de traitement visuel, procede de traitement visuel, programme de traitement visuel et dispositif semi-conducteur |
KR100612494B1 (ko) * | 2004-06-07 | 2006-08-14 | 삼성전자주식회사 | 칼러 영상의 채도 조절 장치 및 방법 |
US7587084B1 (en) | 2004-07-15 | 2009-09-08 | Sun Microsystems, Inc. | Detection of anti aliasing in two-color images for improved compression |
US20060050084A1 (en) * | 2004-09-03 | 2006-03-09 | Eric Jeffrey | Apparatus and method for histogram stretching |
KR100752850B1 (ko) * | 2004-12-15 | 2007-08-29 | 엘지전자 주식회사 | 디지털 영상 촬영장치와 방법 |
US7502145B2 (en) * | 2004-12-22 | 2009-03-10 | Xerox Corporation | Systems and methods for improved line edge quality |
JP4372747B2 (ja) * | 2005-01-25 | 2009-11-25 | シャープ株式会社 | 輝度レベル変換装置、輝度レベル変換方法、固体撮像装置、輝度レベル変換プログラム、および記録媒体 |
US7512268B2 (en) * | 2005-02-22 | 2009-03-31 | Texas Instruments Incorporated | System and method for local value adjustment |
US20060204086A1 (en) * | 2005-03-10 | 2006-09-14 | Ullas Gargi | Compression of palettized images |
US8131108B2 (en) * | 2005-04-22 | 2012-03-06 | Broadcom Corporation | Method and system for dynamic contrast stretch |
US7586653B2 (en) * | 2005-04-22 | 2009-09-08 | Lexmark International, Inc. | Method and system for enhancing an image using luminance scaling |
JP4648071B2 (ja) * | 2005-04-28 | 2011-03-09 | 株式会社日立製作所 | 映像表示装置及び映像信号の色飽和度制御方法 |
US20060274937A1 (en) * | 2005-06-07 | 2006-12-07 | Eric Jeffrey | Apparatus and method for adjusting colors of a digital image |
US7580580B1 (en) | 2005-07-12 | 2009-08-25 | Sun Microsystems, Inc. | Method for compression of two-color anti aliased images |
KR101128454B1 (ko) * | 2005-11-10 | 2012-03-23 | 삼성전자주식회사 | 콘트라스트 향상 방법 및 장치 |
US7746411B1 (en) | 2005-12-07 | 2010-06-29 | Marvell International Ltd. | Color management unit |
US8094959B2 (en) * | 2005-12-09 | 2012-01-10 | Seiko Epson Corporation | Efficient detection of camera shake |
US20070133899A1 (en) * | 2005-12-09 | 2007-06-14 | Rai Barinder S | Triggering an image processing function |
US7848569B2 (en) * | 2005-12-14 | 2010-12-07 | Micron Technology, Inc. | Method and apparatus providing automatic color balancing for digital imaging systems |
WO2007072548A1 (fr) * | 2005-12-20 | 2007-06-28 | Fujitsu Limited | Dispositif de discrimination d’image |
US20070154089A1 (en) * | 2006-01-03 | 2007-07-05 | Chang-Jung Kao | Method and apparatus for calculating image histogram with configurable granularity |
KR101225058B1 (ko) * | 2006-02-14 | 2013-01-23 | 삼성전자주식회사 | 콘트라스트 조절 방법 및 장치 |
TWI315961B (en) * | 2006-03-16 | 2009-10-11 | Quanta Comp Inc | Method and apparatus for adjusting contrast of image |
TWI319688B (en) * | 2006-03-16 | 2010-01-11 | Quanta Comp Inc | Method and apparatus for adjusting contrast of image |
WO2007142134A1 (fr) * | 2006-06-02 | 2007-12-13 | Rohm Co., Ltd. | circuit de traitement d'image, dispositif à semi-conducteurs et dispositif de traitement d'image |
JP4997846B2 (ja) * | 2006-06-30 | 2012-08-08 | ブラザー工業株式会社 | 画像処理プログラムおよび画像処理装置 |
US7733535B2 (en) * | 2006-07-10 | 2010-06-08 | Silverbrook Research Pty Ltd | Method and apparatus for image manipulation via a dither matrix |
US7999812B2 (en) * | 2006-08-15 | 2011-08-16 | Nintendo Co, Ltd. | Locality based morphing between less and more deformed models in a computer graphics system |
JP4393491B2 (ja) * | 2006-09-12 | 2010-01-06 | キヤノン株式会社 | 画像処理装置およびその制御方法 |
TWI327868B (en) * | 2006-11-13 | 2010-07-21 | Wintek Corp | Image processing method |
JP4907382B2 (ja) * | 2007-02-23 | 2012-03-28 | ジーイー・メディカル・システムズ・グローバル・テクノロジー・カンパニー・エルエルシー | 超音波画像表示方法および超音波診断装置 |
US20080219561A1 (en) * | 2007-03-05 | 2008-09-11 | Ricoh Company, Limited | Image processing apparatus, image processing method, and computer program product |
KR20090005621A (ko) * | 2007-07-09 | 2009-01-14 | 삼성전자주식회사 | 색상 자동 변경 방법 및 그 장치 |
JP4861924B2 (ja) * | 2007-07-31 | 2012-01-25 | キヤノン株式会社 | 画像処理装置、その制御方法、そのプログラム、その記憶媒体 |
US8117134B2 (en) * | 2008-10-16 | 2012-02-14 | Xerox Corporation | Neutral pixel correction for proper marked color printing |
JP2010127994A (ja) * | 2008-11-25 | 2010-06-10 | Sony Corp | 補正値算出方法、表示装置 |
JP5694761B2 (ja) * | 2010-12-28 | 2015-04-01 | キヤノン株式会社 | 画像処理装置、画像処理方法、およびプログラム |
US8937749B2 (en) | 2012-03-09 | 2015-01-20 | Xerox Corporation | Integrated color detection and color pixel counting for billing |
TWI475556B (zh) * | 2012-11-05 | 2015-03-01 | Chunghwa Picture Tubes Ltd | 用於提升顯示系統顯示的彩色影像之對比的方法以及利用此方法的影像處理系統 |
JP6381311B2 (ja) | 2013-07-04 | 2018-08-29 | キヤノン株式会社 | 画像形成装置、画像形成方法、およびプログラム |
US9691138B2 (en) | 2013-08-30 | 2017-06-27 | Google Inc. | System and method for adjusting pixel saturation |
JP6417851B2 (ja) | 2014-10-28 | 2018-11-07 | ブラザー工業株式会社 | 画像処理装置、および、コンピュータプログラム |
TWI570635B (zh) * | 2016-03-08 | 2017-02-11 | 和碩聯合科技股份有限公司 | 圖像辨識方法及執行該方法之電子裝置、電腦可讀取記錄媒體 |
US9762772B1 (en) * | 2016-07-12 | 2017-09-12 | Ricoh Company, Ltd. | Color hash table reuse for print job processing |
CN110868908B (zh) * | 2017-06-15 | 2022-04-08 | 富士胶片株式会社 | 医用图像处理装置及内窥镜系统以及医用图像处理装置的工作方法 |
CN109616040B (zh) * | 2019-01-30 | 2022-05-17 | 厦门天马微电子有限公司 | 一种显示装置及其驱动方法以及电子设备 |
JP7506870B2 (ja) * | 2020-04-08 | 2024-06-27 | 日本コントロールシステム株式会社 | マスク情報調整装置、マスクデータ調整方法、プログラム |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5121224A (en) * | 1990-06-01 | 1992-06-09 | Eastman Kodak Company | Reproduction apparatus with selective screening and continuous-tone discrimination |
US5280367A (en) * | 1991-05-28 | 1994-01-18 | Hewlett-Packard Company | Automatic separation of text from background in scanned images of complex documents |
Family Cites Families (90)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US518105A (en) * | 1894-04-10 | Mathias jensen | ||
US578696A (en) * | 1897-03-09 | Machine for inserting fastenings in boots | ||
US4731671A (en) | 1985-05-06 | 1988-03-15 | Eastman Kodak Company | Contrast adjustment in digital image processing method employing histogram normalization |
JPS6230473A (ja) | 1985-07-31 | 1987-02-09 | Mitsubishi Electric Corp | イメ−ジプリンタ装置 |
JPS6288071A (ja) | 1985-10-14 | 1987-04-22 | Fujitsu Ltd | 色画素の濃度パタ−ン変換方式 |
US5181105A (en) | 1986-05-30 | 1993-01-19 | Canon Kabushiki Kaisha | Color image correction based on characteristics of a highlights or other predetermined image portion |
DE3629403C2 (de) | 1986-08-29 | 1994-09-29 | Agfa Gevaert Ag | Verfahren zur Korrektur der Farbsättigung bei der elektronischen Bildverarbeitung |
JP2511006B2 (ja) | 1986-10-30 | 1996-06-26 | キヤノン株式会社 | 色画像デ−タ補間方法 |
JPH0613821Y2 (ja) | 1987-07-09 | 1994-04-13 | アミテック株式会社 | ワイドベルトサンダ−機 |
JPH0750913B2 (ja) | 1987-12-17 | 1995-05-31 | 富士写真フイルム株式会社 | 画像信号処理方法 |
JPH01169683A (ja) | 1987-12-25 | 1989-07-04 | Pfu Ltd | 画像入力装置 |
JPH01207885A (ja) | 1988-02-15 | 1989-08-21 | Fujitsu Ltd | 画像階調制御方式 |
JPH01237144A (ja) | 1988-03-17 | 1989-09-21 | Fuji Photo Film Co Ltd | 彩度に依存した色修正方法 |
JPH0264875A (ja) * | 1988-08-31 | 1990-03-05 | Toshiba Corp | カラー画像の高速彩度変換装置 |
JPH0229072A (ja) | 1988-04-14 | 1990-01-31 | Ricoh Co Ltd | デジタル画像処理装置の画像補正装置 |
JPH0681251B2 (ja) | 1988-05-31 | 1994-10-12 | 松下電器産業株式会社 | 画像処理装置 |
JPH02205984A (ja) | 1989-02-06 | 1990-08-15 | Canon Inc | 画像処理装置 |
JPH02268075A (ja) | 1989-04-10 | 1990-11-01 | Canon Inc | 画像処理装置 |
JP2964492B2 (ja) | 1989-08-15 | 1999-10-18 | ソニー株式会社 | 画像処理装置及び画像処理方法 |
JP3057255B2 (ja) | 1989-08-15 | 2000-06-26 | ソニー株式会社 | 画像処理装置及び画像処理方法 |
US5046118A (en) * | 1990-02-06 | 1991-09-03 | Eastman Kodak Company | Tone-scale generation method and apparatus for digital x-ray images |
JPH05205039A (ja) | 1992-01-23 | 1993-08-13 | Hitachi Ltd | カラー画像処理方法およびカラー画像処理装置 |
EP0448330B1 (fr) * | 1990-03-19 | 1996-08-21 | Canon Kabushiki Kaisha | Procédé et appareil de traitement d'image |
JP3150137B2 (ja) | 1990-03-19 | 2001-03-26 | キヤノン株式会社 | 画像処理方法 |
JP2505300B2 (ja) | 1990-03-22 | 1996-06-05 | 日立電子株式会社 | カラ―映像信号の色調補正装置 |
JP3011432B2 (ja) | 1990-05-14 | 2000-02-21 | 株式会社東芝 | カラー画像処理装置 |
JP2521843B2 (ja) | 1990-10-04 | 1996-08-07 | 大日本スクリーン製造株式会社 | セットアップパラメ―タ決定特性を修正する方法及び自動セットアップ装置 |
JPH04248681A (ja) | 1991-02-04 | 1992-09-04 | Nippon Telegr & Teleph Corp <Ntt> | カラー画像強調・弛緩処理方法 |
JP2936791B2 (ja) | 1991-05-28 | 1999-08-23 | 松下電器産業株式会社 | 階調補正装置 |
JP2573434B2 (ja) | 1991-05-31 | 1997-01-22 | 松下電工株式会社 | 特定色抽出方法 |
JP3133779B2 (ja) | 1991-06-14 | 2001-02-13 | キヤノン株式会社 | 画像処理装置 |
JP3053913B2 (ja) | 1991-07-15 | 2000-06-19 | 帝三製薬株式会社 | ビンポセチン類含有貼付剤 |
JP2616520B2 (ja) | 1991-08-30 | 1997-06-04 | 株式会社島津製作所 | 医療用画像表示装置 |
EP0536821B1 (fr) | 1991-09-27 | 1996-09-04 | Agfa-Gevaert N.V. | Procédé de reproduction d'images médicales pour générer des images de qualité optimale pour diagnostic |
JP3253112B2 (ja) | 1991-10-30 | 2002-02-04 | キヤノン株式会社 | 画像処理装置及び画像処理方法 |
JP2936085B2 (ja) | 1991-11-19 | 1999-08-23 | 富士写真フイルム株式会社 | 画像データ処理方法および装置 |
DE69325527T2 (de) * | 1992-02-21 | 1999-11-25 | Canon K.K., Tokio/Tokyo | Gerät und Verfahren zur Bildverarbeitung |
JP3253117B2 (ja) | 1992-02-21 | 2002-02-04 | キヤノン株式会社 | 画像処理装置および方法 |
JP3189156B2 (ja) | 1992-05-15 | 2001-07-16 | キヤノン株式会社 | ビデオ・プリンタ及び画像処理方法 |
JPH05344328A (ja) | 1992-06-12 | 1993-12-24 | Canon Inc | 印刷装置 |
JPH0650983A (ja) | 1992-07-28 | 1994-02-25 | Anima Kk | 被検体部位の動作解析装置 |
JP3249616B2 (ja) | 1993-01-22 | 2002-01-21 | キヤノン株式会社 | 画像データ圧縮装置及び方法 |
DE69333694T2 (de) * | 1992-09-11 | 2005-10-20 | Canon K.K. | Verfahren und Anordnung zur Bildverarbeitung |
GB2271493A (en) | 1992-10-02 | 1994-04-13 | Canon Res Ct Europe Ltd | Processing colour image data |
JPH06124329A (ja) | 1992-10-13 | 1994-05-06 | Kyocera Corp | 彩度変更回路 |
JPH06178111A (ja) | 1992-12-10 | 1994-06-24 | Canon Inc | 画像処理装置 |
JPH06178113A (ja) | 1992-12-10 | 1994-06-24 | Konica Corp | 画像データ保存及び変換装置と画像代表値の算出装置 |
JPH06197223A (ja) | 1992-12-24 | 1994-07-15 | Konica Corp | 画像読取り装置 |
JP3347378B2 (ja) | 1993-01-07 | 2002-11-20 | キヤノン株式会社 | 印刷装置および印刷のための画像処理方法 |
JP3268512B2 (ja) | 1993-03-03 | 2002-03-25 | セイコーエプソン株式会社 | 画像処理装置および画像処理方法 |
JPH06284281A (ja) | 1993-03-25 | 1994-10-07 | Toshiba Corp | 画像処理装置 |
JP3548589B2 (ja) * | 1993-04-30 | 2004-07-28 | 富士通株式会社 | 出力装置の色再現方法及びその装置 |
JP2600573B2 (ja) | 1993-05-31 | 1997-04-16 | 日本電気株式会社 | カラー画像の彩度強調方法及び装置 |
JP2876934B2 (ja) | 1993-05-31 | 1999-03-31 | 日本電気株式会社 | 画像のコントラスト強調方法及び装置 |
JPH0723224A (ja) | 1993-06-28 | 1995-01-24 | Konica Corp | 複写装置 |
US5382976A (en) * | 1993-06-30 | 1995-01-17 | Eastman Kodak Company | Apparatus and method for adaptively interpolating a full color image utilizing luminance gradients |
JP3042945B2 (ja) * | 1993-07-07 | 2000-05-22 | 富士通株式会社 | 画像抽出装置 |
JPH0821079B2 (ja) | 1993-07-15 | 1996-03-04 | 工業技術院長 | 画質改善方法および装置 |
JP3472601B2 (ja) | 1993-08-27 | 2003-12-02 | 新光電気工業株式会社 | 半導体装置 |
KR0180577B1 (ko) * | 1993-12-16 | 1999-05-15 | 모리시다 요이치 | 멀티윈도우 장치 |
JP3238584B2 (ja) | 1993-12-16 | 2001-12-17 | 松下電器産業株式会社 | マルチウィンドウ装置 |
US6006010A (en) * | 1993-12-28 | 1999-12-21 | Minolta Co., Ltd. | Digital image forming apparatus |
US5729626A (en) * | 1993-12-28 | 1998-03-17 | Minolta Co., Ltd. | Digital image forming apparatus |
JPH07236038A (ja) | 1993-12-28 | 1995-09-05 | Minolta Co Ltd | デジタル画像形成装置 |
JP3489796B2 (ja) | 1994-01-14 | 2004-01-26 | 株式会社リコー | 画像信号処理装置 |
JP2906974B2 (ja) | 1994-01-14 | 1999-06-21 | 富士ゼロックス株式会社 | カラー画像処理方法および装置 |
JP2906975B2 (ja) | 1994-01-14 | 1999-06-21 | 富士ゼロックス株式会社 | カラー画像処理方法および装置 |
JP3491998B2 (ja) | 1994-01-31 | 2004-02-03 | キヤノン株式会社 | 画像処理方法及び装置 |
US5450217A (en) | 1994-05-23 | 1995-09-12 | Xerox Corporation | Image-dependent color saturation correction in a natural scene pictorial image |
JPH0832827A (ja) | 1994-07-13 | 1996-02-02 | Toppan Printing Co Ltd | ディジタル画像の階調補正装置 |
JP3605856B2 (ja) | 1994-07-25 | 2004-12-22 | 富士写真フイルム株式会社 | 画像処理装置 |
JPH08102860A (ja) | 1994-08-04 | 1996-04-16 | Canon Inc | 画像処理装置及びその方法 |
JP3142108B2 (ja) | 1994-11-07 | 2001-03-07 | 株式会社東芝 | 画像処理装置および画像処理方法 |
US5982926A (en) | 1995-01-17 | 1999-11-09 | At & T Ipm Corp. | Real-time image enhancement techniques |
JPH08272347A (ja) | 1995-02-01 | 1996-10-18 | Canon Inc | 色変換方法とその装置及び画像処理方法とその装置 |
JPH08223409A (ja) | 1995-02-10 | 1996-08-30 | Canon Inc | 画像処理装置およびその方法 |
US6118895A (en) * | 1995-03-07 | 2000-09-12 | Minolta Co., Ltd. | Image forming apparatus for distinguishing between types of color and monochromatic documents |
JP3401977B2 (ja) | 1995-03-07 | 2003-04-28 | ミノルタ株式会社 | 画像再現装置 |
JP3333894B2 (ja) | 1995-03-07 | 2002-10-15 | ミノルタ株式会社 | 画像処理装置 |
JPH08297054A (ja) * | 1995-04-26 | 1996-11-12 | Advantest Corp | 色感測定装置 |
JPH09130596A (ja) | 1995-10-27 | 1997-05-16 | Ricoh Co Ltd | 画像処理装置 |
US5652621A (en) * | 1996-02-23 | 1997-07-29 | Eastman Kodak Company | Adaptive color plane interpolation in single sensor color electronic camera |
JP3380831B2 (ja) | 1996-03-08 | 2003-02-24 | シャープ株式会社 | 画像形成装置 |
JP3626966B2 (ja) | 1996-03-28 | 2005-03-09 | コニカミノルタビジネステクノロジーズ株式会社 | 画像処理装置 |
JP3277818B2 (ja) | 1996-08-23 | 2002-04-22 | 松下電器産業株式会社 | 多値画像2値化装置 |
JP3596614B2 (ja) | 1996-11-13 | 2004-12-02 | セイコーエプソン株式会社 | 画像処理装置、画像処理方法および画像処理プログラムを記録した媒体 |
JP3698205B2 (ja) | 1996-11-13 | 2005-09-21 | セイコーエプソン株式会社 | 画像処理装置、画像処理方法および画像処理プログラムを記録した媒体 |
JP3682872B2 (ja) | 1996-11-13 | 2005-08-17 | セイコーエプソン株式会社 | 画像処理装置、画像処理方法および画像処理プログラムを記録した媒体 |
US6351558B1 (en) | 1996-11-13 | 2002-02-26 | Seiko Epson Corporation | Image processing system, image processing method, and medium having an image processing control program recorded thereon |
JP3503372B2 (ja) * | 1996-11-26 | 2004-03-02 | ミノルタ株式会社 | 画素補間装置及びその画素補間方法 |
- 1997
- 1997-11-05 US US08/964,885 patent/US6351558B1/en not_active Expired - Lifetime
- 1997-11-13 EP EP05076288A patent/EP1587301A3/fr not_active Withdrawn
- 1997-11-13 EP EP05076287A patent/EP1587303A3/fr not_active Withdrawn
- 1997-11-13 EP EP05076286A patent/EP1587302A1/fr not_active Withdrawn
- 1997-11-13 DE DE69739095T patent/DE69739095D1/de not_active Expired - Lifetime
- 1997-11-13 DE DE69735083T patent/DE69735083T2/de not_active Expired - Lifetime
- 1997-11-13 AT AT97309128T patent/ATE315875T1/de not_active IP Right Cessation
- 1997-11-13 EP EP97309128A patent/EP0843464B1/fr not_active Expired - Lifetime
- 1997-11-13 EP EP05076285A patent/EP1587300B1/fr not_active Expired - Lifetime
- 1997-11-13 AT AT05076285T patent/ATE413772T1/de not_active IP Right Cessation
- 2001
- 2001-12-06 US US10/004,406 patent/US6539111B2/en not_active Expired - Lifetime
- 2002
- 2002-12-20 US US10/323,722 patent/US6754381B2/en not_active Expired - Lifetime
- 2004
- 2004-06-14 US US10/710,031 patent/US7512263B2/en not_active Expired - Fee Related
- 2004-06-14 US US10/710,033 patent/US7155060B2/en not_active Expired - Fee Related
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5121224A (en) * | 1990-06-01 | 1992-06-09 | Eastman Kodak Company | Reproduction apparatus with selective screening and continuous-tone discrimination |
US5280367A (en) * | 1991-05-28 | 1994-01-18 | Hewlett-Packard Company | Automatic separation of text from background in scanned images of complex documents |
Also Published As
Publication number | Publication date |
---|---|
EP1587303A2 (fr) | 2005-10-19 |
DE69735083D1 (de) | 2006-04-06 |
US6754381B2 (en) | 2004-06-22 |
EP1587300A2 (fr) | 2005-10-19 |
US7155060B2 (en) | 2006-12-26 |
EP1587300B1 (fr) | 2008-11-05 |
DE69735083T2 (de) | 2006-07-20 |
ATE413772T1 (de) | 2008-11-15 |
EP1587302A1 (fr) | 2005-10-19 |
ATE315875T1 (de) | 2006-02-15 |
EP1587301A3 (fr) | 2005-11-30 |
US20040208360A1 (en) | 2004-10-21 |
US20030095706A1 (en) | 2003-05-22 |
EP0843464A3 (fr) | 2000-11-29 |
EP1587300A3 (fr) | 2005-11-30 |
US20040208366A1 (en) | 2004-10-21 |
EP0843464A2 (fr) | 1998-05-20 |
US7512263B2 (en) | 2009-03-31 |
DE69739095D1 (de) | 2008-12-18 |
US6539111B2 (en) | 2003-03-25 |
EP0843464B1 (fr) | 2006-01-11 |
EP1587303A3 (fr) | 2005-11-30 |
US6351558B1 (en) | 2002-02-26 |
US20020126329A1 (en) | 2002-09-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1587300B1 (fr) | Système et procédé de traitement d'images et support pour le programme correspondant | |
US8553285B2 (en) | Image processing apparatus, an image processing method, a medium on which an image processing control program is recorded, an image evaluation device, an image evaluation method and a medium on which an image evaluation program is recorded | |
JPH10198802A (ja) | 画像処理装置、画像処理方法および画像処理プログラムを記録した媒体 | |
JPH11146219A (ja) | 画像処理装置、画像処理方法、画像処理プログラムを記録した媒体 | |
JP4243362B2 (ja) | 画像処理装置、画像処理方法、および画像処理プログラムを記録した記録媒体 | |
JP4019204B2 (ja) | 画像処理装置、画像処理方法、画像処理制御プログラムを記録した媒体 | |
JP4240236B2 (ja) | 画像処理装置、画像処理方法、画像処理プログラムを記録した媒体および印刷装置 | |
JP3981779B2 (ja) | 画像処理装置、画像処理方法および画像処理プログラムを記録した媒体 | |
JP3953897B2 (ja) | 画像処理装置、画像処理方法および画像処理プログラムを記録した媒体 | |
JPH10340332A (ja) | 画像処理装置、画像処理方法、画像処理制御プログラムを記録した媒体 | |
JP3501151B2 (ja) | 画像処理装置、画像処理方法、画像処理制御プログラムを記録した媒体 | |
JP2003085546A5 (fr) | ||
JP3596614B2 (ja) | 画像処理装置、画像処理方法および画像処理プログラムを記録した媒体 | |
JP2003050999A (ja) | 画像処理装置、画像処理方法および画像処理プログラムを記録した媒体 | |
JP2003050999A5 (fr) | ||
JP2003178289A (ja) | 画像処理装置、画像評価装置、画像処理方法、画像評価方法および画像処理プログラムを記録した記録媒体 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
PUAL | Search report despatched |
Free format text: ORIGINAL CODE: 0009013 |
|
17P | Request for examination filed |
Effective date: 20050621 |
|
AC | Divisional application: reference to earlier application |
Ref document number: 0843464 Country of ref document: EP Kind code of ref document: P |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE |
|
AK | Designated contracting states |
Kind code of ref document: A3 Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: 7H 04N 1/56 B Ipc: 7H 04N 1/407 B Ipc: 7H 04N 1/60 A |
|
AKX | Designation fees paid |
Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE |
|
17Q | First examination report despatched |
Effective date: 20060915 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20100429 |