WO2009130820A1 - Image processing device, display device, image processing method, program, and recording medium - Google Patents
Image processing device, display device, image processing method, program, and recording medium
- Publication number
- WO2009130820A1 · PCT/JP2008/072403 · JP2008072403W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image data
- interpolation
- edge
- pixel
- difference
- Prior art date
Links
- 238000012545 processing Methods 0.000 title claims abstract description 139
- 238000003672 processing method Methods 0.000 title claims description 7
- 238000000034 method Methods 0.000 claims abstract description 175
- 238000012935 Averaging Methods 0.000 claims abstract description 60
- 230000004069 differentiation Effects 0.000 claims abstract description 7
- 230000008569 process Effects 0.000 claims description 110
- 238000004364 calculation method Methods 0.000 claims description 108
- 238000012937 correction Methods 0.000 claims description 12
- 238000003708 edge detection Methods 0.000 description 36
- 239000004973 liquid crystal related substance Substances 0.000 description 28
- 238000010586 diagram Methods 0.000 description 12
- 230000008859 change Effects 0.000 description 11
- 230000005540 biological transmission Effects 0.000 description 7
- 238000004891 communication Methods 0.000 description 6
- 238000001514 detection method Methods 0.000 description 5
- 238000005516 engineering process Methods 0.000 description 4
- 230000002093 peripheral effect Effects 0.000 description 4
- 230000008901 benefit Effects 0.000 description 3
- 238000002474 experimental method Methods 0.000 description 2
- 238000000605 extraction Methods 0.000 description 2
- 230000006870 function Effects 0.000 description 2
- 230000003993 interaction Effects 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 238000006243 chemical reaction Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000012423 maintenance Methods 0.000 description 1
- 230000007257 malfunction Effects 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 238000010295 mobile communication Methods 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 239000004065 semiconductor Substances 0.000 description 1
- 238000004904 shortening Methods 0.000 description 1
- 238000003860 storage Methods 0.000 description 1
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
- H04N7/0125—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level one of the standards being a high definition standard
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/04—Changes in size, position or resolution of an image
- G09G2340/0407—Resolution change, inclusive of the use of different resolutions for different screen areas
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
- H04N7/0135—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
- H04N7/0142—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes the interpolation being edge adaptive
Definitions
- the present invention relates to an image processing apparatus and an image processing method for upscaling the resolution of input image data to a high resolution.
- As a conventional approach, there is a method of interpolating target pixel information with a simple filter such as the bilinear method, the bicubic method, or the Lanczos method.
- However, these methods only fill in the spaces between the pixels of the input image data according to a fixed rule, so essentially only an image with the same sharpness as the original can be obtained.
- Moreover, since the technique of Patent Document 1 performs interpolation so as to maintain continuity of gradations in all directions, there is a problem that appropriate interpolation cannot be performed at edge portions. For this reason, the technique of Patent Document 1 cannot be said to be optimal for scenes that require a higher-definition image.
- As transmission standards for transmitting a high-resolution video signal, standards such as DVI and HD-SDI are known.
- 2K1K class: horizontal 2000 pixels × vertical 1000 pixels
- 4K2K class: horizontal 4000 pixels × vertical 2000 pixels
- Since the upscaled 4K2K-class video signal cannot be transmitted over a single transmission line, it must be divided into video signals for a plurality of areas and transmitted over a plurality of transmission lines.
- 4K2K is a term used in the context of digital cinema and high-definition broadcasting, denoting a resolution of 4096 dots × 2160 lines in cinema and 3840 dots × 2160 lines in Hi-Vision.
- 2K1K is the corresponding term, generally denoting a resolution of 2048 × 1080 or 1920 × 1080. These resolutions are generically expressed as 4K2K and 2K1K.
- The present invention has been made in view of the above problems, and an object thereof is to provide an image processing apparatus that divides input image data into image data of a plurality of regions, performs image processing for converting the resolution of each piece of divided image data to a high resolution, and can generate a high-definition image without increasing the circuit scale and processing time.
- In order to solve the above problems, the image processing apparatus of the present invention includes a division processing unit that divides input image data into a plurality of divided image data, and a plurality of upscale processing units for upscaling the resolution of each of the divided image data to a high resolution.
- The division processing unit generates each of the divided image data so that a part of the other divided image data is superimposed and included at the boundary portion of each divided image data with the other divided image data.
- Each of the upscale processing units includes: a difference calculation unit that performs a difference calculation process of calculating the gradation value of a pixel of interest for extracting an edge in the image by a calculation using the differential or difference of the gradation values near the pixel of interest; an averaging processing unit that performs an averaging process of calculating a value obtained by averaging the gradation values near the pixel of interest as the gradation value of the pixel of interest; a correlation calculation unit that calculates a correlation value indicating the correlation between difference image data obtained by applying the difference calculation process to the divided image data and averaged image data obtained by applying the difference calculation process and the averaging process to the divided image data; and an interpolation processing unit that applies interpolation processing to the divided image data by an interpolation method according to the correlation value.
- Since each piece of divided image data only needs to include the gradation values near each target pixel referred to in the difference calculation process, it is not necessary to track the entire image as in conventional edge detection methods.
- Accordingly, the image data used for edge detection can be reduced, the circuit scale can be reduced, and the processing time can be shortened.
- Each of the upscaling processing units includes an edge identification unit that identifies the edge portion included in the divided image data and a portion other than the edge portion by comparing the correlation value for each pixel with a preset threshold value.
- The interpolation processing unit may be configured to apply, to the edge portion, interpolation processing by an interpolation method in which the edge is made more prominent than for portions other than the edge portion.
- For an interpolation pixel at an edge portion, interpolation processing may be performed using the gradation values of a predetermined number or fewer of pixels selected, from among the pixels adjacent to the interpolation pixel, in order of increasing distance from a straight line that passes through the interpolation pixel and is parallel to the inclination direction of the edge; for interpolation pixels other than the edge portion, interpolation processing may be performed using the gradation values of each pixel adjacent to the interpolation pixel.
- Here, the ratio between the gradation value of the target pixel after the horizontal difference calculation process and the gradation value of the target pixel after the vertical difference calculation process corresponds to the inclination angle of the edge, so the edge direction can be detected from this ratio.
- According to the above configuration, for an interpolation pixel at an edge portion, interpolation processing is performed using the gradation values of the predetermined number or fewer of pixels selected, from among the pixels adjacent to the interpolation pixel, in order of increasing distance from the straight line passing through the interpolation pixel and parallel to the inclination direction of the edge, while for interpolation pixels other than the edge portion, interpolation processing is performed using the gradation values of each pixel adjacent to the interpolation pixel. This sharpens the edges at edge portions and emphasizes continuity of gradation elsewhere, so a more detailed upscaled image can be generated.
- the edges can be clearly expressed and the influence of noise can be appropriately reduced.
- The difference calculation unit may be configured to perform the difference calculation process on a 5-pixel × 5-pixel block centered on the pixel of interest.
- According to the above configuration, in each upscale circuit an edge can be accurately detected from the correlation value, which is calculated without referring to the divided image data of the other upscale circuits. Therefore, the circuit scale can be reduced by reducing the size of the image data used for edge detection in each upscale circuit.
- the division processing unit may be configured to superimpose image data of 2 lines or more and 5 lines or less in the other divided image data on a boundary portion with the other divided image data in each divided image data.
- the display device of the present invention includes any one of the above-described image processing devices and a display unit that displays an image upscaled by the image processing device.
- a high-definition image can be generated and displayed without increasing the circuit scale and processing time.
- The image processing method of the present invention is a method in which a division processing step of dividing input image data into a plurality of divided image data and a process of upscaling the resolution of each of the divided image data to a high resolution are performed using a plurality of upscale processing units. In the division processing step, each of the divided image data is generated so that a part of the other divided image data is superimposed and included at the boundary portion with the other divided image data. Each of the upscale processing units performs: a difference calculation step of performing a difference calculation process of calculating the gradation value of a pixel of interest for extracting an edge in the image by a calculation using the differential or difference of gradation values near the pixel of interest; an averaging process step of performing an averaging process of calculating a value obtained by averaging the gradation values near the pixel of interest as the gradation value of the pixel of interest; a correlation calculation step of calculating a correlation value indicating the correlation between difference image data obtained by applying the difference calculation process to the divided image data and averaged image data obtained by applying the difference calculation process and the averaging process to the divided image data; and an interpolation processing step of applying interpolation processing to the divided image data by an interpolation method according to the correlation value.
- Using the correlation value calculated in this way, it is possible to appropriately identify whether the vicinity of the target pixel is an edge portion or a portion other than an edge portion. That is, in portions other than edges, noise and thin lines are erased by the averaging process, so the correlation value becomes small, whereas at the edge portion the edge remains after averaging, so the correlation value becomes large. For this reason, whether the vicinity of the target pixel is an edge portion or not can be appropriately identified from the correlation value.
- the divided image data is upscaled by performing an interpolation process on the divided image data by an interpolation method according to the correlation value.
- different interpolation processing can be performed on the edge portion and the portion other than the edge portion, so that a high-definition image can be generated.
- Since each piece of divided image data only needs to include the gradation values near each target pixel referred to in the difference calculation process, it is not necessary to track the entire image as in conventional edge detection methods.
- Accordingly, the image data used for edge detection can be reduced, the circuit scale can be reduced, and the processing time can be shortened.
- The image processing apparatus may be realized by a computer. In this case, a program that causes a computer to operate as each unit of the image processing apparatus, and a computer-readable recording medium on which the program is recorded, are also included in the scope of the present invention.
- FIG. 2 is a block diagram showing a schematic configuration of the display device 1 according to the present embodiment.
- the display device 1 includes an image processing device 10 and a liquid crystal panel (display unit) 2.
- the image processing apparatus 10 includes a dividing circuit 11, upscale circuits 12a to 12d, and a liquid crystal driving circuit (display control unit) 13.
- the dividing circuit 11 divides the image data input to the image processing apparatus 10 into a predetermined number of areas of image data, and outputs the divided image data to the upscale circuits 12a to 12d, respectively.
- 2K1K class high-definition data is input and divided into upper left, upper right, lower left, and lower right image data.
- the number of image divisions and the arrangement positions of the divided regions are not limited to this.
- the divided areas may be divided so that they are arranged in the horizontal direction, or the divided areas may be divided so that they are arranged in the vertical direction. Which division method is adopted may be selected in view of characteristics of each division method, circuit technology at the time of implementation, liquid crystal panel technology, and the like.
- After upscaling, each area becomes 2K1K image data, so the drive system used in conventional 2K1K-class display devices can be applied as it is, and the same signal processing circuit (signal processing LSI) as that used in the 2K1K class can be used; this has the advantage of reducing manufacturing and development costs.
- However, in practice each divided area is not exactly 960 pixels × 540 pixels but (960 + α) pixels × (540 + α) pixels, so that adjacent divided regions partially overlap each other. Details of the image division method are described later.
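As a concrete illustration of this overlapped division, the following sketch (hypothetical Python/NumPy code, not taken from the patent; the function name, the overlap parameter, and the region labels are assumptions) splits a frame into four quadrants while including a few extra rows and columns from the neighboring regions:

```python
import numpy as np

def divide_with_overlap(image: np.ndarray, overlap: int = 2) -> dict:
    """Split an H x W image into four quadrants, each extended by `overlap`
    rows/columns taken from the adjacent quadrants (a sketch of what the
    dividing circuit 11 is described as doing)."""
    h, w = image.shape[:2]
    hh, hw = h // 2, w // 2
    return {
        "upper_left":  image[0:hh + overlap,  0:hw + overlap],
        "upper_right": image[0:hh + overlap,  hw - overlap:w],
        "lower_left":  image[hh - overlap:h,  0:hw + overlap],
        "lower_right": image[hh - overlap:h,  hw - overlap:w],
    }

# Example: a 1080 x 1920 (2K1K-class) frame divided into (540 + 2) x (960 + 2) regions.
frame = np.zeros((1080, 1920), dtype=np.uint8)
parts = divide_with_overlap(frame, overlap=2)
print({name: region.shape for name, region in parts.items()})
```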
- the liquid crystal driving circuit 13 controls the liquid crystal panel 2 based on the image data after upscaling input from the upscaling circuits 12a to 12d, and displays the upscaled image on the liquid crystal panel 2.
- the liquid crystal driving circuit 13 is described as one block. However, the configuration is not limited thereto, and the liquid crystal driving circuit 13 may be configured by a plurality of blocks.
- liquid crystal drive circuits 13a to 13d may be provided corresponding to the upscale circuits 12a to 12d, and the respective divided regions in the liquid crystal panel 2 may be driven by these liquid crystal drive circuits.
- In this case, the drive timings of the respective regions can easily be matched, giving good controllability; on the other hand, the number of input/output pins increases and the IC size becomes large.
- On the other hand, the chip size can be reduced (in particular, in the present embodiment each divided area is of the 2K1K class, so components of a conventional 2K1K-class display device can be reused).
- For example, when the input image data to the image processing apparatus 10 is 1920 × 1080 and the display size of the liquid crystal panel 2 is 4096 × 2160, the input image data is upscaled (enlarged) by a factor of two vertically and horizontally to 3840 × 2160.
- In this case, since the horizontal size (3840 dots) is smaller than the display size (4096 dots), it is necessary, for example, to display the image of the left-half divided area shifted to the right by 128 dots (2048 − 1920).
- The correction process for shifting the left-half image to the right may be performed in any part of the image processing apparatus 10; for example, the dividing circuit 11 may correct the divided image data so that the video of each piece of divided image data becomes 2048 × 1080.
- If this correction is instead performed by the liquid crystal driving circuit 13, a general-purpose 4K2K liquid crystal display module capable of supporting many video formats can be realized. Note that in applications where displaying 2K video is not given a relatively high priority, the correction is preferably performed at a later stage.
- a liquid crystal panel is used as the display unit.
- the present invention is not limited to this.
- a display unit including a plasma display, an organic EL display, or a CRT may be used.
- a display control unit corresponding to the display unit may be provided.
- FIG. 3 is an explanatory diagram schematically showing processing in the display device 1 according to the present embodiment.
- The dividing circuit 11 divides the input image data into four pieces of (1K + α) × (0.5K + α) divided image data.
- The broken-line portions (α portions) shown in FIG. 3 are the overlap portions with the other adjacent divided image data.
- the upscale circuits 12a to 12d perform interpolation processing (upscale processing) on each divided image data divided as described above, and generate 2K1K post-interpolation image data (upscaled image data).
- the upscale circuits 12a to 12d perform the above interpolation processing in parallel.
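A software analogue of the four upscale circuits working in parallel might look like the hedged sketch below (hypothetical Python; the real circuits perform the edge detection and edge-adaptive interpolation described later, whereas the placeholder here only replicates pixels):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def upscale_region(region: np.ndarray) -> np.ndarray:
    """Placeholder per-region upscale (simple 2x pixel replication).
    The actual circuits perform edge detection followed by edge-adaptive
    interpolation on each piece of divided image data."""
    return np.kron(region, np.ones((2, 2), dtype=region.dtype))

def upscale_all(regions: dict) -> dict:
    """Process the four divided regions concurrently, mirroring the parallel
    operation of the upscale circuits 12a to 12d."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = {name: pool.submit(upscale_region, img) for name, img in regions.items()}
        return {name: f.result() for name, f in futures.items()}
```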
- The liquid crystal driving circuit 13 generates divided video signals corresponding to the respective pieces of post-interpolation image data produced by the upscale circuits 12a to 12d, and displays the image corresponding to each divided video signal in the corresponding divided area of the liquid crystal panel 2.
- FIG. 4 is a block diagram showing a schematic configuration of the upscale circuits 12a to 12d.
- each of the upscale circuits 12a to 12d includes an edge detection circuit 21 and an interpolation circuit 22.
- the edge detection circuit 21 detects the position and direction of the edge in the divided image data.
- The interpolation circuit 22 performs interpolation processing using different interpolation methods for the edge portion and for portions other than the edge portion. Specifically, for the edge portion, interpolation is performed using the average of the pixel values of the pixels adjacent in the edge direction, and for portions other than the edge portion, interpolation is performed using a weighted average of the pixel values of the pixels adjacent in all directions.
- FIG. 1 is a block diagram showing a schematic configuration of the edge detection circuit 21. As shown in the figure, the edge detection circuit 21 includes a difference circuit 31, a filter rotation circuit 32, a direction setting circuit 33, an averaging circuit 34, a correlation calculation circuit 35, and an edge identification circuit 36.
- The difference circuit 31 applies, to a 5-dot × 5-dot block centered on the target pixel in the input image data, a difference filter in which a filter coefficient is set for each dot of a 3-dot × 3-dot window, and thereby obtains a 3-dot × 3-dot difference calculation result centered on the target pixel.
- Here, the pixel value of each dot in the input image data is denoted dij (i and j are integers from 1 to 3), the difference filter is denoted aij, and the pixel value of each dot in the difference calculation result is denoted bkl (k and l are integers from 1 to 3).
- In the present embodiment, the difference filter aij is a 1:2:1 filter, that is, a filter whose coefficients are weighted in the ratio 1:2:1.
- However, the difference filter aij is not limited to this; any filter can be used as long as it can extract an edge in the image by a calculation using the differential or difference of gradation values near the target pixel. For example, a 3:2:3, 1:1:1, or 1:6:1 filter may be used instead.
- When the difference filter is expressed as a:b:a as above, the greater the weight of b, the more precisely the immediate vicinity of the pixel of interest is evaluated, but the weaker the filter becomes against noise; the smaller the weight of b, the more comprehensively the state around the pixel of interest is captured, but the easier it becomes to miss small changes.
- The filter coefficients of the difference filter may therefore be selected appropriately according to the characteristics of the target image; for example, for content such as photographs, which are essentially dense and little blurred, features are easier to grasp when the weight of b is larger.
- In the present embodiment, a 3-dot × 3-dot filter is used as the difference filter.
- the present invention is not limited to this.
- For example, a 5-dot × 5-dot or 7-dot × 7-dot difference filter may be used.
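The text describes the difference filter only by its weight ratio (e.g. 1:2:1). The sketch below is therefore an assumption: it uses a Sobel-like 3 × 3 kernel whose columns are weighted 1:2:1, rotates it by 90 degrees for the vertical direction, and applies it over a 5 × 5 block to obtain the 3 × 3 difference result (Python with SciPy; all names are illustrative):

```python
import numpy as np
from scipy.ndimage import convolve

# Assumed 1:2:1 horizontal difference filter (Sobel-like); the patent states
# only the weight ratio, so this concrete kernel is an assumption.
H_FILTER = np.array([[-1, 0, 1],
                     [-2, 0, 2],
                     [-1, 0, 1]], dtype=float)
V_FILTER = np.rot90(H_FILTER)  # 90-degree rotation for vertical edge detection

def difference_block(block5x5: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Apply the 3x3 difference filter at every position of a 5x5 block
    centered on the target pixel and return the 3x3 result b_kl.
    Absolute values are taken for the later ratio/correlation steps
    (also an assumption)."""
    assert block5x5.shape == (5, 5)
    full = convolve(block5x5.astype(float), kernel, mode="nearest")
    return np.abs(full[1:4, 1:4])
```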
- the filter rotation circuit 32 performs a rotation process on the difference filter used in the difference circuit 31.
- the direction setting circuit 33 controls the rotation of the difference filter by the filter rotation circuit 32 and outputs a signal indicating the application state of the difference filter to the edge identification circuit 36.
- In the present embodiment, the difference calculation is first performed on the input image data using the difference filter aij to detect horizontal-direction edges, and then the difference calculation is performed again on the input image data using the filter obtained by rotating the difference filter aij by 90 degrees, thereby detecting vertical-direction edges.
- The edge detection processing in the horizontal direction and in the vertical direction may also be performed in parallel. In that case, two sets of the difference circuit 31, the filter rotation circuit 32, the direction setting circuit 33, the averaging circuit 34, the correlation calculation circuit 35, and the edge identification circuit 36 may be provided.
- FIG. 6 is an explanatory diagram showing an image with a sharp vertical edge (image A), an image of a thin line extending in the vertical direction (image B), an image of scattered lines (image C), and the results of performing the difference calculation in the horizontal and vertical directions on each of these images using a 1:2:1 difference filter.
- An edge detection process may also be performed and its result stored in a database, to be used as exception handling when an erroneous detection occurs in edge detection using the 3-dot × 3-dot difference image data.
- In this way, edge detection with higher accuracy becomes possible; for example, even an edge buried in a highly periodic texture can be detected appropriately.
- The ratio of the average value to the median value for the 3-dot × 3-dot block centered on the target pixel is 0.67 for image D and 0 for image E.
- the numerical value increases as there is a clear edge (or an image close to the edge).
- Here, the ratio of the average value to the median value for the 3-dot × 3-dot block is 0.06, and it is difficult for this to be recognized as an edge.
- FIG. 8 is an explanatory diagram showing an image with an edge of slope 1/2 (image G), an image with an edge of slope 1 (image H), an image with an edge of slope 2 (image I), and the results of performing the difference calculation in the horizontal and vertical directions on each of these images using a 1:2:1 difference filter. Since each image in FIG. 8 is an edge-portion image, the ratio of the average value to the median value of the 3-dot × 3-dot block centered on the target pixel in the horizontal and vertical difference calculation results is large.
- Furthermore, the ratio of the median value of the horizontal difference calculation result to the median value of the vertical difference calculation result is 2/4 for image G, 3/3 for image H, and 4/2 for image I, which coincides with the slope of the edge in each image.
- Here, the median value refers to the value of the pixel of interest.
- The slope of the edge is therefore calculated on the basis of this ratio. For an edge in the horizontal or vertical direction, either the median value of the horizontal difference calculation result or the median value of the vertical difference calculation result is 0, so the edge direction can be determined easily.
- The averaging circuit 34 calculates b13, b23, b31, b32, and b33 by sequentially shifting a 3-dot × 3-dot block in the difference image data one dot at a time; that is, averaged image data is calculated for a total of nine pixels consisting of the target pixel and the eight surrounding pixels. The averaged image data of these nine pixels is then output to the correlation calculation circuit 35.
- FIG. 10 is an explanatory diagram showing the concept of the edge identification processing performed by the edge identification circuit 36. As shown in FIG. 10, when an edge portion and noise are mixed in the input image data, both the edge portion and the influence of the noise are reflected in the difference image data; therefore, if edge detection is performed using only the difference image data, it is affected by this noise.
- By contrast, the edge portion remains as it is even after the averaging process, so the correlation value R becomes large at the edge portion and, conversely, becomes small at portions other than the edge portion.
- In other words, the correlation value R takes the value 1 or a value close to 1 at the edge portion, and drops to a value markedly smaller than the correlation value of the edge portion elsewhere. Therefore, by checking in advance, through experiments or the like, the range in which the correlation value changes abruptly, and setting the threshold Th within this range, the edge portion can be detected with very high accuracy.
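One possible way to realize this averaging and correlation check in software is the hedged sketch below; the exact correlation formula is not given at this point in the text, so a normalized dot product is assumed, and the threshold value is purely illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def averaged_data(diff_data: np.ndarray) -> np.ndarray:
    """Averaged image data: each pixel is the mean of the 3x3 window of the
    difference data around it (the 3x3 block shifted one dot at a time)."""
    return uniform_filter(diff_data.astype(float), size=3, mode="nearest")

def correlation_value(diff_block: np.ndarray, avg_block: np.ndarray) -> float:
    """Correlation R between difference data and averaged data, normalized so
    that R is near 1 when the two agree (assumed formula)."""
    d = diff_block.ravel().astype(float)
    a = avg_block.ravel().astype(float)
    denom = np.linalg.norm(d) * np.linalg.norm(a)
    return float(np.dot(d, a) / denom) if denom > 0 else 0.0

def is_edge(diff_block: np.ndarray, threshold: float = 0.8) -> bool:
    """Compare R with the threshold Th, which would be tuned experimentally
    so that it falls inside the range where R drops sharply away from edges."""
    return correlation_value(diff_block, averaged_data(diff_block)) >= threshold
```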
- Furthermore, the edge identification circuit 36 detects the edge direction (the direction in which the edge extends) using the result of the horizontal difference calculation process and the result of the vertical difference calculation process, and outputs the detection result to the interpolation circuit 22.
- The value of the ratio a may vary due to the influence of noise contained in the input image data. For this reason, it is not always necessary to calculate the angle of the edge direction strictly; it is sufficient to classify it into one of the five patterns shown in FIG. 11, or into one of nine patterns that also include the intermediate inclinations of these five patterns. Therefore, in order to simplify the edge-direction detection process and reduce the circuit scale required for it, the value of the ratio a does not necessarily have to be calculated directly; it is sufficient to determine which of the five patterns shown in FIG. 11 applies.
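For illustration, this coarse direction classification could be sketched as follows (hypothetical Python; the five patterns of FIG. 11 are not reproduced, so the class names and thresholds below are assumptions based on the slope examples of images G, H, and I):

```python
def classify_edge_direction(h_center: float, v_center: float) -> str:
    """Coarsely bucket the edge inclination from the center (target-pixel)
    values of the horizontal and vertical difference results."""
    eps = 1e-6
    if abs(h_center) < eps:
        return "horizontal"            # horizontal difference result is 0
    if abs(v_center) < eps:
        return "vertical"              # vertical difference result is 0
    slope = abs(h_center) / abs(v_center)  # e.g. 2/4, 3/3, 4/2 for images G, H, I
    if slope < 0.75:
        return "shallow_diagonal"      # slope around 1/2
    if slope <= 1.5:
        return "diagonal_45"           # slope around 1
    return "steep_diagonal"            # slope around 2
```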
- A 5-dot × 5-dot filter may also be used to detect the inclination of the edge direction.
- This is a method in which the value (luminance) of each pixel in the input image data (the reference points in the figure) is left as it is, and new pixels (the interpolation points in the figure) are interpolated into the spaces between these pixels.
- the present invention is not limited to this, and the second method can also be used.
- pixels B, E, F, and I are selected as peripheral pixels
- pixels D, E, H, and I are selected as peripheral pixels.
- the pixels adjacent in the edge direction are selected as peripheral pixels.
- a texture-oriented interpolation method in which the edge is not conspicuous is applied.
- Texture emphasis here refers to processing that is relatively resistant to noise, with emphasis on tone and hue maintenance and continuity of tone change.
- various conventionally known methods such as a bilinear method, a bicubic method, and a lanczos filter method (LANCZOS method) can be used.
- In particular, when the upscale enlargement factor is fixed (in the present embodiment, the factor is two), the Lanczos method is known as an excellent yet simple filter and is therefore suitable.
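A minimal sketch of this two-mode interpolation (hypothetical Python; the edge path averages only the neighbors selected along the edge direction, and the texture path uses bilinear weighting as one of the methods named above) could look like this:

```python
import numpy as np

def interpolate_edge(edge_neighbors) -> float:
    """Edge portion: average only the neighbors adjacent in the edge
    direction (the pixels closest to the line through the interpolation
    pixel parallel to the edge)."""
    return float(np.mean(edge_neighbors))

def interpolate_texture(p00, p01, p10, p11, fx, fy) -> float:
    """Non-edge portion: bilinear weighted average of the four adjacent
    pixels (a texture-oriented method emphasizing gradation continuity)."""
    top = p00 * (1.0 - fx) + p01 * fx
    bottom = p10 * (1.0 - fx) + p11 * fx
    return top * (1.0 - fy) + bottom * fy

def interpolate(neighbors4, edge_neighbors, fx, fy, is_edge_pixel: bool) -> float:
    """Select the interpolation method based on the edge identification
    derived from the correlation value R."""
    if is_edge_pixel:
        return interpolate_edge(edge_neighbors)
    return interpolate_texture(*neighbors4, fx, fy)
```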
- As described above, in the present embodiment, whether or not the target pixel is an edge portion is judged based on difference image data and averaged image data calculated from 5-dot × 5-dot image data centered on the target pixel in the input image data. Therefore, when the input image data is divided into a plurality of regions, by including in each piece of divided image data (obtained by simply dividing the input image data into four) the boundary portion of the image data of the adjacent divided regions, the edge portion in each piece of divided image data can be detected with high accuracy.
- For example, when the number of pixels of the input image data is nx in the horizontal direction and ny in the vertical direction, the number of pixels of each divided area is set to nx/2 + 2 in the horizontal direction and ny + 2 in the vertical direction. In this way, edge detection and upscaling can be performed with high accuracy in each divided region without considering the interaction with the other regions.
- the circuit scale can be reduced and the processing time can be shortened. That is, since it is not necessary to track the edge of the entire image as in the prior art, it is not necessary to pass the information of the entire image to each divided upscale circuit for edge determination. Therefore, edge detection can be performed with high accuracy in each upscale circuit without considering the interaction with other divided regions.
- the edge detection process for each divided image data is performed in parallel. As a result, the time required for edge detection for the entire image can be further shortened.
- In the above description, edge detection uses difference image data (3-dot × 3-dot difference image data) and averaged image data calculated from 5-dot × 5-dot image data centered on the target pixel in the input image data.
- the size of the block referred to at the time of edge detection is not limited to this.
- For example, when the size of the block to be referenced is 5 dots × 5 dots, the size of the difference image data and the averaged image data is 3 dots × 3 dots, and image data is added to each piece of divided image data in consideration of symmetry with respect to the pixel of interest.
- In the above description, edge detection is performed using the 3-dot × 3-dot difference image data and the average value of 9 dots of the averaged image data.
- the present invention is not limited to this.
- For example, edge detection may be performed using the average value of the surrounding 8 dots, excluding the target pixel, in the 3-dot × 3-dot difference image data and the averaged image data. Even in this case, edge detection accuracy substantially equivalent to the case where the average value of 9 dots is used can be obtained simply by changing the parameters accordingly.
- the averaging process is performed on the difference image data calculated by the difference calculation, but the present invention is not limited to this.
- For example, the averaged image data may be calculated by performing the averaging process on the input image data and then performing the difference operation on the image data that has undergone the averaging process.
- substantially the same effect as when the averaging process is performed on the difference image data calculated by the difference calculation can be obtained.
- In the above embodiment, the configuration in which only one difference circuit 31 is provided in each of the upscale circuits 12a to 12d has been described.
- the present invention is not limited to this.
- For example, a difference circuit that generates the difference image data output to the correlation calculation circuit 35 and a difference circuit that generates the difference image data output to the averaging circuit 34 (or a difference circuit that performs the difference operation on the averaged image data output from the averaging circuit 34) may be provided separately.
- In the above embodiment, each interpolation pixel is classified into one of two types, an edge portion or a portion other than an edge portion, and interpolation is performed by the interpolation method corresponding to the classification result; however, it is not necessary to make this distinction strictly.
- For example, an edge parameter may be generated by dividing the range of values that the correlation value R can take into a plurality of ranges, and a value obtained by weighting and adding, according to the edge parameter, the interpolation value calculated by the interpolation method for edge portions and the interpolation value calculated by the interpolation method for portions other than edge portions may be used as the pixel value of the interpolation pixel.
- In other words, the interpolation value calculated by the edge-portion interpolation method and the interpolation value calculated by the non-edge interpolation method may be combined at a ratio according to the degree of edgeness.
- For example, the possible values of the correlation value R (0 to 1) are divided into 16 ranges, an edge parameter (correction coefficient) α is set for each range, and the edge identification circuit (correction coefficient calculation unit) 36 or another circuit (correction coefficient calculation unit) specifies the edge parameter α corresponding to the correlation value R calculated from the divided image data.
- Since the ratio between the value of the target pixel of the difference calculation result and the 3-dot × 3-dot average value was about 0.67 at a clear edge portion, as shown in the figures described above, the range containing the maximum value may, for example, be set to about 0.7 to 1, and the edge parameter α corresponding to this range may be set to 1. The number of ranges into which the possible values of the correlation value R are divided is not limited to 16; it may, for example, be about 8.
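A hedged sketch of this blending (hypothetical Python; the 16-entry lookup table below is invented purely for illustration, rising toward 1 around the 0.7 to 1 range mentioned above):

```python
import numpy as np

# Illustrative 16-entry table of edge parameters for the 16 ranges of R (0..1).
ALPHA_TABLE = np.clip(np.linspace(0.0, 1.0, 16) ** 2 * 1.6, 0.0, 1.0)

def edge_parameter(r: float) -> float:
    """Map the correlation value R to an edge parameter alpha via 16 bins."""
    idx = min(int(max(r, 0.0) * 16), 15)
    return float(ALPHA_TABLE[idx])

def blended_pixel(r: float, edge_value: float, texture_value: float) -> float:
    """Weighted sum of the edge-oriented and texture-oriented interpolation
    results, weighted according to the edge parameter alpha."""
    alpha = edge_parameter(r)
    return alpha * edge_value + (1.0 - alpha) * texture_value
```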
- Finally, each circuit (each block) constituting the image processing apparatus 10 may be realized by software using a processor such as a CPU. That is, the image processing apparatus 10 may be configured to include a CPU (central processing unit) that executes the instructions of a control program realizing each function, a ROM (read only memory) that stores the program, a RAM (random access memory) into which the program is loaded, and a storage device (recording medium) such as a memory that stores the program and various data.
- The object of the present invention can also be achieved by supplying to the image processing apparatus 10 a recording medium on which the program code (an executable program, an intermediate-code program, or a source program) of the control program of the image processing apparatus 10, which is software realizing the functions described above, is recorded in a computer-readable manner, and by having the computer (or a CPU or MPU) read and execute the program code recorded on the recording medium.
- the image processing apparatus 10 may be configured to be connectable to a communication network, and the program code may be supplied via the communication network.
- the communication network is not particularly limited.
- For example, the Internet, an intranet, an extranet, a LAN, an ISDN, a VAN, a CATV communication network, a virtual private network, a telephone network, a mobile communication network, a satellite communication network, and the like can be used.
- the transmission medium constituting the communication network is not particularly limited.
- For example, wired media such as IEEE 1394, USB, power-line carrier, cable TV lines, telephone lines, and ADSL lines can be used, and wireless media such as infrared (e.g., IrDA or remote control), Bluetooth (registered trademark), IEEE 802.11 wireless, HDR, mobile phone networks, satellite links, and terrestrial digital networks can also be used.
- the present invention can also be realized in the form of a computer data signal embedded in a carrier wave in which the program code is embodied by electronic transmission.
- the present invention can be applied to an image processing apparatus and an image processing method that upscale the resolution of input image data to a high resolution.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
Description
2 Liquid crystal panel
10 Image processing apparatus
11 Dividing circuit
12a to 12d Upscale circuits
13 Liquid crystal driving circuit
21 Edge detection circuit
22 Interpolation circuit
31 Difference circuit
32 Filter rotation circuit
33 Direction setting circuit
34 Averaging circuit
35 Correlation calculation circuit
36 Edge identification circuit
In the present embodiment, the configuration in which only one difference circuit 31 is provided in each of the upscale circuits 12a to 12d has been described; however, the present invention is not limited to this. For example, a difference circuit that generates the difference image data output to the correlation calculation circuit 35 and a difference circuit that generates the difference image data output to the averaging circuit 34 (or a difference circuit that performs the difference operation on the averaged image data output from the averaging circuit 34) may be provided separately.
Claims (11)
- 1. An image processing apparatus comprising: a division processing unit that divides input image data into a plurality of divided image data; and a plurality of upscale processing units for upscaling the resolution of each of the divided image data to a high resolution, wherein
the division processing unit generates each of the divided image data so that a part of the other divided image data is superimposed and included at a boundary portion of each divided image data with the other divided image data, and
each of the upscale processing units comprises:
a difference calculation unit that performs a difference calculation process of calculating a gradation value of a pixel of interest for extracting an edge in the image by a calculation using a differential or difference of gradation values near the pixel of interest;
an averaging processing unit that performs an averaging process of calculating a value obtained by averaging the gradation values near the pixel of interest as the gradation value of the pixel of interest;
a correlation calculation unit that calculates a correlation value indicating a correlation between difference image data obtained by applying the difference calculation process to the divided image data and averaged image data obtained by applying the difference calculation process and the averaging process to the divided image data; and
an interpolation processing unit that applies interpolation processing to the divided image data by an interpolation method according to the correlation value.
- 2. The image processing apparatus according to claim 1, wherein each of the upscale processing units performs the difference calculation process, the averaging process, the calculation of the correlation value, and the interpolation process in parallel with one another.
- 3. The image processing apparatus according to claim 1 or 2, wherein each of the upscale processing units comprises an edge identification unit that distinguishes an edge portion included in the divided image data from portions other than the edge portion by comparing the correlation value of each pixel with a preset threshold value, and the interpolation processing unit applies, to the edge portion, interpolation processing by an interpolation method in which the edge is made more prominent than for portions other than the edge portion.
- 4. The image processing apparatus according to claim 3, wherein the difference calculation unit performs, on the divided image data, a horizontal difference calculation process, which is the difference calculation process for extracting horizontal edges, and a vertical difference calculation process, which is the difference calculation process for extracting vertical edges; the edge identification unit calculates, for a pixel of interest determined to be an edge portion, the ratio between the gradation value of the pixel of interest after the horizontal difference calculation process and the gradation value of the pixel of interest after the vertical difference calculation process as the inclination angle of the edge; and the interpolation processing unit performs, for an interpolation pixel at an edge portion, interpolation using the gradation values of a predetermined number or fewer of pixels selected, from among the pixels adjacent to the interpolation pixel, in order of increasing distance from a straight line passing through the interpolation pixel and parallel to the inclination direction of the edge, and performs, for interpolation pixels other than the edge portion, interpolation using the gradation values of each pixel adjacent to the interpolation pixel.
- 5. The image processing apparatus according to claim 1 or 2, wherein each of the upscale processing units comprises a correction coefficient calculation unit that calculates a correction coefficient according to the magnitude of the correlation value, and the interpolation processing unit performs, for each interpolation pixel, a first interpolation process for extracting edges and a second interpolation process for realizing continuity of gradation in all directions, and performs the interpolation processing of the divided image data by weighting the result of the first interpolation process and the result of the second interpolation process according to the correction coefficient and adding them.
- 6. The image processing apparatus according to any one of claims 1 to 5, wherein the difference calculation unit performs the difference calculation process on a 5-pixel × 5-pixel block centered on the pixel of interest.
- 7. The image processing apparatus according to any one of claims 1 to 6, wherein the division processing unit superimposes, at the boundary portion of each divided image data with the other divided image data, image data of 2 lines or more and 5 lines or less of the other divided image data.
- 8. A display device comprising: the image processing apparatus according to any one of claims 1 to 7; and a display unit that displays an image upscaled by the image processing apparatus.
- 9. An image processing method in which a division processing step of dividing input image data into a plurality of divided image data and a process of upscaling the resolution of each of the divided image data to a high resolution are performed using a plurality of upscale processing units, wherein
in the division processing step, each of the divided image data is generated so that a part of the other divided image data is superimposed and included at a boundary portion of each divided image data with the other divided image data, and
each of the upscale processing units performs:
a difference calculation step of performing a difference calculation process of calculating a gradation value of a pixel of interest for extracting an edge in the image by a calculation using a differential or difference of gradation values near the pixel of interest;
an averaging process step of performing an averaging process of calculating a value obtained by averaging the gradation values near the pixel of interest as the gradation value of the pixel of interest;
a correlation calculation step of calculating a correlation value indicating a correlation between difference image data obtained by applying the difference calculation process to the divided image data and averaged image data obtained by applying the difference calculation process and the averaging process to the divided image data; and
an interpolation processing step of applying interpolation processing to the divided image data by an interpolation method according to the correlation value.
- 10. A program for operating the image processing apparatus according to any one of claims 1 to 7, the program causing a computer to function as each of the above units.
- 11. A computer-readable recording medium on which the program according to claim 10 is recorded.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/863,212 US8358307B2 (en) | 2008-04-21 | 2008-12-10 | Image processing device, display device, image processing method, program, and storage medium |
CN200880125297.7A CN101952854B (zh) | 2008-04-21 | 2008-12-10 | 图像处理装置、显示装置、图像处理方法、程序和记录介质 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008110737 | 2008-04-21 | ||
JP2008-110737 | 2008-04-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2009130820A1 true WO2009130820A1 (ja) | 2009-10-29 |
Family
ID=41216565
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2008/072403 WO2009130820A1 (ja) | 2008-04-21 | 2008-12-10 | 画像処理装置、表示装置、画像処理方法、プログラムおよび記録媒体 |
Country Status (3)
Country | Link |
---|---|
US (1) | US8358307B2 (ja) |
CN (1) | CN101952854B (ja) |
WO (1) | WO2009130820A1 (ja) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012144158A1 (ja) * | 2011-04-22 | 2012-10-26 | パナソニック株式会社 | 画像処理装置及び画像処理方法 |
JP2013130902A (ja) * | 2011-12-20 | 2013-07-04 | Jvc Kenwood Corp | 映像信号処理装置及び映像信号処理方法 |
WO2014065160A1 (ja) * | 2012-10-24 | 2014-05-01 | シャープ株式会社 | 画像処理装置 |
JP2014106909A (ja) * | 2012-11-29 | 2014-06-09 | Jvc Kenwood Corp | 画像拡大装置、画像拡大方法、及び画像拡大プログラム |
JP2014187601A (ja) * | 2013-03-25 | 2014-10-02 | Sony Corp | 画像処理装置、画像処理方法、及び、プログラム |
JP2016143006A (ja) * | 2015-02-04 | 2016-08-08 | シナプティクス・ディスプレイ・デバイス合同会社 | 表示装置、表示パネルドライバ、表示パネルの駆動方法 |
JP2017215941A (ja) * | 2016-05-27 | 2017-12-07 | キヤノン株式会社 | 画像処理装置及びその制御方法 |
WO2018193333A1 (ja) * | 2017-04-21 | 2018-10-25 | 株式会社半導体エネルギー研究所 | 画像処理方法および受像装置 |
CN108734668A (zh) * | 2017-04-21 | 2018-11-02 | 展讯通信(上海)有限公司 | 图像色彩恢复方法、装置、计算机可读存储介质及终端 |
CN116563312A (zh) * | 2023-07-11 | 2023-08-08 | 山东古天电子科技有限公司 | 一种用于双屏机显示图像分割方法 |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101685000B (zh) * | 2008-09-25 | 2012-05-30 | 鸿富锦精密工业(深圳)有限公司 | 影像边界扫描的计算机系统及方法 |
WO2013025157A2 (en) * | 2011-08-17 | 2013-02-21 | Telefonaktiebolaget L M Ericsson (Publ) | Auxiliary information map upsampling |
WO2013036972A1 (en) * | 2011-09-09 | 2013-03-14 | Panamorph, Inc. | Image processing system and method |
JP2013219462A (ja) * | 2012-04-05 | 2013-10-24 | Sharp Corp | 画像処理装置、画像表示装置、画像処理方法、コンピュータプログラム及び記録媒体 |
JP6255760B2 (ja) * | 2013-07-16 | 2018-01-10 | ソニー株式会社 | 情報処理装置、情報記録媒体、および情報処理方法、並びにプログラム |
JP6075888B2 (ja) * | 2014-10-16 | 2017-02-08 | キヤノン株式会社 | 画像処理方法、ロボットの制御方法 |
JP6473608B2 (ja) * | 2014-11-27 | 2019-02-20 | 三星ディスプレイ株式會社Samsung Display Co.,Ltd. | 画像処理装置、画像処理方法、及びプログラム |
US9478007B2 (en) * | 2015-01-21 | 2016-10-25 | Samsung Electronics Co., Ltd. | Stable video super-resolution by edge strength optimization |
CN104820577B (zh) | 2015-05-29 | 2017-12-26 | 京东方科技集团股份有限公司 | 显示装置及其显示信号输入系统、显示信号输入方法 |
CN105847730B (zh) * | 2016-04-01 | 2019-01-15 | 青岛海信电器股份有限公司 | 一种视频码流输出的控制及处理方法、芯片、系统 |
JP7139333B2 (ja) * | 2017-08-11 | 2022-09-20 | 株式会社半導体エネルギー研究所 | 表示装置 |
CN110998703A (zh) * | 2017-08-24 | 2020-04-10 | 株式会社半导体能源研究所 | 图像处理方法 |
KR102407932B1 (ko) * | 2017-10-18 | 2022-06-14 | 삼성디스플레이 주식회사 | 영상 프로세서, 이를 포함하는 표시 장치, 및 표시 장치의 구동 방법 |
JP7005458B2 (ja) * | 2018-09-12 | 2022-01-21 | 株式会社東芝 | 画像処理装置、及び、画像処理プログラム、並びに、運転支援システム |
GB2578769B (en) | 2018-11-07 | 2022-07-20 | Advanced Risc Mach Ltd | Data processing systems |
US10896492B2 (en) * | 2018-11-09 | 2021-01-19 | Qwake Technologies, Llc | Cognitive load reducing platform having image edge enhancement |
US10417497B1 (en) | 2018-11-09 | 2019-09-17 | Qwake Technologies | Cognitive load reducing platform for first responders |
US11890494B2 (en) | 2018-11-09 | 2024-02-06 | Qwake Technologies, Inc. | Retrofittable mask mount system for cognitive load reducing platform |
GB2583061B (en) * | 2019-02-12 | 2023-03-15 | Advanced Risc Mach Ltd | Data processing systems |
US11915376B2 (en) | 2019-08-28 | 2024-02-27 | Qwake Technologies, Inc. | Wearable assisted perception module for navigation and communication in hazardous environments |
US11238775B1 (en) * | 2020-12-18 | 2022-02-01 | Novatek Microelectronics Corp. | Image adjustment device and image adjustment method suitable for light-emitting diode display |
CN117115433B (zh) * | 2023-10-24 | 2024-05-07 | 深圳市磐鼎科技有限公司 | 显示异常检测方法、装置、设备及存储介质 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001346070A (ja) * | 2000-06-02 | 2001-12-14 | Alps Electric Co Ltd | 画像信号の輪郭検出回路及びそれを備えた画像表示装置 |
JP2002064704A (ja) * | 2000-08-23 | 2002-02-28 | Sony Corp | 画像処理装置および方法、並びに記録媒体 |
JP2005293265A (ja) * | 2004-03-31 | 2005-10-20 | Canon Inc | 画像処理装置及び方法 |
JP2005346639A (ja) * | 2004-06-07 | 2005-12-15 | Nec Display Solutions Ltd | 画像処理装置および画像処理方法 |
JP2006308665A (ja) * | 2005-04-26 | 2006-11-09 | Canon Inc | 画像処理装置 |
JP2008021207A (ja) * | 2006-07-14 | 2008-01-31 | Fuji Xerox Co Ltd | 画像処理システムおよび画像処理プログラム |
Family Cites Families (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE68926702T2 (de) * | 1988-09-08 | 1996-12-19 | Sony Corp | Bildverarbeitungsgerät |
KR100360206B1 (ko) * | 1992-12-10 | 2003-02-11 | 소니 가부시끼 가이샤 | 화상신호변환장치 |
JP3243861B2 (ja) | 1992-12-10 | 2002-01-07 | ソニー株式会社 | 画像情報変換装置 |
JPH06189231A (ja) * | 1992-12-16 | 1994-07-08 | Toshiba Corp | 液晶表示装置 |
JP3766231B2 (ja) | 1999-05-10 | 2006-04-12 | Necビューテクノロジー株式会社 | 液晶表示装置 |
DE60040786D1 (de) * | 1999-08-05 | 2008-12-24 | Sanyo Electric Co | Bildinterpolationsverfahren |
JP2001078113A (ja) * | 1999-09-06 | 2001-03-23 | Sony Corp | 映像機器および映像表示方法 |
JP3523170B2 (ja) | 2000-09-21 | 2004-04-26 | 株式会社東芝 | 表示装置 |
JP4218249B2 (ja) * | 2002-03-07 | 2009-02-04 | 株式会社日立製作所 | 表示装置 |
JP4177652B2 (ja) | 2002-12-06 | 2008-11-05 | シャープ株式会社 | 液晶表示装置 |
JP2004212503A (ja) | 2002-12-27 | 2004-07-29 | Casio Comput Co Ltd | 照明装置及びその発光駆動方法並びに表示装置 |
JP2005309338A (ja) | 2004-04-26 | 2005-11-04 | Mitsubishi Electric Corp | 画像表示装置および画像表示方法 |
JP4904783B2 (ja) | 2005-03-24 | 2012-03-28 | ソニー株式会社 | 表示装置及び表示方法 |
JP2007225871A (ja) | 2006-02-23 | 2007-09-06 | Alpine Electronics Inc | 表示装置及びその表示方法 |
JP4808073B2 (ja) | 2006-05-22 | 2011-11-02 | シャープ株式会社 | 表示装置 |
JP5114872B2 (ja) | 2006-06-03 | 2013-01-09 | ソニー株式会社 | 表示制御装置、表示装置及び表示制御方法 |
JP2008096956A (ja) * | 2006-09-15 | 2008-04-24 | Olympus Corp | 画像表示方法および画像表示装置 |
JP2008116554A (ja) | 2006-11-01 | 2008-05-22 | Sharp Corp | バックライト制御装置、及び該装置を備えた映像表示装置 |
TWI354960B (en) * | 2006-11-07 | 2011-12-21 | Realtek Semiconductor Corp | Method for controlling display device |
JP4285532B2 (ja) | 2006-12-01 | 2009-06-24 | ソニー株式会社 | バックライト制御装置、バックライト制御方法、および液晶表示装置 |
JP5117762B2 (ja) | 2007-05-18 | 2013-01-16 | 株式会社半導体エネルギー研究所 | 液晶表示装置 |
JP2009031585A (ja) | 2007-07-27 | 2009-02-12 | Toshiba Corp | 液晶表示装置 |
-
2008
- 2008-12-10 CN CN200880125297.7A patent/CN101952854B/zh not_active Expired - Fee Related
- 2008-12-10 US US12/863,212 patent/US8358307B2/en not_active Expired - Fee Related
- 2008-12-10 WO PCT/JP2008/072403 patent/WO2009130820A1/ja active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001346070A (ja) * | 2000-06-02 | 2001-12-14 | Alps Electric Co Ltd | 画像信号の輪郭検出回路及びそれを備えた画像表示装置 |
JP2002064704A (ja) * | 2000-08-23 | 2002-02-28 | Sony Corp | 画像処理装置および方法、並びに記録媒体 |
JP2005293265A (ja) * | 2004-03-31 | 2005-10-20 | Canon Inc | 画像処理装置及び方法 |
JP2005346639A (ja) * | 2004-06-07 | 2005-12-15 | Nec Display Solutions Ltd | 画像処理装置および画像処理方法 |
JP2006308665A (ja) * | 2005-04-26 | 2006-11-09 | Canon Inc | 画像処理装置 |
JP2008021207A (ja) * | 2006-07-14 | 2008-01-31 | Fuji Xerox Co Ltd | 画像処理システムおよび画像処理プログラム |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012144158A1 (ja) * | 2011-04-22 | 2012-10-26 | パナソニック株式会社 | 画像処理装置及び画像処理方法 |
US8842930B2 (en) | 2011-04-22 | 2014-09-23 | Panasonic Corporation | Image processing device and image processing method |
JP5914843B2 (ja) * | 2011-04-22 | 2016-05-11 | パナソニックIpマネジメント株式会社 | 画像処理装置及び画像処理方法 |
JP2013130902A (ja) * | 2011-12-20 | 2013-07-04 | Jvc Kenwood Corp | 映像信号処理装置及び映像信号処理方法 |
WO2014065160A1 (ja) * | 2012-10-24 | 2014-05-01 | シャープ株式会社 | 画像処理装置 |
JP2014085892A (ja) * | 2012-10-24 | 2014-05-12 | Sharp Corp | 画像処理装置 |
JP2014106909A (ja) * | 2012-11-29 | 2014-06-09 | Jvc Kenwood Corp | 画像拡大装置、画像拡大方法、及び画像拡大プログラム |
JP2014187601A (ja) * | 2013-03-25 | 2014-10-02 | Sony Corp | 画像処理装置、画像処理方法、及び、プログラム |
JP2016143006A (ja) * | 2015-02-04 | 2016-08-08 | シナプティクス・ディスプレイ・デバイス合同会社 | 表示装置、表示パネルドライバ、表示パネルの駆動方法 |
JP2017215941A (ja) * | 2016-05-27 | 2017-12-07 | キヤノン株式会社 | 画像処理装置及びその制御方法 |
WO2018193333A1 (ja) * | 2017-04-21 | 2018-10-25 | 株式会社半導体エネルギー研究所 | 画像処理方法および受像装置 |
CN108734668A (zh) * | 2017-04-21 | 2018-11-02 | 展讯通信(上海)有限公司 | 图像色彩恢复方法、装置、计算机可读存储介质及终端 |
JPWO2018193333A1 (ja) * | 2017-04-21 | 2020-02-27 | 株式会社半導体エネルギー研究所 | 画像処理方法および受像装置 |
CN108734668B (zh) * | 2017-04-21 | 2020-09-11 | 展讯通信(上海)有限公司 | 图像色彩恢复方法、装置、计算机可读存储介质及终端 |
US11238559B2 (en) | 2017-04-21 | 2022-02-01 | Semiconductor Energy Laboratory Co., Ltd. | Image processing method and image receiving apparatus |
JP7184488B2 (ja) | 2017-04-21 | 2022-12-06 | 株式会社半導体エネルギー研究所 | 画像処理方法および受像装置 |
CN116563312A (zh) * | 2023-07-11 | 2023-08-08 | 山东古天电子科技有限公司 | 一种用于双屏机显示图像分割方法 |
CN116563312B (zh) * | 2023-07-11 | 2023-09-12 | 山东古天电子科技有限公司 | 一种用于双屏机显示图像分割方法 |
Also Published As
Publication number | Publication date |
---|---|
US8358307B2 (en) | 2013-01-22 |
CN101952854A (zh) | 2011-01-19 |
CN101952854B (zh) | 2012-10-24 |
US20110043526A1 (en) | 2011-02-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2009130820A1 (ja) | 画像処理装置、表示装置、画像処理方法、プログラムおよび記録媒体 | |
JP5302961B2 (ja) | 液晶表示装置の制御装置、液晶表示装置、液晶表示装置の制御方法、プログラムおよびその記録媒体 | |
JP4806102B2 (ja) | 液晶表示装置の制御装置、液晶表示装置、液晶表示装置の制御方法、プログラムおよび記録媒体 | |
US7613363B2 (en) | Image superresolution through edge extraction and contrast enhancement | |
US8224085B2 (en) | Noise reduced color image using panchromatic image | |
EP2130176B1 (en) | Edge mapping using panchromatic pixels | |
US9240033B2 (en) | Image super-resolution reconstruction system and method | |
US8873889B2 (en) | Image processing apparatus | |
JP4002871B2 (ja) | デルタ構造ディスプレイでのカラー映像の表現方法及びその装置 | |
US20040145599A1 (en) | Display apparatus, method and program | |
US7945121B2 (en) | Method and apparatus for interpolating image information | |
CN101753838B (zh) | 图像处理装置和图像处理方法 | |
US11854157B2 (en) | Edge-aware upscaling for improved screen content quality | |
CN101675454A (zh) | 采用全色像素的边缘绘图 | |
JP2005122361A (ja) | 画像処理装置及び方法、コンピュータプログラム、記録媒体 | |
US8233748B2 (en) | Image-resolution-improvement apparatus and method | |
US8948502B2 (en) | Image processing method, and image processor | |
TWI384417B (zh) | 影像處理方法及其裝置 | |
US20110032269A1 (en) | Automatically Resizing Demosaicked Full-Color Images Using Edge-Orientation Maps Formed In The Demosaicking Process | |
JP4966080B2 (ja) | 対象物検出装置 | |
US20160203617A1 (en) | Image generation device and display device | |
US6718072B1 (en) | Image conversion method, image processing apparatus, and image display apparatus | |
JP4658714B2 (ja) | 画像中の線を検出する方法、装置、プログラム、および記憶媒体 | |
CN101847252B (zh) | 保持图像光滑性的图像放大方法 | |
JP4827137B2 (ja) | 解像度変換処理方法、画像処理装置、画像表示装置及びプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200880125297.7 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 08874047 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 12863212 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 08874047 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: JP |