WO1998051072A1 - Image converter and image conversion method - Google Patents
Image converter and image conversion method
- Publication number
- WO1998051072A1 (PCT/JP1998/002009)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- class
- prediction
- pixel data
- data
- Prior art date
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/20—Circuitry for controlling amplitude response
- H04N5/205—Circuitry for controlling amplitude response for correcting amplitude versus frequency characteristic
- H04N5/208—Circuitry for controlling amplitude response for correcting amplitude versus frequency characteristic for compensating for attenuation of high frequency components, e.g. crispening, aperture distortion correction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/98—Adaptive-dynamic-range coding [ADRC]
Definitions
- the present invention relates to an image conversion apparatus and an image conversion method.
- more particularly, the present invention relates to an image conversion apparatus and an image conversion method capable of reliably correcting a blurred image.
- the applicant of the present invention has previously proposed, in Japanese Patent Application Laid-Open No. 8-51599, a technique for obtaining pixel data with higher resolution.
- in this proposal, when creating image data consisting of HD (High Definition) pixel data from image data consisting of SD (Standard Definition) pixel data, class classification is performed using the SD pixel data located around the HD pixel data to be created (the class is determined), a set of prediction coefficient values is learned for each class, and HD pixel data closer to the true value is obtained by using intra-frame correlation in still image parts and intra-field correlation in moving parts.
- it is desirable that even image data having very poor image quality can be corrected into image data having good image quality.
- however, if class classification is performed using image data with very poor image quality, appropriate class classification cannot be performed and an appropriate class cannot be determined. If an appropriate class cannot be obtained, an appropriate set of prediction coefficient values cannot be obtained, and as a result there has been a problem that the image quality cannot be sufficiently corrected.
Disclosure of the invention
- the present invention has been made in view of such circumstances, and provides an image conversion apparatus and an image conversion method that can reliably correct the image quality even if the image quality of the input image data is poor.
- the image conversion device of the present invention is an image conversion device that converts a first image signal composed of a plurality of pixel data into a second image signal composed of a plurality of pixel data.
- it comprises a class tap extraction circuit for extracting, as class taps, a plurality of pixel data for generating a class code from the first image signal; a class classification circuit for classifying the class taps to generate a class code representing the class;
- a prediction data generation circuit that generates prediction data corresponding to the class code; a generation circuit that outputs a second image signal using the prediction data; and a detection circuit that detects a feature amount representing the degree of blur of the image of the first image signal and controls the class tap extraction operation of the class tap extraction circuit in accordance with the detection result.
- the image conversion method of the present invention is an image conversion method for converting a first image signal composed of a plurality of pixel data into a second image signal composed of a plurality of pixel data.
- a plurality of pixel data for generating a class code are extracted as class taps, a class code representing the class is generated by classifying the class taps, prediction data corresponding to the class code is generated, and a second image signal is output using the prediction data.
- furthermore, a feature amount representing the degree of blur of the image of the first image signal is detected, and the class tap extraction operation is controlled in accordance with the detection result.
- since the extraction of class taps is controlled in accordance with the feature amount representing the blur amount of the input image data, optimal class taps can be extracted and optimal prediction processing can be performed even if the image quality of the input image data is poor.
- FIG. 1 is a block diagram illustrating a configuration example of an image conversion apparatus to which the present invention has been applied.
- FIG. 2 is a diagram for explaining a cutout process in the region cutout unit 1 in FIG. 1.
- FIG. 3 is a diagram for explaining a cutout process in the region cutout unit 1 in FIG. 1.
- FIG. 4 is a flowchart for explaining the feature amount extraction processing in the feature amount extraction unit 3 in FIG. 1.
- FIG. 5 is a diagram illustrating the process of calculating the autocorrelation coefficient in step S1 of FIG. 4.
- FIG. 6 is a diagram illustrating the autocorrelation coefficient calculated in step S1 of FIG. 4.
- FIG. 7 is a diagram illustrating another feature amount detection process in the feature amount extraction unit 3 of FIG. 1.
- FIG. 8 is a diagram showing an example of another feature amount detection in the feature amount extraction unit 3 in FIG. 1.
- FIG. 9 is a diagram showing an example of another feature amount detection in the feature amount extraction unit 3 in FIG. 1.
- FIG. 10 is a diagram for explaining a cutout process in the region cutout unit 1 of FIG. 1.
- FIG. 11 is a diagram for explaining a cutout process in the region cutout unit 1 of FIG. 1.
- FIG. 12 is a block diagram showing a configuration example for performing a learning process of the prediction coefficients of the ROM table 6 of FIG. 1.
BEST MODE FOR CARRYING OUT THE INVENTION
- FIG. 1 is a block diagram illustrating a configuration example of an image conversion device to which the present invention has been applied.
- this image conversion device converts SD image data (or HD image data) with poor image quality (a blurred image) into SD image data (or HD image data) with improved image quality.
- in the following description, the input image data is assumed to be SD image data.
- image data with poor image quality is input to an image conversion device via an input terminal.
- the input image data is supplied to a region cutout unit 1, a region cutout unit 2, and a feature amount extraction unit 3.
- the feature amount extraction unit 3 detects a feature amount representing a blur amount of the input SD image data, and outputs the detected feature amount to the region cutout unit 1, the region cutout unit 2, and the class code generation unit 5.
- the area cutout unit 1 cuts out, as a set of class taps, pixel data within a predetermined range from the input image data, and outputs the set of class taps to an ADRC (Adaptive Dynamic Range Coding) pattern extraction unit 4.
- the set of class taps cut out by the region cutout unit 1 is controlled according to the feature amount output from the feature amount extraction unit 3.
- the ADRC pattern extraction unit 4 performs class classification for the purpose of representing a waveform in space.
- the class code generation unit 5 generates a class code corresponding to the class output from the ADRC pattern extraction unit 4 and the feature amount output from the feature amount extraction unit 3, and outputs the generated class code to the ROM table 6.
- the ROM table 6 stores in advance a predetermined set of prediction coefficients for each class (class code), and outputs the set of prediction coefficients corresponding to the class code to the prediction calculation unit 7.
- the area cutout unit 2 cuts out a predetermined range of pixel data from the input image data as a set of prediction taps, and outputs pixel data forming the prediction taps to the prediction calculation unit 7.
- the set of prediction taps cut out by the region cutout unit 2 is controlled in accordance with the feature amount output from the feature amount extraction unit 3 and representing the blur amount.
- the prediction calculation unit 7 performs a prediction calculation from the set of prediction taps input from the region cutout unit 2 and the set of prediction coefficients input from the ROM table 6, and outputs the calculation result as image data whose image quality has been corrected.
- the output image data is displayed on, for example, a display device (not shown), recorded on a recording device, or transmitted by a transmission device.
- the area cutout unit 1 executes a process of cutting out predetermined pixel data from the input image data as class taps. For example, as shown in FIG. 2, a total of five pixel data, consisting of the pixel data at the position corresponding to the pixel data of interest and the four pixel data adjacent to it above, below, to the left, and to the right, are cut out as class taps. Alternatively, as shown in FIG. 3, the pixel data corresponding to the pixel data of interest and the pixel data at positions separated from it by three pixels above, below, to the left, and to the right are cut out as class taps. Which pixel data are cut out as class taps is determined in accordance with the feature amount representing the blur amount output from the feature amount extraction unit 3.
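the variable-spacing cutout above can be sketched as follows (a minimal sketch; the function and variable names are illustrative rather than from the patent, and clamping at the image border is an assumption):

```python
# Hypothetical sketch: a cross-shaped set of five class taps whose spacing
# widens with the blur feature amount (spacing=1 matches FIG. 2, spacing=3
# matches FIG. 3).
def cut_out_class_taps(image, y, x, spacing):
    """Return the five class taps: the pixel of interest plus the four
    pixels `spacing` pixels above, below, left, and right of it."""
    h, w = len(image), len(image[0])
    offsets = [(0, 0), (-spacing, 0), (spacing, 0), (0, -spacing), (0, spacing)]
    taps = []
    for dy, dx in offsets:
        yy = min(max(y + dy, 0), h - 1)   # clamp at the image border (assumption)
        xx = min(max(x + dx, 0), w - 1)
        taps.append(image[yy][xx])
    return taps

image = [[(r * 16 + c) % 256 for c in range(16)] for r in range(16)]
narrow = cut_out_class_taps(image, 8, 8, 1)   # small blur: adjacent pixels
wide = cut_out_class_taps(image, 8, 8, 3)     # large blur: 3-pixel spacing
```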
- in step S1, the feature amount extraction unit 3 calculates an autocorrelation coefficient for each frame of the input image data, and uses it as a measure of the feature amount representing the blur amount of the pixel data. That is, as shown in FIG. 5, assuming that the image data of one frame is composed of 720 pixels x 480 pixels of pixel data, a block composed of 512 pixels x 256 pixels of those 720 pixels x 480 pixels (referred to as a reference block as appropriate) is formed around predetermined pixel data of interest.
- the position of the block is moved up and down and left and right within a predetermined range in pixel units, and the autocorrelation coefficient corresponding to each position when moved is calculated.
- FIG. 6 shows an example of the autocorrelation coefficient thus obtained.
- at the original position (no shift), the autocorrelation coefficient is 1.
- in the case of frame F1, for example, when the block (reference block) is shifted rightward by three pixels, the autocorrelation coefficient decreases to 0.85, and as the shift amount further increases, the autocorrelation coefficient decreases to still smaller values. The same applies when the block (reference block) is shifted to the left.
- in step S2, the feature amount extraction unit 3 determines the pixel shift amount at which the autocorrelation coefficient becomes a predetermined reference value (for example, 0.85), and in step S3 outputs that pixel shift amount as the feature amount representing the blur amount. That is, by comparing the reference value with the autocorrelation coefficient at each position to which the reference block is shifted within the predetermined range, the pixel shift amount at which the autocorrelation coefficient equals the reference value is determined.
- in the case of frame F1 in FIG. 6, the feature amount is therefore 3, and in the case of frame F2, the feature amount is 1.
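steps S1 to S3 can be sketched at reduced scale (the text uses a 720 x 480 frame and a 512 x 256 reference block; the small sizes, the helper names, and the restriction to horizontal shifts below are simplifying assumptions):

```python
# Reduced-scale sketch of the autocorrelation-based blur feature.
def autocorrelation(frame, shift, top, left, bh, bw):
    """Normalized correlation between the reference block and the same
    block shifted `shift` pixels to the right."""
    ref = [frame[top + r][left + c] for r in range(bh) for c in range(bw)]
    mov = [frame[top + r][left + shift + c] for r in range(bh) for c in range(bw)]
    mr, mm = sum(ref) / len(ref), sum(mov) / len(mov)
    num = sum((a - mr) * (b - mm) for a, b in zip(ref, mov))
    den = (sum((a - mr) ** 2 for a in ref) *
           sum((b - mm) ** 2 for b in mov)) ** 0.5
    return num / den if den else 1.0

def blur_feature(frame, reference=0.85, max_shift=5, **block):
    """Pixel shift at which the autocorrelation falls below the reference
    value (0.85 in the text) -- a larger shift means a more blurred image."""
    for s in range(1, max_shift + 1):
        if autocorrelation(frame, s, **block) < reference:
            return s
    return max_shift

# A sharp (alternating) image decorrelates after 1 pixel; a smooth ramp does not.
sharp = [[255 * ((r + c) % 2) for c in range(40)] for r in range(20)]
smooth = [[c for c in range(40)] for r in range(20)]
blk = dict(top=5, left=5, bh=8, bw=16)
```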
- when the feature amount is 1, the region cutout unit 1 cuts out (extracts), as class taps, pixel data arranged at narrow intervals, as shown in FIG. 2, for example.
- when the feature amount is 3, the region cutout unit 1 cuts out (extracts), as class taps, pixel data arranged at wider intervals, as shown in FIG. 3.
- the range of pixel data having strong autocorrelation is narrower in the image having a feature amount of 1 (frame F2); therefore, as shown in FIG. 2, the pixel data constituting the class taps are selected from a narrow range. On the other hand, in the case of an image having a feature amount of 3 (frame F1), the range having strong autocorrelation is wider, so, as shown in FIG. 3, the pixel data constituting the class taps are cut out from a wider range. In this way, by appropriately changing the pixel data cut out as class taps in accordance with the feature amount representing the blur amount, more appropriate class taps can be cut out.
- the cutout of prediction taps in the region cutout unit 2 is likewise controlled: the pixel data cut out as prediction taps are changed dynamically in accordance with the feature amount representing the blur amount output from the feature amount extraction unit 3, in the same way as the cutout of class taps in the region cutout unit 1. Note that the method of cutting out the prediction taps in the region cutout unit 2 may be the same as or different from that of the class taps in the region cutout unit 1.
- the ADRC pattern extraction unit 4 performs ADRC processing on the class taps cut out by the region cutout unit 1 to perform class classification (determine the class). That is, when the dynamic range of the five pixel data extracted as class taps is DR, the bit allocation is n, the level of each pixel data as a class tap is L, and the requantization code is Q, the following equations are calculated:
DR = MAX - MIN + 1
Q = [(L - MIN + 0.5) x 2^n / DR]
- here, [ ] means truncation processing, and MAX and MIN represent the maximum and minimum values, respectively, of the five pixel data that make up the class taps.
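the requantization can be sketched as follows (a sketch assuming the common form of the ADRC equation with DR = MAX - MIN + 1 and a 0.5 centering offset, since the equation itself was garbled in the source text; names are illustrative):

```python
# Hedged sketch of ADRC requantization of the class taps.
def adrc_codes(taps, n_bits=1):
    mx, mn = max(taps), min(taps)
    dr = mx - mn + 1                                   # dynamic range DR
    return [int((l - mn + 0.5) * (2 ** n_bits) / dr)   # truncation [.] via int()
            for l in taps]

codes = adrc_codes([10, 200, 30, 180, 100])  # 1-bit ADRC of five class taps
# → [0, 1, 0, 1, 0]
```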
- the class code generation unit 5 generates a class code by adding bits representing the feature amount (the blur amount) supplied from the feature amount extraction unit 3 to the data representing the space class input from the ADRC pattern extraction unit 4. For example, assuming that the feature amount representing the blur amount is represented by 2 bits, a 12-bit class code is generated and supplied to the ROM table 6. This class code corresponds to an address in the ROM table 6.
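one possible bit layout is sketched below (the 12-bit total with a 2-bit feature matches the example in the text; the packing order of five 2-bit ADRC codes followed by the feature bits is an assumption for illustration):

```python
# Illustrative class-code packing: spatial ADRC codes first, blur feature last.
def class_code(adrc_codes, blur_feature, code_bits=2, feature_bits=2):
    code = 0
    for q in adrc_codes:                 # pack five 2-bit codes -> 10 bits
        code = (code << code_bits) | q
    return (code << feature_bits) | blur_feature   # append 2 feature bits -> 12 bits

code = class_code([0, 3, 0, 3, 1], blur_feature=2)
```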
- in the ROM table 6, a set of prediction coefficients corresponding to each class (class code) is stored at the address corresponding to that class code; based on the class code supplied from the class code generation unit 5, the set of prediction coefficients ω1 to ωn stored at the address corresponding to the class code is read out and supplied to the prediction calculation unit 7.
- the prediction calculation unit 7 performs a product-sum operation on the pixel data x1 to xn constituting the prediction taps supplied from the region cutout unit 2 and the set of prediction coefficients ω1 to ωn, as shown in the following equation:
y = ω1 x1 + ω2 x2 + ... + ωn xn
- the predicted value y obtained in this way is the pixel data whose image quality (blur) has been corrected.
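the product-sum above can be sketched directly (tap and coefficient values are illustrative):

```python
# The prediction: y = w1*x1 + w2*x2 + ... + wn*xn.
def predict(taps, coefficients):
    return sum(w * x for w, x in zip(coefficients, taps))

y = predict([120, 130, 125], [0.25, 0.5, 0.25])  # → 126.25
```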
- FIG. 7 illustrates an example of another feature amount extraction process in the feature amount extraction unit 3.
- in step S11, an edge near a predetermined target pixel is detected.
- in step S12, an edge code corresponding to the detected edge is output as the feature amount. For example, when an oblique edge running from the upper right to the lower left is detected as shown in FIG. 8, the feature amount extraction unit 3 outputs edge code 0, and when a horizontal edge is detected as shown in FIG. 9, it outputs edge code 1.
- when the edge code 0 shown in FIG. 8 is supplied, the region cutout unit 1 cuts out (extracts) pixel data as shown in FIG. 10 as class taps.
- This class tap is composed of pixel data that is optimal for detecting an edge extending from the upper right to the lower left.
- when the edge code 1 is supplied, the area cutout unit 1 cuts out (extracts) pixel data as shown in FIG. 11 as class taps.
- This class tap is composed of pixel data that is optimal for detecting a horizontal edge.
- similarly, the region cutout unit 2 executes the cutout (extraction) of the pixel data forming the prediction taps in accordance with the edge code.
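the text does not specify how the edge near the target pixel is detected. One simple illustrative scheme, offered only as a sketch and not as the patent's method, uses the fact that pixel differences taken along the edge direction are small:

```python
# Hypothetical edge-code detector: the direction with the smaller
# along-direction difference is taken as the edge orientation.
def edge_code(image, y, x):
    center = image[y][x]
    # pixels along the upper-right / lower-left diagonal (edge code 0)
    along_diag = abs(image[y - 1][x + 1] - center) + abs(image[y + 1][x - 1] - center)
    # pixels along the horizontal (edge code 1)
    along_horiz = abs(image[y][x - 1] - center) + abs(image[y][x + 1] - center)
    return 0 if along_diag <= along_horiz else 1

# An edge along the r + c = 8 diagonal (upper right to lower left), and a
# horizontal edge between rows 7 and 8.
oblique = [[0 if r + c < 8 else 255 for c in range(16)] for r in range(16)]
flat_edge = [[0 if r < 8 else 255 for c in range(16)] for r in range(16)]
```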
- FIG. 12 shows a configuration example for obtaining a set of prediction coefficients for each class (for each class code) stored in the ROM table 6 by learning.
- in this configuration, a set of prediction coefficients for each class (for each class code) is generated using SD image data (or HD image data) with good image quality as a teacher signal (learning signal).
- the configuration example described below is an example for generating a set of prediction coefficients for each class corresponding to the image conversion apparatus in FIG. 1 of the present embodiment.
- image data serving as a teacher signal (learning signal) with good image quality is input to the normal equation calculation unit 27 and to a low-pass filter (LPF) 21.
- the LPF 21 generates image data (learning signal) with degraded image quality by removing the high-frequency components of the image data serving as the input teacher signal (learning signal).
- the degraded image data (learning signal) from the LPF 21 is input to a region cutout unit 22 that cuts out (extracts) image data in a predetermined range as class taps, a region cutout unit 23 that cuts out (extracts) image data in a predetermined range as prediction taps, and a feature amount extraction unit 24 that extracts a feature amount representing the blur amount.
- the feature amount extraction unit 24 extracts the feature amount representing the blur amount of the input degraded image data (learning signal), and outputs the extracted feature amount to the region cutout unit 22, the region cutout unit 23, and the class code generation unit 26.
- the region cutout unit 22 and the region cutout unit 23 dynamically change the pixel data cut out as class taps or prediction taps according to the input feature amount representing the blur amount.
- the ADRC pattern extraction unit 25 classifies the pixel data input as class taps from the region cutout unit 22 (determines the class), and outputs the classification result to the class code generation unit 26.
- the class code generation unit 26 generates a class code from the determined class and the feature amount representing the blur amount, and outputs the generated class code to the normal equation calculation unit 27.
- the configurations and operations of the region cutout unit 22, region cutout unit 23, feature amount extraction unit 24, ADRC pattern extraction unit 25, and class code generation unit 26 described above are the same as those of the region cutout unit 1, region cutout unit 2, feature amount extraction unit 3, ADRC pattern extraction unit 4, and class code generation unit 5 in FIG. 1, and therefore their description is omitted here.
- the normal equation calculation unit 27 generates a normal equation for each class (for each class code) from the input teacher signal (learning signal) and the pixel data supplied as prediction taps from the region cutout unit 23, and supplies the normal equations to the prediction coefficient determination unit 28. When the required number of normal equations has been obtained for each class, the prediction coefficient determination unit 28 solves them, for example by the least squares method, and calculates a set of prediction coefficients for each class. The calculated sets of prediction coefficients for each class are supplied to and stored in the memory 29, and the sets of prediction coefficients stored in the memory 29 are written into the ROM table 6 of FIG. 1.
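the learning stage can be sketched as follows (a simplified sketch assuming a least-squares fit via the normal equations A w = b; numpy and all names here are illustrative, not from the patent):

```python
# Per class: accumulate A = sum(x x^T) and b = sum(y x) over the
# (prediction-tap, teacher-pixel) pairs, then solve A w = b for the
# set of prediction coefficients w.
import numpy as np

def learn_coefficients(samples_per_class, n_taps):
    """samples_per_class: {class_code: [(taps, teacher_pixel), ...]}"""
    coefficients = {}
    for cls, samples in samples_per_class.items():
        A = np.zeros((n_taps, n_taps))
        b = np.zeros(n_taps)
        for taps, teacher in samples:
            x = np.asarray(taps, dtype=float)
            A += np.outer(x, x)      # accumulate the normal equation matrix
            b += teacher * x         # accumulate the right-hand side
        coefficients[cls] = np.linalg.solve(A, b)
    return coefficients

# Teacher pixels that equal the mean of two taps recover w = (0.5, 0.5).
samples = {0: [((1, 0), 0.5), ((0, 1), 0.5), ((1, 1), 1.0)]}
w = learn_coefficients(samples, n_taps=2)[0]
```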
- the set of prediction coefficients for each class has been described as being calculated by the configuration shown in FIG. 12, but in practice it may be calculated by simulation using a computer.
- using the sets of prediction coefficients for each class calculated by the method shown in FIG. 12 and stored in the ROM table 6 shown in FIG. 1, pixel data with improved image quality (reduced blur) is generated from the pixel data cut out as prediction taps.
- the present invention is not limited to this.
- the ROM table 6 may instead store, for each class (each class code), the predicted value itself of the pixel data calculated by learning, and the predicted value may be read out based on the class code.
- in this case, the region cutout unit 2 shown in FIG. 1 and the region cutout unit 23 shown in FIG. 12 can be omitted, and the prediction calculation unit 7 shown in FIG. 1 converts the predicted value read from the ROM table 6 into a format corresponding to the output device and outputs it.
- in the learning, a predicted value for each class is generated using, for example, the centroid method, and the predicted value for each class is stored in the memory 29.
- alternatively, the predicted value for each class may be normalized with a reference value, and the normalized predicted value for each class may be stored in the ROM table 6.
- in that case, the prediction calculation unit 7 shown in FIG. 1 calculates the predicted value from the normalized predicted value and the reference value.
- in the above description, the number of pixel data cut out as class taps or prediction taps is five when the autocorrelation coefficient is used and thirteen when the edge code is used.
- however, the present invention is not limited to this; the number of pixel data cut out as class taps or prediction taps may be arbitrary.
- as the number of pixel data cut out as class taps or prediction taps increases, the accuracy of the image quality improvement increases; however, the amount of computation and the required memory also increase, so the computational and hardware burden becomes large. It is therefore necessary to set an optimal number.
- in the above description, conversion from an SD image signal to an SD image signal (SD-SD conversion) has been described; however, the present invention can equally be applied to, for example, conversion from an HD image signal to an HD image signal (HD-HD conversion) and conversion from an SD image signal to an HD image signal (SD-HD conversion).
- the gist of the present invention is not limited to the present embodiment.
- as described above, the extraction of class taps or prediction taps is controlled in accordance with the feature amount representing the blur amount of the input image data; therefore, even if the image quality of the input image data is poor, the optimal pixel data can be extracted as class taps or prediction taps, and appropriate prediction processing can be performed.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Television Systems (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
Description
Claims
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE69838536T DE69838536T2 (de) | 1997-05-06 | 1998-05-06 | Bildwandler und bildwandlungsverfahhren |
IL12791098A IL127910A (en) | 1997-05-06 | 1998-05-06 | Image converter and image converting method |
CN98800490A CN1129301C (zh) | 1997-05-06 | 1998-05-06 | 图像转换设备和图像转换方法 |
EP98919492A EP0912045B1 (en) | 1997-05-06 | 1998-05-06 | Image converter and image conversion method |
US09/226,808 US6233019B1 (en) | 1997-05-06 | 1999-01-06 | Image converter and image converting method for improving image quality |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP11543797 | 1997-05-06 | ||
JP9/115437 | 1997-05-06 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/226,808 Continuation US6233019B1 (en) | 1997-05-06 | 1999-01-06 | Image converter and image converting method for improving image quality |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1998051072A1 true WO1998051072A1 (en) | 1998-11-12 |
Family
ID=14662546
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP1998/002009 WO1998051072A1 (en) | 1997-05-06 | 1998-05-06 | Image converter and image conversion method |
Country Status (7)
Country | Link |
---|---|
US (1) | US6233019B1 (ja) |
EP (1) | EP0912045B1 (ja) |
KR (1) | KR100499434B1 (ja) |
CN (1) | CN1129301C (ja) |
DE (1) | DE69838536T2 (ja) |
IL (1) | IL127910A (ja) |
WO (1) | WO1998051072A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2002013512A1 (fr) * | 2000-08-07 | 2002-02-14 | Sony Corporation | Procede et dispositif de traitement d'image et support enregistre |
US6907413B2 (en) | 2000-08-02 | 2005-06-14 | Sony Corporation | Digital signal processing method, learning method, apparatuses for them, and program storage medium |
US7412384B2 (en) | 2000-08-02 | 2008-08-12 | Sony Corporation | Digital signal processing method, learning method, apparatuses for them, and program storage medium |
US7584008B2 (en) | 2000-08-02 | 2009-09-01 | Sony Corporation | Digital signal processing method, learning method, apparatuses for them, and program storage medium |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4345231B2 (ja) * | 1998-09-18 | 2009-10-14 | ソニー株式会社 | データ変換装置および方法、並びに記録媒体 |
US6718073B1 (en) | 1998-09-30 | 2004-04-06 | Sony Corporation | Arithmetic device, and converter, and their methods |
EP1037471B1 (en) * | 1998-10-05 | 2012-02-08 | Sony Corporation | Image transform device and method and recording medium |
EP1197946B1 (en) * | 2000-02-10 | 2007-04-04 | Sony Corporation | Image processing device and method, and recording medium |
JP4277446B2 (ja) * | 2000-12-26 | 2009-06-10 | ソニー株式会社 | 情報信号処理装置、情報信号処理方法、画像信号処理装置およびそれを使用した画像表示装置、それに使用される係数種データ生成装置および生成方法、並びに記録媒体 |
JP4066146B2 (ja) * | 2002-04-26 | 2008-03-26 | ソニー株式会社 | データ変換装置およびデータ変換方法、学習装置および学習方法、並びにプログラムおよび記録媒体 |
JP3702464B2 (ja) * | 2002-05-08 | 2005-10-05 | ソニー株式会社 | データ変換装置およびデータ変換方法、学習装置および学習方法、並びにプログラムおよび記録媒体 |
JP4265291B2 (ja) * | 2003-06-06 | 2009-05-20 | ソニー株式会社 | 情報信号の処理装置および処理方法、並びに情報信号の処理方法を実行するためのプログラム |
JP4281453B2 (ja) * | 2003-07-31 | 2009-06-17 | ソニー株式会社 | 信号処理装置および信号処理方法 |
US7595819B2 (en) * | 2003-07-31 | 2009-09-29 | Sony Corporation | Signal processing device and signal processing method, program, and recording medium |
JP4311258B2 (ja) * | 2004-04-02 | 2009-08-12 | ソニー株式会社 | 係数データの生成装置および生成方法、係数種データの生成装置および生成方法、情報信号処理装置、並びにプログラムおよびそれを記録した記録媒体 |
JP2005311629A (ja) * | 2004-04-20 | 2005-11-04 | Sony Corp | 係数データの生成装置および生成方法、情報信号の処理装置および処理方法、並びにプログラムおよびそれを記録した媒体 |
US7916177B2 (en) * | 2007-08-03 | 2011-03-29 | Panasonic Corporation | Image-capturing apparatus, image-capturing method and program for detecting and correcting image blur |
CN103326151B (zh) * | 2010-07-30 | 2015-04-29 | 昆山宏泽电子有限公司 | 卡缘连接器 |
DE102021210426A1 (de) | 2021-09-20 | 2023-03-23 | Zf Friedrichshafen Ag | Verfahren und System zur Kollisionsverhinderung eines Wasserfahrzeugs |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS5947881A (ja) * | 1982-09-10 | 1984-03-17 | Pioneer Electronic Corp | 画像処理フイルタ |
JPH0851599A (ja) * | 1994-08-08 | 1996-02-20 | Sony Corp | 画像情報変換装置 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0361538A1 (en) * | 1985-04-12 | 1990-04-04 | International Business Machines Corporation | Method and system for edge enhancement in reproducing multi-level digital images on a bi-level printer of fixed dot size |
JP2830111B2 (ja) | 1989-07-21 | 1998-12-02 | ソニー株式会社 | 高能率符号化装置 |
JP3092024B2 (ja) | 1991-12-09 | 2000-09-25 | 松下電器産業株式会社 | 画像処理方法 |
KR100360206B1 (ko) | 1992-12-10 | 2003-02-11 | 소니 가부시끼 가이샤 | 화상신호변환장치 |
US5499057A (en) | 1993-08-27 | 1996-03-12 | Sony Corporation | Apparatus for producing a noise-reducded image signal from an input image signal |
JP3845870B2 (ja) | 1994-09-09 | 2006-11-15 | ソニー株式会社 | ディジタル信号処理用集積回路 |
US5852470A (en) * | 1995-05-31 | 1998-12-22 | Sony Corporation | Signal converting apparatus and signal converting method |
US5912708A (en) | 1996-12-26 | 1999-06-15 | Sony Corporation | Picture signal encoding device, picture signal encoding method, picture signal decoding device, picture signal decoding method, and recording medium |
-
1998
- 1998-05-06 WO PCT/JP1998/002009 patent/WO1998051072A1/ja active IP Right Grant
- 1998-05-06 EP EP98919492A patent/EP0912045B1/en not_active Expired - Lifetime
- 1998-05-06 DE DE69838536T patent/DE69838536T2/de not_active Expired - Lifetime
- 1998-05-06 KR KR10-1999-7000013A patent/KR100499434B1/ko not_active IP Right Cessation
- 1998-05-06 CN CN98800490A patent/CN1129301C/zh not_active Expired - Fee Related
- 1998-05-06 IL IL12791098A patent/IL127910A/xx not_active IP Right Cessation
-
1999
- 1999-01-06 US US09/226,808 patent/US6233019B1/en not_active Expired - Lifetime
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS5947881A (ja) * | 1982-09-10 | 1984-03-17 | Pioneer Electronic Corp | 画像処理フイルタ |
JPH0851599A (ja) * | 1994-08-08 | 1996-02-20 | Sony Corp | 画像情報変換装置 |
Non-Patent Citations (1)
Title |
---|
See also references of EP0912045A4 * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6907413B2 (en) | 2000-08-02 | 2005-06-14 | Sony Corporation | Digital signal processing method, learning method, apparatuses for them, and program storage medium |
US6990475B2 (en) | 2000-08-02 | 2006-01-24 | Sony Corporation | Digital signal processing method, learning method, apparatus thereof and program storage medium |
US7412384B2 (en) | 2000-08-02 | 2008-08-12 | Sony Corporation | Digital signal processing method, learning method, apparatuses for them, and program storage medium |
US7584008B2 (en) | 2000-08-02 | 2009-09-01 | Sony Corporation | Digital signal processing method, learning method, apparatuses for them, and program storage medium |
WO2002013512A1 (fr) * | 2000-08-07 | 2002-02-14 | Sony Corporation | Procede et dispositif de traitement d'image et support enregistre |
US6987884B2 (en) | 2000-08-07 | 2006-01-17 | Sony Corporation | Image processing device and method, and recorded medium |
KR100780516B1 (ko) * | 2000-08-07 | 2007-11-29 | 소니 가부시끼 가이샤 | 화상 처리 장치 및 방법, 및 기록 매체 |
JP4725001B2 (ja) * | 2000-08-07 | 2011-07-13 | ソニー株式会社 | 画像処理装置及び方法、並びに記録媒体 |
Also Published As
Publication number | Publication date |
---|---|
EP0912045B1 (en) | 2007-10-10 |
IL127910A0 (en) | 1999-11-30 |
DE69838536T2 (de) | 2008-07-24 |
IL127910A (en) | 2003-01-12 |
KR20000023569A (ko) | 2000-04-25 |
CN1223050A (zh) | 1999-07-14 |
DE69838536D1 (de) | 2007-11-22 |
CN1129301C (zh) | 2003-11-26 |
KR100499434B1 (ko) | 2005-07-07 |
US6233019B1 (en) | 2001-05-15 |
EP0912045A4 (en) | 2000-08-02 |
EP0912045A1 (en) | 1999-04-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO1998051072A1 (en) | Image converter and image conversion method | |
EP1804488A2 (en) | Motion estimator and motion method | |
US20100202711A1 (en) | Image processing apparatus, image processing method, and program | |
US8331710B2 (en) | Image processing apparatus and method, learning apparatus and method, and program | |
WO2016002068A1 (ja) | 画像拡大装置、画像拡大方法、及び監視カメラ、並びにプログラム及び記録媒体 | |
EP1806925A2 (en) | Frame rate converter | |
EP1039746B1 (en) | Line interpolation method and apparatus | |
US20080239144A1 (en) | Frame rate conversion device and image display apparatus | |
JP5116602B2 (ja) | 映像信号処理装置及び方法、プログラム | |
JP4062771B2 (ja) | 画像変換装置および方法、並びに記録媒体 | |
JP4035895B2 (ja) | 画像変換装置および方法、並びに記録媒体 | |
JP2002247611A (ja) | 画像判別回路、画質補正装置、画像判別方法、画質補正方法、およびプログラム | |
JP4650683B2 (ja) | 画像処理装置および方法、プログラム並びに記録媒体 | |
JPH10155139A (ja) | 画像処理装置および画像処理方法 | |
KR101509552B1 (ko) | 비디오 화상에서 가장자리 방위를 나타내는 거리들을 생성하는 방법, 대응하는 디바이스, 및 디인터레이싱이나 포맷 전환을 위한 방법의 용도 | |
JP4139979B2 (ja) | 画像変換装置および方法、並びに記録媒体 | |
JP4131303B2 (ja) | 画像変換装置および方法、学習装置および方法、画像変換システム、並びに記録媒体 | |
JP2000092455A (ja) | 画像情報変換装置および画像情報変換方法 | |
JP3826434B2 (ja) | 信号変換装置および方法 | |
JP4250807B2 (ja) | フィールド周波数変換装置および変換方法 | |
JP2003319171A (ja) | 画像信号処理装置 | |
JP4650684B2 (ja) | 画像処理装置および方法、プログラム並びに記録媒体 | |
JP3480011B2 (ja) | 画像情報変換装置 | |
JP2000200349A (ja) | 画像情報変換装置および画像情報変換方法、学習装置および学習方法 | |
JP3800638B2 (ja) | 画像情報変換装置および方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 127910 Country of ref document: IL Ref document number: 98800490.9 Country of ref document: CN |
|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): CN IL KR SG US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): CY DE FR GB |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1998919492 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1019997000013 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 09226808 Country of ref document: US |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWP | Wipo information: published in national office |
Ref document number: 1998919492 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 1019997000013 Country of ref document: KR |
|
WWG | Wipo information: grant in national office |
Ref document number: 1019997000013 Country of ref document: KR |
|
WWG | Wipo information: grant in national office |
Ref document number: 1998919492 Country of ref document: EP |