WO2000019724A1 - Arithmetic device, converter, and their methods - Google Patents
Arithmetic device, converter, and their methods Download PDFInfo
- Publication number
- WO2000019724A1 WO2000019724A1 PCT/JP1999/005384 JP9905384W WO0019724A1 WO 2000019724 A1 WO2000019724 A1 WO 2000019724A1 JP 9905384 W JP9905384 W JP 9905384W WO 0019724 A1 WO0019724 A1 WO 0019724A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- image data
- teacher
- prediction coefficient
- class
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims description 55
- 238000006243 chemical reaction Methods 0.000 claims description 34
- 238000004364 calculation method Methods 0.000 claims description 29
- 238000001914 filtration Methods 0.000 claims description 10
- 230000015654 memory Effects 0.000 description 29
- 238000010586 diagram Methods 0.000 description 11
- 238000013507 mapping Methods 0.000 description 8
- 230000003044 adaptive effect Effects 0.000 description 5
- 238000013139 quantization Methods 0.000 description 4
- 238000013500 data storage Methods 0.000 description 2
- 230000001934 delay Effects 0.000 description 2
- 230000002093 peripheral effect Effects 0.000 description 2
- 238000010408 sweeping Methods 0.000 description 2
- 238000007906 compression Methods 0.000 description 1
- 230000006835 compression Effects 0.000 description 1
- 238000013144 data compression Methods 0.000 description 1
- 230000008030 elimination Effects 0.000 description 1
- 238000003379 elimination reaction Methods 0.000 description 1
- 230000006870 function Effects 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/43—Hardware specially adapted for motion estimation or compensation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4007—Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/59—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
Definitions
- the present invention relates to an arithmetic device, a conversion device, and a method thereof.
- connection between devices with different signal formats requires a signal conversion device that realizes signal conversion between the devices.
- an image data conversion device for format-converting low-resolution image data to high-resolution image data is required.
- high-resolution image data is formed by applying frequency interpolation processing with an interpolation filter to low-resolution image data and performing pixel interpolation.
- the image data conversion apparatus classifies the low-resolution image data into classes corresponding to the signal level distribution of each pixel, and then reads out data called prediction coefficients from a memory in which they are stored in advance.
- the prediction coefficients stored in the memory are generated in advance by data processing called learning.
- the learning circuit that generates these prediction coefficients down-converts high-resolution image data serving as a teacher image with a digital filter to generate low-resolution image data serving as a student image, and performs learning between the high-resolution image data and the low-resolution image data to generate the prediction coefficients.
- when the high-resolution image data has a plurality of signal characteristics, it is desirable to change the frequency characteristics of the digital filter according to each signal characteristic.
- for still image portions, where visual resolution is readily perceived, a digital filter with frequency characteristics that improve the resolution is desirable.
- for moving image portions, a digital filter with frequency characteristics that suppress the improvement in resolution is desirable.
- when the high-resolution image data has multiple signal characteristics, however, a prediction coefficient corresponding to each signal characteristic cannot be generated; as a result, there is a problem that image-quality improvement is hindered when generating high-resolution image data from low-resolution image data.
- the present invention has been made in consideration of the above points, and is intended to achieve a further improvement in image quality.
- the present invention provides an arithmetic device that calculates a prediction coefficient for converting first image data into a second image data having a higher quality than the first image data.
- a class deciding unit that classifies the teacher image data, which is higher in quality than the first image data, into a plurality of classes based on its characteristics; and a different filtering process for each class determined by the class deciding unit.
- the student image data of the same quality as the first image data is obtained by performing this process for the teacher image data.
- a student image data that matches the characteristics of the first image data and the teacher image data is generated.
- a prediction coefficient matching the characteristics of the first image data and the teacher image data is generated.
- the first image data is converted from the first image data to the second image data.
- the conversion process can be performed according to the features of the data. Brief description of the drawings
- FIG. 1 is a block diagram showing the configuration of an upconverter.
- FIG. 2 is a schematic diagram illustrating an example of class tap arrangement.
- FIG. 3 is a schematic diagram illustrating an example of a prediction tap arrangement.
- FIG. 4 is a block diagram showing the configuration of the learning circuit.
- FIG. 5 is a flowchart showing a prediction coefficient generation procedure.
- FIG. 6 is a flowchart showing a prediction coefficient generation procedure according to the first embodiment.
- FIG. 7 is a schematic diagram used to explain a first embodiment of down-conversion according to the present invention.
- FIG. 8 is a schematic diagram illustrating a learning method and a coefficient memory according to a first embodiment of the present invention.
- FIG. 9 is a block diagram showing a first embodiment of the upcoming process according to the present invention.
- FIG. 10 is a schematic diagram used for describing a second embodiment of down-conversion according to the present invention.
- FIG. 11 is a schematic diagram illustrating a learning method and a coefficient memory according to a second embodiment of the present invention.
- FIG. 12 is a block diagram showing a second embodiment of the upconverter according to the present invention.
- FIG. 13 is a schematic diagram illustrating an example of class tap arrangement at the time of mapping.
- FIG. 14 is a schematic diagram showing an example of class tap arrangement during down-conversion. BEST MODE FOR CARRYING OUT THE INVENTION
- FIG. 1 shows a circuit configuration of an upconverter 51 that realizes the classification adaptive processing.
- the upconverter 51 is supplied with, for example, 8-bit PCM (pulse code modulation) SD image data from the outside.
- the SD image data S51 is input to the class classification unit 52 and the prediction calculation unit 53.
- the class classification unit 52 extracts a total of seven pixels (taps), namely the pixel of interest and a plurality of peripheral pixels around it, from the SD image data S51 as pixels for class classification (class taps), and generates a class code S52 based on the signal level distribution of those pixels.
- in the figure, the solid lines indicate the first field, and the dotted lines indicate the second field.
- the class classification unit 52 reduces the number of classes by performing a data compression process (i.e., requantization) such as ADRC (Adaptive Dynamic Range Coding).
- this ADRC classification method generates an ADRC code from a few taps in a neighboring area centered on the pixel of interest, according to c_i = ⌊(x_i − MIN) · 2^k / DR⌋, where
- c_i is the ADRC code
- x_i is the input pixel value of each class tap
- MIN is the minimum pixel value of the input pixels in the ADRC block
- DR is the dynamic range (difference between the maximum pixel value and the minimum pixel value) in the area
- k is the number of requantization bits.
- the classification method based on ADRC calculates the quantization step width according to the number of requantization bits from the dynamic range in the area, subtracts the minimum pixel value from each input pixel value, and requantizes the result according to the quantization step width.
- for example, when performing 1-bit ADRC that requantizes each of 7 class taps in the area to 1 bit, each input pixel value of the 7 taps is adaptively quantized to 1 bit based on the dynamic range in the area; as a result, the 7-tap input pixel values are reduced to 7-bit data, so the total number of classes can be reduced to 128.
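The 1-bit ADRC computation described above can be sketched in a few lines of Python (a minimal illustration with made-up sample values; the patent specifies a circuit, not this code):

```python
def adrc_class_code(taps, k=1):
    """Requantize each class tap to k bits based on the block's dynamic
    range, and pack the codes into a single class number (1-bit ADRC)."""
    mn, mx = min(taps), max(taps)
    dr = mx - mn + 1                 # dynamic range of the block (+1 avoids /0)
    code = 0
    for x in taps:
        q = ((x - mn) << k) // dr    # requantize to k bits: 0 .. 2^k - 1
        code = (code << k) | q
    return code

# 7 class taps requantized to 1 bit each -> a 7-bit class code (128 classes)
taps = [10, 200, 15, 180, 20, 190, 12]
print(adrc_class_code(taps))         # -> 42 (binary 0101010)
```

Pixels near the block minimum map to 0 and pixels near the maximum map to 1, so the code captures the shape of the local signal level distribution rather than absolute levels.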
- This one-bit ADRC is disclosed in Japanese Patent Application Laid-Open No. 7-87481 and corresponding US Pat. No. 5,488,18.
- the prediction coefficient ROM (Read Only Memory) 54 stores prediction coefficient data S53 corresponding to each class, generated in advance by a learning circuit 60 described below; it reads out the prediction coefficient data S53 corresponding to the class code S52 supplied from the class classification unit 52 and sends it to the prediction calculation unit 53.
- the prediction calculation unit 53 generates HD image data, which is a set of HD (High Definition) pixels that do not exist in the SD image data, by the product-sum operation x′ = w₁x₁ + w₂x₂ + … + wₙxₙ over the prediction taps, and outputs it to the outside, where
- x′ is the HD pixel value
- x_i is the pixel value of each prediction tap
- w_i is the prediction coefficient
- n is the number of prediction taps
- n is 13 in this case.
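The product-sum of the prediction calculation unit is simply a weighted sum over the prediction taps; a minimal sketch (the function name and the toy averaging coefficients are assumptions for illustration):

```python
def predict_hd_pixel(pred_taps, coeffs):
    """Product-sum of n prediction taps with class-specific coefficients
    (the equation x' = sum_i w_i * x_i; n = 13 in the text)."""
    assert len(pred_taps) == len(coeffs)
    return sum(w * x for w, x in zip(coeffs, pred_taps))

# toy example: averaging coefficients over 13 prediction taps
n = 13
taps = [100.0] * n
coeffs = [1.0 / n] * n
print(predict_hd_pixel(taps, coeffs))
```

In the actual device the coefficients come from the prediction coefficient ROM, selected by the class code of the pixel being generated.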
- FIG. 4 shows a circuit configuration of the learning circuit 60 that generates the prediction coefficient data stored in the prediction coefficient ROM 54.
- the learning circuit 60 generates prediction coefficient data in advance and stores the data in a prediction coefficient ROM 54.
- the HD image data S60 serving as a teacher signal is input to the vertical thinning filter 61 and the prediction coefficient calculation circuit 62.
- the learning circuit 60 generates SD image data S61 serving as a student signal by thinning out the HD image data S60 with the vertical thinning filter 61 and the horizontal thinning filter 62, and inputs it to the class classification unit 64 and the prediction coefficient calculation circuit 62.
- the class classification unit 64 has the same configuration as the class classification unit 52 of the upconverter shown in FIG. 1; it selects class taps from the SD image data S61, generates a class code S62 based on the signal level distribution, and sends it to the prediction coefficient calculation circuit 62.
- the prediction coefficient calculation circuit 62 calculates a prediction coefficient for each class indicated by the class code S62 from the HD image data S60 and the SD image data S61, and stores the resulting prediction coefficients S63 in the prediction coefficient ROM 54.
- the prediction coefficient calculation circuit 62 obtains the prediction coefficients w in equation (2) above by the least-squares method; specifically, it uses the observation equation XW = Y, where X is the matrix of SD pixel values, W the vector of prediction coefficients, and Y the vector of HD pixel values.
- n is the number of prediction taps.
- the prediction coefficient calculation circuit 62 calculates the following equation based on the equation (3).
- the prediction coefficient calculation circuit 62 only needs to calculate w1, w2, ..., wn satisfying the n equations in (6).
- the prediction coefficient calculating circuit 62 calculates the following equation from the above equations (4) and (8).
- the prediction coefficient calculation circuit 62 generates a normal equation consisting of simultaneous equations of the same order as the number of prediction taps n, and calculates each prediction coefficient w_i by solving this normal equation with the sweeping-out method (Gauss-Jordan elimination).
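This step can be sketched as follows: accumulate the normal-equation matrix X^T X and right-hand side X^T y from observation pairs, then sweep the system with Gauss-Jordan elimination. A hedged Python illustration (the data values are invented for the example):

```python
def least_squares_coeffs(X, y):
    """Form the normal equation (X^T X) w = X^T y from observation pairs
    and solve it by Gauss-Jordan elimination (the 'sweeping-out' method)."""
    n = len(X[0])
    # accumulate the normal-equation matrix and right-hand side
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * t for r, t in zip(X, y)) for i in range(n)]
    # Gauss-Jordan elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        d = A[col][col]
        A[col] = [v / d for v in A[col]]        # normalize the pivot row
        b[col] /= d
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col]                   # eliminate this column elsewhere
                A[r] = [v - f * p for v, p in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return b

# taps that exactly follow w = (0.5, 0.5): the coefficients are recovered
X = [[1.0, 3.0], [2.0, 1.0], [4.0, 2.0]]
y = [2.0, 1.5, 3.0]
print(least_squares_coeffs(X, y))
```

With more observation rows than taps, the same routine yields the least-squares coefficients rather than an exact fit, which is the situation during learning.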
- in step SP62, entered from step SP61, the learning circuit 60 generates SD image data S61 as a student signal from the HD image data S60 as a teacher signal, thereby generating the learning data needed to obtain the prediction coefficients.
- class taps are selected from the SD image data S61 and classified based on the signal level distribution.
- in step SP63, the learning circuit 60 determines whether enough learning data has been obtained to generate the prediction coefficients; if it determines that necessary and sufficient learning data has not yet been obtained, a negative result is obtained in step SP63 and the routine proceeds to step SP65.
- in step SP65, the learning circuit 60 generates the normal equation represented by equation (9) above for each class, returns to step SP62, and repeats the same processing procedure to generate enough normal equations to produce the prediction coefficients.
- if a positive result is obtained in step SP63, this indicates that necessary and sufficient learning data has been obtained.
- the learning circuit 60 then proceeds to step SP66 and generates the prediction coefficients w1, w2, ..., wn for each class by solving the normal equation represented by equation (9) above with the sweeping-out method.
- in step SP67, the learning circuit 60 stores the generated prediction coefficients w1, w2, ..., wn of each class in the prediction coefficient ROM 54, and the process proceeds to step SP68 and ends.
- in this way, the learning circuit down-converts the HD image data serving as the teacher image into SD image data serving as the student image, and performs learning between the HD image data and the SD image data.
- a prediction coefficient is thereby generated for each class.
- a procedure of generating a prediction coefficient by the learning circuit 60 will be described with reference to a flowchart shown in FIG.
- in step SP72, entered from step SP71, the learning circuit generates the learning data required to produce prediction coefficients by generating SD image data as student images from HD image data as teacher images.
- in this case, the learning circuit uses a plurality of down filters F1 to F4 having different passbands to generate a plurality of SD image data SD1 to SD4 from one HD image data HD.
- the passband of the down filter F1 is set to be the highest, and the passbands are set to decrease in the order of the down filters F2, F3 and F4.
- in step SP73, the learning circuit determines whether enough learning data has been obtained to generate the prediction coefficients; if it determines that necessary and sufficient learning data has not yet been obtained, a negative result is obtained in step SP73 and the routine proceeds to step SP75.
- in step SP75, the learning circuit generates the normal equation represented by equation (9) above for each of the SD image data SD1 to SD4, returns to step SP72, and repeats the same processing procedure to generate as many normal equations as are necessary to produce the prediction coefficients.
- if a positive result is obtained in step SP73, this indicates that a necessary and sufficient amount of learning data has been obtained.
- the learning circuit then proceeds to step SP76 and generates a prediction coefficient for each of the SD image data SD1 to SD4 by solving the normal equations generated in step SP75 with the sweeping-out method.
- that is, as shown in FIG. 8, the learning circuit performs learning ST1 between the HD image data HD and the SD image data SD1 to generate the prediction coefficient Y1, and stores it in coefficient memory M1.
- similarly, the learning circuit performs learning ST2 to ST4 between the HD image data HD and the SD image data SD2 to SD4 to generate the prediction coefficients Y2 to Y4, respectively, and stores them in the coefficient memories M2 to M4.
- next, in step SP77 in FIG. 6, the learning circuit classifies the HD image data HD into four motion classes C1 to C4 according to the amount of motion.
- of the prediction coefficients Y1 stored in the coefficient memory M1, those corresponding to the motion class C1 are stored in the coefficient memory M5.
- likewise, of the prediction coefficients Y2 stored in the coefficient memory M2, those corresponding to the motion class C2; of the prediction coefficients Y3 stored in the coefficient memory M3, those corresponding to the motion class C3; and of the prediction coefficients Y4 stored in the coefficient memory M4, those corresponding to the motion class C4 are each stored in the coefficient memory M5. The prediction coefficients are thus registered in the coefficient memory M5, and the prediction coefficient generation procedure ends in the following step SP78.
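The registration of per-filter coefficients into the single memory M5 amounts to a keyed merge: for each motion class, keep only the coefficients learned with the matching down filter. A hedged sketch, where the dictionary layout (motion class, signal class) → coefficients and the toy values are assumptions for illustration, not the patent's memory format:

```python
def merge_coefficient_memories(memories):
    """memories[k-1] maps (motion_class, signal_class) -> coefficient list,
    learned with down filter Fk. The merged memory M5 keeps, for motion
    class Ck, only the entries learned with the matching filter Fk."""
    m5 = {}
    for k, mem in enumerate(memories, start=1):      # k = 1 for M1/F1, ...
        for (motion_class, signal_class), coeffs in mem.items():
            if motion_class == k:                    # C1 <- F1, C2 <- F2, ...
                m5[(motion_class, signal_class)] = coeffs
    return m5

# toy memories M1 and M2 with a single signal class 0
m1 = {(1, 0): [0.9], (2, 0): [0.8]}                  # learned with F1
m2 = {(1, 0): [0.7], (2, 0): [0.6]}                  # learned with F2
print(merge_coefficient_memories([m1, m2]))
```

The merged table then serves every motion class with coefficients trained on the down filter whose passband matched that amount of motion.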
- FIG. 9 shows a configuration of an upconverter 100 that employs a coefficient memory M5 generated by learning.
- the upconverter 100 inputs the input SD image data S100 to the class classification unit 101 and the delay circuit 102.
- the classifying unit 101 generates a class code S101 by classifying the SD image data S100 into a class, and sends it to the coefficient memory M5.
- the coefficient memory M5 reads out, based on the supplied class code S101, the prediction coefficient of the motion class corresponding to the class code S101 among the motion classes C1 to C4, and sends the prediction coefficient data S102 to the mapping circuit 103.
- the delay circuit 102 delays the SD image data S 100 by a predetermined time, and sends it to the mapping circuit 103.
- the mapping circuit 103 generates the HD image data S103 by performing a product-sum operation on the SD image data S100 and the prediction coefficient data S102, and outputs it to the outside.
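Putting the pieces together, one pixel of this upconversion path — classify the SD taps, look up that class's coefficients, form the HD pixel by a product-sum — might look like the following sketch (the dictionary-based coefficient memory and sample values are assumptions for illustration):

```python
def upconvert_pixel(class_taps, pred_taps, coeff_memory):
    """One pixel of classification adaptive upconversion: classify the SD
    class taps with 1-bit ADRC, look up that class's prediction
    coefficients, and form the HD pixel value by a product-sum.
    coeff_memory maps class code -> coefficient list (assumed layout)."""
    mn, mx = min(class_taps), max(class_taps)
    dr = mx - mn + 1
    code = 0
    for x in class_taps:                       # 1-bit ADRC class code
        code = (code << 1) | (((x - mn) << 1) // dr)
    coeffs = coeff_memory[code]
    return sum(w * x for w, x in zip(coeffs, pred_taps))

class_taps = [10, 200, 15, 180, 20, 190, 12]   # classifies to code 42
memory = {42: [0.5, 0.5]}                      # toy coefficients for that class
print(upconvert_pixel(class_taps, [100.0, 120.0], memory))   # -> 110.0
```

In the device, the delay circuit 102 simply time-aligns the prediction taps with the coefficient data arriving from the memory.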
- in the above configuration, the learning circuit generates a plurality of SD image data SD1 to SD4 from one HD image data HD using a plurality of down filters F1 to F4 having different passbands.
- learning is performed between the HD image data HD and each of the SD image data SD1 to SD4 to generate the prediction coefficients Y1 to Y4, respectively.
- of the prediction coefficients Y1 to Y4, those corresponding to the motion of the HD image data can thus be stored in the coefficient memory M5.
- as a result, the HD image data S103 is generated using prediction coefficients corresponding to the motion of the SD image data S100, and the image quality of the HD image data S103 is improved compared with the conventional case where mapping is performed using prediction coefficients generated with a single down filter.
- in the first embodiment, a plurality of prediction coefficients Y1 to Y4 are generated using a plurality of down filters F1 to F4 having different passbands, and the coefficients matching the motion class are selected from among the prediction coefficients Y1 to Y4.
- FIG. 10 in which parts corresponding to those in FIG. 7 are assigned the same reference numerals, shows the configuration of the down converter 110 according to the second embodiment.
- the down converter 110 inputs the HD image data HD as the teacher image to the switch SW 1 and the class classification unit 111.
- the class classification unit 111 generates class data S110 by classifying the HD image data HD into motion classes C1 to C4 according to the amount of motion, and supplies the class data S110 to the switches SW1 and SW2.
- specifically, the class classification unit 111 classifies pixels with the smallest amount of motion into motion class C1, assigns motion classes C2 and C3 as the amount of motion increases, and classifies those with the largest amount of motion into motion class C4.
- an image signal conversion device that performs class classification based on the amount of motion is described in Japanese Patent Application Laid-Open No. 9-74543.
- the switches SW1 and SW2 adaptively select and switch among the plurality of down filters F1 to F4 having different passbands, for each pixel, based on the motion class indicated by the supplied class data S110.
- the down converter 110 thereby generates one SD image data SD by down-converting the HD image data HD with the selected down filter F; when the amount of motion of the HD image is small, the down filter F1 with the high passband is selected, and as the amount of motion increases, the down filters F2 to F4 with lower passbands are selected.
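The motion-dependent filter selection can be pictured as a simple threshold lookup; the threshold values here are illustrative placeholders, not values from the patent:

```python
def select_down_filter(motion_amount, thresholds=(10, 30, 60)):
    """Pick a down filter index 1..4: small motion -> F1 (widest
    passband), larger motion -> F2..F4 (progressively narrower).
    The threshold values are assumptions for illustration."""
    for i, t in enumerate(thresholds, start=1):
        if motion_amount < t:
            return i
    return 4

print(select_down_filter(5))     # small motion -> filter 1
print(select_down_filter(100))   # large motion -> filter 4
```

Because the selection is made per pixel, still and moving regions of the same frame can be down-converted with different passbands.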
- in this way, the down converter 110 generates the SD image data SD while adaptively switching the down filter F according to the amount of motion of the HD image data.
- next, as shown in FIG. 11, the learning circuit generates a prediction coefficient Y by performing learning ST between the HD image data HD serving as the teacher image and the SD image data SD serving as the student image, and stores it in the coefficient memory M.
- FIG. 12 in which parts corresponding to those in FIG. 9 are assigned the same reference numerals, shows a configuration of an upconverter 120 using the above-described coefficient memory M.
- the upconverter 120 inputs the input SD image data S100 to the class classification unit 101 and the delay circuit 102.
- the class classification unit 101 generates a class code S 101 by classifying the SD image data S 100 into a class, and sends this to the coefficient memory M.
- the coefficient memory M reads out a prediction coefficient based on the supplied class code S101 and sends the prediction coefficient data S120 to the mapping circuit 103.
- the delay circuit 102 delays the SD image data S 100 for a predetermined time and sends it to the mapping circuit 103.
- the mapping circuit 103 generates HD image data S121 by performing a product-sum operation on the SD image data S100 and the prediction coefficient data S120, and outputs it to the outside.
- the class classification by the class classification unit 101 (FIG. 12) of the upconverter 120 will be described with reference to FIG. 13.
- the class classification unit 101 focuses on 9 pixels in the same field of the SD image data and the 9 pixels one frame earlier at the same positions, calculates the inter-frame difference of each pixel, takes the sum of the absolute values of those differences, and compares the sum against thresholds, thereby classifying the SD image data into four classes.
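The four-way motion classification from inter-frame differences can be sketched as follows (the threshold values are illustrative assumptions, not taken from the patent):

```python
def motion_class(cur_taps, prev_taps, thresholds=(50, 150, 400)):
    """Sum of absolute inter-frame differences over the 9 class taps,
    compared against thresholds to give one of four motion classes
    C1..C4. Threshold values are placeholders for illustration."""
    s = sum(abs(c - p) for c, p in zip(cur_taps, prev_taps))
    for cls, t in enumerate(thresholds, start=1):
        if s < t:
            return cls
    return 4

still = [100] * 9                             # identical to the previous frame
moving = [100 + 60 * i for i in range((9))]   # strong frame-to-frame change
print(motion_class(still, still))    # no motion -> class 1
print(motion_class(moving, still))   # large differences -> class 4
```

The same rule with a different tap spacing serves the HD-side class classification unit 111, which is what keeps the two classifications nearly consistent.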
- since the class classification unit 111 classifies HD image data having four times the number of pixels of the SD image data, it extracts 9 pixels from an area of the same size as that used for the SD image data and classifies them into four classes by taking the sum of the absolute values of the inter-frame differences of each pixel.
- the class obtained when the class classification unit 101 of the upconverter 120 classifies the SD image data may not coincide with the class obtained when the class classification unit 111 of the down converter 110 classifies the corresponding HD image data; however, since class taps in almost the same area are extracted and classified, the mismatch between the two classes is small enough to be negligible.
- in the above configuration, the down converter 110 generates one SD image data SD by down-converting the HD image data HD while adaptively switching among the down filters F having a plurality of frequency characteristics according to the amount of motion of the HD image data HD.
- the learning circuit generates prediction coefficients by performing learning ST between the HD image data HD and the SD image data SD, and stores the prediction coefficients in the coefficient memory M.
- here, the class classification unit 101 of the upconverter 100 cannot classify all SD image data S100 completely correctly; in practice, complete classification is difficult. In the upconverter 100 according to the first embodiment described above, therefore, even when HD image data S103 is generated from SD image data S100 having the same signal level distribution, the classification may be inaccurate and mapping may be performed using prediction coefficients generated with down filters F having completely different frequency characteristics; the prediction coefficients may then change greatly, and the image quality of the generated HD image data S103 may deteriorate.
- in the upconverter 120, by contrast, even if SD image data S100 having the same signal level distribution is classified into different classes, the prediction coefficients do not change significantly; as a result, the image quality of the generated HD image data S121 does not deteriorate.
- according to the above configuration, the HD image data HD is classified into classes, and the HD image data HD is down-converted to SD image data while the down filter F having a plurality of frequency characteristics is adaptively switched according to the class; a large change in the prediction coefficients used during mapping depending on the class of the SD image data S100 can therefore be avoided, and the image quality can be improved further than in the first embodiment.
- the present invention is applied to an image data conversion device that generates HD image data S 103 and S 121 from SD image data S 100.
- the present invention is not limited to this.
- the present invention can be widely applied to a data conversion apparatus that generates second data from first data.
- the learning circuit is applied to the generation of the prediction data.
- the present invention is not limited to this; the point is that the prediction data is generated from the teacher image data corresponding to the second image data.
- the prediction data may be generated using a plurality of filters having different passbands.
- Pixels may be extracted, and a class for the pixel of interest may be determined from the plurality of extracted pixels.
- mapping circuit 103 is applied to the generation of the pixel data.
- the present invention is not limited to this, and the point is that the prediction data read from the prediction data storage unit is used. It is only necessary to generate the pixel of interest in the second image.
- the present invention is applied to an image data conversion apparatus.
- the present invention is not limited to this, and can be suitably applied to, for example, data in which a plurality of adjacent data are correlated, such as audio (waveform) data.
- the classification adaptive processing according to the present invention is performed.
- the present invention is applied to the case where the number of pixels (spatial resolution) is converted as in the case of SD-HD conversion.
- the present invention is not limited to this; it can also be applied, as disclosed in Japanese Patent Application Laid-Open No. Hei 5-16991 and Japanese Patent Application Laid-Open No. 7-107, to the case where the number of quantization bits is increased so as to generate a signal corresponding to the increased amount of information, and to improving image blur as proposed in Japanese Patent Application No. 10-123021.
- as described above, according to the present invention, the prediction data is generated from the teacher image data corresponding to the second image data using a plurality of filters having different passbands.
- the second image data can be generated using the prediction data corresponding to the feature of the image data, and thus the image quality can be further improved as compared with the related art.
- the present invention is suitably applied to an image data conversion device that generates high-resolution image data from low-resolution image data and a device that calculates a prediction coefficient used in the conversion processing.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Television Systems (AREA)
- Editing Of Facsimile Originals (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Image Processing (AREA)
Description
Claims
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020007005851A KR100657776B1 (ko) | 1998-09-30 | 1999-09-30 | 연산 장치, 변환기, 및 이들의 방법 |
EP99969880A EP1033886A4 (en) | 1998-09-30 | 1999-09-30 | ARITHMETIC DEVICE, CONVERTER AND METHODS RELATING THERETO |
US09/555,245 US6718073B1 (en) | 1998-09-30 | 1999-09-30 | Arithmetic device, and converter, and their methods |
JP2000573098A JP4352298B2 (ja) | 1998-09-30 | 1999-09-30 | 演算装置及び変換装置並びにそれらの方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP27859398 | 1998-09-30 | ||
JP10/278593 | 1998-09-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2000019724A1 true WO2000019724A1 (en) | 2000-04-06 |
Family
ID=17599435
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP1999/005384 WO2000019724A1 (en) | 1998-09-30 | 1999-09-30 | Arithmetic device, converter, and their methods |
Country Status (5)
Country | Link |
---|---|
US (1) | US6718073B1 (ja) |
EP (1) | EP1033886A4 (ja) |
JP (1) | JP4352298B2 (ja) |
KR (1) | KR100657776B1 (ja) |
WO (1) | WO2000019724A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021241266A1 (ja) * | 2020-05-29 | 2021-12-02 | ソニーグループ株式会社 | 画像処理装置および方法 |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8036494B2 (en) * | 2004-04-15 | 2011-10-11 | Hewlett-Packard Development Company, L.P. | Enhancing image resolution |
JP2005311629A (ja) * | 2004-04-20 | 2005-11-04 | Sony Corp | 係数データの生成装置および生成方法、情報信号の処理装置および処理方法、並びにプログラムおよびそれを記録した媒体 |
JP4193871B2 (ja) * | 2006-05-18 | 2008-12-10 | ソニー株式会社 | 画像処理装置、画像処理方法、およびプログラム |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0846934A (ja) * | 1994-08-01 | 1996-02-16 | Sony Corp | ディジタル画像信号の処理装置 |
JPH08317347A (ja) * | 1995-05-23 | 1996-11-29 | Sony Corp | 画像情報変換装置 |
JPH10112843A (ja) * | 1996-10-04 | 1998-04-28 | Sony Corp | 画像処理装置および画像処理方法 |
JPH10112844A (ja) * | 1996-10-04 | 1998-04-28 | Sony Corp | 画像処理装置および画像処理方法 |
JPH10191318A (ja) * | 1996-12-27 | 1998-07-21 | Sony Corp | 符号化装置、符号化方法、復号化装置、復号化方法、送受信装置、および、送受信方法 |
EP0859513A2 (en) | 1997-02-14 | 1998-08-19 | Sony Corporation | Image signal converting apparatus and method |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3278881B2 (ja) | 1991-12-13 | 2002-04-30 | ソニー株式会社 | 画像信号生成装置 |
KR100360206B1 (ko) | 1992-12-10 | 2003-02-11 | 소니 가부시끼 가이샤 | 화상신호변환장치 |
JP3572632B2 (ja) | 1993-06-29 | 2004-10-06 | ソニー株式会社 | 異常検出装置 |
JP3458412B2 (ja) | 1993-06-30 | 2003-10-20 | ソニー株式会社 | 量子化ビット数変換装置および変換方法、並びに量子化ビット数学習装置および学習方法 |
US5748235A (en) * | 1994-08-31 | 1998-05-05 | Sony Corporation | Imaging apparatus including means for converting red, green, and blue signal components of standard resolution to respective high resolution signal components |
JP3781203B2 (ja) * | 1994-11-28 | 2006-05-31 | ソニー株式会社 | 画像信号補間装置及び画像信号補間方法 |
JP3794505B2 (ja) * | 1995-03-22 | 2006-07-05 | ソニー株式会社 | 信号変換装置及び信号変換方法 |
JP3870428B2 (ja) * | 1995-05-10 | 2007-01-17 | ソニー株式会社 | 画像情報変換装置および方法並びに係数データ生成装置および方法 |
US5852470A (en) * | 1995-05-31 | 1998-12-22 | Sony Corporation | Signal converting apparatus and signal converting method |
US5946044A (en) | 1995-06-30 | 1999-08-31 | Sony Corporation | Image signal converting method and image signal converting apparatus |
EP1445949B1 (en) * | 1996-05-30 | 2006-06-28 | Sony Corporation | Sum-of-product calculating circuit and method thereof |
JPH1011583A (ja) | 1996-06-27 | 1998-01-16 | Sony Corp | クラス分類適応処理装置、クラス分類適応処理用の学習装置および学習方法 |
JPH10126218A (ja) * | 1996-10-15 | 1998-05-15 | Sony Corp | サンプリング周波数変換装置 |
WO1998051072A1 (en) | 1997-05-06 | 1998-11-12 | Sony Corporation | Image converter and image conversion method |
JP3946812B2 (ja) | 1997-05-12 | 2007-07-18 | ソニー株式会社 | オーディオ信号変換装置及びオーディオ信号変換方法 |
US6181382B1 (en) * | 1998-04-03 | 2001-01-30 | Miranda Technologies Inc. | HDTV up converter |
1999
- 1999-09-30 US US09/555,245 patent/US6718073B1/en not_active Expired - Fee Related
- 1999-09-30 EP EP99969880A patent/EP1033886A4/en not_active Withdrawn
- 1999-09-30 WO PCT/JP1999/005384 patent/WO2000019724A1/ja active IP Right Grant
- 1999-09-30 JP JP2000573098A patent/JP4352298B2/ja not_active Expired - Fee Related
- 1999-09-30 KR KR1020007005851A patent/KR100657776B1/ko not_active IP Right Cessation
Also Published As
Publication number | Publication date |
---|---|
EP1033886A4 (en) | 2008-04-02 |
KR100657776B1 (ko) | 2006-12-15 |
EP1033886A1 (en) | 2000-09-06 |
US6718073B1 (en) | 2004-04-06 |
JP4352298B2 (ja) | 2009-10-28 |
KR20010015845A (ko) | 2001-02-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR0184905B1 (ko) | 부호량제어장치 및 이것을 사용한 부호화장치 | |
US20100202711A1 (en) | Image processing apparatus, image processing method, and program | |
JP3794505B2 (ja) | 信号変換装置及び信号変換方法 | |
US20120321214A1 (en) | Image processing apparatus and method, program, and recording medium | |
JP4650123B2 (ja) | 画像処理装置、画像処理方法、およびプログラム | |
JPH09212650A (ja) | 動きベクトル検出装置および検出方法 | |
KR20000047608A (ko) | 데이터 처리 장치 및 데이터 처리 방법 | |
JP2002300538A (ja) | 係数データの生成装置および生成方法、それを使用した情報信号の処理装置および処理方法、それに使用する種係数データの生成装置および生成方法、並びに情報提供媒体 | |
WO2000019724A1 (en) | Arithmetic device, converter, and their methods | |
US7672914B2 (en) | Apparatus and method for generating coefficient data, apparatus and method for generating coefficient-seed data, information-signal processing apparatus, program, and medium having recorded the program thereon | |
JP3743077B2 (ja) | 画像信号変換装置および方法 | |
JP4300603B2 (ja) | 画像情報変換装置および方法 | |
JP3723995B2 (ja) | 画像情報変換装置および方法 | |
JP2000059652A (ja) | ノイズ除去装置及びノイズ除去方法 | |
JP3693187B2 (ja) | 信号変換装置及び信号変換方法 | |
JPH0983961A (ja) | クラス予測係数の学習方法並びにクラス分類適応処理を用いた信号変換装置および方法 | |
JP4337186B2 (ja) | 画像情報変換装置および画像情報変換方法、学習装置および学習方法 | |
JP2000125268A (ja) | 画像データ変換装置及び画像データ変換方法 | |
KR100982625B1 (ko) | 정보 신호의 처리 장치 및 처리 방법 | |
JP2007251690A (ja) | 画像処理装置および方法、学習装置および方法、並びにプログラム | |
JP4131049B2 (ja) | 信号処理装置及び信号処理方法 | |
JP3669522B2 (ja) | 信号変換装置、信号変換方法、係数学習装置及び係数学習方法 | |
JP4062326B2 (ja) | 係数生成装置および方法 | |
JP4310847B2 (ja) | 画像情報変換装置および変換方法 | |
JP4168298B2 (ja) | データ変換装置及びデータ変換方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): JP KR US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): DE FR GB |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1999969880 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020007005851 Country of ref document: KR |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 09555245 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 1999969880 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 1020007005851 Country of ref document: KR |
|
WWG | Wipo information: grant in national office |
Ref document number: 1020007005851 Country of ref document: KR |