JP4135045B2 - Data processing apparatus, data processing method, and medium - Google Patents

Data processing apparatus, data processing method, and medium

Info

Publication number
JP4135045B2
Authority
JP
Japan
Prior art keywords
data
value
distance
teacher
distances
Prior art date
Legal status
Expired - Fee Related
Application number
JP16052899A
Other languages
Japanese (ja)
Other versions
JP2000348019A (en)
Inventor
Yoshinori Watanabe (渡邊義教)
Tetsujiro Kondo (近藤哲二郎)
Original Assignee
Sony Corporation (ソニー株式会社)
Priority date
Filing date
Publication date
Application filed by Sony Corporation
Priority to JP16052899A
Priority claimed from US09/587,865 (US6678405B1)
Publication of JP2000348019A
Application granted
Publication of JP4135045B2
Anticipated expiration
Legal status: Expired - Fee Related



Description

[0001]
BACKGROUND OF THE INVENTION
The present invention relates to a data processing apparatus, a data processing method, and a medium, and more particularly to a data processing apparatus, a data processing method, and a medium capable of improving processing performance when processing data such as image data.
[0002]
[Prior art]
For example, the applicant of the present application has previously proposed class classification adaptive processing as processing for improving the image quality and the like of an image.
[0003]
Class classification adaptive processing consists of class classification processing and adaptive processing: data is classified by the class classification processing based on its properties, and the adaptive processing is performed for each class. The adaptive processing is as follows.
[0004]
That is, in the adaptive processing, predicted values of the pixels of an original image (for example, an image containing no noise or blur) are obtained by a linear combination of the pixels constituting an input image (the image to be processed by the class classification adaptive processing; hereinafter referred to as input pixels as appropriate) and predetermined prediction coefficients. In this way, an image in which the blur or noise occurring in the input image has been improved can be obtained.
[0005]
Therefore, for example, the original image is used as teacher data, and an image obtained by superimposing noise on or blurring the original image is used as student data. Consider obtaining the predicted value E[y] of the pixel value y of a pixel constituting the original image (hereinafter referred to as an original pixel as appropriate) by a linear first-order combination model defined by a linear combination of the pixel values x1, x2, ... of several pieces of student data and predetermined prediction coefficients w1, w2, .... In this case, the predicted value E[y] can be expressed by the following equation.

  E[y] = w1x1 + w2x2 + ...   ... (1)
[0006]
In order to generalize Equation (1), a matrix W composed of the set of prediction coefficients w, a matrix X composed of the set of student data, and a matrix Y' composed of the set of predicted values E[y] are defined as follows.
[0007]
[Expression 1]

  X = | x11  x12  ...  x1n |      W = | w1 |      Y' = | E[y1] |
      | x21  x22  ...  x2n |          | w2 |           | E[y2] |
      |  :    :          : |          |  : |           |   :   |
      | xm1  xm2  ...  xmn |          | wn |           | E[ym] |

Then, the following observation equation holds.

  XW = Y'   ... (2)
[0008]
Here, the component xij of the matrix X represents the j-th student data in the i-th set of student data (the set of student data used for predicting the i-th teacher data yi), and the component wj of the matrix W represents the prediction coefficient by which the j-th student data in that set is multiplied. Further, yi represents the i-th teacher data, and E[yi] therefore represents the predicted value of the i-th teacher data.
[0009]
Consider applying the least squares method to this observation equation to obtain predicted values E[y] close to the pixel values y of the original pixels. In this case, a matrix Y composed of the set of true pixel values y of the original pixels serving as teacher data and a matrix E composed of the set of residuals e of the predicted values E[y] with respect to those pixel values y are defined as follows.
[0010]
[Expression 2]

  Y = | y1 |      E = | e1 |
      | y2 |          | e2 |
      |  : |          |  : |
      | ym |          | em |
From Equation (2), the following residual equation holds.

  XW = Y + E   ... (3)
[0011]
In this case, the prediction coefficients wi for obtaining predicted values E[y] close to the pixel values y of the original pixels can be obtained by minimizing the following square error.
[0012]
[Equation 3]

  e1^2 + e2^2 + ... + em^2
[0013]
Therefore, the prediction coefficients wi for which the derivative of the above square error with respect to wi becomes 0, that is, the prediction coefficients wi satisfying the following equation, are the optimum values for obtaining predicted values E[y] close to the pixel values y of the original pixels.
[0014]
[Expression 4]

  e1·∂e1/∂wi + e2·∂e2/∂wi + ... + em·∂em/∂wi = 0   (i = 1, 2, ..., n)   ... (4)
First, differentiating Equation (3) with respect to the prediction coefficients wi yields the following equation.
[0015]
[Equation 5]

  ∂ei/∂w1 = xi1,  ∂ei/∂w2 = xi2,  ...,  ∂ei/∂wn = xin   (i = 1, 2, ..., m)   ... (5)
From equations (4) and (5), equation (6) is obtained.
[0016]
[Formula 6]

  Σ ei·xi1 = 0,  Σ ei·xi2 = 0,  ...,  Σ ei·xin = 0   (Σ over i = 1 to m)   ... (6)
Further, considering the relationship among the student data x, the prediction coefficients w, the teacher data y, and the residuals e in the residual equation (3), the following normal equations can be obtained from Equation (6).
[0017]
[Expression 7]

  (Σ xi1·xi1)w1 + (Σ xi1·xi2)w2 + ... + (Σ xi1·xin)wn = Σ xi1·yi
  (Σ xi2·xi1)w1 + (Σ xi2·xi2)w2 + ... + (Σ xi2·xin)wn = Σ xi2·yi
                                :
  (Σ xin·xi1)w1 + (Σ xin·xi2)w2 + ... + (Σ xin·xin)wn = Σ xin·yi
                                                             ... (7)
  (each Σ denotes summation over i = 1 to m)
By preparing a certain number of pieces of student data x and teacher data y, the normal equations (7) can be set up in the same number as the number of prediction coefficients w to be obtained. Therefore, by solving Equation (7) (for this, the matrix composed of the coefficients applied to the prediction coefficients w must be regular), the optimum prediction coefficients w can be obtained. Equation (7) can be solved by, for example, the sweep-out method (Gauss-Jordan elimination).
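As an illustration of how Equations (1) and (7) fit together, the following sketch builds the normal equations from pairs of student-data taps and teacher values and solves them for the prediction coefficients of one class. It is a minimal sketch under the assumption that the data are held in NumPy arrays; the function names and array shapes are illustrative, not taken from the patent.

```python
import numpy as np

def learn_prediction_coefficients(student_taps, teacher_values):
    """Solve the normal equations (7) for one class.

    student_taps   : (m, n) array; row i holds the student data x_i1 ... x_in
    teacher_values : (m,) array of teacher (original) pixel values y_i
    Returns the n prediction coefficients w_1 ... w_n.
    """
    X = np.asarray(student_taps, dtype=np.float64)
    y = np.asarray(teacher_values, dtype=np.float64)
    A = X.T @ X          # left-hand sums:  sum_i x_ij * x_ik
    b = X.T @ y          # right-hand sums: sum_i x_ij * y_i
    # As noted in the text, A must be regular (non-singular) for Equation (7)
    # to have a unique solution; solving it corresponds to elimination.
    return np.linalg.solve(A, b)

def predict(prediction_tap, coefficients):
    """Equation (1): predicted value E[y] as a linear combination of the tap."""
    return float(np.dot(prediction_tap, coefficients))
```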
[0018]
The adaptive processing thus obtains the optimum prediction coefficients w as described above, and then uses those prediction coefficients w to obtain, by Equation (1), predicted values E[y] close to the pixel values y of the original pixels.
[0019]
The adaptive processing differs from, for example, simple interpolation processing in that components that are not included in the input image but are included in the original image are reproduced. In other words, as far as Equation (1) alone is concerned, the adaptive processing is the same as interpolation processing using a so-called interpolation filter; however, since the prediction coefficients w corresponding to the tap coefficients of the interpolation filter are obtained by learning using the teacher data y, the components included in the original image can be reproduced. That is, an image with a high S/N can easily be obtained. For this reason, the adaptive processing can be said to have an image creation (resolution creation) function, and can therefore be used not only for obtaining predicted values of an original image from which the noise and blur of the input image have been removed, but also, for example, for converting a low-resolution or standard-resolution image into a high-resolution image.
[0020]
[Problems to be solved by the invention]
As described above, in the class classification adaptive processing, the adaptive processing is performed for each class. In the class classification performed in the preceding stage, a plurality of input pixels are extracted with respect to the original pixel whose predicted value is to be obtained (hereinafter referred to as the target original pixel as appropriate), and the target original pixel is classified based on their properties (for example, the pattern of the pixel values of the plurality of input pixels, the gradient of the pixel values, and so on). As the plurality of input pixels used for this class classification, input pixels at fixed positions as viewed from the target original pixel have conventionally been extracted.
[0021]
However, for example, when an input image with blur is converted into an image with the blur improved by the class classification adaptive processing, if input pixels at fixed positions as viewed from the target original pixel are used for classifying the target original pixel regardless of the degree of blur of the input image, it may be difficult to perform class classification that sufficiently reflects the properties of the target original pixel.
[0022]
That is, for example, when class classification adaptive processing is performed on an input image with a small degree of blur, classification that reflects the properties of the target original pixel can be performed, from the standpoint of image correlation, by using input pixels located relatively close to the target original pixel. Conversely, when class classification adaptive processing is performed on an input image with a large degree of blur, classification that reflects those properties can be performed, from the standpoint of the influence of the blur, by using input pixels located relatively far from the target original pixel.
[0023]
Therefore, if input pixels at fixed positions as viewed from the target original pixel are used for class classification of the target original pixel, classification that reflects the properties of the target original pixel may not be performed, and the processing performance of the class classification adaptive processing deteriorates; that is, an image in which the input image is sufficiently improved (here, an image in which the blur is sufficiently improved) may not be obtained.
[0024]
The present invention has been made in view of such a situation, and an object thereof is to improve the processing performance of, for example, class classification adaptive processing.
[0025]
[Means for Solving the Problems]
  The data processing apparatus according to the present invention comprises: determining means for reading, from the input data, a plurality of data consisting of center data corresponding to target output data, which is output data whose predicted value is to be obtained, and a plurality of peripheral data corresponding to one of a plurality of distances preset in the spatial direction or the temporal direction from the center data, while changing the distance, comparing, with a predetermined reference value, a statistic based on the variation of the values of the plurality of data corresponding to each of the distances about their average value, or on the differences between the values of the plurality of data, and determining the distance at which the statistic closest to the predetermined reference value is obtained as the extraction distance for extracting a plurality of data from the input data; extracting means for extracting, from the input data, a plurality of data corresponding to the extraction distance determined by the determining means, for the target output data; class classification means for classifying the target output data into one of a plurality of classes based on the pattern of the values of the plurality of data extracted by the extracting means, and outputting the corresponding class code; and prediction means for obtaining the predicted value of the target output data by a linear first-order combination of the plurality of data extracted by the extracting means and prediction coefficients corresponding to the class code, the prediction coefficients having been learned in advance, for each class code, on the basis of a plurality of data corresponding to the distance at which the statistic closest to the predetermined reference value is obtained, as coefficients that predict, by a linear first-order combination with student data corresponding to the input data, teacher data of higher quality than the student data.
The prediction coefficients are learned in advance as follows. Teacher data serving as a teacher for learning the prediction coefficients are subjected to specific processing corresponding to the input data to generate student data serving as a student. A plurality of data consisting of center data corresponding to target teacher data, which is the teacher data whose predicted value is to be obtained, and a plurality of peripheral data corresponding to one of the plurality of distances preset in the spatial direction or the temporal direction from the center data is read while the distance is changed, the statistic based on the variation of the values of the plurality of data corresponding to each distance about their average value, or on the differences between the values of the plurality of data, is compared with the predetermined reference value, and the distance at which the statistic closest to the predetermined reference value is obtained is determined as the extraction distance. For the target teacher data, a plurality of data corresponding to the determined extraction distance is extracted from the student data, the target teacher data is classified into one of the plurality of classes based on the pattern of the values of the extracted plurality of data, and the corresponding class code is output. Then, using the extracted plurality of data as the plurality of data corresponding to the distance at which the statistic closest to the predetermined reference value is obtained, prediction coefficients that predict, by a linear first-order combination with the student data corresponding to the input data, teacher data of higher quality than the student data are obtained for each class code.
[0027]
  The determining means can determine, as the extraction distance for extracting the plurality of data from the input data, the distance at which the standard deviation of the values of the plurality of data corresponding to each distance is closest to the predetermined reference value. The determining means can also handle the output data in units of blocks each containing a plurality of pieces of target output data, take as the statistic the average of the standard deviations of the plurality of data obtained for the pieces of target output data in a given block, and determine, by comparing that statistic with the predetermined reference value, the extraction distance used to extract the plurality of data from the input data for the target output data in that block. The predetermined reference value can be the statistic obtained from the plurality of data corresponding to the distance that allows optimum prediction when, for the plurality of data corresponding to each of the plurality of distances, prediction coefficients corresponding to the class codes are learned between the student data and the teacher data and the teacher data are predicted with the learned prediction coefficients.
[0028]
The data processing apparatus of the present invention can further include reference value storage means for storing a predetermined reference value.
[0030]
The data processing apparatus of the present invention can further include a prediction coefficient storage unit that stores a prediction coefficient for each class code.
[0031]
The input data and the output data can be image data. Further, in this case, the extraction means can extract, from the image data as the input data, pixels located spatially or temporally in the periphery of the pixel serving as the target output data.
[0032]
  In the data processing method according to the present invention, the data processing apparatus reads, from the input data, a plurality of data consisting of center data corresponding to target output data, which is output data whose predicted value is to be obtained, and a plurality of peripheral data corresponding to one of a plurality of distances preset in the spatial direction or the temporal direction from the center data, while changing the distance, compares, with a predetermined reference value, a statistic based on the variation of the values of the plurality of data corresponding to each of the distances about their average value, or on the differences between the values of the plurality of data, and determines the distance at which the statistic closest to the predetermined reference value is obtained as the extraction distance for extracting a plurality of data from the input data (determining means); extracts, from the input data, a plurality of data corresponding to the determined extraction distance, for the target output data (extracting means); classifies the target output data into one of a plurality of classes based on the pattern of the values of the extracted plurality of data and outputs the corresponding class code (class classification means); and obtains the predicted value of the target output data by a linear first-order combination of the extracted plurality of data and prediction coefficients corresponding to the class code (prediction means), the prediction coefficients having been learned in advance, for each class code, on the basis of a plurality of data corresponding to the distance at which the statistic closest to the predetermined reference value is obtained, as coefficients that predict, by a linear first-order combination with student data corresponding to the input data, teacher data of higher quality than the student data.
The prediction coefficients are learned in advance in the same manner as described above: teacher data serving as a teacher for learning the prediction coefficients are subjected to specific processing corresponding to the input data to generate student data serving as a student; the extraction distance is determined by comparing the statistic of the plurality of data, read while the distance is changed, with the predetermined reference value; for target teacher data, which is the teacher data whose predicted value is to be obtained, a plurality of data corresponding to the determined extraction distance is extracted from the student data, the target teacher data is classified into one of the plurality of classes, and the corresponding class code is output; and, using the extracted plurality of data, prediction coefficients that predict, by a linear first-order combination with the student data corresponding to the input data, teacher data of higher quality than the student data are obtained for each class code.
[0033]
  The program that the medium of the present invention causes a computer to execute includes: a determining step of reading, from the input data, a plurality of data consisting of center data corresponding to target output data, which is output data whose predicted value is to be obtained, and a plurality of peripheral data corresponding to one of a plurality of distances preset in the spatial direction or the temporal direction from the center data, while changing the distance, comparing, with a predetermined reference value, a statistic based on the variation of the values of the plurality of data corresponding to each of the distances about their average value, or on the differences between the values of the plurality of data, and determining the distance at which the statistic closest to the predetermined reference value is obtained as the extraction distance for extracting a plurality of data from the input data; an extracting step of extracting, from the input data, a plurality of data corresponding to the extraction distance determined in the determining step, for the target output data; a class classification step of classifying the target output data into one of a plurality of classes based on the pattern of the values of the plurality of data extracted in the extracting step, and outputting the corresponding class code; and a prediction step of obtaining the predicted value of the target output data by a linear first-order combination of the plurality of data extracted in the extracting step and prediction coefficients corresponding to the class code, the prediction coefficients having been learned in advance, for each class code, on the basis of a plurality of data corresponding to the distance at which the statistic closest to the predetermined reference value is obtained, as coefficients that predict, by a linear first-order combination with student data corresponding to the input data, teacher data of higher quality than the student data.
The prediction coefficients are learned in advance in the same manner as described above: teacher data serving as a teacher for learning the prediction coefficients are subjected to specific processing corresponding to the input data to generate student data serving as a student; the extraction distance is determined by comparing the statistic of the plurality of data, read while the distance is changed, with the predetermined reference value; for target teacher data, which is the teacher data whose predicted value is to be obtained, a plurality of data corresponding to the determined extraction distance is extracted from the student data, the target teacher data is classified into one of the plurality of classes, and the corresponding class code is output; and, using the extracted plurality of data, prediction coefficients that predict, by a linear first-order combination with the student data corresponding to the input data, teacher data of higher quality than the student data are obtained for each class code.
[0034]
  Another data processing apparatus according to the present invention comprises: generating means for subjecting teacher data serving as a teacher for learning prediction coefficients to specific processing corresponding to the input data, thereby generating student data serving as a student; determining means for reading a plurality of data consisting of center data corresponding to target output data, which is output data whose predicted value is to be obtained from the input data, and a plurality of peripheral data corresponding to one of a plurality of distances preset in the spatial direction or the temporal direction from the center data, while changing the distance, comparing, with a predetermined reference value, a statistic based on the variation of the values of the plurality of data corresponding to each distance about their average value, or on the differences between the values of the plurality of data, and determining the distance at which the statistic closest to the predetermined reference value is obtained as the extraction distance for extracting a plurality of data from the input data; extracting means for extracting, from the student data, a plurality of data corresponding to the extraction distance determined by the determining means, for target teacher data, which is the teacher data whose predicted value is to be obtained; class classification means for classifying the target teacher data into one of a plurality of classes based on the pattern of the values of the plurality of data extracted by the extracting means, and outputting the corresponding class code; and calculating means for obtaining, for each class code, using the plurality of data extracted by the extracting means as the plurality of data corresponding to the distance at which the statistic closest to the predetermined reference value is obtained, prediction coefficients that predict, by a linear first-order combination with the student data corresponding to the input data, teacher data of higher quality than the student data.
[0036]
  The determining means can determine, as the extraction distance for extracting the plurality of data from the input data, the distance at which the standard deviation of the values of the plurality of data corresponding to each distance is closest to the predetermined reference value. The determining means can also handle the data in units of blocks each containing a plurality of pieces of target teacher data, take as the statistic the average of the standard deviations of the plurality of data obtained for the pieces of target teacher data in a given block, and determine, by comparing that statistic with the predetermined reference value, the extraction distance used to extract the plurality of data from the student data for the target teacher data in that block.
[0037]
  The other data processing apparatus of the present invention can further include reference value calculation means for obtaining the predetermined reference value. In this case, the reference value calculation means can obtain, as the predetermined reference value, the statistic obtained from the plurality of data corresponding to the distance that allows the teacher data to be predicted optimally, as a result of predicting the teacher data with the prediction coefficients obtained for the class codes corresponding to the plurality of data corresponding to each of the plurality of distances.
[0039]
The teacher data and the student data can be image data. In this case, the extraction means can extract, from the image data as the student data, pixels located spatially or temporally in the periphery of the pixel serving as the target teacher data.
[0040]
  In another data processing method according to the present invention, the data processing apparatus subjects teacher data serving as a teacher for learning prediction coefficients to specific processing corresponding to the input data, thereby generating student data serving as a student. A plurality of data consisting of center data corresponding to target output data, which is output data whose predicted value is to be obtained from the input data, and a plurality of peripheral data corresponding to one of a plurality of distances preset in the spatial direction or the temporal direction from the center data is read while the distance is changed; a statistic based on the variation of the values of the plurality of data corresponding to each distance about their average value, or on the differences between the values of the plurality of data, is compared with a predetermined reference value; and the distance at which the statistic closest to the predetermined reference value is obtained is determined as the extraction distance for extracting a plurality of data from the input data. For target teacher data, which is the teacher data whose predicted value is to be obtained, a plurality of data corresponding to the determined extraction distance is extracted from the student data, the target teacher data is classified into one of a plurality of classes based on the pattern of the values of the extracted plurality of data, and the corresponding class code is output. Then, using the extracted plurality of data as the plurality of data corresponding to the distance at which the statistic closest to the predetermined reference value is obtained, prediction coefficients that predict, by a linear first-order combination with the student data corresponding to the input data, teacher data of higher quality than the student data are obtained for each class code.
[0041]
  A program that a computer is caused to execute by another medium of the present invention includes: a generating step of subjecting teacher data serving as a teacher for learning prediction coefficients to specific processing corresponding to the input data, thereby generating student data serving as a student; a determining step of reading a plurality of data consisting of center data corresponding to target output data, which is output data whose predicted value is to be obtained from the input data, and a plurality of peripheral data corresponding to one of a plurality of distances preset in the spatial direction or the temporal direction from the center data, while changing the distance, comparing, with a predetermined reference value, a statistic based on the variation of the values of the plurality of data corresponding to each distance about their average value, or on the differences between the values of the plurality of data, and determining the distance at which the statistic closest to the predetermined reference value is obtained as the extraction distance for extracting a plurality of data from the input data; an extracting step of extracting, from the student data, a plurality of data corresponding to the extraction distance determined in the determining step, for target teacher data, which is the teacher data whose predicted value is to be obtained; a class classification step of classifying the target teacher data into one of a plurality of classes based on the pattern of the values of the plurality of data extracted in the extracting step, and outputting the corresponding class code; and a calculating step of obtaining, for each class code, using the plurality of data extracted in the extracting step as the plurality of data corresponding to the distance at which the statistic closest to the predetermined reference value is obtained, prediction coefficients that predict, by a linear first-order combination with the student data corresponding to the input data, teacher data of higher quality than the student data.
[0042]
  Still another data processing apparatus of the present invention comprises: first determining means for reading a plurality of data consisting of center data corresponding to target output data, which is output data whose predicted value is to be obtained from the input data, and a plurality of peripheral data corresponding to one of a plurality of distances preset in the spatial direction or the temporal direction from the center data, while changing the distance, comparing, with a predetermined reference value, a statistic based on the variation of the values of the plurality of data corresponding to each distance about their average value, or on the differences between the values of the plurality of data, and determining the distance at which the statistic closest to the predetermined reference value is obtained as the extraction distance for extracting a plurality of data from the input data; first extracting means for extracting, from the input data, a plurality of data corresponding to the extraction distance determined by the first determining means, for the target output data; first class classification means for classifying the target output data into one of a plurality of classes based on the pattern of the values of the plurality of data extracted by the first extracting means, and outputting the corresponding class code; prediction means for obtaining the predicted value of the target output data by a linear first-order combination of the plurality of data extracted by the first extracting means and prediction coefficients corresponding to the class code, the prediction coefficients having been learned in advance, for each class code, on the basis of a plurality of data corresponding to the distance at which the statistic closest to the predetermined reference value is obtained, as coefficients that predict, by a linear first-order combination with student data corresponding to the input data, teacher data of higher quality than the student data; generating means for subjecting teacher data serving as a teacher for learning the prediction coefficients to specific processing corresponding to the input data, thereby generating student data serving as a student; second determining means for reading a plurality of data consisting of center data corresponding to the target output data and a plurality of peripheral data corresponding to one of the plurality of preset distances, while changing the distance, comparing the statistic with the predetermined reference value, and determining the distance at which the statistic closest to the predetermined reference value is obtained as the extraction distance for extracting a plurality of data from the input data; second extracting means for extracting, from the student data, a second plurality of data corresponding to the extraction distance determined by the second determining means, for target teacher data, which is the teacher data whose predicted value is to be obtained; second class classification means for classifying the target teacher data into one of a plurality of classes based on the pattern of the values of the plurality of data extracted by the second extracting means, and outputting the corresponding class code; and calculating means for obtaining, for each class code, using the second plurality of data extracted by the second extracting means as the plurality of data corresponding to the distance at which the statistic closest to the predetermined reference value is obtained, prediction coefficients that predict, by a linear first-order combination with the student data corresponding to the input data, teacher data of higher quality than the student data. The prediction coefficients used to obtain the predicted value of the target output data are thus learned in advance.
[0043]
  In the data processing apparatus, the data processing method, and the medium of the present invention, a plurality of data consisting of center data corresponding to target output data, which is output data whose predicted value is to be obtained from the input data, and a plurality of peripheral data corresponding to one of a plurality of distances preset in the spatial direction or the temporal direction from the center data is read while the distance is changed, a statistic based on the variation of the values of the plurality of data corresponding to each distance about their average value, or on the differences between the values of the plurality of data, is compared with a predetermined reference value, and the distance at which the statistic closest to the predetermined reference value is obtained is determined as the extraction distance for extracting a plurality of data from the input data. For the target output data, a plurality of data corresponding to the extraction distance is extracted from the input data, the target output data is classified into one of a plurality of classes based on the pattern of the values of the extracted plurality of data, and the corresponding class code is output. The predicted value of the target output data is then obtained by a linear first-order combination of the extracted plurality of data and prediction coefficients corresponding to the class code; the prediction coefficients have been learned in advance, for each class code, on the basis of a plurality of data corresponding to the distance at which the statistic closest to the predetermined reference value is obtained, as coefficients that predict, by a linear first-order combination with student data corresponding to the input data, teacher data of higher quality than the student data.
The prediction coefficients are learned in advance as follows. Teacher data serving as a teacher for learning the prediction coefficients are subjected to specific processing corresponding to the input data to generate student data serving as a student. A plurality of data consisting of center data corresponding to target teacher data, which is the teacher data whose predicted value is to be obtained, and a plurality of peripheral data corresponding to one of the plurality of preset distances is read while the distance is changed, the statistic is compared with the predetermined reference value, and the distance at which the statistic closest to the predetermined reference value is obtained is determined as the extraction distance. For the target teacher data, a plurality of data corresponding to the determined extraction distance is extracted from the student data, the target teacher data is classified into one of the plurality of classes based on the pattern of the values of the extracted plurality of data, and the corresponding class code is output. Then, using the extracted plurality of data, prediction coefficients that predict, by a linear first-order combination with the student data corresponding to the input data, teacher data of higher quality than the student data are obtained for each class code, whereby the prediction coefficients are learned in advance.
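To show how this learning proceeds for each class code, the sketch below accumulates the normal equations of Equation (7) separately per class and then solves each set. The classify and extract_tap helpers stand in for the class classification and extraction steps described above; everything here is an illustrative assumption, not the patented learning circuit.

```python
import numpy as np

def learn_per_class(samples, classify, extract_tap):
    """samples: iterable of (student_block, teacher_value) pairs.

    classify(tap)      -> class code (e.g. an ADRC bit pattern); assumed helper
    extract_tap(block) -> list of student-data values at the chosen
                          extraction distance; assumed helper
    Returns a dict mapping class code -> prediction coefficients.
    """
    A, b = {}, {}
    for student_block, teacher_value in samples:
        x = np.asarray(extract_tap(student_block), dtype=np.float64)
        code = classify(x)
        if code not in A:
            A[code] = np.zeros((x.size, x.size))
            b[code] = np.zeros(x.size)
        A[code] += np.outer(x, x)      # accumulate sum_i x_ij * x_ik per class
        b[code] += x * teacher_value   # accumulate sum_i x_ij * y_i  per class
    return {code: np.linalg.solve(A[code], b[code]) for code in A}
```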
[0044]
  In the other data processing apparatus, data processing method, and medium of the present invention, teacher data serving as a teacher for learning prediction coefficients are subjected to specific processing corresponding to the input data to generate student data serving as a student. A plurality of data consisting of center data corresponding to target output data, which is output data whose predicted value is to be obtained from the input data, and a plurality of peripheral data corresponding to one of a plurality of distances preset in the spatial direction or the temporal direction from the center data is read while the distance is changed, a statistic based on the variation of the values of the plurality of data corresponding to each distance about their average value, or on the differences between the values of the plurality of data, is compared with a predetermined reference value, and the distance at which the statistic closest to the predetermined reference value is obtained is determined as the extraction distance for extracting a plurality of data from the input data. Then, for target teacher data, which is the teacher data whose predicted value is to be obtained, a plurality of data corresponding to the determined extraction distance is extracted from the student data, the target teacher data is classified into one of a plurality of classes based on the pattern of the values of the extracted plurality of data, and the corresponding class code is output. Using the extracted plurality of data, prediction coefficients that predict, by a linear first-order combination with the student data corresponding to the input data, teacher data of higher quality than the student data are obtained for each class code.
[0045]
  In still another data processing apparatus of the present invention, a plurality of data consisting of center data corresponding to target output data, which is output data whose predicted value is to be obtained from the input data, and a plurality of peripheral data corresponding to one of a plurality of distances preset in the spatial direction or the temporal direction from the center data is read while the distance is changed, a statistic based on the variation of the values of the plurality of data corresponding to each distance about their average value, or on the differences between the values of the plurality of data, is compared with a predetermined reference value, and the distance at which the statistic closest to the predetermined reference value is obtained is determined as the extraction distance for extracting a plurality of data from the input data. For the target output data, a plurality of data corresponding to the extraction distance is extracted from the input data, the target output data is classified into one of a plurality of classes based on the pattern of the values of the extracted plurality of data, the corresponding class code is output, and the predicted value of the target output data is obtained by a linear first-order combination of the extracted plurality of data and prediction coefficients corresponding to the class code. 
On the other hand, teacher data serving as a teacher for learning the prediction coefficients are subjected to specific processing corresponding to the input data to generate student data serving as a student, and the extraction distance is determined in the same manner by comparing, with the predetermined reference value, the statistic of a plurality of data read while the distance is changed. For target teacher data, which is the teacher data whose predicted value is to be obtained, a plurality of data corresponding to the determined extraction distance is extracted from the student data, the target teacher data is classified into one of a plurality of classes based on the pattern of the values of the extracted plurality of data, the corresponding class code is output, and, using the extracted plurality of data as the plurality of data corresponding to the distance at which the statistic closest to the predetermined reference value is obtained, prediction coefficients that predict, by a linear first-order combination with the student data corresponding to the input data, teacher data of higher quality than the student data are obtained for each class code. The prediction coefficients obtained in this way are used to obtain the predicted value of the target output data.
[0046]
DETAILED DESCRIPTION OF THE INVENTION
FIG. 1 shows a configuration example of an embodiment of an image processing apparatus to which the present invention is applied.
[0047]
In this image processing apparatus, for example, when a blurred image is input as the input image, class classification adaptive processing is applied to the input image, so that an image in which the blur is sufficiently improved (a blur-improved image) is output regardless of the degree of blur of the input image.
[0048]
That is, the image processing apparatus includes a frame memory 1, a class tap generation circuit 2, a prediction tap generation circuit 3, a class classification circuit 4, a coefficient RAM (Random Access Memory) 5, a prediction calculation circuit 6, and a tap determination circuit 7, and an input image to be subjected to blur improvement is input thereto.
[0049]
The frame memory 1 temporarily stores the input image input to the image processing apparatus, for example, in units of frames. In the present embodiment, the frame memory 1 can store a plurality of frames of the input image by bank switching, whereby the processing can be performed in real time even when the input image input to the image processing apparatus is a moving image.
[0050]
The class tap generation circuit 2 takes, as the target original pixel, an original pixel whose predicted value is to be obtained by the class classification adaptive processing (here, a pixel of an ideal image without blur, in which the blur has been completely removed from the input image), extracts from the input image stored in the frame memory 1, in accordance with the tap information from the tap determination circuit 7, the input pixels used for class classification of the target original pixel, and outputs them to the class classification circuit 4 as a class tap.
[0051]
The prediction tap generation circuit 3 extracts an input pixel used for obtaining a predicted value of the target original pixel in the prediction calculation circuit 6 from the input image stored in the frame memory 1 according to the tap information from the tap determination circuit 7. This is supplied to the prediction calculation circuit 6 as a prediction tap.
[0052]
The class classification circuit 4 classifies the target original pixel based on the class tap from the class tap generation circuit 2, and supplies the class code corresponding to the resulting class to the coefficient RAM 5 as an address. Specifically, the class classification circuit 4 subjects the class tap from the class tap generation circuit 2 to, for example, 1-bit ADRC (Adaptive Dynamic Range Coding) processing, and outputs the resulting ADRC code to the coefficient RAM 5 as the class code.
[0053]
Here, in K-bit ADRC processing, the maximum value MAX and the minimum value MIN of the input pixels constituting the class tap are detected, DR = MAX − MIN is taken as the local dynamic range of the set, and the input pixels constituting the class tap are requantized to K bits based on this dynamic range DR. That is, the minimum value MIN is subtracted from the pixel value of each pixel constituting the class tap, and the result is divided (quantized) by DR/2^K. Therefore, when the class tap is subjected to 1-bit ADRC processing, the pixel value of each input pixel constituting the class tap becomes 1 bit. A bit string obtained by arranging, in a predetermined order, the 1-bit pixel values of the pixels constituting the class tap obtained in this way is output as the ADRC code.
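A small sketch of the K-bit ADRC computation just described, assuming the class tap is given as a flat list of pixel values (K = 1 corresponds to the processing performed by the class classification circuit 4); the function name and the packing of the bits into an integer are illustrative choices.

```python
def adrc_class_code(class_tap, k_bits=1):
    """Requantize each tap pixel to k_bits and pack the results into a code.

    class_tap : list of input pixel values forming the class tap
    Returns an integer whose bits are the requantized pixel values in order.
    """
    mn, mx = min(class_tap), max(class_tap)
    dr = max(mx - mn, 1)                      # local dynamic range DR = MAX - MIN
    levels = 1 << k_bits                      # 2^K quantization levels
    code = 0
    for value in class_tap:
        q = int((value - mn) * levels / dr)   # subtract MIN, divide by DR / 2^K
        q = min(q, levels - 1)                # keep the top value within K bits
        code = (code << k_bits) | q
    return code
```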
[0054]
The coefficient RAM 5 stores the prediction coefficients for each class obtained by learning performed in a learning apparatus described later. When a class code is supplied from the class classification circuit 4, the coefficient RAM 5 reads out the prediction coefficients stored at the address corresponding to that class code and supplies them to the prediction calculation circuit 6.
[0055]
The prediction calculation circuit 6 uses the prediction coefficients w1, w2, ... for the class of the target original pixel supplied from the coefficient RAM 5 and the prediction tap from the prediction tap generation circuit 3 (the pixel values x1, x2, ... of the pixels constituting it) to perform the calculation shown in Equation (1), thereby obtaining the predicted value E[y] of the target original pixel y, and outputs it as the pixel value of a pixel with the blur improved.
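In code terms, the combination of the coefficient RAM 5 and the prediction calculation circuit 6 amounts to a table lookup keyed by the class code followed by the linear combination of Equation (1). A minimal sketch, assuming the per-class coefficients are kept in a dictionary standing in for the coefficient RAM:

```python
def predict_pixel(class_code, prediction_tap, coefficient_table):
    """Look up the coefficients for the class and apply Equation (1).

    coefficient_table : dict mapping class code -> list of prediction coefficients
    prediction_tap    : list of input pixel values x1, x2, ...
    """
    w = coefficient_table[class_code]
    return sum(wi * xi for wi, xi in zip(w, prediction_tap))
```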
[0056]
The tap determination circuit 7 determines the plurality of input pixels constituting the class tap and the prediction tap based on a statistic of the input image stored in the frame memory 1, and supplies information on the plurality of input pixels constituting the class tap and the prediction tap (hereinafter referred to as tap information as appropriate) to the class tap generation circuit 2 and the prediction tap generation circuit 3.
[0057]
That is, the tap determination circuit 7 basically outputs tap information such that the class tap generation circuit 2 and the prediction tap generation circuit 3 each configure, for example, a square class tap or prediction tap of 3 × 3 pixels centered on the input pixel at the position of the target original pixel. However, the tap information is generated such that the interval between the pixels constituting the tap (hereinafter referred to as the tap width as appropriate) differs depending on the statistic of the input image.
[0058]
Specifically, for example, when the statistic of the input image has a certain value, the tap determination circuit 7 outputs tap information for configuring a tap composed of 3 × 3 pixels centered on the center pixel with a tap width of 0 (the interval between the pixels constituting the tap is 0), as illustrated in the corresponding figure. Further, for example, when the statistic of the input image has another value, the tap determination circuit 7 outputs tap information for configuring a tap composed of 3 × 3 pixels centered on the center pixel with a tap width of 1 (the interval between the pixels constituting the tap is one pixel or one frame), as illustrated in the corresponding figure.
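The following sketch illustrates the two roles of the tap determination circuit 7 described above: building a 3 × 3 tap with a given tap width around a center pixel, and picking the tap width whose statistic (here the standard deviation of the tap values, one of the statistics mentioned in this specification) is closest to a reference value. The candidate widths, the lack of boundary handling, and the function names are assumptions made for the sketch, not the exact circuit behaviour.

```python
import numpy as np

def make_tap(image, y, x, tap_width):
    """Return the 3x3 tap centered at (y, x); adjacent tap pixels are
    tap_width + 1 apart, so tap width 0 means directly adjacent pixels."""
    step = tap_width + 1
    return [image[y + dy * step, x + dx * step]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

def choose_tap_width(image, y, x, reference_value, candidate_widths=(0, 1, 2, 3)):
    """Pick the tap width whose tap standard deviation is closest to the reference."""
    best_width, best_gap = candidate_widths[0], float("inf")
    for width in candidate_widths:
        std = float(np.std(make_tap(image, y, x, width)))  # spread about the mean
        gap = abs(std - reference_value)
        if gap < best_gap:
            best_width, best_gap = width, gap
    return best_width
```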
[0059]
Next, with reference to the flowchart of FIG. 4, the blur improvement process for improving the blur of the input image performed in the image processing apparatus of FIG. 1 will be described.
[0060]
Input images (moving images) to be subjected to the blur improvement processing are sequentially supplied to the frame memory 1 in units of frames, and the input images supplied in units of frames in this way are sequentially stored in the frame memory 1.
[0061]
In step S1, the tap determination circuit 7 determines the plurality of input pixels constituting the tap based on the statistic of the input image stored in the frame memory 1, and outputs tap information regarding the plurality of input pixels to the class tap generation circuit 2 and the prediction tap generation circuit 3.
[0062]
When the class tap generation circuit 2 or the prediction tap generation circuit 3 receives the tap information from the tap determination circuit 7, in step S2 it reads from the frame memory 1, in accordance with that tap information, the plurality of input pixels making up the class tap or prediction tap for the original pixel of interest whose predicted value is to be obtained, and configures the class tap or the prediction tap. The class tap or prediction tap is supplied to the class classification circuit 4 or the prediction calculation circuit 6, respectively.
[0063]
When the class classification circuit 4 receives the class tap from the class tap generation circuit 2, in step S3 it classifies the original pixel of interest based on the class tap and outputs the resulting class code to the coefficient RAM 5 as an address. In step S4, the coefficient RAM 5 reads out the prediction coefficient stored at the address corresponding to the class code from the class classification circuit 4 and supplies it to the prediction calculation circuit 6.
[0064]
In step S5, the prediction arithmetic circuit 6 performs the calculation shown in Equation (1) using the prediction tap from the prediction tap generation circuit 3 and the prediction coefficient from the coefficient RAM 5, thereby obtaining the predicted value E[y] of the original pixel y of interest, that is, here, a pixel with improved blur, and the process proceeds to step S6. In step S6, the prediction calculation circuit 6 outputs the predicted value E[y] of the original pixel y of interest obtained in step S5 as the blur-improved pixel value of the input pixel located at the same position as the original pixel of interest, and the process proceeds to step S7.
[0065]
In step S7, it is determined whether all original pixels in a predetermined block, described later, have been processed as the original pixel of interest. If it is determined that they have not yet all been processed, the process returns to step S2, an original pixel in the block that has not yet been set as the original pixel of interest is newly set as the original pixel of interest, and the same processing is repeated thereafter. Accordingly, in the present embodiment, taps are configured based on the same tap information for the original pixels in the same block. That is, for the original pixels in the same block, taps are formed from the input pixels at the same positions as viewed from each original pixel.
[0066]
If it is determined in step S7 that all the original pixels in the predetermined block have been processed as the original pixel of interest, the process proceeds to step S8, where it is determined whether there is a block to be processed next, that is, whether an input image corresponding to the block to be processed next is stored in the frame memory 1. If it is determined in step S8 that such an input image is stored in the frame memory 1, the process returns to step S1, and the same processing is repeated thereafter. Accordingly, tap information is newly determined for the block to be processed next, and taps are configured according to that tap information.
[0067]
On the other hand, when it is determined in step S8 that an input image corresponding to the block to be processed next is not stored in the frame memory 1, the blur improvement process is terminated.
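The per-block flow of FIG. 4 can be sketched roughly as follows (Python). The tap-gathering helper is passed in as a parameter (for example, the build_tap sketch shown earlier), the 1-bit-per-pixel class code is only an illustration of classifying by the pattern of tap values, and coefficient_table stands in for the contents of the coefficient RAM 5; none of these names come from the patent.

import numpy as np

def classify(tap):
    # Illustrative class code: one bit per tap pixel, set when the pixel
    # value is at or above the tap mean (the exact coding scheme used in
    # the class classification circuit 4 is not reproduced here).
    bits = (tap >= tap.mean()).astype(int)
    return int("".join(map(str, bits)), 2)

def improve_block(frame, block_pixels, tap_width, coefficient_table, build_tap):
    # One pass of steps S2 to S7 for the pixels of a block whose tap width
    # has already been decided in step S1; here the class tap and the
    # prediction tap are the same, as in this embodiment.
    output = {}
    for (cy, cx) in block_pixels:
        tap = build_tap(frame, cy, cx, tap_width)
        w = coefficient_table[classify(tap)]
        output[(cy, cx)] = float(np.dot(w, tap))   # Equation (1)
    return output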
[0068]
Next, FIG. 5 shows a configuration example of the tap determination circuit 7 of FIG. 1.
[0069]
When an original pixel in the predetermined block is set as the original pixel of interest, the reading unit 11 reads out from the frame memory 1 (FIG. 1) the input pixels serving as the tap for that original pixel of interest and supplies them to the standard deviation calculation unit 12.
[0070]
That is, here, for example, one frame of the original image, one of several areas into which a frame is divided, or several frames (for example, the frames from immediately after a scene change to immediately before the next scene change) is treated as one block, and tap information is determined for each block. When an original pixel in a block is set as the original pixel of interest, the reading unit 11 reads from the frame memory 1 the input pixels constituting a tap whose tap width follows the control of the determination unit 13, and supplies them to the standard deviation calculation unit 12.
[0071]
The standard deviation calculation unit 12 calculates, as the statistic of the tap from the reading unit 11, the standard deviation of the pixel values of the input pixels constituting the tap. Further, for each block, the standard deviation calculation unit 12 calculates the average value of the standard deviations of taps having the same tap width and supplies this average value to the determination unit 13 as the evaluation value of that tap width for the block. That is, if a block is represented by i and a tap width by j, the standard deviation calculation unit 12 calculates the following expression as the evaluation value score(i, j) of the taps of tap width j configured for the original pixels in block #i, and supplies it to the determination unit 13.
[0072]
[Equation 8]
score(i, j) = (1/M) Σ_{m=1}^{M} sqrt( (1/K) Σ_{k=1}^{K} (V_{m,j,k} - mean_{m,j})^2 )
Here, in Equation (8), M represents the number of original pixels in block #i, and K represents the number of input pixels constituting a tap. V_{m,j,k} represents the pixel value of the kth input pixel in the tap of tap width j configured for the mth original pixel in block #i, and mean_{m,j} represents the average value of the tap of tap width j configured for the mth original pixel in block #i (the average of the pixel values of the input pixels constituting that tap). Therefore, mean_{m,j} equals (V_{m,j,1} + V_{m,j,2} + ... + V_{m,j,K}) / K.
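A minimal sketch of this evaluation value (Python; taps is assumed to be an M x K array holding, for one block and one tap width, the K tap pixel values gathered for each of the M original pixels):

import numpy as np

def block_score(taps):
    # score(i, j): the average, over the original pixels of the block, of the
    # standard deviation of the pixel values forming each tap (Equation (8)).
    taps = np.asarray(taps, dtype=np.float64)
    means = taps.mean(axis=1, keepdims=True)            # mean_{m,j} per tap
    stds = np.sqrt(((taps - means) ** 2).mean(axis=1))  # standard deviation per tap
    return float(stds.mean())                           # average over the block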
[0073]
The determination unit 13 compares the evaluation value score(i, j) from the standard deviation calculation unit 12 with a predetermined reference value set in the reference value setting unit 14, and controls the reading unit 11 based on the comparison result. Furthermore, based on the comparison result, the determination unit 13 determines the tap width to be used when taps are configured for the original pixels in the block containing the original pixel of interest (hereinafter referred to as the block of interest as appropriate), and outputs that tap width as tap information.
[0074]
The reference value setting unit 14 holds, as the reference value to be compared with the output of the standard deviation calculation unit 12, the standard deviation of the pixel values of the input pixels (student data) constituting a tap when the blur of the input image is most improved (when the result of the class classification adaptive processing of the input pixels is closest to the original pixels corresponding to those input pixels). How this reference value is obtained will be described later.
[0075]
Next, with reference to the flowchart of FIG. 6, a tap determination process for determining the tap structure performed in the tap determination circuit 7 of FIG. 5 will be described.
[0076]
In the tap determination process, when one of the original pixels in the block of interest is first set as the original pixel of interest, in step S11 the reading unit 11 reads from the frame memory 1 (FIG. 1) the input pixels for configuring, for that original pixel of interest, a tap having a tap width of, for example, 0, and supplies them to the standard deviation calculation unit 12.
[0077]
Then, the process proceeds to step S12, where the standard deviation calculation unit 12 obtains the standard deviation of the tap formed by the input pixels from the reading unit 11, and the process proceeds to step S13. In step S13, it is determined whether all original pixels in the block of interest have been processed as the original pixel of interest. If it is determined that they have not, the process returns to step S11, an original pixel in the block of interest that has not yet been set as the original pixel of interest is newly set as the original pixel of interest, and the same processing is repeated thereafter.
[0078]
If it is determined in step S13 that all original pixels in the block of interest have been processed as the original pixel of interest, the process proceeds to step S14, where the standard deviation calculation unit 12 calculates the average value of the standard deviations of the taps obtained for the respective original pixels in the block of interest.
[0079]
That is, if the block of interest is represented by block #i and the tap width of the taps configured for the original pixels of the block of interest #i is represented by j, then in step S14 the score(i, j) shown in Equation (8) is calculated. This score(i, j) is supplied to the determination unit 13.
[0080]
In step S15, the determination unit 13 determines whether the score(i, j) from the standard deviation calculation unit 12 is close to the reference value set in the reference value setting unit 14. If it is determined that it is not close, that is, if the average value of the standard deviations of the taps of tap width j configured for the original pixels of the block of interest #i is not close to the reference value, the process proceeds to step S16, where the determination unit 13 controls the reading unit 11 so as to change the tap width j, and the process returns to step S11.
[0081]
In this case, in step S11, the reading unit 11 sets an original pixel of the block of interest as the original pixel of interest, reads from the frame memory 1 the input pixels for configuring, for that original pixel of interest, a tap having a tap width of j + 1, and thereafter the same processing is repeated.
[0082]
On the other hand, if it is determined in step S15 that the score(i, j) from the standard deviation calculation unit 12 is close to the reference value set in the reference value setting unit 14, that is, if the average value of the standard deviations of the taps of tap width j configured for the original pixels of the block of interest #i is close to the reference value, the process proceeds to step S17, where the determination unit 13 outputs the tap width j as tap information, and the process returns.
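The loop of steps S11 to S17 can be sketched as follows (Python). Here score_fn(j) stands for computing score(i, j) for the block of interest (for example with the block_score sketch above), and the stopping rule shown, which simply scans candidate widths and returns the one whose score is closest to the reference value, is one possible reading of "closest to the reference value", not the patent's exact wording:

def determine_tap_width(score_fn, reference_value, max_width=10):
    # Try tap widths 0, 1, ..., max_width and return the one whose block
    # evaluation value score(i, j) is closest to the reference value.
    best_j = 0
    best_gap = abs(score_fn(0) - reference_value)
    for j in range(1, max_width + 1):
        gap = abs(score_fn(j) - reference_value)
        if gap < best_gap:
            best_j, best_gap = j, gap
    return best_j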
[0083]
As described above, in the class tap generation circuit 2 and the prediction tap generation circuit 3 of FIG. 1, taps are configured according to the tap information output for each block by the tap determination circuit 7 when an original pixel of that block is set as the original pixel of interest. Therefore, when tap information specifying a tap width j is output, taps of that tap width j are configured for the original pixels forming the block of interest.
[0084]
Next, FIG. 7 shows a configuration example of an embodiment of a learning apparatus that obtains the prediction coefficients for each class to be stored in the coefficient RAM 5 of FIG. 1 and the predetermined reference value to be set in the reference value setting unit 14 of FIG. 5.
[0085]
The frame memory 61 is supplied with an original image (here, an image without blur) serving as the teacher data y, for example in units of frames, and temporarily stores that original image. The blur adding circuit 62 reads the original image stored in the frame memory 61, which serves as the teacher data y in learning of the prediction coefficients, and adds blur to the original pixels constituting it (for example, by filtering with a low-pass filter), thereby generating a blurred image (hereinafter referred to as a blurred image as appropriate) as student data. This blurred image is supplied to the frame memory 63.
[0086]
The frame memory 63 temporarily stores the blurred image from the blur adding circuit 62.
[0087]
The frame memories 61 and 63 are configured similarly to the frame memory 1 of FIG. 1.
[0088]
The class tap generation circuit 64 or the prediction tap generation circuit 65 configures, from the pixels constituting the blurred image stored in the frame memory 63 (hereinafter referred to as blurred pixels as appropriate), a class tap or a prediction tap for the original pixel of interest according to the tap information from the tap determination circuit 72, in the same manner as the class tap generation circuit 2 or the prediction tap generation circuit 3 of FIG. 1, and supplies it to the class classification circuit 66 or the addition circuit 67, respectively.
[0089]
The class classification circuit 66 is configured in the same manner as the class classification circuit 4 of FIG. 1; it classifies the original pixel of interest based on the class tap from the class tap generation circuit 64 and gives the corresponding class code to the prediction tap memory 68 and the teacher data memory 70 as an address.
[0090]
The adder circuit 67 reads from the prediction tap memory 68 the stored value of the address corresponding to the class code output from the class classification circuit 66, and using that stored value and the blurred pixels (student data) constituting the prediction tap supplied from the prediction tap generation circuit 65, performs an operation corresponding to the summation (Σ) serving as the multiplier of the prediction coefficient w on the left side of the normal equation of Equation (7). Then, the adder circuit 67 stores the calculation result, in overwriting form, at the address of the prediction tap memory 68 corresponding to the class code output from the class classification circuit 66.
[0091]
The prediction tap memory 68 reads the stored value of the address corresponding to the class code output from the class classification circuit 66, supplies it to the adder circuit 67, and stores the output value of the adder circuit 67 at that address.
[0092]
The adder circuit 69 reads, as the teacher data y, the original pixel of interest from among the original pixels constituting the original image stored in the frame memory 61, reads from the teacher data memory 70 the stored value of the address corresponding to the class code output from the class classification circuit 66, and using that stored value and the teacher data (original pixel) y read from the frame memory 61, performs an operation corresponding to the summation (Σ) on the right side of the normal equation of Equation (7). Then, the adder circuit 69 stores the calculation result, in overwriting form, at the address of the teacher data memory 70 corresponding to the class code output from the class classification circuit 66.
[0093]
In addition, the adder circuits 67 and 69 also perform the multiplications appearing in Equation (7). That is, the adder circuit 67 performs the multiplications between the blurred pixels x constituting the prediction tap, and the adder circuit 69 performs the multiplications between the blurred pixels x constituting the prediction tap and the teacher data y. The adder circuit 69 therefore requires the blurred pixels x, which it reads from the frame memory 63.
[0094]
The teacher data memory 70 reads the stored value of the address corresponding to the class code output from the class classification circuit 66, supplies it to the adder circuit 69, and stores the output value of the adder circuit 69 at that address.
[0095]
The arithmetic circuit 71 sequentially reads out the stored values at the addresses corresponding to the respective class codes from the prediction tap memory 68 and the teacher data memory 70, builds the normal equation shown in Equation (7) from them, and solves it, thereby obtaining the prediction coefficients for each class.
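A hedged sketch of what the adder circuits 67 and 69 accumulate and the arithmetic circuit 71 solves is shown below (Python). For each class, the sums of products of prediction-tap values (the left side of the normal equation (7)) and of tap values with the teacher pixel y (the right side) are accumulated, and the prediction coefficients are obtained by solving the resulting linear system; the names and the use of a least-squares solver are assumptions, not the patent's wording.

import numpy as np

def learn_coefficients(samples_per_class):
    # samples_per_class: {class_code: list of (tap, teacher)} where tap is a
    # 1-D NumPy array of prediction-tap pixel values and teacher is the
    # corresponding original (teacher) pixel value.
    coefficients = {}
    for class_code, samples in samples_per_class.items():
        k = samples[0][0].size
        A = np.zeros((k, k))   # accumulated sums of x * x^T (left side)
        b = np.zeros(k)        # accumulated sums of x * y   (right side)
        for tap, teacher in samples:
            A += np.outer(tap, tap)
            b += tap * teacher
        # Solve the normal equation A w = b for the prediction coefficients w.
        coefficients[class_code] = np.linalg.lstsq(A, b, rcond=None)[0]
    return coefficients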
[0096]
The tap determination circuit 72 performs tap determination processing similar to that of the tap determination circuit 7 of FIG. 1, determines tap information relating to the taps to be generated by the class tap generation circuit 64 and the prediction tap generation circuit 65, and supplies it to those circuits. Further, the tap determination circuit 72 calculates the predetermined reference value used in the tap determination processing.
[0097]
Next, a learning process for obtaining a prediction coefficient and a predetermined reference value for each class performed in the learning apparatus of FIG. 7 will be described with reference to the flowchart of FIG.
[0098]
An original image (moving image) serving as teacher data is supplied to the learning apparatus in units of frames, and the original images are sequentially stored in the frame memory 61. The original image stored in the frame memory 61 is supplied to the blur adding circuit 62, where it is converted into a blurred image. Note that the blur adding circuit 62 generates blurred images having different degrees of blur, for example, for each frame.
[0099]
The blurred image obtained by the blur adding circuit 62 is sequentially supplied to and stored in the frame memory 63 as student data.
[0100]
When the blurred images corresponding to all the original images prepared for the learning process have been stored in the frame memory 63 as described above, in step S21 the tap determination circuit 72 obtains the reference value used for the tap determination process, as described later, and the process proceeds to step S22.
[0101]
In step S22, the tap determination circuit 72 determines the blurred pixels constituting the tap in the same manner as the tap determination circuit 7 of FIG. 1, and outputs tap information relating to those blurred pixels to the class tap generation circuit 64 and the prediction tap generation circuit 65.
[0102]
In step S23, the class tap generation circuit 64 or the prediction tap generation circuit 65 reads from the frame memory 63, in accordance with the tap information from the tap determination circuit 72, the blurred pixels for forming the class tap or the prediction tap for the original pixel of interest whose predicted value is to be obtained, and configures the class tap or the prediction tap, respectively. The class tap or prediction tap is supplied to the class classification circuit 66 or the addition circuit 67, respectively.
[0103]
In step S24, the class classification circuit 66 classifies the original pixel of interest using the class tap from the class tap generation circuit 64, in the same manner as the class classification circuit 4 of FIG. 1, and the resulting class code is given to the prediction tap memory 68 and the teacher data memory 70 as an address.
[0104]
In step S25, additions are performed for the prediction tap (student data) and the teacher data.
[0105]
That is, in step S25, the prediction tap memory 68 reads out the stored value of the address corresponding to the class code output from the class classification circuit 66 and supplies it to the adder circuit 67. Using that stored value and the blurred pixels constituting the prediction tap supplied from the prediction tap generation circuit 65, the adder circuit 67 performs an operation corresponding to the summation (Σ) serving as the multiplier of the prediction coefficient on the left side of the normal equation of Equation (7). Then, the adder circuit 67 stores the calculation result, in overwriting form, at the address of the prediction tap memory 68 corresponding to the class code output from the class classification circuit 66.
[0106]
Further, in step S25, the teacher data memory 70 reads out the stored value of the address corresponding to the class code output from the class classification circuit 66 and supplies it to the adder circuit 69. The adder circuit 69 reads the original pixel of interest from the frame memory 61 and the necessary blurred pixels from the frame memory 63, and using them together with the stored value supplied from the teacher data memory 70, performs an operation corresponding to the summation (Σ) on the right side of the normal equation of Equation (7). Then, the adder circuit 69 stores the calculation result, in overwriting form, at the address of the teacher data memory 70 corresponding to the class code output from the class classification circuit 66.
[0107]
Thereafter, the process proceeds to step S26, where it is determined whether all the original pixels in the block of interest have been processed as the original pixel of interest. If it is determined that they have not, the process returns to step S23, an original pixel in the block of interest that has not yet been set as the original pixel of interest is newly set as the original pixel of interest, and the same processing is repeated thereafter.
[0108]
If it is determined in step S26 that all the original pixels in the block of interest have been processed as the original pixel of interest, the process proceeds to step S27, where it is determined whether there is a block to be processed next, that is, whether the blurred image corresponding to the block to be processed next is stored in the frame memory 63. If it is determined in step S27 that such a blurred image is stored in the frame memory 63, the process returns to step S22, that block is newly set as the block of interest, one of its original pixels is newly set as the original pixel of interest, and the same processing is repeated thereafter.
[0109]
On the other hand, if it is determined in step S27 that no blurred image corresponding to a block to be processed next is stored in the frame memory 63, that is, when processing has been performed using all the original images prepared in advance for learning, the process proceeds to step S28, where the arithmetic circuit 71 sequentially reads out the stored values at the addresses corresponding to the respective class codes from the prediction tap memory 68 and the teacher data memory 70, builds the normal equation shown in Equation (7), and solves it, thereby obtaining the prediction coefficients for each class. Further, in step S29, the arithmetic circuit 71 outputs the obtained prediction coefficients for each class, and the learning process ends.
[0110]
In the prediction coefficient learning process as described above, a class may arise for which the number of normal equations necessary for obtaining the prediction coefficients cannot be obtained. For such a class, for example, default prediction coefficients can be output.
[0111]
Next, FIG. 9 shows a configuration example of the tap determination circuit 72 of FIG. 7.
[0112]
As shown in the figure, the tap determination circuit 72 includes a reading unit 81, a standard deviation calculation unit 82, a determination unit 83, and a reference value creation unit 84. Of these, the reading unit 81, the standard deviation calculation unit 82, and the determination unit 83 are configured in the same manner as the reading unit 11, the standard deviation calculation unit 12, and the determination unit 13 of FIG. 5, respectively. Therefore, the tap determination circuit 72 is configured in the same manner as the tap determination circuit 7 of FIG. 5 except that the reference value creation unit 84 is provided instead of the reference value setting unit 14.
[0113]
The reference value creation unit 84 obtains the reference value used for the tap determination process in step S21 of FIG. 8.
[0114]
In the tap determination circuit 72 configured as described above, as shown in the flowchart of FIG. 10, the same processing as in steps S11 to S17 of FIG. 6 is performed in steps S31 to S37, whereby tap information is determined for each block.
[0115]
Next, a reference value calculation method performed in the reference value creation unit 84 of FIG. 9 will be described with reference to FIGS. 11 and 12.
[0116]
The reference value creation unit 84 reads from the frame memory 61 all the original images (teacher data) prepared for the learning process, and reads from the frame memory 63 the blurred images (student data) obtained by adding blur to those original images. Furthermore, the reference value creation unit 84 divides the original image corresponding to the blurred image into blocks as described above, configures taps of different tap widths for each original pixel of each block, and obtains the relationship between the tap width and the average value of the standard deviations of the blurred pixels constituting the taps.
[0117]
That is, focusing on a certain block, the reference value creation unit 84 configures taps of tap width j while sequentially setting each original pixel of that block as the original pixel of interest. Furthermore, the reference value creation unit 84 calculates the standard deviation of the tap configured for each original pixel and calculates the average value of those standard deviations over the block. Specifically, if the total number of original pixels in block #i is M, the total number of blurred pixels constituting a tap is K, the pixel value of the kth blurred pixel of the tap of tap width j configured for the mth original pixel in block #i is V_{m,j,k}, and the average value of that tap (the average of the blurred pixels constituting it) is mean_{m,j}, then the reference value creation unit 84 calculates score(i, j) according to Equation (8) described above.
[0118]
The reference value creation unit 84 changes the tap width j over several values, configures the corresponding taps for each original pixel in block #i, and sequentially calculates the average value score(i, j) of their standard deviations.
[0119]
Further, the reference value creating unit 84 similarly calculates the average value score (i, j) of the standard deviations of the taps having the respective tap widths j for the other blocks #i.
[0120]
As described above, the reference value creation unit 84 obtains, for each block #i, the relationship between the tap width j and the average value score(i, j) of the standard deviations of the taps of that tap width, as shown in FIG. 11.
[0121]
FIG. 11 shows the relationship between the tap width j and the average value score(i, j) of the standard deviations of the taps for each of four blocks #1 to #4, where ○ indicates the relationship for block #1, × that for block #2, Δ that for block #3, and □ that for block #4.
[0122]
FIG. 11 shows the average value score (i, j) of the standard deviation of taps when the tap width j is 0 to 10, respectively.
[0123]
The average value score(i, j) of the standard deviations of the taps configured for the original pixels of block #i represents the degree of change in the pixel values of the blurred image for that block #i, that is, the degree of blur. Therefore, the larger (smaller) score(i, j) is, the smaller (larger) the degree of blur is.
[0124]
In parallel with obtaining the average value score(i, j) of the standard deviations of the taps as described above, the reference value creation unit 84 obtains, by class classification adaptive processing using the taps of each tap width j, predicted values of the original pixels.
[0125]
That is, the reference value creation unit 84 uses the original pixels in block #i and the blurred image, fixes the tap width j to a certain value, builds the normal equation of Equation (7), and solves it, thereby obtaining prediction coefficients for each class. Further, using those prediction coefficients for each class and the tap width j fixed to the same value, the reference value creation unit 84 calculates the linear first-order prediction expression of Equation (1), thereby obtaining a predicted value of each original pixel of block #i.
[0126]
The reference value creation unit 84 performs similar processing while sequentially changing the tap width j, and also performs similar processing for the other blocks. As a result, the predicted values e(i, j) obtained when the taps of each tap width j are used are obtained for the original pixels of each block #i. That is, for the original pixels of each block #i, the relationship between the tap width j of the taps used for the prediction and the predicted values e(i, j) of the original pixels is obtained.
[0127]
Then, the reference value creation unit 84 determines how close the predicted values e(i, j) obtained as described above are to the original image. That is, the reference value creation unit 84 obtains, as the S/N (Signal to Noise Ratio) of the predicted values e(i, j), for example, the reciprocal of the sum of absolute differences between the original pixels of block #i and the predicted values e(i, j) obtained using the taps of tap width j. FIG. 12 shows the S/N of the predicted values e(i, j) obtained for the four blocks #i shown in FIG. 11.
[0128]
In FIG. 12, for block #1, the S/N of the predicted values e(1, 0) obtained using taps of tap width 0 is the best, that is, the predicted values e(1, 0) are closest to the original image. The reference value creation unit 84 thus detects the block #i and the tap width j for which the predicted values e(i, j) closest to the original image are obtained. The block #i and the tap width j detected in this way are hereinafter referred to as the optimum block and the optimum tap width, respectively.
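A hedged sketch of this selection is shown below (Python). The S/N of each (block, tap width) pair is taken as the reciprocal of the sum of absolute differences between the original pixels and the predicted values, the pair with the best S/N is selected, and the block-average standard deviation score(i, j) of that pair becomes the reference value. The inputs are plain dictionaries of NumPy arrays and the names are illustrative only.

def optimum_standard_deviation(originals, predictions, scores):
    # originals[i]: array of original pixels of block i
    # predictions[(i, j)]: predicted pixels of block i using tap width j
    # scores[(i, j)]: score(i, j) computed as in Equation (8)
    best_pair, best_snr = None, -1.0
    for (i, j), predicted in predictions.items():
        sad = float(abs(originals[i] - predicted).sum())
        snr = float("inf") if sad == 0.0 else 1.0 / sad
        if snr > best_snr:
            best_pair, best_snr = (i, j), snr
    return scores[best_pair]   # the optimum standard deviation (reference value)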
[0129]
For the optimum block, the predicted values e(i, j) obtained when taps of the optimum tap width are configured are closest to the original image. Therefore, the standard deviation of the taps configured with the optimum tap width for the optimum block (hereinafter referred to as the optimum standard deviation as appropriate) is considered to be the most suitable value as the standard deviation of the taps configured for the other blocks.
[0130]
That is, from a statistical point of view, it is considered that a predicted value closest to the original image can be obtained when a tap having a tap width such that the standard deviation is the optimum standard deviation is formed for other blocks.
[0131]
Therefore, the reference value creation unit 84 obtains the optimum standard deviation and supplies this to the determination unit 83 as a predetermined reference value.
[0132]
Here, in FIG. 12, since the S/N of the predicted values e(1, 0) obtained using taps of tap width 0 is the best for block #1, the reference value creation unit 84 obtains, as the optimum standard deviation, the value 20 of score(1, j) (indicated by ○ in FIG. 11) obtained for block #1 in FIG. 11 when the tap width j is 0.
[0133]
Furthermore, in this case, for the other blocks, the tap widths j at which a standard deviation matching the optimum standard deviation is obtained are the values indicated by the thick arrows in FIG. 11. Therefore, in the tap determination processing of FIGS. 6 and 10, such values are determined as the tap widths j.
[0134]
Note that the tap width j takes an integer value, but as is apparent from FIG. 11, the tap width at which a standard deviation matching the optimum standard deviation is obtained (the values indicated by the thick arrows in FIG. 11) does not always take an integer value (rather, it is often not an integer value). Therefore, in the tap determination processing of FIGS. 6 and 10, when the tap width at which a standard deviation matching the optimum standard deviation is obtained is not an integer value, for example, the integer value closest to that non-integer tap width is determined as the final tap width.
[0135]
That is, in the tap determination process of FIG. 6 (and likewise in FIG. 10), as described above, it is determined whether the score(i, j) from the standard deviation calculation unit 12 is close to the reference value set in the reference value setting unit 14 (step S15), and if it is determined to be close, the tap width j at that time is output as tap information (step S17). Here, "score(i, j) is close to the reference value" means that, among the taps configured with tap widths j taking integer values, the standard deviation of the tap (in this embodiment, the average value score(i, j) of the standard deviations obtained for the block) is closest to the reference value.
[0136]
In the tap determination process of FIG. 6 or FIG. 10, when the tap width at which a standard deviation matching the optimum standard deviation is obtained is not an integer value, it is also possible to configure a tap whose average tap width equals that non-integer value. That is, when the tap width at which a standard deviation matching the optimum standard deviation is obtained is, for example, 1.5, it is possible to configure a tap in which the tap width between certain pixels is set to 1 and the tap width between other pixels is set to 2, so that the average tap width of the tap is 1.5.
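An illustrative sketch of this mixed-spacing idea is shown below (Python). The helper only computes a pattern of integer gaps whose average equals the desired non-integer tap width; it is an assumption about one way to realise the idea, not the patent's procedure.

def mixed_gaps(target_width, n_gaps):
    # Return n_gaps integer gaps whose average is (approximately) target_width,
    # using only the two integers that bracket the target, e.g. 1 and 2 for 1.5.
    lo = int(target_width)
    hi = lo + 1
    n_hi = round((target_width - lo) * n_gaps)   # how many gaps use the larger value
    return [hi] * n_hi + [lo] * (n_gaps - n_hi)

# Example: mixed_gaps(1.5, 2) returns [2, 1], whose average tap width is 1.5.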
[0137]
As described above, by obtaining the optimum standard deviation and configuring taps with a tap width at which a standard deviation matching the optimum standard deviation is obtained, the performance of the learning process and the blur improvement process can be improved.
[0138]
That is, in the learning process, class classification suited to the degree of blur of the image is performed, and prediction taps suited to the degree of blur are configured, so that prediction coefficients for each class suited to the degree of blur of the image can be obtained. In the blur improvement process, class classification suited to the degree of blur of the image is performed, prediction taps suited to the degree of blur are configured, and the predicted values of the original pixels are obtained using the prediction coefficients for each class suited to the degree of blur. As a result, a clear image (an image with the blur further improved) can be obtained regardless of the degree of blur of the image.
[0139]
Specifically, for example, as described above, when class classification adaptive processing is performed on an input image with a small degree of blur, class classification that reflects the properties of the original pixel of interest can be performed by using input pixels located relatively close to the original pixel of interest, whereas when class classification adaptive processing is performed on an input image with a large degree of blur, class classification that reflects those properties can be performed by using input pixels located relatively far from the original pixel of interest. In the case shown in FIG. 11, such class classification is performed.
[0140]
That is, in the case shown in FIG. 11, as described above, the larger score(i, j) is, the smaller the degree of blur, so the degree of blur increases in the order of blocks #1, #2, #3, and #4. Accordingly, in the case shown in FIG. 11, taps of tap widths 0, 1, 3, and 8 are configured for blocks #1, #2, #3, and #4, respectively.
[0141]
Therefore, according to the method of the present application, as the degree of blur increases, the class tap is configured using input pixels located farther from the original pixel of interest, so that class classification that sufficiently reflects the properties of the image is performed.
[0142]
Next, the series of processes described above can be performed by hardware or by software. When the series of processes is performed by software, a program constituting the software is installed in a computer incorporated in the image processing apparatus or the learning apparatus as dedicated hardware, or in a general-purpose computer or the like.
[0143]
Therefore, with reference to FIG. 13, a medium used for installing a program for executing the above-described series of processes in a computer and making it executable by the computer will be described.
[0144]
As shown in FIG. 13A, the program can be provided to the user in a state where it is installed in advance on a hard disk 102 as a recording medium built in the computer 101.
[0145]
Alternatively, as shown in FIG. 13B, the program can be stored temporarily or permanently in a recording medium such as a floppy disk 111, a CD-ROM (Compact Disc Read Only Memory) 112, an MO (Magneto-optical) disk 113, a DVD (Digital Versatile Disc) 114, a magnetic disk 115, or a semiconductor memory 116, and provided as package software.
[0146]
Further, as shown in FIG. 13C, the program can be transferred wirelessly from a download site 121 to a computer 123 via an artificial satellite 122 for digital satellite broadcasting, or transferred by wire to the computer 123 via a network 131 such as a LAN (Local Area Network) or the Internet, and stored in a built-in hard disk or the like.
[0147]
The medium in this specification means a broad concept including all these media.
[0148]
Further, in this specification, the steps describing the program provided by the medium need not necessarily be processed in time series in the order described in the flowcharts, but include processing executed in parallel or individually (for example, parallel processing or object-based processing).
[0149]
In addition, the class classification adaptive processing performs learning for obtaining prediction coefficients for each class using teacher data and student data, and obtains from input data predicted values corresponding to the teacher data by linear first-order prediction using those prediction coefficients and the input data. Therefore, prediction coefficients for obtaining a desired predicted value can be obtained depending on the teacher data and student data used for learning. That is, for example, by using a high-resolution image as the teacher data and an image with reduced resolution as the student data, prediction coefficients that improve the resolution can be obtained. Further, for example, by using an image not containing noise as the teacher data and an image obtained by adding noise to it as the student data, prediction coefficients that remove noise can be obtained. Therefore, in addition to the case of improving blur as described above, the present invention can be applied to, for example, the removal of noise, the improvement of resolution, and waveform equalization.
[0150]
In this embodiment, a moving image is the target of the class classification adaptive processing, but in addition to moving images, still images, audio, and signals reproduced from a recording medium (RF (Radio Frequency) signals), and the like, can also be targeted.
[0151]
Further, in the present embodiment, the class tap and the prediction tap are configured according to the same tap information and are therefore configured from the same pixels; however, the class tap and the prediction tap can also be configured according to different pieces of tap information, that is, with different configurations.
[0152]
In this embodiment, both the class tap and the prediction tap are configured according to the tap information so that the tap width is variable; however, it is also possible to fix the tap width of either the class tap or the prediction tap.
[0153]
Furthermore, in this embodiment, the tap information is determined based on the standard deviation of the tap, but the tap information can also be determined based on statistics other than the standard deviation of the tap. That is, the tap information can be determined based on, for example, the variance of the pixels constituting the tap, the sum of absolute differences between the pixels, the second-order sum of absolute differences (the sum of the absolute values of the differences between the pixels), and the like.
[0154]
In this embodiment, the tap width is changed for each block including a plurality of pixels. However, the tap width may be changed in units of pixels, for example.
[0155]
Furthermore, in the present embodiment, the image processing apparatus and the learning apparatus that learns the prediction coefficients for each class and the reference value used in the image processing apparatus are configured as separate apparatuses, but they can also be configured integrally. In that case, the learning apparatus can perform learning in real time, and the prediction coefficients used in the image processing apparatus can be updated in real time.
[0156]
In this embodiment, the coefficient RAM 5 stores the prediction coefficients for each class in advance; however, the prediction coefficients can also be supplied to the image processing apparatus together with the input image, for example. Similarly, the reference value can be supplied to the image processing apparatus together with the input image instead of being set in the reference value setting unit 14 (FIG. 5).
[0157]
Furthermore, the class tap and the prediction tap may be configured using pixels in either the spatial direction or the temporal direction.
[0158]
In the present embodiment, the tap configuration (the pixels constituting the tap) is changed by changing the tap width based on the statistic of the input image; however, the tap configuration can also be changed, for example, by changing the positions of the pixels constituting the tap.
[0159]
Furthermore, in the present embodiment, the predicted value of the original pixel is obtained by a linear expression, but the predicted value can also be obtained by a quadratic or higher order expression.
[0160]
In the learning process, as described above, additions corresponding to the summations (Σ) of Equation (7) are performed using the prediction taps; additions using prediction taps having different tap widths are performed between the corresponding pixels of those prediction taps.
[0161]
That is, as shown in FIG. 3, when the prediction tap consists of nine pixels arranged as 3 × 3 pixels centered on the central pixel, in this embodiment a prediction tap with a tap width of 0 (FIG. 3A) or a prediction tap with a tap width of 1 (FIG. 3B) is configured. In this case, for example, the upper-left pixel of the prediction tap with tap width 0 is added together with the upper-left pixel of the prediction tap with tap width 1.
[0162]
Further, class classification using class taps having different tap widths is performed in the same manner. Therefore, for example, when the pixel values of the pixels constituting the tap with tap width 0 shown in FIG. 3A are equal to the pixel values of the corresponding pixels of the tap with tap width 1 shown in FIG. 3B, the class classification results using the class taps of FIGS. 3A and 3B are the same (they are classified into the same class).
[0163]
[Effects of the Invention]
  According to the data processing apparatus, data processing method, and medium of the present invention, a predicted value close to the output data of interest can be obtained.
[0164]
  According to another data processing apparatus, data processing method, and medium of the present invention, a prediction coefficient capable of obtaining a predicted value close to the teacher data can be obtained.
[0165]
  According to still another data processing apparatus of the present invention, a predicted value close to the output data of interest can be obtained.
[Brief description of the drawings]
FIG. 1 is a block diagram showing a configuration example of an embodiment of an image processing apparatus to which the present invention is applied.
FIG. 2 is a diagram showing an outline of processing of the image processing apparatus of FIG. 1;
FIG. 3 is a diagram for explaining processing of a tap determination circuit 7 in FIG. 1;
FIG. 4 is a flowchart for explaining the blur improvement processing by the image processing apparatus of FIG. 1;
FIG. 5 is a block diagram illustrating a configuration example of the tap determination circuit 7 of FIG. 1;
FIG. 6 is a flowchart for explaining the tap determination processing by the tap determination circuit 7 of FIG. 5;
FIG. 7 is a block diagram illustrating a configuration example of an embodiment of a learning device to which the present invention has been applied.
FIG. 8 is a flowchart for explaining the learning processing by the learning apparatus of FIG. 7;
FIG. 9 is a block diagram illustrating a configuration example of the tap determination circuit 72 of FIG. 7;
FIG. 10 is a flowchart for explaining the tap determination processing by the tap determination circuit 72 of FIG. 9;
FIG. 11 is a diagram for explaining the processing of the reference value creation unit 84 of FIG. 9;
FIG. 12 is a diagram for explaining the processing of the reference value creation unit 84 of FIG. 9;
FIG. 13 is a diagram for explaining a medium to which the present invention is applied;
[Explanation of symbols]
1 frame memory, 2 class tap generation circuit, 3 prediction tap generation circuit, 4 class classification circuit, 5 coefficient RAM, 6 prediction arithmetic circuit, 7 tap determination circuit, 11 reading unit, 12 standard deviation calculation unit, 13 determination unit, 14 reference value setting unit, 61 frame memory, 62 blur addition circuit, 63 frame memory, 64 class tap generation circuit, 65 prediction tap generation circuit, 66 class classification circuit, 67 addition circuit, 68 prediction tap memory, 69 addition circuit, 70 teacher data memory, 71 arithmetic circuit, 72 tap determination circuit, 81 reading unit, 82 standard deviation calculation unit, 83 determination unit, 84 reference value creation unit, 101 computer, 102 hard disk, 103 semiconductor memory, 111 floppy disk, 112 CD-ROM, 113 MO disk, 114 DVD, 115 magnetic disk, 116 semiconductor memory, 121 download site, 122 artificial satellite, 123 computer, 131 network

Claims (20)

  1. A data processing device that processes input data and predicts output data for the input data,
    One of the center data corresponding to the target output data that is the output data for which a predicted value is to be obtained from the input data, and one of a plurality of distances preset in the spatial direction or the time direction from the center data A plurality of data having a plurality of peripheral data corresponding to the distances are read out while changing the distances, and variation of each value with respect to an average value of a plurality of data values corresponding to each of the plurality of types of distances, or By comparing the statistic based on the difference between the values of the plurality of data and a predetermined reference value, the distance at which the statistic closest to the predetermined reference value is obtained is determined by comparing the input data with a plurality of data Determining means for determining as an extraction distance for extracting
    Extracting means for extracting a plurality of data corresponding to the extraction distance determined by the determining means for the attention output data that is the output data for which a predicted value is to be obtained;
    A class that classifies the output data of interest into one of a plurality of classes based on a pattern of values of a plurality of data extracted by the extracting means, and outputs a corresponding class code Classification means;
    A prediction coefficient for predicting teacher data of higher quality than the student data by linear linear combination with student data corresponding to the input data as a plurality of data corresponding to the distance at which a statistic closest to the predetermined reference value is obtained is learned in advance for each class code based on a plurality of data corresponding to the distance, and a predicting means for obtaining a predicted value of the attention output data by linear linear combination of the plurality of data extracted by the extraction unit and the prediction coefficient corresponding to the class code,
    The teacher data to be a teacher for learning the prediction coefficient is subjected to specific processing according to the input data, and student data to be students is generated,
    One of the center data corresponding to the target output data that is the output data for which a predicted value is to be obtained from the input data, and one of a plurality of distances preset in the spatial direction or the time direction from the center data A plurality of data having a plurality of peripheral data corresponding to the distances are read out while changing the distances, and variation of each value with respect to an average value of a plurality of data values corresponding to each of the plurality of types of distances, or By comparing the statistic based on the difference between the values of the plurality of data and a predetermined reference value, the distance at which the statistic closest to the predetermined reference value is obtained is determined by comparing the input data with a plurality of data Is determined as the extraction distance to extract,
    A plurality of data corresponding to the determined extraction distance is extracted from the student data for the attention teacher data that is the teacher data for which a predicted value is to be obtained,
    Based on the extracted value patterns of the plurality of data, classify the attention teacher data into any one of a plurality of classes, and outputs a corresponding class code,
    Using the extracted plurality of data, the student is obtained by linear linear combination with the student data corresponding to the input data as the plurality of data corresponding to the distance from which the statistic closest to the predetermined reference value is obtained. A prediction coefficient for predicting higher quality teacher data than data is obtained for each class code based on a plurality of data corresponding to the distance.
    Accordingly, the data processing apparatus is characterized in that the prediction coefficient is learned in advance.
  2. The determining means obtains a standard deviation of the values of the plurality of data corresponding to each of the distances as the statistic, and based on the distance at which the standard deviation closest to the predetermined reference value is obtained, the input data The data processing apparatus according to claim 1, wherein an extraction distance for extracting a plurality of data is determined.
  3. The determining means sets the output data as block units each having a plurality of attention output data, and calculates an average of the standard deviations of the plurality of data obtained for each of the plurality of attention output data in a predetermined block. The extraction amount for extracting the plurality of data from the input data for the output data of interest in the block is determined by comparing the statistical amount with the predetermined reference value. The data processing apparatus according to claim 2.
  4. The predetermined reference value is obtained by learning a predetermined prediction coefficient corresponding to the class code between student data and teacher data for each of a plurality of data corresponding to the plurality of types of distances. The data processing apparatus according to claim 1, wherein the statistical amount is obtained from a plurality of data corresponding to the distance that can be optimally predicted as a result of predicting the teacher data.
  5. The data processing apparatus according to claim 1, further comprising a reference value storage unit that stores the predetermined reference value.
  6. The data processing apparatus according to claim 1, further comprising a prediction coefficient storage unit that stores the prediction coefficient for each class code.
  7. The data processing apparatus according to claim 1, wherein the input data and output data are image data.
  8. The data according to claim 7, wherein the extraction unit extracts, from the image data as the input data, pixels that are spatially or temporally adjacent to the pixel as the target output data. Processing equipment.
  9. A data processing method for a data processing apparatus that processes input data and predicts output data for the input data,
    The data processing device includes:
    One of the center data corresponding to the target output data that is the output data for which a predicted value is to be obtained from the input data, and one of a plurality of distances preset in the spatial direction or the time direction from the center data A plurality of data having a plurality of peripheral data corresponding to the distances are read out while changing the distances, and variation of each value with respect to an average value of a plurality of data values corresponding to each of the plurality of types of distances, or By comparing the statistic based on the difference between the values of the plurality of data and a predetermined reference value, the distance at which the statistic closest to the predetermined reference value is obtained is determined by comparing the input data with a plurality of data Determining means for determining as an extraction distance for extracting
    Extracting means for extracting a plurality of data corresponding to the extraction distance determined by the determining means for the attention output data that is the output data for which a predicted value is to be obtained;
    A class that classifies the output data of interest into one of a plurality of classes based on a pattern of values of a plurality of data extracted by the extracting means, and outputs a corresponding class code Classification means;
    A prediction coefficient for predicting teacher data of higher quality than the student data by linear linear combination with student data corresponding to the input data as a plurality of data corresponding to the distance at which a statistic closest to the predetermined reference value is obtained is learned in advance for each class code based on a plurality of data corresponding to the distance, and a prediction means for finding the predicted value of the target output data by linear linear combination of the plurality of data extracted by the extraction unit and the prediction coefficient corresponding to the class code,
    The determining means includes, from the input data, center data corresponding to the output data of interest which is the output data for which a predicted value is to be obtained, and a plurality of types of distances set in advance in the spatial direction or the time direction from the center data. A plurality of pieces of data having a plurality of peripheral data corresponding to one of the distances, respectively, while changing the distance, and each of the average values of a plurality of data corresponding to each of the plurality of types of distances The distance at which a statistic closest to the predetermined reference value is obtained by comparing a statistic based on a variation in values or a difference between the values of the plurality of data with a predetermined reference value is input to the distance. Determine the extraction distance to extract multiple data from the data,
    The extraction means extracts a plurality of data corresponding to the extraction distance determined by the determination means from the input data for attention output data that is the output data for which a predicted value is to be obtained,
    The class classification unit performs class classification to classify the output data of interest into any one of a plurality of classes based on a pattern of a plurality of data values extracted by the extraction unit, and correspondingly Output class code
    The predicting means learns in advance, for each class code and based on a plurality of data corresponding to the distance, a prediction coefficient for predicting teacher data of higher quality than the student data by linear linear combination with student data corresponding to the input data as a plurality of data corresponding to the distance at which a statistic closest to the predetermined reference value is obtained, and includes a step of obtaining a predicted value of the target output data by linear linear combination of the plurality of data extracted by the extraction means and the prediction coefficient corresponding to the class code,
    Teacher data serving as a teacher for learning the prediction coefficient is subjected to specific processing corresponding to the input data to generate student data serving as students,
    Center data corresponding to the output data of interest, which is the output data for which a predicted value is to be obtained, and a plurality of peripheral data each corresponding to one of a plurality of types of distances preset in the spatial direction or the time direction from the center data are read from the input data as a plurality of data while changing the distance; a statistic based on the variation of the values of the plurality of data corresponding to each of the plurality of types of distances with respect to their average value, or on the differences between the values of the plurality of data, is compared with a predetermined reference value; and the distance at which a statistic closest to the predetermined reference value is obtained is determined as an extraction distance for extracting a plurality of data from the input data,
    A plurality of data corresponding to the determined extraction distance are extracted from the student data for the teacher data of interest, which is the teacher data for which a predicted value is to be obtained,
    The teacher data of interest is classified into one of a plurality of classes on the basis of a pattern of the values of the extracted plurality of data, and a corresponding class code is output,
    Using the extracted plurality of data, a prediction coefficient for predicting teacher data of higher quality than the student data by a linear first-order combination with the student data corresponding to the input data, as the plurality of data corresponding to the distance at which a statistic closest to the predetermined reference value is obtained, is found for each class code on the basis of the plurality of data corresponding to that distance.
    The data processing method is thus characterized in that the prediction coefficient is learned in advance.
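For illustration only, the prediction side of the method above can be sketched in Python under several assumptions that the claims do not dictate: the input data is a single-channel image, the plurality of data read at a distance d is a cross-shaped tap made of the center pixel and its four neighbours offset by d, the statistic is the standard deviation of the tap values, and the class code is a 1-bit ADRC-like thresholding of the taps. The per-class prediction coefficients are assumed to have been learned beforehand (a learning sketch follows the corresponding learning claims below). All names are hypothetical.

    import numpy as np

    def taps_at_distance(image, y, x, d):
        # Center pixel plus four neighbours offset by distance d (an assumed tap shape).
        h, w = image.shape
        coords = [(y, x), (y - d, x), (y + d, x), (y, x - d), (y, x + d)]
        return np.array([image[min(max(r, 0), h - 1), min(max(c, 0), w - 1)]
                         for r, c in coords], dtype=np.float64)

    def choose_extraction_distance(image, y, x, distances, reference):
        # Pick the preset distance whose tap standard deviation is closest to the reference value.
        stds = [taps_at_distance(image, y, x, d).std() for d in distances]
        return distances[int(np.argmin([abs(s - reference) for s in stds]))]

    def class_code(taps):
        # 1-bit ADRC-style value pattern of the taps (an assumed classification rule).
        bits = (taps >= taps.mean()).astype(int)
        return int("".join(map(str, bits)), 2)

    def predict_pixel(image, y, x, distances, reference, coefficients):
        # Linear first-order combination of the extracted taps with the per-class coefficients.
        d = choose_extraction_distance(image, y, x, distances, reference)
        taps = taps_at_distance(image, y, x, d)
        return float(np.dot(coefficients[class_code(taps)], taps))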
  10. A medium for causing a computer to execute a program for processing input data and performing data processing for predicting output data for the input data,
    A determination step of reading from the input data, while changing the distance, a plurality of data consisting of center data corresponding to the output data of interest, which is the output data for which a predicted value is to be obtained, and a plurality of peripheral data each corresponding to one of a plurality of types of distances preset in the spatial direction or the time direction from the center data, comparing with a predetermined reference value a statistic based on the variation of the values of the plurality of data corresponding to each of the plurality of types of distances with respect to their average value or on the differences between the values of the plurality of data, and determining the distance at which a statistic closest to the predetermined reference value is obtained as an extraction distance for extracting a plurality of data from the input data;
    An extraction step of extracting, from the input data, a plurality of data corresponding to the extraction distance determined in the determination step for the output data of interest, which is the output data for which a predicted value is to be obtained;
    A class classification step of classifying the output data of interest into one of a plurality of classes on the basis of a pattern of the values of the plurality of data extracted in the extraction step, and outputting a corresponding class code; and
    A prediction step in which a prediction coefficient for predicting teacher data of higher quality than the student data by a linear first-order combination with the student data corresponding to the input data, as the plurality of data corresponding to the distance at which a statistic closest to the predetermined reference value is obtained, is learned in advance for each class code on the basis of the plurality of data corresponding to that distance, and a predicted value of the output data of interest is obtained by a linear first-order combination of the plurality of data extracted in the extraction step and the prediction coefficient corresponding to the class code, wherein
    Teacher data serving as a teacher for learning the prediction coefficient is subjected to specific processing corresponding to the input data to generate student data serving as students,
    Center data corresponding to the output data of interest, which is the output data for which a predicted value is to be obtained, and a plurality of peripheral data each corresponding to one of a plurality of types of distances preset in the spatial direction or the time direction from the center data are read from the input data as a plurality of data while changing the distance; a statistic based on the variation of the values of the plurality of data corresponding to each of the plurality of types of distances with respect to their average value, or on the differences between the values of the plurality of data, is compared with a predetermined reference value; and the distance at which a statistic closest to the predetermined reference value is obtained is determined as an extraction distance for extracting a plurality of data from the input data,
    A plurality of data corresponding to the determined extraction distance are extracted from the student data for the teacher data of interest, which is the teacher data for which a predicted value is to be obtained,
    The teacher data of interest is classified into one of a plurality of classes on the basis of a pattern of the values of the extracted plurality of data, and a corresponding class code is output,
    Using the extracted plurality of data, a prediction coefficient for predicting teacher data of higher quality than the student data by a linear first-order combination with the student data corresponding to the input data, as the plurality of data corresponding to the distance at which a statistic closest to the predetermined reference value is obtained, is found for each class code on the basis of the plurality of data corresponding to that distance.
    The medium is thus characterized in that it causes the computer to execute a program in which the prediction coefficient is learned in advance.
  11. A data processing device that processes input data and learns prediction coefficients used to predict output data for the input data,
    A generating unit that performs specific processing according to the input data on teacher data serving as a teacher for learning the prediction coefficient, and generates student data serving as students;
    Determining means for reading from the input data, while changing the distance, a plurality of data consisting of center data corresponding to the output data of interest, which is the output data for which a predicted value is to be obtained, and a plurality of peripheral data each corresponding to one of a plurality of types of distances preset in the spatial direction or the time direction from the center data, comparing with a predetermined reference value a statistic based on the variation of the values of the plurality of data corresponding to each of the plurality of types of distances with respect to their average value or on the differences between the values of the plurality of data, and determining the distance at which a statistic closest to the predetermined reference value is obtained as an extraction distance for extracting a plurality of data from the input data;
    Extraction means for extracting, from the student data, a plurality of data corresponding to the extraction distance determined by the determining means for the teacher data of interest, which is the teacher data for which a predicted value is to be obtained;
    Class classification means for classifying the teacher data of interest into one of a plurality of classes on the basis of a pattern of the values of the plurality of data extracted by the extraction means, and outputting a corresponding class code; and
    Calculation means for obtaining, using the plurality of data extracted by the extraction means, a prediction coefficient for predicting teacher data of higher quality than the student data by a linear first-order combination with the student data corresponding to the input data, as the plurality of data corresponding to the distance at which a statistic closest to the predetermined reference value is obtained, for each class code on the basis of the plurality of data corresponding to that distance; the data processing apparatus being characterized by comprising these means.
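As a non-authoritative sketch of how such a learning apparatus could work, the following Python fragment accumulates per-class normal equations and solves them by least squares. It reuses the hypothetical taps_at_distance, choose_extraction_distance, and class_code helpers from the prediction sketch above; the five-tap shape and the teacher/student image pair are assumptions made for illustration.

    import numpy as np
    from collections import defaultdict

    def learn_coefficients(teacher, student, distances, reference, n_taps=5):
        # Accumulate X^T X and X^T y per class code, then solve for the prediction coefficients.
        A = defaultdict(lambda: np.zeros((n_taps, n_taps)))
        b = defaultdict(lambda: np.zeros(n_taps))
        h, w = teacher.shape
        for y in range(h):
            for x in range(w):
                d = choose_extraction_distance(student, y, x, distances, reference)
                taps = taps_at_distance(student, y, x, d)
                c = class_code(taps)
                A[c] += np.outer(taps, taps)
                b[c] += taps * teacher[y, x]
        # lstsq tolerates classes with too few samples (singular A[c]).
        return {c: np.linalg.lstsq(A[c], b[c], rcond=None)[0] for c in A}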
  12. The data processing apparatus according to claim 11, wherein the determining means obtains, as the statistic, a standard deviation of the values of the plurality of data corresponding to each of the distances, and determines the extraction distance for extracting a plurality of data from the input data on the basis of the distance at which the standard deviation closest to the predetermined reference value is obtained.
  13. The data processing apparatus according to claim 12, wherein the determining means divides the output data into block units each containing a plurality of pieces of teacher data of interest, uses as the statistic an average of the standard deviations of the plurality of data obtained for each of the plurality of pieces of teacher data of interest in a given block, and determines, by comparing that statistic with the predetermined reference value, the extraction distance for extracting the plurality of data from the student data for the teacher data of interest in that block.
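A minimal sketch of the block-wise determination of claim 13, assuming the same hypothetical taps_at_distance helper: the tap standard deviations are averaged over every piece of teacher data of interest in a block, and one extraction distance is then shared by the whole block.

    import numpy as np

    def block_extraction_distance(student, block_coords, distances, reference):
        # block_coords: (y, x) positions of the teacher data of interest in one block.
        best_d, best_gap = distances[0], float("inf")
        for d in distances:
            stds = [taps_at_distance(student, y, x, d).std() for (y, x) in block_coords]
            gap = abs(float(np.mean(stds)) - reference)  # averaged statistic vs. reference value
            if gap < best_gap:
                best_d, best_gap = d, gap
        return best_d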
  14. The data processing apparatus according to claim 11, further comprising reference value calculation means for obtaining the predetermined reference value.
  15. The data processing apparatus according to claim 14, wherein the reference value calculation means predicts the teacher data with the prediction coefficients obtained for the class codes corresponding to the plurality of data for each of the plurality of types of distances, and obtains, as the predetermined reference value, a statistic found from the plurality of data corresponding to the distance at which the teacher data can be optimally predicted.
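The reference value calculation of claim 15 might be realized along the following lines, again only as an assumed sketch: per-class coefficients are taken to be available for every candidate distance, the teacher data is predicted with each of them, and the averaged tap standard deviation at the distance that predicts best becomes the reference value.

    import numpy as np

    def compute_reference_value(teacher, student, distances, coeffs_per_distance):
        # coeffs_per_distance[i]: per-class coefficients learned with distances[i] held fixed.
        h, w = teacher.shape
        errors, stats = [], []
        for d, coeffs in zip(distances, coeffs_per_distance):
            err, stds = 0.0, []
            for y in range(h):
                for x in range(w):
                    taps = taps_at_distance(student, y, x, d)
                    weights = coeffs.get(class_code(taps), np.zeros(len(taps)))
                    err += (float(np.dot(weights, taps)) - teacher[y, x]) ** 2
                    stds.append(taps.std())
            errors.append(err)
            stats.append(float(np.mean(stds)))
        # The statistic of the optimally predicted distance is used as the reference value.
        return stats[int(np.argmin(errors))]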
  16. The data processing apparatus according to claim 11, wherein the teacher data and student data are image data.
  17. The data processing apparatus according to claim 11, wherein the extraction means extracts, from the image data serving as the student data, pixels that are spatially or temporally adjacent to the pixel serving as the teacher data of interest.
  18. A data processing method for a data processing apparatus that processes input data and learns a prediction coefficient used to predict output data for the input data,
    The data processing device includes:
    A generating unit that performs specific processing according to the input data on teacher data serving as a teacher for learning the prediction coefficient, and generates student data serving as students;
    Determining means for reading from the input data, while changing the distance, a plurality of data consisting of center data corresponding to the output data of interest, which is the output data for which a predicted value is to be obtained, and a plurality of peripheral data each corresponding to one of a plurality of types of distances preset in the spatial direction or the time direction from the center data, comparing with a predetermined reference value a statistic based on the variation of the values of the plurality of data corresponding to each of the plurality of types of distances with respect to their average value or on the differences between the values of the plurality of data, and determining the distance at which a statistic closest to the predetermined reference value is obtained as an extraction distance for extracting a plurality of data from the input data;
    Extraction means for extracting, from the student data, a plurality of data corresponding to the extraction distance determined by the determining means for the teacher data of interest, which is the teacher data for which a predicted value is to be obtained;
    Class classification means for classifying the teacher data of interest into one of a plurality of classes on the basis of a pattern of the values of the plurality of data extracted by the extraction means, and outputting a corresponding class code; and
    Calculation means for obtaining, using the plurality of data extracted by the extraction means, a prediction coefficient for predicting teacher data of higher quality than the student data by a linear first-order combination with the student data corresponding to the input data, as the plurality of data corresponding to the distance at which a statistic closest to the predetermined reference value is obtained, for each class code on the basis of the plurality of data corresponding to that distance, wherein
    The generation unit performs specific processing according to the input data on teacher data serving as a teacher for learning the prediction coefficient, and generates student data serving as students.
    The determining means reads, from the input data and while changing the distance, a plurality of data consisting of center data corresponding to the output data of interest, which is the output data for which a predicted value is to be obtained, and a plurality of peripheral data each corresponding to one of a plurality of types of distances set in advance in the spatial direction or the time direction from the center data, compares with a predetermined reference value a statistic based on the variation of the values of the plurality of data corresponding to each of the plurality of types of distances with respect to their average value or on the differences between the values of the plurality of data, and determines the distance at which a statistic closest to the predetermined reference value is obtained as an extraction distance for extracting a plurality of data from the input data,
    The extraction means extracts, from the student data, a plurality of data corresponding to the extraction distance determined by the determining means for the teacher data of interest, which is the teacher data for which a predicted value is to be obtained,
    The class classification means performs class classification that classifies the teacher data of interest into one of a plurality of classes on the basis of a pattern of the values of the plurality of data extracted by the extraction means, and outputs a corresponding class code, and
    The calculation means performs a step of obtaining, using the plurality of data extracted by the extraction means, a prediction coefficient for predicting teacher data of higher quality than the student data by a linear first-order combination with the student data corresponding to the input data, as the plurality of data corresponding to the distance at which a statistic closest to the predetermined reference value is obtained, for each class code on the basis of the plurality of data corresponding to that distance; the data processing method comprising these steps.
  19. A medium for causing a computer to execute a program for performing data processing for processing input data and learning a prediction coefficient used to predict output data for the input data,
    A generation step of generating student data serving as students by applying specific processing corresponding to the input data to teacher data serving as a teacher for learning the prediction coefficient;
    A determination step of reading from the input data, while changing the distance, a plurality of data consisting of center data corresponding to the output data of interest, which is the output data for which a predicted value is to be obtained, and a plurality of peripheral data each corresponding to one of a plurality of types of distances preset in the spatial direction or the time direction from the center data, comparing with a predetermined reference value a statistic based on the variation of the values of the plurality of data corresponding to each of the plurality of types of distances with respect to their average value or on the differences between the values of the plurality of data, and determining the distance at which a statistic closest to the predetermined reference value is obtained as an extraction distance for extracting a plurality of data from the input data;
    An extraction step of extracting, from the student data, a plurality of data corresponding to the extraction distance determined in the determination step for the teacher data of interest, which is the teacher data for which a predicted value is to be obtained;
    A class classification step of classifying the teacher data of interest into one of a plurality of classes on the basis of a pattern of the values of the plurality of data extracted in the extraction step, and outputting a corresponding class code; and
    A calculation step of obtaining, using the plurality of data extracted in the extraction step, a prediction coefficient for predicting teacher data of higher quality than the student data by a linear first-order combination with the student data corresponding to the input data, as the plurality of data corresponding to the distance at which a statistic closest to the predetermined reference value is obtained, for each class code on the basis of the plurality of data corresponding to that distance; the medium causing the computer to execute a program comprising these steps.
  20. A first device that processes input data and predicts output data for the input data;
    A data processing device comprising: a second device that learns a prediction coefficient used to predict the output data;
    The first device includes:
    First determining means for reading from the input data, while changing the distance, a plurality of data consisting of center data corresponding to the output data of interest, which is the output data for which a predicted value is to be obtained, and a plurality of peripheral data each corresponding to one of a plurality of types of distances preset in the spatial direction or the time direction from the center data, comparing with a predetermined reference value a statistic based on the variation of the values of the plurality of data corresponding to each of the plurality of types of distances with respect to their average value or on the differences between the values of the plurality of data, and determining the distance at which a statistic closest to the predetermined reference value is obtained as an extraction distance for extracting a plurality of data from the input data;
    First extraction means for extracting a first plurality of data corresponding to the extraction distance determined by the first determining means for the output data of interest, which is the output data for which a predicted value is to be obtained;
    First class classification means for classifying the output data of interest into one of a plurality of classes on the basis of a pattern of the values of the plurality of data extracted by the first extraction means, and outputting a corresponding class code; and
    Prediction means in which a prediction coefficient for predicting teacher data of higher quality than the student data by a linear first-order combination with the student data corresponding to the input data, as the plurality of data corresponding to the distance at which a statistic closest to the predetermined reference value is obtained, is learned in advance for each class code on the basis of the plurality of data corresponding to that distance, and which obtains a predicted value of the output data of interest by a linear first-order combination of the prediction coefficient corresponding to the class code and the first plurality of data extracted by the first extraction means; and
    The second device includes:
    A generating unit that performs specific processing according to the input data on teacher data serving as a teacher for learning the prediction coefficient, and generates student data serving as students;
    Second determining means for reading from the input data, while changing the distance, a plurality of data consisting of center data corresponding to the output data of interest, which is the output data for which a predicted value is to be obtained, and a plurality of peripheral data each corresponding to one of a plurality of types of distances preset in the spatial direction or the time direction from the center data, comparing with a predetermined reference value a statistic based on the variation of the values of the plurality of data corresponding to each of the plurality of types of distances with respect to their average value or on the differences between the values of the plurality of data, and determining the distance at which a statistic closest to the predetermined reference value is obtained as an extraction distance for extracting a plurality of data from the input data;
    Second extraction means for extracting, from the student data, a second plurality of data corresponding to the extraction distance determined by the second determining means for the teacher data of interest, which is the teacher data for which a predicted value is to be obtained;
    Second class classification means for classifying the teacher data of interest into one of a plurality of classes on the basis of a pattern of the values of the plurality of data extracted by the second extraction means, and outputting a corresponding class code; and
    Calculation means for obtaining, using the second plurality of data extracted by the second extraction means as the plurality of data corresponding to the distance at which a statistic closest to the predetermined reference value is obtained, a prediction coefficient for predicting teacher data of higher quality than the student data by a linear first-order combination with the student data corresponding to the input data, for each class code on the basis of the plurality of data corresponding to that distance,
    the data processing apparatus being characterized in that the prediction coefficient used in the first device to obtain the predicted value of the output data of interest is learned in advance by the second device.
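Putting the two devices of claim 20 together, a hypothetical end-to-end run could look like the following, with a synthetic teacher image and a noisy student image standing in for real data, and with the helper functions from the earlier sketches assumed to be in scope.

    import numpy as np

    rng = np.random.default_rng(0)
    teacher_image = rng.integers(0, 256, size=(32, 32)).astype(np.float64)
    student_image = teacher_image + rng.normal(0.0, 5.0, size=teacher_image.shape)  # degraded copy

    distances = [1, 2, 3]   # preset spatial distances (assumed)
    reference = 12.0        # assumed predetermined reference statistic

    # Second device: learn per-class prediction coefficients from the teacher/student pair.
    coefficients = learn_coefficients(teacher_image, student_image, distances, reference)

    # First device: predict higher-quality data from the student image with those coefficients.
    restored = np.array([[predict_pixel(student_image, y, x, distances, reference, coefficients)
                          for x in range(student_image.shape[1])]
                         for y in range(student_image.shape[0])])
    print("mean absolute error:", float(np.abs(restored - teacher_image).mean()))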
JP16052899A 1999-06-08 1999-06-08 Data processing apparatus, data processing method, and medium Expired - Fee Related JP4135045B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP16052899A JP4135045B2 (en) 1999-06-08 1999-06-08 Data processing apparatus, data processing method, and medium

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
JP16052899A JP4135045B2 (en) 1999-06-08 1999-06-08 Data processing apparatus, data processing method, and medium
US09/587,865 US6678405B1 (en) 1999-06-08 2000-06-06 Data processing apparatus, data processing method, learning apparatus, learning method, and medium
EP00304812A EP1061473A1 (en) 1999-06-08 2000-06-07 Method and apparatus for classification-adaptive data processing
KR20000031365A KR100746839B1 (en) 1999-06-08 2000-06-08 Data processing apparatus, data processing method, learning apparatus, learning method, and medium
CNB001264370A CN1190963C (en) 1999-06-08 2000-06-08 Data processing device and method, learning device and method and media

Publications (2)

Publication Number Publication Date
JP2000348019A JP2000348019A (en) 2000-12-15
JP4135045B2 true JP4135045B2 (en) 2008-08-20

Family

ID=15716925

Family Applications (1)

Application Number Title Priority Date Filing Date
JP16052899A Expired - Fee Related JP4135045B2 (en) 1999-06-08 1999-06-08 Data processing apparatus, data processing method, and medium

Country Status (1)

Country Link
JP (1) JP4135045B2 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4038812B2 (en) 2001-12-27 2008-01-30 ソニー株式会社 Data processing device, data processing method, program, recording medium, and data processing system
JP4066146B2 (en) 2002-04-26 2008-03-26 ソニー株式会社 Data conversion device, data conversion method, learning device, learning method, program, and recording medium
US7164807B2 (en) * 2003-04-24 2007-01-16 Eastman Kodak Company Method and system for automatically reducing aliasing artifacts
JP4872862B2 (en) * 2006-09-28 2012-02-08 ソニー株式会社 Image data arithmetic device and method, program, and recording medium
JP4825748B2 (en) * 2007-07-13 2011-11-30 株式会社モルフォ Image data processing method and imaging apparatus
JP4882999B2 (en) 2007-12-21 2012-02-22 ソニー株式会社 Image processing apparatus, image processing method, program, and learning apparatus

Also Published As

Publication number Publication date
JP2000348019A (en) 2000-12-15

Similar Documents

Publication Publication Date Title
US20180130179A1 (en) Interpolating Visual Data
US10339643B2 (en) Algorithm and device for image processing
Aslantas et al. Fusion of multi-focus images using differential evolution algorithm
Huang et al. Multi-focus image fusion using pulse coupled neural network
EP2230640B1 (en) Method for filtering depth images
US5768438A (en) Image encoding/decoding device
Vekkot et al. A novel architecture for wavelet based image fusion
US6987544B2 (en) Method and apparatus for processing image
JP4920599B2 (en) Nonlinear In-Loop Denoising Filter for Quantization Noise Reduction in Hybrid Video Compression
US7245774B2 (en) Image processing apparatus
US7050502B2 (en) Method and apparatus for motion vector detection and medium storing method program directed to the same
CN100514367C (en) Color segmentation-based stereo 3D reconstruction system and process
JP5048600B2 (en) Method and system for improving data quality
CN101682686B (en) Apparatus and method for image processing
EP1347415B1 (en) Method for sharpening a digital image without amplifying noise
DE69627982T2 (en) Signal adaptive post-processing system to reduce blocking effects and ring interference
Lukac Adaptive color image filtering based on center-weighted vector directional filters
DE69812800T2 (en) Image enhancement method and apparatus
US6965702B2 (en) Method for sharpening a digital image with signal to noise estimation
EP1396818B1 (en) Image processing apparatus and method and image pickup apparatus
CN100458849C (en) Image processing apparatus and method and image pickup apparatus
US7295616B2 (en) Method and system for video filtering with joint motion and noise estimation
JP4093621B2 (en) Image conversion apparatus, image conversion method, learning apparatus, and learning method
US7430337B2 (en) System and method for removing ringing artifacts
KR100477702B1 (en) Adaptive contrast enhancement method using time-varying nonlinear transforms on a video signal

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20060221

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20070413

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20070424

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20070625

A02 Decision of refusal

Free format text: JAPANESE INTERMEDIATE CODE: A02

Effective date: 20071119

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20080118

A911 Transfer of reconsideration by examiner before appeal (zenchi)

Free format text: JAPANESE INTERMEDIATE CODE: A911

Effective date: 20080124

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20080215

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20080414

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20080508

A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20080521

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110613

Year of fee payment: 3

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120613

Year of fee payment: 4

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130613

Year of fee payment: 5

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

LAPS Cancellation because of no payment of annual fees