JP4591785B2 - Signal processing device

Publication number
JP4591785B2
Authority
JP
Japan
Prior art keywords
processing
signal
unit
step
process
Prior art date
Legal status
Expired - Fee Related
Application number
JP2007026198A
Other languages
Japanese (ja)
Other versions
JP2007183977A (en)
Inventor
和志 吉川
哲志 小久保
通雅 尾花
寿一 白木
英雄 笠間
哲二郎 近藤
昌憲 金丸
Original Assignee
Sony Corporation (ソニー株式会社)
Priority date
Filing date
Publication date
Application filed by Sony Corporation
Priority to JP2007026198A
Publication of JP2007183977A
Application granted
Publication of JP4591785B2
Application status: Expired - Fee Related
Anticipated expiration

Description

  The present invention relates to a signal processing device, and more particularly to a signal processing device that enables processing optimal for the user to be performed by changing the contents and structure of the processing in response to user operations, for example.

  For example, in a conventional NR (Noise Reduction) circuit, when the user operates the adjustment knob and sets it to a predetermined position, noise reduction processing corresponding to the position of the knob is performed.

  By the way, the S/N (Signal to Noise Ratio), frequency characteristics, and the like of the signal input to the NR circuit are not always constant; rather, they generally change. When they change, the noise removal processing corresponding to the position the user has set with the knob is not necessarily appropriate for the signal now being input to the NR circuit. For this reason, the user has to operate the knob frequently so that noise removal processing appropriate to the user is performed, which is troublesome.

  The present invention has been made in view of such a situation, and makes it possible to perform processing optimal for the user by changing the contents and structure of the processing in response to user operations.

The signal processing apparatus of the present invention includes: operation signal output means for outputting an operation signal in accordance with a user operation; feature detection means for detecting, from an input signal composed of an image signal, features according to the contents of a predetermined number of types of feature detection processing specified by the operation signal, out of the contents of a plurality of feature detection processings that yield a plurality of types of processing results for each pixel of the image signal, including inter-pixel difference processing of pixel values using the pixel and both or either of its temporally and spatially surrounding pixels, filter processing, statistical processing, and filter processing using those filter processing results and statistical processing results; storage means for storing a table indicating the correspondence between features and the contents of the signal processing for an input signal having those features; processing determining means for determining, in the table stored in the storage means, the contents of the signal processing set by the operation signal as the contents of the signal processing for an input signal having the features detected by the feature detection means; process execution means for executing an execution process on the input signal, generating an output signal by a linear combination of the input signal and predetermined prediction coefficients, based on the contents of the execution process consisting of the predetermined prediction coefficients, which are set according to the determined contents of the signal processing and can be changed based on the operation signal; and display means for displaying, as internal information, at least one of the type or distribution of the features detected by the feature detection means, the table containing the contents of the signal processing determined by the processing determining means, and the predetermined prediction coefficients that are the contents of the execution process executed by the process execution means. Based on the operation signal, at least one of the contents of the feature detection processing, the contents of the signal processing, and the contents of the execution process is changed.

The operation signal can be a signal that designates whether a feature according to the contents of a first feature detection processing or a feature according to the contents of a second feature detection processing is to be extracted from the input signal to be processed, and the display means can display, as the internal information, the distribution of the features extracted from the input signal to be processed according to the contents of the first and second feature detection processings.

When, for the distribution displayed by the display means, an operation signal is input that designates whether the features according to the contents of the first feature detection processing or the features according to the contents of the second feature detection processing are the processing target, the contents of the feature detection processing performed on the input signal can be changed based on that operation signal.

The processing determining means can determine, as the contents of the signal processing for the input signal, whether or not to output the input signal as it is, based on the predetermined number of types of features detected from the input signal by the feature detection means, and the process execution means can selectively output the input signal in accordance with the determination of the processing determining means, thereby detecting a telop in the image signal that is the input signal.

The processing determining means can change the content of its own processing by changing the content of the signal processing in the table based on the operation signal.

  The contents of the process in the table can include a process for outputting an output signal having a first value and a process for outputting an output signal having a second value for an input signal. The execution unit can binarize the input signal into the first and second values according to the determination of the processing determination unit.

When a prediction coefficient to be linearly combined with the input signal is changed based on the operation signal, the prediction coefficients can be changed so that their sum remains 1.
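One simple way to realize this, sketched below in Python under the assumption that the user edits a single coefficient directly (the function name and values are illustrative, not from the patent), is to rescale the remaining coefficients so that the overall sum stays at 1.

def renormalize(coefs, changed_idx, new_value):
    # Keep the user-set coefficient and rescale the others so sum(coefs) == 1.
    coefs = list(coefs)
    coefs[changed_idx] = new_value
    others = [i for i in range(len(coefs)) if i != changed_idx]
    rest_sum = sum(coefs[i] for i in others)
    target = 1.0 - new_value        # what the other coefficients must total
    if rest_sum != 0.0:
        for i in others:
            coefs[i] *= target / rest_sum
    return coefs

print(renormalize([0.25, 0.25, 0.25, 0.25], 0, 0.4))   # [0.4, 0.2, 0.2, 0.2]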

Among the prediction coefficients to be linearly combined with the input signal, those other than the prediction coefficient changed based on the operation signal can be changed as follows in the space corresponding to the pixel array of the image signal that is the input signal: those closer than a predetermined distance to the prediction coefficient changed based on the operation signal are changed in the same direction as the increase or decrease of that prediction coefficient, and those farther than the predetermined distance are changed in the opposite direction.

Among the prediction coefficients to be linearly combined with the input signal, corresponding to pixels other than that of the prediction coefficient changed based on the operation signal in the space corresponding to the pixel array of the image signal that is the input signal, those whose maximum or minimum value has the same polarity (positive or negative) as that of the prediction coefficient changed based on the operation signal can be changed in the same direction as the increase or decrease of the changed prediction coefficient, and those whose maximum or minimum value has a different polarity can be changed in the opposite direction.

The signal processing method of the present invention is a method for a signal processing apparatus that includes: operation signal output means for outputting an operation signal in accordance with a user operation; feature detection means for detecting, from an input signal composed of an image signal, features according to the contents of a predetermined number of types of feature detection processing specified by the operation signal, out of the contents of a plurality of feature detection processings that yield a plurality of types of processing results for each pixel of the image signal, including inter-pixel difference processing of pixel values using the pixel and both or either of its temporally and spatially surrounding pixels, filter processing, statistical processing, and filter processing using those filter processing results and statistical processing results; storage means for storing a table indicating the correspondence between features and the contents of the signal processing for an input signal having those features; processing determining means for determining, in the table, the contents of the signal processing set by the operation signal as the contents of the signal processing for an input signal having the detected features; process execution means for executing an execution process in which an output signal is generated by a linear combination of the input signal and predetermined prediction coefficients that are set according to the determined contents of the signal processing and can be changed based on the operation signal; and display means for displaying, as internal information, at least one of the type or distribution of the detected features, the table containing the determined contents of the signal processing, and the predetermined prediction coefficients that are the contents of the execution process.

The method includes: an operation signal output step in which the operation signal output means outputs an operation signal in accordance with a user operation; a feature detection step in which the feature detection means detects, from the input signal, the features according to the contents of the predetermined number of types of feature detection processing specified by the operation signal; a processing determination step in which the processing determining means determines, in the table stored in the storage means, the contents of the signal processing set by the operation signal as the contents of the signal processing for an input signal having the features detected in the feature detection step; a process execution step in which the process execution means executes the execution process on the input signal by generating the output signal through a linear combination of the input signal and the predetermined prediction coefficients, based on the contents of the execution process set according to the contents of the signal processing determined in the processing determination step and changeable based on the operation signal; and a display step in which the display means displays, as internal information, at least one of the type or distribution of the features detected in the feature detection step, the table containing the contents of the signal processing determined in the processing determination step, and the predetermined prediction coefficients that are the contents of the execution process executed in the process execution step. Based on the operation signal, at least one of the contents of the feature detection processing, the contents of the signal processing, and the contents of the execution process is changed.

The program of the recording medium of the present invention is a program for causing a computer to control a signal processing apparatus that includes the operation signal output means, feature detection means, storage means, processing determining means, process execution means, and display means described above. The program includes: an operation signal output step of outputting an operation signal in accordance with a user operation in the operation signal output means; a feature detection control step of controlling, in the feature detection means, the detection of the features according to the contents of the predetermined number of types of feature detection processing specified by the operation signal; a processing determination control step of controlling, in the processing determining means, the determination, in the table stored in the storage means, of the contents of the signal processing set by the operation signal as the contents of the signal processing for an input signal having the features detected in the feature detection control step; a process execution control step of controlling, in the process execution means, the execution of the execution process on the input signal by generating the output signal through a linear combination of the input signal and the predetermined prediction coefficients, based on the contents of the execution process set according to the contents of the signal processing determined in the processing determination control step and changeable based on the operation signal; and a display control step of controlling, in the display means, the display, as internal information, of at least one of the type or distribution of the features detected in the feature detection control step, the table containing the contents of the signal processing determined in the processing determination control step, and the predetermined prediction coefficients that are the contents of the execution process executed in the process execution control step. Based on the operation signal, at least one of the contents of the feature detection processing, the contents of the signal processing, and the contents of the execution process is changed.

The program of the present invention is likewise a program for causing a computer to control a signal processing apparatus that includes the operation signal output means, feature detection means, storage means, processing determining means, process execution means, and display means described above. The program includes: an operation signal output step of outputting an operation signal in accordance with a user operation in the operation signal output means; a feature detection control step of controlling, in the feature detection means, the detection of the features according to the contents of the predetermined number of types of feature detection processing specified by the operation signal; a processing determination control step of controlling, in the processing determining means, the determination, in the table stored in the storage means, of the contents of the signal processing set by the operation signal as the contents of the signal processing for an input signal having the features detected in the feature detection control step; a process execution control step of controlling, in the process execution means, the execution of the execution process on the input signal by generating the output signal through a linear combination of the input signal and the predetermined prediction coefficients, based on the contents of the execution process set according to the contents of the signal processing determined in the processing determination control step and changeable based on the operation signal; and a display control step of controlling, in the display means, the display, as internal information, of at least one of the type or distribution of the detected features, the table containing the determined contents of the signal processing, and the predetermined prediction coefficients that are the contents of the executed execution process. Based on the operation signal, at least one of the contents of the feature detection processing, the contents of the signal processing, and the contents of the execution process is changed.

In the present invention, an operation signal is output in accordance with a user operation; from an input signal composed of an image signal, features according to the contents of a predetermined number of types of feature detection processing specified by the operation signal are detected, out of the contents of a plurality of feature detection processings that yield a plurality of types of processing results for each pixel of the image signal, including inter-pixel difference processing of pixel values using the pixel and both or either of its temporally and spatially surrounding pixels, filter processing, statistical processing, and filter processing using those filter processing results and statistical processing results; a table indicating the correspondence between features and the contents of the signal processing for an input signal having those features is stored; in the stored table, the contents of the signal processing set by the operation signal are determined, for an input signal having the detected features, as the contents of the signal processing for the input signal; based on the contents of an execution process consisting of predetermined prediction coefficients that are set according to the determined contents of the signal processing and can be changed based on the operation signal, an output signal is generated by a linear combination of the input signal and the predetermined prediction coefficients, whereby the execution process is executed on the input signal; at least one of the type or distribution of the detected features, the table containing the determined contents of the signal processing, and the predetermined prediction coefficients that are the contents of the executed execution process is displayed as internal information; and based on the operation signal, at least one of the contents of the feature detection processing, the contents of the signal processing, and the contents of the execution process is changed.

  As described above, according to the present invention, the contents and structure of processing are changed based on user operations, so that processing optimal for the user can be performed.

  In the present embodiment, the internal information is displayed on the same display unit as the output signal, but the internal information can be displayed on a display unit different from the display unit that displays the output signal.

  FIG. 1 shows a configuration example of an embodiment of an optimization apparatus to which the present invention is applied. The optimization device performs predetermined processing (signal processing) on the input signal, and outputs a signal obtained as a result of the processing as an output signal. The user examines (qualitatively evaluates) the output signal and, when it is not to his or her liking, inputs an operation signal corresponding to that preference to the optimization device. Based on the operation signal, the optimization device changes the contents and structure of the processing, performs the predetermined processing on the input signal again, and outputs an output signal. In response to the operation signals input by the user's operations in this way, the optimization apparatus repeatedly changes the contents and structure of the processing so as to perform processing optimal for the user, thereby outputting an output signal closer to the user's preference.

  FIG. 2 shows a first detailed configuration example of the optimization apparatus of FIG.

  In this optimization apparatus 1, the user's operation is learned without the user's knowledge, so that processing optimal for the user is performed. That is, in the optimization device, an operation signal supplied in accordance with a user operation is monitored to determine whether it can be used for learning. When the operation signal is a learning operation signal that can be used for learning, a correction criterion that serves as the standard for correcting the input signal is learned based on the learning operation signal. Meanwhile, the input signal is corrected based on the correction criterion obtained by learning, and the corrected signal is output as an output signal.

  The optimization apparatus 1 includes a processing unit 11 composed of a correction unit 21 and a learning unit 22, and an operation signal corresponding to a user operation is supplied to the processing unit 11 in addition to the input signal to be processed.

  The operation signal is supplied from the operation unit 2. That is, the operation unit 2 includes, for example, a rotary or slide type knob, a switch, a pointing device, and the like, and supplies an operation signal corresponding to a user operation to the processing unit 11 constituting the optimization device 1.

  For example, a digital input signal is supplied to the correction unit 21 constituting the optimization device 1, and a correction parameter, for example, is supplied from the learning unit 22 as the correction criterion for correcting the input signal. The correction unit 21 corrects the input signal (signal processing) based on the correction parameter, and outputs the corrected signal as an output signal.

  The learning unit 22 is supplied with the operation signal from the operation unit 2, and with the input signal or the output signal as necessary. The learning unit 22 monitors the operation signal and determines whether it can be used for learning. When the operation signal is a learning operation signal that can be used for learning, the learning unit 22 learns the correction parameter used to correct the input signal, based on the learning operation signal and using the input signal or output signal as necessary, and supplies it to the correction unit 21.

  The learning unit 22 includes a learning data memory 53 and a learning information memory 55. The learning data memory 53 stores learning data used for learning, and the learning information memory 55 stores learning information, described later, obtained by learning.

  Next, processing (optimization processing) performed by the optimization device 1 of FIG. 2 will be described with reference to the flowchart of FIG.

  First, in step S1, the learning unit 22 determines whether a learning operation signal has been received from the operation unit 2. Here, when operating the operation unit 2, the user generally first performs a rough operation, then performs fine operations while checking the output signal produced in response, and stops operating when an output signal the user considers optimal is obtained. The operation signal corresponding to the position of the operation unit 2 at the moment such an output signal is obtained is the learning operation signal. Accordingly, when the operation is stopped after having continued for a predetermined time or longer, the learning unit 22 determines the operation signal at the time the operation stopped to be the learning operation signal.
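This judgment can be sketched as follows in Python; the event interface and the two time thresholds are assumptions for illustration only, not values from the patent.

T_CONTINUE = 2.0   # hypothetical: operation must have continued this long (s)
T_STOP = 3.0       # hypothetical: idle time after which operation counts as stopped (s)

class LearningSignalJudge:
    def __init__(self):
        self.run_start = None    # when the current run of operations began
        self.last_time = None    # time of the most recent operation event
        self.last_signal = None  # the most recent operation signal

    def on_operation(self, now, signal):
        # A new run begins after a sufficiently long idle period.
        if self.last_time is None or now - self.last_time >= T_STOP:
            self.run_start = now
        self.last_time = now
        self.last_signal = signal

    def learning_signal(self, now):
        # Returns the signal to learn from once the run lasted long enough
        # and the user has since stopped operating; otherwise None.
        if (self.last_time is not None
                and self.last_time - self.run_start >= T_CONTINUE
                and now - self.last_time >= T_STOP):
            return self.last_signal
        return None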

  If it is determined in step S1 that a learning operation signal has not been received, that is, for example, when the user is not operating the operation unit 2 or is operating it but still searching for an optimal position, steps S2 to S10 are skipped and the process proceeds to step S11. The correction unit 21 corrects the input signal according to the correction parameter already set, outputs the correction result as an output signal, and the process returns to step S1.

  If it is determined in step S1 that a learning operation signal has been received, the process proceeds to step S2, where the learning unit 22 acquires learning data used for learning based on the learning operation signal, and the process proceeds to step S3. In step S3, the learning data memory 53 stores the latest learning data acquired in step S2.

  Here, the learning data memory 53 has a storage capacity capable of storing a plurality of learning data. When learning data corresponding to that storage capacity has been stored, the learning data memory 53 stores the next learning data by overwriting the oldest stored value. Accordingly, the learning data memory 53 always holds the most recent plurality of learning data.
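This overwrite-the-oldest behavior is exactly what a fixed-capacity ring buffer provides; a minimal Python sketch follows (the capacity of 8 is an assumption for illustration).

from collections import deque

learning_data_memory = deque(maxlen=8)   # hypothetical capacity

def store_learning_data(teacher_w, student_w):
    # Appending beyond maxlen silently drops the oldest pair first.
    learning_data_memory.append((teacher_w, student_w))

for i in range(10):
    store_learning_data(0.5 + 0.01 * i, 0.4 + 0.01 * i)
print(len(learning_data_memory))   # 8: the two oldest pairs were overwritten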

  After the learning data is stored in the learning data memory 53 in step S3, the process proceeds to step S4, where the learning unit 22 performs learning using the latest learning data stored in the learning data memory 53 and the learning information stored in the learning information memory 55, obtains a correction parameter, and proceeds to step S5. In step S5, the learning unit 22 updates the stored contents of the learning information memory 55 with new learning information obtained during the learning in step S4, and the process proceeds to step S6.

  In step S6, the learning unit 22 obtains the appropriateness, described later, which indicates how appropriate the correction parameter obtained in step S4 is, and proceeds to step S7, where it determines, based on that appropriateness, whether the correction parameter obtained in step S4 is appropriate.

  If it is determined in step S7 that the correction parameter is appropriate, steps S8 and S9 are skipped and the process proceeds to step S10, where the learning unit 22 outputs the correction parameter determined to be appropriate to the correction unit 21, and proceeds to step S11. Therefore, in this case, the correction unit 21 thereafter corrects the input signal according to the new correction parameter obtained by the learning in step S4.

  On the other hand, if it is determined in step S7 that the correction parameter is not appropriate, the process proceeds to step S8, where the learning unit 22 performs learning using only the latest learning data among the learning data stored in the learning data memory 53, obtains a correction parameter, and proceeds to step S9. In step S9, the learning unit 22 updates the stored contents of the learning information memory 55 with new learning information obtained during the learning in step S8, and the process proceeds to step S10. In this case, in step S10, the learning unit 22 outputs the correction parameter obtained from only the latest learning data in step S8 to the correction unit 21, and proceeds to step S11. Accordingly, in this case, the correction unit 21 thereafter corrects the input signal according to the new correction parameter obtained by the learning in step S8.

  Next, FIG. 4 illustrates a detailed configuration example when the processing unit 11 of FIG. 2 is applied to, for example, an NR circuit that removes noise from an image signal or an audio signal.

  The weight memory 31 stores a weight (coefficient) W (for example, a value between 0 and 1) as the correction parameter supplied from the selection unit 41 (described later) of the learning unit 22. The weight memory 32 stores the weight 1-W supplied from the computing unit 33.

  The computing unit 33 supplies the weight memory 32 with a subtraction value 1-W obtained by subtracting the weight W supplied from the selection unit 41 of the learning unit 22 from 1.0 as a weight. The computing unit 34 multiplies the input signal by the weight 1-W stored in the weight memory 32 and supplies the multiplied value to the computing unit 36. The computing unit 35 multiplies the weight W stored in the weight memory 31 by the output signal stored (latched) in the latch circuit 37 and supplies the multiplied value to the computing unit 36. The calculator 36 adds both outputs of the calculators 34 and 35, and outputs the added value as an output signal.

  The latch circuit 37 latches the output signal output from the arithmetic unit 36 and supplies it to the arithmetic unit 35.
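Functionally, this datapath is a first-order recursive filter; a minimal Python sketch of the correction unit 21 follows (the initial latch value of 0.0 is an assumption, not stated in the patent).

def correction_unit(input_samples, weight_w):
    latched_y = 0.0                  # latch circuit 37, assumed to start at 0
    outputs = []
    for x in input_samples:
        # Calculators 34-36: y(t) = (1 - W) * x(t) + W * y(t - 1).
        y = (1.0 - weight_w) * x + weight_w * latched_y
        latched_y = y                # latch 37 stores y(t) for the next sample
        outputs.append(y)
    return outputs

print(correction_unit([1.0, 1.2, 0.9, 1.1], weight_w=0.8))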

  In the embodiment of FIG. 4, the weight memories 31 and 32, the arithmetic units 33, 34, 35, and 36, and the latch circuit 37 constitute the correction unit 21 of the processing unit 11.

  The selection unit 41 selects either the weight output from the weight correction unit 46 or the weight output from the operation signal processing unit 50, and supplies the selected weight to the correction unit 21 as the correction parameter.

  An input signal is supplied to the input reliability calculation unit 42, and an input reliability representing the reliability of the input signal is obtained and output to the output reliability calculation unit 43 and the weight calculation unit 45. . The output reliability calculation unit 43 obtains an output reliability representing the reliability of the output signal based on the input reliability from the input reliability calculation unit 42 and supplies the output reliability to the latch circuit 44 and the weight calculation unit 45. The latch circuit 44 stores (latches) the output reliability from the output reliability calculation unit 43 and supplies the output reliability to the output reliability calculation unit 43 and the weight calculation unit 45.

  The weight calculation unit 45 calculates a weight from the input reliability from the input reliability calculation unit 42 and the output reliability from the output reliability calculation unit 43, and outputs it to the weight correction unit 46. In addition to the weight, the weight correction unit 46 is supplied with parameter control data for controlling the weight as the correction parameter from the parameter control data memory 57. The weight correction unit 46 processes (corrects) the weight according to the parameter control data and supplies it to the selection unit 41.

  The operation signal processing unit 50 is supplied with the operation signal from the operation unit 2 (FIG. 2); it processes the supplied operation signal and supplies a weight corresponding to the operation signal to the selection unit 41, the teacher data generation unit 51, and the student data generation unit 52. Further, the operation signal processing unit 50 determines whether or not the operation signal is the above-described learning operation signal, and when it is, adds a flag to that effect (hereinafter referred to as a learning flag) to the weight it outputs.

  When the teacher data generation unit 51 receives the weight with the learning flag from the operation signal processing unit 50, the teacher data generation unit 51 generates the teacher data to be a learning teacher and supplies it to the learning data memory 53. That is, the teacher data generation unit 51 supplies the weight with the learning flag added to the learning data memory 53 as teacher data.

  When the student data generation unit 52 receives the weight with the learning flag from the operation signal processing unit 50, the student data generation unit 52 generates student data to be a learning student and supplies it to the learning data memory 53. That is, the student data generation unit 52 is configured in the same manner as the above-described input reliability calculation unit 42, output reliability calculation unit 43, latch circuit 44, and weight calculation unit 45, for example, from input signals supplied thereto. The weight is calculated, and the weight calculated from the input signal when the weight with the learning flag is received is supplied to the learning data memory 53 as student data.

  The learning data memory 53 stores, as a set of learning data, the teacher data supplied from the teacher data generation unit 51 as the weight corresponding to the learning operation signal, together with the student data supplied from the student data generation unit 52 as the weight calculated from the input signal at the time the learning operation signal was received. Note that, as described above, the learning data memory 53 can store a plurality of learning data, and when learning data corresponding to the storage capacity has been stored, the next learning data is stored by overwriting the oldest stored value. Therefore, the learning data memory 53 basically always holds some of the most recent learning data.

  Under the control of the determination control unit 56, the parameter control data calculation unit 54 learns parameter control data that minimizes a predetermined statistical error, by computing new learning information using the teacher data and student data stored as learning data in the learning data memory 53 and, as necessary, the learning information stored in the learning information memory 55, and supplies the result to the determination control unit 56. Further, the parameter control data calculation unit 54 updates the stored contents of the learning information memory 55 with the new learning information obtained by the learning. The learning information memory 55 stores the learning information from the parameter control data calculation unit 54.

  The determination control unit 56 determines the appropriateness of the parameter control data supplied from the parameter control data calculation unit 54 by referring to the latest learning data stored in the learning data memory 53. Further, the determination control unit 56 controls the parameter control data calculation unit 54 and supplies the parameter control data supplied from the parameter control data calculation unit 54 to the parameter control data memory 57. The parameter control data memory 57 updates the stored contents with the parameter control data supplied from the determination control unit 56, and supplies it to the weight correction unit 46.

  In the embodiment of FIG. 4, the learning unit 22 of the processing unit 11 is configured by the selection unit 41 to the weight correction unit 46 and the operation signal processing unit 50 to the parameter control data memory 57 described above.

  In the processing unit 11 of the optimization device 1 as an NR circuit configured as described above, noise in the input signal is removed as follows.

  That is, for simplicity, consider, as shown in FIG. 5A, removing temporally fluctuating noise superimposed on an input signal whose true value is constant, by averaging. The noise can be effectively removed by giving a small weight to (that is, giving little consideration to) an input signal with a large noise level (that is, a signal with poor S/N) and a large weight to an input signal with a small noise level (that is, a signal with good S/N).

  Therefore, in the NR circuit of FIG. 4, as the evaluation value of the input signal, an input reliability representing the closeness of the input signal to the true value, that is, the reliability that the input signal is the true value, is obtained as shown in FIG. 5B, and the average is calculated while weighting the input signal according to that input reliability, whereby the noise is effectively removed.

Therefore, in the NR circuit of FIG. 4, a weighted average of the input signal using a weight corresponding to the input reliability is obtained and output as the output signal. Denoting the input signal, the output signal, and the input reliability at time t as x(t), y(t), and α_{x(t)}, respectively, the output signal y(t) is obtained according to the following equation.

y(t) = \frac{\sum_{i=0}^{t} \alpha_{x(i)}\, x(i)}{\sum_{i=0}^{t} \alpha_{x(i)}}   ... (1)

Here, the greater the input reliability α_{x(t)}, the greater the weight that is given.

  From Expression (1), the output signal y (t−1) one sample before the current time t is obtained by the following expression.

y(t-1) = \frac{\sum_{i=0}^{t-1} \alpha_{x(i)}\, x(i)}{\sum_{i=0}^{t-1} \alpha_{x(i)}}   ... (2)

For the output signal y(t) as well, an output reliability α_{y(t)}, representing the closeness to the true value, that is, the reliability that the output signal y(t) is the true value, is introduced as its evaluation value, and the output reliability α_{y(t-1)} of the output signal y(t-1) one sample before the current time t is defined by the following equation.

\alpha_{y(t-1)} = \sum_{i=0}^{t-1} \alpha_{x(i)}   ... (3)

In this case, from the equations (1) to (3), the output signal y (t) and its output reliability α y (t) can be expressed as follows.

y(t) = \frac{\alpha_{y(t-1)}\, y(t-1) + \alpha_{x(t)}\, x(t)}{\alpha_{y(t-1)} + \alpha_{x(t)}}   ... (4)

\alpha_{y(t)} = \alpha_{y(t-1)} + \alpha_{x(t)}   ... (5)

  In addition, the weight used to obtain the output signal y (t) at time t is expressed as w (t), which is defined by the following equation.

w(t) = \frac{\alpha_{y(t-1)}}{\alpha_{y(t-1)} + \alpha_{x(t)}}   ... (6)

  From the expression (6), the following expression is established.

1 - w(t) = \frac{\alpha_{x(t)}}{\alpha_{y(t-1)} + \alpha_{x(t)}}   ... (7)

  Using Expressions (6) and (7), the output signal y (t) in Expression (4) can be expressed by a weighted average by multiplication and addition as follows.

y(t) = w(t)\, y(t-1) + (1 - w(t))\, x(t)   ... (8)

Note that the weight w(t) (and 1 - w(t)) used in equation (8) can be obtained from equation (6) using the output reliability α_{y(t-1)} of the output signal y(t-1) one sample before and the input reliability α_{x(t)} of the current input signal x(t). Likewise, the output reliability α_{y(t)} of the current output signal y(t) in equation (5) can be obtained from the output reliability α_{y(t-1)} of the output signal y(t-1) one sample before and the input reliability α_{x(t)} of the current input signal x(t).

Here, if the reciprocals of the respective variances σ_{x(t)}² and σ_{y(t)}² are used as the input reliability α_{x(t)} of the input signal x(t) and the output reliability α_{y(t)} of the output signal y(t), that is, if the input reliability α_{x(t)} and the output reliability α_{y(t)} are defined as

\alpha_{x(t)} = \frac{1}{\sigma_{x(t)}^2}, \qquad \alpha_{y(t)} = \frac{1}{\sigma_{y(t)}^2}   ... (9)

then the weight w(t) in equation (6) and the weight 1 - w(t) in equation (7) can be obtained by the following equations.

w(t) = \frac{\sigma_{x(t)}^2}{\sigma_{y(t-1)}^2 + \sigma_{x(t)}^2}   ... (10)

1 - w(t) = \frac{\sigma_{y(t-1)}^2}{\sigma_{y(t-1)}^2 + \sigma_{x(t)}^2}   ... (11)

Further, σ_{y(t)}² can be obtained by the following equation.

\sigma_{y(t)}^2 = w(t)^2\, \sigma_{y(t-1)}^2 + (1 - w(t))^2\, \sigma_{x(t)}^2   ... (12)

  The NR circuit of FIG. 4 basically performs a correction parameter calculation process that calculates the weight w(t) as the correction parameter according to equation (6), and, using that weight w(t), performs a correction process that effectively removes the noise contained in the input signal x(t) by computing, according to equation (8), the weighted average of the output signal y(t-1) one sample before and the current input signal x(t).
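Putting equations (8), (10), and (12) together, the sample-by-sample recursion can be sketched in Python as follows (how the very first sample is initialized is an assumption here, not taken from the patent).

def nr_filter(xs, input_variances):
    # Seed the recursion with the first sample and its variance (assumption).
    y_prev, var_y_prev = xs[0], input_variances[0]
    ys = [y_prev]
    for x, var_x in zip(xs[1:], input_variances[1:]):
        w = var_x / (var_y_prev + var_x)                       # equation (10)
        y = w * y_prev + (1.0 - w) * x                         # equation (8)
        var_y = w ** 2 * var_y_prev + (1.0 - w) ** 2 * var_x   # equation (12)
        ys.append(y)
        y_prev, var_y_prev = y, var_y
    return ys

A noisy sample (large var_x) yields w close to 1, so the output leans on the previous, more reliable output; a clean sample yields w close to 0, so the output follows the input.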

  By the way, the output signal obtained as a result of correcting the input signal with the weight w(t) obtained according to equation (6) is not always felt by the user to be optimal. Therefore, the NR circuit of FIG. 4 performs a control data learning process that obtains, by learning the user's operation of the operation unit 2, parameter control data for controlling (correcting) the weight w(t) as the correction parameter, and corrects the input signal using the weight corrected by the parameter control data.

  The control data learning process is performed as follows.

That is, the weight W_i corresponding to the i-th learning operation signal given by the user operating the operation unit 2 can be regarded as optimal for the input signal being input at the time that learning operation signal is given. Therefore, in the control data learning process, it suffices to obtain parameter control data that can correct the weight w(t) obtained according to equation (6) to a value close to (ideally, the same as) the weight W_i corresponding to the learning operation signal.

So, consider now using the weight w(t) obtained according to equation (6) as student data serving as a learning student, and the weight W_i corresponding to the learning operation signal as teacher data serving as a learning teacher, and obtaining, from the weight w(t) as student data, a predicted value W_i' of the weight W_i as teacher data by a linear expression defined by parameter control data a and b, as shown in the following equation.

W_i' = a w_i + b   ... (13)

In equation (13) (and likewise in equation (14) and equations (16) to (21) described later), w_i represents the weight w(t) as student data, obtained according to equation (6) for the input signal being input when the weight W_i corresponding to the learning operation signal is given as teacher data.

From equation (13), the error (prediction error) e_i between W_i as teacher data and its predicted value W_i' is expressed by the following equation.

e_i = W_i - W_i' = W_i - (a w_i + b)   ... (14)

Now, consider obtaining, by the least squares method (also called the least square error method), the parameter control data a and b that minimize the sum of the square errors of the prediction errors e_i in equation (14), expressed by the following equation.

\sum_{i=1}^{N} e_i^2   ... (15)

In equation (15) (and likewise in equations (16) to (21) described later), N represents the number of sets of teacher data and student data.

  First, partially differentiating the sum of the square errors in equation (15) with respect to the parameter control data a and b gives the following equations.

\frac{\partial}{\partial a} \sum_{i=1}^{N} e_i^2 = -2 \sum_{i=1}^{N} w_i \left( W_i - (a w_i + b) \right)   ... (16)

\frac{\partial}{\partial b} \sum_{i=1}^{N} e_i^2 = -2 \sum_{i=1}^{N} \left( W_i - (a w_i + b) \right)   ... (17)

  Since the minimum of the sum of square errors in equation (15) is given by the a and b that make the right-hand sides of equations (16) and (17) equal to 0, setting those right-hand sides to 0 yields equation (18) from equation (16) and equation (19) from equation (17).

a \sum_{i=1}^{N} w_i^2 + b \sum_{i=1}^{N} w_i = \sum_{i=1}^{N} w_i W_i   ... (18)

b = \frac{1}{N} \left( \sum_{i=1}^{N} W_i - a \sum_{i=1}^{N} w_i \right)   ... (19)

  By substituting equation (19) into equation (18), the parameter control data a can be obtained by the following equation.

a = \frac{N \sum_{i=1}^{N} w_i W_i - \sum_{i=1}^{N} w_i \sum_{i=1}^{N} W_i}{N \sum_{i=1}^{N} w_i^2 - \left( \sum_{i=1}^{N} w_i \right)^2}   ... (20)

  Further, from the equations (19) and (20), the parameter control data b can be obtained by the following equation.

b = \frac{\sum_{i=1}^{N} W_i \sum_{i=1}^{N} w_i^2 - \sum_{i=1}^{N} w_i W_i \sum_{i=1}^{N} w_i}{N \sum_{i=1}^{N} w_i^2 - \left( \sum_{i=1}^{N} w_i \right)^2}   ... (21)

  In the NR circuit of FIG. 4, the control data learning process for obtaining the parameter control data a and b is performed as described above.
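Equations (20) and (21) can be checked with a short Python sketch; the sample weights below are invented purely for illustration and lie exactly on W = 1.5 w + 0.1, so the learning should recover a = 1.5 and b = 0.1.

def solve_parameter_control_data(student_w, teacher_w):
    n = len(student_w)
    sum_w = sum(student_w)
    sum_W = sum(teacher_w)
    sum_wW = sum(w * W for w, W in zip(student_w, teacher_w))
    sum_w2 = sum(w * w for w in student_w)
    denom = n * sum_w2 - sum_w ** 2
    a = (n * sum_wW - sum_w * sum_W) / denom           # equation (20)
    b = (sum_W * sum_w2 - sum_wW * sum_w) / denom      # equation (21)
    return a, b

a, b = solve_parameter_control_data([0.2, 0.4, 0.6], [0.4, 0.7, 1.0])
print(round(a, 6), round(b, 6))   # 1.5 0.1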

  Next, correction processing, correction parameter calculation processing, and control data learning processing performed by the NR circuit of FIG. 4 will be described with reference to flowcharts of FIGS.

  First, the correction process will be described with reference to the flowchart of FIG.

  When the weight w(t) as the correction parameter is supplied from the selection unit 41 of the learning unit 22 to the correction unit 21, the weight memory 31 of the correction unit 21 stores the weight w(t) in overwriting form. Further, the computing unit 33 of the correction unit 21 subtracts the weight w(t) from 1.0 to obtain the weight 1 - w(t), supplies it to the weight memory 32, and stores it there in overwriting form.

  When the input signal x(t) is supplied, in step S21 the calculator 34 calculates the product of the input signal x(t) and the weight 1 - w(t) stored in the weight memory 32 and supplies it to the calculator 36. Also in step S21, the calculator 35 calculates the product of the weight w(t) stored in the weight memory 31 and the output signal y(t-1) one sample before, latched in the latch circuit 37, and supplies it to the calculator 36.

  In step S22, the arithmetic unit 36 adds the product of the input signal x (t) and the weight 1-w (t) and the product of the weight w (t) and the output signal y (t-1). Thus, the weighted addition value (1-w (t)) x (t) + w (t) y (t-1) between the input signal x (t) and the output signal y (t-1) is obtained and output. Output as signal y (t). This output signal y (t) is also supplied to the latch circuit 37, and the latch circuit 37 stores the output signal y (t) in an overwritten form. Thereafter, the process returns to step S21, waits for the input signal of the next sample to be supplied, and thereafter the same processing is repeated.

  Next, correction parameter calculation processing will be described with reference to the flowchart of FIG.

In the correction parameter calculation process, first, in step S31, the input reliability calculation unit 42 obtains the input reliability α_{x(t)}, based, for example, on the variance of the input signal.

That is, the input reliability calculation unit 42 has a built-in FIFO (First In First Out) memory (not shown) that can latch a predetermined number of past samples in addition to the sample x(t) of the current input signal; it calculates the variance of the current sample x(t) and those past samples, obtains its reciprocal as the input reliability α_{x(t)}, and supplies it to the output reliability calculation unit 43 and the weight calculation unit 45. Note that, immediately after the input of the input signal is started, the number of samples needed to calculate the variance may not yet exist; in such a case, for example, a default value is output as the input reliability α_{x(t)}.
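A minimal Python sketch of this unit follows; the window size and the fallback value used before enough samples exist are assumptions for illustration.

from collections import deque

WINDOW = 5                      # hypothetical FIFO depth
fifo = deque(maxlen=WINDOW)

def input_reliability(x, default=1.0):
    fifo.append(x)
    if len(fifo) < WINDOW:
        return default          # not enough samples to estimate the variance yet
    mean = sum(fifo) / WINDOW
    var = sum((s - mean) ** 2 for s in fifo) / WINDOW
    # Equation (9): the input reliability is the reciprocal of the variance.
    return 1.0 / var if var > 0.0 else float("inf")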

Thereafter, the process proceeds to step S32, and the weight calculation unit 45 uses the input reliability α_{x(t)} from the input reliability calculation unit 42 to obtain the weight w(t) according to equation (6).

That is, at the timing when the input reliability α_{x(t)} is supplied from the input reliability calculation unit 42 to the weight calculation unit 45, the latch circuit 44 has latched the output reliability α_{y(t-1)} one sample before, output by the output reliability calculation unit 43; in step S32, the weight calculation unit 45 obtains the weight w(t) according to equation (6) using the input reliability α_{x(t)} from the input reliability calculation unit 42 and the output reliability α_{y(t-1)} latched by the latch circuit 44. The weight w(t) is supplied to the weight correction unit 46.

  Thereafter, the process proceeds to step S33, where the weight correction unit 46 reads the parameter control data from the parameter control data memory 57, and proceeds to step S34. In step S34, the weight correction unit 46 determines whether the parameter control data read from the parameter control data memory 57 is auto mode data, which indicates a mode in which the weight w(t) is not corrected, that is, a mode in which the weight w(t) automatically obtained from the input reliability and the output reliability is used as is, regardless of the user's operation of the operation unit 2, as the weight W for correcting the input signal x(t) (hereinafter referred to as the auto mode as appropriate).

If it is determined in step S34 that the parameter control data is not auto mode data, the process proceeds to step S35, and the weight correction unit 46 corrects the weight w(t) supplied from the weight calculation unit 45 according to the linear expression of equation (13) defined by the parameter control data a and b supplied from the parameter control data memory 57, and proceeds to step S36. In step S36, the weight correction unit 46 supplies the corrected weight to the selection unit 41, and the process proceeds to step S37. Here, in equation (13), w_i corresponds to the weight w(t) supplied from the weight calculation unit 45, and W_i' corresponds to the corrected weight W.

  On the other hand, if it is determined in step S34 that the parameter control data is auto mode data, step S35 is skipped and the process proceeds to step S36, where the weight correction unit 46 supplies the weight w(t) from the weight calculation unit 45 directly to the selection unit 41, and the process proceeds to step S37.

In step S37, the output reliability calculation unit 43 updates the output reliability. That is, the output reliability calculation unit 43 obtains the current output reliability α_{y(t)} by adding, according to equation (5), the input reliability α_{x(t)} calculated by the input reliability calculation unit 42 in the immediately preceding step S31 and the output reliability α_{y(t-1)} one sample before, latched by the latch circuit 44, and stores it in the latch circuit 44 in overwriting form.

  Then, the process proceeds to step S38, and the selection unit 41 determines, from the output of the operation signal processing unit 50, whether the operation unit 2 is being operated by the user. If it is determined in step S38 that the operation unit 2 has not been operated, the process proceeds to step S39, where the selection unit 41 selects the weight supplied from the weight correction unit 46 (hereinafter referred to as the correction weight as appropriate), outputs it to the correction unit 21, and the process returns to step S31.

  If it is determined in step S38 that the operation unit 2 is being operated, the process proceeds to step S40, where the selection unit 41 selects the weight output by the operation signal processing unit 50 in accordance with the operation, outputs it to the correction unit 21, and the process returns to step S31.

  Therefore, in the correction parameter calculation process of FIG. 7, when the operation unit 2 is not operated, the correction weight is supplied to the correction unit 21, and when the operation unit 2 is operated, the weight corresponding to the operation signal is supplied to the correction unit 21. As a result, in the correction unit 21, the input signal is corrected by the correction weight when the operation unit 2 is not operated, and by the weight corresponding to the operation signal when it is operated.

  Furthermore, in the correction parameter calculation process of FIG. 7, in the case of the auto mode, the weight used for the correction process is obtained only from the input reliability and the output reliability, regardless of the operation of the operation unit 2; when not in the auto mode, the weight used for the correction process is obtained using the parameter control data obtained by learning in the control data learning process of FIG. 8.

  Next, the control data learning process will be described with reference to the flowchart of FIG.

  In the control data learning process, first, in step S41, the operation signal processing unit 50 determines whether a learning operation signal has been received from the operation unit 2; if it determines that no learning operation signal has been received, the process returns to step S41.

When it is determined in step S41 that a learning operation signal has been received from the operation unit 2, that is, when, for example, the operation of the operation unit 2 was started after an interval of a first time t1 or more since the previous operation, continued for a second time t2 or more, and was then stopped continuously for a third time t3 or more, so that it can be determined that the user operated the operation unit 2 so as to obtain a desired output signal, the process proceeds to step S42, where the teacher data generation unit 51 generates teacher data and the student data generation unit 52 generates student data.

That is, when the operation signal processing unit 50 receives a learning operation signal, it supplies the weight W corresponding to that learning operation signal (for example, to the operation amount of the operation unit 2 or the position of a knob or lever serving as the operation unit 2) to the teacher data generation unit 51 and the student data generation unit 52 together with a learning flag. When receiving the weight W with the learning flag, the teacher data generation unit 51 acquires the weight W as teacher data and supplies it to the learning data memory 53. When the student data generation unit 52 receives the weight with the learning flag, it obtains the weight w corresponding to the input signal at that time as student data and supplies it to the learning data memory 53.

Here, the weight w corresponding to the input signal means the weight automatically obtained from the input reliability and the output reliability according to equation (6); as described above, the student data generation unit 52 calculates this weight w corresponding to the input signal from the input signal.

The learning data memory 53 receives the teacher data W from the teacher data generation unit 51 and the student data w from the student data generation unit 52, stores the latest set of teacher data W and student data w in step S43, and the process proceeds to step S44.

  In step S44, the parameter control data calculation unit 54 performs addition in the least square method on the teacher data and the student data.

That is, the parameter control data calculation unit 54 performs the computations in equations (20) and (21) corresponding to the multiplication of the student data w_i and the teacher data W_i and its summation (Σw_iW_i), the summation of the student data w_i (Σw_i), the summation of the teacher data W_i (ΣW_i), and the summation of the squares of the student data w_i (Σw_i²).

Here, if, for example, N-1 sets of teacher data and student data have already been obtained and the Nth set of teacher data and student data is obtained as the latest teacher data and student data, the parameter control data calculation unit 54 has at that point already performed the addition for the N-1 sets of teacher data and student data. Accordingly, if the addition result already computed for the N-1 sets of teacher data and student data is retained, the addition result for the N sets of teacher data and student data, including the latest teacher data and student data, can be obtained by simply adding in the Nth set of teacher data and student data.

Therefore, the parameter control data calculation unit 54 stores the previous addition result as learning information in the learning information memory 55, and performs the addition for the Nth set of teacher data and student data using this learning information. The addition also requires the number N of sets of teacher data and student data used so far, and the learning information memory 55 stores this number of sets N as learning information as well.
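
As a concrete illustration, the Python sketch below mimics the "addition" of step S44, assuming that equations (20) and (21) are the standard least-squares solutions for the line W = aw + b (the sums enumerated above are exactly the sufficient statistics of that fit; the class name and structure are illustrative, not taken from the patent):

    class LineFitAccumulator:
        """Keeps only the running sums and the pair count N as learning
        information, so each new learning pair is folded in at O(1) cost
        (step S44)."""

        def __init__(self):
            self.n = 0          # number of learning pairs N
            self.sum_w = 0.0    # Σ w_i   (student data)
            self.sum_W = 0.0    # Σ W_i   (teacher data)
            self.sum_wW = 0.0   # Σ w_i W_i
            self.sum_w2 = 0.0   # Σ w_i²

        def add(self, w, W):
            """Add one learning pair (student data w, teacher data W)."""
            self.n += 1
            self.sum_w += w
            self.sum_W += W
            self.sum_wW += w * W
            self.sum_w2 += w * w

        def solve(self):
            """Return (a, b) of W = a*w + b, or None when the system is
            still underdetermined (fewer than two distinct pairs)."""
            denom = self.n * self.sum_w2 - self.sum_w ** 2
            if self.n < 2 or denom == 0.0:
                return None   # corresponds to falling back to auto mode
            a = (self.n * self.sum_wW - self.sum_w * self.sum_W) / denom
            b = (self.sum_W - a * self.sum_w) / self.n
            return a, b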

After performing the addition in step S44, the parameter control data calculation unit 54 stores the addition result in the learning information memory 55 in overwritten form as learning information, and the process proceeds to step S45.

In step S45, the parameter control data calculation unit 54 determines whether or not the parameter control data a and b can be obtained by equations (20) and (21) from the addition result stored as learning information in the learning information memory 55.

That is, if a set of teacher data and student data is hereinafter referred to as a learning pair as appropriate, the parameter control data a and b cannot be obtained from equations (20) and (21) unless learning information obtained from at least two learning pairs exists. Therefore, in step S45, it is determined whether or not the parameter control data a and b can be obtained from the learning information.

  If it is determined in step S45 that the parameter control data a and b cannot be obtained, the parameter control data calculation unit 54 supplies the fact to the determination control unit 56 and proceeds to step S49. In step S49, the determination control unit 56 supplies auto mode data representing the auto mode to the parameter control data memory 57 as parameter control data and stores it therein. Then, the process returns to step S41, and the same processing is repeated thereafter.

Therefore, when there is no learning information from which the parameter control data a and b can be obtained, the weight w(t) automatically obtained from the input reliability and the output reliability is used as it is for correcting the input signal x(t), as described with reference to FIG. 7.

On the other hand, when it is determined in step S45 that the parameter control data a and b can be obtained, the process proceeds to step S46, where the parameter control data calculation unit 54 obtains the parameter control data a and b by equations (20) and (21) using the learning information, supplies them to the determination control unit 56, and the process proceeds to step S47.

In step S47, the determination control unit 56 obtains, from each student data stored in the learning data memory 53, the predicted value of the corresponding teacher data according to the linear expression of equation (13) defined by the parameter control data a and b from the parameter control data calculation unit 54, and obtains the sum, represented by equation (15), of the square errors of the prediction errors of those predicted values (their errors with respect to the teacher data stored in the learning data memory 53). Further, the determination control unit 56 obtains a normalization error by dividing the sum of the square errors by, for example, the number of learning pairs stored in the learning data memory 53, and the process proceeds to step S48.

In step S48, the determination control unit 56 determines whether or not the normalization error is greater than (or not less than) a predetermined threshold value S1. When it is determined in step S48 that the normalization error is greater than the predetermined threshold value S1, that is, when the linear expression of equation (13) defined by the parameter control data a and b does not accurately approximate the relationship between the student data and the teacher data stored in the learning data memory 53, the process proceeds to step S49, where, as described above, the determination control unit 56 supplies auto mode data representing the auto mode to the parameter control data memory 57 as parameter control data and stores it there. Then, the process returns to step S41, and the same processing is repeated thereafter.
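
The test of steps S47 and S48 can be pictured with the short sketch below (the threshold used here is purely illustrative; the text does not give a concrete value for S1):

    def normalization_error(pairs, a, b):
        """Sum of the squared prediction errors of W = a*w + b over all
        stored learning pairs, divided by the number of pairs (step S47)."""
        total = sum((W - (a * w + b)) ** 2 for w, W in pairs)
        return total / len(pairs)

    # Step S48: if the fit is poor, revert to auto mode.
    pairs = [(0.2, 0.31), (0.5, 0.58), (0.8, 0.91)]   # (student w, teacher W)
    S1 = 0.05                                         # illustrative threshold
    # a = 1.0, b = 0.1 would come from the learned fit (step S46).
    fall_back_to_auto_mode = normalization_error(pairs, 1.0, 0.1) > S1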

Therefore, even when the parameter control data a and b can be obtained, if the linear expression of equation (13) defined by the parameter control data a and b does not approximate the relationship between the student data and the teacher data stored in the learning data memory 53 with high accuracy, the weight w(t) automatically obtained from the input reliability and the output reliability is used as it is for correcting the input signal x(t), just as in the case where there is not enough learning information to obtain the parameter control data a and b.

On the other hand, when it is determined in step S48 that the normalization error is not greater than the predetermined threshold value S1, that is, when the linear expression of equation (13) defined by the parameter control data a and b approximates the relationship between the student data and the teacher data stored in the learning data memory 53 with high accuracy, the process proceeds to step S50, where the determination control unit 56 obtains the error (distance) ε between the regression line represented by the linear expression of equation (13) defined by the parameter control data a and b from the parameter control data calculation unit 54 and the point defined by the latest teacher data and student data stored in the learning data memory 53.

Then, the process proceeds to step S51, where the determination control unit 56 determines whether or not the magnitude of the error ε is greater than (or not less than) a predetermined threshold value S2; if it is determined that it is not greater, step S52 is skipped, and in step S53 the determination control unit 56 outputs the parameter control data a and b obtained in step S46 to the parameter control data memory 57. The parameter control data memory 57 stores the parameter control data a and b from the determination control unit 56 in overwritten form, and the process returns to step S41.

On the other hand, when it is determined in step S51 that the magnitude of the error ε is greater than the predetermined threshold value S2, the process proceeds to step S52, where the determination control unit 56 controls the parameter control data calculation unit 54 to recalculate the parameter control data a and b using only a predetermined number of recent learning pairs, counting back from the latest teacher data and student data stored in the learning data memory 53 (and without using the learning information in the learning information memory 55). Then, the process proceeds to step S53, where the determination control unit 56 outputs the parameter control data a and b obtained in step S52 to the parameter control data memory 57, stores them in overwritten form, and the process returns to step S41.

Accordingly, when the parameter control data a and b can be obtained and the linear expression of equation (13) defined by them approximates the relationship between the student data and the teacher data stored in the learning data memory 53 with high accuracy, the weight w(t) obtained from the input reliability and the output reliability is corrected according to equation (13), defined by the parameter control data a and b obtained by learning using the learning pairs obtained based on the user's operation of the operation unit 2, and the correction weight W obtained by that correction is used for correcting the input signal x(t).

Here, as shown in FIG. 9A, the regression line represented by the linear expression of equation (13) defined by the parameter control data a and b obtained in step S46 is the straight line that minimizes the sum of the square errors with respect to the N points defined by the N sets of teacher data and student data, and in step S50 the error ε between this straight line and the point defined by the latest teacher data and student data is obtained.

When the magnitude of the error ε is not greater than the threshold value S2, the regression line represented by the linear expression of equation (13) defined by the parameter control data a and b obtained in step S46 is considered to approximate relatively accurately all the points defined by the teacher data and student data given so far, including the point defined by the latest teacher data and student data.

However, when the magnitude of the error ε is greater than the threshold value S2, that is, when the point defined by the latest teacher data and student data (indicated by a circle in FIG. 9B) is far away from the regression line represented by the linear expression of equation (13) defined by the parameter control data a and b obtained in step S46, as shown in FIG. 9B, it is considered that the user has, for some reason, performed an operation of the operation unit 2 with a tendency different from before.

Therefore, in this case, the determination control unit 56 controls the parameter control data calculation unit 54 so that, in step S52, the parameter control data a and b are recalculated using only some recent learning pairs among the learning pairs stored in the learning data memory 53.

That is, in this case, the parameter control data calculation unit 54 recalculates the parameter control data a and b defining the straight line of equation (13) that best approximates the group of points defined by some recent sets of teacher data and student data, using only those sets and without using (that is, forgetting) the learning information as the past addition results stored in the learning information memory 55.

Specifically, as shown in FIG. 9C, the parameter control data calculation unit 54 obtains, for example, parameter control data a' and b' defining the straight line passing through the point defined by the latest teacher data and student data (indicated by a circle in FIG. 9C) and the point defined by the teacher data and student data given one time before (indicated by a triangle in FIG. 9C).
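
A minimal sketch of steps S50 and S52, assuming the usual point-to-line distance for ε and the two-point refit illustrated in FIG. 9C:

    import math

    def point_to_line_error(a, b, w0, W0):
        """Distance ε between the regression line W = a*w + b and the
        point (w0, W0) defined by the latest learning pair (step S50)."""
        return abs(a * w0 - W0 + b) / math.sqrt(a * a + 1.0)

    def refit_from_two_points(latest, previous):
        """Step S52 as in FIG. 9C: forget the accumulated learning
        information and define the line through the latest point and
        the one given immediately before it (assumes distinct w)."""
        (w1, W1), (w0, W0) = latest, previous
        a = (W1 - W0) / (w1 - w0)
        b = W1 - a * w1
        return a, b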

As described above, it is determined whether or not the operation signal supplied in accordance with the user's operation is a learning operation signal that can be used for learning, and when it is, the parameter control data a and b for correcting the weight used for correcting the input signal are learned based on that learning operation signal. The user's operation can thus be learned without the user's knowledge, and based on the learning result, processing that is gradually more appropriate for the user is performed, until finally processing optimal for the user is performed.

From the user's side, the fact that an optimum noise removal result for the user comes to be obtained for various input signals without any special operation, while the user merely operates the operation unit 2 as usual, means that the device gradually "breaks in" to the user's hand. At that stage, as the user operates the operation unit 2 so as to obtain a desired output signal, the user gradually comes to clearly recognize the relationship between the operation of the operation unit 2 and the weight W used for correcting the input signal, so that finally the user's operation of the operation unit 2 and the weight W used for correcting the input signal become qualitatively related.

In the NR circuit of FIG. 4, the weight W used in the correction process (FIG. 6) performed by the correction unit 21 is changed in accordance with the user's operation of the operation unit 2 so that a desired output signal for the user is obtained. That is, when the user operates the operation unit 2, the operation signal processing unit 50 outputs the weight represented by the operation signal corresponding to that operation, and the selection unit 41 selects that weight and supplies it to the correction unit 21. In this case, the correction unit 21 performs the correction process represented by equation (8) using the weight corresponding to the user's operation. When the weight w(t) of equation (8) is changed by the user's operation, naturally the content of the process (correction process) represented by equation (8) is also changed; thus, in the NR circuit of FIG. 4, it can be said that the "content of processing" is changed in accordance with the user's operation so that a desired output signal for the user is obtained.

Further, in the NR circuit of FIG. 4, when the parameter control data a and b cannot be obtained, or when they can be obtained but the linear expression of equation (13) defined by them does not accurately approximate the relationship between the student data and the teacher data stored in the learning data memory 53, the weight automatically obtained from the input reliability and the output reliability is used for the correction process by the correction unit 21. On the other hand, when the parameter control data a and b can be obtained and the linear expression of equation (13) defined by them approximates the relationship between the student data and the teacher data stored in the learning data memory 53 with high accuracy, the weight obtained from the input reliability and the output reliability is corrected according to equation (13), defined by the parameter control data a and b obtained by learning using the learning pairs obtained based on the user's operation of the operation unit 2, and the correction weight obtained by that correction is used for the correction process by the correction unit 21.

That is, in the NR circuit of FIG. 4, when a sufficient number of learning pairs have not been input from the user, or when learning pairs that can be approximated with high accuracy have not been input, the weight automatically obtained from the input reliability and the output reliability is used for the correction process in the correction unit 21; when learning pairs that can be approximated with high accuracy have been input from the user, the correction weight obtained from the parameter control data a and b learned using those learning pairs is used for the correction process in the correction unit 21.

Therefore, between the case where a sufficient number of learning pairs or learning pairs that can be approximated with high accuracy have not been obtained and the case where learning pairs that can be approximated with high accuracy have been obtained, the weight used in equation (8) changes, and as a result the content of the correction process represented by equation (8) also changes. From this viewpoint as well, it can be said that in the NR circuit of FIG. 4 the "content of processing" is changed in accordance with the user's operation so that a desired output signal for the user is obtained.

Furthermore, in the NR circuit of FIG. 4, the system for calculating the weight used for the correction process itself changes between the case where a sufficient number of learning pairs or learning pairs that can be approximated with high accuracy have not been obtained and the case where learning pairs that can be approximated with high accuracy have been obtained.

  That is, when a sufficient number of learning pairs or learning pairs that can be approximated with high accuracy are not obtained, the weight is obtained from the input reliability and the output reliability regardless of the user's operation. On the other hand, when a learning pair that can be approximated with high accuracy is obtained, a weight is obtained based on parameter control data obtained by learning using a learning pair obtained based on a user operation.

  Therefore, in this case, it can be said that the processing system for calculating the weight, that is, the algorithm for calculating the weight is changed so as to obtain a desired output signal for the user in accordance with the user's operation.

Here, if the process for obtaining the weight is represented by a function F, the above-described change in the "content of processing" corresponds to the function F being changed. A change of the function F includes the case where the form of the function F itself changes (for example, F = x changing to F = x²) and the case where the coefficients defining the function F change (for example, F = 2x changing to F = 3x).

If, among the changes in the "content of processing", a change of the function F itself representing the process is called a change in the "structure of processing", then the change in the processing system for calculating the weight described above, that is, the change in the algorithm for obtaining the weight, can be said to be a change in the "structure of processing".

Therefore, in the NR circuit of FIG. 4, the "content of processing" and, further, the "structure of processing" are changed in accordance with the user's operation, so that a desired output signal for the user is obtained.

As the input signal, not only image signals and audio signals but also other signals can be used. When the input signal is an image signal, the input reliability is calculated based on the variance obtained from a plurality of pixels that are spatially or temporally close to the pixel to be processed.

Further, in the above-described case, to simplify the description, the weight w obtained in the learning unit 22 from the input reliability and the like is corrected into the correction weight W by the linear expression of equation (13) defined by the parameter control data a and b; in practice, however, the correction of the weight w is preferably performed by a higher-order expression. It is also desirable to set the order of that higher-order expression to an appropriate value based on, for example, the application to which the optimization apparatus is applied.

Furthermore, as expressions for calculating the correction weight W from the weight w (hereinafter referred to as correction weight calculation expressions as appropriate), a plurality of expressions may be prepared in advance, for example the quadratic expression W = aw² + bw + c and the cubic expression W = aw³ + bw² + cw + d (where a, b, c, and d are predetermined coefficients) in addition to the linear expression W = aw + b of equation (13), and the one that minimizes the normalization error among the plurality of correction weight calculation expressions may be adopted. In this case, the correction weight calculation expression that minimizes the normalization error obtained from the learning pairs obtained by the user's operation is selected, and the correction weight is obtained by the selected expression. That is, the algorithm for obtaining the correction weight is changed in accordance with the user's operation, so in this case as well it can be said that the "structure of processing" is changed in accordance with the user's operation.
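
A sketch of that algorithm switch, assuming the candidates are ordinary polynomials fitted by least squares (numpy's polyfit is used here purely for illustration):

    import numpy as np

    def select_correction_formula(w_student, W_teacher, max_degree=3):
        """Fit candidate correction weight calculation expressions of
        degree 1 (W = aw + b), 2 and 3 to the learning pairs and keep
        the one with the smallest normalization error."""
        w = np.asarray(w_student, dtype=float)
        W = np.asarray(W_teacher, dtype=float)
        best = None
        for degree in range(1, max_degree + 1):
            if len(w) <= degree:        # not enough pairs for this degree
                continue
            coeffs = np.polyfit(w, W, degree)
            err = np.mean((np.polyval(coeffs, w) - W) ** 2)
            if best is None or err < best[1]:
                best = (coeffs, err)
        return best   # (coefficients, normalization error) or None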

  Next, FIG. 10 shows another detailed configuration example when the processing unit 11 of the optimization apparatus 1 of FIG. 4 is applied to an NR circuit. In the figure, portions corresponding to those in FIG. 4 are denoted by the same reference numerals, and description thereof will be omitted below as appropriate. That is, the NR circuit of FIG. 10 is not provided with the weight correction unit 46, and instead of the input reliability calculation unit 42 and the student data generation unit 52, an input reliability calculation unit 61 and a student data generation unit 62 are provided. Other than that, the configuration is basically the same as in the case of FIG.

The input reliability calculation unit 61 calculates the input reliability of the input signal from a plurality of samples of the input signal and the parameter control data stored in the parameter control data memory 57, and supplies it to the output reliability calculation unit 43 and the weight calculation unit 45.

  The student data generation unit 62 acquires the input signal and the output reliability output from the output reliability calculation unit 43 as student data, and supplies it to the learning data memory 53.

In the embodiment of FIG. 10, since the weight correction unit 46 is not provided, the weight obtained by the weight calculation unit 45 is supplied to the selection unit 41 as it is, and the selection unit 41 is configured to select and output either the weight output by the weight calculation unit 45 or the weight output by the operation signal processing unit 50, in the same manner as in FIG. 4.

  In the embodiment of FIG. 10, the parameter control data functions as data for controlling the input reliability.

Also in the NR circuit of FIG. 10, the correction process, the correction parameter calculation process, and the control data learning process are performed as in the NR circuit of FIG. 4. Since the correction process is the same as the process described with reference to FIG. 6, its description is omitted for the NR circuit of FIG. 10, and the correction parameter calculation process and the control data learning process are described below.

That is, in the NR circuit of FIG. 10, the correction parameter calculation process and the control data learning process are performed on the assumption that the input reliability α_x(t), which defines the weight shown in equation (6) used in the correction process, is defined by, for example, the following expression.

α_x(t) = a_1x_1 + a_2x_2 + ··· + a_Nx_N ... (22)

In equation (22), a_1, a_2, ..., a_N are the parameter control data, and x_1, x_2, ..., x_N are samples of the input signal having a predetermined relationship with the sample of the input signal about to be processed (the sample of interest). Here, when the input signal is an image signal, for example, pixels that are spatially or temporally close (indicated by circles in FIG. 11) to the pixel serving as the sample of interest (indicated by × in FIG. 11) can be used as x_1, x_2, ..., x_N.

  From equation (22), the weight w (t) given by equation (6) can be expressed as shown in equation (23).

w(t) = (a_1x_1 + a_2x_2 + ··· + a_Nx_N) / (α_y(t-1) + a_1x_1 + a_2x_2 + ··· + a_Nx_N) ... (23)

Therefore, in order for the weight W given by the user to be obtained when the input signals x_1, x_2, ..., x_N are input, it suffices, from equation (23), to obtain parameter control data a_1, a_2, ..., a_N satisfying the following expression.

W = (a_1x_1 + a_2x_2 + ··· + a_Nx_N) / (α_y(t-1) + a_1x_1 + a_2x_2 + ··· + a_Nx_N) ... (24)

  Thus, by transforming equation (24), equation (25) can be obtained.

a_1x_1 + a_2x_2 + ··· + a_Nx_N = (W / (1 - W)) α_y(t-1) ... (25)

It is generally difficult to obtain parameter control data a_1, a_2, ..., a_N that always satisfy equation (25); therefore, here, consider obtaining by the method of least squares the parameter control data a_1, a_2, ..., a_N that minimize, for example, the sum of the square errors between the left side and the right side of equation (25).

Here, minimizing the sum of the square errors between the left side and the right side of equation (25) is equivalent to minimizing the square error between the weight w(t) given by equation (23) and the weight W given by the user. That is, with the weight W given by the user as teacher data, and with the input signals x_1, x_2, ..., x_N and the output reliability α_y(t-1) that define the weight w(t) of equation (23) as student data, obtaining the parameter control data a_1, a_2, ..., a_N that minimize the square error between the weight w(t) calculated by equation (23) from the student data and the weight W as the teacher data given by the user makes the error of that weight w(t) with respect to the teacher data W small.

The square error e² between the left side and the right side of equation (25) is given by equation (26).

e² = {(a_1x_1 + a_2x_2 + ··· + a_Nx_N) - (W / (1 - W)) α_y(t-1)}² ... (26)

The parameter control data a_1, a_2, ..., a_N that minimize the square error e² are given by the condition that the partial derivatives of the square error e² of equation (26) with respect to each of a_1, a_2, ..., a_N become 0, that is, by the following equation.

∂e²/∂a_1 = ∂e²/∂a_2 = ··· = ∂e²/∂a_N = 0 ... (27)

  By substituting equation (26) into equation (27) and calculating, equation (28) is obtained.

Σ x_i (a_1x_1 + a_2x_2 + ··· + a_Nx_N) = Σ (W / (1 - W)) α_y(t-1) x_i   (i = 1, 2, ···, N) ... (28)

Therefore, if the matrices X, A, and Y are defined as shown in equation (29), then from equation (28) the relationship of equation (30) holds for these matrices X, A, and Y.

X = [ Σx_1x_1 Σx_1x_2 ··· Σx_1x_N ]
    [ Σx_2x_1 Σx_2x_2 ··· Σx_2x_N ]
    [   ···     ···   ···   ···   ]
    [ Σx_Nx_1 Σx_Nx_2 ··· Σx_Nx_N ]

A = ( a_1, a_2, ···, a_N )ᵀ

Y = ( Σ(W/(1-W))α_y(t-1)x_1, Σ(W/(1-W))α_y(t-1)x_2, ···, Σ(W/(1-W))α_y(t-1)x_N )ᵀ ... (29)

XA = Y ... (30)

Note that the summation (Σ) in equation (29) means summation over the sets of the input signals x_1 to x_N and the weight W given by the user when those input signals x_1 to x_N were input.

Equation (30) can be solved for the matrix (vector) A, that is, for the parameter control data a_1, a_2, ..., a_N, by, for example, the Cholesky method.
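
For illustration, the sketch below builds the matrices of equation (29) from stored learning pairs and solves equation (30) by a Cholesky factorization as the text suggests (function and variable names are illustrative, and the right-hand side follows the form of equation (25) reconstructed above):

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    def solve_parameter_control_data(samples, weights, alpha_y_prev):
        """samples: list of input-sample vectors (x_1 ... x_N),
        weights: the user-given weights W (each != 1),
        alpha_y_prev: the output reliabilities α_y(t-1).
        Returns A = (a_1 ... a_N) solving X A = Y."""
        N = len(samples[0])
        X = np.zeros((N, N))
        Y = np.zeros(N)
        for x, W, ay in zip(samples, weights, alpha_y_prev):
            x = np.asarray(x, dtype=float)
            X += np.outer(x, x)              # Σ x_i x_j
            Y += (W / (1.0 - W)) * ay * x    # Σ (W/(1-W)) α_y(t-1) x_i
        # X is symmetric positive semidefinite; with enough learning
        # pairs it is positive definite and Cholesky factorization applies.
        return cho_solve(cho_factor(X), Y)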

As described above, the NR circuit of FIG. 10 performs control data learning processing for learning the parameter control data a_1, a_2, ..., a_N that minimize the square error between the weight w(t) calculated by equation (23) from the student data and the weight W as teacher data given by the user, with the weight W given by the user as teacher data and with the input signals x_1, x_2, ..., x_N and the output reliability α_y(t-1) that define the weight w(t) of equation (23) as student data. Further, the NR circuit of FIG. 10 performs correction parameter calculation processing for obtaining the input reliability α_x(t) from equation (22) defined by the parameter control data a_1 to a_N, and then obtaining, according to equation (23), the weight as the correction parameter from that input reliability α_x(t) and the output reliability α_y(t-1).

  Therefore, with reference to the flowchart of FIG. 12, the correction parameter calculation processing by the NR circuit of FIG. 10 will be described.

In the correction parameter calculation process, first, in step S61, the input reliability calculation unit 61 reads the parameter control data from the parameter control data memory 57, and the process proceeds to step S62. In step S62, the input reliability calculation unit 61 determines whether or not the parameter control data read from the parameter control data memory 57 is auto mode data representing a mode in which the input reliability is obtained without using the parameter control data, that is, a mode in which the input reliability is obtained automatically from the input signal alone, regardless of the user's operation of the operation unit 2 (hereinafter also referred to as the auto mode as appropriate).

If it is determined in step S62 that the parameter control data is not auto mode data, the process proceeds to step S63, where the input reliability calculation unit 61 obtains the input reliability α_x(t) according to equation (22), defined by the parameter control data a_1 to a_N read from the parameter control data memory 57, using the latest N input signal samples x_1 to x_N, supplies it to the output reliability calculation unit 43 and the weight calculation unit 45, and the process proceeds to step S65.

If it is determined in step S62 that the parameter control data is auto mode data, the process proceeds to step S64, where the input reliability calculation unit 61 obtains the input reliability α_x(t) based on, for example, the variance of the input signal alone, as in step S31 of FIG. 7, and supplies it to the output reliability calculation unit 43 and the weight calculation unit 45.

In step S65, the weight calculation unit 45 obtains the weight w(t) according to equation (23), using the input reliability α_x(t) from the input reliability calculation unit 61 and the output reliability α_y(t-1) output one sample before by the output reliability calculation unit 43 and latched in the latch circuit 44. This weight w(t) is supplied from the weight calculation unit 45 to the selection unit 41.

Thereafter, the process proceeds to step S66, where, as in step S37 of FIG. 7, the output reliability calculation unit 43 updates the output reliability α_y(t) by adding, according to equation (5), the input reliability α_x(t) supplied from the input reliability calculation unit 61 and the output reliability α_y(t-1) one sample before, latched by the latch circuit 44, and stores it in the latch circuit 44 in overwritten form.
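
Steps S65 and S66 amount to one step of a recursive update. The sketch below assumes the standard forms of the referenced equations: α_y(t) = α_y(t-1) + α_x(t) for equation (5), w(t) = α_x(t)/(α_y(t-1) + α_x(t)) for the weight, and y(t) = (1 - w(t))y(t-1) + w(t)x(t) for the correction of equation (8):

    def nr_step(x_t, y_prev, alpha_x_t, alpha_y_prev):
        """One recursive NR update: returns the corrected output y(t),
        the updated output reliability α_y(t), and the weight w(t)."""
        w_t = alpha_x_t / (alpha_y_prev + alpha_x_t)
        y_t = (1.0 - w_t) * y_prev + w_t * x_t
        alpha_y_t = alpha_y_prev + alpha_x_t
        return y_t, alpha_y_t, w_t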

In step S67, the selection unit 41 determines from the output of the operation signal processing unit 50 whether the operation unit 2 has been operated by the user. If it is determined in step S67 that the operation unit 2 has not been operated, the process proceeds to step S68, where the selection unit 41 selects the weight supplied from the weight calculation unit 45, outputs it to the correction unit 21, and the process returns to step S61.

If it is determined in step S67 that the operation unit 2 has been operated, the process proceeds to step S69, where the selection unit 41 selects the weight output by the operation signal processing unit 50 in accordance with that operation, outputs it to the correction unit 21, and the process returns to step S61.

Therefore, in the correction parameter calculation process of FIG. 12, when the operation unit 2 is not operated, the weight calculated based on the input reliability is supplied to the correction unit 21, and when the operation unit 2 is operated, the weight corresponding to the operation signal is supplied to the correction unit 21. As a result, in the correction unit 21, the input signal is corrected by the weight based on the input reliability when the operation unit 2 is not operated, and by the weight corresponding to the operation signal when the operation unit 2 is operated.

Furthermore, in the correction parameter calculation process of FIG. 12, in the auto mode, the weight used for the correction process is obtained from the input reliability based on the variance of the input signal, regardless of the operation of the operation unit 2; when not in the auto mode, the weight used for the correction process is obtained from the input reliability calculated using the parameter control data obtained by learning in the control data learning process of FIG. 13.

  Next, the control data learning process performed by the NR circuit of FIG. 10 will be described with reference to the flowchart of FIG.

In the control data learning process, first, in step S71, as in step S41 of FIG. 8, the operation signal processing unit 50 determines whether or not a learning operation signal has been received from the operation unit 2; if it is determined that it has not been received, the process returns to step S71.

When it is determined in step S71 that a learning operation signal has been received from the operation unit 2, that is, when, for example, the operation of the operation unit 2 was started after an interval of a first time t1 or more since the previous operation, continued for a second time t2 or more, and was then stopped continuously for a third time t3 or more, so that it can be determined that the user operated the operation unit 2 so as to obtain a desired output signal, the process proceeds to step S72, where the teacher data generation unit 51 generates teacher data and the student data generation unit 62 generates student data.

  That is, when receiving the learning operation signal, the operation signal processing unit 50 supplies the weight W corresponding to the learning operation signal to the teacher data generation unit 51 and the student data generation unit 62 together with the learning flag. When receiving the weight W with the learning flag, the teacher data generation unit 51 acquires the weight W as teacher data and supplies it to the learning data memory 53.

On the other hand, the student data generation unit 62 has a built-in buffer (not shown) for buffering the input signal, and always stores the most recent input signal in the buffer up to its storage capacity; when it receives the weight with the learning flag, it reads from the built-in buffer the input signal samples x_1 to x_N having the predetermined positional relationship with the input signal sample being input at that time. Further, the student data generation unit 62 reads the output reliability α_y(t-1) from the output reliability calculation unit 43. Then, the student data generation unit 62 supplies these input signal samples x_1 to x_N and the output reliability α_y(t-1) to the learning data memory 53 as student data.
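
The exact positional relationship of FIG. 11 is not spelled out here; as one plausible reading, the sketch below simply gathers a square spatial neighborhood around the sample of interest as x_1 to x_N:

    import numpy as np

    def extract_student_samples(frame, row, col, radius=1):
        """Gather the pixels spatially close to the sample of interest
        at (row, col), clipped at the frame border (illustrative only)."""
        height, width = frame.shape
        r0, r1 = max(0, row - radius), min(height, row + radius + 1)
        c0, c1 = max(0, col - radius), min(width, col + radius + 1)
        return frame[r0:r1, c0:c1].ravel()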

When the learning data memory 53 receives the teacher data W from the teacher data generation unit 51 and the student data x_1 to x_N and α_y(t-1) from the student data generation unit 62, it stores, in step S73, the latest set (learning pair) of teacher data W and student data x_1 to x_N and α_y(t-1), and the process proceeds to step S74.

  In step S74, the parameter control data calculation unit 54 performs addition in the least square method for the teacher data and the student data.

That is, the parameter control data calculation unit 54 performs the computations of the products of the student data, the products of the student data and the teacher data, and their summations, which correspond to the elements of the matrices X and Y in equation (29).

The addition in step S74 is performed in the same manner as in step S44 of FIG. 8. That is, the previous addition result is stored as learning information in the learning information memory 55, and the parameter control data calculation unit 54 performs the addition for the latest teacher data and student data using this learning information.

After performing the addition in step S74, the parameter control data calculation unit 54 stores the addition result in the learning information memory 55 in overwritten form as learning information, and the process proceeds to step S75, where the parameter control data calculation unit 54 determines whether or not equation (30) can be solved for the matrix A from the addition result stored as learning information in the learning information memory 55, that is, whether or not the parameter control data a_1 to a_N can be obtained.

That is, unless learning information obtained from a predetermined number or more of learning pairs exists, equation (30) cannot be solved for the matrix A, and the parameter control data a_1 to a_N that are its elements cannot be obtained. Therefore, in step S75, it is determined whether or not the parameter control data a_1 to a_N can be obtained from the learning information.

If it is determined in step S75 that the parameter control data a 1 to a N cannot be obtained, the parameter control data calculation unit 54 supplies the fact to the determination control unit 56 and proceeds to step S79. In step S79, the determination control unit 56 supplies auto mode data representing the auto mode as parameter control data to the parameter control data memory 57 for storage. Then, the process returns to step S71, and the same processing is repeated thereafter.

Accordingly, when there is no learning information from which the parameter control data a_1 to a_N can be obtained, the weight obtained from the input reliability based on the variance of the input signal is used for correcting the input signal x(t), as described with reference to FIG. 12.

On the other hand, when it is determined in step S75 that the parameter control data can be obtained, the process proceeds to step S76, where the parameter control data calculation unit 54 solves equation (30) for the matrix A using the learning information, thereby obtaining the parameter control data a_1 to a_N as its elements, supplies them to the determination control unit 56, and the process proceeds to step S77.

In step S77, the determination control unit 56 obtains, from each student data stored in the learning data memory 53, the predicted value of the corresponding teacher data according to equation (23) defined by the parameter control data a_1 to a_N from the parameter control data calculation unit 54, and obtains the sum, represented by equation (26), of the square errors of the prediction errors of those predicted values (their errors with respect to the teacher data stored in the learning data memory 53). Further, the determination control unit 56 obtains a normalization error by dividing the sum of the square errors by, for example, the number of learning pairs stored in the learning data memory 53, and the process proceeds to step S78.

In step S78, the determination control unit 56 determines whether or not the normalization error is greater than (or not less than) a predetermined threshold value S1. When it is determined in step S78 that the normalization error is greater than the predetermined threshold value S1, that is, when equation (23) defined by the parameter control data a_1 to a_N does not accurately approximate the relationship between the student data and the teacher data stored in the learning data memory 53, the process proceeds to step S79, where, as described above, the determination control unit 56 supplies auto mode data representing the auto mode to the parameter control data memory 57 as parameter control data and stores it there. Then, the process returns to step S71, and the same processing is repeated thereafter.

Therefore, even when the parameter control data a_1 to a_N can be obtained, if equation (23) defined by the parameter control data a_1 to a_N does not approximate the relationship between the student data and the teacher data stored in the learning data memory 53 with high accuracy, the weight calculated from the input reliability based on the variance of the input signal is used for correcting the input signal x(t), just as in the case where there is no learning information from which the parameter control data a_1 to a_N can be obtained.

On the other hand, when it is determined in step S78 that the normalization error is not greater than the predetermined threshold value S1, that is, when equation (23) defined by the parameter control data a_1 to a_N approximates the relationship between the student data and the teacher data stored in the learning data memory 53 with high accuracy, the process proceeds to step S80, where the determination control unit 56 obtains the error (distance) ε between the surface (line) of equation (23), defined by the parameter control data a_1 to a_N from the parameter control data calculation unit 54, and the point defined by the latest teacher data and student data stored in the learning data memory 53.

Then, the process proceeds to step S81, where the determination control unit 56 determines whether or not the magnitude of the error ε is greater than (or not less than) the predetermined threshold value S2; if it is determined that it is not greater, step S82 is skipped, and in step S83 the determination control unit 56 outputs the parameter control data a_1 to a_N obtained in step S76 to the parameter control data memory 57. The parameter control data memory 57 stores the parameter control data a_1 to a_N from the determination control unit 56 in overwritten form, the process returns to step S71, and the same processing is repeated thereafter.

On the other hand, if it is determined in step S81 that the magnitude of the error ε is greater than the predetermined threshold value S2, the process proceeds to step S82, where the determination control unit 56 controls the parameter control data calculation unit 54 to recalculate the parameter control data a_1 to a_N using only the latest teacher data and student data stored in the learning data memory 53. Then, the process proceeds to step S83, where the determination control unit 56 outputs the parameter control data a_1 to a_N obtained in step S82 to the parameter control data memory 57, stores them in overwritten form, and the process returns to step S71.

That is, in the embodiment of FIG. 13, as in the embodiment of FIG. 8, the error ε between the surface of equation (23), defined by the parameter control data a_1 to a_N obtained from the teacher data and student data given so far, and the point defined by the latest teacher data and student data is obtained in step S80.

If the magnitude of the error ε is not greater than the threshold value S2, the surface of equation (23) defined by the parameter control data a_1 to a_N obtained in step S76 is considered to approximate relatively accurately all the points defined by the teacher data and student data given so far, including the point defined by the latest teacher data and student data, so these parameter control data a_1 to a_N are stored in the parameter control data memory 57.

On the other hand, when the magnitude of the error ε is greater than the threshold value S2, the point defined by the latest teacher data and student data is considered to be relatively far away from the surface of equation (23) defined by the parameter control data a_1 to a_N obtained in step S76, so the determination control unit 56 controls the parameter control data calculation unit 54 to recalculate, in step S82, the parameter control data a_1 to a_N using only the latest teacher data and student data stored in the learning data memory 53.

In the NR circuit of FIG. 10, the input reliability calculation unit 61 then calculates the input reliability α_x(t) according to equation (22) from the parameter control data a_1 to a_N obtained as described above.

Accordingly, in this case as well, learning of the parameter control data a_1 to a_N that define the input reliability α_x(t) of equation (22) is performed based on the learning operation signal supplied in accordance with the user's operation; as a result, the user's operation can be learned without the user's knowledge, and furthermore the learning result can be used to perform processing optimal for the user.

  Similarly to the NR circuit of FIG. 4, when the user operates the operation unit 2, the operation signal processing unit 50 outputs the weight represented by the operation signal corresponding to the operation, and the selection unit 41 selects the weight and supplies it to the correction unit 21. In this case, the correction unit 21 performs the correction process represented by Expression (8) using the weight corresponding to the user's operation. And when the weight w (t) of Formula (8) is changed by a user's operation, naturally the content of the process (correction process) represented by Formula (8) will also be changed. In the NR circuit of FIG. 10 as well, it can be said that the “contents of processing” are changed so as to obtain a desired output signal for the user in accordance with the user's operation.

Furthermore, in the NR circuit of FIG. 10, when the parameter control data a_1 to a_N cannot be obtained, or when they can be obtained but equation (23) defined by them does not accurately approximate the relationship between the student data and the teacher data stored in the learning data memory 53, the weight obtained from the input reliability based on the variance of the input signal is used for the correction process by the correction unit 21. On the other hand, when the parameter control data a_1 to a_N can be obtained and equation (23) defined by them approximates the relationship between the student data and the teacher data stored in the learning data memory 53 with high accuracy, the weight obtained, according to equation (23) defined by the parameter control data a_1 to a_N learned using the learning pairs obtained based on the user's operation of the operation unit 2, from the input reliability (calculated from the input signal and the parameter control data a_1 to a_N) and the output reliability is used for the correction process by the correction unit 21.

That is, in the NR circuit of FIG. 10, as in the NR circuit of FIG. 4, the algorithm for calculating the weight used for the correction process is changed between the case where a sufficient number of learning pairs or learning pairs that can be approximated with high accuracy have not been obtained and the case where learning pairs that can be approximated with high accuracy have been obtained.

Accordingly, in the NR circuit of FIG. 10 as well, the "content of processing" and, further, the "structure of processing" are changed in accordance with the user's operation, whereby a desired output signal for the user is output.

In the case described above, the output reliability α_y(t-1) is used as student data to obtain the parameter control data a_1 to a_N. This output reliability α_y(t-1) is obtained from the input reliability, as shown in equation (5); and since the input reliability α_x(t) gradually improves so that the weight desired by the user is obtained as the control data learning process of FIG. 13 is performed, the output reliability α_y(t-1) also improves accordingly.

In the above case, the output reliability is treated as a known value, the input reliability is defined by the parameter control data a_1 to a_N, and the parameter control data a_1 to a_N with which the weight desired by the user is obtained are determined; conversely, it is also possible to treat the input reliability as a known value, define the output reliability by the parameter control data a_1 to a_N, and obtain the parameter control data a_1 to a_N with which the weight desired by the user is obtained.

Furthermore, it is also possible, for example, to treat the output reliability as a known value, define the input reliability by the parameter control data a_1 to a_N, and obtain the parameter control data a_1 to a_N with which the weight desired by the user is obtained, and then, treating the input reliability obtained from those parameter control data a_1 to a_N as a known value, define the output reliability by parameter control data a_1' to a_N' and obtain the parameter control data a_1' to a_N' with which the weight desired by the user is obtained, that is, to obtain two sets of parameter control data, a_1 to a_N and a_1' to a_N'.

In the above case, the weight is defined by the input reliability α_x(t) and the output reliability α_y(t-1) as shown in equation (6), and the parameter control data a_1 to a_N are obtained on that basis; however, the weight may also be defined in other ways, for example, as shown in equation (31), using not only the input reliability α_x(t) and the output reliability α_y(t-1) but also a correction term Δα for the input reliability α_x(t) or the output reliability α_y(t-1), in which case it is possible to obtain the parameter control data a_1 to a_N together with the correction term Δα.

w(t) = (α_x(t) + Δα) / (α_y(t-1) + α_x(t) + Δα) ... (31)

  Furthermore, the formula that defines the input reliability by the parameter control data is not limited to the formula (22).

Next, FIG. 14 shows a second detailed configuration example of the optimization apparatus of FIG. 1. In the optimization apparatus 1 of FIG. 14, an internal information generation unit 71 is newly provided in the optimization apparatus 1 of FIG. 2; the configuration of the processing unit 11 is the same as that in FIG. 2, and its description is omitted. In the embodiment of FIG. 14, a display unit 81 is provided outside the optimization device 1.

The internal information generation unit 71 reads out internal information of the processing unit 11, converts it into image information, and outputs it to a display unit 81 made up of an LCD (Liquid Crystal Display), CRT (Cathode Ray Tube), or the like for display (presentation). More specifically, the display unit 81 may display the internal information numerically as it is, or a display screen such as a level gauge may be set up and the level gauge displayed varying according to the value of the internal information. The display unit 81 is not limited to these; any other display method may be used as long as the internal information is visually displayed (presented). As the internal information, for example, the weights stored in the weight memories 31 and 32 of the correction unit 21, or the data stored in the learning data memory 53 and the learning information memory 55 of the learning unit 22, can be used. Further, the internal information may be presented to the user by a presentation method other than display, that is, by sound or the like.

Next, the optimization processing of the NR circuit of FIG. 14 will be described with reference to the flowchart of FIG. 15. This process is basically the same as the optimization process described with reference to the flowchart of FIG. 3, except that a process for displaying internal information by the internal information generation unit 71 is added. That is, in steps S91 to S101, the same processing as in steps S1 to S11 of FIG. 3 is performed, and the process proceeds to step S102.

In step S102, the weight W is displayed on the display unit 81. That is, the internal information generation unit 71 reads, for example, the value of the weight W stored in the weight memory 31 as internal information, converts it into an image signal that can be displayed on the display unit 81, outputs it to the display unit 81 so that the weight W is displayed (presented), and the process returns to step S91.

Through the process described with reference to the flowchart of FIG. 15, the weight W, as internal information regarding the processing actually executed in the processing unit 11, is displayed (presented) to the user; as a result, the user can operate the operation unit 2 so that an optimal output signal is obtained while viewing that display. The internal information generation unit 71 may also read out the parameter control data a and b from the parameter control data memory 57 (FIGS. 4 and 10) of the learning unit 22 and display them in addition to internal information such as the weight described above. Further, image information indicating whether the weight selected by the selection unit 41 (FIGS. 4 and 10) is a weight obtained from the parameter control data a and b learned using learning pairs, or a weight obtained from the input reliability and the output reliability, may be generated as internal information.

  Next, FIG. 16 shows a configuration example of an embodiment of an automatic traveling device for an automobile to which the optimization device of FIG. 1 is applied.

In the automatic traveling device, the position coordinates (X, Y) of the automobile and its traveling direction θ are obtained, and the automobile is caused to travel along a predetermined locus. However, the coordinates (X, Y) and the traveling direction θ obtained in the automatic traveling apparatus often include errors, in which case the automobile may travel off the predetermined locus. Therefore, in the automatic traveling device of FIG. 16, the user's operation is learned without the user's knowledge, and the automobile is caused to travel along the predetermined locus based on the learning result. That is, when the automobile starts to run off the predetermined locus, the user generally operates the steering wheel, accelerator, or the like so that the automobile travels along the predetermined locus. In the automatic traveling device of FIG. 16, such user operations are learned without the user's knowledge, and control is performed based on the learning result so that the automobile gradually comes to travel along the predetermined locus.

  The gyro sensor 91 detects the yaw rate r of the automobile and supplies it to the calculation unit 93. The wheel pulser 92 supplies the number of electrical pulses corresponding to the rotation angle of the vehicle wheel to the calculation unit 93.

The calculation unit 93 calculates the coordinates (X, Y) of the automobile and the traveling direction θ from the outputs of the gyro sensor 91 and the wheel pulser 92 according to, for example, the following equation, and supplies them to the optimization device 94.

θ(t) = θ(0) + ∫ r dt
X(t) = X(0) + ∫ V_r cos(θ(t) + β) dt
Y(t) = Y(0) + ∫ V_r sin(θ(t) + β) dt ... (32)

In equation (32), θ(0) represents the direction of the automobile at the start of traveling, and (X(0), Y(0)) represents the coordinates at the start of traveling. Note that θ(0) and (X(0), Y(0)) can be obtained using, for example, a GPS (Global Positioning System), not shown. V_r represents the running speed of the automobile, and β represents the slip angle of the center of gravity of the automobile.

  Here, as described above, a method for obtaining the coordinates (X, Y) of the automobile and the traveling direction θ is disclosed in, for example, Japanese Patent Laid-Open No. 10-69219.
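
A discrete-time sketch of this dead reckoning, assuming equation (32) takes the integral form reconstructed above (the fixed sampling period dt and constant slip angle β are simplifications):

    import math

    def dead_reckon(theta0, x0, y0, yaw_rates, speeds, beta, dt):
        """Integrate the gyro yaw rate r for the heading θ, and the
        running speed V_r (derived from the wheel pulser) for the
        position (X, Y), once per sampling period dt."""
        theta, x, y = theta0, x0, y0
        for r, v in zip(yaw_rates, speeds):
            theta += r * dt
            x += v * math.cos(theta + beta) * dt
            y += v * math.sin(theta + beta) * dt
        return x, y, theta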

The optimization device 94 includes a processing unit 101 and learns the user's operation of the operation unit 98, that is, performs learning based on the operation signal supplied when the user operates the operation unit 98; based on the learning result, it corrects the coordinates (X, Y) and the traveling direction θ from the calculation unit 93 so that the traveling desired by the user is performed, and supplies them to the automatic travel control unit 95.

The automatic travel control unit 95 stores map data and a preset locus along which the automobile is to travel automatically (hereinafter referred to as the set locus as appropriate). The automatic travel control unit 95 recognizes the current position and traveling direction of the automobile from the coordinates (X, Y) and the traveling direction θ supplied from the processing unit 101 of the optimization device 94, generates a control signal for controlling the drive unit 97, described later, so that the automobile travels along the set locus, and outputs it to the selection unit 96.

  The selection unit 96 is supplied with the control signal from the automatic travel control unit 95 and the operation signal from the operation unit 98. Of the two, it selects the operation signal preferentially and outputs it to the drive unit 97. That is, the selection unit 96 normally selects the control signal from the automatic travel control unit 95 and outputs it to the drive unit 97; while it is receiving an operation signal from the operation unit 98, however, it stops outputting the control signal from the automatic travel control unit 95 and outputs the operation signal from the operation unit 98 to the drive unit 97.
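A minimal sketch of this priority rule (the signal representation is hypothetical):

```python
def select_signal(control_signal, operation_signal):
    """Pass the user's operation signal through whenever one is present;
    otherwise forward the automatic travel control unit's control signal."""
    return operation_signal if operation_signal is not None else control_signal

assert select_signal("keep-heading", None) == "keep-heading"         # normal case
assert select_signal("keep-heading", "steer-left") == "steer-left"   # user wins
```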

  The drive unit 97 drives each mechanism necessary for traveling such as an engine (not shown) of the automobile, wheels, brakes, and clutches in accordance with a control signal or an operation signal from the selection unit 96. The operation unit 98 includes, for example, a steering wheel, an accelerator pedal, a brake pedal, and a clutch pedal, and supplies an operation signal corresponding to a user operation to the optimization device 94 and the selection unit 96.

  In the automatic traveling device configured as described above, the calculation unit 93 calculates the current coordinates (X, Y) and traveling direction θ of the automobile from the outputs of the gyro sensor 91 and the wheel pulser 92, and supplies them to the automatic travel control unit 95 via the processing unit 101 of the optimization device 94. The automatic travel control unit 95 recognizes the current position and traveling direction of the automobile from the coordinates (X, Y) and the traveling direction θ supplied to it, generates a control signal for controlling the drive unit 97 so that the automobile travels along the set locus, and supplies it to the drive unit 97 via the selection unit 96. As a result, the automobile travels automatically according to the control signal output by the automatic travel control unit 95.

  On the other hand, when the user operates the operation unit 98, an operation signal corresponding to the operation is supplied to the drive unit 97 via the selection unit 96, so that the automobile travels according to the user's operation of the operation unit 98.

  Further, when the user operates the operation unit 98, the operation signal output from the operation unit 98 is also supplied to the processing unit 101 of the optimization device 94. The optimization device 94 performs learning based on the operation signal supplied when the user operates the operation unit 98. When the user stops operating the operation unit 98, the processing unit 101 of the optimization device 94, based on the learning result, corrects the coordinates (X, Y) and the traveling direction θ supplied from the calculation unit 93 so that travel along the set locus, which is the travel desired by the user, is performed, and supplies them to the automatic travel control unit 95.

  Next, FIG. 17 shows a configuration example of the processing unit 101 of the optimization device 94 of FIG. 16. In the figure, portions corresponding to those in the processing unit 11 of FIG. 4 are denoted by the same reference numerals, and description thereof will be omitted below as appropriate. That is, the processing unit 101 in FIG. 17 is not provided with the selection unit 41, and an operation signal processing unit 110 and a teacher data generation unit 111 are provided instead of the operation signal processing unit 50 and the teacher data generation unit 51; other than that, the configuration is basically the same as the processing unit 11 of FIG. 4.

  Here, in the following, in order to simplify the description, attention is paid only to the traveling direction θ, of the coordinates (X, Y) and the traveling direction θ supplied from the calculation unit 93 to the processing unit 101 of the optimization device 94. The coordinates (X, Y) can be processed in the same way as the traveling direction θ described below.

  The operation signal processing unit 110 receives the operation signal from the operation unit 98 and determines whether it is a learning operation signal. If the operation signal is a learning operation signal, the operation signal processing unit 110 supplies a message indicating that to the student data generation unit 52 and the teacher data generation unit 111.

  The teacher data generation unit 111 is supplied with a message indicating that the operation signal is a learning operation signal (hereinafter referred to as a learning message as appropriate) from the operation signal processing unit 110, and is also supplied with the traveling direction θ from the calculation unit 93 as the input signal. Further, the teacher data generation unit 111 is supplied with the traveling direction corrected by the correction unit 21 (calculator 36) (hereinafter referred to as the corrected traveling direction as appropriate) as the output signal. Upon receiving the learning message, the teacher data generation unit 111 obtains the weight W corresponding to the learning operation signal from the traveling direction θ as the input signal and the corrected traveling direction as the output signal supplied at that time, and supplies it to the learning data memory 53 as teacher data.

  That is, in this case, it is necessary to obtain, as the teacher data, the weight W at the time when the automobile has turned to the direction desired by the user after the user operated the operation unit 98 as the steering wheel. In other words, the teacher data must be the weight W used for correcting the input signal x(t) representing the traveling direction θ immediately after the user operates the operation unit 98 as the steering wheel and the automobile is directed in the desired direction. According to equation (8), the input signal x(t) immediately after the operation of the operation unit 98 is corrected, using the output signal y(t−1) output immediately before the operation, into the output signal y(t) immediately after the operation as the corrected traveling direction; therefore, the weight W used for correcting the input signal x(t) immediately after the operation can be obtained from equation (8) using the input signal x(t) immediately after the operation, the output signal y(t) immediately after the operation, and y(t−1). The teacher data generation unit 111 thus obtains the weight W as teacher data from the traveling direction θ as the input signal x(t) supplied immediately after receiving the learning message, the output signal y(t−1) supplied immediately before receiving it, and the corrected traveling direction as y(t), and supplies the weight W to the learning data memory 53.
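Equation (8) itself is not reproduced in this passage; assuming it has the exponential-smoothing form y(t) = W·y(t−1) + (1−W)·x(t), the teacher weight W can be solved for directly from the three values the teacher data generation unit 111 holds, as in this sketch:

```python
def teacher_weight(x_t, y_t, y_prev, eps=1e-9):
    """Solve y(t) = W*y(t-1) + (1-W)*x(t) for W (an assumed form of eq. (8)).

    x_t:    traveling direction (input signal) just after the user's operation.
    y_t:    corrected traveling direction (output signal) just after it.
    y_prev: output signal just before the operation.
    Returns None when the equation is degenerate (x(t) == y(t-1)).
    """
    denom = x_t - y_prev
    if abs(denom) < eps:
        return None
    return (x_t - y_t) / denom

# The output moved 80% of the way from y(t-1)=10 toward x(t)=20, so only
# 20% of the weight remained on the previous output:
print(teacher_weight(20.0, 18.0, 10.0))  # -> 0.2
```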

  When the student data generation unit 52 receives the learning message, it supplies the learning data memory 53, as student data, with the weight w obtained from the traveling direction as the input signal supplied immediately before the learning message. That is, the student data generation unit 52 is configured similarly to the input reliability calculation unit 42, the output reliability calculation unit 43, the latch circuit 44, and the weight calculation unit 45 described above; it calculates from the traveling direction as the input signal the same weight w as that obtained by the weight calculation unit 45, and supplies the weight w calculated immediately before receiving the learning message to the learning data memory 53 as student data.

  Therefore, in the parameter control data calculation unit 54, the weight W at the time when the user operated the operation unit 98 and the traveling direction became the direction desired by the user is used as the teacher data, and the same weight w output by the weight calculation unit 45 immediately before the user operated the operation unit 98 is used as the student data, to calculate the parameter control data a and b of equations (20) and (21).

  Then, the weight correction unit 46 corrects the weight w obtained by the weight calculation unit 45 according to equation (13) using the parameter control data a and b, and supplies the corrected weight W to the correction unit 21.
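Equations (20) and (21) are likewise not reproduced here, but their use suggests an ordinary least-squares fit of the line W = a·w + b over the stored learning pairs; a sketch under that assumption:

```python
def fit_parameter_control_data(student_w, teacher_W):
    """Least-squares estimate of (a, b) in W ~= a*w + b from learning pairs."""
    n = len(student_w)
    sw = sum(student_w)
    sW = sum(teacher_W)
    sww = sum(w * w for w in student_w)
    swW = sum(w * W for w, W in zip(student_w, teacher_W))
    denom = n * sww - sw * sw
    if denom == 0:
        return None  # not enough independent learning information yet
    a = (n * swW - sw * sW) / denom
    b = (sW - a * sw) / n
    return a, b

print(fit_parameter_control_data([0.2, 0.5, 0.8], [0.3, 0.55, 0.82]))
```

The running sums sw, sW, sww, and swW are exactly the kind of quantities a learning information memory can accumulate incrementally.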

  As a result, since the weight w obtained by the weight calculation unit 45 is corrected by the parameter control data a and b so that the traveling direction immediately before the user operates the operation unit 98 is corrected to the traveling direction immediately after the operation, the automobile comes to travel along the set locus automatically.

  That is, the fact that the user operates the operation unit 98 means that the traveling direction θ output by the calculation unit 93 includes errors due to errors of the gyro sensor 91, noise contained in its output, calculation errors in the calculation unit 93, and the like, and thus does not represent the true traveling direction of the automobile; the actual traveling direction of the automobile is therefore considered to have deviated from the set locus. Furthermore, the user's operation of the operation unit 98 in this case is considered to change the actual traveling direction of the automobile to a direction along the set locus. Therefore, by learning with the weight W at the time when the user operated the operation unit 98 and the actual traveling direction came to follow the set locus as the teacher data, and the weight w obtained by the weight calculation unit 45 immediately before the user operated the operation unit 98, that is, the weight w output by the weight calculation unit 45 in the state deviating from the set locus, as the student data, the parameter control data a and b of equation (13) are obtained so as to correct the weight of equation (6) in the direction that corrects the traveling direction of an automobile running off the set locus to a direction along the set locus.

  Next, the processing of the processing unit 101 of the optimization device 94 in FIG. 17 will be described. In the processing unit 101 of the optimization device 94 in FIG. 17, as in the processing unit 11 of the NR circuit in FIG. 4, correction processing for correcting the traveling direction θ output by the calculation unit 93 as the input signal x(t), correction parameter calculation processing for obtaining the weight as the correction parameter used in the correction processing, and control data learning processing for obtaining the parameter control data that controls (corrects) the weight as the correction parameter by learning the user's operation of the operation unit 98 (FIG. 16) are performed. Since the correction processing is the same as the correction processing by the NR circuit of FIG. 4 described with reference to FIG. 7, here the correction parameter calculation processing and the control data learning processing performed by the processing unit 101 of the optimization device 94 of FIG. 17 will be described.

  First, correction parameter calculation processing performed by the processing unit 101 of the optimization device 94 of FIG. 17 will be described with reference to the flowchart of FIG.

In the correction parameter calculation processing, first, in step S111, the input reliability calculation unit 42 obtains the input reliability α_x(t) based on the variance of the traveling direction θ from the calculation unit 93 (FIG. 16) as the input signal, in the same manner as in step S31 described above, and supplies it to the output reliability calculation unit 43 and the weight calculation unit 45.

Thereafter, the process proceeds to step S112, where the weight calculation unit 45 obtains the weight w(t) according to equation (6) using the input reliability α_x(t) from the input reliability calculation unit 42, supplies it to the weight correction unit 46, and the process proceeds to step S113.

  In step S113, the weight correction unit 46 reads the parameter control data from the parameter control data memory 57, and the process proceeds to step S114. In step S114, the weight correction unit 46 determines whether the parameter control data read from the parameter control data memory 57 is auto mode data, that is, data representing a mode (auto mode) in which the weight w(t) obtained in the weight calculation unit 45 from the input reliability and the output reliability is used as it is as the weight W for correcting the input signal x(t), without being corrected, regardless of the user's operation of the operation unit 98 (FIG. 16).

  If it is determined in step S114 that the parameter control data is not auto mode data, the process proceeds to step S115, where the weight correction unit 46 corrects the weight w(t) supplied from the weight calculation unit 45 according to the linear expression of equation (13) defined by the parameter control data a and b supplied from the parameter control data memory 57, and the process proceeds to step S116. In step S116, the weight correction unit 46 supplies the corrected weight to the correction unit 21 as the correction parameter, and the process proceeds to step S117.

  On the other hand, if it is determined in step S114 that the parameter control data is auto mode data, step S115 is skipped and the process proceeds to step S116, where the weight correction unit 46 supplies the weight w(t) from the weight calculation unit 45 to the correction unit 21 as it is as the correction parameter, and the process proceeds to step S117.

In step S117, the output reliability calculation unit 43 updates the output reliability. That is, the output reliability calculation unit 43 adds the input reliability α_x(t) calculated by the input reliability calculation unit 42 in the immediately preceding step S111 and the output reliability α_y(t−1) of one sample earlier latched by the latch circuit 44, according to equation (5), to obtain the current output reliability α_y(t), and stores it in the latch circuit 44 in overwriting form.

  After the process of step S117, the process returns to step S111, and the same process is repeated thereafter.

  As described above, in the correction parameter calculation processing of FIG. 18, in the auto mode the weight used for the correction processing is obtained from the input reliability and the output reliability regardless of the operation of the operation unit 98; when not in the auto mode, the weight used for the correction processing is obtained, based on the operation of the operation unit 98, using the parameter control data obtained by the learning of the control data learning processing of FIG. 19.
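The loop of steps S111 to S117 can be sketched as below. Equations (5), (6), and (13) are not reproduced in this passage, so the formulas used here (input reliability as the reciprocal of a local variance, w(t) = α_y(t−1)/(α_y(t−1)+α_x(t)), and the additive reliability update) are assumptions chosen to be consistent with the surrounding description:

```python
from collections import deque

class CorrectionParameterCalculator:
    """Sketch of steps S111-S117: derive a weight from reliabilities and,
    unless in auto mode, correct it with parameter control data (a, b)."""

    def __init__(self, window=5):
        self.samples = deque(maxlen=window)  # recent input samples
        self.alpha_y = None                  # latched output reliability
        self.param = None                    # (a, b), or None for auto mode

    def step(self, x_t):
        # S111: input reliability from the variance of recent samples.
        self.samples.append(x_t)
        mean = sum(self.samples) / len(self.samples)
        var = sum((s - mean) ** 2 for s in self.samples) / len(self.samples)
        alpha_x = 1.0 / (var + 1e-9)
        if self.alpha_y is None:
            self.alpha_y = alpha_x
        # S112: weight from the two reliabilities (assumed form of eq. (6)).
        w = self.alpha_y / (self.alpha_y + alpha_x)
        # S117: update the output reliability (assumed form of eq. (5)).
        self.alpha_y += alpha_x
        if self.param is None:               # S114: auto mode, use w as-is
            return w
        a, b = self.param                    # S115: W = a*w + b (eq. (13))
        return a * w + b

calc = CorrectionParameterCalculator()
print([round(calc.step(x), 3) for x in [10.0, 10.2, 9.9, 10.1]])
```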

  Next, a control data learning process performed by the processing unit 101 of the optimization device 94 in FIG. 17 will be described with reference to the flowchart in FIG.

  In the control data learning processing, first, in step S131, the operation signal processing unit 110 determines whether a learning operation signal has been received from the operation unit 98 (FIG. 16); if it determines that one has not been received, the process returns to step S131.

  If it is determined in step S131 that a learning operation signal has been received from the operation unit 98, that is, if it can be judged that the user operated the steering wheel as the operation unit 98 so as to direct the automobile in a desired direction, for example, when the steering wheel as the operation unit 98, after its operation is started, is operated continuously for a second time t2 or more without leaving an idle interval of a first time t1 or more and its operation is then stopped continuously for a third time t3 or more, or when, after its operation is started, its operation is stopped continuously for the third time t3 or more, the process proceeds to step S132, where the teacher data generation unit 111 generates teacher data and the student data generation unit 52 generates student data.
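A rough sketch of this timing heuristic (the trace representation and thresholds are hypothetical):

```python
def is_learning_operation(trace, t1, t2, t3):
    """Decide whether a finished steering operation looks intentional.

    trace: list of (timestamp, operating) samples; operating is True while
           the user is turning the steering wheel.
    Heuristic from the text: no idle gap of t1 or more inside the operation,
    the operation lasts t2 or more, and it is followed by t3 or more of rest.
    """
    active = [t for t, operating in trace if operating]
    if not active:
        return False
    gaps_ok = all(b - a < t1 for a, b in zip(active, active[1:]))
    long_enough = active[-1] - active[0] >= t2
    rest_after = trace[-1][0] - active[-1]
    return gaps_ok and long_enough and rest_after >= t3

trace = [(k * 0.1, k < 30) for k in range(80)]  # steer 3 s, then rest 5 s
print(is_learning_operation(trace, t1=0.5, t2=2.0, t3=3.0))  # -> True
```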

  That is, when the operation signal processing unit 110 determines that a learning operation signal has been received, it supplies a learning message to the teacher data generation unit 111 and the student data generation unit 52. Upon receiving the learning message from the operation signal processing unit 110, the teacher data generation unit 111 obtains, in step S132, the weight W corresponding to the learning operation signal from the traveling direction θ as the input signal supplied from the calculation unit 93 and the output signal obtained by the correction unit 21 (calculator 36) correcting the traveling direction θ from the calculation unit 93 (the corrected traveling direction).

  Specifically, the teacher data generation unit 111 receives from the calculation unit 93 (FIG. 16) the input signal x(t) representing the traveling direction θ immediately after the user operates the operation unit 98 as the steering wheel so that the automobile turns to the desired direction. Further, the teacher data generation unit 111 holds the current output signal y(t) output from the correction unit 21 and the output signal y(t−1) of one time earlier, that is, the output signal y(t−1) immediately before the operation of the operation unit 98, and obtains, according to equation (8), the weight W used by the correction unit 21 at the time the learning operation signal was given (the weight corresponding to the learning operation signal) from the input signal x(t) and the output signals y(t) and y(t−1).

  Here, in order to simplify the description, it is assumed that the user's operation of the steering wheel as the operation unit 98 is completed instantaneously, within one time step from time t−1 to time t.

  When the teacher data generation unit 111 obtains the weight W corresponding to the learning operation signal as described above, the teacher data generation unit 111 supplies the weight W to the learning data memory 53 as teacher data.

  Further, in step S132, the student data generation unit 52, having received the learning message from the operation signal processing unit 110, supplies the learning data memory 53, as student data, with the same weight w that the weight calculation unit 45 output immediately before the learning message, calculated using the input reliability and the output reliability obtained from the traveling direction as the input signal supplied from the calculation unit 93 (FIG. 16).

  Therefore, the learning data memory 53 is supplied with a learning pair in which the weight W used by the correction unit 21 when the user operated the operation unit 98 and the actual traveling direction of the automobile became the direction desired by the user is the teacher data, and the weight w obtained from the input reliability and the output reliability immediately before the user operated the operation unit 98 is the student data.

  Upon receiving the teacher data W from the teacher data generation unit 111 and the student data w from the student data generation unit 52, the learning data memory 53 stores the set of the latest teacher data W and student data w in step S133, and the process proceeds to step S134.

  In step S134, the parameter control data calculation unit 54 performs the adding of the least-squares method, as in step S44 of FIG. 8, for the latest teacher data and student data stored in the learning data memory 53 and the learning information stored in the learning information memory 55. Further, in step S134, the parameter control data calculation unit 54 stores the result of the adding in the learning information memory 55 as learning information in overwriting form, and the process proceeds to step S135.

  In step S135, as in step S45 of FIG. 8, the parameter control data calculation unit 54 determines whether the parameter control data a and b of equations (20) and (21) can be obtained from the adding results as the learning information stored in the learning information memory 55.

  If it is determined in step S135 that the parameter control data a and b cannot be obtained, the parameter control data calculation unit 54 supplies the fact to the determination control unit 56 and proceeds to step S139. In step S139, the determination control unit 56 supplies auto mode data representing the auto mode as parameter control data to the parameter control data memory 57 for storage. Then, the process returns to step S131, and the same processing is repeated thereafter.

  Therefore, when there is not enough learning information to obtain the parameter control data a and b, the weight w(t) automatically obtained from the input reliability and the output reliability in the weight calculation unit 45 (FIG. 17) is used as it is for correcting the input signal x(t).

  On the other hand, if it is determined in step S135 that the parameter control data a and b can be obtained, the process proceeds to step S136, where the parameter control data calculation unit 54 obtains the parameter control data a and b by calculating equations (20) and (21) using the learning information, supplies them to the determination control unit 56, and the process proceeds to step S137.

  In step S137, the determination control unit 56 obtains, from each student data stored in the learning data memory 53, the predicted value of the corresponding teacher data according to the linear expression of equation (13) defined by the parameter control data a and b from the parameter control data calculation unit 54, and obtains the sum, represented by equation (15), of the square errors of the prediction errors of those predicted values (the errors with respect to the teacher data stored in the learning data memory 53). Further, the determination control unit 56 obtains a normalization error by dividing the sum of the square errors by, for example, the number of learning pairs stored in the learning data memory 53, and the process proceeds to step S138.

  In step S138, the determination control unit 56 determines whether the normalization error is greater than (or not less than) a predetermined threshold value S1. If it is determined in step S138 that the normalization error is greater than the predetermined threshold value S1, that is, if the linear expression of equation (13) defined by the parameter control data a and b does not accurately approximate the relationship between the student data and the teacher data stored in the learning data memory 53, the process proceeds to step S139, where, as described above, the determination control unit 56 supplies auto mode data representing the auto mode to the parameter control data memory 57 as the parameter control data and stores it. Then the process returns to step S131, and the same processing is repeated thereafter.
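Steps S137 and S138 amount to the following check (a sketch; equation (15) is taken here to be the plain sum of squared prediction errors):

```python
def passes_fit_check(pairs, a, b, threshold_s1):
    """True when the line W = a*w + b fits the stored learning pairs well
    enough, i.e. the normalization error does not exceed S1."""
    sq_err = sum((W - (a * w + b)) ** 2 for w, W in pairs)
    return sq_err / len(pairs) <= threshold_s1

print(passes_fit_check([(0.2, 0.3), (0.5, 0.55)], a=0.87, b=0.12,
                       threshold_s1=0.01))  # -> True
```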

  Therefore, even if the parameter control data a and b can be obtained, when the linear expression of equation (13) defined by them does not approximate the relationship between the student data and the teacher data stored in the learning data memory 53 with high accuracy, the weight w(t) automatically obtained from the input reliability and the output reliability is used as it is for correcting the input signal x(t), just as when there is not enough learning information to obtain the parameter control data a and b.

  On the other hand, if it is determined in step S138 that the normalization error is not greater than the predetermined threshold value S1, that is, if the linear expression of equation (13) defined by the parameter control data a and b approximates the relationship between the student data and the teacher data stored in the learning data memory 53 with high accuracy, the process proceeds to step S140, where the determination control unit 56 obtains the error (distance) ε between the regression line represented by the linear expression of equation (13) defined by the parameter control data a and b from the parameter control data calculation unit 54 and the point defined by the latest teacher data and student data stored in the learning data memory 53.

  Then the process proceeds to step S141, where the determination control unit 56 determines whether the magnitude of the error ε is greater than (or not less than) a predetermined threshold value S2; if it determines that it is not greater, step S142 is skipped, and in step S143 the determination control unit 56 outputs the parameter control data a and b obtained in step S136 to the parameter control data memory 57. The parameter control data memory 57 stores the parameter control data a and b from the determination control unit 56 in overwriting form, and the process returns to step S131.

  On the other hand, if it is determined in step S141 that the magnitude of the error ε is greater than the predetermined threshold value S2, the process proceeds to step S142, where the determination control unit 56 controls the parameter control data calculation unit 54 to recalculate the parameter control data a and b using only a predetermined number of the most recent learning pairs of teacher data and student data stored in the learning data memory 53 (without using the learning information in the learning information memory 55). Then the process proceeds to step S143, where the determination control unit 56 outputs the parameter control data a and b obtained in step S142 to the parameter control data memory 57, stores them in overwriting form, and the process returns to step S131.
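Steps S140 to S143 can be sketched as follows, with a point-to-line distance for ε and the refit restricted to recent pairs; fit_parameter_control_data is the hypothetical least-squares routine sketched earlier:

```python
import math

def point_to_line_error(w, W, a, b):
    """Distance epsilon from the point (w, W) to the line W = a*w + b."""
    return abs(a * w - W + b) / math.sqrt(a * a + 1.0)

def update_parameters(pairs, a, b, s2, n_recent, fit):
    """Keep (a, b) while the newest pair stays near the regression line;
    otherwise refit using only the n_recent most recent learning pairs."""
    w_new, W_new = pairs[-1]
    if point_to_line_error(w_new, W_new, a, b) <= s2:
        return a, b
    recent = pairs[-n_recent:]
    refit = fit([w for w, _ in recent], [W for _, W in recent])
    return refit if refit is not None else (a, b)
```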

  Accordingly, when the parameter control data a and b can be obtained and the linear expression of equation (13) defined by them accurately approximates the relationship between the student data and the teacher data stored in the learning data memory 53, the weight w(t) obtained from the input reliability and the output reliability is corrected according to equation (13) defined by the parameter control data a and b obtained by the learning using the learning pairs obtained based on the user's operation of the operation unit 98, and the corrected weight W obtained by the correction is used for correcting the input signal x(t).

  As described above, in the automatic traveling device of FIG. 16 as well, it is determined whether the operation signal supplied according to the user's operation can be used for learning, and when it is a learning operation signal that can be used for learning, the parameter control data a and b for correcting the weight used to correct the input signal are learned based on that learning operation signal; the user's operation can thus be learned without the user's knowledge. As a result, processing appropriate for the user gradually comes to be performed based on the learning result, and finally processing optimal for the user is performed.

  In other words, while the user operates the operation unit 98 so as to correct the traveling direction to the one along the set trajectory, the automobile gradually travels automatically along the set trajectory.

  Further, in the processing unit 101 of the optimization device 94 in FIG. 17, as in the NR circuit in FIG. 4, the weight W used in the correction processing (FIG. 6) performed by the correction unit 21 is changed in accordance with the user's operation of the operation unit 98 so that the actual traveling direction of the automobile follows the set locus. That is, when the user operates the operation unit 98 so that the traveling direction of the automobile becomes the desired direction, the traveling direction θ as the input signal output by the calculation unit 93 (FIG. 16) changes, so the input reliability obtained from the traveling direction θ, and in turn the output reliability obtained from the input reliability, also change. With these changes in the input reliability and the output reliability, the weight obtained by the weight calculation unit 45 also changes, and the changed weight is supplied to the correction unit 21 via the weight correction unit 46. The correction unit 21 then performs the correction processing represented by equation (8) using the weight supplied in this way. Accordingly, when the user operates the operation unit 98, the weight of equation (8) is changed by the user's operation, and naturally the content of the processing (correction processing) represented by equation (8) is also changed, just as described for the NR circuit of FIG. 4; so in the processing unit 101 of the optimization device 94 in FIG. 17 as well, it can be said that the "content of processing" is changed in accordance with the user's operation so that the traveling direction desired by the user is obtained.

  Further, in the processing unit 101 of the optimization device 94 in FIG. 17, as in the NR circuit in FIG. 4, when a sufficient number of learning pairs has not been input from the user, or when learning pairs that can be approximated with high accuracy have not been input, the weight automatically obtained from the input reliability and the output reliability is used for the correction processing in the correction unit 21; when learning pairs allowing high-accuracy approximation are input from the user, the corrected weight obtained using the parameter control data a and b learned from those learning pairs is used for the correction processing in the correction unit 21. That is, the algorithm for calculating the weight used for the correction processing is changed between the case where a sufficient number of learning pairs, or learning pairs that can be approximated with high accuracy, have not been obtained and the case where learning pairs that can be approximated with high accuracy have been obtained.

  Accordingly, in the processing unit 101 of the optimization device 94 in FIG. 17 as well, as in the NR circuit in FIG. 4, the "content of processing" and, further, the "structure of processing" are changed in accordance with the user's operation, so that the automobile automatically travels in a traveling direction along the set locus.

  For example, Japanese Patent Laid-Open No. 7-13625 discloses a traveling control device for a work vehicle such as a rice transplanter. In this traveling control device, the correction amount of the control parameter in the automatic steering state is calculated so that the difference between the user's operation state and information based on the detection result of a gyro sensor or the like becomes small. The automatic traveling device of FIG. 16 therefore has in common with the traveling control device described in Japanese Patent Laid-Open No. 7-13625 that a parameter correction amount for automatic traveling (automatic control) changes based on the user's operation.

  However, the automatic traveling device of FIG. 16 differs greatly from the traveling control device described in Japanese Patent Laid-Open No. 7-13625, in which the control parameter correction amount in the automatic steering state is calculated only when a switch is manually operated to select the manual steering control mode, in that it determines whether an operation signal supplied in response to the user's operation can be used for learning and, when the signal is a learning operation signal that can be used for learning, learns the parameter control data for correcting the weight used to correct the input signal based on that learning operation signal.

  As a result of this difference, in the traveling control device described in Japanese Patent Laid-Open No. 7-13625, every time the user feels that appropriate automatic steering is not being performed, the user must switch to the manual steering control mode, wait for the calculation of the control parameter correction amount to finish, and then operate the switch again to enter the automatic steering control mode, which may feel bothersome to the user.

  On the other hand, in the automatic traveling device of FIG. 16, it is determined whether the operation signal supplied in accordance with the user's operation can be used for learning, and when it is a learning operation signal that can be used for learning, the algorithm is changed so as to learn, based on that learning operation signal, the parameter control data for correcting the weight used to correct the input signal; appropriate automatic traveling is therefore performed even if the user does not operate such a switch. That is, since the user's operations are learned without the user's knowledge, the learning progresses while the user performs operations to correct the traveling direction, and the automobile gradually comes to travel along the set locus even without the user's operation.

  Further, the automatic traveling device of FIG. 16 also differs from the traveling control device described in Japanese Patent Laid-Open No. 7-13625 in that the structure of processing changes in response to the user's operation.

  Next, FIG. 20 shows another embodiment of the processing unit 101 of the optimization device 94 of FIG. In the figure, portions corresponding to those in FIG. 17 are denoted by the same reference numerals, and description thereof will be omitted below as appropriate.

  The processing unit 11 of the NR circuit of FIGS. 4 and 10 and the processing unit 101 of the optimization device 94 of FIG. 17 learn parameter control data for controlling the correction parameter using learning pairs obtained based on the user's operation; the processing unit 101 in FIG. 20, in contrast, learns the correction parameter itself using learning pairs obtained based on the user's operation.

  That is, in the embodiment of FIG. 20, the correction unit 21 includes a correction amount calculation unit 141 and a calculator 142, and the learning unit 22 includes the learning data memory 53, the learning information memory 55, the determination control unit 56, the operation signal processing unit 110, a teacher data generation unit 143, a student data generation unit 144, a correction parameter calculation unit 145, and a correction parameter memory 146.

  The correction amount calculation unit 141 is supplied with correction parameters, described later, from the correction parameter memory 146 of the learning unit 22; using these correction parameters, the correction amount calculation unit 141 calculates a correction amount for correcting the traveling direction θ as the input signal and supplies it to the calculator 142.

  The calculator 142 is supplied with the correction amount from the correction amount calculation unit 141 and with the traveling direction θ as the input signal from the calculation unit 93 (FIG. 16). The calculator 142 corrects the traveling direction θ as the input signal by adding the correction amount to it, and outputs the corrected traveling direction (corrected traveling direction) to the automatic travel control unit 95 (FIG. 16) as the output signal.

  The teacher data generation unit 143 supplies the traveling direction as the input signal supplied immediately after receiving the learning message from the operation signal processing unit 110 to the learning data memory 53 as teacher data. The student data generation unit 144 supplies the traveling direction as the input signal supplied immediately before receiving the learning message from the operation signal processing unit 110 to the learning data memory 53 as student data.

  Under the control of the determination control unit 56, the correction parameter calculation unit 145 learns correction parameters that minimize a predetermined statistical error, by computing new learning information using the teacher data and student data as learning data stored in the learning data memory 53 and, as necessary, the learning information stored in the learning information memory 55, and supplies them to the determination control unit 56. The correction parameter calculation unit 145 also updates the stored contents of the learning information memory 55 with the new learning information obtained by the learning.

  The correction parameter memory 146 stores correction parameters output from the determination control unit 56.

  In the optimization device 94 configured as described above, the traveling direction θ supplied from the calculation unit 93 is corrected as follows.

  That is, assuming that the yaw rate output by the gyro sensor 91 (FIG. 16) at time t is r′, the calculation unit 93 calculates the traveling direction from equation (32) with r replaced by r′.

Now, assuming that the yaw rate r′ output by the gyro sensor 91 contains an error e_r in addition to the true yaw rate r, the yaw rate r′ output by the gyro sensor 91 is represented by the following equation.

$$r' = r + e_r \qquad (33)$$

  The traveling direction θ′ calculated by the calculation unit 93 from the yaw rate r′ output by the gyro sensor 91 is, from equations (32) and (33), as follows.

$$\theta'(t)=\theta(0)+\int_0^{t}r'\,d\tau=\theta(0)+\int_0^{t}\bigl(r+e_r\bigr)\,d\tau \qquad (34)$$

  Accordingly, the relationship between the traveling direction θ′ obtained by the calculation unit 93 and the true traveling direction θ obtained from the true yaw rate r is expressed by the following equation.

$$\theta'(t)=\theta(t)+\int_0^{t}e_r\,d\tau \qquad (35)$$

When the error e_r included in the yaw rate r′ output by the gyro sensor 91 is white, the second term on the right-hand side of equation (35) becomes 0 in the long term, as shown in the following equation, so there is no particular problem. In the short term the second term on the right-hand side of equation (35) does not become 0, but that case can be dealt with by the processing unit 101 of the optimization device 94 in FIG. 17.

$$\lim_{t\to\infty}\int_0^{t}e_r\,d\tau = 0 \qquad (36)$$

However, when the error e_r is colored, the error e_r accumulates with the lapse of time t, and the traveling direction θ′ obtained by the calculation unit 93 deviates greatly from the true traveling direction θ.
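The following toy simulation illustrates the distinction (purely illustrative; a constant bias stands in as the simplest colored error): the integral of a zero-mean white error stays small, while the bias accumulates linearly with time t.

```python
import random

random.seed(0)
samples, dt = 10000, 0.01

white = [random.gauss(0.0, 0.1) for _ in range(samples)]
colored = [e + 0.02 for e in white]   # white error plus a constant bias

print(f"white-error heading drift:   {sum(white) * dt:+.3f} rad")
print(f"colored-error heading drift: {sum(colored) * dt:+.3f} rad (bias dominates)")
```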

  That is, for simplicity of explanation, consider automatic traveling that goes straight in a certain direction. The automatic travel control unit 95 (FIG. 16) generates a control signal such that the traveling direction θ′ obtained by the calculation unit 93 is constant, as indicated by the dotted line in FIG. 21.

However, when the error e_r contained in the traveling direction θ′ obtained by the calculation unit 93 is colored, the error e_r accumulates with time t; the automobile is therefore actually traveling straight ahead when the traveling direction θ′ obtained by the calculation unit 93 draws a curved locus such as that shown by the solid line in FIG. 21.

For this reason, the processing unit 101 of the optimization device 94 in FIG. 20 learns correction parameters a_0, a_1, ..., a_N based on learning operation signals from the user so that the traveling direction θ′ from the calculation unit 93 supplied as the input signal draws a locus such as that shown by the solid line in FIG. 21, and performs correction processing that corrects the traveling direction θ′ from the calculation unit 93 using those correction parameters a_0 to a_N.

  Therefore, the correction processing and the correction parameter learning processing performed by the processing unit 101 of the optimization device 94 of FIG. 20 will be described with reference to FIGS. 22 and 23. Although the description above assumes automatic traveling straight in a certain direction, the processing unit 101 of the optimization device 94 in FIG. 20 can also be applied to automatic traveling along an arbitrary locus.

  First, correction processing performed by the processing unit 101 of the optimization device 94 in FIG. 20 will be described with reference to the flowchart in FIG.

In the correction processing, in step S151, the correction amount calculation unit 141 calculates the correction amount using the correction parameters a_0 to a_N stored in the correction parameter memory 146.

That is, here, the correction amount is calculated on the assumption that, for example, the true traveling direction θ is expressed as in equation (37), using the correction parameters a_0 to a_N and the traveling direction θ′ from the calculation unit 93 as the input signal.

$$\theta=\theta'+a_0+a_1t+a_2t^2+\cdots+a_Nt^N \qquad (37)$$

Therefore, from equation (37), the correction amount calculation unit 141 calculates a_0 + a_1t + a_2t^2 + ... + a_Nt^N as the correction amount. This correction amount is supplied to the calculator 142.

  In step S152, the calculator 142 adds the traveling direction θ′ from the calculation unit 93 as the input signal and the correction amount, outputs the sum (θ in equation (37)) as the output signal, and, after waiting for the next sample of the input signal to be supplied, returns to step S151; the same processing is repeated thereafter.
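The correction amount of equation (37) is a polynomial in the elapsed time t; a sketch of steps S151 and S152 follows (Horner's rule is merely an implementation choice):

```python
def correction_amount(coeffs, t):
    """Evaluate a0 + a1*t + ... + aN*t**N by Horner's rule.

    coeffs: [a0, a1, ..., aN], as stored in the correction parameter memory.
    """
    acc = 0.0
    for a in reversed(coeffs):
        acc = acc * t + a
    return acc

def correct_heading(theta_dash, coeffs, t):
    """Step S152: add the correction amount to the computed heading."""
    return theta_dash + correction_amount(coeffs, t)

print(correct_heading(1.00, [0.01, -0.002, 0.0001], t=5.0))  # -> 1.0025
```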

  Next, the correction parameter learning process performed by the processing unit 101 of the optimization device 94 in FIG. 20 will be described with reference to the flowchart in FIG.

  In the correction parameter learning processing, first, in step S161, the operation signal processing unit 110 determines whether a learning operation signal has been received from the operation unit 98 (FIG. 16); if it determines that one has not been received, the process returns to step S161.

  If it is determined in step S161 that a learning operation signal has been received from the operation unit 98, that is, if it can be judged that the user operated the operation unit 98 so as to direct the automobile in the desired traveling direction, for example, when the operation unit 98, after its operation is started, is operated continuously for the second time t2 or more without leaving an idle interval of the first time t1 or more and its operation is then stopped continuously for the third time t3 or more, or when, after its operation is started, its operation is stopped continuously for the third time t3 or more, the process proceeds to step S162, where the teacher data generation unit 143 generates teacher data and the student data generation unit 144 generates student data.

  That is, when the operation signal processing unit 110 receives a learning operation signal, the operation signal processing unit 110 supplies a learning message to that effect to the teacher data generation unit 143 and the student data generation unit 144. Upon receiving the learning message, the teacher data generation unit 143 acquires the traveling direction as an input signal supplied immediately after that as teacher data and supplies it to the learning data memory 53.

  That is, in this case, as the teacher data, it is necessary to use the traveling direction after the user operates the operation unit 98 as a steering wheel so that the automobile is directed in a desired direction. Therefore, the teacher data generation unit 143 supplies the traveling direction θ as an input signal supplied after receiving the learning message to the learning data memory 53 as teacher data.

  In addition, when the student data generation unit 144 receives the learning message, it supplies the learning data memory 53, as student data, with the traveling direction as the input signal supplied immediately before that, that is, the traveling direction immediately before the automobile came to face the desired direction.

  Thereafter, the process proceeds to step S163, where the learning data memory 53 stores the set of the teacher data from the teacher data generation unit 143 and the student data from the student data generation unit 144, and the process proceeds to step S164.

  In step S164, the correction parameter calculation unit 145 performs addition in the least square method similar to the case described in Expressions (22) to (30) for teacher data and student data.

Note that the adding in step S164 is performed using the previous adding results as the learning information stored in the learning information memory 55, as in the case described above. Here, the adding is performed so as to obtain the correction parameters a_0 to a_N that minimize the sum of the square errors between the predicted value of the teacher data as θ in equation (37), calculated using the student data as θ′ in equation (37), and the corresponding teacher data.
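Because equation (37) is linear in a_0 to a_N, the adding of step S164 accumulates the normal equations of a polynomial least-squares fit of the residual θ − θ′ against time. A sketch using numpy's solver for brevity (the patent instead keeps running sums as learning information):

```python
import numpy as np

def fit_correction_parameters(times, theta_dash, theta_teacher, N):
    """Least-squares fit of theta = theta' + a0 + a1*t + ... + aN*t**N.

    times:         sample times of the learning pairs.
    theta_dash:    student data (traveling directions from the calculation unit).
    theta_teacher: teacher data (traveling directions after the user's operation).
    """
    t = np.asarray(times, dtype=float)
    residual = np.asarray(theta_teacher, float) - np.asarray(theta_dash, float)
    A = np.vander(t, N + 1, increasing=True)  # columns t**0, t**1, ..., t**N
    coeffs, *_ = np.linalg.lstsq(A, residual, rcond=None)
    return coeffs  # [a0, a1, ..., aN]

# Toy example: a drift of 0.5 + 0.1*t is recovered from the learning pairs.
ts = np.linspace(0.0, 10.0, 50)
true_heading = np.sin(ts)
print(fit_correction_parameters(ts, true_heading - (0.5 + 0.1 * ts),
                                true_heading, N=2).round(3))
```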

  After performing the addition in step S164, the correction parameter calculation unit 145 stores the addition result in the form of overwriting the learning information memory 55 as learning information, and proceeds to step S165.

In step S165, the correction parameter calculation unit 145 determines whether the correction parameters a_0 to a_N can be obtained from the adding results as the learning information stored in the learning information memory 55.

If it is determined in step S165 that the correction parameters a_0 to a_N cannot be obtained, the correction parameter calculation unit 145 notifies the determination control unit 56 of that fact, and the process proceeds to step S169. In step S169, the determination control unit 56 supplies disable data representing prohibition of correction to the correction parameter memory 146 as the correction parameter and stores it. Then the process returns to step S161, and the same processing is repeated thereafter.

Therefore, when there is not enough learning information to obtain the correction parameters a_0 to a_N, the correction unit 21 does not correct the input signal; that is, the correction amount of the input signal is set to 0.

On the other hand, if it is determined in step S165 that the correction parameters can be obtained, the process proceeds to step S166, where the correction parameter calculation unit 145 obtains the correction parameters a_0 to a_N using the learning information, supplies them to the determination control unit 56, and the process proceeds to step S167.

In step S167, the determination control unit 56 obtains, from each student data stored in the learning data memory 53, the predicted value of the corresponding teacher data according to equation (37) defined by the correction parameters a_0 to a_N from the correction parameter calculation unit 145, and obtains the sum of the squares of the prediction errors of those predicted values (the errors with respect to the teacher data stored in the learning data memory 53). Further, the determination control unit 56 obtains a normalization error by dividing the sum of the squares of the prediction errors by, for example, the number of learning pairs stored in the learning data memory 53, and the process proceeds to step S168.

In step S168, the determination control unit 56 determines whether the normalization error is greater than (or not less than) the predetermined threshold value S1. If it is determined in step S168 that the normalization error is greater than the predetermined threshold value S1, that is, if equation (37) defined by the correction parameters a_0 to a_N does not accurately approximate the relationship between the student data and the teacher data stored in the learning data memory 53, the process proceeds to step S169, where, as described above, the determination control unit 56 supplies the disable data to the correction parameter memory 146 as the correction parameter and stores it. Then the process returns to step S161, and the same processing is repeated thereafter.

Therefore, even if the correction parameters a_0 to a_N can be obtained, when equation (37) defined by them does not approximate the relationship between the student data and the teacher data stored in the learning data memory 53 with high accuracy, the correction amount of the input signal x(t) is set to 0, just as when there is not enough learning information to obtain the correction parameters a_0 to a_N.

On the other hand, if it is determined in step S168 that the normalization error is not greater than the predetermined threshold value S1, that is, if equation (37) defined by the correction parameters a_0 to a_N approximates the relationship between the student data and the teacher data stored in the learning data memory 53 with high accuracy, the process proceeds to step S170, where the determination control unit 56 obtains the error ε between the surface of equation (37) defined by the correction parameters a_0 to a_N from the correction parameter calculation unit 145 and the point defined by the latest teacher data and student data stored in the learning data memory 53.

Then the process proceeds to step S171, where the determination control unit 56 determines whether the magnitude of the error ε is greater than (or not less than) the predetermined threshold value S2; if it determines that it is not greater, step S172 is skipped, and in step S173 the determination control unit 56 outputs the correction parameters a_0 to a_N obtained in step S166 to the correction parameter memory 146. In this case, the correction parameter memory 146 stores the correction parameters a_0 to a_N from the determination control unit 56 in overwriting form, and the process returns to step S161.

On the other hand, if it is determined in step S171 that the magnitude of the error ε is greater than the predetermined threshold value S2, the process proceeds to step S172, where the determination control unit 56 controls the correction parameter calculation unit 145 to recalculate the correction parameters a_0 to a_N using only the most recent teacher data and student data stored in the learning data memory 53. Then, in step S173, the determination control unit 56 outputs the correction parameters a_0 to a_N obtained in step S172 to the correction parameter memory 146, stores them in overwriting form, and the process returns to step S161.

That is, in the embodiment of FIG. 23 as well, in the same manner as in the embodiment of FIG. 8, the error ε between the surface defined by equation (37) with the correction parameters a_0 to a_N obtained from the teacher data and student data given so far and the point defined by the latest teacher data and student data is obtained in step S170.

When the magnitude of the error ε is not greater than the threshold value S2, the surface of equation (37) defined by the correction parameters a_0 to a_N obtained in step S166 is considered to approximate relatively accurately all of the points defined by the teacher data and student data given so far, including the point defined by the latest teacher data and student data, so those correction parameters a_0 to a_N are stored in the correction parameter memory 146.

On the other hand, when the magnitude of the error ε is greater than the threshold value S2, the point defined by the latest teacher data and student data deviates greatly from the surface of equation (37) defined by the correction parameters a_0 to a_N obtained in step S166, so the determination control unit 56 has the correction parameters a_0 to a_N recalculated in step S172 using only the most recent teacher data and student data stored in the learning data memory 53.

Accordingly, in this case as well, the correction parameters a_0 to a_N of equation (37) are learned based on the learning operation signals supplied in accordance with the user's operation, so the user's operation can be learned without the user's knowledge, and, moreover, processing optimal for the user can be performed using the learning result.

  Furthermore, in this case, the automobile can be made to travel automatically along the predetermined set locus even when the error included in the traveling direction output by the calculation unit 93 (FIG. 16) is colored.

  Further, in the processing unit 101 of the optimization device 94 in FIG. 20, the correction parameter used in the correction processing (FIG. 22) performed by the correction unit 21 is changed in accordance with the user's operation of the operation unit 98 so that the actual traveling direction of the automobile follows the set locus. That is, when the user operates the operation unit 98 so that the traveling direction of the automobile becomes the desired direction, the correction parameters are learned with the traveling directions θ output by the calculation unit 93 (FIG. 16) as the input signal immediately before and immediately after the operation of the operation unit 98 as the student data and the teacher data, respectively, and the correction parameters are thereby changed. The changed correction parameters are supplied to the correction unit 21, where the correction amount is calculated using them and the correction processing of the input signal (FIG. 22) is performed based on that correction amount. Accordingly, when the user operates the operation unit 98, the correction parameters of equation (37) are changed by the user's operation, and naturally the content of the processing (correction processing) represented by equation (37) is also changed; so in the processing unit 101 of the optimization device 94 of FIG. 20 as well, it can be said that the "content of processing" is changed in accordance with the user's operation so that the traveling direction desired by the user is obtained.

  Furthermore, in the optimization device 94 of FIG. 20, when a sufficient number of learning pairs has not been input from the user, or when learning pairs that can be approximated with high accuracy have not been input, the correction amount of the input signal in the correction unit 21 is set to 0; when learning pairs allowing high-accuracy approximation are input from the user, the input signal is corrected by the correction amount obtained with the correction parameters learned using those learning pairs. That is, the algorithm for calculating the correction amount used in the correction processing of the correction unit 21 is changed between the case where a sufficient number of learning pairs, or learning pairs that can be approximated with high accuracy, have not been obtained and the case where learning pairs that can be approximated with high accuracy have been obtained.

  Accordingly, in the processing unit 101 of the optimization device 94 of FIG. 20 as well, the "content of processing" and, further, the "structure of processing" are changed in accordance with the user's operation, so that the automobile automatically travels in a traveling direction along the set locus.

Here, in the embodiment of FIG. 23 (and likewise in the embodiments of FIGS. 8 and 13), the error ε between the surface of equation (37) defined by the correction parameters a_0 to a_N from the correction parameter calculation unit 145 and the point defined by the latest teacher data and student data is obtained in step S170, and the subsequent processing is performed based on it. However, it is also possible, in step S170, to obtain the errors ε between each of the points defined by a plurality of recent sets of teacher data and student data and the surface of equation (37) defined by the correction parameters a_0 to a_N obtained in step S166 before those sets were supplied, and to perform the subsequent processing based on those plural errors ε.

  The processing unit 101 of the optimization device 94 in FIG. 16 can also be configured using, for example, the processing unit 11 of the optimization device 1 shown in FIG. 10, in addition to the configurations shown in FIGS. 17 and 20.

  Next, FIG. 24 shows a configuration example of another embodiment of the automatic traveling device to which the optimization device of FIG. 1 is applied. In the figure, portions corresponding to those in FIG. 16 are denoted by the same reference numerals, and description thereof will be omitted below as appropriate. That is, the automatic traveling device of FIG. 24 is configured in the same manner as in FIG. 16, except that an internal information generation unit 161 is newly provided in the optimization device 94 and a display unit 171 is newly provided.

  Similarly to the internal information generation unit 171 in FIG. 14, the internal information generation unit 161 reads internal information from the processing unit 101, converts it into image information, and outputs the image information to the display unit 171. The display unit 171 displays the internal information supplied from the internal information generation unit 161 in a predetermined display format.

  In the embodiment of FIG. 24, the processing unit 101 can be configured as shown in FIG. 17 or FIG. 20. When the processing unit 101 of FIG. 24 is configured as shown in FIG. 17, the same processing as in FIG. 17 is performed except for the correction parameter calculation processing. Therefore, with reference to the flowchart of FIG. 25, the correction parameter calculation processing when the processing unit 101 of FIG. 24 is configured as shown in FIG. 17 will be described.

  In steps S191 to S197, processing similar to that in steps S111 to S117 of FIG. 18 is performed.

  After the processing of step S197, the process proceeds to step S198, and internal information is displayed on the display unit 171. More specifically, the internal information generation unit 161 reads, for example, the weight W stored in the weight memory 31 (FIG. 17) as internal information, converts it into an image signal that can be displayed on the display unit 171, and outputs it to the display unit 171 for display (presentation). After the processing of step S198, the process returns to step S191, and the same processing is repeated thereafter.

  With the processing described in the flowchart of FIG. 25, the weight W, as internal information related to the processing of the processing unit 101, is displayed (presented) on the display unit 171. As a result, the user can operate the operation unit 98 while viewing the display so that optimum automatic driving is performed.

  In the above-described case, the weight W is displayed, but the internal information generation unit 161 may display (present) other internal information on the display unit 171; for example, the parameter control data a and b may be read from the parameter control data memory 37 and displayed. It is also possible to display internal information indicating whether the weight selected by the selection unit 41 is a weight obtained from the parameter control data a and b learned using learning pairs, or a weight obtained from the input reliability and the output reliability.

  Next, when the processing unit 101 in FIG. 24 is configured as shown in FIG. 20, the same processing as in FIG. 20 is performed except for the correction parameter learning processing. With reference to the flowchart of FIG. 26, the correction parameter learning processing when the processing unit 101 in FIG. 24 is configured as shown in FIG. 20 will be described.

  In steps S211 to S223, the same processing as in steps S161 to S172 of FIG. 23 is performed.

After the processing of steps S219 and S223, the process proceeds to step S224, and the internal information generation unit 161 reads, for example, the correction parameters a_0 to a_N stored in the correction parameter memory 101 as internal information, converts them into an image signal that can be displayed on the display unit 171, and displays them on the display unit 171. At this time, since the correction parameters a_0 to a_N consist of a plurality of parameters, they may be displayed, as shown in FIG. 27, with each parameter on the horizontal axis and its value on the vertical axis. Alternatively, as shown in FIG. 28, any two correction parameters a_i and a_j may be displayed on the horizontal axis and the vertical axis, respectively. The correction parameters assigned to the horizontal axis and the vertical axis can be selected by the user.

  Thereafter, the process returns to step S211, and the same processing is repeated thereafter.

As described above, with the correction parameter learning processing described with reference to the flowchart of FIG. 26, the correction parameters a_0 to a_N used by the processing unit 101 of the optimization device 94 of FIG. 24 are displayed as internal information, and as a result, the user can operate the operation unit 98 while viewing the display so that optimum automatic traveling is performed.

Note that the internal information generation unit 161 may display internal information other than the correction parameters a_0 to a_N.

In the embodiment of FIG. 26, when the process proceeds to step S224 after the processing of step S219, the correction parameters a_0 to a_N as the internal information are set to 0 and displayed.

  Next, an optimization device 201 as another embodiment of the optimization device of FIG. 1 will be described with reference to FIG. 29. The optimization device 201 includes a processing unit 211, and optimizes, for example, an image signal as an input signal by removing noise from it, and displays the result. In this example, an image signal is described as a representative input signal, but the present invention is not limited to image signals and may be applied to other signals.

  The processing unit 211 includes a learning unit 221 and a mapping processing unit 222. An operation signal from the operation unit 202 is supplied to the learning unit 221 of the processing unit 211; the learning unit 221 learns, based on the operation signal, a coefficient set necessary for the processing of the mapping processing unit 222, and stores it in the coefficient memory 235. As a learning norm (learning rule) of the learning unit 221, for example, the least-Nth power error method (least Nth power method) can be used. A solution by the least-Nth power error method will be described later.

  The mapping processing unit 222 performs mapping processing for mapping (converting) an input signal to a predetermined output signal. That is, the mapping processing unit 222 takes, as the target pixel, a pixel to be obtained of the image signal as the output signal, extracts from the image signal as the input signal a tap corresponding to the target pixel (at least one pixel, or sample, necessary for the processing), and performs a product-sum operation with the coefficient set stored in the coefficient memory 235 to obtain the target pixel. The mapping processing unit 222 performs similar processing (mapping) on each of the pixels constituting the image signal as the output signal, thereby generating the image signal as the output signal, which it outputs to the display unit 203 for display.

  The operation unit 202 is operated by the user and supplies an operation signal corresponding to the operation to the learning unit 221. The display unit 203 displays the image signal as the output signal output from the mapping processing unit 222.

  Next, a detailed configuration of the learning unit 221 in FIG. 29 will be described with reference to FIG. 30. The teacher data generation unit 231 generates teacher data serving as the teacher of learning from the input signal and outputs it to the least-Nth power error method coefficient calculation unit 234. The student data generation unit 232 generates student data serving as the students of learning from the input signal and outputs it to the prediction tap extraction unit 233. For example, the teacher data generation unit 231 applies no processing to the input signal, whereas the student data generation unit 232 applies predetermined thinning processing or LPF (Low Pass Filter) processing to it; however, the present invention is not limited to this configuration, as long as the student data is generated so as to be degraded with respect to the teacher data. Therefore, in addition to the above, for example, when the teacher data generation unit 231 applies predetermined thinning or LPF processing to the input signal, the student data generation unit 232 may apply thinning or LPF processing stronger than that performed by the teacher data generation unit 231. In addition, for example, the input signal can be used as it is as the teacher data, and the student data can be obtained by superimposing noise on the input signal.
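
  As one concrete realization of the teacher/student relationship described above, the following sketch uses the input signal as-is as the teacher data and superimposes noise to obtain the student data; the noise level is a hypothetical parameter.

import numpy as np

def generate_teacher_student(input_signal, noise_sigma=5.0):
    """Teacher data is the input signal as-is; student data is a degraded
    version obtained by superimposing Gaussian noise on the input signal
    (noise_sigma is a hypothetical parameter)."""
    teacher = input_signal.astype(np.float64)
    student = teacher + np.random.normal(0.0, noise_sigma, teacher.shape)
    return teacher, student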

  The prediction tap extraction unit 233 sequentially takes the pixels constituting the image signal as the teacher data as the target pixel, extracts, as a prediction tap, at least one pixel (tap) having a predetermined positional relationship with the target pixel from the image signal as the student data, and outputs it to the least-Nth power error method coefficient calculation unit 234.

  The least-Nth power error method coefficient calculation unit 234 calculates, based on an operation signal input from the operation unit 202 that specifies the value of the exponent N necessary for the least-Nth power error coefficient calculation processing, a coefficient set from the prediction taps and the teacher data by the least-Nth power error method, and outputs it to the coefficient memory 235 for storage (overwriting as appropriate).

  The coefficient memory 235 stores the coefficient set supplied from the least-N power error method coefficient calculation unit 234 and outputs the coefficient set to the mapping processing unit 222 as appropriate.

  Next, the configuration of the mapping processing unit 222 in FIG. 29 will be described with reference to FIG. 31. The tap extraction unit 251 of the mapping processing unit 222 takes, in order, the pixels constituting the image signal as the output signal as the target pixel, and extracts, from the image signal as the input signal, the pixels (pixel values) having a predetermined positional relationship with the target pixel, thereby configuring a prediction tap having the same tap structure as that formed by the prediction tap extraction unit 233 of FIG. 30, and outputs it to the product-sum operation unit 252. The product-sum operation unit 252 performs a product-sum operation on the values of the extracted prediction tap (pixels) input from the tap extraction unit 251 and the coefficient set stored in the coefficient memory 235 of the learning unit 221, thereby generating the target pixel, which it outputs to the display unit 203 (FIG. 29).
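
  The processing of the tap extraction unit 251 and the product-sum operation unit 252 reduces, for each target pixel, to gathering the prediction tap and taking its inner product with the coefficient set; the following is a minimal sketch assuming a square tap structure.

import numpy as np

def map_image(input_image, coeffs, radius=1):
    """For each target pixel, extract the (2*radius+1)^2 prediction tap
    around the same position in the input image and compute the
    product-sum with the coefficient set (equation (39))."""
    h, w = input_image.shape
    padded = np.pad(input_image.astype(np.float64), radius, mode="edge")
    output = np.empty_like(input_image, dtype=np.float64)
    for i in range(h):
        for j in range(w):
            tap = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1].ravel()
            output[i, j] = tap @ coeffs    # y' = w_1 x_1 + ... + w_M x_M
    return output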

Here, the coefficient calculation by the least-Nth power error method in the least-Nth power error method coefficient calculation unit 234 of FIG. 30 will be described. The case of the exponent N = 2 in the least-Nth power error method is generally called the least square error method. That is, let the teacher data as the target pixel be y, the M student data constituting the prediction tap be x_i (i = 1, 2, ..., M), and the M predetermined coefficients be w_i; the predicted value y' of the teacher data y is represented by the linear combination (product-sum) of the prediction taps x_i and the predetermined coefficients w_i, namely w_1 x_1 + w_2 x_2 + ... + w_M x_M. In this case, as shown in FIG. 32, a coefficient set w_1, w_2, ..., w_M is obtained that minimizes the sum of the squares of the errors between the teacher data y indicated by the black circles and the predicted values y' indicated by the white circles in the figure (the differences, indicated by the arrows in the figure, between the true values y as the teacher data and their predicted values y').

  When the value of the exponent N in the least-Nth power error method is changed, the following occurs. When the exponent N is large, the errors of predicted values y' having large errors have a greater influence on the sum of the Nth power errors, so the least-Nth power error method yields, as a result, coefficients in a direction that relieves such large-error predicted values y' (coefficients that reduce the errors of the large-error predicted values y'); the errors of predicted values y' having small errors, however, have little influence on the sum of the Nth power errors, and are therefore given little consideration and, as a result, easily ignored. Conversely, when the exponent N is small, the errors of predicted values y' having large errors have a smaller influence, and the errors of predicted values y' having small errors a relatively larger influence, on the sum of the Nth power errors than when the exponent N is large. As a result, the least-Nth power error method yields coefficients in a direction that makes the errors of small-error predicted values smaller than when the exponent N is large.

  Note that the change, as described above, in the influence that the error of a predicted value y' has on the sum of the Nth power errors arises because the error of the predicted value y' is raised to the Nth power.

  As described above, the qualitative tendency of the coefficient set obtained by the least-Nth power error method is governed by the exponent N; therefore, by changing the exponent N and obtaining coefficients by the least-Nth power error method, a coefficient set that executes the mapping processing preferred by the user can be obtained (a coefficient set whose output signal, mapped from the input signal, suits the user's preference). In practice, however, with methods other than the least square error method, that is, with exponents N other than 2, it is extremely difficult to calculate a coefficient set that minimizes the sum of the Nth power errors of the predicted values y'.
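
  The dependence on the exponent N described above can be confirmed numerically; the sample errors in the following sketch are arbitrary illustrative values.

import numpy as np

errors = np.array([0.1, 0.5, 2.0])        # arbitrary prediction errors
for n in (1, 2, 4, 8):
    contrib = np.abs(errors) ** n
    print(n, contrib / contrib.sum())     # share of each error in the sum
# As n grows, the largest error dominates the sum of Nth power errors,
# so minimizing the sum concentrates on relieving large-error predictions.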

  Here, the reason why it is difficult to calculate, by a method other than the least square error method, a coefficient set that minimizes the sum of the Nth power errors of the predicted values y' will be described.

  The total sum of the Nth powers of the errors of the predicted values y' (the sum of Nth power errors) can be expressed by equation (38).

E = \sum e^N    ... (38)

  Here, the summation Σ in E is taken over the number of samples of the error e between the true value y as the teacher data and the predicted value y'.

Meanwhile, in the present embodiment, as described above, the predicted value y' of the true value y is defined by the linear combination of the prediction taps x_i and the predetermined coefficients w_i, that is, by the following equation (39).

y' = w_1 x_1 + w_2 x_2 + \cdots + w_M x_M = \sum_{i=1}^{M} w_i x_i    ... (39)

Here, the coefficients w_1, w_2, w_3, ..., w_M are hereinafter referred to as prediction coefficients as appropriate. This set of prediction coefficients is the coefficient set stored in the coefficient memory 235 of FIG. 30. As the prediction taps x_1, x_2, x_3, ..., x_M, pixels located spatially or temporally close to the position, in the image as the student data, corresponding to the pixel (true value) y of the image as the teacher data can be employed.

  In this case, the error e in the equation (38) can be expressed by the following equation (40).

e = y - y' = y - \sum_{i=1}^{M} w_i x_i    ... (40)

In the least-Nth power error method, it is necessary to find the prediction coefficients w_1, w_2, w_3, ..., w_M that minimize the sum E of Nth power errors expressed by the following equations (41) and (42), which are derived from equation (40). Equation (41) expresses the sum E when the exponent N is odd, and equation (42) expresses the sum E when the exponent N is even.

E = \sum \left| y - \sum_{i=1}^{M} w_i x_i \right|^N    ... (41)

E = \sum \left( y - \sum_{i=1}^{M} w_i x_i \right)^N    ... (42)

Here, in the case of equation (41), that is, when the exponent N is odd, the sum E takes the same value for a given set of magnitudes of the differences y − y' between the true values y and the predicted values y', regardless of the signs of the differences y − y'; the prediction coefficients w_1, w_2, w_3, ..., w_M that minimize the sum E therefore cannot be obtained analytically. That is, the sum E is a function including absolute values, for example a function as shown in FIG. 33, and the prediction coefficients w_1, w_2, w_3, ..., w_M that give the minimum value of the sum E cannot be obtained except by a full search. FIG. 33 shows the change of the sum E with respect to a certain prediction coefficient w_i when the exponent N = 1.

On the other hand, in the case of equation (42), that is, when the exponent N is even, the sum E always satisfies E ≥ 0, so its minimum can be obtained by setting to 0 the equations obtained by partially differentiating the sum E of equation (42) with respect to each prediction coefficient w_i, as shown in the following equation (43).

\partial E / \partial w_i = 0 \quad (i = 1, 2, \ldots, M)    ... (43)

Therefore, from equation (43), the prediction coefficients w_1, w_2, w_3, ..., w_M that minimize the sum E of Nth power errors are obtained by solving the equations shown in the following equation (44).

\sum x_i \left( y - \sum_{j=1}^{M} w_j x_j \right)^{N-1} = 0 \quad (i = 1, 2, \ldots, M)    ... (44)

  For equation (44), for example, when the exponent N is N = 2, that is, when a solution is to be obtained by the so-called least square error method, it suffices to substitute 2 for the exponent N in equation (44) and solve the following equation (45).

\sum x_i \left( y - \sum_{j=1}^{M} w_j x_j \right) = 0 \quad (i = 1, 2, \ldots, M)    ... (45)

Equation (45) can be expressed in the form of the determinant (matrix equation) shown in the following equation (46), which is called the normal equation. When the exponent N = 2, the local minimum of the sum E is uniquely determined, and that local minimum is the minimum value of the sum E. If simultaneous equations of the same number (here, M) as the number of prediction coefficients w_1, w_2, w_3, ..., w_M are formed by the normal equation of equation (46), the simultaneous linear equations can be solved by, for example, the Cholesky method, and the prediction coefficients w_1, w_2, w_3, ..., w_M can be obtained.

\begin{pmatrix} \sum x_1 x_1 & \sum x_1 x_2 & \cdots & \sum x_1 x_M \\ \sum x_2 x_1 & \sum x_2 x_2 & \cdots & \sum x_2 x_M \\ \vdots & \vdots & \ddots & \vdots \\ \sum x_M x_1 & \sum x_M x_2 & \cdots & \sum x_M x_M \end{pmatrix} \begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_M \end{pmatrix} = \begin{pmatrix} \sum x_1 y \\ \sum x_2 y \\ \vdots \\ \sum x_M y \end{pmatrix}    ... (46)

In order to solve the normal equation of equation (46), the matrix on its left side, whose components are the sums (Σ x_i x_j) of the products x_i x_j of the prediction taps, must be regular (non-singular).
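
  A minimal sketch of forming and solving the normal equation (46): the summations Σ x_i x_j and Σ x_i y are accumulated over all pairs of a target pixel and its prediction tap, and the linear system is solved by the Cholesky method; the regularity (here, positive definiteness) of the left-side matrix is assumed.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def solve_normal_equation(taps, teachers):
    """taps: array of shape (num_samples, M) of prediction taps x_i;
    teachers: array of shape (num_samples,) of teacher data y.
    Returns the coefficient set w minimizing the sum of square errors."""
    A = taps.T @ taps        # matrix of sums of x_i * x_j
    b = taps.T @ teachers    # vector of sums of x_i * y
    return cho_solve(cho_factor(A), b)   # requires A to be regular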

  When the exponent N is an even number of 4 or more, equation (42) leads to the following equation (47).

\sum x_i \left( y - \sum_{j=1}^{M} w_j x_j \right)^{N-1} = 0 \quad (i = 1, 2, \ldots, M; \ N = 4, 6, \ldots)    ... (47)

Since the equations represented by equation (47) are higher-order simultaneous equations, the prediction coefficients w_1, w_2, w_3, ..., w_M cannot be obtained by solving simultaneous linear equations as in the case where the exponent N is N = 2.

As described above, when the exponent N is other than N = 2, the prediction coefficients w_1, w_2, w_3, ..., w_M that minimize the sum of Nth power errors shown in equation (38) generally cannot be obtained easily.

  Therefore, the least-Nth power error method coefficient calculation unit 234 of the learning unit 221 calculates the prediction coefficients by one of the following two least-Nth power error methods. Which of the two least-Nth power error methods is to be used can be specified by, for example, the user operating the operation unit 202 (FIG. 29).

First, the first method (hereinafter also referred to as the direct method) will be described. As shown in the following equation (48), the sum E of the terms obtained by multiplying the square error e^2 by a weight α_S is defined as the sum to be minimized by the least-Nth power error method, instead of equation (38).

E = \sum \alpha_S e^2    ... (48)

That is, the Nth power error e^N is defined by the product of the weight α_S and the square error e^2.

In this case, for example, as shown in the following equation (49), the weight α_S of equation (48) is defined as a function of the predicted value y' obtained by the linear expression of equation (39) from the prediction coefficients w_1, w_2, w_3, ..., w_M obtained when the exponent N is N = 2; thereby, the prediction coefficients w_1 to w_M that minimize the sum E of Nth power errors in equation (48) can be obtained.

\alpha_S = f(y')    ... (49)

Various functions are conceivable as the weight α_S; a function should be used that makes the Nth power error e^N = α_S e^2 defined by equation (48) satisfy the properties of an Nth power error described above. For example, the function represented by the following equation (50) can be adopted.

\alpha_S = a x_S^c + b    ... (50)

Here, x_S represents the error of the predicted value y' calculated by equation (39) from the prediction coefficients w_1 to w_M obtained by the least square error method (hereinafter referred to as the error according to the least square criterion as appropriate), normalized into the range of 0 to 1.0; the weight α_S of equation (50), defined as a function of the error x_S of the predicted value y', is as shown in FIG. 34.

The coefficient a is a term that controls the influence that the error x_S according to the least square criterion has on the Nth power error e^N. When the coefficient a is a = 0, the weight α_S is a horizontal straight line of zero slope in FIG. 34. In this case, the influence of the error x_S according to the least square criterion on the Nth power error e^N is constant regardless of the magnitude of the error x_S, and the prediction coefficients w_i that minimize the sum E of equation (48) are theoretically the same as those obtained by the least square error method; therefore, by setting a = 0, the least square error method is substantially realized. When the coefficient a > 0, the influence of the error x_S according to the least square criterion on the Nth power error e^N of equation (48) becomes larger as the error x_S is larger, and smaller as the error x_S is smaller. Conversely, when a < 0, the influence of the error x_S according to the least square criterion on the Nth power error e^N of equation (48) becomes smaller as the error x_S is larger, and larger as the error x_S is smaller.

That is, the Nth power error e^N of equation (48) has the same properties as when the exponent N is made large if the coefficient a is positive, and the same properties as when the exponent N is made small if the coefficient a is negative. Thus, since the Nth power error e^N of equation (48) has properties similar to the Nth power error e^N of equation (38), the prediction coefficients that minimize the sum E of Nth power errors of equation (48) substantially minimize the sum E of Nth power errors of equation (38).

  When the coefficient a is 0, the least square error method is realized as described above; that is, the exponent N is effectively 2. When the coefficient a is positive, the exponent N corresponds to N > 2, and when the coefficient a is negative, to N < 2. The coefficient a thus greatly affects the exponent N of the least-Nth power error method, as does the coefficient c described later.

The coefficient b is a correction term, and the function value (weight α_S) in FIG. 34 shifts in the vertical direction as a whole depending on the value of the coefficient b. The coefficient b does not greatly affect the exponent N of the least-Nth power error method.

The coefficient c is a term that changes the scaling of the axis, that is, the way the weight α_S is assigned to the error x_S according to the least square criterion. The larger the value of the coefficient c, the steeper the change of the weight α_S; conversely, the smaller the coefficient c, the more gradual the change of the weight α_S. Therefore, the change of the coefficient c affects the influence of the error x_S according to the least square criterion on the Nth power error e^N of equation (48) in the same way as a change of the coefficient a, so the Nth power error e^N of equation (48) can be given the same properties as the Nth power error e^N of equation (38). That is, the coefficient c can also affect the exponent N of the least-Nth power error method.

  The straight line in FIG. 34 shows the case of c = 1 and a > 0 (b is arbitrary), and the curve in FIG. 34 shows the case of c ≠ 1 and a > 0 (b is arbitrary).

The coefficients a, b, and c that define the weight α_S in equation (50) can be changed by the user operating (setting via) the operation unit 202, and when the coefficients a, b, and c are changed, the weight α_S of equation (50) changes. Due to this change of the weight α_S, α_S e^2 of equation (48) functions substantially (equivalently) as the Nth power error e^N for a given exponent N; as a result, the prediction coefficients that minimize the sum E of Nth power errors of equation (48), that is, the prediction coefficients w_i under the criterion of the least-Nth power error method, can be obtained.

In the first method described above, the coefficients a, b, and c that determine the weight α_S are varied to substantially change the exponent N, and the prediction coefficients are obtained by the least-Nth power error method. Therefore, the exponent N is not limited to an integer; it is possible, for example, to obtain prediction coefficients for an exponent N that is a non-integer real number, such as N = 2.2.
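
  Read as an algorithm, the direct method amounts to a reweighted least squares procedure: an ordinary least-squares solution yields the normalized errors x_S, the weight α_S is computed for each sample, and a weighted least-squares problem minimizing Σ α_S e^2 is solved. The following sketch assumes the functional form α_S = a x_S^c + b reconstructed above for equation (50).

import numpy as np

def least_n_direct(taps, teachers, a=40.0, b=0.1, c=1.0):
    """Direct method sketch: solve a weighted least-squares problem whose
    per-sample weight alpha_S = a * x_S**c + b is computed from the
    normalized least-squares error x_S (form of equation (50) assumed)."""
    # step 1: ordinary least squares (exponent N = 2)
    w2, *_ = np.linalg.lstsq(taps, teachers, rcond=None)
    err = np.abs(teachers - taps @ w2)
    x_s = err / err.max() if err.max() > 0 else err   # normalize to [0, 1]
    alpha = np.maximum(a * x_s ** c + b, 0.0)         # keep weights non-negative
    # step 2: weighted least squares minimizing sum(alpha * e^2)
    sqrt_a = np.sqrt(alpha)
    w_n, *_ = np.linalg.lstsq(taps * sqrt_a[:, None], teachers * sqrt_a,
                              rcond=None)
    return w_n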

Next, the second method for calculating prediction coefficients by the least-Nth power error method (hereinafter also referred to as the recursive method) will be described. In the first method, as shown in equation (48), the square error e^2 multiplied by the weight α_S is used as the Nth power error; in the second method, using a solution obtained by a lower-order least-Nth power error method, a solution by a higher-order least-Nth power error method is obtained recursively.

That is, the prediction coefficients w_i that minimize the sum E of square errors in the following equation (51) can be obtained by the least square error method as described above; the predicted value y' calculated by equation (39) using the prediction coefficients w_i obtained by this least square error method is denoted y_1 (hereinafter referred to as the predicted value according to the least square criterion as appropriate).

E = \sum e^2 = \sum (y - y')^2    ... (51)

  Next, for example, consider the sum E of the cube error expressed by the following equation (52).

E = \sum |e^3| = \sum e^2 \, |y - y_1|    ... (52)

Obtaining the prediction coefficients w_i that minimize the sum E of cube errors of equation (52) amounts to obtaining a solution by the least cube error method. Here, as shown in equation (52), the cube error |e^3| is represented by the product of the square error e^2 and the error |y − y_1| between the predicted value y_1 according to the least square criterion and the true value y. Since |y − y_1| in equation (52) can be obtained as a constant, the prediction coefficients w_i that minimize the sum E of cube errors of equation (52) can actually be obtained by the least square error method.

  Similarly, consider the sum E of the fourth power error expressed by Equation (53).

E = \sum e^4 = \sum e^2 \, |y - y_2|^2    ... (53)

Obtaining the prediction coefficients w_i that minimize the sum E of fourth power errors of equation (53) amounts to obtaining a solution by the least fourth power error method. Now, when the predicted value calculated by equation (39) using the prediction coefficients w_i that minimize the sum E of cube errors of equation (52) is denoted y_2 (hereinafter referred to as the predicted value according to the least cube criterion as appropriate), the fourth power error e^4 can be expressed, as shown in equation (53), as the product of the square error e^2 and the square of the error between the predicted value y_2 according to the least cube criterion and the true value y (hereinafter referred to as the square error according to the least cube criterion as appropriate), |y − y_2|^2. Since the square error |y − y_2|^2 according to the least cube criterion of equation (53) can be obtained as a constant, the prediction coefficients w_i that minimize the sum E of fourth power errors of equation (53) can actually be obtained by the least square error method.

  The same applies to the following formula (54).

E = \sum |e^5| = \sum e^2 \, |y - y_3|^3    ... (54)

That is, obtaining the prediction coefficients w_i that minimize the sum E of fifth power errors in equation (54) amounts to obtaining a solution by the least fifth power error method. Now, when the predicted value calculated by equation (39) using the prediction coefficients w_i that minimize the sum E of fourth power errors of equation (53) is denoted y_3 (hereinafter referred to as the predicted value according to the least fourth power criterion as appropriate), the fifth power error |e^5| can be expressed, as shown in equation (54), as the product of the square error e^2 and the cube of the error between the predicted value y_3 according to the least fourth power criterion and the true value y (hereinafter referred to as the cube error according to the least fourth power criterion as appropriate), |y − y_3|^3. Since the cube error according to the least fourth power criterion in equation (54) can be obtained as a constant, the prediction coefficients w_i that minimize the sum E of fifth power errors of equation (54) can also actually be obtained by the least square error method.

In the case of the least-Nth power error method with an exponent N of 6 or more, the solution (prediction coefficients w_i) can be obtained in the same manner.

  As described above, in the second method, in order to obtain a solution by a higher-order least-Nth power error method, a predicted value (prediction error) obtained from a solution by a lower-order least-Nth power error method is used, and this is repeated recursively to obtain solutions by ever higher-order least-Nth power error methods.

In the case described above, a solution by the least-Nth power error method is obtained by using the predicted values calculated with the prediction coefficients obtained by the least (N−1)th power error method one order lower; however, a solution by the least-Nth power error method may also be obtained by using the predicted values calculated with the prediction coefficients obtained by a least power error method of any lower order. That is, in the case of equation (53), |y − y_1| may be used instead of |y − y_2|, and in the case of equation (54), |y − y_2| or |y − y_1| may be used instead of |y − y_3|.

Further, in the second method, the Nth power error e^N is expressed by the product of the square error e^2 and the (N−2)th power error |y − y'|^(N−2); therefore, similarly to the first method, it is also possible to obtain a solution by the least-Nth power error method for an arbitrary exponent N, such as N = 2.2.
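
  A sketch of the recursive method: each exponent N is reduced to a weighted least-squares problem whose per-sample weight is the (N−2)th power of the error of the previous, lower-order prediction; integer exponents are assumed here for simplicity.

import numpy as np

def least_n_recursive(taps, teachers, n=5):
    """Recursive method sketch: the exponent-N solution is obtained from a
    weighted least-squares problem with weight |y - y_prev|**(N-2), where
    y_prev is predicted with the coefficients of the previous lower order."""
    w, *_ = np.linalg.lstsq(taps, teachers, rcond=None)   # N = 2 solution
    for order in range(3, n + 1):
        y_prev = taps @ w
        weight = np.abs(teachers - y_prev) ** (order - 2)
        sqrt_w = np.sqrt(weight)
        w, *_ = np.linalg.lstsq(taps * sqrt_w[:, None], teachers * sqrt_w,
                                rcond=None)
        # w now approximates the least (order)-th power error solution
    return w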

  Next, image optimization processing by the optimization device 201 in FIG. 29 will be described with reference to the flowchart in FIG. 35. The image optimization processing consists of learning processing and mapping processing.

  In the learning process, it is determined in step S230 whether or not the user has operated the operation unit 202. If it is determined that the user has not operated the operation unit 202, the process returns to step S230. If it is determined in step S230 that the operation unit 202 has been operated, the process proceeds to step S231.

  In step S231, the teacher data generation unit 231 of the learning unit 221 generates teacher data from the input signal and outputs it to the least-Nth power error method coefficient calculation unit 234, and the student data generation unit 232 generates student data from the input signal and outputs it to the prediction tap extraction unit 233; the process then proceeds to step S232.

  As the data used to generate the student data and teacher data (hereinafter referred to as learning data as appropriate), for example, input signals that have been input from the present back to a point a predetermined time in the past can be employed. Alternatively, dedicated data may be stored in advance and used as the learning data instead of the input signal.

  In step S232, the prediction tap extraction unit 233 takes each teacher data as the target pixel, generates, for each target pixel, a prediction tap from the student data input from the student data generation unit 232, and outputs it to the least-Nth power error method coefficient calculation unit 234; the process then proceeds to step S233.

In step S233, the least-Nth power error method coefficient calculation unit 234 determines whether an operation signal designating calculation of a coefficient set by the least-Nth power error method using the recursive method (second method) has been input from the operation unit 202. If the operation unit 202 has been operated by the user and it is determined that the recursive method is not designated, that is, the direct method (first method) is designated, the process proceeds to step S234, and it is determined whether the coefficients a, b, and c specifying the weight α_S of equation (50) (that is, specifying the exponent N) have been input; this processing is repeated until they are input. If it is determined that, for example, values specifying the coefficients a, b, and c have been input by the user operating the operation unit 202, the process proceeds to step S235.

In step S235, the least-Nth power error method coefficient calculation unit 234 solves, substantially by the least square error method, the problem of minimizing equation (48) described above with the weight α_S given by the input coefficients a, b, and c, thereby obtaining the prediction coefficients w_1, w_2, w_3, ..., w_M, that is, a coefficient set, as the solution by the least-Nth power error method with the exponent N corresponding to the weight α_S; it stores the coefficient set in the coefficient memory 235, and the process returns to step S230.

  On the other hand, if it is determined in step S233 that the recursive method has been selected, the process proceeds to step S236.

  In step S236, the least-Nth power error method coefficient calculation unit 234 determines whether information specifying the exponent N has been input, and repeats the processing until the exponent N is input. If it is determined that information specifying the exponent N has been input by the user operating the operation unit 202, the process proceeds to step S237.

  In step S237, the least-Nth power error method coefficient calculation unit 234 obtains a coefficient set by the solution using the basic least square error method. In step S238, the least-Nth power error method coefficient calculation unit 234, using the predicted values obtained from the coefficient sets of lower order, recursively obtains, as described with reference to equations (51) to (54), the coefficient set by the least-Nth power error method corresponding to the exponent N input from the operation unit 202, stores it in the coefficient memory 235, and the process returns to step S230.

  Next, in the mapping processing, in step S241, the tap extraction unit 251 of the mapping processing unit 222 takes, as the frame of interest, the image frame as the output signal corresponding to the image frame as the current input signal, and takes, as the target pixel, a pixel of the frame of interest that has not yet been taken as the target pixel, for example in raster scan order; it extracts a prediction tap from the input signal for the target pixel and outputs it to the product-sum operation unit 252.

  In step S242, the product-sum operation unit 252 reads the prediction coefficients from the coefficient memory 235 of the learning unit 221 and executes the product-sum operation processing of equation (39) on the prediction tap input from the tap extraction unit 251 and the prediction coefficients read from the coefficient memory 235. The product-sum operation unit 252 thereby obtains the pixel value (predicted value) of the target pixel. Thereafter, the process proceeds to step S243, where the tap extraction unit 251 determines whether all the pixels of the frame of interest have been taken as the target pixel. If it is determined that some pixels have not yet been processed, the process returns to step S241, and the same processing is repeated with a pixel of the frame of interest that has not yet been taken as the target pixel, in raster scan order, as the new target pixel.

  If it is determined in step S243 that all the pixels of the frame of interest have been taken as the target pixel, the process proceeds to step S244, and the display unit 203 displays the frame of interest composed of the pixels obtained by the product-sum operation unit 252.

  Then, the process returns to step S241, and the tap extraction unit 251 repeats the same processing with the next frame as the new frame of interest.

  According to the image optimization processing of FIG. 35, when the user views the image displayed on the display unit 203 as a result of the mapping processing and it does not suit his or her preference, the user operates the operation unit 202 to designate the direct method or the recursive method and to specify the exponent N of the least-Nth power error method. The prediction coefficients obtained by the least-Nth power error method are thereby changed in the learning processing, so that the output signal obtained by the mapping processing can be made to suit the user's own preference.

Here, FIG. 36 shows, for the direct method in which the values of the coefficients a, b, and c of the weight α_S in equation (50) are changed, for example to a = 40, b = 0.1, and c = 1, the sum of the errors of the predicted values calculated using the coefficient set of the least-Nth power criterion obtained by the least-Nth power error method, together with the sum of the errors of the predicted values calculated using the coefficient set of the least square criterion obtained by the ordinary least square error method. Here, both the sum of square errors and the sum of cube errors are shown. The case where the coefficients a, b, and c take the above values corresponds to the case where the exponent N of the Nth power error e^N in equation (48) is larger than 2. In FIG. 36, the sum of square errors is 10160281 for the coefficient set of the least square criterion and 10828594 for the coefficient set of the least-Nth power criterion; the sum of square errors is thus smaller for the coefficient set of the least square criterion than for the coefficient set of the least-Nth power criterion. On the other hand, the sum of cube errors is 165988823 for the coefficient set of the least square criterion and 161283660 for the coefficient set of the least-Nth power criterion; the sum of cube errors is thus smaller for the coefficient set of the least-Nth power criterion than for the coefficient set of the least square criterion.

  Therefore, an image as an output signal having a smaller sum of square errors can be obtained by performing the mapping processing (the product-sum operation of equation (39)) using the coefficient set of the least square criterion, while an image as an output signal having a smaller sum of cube errors can be obtained by performing the mapping processing using the coefficient set of the least-Nth power criterion obtained with the coefficients a, b, and c of the above values.

  In the image optimization processing of FIG. 35, the exponent N is changed by the user operating the operation unit 202 (in the direct method, the coefficients a, b, and c specifying the exponent N are changed, and in the recursive method, the exponent N itself is changed), and this sets which exponent-N least-Nth power error method is adopted as the learning criterion (learning system) for the prediction coefficients (coefficient set). That is, the learning algorithm itself for obtaining the prediction coefficients is changed; it can therefore be said that the “structure of processing” is changed so as to obtain an image desired by the user.

  Next, FIG. 37 shows another configuration example of the optimization device of FIG. 1. The optimization device of FIG. 37 is the same as the optimization device 201 in FIG. 29 except that the internal information generation unit 261 is provided, and description of the common parts is omitted.

  The internal information generation unit 261 reads, as internal information of the processing unit 211, for example the prediction coefficients stored in the coefficient memory 235, converts the prediction coefficient information into an image signal, and outputs it to the display unit 203 for display.

  Next, image optimization processing by the optimization device 201 in FIG. 37 will be described with reference to the flowchart in FIG. 38. This image optimization processing also consists of learning processing and mapping processing, as in the case of FIG. 35. In the learning processing, the same processes as in steps S230 to S238 in FIG. 35 are performed in steps S250 to S258, respectively.

  Further, in the learning processing, after the processes of steps S255 and S258, the process proceeds to step S259, where the internal information generation unit 261 reads the coefficient set stored in the coefficient memory 235 as internal information, generates a displayable image signal based on the values included in the coefficient set, and outputs it to the display unit 203 for display.

  At this time, the image generated by the internal information generation unit 261 and displayed on the display unit 203 may be in a format such as the three-dimensional distribution diagram shown in FIG. 39 or the two-dimensional distribution diagram shown in FIG. 40. That is, in FIG. 39, the coordinates corresponding to the positions of the prediction taps extracted from the input signal are shown as positions on the xy plane, as Tap Position (x) and Tap Position (y), and at the corresponding coordinates the prediction coefficient (Coeff) by which the pixel value at that prediction tap is multiplied is shown. FIG. 40 expresses FIG. 39 as a contour map.
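
  A display in the manner of FIG. 40 can be sketched as follows; the 7 × 7 tap arrangement and the use of matplotlib are assumptions for illustration.

import numpy as np
import matplotlib.pyplot as plt

def show_coefficient_set(coeffs, side=7):
    """Display a coefficient set as a contour map over tap positions,
    in the manner of FIG. 40 (side x side tap arrangement assumed)."""
    grid = np.asarray(coeffs).reshape(side, side)
    x, y = np.meshgrid(np.arange(side), np.arange(side))
    cs = plt.contourf(x, y, grid)
    plt.colorbar(cs, label="Coeff")
    plt.xlabel("Tap Position (x)")
    plt.ylabel("Tap Position (y)")
    plt.show()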

  Now, the description returns to the flowchart of FIG.

  After the process of step S259, the process returns to step S250, and the same process is repeated thereafter.

  On the other hand, in the mapping process, processes similar to those in steps S241 to S244 in FIG. 35 are performed in steps S261 to S264, respectively.

  Through the above processing, the values (coefficient values) of the coefficient set stored in the coefficient memory 235 of the processing unit 211 are displayed (presented) as internal information related to the processing, and the user, while viewing the distribution of the coefficient set and the processing result of the processing unit 211, operates the operation unit 202 so that an image as an output signal suiting his or her preference is obtained, thereby changing the exponent N (in the direct method, the coefficients a, b, and c specifying the exponent N are changed, and in the recursive method, the exponent N itself is changed). As a result, which exponent-N least-Nth power error method is to be used as the learning criterion (learning system) for the prediction coefficients (coefficient set) is set; that is, the learning algorithm itself for obtaining the prediction coefficients is changed, so it can be said that the “structure of processing” is changed. In the above example, the coefficient set is displayed, but other internal information related to the processing, such as whether the least-Nth power error method currently in use is the direct method or the recursive method, may also be displayed.

  FIG. 41 shows another configuration example of the optimization device. The optimization device 301 of FIG. 41 includes a processing unit 311, and optimizes an input signal based on an operation signal input from the operation unit 202 and displays it on the display unit 203. In the figure, portions corresponding to those in the above-described embodiments are denoted by the same reference numerals, and description thereof will be omitted below as appropriate.

  The coefficient memory 321 of the processing unit 311 in FIG. 41 is basically the same as the coefficient memory 235 in FIG. 30, and stores the coefficient set necessary for the mapping processing unit 222 to execute the mapping processing. This coefficient set is basically a coefficient set (coefficient set as initial values) generated by the learning device 341 of FIG. 43, described later, but is changed as appropriate by the coefficient changing unit 322 and stored by overwriting. Therefore, as the overwriting is repeated, the coefficient set comes to differ over time from the one generated by the learning device 341. The coefficient set as initial values may be held in a memory (not shown), so that the stored contents of the coefficient memory 321 can be returned to the initial coefficient set in response to an operation of the operation unit 202.

  The coefficient changing unit 322 reads the coefficient set (prediction coefficients) stored in the coefficient memory 321 based on the operation signal input from the operation unit 202, changes the prediction coefficients corresponding to the respective prediction taps (the prediction coefficients by which the respective prediction taps are multiplied), and overwrites the coefficient memory 321 with them again.

  Next, the configuration of the coefficient changing unit 322 will be described with reference to FIG. 42. The coefficient read/write unit 331 of the coefficient changing unit 322 is controlled by the change processing unit 332; it reads the coefficient set stored in the coefficient memory 321, outputs it to the change processing unit 332, and overwrites the coefficient memory 321 with the prediction coefficients whose values have been changed by the change processing unit 332. The change processing unit 332 changes, based on the operation signal, the prediction coefficients read from the coefficient memory 321 by the coefficient read/write unit 331.

  Here, with reference to FIG. 43, the learning device 341 that generates, by learning processing, the coefficient set stored in the coefficient memory 321 will be described. The teacher data generation unit 351 of the learning device 341 is the same as the teacher data generation unit 231 of the learning unit 221 in FIG. 30; it generates teacher data from image signals as learning data prepared in advance and outputs it to the normal equation generation unit 354. The student data generation unit 352 is likewise similar to the student data generation unit 232 of FIG. 30; it generates student data from the learning data and outputs it to the prediction tap extraction unit 353.

  The prediction tap extraction unit 353 is the same as the prediction tap extraction unit 233 in FIG. 30; taking the teacher data to be processed as the target pixel, it extracts from the student data, for the target pixel, a prediction tap having the same tap structure as that generated by the tap extraction unit 251 (FIG. 31) constituting the mapping processing unit 222 of FIG. 41, and outputs it to the normal equation generation unit 354.

The normal equation generation unit 354 generates the normal equation of equation (46) from the teacher data y as the target pixel input from the teacher data generation unit 351 and the prediction taps x_1, x_2, ..., x_M. When the normal equation generation unit 354 has obtained the normal equation of equation (46) using all the teacher data as the target pixel, it outputs the normal equation to the coefficient determination unit 355. The coefficient determination unit 355 solves the input normal equation (equation (46) described above) by, for example, the Cholesky method and obtains the coefficient set.

  Next, coefficient determination processing (learning processing) by the learning device 341 in FIG. 43 will be described with reference to the flowchart in FIG. 44. In step S271, the teacher data generation unit 351 generates teacher data from the learning data and outputs it to the normal equation generation unit 354, while the student data generation unit 352 generates student data from the learning data and outputs it to the prediction tap extraction unit 353; the process then proceeds to step S272.

  In step S272, the prediction tap extraction unit 353 sequentially takes each teacher data as the target pixel, extracts a prediction tap from the student data for each target pixel, outputs it to the normal equation generation unit 354, and the process proceeds to step S273.

  In step S273, the normal equation generation unit 354 computes, using each teacher data and the corresponding set of prediction taps, the summations (Σ) that are the components of the matrix on the left side of equation (46) and the summations (Σ) that are the components of the vector on its right side, thereby generating the normal equation, and outputs it to the coefficient determination unit 355.

  In step S274, the coefficient determination unit 355 solves the normal equation input from the normal equation generation unit 354 to obtain the coefficient set by the so-called least square error method, and stores it in the coefficient memory 321 in step S275.

  Through the above processing, the coefficient set serving as the basis (the coefficient set as initial values) is stored in the coefficient memory 321. In the above example, the case where the coefficient set is obtained by the least square error method has been described; however, a coefficient set obtained by another method may also be used, and it may be a coefficient set obtained by the above-described least-Nth power error method.

  Next, the changing of the coefficient set by the change processing unit 332 of the coefficient changing unit 322 in FIG. 41 will be described. The coefficient set is stored in advance in the coefficient memory 321 by the processing of the flowchart of FIG. 44 described above, and the coefficient changing unit 322 changes each prediction coefficient of this preset coefficient set based on the operation signal input from the operation unit 202.

  For example, as shown in FIG. 45, when the prediction taps extracted from the student data are a total of 49 taps of 7 taps × 7 taps (7 × 7 pixels, horizontal × vertical), there are the same number of prediction coefficients corresponding to the prediction taps. That is, in this case, the coefficient set stored in the coefficient memory 321 consists of 49 prediction coefficients. Suppose now that the distribution obtained by taking, on the horizontal axis, the position (tap position) of each prediction tap (the number assigned to each prediction tap) and, on the vertical axis, the coefficient value of the prediction coefficient multiplied by the prediction tap at each tap position is as shown in FIG. 46. If the user were to change all the coefficient values of the coefficient set, it would be necessary to manipulate the values of as many as 49 coefficients. Moreover, for the gain of the input signal and of the output signal obtained by processing the input signal with the prediction coefficients to be the same, the values of the coefficients must be normalized (each coefficient divided by the sum of all the coefficient values) so that the sum of the coefficient values is 1; it is also difficult to manipulate the individual coefficients so that the sum remains 1.

  That is, for example, when considering raising only the coefficient value of the coefficient corresponding to the prediction tap indicated by the arrow in FIG. 47, the distribution of tap positions and coefficient values becomes as shown in FIG. 48. Note that the tap position t in FIG. 48 is the tap position designated by the arrow in FIG. 47.

  In this way, when only the coefficient value of the coefficient corresponding to a specific prediction tap is raised (increased), the coefficient values of the coefficients at the other tap positions must be decreased so that the sum remains 1, and this operation is difficult. Furthermore, when the coefficient values of coefficients corresponding to a larger number of prediction taps are changed, it is likewise difficult to keep the sum at 1.

  Therefore, when the coefficient value of the coefficient corresponding to the single tap position indicated by the arrow in FIG. 49 is changed by an operation signal by more than a predetermined threshold S11 (when the amount of change exceeds the threshold S11), the change processing unit 332 changes the coefficient values of the other coefficients from the distribution shown in FIG. 46 to the distribution shown in FIG. 50. That is, the change processing unit 332 changes the coefficient value of the coefficient corresponding to each tap position so that the distribution of coefficient values changes like a spring model according to the distance from the tap position corresponding to the coefficient whose value was changed. That is, when the coefficient set distribution obtained by learning is as shown in FIG. 46 and the operation unit 202 is operated so as to raise the value corresponding to the tap position t, the change processing unit 332, as shown in FIG. 50, changes the other values so that coefficient values at positions closer to the tap position t are raised more, the closer they are, while coefficient values of the coefficients corresponding to taps at positions far from the tap position t are lowered more, the farther away they are, with the sum of the coefficient values kept at 1. Here, a model whose distribution changes like a spring, as shown in FIG. 50, is hereinafter referred to as the spring model. According to the spring model, when the operation unit 202 is operated so as to lower the coefficient at a certain tap position t, the coefficient values at positions close to the tap position t are lowered according to the proximity of the position, and conversely, the coefficient values at positions far from the tap position t are raised according to the distance from the position.
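
  The spring model can be sketched as follows; the split of the taps into a near and a far group at the median distance and the linear distance profile are hypothetical choices, with only the constraints of the description (near taps follow the manipulated coefficient, far taps move oppositely, the sum stays 1) taken from the text.

import numpy as np

def spring_model(coeffs, positions, t, new_value):
    """Spring model sketch: taps near position t move in the same direction
    as the manipulated coefficient (more so the nearer they are), taps far
    from t move in the opposite direction (more so the farther they are),
    and the total of all coefficient values stays 1."""
    c = np.asarray(coeffs, dtype=np.float64).copy()
    positions = np.asarray(positions, dtype=np.float64)
    delta = new_value - c[t]
    c[t] = new_value
    others = np.array([i for i in range(len(c)) if i != t])
    dist = np.linalg.norm(positions[others] - positions[t], axis=1)
    near = dist <= np.median(dist)
    # near taps follow the manipulated coefficient, weighted by proximity
    follow = np.zeros(len(others))
    follow[near] = delta * (1.0 - dist[near] / dist[near].max())
    # far taps absorb the rest in the opposite direction, by distance,
    # so that the sum of all coefficient values remains 1
    absorb = -(delta + follow.sum()) * dist[~near] / dist[~near].sum()
    c[others[near]] += follow[near]
    c[others[~near]] += absorb
    return c

  For a 7 × 7 tap set, positions can be taken as the 49 grid coordinates; starting from a learned distribution as in FIG. 46, raising the coefficient at tap position t then yields a redistributed distribution of the form of FIG. 50.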

  Further, when the amount of change of the coefficient value is smaller than the predetermined threshold S11, the change processing unit 332, as shown in FIG. 51, changes in the same direction as the coefficient at the tap position t those coefficient values that take extreme values of the same polarity as the coefficient at the tap position t, and changes in the opposite direction those coefficient values that take extreme values of a polarity different from the coefficient at the tap position t, according to the amount of change of the coefficient at the tap position t (extrema in the same direction as the extremum of the manipulated coefficient are shifted in the same direction as the manipulated coefficient, and extrema in a direction different from the extremum of the manipulated coefficient are shifted in the opposite direction), so that the distribution keeps its balance as a whole and the sum of the coefficient values remains 1. Hereinafter, a model in which the coefficient values are changed while keeping the overall balance in equilibrium, as shown in FIG. 51, is referred to as the equilibrium model. Since the equilibrium model changes the coefficients in this way, it functions approximately (equivalently) as an HPF (High Pass Filter) or LPF (Low Pass Filter).

  In the above description, the case where a positive coefficient value is raised in the equilibrium model was described. When a positive coefficient value is lowered, that is, changed in the negative direction, the positive coefficient values are changed in the negative direction and the negative coefficient values in the positive direction. Furthermore, when a negative coefficient value is raised, the positive coefficient values are changed in the negative direction and the negative coefficient values in the positive direction; when a negative coefficient value is lowered, the positive coefficient values are changed in the positive direction and the negative coefficient values in the negative direction. In each case of the equilibrium model, the coefficient values are changed in the direction that keeps the balance in equilibrium as a whole.

  As described above, when the amount of change of the manipulated coefficient value is larger than the threshold S11, the change processing unit 332 changes the coefficient values corresponding to the other taps by the spring model shown in FIG. 50, and when it is smaller than the threshold S11, by the equilibrium model shown in FIG. 51. This is because, when the amount of change of one coefficient is large, its influence on the balance of the coefficient values is large, so a change that maintains the overall balance would be unnatural; whereas when the amount of change is small, the change in the coefficient has little influence on the overall balance, so the balance is maintained as a whole.

  It should be noted that the model for changing the coefficients other than the coefficient whose value was changed by operating the operation unit 202 is not limited to these; any model may be used as long as it changes the coefficients so that the sum of the coefficient values becomes 1 as a whole. Further, in the case described above, the model for changing the other coefficients is switched according to the amount of change of the coefficient changed by operating the operation unit 202, but the model for changing the other coefficients may also be fixed.

  Next, the image optimization processing of the optimization apparatus 301 in FIG. 41 will be described with reference to the flowchart in FIG. 52. This image optimization process consists of a coefficient changing process and a mapping process; since the mapping process is the same as the mapping process described with reference to FIGS. 35 and 38, only the coefficient changing process will be described here.

  In step S291, the change processing unit 332 (FIG. 42) of the coefficient changing unit 322 determines whether an operation signal for operating a coefficient value has been input from the operation unit 202. That is, when the user views the image displayed on the display unit 203 and considers it to suit his or her preference, the user lets the mapping process continue with the coefficient set stored in the coefficient memory 321 (FIG. 41); when the user determines that it does not suit his or her preference, the user performs an operation to change the coefficient set stored in the coefficient memory 321 and used for the mapping process.

  When it is determined in step S291 that an operation signal for operating a coefficient has been input, that is, when the operation unit 202 has been operated so as to change the coefficient value of one of the coefficients stored in the coefficient memory 321, the process proceeds to step S292.

  In step S292, the change processing unit 332 controls the coefficient read/write unit 331 to read the coefficient set stored in the coefficient memory 321, and the process proceeds to step S293. In step S293, the change processing unit 332 determines whether the change between the coefficient value input as the operation signal and the corresponding value in the stored coefficient set is equal to or greater than the predetermined threshold S11. If it is determined in step S293 that the change is equal to or greater than the threshold S11, the process proceeds to step S294.

  In step S294, the change processing unit 332 changes the value of each coefficient included in the coefficient set using the spring model as shown in FIG. 50, and the process proceeds to step S295.

  On the other hand, if it is determined in step S293 that the change between the value input as the operation signal and the value of the coefficient set stored in the coefficient memory 321 is not greater than or equal to the threshold value S11, the process proceeds to step S296.

  In step S296, the change processing unit 332 changes the value of each coefficient included in the coefficient set using the equilibrium model as shown in FIG. 51, and the process proceeds to step S295.

  In step S295, the change processing unit 332 controls the coefficient read/write unit 331 to overwrite the coefficient memory 321 with the changed coefficient set, the process returns to step S291, and the subsequent processing is repeated.

  If it is determined in step S291 that no coefficient value has been operated, that is, if the user has judged that the image displayed on the display unit 203 is already to the user's liking, the process also returns to step S291, and thereafter the same processing is repeated.

  Through the coefficient changing process described above, the user can change the coefficient set used for the mapping process so that the process optimal for the user is executed. Note that changing the value of each coefficient of the coefficient set amounts to changing the “processing contents” of the mapping process performed by the mapping processing unit 311.

  In the coefficient changing process of FIG. 52, when the magnitude of the coefficient change is equal to or larger than the predetermined threshold S11, all the coefficient values of the coefficient set are changed by the spring model according to the value of the operated coefficient, and when it is smaller than the threshold S11, they are changed by the equilibrium model; that is, the algorithm by which the coefficient set is changed itself switches. Therefore, in the processing unit 311 of the optimization apparatus 301 in FIG. 41, not only the “processing contents” but also the “processing structure” is changed in accordance with the user's operation, and it can be said that signal processing optimal for the user is thereby performed.

  Further, as described above, when the coefficient sets stored in the coefficient memory 321 are obtained by the least-N-power error method, coefficient sets corresponding to a plurality of exponents N may be stored in the coefficient memory 321 in advance, and the coefficient changing unit 322 may switch to the coefficient set corresponding to the exponent N designated by the operation signal from the operation unit 202 based on the user's operation. In this case, the coefficient set stored in the coefficient memory 321 is replaced with one generated by the least-N-power error method for the exponent N input from the operation unit 202, that is, with a coefficient set generated by a different coefficient-set generation algorithm, so it can be said that the “processing structure” is changed.
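
  As an illustration of the switching just described, coefficient sets precomputed for several exponents N could be held and swapped in response to the operation signal; the following sketch uses hypothetical placeholder values, and the dictionary layout is an assumption.

      # Assumed: coefficient sets precomputed by the least-N-power error
      # method for several exponents N, keyed by N (values are placeholders).
      coefficient_sets = {
          1: [0.10, 0.80, 0.10],   # from least-absolute-error learning (N = 1)
          2: [0.20, 0.60, 0.20],   # from least-squares learning (N = 2)
          4: [0.25, 0.50, 0.25],   # from least-fourth-power learning (N = 4)
      }

      def on_exponent_selected(n, coefficient_memory):
          # Switching N swaps in a set produced by a different generation
          # algorithm, i.e. a change of the "processing structure".
          coefficient_memory[:] = coefficient_sets[n]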

  Next, an embodiment in which an internal information generation unit 371 is provided in the optimization apparatus 301 of FIG. 41 will be described with reference to FIG. 53. In FIG. 53, except for the provision of the internal information generation unit 371, the configuration is the same as that of the optimization apparatus 301 shown in FIG. 41.

  The internal information generation unit 371 reads out, as internal information of the processing unit 311, for example, the coefficient set stored in the coefficient memory 321, converts it into an image signal that can be displayed on the display unit 203, outputs it to the display unit 203, and causes it to be displayed.

  Next, the image optimization process of the optimization apparatus 301 in FIG. 53 will be described with reference to the flowchart in FIG. 54. Like the image optimization process performed by the optimization apparatus 301 of FIG. 41, this image optimization process also consists of a coefficient changing process and a mapping process; since the mapping process is the same as the mapping process described with reference to FIGS. 35 and 38, only the coefficient changing process will be described here.

  In the coefficient changing process, the same processes as in steps S291 to S296 in FIG. 52 are performed in steps S311 to S316, respectively.

  In step S315, as in step S295 of FIG. 52, the changed coefficient set is stored in the coefficient memory 321, after which the process proceeds to step S317, where the internal information generation unit 371 reads each coefficient value of the coefficient set stored in the coefficient memory 321, converts them into an image signal that can be displayed on the display unit 203, and outputs them to the display unit 203 for display. At this time, the display unit 203 can display each coefficient value of the coefficient set in a format such as the three-dimensional distribution diagram shown in FIG. 39 or the two-dimensional distribution diagram as shown in FIG.

  After the process of step S317, the process returns to step S311 and the same process is repeated thereafter.

  According to the coefficient changing process of FIG. 54, the values of the coefficient set stored in the coefficient memory 321 are displayed as internal information, so the user can operate the operation unit 202 while viewing the coefficient set until the processing optimal for the user is obtained.

  Note that the product-sum operation unit 252 (FIG. 31) of the mapping processing unit 222 may obtain the output signal by computing a higher-order expression of second or higher order instead of the linear expression of expression (39).
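
  For illustration, a second-order mapping could extend the linear product-sum by adding coefficients for pairwise products of the taps; the function names and tap layout below are assumptions, not the embodiment's definition.

      import numpy as np

      def linear_prediction(taps, w):
          # Expression (39)-style mapping: a linear combination of the tap
          # values with the prediction coefficients.
          return np.dot(w, taps)

      def second_order_prediction(taps, w1, w2):
          # A second-order alternative: also weight all pairwise products of
          # the taps (w2 is indexed over the upper triangle of tap pairs).
          quad = np.outer(taps, taps)[np.triu_indices(len(taps))]
          return np.dot(w1, taps) + np.dot(w2, quad)

  For 25 taps, w2 would then carry 25 x 26 / 2 = 325 additional coefficients.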

  Next, a configuration example of an optimization apparatus 401 that extracts a telop portion from an image signal as an input signal will be described with reference to FIG. 55.

  Based on the operation signal input from the operation unit 402, the feature amount detection unit 411 of the optimization apparatus 401 detects, for example, two designated types of feature amounts for each pixel of the image signal as the input signal, and outputs the detected feature amount information to the processing determination unit 412. The feature amount detection unit 411 also stores the image signal as the input signal in its internal buffer 421 until the telop has been extracted from the input image, and outputs the image signal to the processing unit 413. The operation unit 402 is the same as the operation unit 202 in FIGS. 41 and 53. Note that the feature amount detection unit 411 is not limited to detecting only the two designated types of feature amounts for each pixel; for example, it may detect many types of feature amounts simultaneously and output the two designated types among them, or it may detect and output two or more types of feature amounts simultaneously.

  Based on the feature amounts input from the feature amount detection unit 411, the processing determination unit 412 determines, for example in units of pixels, the processing that the subsequent processing unit 413 is to perform on the image signal, and outputs the determined processing contents to the processing unit 413.

  The processing unit 413 applies, in units of pixels, the processing contents input from the processing determination unit 412 to the image signal as the input signal read from the buffer 421, and outputs the result to the display unit 403 for display.

  Next, the configuration of the feature amount detection unit 411 will be described with reference to FIG. 56. The buffer 421 of the feature amount detection unit 411 temporarily stores the image signal as the input signal and supplies it to the processing unit 413. The feature amount extraction unit 422 extracts, from the image signal as the input signal, the two types of feature amounts selected by the feature amount selection unit 423, and outputs them to the processing determination unit 412. The feature amount selection unit 423 supplies the feature amount extraction unit 422 with information designating the feature amounts to be extracted from the input signal, based on the operation signal input from the operation unit 402. Selectable feature amounts include, for example, the luminance value of each pixel of the image signal, the Laplacian, the Sobel, the inter-frame difference, the inter-field difference, the background difference, and values obtained from each of these feature amounts within a predetermined range (the sum, average, dynamic range, maximum value, minimum value, median value, variance, and so on), but other feature amounts may be used.
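
  A minimal sketch of how a few of these feature amounts might be computed per pixel, assuming NumPy and SciPy's ndimage filters are available (the kernel choice and border handling are assumptions made for illustration):

      import numpy as np
      from scipy.ndimage import convolve, maximum_filter, minimum_filter

      LAPLACIAN_KERNEL = np.array([[ 0, -1,  0],
                                   [-1,  4, -1],
                                   [ 0, -1,  0]], dtype=float)

      def laplacian(luma):
          # Per-pixel Laplacian of the luminance image.
          return convolve(luma.astype(float), LAPLACIAN_KERNEL, mode="nearest")

      def block_dynamic_range(luma, size=17):
          # Dynamic range (max - min) of the pixels in a size x size window
          # centered on each pixel, e.g. the 17 x 17 ranges used below.
          f = luma.astype(float)
          return maximum_filter(f, size) - minimum_filter(f, size)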

  Next, the configuration of the processing determination unit 412 will be described with reference to FIG. 57. The feature amount recognition unit 431 of the processing determination unit 412 recognizes the types of the plurality of feature amounts input from the feature amount detection unit 411, and outputs the feature amounts themselves, together with information indicating the recognized feature amount types, to the processing content determination unit 432. Based on the information indicating the types of feature amounts input from the feature amount recognition unit 431 and the feature amounts themselves, the processing content determination unit 432 determines the processing contents set in advance for each feature amount and stored in the processing content database 433, and outputs the determined processing contents to the processing unit 413.

  Next, the configuration of the processing unit 413 will be described with reference to FIG. 58. The processing content recognition unit 441 of the processing unit 413 recognizes the processing contents input from the processing determination unit 412 and instructs the processing execution unit 442 to execute the recognized processing. Based on the per-pixel instructions from the processing content recognition unit 441, the processing execution unit 442 applies the designated processing to the input signal input via the buffer 421, converts the result into an image signal that can be displayed on the display unit 403, and outputs it to the display unit 403 for display.

  Since the optimization apparatus 401 in FIG. 55 extracts a telop from an image signal, the processing content is, for each pixel, whether or not to extract it as part of the telop (whether or not to display it). However, other processing may be applied to the image signal; for example, the processing may be such that the input signal is output as-is for pixels recognized as the telop portion and is not output for pixels recognized as other than the telop portion. In addition, to simplify the description, the image input as the input signal is assumed here to be a one-frame still image, although the optimization apparatus 401 in FIG. 55 can also be applied to moving images.

  Next, the telop extraction optimization process by the optimization apparatus 401 of FIG. 55 will be described with reference to the flowchart of FIG. 59.

  In step S331, the feature amount extraction unit 422 of the feature amount detection unit 411 determines whether two types of feature amounts have been selected by the feature amount selection unit 423, and the process is repeated until they are selected. That is, the process of step S331 is repeated until information indicating the feature amounts selected by the feature amount selection unit 423, based on the operation signal corresponding to the types of feature amounts the user has input by operating the operation unit 402, is input to the feature amount extraction unit 422. When it is determined that information selecting feature amounts has been input from the feature amount selection unit 423, that is, when it is determined that the user has operated the operation unit 402 and selected two types of feature amounts, the process proceeds to step S332.

  In step S332, the feature amount extraction unit 422 extracts the two selected types of feature amounts for each pixel from the image signal as the input signal, and outputs them to the processing determination unit 412. At this time, the buffer 421 stores the image signal as the input signal.

  In step S333, the processing determination unit 412 determines the processing content for each pixel based on the two types of input feature amounts and outputs the determined processing contents to the processing unit 413. More specifically, the feature amount recognition unit 431 identifies the two types of input feature amounts and outputs the identified feature amount types and the feature amounts themselves to the processing content determination unit 432, which then determines the processing content from the two types of feature amounts input for each pixel. More specifically, the processing content database 433 stores a table called an LUT (Look Up Table) that associates, for every combination (feature amount A, feature amount B) of values of two arbitrary types of feature amounts A and B, the processing content for a pixel having those feature amounts (here, information indicating whether or not the pixel is a telop). The processing content determination unit 432 refers to the LUT based on the combination (feature amount A, feature amount B) of the target pixel currently being processed, determines the corresponding processing content, that is, whether or not to process the pixel as a telop, and outputs it to the processing unit 413. The LUT is generated, for example, by extracting a plurality of feature amounts in advance from an image containing only a telop and associating the resulting combinations with information indicating a telop. Details of the LUT will be described later.
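
  A minimal sketch of such an LUT lookup; the quantization of each feature axis into a fixed number of bins is an assumption made for illustration:

      import numpy as np

      N_BINS = 64                                     # assumed quantization per axis
      lut = np.zeros((N_BINS, N_BINS), dtype=bool)    # True = "process as telop"

      def to_bin(value, lo, hi):
          # Quantize a raw feature value onto an LUT bin index.
          idx = int((value - lo) * (N_BINS - 1) / (hi - lo))
          return min(max(idx, 0), N_BINS - 1)

      def decide_processing(feat_a, feat_b,
                            range_a=(0.0, 255.0), range_b=(0.0, 255.0)):
          # Determine the processing content for one pixel from its
          # (feature amount A, feature amount B) combination.
          a = to_bin(feat_a, *range_a)
          b = to_bin(feat_b, *range_b)
          return "telop" if lut[a, b] else "non-telop"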

  In step S334, the processing unit 413 processes the image signal as the input signal input via the buffer 421 according to the processing contents input from the processing determination unit 412, converts it into an image signal that can be displayed on the display unit 403, and outputs it to the display unit 403 for display. More specifically, the processing content recognition unit 441 of the processing unit 413 recognizes the processing contents input from the processing determination unit 412 and instructs the processing execution unit 442 to execute the processing determined for each corresponding pixel. The processing execution unit 442 reads the image signal as the input signal stored in the buffer 421, executes the processing corresponding to each pixel, converts the result into an image signal that can be displayed on the display unit 403, and outputs it to the display unit 403 for display.

  In step S335, the feature amount detection unit 411 determines whether the telop is considered to have been extracted. That is, when the user looks at the image displayed on the display unit 403 and does not judge that the telop has been extracted, the user operates the operation unit 402 so that the telop extraction process is performed again with a different combination of feature amounts. When the operation signal from the operation unit 402 corresponding to this operation is input, the process returns to step S331 and the subsequent processing is repeated.

  On the other hand, when the user subjectively judges that the telop has been extracted, the user operates the operation unit 402 to input an operation signal indicating the end of the process to the feature amount detection unit 411, whereupon the process ends.

  That is, by the above process, steps S331 to S335 are repeated until the user, looking at the image displayed on the display unit 403, can judge that the telop has been extracted, so a combination of feature amounts optimal for the user can be set and the telop can be extracted from the image signal as the input signal. In the above description, two types of feature amounts are used to determine the processing contents, but the processing contents may be determined based on a different number of types of feature amounts. Further, in the process of step S331, combinations of a plurality of feature amounts may be switched sequentially in a predetermined order by operation signals corresponding to predetermined operations by the user (for example, button operations designating up or down), so that the user can switch and input the feature amounts without being particularly aware of their types.

  In the above processing, the type of feature amount detected by the feature amount detection unit 411 is changed in accordance with the user's operation of the operation unit 402 so that the processing unit 413 extracts the telop. Since a change in the type of feature amount detected by the feature amount detection unit 411 means a change in the algorithm by which the processing determination unit 412 determines the processing contents, it can be said that the “processing structure” is also changed in the feature amount detection unit 411.

  Further, as described above, the feature amount detection unit 411 can detect various feature amounts, and among these there are some, such as the Laplacian, whose detection requires parameters such as filter coefficients to be set. These parameters for feature amount detection can be changed according to the operation of the operation unit 402; with such a parameter change, the type of feature amount detected by the feature amount detection unit 411 does not itself change, but the detected feature value does. Therefore, the change of a parameter for feature amount detection can be said to be a change of the “processing contents” of the feature amount detection unit 411.

  Next, the configuration of an optimization apparatus 501 in which an internal information generation unit 511 is added to the optimization apparatus 401 of FIG. 55 will be described with reference to FIG. 60. The optimization apparatus 501 in FIG. 60 is basically the same as the optimization apparatus 401 in FIG. 55 except for the internal information generation unit 511.

  The internal information generation unit 511 of the optimization apparatus 501 extracts, as internal information, for example, the feature amount selection information output from the feature amount selection unit 423 of the feature amount detection unit 411, and displays the types of the currently selected feature amounts on the display unit 403.

  Here, the telop extraction optimization process by the optimization apparatus 501 will be described with reference to the flowchart of FIG. 61.

  This process is basically the same as the telop extraction optimization process performed by the optimization apparatus 401 in FIG. 55, described with reference to the flowchart in FIG. 59, and differs in that a process for displaying information indicating the types of the selected feature amounts has been added.

  That is, in step S341, the feature amount extraction unit 422 of the feature amount detection unit 411 determines whether two types of feature amounts have been selected by the feature amount selection unit 423, and the process is repeated until they are selected. When it is determined that information selecting feature amounts has been input from the feature amount selection unit 423, that is, when it is determined that the user has operated the operation unit 402 and selected two types of feature amounts, the process proceeds to step S342.

  In step S342, the internal information generation unit 511 extracts information indicating the types of the two selected feature amounts from the feature amount selection unit 423, and causes the display unit 403 to display the names of the two selected types of feature amounts.

  Thereafter, in steps S343 to S346, the same processing as in steps S332 to S335 of FIG. 59 is performed.

  According to the processing in FIG. 61, the types of the currently selected feature amounts, which are internal information related to the processing of the feature amount detection unit 411, are displayed (presented), so the user can, while confirming the types of feature amounts currently selected, set the combination of feature amounts optimal for accurately extracting the telop from the image signal as the input signal.

  Note that the internal information generation unit 511 can also generate, as internal information, the distribution of the values of the two types of feature amounts detected for each pixel by the feature amount detection unit 411, and display it as shown in FIG.

  Further, as described above, when a parameter for feature amount detection is changed in accordance with an operation of the operation unit 402, the internal information generation unit 511 can also display (present) that parameter on the display unit 403 as internal information.

  Next, with reference to FIG. 62, a configuration example of an optimization apparatus 601 will be described in which an internal information generation unit 611 that generates internal information from the processing determination unit 412 is provided in place of the internal information generation unit 511 of FIG. 60.

  The optimization apparatus 601 in FIG. 62 has the same configuration as the optimization apparatus 501 in FIG. 60 except that the internal information generation unit 611 is provided instead of the internal information generation unit 511.

  The internal information generation unit 611 generates, as internal information, for example, a distribution diagram (for example, FIG. 65 and FIG. 67) of the pixels extracted as a telop and the pixels not extracted as a telop, with the two types of actually detected feature amounts as the axes, based on the processing contents determined by the processing content determination unit 432 of the processing determination unit 412, and displays it on the display unit 403.

  Next, the telop extraction optimization process by the optimization apparatus 601 in FIG. 62 will be described with reference to the flowchart in FIG. 63.

  Note that this process is basically the same as the telop extraction optimization process performed by the optimization apparatus 501 in FIG. 60, described with reference to the flowchart in FIG. 61, and differs in that a process is added for displaying, with the two types of feature amounts as the axes, the distribution of whether or not each pixel has been extracted as a telop.

  That is, in step S351, the feature amount extraction unit 422 of the feature amount detection unit 411 determines whether two types of feature amounts have been selected by the feature amount selection unit 423, and the process is repeated until they are selected. When it is determined that information selecting feature amounts has been input from the feature amount selection unit 423, that is, when it is determined that the user has operated the operation unit 402 and selected two types of feature amounts, the process proceeds to step S352.

  In step S352, the feature amount extraction unit 422 extracts the two selected types of feature amounts for each pixel from the image signal as the input signal and outputs them to the processing determination unit 412. At this time, the buffer 421 stores the image signal as the input signal.

  In step S353, the processing determination unit 412 determines the processing content for each pixel based on the two types of input feature amounts, and outputs the processing content to the processing unit 413.

  In step S354, the processing unit 413 processes the image signal as the input signal read from the buffer 421 according to the processing contents input from the processing determination unit 412, converts it into an image signal that can be displayed on the display unit 403, and outputs it to the display unit 403 for display.

  In step S355, the internal information generation unit 611 generates, as internal information, a distribution diagram in which the processing contents determined by the processing content determination unit 432 of the processing determination unit 412 are plotted with the two types of feature amounts as the axes, and displays it on the display unit 403.

  In step S356, the feature amount detection unit 411 determines whether the telop is considered to have been extracted. When an operation signal from the operation unit 402 corresponding to the user's operation is input in step S356 and it is determined that the telop has not been extracted, the process returns to step S351 and the subsequent processing is repeated.

  On the other hand, when the user operates the operation unit 402 in step S356 to input an operation signal indicating the end of the process to the feature amount detection unit 411, the process ends.

  Suppose, for example, that an image signal as an input signal as shown in FIG. 64 is input. In FIG. 64, “Title ABC” is displayed as a telop at the center of the figure, over a background image (here, the portion that is not the telop).

  In step S355, the internal information generation unit 611 displays the distribution indicating whether or not each pixel has been extracted as a telop, with the two types of feature amounts detected from an image signal such as that of FIG. 64 as the axes, as a two-dimensional distribution diagram, for example, as shown in FIG. 65.

  In the example of FIG. 65, the two types selected as feature amounts are the Laplacian and the inter-frame difference. In this example, the pixels extracted as a telop and the pixels not extracted as a telop show no boundary on the distribution (the distribution of telop pixels and non-telop pixels is not separated). With such a distribution, the telop is often not extracted from the background image; for example, as shown in FIG. 66, boundaries 621 and 622 arise not at the telop portion but around it.

  In such a case, the user judges that the telop has not been extracted, and as a result the processing of steps S351 to S356 is repeated. Suppose that, by repeating this processing, a distribution diagram as shown in FIG. 67 is obtained when the selected feature amounts are the Laplacian sum (the sum of the Laplacians of the pixels in a 17-pixel by 17-pixel range centered on the target pixel) and the luminance DR (the dynamic range of the luminance values of the pixels in a 17-pixel by 17-pixel range centered on the target pixel). In FIG. 67, the distribution of the pixels extracted as a telop and the distribution of the pixels not extracted as a telop are visually separated; that is, when the combination of the Laplacian sum and the luminance DR is selected as the feature amounts, the telop and the other portions of the input image are distributed so as to be separable. When such a distribution is obtained, the telop portion can be extracted with high accuracy, as shown in FIG. 68, by threshold processing on the Laplacian sum and the luminance DR detected from each pixel.
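
  A minimal sketch of that threshold processing on the two feature amounts; the comparison directions and the use of SciPy window filters are assumptions made for illustration:

      import numpy as np
      from scipy.ndimage import uniform_filter, maximum_filter, minimum_filter

      def telop_mask(luma, lap, th_lap_sum, th_luma_dr, size=17):
          # Laplacian sum over a size x size window centered on each pixel
          # (uniform_filter returns the mean, so multiply by the window area).
          lap_sum = uniform_filter(lap.astype(float), size) * size * size
          # Luminance dynamic range over the same window.
          luma_dr = (maximum_filter(luma.astype(float), size)
                     - minimum_filter(luma.astype(float), size))
          # Treat a pixel as telop when both feature amounts exceed their
          # thresholds (an assumed direction; a real setting may differ).
          return (lap_sum > th_lap_sum) & (luma_dr > th_luma_dr)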

  As described above, the user repeats the processing of steps S351 to S356 until, looking at the image displayed on the display unit 403 together with the two-dimensional distribution with the two selected feature amounts as axes, the user can judge that the telop has been extracted. Since the distribution of the telop pixels and the background pixels as seen from the feature amounts is displayed as internal information related to the processing of the processing determination unit 412, the user can operate the operation unit 402 while grasping this distribution so that the telop is accurately extracted from the image signal as the input signal, and can thereby set the combination of feature amounts optimal for the user.

  Next, the configuration of an optimization apparatus 701 provided with a processing determination unit 711 instead of the processing determination unit 412 of the optimization apparatus 601 illustrated in FIG. 62 will be described with reference to FIG. 69. The optimization apparatus 701 has the same configuration as the optimization apparatus 601 shown in FIG. 62 except that the processing determination unit 711 is provided instead of the processing determination unit 412.

  The processing determination unit 711 changes the contents of the LUT in the processing content database 433 based on the operation signal from the operation unit 702 (which is the same as the operation unit 402), determines, in units of pixels, the processing that the subsequent processing unit 413 is to perform on the image signal based on the feature amounts input from the feature amount detection unit 411, and outputs the determined processing contents to the processing unit 413.

  Next, the processing determination unit 711 will be described with reference to FIG. 70. Its basic configuration is the same as that of the processing determination unit 412 described above, except that a processing content determination unit 721 is provided instead of the processing content determination unit 432. The processing content determination unit 721 changes the LUT stored in the processing content database 433, in which the processing content is determined for each combination of the two types of feature amounts, based on the operation signal input from the operation unit 702. More specifically, with the image signal as the input signal displayed as-is, the LUT is set so that pixels having the feature amounts of the pixels designated as the telop via the operation unit 702 are treated as a telop, and the other areas are processed as other than the telop.

  After the LUT is changed, the processing content determination unit 721 determines each feature amount stored in the processing content database 433 based on the information identifying the feature amount input from the feature amount recognition unit 431 and the feature amount itself. The processing content set in advance is determined every time, and the determined processing content is output to the processing unit 413.

  Next, the telop extraction optimization process by the optimization apparatus 701 in FIG. 69 will be described with reference to the flowchart in FIG. 71.

  In step S361, the image signal as the input signal is displayed on the display unit 403 as-is. More specifically, the buffer 421 of the feature amount detection unit 411 receives and stores the image signal as the input signal, and the processing unit 413 reads the image signal stored in the buffer 421 and outputs it, without processing, to the display unit 403 for display.

  In step S362, the processing content determination unit 721 of the processing determination unit 711 determines whether a telop and a background have been designated from the operation unit 702. That is, suppose, for example, that an unprocessed image signal is displayed by the process of step S361 as shown in FIG. At this time, the user operates the pointer 741 via the operation unit 702 (by dragging or clicking) and roughly designates the telop portion by a range 742 or the like, as shown, for example, in FIG. 72. The processing is repeated until, in this way, the telop portion 752 and the background portion 751 outside the range 742 in FIG. 72 are designated; when the telop and the background portion are designated, the corresponding designated pixel positions are stored in a built-in memory (not shown), and the process proceeds to step S363.

  In step S363, the feature amount selection unit 423 of the feature amount detection unit 411 determines whether an operation signal selecting two predetermined types of feature amounts has been input from the operation unit 702, and the process is repeated until the two types of feature amounts are designated; when two types of feature amounts are selected, the process proceeds to step S364.

  In step S364, the feature amount extraction unit 422 extracts the two selected types of feature amounts from the input signal based on the information selecting the feature amount types input to the feature amount selection unit 423, and outputs them to the processing determination unit 711.

  In step S365, the internal information generation unit 611 generates a two-dimensional distribution diagram with the two types of feature amounts as the axes, based on the two types of feature amounts input to the processing determination unit 711 and the pixel position information designated as the telop and the background, and in step S366 displays this two-dimensional distribution diagram on the display unit 403. More specifically, the feature amount recognition unit 431 of the processing determination unit 711 recognizes the types of the feature amounts and outputs information indicating the types and the feature amounts themselves to the processing content determination unit 721; the processing content determination unit 721 outputs, to the internal information generation unit 611, the information indicating the feature amounts and their types together with the information indicating the pixel positions designated as the telop and the background; and the internal information generation unit 611 generates from these, for example, a two-dimensional distribution diagram of feature amounts such as that shown in FIG. 74. The example of FIG. 74 shows a case where the Laplacian and the inter-frame difference are selected as the feature amounts; the circles in the figure indicate pixels designated as the telop, and the crosses indicate pixels designated as the background. In the two-dimensional distribution diagram of FIG. 74, for example, when the Laplacian of a given pixel is detected with a value of X and its inter-frame difference with a value of Y, a circle is displayed at the position (X, Y) on the distribution if the pixel has been designated as a telop, and a cross is displayed at the same position if it has been designated as background.

  In step S367, the processing content determination unit 721 determines whether an operation signal indicating that the telop and the background are judged to be separated has been input. That is, in a two-dimensional distribution such as that of FIG. 74, for example, the distribution of the circles indicating the telop and the distribution of the crosses indicating the background are not completely separated, and in fact the telop cannot be expected to be extracted on the display screen either; as shown in FIG. 73, for example, the background 751 and the telop portion 752 often do not reach a state in which a boundary 753 separates the background and the telop. When the user thus judges from his or her viewpoint that the telop has not been extracted and wishes to change the feature amounts again, the operation unit 702 outputs, in response to the user's operation, an operation signal indicating non-separation to the processing content determination unit 721 of the processing determination unit 711. In this case, it is determined in step S367 that an operation signal indicating that the telop and the background are judged to be separated has not been input, the process returns to step S363, and the subsequent processing is repeated; by this processing, two types of feature amounts are selected again.

  On the other hand, when, by the process of step S366, the circles indicating the telop portion and the crosses indicating the background come to be separated to some extent, as shown in FIG. 75 for example, the user operates the operation unit 702, and an operation signal indicating separation is output to the processing content determination unit 721 of the processing determination unit 711. In this case, it is determined in step S367 that an operation signal indicating that the telop and the background are judged to be separated has been input, and the process proceeds to step S368.

  In step S368, the processing content determination unit 721 determines whether a telop portion has been designated on the two-dimensional distribution. That is, it determines whether an operation signal designating, with the pointer 741 on the displayed distribution, a range 761 or the like in which many of the circles indicating the telop are distributed, as shown in FIG. 75, has been input from the operation unit 702, and the process is repeated until such a range is designated. When it is determined that a range has been designated, the process proceeds to step S369.

  In step S369, the processing content determination unit 721 changes the contents of the LUT, which indicates the presence or absence of telop extraction for each combination of feature amounts, based on the operation signal designating the range 761 of FIG. 75 input from the operation unit 702, determines the processing contents according to the changed LUT, and outputs them to the processing unit 413; the processing unit 413 extracts the telop from the image signal as the input signal input via the buffer 421 according to the input processing contents and displays it on the display unit 403. More specifically, based on the information indicating the range on the two-dimensional distribution such as that of FIG. 75 input from the operation unit 702, the processing content determination unit 721 updates the LUT of the processing content database 433 so that the combinations of the two types of feature amounts corresponding to the pixels distributed in the designated range are extracted as a telop, determines the processing content of each pixel according to the updated LUT, and outputs it to the processing unit 413.
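
  Continuing the LUT sketch given earlier, the update from a user-designated rectangular range on the two-dimensional distribution might look as follows; representing the designated range as a rectangle in bin space is an assumption made for illustration:

      def update_lut_from_range(lut, a_bins, b_bins):
          # a_bins and b_bins are (low, high) bin indices of the rectangle
          # the user designated on the two-dimensional distribution
          # (e.g. the range 761 of FIG. 75).
          lut[:, :] = False                                     # non-telop elsewhere
          lut[a_bins[0]:a_bins[1] + 1, b_bins[0]:b_bins[1] + 1] = True
          return lut

  Narrowing the designated range, as in the transition from the range 761 to the range 781 described below, simply re-runs this update with smaller bounds.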

  In step S370, the processing content determination unit 721 determines whether the telop is judged to have been extracted, that is, whether, from the user's viewpoint, the telop is judged to have been extracted or not. For example, when the output image displayed on the display unit 403 by the process of step S369 still contains boundaries 771 and 772 between the telop portion and the background portion, as shown in FIG. 76, these boundaries are not the telop itself, so the telop cannot be said to have been extracted. When the user thus judges that the telop has not been extracted, the user operates the operation unit 702 to input an operation signal indicating that the telop has not been extracted. On receiving this operation signal, the processing content determination unit 721 of the processing determination unit 711 determines in step S370 that the telop has not been extracted, and the process proceeds to step S371.

  In step S371, it is determined whether the telop range on the two-dimensional distribution is to be re-designated; if it is determined that the telop range on the two-dimensional distribution is to be re-designated, the process returns to step S368, and the subsequent processing is repeated.

  When re-designation of the telop range is thus selected in the process of step S371, a range 781 is set in step S368, as shown in FIG. 77, which narrows down, in comparison with the range 761 of FIG. 75, the portion where the circles indicating the telop are concentrated (a range narrowed to the portion containing many of the circles to be extracted as a telop). That is, in FIG. 75, even though the range 761 on the feature amount distribution was set as the telop portion, the result, as shown in FIG. 76, was that the boundaries 771 and 772 between the telop portion and the background portion, which are not the telop itself, remained; in other words, the telop was not completely extracted.

  Therefore, the user operates the operation unit 702 to set a range 781 narrower than the range 761 as the range on the feature amount distribution (as the range designated as the telop portion). Setting the range in this way narrows down the range on the feature amount distribution that is extracted as a telop, that is, makes the background portion easier to exclude. However, if the range to be extracted as a telop is made too narrow, the telop itself becomes difficult to extract, so the user searches for the optimum telop extraction state by repeating such processing while looking at the extracted telop.

  Then, when the telop has been extracted as shown in FIG. 78, for example, and the user judges in step S370 that the telop has been extracted, the user operates the operation unit 702, an operation signal indicating that the telop is judged to have been extracted is input to the processing content determination unit 721 of the processing determination unit 711, it is determined that the telop has been extracted, and the process ends.

  If it is determined in step S371 that the telop range is not to be re-designated, the process returns to step S363, and the subsequent processing is repeated.

  By such processing, the user first designates, on the image signal as the input signal, a telop and a background portion and selects two types of feature amounts, then looks at how the designated telop and background are distributed two-dimensionally when the selected feature amounts are used as the two axes, and changes (narrows down) the telop range on the two-dimensional distribution or changes the two types of feature amounts to be extracted. By repeating this, while viewing the extracted telop, until it matches the user's preference, telop extraction processing that matches the user's preference can be realized. When it is determined in step S362 that the telop and the background have been designated, it suffices that the designation can generate information serving as a rough indication for selecting the two types of feature amounts or for narrowing down the telop on the two-dimensional distribution, so the designated ranges may be rough.

  In the above processing, the processing applied to each pixel switches between telop processing and background processing depending on the combination of the two types of feature amounts selected by the user, so it can be said that the “processing contents” are changed.

  In the above processing, the types of feature amounts are determined by designating two types of feature amounts on the operation unit 402; however, as shown in FIG. 79, for example, the combination of the two types of feature amounts may be changed simply by sending an up or down instruction as an operation signal using predetermined operation buttons of the operation unit 702. That is, as the initial state, processing is executed with the combination of feature amounts a and b shown in state A; when down is instructed, processing is executed with the combination of feature amounts b and c shown in state B; and when down is instructed again, processing is executed with the combination of feature amounts c and d shown in state C. Conversely, when up is instructed in state C, the process returns to state B, and when up is instructed in state B, it returns to state A. In this way, the user can change the feature amounts one after another without being particularly aware of their types, and can thus efficiently narrow down the combination of feature amounts for telop extraction.
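
  A minimal sketch of such up/down switching through a fixed sequence of feature pairs; the pair list mirrors states A to C of FIG. 79, and clamping at both ends is an assumption made for illustration:

      # Ordered feature pairs corresponding to states A, B, C in FIG. 79.
      STATES = [("a", "b"), ("b", "c"), ("c", "d")]

      class FeaturePairSelector:
          def __init__(self):
              self.index = 0  # start in state A

          def down(self):
              # Move to the next combination (A -> B -> C), clamping at the end.
              self.index = min(self.index + 1, len(STATES) - 1)
              return STATES[self.index]

          def up(self):
              # Move back toward state A.
              self.index = max(self.index - 1, 0)
              return STATES[self.index]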

  Next, with reference to FIG. 80, the configuration of an optimization apparatus 801 will be described in which a feature amount detection unit 811 capable of generating a new feature amount from existing feature amounts is provided in place of the feature amount detection unit 411 of the optimization apparatus 401 of FIG. 55. In FIG. 80, the configuration is the same as that of the optimization apparatus 401 in FIG. 55 except that the feature amount detection unit 811 is provided instead of the feature amount detection unit 411.

  The operation unit 802 is the same as the operation unit 402.

  With reference to FIG. 81, the structure of the feature amount detection unit 811 in FIG. 80 will be described. In the feature amount detection unit 811, the buffer 421 and the feature amount extraction unit 422 are the same as those of the feature amount detection unit 411 illustrated in FIG. 56. The feature amount selection unit 821 controls the feature amount extraction unit 422 based on the operation information designating feature amounts input from the operation unit 802, so as to extract the two designated types among the feature amounts prepared in advance and output them to the processing determination unit 412, or to output feature amounts stored in advance in the feature amount database 823 to the processing determination unit 412. More specifically, the feature amount database 823 stores feature amount information describing each type of feature amount and the method for detecting it; the feature amount extraction unit 422 reads from the feature amount database 823 the feature amount information corresponding to the types of feature amounts selected by the feature amount selection unit 821, and detects the selected feature amounts from the input signal according to the feature amount detection methods recorded in that feature amount information.

The feature amount information prepared in advance includes the luminance value, the Laplacian, the Sobel, the inter-frame difference (for example, the inter-frame difference fs(x, y) of the target pixel (x, y) is given by fs(x, y) = f0(x, y) − f1(x, y), where f0(x, y) denotes the current target pixel and f1(x, y) denotes the pixel one frame earlier at the same spatial position), the inter-field difference, the background difference, the differential value (for example, the differential value fb(x, y) of the target pixel (x, y) is given by fb(x, y) = 4 × f0(x, y) − f0(x−1, y) − f0(x+1, y) − f0(x, y+1) − f0(x, y−1), where f0(x, y) denotes the current target pixel and f0(x−1, y), f0(x+1, y), f0(x, y+1), f0(x, y−1) denote the pixels adjacent to it), and values obtained from each of these feature amounts within a predetermined range (the sum, average, dynamic range, maximum value, minimum value, median value, variance, and so on), but feature amounts other than these may be used.
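
  A minimal sketch of the two formulas just given, applied to whole frames as NumPy arrays; handling the image border by clamping is an assumption not stated above:

      import numpy as np

      def inter_frame_difference(f0, f1):
          # fs(x, y) = f0(x, y) - f1(x, y)
          return f0.astype(float) - f1.astype(float)

      def differential_value(f0):
          # fb(x, y) = 4*f0(x, y) - f0(x-1, y) - f0(x+1, y)
          #                        - f0(x, y+1) - f0(x, y-1)
          f = f0.astype(float)
          pad = np.pad(f, 1, mode="edge")   # clamp at the image border
          return (4 * f
                  - pad[1:-1, :-2] - pad[1:-1, 2:]    # left and right neighbors
                  - pad[:-2, 1:-1] - pad[2:, 1:-1])   # upper and lower neighbors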

  The feature amount processing unit 822 generates a new feature amount from the feature amounts stored in the feature amount database 823 based on the operation signal input by the user. More specifically, the feature amount processing unit 822 generates new feature amount information from the feature amount information stored in the feature amount database 823 based on the operation signal input by the user, and the feature amount extraction unit 422 extracts the new feature amount based on that feature amount information.

  For example, assuming that feature amount information for feature amounts A, B, and C is stored as feature amount types in the feature amount database 823, new feature amount information corresponding to a new feature amount A′ may be obtained by reading the values of the feature amount A of a plurality of pixels existing at predetermined positions relative to the target pixel and taking, for each pixel, the DR (dynamic range: the difference between the minimum value and the maximum value). In the same way, new feature amount information may be obtained by taking the maximum value, median value, minimum value, sum, variance, or the number of pixels whose value is equal to or greater than a threshold (the threshold can also be set), or by taking a linear first-order combination of a plurality of feature amounts. A linear first-order combination of a plurality of feature amounts is, for example, the following: with the Laplacian of a given pixel denoted xa, its Sobel denoted xb, and its inter-frame difference denoted xc, these three types of feature amounts are multiplied by coefficients and summed, and A × xa + B × xb + C × xc is used as a new feature amount of the pixel. Here A, B, and C are coefficients; in the case of telop extraction, for example, they can be obtained, as described with reference to FIGS. 43 and 44, by a learning process that uses as teacher data the pixels in a range in which the telop portion has been roughly designated via the operation unit 802, and as student data a plurality of feature amounts. In FIG. 81, among the feature amounts stored in the feature amount database 823, the feature amounts A to C denote types of feature amounts extracted by the feature amount extraction unit 422, and the feature amounts A′ to C′ denote types of feature amounts processed from the feature amounts A to C by the feature amount processing unit 822 (what is actually stored is the feature amount information, that is, information on the type of each feature amount and its detection method).
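
  A minimal sketch of obtaining the coefficients A, B, and C by a least-squares fit, with the roughly designated telop range as teacher data; the use of plain least squares and 0/1 labels is an assumption made for illustration:

      import numpy as np

      def learn_combination_coefficients(xa, xb, xc, telop_mask):
          # Student data: the three per-pixel feature amounts (Laplacian,
          # Sobel, inter-frame difference) as columns of a matrix.
          X = np.stack([xa.ravel(), xb.ravel(), xc.ravel()], axis=1)
          # Teacher data: 1 for pixels inside the roughly designated telop
          # range, 0 elsewhere.
          y = telop_mask.ravel().astype(float)
          # Solve for (A, B, C) minimizing the squared error.
          coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
          return coeffs

      def combined_feature(xa, xb, xc, coeffs):
          # New feature amount: A*xa + B*xb + C*xc per pixel.
          A, B, C = coeffs
          return A * xa + B * xb + C * xc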

  The feature amount information stored in the feature amount database 823 may be registered at the timing at which the feature amount extraction unit 422 extracts the corresponding feature amount, may be feature amount information stored in advance and then processed by the feature amount processing unit 822, or may be stored in advance by other methods.

  Next, the telop extraction optimization process by the optimization apparatus 801 in FIG. 80 will be described with reference to the flowchart in FIG. 82. In step S381, the feature amount extraction unit 422 of the feature amount detection unit 811 determines whether two types of feature amounts have been selected by the feature amount selection unit 821; if not, the process proceeds to step S386. When it is determined that information selecting feature amounts has been input from the feature amount selection unit 821, that is, when it is determined that the user has operated the operation unit 802 and selected two types of feature amounts, the process proceeds to step S382.

  In step S382, the feature amount extraction unit 422 extracts the two selected types of feature amounts for each pixel from the image signal as the input signal and outputs them to the processing determination unit 412. At this time, the buffer 421 stores the image signal as the input signal.

  In step S383, the processing determination unit 412 determines the processing content for each pixel based on the two types of input feature amounts and outputs the determined processing contents to the processing unit 413.

  In step S384, the processing unit 413 processes the image signal as the input signal input via the buffer 421 according to the processing contents input from the processing determination unit 412, converts it into an image signal that can be displayed on the display unit 403, and outputs it to the display unit 403 for display.

  In step S385, the feature amount detection unit 811 determines whether the telop is considered to have been extracted. That is, when the user looks at the image displayed on the display unit 403 and does not judge that the telop has been extracted, the user operates the operation unit 802 so that the telop extraction process is performed again with a different combination of feature amounts. When the operation signal corresponding to this operation is input from the operation unit 802, the process proceeds to step S386.

  In step S386, the feature amount processing unit 822 of the feature amount detection unit 811 determines whether processing of a feature amount has been instructed. When there is no instruction to process a feature amount, the process returns to step S381. On the other hand, when the feature amount processing unit 822 determines in step S386 that an operation signal instructing processing of a feature amount has been input from the operation unit 802, the process proceeds to step S387.

  In step S387, the feature amount processing unit 822 determines whether a base feature amount has been designated, and the process is repeated until information designating the base feature amount is input. For example, when the operation unit 802 is operated and an operation signal designating the feature amount A is input, it is determined that the feature amount A of the image signal as the input signal has been designated as the base, and the process proceeds to step S388.

  In step S388, the feature amount processing unit 822 determines whether the processing content has been instructed, and the process is repeated until the processing content is instructed. For example, when the operation unit 802 is operated and an operation signal instructing DR is input, it is determined that the processing content has been designated, and the process proceeds to step S389.

  In step S389, the feature amount processing unit 822 processes the designated feature amount with the designated processing content to obtain a new feature amount, stores it in the feature amount database 823, and the process returns to step S381. That is, in this case, the feature amount processing unit 822 reads the feature amount A from the feature amount database 823 and obtains its DR, which is the designated processing content, thereby generating the new feature amount A′, stores it in the feature amount database 823, and the process returns to step S381.

  On the other hand, when the user subjectively judges in step S385 that the telop has been extracted, the user operates the operation unit 802 to input an operation signal indicating the end of the process to the feature amount detection unit 811, at which point the process ends.

  That is, by the above process, steps S381 to S389 are repeated until the user, looking at the image displayed on the display unit 403, can judge that the telop has been extracted, so a combination of feature amounts optimal for the user can be set and the telop can be extracted from the image signal as the input signal. Furthermore, by increasing the types of feature amounts that the user can select, more combinations of feature amounts can be set, which makes it possible to execute processing optimal for the user.

  In the above processing, the processing content is determined according to the two types of feature amounts designated by the user, and the telop is extracted from the image signal as the input signal; in other words, following the user's operation, the “content of processing” is changed so that an output signal desired by the user is obtained. Furthermore, according to the user's operation, the two axes of feature amounts (the two types of feature amounts to be selected) are switched, and new feature amounts are set (the types of feature amounts are increased); since the algorithm for determining the processing content (for example, whether or not to process a pixel as part of the telop) changes with the combination of feature amounts, it can also be said that the “structure of processing” within the “content of processing” is changed.

  In general, telop extraction is achieved by switching the type of feature amount to be extracted through trial and error. In a method using fixed feature amounts, however, the program must be modified every time the assumed feature amounts are changed, so heuristically finding the optimum algorithm for telop extraction requires recreating the system many times, which is quite difficult in practice. In contrast, the optimization device of the present invention can extract new feature amounts in real time and can also present the feature amount distribution, so the trial and error performed by the user becomes easy, and the possibility of finding the feature amounts optimum for telop extraction can be improved.

  Next, the configuration of the optimization device 901, in which a feature amount detection unit 911 and an internal information generation unit 912 are provided in the optimization device 801 of FIG. 80, will be described with reference to FIG. 83. The optimization device 901 of FIG. 83 is basically the same as the optimization device 801, except that the feature amount detection unit 911 is provided in place of the feature amount detection unit 811 and the internal information generation unit 912 is newly provided.

  Next, the configuration of the feature amount detection unit 911 will be described with reference to FIG. 84. The feature amount detection unit 911 has basically the same configuration as the feature amount detection unit 811 in FIG. 81, except that a feature amount selection unit 921 is provided in place of the feature amount selection unit 821 and a feature amount processing unit 922 is provided in place of the feature amount processing unit 822. Their basic functions are the same, except that the feature amount selection unit 921 supplies information on the types of the selected feature amounts to the internal information generation unit 912, and the feature amount processing unit 922 also outputs image information relating to processing content instructions to the display unit 403.

  The internal information generation unit 912 is the same as the internal information generation unit 511 described above, and the operation unit 902 is the same as the operation unit 402.

  Next, telop extraction optimization processing by the optimization device 901 in FIG. 83 will be described with reference to the flowchart in FIG. 85. This process is basically the same as the process described with reference to the flowchart of FIG. 82, except that the internal information generation unit 912 displays the selected feature amounts.

  That is, in step S391, the feature amount extraction unit 422 of the feature amount detection unit 911 determines whether or not two types of feature amounts have been selected by the feature amount selection unit 921; if they have not been selected, the process proceeds to step S397. When, for example, it is determined that information selecting feature amounts has been input from the feature amount selection unit 921, the process proceeds to step S392.

  In step S392, the internal information generation unit 912 extracts information indicating the types of the two selected feature amounts from the feature amount selection unit 921, and displays the names of the two selected types of feature amounts on the display unit 403.

  In step S393, the feature amount extraction unit 422 extracts the two selected types of feature amounts for each pixel from the image signal as an input signal and outputs them to the processing determination unit 412. At this time, the buffer 421 stores the image signal as the input signal.

  In step S394, the processing determination unit 412 determines the processing content for each pixel based on the two types of input feature amounts, and outputs the processing content to the processing unit 413.

  In step S395, the processing unit 413 processes the image signal as an input signal input via the buffer 421 according to the processing content input from the processing determination unit 412, converts it into an image signal that can be displayed on the display unit 403, and outputs it to the display unit 403 for display.

  In step S396, the feature amount detection unit 911 determines whether the telop is considered to have been extracted. That is, when the user looks at the image displayed on the display unit 403 and does not judge that the telop has been extracted, the user operates the operation unit 902 so as to change the combination of feature amounts and try the telop extraction process again. When an operation signal corresponding to this operation is input from the operation unit 902, the process proceeds to step S397.

  In step S397, the feature amount processing unit 922 of the feature amount detection unit 911 determines whether or not the feature amount processing has been instructed. If there is no feature amount processing instruction, the processing returns to step S391. On the other hand, if the feature amount processing unit 922 determines in step S397 that an operation signal instructing processing of the feature amount has been input from the operation unit 902, the processing proceeds to step S398.

  In step S398, the feature amount processing unit 922 displays a processing content instruction screen such as that shown in FIG. 86. In FIG. 86, a basic feature amount display unit 931 is provided on the left side of the drawing, and the feature amounts currently stored in the feature amount database 823 are displayed as basic feature amounts; in this case, the feature amounts A to C and A′ to C′ are displayed. On the right side, a processing content selection column 932 is displayed. In this case, DR, maximum value, minimum value, median value, sum, variance, the number of pixels having a value greater than or equal to a threshold, and linear combination can be selected; a column 932a for setting the threshold is provided for the number of pixels at or above the threshold, and a column 932b for selecting the feature amounts to be combined is provided for the case where linear combination is selected. Further, a scale setting column 933 for setting the scale of each value is displayed. The scale value indicates a region centered on the target pixel; for example, when the DR is detected, it is a value indicating the area of the pixels required, such as 3 pixels × 3 pixels or 5 pixels × 5 pixels.
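
  For reference, the local statistics listed in the processing content selection column 932 can be sketched in code. The following is a minimal Python illustration with NumPy, assuming that DR means the dynamic range (maximum minus minimum) of the region; the function and variable names are illustrative and not taken from the patent.

```python
import numpy as np

def process_feature(feature_map, op, scale=3, threshold=None):
    """Derive a new per-pixel feature by applying `op` over a
    scale x scale region centered on each pixel (the scale set
    in column 933)."""
    h, w = feature_map.shape
    r = scale // 2
    padded = np.pad(feature_map, r, mode="edge")
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            block = padded[y:y + scale, x:x + scale]
            if op == "DR":          # dynamic range of the region (assumed)
                out[y, x] = block.max() - block.min()
            elif op == "median":
                out[y, x] = np.median(block)
            elif op == "sum":
                out[y, x] = block.sum()
            elif op == "variance":
                out[y, x] = block.var()
            elif op == "count_ge":  # pixels at or above the threshold (932a)
                out[y, x] = np.count_nonzero(block >= threshold)
    return out

# Step S389 in this notation: feature A' = DR of feature A over 3 x 3
feature_a = np.random.rand(32, 32)
feature_a_prime = process_feature(feature_a, "DR", scale=3)
```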

  In step S399, the feature amount processing unit 922 determines whether or not a basic feature amount has been specified, and repeats the processing until information specifying the basic feature amount is input. For example, when the operation unit 902 is operated and an operation signal designating the feature amount A is input, it is determined that the feature amount A of the image signal as the input signal has been specified as the basis, and the process proceeds to step S400.

  In step S400, the feature amount processing unit 922 determines whether or not the processing content has been instructed, and repeats the processing until the processing content is instructed. For example, when the operation unit 902 is operated and an operation signal designating DR is input, it is determined that the processing content has been designated, and the process proceeds to step S401.

  In step S401, the feature amount processing unit 922 processes the specified feature amount with the specified processing content and stores it in the feature amount database 823, and the processing returns to step S391.

  On the other hand, when the user subjectively determines in step S396 that the telop has been extracted, the user operates the operation unit 902 to input an operation signal indicating the end of the process to the feature amount detection unit 911, and the process ends.

  That is, by the above-described processing, the user can recognize which feature amounts allow optimum processing while operating and viewing the displayed basic feature amounts and processing contents, and can thereafter immediately specify feature amounts that realize the processing optimum for the user. Also, by inputting, in accordance with the display screen, the information necessary for generating new feature amounts, the types of feature amounts the user can select are increased, so that many combinations of feature amounts can be set efficiently.

  In the above processing, the processing content is determined according to the two types of feature amounts designated by the user, and the telop is extracted from the image signal as the input signal; in other words, following the user's operation, the “content of processing” is changed so that an output signal desired by the user is obtained. Furthermore, according to the user's operation, the two axes of feature amounts (the two types of feature amounts to be selected) are switched, and new feature amounts are set (the types of feature amounts are increased); since the algorithm for determining the processing content (for example, whether or not to process a pixel as part of the telop) changes with the combination of feature amounts, it can also be said that the “structure of processing” within the “content of processing” is changed.

  Next, the configuration of an optimization device 1001 in which an internal information generation unit 1011 is provided in place of the internal information generation unit 912 of the optimization device 901 of FIG. 83 will be described with reference to FIG. 87. In FIG. 87, the configuration is the same as that of the optimization device 901 in FIG. 83, except that the internal information generation unit 1011 is provided in place of the internal information generation unit 912 and the processing determination unit 711 is provided in place of the processing determination unit 412.

  The internal information generation unit 1011 is basically the same as the internal information generation unit 912, but it additionally displays, on the display unit 403, information on the processing content determined for each pixel by the processing content determination unit 721 of the processing determination unit 711 (that is, the function of the internal information generation unit 611 in FIG. 69 is added).

  The operation unit 1002 is the same as the operation unit 402.

  Next, telop extraction optimization processing by the optimization device 1001 in FIG. 87 will be described with reference to the flowchart in FIG. 88.

  In step S411, the buffer 421 of the feature amount detection unit 911 receives and stores the image signal as the input signal, and the processing unit 413 reads the image signal stored in the buffer 421 and outputs it to the display unit 403 as it is, without processing it, for display.

  In step S412, the processing content determination unit 721 of the processing determination unit 711 determines whether a telop and background are instructed from the operation unit 1002, and repeats the processing until the telop and background are instructed. When instructed, the process proceeds to step S413.

  In step S413, the feature amount extraction unit 422 of the feature amount detection unit 911 determines whether or not two types of feature amounts have been selected by the feature amount selection unit 921; if not, the process proceeds to step S421. When, for example, it is determined that information selecting feature amounts has been input from the feature amount selection unit 921, the process proceeds to step S414.

  In step S414, the internal information generation unit 912 extracts information indicating the types of the two selected feature amounts from the feature amount selection unit 921, and displays (presents) the names of the two selected types of feature amounts on the display unit 403.

  In step S415, the feature amount extraction unit 422 extracts the two selected types of feature amounts for each pixel from the image signal as an input signal and outputs them to the processing determination unit 711. At this time, the buffer 421 stores the image signal as the input signal.

  In step S416, the processing determination unit 711 determines the processing content for each pixel based on the two types of input feature amounts and outputs the determined processing content to the processing unit 413.

  In step S417, based on the two types of feature amounts input to the processing determination unit 711 and the information on the pixel positions designated as the telop and the background, the internal information generation unit 1011 generates a two-dimensional distribution chart with the two types of feature amounts as its axes, and in step S418 it displays the two-dimensional distribution chart on the display unit 403. More specifically, the feature amount recognition unit 431 of the processing determination unit 711 recognizes the types of the feature amounts and outputs information indicating the types together with the feature amounts themselves to the processing content determination unit 721; the processing content determination unit 721 outputs to the internal information generation unit 1011 the information indicating the feature amounts and their types together with the information indicating the pixel positions designated as the telop and the background; and the internal information generation unit 1011 generates, based on this information, a two-dimensional distribution map of the feature amounts such as that shown in FIG. 74.
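
  A minimal sketch of such a two-dimensional distribution chart, assuming NumPy and Matplotlib and hypothetical masks recording which pixels the user designated as telop and as background:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data: two per-pixel feature maps and the user's
# designations from step S412 (names are illustrative).
feat_a = np.random.rand(64, 64)
feat_b = np.random.rand(64, 64)
telop_mask = np.zeros((64, 64), dtype=bool); telop_mask[10:20, 10:30] = True
bg_mask = np.zeros((64, 64), dtype=bool); bg_mask[40:60, 5:60] = True

# Steps S417/S418: plot the designated pixels with the two feature
# amounts as axes, so the user can judge how well they separate.
plt.scatter(feat_a[bg_mask], feat_b[bg_mask], s=4, label="background")
plt.scatter(feat_a[telop_mask], feat_b[telop_mask], s=4, label="telop")
plt.xlabel("feature amount A")
plt.ylabel("feature amount B")
plt.legend()
plt.show()
```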

  In step S419, the processing unit 413 processes the image signal as an input signal input via the buffer 421 according to the processing content input from the processing determination unit 711, converts it into an image signal that can be displayed on the display unit 403, and outputs it to the display unit 403 for display.

  In step S420, the feature amount detection unit 911 determines whether the telop is considered to have been extracted. In other words, when the user looks at the image displayed on the display unit 403 and does not judge that the telop has been extracted, the user operates the operation unit 1002 so that the telop extraction process is tried again with a different combination of feature amounts. When an operation signal corresponding to this operation is input from the operation unit 1002, the process proceeds to step S421.

  In step S421, the feature amount processing unit 922 of the feature amount detection unit 911 determines whether or not feature amount processing has been instructed. When there is no feature amount processing instruction, the process returns to step S413. On the other hand, if the feature amount processing unit 922 determines in step S421 that an operation signal instructing processing of the feature amount has been input from the operation unit 1002, the processing proceeds to step S422.

  In step S422, the feature amount processing unit 922 displays a processing content instruction screen (FIG. 86).

  In step S423, the feature amount processing unit 922 determines whether or not a basic feature amount has been specified, and repeats the processing until information specifying the basic feature amount is input. If it is determined that a basic feature amount has been specified, the process proceeds to step S424.

  In step S424, the feature amount processing unit 922 determines whether or not the processing content is instructed, and repeats the processing until the processing content is instructed. If it is determined that the processing content has been input, the processing proceeds to step S425.

  In step S425, the feature amount processing unit 922 processes the specified basic feature amount with the specified processing content and stores it in the feature amount database 823, and the processing returns to step S413.

  On the other hand, when the user subjectively determines in step S420 that the telop has been extracted, the user operates the operation unit 1002 to input an operation signal indicating the end of the process to the feature amount detection unit 911, at which time the process ends.

  That is, by the above-described process, steps S411 to S425 are repeated until the user, looking at the image displayed on the display unit 403, can determine that the telop has been extracted, so that a combination of feature amounts optimum for the user can be set and the telop can be extracted from the image signal as the input signal. By increasing the types of feature amounts the user can select, more combinations of feature amounts can be set, which makes it possible to execute the processing optimum for the user. Further, since the instruction screen necessary for processing existing feature amounts to generate new feature amounts is displayed, the user can efficiently carry out the processing in accordance with the display. Moreover, since the telop extraction processing can be repeated while changing the feature amounts and observing the state of separation between the telop and the background in the feature amount distribution of the pixels, the feature amounts suitable for telop extraction can be easily selected.

  Next, the configuration of an optimization device 1101 in which a feature amount detection unit 1111, a processing determination unit 1112, and an operation unit 1102 are provided in place of the feature amount detection unit 411, the processing determination unit 412, and the operation unit 402 of the optimization device 401 of FIG. 55 will be described with reference to FIG. 89. In FIG. 89, the configuration is the same as that of the optimization device 401 in FIG. 55, except for these newly provided units.

  The feature amount detection unit 1111 has the same configuration as the feature amount detection unit 411 in FIG. 56, except that the feature amount selection unit 423 is not provided and the feature amount extraction unit 422 extracts two preset types of feature amounts.

  The processing determination unit 1112 stores history information for updating the LUT stored in the processing content database 433, and changes the LUT in accordance with the history information. The configuration of the processing determination unit 1112 will be described later with reference to FIG. 90.

  The operation unit 1102 is the same as the operation unit 402.

  Here, the configuration of the processing determination unit 1112 will be described with reference to FIG. 90. The processing determination unit 1112 in FIG. 90 has the same configuration as the processing determination unit 412 in FIG. 57, except that a processing content determination unit 1121 is provided in place of the processing content determination unit 432 and a history memory 1122 is added.

  The processing content determination unit 1121 stores, in the history memory 1122, history information on operations that change the LUT stored in the processing content database 433, and changes the LUT based on the history information. Its other functions are the same as those of the processing content determination unit 432 of the processing determination unit 412 in FIG. 57.

  Next, telop extraction optimization processing by the optimization device 1101 in FIG. 89 will be described with reference to the flowchart in FIG. 91.

  In step S431, the feature amount extraction unit 422 of the feature amount detection unit 1111 extracts the two predetermined types of feature amounts from the image signal as an input signal and outputs them to the processing determination unit 1112. At this time, the buffer 421 stores the image signal as the input signal.

  In step S432, the processing content determination unit 1121 of the processing determination unit 1112 refers to the LUT stored in the processing content database 433 based on the feature amounts and their types input from the feature amount recognition unit 431, determines the processing content for each pixel, and outputs it to the processing unit 413.

  In step S433, the processing unit 413 processes each pixel according to the processing content input from the processing determination unit 1112 and outputs it to the display unit 403 for display.

  In step S434, the processing content determination unit 1121 of the processing determination unit 1112 determines whether an operation signal for changing the LUT has been input from the operation unit 1102. That is, the user subjectively judges, by looking at the image displayed on the display unit 403, whether the preferred processing has been performed, and operates the operation unit 1102 based on that judgment, so that a corresponding operation signal is input. For example, when processing matching the user's preference has not been performed (when an image the user prefers is not displayed on the display unit 403), a request to change the LUT of the processing content database 433 of the processing determination unit 1112 is input.

  If an operation signal requesting a change of the LUT is input in step S434, that is, if processing matching the user's preference has not been performed, the process proceeds to step S435.

  In step S435, it is determined whether or not the auto LUT change process can be executed. Here, the LUT change process includes a manual LUT change process and an auto LUT change process; the details of how it is determined whether the auto LUT change process is possible will be described later.

  For example, if it is determined in step S435 that the auto LUT change process is not possible, the process proceeds to step S436, and the manual LUT change process is executed.

  Here, before describing the LUT change processing, the LUT itself will be described in detail. The LUT, as shown in FIG. 92, is a table showing the processing content determined for each combination of two feature amounts. FIG. 92 shows the case of feature amounts A and B as the two types of feature amounts, and in this example each feature amount is classified into eight steps (64 cells in total). In FIG. 92, each feature amount is assumed to be normalized to a value from 0 to 1.0; the value Va of the feature amount A is classified, from the left, into 0 ≦ Va < 1/8, 1/8 ≦ Va < 2/8, 2/8 ≦ Va < 3/8, 3/8 ≦ Va < 4/8, 4/8 ≦ Va < 5/8, 5/8 ≦ Va < 6/8, 6/8 ≦ Va < 7/8, and 7/8 ≦ Va ≦ 8/8, and the value Vb of the feature amount B is classified, from the top, into 0 ≦ Vb < 1/8, 1/8 ≦ Vb < 2/8, 2/8 ≦ Vb < 3/8, 3/8 ≦ Vb < 4/8, 4/8 ≦ Vb < 5/8, 5/8 ≦ Vb < 6/8, 6/8 ≦ Vb < 7/8, and 7/8 ≦ Vb ≦ 8/8. The processing content for each combination of feature amounts in the drawing is classified into the three types X, Y, and Z; in the case of FIG. 92, the processing content is X in the range 0 ≦ Va ≦ 3/8 and 0 ≦ Vb ≦ 3/8, Y in the range 4/8 ≦ Va < 6/8 or 4/8 ≦ Vb < 6/8, and Z in the other ranges. The processing content can be specified in various ways; for example, as shown in FIGS. 93 to 95, the prediction taps used for processing by the processing unit 413 can be designated for the target pixel.
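
  How a pair of feature values selects a cell of such an LUT can be sketched as follows; this is a minimal Python illustration assuming NumPy, with the cell contents chosen only to mimic the layout described above.

```python
import numpy as np

N_BINS = 8  # each normalized feature amount is classified into 8 steps

def lut_cell(v_a, v_b):
    """Map feature values in [0, 1] to 0-indexed (column, row) on the
    8 x 8 LUT; 1.0 falls into the last bin."""
    col = min(int(v_a * N_BINS), N_BINS - 1)
    row = min(int(v_b * N_BINS), N_BINS - 1)
    return col, row

# Illustrative LUT roughly matching FIG. 92: X where both feature
# values are small, Y in the middle bands, Z elsewhere. lut[row, col].
lut = np.full((N_BINS, N_BINS), "Z", dtype="U1")
lut[:, 4:6] = "Y"
lut[4:6, :] = "Y"
lut[0:3, 0:3] = "X"

col, row = lut_cell(0.52, 0.27)           # the example pixel used later on
print((col + 1, row + 1), lut[row, col])  # -> (5, 3) and its content "Y"
```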

  That is, FIG. 93 shows the processing content X: with the target pixel as P0, the filter taps P1 and P2 are set spatially in the x direction centered on the target pixel P0, the taps P3 and P4 are likewise set in the y direction centered on the target pixel P0, and the taps P5 and P6 are set temporally before and after the target pixel P0 (for example, the taps P6 and P5 at the same pixel position one frame after and one frame before). That is, the processing content X is a so-called spatio-temporal filter process.

  FIG. 94 shows the processing content Y: in place of the taps P3 and P4 of the spatio-temporal filter of FIG. 93, a tap P12 at a timing earlier than the tap P5 and, furthermore, a tap P11 at a timing later than the tap P6 are set in the time direction. That is, the processing content Y is a so-called temporal filter process.

  Further, FIG. 95 shows the processing content Z: in place of the taps P5 and P6 of the spatio-temporal filter of FIG. 93, a tap P21 is set at a position farther from the target pixel than the tap P1 in the x direction, and a tap P22 is set at a position farther from the target pixel than the tap P2. That is, the processing content Z is a so-called spatial filter process.
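
  The three tap configurations can be summarized as offsets around the target pixel. The following is a minimal sketch assuming NumPy; the exact spacings are only illustrated in FIGS. 93 to 95, so the offsets below are assumptions.

```python
import numpy as np

# (dx, dy, dt) offsets from the target pixel P0 for each processing content.
TAPS = {
    "X": [(0, 0, 0), (-1, 0, 0), (1, 0, 0),   # P1, P2: x neighbours
          (0, -1, 0), (0, 1, 0),              # P3, P4: y neighbours
          (0, 0, -1), (0, 0, 1)],             # P5, P6: previous/next frame
    "Y": [(0, 0, 0), (-1, 0, 0), (1, 0, 0),   # temporal filter: P3, P4
          (0, 0, -2), (0, 0, -1),             # replaced by an earlier (P12)
          (0, 0, 1), (0, 0, 2)],              # and a later (P11) tap
    "Z": [(0, 0, 0), (-2, 0, 0), (-1, 0, 0),  # spatial filter: P5, P6
          (1, 0, 0), (2, 0, 0),               # replaced by farther x taps
          (0, -1, 0), (0, 1, 0)],             # P21, P22
}

def extract_taps(frames, t, x, y, content):
    """Collect tap values for the target pixel (t, x, y) from a list of
    2-D frames, clamping at sequence and image borders."""
    h, w = frames[0].shape
    vals = []
    for dx, dy, dt in TAPS[content]:
        tt = min(max(t + dt, 0), len(frames) - 1)
        yy = min(max(y + dy, 0), h - 1)
        xx = min(max(x + dx, 0), w - 1)
        vals.append(frames[tt][yy, xx])
    return vals

frames = [np.random.rand(16, 16) for _ in range(5)]
print(extract_taps(frames, t=2, x=8, y=8, content="X"))
```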

  The processing contents are, of course, not limited to the three types of the example of FIG. 92 and may be divided into other types. For example, a binarization process that turns every pixel either white or black may be performed; this binarization may, for example, express whether or not a pixel is extracted as part of the telop, as in the example described above. Conversely, there may be more than three types of processing content.

  Next, the manual LUT changing process in step S436 in FIG. 91 will be described with reference to the flowchart in FIG.

  In step S441, the processing content determination unit 1121 determines whether or not a pixel position and processing content have been designated by an operation signal from the operation unit 1102, and repeats the processing until they are designated. That is, for example, when the screen shown in FIG. 97 is displayed on the display unit 403, the user operates the pointer 1131 on the image displayed on the display unit 403 and performs a predetermined operation at a pixel position whose processing is to be changed; as shown in FIG. 97, a drop-down list 1132 is then displayed, and one of the processing contents X, Y, and Z shown in the drop-down list 1132 can be designated. When this designation is made, it is determined in step S441 that the pixel position and the processing content have been designated, and the process proceeds to step S442. In this case, as shown in FIG. 97, the pixel position P41 is selected and the processing content X is selected.

  In step S442, the processing content determination unit 1121 reads the combination of the two types of feature amounts corresponding to the designated pixel position. More specifically, the processing content determination unit 1121 reads, from among the feature amounts detected by the feature amount detection unit 1111, the combination of the two types of feature amounts corresponding to the designated pixel.

  In step S443, the processing content determination unit 1121 changes the processing content corresponding to that combination of feature amounts to the processing content designated in step S441.

  In step S444, the processing content determination unit 1121 stores the changed pixel position and the processing content in the history memory 1122.

  In step S445, the processing content determination unit 1121 determines whether or not the LUT is to be changed further. If it is determined that processing to change the LUT is still to be performed, that is, when an operation signal instructing another change of the LUT is input from the operation unit 1102, the process returns to step S441; when it is determined that there is no further processing to change the LUT, that is, when an operation signal indicating the end of the LUT change is input from the operation unit 1102, the process ends.

  In the above-described manual LUT change process, the LUT is changed as follows. That is, when the feature amounts of the pixel at the pixel position P41 obtained in step S442 are, for example, (Va, Vb) = (0.52, 0.27), they correspond to the position (position from the left, position from the top) = (5, 3) on the LUT of FIG. 92 (positions on the LUT are expressed in the same way below), where the processing content Y is set. When the processing content X is designated as shown in FIG. 97, the processing content at the position (5, 3) of the LUT is changed from Y to X in step S443.
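
  In code form, the manual change amounts to locating the LUT cell from the designated pixel's feature pair and overwriting it; the following is a minimal sketch under the same assumptions as the earlier LUT snippet (illustrative names, 8-bin normalization).

```python
def manual_lut_change(lut, feat_a, feat_b, px, py, new_content, history):
    """Steps S441-S444 in outline: read the two feature values of the
    designated pixel, find the corresponding LUT cell, overwrite its
    processing content, and record the change as history."""
    v_a, v_b = feat_a[py, px], feat_b[py, px]
    col = min(int(v_a * 8), 7)   # e.g. 0.52 -> column 5 (1-indexed)
    row = min(int(v_b * 8), 7)   # e.g. 0.27 -> row 3 (1-indexed)
    lut[row, col] = new_content  # Y -> X at cell (5, 3) in the text
    history.append(((col + 1, row + 1), new_content))  # step S444
    return lut
```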

  After the manual LUT change process of FIG. 96 is completed, the process returns to step S432 in the flowchart of FIG. 91, and the subsequent processing is repeated. In this case, when the LUT has been changed as described above, the processing content X is now executed for pixels whose combination of the two types of feature amounts corresponds to the cell (5, 3) of the LUT, in which the processing content Y had been set. As a result, the processing for pixels having the same feature amounts as the pixel position P41 in the image shown in FIG. 97, that is, pixels belonging to (5, 3) of the LUT, is changed, and an image subjected to processing different from that of FIG. 97 is displayed in those portions, for example as shown in FIG. 100. Note that FIG. 100 shows an example in which pixels having the same feature amounts as the pixel at the pixel position P41 in FIG. 97 are made white.

  In the example of the process described with reference to the flowchart of FIG. 96, only the cell of the LUT designated by the combination of the feature amounts of the selected pixel is changed, but the change is not limited to this. That is, for example, every cell on the LUT may be set to the processing content of the designated position closest to it. For example, as shown in FIG. 101, suppose that all the processing contents are set to Z as the initial state of the LUT (the initial state of the LUT is a so-called default setting in which all the processing contents are Z), and that, by the manual LUT change process of FIG. 96, the processing content at the position (4, 2) on the LUT is changed to X and the processing content at the position (5, 2) is changed to Y. In that case, as shown in FIG. 102B, the areas close to the changed positions on the LUT may also be changed to X or Y. That is, in the case of FIG. 102B, for every position on the LUT, the distance from the position (4, 2) is compared with the distance from the position (5, 2), and each cell is changed to the processing content of the nearer position. As a result, in FIG. 102B, the left half area is set to the processing content X and the right half area is set to the processing content Y.

  Similarly, as shown in FIG. 103A, when the processing content for the combinations of feature amounts at the positions (4, 2) and (7, 7) on the LUT is changed to X and the processing content for the combinations at the positions (5, 2) and (4, 5) on the LUT is changed to Y, then, as shown in FIG. 103B, the processing content becomes X at (1, 1), (2, 1), (3, 1), (4, 1), (1, 2), (2, 2), (3, 2), (4, 2), (1, 3), (2, 3), (3, 3), (4, 3), (5, 7), (5, 8), (6, 6), (6, 7), (6, 8), (7, 5), (7, 6), (7, 7), (7, 8), (8, 4), (8, 5), (8, 6), (8, 7), and (8, 8), and Y at the other positions on the LUT.

  Further, similarly, as shown in FIG. 104A, when the processing content for the combinations of feature amounts at the positions (4, 2) and (7, 7) on the LUT is changed to X and the processing content for the combinations at the positions (2, 3), (5, 2), (4, 5), and (7, 4) is changed to Y, then, as shown in FIG. 104B, the processing content becomes X at (3, 1), (3, 2), (4, 1), (4, 2), (4, 3), (5, 7), (5, 8), (6, 6), (6, 7), (6, 8), (7, 6), (7, 7), (7, 8), (8, 6), (8, 7), and (8, 8), and Y at the other positions on the LUT.

  By changing the LUT in this way, the processing contents of a plurality of combinations of relatively similar feature amounts can be changed collectively in a single operation.
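
  A minimal sketch of this nearest-designated-cell variant, assuming NumPy and simple first-entry tie-breaking (the figures do not specify how ties are resolved):

```python
import numpy as np

def nearest_fill(changes, n_bins=8):
    """Set every LUT cell to the processing content of the nearest
    user-changed position; `changes` holds 1-indexed (col, row)
    positions as used in the text, e.g. [((4, 2), "X"), ((5, 2), "Y")]."""
    lut = np.empty((n_bins, n_bins), dtype="U1")
    for row in range(1, n_bins + 1):
        for col in range(1, n_bins + 1):
            d2 = [(col - c) ** 2 + (row - r) ** 2 for (c, r), _ in changes]
            lut[row - 1, col - 1] = changes[int(np.argmin(d2))][1]
    return lut

# FIG. 102: X at (4, 2) and Y at (5, 2) -> left half X, right half Y
print(nearest_fill([((4, 2), "X"), ((5, 2), "Y")]))
```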

  Now, the description returns to the flowchart of FIG. 91.

  If it is determined in step S435 that the auto LUT change process is possible, the process proceeds to step S437, where the auto LUT change process is executed.

  Here, the auto LUT change process will be described with reference to the flowchart of FIG. 105. In step S461, the processing content determination unit 1121 obtains the group existing for each processing content in the distribution of the update history stored in the history memory 1122. That is, as shown in FIG. 106, the history memory 1122 stores a history table, separate from the LUT, in which the positions on the LUT designated for change by the manual LUT change process described above and the designated processing contents are recorded. The history table shown in FIG. 106 indicates that changes were instructed for the processing contents at (3, 3), (8, 3), (2, 5), and (6, 6) on the LUT, to the processing contents X, X, X, and Y, respectively.

  Here, a group for each processing content refers to a region on the history table where that processing content exists at a predetermined density or more and over a predetermined area or more.

  For example, if a group exists, a group 1151 is formed on the history table as shown in FIG. 107, and the processing content determination unit 1121 obtains this group 1151. In FIG. 107, the group 1151 is a group of the processing content X; a group is obtained in the same way for each of the other processing contents.

  A group comes into existence only after the manual LUT change process has been executed a certain number of times, the update history has accumulated in the history memory 1122, and changes to the same processing content have been made to some extent on the LUT; otherwise, no group exists. In step S435 of FIG. 91, whether or not the auto LUT change process is possible is determined by whether or not such a group of processing contents exists: when a group exists for a processing content, it is determined that the auto LUT change process is possible, and when no group exists, it is determined that it is not possible.

  In step S462, the processing content determination unit 1121 detects the barycentric position of the group obtained in step S461.

  That is, for example, in the case of FIG. 107, the group 1151 is formed, and the center-of-gravity position is obtained from all the positions on the history table at which the processing content X is designated. In the case of FIG. 107, the center of gravity 1161 of all the positions on the history table at which the processing content X of the group 1151 is designated is obtained.

  In step S463, the processing content determination unit 1121 changes, on the LUT, the processing contents of the cells corresponding to the positions on the history table existing within a predetermined range from the center-of-gravity position of the group to the processing content of the group, and the process ends. That is, in FIG. 107, the processing contents of all the cells on the LUT corresponding to the positions on the history table existing within the range 1162, a circle of predetermined radius centered on the center-of-gravity position 1161, are changed to X, the processing content constituting the group.

  That is, for example, suppose the history table shown in FIG. 107 has been generated while the LUT is configured as shown in FIG. 108. The positions (2, 3) and (3, 4) on the history table lie within the range 1162 of predetermined distance from the center of gravity 1161 shown in FIG. 107 and are stored as history information, so the processing contents of the cells (2, 3) and (3, 4) on the LUT are changed to X, the processing content constituting the group. Accordingly, since the processing content of (2, 3) on the LUT of FIG. 108 is Y, the processing content determination unit 1121 changes the processing content of (2, 3) to X as shown in FIG. 109; and since the processing content of (3, 4) on the LUT of FIG. 108 is already X, it is maintained as it is, as shown in FIG. 109.
  As a result of this processing, the processing contents (the cell information) on the LUT are changed automatically. This process may be executed not only at the timing when the user instructs the LUT change process but also repeatedly at predetermined time intervals.
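
  The centroid-based update of steps S461 to S463 can be sketched as follows, with the density and area tests for detecting a group omitted for brevity; the radius is an assumed parameter.

```python
import numpy as np

def auto_lut_change(lut, history, radius=1.5):
    """For each processing content, take the changed positions recorded
    in the history table as its group, compute the group's center of
    gravity, and set the LUT cells of all history positions within
    `radius` of that center to the group's content. Positions are
    1-indexed (col, row), matching the text."""
    groups = {}
    for (col, row), content in history:
        groups.setdefault(content, []).append((col, row))
    for content, cells in groups.items():
        cg = np.mean(np.array(cells, dtype=float), axis=0)    # e.g. 1161
        for (col, row), _ in history:
            if np.hypot(col - cg[0], row - cg[1]) <= radius:  # range 1162
                lut[row - 1, col - 1] = content
    return lut
```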

  Now, the description returns to the flowchart of FIG. 91.

  If it is determined in step S434 that a change of the LUT has not been instructed, that is, if the user has looked at the image displayed on the display unit 403 and judged that an image matching the user's preference has been generated, the process ends.

  With the above processing, the processing contents registered in the cells of the LUT are changed as the LUT is repeatedly modified in accordance with the user's operations, so that an image matching the user's preference can be generated.

  In the above, since the processing contents on the LUT are changed in response to the user's operation, it can be said that the “content of processing” is changed. Further, depending on whether or not a group is obtained for each processing content as a result of the user's operations, the process switches between the processing system in which the processing contents of the LUT are changed based on the center-of-gravity position of the group and the processing system in which, as shown in FIGS. 101 to 104, the processing contents at other positions on the LUT are changed in accordance with the processing content at the designated position on the LUT. That is, since the algorithm for changing the LUT changes, it can also be said that the “structure of processing” within the “content of processing” is changed in accordance with the user's operation.

  Next, with reference to FIG. 110, another configuration example, an optimization device 1101 in which a processing determination unit 1181 is provided in place of the processing determination unit 1112 of the optimization device 1101 of FIG. 89 and an internal information generation unit 1182 is newly provided, will be described. In FIG. 110, the configuration is the same as that of the optimization device 1101 of FIG. 89, except that the processing determination unit 1181 is provided and the internal information generation unit 1182 is newly provided.

  The processing determination unit 1181 stores history information for updating the LUT stored in the processing content database 1191 (FIG. 111), changes the LUT in accordance with the history information, and supplies the LUT stored in the processing content database 1191 to the internal information generation unit 1182.

  The internal information generation unit 1182 reads the LUT stored in the processing content database 1191, converts it into information that can be displayed on the display unit 403, outputs the information to the display unit 403, and displays it.

  Next, the configuration of the processing determination unit 1181 will be described with reference to FIG. 111. The processing determination unit 1181 of FIG. 111 has the same configuration as the processing determination unit 1112, except that a processing content database 1191 is provided in place of the processing content database 433 of FIG. 90.

  The processing content database 1191 stores the LUT and supplies the LUT information to the internal information generation unit 1182 as necessary. Its other functions are the same as those of the processing content database 433 of FIG. 90.

  Next, telop extraction optimization processing by the optimization device 1101 in FIG. 110 will be described with reference to the flowchart in FIG. 112. The telop extraction optimization process of FIG. 112 is basically the same as the process described with reference to the flowchart of FIG. 91: the processes in steps S471 to S473 and S475 to S478 of FIG. 112 correspond to steps S431 to S437 of FIG. 91. After the process in step S473, in step S474, the internal information generation unit 1182 reads the LUT in the processing content database 1191 of the processing determination unit 1181, converts it into an image signal that can be displayed on the display unit 403, outputs it to the display unit 403, and displays (presents) it; the process then proceeds to step S475, and the subsequent processing is repeated.

  Since the LUT is displayed (presented) by such processing, it becomes possible to change the LUT while recognizing, from the image displayed on the display unit 403, the changes to the LUT and the processing applied to the image signal as the input signal.

  In the above description, an example was described in which, in the manual LUT change process, the LUT is changed by designating a pixel on the image processed with the processing contents registered in the LUT displayed on the display unit 403 and then designating the processing content. However, the internal information generation unit 1182 may instead read the LUT stored in the processing content database 1191 and display the processing contents on the LUT on the display unit 403 in a state in which they can be operated directly with, for example, the operation unit 1102, so that the processing contents on the LUT are changed directly.

  Here, with reference to the flowchart of FIG. 113, the manual LUT change process in which the optimization device 1101 of FIG. 110 directly changes the values on the LUT as described above will be described.

  In step S481, the processing content determination unit 1121 determines whether or not a position on the LUT has been designated, and repeats the processing until one is designated. For example, as shown in FIG. 114, when the position (5, 3) on the LUT displayed on the display unit 403, whose processing content is set to Y, is designated, it is determined that a position on the LUT has been designated, and the process proceeds to step S482.

  In step S482, the internal information generation unit 1182 causes the display unit 403 to display the designated position on the LUT. That is, in the case of FIG. 114, a position display frame 1192 is displayed at the designated position (5, 3).

  In step S483, the processing content determination unit 1121 determines whether or not the processing content has been designated, and repeats the processing until the processing content is designated. For example, as shown in FIG. 114, when a drop-down list 1193 is displayed at the position where the pointer 1191 is operated (for example, by right-clicking the mouse as the operation unit 1102) and one of the processing contents X, Y, and Z displayed in it is designated by the user operating the operation unit 1102, it is determined that the processing content has been designated, and the process proceeds to step S484.

  In step S484, the processing content determination unit 1121 changes the processing content to the designated processing content, and the process ends. That is, in the case of FIG. 114, since “X” displayed in the drop-down list 1193 is selected, the processing content at (5, 3) on the LUT is changed from Y to X, as shown in FIG. 115.

  With the above processing, the processing contents set on the LUT can be changed directly. By operating on the LUT while viewing the image processed with the processing contents registered in it, the user can easily set the preferred processing contents.

  In the optimization device 1101 of FIG. 110, since the processing content of the cell on the LUT designated by the user's operation is changed by the manual LUT change process, it can be said that the “content of processing” is changed by the user's operation. Further, when the change history stored in the history memory 1122 has accumulated to some extent and a group is detected, the algorithm for changing the LUT switches from the manual LUT change process to the auto LUT change process; in this sense, the “structure of processing” is also changed.

  Furthermore, since the LUT is displayed as internal information relating to the processing of the processing determination unit 1181 and the processing contents on the LUT can be changed while viewing the displayed LUT, the user can recognize the correspondence between the contents of the LUT and the image displayed on the display unit 403.

  Next, with reference to FIG. 116, the configuration of an optimization device 1201, as another embodiment, in which a feature amount detection unit 1111 and a processing unit 1211 are provided in place of the feature amount detection unit 411 and the processing unit 413 of the optimization device 401 of FIG. 55 will be described.

  The feature quantity detection unit 1111 has the same configuration as that of the optimization device 1101 in FIG.

  Based on the processing content information input from the processing determination unit 412, the processing unit 1211 performs mapping processing on the input signal read from the buffer 421 using, for example, a coefficient set obtained by learning, and outputs the result to the display unit 403 for display. The processing unit 1211 changes the learning method of the coefficient sets based on the operation signal from the operation unit 1202. The operation unit 1202 is the same as the operation unit 402.

  Next, the configuration of the processing unit 1211 will be described with reference to FIG. 117.

  Based on the image signal as an input signal read from the buffer 421 of the feature amount detection unit 1111, the learning device 1221 learns, by the least-N-power error method, the coefficient sets necessary for the mapping processing of the mapping processing unit 1222 for each processing content, and stores them in the coefficient memory 1237. The learning device 1221 learns the coefficient sets while changing the value of the exponent N of the least-N-power error method based on the operation signal input from the operation unit 1202.

  The mapping processing unit 1222 reads the corresponding coefficient set from the coefficient memory 1237 of the learning device 1221 based on the processing content input from the processing determination unit 412, performs mapping processing on the image signal as an input signal read from the buffer 421 of the feature amount detection unit 1111, and outputs the result to the display unit 403 for display.

  Next, the detailed configuration of the learning device 1221 will be described with reference to FIG. 118. The teacher data generation unit 1231 is the same as the teacher data generation unit 231 of FIG. 30; it generates teacher data from the input signal as learning data and outputs it to the least-N-power error method coefficient calculation unit 1236. The student data generation unit 1232 is the same as the student data generation unit 232 of FIG. 30; it generates student data from the input signal as learning data and outputs it to the feature amount extraction unit 1233 and the prediction tap extraction unit 1235.

  The feature amount extraction unit 1233 is the same as the feature amount extraction unit 422 of the feature amount detection unit 1111; it extracts feature amounts from the student data and outputs them to the processing determination unit 1234. The processing determination unit 1234 is the same as the processing determination unit 412; it determines the processing content based on the feature amounts input from the feature amount extraction unit 1233 and outputs it to the least-N-power error method coefficient calculation unit 1236. The prediction tap extraction unit 1235 is the same as the prediction tap extraction unit 233; it sequentially takes the teacher data as the target pixel, extracts from the student data the pixels serving as prediction taps for each target pixel, and outputs them to the least-N-power error method coefficient calculation unit 1236.

  The least-N-power error method coefficient calculation unit 1236 is the same as the least-N-power error method coefficient calculation unit 234 of FIG. 30 in its basic configuration and processing; based on the information, input from the operation unit 1202, specifying the value of the exponent N necessary for the least-N-power error method calculation, it calculates a coefficient set by the least-N-power error method from the prediction taps input from the prediction tap extraction unit 1235 and the teacher data, and outputs it to the coefficient memory 1237, where it is stored by overwriting. However, the least-N-power error method coefficient calculation unit 1236 of FIG. 118 differs from the least-N-power error method coefficient calculation unit 234 of FIG. 30 in that it generates a coefficient set for each processing content input from the processing determination unit 1234. The coefficient memory 1237 stores the coefficient set output for each processing content from the least-N-power error method coefficient calculation unit 1236; FIG. 118 shows the coefficient sets A to N stored for the respective processing contents.

  Next, the configuration of the mapping processing unit 1222 will be described with reference to FIG. 119. The tap extraction unit 251 is the same as that of the mapping processing unit 222 of FIG. 31; it extracts prediction taps for the target pixel from the input signal supplied from the buffer 421 and outputs them to the product-sum operation unit 1251. The product-sum operation unit 1251 is the same as the product-sum operation unit 252 of FIG. 31; it executes a product-sum operation on the values of the extracted prediction taps (pixels) input from the tap extraction unit 251 and the coefficient set stored in the coefficient memory 1237 of the learning device 1221 to generate the target pixel, applies this to all the pixels, and outputs the result to the display unit 403 as an output signal for display. However, in the product-sum operation, the product-sum operation unit 1251 uses, from among the coefficient sets A to N stored in the coefficient memory 1237, the coefficient set corresponding to the processing content supplied from the processing determination unit 412.
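
  The product-sum operation itself is a linear combination of the tap values with the coefficient set selected by the processing content; the following is a minimal sketch assuming NumPy and a hypothetical dictionary standing in for the coefficient memory 1237.

```python
import numpy as np

def map_pixel(taps, content, coefficient_memory):
    """Equation (39) in outline: the target pixel value is the sum of
    w_i * x_i over the prediction taps, using the coefficient set
    stored for the given processing content."""
    w = coefficient_memory[content]  # one of the sets A..N
    return float(np.dot(w, taps))

# Illustrative memory: one 7-tap averaging set per processing content.
coeff_mem = {c: np.full(7, 1.0 / 7) for c in ("X", "Y", "Z")}
print(map_pixel(np.arange(7.0), "X", coeff_mem))  # -> 3.0
```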

  Next, the learning process in the image optimization processing performed by the optimization device 1201 in FIG. 116 will be described with reference to the flowchart in FIG. 120.

  In step S501, it is determined whether or not the user has operated the operation unit 1202. If it is determined that the operation unit 1202 has not been operated, the process returns to step S501. If it is determined in step S501 that the operation unit 1202 has been operated, the process proceeds to step S502.

  In step S502, the teacher data generation unit 1231 of the learning device 1221 generates teacher data from the input signal and outputs it to the least-N-power error method coefficient calculation unit 1236, while the student data generation unit 1232 generates student data from the input signal and outputs it to the feature amount extraction unit 1233 and the prediction tap extraction unit 1235, and the process proceeds to step S503.

  As the data used to generate the student data and teacher data (hereinafter referred to as “learning data” where appropriate), for example, the input signals input from the present back to a point a predetermined time in the past can be employed. Alternatively, dedicated data may be stored in advance and used as the learning data instead of the input signal.

  In step S503, the feature amount extraction unit 1233 extracts the feature amount from the student data at the position corresponding to the target pixel (teacher data), and outputs the feature amount to the processing determination unit 1234.

  In step S504, the processing determination unit 1234 determines the processing content for the target pixel based on the feature amounts input from the feature amount extraction unit 1233, and outputs it to the least-N-power error method coefficient calculation unit 1236. For example, the processing determination unit 1234 may vector-quantize one or more feature amounts from the feature amount extraction unit 1233 and use the quantization result as the processing content information. In this case, however, an LUT or the like is not stored as in the processing determination unit 1112 of FIG. 90.
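
  A minimal sketch of this vector-quantization variant, assuming NumPy and an illustrative codebook (the patent does not specify codebook values):

```python
import numpy as np

def decide_content(features, codebook):
    """Return the index of the nearest code vector; that index serves
    as the processing-content number, so no LUT is needed."""
    distances = np.linalg.norm(codebook - features, axis=1)
    return int(np.argmin(distances))

codebook = np.array([[0.2, 0.2], [0.5, 0.5], [0.8, 0.3]])
print(decide_content(np.array([0.52, 0.27]), codebook))  # nearest class
```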

  In step S505, the prediction tap extraction unit 1235 takes each piece of teacher data as the target pixel, generates prediction taps from the student data input from the student data generation unit 1232 for each target pixel, and outputs them to the least-N-power error method coefficient calculation unit 1236, and the process proceeds to step S506.

  In step S506, the least-N-power error method coefficient calculation unit 1236 determines whether an operation signal designating calculation of the coefficient sets by the least-N-power error method using the recursive method (the second method) has been input from the operation unit 1202. If, for example, the operation unit 1202 has been operated by the user and it is determined that the recursive method is not designated, that is, the direct method (the first method) is designated, the process proceeds to step S507, where it is determined whether the coefficients a, b, and c of equation (50) that specify the weight αS (that is, specify the exponent N) have been input, and the processing is repeated until they are input. When, for example, the operation unit 1202 is operated by the user and it is determined that values specifying the coefficients a, b, and c have been input, the process proceeds to step S508.

  In step S508, the least-N-power error method coefficient calculation unit 1236 solves, substantially by the least-squares error method, the problem of minimizing equation (48) described above with the weight αS fixed by the input coefficients a, b, and c, thereby obtaining the prediction coefficients w1, w2, w3, ..., wM, that is, the coefficient set, as the solution of the least-N-power error method with the exponent N corresponding to the weight αS, for each processing content input from the processing determination unit 1234; it stores them in the coefficient memory 1237, and the process returns to step S501.

  On the other hand, if it is determined in step S506 that the recursive method has been selected, the process proceeds to step S509.

  In step S509, the least-N-power error method coefficient calculation unit 1236 determines whether information designating the exponent N has been input, and repeats the processing until the exponent N is input. If, for example, it is determined that the user has operated the operation unit 1202 to input information specifying the exponent N, the process proceeds to step S510.

  In step S510, the least-N-power error method coefficient calculation unit 1236 obtains a coefficient set by the solution based on the basic least-squares error method. Then, in step S511, using the prediction values obtained from that coefficient set, the least-N-power error method coefficient calculation unit 1236 recursively obtains, as described with reference to equations (51) to (54), the coefficient set by the least-N-power error method with the exponent N input from the operation unit 1202, for each processing content input from the processing determination unit 1234; it stores the coefficient sets in the coefficient memory 1237, and the process returns to step S501.
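
  The recursive method follows the standard iteratively-reweighted least-squares pattern: start from the least-squares solution and re-solve with weights derived from the current errors so that the effective criterion approaches the sum of the N-th powers of the errors. The following is a minimal sketch assuming NumPy; the patent's exact update in equations (51) to (54) may differ in detail.

```python
import numpy as np

def least_n_power_coeffs(X, y, n=4, iters=10):
    """Approximately minimize sum |y - Xw|^n: weighting each sample by
    |e|^(n-2) turns the weighted squared error into |e|^n."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]     # step S510: least squares
    for _ in range(iters):                       # step S511: recursion
        e = y - X @ w
        wt = np.abs(e) ** (n - 2) + 1e-12        # avoid zero weights
        Xw = X * wt[:, None]
        w = np.linalg.solve(X.T @ Xw, Xw.T @ y)  # weighted normal equations
    return w

rng = np.random.default_rng(0)
X = rng.random((100, 7))
y = X @ np.arange(1.0, 8.0) + 0.01 * rng.random(100)
print(least_n_power_coeffs(X, y, n=6))
```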

  Through the above processing, coefficient sets are learned and stored in the coefficient memory 1237 for each processing content.

  Next, mapping processing in image optimization processing by the optimization device 1201 in FIG. 116 will be described with reference to the flowchart in FIG. 121.

  In step S521, the feature amount detection unit 1111 detects, from the image signal as the input signal, the feature amount of the pixel of the input signal at the position corresponding to the pixel of interest of the output signal, and outputs the detected feature amount to the processing determination unit 412.

  In step S522, the processing determination unit 412 determines the processing content based on the feature amount input from the feature amount detection unit 1111 and outputs it to the processing unit 1211. The processing determination unit 412 determines the processing content by performing the same processing as the processing determination unit 1234 of the learning device 1221. Therefore, as described above, when the processing determination unit 1234 vector-quantizes one or more feature amounts from the feature amount extraction unit 1233 and uses the quantization result as the processing content information, the processing determination unit 412 likewise does not store an LUT or the like, unlike the processing determination unit 1112 described above.

  In step S523, the tap extraction unit 251 of the mapping processing unit 1222 of the processing unit 1211 takes the image frame of the output signal corresponding to the image frame of the current input signal as the frame of interest and, for example in raster scan order, sets a pixel of the frame of interest that has not yet been taken as the pixel of interest, extracts a prediction tap for that pixel from the input signal, and outputs it to the product-sum operation unit 1251.

  In step S524, the product-sum operation unit 1251 of the mapping processing unit 1222 reads the coefficient set corresponding to the processing content input from the processing determination unit 412 from the coefficient memory 1237 of the learning device 1221.

  In step S525, the product-sum operation unit 1251 executes, according to equation (39), a product-sum operation between the prediction tap input from the tap extraction unit 251 and the coefficient set corresponding to the processing content read from the coefficient memory 1237 of the learning device 1221, thereby obtaining the pixel value (predicted value) of the pixel of interest. Thereafter, the process proceeds to step S526, where the tap extraction unit 251 determines whether all the pixels of the frame of interest have been taken as the pixel of interest. If it determines that they have not, the process returns to step S521, and the same processing is repeated with a pixel that has not yet been taken as the pixel of interest as the new pixel of interest.
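
  The product-sum of equation (39) is simply a dot product between the prediction tap and the coefficient set selected by the processing-content label. A minimal sketch, with the array shapes and the placeholder coefficient sets assumed for illustration:

```python
import numpy as np

def map_pixel(prediction_tap, coefficient_sets, content_label):
    """Equation-(39)-style linear prediction: dot product of the prediction
    tap with the coefficient set selected by the processing-content label."""
    w = coefficient_sets[content_label]   # (M,) coefficients for this content
    return float(prediction_tap @ w)      # predicted pixel value

# Hypothetical example: a 3x3 tap flattened to 9 values, 4 coefficient sets
coefficient_sets = {k: np.full(9, 1.0 / 9) for k in range(4)}  # placeholder sets
tap = np.arange(9, dtype=float)
print(map_pixel(tap, coefficient_sets, content_label=1))  # -> 4.0
```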

  If it is determined in step S526 that all the pixels of the frame of interest have been taken as the pixel of interest, the process proceeds to step S527, where the display unit 403 displays the frame of interest composed of the pixels obtained by the product-sum operation unit 1251.

  Then, the process returns to step S521, where the feature amount detection unit 1111 detects, from the image signal as the input signal, the feature amount of the pixel of the input signal at the position corresponding to the pixel of interest of the output signal, takes the next frame as the new frame of interest, and thereafter the same processing is repeated.

  As described above, in the optimization device 1201 of FIG. 116, the exponent N is changed by the user operating the operation unit 1202 (in the direct method, the coefficients a, b, and c specifying the exponent N are changed; in the recursive method, the exponent N itself is changed), and this sets which exponent N is adopted in the least-N-th power error method used as the learning criterion (learning system) for the prediction coefficients (coefficient set). That is, the learning algorithm for obtaining the coefficients is changed. It can therefore be said that the “processing structure” is changed so as to obtain an image desired by the user.

  Next, the configuration of an optimization device 1301, in which an internal information generation unit 1312 is provided in the optimization device 1201 of FIG. 116, will be described with reference to FIG. 122.

  The optimization device 1301 of FIG. 122 is the same as the optimization device 1201 of FIG. 116, except that the internal information generation unit 1312 is provided and that a processing unit 1311 is provided in place of the processing unit 1211.

  For example, the internal information generation unit 1312 reads, as internal information, the coefficient set information stored for each processing content in the coefficient memory 1321 of the processing unit 1311, converts it into information that can be displayed on the display unit 403, and outputs it to the display unit 403 for display.

  Next, the configuration of the processing unit 1311 will be described with reference to FIG. 123. Its basic configuration is the same as that of the processing unit 1211, except that a coefficient memory 1321 is provided in place of the coefficient memory 1237. The coefficient memory 1321 has the same function, but is additionally connected to the internal information generation unit 1312, which reads out the coefficient set stored for each processing content.

  Next, image optimization processing by the optimization device 1301 in FIG. 122 will be described with reference to the flowchart in FIG. 124. The optimization device 1301 in FIG. 122 also performs a learning process and a mapping process, like the optimization device 1201 in FIG. 116. In the learning process, the same processes as those in steps S501 to S511 in FIG. 120 are performed in steps S541 to S551.

  Further, in the learning process, after the coefficient set has been stored in steps S541 through S551, the process proceeds to step S552, where the internal information generation unit 1312 reads the coefficient set stored in the coefficient memory 1321 as internal information, generates a displayable image signal based on each value included in the coefficient set, and outputs it to the display unit 403 for display.

  At this time, the image generated by the internal information generation unit 1312 and displayed on the display unit 403 can take the form of, for example, a three-dimensional distribution diagram as shown in FIG. 39 or a two-dimensional distribution diagram as shown in FIG. 40.

  Now, the description returns to the flowchart of FIG. 124.

  After the process of step S552, the process returns to step S541, and the same process is repeated thereafter.

  On the other hand, in the mapping process shown in the flowchart of FIG. 125, the same processes as in steps S521 to S527 of FIG. 121 are performed in steps S571 to S577, respectively.

  Through the above processing, each value (each coefficient value) of the coefficient sets stored in the coefficient memory 1321 of the processing unit 1311 is displayed (presented) as internal information on the processing. While viewing the distribution of the coefficient sets and the processing result of the processing unit 1311 as the output signal, the user operates the operation unit 1202 and changes the exponent N so as to obtain an image as the output signal that suits his or her preference; the learning algorithm for obtaining the coefficient sets is thereby changed. It can therefore be said that the “processing structure” is changed so as to obtain the image desired by the user. Although the coefficient sets are displayed in the above example, internal information related to the processing, such as whether the least-N-th power error method currently in use is the direct method or the recursive method, may be displayed instead.

  Next, with reference to FIG. 126, a configuration of an optimization apparatus 1401 in which a processing unit 1411 is provided instead of the processing unit 1211 of the optimization apparatus 1201 in FIG. 116 will be described.

  The configuration of the processing unit 1411 is basically the same as that of the processing unit 311 of the optimization device 301 in FIG. 41; based on the operation signal input from the operation unit 1202 and the processing content input from the processing determination unit 412, it optimizes the input signal and displays the result on the display unit 403.

  Next, the configuration of the processing unit 1411 will be described with reference to FIG. 127. The coefficient memory 1421 stores, for each processing content, the coefficient sets necessary for the mapping processing by the mapping processing unit 1222; in the figure, coefficient sets A to N are stored. These coefficient sets are generated in advance by learning by the learning device 1441 in FIG. 128.

  Here, the configuration of the learning device 1441 that generates the coefficient sets will be described with reference to FIG. 128.

  The teacher data generation unit 1451, student data generation unit 1452, feature amount extraction unit 1453, processing determination unit 1454, and prediction tap extraction unit 1455 of the learning device 1441 correspond to, and are the same as, the teacher data generation unit 1231, student data generation unit 1232, feature amount extraction unit 1233, processing determination unit 1234, and prediction tap extraction unit 1235 of the learning device 1221 in FIG. 118, so their description is omitted.

  The normal equation generation unit 1456 is the same as the normal equation generation unit 354 of FIG. 43: it generates normal equations based on the teacher data input from the teacher data generation unit 1451 and the prediction taps, and outputs them to the coefficient determination unit 1457. In this case, however, a normal equation is generated and output for each piece of processing content information input from the processing determination unit 1454.

  The coefficient determination unit 1457 is the same as the coefficient determination unit 355 of FIG. 43 and generates coefficient sets by solving the input normal equations; in this case, it generates a coefficient set in association with each piece of processing content information.

  Next, the coefficient determination processing (learning processing) by the learning device 1441 in FIG. 128 will be described with reference to the flowchart in FIG. 129. In step S591, the teacher data generation unit 1451 generates teacher data from the learning data and outputs it to the normal equation generation unit 1456, while the student data generation unit 1452 generates student data from the learning data and outputs it to the feature amount extraction unit 1453 and the prediction tap extraction unit 1455; the process then proceeds to step S592.

  In step S592, the prediction tap extraction unit 1455 sequentially takes each piece of teacher data as the pixel of interest, extracts a prediction tap from the student data for each pixel of interest, outputs it to the normal equation generation unit 1456, and the process proceeds to step S593.

  In step S593, the feature amount extraction unit 1453 extracts the feature amount of the student data (pixel) at the position corresponding to the pixel of interest (teacher data), and outputs it to the processing determination unit 1454.

  In step S594, the processing determination unit 1454 determines the processing content for each pixel based on the feature amounts extracted by the feature amount extraction unit 1453, and outputs information on the determined processing content to the normal equation generation unit 1456. For example, the processing determination unit 1454 may vector-quantize one or more feature amounts and use the quantization result as the processing content information; accordingly, the processing determination unit 1454 does not store an LUT.

  In step S595, the normal equation generation unit 1456 uses each piece of teacher data and its set of prediction taps to compute each summation (Σ) that forms a component of the left-side matrix and of the right-side vector in equation (46), thereby generating a normal equation for each piece of processing content information input from the processing determination unit 1454, and outputs them to the coefficient determination unit 1457.

  In step S596, the coefficient determination unit 1457 solves the normal equations input from the normal equation generation unit 1456 for each piece of processing content information, obtaining a coefficient set for each piece of processing content information by the so-called least square error method, and stores it in the coefficient memory 1421 in step S597.
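
  Structurally, steps S595 to S597 amount to accumulating a separate normal equation per processing-content label and solving each one. The sketch below assumes this per-label bookkeeping; the variable names and shapes are illustrative.

```python
import numpy as np
from collections import defaultdict

def learn_coefficient_sets(taps, targets, labels, M):
    """Accumulate and solve one normal equation per processing-content label
    (a sketch of steps S595-S597)."""
    lhs = defaultdict(lambda: np.zeros((M, M)))  # summations for the left-side matrix
    rhs = defaultdict(lambda: np.zeros(M))       # summations for the right-side vector
    for x, y, c in zip(taps, targets, labels):   # x: (M,) tap, y: teacher pixel, c: label
        lhs[c] += np.outer(x, x)
        rhs[c] += y * x
    return {c: np.linalg.solve(lhs[c], rhs[c]) for c in lhs}  # LS solve per label
```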

  Through the above processing, a basic coefficient set (a coefficient set as initial values) is stored in the coefficient memory 1421 for each piece of processing content information. In the above example, the coefficient sets are obtained by the least square error method, but they may be coefficient sets obtained by another method, for example by the least-N-th power error method described above.

  Next, image optimization processing by the optimization device 1401 in FIG. 126 will be described with reference to the flowchart in FIG. 130. This image optimization processing consists of coefficient changing processing and mapping processing. Since the mapping processing is the same as that described with reference to FIGS. 121 and 125, only the coefficient changing processing will be described here.

  In step S611, the change processing unit 332 of the coefficient changing unit 322 determines whether an operation signal for operating a coefficient value has been input from the operation unit 1202. That is, when the user views the image displayed on the display unit 403 and considers that it suits his or her preference, the mapping processing is performed with the coefficient sets stored for each piece of processing content information in the coefficient memory 1421 as they are; when the user judges that the image does not suit his or her preference, an operation is performed to change the coefficient sets stored in the coefficient memory 1421 and used for the mapping processing.

  If, for example, it is determined in step S611 that an operation signal for operating a coefficient has been input, that is, if the operation unit 1202 has been operated so as to change the value of one of the coefficients stored in the coefficient memory 1421, the process proceeds to step S612.

  In step S612, the change processing unit 332 controls the coefficient read/write unit 331 to read the coefficient set stored in the coefficient memory 1421, and the process proceeds to step S613. In step S613, the change processing unit 332 determines whether the coefficient value input as the operation signal has changed by a predetermined threshold S11 or more from the value previously included in the coefficient set. If, for example, it is determined in step S613 that the change between the value input as the operation signal and the value in the coefficient set stored in the coefficient memory 1421 is equal to or greater than the threshold S11, the process proceeds to step S614.

  In step S614, the change processing unit 332 changes the value of each coefficient included in the coefficient set using the spring model illustrated in FIG. 50, and the process proceeds to step S616.

  On the other hand, if it is determined in step S613 that the change between the value input as the operation signal and the coefficient set value stored in the coefficient memory 1421 is not greater than or equal to the threshold value S11, the process proceeds to step S615.

  In step S615, the change processing unit 332 changes the value of each coefficient included in the coefficient set using the equilibrium model as illustrated in FIG. 51, and the process proceeds to step S616.

  In step S616, the change processing unit 332 controls the coefficient read/write unit 331 to overwrite the changed coefficient set values into the coefficient memory 1421. The process then returns to step S611, and the subsequent processing is repeated.

  If it is determined in step S611 that no coefficient value has been operated, that is, if the user judges that the image displayed on the display unit 403 already suits his or her preference, the process returns to step S611, and thereafter the same processing is repeated.

  Through the coefficient changing process described above, the user can change the coefficient sets stored for each piece of processing content information and used in the mapping processing, and processing optimal for the user is thus executed. Note that changing the value of each coefficient in a coefficient set changes the “processing content” of the mapping processing by the mapping processing unit 1222.

  Further, in the coefficient changing process of FIG. 130, when the magnitude of the change in a coefficient is equal to or greater than the predetermined threshold S11, all the coefficient values of the coefficient set are changed by the spring model according to the value of the operated coefficient, and when it is smaller than the threshold S11, all the coefficient values of the coefficient set are changed by the equilibrium model; the algorithm for changing the coefficient set thus changes. Therefore, in the processing unit 1411 of the optimization device 1401 in FIG. 126, the “processing content” and also the “processing structure” are changed according to the user's operation, and it can be said that signal processing optimal for the user is performed.
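
  One way to picture such coordinated updates is the sketch below, which is not the exact spring or equilibrium model of FIGS. 50 and 51: when the user moves one coefficient, the remaining coefficients are adjusted by a simple proportional redistribution so that the whole set keeps summing to 1 (the unity-sum constraint of claim 7); the distance- and polarity-dependent signs of claims 8 and 9 are abstracted away.

```python
import numpy as np

def change_coefficient(coeffs, idx, new_value):
    """Move one coefficient to a user-specified value and spread the opposite
    of the change over the others so the set still sums to 1 (illustrative)."""
    coeffs = coeffs.astype(float).copy()
    delta = new_value - coeffs[idx]
    coeffs[idx] = new_value
    others = [i for i in range(len(coeffs)) if i != idx]
    coeffs[others] -= delta / len(others)  # redistribute to preserve the sum
    return coeffs

w = np.full(5, 0.2)
print(change_coefficient(w, 0, 0.4))  # [0.4, 0.15, 0.15, 0.15, 0.15], sum = 1.0
```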

  When the coefficient sets stored in the coefficient memory 1421 are obtained by the least-N-th power error method as described above, coefficient sets corresponding to a plurality of exponents N may, for example, be stored in the coefficient memory 1421 in advance, and the coefficient changing unit 322 may switch to the coefficient set corresponding to the exponent N designated by the operation signal from the operation unit 1202 based on the user's operation. In this case, the coefficient sets stored in the coefficient memory 1421 are replaced by ones generated by the least-N-th power error method corresponding to the exponent N input from the operation unit 1202 based on the user's operation; since the learning algorithm for obtaining the coefficient set corresponding to the exponent N is changed, it can be said that the “processing structure” is changed.

  Next, the configuration of an optimization device 1501, in which an internal information generation unit 1521 is provided in the optimization device 1401 of FIG. 126, will be described with reference to FIG. 131. The optimization device 1501 of FIG. 131 is the same as the optimization device 1401 of FIG. 126, except that the internal information generation unit 1521 is provided and that a processing unit 1511 is provided in place of the processing unit 1411.

  The internal information generation unit 1521 reads, as internal information, for example, a coefficient set stored for each piece of processing content information in the coefficient memory 1531 of the processing unit 1511, and converts it into an image signal that can be displayed on the display unit 403. The data is output to the display unit 403 and displayed.

  Next, the configuration of the processing unit 1511 will be described with reference to FIG. 132. It is basically the same as that of the processing unit 1411 in FIG. 126, except that a coefficient memory 1531 is provided instead of the coefficient memory 1421. The coefficient memory 1531 is functionally similar to the coefficient memory 1421, but differs in that it is connected to the internal information generation unit 1521, which reads out the coefficient sets as appropriate.

  Next, image optimization processing by the optimization device 1501 in FIG. 131 will be described with reference to the flowchart in FIG. 133. This image optimization processing also consists of coefficient changing processing and mapping processing, like the image optimization processing performed by the optimization device 1401 in FIG. 126. Since the mapping processing is the same as that described with reference to FIGS. 121 and 125, only the coefficient changing processing will be described here.

  In the coefficient changing process, processes similar to those in steps S611 to S616 in FIG. 130 are performed in steps S631 to S636, respectively.

  In step S636, as in step S616 of FIG. 130, the changed coefficient set is stored in the coefficient memory 1531; the process then proceeds to step S637, where the internal information generation unit 1521 reads each coefficient value of the coefficient sets stored in the coefficient memory 1531, converts them into an image signal that can be displayed on the display unit 403, and outputs it to the display unit 403 for display (presentation). At this time, the display unit 403 can display (present) each coefficient value of the coefficient sets in the form of, for example, a three-dimensional distribution diagram as shown in FIG. 39 or a two-dimensional distribution diagram as shown in FIG. 40.

  After the process of step S637, the process returns to step S631, and the same process is repeated thereafter.

  According to the coefficient changing process of FIG. 133, the values of the coefficient sets stored for each piece of processing content information in the coefficient memory 1531 are displayed as internal information, so the user can operate the operation unit 1202 while checking them, so as to obtain coefficient sets that execute processing optimal for the user.

  Note that the product-sum operation unit 1251 of the mapping processing unit 1222 can also obtain the output signal by computing a higher-order expression of second or higher order instead of the linear expression of equation (39).
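
  For instance, a second-order variant could augment the prediction tap with its pairwise products before taking the dot product. The patent does not specify the exact higher-order form, so the coefficient layout below is an assumption:

```python
import numpy as np

def map_pixel_quadratic(tap, w_linear, w_quad):
    """Second-order prediction: linear term plus pairwise tap products
    (one possible higher-order form of equation (39))."""
    quad_terms = np.outer(tap, tap)[np.triu_indices(len(tap))]  # x_i * x_j, i <= j
    return float(tap @ w_linear + quad_terms @ w_quad)
```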

  Next, the series of processes described above can be performed by hardware or software. When a series of processing is performed by software, a program constituting the software is installed in a general-purpose computer or the like.

  Therefore, FIG. 134 shows a configuration example of an embodiment of a computer in which a program for executing the series of processes described above is installed.

  The program can be recorded in advance in a hard disk 2105 or a ROM 2103 as a recording medium built in the computer.

  Alternatively, the program is stored in a removable recording medium 2111 such as a floppy (registered trademark) disk, a CD-ROM (Compact Disc Read Only Memory), an MO (Magneto optical) disk, a DVD (Digital Versatile Disc), a magnetic disk, or a semiconductor memory. It can be stored (recorded) temporarily or permanently. Such a removable recording medium 2111 can be provided as so-called package software.

  The program can be installed in the computer from the removable recording medium 2111 as described above, or it can be transferred to the computer wirelessly from a download site via an artificial satellite for digital satellite broadcasting, or by wire via a network such as a LAN (Local Area Network) or the Internet; the computer can receive the program transferred in this way with the communication unit 2108 and install it in the built-in hard disk 2105.

  The computer includes a CPU (Central Processing Unit) 2102. An input/output interface 2110 is connected to the CPU 2102 via a bus 2101. When the user inputs a command via the input/output interface 2110 by operating an input unit 2107 consisting of a keyboard, a mouse, a microphone, and the like, the CPU 2102 executes a program stored in a ROM (Read Only Memory) 2103 accordingly. Alternatively, the CPU 2102 loads into a RAM (Random Access Memory) 2104 and executes a program stored in the hard disk 2105; a program transferred from a satellite or a network, received by the communication unit 2108, and installed in the hard disk 2105; or a program read from the removable recording medium 2111 attached to the drive 2109 and installed in the hard disk 2105. The CPU 2102 thereby performs the processing according to the flowcharts described above or the processing performed by the configurations of the block diagrams described above. Then, as necessary, the CPU 2102, for example, outputs the processing result from an output unit 2106 configured with an LCD (Liquid Crystal Display), a speaker, and the like via the input/output interface 2110, transmits it from the communication unit 2108, or records it on the hard disk 2105.

  Here, in this specification, the processing steps describing the program for causing the computer to perform various kinds of processing do not necessarily have to be processed in time series in the order described in the flowcharts; they also include processing executed in parallel or individually (for example, parallel processing or processing by objects).

  Further, the program may be processed by a single computer, or may be processed in a distributed manner by a plurality of computers. Furthermore, the program may be transferred to a remote computer and executed there.

  Although the present invention has been described for the cases where it is applied to noise removal from an input signal and to automatic traveling, the present invention is widely applicable to other applications besides noise removal and automatic traveling, for example conversion of the frequency characteristics of a signal.

Brief Description of the Drawings

FIG. 1 is a diagram showing an optimization apparatus to which the present invention is applied.
FIG. 2 is a block diagram showing a configuration example of an embodiment of an optimization apparatus to which the present invention is applied.
FIG. 3 is a flowchart for explaining optimization processing by the optimization apparatus of FIG.
FIG. 4 is a block diagram illustrating a configuration example of an embodiment of an NR circuit using an optimization device.
FIG. 5 is a waveform diagram showing the input signal and the input reliability.
FIG. 6 is a flowchart for explaining correction processing by the NR circuit.
FIG. 7 is a flowchart for explaining correction parameter calculation processing by the NR circuit.
FIG. 8 is a flowchart for explaining control data learning processing by the NR circuit.
FIG. 9 is a diagram for explaining the control data learning process.
FIG. 10 is a block diagram illustrating a configuration example of another embodiment of the NR circuit using the optimization device.
FIG. 11 is a diagram illustrating pixels multiplied by the parameter control data.
FIG. 12 is a flowchart for explaining correction parameter calculation processing by the NR circuit.
FIG. 13 is a flowchart for explaining control data learning processing by the NR circuit.
FIG. 14 is a block diagram illustrating a configuration example of another embodiment of the NR circuit using the optimization device.
FIG. 15 is a flowchart for explaining optimization processing by the optimization apparatus of FIG.
FIG. 16 is a block diagram showing a configuration example of an embodiment of an automatic travel device to which the present invention is applied.
FIG. 17 is a block diagram illustrating a configuration example of a processing unit of the optimization apparatus in FIG.
FIG. 18 is a flowchart for explaining correction parameter calculation processing by the optimization apparatus of FIG.
FIG. 19 is a flowchart for explaining control data learning processing by the optimization apparatus of FIG.
FIG. 20 is a block diagram illustrating another configuration example of the processing unit of the optimization apparatus of FIG.
FIG. 21 is a diagram illustrating the traveling direction output by the calculation unit in FIG. 16.
FIG. 22 is a flowchart for explaining the correction processing by the optimization apparatus of FIG.
FIG. 23 is a flowchart for explaining correction parameter learning processing by the optimization apparatus of FIG.
FIG. 24 is a block diagram showing another configuration example of the automatic travel device to which the present invention is applied.
FIG. 25 is a flowchart for explaining correction parameter calculation processing by the optimization apparatus of FIG.
FIG. 26 is a flowchart for explaining correction parameter learning processing by the optimization apparatus of FIG.
FIG. 27 is a diagram illustrating an example of internal information generated by the internal information generation unit in FIG.
FIG. 28 is a diagram illustrating an example of internal information generated by the internal information generation unit in FIG.
FIG. 29 is a block diagram illustrating a configuration example of another embodiment of the optimization device to which the present invention has been applied.
FIG. 30 is a block diagram illustrating a configuration example of the learning unit of the optimization apparatus in FIG.
FIG. 31 is a block diagram illustrating a configuration example of the mapping processing unit of the optimization apparatus of FIG.
FIG. 32 is a diagram for explaining an error between a true value and a predicted value.
FIG. 33 is a diagram for explaining the least-N-th power error method.
FIG. 34 is a diagram for explaining the weight α S.
FIG. 35 is a flowchart for explaining image optimization processing by the optimization apparatus of FIG.
FIG. 36 is a diagram illustrating a comparison between the least-N-th power criterion and the least-square criterion.
FIG. 37 is a block diagram illustrating a configuration example of another embodiment of the optimization device to which the present invention has been applied.
FIG. 38 is a flowchart for explaining image optimization processing by the optimization apparatus of FIG.
FIG. 39 is a diagram illustrating an example of internal information generated by the internal information generation unit in FIG.
FIG. 40 is a diagram illustrating an example of internal information generated by the internal information generation unit in FIG.
FIG. 41 is a block diagram illustrating a configuration example of another embodiment of the optimization device to which the present invention has been applied.
FIG. 42 is a block diagram illustrating a configuration example of a coefficient conversion unit of the optimization apparatus in FIG.
FIG. 43 is a block diagram illustrating a configuration example of a learning device that generates the coefficients stored in the coefficient memory of FIG. 41 by learning.
FIG. 44 is a flowchart for explaining the coefficient determination processing by the learning apparatus of FIG.
FIG. 45 is a diagram illustrating a configuration of a prediction tap.
FIG. 46 is a diagram illustrating an example of the distribution of coefficient values corresponding to the tap positions of the prediction taps.
FIG. 47 is a diagram illustrating a configuration of a prediction tap.
FIG. 48 is a diagram illustrating an example of the distribution of coefficient values corresponding to the tap positions of the prediction taps.
FIG. 49 is a diagram illustrating a configuration of a prediction tap.
FIG. 50 is a diagram illustrating the spring model.
FIG. 51 is a diagram for explaining an equilibrium model.
FIG. 52 is a flowchart for explaining image optimization processing by the optimization apparatus of FIG.
FIG. 53 is a block diagram illustrating a configuration example of another embodiment of the optimization device to which the present invention has been applied.
FIG. 54 is a flowchart for explaining image optimization processing by the optimization apparatus of FIG.
FIG. 55 is a block diagram showing a configuration example of another embodiment of the optimization device to which the present invention is applied.
FIG. 56 is a block diagram illustrating a configuration example of the feature amount detection unit in FIG. 55.
FIG. 57 is a block diagram illustrating a configuration example of the processing determination unit in FIG.
FIG. 58 is a block diagram illustrating a configuration example of the processing unit of FIG.
FIG. 59 is a flowchart for explaining telop extraction optimization processing by the optimization apparatus of FIG.
FIG. 60 is a block diagram illustrating a configuration example of another embodiment of the optimization device to which the present invention has been applied.
FIG. 61 is a flowchart for explaining telop extraction optimization processing by the optimization apparatus of FIG.
FIG. 62 is a block diagram showing a configuration example of another embodiment of the optimization device to which the present invention is applied.
FIG. 63 is a flowchart for explaining telop extraction optimization processing by the optimization apparatus of FIG.
FIG. 64 is a diagram for explaining telop extraction optimization processing by the optimization apparatus of FIG.
FIG. 65 is a diagram for explaining telop extraction optimization processing by the optimization apparatus of FIG.
FIG. 66 is a diagram for explaining telop extraction optimization processing by the optimization apparatus of FIG.
FIG. 67 is a diagram for explaining telop extraction optimization processing by the optimization apparatus of FIG.
FIG. 68 is a diagram for explaining telop extraction optimization processing by the optimization apparatus of FIG.
FIG. 69 is a block diagram showing a configuration example of another embodiment of the optimization device to which the present invention is applied.
FIG. 70 is a block diagram illustrating a configuration example of the processing determination unit in FIG. 69.
FIG. 71 is a flowchart for explaining telop extraction optimization processing by the optimization apparatus of FIG. 69.
FIG. 72 is a diagram for explaining telop extraction optimization processing by the optimization apparatus of FIG. 69.
FIG. 73 is a diagram for explaining telop extraction optimization processing by the optimization apparatus of FIG. 69.
FIG. 74 is a diagram for explaining telop extraction optimization processing by the optimization apparatus of FIG. 69.
FIG. 75 is a diagram for explaining telop extraction optimization processing by the optimization apparatus of FIG. 69.
FIG. 76 is a diagram for explaining telop extraction optimization processing by the optimization apparatus of FIG. 69.
FIG. 77 is a diagram for explaining telop extraction optimization processing by the optimization apparatus of FIG. 69.
FIG. 78 is a diagram for explaining telop extraction optimization processing by the optimization apparatus of FIG. 69.
FIG. 79 is a diagram for explaining feature amount switching by the optimization apparatus in FIG. 69.
FIG. 80 is a block diagram illustrating a configuration example of another embodiment of the optimization device to which the present invention has been applied.
FIG. 81 is a block diagram illustrating a configuration example of the feature amount detection unit in FIG.
FIG. 82 is a flowchart for explaining telop extraction optimization processing by the optimization apparatus of FIG.
FIG. 83 is a block diagram illustrating a configuration example of another embodiment of the optimization device to which the present invention has been applied.
FIG. 84 is a block diagram illustrating a configuration example of the feature amount detection unit in FIG.
FIG. 85 is a flowchart for explaining telop extraction optimization processing by the optimization apparatus of FIG.
FIG. 86 is a view for explaining a feature content processing content instruction screen by the optimization apparatus of FIG.
FIG. 87 is a block diagram showing a configuration example of another embodiment of the optimization device to which the present invention is applied.
FIG. 88 is a flowchart for explaining telop extraction optimization processing by the optimization apparatus of FIG.
FIG. 89 is a block diagram showing a configuration example of another embodiment of the optimization device to which the present invention is applied.
FIG. 90 is a block diagram illustrating a configuration example of the processing determination unit in FIG.
FIG. 91 is a flowchart for explaining image optimization processing by the optimization apparatus of FIG.
FIG. 92 is a diagram for explaining an LUT.
FIG. 93 is a diagram for explaining the processing contents designated for each feature amount on the LUT.
FIG. 94 is a diagram for explaining the processing contents designated for each feature amount on the LUT.
FIG. 95 is a diagram for explaining the processing contents designated for each feature amount on the LUT.
FIG. 96 is a flowchart for explaining the manual LUT change process in the image optimization process by the optimization apparatus of FIG.
FIG. 97 is a diagram for explaining the manual LUT change process in the image optimization process by the optimization apparatus of FIG.
FIG. 98 is a diagram for explaining the manual LUT change process in the image optimization process by the optimization apparatus of FIG.
FIG. 99 is a diagram for explaining the manual LUT change process in the image optimization process by the optimization apparatus of FIG.
FIG. 100 is a diagram for explaining the manual LUT change process in the image optimization process by the optimization apparatus of FIG.
FIG. 101 is a diagram for explaining the manual LUT change process in the image optimization process by the optimization apparatus of FIG.
FIG. 102 is a diagram for explaining the manual LUT change process in the image optimization process by the optimization apparatus of FIG.
FIG. 103 is a diagram for explaining the manual LUT change process in the image optimization process by the optimization apparatus of FIG.
FIG. 104 is a diagram for explaining the manual LUT change process in the image optimization process by the optimization apparatus of FIG.
FIG. 105 is a flowchart for describing the auto LUT change processing in the image optimization processing by the optimization apparatus in FIG. 91.
FIG. 106 is a diagram for describing the auto LUT change processing in the image optimization processing by the optimization apparatus in FIG. 91.
FIG. 107 is a diagram for explaining the auto LUT change processing in the image optimization processing by the optimization apparatus in FIG. 91.
FIG. 108 is a diagram for explaining the auto LUT change processing in the image optimization processing by the optimization apparatus in FIG. 91.
FIG. 109 is a diagram for explaining the auto LUT change processing in the image optimization processing by the optimization apparatus in FIG. 91.
FIG. 110 is a block diagram illustrating a configuration example of another embodiment of the optimization device to which the present invention has been applied.
FIG. 111 is a block diagram illustrating a configuration example of the processing determination unit in FIG. 110.
FIG. 112 is a flowchart for explaining image optimization processing by the optimization apparatus of FIG. 110.
FIG. 113 is a flowchart for describing the manual LUT change processing in the image optimization processing by the optimization apparatus in FIG. 110.
FIG. 114 is a diagram for explaining the manual LUT change process in the image optimization process by the optimization apparatus of FIG. 110.
FIG. 115 is a diagram for explaining the manual LUT change process in the image optimization process by the optimization apparatus in FIG. 110.
FIG. 116 is a block diagram illustrating a configuration example of another embodiment of the optimization device to which the present invention has been applied.
FIG. 117 is a block diagram illustrating a configuration example of the processing unit in FIG. 116.
FIG. 118 is a block diagram showing a learning apparatus that generates a coefficient set stored in the coefficient memory of FIG. 116 by learning.
FIG. 119 is a block diagram illustrating a configuration example of the mapping processing unit in FIG. 117.
FIG. 120 is a flowchart for explaining learning processing by the optimization apparatus of FIG.
FIG. 121 is a flowchart for explaining mapping processing by the optimization apparatus of FIG.
FIG. 122 is a block diagram illustrating a configuration example of another embodiment of the optimization device to which the present invention has been applied.
FIG. 123 is a block diagram illustrating a configuration example of the processing unit in FIG. 122.
FIG. 124 is a flowchart for describing learning processing by the optimization apparatus of FIG.
FIG. 125 is a flowchart for explaining mapping processing by the optimization apparatus of FIG.
FIG. 126 is a block diagram illustrating a configuration example of another embodiment of the optimization device to which the present invention has been applied.
FIG. 127 is a block diagram illustrating a configuration example of the processing unit in FIG. 126.
FIG. 128 is a block diagram showing a learning apparatus that generates a coefficient set stored in the coefficient memory of FIG. 127 by learning.
FIG. 129 is a flowchart illustrating coefficient determination processing by the learning device in FIG. 128.
FIG. 130 is a flowchart for explaining image optimization processing by the optimization apparatus in FIG. 126.
FIG. 131 is a block diagram illustrating a configuration example of another embodiment of the optimization device to which the present invention has been applied.
FIG. 132 is a block diagram illustrating a configuration example of the processing unit in FIG. 131.
FIG. 133 is a flowchart for explaining image optimization processing by the optimization apparatus in FIG. 131.
FIG. 134 is a block diagram illustrating a configuration example of an embodiment of a computer to which the present invention has been applied.

Explanation of symbols

  DESCRIPTION OF SYMBOLS 1 optimization apparatus, 11 processing unit, 21 correction unit, 22 learning unit, 101 processing unit, 201 optimization apparatus, 211 processing unit, 221 learning unit, 222 mapping processing unit, 301 optimization apparatus, 311 processing unit, 321 coefficient memory, 322 coefficient changing unit, 341 learning device, 401 optimization apparatus, 411 feature amount detection unit, 412 processing determination unit, 413 processing unit, 701 optimization apparatus, 711 processing determination unit, 811, 911 feature amount detection unit, 912 internal information generation unit, 1001 optimization apparatus, 1011 internal information generation unit, 1101 optimization apparatus, 1111 feature amount detection unit, 1112 processing determination unit, 1081 feature amount detection unit, 1182 internal information generation unit, 1201 optimization apparatus, 1211 processing unit, 1301 optimization apparatus, 1311 processing unit, 1312 internal information generation unit, 1401 optimization apparatus, 1411 processing unit, 1501 optimization apparatus, 1511 processing unit, 1521 internal information generation unit

Claims (12)

  1. A signal processing device comprising:
    operation signal output means for outputting an operation signal in accordance with a user operation;
    feature detection means for detecting, from an input signal composed of an image signal, the features of a predetermined number of types of feature detection processing specified by the operation signal, from among a plurality of feature detection processes that obtain, for each pixel of the image signal, a plurality of types of processing results including inter-pixel difference processing of pixel values using the pixel and pixels surrounding it temporally, spatially, or both, filtering processing, statistical processing, and filtering processing or statistical processing that uses the results of the filtering processing and the statistical processing;
    storage means for storing a table showing the correspondence between features and the content of signal processing for an input signal having those features;
    processing determination means for determining, as the content of the signal processing for the input signal, the content of signal processing that is set by the operation signal in the table stored in the storage means for the input signal having the features detected by the feature detection means;
    processing execution means for executing on the input signal an execution process that is set according to the content of the signal processing determined by the processing determination means and that consists of predetermined prediction coefficients changeable based on the operation signal, the execution process generating an output signal by a linear combination of the input signal and the predetermined prediction coefficients; and
    display means for displaying, as internal information, at least one of the type of feature detected by the feature detection means, the distribution of the features, the table including the content of the signal processing determined by the processing determination means, and the predetermined prediction coefficients constituting the content of the execution process executed by the processing execution means,
    wherein at least one of the content of the feature detection processing, the content of the signal processing, and the content of the execution process is changed based on the operation signal.
  2. The signal processing device according to claim 1, wherein the operation signal is a signal that designates whether the input signal is a processing target from which features are extracted according to the content of a first feature detection process or according to the content of a second feature detection process, and
    the display means displays, as the internal information, the distribution of the features extracted from the input signal to be processed according to the content of the first feature detection process and according to the content of the second feature detection process.
  3. The signal processing device according to claim 2, wherein, when the operation signal designating whether the features according to the content of the first feature detection process or the features according to the content of the second feature detection process are to be processed is input for the distribution displayed by the display means, the content of the feature detection processing applied to the input signal is changed based on the operation signal.
  4. The signal processing device according to claim 1, wherein the processing determination means determines, as the content of the signal processing for the input signal, whether to output the input signal as it is, based on the predetermined number of types of features detected from the input signal by the feature detection means, and
    the processing execution means selectively outputs the input signal according to the determination of the processing determination means, thereby detecting a telop in the image signal that is the input signal.
  5. The signal processing device according to claim 1, wherein the processing determination means changes the content of the processing by changing the content of the signal processing in the table based on the operation signal.
  6. The signal processing device according to claim 1, wherein the contents of processing in the table include processing for outputting an output signal having a first value and processing for outputting an output signal having a second value for the input signal, and
    the processing execution means binarizes the input signal into the first and second values according to the determination of the processing determination means.
  7. The signal processing device according to claim 1, wherein, when the prediction coefficients to be linearly combined with the input signal are changed based on the operation signal, they are changed so that their sum becomes 1.
  8. The signal processing device according to claim 1, wherein, among the prediction coefficients linearly combined with the input signal, those other than the prediction coefficient changed based on the operation signal are changed as follows: in a space corresponding to the pixel array of the image signal that is the input signal to be linearly combined, coefficients closer than a predetermined distance to the prediction coefficient changed based on the operation signal are changed in the same direction as the direction of increase or decrease of that prediction coefficient, and coefficients farther than the predetermined distance are changed in the direction opposite to the direction of increase or decrease of that prediction coefficient.
  9. The signal processing device according to claim 1, wherein, among the prediction coefficients linearly combined with the input signal, those corresponding to pixels other than the prediction coefficient changed based on the operation signal, in a space corresponding to the pixel array of the image signal that is the input signal to be linearly combined, are changed as follows: a prediction coefficient whose maximum or minimum value has the same polarity (positive or negative) as the maximum or minimum value of the prediction coefficient changed based on the operation signal is changed in the same direction as the direction of increase or decrease of the changed prediction coefficient, and a prediction coefficient whose polarity differs from that of the maximum or minimum value of the changed prediction coefficient is changed in the opposite direction.
  10. A signal processing method of a signal processing device comprising:
    operation signal output means for outputting an operation signal in accordance with a user operation;
    feature detection means for detecting, from an input signal composed of an image signal, the features of a predetermined number of types of feature detection processing specified by the operation signal, from among a plurality of feature detection processes that obtain, for each pixel of the image signal, a plurality of types of processing results including inter-pixel difference processing of pixel values using the pixel and pixels surrounding it temporally, spatially, or both, filtering processing, statistical processing, and filtering processing or statistical processing that uses the results of the filtering processing and the statistical processing;
    storage means for storing a table showing the correspondence between features and the content of signal processing for an input signal having those features;
    processing determination means for determining, as the content of the signal processing for the input signal, the content of signal processing that is set by the operation signal in the table stored in the storage means for the input signal having the features detected by the feature detection means;
    processing execution means for executing on the input signal an execution process that is set according to the content of the signal processing determined by the processing determination means and that consists of predetermined prediction coefficients changeable based on the operation signal, the execution process generating an output signal by a linear combination of the input signal and the predetermined prediction coefficients; and
    display means for displaying, as internal information, at least one of the type of feature detected by the feature detection means, the distribution of the features, the table including the content of the signal processing determined by the processing determination means, and the predetermined prediction coefficients constituting the content of the execution process executed by the processing execution means,
    the signal processing method comprising:
    an operation signal output step of outputting, in the operation signal output means, an operation signal in accordance with a user operation;
    a feature detection step of detecting, in the feature detection means, from the input signal composed of the image signal, the features of the predetermined number of types of feature detection processing specified by the operation signal, from among the plurality of feature detection processes that obtain, for each pixel of the image signal, the plurality of types of processing results including the inter-pixel difference processing of pixel values using the pixel and pixels surrounding it temporally, spatially, or both, the filtering processing, the statistical processing, and the filtering processing or statistical processing that uses the results of the filtering processing and the statistical processing;
    a processing determination step of determining, in the processing determination means, as the content of the signal processing for the input signal, the content of signal processing set by the operation signal in the table stored in the storage means for the input signal having the features detected in the feature detection step;
    a processing execution step of executing, in the processing execution means, an execution process on the input signal by generating an output signal by a linear combination of the input signal and the predetermined prediction coefficients, based on the content of the execution process that is set according to the content of the signal processing determined in the processing determination step and that consists of the predetermined prediction coefficients changeable based on the operation signal; and
    a display step of displaying, in the display means, as internal information, at least one of the type of feature detected in the feature detection step, the distribution of the features, the table including the content of the signal processing determined in the processing determination step, and the predetermined prediction coefficients constituting the content of the execution process executed in the processing execution step,
    wherein at least one of the content of the feature detection processing, the content of the signal processing, and the content of the execution process is changed based on the operation signal.
  11. A recording medium on which is recorded a computer-readable program to be executed by a computer that controls a signal processing device, the signal processing device including:
    operation signal output means for outputting an operation signal in accordance with a user operation;
    feature detection means for detecting, from an input signal composed of an image signal, features given by the contents of a predetermined number of types of feature detection processing specified by the operation signal, from among the contents of a plurality of feature detection processes that obtain, for each pixel of the image signal, a plurality of types of processing results including inter-pixel difference processing of pixel values using the pixel and pixels neighboring it temporally, spatially, or both, filtering processing, statistical processing, and filtering processing and statistical processing that use the filtering and statistical processing results;
    storage means for storing a table showing the correspondence between features and the content of signal processing for an input signal having those features;
    processing determination means for determining, as the content of the signal processing for the input signal, the content of signal processing set by the operation signal in the table stored in the storage means for the input signal having the feature detected by the feature detection means;
    processing execution means for executing an execution process on the input signal by generating an output signal through a linear combination of the input signal and predetermined prediction coefficients, based on the content of the execution process, which includes the predetermined prediction coefficients, is set according to the content of the signal processing determined by the processing determination means, and can be changed based on the operation signal; and
    display means for displaying, as internal information, at least one of: the type of feature detected by the feature detection means, the distribution of the feature, the table containing the content of the signal processing determined by the processing determination means, and the predetermined prediction coefficients constituting the content of the execution process executed by the processing execution means,
    the program comprising:
    an operation signal output step of causing the operation signal output means to output an operation signal in accordance with a user operation;
    a feature detection control step of controlling the feature detection means to detect the features given by the contents of the predetermined number of types of feature detection processing specified by the operation signal, from among the contents of the plurality of feature detection processes enumerated above;
    a processing determination control step of controlling the processing determination means to determine, as the content of the signal processing for the input signal, the content of signal processing set by the operation signal in the table stored in the storage means for the input signal having the feature detected in the feature detection control step;
    a processing execution control step of controlling the processing execution means to execute the execution process on the input signal by generating the output signal through a linear combination of the input signal and the predetermined prediction coefficients, based on the content of the execution process, which includes the predetermined prediction coefficients, is set according to the content of the signal processing determined in the processing determination control step, and can be changed based on the operation signal; and
    a display control step of controlling the display means to display, as internal information, at least one of: the type of feature detected in the feature detection control step, the distribution of the feature, the table containing the content of the signal processing determined in the processing determination control step, and the predetermined prediction coefficients constituting the content of the execution process executed in the processing execution control step,
    the recording medium being characterized in that, based on the operation signal, at least one of the content of the feature detection processing, the content of the signal processing, and the content of the execution process is changed.
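The storage means and processing determination means together amount to a lookup table from detected feature classes to processing content. A minimal sketch, under the same illustrative assumptions as above: the quantization of a feature value into a class code and the table layout below are assumptions for exposition, not taken from the specification.

    import numpy as np

    # Hypothetical sketch of the table lookup performed by the storage and
    # processing determination means: a detected feature value is quantized
    # into a class code, and the table maps each class to a processing
    # content (here, just an index selecting a coefficient set).
    def determine_processing(features, table, n_bins=8):
        # features: (n_pixels,) one detected feature value per pixel, in [0, 1)
        # table:    (n_bins,) processing-content index per feature class
        class_codes = np.clip((features * n_bins).astype(int), 0, n_bins - 1)
        return table[class_codes]

    def update_table(table, class_code, new_content):
        # Change, in response to the operation signal, the signal processing
        # content assigned to one feature class.
        table[class_code] = new_content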
  12. A program to be executed by a computer that controls a signal processing device, the signal processing device including:
    operation signal output means for outputting an operation signal in accordance with a user operation;
    feature detection means for detecting, from an input signal composed of an image signal, features given by the contents of a predetermined number of types of feature detection processing specified by the operation signal, from among the contents of a plurality of feature detection processes that obtain, for each pixel of the image signal, a plurality of types of processing results including inter-pixel difference processing of pixel values using the pixel and pixels neighboring it temporally, spatially, or both, filtering processing, statistical processing, and filtering processing and statistical processing that use the filtering and statistical processing results;
    storage means for storing a table showing the correspondence between features and the content of signal processing for an input signal having those features;
    processing determination means for determining, as the content of the signal processing for the input signal, the content of signal processing set by the operation signal in the table stored in the storage means for the input signal having the feature detected by the feature detection means;
    processing execution means for executing an execution process on the input signal by generating an output signal through a linear combination of the input signal and predetermined prediction coefficients, based on the content of the execution process, which includes the predetermined prediction coefficients, is set according to the content of the signal processing determined by the processing determination means, and can be changed based on the operation signal; and
    display means for displaying, as internal information, at least one of: the type of feature detected by the feature detection means, the distribution of the feature, the table containing the content of the signal processing determined by the processing determination means, and the predetermined prediction coefficients constituting the content of the execution process executed by the processing execution means,
    the program causing the computer to execute:
    an operation signal output step of causing the operation signal output means to output an operation signal in accordance with a user operation;
    a feature detection control step of controlling the feature detection means to detect the features given by the contents of the predetermined number of types of feature detection processing specified by the operation signal, from among the contents of the plurality of feature detection processes enumerated above;
    a processing determination control step of controlling the processing determination means to determine, as the content of the signal processing for the input signal, the content of signal processing set by the operation signal in the table stored in the storage means for the input signal having the feature detected in the feature detection control step;
    a processing execution control step of controlling the processing execution means to execute the execution process on the input signal by generating the output signal through a linear combination of the input signal and the predetermined prediction coefficients, based on the content of the execution process, which includes the predetermined prediction coefficients, is set according to the content of the signal processing determined in the processing determination control step, and can be changed based on the operation signal; and
    a display control step of controlling the display means to display, as internal information, at least one of: the type of feature detected in the feature detection control step, the distribution of the feature, the table containing the content of the signal processing determined in the processing determination control step, and the predetermined prediction coefficients constituting the content of the execution process executed in the processing execution control step,
    the program being characterized in that, based on the operation signal, at least one of the content of the feature detection processing, the content of the signal processing, and the content of the execution process is changed.
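The execution process shared by the method, recording-medium, and program claims generates each output pixel as a linear combination of input taps and prediction coefficients, where the coefficient set is selected by the determined processing content and can be swapped in response to the operation signal. A minimal sketch, assuming a per-class coefficient table indexed by a class code (the names and shapes are illustrative assumptions, and the coefficient-learning procedure is outside this fragment):

    import numpy as np

    # Minimal sketch of the claimed execution process: each output pixel is a
    # linear combination of input taps and prediction coefficients.
    def execute_process(taps, class_codes, coeffs):
        # taps:        (n_pixels, n_taps) input pixel values around each output pixel
        # class_codes: (n_pixels,) feature class from the table lookup
        # coeffs:      (n_classes, n_taps) prediction coefficients; rewriting
        #              these in response to the operation signal changes the
        #              processing content without changing this code path.
        return np.einsum("ij,ij->i", taps, coeffs[class_codes])

Because the operation signal only rewrites the coefficient table (and the feature/table selections), the same execution path can realize different processing contents, which is the point of the "internal information" display: the user can see which coefficients and features are currently in effect.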
JP2007026198A 2007-02-05 2007-02-05 Signal processing device Expired - Fee Related JP4591785B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007026198A JP4591785B2 (en) 2007-02-05 2007-02-05 Signal processing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2007026198A JP4591785B2 (en) 2007-02-05 2007-02-05 Signal processing device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
JP2003570299 Division

Publications (2)

Publication Number Publication Date
JP2007183977A JP2007183977A (en) 2007-07-19
JP4591785B2 true JP4591785B2 (en) 2010-12-01

Family

ID=38339940

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2007026198A Expired - Fee Related JP4591785B2 (en) 2007-02-05 2007-02-05 Signal processing device

Country Status (1)

Country Link
JP (1) JP4591785B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101965937B1 (en) * 2016-11-17 2019-08-13 두산중공업 주식회사 Fault Signal Recovery Method and Apparatus
KR101926257B1 (en) 2017-05-15 2018-12-06 두산중공업 주식회사 Fault Signal Recovery System and Method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001238185A (en) * 2000-02-24 2001-08-31 Sony Corp Image signal converting apparatus, image signal conversion method, image display device using it, and device and method for generating coefficient data used for it
JP2001309314A (en) * 2000-04-25 2001-11-02 Sony Corp Device and method for converting image signal and image display device using the same

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3454396B2 (en) * 1995-10-11 2003-10-06 株式会社日立製作所 Video change point detection control method, playback stop control method based thereon, and video editing system using them
JPH10294885A (en) * 1997-04-17 1998-11-04 Sony Corp Unit and method for image processing
JP3864494B2 (en) * 1997-05-12 2006-12-27 ソニー株式会社 Image signal converter and television receiver using the same

Also Published As

Publication number Publication date
JP2007183977A (en) 2007-07-19

Similar Documents

Publication Publication Date Title
US7283668B2 (en) Method and apparatus for color-based object tracking in video sequences
CN103069373B (en) Two-Dimensional Block in Beam Seas control
JP3234064B2 (en) Image retrieval method and apparatus
US20100014781A1 (en) Example-Based Two-Dimensional to Three-Dimensional Image Conversion Method, Computer Readable Medium Therefor, and System
US7130461B2 (en) Systems and method for automatically choosing visual characteristics to highlight a target against a background
JP3264273B2 (en) Automatic color correction apparatus and automatic color correction method and recording medium recording the control program
US20080219493A1 (en) Image Processing System
US8189929B2 (en) Method of rearranging a cluster map of voxels in an image
US5864632A (en) Map editing device for assisting updating of a three-dimensional digital map
JP2914227B2 (en) Image processing apparatus and image processing method
JP3679512B2 (en) Image extraction apparatus and method
US6774889B1 (en) System and method for transforming an ordinary computer monitor screen into a touch screen
CN102710911B (en) Information processing apparatus and method
US8180161B2 (en) Image classification device and image classification program
Howard et al. Vision‐based terrain characterization and traversability assessment
US6578017B1 (en) Method to aid object detection in images by incorporating contextual information
KR101117146B1 (en) Image processing device and method, recording medium
US6396491B2 (en) Method and apparatus for reproducing a shape and a pattern in a three-dimensional scene
KR100913861B1 (en) Data processing apparatus, data processing method, data processing system and recording medium
US5568590A (en) Image processing using genetic mutation of neural network parameters
JP4260168B2 (en) Video color preference characteristic conversion device, conversion method, and recording medium
CN101828201B (en) Image processing device and method, and learning device, method
KR20030062313A (en) Image conversion and encoding techniques
JP4388301B2 (en) Image search apparatus, image search method, image search program, and recording medium recording the program
US6140997A (en) Color feature extracting apparatus and method therefor capable of easily transforming a RGB color space into a color space close to human sense

Legal Events

Date Code Title Description
A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20100422

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20100614

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20100819

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130924

Year of fee payment: 3

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20100901

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

LAPS Cancellation because of no payment of annual fees