WO2002067193A1 - Signal processing device - Google Patents
Signal processing device
- Publication number
- WO2002067193A1 (PCT/JP2002/001542; application JP0201542W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- signal
- processing
- learning
- unit
- data
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B13/00—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
- G05B13/02—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
- G05B13/0265—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion
- G05B13/028—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion using expert systems only
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B13/00—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
- G05B13/02—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
- G05B13/0265—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
Definitions
- the present invention relates to a signal processing device, and in particular to a signal processing device capable of performing processing optimal for a user by, for example, changing the content of the processing and the structure of the processing in accordance with a user operation.
- the S/N (Signal to Noise Ratio), frequency characteristics, and the like of the signal input to the NR circuit are not always constant, but rather generally change.
- when the S/N, frequency characteristics, and the like of the signal input to the NR circuit change, the noise removal processing corresponding to the knob position set by the user is not always appropriate for that signal, and the user must frequently operate the knob to obtain noise removal appropriate for himself or herself, which is troublesome.
- the present invention has been made in view of such a situation, and makes it possible to perform processing optimal for a user by changing the content or the structure of the processing in accordance with a user operation.
- a signal processing device according to the present invention includes signal processing means for performing signal processing on an input signal and output means for outputting a signal processing result of the signal processing means, wherein the processing structure of the signal processing means is changed based on an operation signal supplied in response to a user operation.
- the signal processing means may include: feature detecting means for detecting a feature from the input signal; processing determining means for determining the content of a process for the input signal based on the feature detected by the feature detecting means; and processing executing means for executing the process on the input signal in accordance with the content determined by the processing determining means, and the structure of the processing of at least one of the feature detecting means, the processing determining means, and the processing executing means may be changed based on the operation signal.
- the output means may include a presentation means for presenting a signal processing result of the signal processing means.
- the processing structure of the feature detecting means may be changed based on the operation signal.
- the operation signal may be a signal designating a predetermined number of types of features among a plurality of types of features, and the feature detecting means can change its processing structure so as to detect the designated predetermined number of types of features.
- the feature detecting means can detect a predetermined number of types of features from the input signal, and the processing determining means can determine the content of the process to be performed on the input signal by the processing executing means based on the predetermined number of types of features detected from the input signal by the feature detecting means.
- the input signal may be an image signal
- the processing determining means may determine, based on the predetermined number of types of features detected from the input signal by the feature detecting means, whether or not to output the input signal as it is, and the processing executing means may selectively output the input signal in accordance with that determination, whereby a telop in an image signal can be detected.
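As a minimal sketch of this selective-output idea (the function name, the suppression value, and the decision rule are all illustrative, not taken from the patent): each pixel is passed through unchanged when a decision function applied to its feature judges it to lie in a telop region, and suppressed otherwise.

```python
def extract_telop(pixels, features, decide):
    """Selectively output pixels whose feature passes the decision;
    others are suppressed to 0 (an illustrative placeholder value)."""
    return [p if decide(f) else 0 for p, f in zip(pixels, features)]

# Hypothetical rule: telop regions show a large local dynamic range.
out = extract_telop([10, 250, 30], [5, 200, 12], lambda dr: dr > 100)
print(out)  # [0, 250, 0]
```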
- the feature detecting means may change its processing structure based on the operation signal so as to detect a new type of feature different from the features prepared in advance.
- the feature detecting means can detect, as the types of features prepared in advance, a dynamic range, a maximum value, a median value, a minimum value, a sum, a variance, the number of input signals larger than a threshold value, or a linear combination of features.
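For a one-dimensional window of samples, the prepared feature types enumerated above can be sketched roughly as follows (a hedged illustration: the function name, key names, and threshold value are ours, not the patent's):

```python
import statistics

def detect_features(window, threshold=128):
    """Compute the prepared feature types enumerated above for a
    window of samples (names and threshold are illustrative)."""
    return {
        "dynamic_range": max(window) - min(window),
        "maximum": max(window),
        "median": statistics.median(window),
        "minimum": min(window),
        "sum": sum(window),
        "variance": statistics.pvariance(window),
        "count_above_threshold": sum(1 for x in window if x > threshold),
    }

feats = detect_features([10, 200, 130, 90, 130])
```

A linear combination of features is then simply a weighted sum over a selected subset of these values.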
- the structure of the processing of the processing determining means may be changed based on the operation signal.
- the processing determining means may store a feature/processing correspondence relationship, which is a correspondence between each value of the feature and the content of the process to be performed on an input signal having that feature value.
- the content of the process associated with the value of the feature detected from the input signal can be determined as the content of the process for the input signal.
- the processing determining means may change the feature/processing correspondence relationship based on the operation signal, thereby changing the structure of the processing.
- the processing execution means can binarize the input signal into first and second values according to the determination of the processing determining means.
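A feature/processing correspondence of this kind can be sketched as a small look-up table mapping quantized feature values to a decision, with the execution step binarizing the input accordingly. The table entries, bin values, and output levels below are hypothetical:

```python
# Hypothetical LUT: quantized feature value -> processing decision.
FEATURE_TO_PROCESS = {0: 0, 1: 0, 2: 1, 3: 1}

def determine(feature_bin, lut=FEATURE_TO_PROCESS):
    """Look up the processing content for the detected feature value;
    unknown feature values default to decision 0."""
    return lut.get(feature_bin, 0)

def execute(pixel, decision, first=255, second=0):
    """Binarize the input into a first and a second value per the decision."""
    return first if decision else second

# Rewriting an LUT entry changes the structure of the processing:
FEATURE_TO_PROCESS[1] = 1
print(execute(70, determine(1)))  # 255
```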
- the structure of the processing of the processing executing means may be changed based on the operation signal.
- the processing executing means may include: teacher data generating means for generating teacher data from predetermined learning data; student data generating means for generating student data from the learning data; learning means for learning a prediction coefficient that statistically minimizes an error between the teacher data and a predicted value of the teacher data obtained by linearly combining the student data with the prediction coefficient; and output signal generating means for generating an output signal by linearly combining the input signal with the prediction coefficient obtained by the learning means.
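The N = 2 (least-square) case of this learning can be sketched as follows. The three-tap arrangement, the use of the center sample as the teacher target, and the tiny Gaussian-elimination solver are our illustrative choices, not the patent's:

```python
def solve(a, b):
    """Gaussian elimination with partial pivoting for a small system a x = b."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def learn_prediction_coefficients(teacher, student, taps=3):
    """Solve the normal equations for coefficients w minimizing
    sum_i (teacher[i + taps//2] - sum_j w[j] * student[i + j])**2."""
    n = len(teacher) - taps + 1
    A = [[0.0] * taps for _ in range(taps)]
    b = [0.0] * taps
    for i in range(n):
        x = student[i:i + taps]
        t = teacher[i + taps // 2]
        for j in range(taps):
            b[j] += x[j] * t
            for k in range(taps):
                A[j][k] += x[j] * x[k]
    return solve(A, b)

# Sanity check: with student data identical to the teacher data, the
# learned taps should just pick out the center sample, w = [0, 1, 0].
teacher = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0, 5.0, 3.0, 5.0, 8.0, 9.0, 7.0]
w = learn_prediction_coefficients(teacher, teacher)
```

In practice the student data would be a degraded version of the teacher data (e.g. with noise added), so that the linear combination learns to restore it.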
- the learning means can learn the prediction coefficient by a least-N-th-power error method that statistically minimizes the N-th power error, i.e., the error raised to the N-th power, and can change the structure of the processing by changing the exponent N of the error based on the operation signal.
- the learning means may employ, as the N-th power error, the product of the square error and a weight corresponding to the operation signal, thereby changing the N-th power error based on the operation signal.
- the learning means may employ, as the N-th power error, the product of the square error and the error raised to a power corresponding to the operation signal, the latter error being computed from the predicted value of the teacher data calculated using a prediction coefficient obtained by the least-square error method.
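One way to read these claims, sketched for a single gain coefficient: the N-th power error e^N is treated as the square error e^2 multiplied by a weight |e|^(N-2), with that weight evaluated from a previous (least-square) fit, giving an iteratively reweighted least-squares approximation. The function names, the one-coefficient restriction, and the iteration count are our assumptions:

```python
def weighted_ls_gain(teacher, student, weights):
    """Closed-form weighted least squares for a single gain g
    minimizing sum_i w_i * (teacher_i - g * student_i)**2."""
    num = sum(w * x * t for w, x, t in zip(weights, student, teacher))
    den = sum(w * x * x for w, x in zip(weights, student))
    return num / den

def least_nth_power_gain(teacher, student, N=4, iters=5):
    """Approximate the least-N-th-power-error fit by folding the
    factor |error|**(N-2) from the previous fit in as a weight."""
    g = weighted_ls_gain(teacher, student, [1.0] * len(teacher))  # N = 2 start
    for _ in range(iters):
        w = [abs(t - g * x) ** (N - 2) + 1e-12  # small floor avoids all-zero weights
             for t, x in zip(teacher, student)]
        g = weighted_ls_gain(teacher, student, w)
    return g

student = [1.0, 2.0, 3.0, 4.0]
teacher = [2.0, 4.0, 6.0, 8.0]   # exactly 2x the student: any N recovers g = 2
g = least_nth_power_gain(teacher, student, N=4)
```

Larger N penalizes outliers more heavily, which is the behavior the operation signal would steer.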
- the learning means may learn a prediction coefficient for each content of the processing determined by the processing determining means, and the output signal generating means can generate an output signal by linearly combining the input signal with the prediction coefficient corresponding to the content of the processing determined for that input signal by the processing determining means.
- the signal processing means may include: determining means for monitoring the operation signal and determining whether the operation signal can be used for learning; learning means for learning, based on a learning operation signal, which is an operation signal usable for learning, a correction criterion serving as a criterion for correcting the input signal; and correcting means for correcting the input signal based on the correction criterion obtained by the learning and outputting the corrected signal as an output signal.
- the signal processing means may include: teacher data generating means for generating teacher data from predetermined learning data; student data generating means for generating student data from the learning data; learning means for learning a prediction coefficient that statistically minimizes an error between the teacher data and a predicted value of the teacher data obtained by linearly combining the student data with the prediction coefficient; and output signal generating means for generating an output signal by linearly combining the input signal with the prediction coefficient obtained by the learning means.
- the learning means can learn the prediction coefficient by a least-N-th-power error method that statistically minimizes the N-th power error, i.e., the error raised to the N-th power, and can change the structure of the processing by changing the exponent N of the error based on the operation signal.
- the learning means may employ, as the N-th power error, the product of the square error and a weight corresponding to the operation signal, thereby changing the N-th power error based on the operation signal.
- the learning means may employ, as the N-th power error, the product of the square error and the error raised to a power corresponding to the operation signal, the latter error being computed from the predicted value of the teacher data calculated using a prediction coefficient obtained by the least-square error method.
- a signal processing method according to the present invention includes a signal processing step of performing signal processing on an input signal and an output step of outputting a signal processing result of the signal processing step, wherein the processing structure of the signal processing step is changed based on an operation signal supplied in response to a user operation.
- a program on a recording medium according to the present invention includes a signal processing control step of controlling signal processing of an input signal and an output control step of controlling output of a signal processing result of the signal processing control step, wherein the processing structure of the signal processing control step is changed based on an operation signal supplied in response to a user operation.
- a program according to the present invention causes a computer to execute a signal processing control step of controlling signal processing of an input signal and an output control step of controlling output of a signal processing result of the signal processing control step, wherein the structure of the processing of the signal processing control step is changed based on an operation signal supplied in response to a user operation.
- according to the present invention, an input signal is signal-processed, a signal processing result is output, and the processing structure in the signal processing is changed based on an operation signal supplied in response to a user operation.
- FIG. 1 is a diagram showing an optimization device to which the present invention is applied.
- FIG. 2 is a block diagram illustrating a configuration example of an embodiment of an optimization device to which the present invention has been applied.
- FIG. 3 is a flowchart for explaining the optimization processing by the optimization device of FIG.
- FIG. 4 is a block diagram illustrating a configuration example of an embodiment of an NR circuit using an optimization device.
- FIG. 5A is a waveform diagram showing an input signal.
- FIG. 5B is a waveform chart showing the input reliability.
- FIG. 6 is a flowchart illustrating the correction processing by the NR circuit.
- FIG. 7 is a flowchart illustrating a correction parameter calculation process performed by the NR circuit.
- FIG. 8 is a flowchart illustrating control data learning processing by the NR circuit.
- FIGS. 9A to 9C are diagrams for explaining control data learning processing.
- FIG. 10 is a block diagram showing a configuration example of another embodiment of an NR circuit using an optimization device.
- FIG. 11 is a diagram illustrating pixels multiplied by the parameter control data.
- FIG. 12 is a flowchart illustrating a correction parameter calculation process performed by the NR circuit.
- FIG. 13 is a flowchart illustrating a control data learning process by the NR circuit.
- FIG. 14 is a block diagram illustrating a configuration example of another embodiment of an NR circuit using an optimization device.
- FIG. 15 is a flowchart for explaining the optimization processing by the optimization device in FIG.
- FIG. 16 is a block diagram showing a configuration example of an embodiment of an automatic traveling device to which the present invention is applied.
- FIG. 17 is a block diagram illustrating a configuration example of a processing unit of the optimization device in FIG.
- FIG. 18 is a flowchart illustrating a correction parameter calculation process performed by the optimization device of FIG.
- FIG. 19 is a flowchart illustrating a control data learning process by the optimization device of FIG.
- FIG. 20 is a block diagram illustrating another configuration example of the processing unit of the optimization device in FIG. 16.
- FIG. 21 is a diagram illustrating the traveling direction output by the calculation unit in FIG. 16.
- FIG. 22 is a flowchart illustrating a correction process by the optimization device in FIG. 16.
- FIG. 23 is a flowchart for explaining a correction parameter learning process by the optimization device of FIG.
- FIG. 24 is a block diagram showing another configuration example of the automatic traveling device to which the present invention is applied.
- FIG. 25 is a flowchart illustrating a correction parameter calculation process performed by the optimization device in FIG. 24.
- FIG. 26 is a flowchart for explaining a correction parameter learning process by the optimizing device of FIG.
- FIG. 27 is a diagram illustrating an example of internal information generated by the internal information generation unit in FIG.
- FIG. 28 is a diagram illustrating an example of internal information generated by the internal information generation unit in FIG.
- FIG. 29 is a block diagram illustrating a configuration example of another embodiment of the optimization device to which the present invention has been applied.
- FIG. 30 is a block diagram illustrating a configuration example of a learning unit of the optimization device in FIG.
- FIG. 31 is a block diagram illustrating a configuration example of a mapping processing unit of the optimization device in FIG. 29.
- FIG. 32 is a diagram for explaining an error between a true value and a predicted value.
- FIG. 33 is a diagram for explaining the minimum N-th power error method.
- FIG. 34 is a diagram for explaining the weight a_s.
- FIG. 35 is a flowchart for explaining the image optimization processing by the optimization device in FIG.
- FIG. 37 is a block diagram illustrating a configuration example of another embodiment of the optimizing device to which the present invention has been applied.
- FIG. 38 is a flowchart for explaining the image optimizing process by the optimizing device of FIG.
- FIG. 39 is a diagram illustrating an example of internal information generated by the internal information generation unit in FIG.
- FIG. 40 is a diagram illustrating an example of internal information generated by the internal information generation unit in FIG.
- FIG. 41 is a block diagram showing a configuration example of another embodiment of the optimizing device to which the present invention is applied.
- FIG. 42 is a block diagram illustrating a configuration example of a coefficient conversion unit of the optimization device in FIG.
- FIG. 43 is a block diagram illustrating a configuration example of a learning device that generates the coefficients stored in the coefficient memory of FIG. 41 by learning.
- FIG. 44 is a flowchart illustrating a coefficient determination process by the learning device in FIG.
- FIG. 45 is a diagram illustrating the configuration of the prediction tap.
- FIG. 46 is a diagram illustrating an example of a distribution of coefficient values corresponding to tap positions of prediction taps.
- FIG. 47 is a diagram illustrating the configuration of the prediction tap.
- FIG. 48 is a diagram illustrating an example of a distribution of coefficient values corresponding to the tap positions of the prediction taps.
- FIG. 49 is a diagram illustrating the configuration of the prediction tap.
- FIG. 50 is a diagram illustrating the panel model.
- FIG. 51 is a diagram illustrating the equilibrium model.
- FIG. 52 is a flowchart illustrating the image optimizing process performed by the optimizing device of FIG.
- FIG. 53 is a block diagram showing a configuration example of another embodiment of the optimization device to which the present invention is applied.
- FIG. 54 is a flowchart for explaining the image optimizing process by the optimizing device of FIG.
- FIG. 55 is a block diagram showing a configuration example of another embodiment of the optimization device to which the present invention is applied.
- FIG. 56 is a block diagram illustrating a configuration example of the feature amount detection unit in FIG.
- FIG. 57 is a block diagram illustrating a configuration example of the process determining unit in FIG.
- FIG. 58 is a block diagram illustrating a configuration example of the processing unit in FIG.
- FIG. 59 is a flowchart illustrating the telop extraction optimizing process by the optimizing device of FIG.
- FIG. 60 is a block diagram showing a configuration example of another embodiment of the optimizing device to which the present invention is applied.
- FIG. 61 is a flowchart for explaining the telop extraction optimizing process by the optimizing device of FIG.
- FIG. 62 is a block diagram illustrating a configuration example of another embodiment of the optimization device to which the present invention has been applied.
- FIG. 63 is a flowchart for explaining the telop extraction optimizing process by the optimizing device of FIG.
- FIG. 64 is a diagram for explaining the telop extraction optimizing process by the optimizing device of FIG.
- FIG. 65 is a view for explaining the telop extraction optimizing process by the optimizing device of FIG.
- FIG. 66 is a diagram for explaining the telop extraction optimizing process by the optimizing device of FIG.
- FIG. 67 is a diagram for explaining the telop extraction optimizing process by the optimizing device of FIG.
- FIG. 68 is a diagram for explaining the telop extraction optimizing process by the optimizing device of FIG. is there.
- FIG. 69 is a block diagram showing a configuration example of another embodiment of the optimizing device to which the present invention is applied.
- FIG. 70 is a block diagram illustrating a configuration example of the process determining unit in FIG.
- FIG. 71 is a flowchart for explaining the telop extraction optimizing process by the optimizing device of FIG.
- FIG. 72 is a view for explaining the telop extraction optimizing process by the optimizing device of FIG.
- FIG. 73 is a diagram for explaining the telop extraction optimizing process by the optimizing device of FIG.
- FIG. 74 is a diagram for explaining the telop extraction optimizing process by the optimizing device of FIG.
- FIG. 75 is a diagram for explaining the telop extraction optimizing process by the optimizing device of FIG.
- FIG. 76 is a view for explaining the telop extraction optimizing process by the optimizing device of FIG.
- FIG. 77 is a view for explaining the telop extraction optimizing process by the optimizing device of FIG.
- FIG. 78 is a view for explaining the telop extraction optimizing process by the optimizing device of FIG.
- FIG. 79 is a diagram for explaining switching of the feature amounts by the optimization device of FIG.
- FIG. 80 is a block diagram showing a configuration example of another embodiment of the optimizing device to which the present invention is applied.
- FIG. 81 is a block diagram illustrating a configuration example of the feature amount detection unit in FIG.
- FIG. 82 is a flowchart for explaining the telop extraction optimizing process performed by the optimizing device shown in FIG.
- FIG. 83 is a block diagram showing a configuration example of another embodiment of the optimization device to which the present invention is applied.
- FIG. 84 is a block diagram illustrating a configuration example of the feature amount detection unit in FIG.
- FIG. 85 is a flowchart for explaining the telop extraction optimizing process by the optimizing device of FIG.
- FIG. 86 is a diagram for explaining a processing content instruction screen for each feature by the optimization device of FIG. 84.
- FIG. 87 is a block diagram illustrating a configuration example of another embodiment of the optimization device to which the present invention has been applied.
- FIG. 88 is a flowchart illustrating the telop extraction optimizing process performed by the optimizing device of FIG.
- FIG. 89 is a block diagram showing a configuration example of another embodiment of the optimization device to which the present invention is applied.
- FIG. 90 is a block diagram illustrating a configuration example of the process determining unit in FIG. 89.
- FIG. 91 is a flowchart for explaining the image optimization processing by the optimization device in FIG. 89.
- FIG. 92 is a diagram for explaining LUT.
- FIG. 93 is a diagram for explaining the processing content specified on the LUT for each feature amount.
- FIG. 94 is a view for explaining the processing content designated on the LUT for each feature amount.
- FIG. 95 is a view for explaining the processing content designated on the LUT for each feature amount.
- FIG. 96 illustrates the manual LUT change processing in the image optimization processing by the optimization device in FIG. 91.
- FIG. 97 illustrates the manual LUT change processing in the image optimization processing by the optimization device in FIG. 91.
- FIG. 98 is a diagram illustrating the manual LUT change processing in the image optimization processing by the optimization device of FIG.
- FIG. 99 illustrates the manual LUT change processing in the image optimization processing by the optimization device in FIG. 91.
- FIG. 100 is a view for explaining manual LUT change processing in the image optimization processing by the optimization device of FIG.
- FIG. 101 illustrates the manual LUT changing process in the image optimizing process by the optimizing device of FIG. 91.
- FIG. 102A is a diagram illustrating a manual LUT change process in the image optimization process by the optimization device of FIG.
- FIG. 102B is a view for explaining manual LUT change processing in the image optimization processing by the optimization device of FIG.
- FIG. 103A is a view for explaining manual LUT change processing in the image optimization processing by the optimization device of FIG.
- FIG. 103B is a diagram for explaining manual LUT change processing in the image optimization processing by the optimization device in FIG.
- FIG. 104A is a diagram for explaining the manual LUT change processing in the image optimization processing by the optimization device in FIG.
- FIG. 104B is a diagram for explaining the manual LUT change processing in the image optimization processing by the optimization device in FIG.
- FIG. 105 is a flowchart for explaining the auto LUT changing process in the image optimizing process by the optimizing device of FIG.
- FIG. 106 is a diagram for explaining the automatic LUT changing process in the image optimizing process by the optimizing device of FIG.
- FIG. 107 is a diagram for explaining the automatic LUT changing process in the image optimizing process by the optimizing device of FIG.
- FIG. 108 is a diagram for explaining the auto LUT change processing in the image optimization processing by the optimization device in FIG.
- FIG. 109 is a view for explaining the auto LUT change processing in the image optimization processing by the optimization device of FIG.
- FIG. 110 shows a configuration example of another embodiment of the optimization device to which the present invention is applied.
- FIG. 111 is a block diagram illustrating a configuration example of the process determining unit in FIG. 110.
- FIG. 112 is a flowchart for explaining the image optimizing process by the optimizing device of FIG.
- FIG. 113 is a flowchart for explaining the manual LUT changing process in the image optimizing process by the optimizing device of FIG.
- FIG. 114 illustrates the manual LUT changing process in the image optimizing process by the optimizing device of FIG.
- FIG. 115 is a view for explaining manual LUT changing processing in the image optimization processing by the optimizing device of FIG.
- FIG. 116 is a block diagram showing a configuration example of another embodiment of the optimizing device to which the present invention is applied.
- FIG. 117 is a block diagram illustrating a configuration example of the processing unit in FIG.
- FIG. 118 is a block diagram showing a learning device that generates a coefficient set stored in the coefficient memory of FIG. 116 by learning.
- FIG. 119 is a block diagram illustrating a configuration example of the mapping processing unit in FIG.
- FIG. 120 is a flowchart illustrating the learning processing by the optimization device in FIG.
- FIG. 121 is a flowchart for explaining the mapping process by the optimization device of FIG.
- FIG. 122 is a block diagram showing a configuration example of another embodiment of the optimization device to which the present invention is applied.
- FIG. 123 is a block diagram illustrating a configuration example of the processing unit in FIG.
- FIG. 124 is a flowchart illustrating a learning process performed by the optimization device in FIG. 122.
- FIG. 125 is a flowchart for explaining the mapping process by the optimization device of FIG.
- FIG. 126 is a block diagram illustrating a configuration example of another embodiment of the optimization device to which the present invention has been applied.
- FIG. 127 is a block diagram illustrating a configuration example of the processing unit in FIG.
- FIG. 128 is a block diagram illustrating a learning device that generates a coefficient set stored in the coefficient memory of FIG. 127 by learning.
- FIG. 129 is a flowchart illustrating the coefficient determination processing by the learning device in FIG.
- FIG. 130 is a flowchart illustrating the image optimizing process by the optimizing device of FIG.
- FIG. 131 is a block diagram showing a configuration example of another embodiment of the optimization device to which the present invention is applied.
- FIG. 132 is a block diagram illustrating a configuration example of the processing unit in FIG. 131.
- FIG. 133 is a flowchart for explaining the image optimizing process by the optimizing device of FIG. 131.
- FIG. 134 is a block diagram showing a configuration example of an embodiment of a computer to which the present invention is applied.
- FIG. 1 shows a configuration example of an embodiment of an optimization device to which the present invention is applied.
- the optimizing device performs predetermined processing (signal processing) on an input signal, and then outputs a signal obtained as a result of the processing as an output signal.
- the user examines (qualitatively evaluates) this output signal, and if it is not the output signal of his / her own preference, inputs an operation signal corresponding to the user's preference to the optimization device.
- the optimizing device changes the content of the processing and the structure of the processing based on the operation signal, performs a predetermined processing on the reproduced input signal, and outputs an output signal.
- in this way, the optimizing device repeats the change of the processing content and the processing structure in response to operation signals input by the user's operations, thereby coming to perform processing optimal for the user on the input signal.
- FIG. 2 shows a first detailed configuration example of the optimization device of FIG.
- in the optimizing device 1, processing optimal for the user is performed by learning the user's operations without the user being aware of it. That is, the optimization device monitors an operation signal supplied in response to a user operation and determines whether the operation signal can be used for learning. When the operation signal is a learning operation signal that can be used for learning, a correction criterion for correcting the input signal is learned based on that learning operation signal. Meanwhile, the input signal is corrected based on the correction criterion obtained by the learning, and the corrected signal is output as an output signal.
- the optimization device 1 includes a processing unit 11 including a correction unit 21 and a learning unit 22.
- the processing unit 11 is supplied with an input signal to be processed and with an operation signal corresponding to a user operation.
- the operation signal is supplied from the operation unit 2. That is, the operation unit 2 is composed of, for example, rotary or slide knobs, switches, a pointing device, and the like, and supplies an operation signal corresponding to a user operation to the processing unit 11 of the optimization device 1.
- a digital input signal is supplied to the correction unit 21 of the optimizing device 1, and a correction parameter serving as a correction criterion for correcting the input signal is supplied from the learning unit 22.
- the correction unit 21 corrects the input signal based on the correction parameter (signal processing), and outputs the corrected signal as an output signal.
- the learning unit 22 is supplied with an operation signal from the operation unit 2, and is supplied with an input signal or an output signal as necessary.
- the learning unit 22 monitors the operation signal to determine whether the operation signal can be used for learning.
- When the operation signal is a learning operation signal, the learning unit 22 learns the correction parameter used to correct the input signal based on that learning operation signal, using the input signal and the output signal as necessary, and supplies the learned correction parameter to the correction unit 21.
- the learning unit 22 includes a learning data memory 53 and a learning information memory 55.
- The learning data memory 53 stores learning data used for learning, and the learning information memory 55 stores learning information, described later, obtained by learning. Next, the processing (optimization processing) performed by the optimization device 1 in FIG. 2 will be described with reference to the flowchart in FIG. 3.
- step S1 the learning unit 22 determines whether a learning operation signal has been received from the operation unit 2.
- When operating the operation unit 2, the user first performs a rough operation, then performs finer operations while checking the output signal produced in response, and stops operating when an output signal the user considers optimal is obtained.
- The operation signal corresponding to the position of the operation unit 2 at the moment this optimal output signal is obtained can therefore be used for learning. Accordingly, when the operation unit 2 has been operated for a predetermined time or longer and the operation then stops, the learning unit 22 determines the operation signal at the time of the stop to be a learning operation signal.
- When it is determined in step S1 that no learning operation signal has been received (for example, when the user has not operated the operation unit 2, or has operated it in a way that does not qualify as a learning operation), steps S2 to S10 are skipped and the process proceeds to step S11, where the correction unit 21 corrects the input signal according to the correction parameter already set and outputs the resulting output signal, and the process returns to step S1.
- If it is determined in step S1 that a learning operation signal has been received, the process proceeds to step S2, where the learning unit 22 acquires learning data used for learning based on the learning operation signal, and then proceeds to step S3.
- step S3 the learning data memory 53 stores the latest learning data acquired in step S2.
- Here, the learning data memory 53 has a storage capacity sufficient to store a plurality of learning data. Once learning data has been stored up to that capacity, the next learning data is stored by overwriting the oldest stored value. Therefore, the learning data memory 53 always holds a number of the most recent learning data.
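The overwrite-the-oldest behaviour described above can be sketched with a fixed-capacity buffer. This is an illustrative Python sketch, not the patent's implementation; the capacity of 3 is an arbitrary assumption:

```python
from collections import deque

# A deque with maxlen keeps only the most recent items: once full,
# appending a new item discards the oldest one, mirroring how the
# learning data memory 53 retains only recent learning data.
learning_data = deque(maxlen=3)
for sample in [1, 2, 3, 4, 5]:
    learning_data.append(sample)

print(list(learning_data))  # the three most recent samples: [3, 4, 5]
```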
- After the learning data is stored in the learning data memory 53 in step S3, the process proceeds to step S4, where the learning unit 22 performs learning using the learning data stored in the learning data memory 53, including the latest learning data, and the learning information stored in the learning information memory 55, obtains a correction parameter, and proceeds to step S5. In step S5, the learning unit 22 updates the contents of the learning information memory 55 with the new learning information obtained during the learning in step S4, and proceeds to step S6.
- In step S6, the learning unit 22 obtains an appropriateness, described later, representing how appropriate the correction parameter obtained in step S4 is, and proceeds to step S7, where it determines whether the correction parameter obtained in step S4 is appropriate.
- If it is determined in step S7 that the correction parameter is appropriate, steps S8 and S9 are skipped and the process proceeds to step S10, where the learning unit 22 outputs the correction parameter to the correction unit 21, and then proceeds to step S11. Thereafter, the correction unit 21 corrects the input signal in accordance with the new correction parameter obtained in the learning in step S4.
- If it is determined in step S7 that the correction parameter is not appropriate, the process proceeds to step S8, where the learning unit 22 performs learning again using only recent learning data and obtains a new correction parameter, which is output to the correction unit 21. Thereafter, the correction unit 21 corrects the input signal according to the new correction parameter obtained in the learning in step S8.
- FIG. 4 shows an example of a detailed configuration in a case where the processing unit 11 of FIG. 2 is applied to, for example, an NR circuit that removes noise from an image signal or an audio signal.
- the weight memory 31 stores a weight (coefficient) W (for example, a value of 0 or more and 1 or less) as a correction parameter supplied from the selecting unit 41 described later of the learning unit 22.
- The weight memory 32 stores the weight 1-W supplied from the arithmetic unit 33.
- The arithmetic unit 33 supplies the weight memory 32 with the subtraction value 1-W, obtained by subtracting the weight W supplied from the selection unit 41 of the learning unit 22 from 1.0, as a weight.
- The arithmetic unit 34 multiplies the input signal by the weight 1-W stored in the weight memory 32, and supplies the product to the arithmetic unit 36.
- the arithmetic unit 35 multiplies the weight W stored in the weight memory 31 by the output signal stored (latched) in the latch circuit 37, and supplies the multiplied value to the arithmetic unit 36.
- the computing unit 36 adds the outputs of both computing units 34 and 35 and outputs the sum as an output signal.
- the latch circuit 37 latches the output signal output from the arithmetic unit 36 and supplies the output signal to the arithmetic unit 35.
- The correction unit 21 of the processing unit 11 is constituted by the weight memories 31 and 32, the arithmetic units 33 to 36, and the latch circuit 37.
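The datapath described above computes y(t) = (1-W)·x(t) + W·y(t-1), with the latch circuit feeding the previous output back into the next sample. A minimal Python sketch of this recursion, assuming a fixed weight W and a latch that starts at zero (both assumptions of this sketch, not stated in the text):

```python
def correct(samples, w):
    """One-pole recursive smoother matching the correction unit's datapath:
    y(t) = (1 - w) * x(t) + w * y(t-1)."""
    y_prev = 0.0  # latch circuit 37; initial value is an assumption
    out = []
    for x in samples:
        y = (1.0 - w) * x + w * y_prev  # arithmetic units 34, 35, 36
        out.append(y)
        y_prev = y                      # latch overwrites with the new output
    return out
```

For example, `correct([1.0, 1.0], 0.5)` yields `[0.5, 0.75]`: each output mixes the new input with the previous output, which is what smooths away noise.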
- the selection unit 41 selects one of the weight output from the weight correction unit 46 and the weight output from the operation signal processing unit 50, and supplies the selected weight to the correction unit 21 as a correction parameter.
- An input signal is supplied to the input reliability calculation unit 42, which obtains an input reliability indicating the reliability of the input signal and supplies it to the output reliability calculation unit 43 and the weight calculation unit 45.
- The output reliability calculation unit 43 obtains an output reliability indicating the reliability of the output signal based on the input reliability from the input reliability calculation unit 42, and supplies it to the latch circuit 44 and the weight calculation unit 45.
- the latch circuit 44 stores (latches) the output reliability from the output reliability calculation section 43 and supplies the output reliability to the output reliability calculation section 43 and the weight calculation section 45.
- The weight calculation unit 45 calculates a weight from the input reliability from the input reliability calculation unit 42 and the output reliability from the output reliability calculation unit 43, and outputs the weight to the weight correction unit 46.
- parameter control data for controlling the weight as a correction parameter is supplied to the weight correction unit 46 from the parameter control data memory 57.
- The weight correction unit 46 processes (corrects) the weight using the parameter control data and supplies the corrected weight to the selection unit 41.
- the operation signal processing unit 50 is supplied with an operation signal from the operation unit 2 (FIG. 2).
- The operation signal processing unit 50 processes the operation signal supplied to it and supplies the weight corresponding to the operation signal to the selection unit 41, the teacher data generation unit 51, and the student data generation unit 52. Further, the operation signal processing unit 50 determines whether the operation signal is the learning operation signal described above; if it is, a flag to that effect (hereinafter referred to as a learning flag as appropriate) is added to the weight that is output.
- Upon receiving the weight with the learning flag from the operation signal processing unit 50, the teacher data generation unit 51 generates teacher data serving as a teacher for learning and supplies it to the learning data memory 53. That is, the teacher data generation unit 51 supplies the weight to which the learning flag is added to the learning data memory 53 as teacher data.
- Upon receiving the weight with the learning flag from the operation signal processing unit 50, the student data generation unit 52 generates student data serving as a student for learning and supplies it to the learning data memory 53. That is, the student data generation unit 52 is configured in the same manner as the input reliability calculation unit 42, output reliability calculation unit 43, latch circuit 44, and weight calculation unit 45 described above; it calculates a weight from the input signal supplied to it, and when the weight with the learning flag is received, it supplies the weight calculated from the input signal to the learning data memory 53 as student data.
- The learning data memory 53 stores, as one set of learning data, the pair consisting of the teacher data (the weight corresponding to the learning operation signal) supplied from the teacher data generator 51 and the student data (the weight calculated from the input signal at the time the learning operation signal was received) supplied from the student data generator 52.
- The learning data memory 53 can store a plurality of learning data. Once learning data has been stored up to its capacity, the next learning data is stored by overwriting the oldest stored value. Therefore, the learning data memory 53 basically always holds a number of the most recent learning data.
- The parameter control data calculation unit 54 uses the teacher data and student data stored as learning data in the learning data memory 53 and, as necessary, the learning information stored in the learning information memory 55, to learn parameter control data that minimizes a predetermined statistical error, computing new learning information in the process, and supplies the result to the determination control unit 56. Further, the parameter control data calculation unit 54 updates the contents of the learning information memory 55 with the new learning information obtained by the learning.
- the learning information memory 55 stores learning information from the parameter control data calculator 54.
- The judgment control unit 56 judges the appropriateness of the parameter control data supplied from the parameter control data calculation unit 54 by referring to the latest learning data stored in the learning data memory 53. Further, the judgment control unit 56 controls the parameter control data calculation unit 54, and supplies the parameter control data supplied from it to the parameter control data memory 57.
- the parameter control data memory 57 updates the stored content with the parameter control data supplied from the determination control unit 56 and supplies the updated data to the weight correction unit 46.
- The learning unit 22 of the processing unit 11 is configured by the selection unit 41 through the weight correction unit 46 and the operation signal processing unit 50 through the parameter control data memory 57 described above.
- the processing unit 11 of the optimizing device 1 as an NR circuit configured as described above removes noise in the input signal as follows.
- Consider an input signal in which noise that fluctuates with time is superimposed on a constant true value. Simply averaging such a signal does not remove the noise effectively when the noise level varies. Rather, noise can be effectively removed by reducing the weight of an input signal with a high noise level (that is, a signal with poor S/N) and increasing the weight of an input signal with a low noise level (that is, a signal with good S/N).
- As the evaluation value of the input signal, an input reliability is used that expresses, as shown in FIG. 5B, the closeness of the input signal to the true value, that is, the reliability with which the input signal represents the true value. Noise is effectively removed by computing the average while weighting the input signal according to this input reliability.
- a weighted average using the weight corresponding to the input reliability is obtained for the input signal and output as an output signal.
- That is, with the input signal x(t), the output signal y(t), and the input reliability α_x(t) at time t, the output signal y(t) is obtained as the weighted average

  y(t) = ( Σ_{i=0..t} α_x(i)·x(i) ) / ( Σ_{i=0..t} α_x(i) )   … (1)

- Similarly, the output signal y(t-1) one sample before the current time t is obtained by

  y(t-1) = ( Σ_{i=0..t-1} α_x(i)·x(i) ) / ( Σ_{i=0..t-1} α_x(i) )   … (2)
- As an evaluation value of the output signal y(t), an output reliability α_y(t) is introduced that expresses the closeness of y(t) to the true value, that is, the reliability with which y(t) represents the true value. The output reliability α_y(t-1) of the output signal y(t-1) one sample before the current time t is defined by

  α_y(t-1) = Σ_{i=0..t-1} α_x(i)   … (3)
- Using the output reliability, the output signal y(t) of equation (1) can be expressed as

  y(t) = ( α_y(t-1)·y(t-1) + α_x(t)·x(t) ) / ( α_y(t-1) + α_x(t) )   … (4)

  which can in turn be represented by the following weighted average using multiplication and addition:

  y(t) = w(t)·y(t-1) + (1 - w(t))·x(t)   … (8)
- The weight w(t) (and 1 - w(t)) used in equation (8) can be obtained from equation (6), using the output reliability α_y(t-1) of the output signal y(t-1) one sample before and the input reliability α_x(t) of the current input signal x(t):

  w(t) = α_y(t-1) / ( α_y(t-1) + α_x(t) )   … (6)

  1 - w(t) = α_x(t) / ( α_y(t-1) + α_x(t) )   … (7)
- The output reliability α_y(t) of the current output signal y(t) can likewise be obtained from the output reliability α_y(t-1) of the output signal y(t-1) one sample before and the input reliability α_x(t) of the current input signal x(t), according to equation (5):

  α_y(t) = α_y(t-1) + α_x(t)   … (5)
- Here, the input reliability α_x(t) of the input signal x(t) and the output reliability α_y(t) of the output signal y(t) use the reciprocals of the respective variances σ_x(t)² and σ_y(t)², that is,

  α_x(t) = 1 / σ_x(t)²,  α_y(t) = 1 / σ_y(t)²
- The NR circuit in FIG. 4 performs a correction parameter calculation process that calculates the weight w(t) as a correction parameter according to equation (6), and, using that weight w(t), performs a correction process that takes the weighted average of the output signal y(t-1) one sample before and the current input signal x(t) according to equation (8), thereby effectively removing the noise contained in the input signal x(t).
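The per-sample behaviour of equations (5), (6), and (8) can be sketched in Python. This is an illustrative sketch, assuming the caller tracks the previous output and the reliabilities (the function name and return convention are assumptions):

```python
def nr_step(x, y_prev, alpha_y_prev, alpha_x):
    """One sample of the NR correction: weight per equation (6),
    output per equation (8), output reliability update per equation (5)."""
    w = alpha_y_prev / (alpha_y_prev + alpha_x)   # equation (6)
    y = w * y_prev + (1.0 - w) * x                # equation (8)
    alpha_y = alpha_y_prev + alpha_x              # equation (5)
    return y, w, alpha_y
```

Note that as α_y grows with each sample, w(t) approaches 1 and the output leans ever more on its own history, which is exactly the behaviour expected when the true value is constant.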
- However, the output signal obtained by correcting the input signal with the weight w(t) obtained according to equation (6) is not always felt by the user to be optimal. The NR circuit in FIG. 4 therefore performs a control data learning process that learns the user's operation of the operation unit 2 to obtain parameter control data for controlling (correcting) the weight w(t) as a correction parameter, and corrects the input signal using the weight corrected by that parameter control data.
- the control data learning process is performed as follows.
- The weight W_i corresponding to the i-th learning operation signal can be considered the weight the user thinks optimal for the input signal that was being input when that learning operation signal was given. Therefore, in the control data learning process, it suffices to obtain parameter control data that can correct the weight w(t) obtained according to equation (6) to a value close to (ideally, the same as) the weight W_i corresponding to the learning operation signal.
- To this end, the weight w(t) obtained according to equation (6) is used as student data (a learning student), and the weight W_i corresponding to the learning operation signal is used as teacher data (a learning teacher). From the student data, a predicted value W_i' of the teacher data W_i is obtained by the linear expression defined by parameter control data a and b:

  W_i' = a·w_i + b   … (13)
- In equation (13) (and likewise in equations (14) and (16) to (21) described later), W_i represents the weight given as teacher data in response to the i-th learning operation signal, and w_i represents the weight w(t) as student data obtained according to equation (6) at that time.
- N represents the number of sets of teacher data and student data.
- The minimum of the sum of squared errors in equation (15) is given by the a and b that set the right-hand sides of equations (16) and (17), the partial derivatives of that sum with respect to a and b, to zero. Setting the right-hand sides of equations (16) and (17) to zero yields equation (18) from equation (16) and equation (19) from equation (17), respectively.
- Solving these, the parameter control data a can be obtained by

  a = ( N·Σ w_i·W_i - Σ w_i · Σ W_i ) / ( N·Σ w_i² - (Σ w_i)² )   … (20)

- and the parameter control data b can be obtained by

  b = ( Σ W_i - a·Σ w_i ) / N   … (21)
- control data learning processing for obtaining parameter control data a and b is performed as described above.
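Equations (20) and (21) are the ordinary least-squares solution for the line W = a·w + b. A Python sketch of the batch computation over stored learning pairs (the function name is illustrative):

```python
def fit_parameter_control_data(w_student, W_teacher):
    """Least-squares fit of W ~ a*w + b, per equations (20) and (21)."""
    N = len(w_student)
    s_w = sum(w_student)                                  # sum of student data
    s_W = sum(W_teacher)                                  # sum of teacher data
    s_ww = sum(w * w for w in w_student)                  # sum of squares
    s_wW = sum(w * W for w, W in zip(w_student, W_teacher))  # sum of products
    a = (N * s_wW - s_w * s_W) / (N * s_ww - s_w ** 2)    # equation (20)
    b = (s_W - a * s_w) / N                               # equation (21)
    return a, b
```

For pairs lying exactly on a line, the fit recovers it: `fit_parameter_control_data([0.0, 1.0, 2.0], [1.0, 3.0, 5.0])` gives a = 2, b = 1.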
- the correction processing will be described with reference to the flowchart in FIG.
- The weight memory 31 of the correction unit 21 stores the weight w(t) in an overwritten form.
- The arithmetic unit 33 of the correction unit 21 subtracts the weight w(t) from 1.0 to obtain the weight 1-w(t), supplies it to the weight memory 32, and has it stored in an overwritten form.
- In step S21, the arithmetic unit 34 calculates the product of the input signal x(t) and the weight 1-w(t) stored in the weight memory 32 and supplies it to the arithmetic unit 36. Also in step S21, the arithmetic unit 35 calculates the product of the weight w(t) stored in the weight memory 31 and the output signal y(t-1) latched by the latch circuit 37, and supplies it to the arithmetic unit 36.
- The arithmetic unit 36 adds the product of the input signal x(t) and the weight 1-w(t) to the product of the weight w(t) and the output signal y(t-1), thereby obtaining the weighted sum (1-w(t))·x(t) + w(t)·y(t-1) of the input signal x(t) and the output signal y(t-1), which it outputs as the output signal y(t).
- This output signal y (t) is also supplied to the latch circuit 37, and the latch circuit 37 stores the output signal y (t) in an overwritten form. Thereafter, the process returns to step S21, and waits for the input signal of the next sample to be supplied. Thereafter, the same processing is repeated.
- In step S31, the input reliability calculation unit 42 obtains, for example, the input reliability α_x(t) based on the variance of the input signal. That is, the input reliability calculation unit 42 has a built-in FIFO (First In First Out) memory that can latch not only the current input signal sample x(t) but also the past several samples; it calculates the variance using the current sample x(t) and those past samples, computes its reciprocal as the input reliability α_x(t), and supplies it to the output reliability calculation unit 43 and the weight calculation unit 45. Immediately after input of the signal begins, there may not yet be as many samples as are required to calculate the variance; in this case, for example, a default value is output as the input reliability α_x(t).
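The variance-over-a-FIFO computation can be sketched as follows. The window length, the default value, and the handling of zero variance are illustrative assumptions of this sketch, not values given in the text:

```python
from collections import deque

class InputReliability:
    """Input reliability alpha_x(t) as the reciprocal of the variance of
    the current sample and the past few samples held in a FIFO."""

    def __init__(self, window=5, default=1.0):
        self.fifo = deque(maxlen=window)  # built-in FIFO memory
        self.default = default            # used until the FIFO fills

    def update(self, x):
        self.fifo.append(x)
        if len(self.fifo) < self.fifo.maxlen:
            return self.default           # not enough samples yet
        mean = sum(self.fifo) / len(self.fifo)
        var = sum((s - mean) ** 2 for s in self.fifo) / len(self.fifo)
        return 1.0 / var if var > 0 else self.default
```

A noisier window gives a larger variance and hence a smaller reliability, which in turn reduces the weight given to the current input sample.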
- Thereafter, in step S32, the weight calculation unit 45 calculates the weight w(t) according to equation (6), using the input reliability α_x(t) from the input reliability calculation unit 42 and the output reliability α_y(t-1) of one sample before, latched by the latch circuit 44. This weight w(t) is supplied to the weight correction unit 46.
- In step S34, the weight correction unit 46 determines whether the parameter control data read from the parameter control data memory 57 is auto mode data, that is, data representing a mode (auto mode) in which the weight w(t) is not corrected and the weight w(t) automatically obtained from the input reliability and the output reliability is used as-is as the weight W for correcting the input signal x(t), regardless of the user's operation of the operation unit 2.
- If it is determined in step S34 that the parameter control data is not auto mode data, the process proceeds to step S35, where the weight correction unit 46 corrects the weight w(t) supplied from the weight calculation unit 45 according to the linear expression of equation (13) defined by the parameter control data a and b supplied from the parameter control data memory 57.
- In step S36, the weight correction unit 46 supplies the corrected weight to the selection unit 41, and the process proceeds to step S37.
- Here, w_i in equation (13) corresponds to the weight w(t) supplied from the weight calculation unit 45, and W_i' corresponds to the corrected weight W.
- On the other hand, if it is determined in step S34 that the parameter control data is auto mode data, step S35 is skipped and the process proceeds to step S36, where the weight correction unit 46 supplies the weight w(t) from the weight calculation unit 45 to the selection unit 41 as it is, and the process proceeds to step S37.
- In step S37, the output reliability calculation unit 43 updates the output reliability. That is, the output reliability calculation unit 43 adds the input reliability α_x(t) calculated by the input reliability calculation unit 42 in the immediately preceding step S31 to the output reliability α_y(t-1) of one sample before, latched by the latch circuit 44, according to equation (5), thereby obtaining the current output reliability α_y(t), which it stores in the latch circuit 44 in an overwritten form.
- In step S38, the selection unit 41 determines from the output of the operation signal processing unit 50 whether or not the operation unit 2 is being operated by the user. If it is determined in step S38 that the operation unit 2 is not being operated, the process proceeds to step S39, where the selection unit 41 selects the weight supplied from the weight correction unit 46 (hereinafter referred to as the correction weight as appropriate), outputs it to the correction unit 21, and returns to step S31.
- If it is determined in step S38 that the operation unit 2 is being operated, the process proceeds to step S40, where the selection unit 41 selects the weight output by the operation signal processing unit 50 in accordance with the operation, outputs it to the correction unit 21, and returns to step S31. Therefore, in the correction parameter calculation processing of FIG. 7, when the operation unit 2 is not operated, the correction weight is supplied to the correction unit 21, and when the operation unit 2 is operated, the weight corresponding to the operation signal is supplied to the correction unit 21. As a result, the correction unit 21 corrects the input signal by the correction weight when the operation unit 2 is not operated, and by the weight corresponding to the operation signal when it is operated.
- Furthermore, in the correction parameter calculation processing of FIG. 7, in the auto mode the weight used for the correction processing is obtained only from the input reliability and the output reliability, regardless of the operation of the operation unit 2, while outside the auto mode the weight used in the correction processing is obtained using the parameter control data obtained by learning, based on the user's operation of the operation unit 2, in the control data learning process described next.
- In step S41, the operation signal processing unit 50 determines whether or not a learning operation signal has been received from the operation unit 2; if it determines that no learning operation signal has been received, the process returns to step S41.
- Suppose it is determined in step S41 that a learning operation signal has been received from the operation unit 2, that is, for example, that after operation of the operation unit 2 starts, the operation continues for a second time t2 or longer without a pause of a first time t1 or longer, and is then stopped continuously for a third time t3 or longer, so that it can be judged that the user operated the operation unit 2 until a desired output signal was obtained. In that case, the process proceeds to step S42, where the teacher data generation unit 51 generates teacher data and the student data generation unit 52 generates student data.
- That is, when it determines that a learning operation signal has been received, the operation signal processing unit 50 supplies the weight W corresponding to that learning operation signal (corresponding, for example, to the operation amount of the operation unit 2 or the position of its knob or lever), together with the learning flag, to the teacher data generation unit 51 and the student data generation unit 52.
- the teacher data generation unit 51 acquires the weight W as teacher data and supplies it to the learning data memory 53.
- the student data generating unit 52 obtains a weight w corresponding to the input signal at that time as student data and supplies the weight w to the learning data memory 53.
- Here, the weight w corresponding to the input signal means the weight automatically obtained from the input reliability and the output reliability according to equation (6); as described above, the student data generation unit 52 calculates this weight w from the input signal.
- When the learning data memory 53 receives the teacher data W from the teacher data generation unit 51 and the student data w from the student data generation unit 52, it stores the set of the latest teacher data W and student data w in step S43, and then proceeds to step S44.
- In step S44, the parameter control data calculation unit 54 performs, for the teacher data and the student data, the additions used in the least-squares method. That is, the parameter control data calculation unit 54 performs the computations corresponding to the summations in equations (20) and (21): the summation of products of student data and teacher data (Σ w_i·W_i), the summation of student data (Σ w_i), the summation of teacher data (Σ W_i), and the summation of squares of the student data (Σ w_i²).
- Suppose that N-1 sets of teacher data and student data have already been obtained, and that the N-th set has now been obtained as the latest teacher data and student data.
- At this point, the parameter control data calculation unit 54 has already performed the additions for the N-1 sets of teacher data and student data. Therefore, for the N-th set, if the results of the additions already performed for the N-1 sets are held, the additions over the N sets of teacher data and student data, including the latest, can be obtained simply by adding in the N-th set.
- The parameter control data calculation unit 54 therefore stores the results of the previous additions as learning information in the learning information memory 55, and performs the addition for the N-th set of teacher data and student data using this learning information. The addition also requires the number N of sets of teacher data and student data used so far, and the learning information memory 55 stores this number N as part of the learning information.
- After performing the additions in step S44, the parameter control data calculation unit 54 stores the addition results in the learning information memory 55 as learning information in an overwritten form, and proceeds to step S45.
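Holding the previous addition results as learning information means the sums in equations (20) and (21) can be updated one learning pair at a time, without re-reading earlier pairs. A Python sketch of this incremental bookkeeping (class and method names are illustrative):

```python
class LearningInfo:
    """Running sums needed by equations (20) and (21), updated one
    learning pair at a time."""

    def __init__(self):
        self.N = 0                 # number of pairs seen so far
        self.s_w = 0.0             # sum of student data  (sum w_i)
        self.s_W = 0.0             # sum of teacher data  (sum W_i)
        self.s_ww = 0.0            # sum of squares       (sum w_i^2)
        self.s_wW = 0.0            # sum of products      (sum w_i * W_i)

    def add_pair(self, w, W):
        self.N += 1
        self.s_w += w
        self.s_W += W
        self.s_ww += w * w
        self.s_wW += w * W

    def solve(self):
        """Return (a, b) per equations (20) and (21), or None if the
        data does not yet determine them."""
        denom = self.N * self.s_ww - self.s_w ** 2
        if self.N < 2 or denom == 0:
            return None
        a = (self.N * self.s_wW - self.s_w * self.s_W) / denom
        b = (self.s_W - a * self.s_w) / self.N
        return a, b
```

Returning None when the denominator vanishes mirrors the case, handled in step S45, where a and b cannot yet be obtained from the learning information.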
- In step S45, the parameter control data calculation unit 54 determines whether it is possible to obtain the parameter control data a and b by equations (20) and (21) from the addition results stored as learning information in the learning information memory 55.
- If it is determined in step S45 that the parameter control data a and b cannot be obtained, the parameter control data calculation unit 54 supplies that determination to the determination control unit 56, and the process proceeds to step S49.
- step S49 the judgment control unit 56 supplies, as parameter control data, auto mode data indicating the auto mode to the parameter control data memory 57 and stores it. Then, returning to step S41, the same processing is repeated thereafter.
- In this case, the weight w(t) automatically obtained from the input reliability and the output reliability, as described with reference to FIG. 7, is used as-is for correcting the input signal x(t).
- On the other hand, if it is determined in step S45 that the parameter control data a and b can be obtained, the process proceeds to step S46, where the parameter control data calculation unit 54 obtains the parameter control data a and b by calculating equations (20) and (21) using the learning information, supplies them to the judgment control unit 56, and proceeds to step S47.
- In step S47, the judgment control unit 56 obtains, from each student data stored in the learning data memory 53, the predicted value of the corresponding teacher data according to the linear expression of equation (13) defined by the parameter control data a and b from the parameter control data calculation unit 54, and finds the sum of the squared errors of those predicted values, represented by equation (15). Further, the judgment control unit 56 obtains a normalized error by dividing the sum of the squared errors by, for example, the number of learning pairs stored in the learning data memory 53, and proceeds to step S48.
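The normalized error is the sum of squared prediction errors of equation (13) divided by the number of learning pairs. A Python sketch (the helper name and the pair representation are assumptions of this sketch):

```python
def normalized_error(a, b, pairs):
    """Sum of squared errors of the prediction W ~ a*w + b over the
    stored learning pairs (w_i, W_i), divided by the number of pairs."""
    sq = sum((W - (a * w + b)) ** 2 for w, W in pairs)
    return sq / len(pairs)
```

A small normalized error means the line defined by a and b approximates the student-to-teacher relationship closely, which is the condition checked against the threshold S1 in step S48.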
- In step S48, the determination control unit 56 determines whether the normalized error is greater than (or equal to or greater than) a predetermined threshold S1. If the normalized error is greater than the threshold S1, that is, if the linear expression of equation (13) defined by the parameter control data a and b does not accurately approximate the relationship between the student data and the teacher data stored in the learning data memory 53, the process proceeds to step S49, where the determination control unit 56 supplies auto mode data representing the auto mode to the parameter control data memory 57 as the parameter control data, as described above, and stores it there. The process then returns to step S41, and the same processing is repeated thereafter.
- Therefore, when the linear expression of equation (13) defined by the parameter control data a and b does not closely approximate the relationship between the student data and the teacher data stored in the learning data memory 53, the weight w(t) automatically obtained from the input reliability and the output reliability is used as-is for correcting the input signal x(t), just as when there is not enough learning information to obtain the parameter control data a and b.
- On the other hand, if it is determined in step S48 that the normalized error is not greater than the predetermined threshold S1, the judgment control unit 56 obtains the error (distance) E between the regression line represented by the linear expression of equation (13) defined by the parameter control data a and b from the parameter control data calculation unit 54 and the point specified by the latest teacher data and student data stored in the learning data memory 53.
- in step S51, the determination control unit 56 determines whether the magnitude of the error E is larger than (or equal to or larger than) a predetermined threshold S2. If it is not larger, step S52 is skipped, the process proceeds to step S53, and the determination control unit 56 outputs the parameter control data a and b obtained in step S46 to the parameter control data memory 57.
- the parameter control data memory 57 stores the parameter control data a and b from the determination control unit 56 in overwriting form, and the process returns to step S41.
- if it is determined in step S51 that the magnitude of the error E is larger than the predetermined threshold S2, the process proceeds to step S52, where the determination control unit 56 controls the parameter control data calculation unit 54 to recalculate the parameter control data a and b using only a predetermined number of the most recent learning pairs of the teacher data and student data stored in the learning data memory 53 (without using the learning information in the learning information memory 55). Then, the process proceeds to step S53, where the determination control unit 56 outputs the parameter control data a and b obtained in step S52 to the parameter control data memory 57, stores them in overwriting form, and the process returns to step S41.
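The recalculation of step S52 can be sketched as a refit over a sliding window of recent pairs. The window size K and the closed-form simple-regression formulas are choices of this sketch, not taken from the patent.

```python
# Hypothetical sketch of step S52: when the newest pair deviates from
# the current line by more than S2, refit the parameter control data
# a, b by ordinary least squares over only the most recent K learning
# pairs, discarding the accumulated (older) learning information.

def fit_line(student, teacher):
    """Least-squares a, b for W ~ a*w + b over the given pairs."""
    n = len(student)
    sw = sum(student)
    sW = sum(teacher)
    sww = sum(w * w for w in student)
    swW = sum(w * W for w, W in zip(student, teacher))
    denom = n * sww - sw * sw
    a = (n * swW - sw * sW) / denom
    b = (sW - a * sw) / n
    return a, b

def refit_recent(student, teacher, k):
    # Forget everything except the last k pairs.
    return fit_line(student[-k:], teacher[-k:])

# Last two pairs (2, 4) and (3, 7) define the line W = 3*w - 2.
print(refit_recent([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 4.0, 7.0], 2))  # (3.0, -2.0)
```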
- accordingly, if the parameter control data a and b can be obtained and the linear expression of equation (13) defined by them approximates with high accuracy the relationship between the student data and the teacher data stored in the learning data memory 53, the weight w(t) obtained from the input reliability and the output reliability is corrected according to equation (13), defined by the parameter control data a and b obtained by learning with the learning pairs obtained from the user's operation of the operation unit 2, and the correction weight W obtained by that correction is used for correcting the input signal x(t).
- here, the regression line represented by the linear expression of equation (13) defined by the parameter control data a and b obtained in step S46 is, as illustrated in the figure, the line that minimizes (the sum of) the errors with respect to the N points specified by the N sets of teacher data and student data, and in step S50 the error E between this line and the point specified by the latest teacher data and student data is obtained.
- if the magnitude of this error E is not larger than the threshold S2, the regression line represented by the linear expression of equation (13) defined by the parameter control data a and b obtained in step S46 can be considered to approximate, with comparative accuracy, all the points specified by the teacher data and student data given so far, including the point specified by the latest teacher data and student data.
- on the other hand, if the magnitude of the error E is larger than the threshold S2, the point specified by the latest teacher data and student data deviates greatly from that regression line, and so the determination control unit 56 controls the parameter control data calculation unit 54 to recalculate, in step S52, the parameter control data a and b using only some of the most recent learning pairs among the learning pairs stored in the learning data memory 53.
- that is, the parameter control data calculation unit 54, without using (forgetting) the learning information as the past addition results stored in the learning information memory 55, obtains the parameter control data a and b that define the straight line of equation (13) that best approximates the set of points defined by only a number of recent sets of teacher data and student data.
- specifically, the parameter control data calculation unit 54 obtains, for example, parameter control data a' and b' that define a straight line passing through the point defined by the latest teacher data and student data and the point defined by the teacher data and student data given the time before (both indicated in FIG. 9C).
- as described above, it is determined whether an operation signal supplied in response to a user's operation can be used for learning, and if it is a learning operation signal that can be used for learning, the parameter control data a and b for correcting the weight used to correct the input signal are learned based on that learning operation signal. Consequently, the user's operation can be learned without the user's knowledge, and, based on the learning result, processing gradually becomes more appropriate for the user, until ultimately processing optimal for the user is performed.
- in other words, as the user operates the operation unit 2 so as to obtain a desired output signal, the device comes, so to speak, to fit the user's hand: eventually the user can obtain optimal noise removal results for various input signals without performing any operation at all. By the stage at which the device has become familiar in this way, a qualitative relationship has been established between the user's operation of the operation unit 2 and the weight W used for correcting the input signal.
- also, in the NR circuit of FIG. 4, when the user operates the operation unit 2 so that a desired output signal is obtained, the weight W used in the correction processing (FIG. 6) performed by the correction unit 21 changes in accordance with that operation.
- that is, when the user operates the operation unit 2, the operation signal processing unit 50 outputs the weight represented by the operation signal corresponding to that operation, the selection unit 41 selects that weight and supplies it to the correction unit 21, and in the correction unit 21 the correction processing represented by equation (8) is performed using the weight corresponding to the user's operation.
- when the weight w(t) of equation (8) is changed by the user's operation, the content of the processing (correction processing) represented by equation (8) naturally changes as well.
- in the NR circuit of FIG. 4, therefore, it can be said that the "contents of processing" are changed in accordance with the user's operation so that the output signal desired by the user is obtained.
- further, in the NR circuit of FIG. 4, when a sufficient number of learning pairs or learning pairs capable of highly accurate approximation have not been obtained, so that the linear expression of equation (13) defined by the parameter control data a and b cannot accurately approximate the relationship between the student data and the teacher data, the weight automatically obtained from the input reliability and the output reliability is used for the correction processing in the correction unit 21.
- on the other hand, when the parameter control data a and b can be obtained and the linear expression of equation (13) defined by them approximates with high accuracy the relationship between the student data and the teacher data stored in the learning data memory 53, the weight obtained from the input reliability and the output reliability is corrected according to equation (13), defined by the parameter control data a and b obtained by learning with the learning pairs obtained from the user's operation of the operation unit 2, and the correction weight obtained by that correction is used for the correction processing by the correction unit 21.
- that is, while a sufficient number of learning pairs or learning pairs enabling highly accurate approximation have not been obtained, the weight obtained from the input reliability and the output reliability is used for the correction processing in the correction unit 21, and once learning pairs enabling highly accurate approximation are input through the user's operations, learning is performed using those learning pairs.
- the correction weight obtained from the parameter control data a and b found by that learning is then used for the correction processing in the correction unit 21.
- in other words, the algorithm for obtaining the weight w(t) of equation (8) changes between the case where such learning pairs have not been obtained and the case where learning pairs enabling high-precision approximation have been obtained.
- the content of the correction processing represented by equation (8) therefore changes; from this point of view as well, the NR circuit of FIG. 4 can be said to change the content of the processing in accordance with the user's operation so that the output desired by the user is obtained.
- that is, while such learning pairs have not been obtained, the weight is obtained from the input reliability and the output reliability regardless of the user's operation, whereas after they are obtained, the weight is determined based on the parameter control data obtained by learning using the learning pairs obtained from the user's operation.
- in this case, therefore, it can be said that the processing system for calculating the weight, that is, the algorithm for obtaining the weight, is changed in accordance with the user's operation so that the output signal desired by the user is obtained.
- a change in the “contents of the processing” described above corresponds to a change in the function F.
- as the input signal, not only an image signal or an audio signal but also other signals can be used.
- when the input signal is an image signal, the input reliability is calculated based on the variance obtained from a plurality of pixels that are spatially or temporally close to the pixel to be processed.
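A minimal sketch of that variance-based input reliability, assuming a reciprocal-of-variance formulation over a 3x3 spatial neighbourhood (both the window size and the epsilon guard are assumptions of this sketch, not details from the patent):

```python
# Sketch: the input reliability of a pixel is taken as the reciprocal
# of the variance of its spatial neighbourhood -- flat regions are
# trusted more than noisy or edgy ones.

def input_reliability(image, r, c, eps=1e-12):
    # Gather the 3x3 neighbourhood, clipped at the image borders.
    vals = [image[i][j]
            for i in range(max(0, r - 1), min(len(image), r + 2))
            for j in range(max(0, c - 1), min(len(image[0]), c + 2))]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return 1.0 / (var + eps)   # eps avoids division by zero on flat patches

flat = [[5.0] * 3 for _ in range(3)]
noisy = [[0.0, 9.0, 1.0], [8.0, 2.0, 7.0], [3.0, 6.0, 4.0]]
print(input_reliability(flat, 1, 1) > input_reliability(noisy, 1, 1))  # True
```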
- in the above case, in the learning unit 22, the weight w obtained from the input reliability and the like is corrected to the correction weight W according to the linear expression of equation (13) defined by the parameter control data a and b, and this single correction formula is used.
- it is also possible, however, to prepare a plurality of correction-weight calculation formulas, select the one that minimizes the normalization error obtained from the learning pairs produced by the user's operations, and calculate the correction weight with the selected formula. In that case, the algorithm for obtaining the correction weight is itself switched in accordance with the user's operation, so here too it can be said that the "processing structure" is changed in accordance with the user's operation.
- FIG. 10 shows another detailed configuration example when the processing unit 11 of the optimization device 1 in FIG. 4 is applied to an NR circuit.
- the NR circuit of FIG. 10 is basically configured in the same way as that of FIG. 4, except that it does not include the weight correction unit 46 and that an input reliability calculation unit 61 and a student data generation unit 62 are provided in place of the input reliability calculation unit 42 and the student data generation unit 52.
- the input reliability calculation unit 61 calculates the input reliability of the input signal from a plurality of samples of the input signal and the parameter control data stored in the parameter control data memory 57, and supplies it to the output reliability calculation unit 43 and the weight calculation unit 45.
- the student data generator 62 acquires the input signal and the output reliability output by the output reliability calculator 43 as student data, and supplies the student data to the learning data memory 53.
- the weight calculated by the weight calculation unit 45 is supplied to the selection unit 41 as it is.
- the selector 41 selects one of the weight output by the weight calculator 45 and the weight output by the operation signal processor 50 in the same manner as in FIG. Output.
- the parameter control data functions as data for controlling the input reliability.
- the NR circuit of FIG. 10 also performs correction processing, correction parameter calculation processing, and control data learning processing, as does the NR circuit of FIG. 4. Since the correction processing is the same as the processing described with reference to FIG. 6, its description is omitted for the NR circuit of FIG. 10, and the correction parameter calculation processing and the control data learning processing are described below.
- in the NR circuit of FIG. 10, the correction parameter calculation processing and the control data learning processing are performed with the input reliability α_x(t), which defines the weight shown in equation (6) and is used in the correction processing, defined by, for example, the following expression.
- here, a1, a2, ..., aN are the parameter control data, and x1, x2, ..., xN are the samples of the input signal about to be processed.
- when the input signal is, for example, an image signal, x1, x2, ..., xN can be, for example, the pixel serving as the target sample (indicated by an X mark in FIG. 11) and pixels spatially or temporally close to it (indicated by triangles in FIG. 11).
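Under an assumed form of equation (22), in which the input reliability is a linear combination of the N samples with the parameter control data as coefficients, the calculation can be sketched as follows (the exact expression in the patent may differ):

```python
# Sketch: input reliability parameterised as a linear combination of
# the N samples x1..xN around the sample being processed, with the
# parameter control data a1..aN as coefficients.  The form of the
# combination is an assumption of this sketch.

def input_reliability_from_params(a, x):
    assert len(a) == len(x)
    return sum(ai * xi for ai, xi in zip(a, x))

a = [0.5, 0.3, 0.2]   # learned parameter control data a1..aN (invented values)
x = [2.0, 4.0, 6.0]   # target sample and its spatio-temporal neighbours
print(input_reliability_from_params(a, x))  # 0.5*2 + 0.3*4 + 0.2*6, approximately 3.4
```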
- since it is generally difficult to find parameter control data a1, a2, ..., aN that always satisfies equation (25), here, for example, the parameter control data a1, a2, ..., aN that minimizes the sum of the squared errors between the left-hand and right-hand sides of equation (25) is found by the least squares method.
- minimizing the sum of the squared errors between the left and right sides of equation (25) means minimizing the squared error between the weight w(t) given by equation (23) and the weight W given by the user. That is, with the weight W provided by the user as teacher data, and the input signal samples x1, x2, ..., xN defining the weight w(t) of equation (23) together with the output reliability α_y(t-1) as student data, it means minimizing the squared error between the weight w(t) calculated by equation (23) from the student data and the weight W as teacher data given by the user.
- the weight w(t) calculated by equation (23) from such parameter control data a1, a2, ..., aN and the student data has a small error with respect to the teacher data W.
- the squared error e² between the left and right sides of equation (25) is given by equation (26).
- equation (30) can be solved for the matrix A, that is, for the parameter control data a1, a2, ..., aN, by, for example, the Cholesky method.
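A sketch of solving the normal equations of equation (30) for the parameter control data; numpy is used, and the explicit Cholesky factorization mirrors the method named above. The small worked data are invented for illustration.

```python
import numpy as np

# Sketch of solving the normal equations X A = Y (equation (30)) for
# the parameter control data a1..aN.  X = S^T S is symmetric positive
# definite once enough independent learning pairs have accumulated, so
# a Cholesky factorization X = L L^T followed by forward and back
# substitution solves the system.

def solve_parameter_control_data(S, W):
    """S: rows of student data (x1..xN per learning pair); W: teacher weights."""
    S = np.asarray(S, dtype=float)
    W = np.asarray(W, dtype=float)
    X = S.T @ S                    # summed products of student data
    Y = S.T @ W                    # summed products of student and teacher data
    L = np.linalg.cholesky(X)      # X = L L^T
    z = np.linalg.solve(L, Y)      # forward substitution: L z = Y
    return np.linalg.solve(L.T, z) # back substitution: L^T a = z

S = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
W = [2.0, 3.0, 5.0]                # consistent with a = (2, 3)
print(solve_parameter_control_data(S, W))  # approximately [2. 3.]
```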
- as described above, in the NR circuit of FIG. 10, control data learning processing is performed in which, with the weight W supplied in accordance with the user's operation as teacher data, and the input signal samples x1, x2, ..., xN defining the weight w(t) of equation (23) together with the output reliability α_y(t-1) as student data, the parameter control data a1, a2, ..., aN that minimizes the squared error between the weight w(t) calculated by equation (23) from the student data and the weight W as teacher data is learned by the least squares method. Further, the NR circuit of FIG. 10 performs correction parameter calculation processing.
- first, in step S61, the input reliability calculation unit 61 reads the parameter control data from the parameter control data memory 57, and the process proceeds to step S62.
- in step S62, the input reliability calculation unit 61 determines whether the parameter control data read from the parameter control data memory 57 is auto mode data, which represents a mode in which the input reliability is obtained without using the parameter control data, that is, a mode in which the input reliability is obtained automatically using only the input signal, regardless of the user's operation of the operation unit 2 (this mode is hereafter also referred to as the auto mode, as appropriate).
- if it is determined in step S62 that the parameter control data is not the auto mode data, the process proceeds to step S63, where the input reliability calculation unit 61 obtains the input reliability according to equation (22) defined by the parameter control data a1 to aN read from the parameter control data memory 57, using the latest N samples x1 to xN of the input signal supplied to it, supplies it to the output reliability calculation unit 43 and the weight calculation unit 45, and the process proceeds to step S65.
- if it is determined in step S62 that the parameter control data is the auto mode data, the process proceeds to step S64, where the input reliability calculation unit 61 obtains the input reliability α_x(t) based on, for example, the variance of the input signal, as described above, and supplies the obtained input reliability α_x(t) to the output reliability calculation unit 43 and the weight calculation unit 45.
- in step S65, the weight calculation unit 45 obtains the weight w(t) according to equation (23), using the input reliability α_x(t) from the input reliability calculation unit 61 and the output reliability α_y(t-1) output one sample before and latched in the latch circuit 44. This weight w(t) is supplied from the weight calculation unit 45 to the selection unit 41.
- next, in step S66, the output reliability calculation unit 43 updates the output reliability α_y(t), as in step S37 described above, by adding, according to equation (5), the input reliability α_x(t) supplied from the input reliability calculation unit 61 and the output reliability α_y(t-1) of one sample before latched by the latch circuit 44.
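Steps S65 and S66 can be sketched together, under the assumed forms of equations (5), (6), and (8) for the reliability-weighted recursion (the exact formulas are inferred from the surrounding text, not quoted from the patent):

```python
# Sketch of one auto-mode step: the weight balances the output
# reliability carried over from the previous sample against the
# reliability of the current input sample, the output is a weighted
# blend, and the output reliability accumulates.

def nr_step(x_t, alpha_x, y_prev, alpha_y_prev):
    w = alpha_y_prev / (alpha_y_prev + alpha_x)   # assumed equation (6)
    y_t = w * y_prev + (1.0 - w) * x_t            # assumed equation (8)
    alpha_y = alpha_y_prev + alpha_x              # assumed equation (5)
    return y_t, alpha_y

# Equal reliabilities -> the output is the midpoint of y(t-1) and x(t).
y, ay = nr_step(x_t=4.0, alpha_x=1.0, y_prev=2.0, alpha_y_prev=1.0)
print(y, ay)  # 3.0 2.0
```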
- thereafter, in step S67, the selection unit 41 determines from the output of the operation signal processing unit 50 whether the operation unit 2 has been operated by the user. If it is determined in step S67 that the operation unit 2 has not been operated, the process proceeds to step S68, where the selection unit 41 selects the weight supplied from the weight calculation unit 45, outputs it to the correction unit 21, and the process returns to step S61.
- if it is determined in step S67 that the operation unit 2 has been operated, the process proceeds to step S69, where the selection unit 41 selects the weight output by the operation signal processing unit 50 in accordance with that operation, outputs it to the correction unit 21, and the process returns to step S61.
- accordingly, in this correction parameter calculation processing, when the operation unit 2 is not operated, the input signal is corrected with the weight based on the input reliability, and when the operation unit 2 is operated, the input signal is corrected with the weight corresponding to the operation signal.
- further, in the auto mode, the weight used in the correction processing is obtained from the input reliability based on the variance of the input signal, regardless of the operation of the operation unit 2; when not in the auto mode, the weight used in the correction processing is obtained from the input reliability that is calculated, based on the operation of the operation unit 2, using the parameter control data obtained by learning in the control data learning processing of FIG. 13 described later.
- in step S71, the operation signal processing unit 50 determines, as in step S41 described above, whether a learning operation signal has been received from the operation unit 2; if it is determined that one has not been received, the process returns to step S71. If it is determined in step S71 that a learning operation signal has been received from the operation unit 2, that is, for example, when the first time t1 or more has elapsed after the start of an operation of the operation unit 2,
- the process proceeds to step S72, where the teacher data generation unit 51 generates teacher data and the student data generation unit 62 generates student data.
- that is, when a learning operation signal is received, the operation signal processing unit 50 supplies the weight W corresponding to the learning operation signal to the teacher data generation unit 51 and the student data generation unit 62 together with a learning flag.
- the teacher data generation unit 51 acquires the weight W as teacher data and supplies it to the learning data memory 53.
- the student data generation unit 62 has a built-in buffer (not shown) for storing the input signal, always stores the input signal in that buffer up to its storage capacity, and, when it receives the weight with the learning flag, reads from the buffer the samples x1 to xN of the input signal that have a predetermined positional relationship with the sample of the input signal being input at that time. Further, the student data generation unit 62 reads the output reliability α_y(t-1) from the output reliability calculation unit 43. The student data generation unit 62 then supplies these input signal samples x1 to xN and the output reliability α_y(t-1) to the learning data memory 53 as student data.
- when the learning data memory 53 receives the teacher data W from the teacher data generation unit 51 and the student data x1 to xN and α_y(t-1) from the student data generation unit 62, it stores the latest set of the teacher data W and the student data x1 to xN and α_y(t-1) (a learning pair) in step S73, and the process proceeds to step S74. In step S74, the parameter control data calculation unit 54 performs addition for the least squares method on the teacher data and the student data.
- that is, the parameter control data calculation unit 54 performs the operations corresponding to the products of student data with each other and of student data with teacher data, and their summations, which are the components of the matrices X and Y in equation (29).
- the addition in step S74 is performed in the same manner as in step S44 described above. That is, the previous addition results are stored as learning information in the learning information memory 55, and the parameter control data calculation unit 54 uses this learning information to perform the addition for the latest teacher data and student data.
- after the addition, the parameter control data calculation unit 54 stores the addition results in the learning information memory 55 as learning information in overwriting form, and the process proceeds to step S75, where the parameter control data calculation unit 54 determines whether equation (30) can be solved for the matrix A from the addition results as the learning information stored in the learning information memory 55, that is, whether the parameter control data a1 to aN can be obtained.
- that is, equation (30) cannot be solved for the matrix A unless learning information obtained from a predetermined number or more of learning pairs exists, and the parameter control data a1 to aN, which are the components of A, cannot be determined from it. Therefore, in step S75, it is determined from the learning information whether the parameter control data a1 to aN can be obtained.
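The solvability decision of step S75 can be sketched as a rank check on the accumulated normal-equation matrix; the specific criterion below (row count plus matrix rank) is an assumption standing in for the patent's condition.

```python
import numpy as np

# Sketch of the step S75 decision: the normal-equation matrix X built
# from the accumulated learning information is only solvable once
# enough independent learning pairs exist; a singularity check stands
# in for the patent's criterion.

def can_solve_for_parameters(S):
    S = np.asarray(S, dtype=float)
    if S.shape[0] < S.shape[1]:        # fewer learning pairs than parameters
        return False
    X = S.T @ S
    return np.linalg.matrix_rank(X) == X.shape[0]

print(can_solve_for_parameters([[1.0, 2.0]]))              # False: one pair, two unknowns
print(can_solve_for_parameters([[1.0, 0.0], [0.0, 1.0]]))  # True: independent pairs
```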
- if it is determined in step S75 that the parameter control data a1 to aN cannot be obtained, the parameter control data calculation unit 54 notifies the determination control unit 56 to that effect, and the process proceeds to step S79.
- in step S79, the determination control unit 56 supplies auto mode data representing the auto mode to the parameter control data memory 57 as the parameter control data and stores it there. Then, the process returns to step S71, and the same processing is repeated thereafter.
- accordingly, while there is no learning information sufficient to obtain the parameter control data a1 to aN, the weight obtained from the input reliability based on the variance of the input signal is used for correcting the input signal x(t), as described with reference to FIG. 12.
- on the other hand, if it is determined in step S75 that the parameter control data can be obtained, the process proceeds to step S76, where the parameter control data calculation unit 54 obtains the parameter control data a1 to aN as the components of the solution by solving equation (30) for the matrix A using the learning information, supplies them to the determination control unit 56, and the process proceeds to step S77.
- in step S77, the determination control unit 56 obtains, according to equation (23) defined by the parameter control data a1 to aN from the parameter control data calculation unit 54, a predicted value of the corresponding teacher data from each student data stored in the learning data memory 53, and finds the sum of the squared prediction errors (errors with respect to the teacher data stored in the learning data memory 53) represented by equation (26).
- further, the determination control unit 56 obtains a normalization error by dividing that sum of squared errors by, for example, the number of learning pairs stored in the learning data memory 53, and proceeds to step S78.
- in step S78, the determination control unit 56 determines whether the normalization error is larger than (or equal to or larger than) a predetermined threshold S1.
- if it is determined in step S78 that the normalization error is larger than the predetermined threshold S1, that is, if equation (23) defined by the parameter control data a1 to aN does not approximate with high accuracy the relationship between the student data and the teacher data stored in the learning data memory 53, the process proceeds to step S79, where the determination control unit 56, as described above, supplies auto mode data representing the auto mode to the parameter control data memory 57 as the parameter control data and stores it there. Then, the process returns to step S71, and the same processing is repeated thereafter.
- accordingly, if equation (23) defined by the parameter control data a1 to aN does not accurately approximate the relationship between the student data and the teacher data stored in the learning data memory 53, then, just as in the case where learning information sufficient to obtain the parameter control data a1 to aN does not exist, the weight obtained from the input reliability based on the variance of the input signal is used for correcting the input signal x(t).
- on the other hand, if it is determined in step S78 that the normalization error is not larger than the predetermined threshold S1, that is, if equation (23) defined by the parameter control data a1 to aN accurately approximates the relationship between the student data and the teacher data stored in the learning data memory 53, the process proceeds to step S80, where the determination control unit 56 obtains the error (distance) E between the surface (plane) defined by equation (23) with the parameter control data a1 to aN obtained by the parameter control data calculation unit 54 and the point defined by the latest teacher data and student data stored in the learning data memory 53.
- in step S81, the determination control unit 56 determines whether the magnitude of the error E is larger than (or equal to or larger than) a predetermined threshold S2. If it is not larger, step S82 is skipped, the process proceeds to step S83, and the determination control unit 56 outputs the parameter control data a1 to aN obtained in step S76 to the parameter control data memory 57. The parameter control data memory 57 stores the parameter control data a1 to aN from the determination control unit 56 in overwriting form, the process returns to step S71, and the same processing is repeated thereafter.
- on the other hand, if it is determined in step S81 that the magnitude of the error E is larger than the predetermined threshold S2, the process proceeds to step S82, where the determination control unit 56 controls the parameter control data calculation unit 54 to recalculate the parameter control data a1 to aN using only the most recent teacher data and student data stored in the learning data memory 53. Then, the process proceeds to step S83, where the determination control unit 56 outputs the parameter control data a1 to aN obtained in step S82 to the parameter control data memory 57, stores them in overwriting form, and the process returns to step S71.
- that is, here, the error E between the surface defined by equation (23) with the parameter control data a1 to aN obtained from the teacher data and student data given so far and the point defined by the latest teacher data and student data is obtained.
- if the magnitude of this error E is not larger than the threshold S2, the surface defined by equation (23) with the parameter control data a1 to aN obtained in step S76 can be considered to approximate, with comparative accuracy, all the points specified by the teacher data and student data given so far, including the point specified by the latest teacher data and student data, and so these parameter control data a1 to aN are stored in the parameter control data memory 57.
- on the other hand, if the magnitude of the error E is larger than the threshold S2, the point defined by the latest teacher data and student data deviates greatly from that surface, and so the determination control unit 56 controls the parameter control data calculation unit 54 to recalculate, in step S82, the parameter control data a1 to aN using only the most recent teacher data and student data stored in the learning data memory 53.
- thereafter, the input reliability calculation unit 61 calculates the input reliability of the input signal based on the parameter control data a1 to aN obtained in this way.
- accordingly, in this case, learning of the parameter control data a1 to aN, which define the input reliability α_x(t) of equation (22), is performed based on the user's operation; therefore, the user's operation can be learned without the user's knowledge, and, using the learning result, processing optimal for the user can be performed.
- the operation signal processing unit 50 outputs the weight represented by the operation signal corresponding to the operation.
- the selection unit 41 selects the weight and supplies it to the correction unit 21.
- the correction unit 21 performs the correction process represented by Expression (8) using the weight corresponding to the user's operation.
- when the weight w(t) of equation (8) is changed by the user's operation in this way, the content of the processing (correction processing) represented by equation (8) naturally changes as well.
- from this point of view, the NR circuit of FIG. 10 can also be said to change the "contents of processing" in accordance with the user's operation so that the output signal desired by the user is obtained.
- further, in the NR circuit of FIG. 10, when the parameter control data a1 to aN can be obtained and equation (23) defined by them closely approximates the relationship between the student data and the teacher data stored in the learning data memory 53, the weight obtained according to equation (23), defined by the parameter control data a1 to aN learned from the learning pairs produced by the user's operation of the operation unit 2, from the input reliability calculated from the input signal and the parameter control data a1 to aN, together with the output reliability, is used for the correction processing by the correction unit 21.
- that is, in the NR circuit of FIG. 10, as in the NR circuit of FIG. 4, the algorithm for calculating the weight used in the correction processing differs between the case where a sufficient number of learning pairs or learning pairs capable of highly accurate approximation have not been obtained and the case where learning pairs capable of highly accurate approximation have been obtained.
- in the above case, the output reliability α_y(t-1) is used as student data in obtaining the parameter control data a1 to aN; since the input reliability α_x(t) is gradually improved by the control data learning processing of FIG. 13 so that the weight desired by the user is obtained, the output reliability α_y(t), which is obtained from the input reliability, is improved accordingly.
- in the above case, with the output reliability as a known value, the input reliability was defined by the parameter control data a1 to aN, and the parameter control data a1 to aN were obtained so that the weight desired by the user is obtained; conversely, it is also possible to define the output reliability by parameter control data a1 to aN, with the input reliability as a known value, and to obtain the parameter control data a1 to aN so that the weight desired by the user is obtained.
- further, it is also possible to obtain two sets of parameter control data: with the output reliability as a known value, parameter control data a1 to aN that define the input reliability so that the weight desired by the user is obtained, and, with the input reliability as a known value, parameter control data a1' to aN' that define the output reliability so that the weight desired by the user is obtained. That is, the two sets of parameter control data a1 to aN and a1' to aN' can be obtained.
- further, the weight is defined by the input reliability α_x(t) and the output reliability α_y(t-1), as shown in equation (6); it is also possible to define a correction term for the input reliability α_x(t) or the output reliability α_y(t-1) and to obtain that correction term from the parameter control data a1 to aN.
- the expression that defines the input reliability by the parameter control data is not limited to equation (22).
- FIG. 14 shows a second detailed configuration example of the optimization device in FIG.
- the internal information generation unit 71 is newly provided in the optimization device 1 of FIG. 2; otherwise the configuration of the processing unit 11 is the same as that of FIG. 2, so its description is omitted here.
- the display unit 81 is provided outside the optimizing device 1.
- the internal information generation unit 71 reads out the internal information of the processing unit 11, converts it into image information, and displays (presents) it on a display unit 81 such as an LCD (Liquid Crystal Display) or a CRT (Cathode Ray Tube). More specifically, the display unit 81 may display the internal information numerically as it is, or a display screen such as a level gauge may be provided and the level gauge made to fluctuate according to the value of the internal information.
- the display unit 81 is not limited to this; other display methods may be used as long as they visually display (present) the internal information.
- as the internal information, for example, the weights stored in the weight memories 31 and 32 of the correction unit 21 and the contents stored in the learning data memory 53 and the learning information memory 55 of the learning unit 22 can be adopted. The internal information may also be presented to the user by a method other than display, that is, by sound or the like.
- in step S102, the weight W is displayed on the display unit 81. That is, the internal information generation unit 71 reads, for example, the value of the weight W stored in the weight memory 31 as internal information, converts it into an image signal that can be displayed on the display unit 81, and supplies it to the display unit 81, where the weight W is displayed (presented); the process then returns to step S91.
- the weight W, as internal information regarding the processing actually executed in the processing unit 11, is thus displayed (presented) to the user, who can operate the operation unit 2 so that an optimum output signal is obtained while watching this display of the internal information.
- in addition to the internal information such as the weight described above, the internal information generation unit 71 may read out and display the parameter control data a and b from the parameter control data memory 37 (FIGS. 4 and 10) of the learning unit 22.
- image information indicating whether the weight selected by the selector 41 (FIGS. 4 and 10) is the weight obtained from the parameter control data a and b found by learning using learning pairs, or the weight obtained from the input reliability and the output reliability, may also be generated as internal information.
- FIG. 16 illustrates a configuration example of an embodiment of an automobile autonomous driving system to which the optimization device of FIG. 1 is applied.
- the position coordinates (X, Y) and traveling direction θ of the vehicle are obtained, and the vehicle is caused to travel along a predetermined trajectory.
- the coordinates (X, Y) and traveling direction θ obtained by the automatic traveling device often include an error, however, and in that case the vehicle may run off the predetermined trajectory.
- in the automatic traveling device shown in FIG. 16, the operation of the user is learned without the user's knowledge, and the vehicle is caused to travel along the predetermined trajectory based on the learning result. That is, when the vehicle starts running off the predetermined trajectory, the user generally operates the steering wheel, the accelerator, and the like so that the vehicle runs along the predetermined trajectory; it is this operation that is learned.
- the gyro sensor 91 detects the yaw rate r of the vehicle and supplies it to the calculation unit 93.
- the wheel pulser 92 supplies the arithmetic unit 93 with electric pulses in a number corresponding to the rotation angle of the vehicle's wheels.
- the calculation unit 93 calculates the coordinates (X, Y) and the traveling direction θ of the vehicle from the outputs of the gyro sensor 91 and the wheel pulser 92 according to, for example, the following formula, and supplies them to the optimization device 94.
- Equation (32):
θ(t) = θ(0) + ∫ r dt
X(t) = X(0) + ∫ Vr cos(θ(t) + β) dt
Y(t) = Y(0) + ∫ Vr sin(θ(t) + β) dt
Here, θ(0) represents the direction at the start of driving of the car, and (X(0), Y(0)) are the coordinates at the start of driving of the car. Note that θ(0) and (X(0), Y(0)) can be obtained by, for example, a GPS (Global Positioning System), not shown. Vr represents the running speed of the car, and β represents the slip angle of the center of gravity of the car.
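As a sketch, the dead-reckoning integration of equation (32) can be discretized as follows; the time step `dt`, the function name, and the per-step lists of sensor values are illustrative assumptions, not taken from the patent.

```python
import math

def dead_reckon(theta0, x0, y0, yaw_rates, speeds, slip_angles, dt):
    """Integrate yaw rate r and speed Vr into heading theta and position (X, Y),
    following the form of equation (32) (discretized; names are assumptions)."""
    theta, x, y = theta0, x0, y0
    track = []
    for r, vr, beta in zip(yaw_rates, speeds, slip_angles):
        theta += r * dt                         # theta(t) = theta(0) + integral of r dt
        x += vr * math.cos(theta + beta) * dt   # X(t) = X(0) + integral of Vr*cos(theta+beta) dt
        y += vr * math.sin(theta + beta) * dt   # Y(t) = Y(0) + integral of Vr*sin(theta+beta) dt
        track.append((x, y, theta))
    return track
```

Driving straight ahead (zero yaw rate and slip angle) at speed 1 for one second should advance X by one unit and leave Y and θ unchanged, which gives a quick sanity check of the integration.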
- the optimization device 94 is composed of a processing unit 101 and learns the operation of the operation unit 98 by the user, that is, learns based on an operation signal supplied when the user operates the operation unit 98. Based on the learning result, it corrects the coordinates (X, Y) and the traveling direction θ from the computing unit 93 so that the traveling desired by the user is performed, and supplies them to the automatic traveling control unit 95.
- the automatic travel control unit 95 stores map data and a preset locus to be automatically traveled (hereinafter, appropriately referred to as a set locus). The automatic traveling control unit 95 recognizes the current position and the traveling direction of the vehicle from the coordinates (X, Y) and the traveling direction θ supplied from the processing unit 101 of the optimization device 94, generates a control signal for controlling a drive unit 97, described later, so that the vehicle travels along the set locus, and outputs it to the selection unit 96.
- the selection unit 96 is supplied with a control signal from the automatic traveling control unit 95 and an operation signal from the operation unit 98, and preferentially selects the operation signal over the control signal for output to the driving unit 97. That is, the selection unit 96 normally selects the control signal of the automatic driving control unit 95 and outputs it to the driving unit 97; however, upon receiving the operation signal from the operation unit 98, it stops outputting the control signal from the automatic traveling control unit 95 while the operation signal is being received, and outputs the operation signal from the operation unit 98 to the drive unit 97.
- the drive unit 97 drives, according to a control signal or an operation signal from the selection unit 96, an engine (not shown) of an automobile, and various mechanisms necessary for traveling, such as wheels, brakes, and clutches.
- the operation unit 98 includes, for example, a steering wheel, an accelerator pedal, a brake pedal, a clutch pedal, and the like, and supplies an operation signal corresponding to the user's operation to the optimization device 94 and the selection unit 96.
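The priority logic of the selection unit 96 — pass the user's operation signal through while one is present, otherwise pass the automatic controller's control signal — might be sketched as follows; the signal representation and the function name are assumptions for illustration only.

```python
def select_drive_signal(control_signal, operation_signal=None):
    """Model of the selection unit 96: the operation signal from the operation
    unit 98 takes priority; the control signal from the automatic traveling
    control unit 95 is used only while no operation signal is being received."""
    if operation_signal is not None:
        return operation_signal   # user is operating: suppress the control signal
    return control_signal         # normal case: automatic traveling
```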
- the current coordinates (X, Y) and traveling direction θ of the vehicle are computed in the computing unit 93 from the outputs of the gyro sensor 91 and the wheel pulser 92, and are supplied to the automatic traveling control unit 95 via the processing unit 101 of the optimizing device 94.
- the automatic travel control unit 95 recognizes the current position and the travel direction of the vehicle from the coordinates (X, Y) and the travel direction θ supplied thereto, generates a control signal for controlling the driving unit 97 so that the vehicle travels along the set locus, and supplies it to the driving unit 97 via the selecting unit 96. As a result, the vehicle automatically travels according to the control signal output by the automatic travel control unit 95.
- the operation signal output from the operation unit 98 is also supplied to the processing unit 101 of the optimization device 94.
- the optimization device 94 performs learning based on an operation signal supplied when the user operates the operation unit 98.
- the processing unit 101 of the optimization device 94 corrects, based on the learning result, the coordinates (X, Y) and the traveling direction θ supplied from the calculation unit 93 so that travel along the set locus, as the travel desired by the user, is performed, and supplies the corrected values to the automatic travel control unit 95.
- FIG. 17 illustrates a configuration example of the processing unit 101 of the optimization device 94 in FIG.
- the processing unit 101 in FIG. 17 does not include the selection unit 41, and an operation signal processing unit 110 and a teacher data generation unit 111 are provided instead of the operation signal processing unit 50 and the teacher data generation unit 51; otherwise the configuration is basically the same as that of the processing unit 11 in FIG. 4.
- of the coordinates (X, Y) and the traveling direction θ supplied from the calculation unit 93 to the processing unit 101 of the optimization device 94, only the traveling direction θ will be focused on in the following description. For the coordinates (X, Y), the same processing as that described below for the traveling direction θ can be performed.
- the operation signal processing unit 110 receives the operation signal from the operation unit 98 and determines whether it is a learning operation signal; when it is, the operation signal processing unit 110 supplies a message to that effect to the student data generation unit 52 and the teacher data generation unit 111.
- the teacher data generation unit 111 is supplied with a message indicating that the operation signal is a learning operation signal (hereinafter, appropriately referred to as a learning message) from the operation signal processing unit 110, and is also supplied with the running direction θ from the calculation unit 93 as an input signal.
- the teacher data generation unit 111 is further supplied with the output signal output from the correction unit 21 (arithmetic unit 36), obtained by correcting the traveling direction θ from the calculation unit 93 (hereinafter referred to as the corrected traveling direction).
- the teacher data generation unit 111 obtains a weight W corresponding to the learning operation signal from the traveling direction θ supplied as an input signal when the learning message is received and from the corrected traveling direction as an output signal, and supplies it to the learning data memory 53 as teacher data.
- that is, as the teacher data, the weight W at the time when the vehicle has turned in the predetermined direction after the user operated the operation unit 98 as the steering wheel so that the vehicle turns in that direction is obtained.
- as the teacher data, it is necessary to adopt the weight W used for the correction of the input signal x(t) representing the traveling direction θ immediately after the user operates the operation unit 98 as the steering wheel and the vehicle turns to the desired direction.
- from equation (8), the output signal y(t) immediately after the operation of the operation section 98 is obtained, as the corrected traveling direction, by a weighted addition of the input signal x(t) immediately after the operation and the output signal y(t-1) output immediately before the operation. The weight W used for correcting the input signal x(t) immediately after the operation can therefore be obtained, from equation (8), using the input signal x(t) and the output signal y(t) immediately after the operation of the operation unit 98 and the output signal y(t-1) immediately before it. Accordingly, the teacher data generator 111 obtains the weight W as teacher data from the traveling direction θ as the input signal x(t) supplied immediately after receiving the learning message and the output signals supplied immediately after and immediately before receiving the learning message, and supplies it to the learning data memory 53.
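Since equation (8) has the form y(t) = (1 - W)·y(t-1) + W·x(t), the teacher-data weight can be recovered by solving for W. A minimal sketch; the function name and the guard against a degenerate denominator are assumptions:

```python
def teacher_weight(x_t, y_t, y_prev):
    """Solve equation (8), y(t) = (1 - W) * y(t-1) + W * x(t), for the
    weight W, given the signals around the user's learning operation."""
    denom = x_t - y_prev
    if denom == 0.0:  # degenerate case: input equals the previous output
        raise ZeroDivisionError("W is undefined when x(t) == y(t-1)")
    return (y_t - y_prev) / denom
```

For example, with x(t) = 10, y(t) = 8 and y(t-1) = 5 the recovered weight is 0.6, and substituting it back into equation (8) reproduces y(t) = 8.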
- the student data generation unit 52 stores, in the learning data memory 53 as student data, the weight w obtained from the running direction supplied as the input signal up to immediately before the learning message.
- that is, the student data generator 52 incorporates the same configuration as the input reliability calculator 42, the output reliability calculator 43, the latch circuit 44, and the weight calculator 45; it calculates the weight w (the same weight w as that obtained by the weight calculation unit 45) from the traveling direction supplied thereto as the input signal, and supplies the weight w calculated immediately before receiving the learning message to the learning data memory 53 as student data.
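The interplay of equations (5), (6), and (8) — input reliability from the variance of recent input samples, weight w(t) = αx(t)/(αy(t-1) + αx(t)), recursive output correction, and accumulation of the output reliability — can be sketched as follows. The window length, the reciprocal-of-variance definition of αx, and the fallback for zero variance are assumptions for this sketch.

```python
from collections import deque

class ReliabilityWeight:
    """Sketch of the reliability-based weight computation (equations (5), (6), (8)).
    Input reliability alpha_x is taken as the reciprocal of the variance of the
    last few input samples; window and initial values are assumptions."""
    def __init__(self, window=5):
        self.samples = deque(maxlen=window)
        self.alpha_y = 0.0   # output reliability alpha_y(t-1)
        self.y = None        # previous output signal y(t-1)

    def step(self, x):
        self.samples.append(x)
        mean = sum(self.samples) / len(self.samples)
        var = sum((s - mean) ** 2 for s in self.samples) / len(self.samples)
        alpha_x = 1.0 / var if var > 0 else 1e6       # input reliability
        w = alpha_x / (self.alpha_y + alpha_x)        # equation (6)
        self.y = x if self.y is None else (1 - w) * self.y + w * x  # equation (8)
        self.alpha_y += alpha_x                       # equation (5)
        return w, self.y
```

Feeding a constant input should leave the output at that constant with a weight between 0 and 1, which is a quick consistency check of the recursion.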
- that is, the weight W at the time when the traveling direction has become the direction desired by the user through the user's operation of the operation unit 98 is used as the teacher data, and the same weight w output by the weight calculator 45 immediately before the operation of the operation unit 98 is used as the student data; the calculation of the parameter control data a and b shown in equations (20) and (21) is then performed.
- the weight correction unit 46 then corrects the weight w obtained by the weight calculation unit 45 according to equation (13), using the parameter control data a and b thus obtained, and the corrected weight is supplied to the correction unit 21.
- since the parameter control data a and b correct the weight w obtained by the weight calculation unit 45 so that the traveling direction immediately before the user operates the operation unit 98 is corrected to the traveling direction immediately after the user operates it, the vehicle comes to travel automatically along the set track.
- the fact that the user operates the operation unit 98 means that, due to an error of the gyro sensor 91, noise included in its output, a calculation error in the calculation unit 93, and the like, the traveling direction θ output by the arithmetic unit 93 includes an error and does not represent the true traveling direction of the vehicle, so that the actual traveling direction of the vehicle is considered to have deviated from the set trajectory. In this case, the operation of the operation unit 98 by the user is considered to change the actual traveling direction of the vehicle to a direction along the set trajectory. Therefore, when the user operates the operation unit 98, the weight W at the time when the actual traveling direction of the car comes to follow the set trajectory is used as the teacher data, and the weight w obtained by the weight calculation unit 45 immediately before that operation, that is, the weight w output by the weight calculation unit 45 in the state deviating from the set trajectory, is learned as the student data. As a result, the parameter control data a and b of equation (13), which correct the weight of equation (6), are obtained so that the traveling direction in the state deviating from the set locus is corrected to one along the set locus.
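The least-squares computation of equations (20) and (21) — fitting the line W = a·w + b of equation (13) to the accumulated (student, teacher) learning pairs — is ordinary simple linear regression. A minimal sketch; the function name and argument layout are assumptions:

```python
def fit_parameter_control_data(student_w, teacher_W):
    """Fit the line W = a*w + b (equation (13)) to learning pairs by least
    squares, corresponding to equations (20) and (21). Raises when the
    student data are all identical (the normal equations are degenerate)."""
    n = len(student_w)
    sw = sum(student_w)                                   # sum of w
    sW = sum(teacher_W)                                   # sum of W
    sww = sum(w * w for w in student_w)                   # sum of w^2
    swW = sum(w * W for w, W in zip(student_w, teacher_W))  # sum of w*W
    denom = n * sww - sw * sw
    if denom == 0:
        raise ValueError("parameter control data a and b cannot be obtained")
    a = (n * swW - sw * sW) / denom
    b = (sW - a * sw) / n
    return a, b
```

The `denom == 0` branch mirrors the case described below where the parameter control data cannot be obtained and the auto mode is used instead.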
- the processing of the processing unit 101 of the optimization device 94 in FIG. 17 will be described.
- in the processing unit 101, a correction process for correcting the traveling direction θ, a correction parameter calculation process for finding the weight as the correction parameter used in the correction process, and a control data learning process for obtaining, by learning the operation of the operation unit 98 (FIG. 16) by the user, the parameter control data for controlling (correcting) the weight as a correction parameter are performed. The correction process is the same as the correction process (FIG. 6) by the NR circuit in FIG. 4, so its description is omitted here.
- the correction parameter calculation process and the control data learning process performed by the processing unit 101 of the optimization device 94 in FIG. 17 will now be described.
- first, in step S111, the input reliability calculation unit 42 obtains the input reliability αx(t) based on the variance of the traveling direction θ from the calculation unit 93 (FIG. 16), and supplies it to the output reliability calculation unit 43 and the weight calculation unit 45. The process then proceeds to step S112, where the weight calculator 45 obtains the weight w(t) according to equation (6) using the input reliability αx(t) from the input reliability calculator 42, supplies it to the weight correction unit 46, and proceeds to step S113.
- in step S113, the weight correction unit 46 reads the parameter control data from the parameter control data memory 57, and proceeds to step S114.
- in step S114, the weight correction unit 46 determines whether the parameter control data read from the parameter control data memory 57 is auto mode data, which represents a mode (auto mode) in which the weight w(t) is not corrected, that is, in which the weight w(t) automatically obtained by the weight calculation unit 45 from the input reliability and the output reliability is used as it is as the weight W for correcting the input signal x(t), regardless of the operation of the operation unit 98 (FIG. 16).
- if it is determined in step S114 that the parameter control data is not the auto mode data, the process proceeds to step S115, where the weight correction unit 46 corrects the weight w(t) supplied from the weight calculation unit 45 according to the linear expression of equation (13) defined by the parameter control data a and b supplied from the parameter control data memory 57, and proceeds to step S116.
- in step S116, the weight correction unit 46 supplies the corrected weight to the correction unit 21 as a correction parameter, and proceeds to step S117.
- on the other hand, if it is determined in step S114 that the parameter control data is the auto mode data, step S115 is skipped and the process proceeds to step S116, where the weight correction unit 46 supplies the weight w(t) from the weight calculation unit 45 as it is to the correction unit 21 as a correction parameter, and proceeds to step S117.
- in step S117, the output reliability calculation unit 43 updates the output reliability. That is, the output reliability calculation unit 43 adds, according to equation (5), the input reliability αx(t) calculated by the input reliability calculation unit 42 in the immediately preceding step S111 and the output reliability one sample earlier latched by the latch circuit 44, thereby obtaining the current output reliability αy(t), and stores it in the latch circuit 44 in overwriting form.
- in the auto mode, the weight used in the correction processing is thus obtained from the input reliability and the output reliability regardless of the operation of the operation unit 98; when the mode is not the auto mode, the weight used in the correction processing is obtained, based on the operation of the operation unit 98, using the parameter control data found by learning in the control data learning process described below.
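Steps S111 to S117 can be read as a single function: compute w(t) by equation (6), either pass it through unchanged (auto mode) or correct it by equation (13), and update the output reliability by equation (5). The representation of the mode (`None` for auto mode, a tuple (a, b) otherwise) is an assumption of this sketch.

```python
def correction_parameter(alpha_x, alpha_y_prev, param_control_data):
    """Steps S111-S117 in miniature: weight w(t) by equation (6), then either
    auto mode (use w(t) as is) or correction by equation (13); returns the
    correction parameter W and the updated output reliability alpha_y(t)."""
    w = alpha_x / (alpha_y_prev + alpha_x)   # step S112, equation (6)
    if param_control_data is None:           # step S114: auto mode data
        W = w                                # step S116: w(t) used unchanged
    else:
        a, b = param_control_data
        W = a * w + b                        # step S115: equation (13)
    alpha_y = alpha_y_prev + alpha_x         # step S117: equation (5)
    return W, alpha_y
```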
- in the control data learning process, first, in step S131, the operation signal processing unit 110 determines whether or not a learning operation signal has been received from the operation unit 98 (FIG. 16); if it is determined that it has not been received, the process returns to step S131.
- if it is determined in step S131 that the learning operation signal has been received from the operation unit 98, that is, when it can be judged that the user has operated the operation unit 98 so that the vehicle turns in a desired direction — for example, when the steering wheel as the operation unit 98, after the start of its operation, is operated continuously for a second time t2 or more without leaving an interval of a first time t1 or more and its operation is then stopped for a third time t3 or more —
- the process proceeds to step S132, where the teacher data generation unit 111 generates teacher data and the student data generation unit 52 generates student data.
- that is, when determining that it has received the learning operation signal, the operation signal processing unit 110 supplies the learning message to the teacher data generation unit 111 and the student data generation unit 52.
- upon receiving the learning message from the operation signal processing unit 110, the teacher data generation unit 111, in step S132, obtains the weight W corresponding to the learning operation signal from the traveling direction θ as the input signal supplied from the calculation unit 93 and from the output signal output from the correction unit 21 (arithmetic unit 36), obtained by correcting the traveling direction θ from the computing unit 93 (the corrected traveling direction).
- specifically, the teacher data generation unit 111 receives from the calculation unit 93 (FIG. 16) the input signal x(t) representing the traveling direction θ immediately after the user operates the operation unit 98 as the steering wheel and the vehicle turns to the desired direction. The teacher data generation unit 111 also retains the current output signal y(t) output from the correction unit 21 and the output signal y(t-1) one time earlier, that is, the output signal y(t-1) immediately before the operation of the operation unit 98; using the input signal x(t) and the output signals y(t) and y(t-1), it obtains, according to equation (8), the weight W used by the correction unit 21 when the learning operation signal was given (the weight corresponding to the learning operation signal).
- after obtaining the weight W corresponding to the learning operation signal as described above, the teacher data generation unit 111 supplies the weight W to the learning data memory 53 as teacher data.
- further, in step S132, the student data generation unit 52, having received the learning message from the operation signal processing unit 110, supplies to the learning data memory 53, as student data, the same weight w as that obtained and output by the weight calculator 45, calculated using the input reliability and the output reliability obtained from the traveling direction supplied as the input signal from the calculation unit 93 (FIG. 16) immediately before that.
- accordingly, the learning data memory 53 is supplied with a learning pair in which the teacher data is the weight W used by the correction unit 21 when the user operated the operation unit 98 to change the actual traveling direction of the vehicle to the direction desired by the user, and the student data is the weight w obtained from the input reliability and the output reliability immediately before the user operated the operation unit 98.
- upon receiving the teacher data W from the teacher data generator 111 and the student data w from the student data generator 52, the learning data memory 53 stores the latest set of teacher data W and student data w in step S133, and the process proceeds to step S134.
- in step S134, the parameter control data calculation unit 54 performs the addition in the least squares method, targeting the latest teacher data and student data stored in the learning data memory 53 and the learning information stored in the learning information memory 55, as in step S44 in FIG. 8. Further, in step S134, the parameter control data calculation section 54 stores the addition result in the learning information memory 55 as learning information in overwriting form, and proceeds to step S135.
- in step S135, as in step S45 in FIG. 8, the parameter control data calculation unit 54 determines whether or not the parameter control data a and b can be obtained by equations (20) and (21) from the addition result stored as learning information in the learning information memory 55.
- if it is determined in step S135 that the parameter control data a and b cannot be obtained, the parameter control data calculation unit 54 notifies the determination control unit 56 to that effect, and the process proceeds to step S139.
- in step S139, the judgment control unit 56 supplies auto mode data representing the auto mode to the parameter control data memory 57 as parameter control data and stores it there. Then the process returns to step S131, and thereafter the same processing is repeated.
- that is, when learning information sufficient to obtain the parameter control data a and b is not stored, the weight w(t) automatically obtained by the weight calculation unit 45 (FIG. 17) from the input reliability and the output reliability is used as it is for the correction of the input signal x(t).
- on the other hand, if it is determined in step S135 that the parameter control data a and b can be obtained, the process proceeds to step S136, where the parameter control data calculation unit 54 obtains the parameter control data a and b by calculating equations (20) and (21) using the learning information, supplies them to the judgment control unit 56, and proceeds to step S137.
- in step S137, the determination control unit 56 obtains, according to the linear expression of equation (13) defined by the parameter control data a and b from the parameter control data calculation unit 54, a predicted value of the corresponding teacher data from each piece of student data stored in the learning data memory 53, and finds the sum of the square errors, expressed by equation (15), of the prediction errors of those predicted values (the errors with respect to the teacher data stored in the learning data memory 53). Further, the determination control unit 56 obtains a normalization error by dividing the sum of the square errors by, for example, the number of learning pairs stored in the learning data memory 53, and proceeds to step S138.
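The check of steps S137 and S138 can be sketched as follows: the sum of squared prediction errors of equation (15), normalized by the number of learning pairs, is compared against the threshold S1. The function names and the concrete threshold value are assumptions of this sketch.

```python
def normalization_error(a, b, student_w, teacher_W):
    """Sum of squared errors of the predictions a*w + b against the teacher
    data (in the spirit of equation (15)), divided by the number of pairs."""
    sq = sum((a * w + b - W) ** 2 for w, W in zip(student_w, teacher_W))
    return sq / len(student_w)

def fits_well(a, b, student_w, teacher_W, threshold_s1=0.01):
    """Step S138: accept the approximation only when the normalization error
    does not exceed the threshold S1 (the value 0.01 is an assumption)."""
    return normalization_error(a, b, student_w, teacher_W) <= threshold_s1
```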
- in step S138, the determination control unit 56 determines whether the normalization error is greater than (or not less than) a predetermined threshold S1. If it is determined in step S138 that the normalization error is larger than the predetermined threshold S1, that is, if the linear expression of equation (13) defined by the parameter control data a and b does not accurately approximate the relationship between the student data and the teacher data stored in the learning data memory 53, the process proceeds to step S139, where the determination control unit 56 supplies, as described above, auto mode data representing the auto mode to the parameter control data memory 57 as the parameter control data and stores it there. Then the process returns to step S131, and the same processing is repeated thereafter.
- accordingly, when the linear expression of equation (13) defined by the parameter control data a and b does not accurately approximate the relationship between the student data and the teacher data stored in the learning data memory 53, the weight w(t) automatically obtained from the input reliability and the output reliability is used as it is for the correction of the input signal x(t), just as when there is no learning information sufficient to obtain the parameter control data a and b.
- on the other hand, if it is determined in step S138 that the normalization error is not larger than the predetermined threshold S1, that is, if the linear expression of equation (13) defined by the parameter control data a and b accurately approximates the relationship between the student data and the teacher data stored in the learning data memory 53, the process proceeds to step S140, where the determination control unit 56 finds the error (distance) ε between the regression line represented by the linear expression of equation (13) defined by the parameter control data a and b from the parameter control data calculation unit 54 and the point specified by the latest teacher data and student data stored in the learning data memory 53.
- then, in step S141, the determination control unit 56 determines whether or not the magnitude of the error ε is greater than (or not less than) a predetermined threshold S2; if it is determined that it is not, step S142 is skipped, the process proceeds to step S143, and the determination control unit 56 outputs the parameter control data a and b obtained in step S136 to the parameter control data memory 57.
- the parameter control data memory 57 stores the parameter control data a and b from the judgment control unit 56 in overwriting form, and the process returns to step S131.
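One natural reading of the error ε of step S140 is the perpendicular distance from the point (w, W) given by the latest learning pair to the regression line W = a·w + b; this interpretation, and the names used, are assumptions of the sketch below.

```python
import math

def epsilon_to_line(a, b, w_latest, W_latest):
    """Distance from the latest learning-pair point (w, W) to the regression
    line W = a*w + b, used in step S141 to decide whether the parameter
    control data should be recalculated from recent pairs only."""
    # line in implicit form: a*w - W + b = 0
    return abs(a * w_latest - W_latest + b) / math.sqrt(a * a + 1.0)
```

A point lying on the line yields ε = 0; a point far from it yields a large ε, triggering the recalculation described next.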
- on the other hand, if it is determined in step S141 that the magnitude of the error ε is larger than the predetermined threshold S2, the process proceeds to step S142, where the determination control unit 56 causes the parameter control data calculation unit 54 to recalculate the parameter control data a and b using only a predetermined number of the most recent learning pairs among the teacher data and student data stored in the learning data memory 53 (without using the learning information in the learning information memory 55). The process then proceeds to step S144, where the determination control unit 56 outputs the parameter control data a and b obtained in step S142 to the parameter control data memory 57, stores them in overwriting form, and returns to step S131.
- as described above, when the parameter control data a and b can be obtained and the linear expression of equation (13) defined by those parameter control data a and b accurately approximates the relationship between the student data and the teacher data stored in the learning data memory 53, the weight w(t) obtained from the input reliability and the output reliability is corrected according to equation (13), defined by the parameter control data a and b found by learning using the learning pairs obtained based on the operation of the operation unit 98 by the user, and the corrected weight W obtained by that correction is used for the correction of the input signal x(t).
- as described above, it is determined whether the operation signal supplied in response to the user's operation can be used for learning, and when it is a learning operation signal that can be used for learning, the parameter control data a and b for correcting the weight used to correct the input signal are learned based on that learning operation signal. The user's operation can thus be learned without the user's knowledge, and as a result, based on the learning result, processing appropriate for the user is gradually performed, and finally processing optimal for the user is performed.
- that is, while the user operates the operation unit 98 so as to correct the traveling direction to one along the set locus, the vehicle gradually comes to run automatically along the set locus.
- further, when the user operates the operation unit 98 so that the actual traveling direction of the car follows the set trajectory, the weight W used in the correction process (FIG. 6) performed by the correction unit 21 is changed. That is, when the user operates the operation unit 98 so that the traveling direction of the vehicle becomes the desired direction, the traveling direction θ as the input signal output by the calculation unit 93 (FIG. 16) changes, and the input reliability obtained from the traveling direction θ and the output reliability obtained from that input reliability also change.
- accordingly, the weight calculated by the weight calculation section 45 also changes, and the changed weight is supplied to the correction section 21 via the weight correction section 46. The correction unit 21 then performs the correction process represented by equation (8) using the weight thus supplied. Therefore, when the user operates the operation unit 98, the weight of equation (8) is changed by the user's operation and, as in the case of the NR circuit shown in FIG. 4, the contents of the processing (correction processing) represented by equation (8) are also changed; thus, in the processing unit 101 of the optimization device 94 in FIG. 17 as well, the "contents of the processing" can be said to be changed, in accordance with the user's operation, so that the user obtains the desired traveling direction.
- In the processing unit 101 of the optimizing device 94 shown in FIG. 17, as in the NR circuit described earlier, when a sufficient number of learning pairs have not been input from the user, or when learning pairs allowing highly accurate approximation have not been input, the weight obtained automatically from the input reliability and the output reliability is used for the correction processing in the correction unit 21; when learning pairs allowing highly accurate approximation are input by the user, the weight obtained from the parameter control data a and b found by learning with those learning pairs is used for the correction processing in the correction unit 21. That is, the algorithm for calculating the weight is changed depending on whether or not a sufficient number of learning pairs, or learning pairs allowing highly accurate approximation, have been obtained.
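The switching of the weight-calculation algorithm described above can be sketched as follows. This is an illustrative sketch only, not the patent's literal implementation: it assumes that the correction of equation (8) is the weighted average x'(t) = (1 - W)·x'(t-1) + W·x(t), and that the parameter control data a and b act linearly on the automatic weight; both forms are assumptions made for the sake of the example.

```python
# Sketch of the two weight sources the text describes. The formulas for the
# reliability-derived weight and the (a, b)-corrected weight are assumptions
# chosen for illustration.

def weight_from_reliability(alpha_in: float, alpha_out: float) -> float:
    """Weight obtained automatically from input/output reliabilities."""
    return alpha_in / (alpha_in + alpha_out)

def weight_from_parameters(a: float, b: float, w_auto: float) -> float:
    """Weight corrected by learned parameter control data a and b."""
    return min(max(a * w_auto + b, 0.0), 1.0)  # clamp to a valid weight

def correct(prev_out: float, x: float, w: float) -> float:
    """Correction assumed for equation (8): weighted average of old and new."""
    return (1.0 - w) * prev_out + w * x

# Until learning pairs allowing accurate approximation exist, the automatic
# weight is used; afterwards the parameter-corrected weight takes over.
w_auto = weight_from_reliability(alpha_in=4.0, alpha_out=1.0)    # 0.8
w_learned = weight_from_parameters(a=0.5, b=0.1, w_auto=w_auto)  # 0.5
```

The design point the text makes is only the switch itself: the same correction routine is fed a weight computed by one of two interchangeable algorithms.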
- Japanese Patent Application Laid-Open No. Hei 7-13625 discloses a travel control device for a work vehicle such as a rice transplanter.
- In this travel control device, the correction amount of the control parameter in the autopilot state is calculated so as to reduce the difference between information based on the user's operation state and information based on the detection results of a gyro sensor and the like.
- The automatic traveling device shown in FIG. 16 and the traveling control device disclosed in Japanese Patent Application Laid-Open No. 7-13625 have in common that the correction amount of the parameter for automatic traveling (autopilot) changes based on the operation of the user.
- However, the automatic traveling apparatus shown in FIG. 16 determines whether or not the operation signal supplied in response to the user's operation can be used for learning and, if it is a learning operation signal, learns the parameter control data for correcting the weight used to correct the input signal based on that learning operation signal; in contrast, the traveling control device described in JP-A-7-13625 calculates the correction amount of the control parameter in the autopilot state only when a switch is manually switched to the manual steering control mode. In this respect the two differ significantly.
- Therefore, in the traveling control device described in JP-A-7-13625, the user must switch the switch to set the manual steering mode each time the user feels that the autopilot control is not being performed properly and, after the calculation of the control-parameter correction amount is completed, must switch the switch again to set the autopilot control mode; the user may thus find the device troublesome.
- In contrast, in the automatic traveling device shown in FIG. 16, it is determined whether or not the operation signal supplied in response to the user's operation can be used for learning and, if it is a learning operation signal, the algorithm is changed so that the parameter control data for correcting the weight used to correct the input signal is learned based on that signal; appropriate automatic driving is therefore performed even without switching any switch. That is, since the learning of the user's operation proceeds without the user's knowledge, the learning advances while the user performs operations correcting the traveling direction, and the car gradually comes to follow the set trajectory even when the user performs no operation.
- In other words, the automatic traveling apparatus shown in FIG. 16 changes its processing structure in response to a user's operation, and differs from the traveling control apparatus described in JP-A-7-13625 in that respect as well.
- FIG. 20 shows another embodiment of the processing unit 101 of the optimizing device 94 of FIG.
- the same reference numerals are given to the portions corresponding to the case in FIG. 17, and the description thereof will be appropriately omitted below.
- That is, the processing unit 11 of the NR circuits in FIGS. 4 and 10 and the processing unit 101 of the optimizing device 94 in FIG. 17 learn, using learning pairs obtained based on user operations, the parameter control data for controlling the correction parameter (weight); the processing unit 101 in FIG. 20, by contrast, learns the correction parameters themselves using learning pairs obtained based on user operations.
- the correction unit 21 includes a correction amount calculation unit 141 and a computing unit 142,
- and the learning unit 22 includes a learning data memory 53, a learning information memory 55, a judgment control unit 56, an operation signal processing unit 110, a teacher data generation unit 143, a student data generation unit 144, a correction parameter calculation unit 145, and a correction parameter memory 146.
- The correction amount calculation unit 141 is supplied with correction parameters, described later, from the correction parameter memory 146 of the learning unit 22. The correction amount calculation unit 141
- calculates, using those correction parameters, the correction amount for correcting the traveling direction θ' as the input signal, and supplies it to the computing unit 142.
- The computing unit 142 is supplied with the correction amount from the correction amount calculation unit 141, and is also supplied with the traveling direction θ' as the input signal from the calculation unit 93 (FIG. 16).
- The computing unit 142 corrects the traveling direction θ' as the input signal by adding the correction amount to it, and outputs the corrected traveling direction as an output signal to the traveling control section 95 (FIG. 16).
- The teacher data generation unit 143 supplies, to the learning data memory 53 as teacher data, the traveling direction as the input signal supplied immediately after a learning message is received from the operation signal processing unit 110.
- The student data generation unit 144 supplies, to the learning data memory 53 as student data, the traveling direction as the input signal supplied immediately before the learning message is received from the operation signal processing unit 110.
- Using the teacher data and student data stored as learning data in the learning data memory 53 and, as necessary, the learning information stored in the learning information memory 55, the correction parameter calculation unit 145 computes new learning information and thereby learns correction parameters that minimize a predetermined statistical error, and supplies them to the judgment control unit 56. Further, the correction parameter calculation unit 145 updates the contents of the learning information memory 55 with the new learning information obtained by the learning. The correction parameter memory 146 stores the correction parameters output by the judgment control unit 56.
- In the processing unit 101 configured as described above, the traveling direction θ' supplied from the calculation unit 93 is corrected as follows.
- The calculation unit 93 calculates the traveling direction from the equation obtained by replacing r in equation (32) with the yaw rate r' output by the gyro sensor 91.
- The traveling direction θ' thus calculated from the yaw rate r' output by the gyro sensor 91 is, from equations (32) and (33), as follows.
- In order that the vehicle runs along the trajectory indicated by the solid line in the figure, the processing unit 101 of the optimizing device 94 in FIG. 20 performs
- correction parameter learning processing in which the correction parameters a0, a1, ..., aN for correcting the traveling direction θ' are learned based on learning operation signals from the user, and
- correction processing in which the traveling direction θ' from the calculation unit 93 is corrected using those correction parameters a0 to aN.
- In step S151, the correction amount calculation unit 141 calculates the correction amount using the correction parameters a0 to aN stored in the correction parameter memory 146.
- Here, the true traveling direction θ is assumed to be expressed by the correction parameters a0 to aN
- and the traveling direction θ' as the input signal, as represented by equation (37), and the correction amount is calculated on the basis of that equation (37).
- In step S152, the computing unit 142 adds the correction amount to the traveling direction θ' from the calculation unit 93 as the input signal, outputs the sum (θ in equation (37)) as an output signal, waits for the next sample of the input signal to be supplied, returns to step S151, and thereafter repeats the same processing.
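The correction processing of steps S151 and S152 can be sketched as follows. The polynomial form of equation (37), with the correction amount written as a0 + a1·θ' + ... + aN·θ'^N, is an assumption made for illustration; the excerpt states only that the true traveling direction θ is expressed through the correction parameters a0 to aN.

```python
# Minimal sketch of steps S151-S152: compute a correction amount from the
# stored correction parameters (S151), then add it to the input traveling
# direction theta' and output the sum (S152). The polynomial model is assumed.

def correction_amount(theta_prime: float, params) -> float:
    # params = [a0, a1, ..., aN]; correction as a polynomial in theta'
    return sum(a * theta_prime ** i for i, a in enumerate(params))

def correct_direction(theta_prime: float, params) -> float:
    # computing unit 142: add the correction amount to the input signal
    return theta_prime + correction_amount(theta_prime, params)

# Disable data (no usable parameters yet) is modeled here as all-zero
# parameters, which makes the correction amount zero as the text describes.
print(correct_direction(10.0, [0.0, 0.0]))   # prints 10.0 (no correction)
print(correct_direction(10.0, [0.5, 0.01]))  # prints 10.6
```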
- In step S161, the operation signal processing unit 110 determines whether or not a learning operation signal has been received from the operation unit 98 (FIG. 16). If it is determined that it has not been received, the process returns to step S161.
- If it is determined in step S161 that a learning operation signal has been received, that is, for example, when the operation unit 98, after the start of its operation, has been operated continuously for a first time t1 or longer without an interval of a second time t2 or longer, and its operation has thereafter been stopped continuously for a third time t3 or longer (a case in which it can be judged that the user operated the operation unit 98 so as to direct the car in a desired direction), the process proceeds to step S162.
- In step S162, the teacher data generation unit 143 generates teacher data, and the student data generation unit 144 generates student data.
- That is, when it determines that a learning operation signal has been received, the operation signal processing unit 110 supplies a learning message to that effect to the teacher data generation unit 143 and the student data generation unit 144.
- On receiving the learning message, the teacher data generation unit 143 acquires, as teacher data, the traveling direction as the input signal supplied immediately thereafter, and supplies it to the learning data memory 53.
- That is, the teacher data generation unit 143 supplies, to the learning data memory 53 as teacher data, the traveling direction θ as the input signal supplied immediately after the learning message is received.
- Further, the student data generation unit 144 sets, as student data, the traveling direction as the input signal supplied immediately before, that is, the traveling direction immediately before the vehicle turned in the desired direction,
- and supplies it to the learning data memory 53.
- In step S163, the learning data memory 53 stores the set of the teacher data from the teacher data generation unit 143 and the student data from the student data generation unit 144, and the process proceeds to step S164.
- In step S164, the correction parameter calculation unit 145 performs, for the teacher data and the student data, addition for the least squares method in the same manner as described for equations (22) to (30).
- The addition in step S164 is performed using the previous addition results stored as learning information in the learning information memory 55, as in the case described above.
- That is, addition is performed for obtaining the correction parameters a0 to aN that minimize the sum of the squared errors between the predicted values of the teacher data, calculated from the student data θ' by equation (37), and the corresponding teacher data.
- The correction parameter calculation unit 145 stores the addition results, as learning information, in the learning information memory 55 in overwriting fashion, and the process proceeds to step S165.
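The "addition" of step S164 can be sketched as follows: the learning information kept in the learning information memory is modeled as running sums to which each new (student, teacher) pair is added, so that the least-squares solution can be recomputed at any time without revisiting old pairs. A first-order form of equation (37), θ = a0 + a1·θ', is assumed purely for illustration; the class name and fields are likewise hypothetical.

```python
# Sketch of step S164: accumulate least-squares sums (the "learning
# information") and solve for (a0, a1) when possible. First-order model of
# equation (37) is an assumption for illustration.

class LearningInfo:
    def __init__(self):
        self.n = self.sx = self.sy = self.sxx = self.sxy = 0.0

    def add_pair(self, student: float, teacher: float) -> None:
        # addition performed on top of the previous results (overwrite storage)
        self.n += 1
        self.sx += student
        self.sy += teacher
        self.sxx += student * student
        self.sxy += student * teacher

    def solve(self):
        # returns (a0, a1) minimizing the squared error, or None when the
        # accumulated information is not yet sufficient (the disable-data case)
        d = self.n * self.sxx - self.sx * self.sx
        if self.n < 2 or d == 0.0:
            return None
        a1 = (self.n * self.sxy - self.sx * self.sy) / d
        a0 = (self.sy - a1 * self.sx) / self.n
        return a0, a1

info = LearningInfo()
for s, t in [(1.0, 2.1), (2.0, 4.0), (3.0, 6.1)]:
    info.add_pair(s, t)
print(info.solve())  # approximately (0.067, 2.0)
```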
- In step S165, the correction parameter calculation unit 145 determines whether it is possible to obtain the correction parameters a0 to aN from the addition results as the learning information stored in the learning information memory 55.
- If it is determined in step S165 that the correction parameters a0 to aN cannot be obtained, the correction parameter calculation unit 145 notifies the judgment control unit 56 to that effect, and the process proceeds to step S169.
- In step S169, the judgment control unit 56 supplies, to the correction parameter memory 146, disable data indicating that correction is disabled as the correction parameters, where it is stored. Then, the process returns to step S161, and the same processing is repeated thereafter.
- Therefore, when learning information sufficient to obtain the correction parameters a0 to aN does not exist, the correction unit 21 does not correct the input signal; that is, the correction amount of the input signal is set to zero.
- On the other hand, if it is determined in step S165 that the correction parameters can be obtained, the process proceeds to step S166, where the correction parameter calculation unit 145 obtains the correction parameters a0 to aN using the learning information, supplies them to the judgment control unit 56, and the process proceeds to step S167.
- In step S167, according to equation (37) defined by the correction parameters a0 to aN from the correction parameter calculation unit 145, the judgment control unit 56
- obtains the predicted value of the corresponding teacher data from each of the student data stored in the learning data memory 53, and calculates the sum of the squares of the prediction errors of those predicted values
- (the errors with respect to the teacher data stored in the learning data memory 53).
- Further, the judgment control unit 56 obtains a normalized error by dividing the sum of the squares of the prediction errors by, for example, the number of learning pairs stored in the learning data memory 53, and the process proceeds to step S168.
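The check of steps S167 and S168 can be sketched as follows: the learned parameters are evaluated over all stored learning pairs, the squared prediction errors are summed and normalized by the number of pairs, and the result is compared against the threshold S1. The first-order form of equation (37) used by `predict` is an assumption for illustration.

```python
# Sketch of steps S167-S168: normalized prediction error over the stored
# learning pairs, compared against threshold S1. First-order model assumed.

def predict(student: float, a0: float, a1: float) -> float:
    return a0 + a1 * student

def normalized_error(pairs, a0: float, a1: float) -> float:
    sq = sum((teacher - predict(student, a0, a1)) ** 2
             for student, teacher in pairs)
    return sq / len(pairs)  # divide by the number of learning pairs

pairs = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.5)]
err = normalized_error(pairs, a0=0.0, a1=2.0)
S1 = 0.5
use_parameters = err <= S1  # otherwise disable data is stored (step S169)
```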
- In step S168, the judgment control unit 56 determines whether or not the normalized error is greater than (or equal to or greater than) a predetermined threshold S1.
- If it is determined in step S168 that the normalized error is greater than the predetermined threshold S1, that is, if the linear expression of equation (37) defined by the correction parameters a0 to aN does not accurately approximate the relationship between the student data and the teacher data stored in the learning data memory 53, the process proceeds to step S169, where the judgment control unit 56 supplies the disable data to the correction parameter memory 146 as the correction parameters, where it is stored. Then, the process returns to step S161, and the same processing is repeated thereafter.
- Therefore, even when learning information sufficient to obtain the correction parameters a0 to aN exists,
- if equation (37) defined by those correction parameters a0 to aN does not accurately approximate the relationship between the student data and the teacher data stored in the learning data memory 53, the correction amount of the input signal x(t)
- is set to zero, just as when learning information sufficient to obtain the correction parameters a0 to aN does not exist.
- On the other hand, if it is determined in step S168 that the normalized error is not greater than the predetermined threshold S1, that is, if the linear expression of equation (37) defined by the correction parameters a0 to aN
- accurately approximates the relationship between the student data and the teacher data stored in the learning data memory 53, the process proceeds to step S170, where the judgment control unit 56 obtains the error ε between the plane of equation (37) defined by the correction parameters a0 to aN from the correction parameter calculation unit 145
- and the point defined by the latest teacher data and student data stored in the learning data memory 53.
- Then, in step S171, the judgment control unit 56 determines whether or not the magnitude of the error ε is greater than (or equal to or greater than) a predetermined threshold S2. If it is determined not to be greater, step S172 is skipped, the process proceeds to step S173, and the judgment control unit 56 outputs the correction parameters a0 to aN obtained in step S166 to the correction parameter memory 146. In this case, the correction parameter memory 146 stores the correction parameters a0 to aN from the judgment control unit 56 in overwriting fashion, and the process returns to step S161.
- On the other hand, if it is determined in step S171 that the magnitude of the error ε is greater than the predetermined threshold S2, the process proceeds to step S172, where the judgment control unit 56 controls the correction parameter calculation unit 145 so as to recalculate the correction parameters a0 to aN using only the latest teacher data and student data stored in the learning data memory 53. Then, the process proceeds to step S173, where the judgment control unit 56 outputs the correction parameters a0 to aN obtained in step S172 to the correction parameter memory 146, where they are stored in overwriting fashion, and the process returns to step S161.
- That is, in step S170, the error ε between the plane of equation (37) defined by the correction parameters a0 to aN obtained from the teacher data and student data given so far
- and the point defined by the latest teacher data and student data is obtained. If the magnitude of this error ε is not greater than the threshold S2, the plane of equation (37) defined by the correction parameters a0 to aN obtained in step S166 is considered to approximate relatively accurately all of the points defined by the teacher data and student data given so far, including the point defined by the latest teacher data and student data, and therefore those correction parameters a0 to aN are stored in the correction parameter memory 146.
- On the other hand, if the magnitude of the error ε is greater than the threshold S2, the point defined by the latest teacher data and student data is considered to be relatively far from the plane of equation (37) defined by the correction parameters a0 to aN obtained in step S166, and therefore the judgment control unit 56 causes the correction parameter calculation unit 145 to recalculate, in step S172, the correction parameters a0 to aN using only the latest teacher data and student data stored in the learning data memory 53.
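The decision of steps S170 to S173 can be sketched as follows. The first-order (line) model of equation (37) and the vertical-distance definition of ε are assumptions for illustration, as is the convention used when refitting from a single pair; the excerpt only states that the recomputation uses the latest teacher and student data alone.

```python
# Sketch of steps S170-S173: keep the parameters learned from all pairs when
# the latest learning pair lies close to the learned line; otherwise refit
# from the latest pair only. Model and distance measure are assumptions.

def epsilon(latest_student, latest_teacher, a0, a1):
    # error of the latest point relative to the learned line (vertical distance)
    return abs(latest_teacher - (a0 + a1 * latest_student))

def choose_parameters(all_params, latest_pair, S2):
    a0, a1 = all_params
    s, t = latest_pair
    if epsilon(s, t, a0, a1) <= S2:
        return a0, a1            # step S173: keep parameters from all pairs
    # step S172: recompute from the latest pair only; a single pair cannot
    # determine a line, so here a1 is kept and only a0 is refit (one simple
    # convention; the excerpt leaves the recomputation to unit 145)
    return t - a1 * s, a1

print(choose_parameters((0.0, 2.0), (3.0, 6.1), S2=0.5))  # kept: (0.0, 2.0)
print(choose_parameters((0.0, 2.0), (3.0, 9.0), S2=0.5))  # refit: (3.0, 2.0)
```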
- As described above, in the correction parameter learning processing as well, it is determined whether or not a learning operation signal has been supplied based on the user's operation and, when a learning operation signal is supplied,
- the correction parameters a0 to aN of equation (37) are learned based on it, so the user's operation can be learned without the user's knowledge and, furthermore, processing optimal for the user can be performed using the learning result.
- As a result, even when the error included in the traveling direction output by the calculation unit 93 (FIG. 16) is colored, the vehicle can be driven automatically along a predetermined set locus.
- Further, the processing unit 101 of the optimizing device 94 shown in FIG. 20 changes, in accordance with the user's operation of the operation unit 98, the correction
- parameters used in the correction processing performed by the correction unit 21, so that the actual traveling direction of the vehicle follows the set trajectory. That is, when the user operates the operation unit 98 so that the traveling direction of the automobile becomes the desired direction, the traveling directions θ output as input signals by the calculation unit 93 (FIG. 16) immediately before and immediately after the operation of the operation unit 98
- are used as the student data and the teacher data, respectively, to learn the correction parameters, whereby the correction parameters are changed.
- The changed correction parameters are supplied to the correction unit 21.
- In the correction unit 21, the correction amount is calculated using those correction parameters, and the correction processing of the input signal based on that correction amount (FIG. 22) is performed. Therefore, when the user operates the operation unit 98, the correction parameters of equation (37) are changed by the user's operation, and the content of the processing represented by equation (37) naturally changes as well; thus, in the processing unit 101 of the optimizing device 94 shown in FIG. 20 as well, it can be said that the "content of the processing" is changed in accordance with the user's operation so that the user obtains the desired traveling direction.
- Furthermore, in the processing unit 101 of FIG. 20, when a sufficient number of learning pairs have not been input from the user, or when learning pairs allowing highly accurate approximation have not been input, the correction amount of the input signal in the correction unit 21 is set to 0; when learning pairs allowing highly accurate approximation are input by the user, the input signal is corrected by the correction amount obtained from the correction parameters found by learning with those learning pairs. In other words, the algorithm by which the correction amount used in the correction processing of the correction unit 21 is calculated is changed depending on whether or not a sufficient number of learning pairs, or learning pairs allowing highly accurate approximation, have been obtained.
- Here, in step S170 of FIG. 23, the error ε between the plane of equation (37) defined by the correction parameters
- a0 to aN and the point defined by the latest teacher data and student data is obtained, and the subsequent processing is performed based on it.
- However, it is also possible to obtain the errors ε between the plane of equation (37) defined by the correction parameters a0 to aN obtained in step S166 before the latest plural teacher data and student data were supplied and the points defined by each of those plural sets of teacher data and student data, and to perform the subsequent processing based on those plural errors ε.
- Note that the processing unit 101 of the optimizing device 94 in FIG. 16 can also be configured in the same manner as the processing unit 11 of the optimizing device 1 shown in FIG. 10, in addition to the configurations shown in FIGS. 17 and 20.
- FIG. 24 shows a configuration example of another embodiment of the automatic traveling device to which the optimization device of FIG. 1 is applied.
- parts corresponding to those in FIG. 16 are denoted by the same reference numerals, and description thereof will be omitted below as appropriate. That is, in the automatic traveling device of FIG. 24, an internal information generation unit 161 is newly provided in the optimization device 94 and a display unit 171 is newly provided; otherwise,
- the configuration is the same as in the case of FIG. 16.
- The internal information generation unit 161, like the internal information generation unit in FIG. 14, reads internal information from the processing unit 101, converts it into image information, and outputs it to the display unit 171.
- the display unit 171 displays the internal information supplied from the internal information generation unit 161 in a predetermined display format.
- The processing unit 101 can be configured as shown in FIG. 17 or FIG. 20.
- When the processing unit 101 is configured as shown in FIG. 17, the same processing as described for FIG. 17 is performed except for the correction parameter calculation processing. Therefore, with reference to the flowchart of FIG. 25, the correction parameter calculation processing performed when the processing unit 101 of FIG. 24 is configured as shown in FIG. 17 will be described.
- In steps S191 to S197, the same processes as those in steps S111 to S117 in FIG. 18 are performed.
- After step S197, the process proceeds to step S198, where internal information is displayed on the display unit 171. More specifically, the internal information generation unit 161
- reads the weight W stored in the weight memory 31 (FIG. 17) as internal information, converts it into an image signal that can be displayed on the display unit 171, and outputs it to the display unit 171 for display (presentation).
- After the display in step S198, the process returns to step S191, and the same processing is repeated thereafter.
- As described above, the weight W, as internal information related to the processing of the processing unit 101, is displayed (presented) on the display unit 171,
- so that the user can operate the operation unit 98 while watching that display so that optimal automatic traveling is performed.
- In the above case, the weight W is displayed,
- but the internal information generation unit 161 may display (present) other internal information on the display unit 171.
- For example, the parameter control data a and b may be read from the parameter control data memory 37 and displayed; it is also possible to display internal information indicating whether the weight selected by the selection unit 41 is the weight obtained from the parameter control data a and b found by learning with learning pairs, or the weight obtained from the input reliability and the output reliability.
- In steps S211 to S223, the same processes as those in steps S161 to S173 of FIG. 23 are performed.
- In step S224, the internal information generation unit 161 reads, for example, the correction parameters a0 to aN stored in the correction parameter memory 146 as internal information, converts them into an image signal displayable on the display unit 171, and outputs it to the display unit 171 for display.
- Since the correction parameters a0 to aN consist of a plurality of parameters, they may be displayed, for example, in bar-graph form as shown in FIG. 27, with each parameter on the horizontal axis and its value on the vertical axis.
- Alternatively, as shown in FIG. 28, the correction parameters a0 to aN may be displayed by taking any two correction parameters ai and aj on the horizontal axis and the vertical axis, respectively.
- The correction parameters plotted on the horizontal axis and the vertical axis can be selected by the user.
- As described above, in the processing unit 101 of the optimizing device 94 in FIG. 24, the correction parameters a0
- to aN obtained by the correction parameter learning processing are displayed as internal information,
- so that the user can operate the operation unit 98 while watching that display so that optimal automatic driving is performed.
- Note that the internal information generation unit 161 may also display internal information other than the correction parameters a0 to aN.
- Further, when the disable data is stored as the correction parameters,
- the display is performed with the correction parameters a0 to aN regarded as being 0.
- Next, an optimization device 201 as another embodiment of the optimization device in FIG. 1 will be described with reference to FIG. 29.
- The optimizing device 201 includes a processing unit 211, and, for example, removes noise from an image signal or the like as an input signal and optimizes the image signal to be displayed.
- Here, the image signal is described as a representative example of the input signal, but the input signal is not limited to an image signal and may be another signal.
- The processing unit 211 includes a learning unit 221 and a mapping processing unit 222.
- The operation signal from the operation unit 202 is supplied to the learning unit 221 of the processing unit 211, and, based on the operation signal, the learning unit 221
- learns the coefficient set required for the processing of the mapping processing unit 222 and stores it in the coefficient memory 235.
- As the learning norm (learning standard) of the learning unit 221, for example, the least-N-th-power error method (least-N-th-power method) can be used; the solution by the least-N-th-power error method will be described later.
- The mapping processing unit 222 performs mapping processing that maps (converts) the input signal to a predetermined output signal. That is, the mapping processing unit 222 takes, as the pixel of interest, a pixel to be obtained of the image signal serving as the output signal, extracts the tap corresponding to that pixel of interest (at least one or more pixels
- (also called samples) required for its processing) from the image signal as the input signal, and obtains the pixel of interest by performing a product-sum operation with the coefficient set stored in the coefficient memory 235.
- The mapping processing unit 222 performs similar processing (mapping) for each of the pixels constituting the image signal as the output signal, thereby generating the image signal as the output signal, and outputs it to the display unit 203 for display.
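The mapping processing described above can be sketched as follows. The one-dimensional signal, the three-pixel tap layout, and the edge-clamping are assumptions made for illustration; the excerpt fixes only that a tap of input pixels in a predetermined positional relationship with the pixel of interest is combined with the coefficient set by a product-sum operation.

```python
# Sketch of the mapping processing of unit 222: extract a tap around each
# pixel of interest and form the output pixel as a product-sum with the
# coefficient set. The 1-D, 3-tap, edge-clamped layout is assumed.

def extract_tap(signal, i, radius=1):
    # pixels in a predetermined positional relationship with the pixel of
    # interest; indices beyond the edges are clamped
    return [signal[min(max(i + d, 0), len(signal) - 1)]
            for d in range(-radius, radius + 1)]

def map_signal(signal, coeff_set):
    out = []
    for i in range(len(signal)):
        tap = extract_tap(signal, i)
        out.append(sum(c * x for c, x in zip(coeff_set, tap)))
    return out

coeffs = [0.25, 0.5, 0.25]  # e.g. a smoothing (noise-reducing) coefficient set
print(map_signal([0, 4, 0, 4], coeffs))  # prints [1.0, 2.0, 2.0, 3.0]
```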
- The operation unit 202 is operated by the user and supplies an operation signal corresponding to the operation to the learning unit 221.
- The display unit 203 displays the image signal as the output signal output by the mapping processing unit 222.
- The teacher data generation unit 231 generates, from the input signal, teacher data serving as the teacher for learning, and outputs it to the least-N-th-power error coefficient calculation unit 234.
- The student data generation unit 232 generates, from the input signal, student data serving as the student for learning, and outputs it to the prediction tap extraction unit 233.
- The teacher data and the student data are generated, for example, by the teacher data generation unit 231 applying no processing to the input signal, and by the student data generation unit 232 degrading the input signal through predetermined thinning processing or an LPF (Low Pass Filter) or the like;
- the configuration is not limited to the above as long as the student data is generated so as to be of lower quality than the teacher data.
- That is, for example, the teacher data generation unit 231 may apply predetermined thinning or LPF processing to the input signal,
- while the student data generation unit 232 applies thinning or LPF processing to a greater extent than the teacher data generation unit 231. Further, for example, it is also possible to use the input signal as it is as the teacher data, and to use data obtained by superimposing noise on the input signal as the student data.
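The noise-superposition variant of the teacher/student generation above can be sketched as follows. The noise amplitude, the uniform noise distribution, and the fixed seed are arbitrary choices for the example; thinning or an LPF would serve equally as the degradation.

```python
# Sketch of one teacher/student generation scheme: the teacher is the input
# signal unprocessed, the student is the same signal with noise superimposed.
import random

def make_learning_data(input_signal, noise_amplitude=0.5, seed=0):
    rng = random.Random(seed)        # fixed seed for reproducibility
    teacher = list(input_signal)     # no processing applied to the teacher
    student = [x + rng.uniform(-noise_amplitude, noise_amplitude)
               for x in input_signal]
    return teacher, student

teacher, student = make_learning_data([10.0, 20.0, 30.0])
```

The only requirement the text imposes is that the student data be of lower quality than the teacher data; any degradation satisfying that is admissible.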
- The prediction tap extraction unit 233 sequentially takes, as the pixel of interest, the pixels constituting the image signal serving as the teacher data, extracts at least one or more pixels (taps) in a predetermined positional relationship with the pixel of interest from the image signal serving as the student data as a prediction tap, and outputs it to the least-N-th-power error coefficient calculation unit 234.
- Based on an operation signal input from the operation unit 202 and representing information that specifies the value of the exponent N required for the least-N-th-power error method, the least-N-th-power error coefficient calculation unit 234 calculates a coefficient set from the prediction taps and the teacher data by the least-N-th-power error method, outputs it to the coefficient memory 235, and stores it there (overwriting as appropriate).
- The coefficient memory 235 stores the coefficient set supplied from the least-N-th-power error coefficient calculation unit 234 and outputs it to the mapping processing unit 222 as appropriate.
- The tap extraction unit 251 of the mapping processing unit 222 sequentially takes, as the pixel of interest, the pixels constituting the image signal serving as the output signal, and extracts pixels in a predetermined positional relationship with the pixel of interest from the image signal as the input signal as a prediction tap, thereby forming a prediction tap with the same tap structure as that formed by the prediction tap extraction unit 233, and outputs it to the product-sum calculation unit 252.
- The product-sum calculation unit 252 performs a product-sum operation between the values of the extracted prediction tap (pixels) input from the tap extraction unit 251 and the coefficient set stored in the coefficient memory 235 of the learning unit 221, thereby generating the pixel of interest, and outputs it to the display unit 203 (FIG. 29).
- Next, the coefficient calculation by the least-N-th-power error method in the least-N-th-power error coefficient calculation unit 234 of FIG. 30 will be described.
- When the exponent N is large, the error of a predicted value y' with a large error has a large effect on the sum of the N-th-power errors.
- Therefore, coefficients are obtained in a direction that relieves such large-error predicted values y' (that is, coefficients that reduce the errors of predicted values y' with large errors).
- On the other hand, the error of a predicted value y' with a small error has little effect on the sum of the N-th-power errors, is therefore given little consideration, and as a result tends to be ignored.
- Conversely, when the exponent N is small, the error of a predicted value y' with a large error has a smaller influence on the sum of the N-th-power errors than when the exponent N is large, and the error of a predicted value with a small error likewise
- has a smaller effect on the sum of the N-th-power errors. As a result, with the least-N-th-power error method, coefficients are obtained in a direction that makes the errors of small-error predicted values even smaller than in the case where the exponent N is large.
- The sum of the N-th power errors of the predicted value y' can be expressed by Equation (38), where E represents the summation, over the number of learning samples, of the N-th power of the error e between the true value y as the teacher data and the predicted value y'.
- Here, the predicted value y' of the true value y is defined by a linear first-order combination of the prediction taps x_i and predetermined prediction coefficients w_i, that is, by the following Equation (39):

  y' = w_1 x_1 + w_2 x_2 + … + w_M x_M   … (39)

- This set of prediction coefficients w_1, w_2, …, w_M is the coefficient set stored in the coefficient memory 235 of FIG. 30.
- As the prediction taps x_1, x_2, x_3, …, x_M, pixels located at positions spatially or temporally close to the position, in the image serving as the student data, that corresponds to the pixel (true value) y of the image serving as the teacher data can be adopted.
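- The product-sum operation of Equation (39) can be sketched as follows (a minimal illustration; the tap values and the coefficient values are made-up numbers, not values from the patent):

```python
import numpy as np

def predict_pixel(prediction_taps, coefficients):
    """Predicted pixel value y' = w1*x1 + w2*x2 + ... + wM*xM (Equation (39))."""
    return float(np.dot(prediction_taps, coefficients))

# Hypothetical 3-tap example.
taps = np.array([100.0, 110.0, 120.0])
coeffs = np.array([0.25, 0.5, 0.25])   # coefficients sum to 1, preserving gain
print(predict_pixel(taps, coeffs))     # 110.0
```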
- Equation (41) expresses the sum E when the exponent N is an odd number, and Equation (42) expresses the sum E when the exponent N is an even number.
- When the exponent N is 2, setting to zero the derivative of the sum E of Equation (42) with respect to each prediction coefficient yields Equation (43); therefore, solving the following Equation (44) obtained from Equation (43) gives the prediction coefficients w_1, w_2, w_3, …, w_M.
- Equation (45) can be expressed in the form of a determinant, as shown in the following Equation (46), which is called a normal equation.
- When the exponent N is 2, the extremum of the sum E is uniquely determined, and that extremum is the minimum value of the sum E.
- Since the normal equation of Equation (46) forms simultaneous linear equations equal in number to the number of prediction coefficients w_1, w_2, w_3, …, w_M (here, M), the prediction coefficients w_1, w_2, w_3, …, w_M can be obtained by solving these simultaneous linear equations by, for example, the Cholesky method.
- Equation (42) can also be expressed as the following Equation (47).
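- The normal-equation solution for the exponent N = 2 can be sketched as follows (a sketch using NumPy; the function name and the synthetic data are our own, not from the patent):

```python
import numpy as np

def least_squares_coefficients(X, y):
    """Solve the normal equation (X^T X) w = X^T y of Equation (46).

    X: (num_samples, M) matrix of prediction taps (student data),
    y: (num_samples,) vector of true values (teacher data)."""
    A = X.T @ X   # summations forming the left-hand matrix
    b = X.T @ y   # summations forming the right-hand vector
    # np.linalg.solve uses an LU factorization; a Cholesky factorization,
    # as mentioned in the text, also applies since A is symmetric.
    return np.linalg.solve(A, b)

# Synthetic example: data generated exactly by w = (0.3, 0.7),
# so least squares should recover those coefficients.
rng = np.random.default_rng(0)
X = rng.uniform(0, 255, size=(100, 2))
true_w = np.array([0.3, 0.7])
y = X @ true_w
w = least_squares_coefficients(X, y)
print(np.allclose(w, true_w))  # True
```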
- The least-N-th power error coefficient calculating unit 234 of the learning unit 221 calculates the prediction coefficients by one of the following two least-N-th power error methods. Which of the two is adopted can be specified, for example, by the user operating the operation unit 202 (FIG. 29).
- In the first method, the sum E of terms in which the square error e² is multiplied by a weight α_s is adopted as the sum of the N-th power errors, in place of Equation (38). That is, the N-th power error e^N is defined by the product of the weight α_s and the square error e², as in Equation (48).
- The weight α_s is defined as shown in Equation (50):

  α_s = a x_s^c + b   … (50)

where x_s is the error of the predicted value y' calculated by Equation (39) from the prediction coefficients w_1 to w_M obtained by the least square error method (hereinafter referred to as the error by the least-squares criterion). Expressing the weight α_s of Equation (50) as a function of the error x_s gives a curve of the form shown in FIG. 34.
- The coefficient a is a term that controls the effect of the error x_s by the least-squares criterion on the N-th power error e^N.
- When the coefficient a is 0, the weight α_s is a horizontal straight line, so the influence of the error x_s on the N-th power error e^N of Equation (48) is constant regardless of the magnitude of x_s, and the prediction coefficients that minimize the sum E of Equation (48) coincide with those of the ordinary least square error method.
- When the coefficient a is positive, the influence of the error x_s on the N-th power error e^N of Equation (48) becomes larger as the error x_s is larger, and smaller as the error x_s is smaller.
- Conversely, when the coefficient a is negative, the influence of the error x_s on the N-th power error e^N of Equation (48) becomes smaller as the error x_s is larger, and larger as the error x_s is smaller.
- Thus, the N-th power error e^N of Equation (48) has, when the coefficient a is positive, the same properties as when the exponent N is large, and, when the coefficient a is negative, the same properties as when the exponent N is small. Since the N-th power error e^N of Equation (48) therefore has properties similar to the N-th power error e^N of Equation (38), the prediction coefficients that minimize the sum E of the N-th power errors of Equation (48) effectively minimize the sum E of the N-th power errors of Equation (38).
- The least-N-th power error method is thus realized. That is, the nominal exponent is 2, but if the coefficient a is positive, the method effectively corresponds to an exponent N > 2, and if the coefficient a is negative, to an exponent N < 2. The coefficient a therefore greatly affects the exponent N of the least-N-th power error method, as does the coefficient c described later.
- The coefficient b is a correction term; depending on its value, the function value (weight α_s) of FIG. 34 shifts as a whole in the vertical direction. The coefficient b does not significantly affect the exponent N of the least-N-th power error method.
- The coefficient c is a term for converting the scaling of the axis, that is, a term for changing how the weight α_s is assigned to the error x_s by the least-squares criterion: the larger the value of the coefficient c, the more sharply the weight α_s changes, and conversely, the smaller the value of the coefficient c, the more gradually the weight α_s changes.
- Since changing the coefficient c changes the effect of the error x_s on the N-th power error e^N of Equation (48) in the same way as changing the coefficient a does, the coefficient c can also give the N-th power error e^N of Equation (48) properties similar to the N-th power error e^N of Equation (38). That is, the coefficient c can also influence the exponent N of the least-N-th power error method.
- The coefficients a, b, and c that define the weight α_s of Equation (50) can be changed (set) by the user operating the operation unit 202, and when the coefficients a, b, and c are changed, the weight α_s of Equation (50) changes.
- Through this change of the weight α_s, the term α_s e² of Equation (48) effectively (equivalently) functions as the N-th power error e^N for a given exponent N. As a result, the prediction coefficients that minimize the sum E of the N-th power errors, that is, the prediction coefficients w_i by the criterion determined by the weight α_s of Equation (50), are obtained.
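- The first method amounts to a weighted least-squares problem: a possible sketch, assuming the weight of Equation (50) is applied to each squared error (the function names and synthetic data are illustrative, not from the patent):

```python
import numpy as np

def weighted_least_squares(X, y, weights):
    # Minimize sum_s alpha_s * e_s^2 by solving (X^T A X) w = X^T A y, A = diag(alpha).
    Xw = X * weights[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

def least_n_direct(X, y, a, b, c):
    # Direct (first) method sketch: weight each squared error with
    # alpha_s = a * x_s**c + b (Equation (50)), where x_s is the error of the
    # predicted value under the least-squares criterion.
    w2 = np.linalg.solve(X.T @ X, X.T @ y)   # ordinary least squares (N = 2)
    x_s = np.abs(y - X @ w2)                 # errors by the least-squares criterion
    alpha = a * x_s**c + b
    return weighted_least_squares(X, y, alpha)

# Illustrative synthetic data (values are assumptions, not from the patent).
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = X @ np.array([0.2, 0.3, 0.5]) + rng.normal(0.0, 0.01, size=200)
w = least_n_direct(X, y, a=1.0, b=1.0, c=2.0)  # a > 0 behaves like N > 2
print(w)
```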
- Whereas the first method adopts, as the N-th power error, the square error e² multiplied by the weight α_s as shown in Equation (48), the second method obtains a higher-order least-N-th power error solution by a recursive procedure, using the solution obtained by a lower-order least-N-th power error method.
- That is, the prediction coefficients w_i that minimize the sum E of the square errors of the following Equation (51) can be obtained by the least square error method as described above. Let the predicted value y' calculated by Equation (39) using these prediction coefficients w_i be denoted y_1 (hereinafter referred to, as appropriate, as the predicted value by the least-squares criterion).
- To find the prediction coefficients w_i that minimize the sum E of the cube errors, a solution is again obtained by the least square error method: the cube error e³ can be expressed as the product of the square error e² and |y − y_1|, and since |y − y_1|, the error by the least-squares criterion, can be obtained as a constant, the prediction coefficients that minimize the sum E of the cube errors can actually be obtained by the least square error method.
- Similarly, to find the prediction coefficients w_i that minimize the sum E of the fourth-power errors of Equation (53), a solution is obtained by the least square error method. Let the predicted value y' calculated by Equation (39) using the prediction coefficients w_i that minimize the sum E of the cube errors be denoted y_2 (hereinafter referred to, as appropriate, as the predicted value by the least-cube criterion); then, as shown in Equation (53), the fourth-power error e⁴ is expressed by the square error e² and |y − y_2|², which can be obtained as a constant.
- Likewise, to find the prediction coefficients w_i that minimize the sum E of the fifth-power errors of Equation (54), a solution is obtained by the least square error method. Let the predicted value y' calculated by Equation (39) using the prediction coefficients w_i that minimize the sum E of the fourth-power errors of Equation (53) be denoted y_3 (hereinafter referred to, as appropriate, as the predicted value by the least-fourth-power criterion); then the fifth-power error of Equation (54) can be expressed by the product of the square error e² and |y − y_3|³, and since |y − y_3|³ can be obtained as a constant, the prediction coefficients w_i that minimize the sum E of the fifth-power errors of Equation (54) can also actually be obtained by the least square error method.
- Solutions (prediction coefficients w_i) for still higher orders can be obtained in the same manner.
- In short, in the second method, the N-th power error e^N is expressed as the product of the square error e² and the (N−2)-th power of the prediction error of the predicted value calculated using the prediction coefficients obtained by a lower-order least-N-th power error method, and by repeating this recursively, a solution by a higher-order least-N-th power error method can be obtained.
- Note that the solution by the least-N-th power error method need not use the predicted value obtained by the immediately preceding order; it can be obtained using a predicted value calculated with the prediction coefficients obtained by an arbitrary lower-order least-N-th power error method. That is, in the case of Equation (53), |y − y_1|² may be used in place of |y − y_2|².
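- The recursive second method can be sketched as repeated weighted least squares, with the weight taken from the prediction error under the previous criterion (an interpretation of Equations (51) to (54); the function name and the synthetic data are illustrative):

```python
import numpy as np

def least_n_recursive(X, y, N):
    # Start from the least-squares (N = 2) solution, then raise the order one
    # step at a time: at order k, the factor |y - y_prev|**(k - 2) is treated
    # as a constant weight on the squared error, and the problem is re-solved
    # by weighted least squares.
    w = np.linalg.solve(X.T @ X, X.T @ y)
    for k in range(3, N + 1):
        weight = np.abs(y - X @ w) ** (k - 2)  # error under the previous criterion
        Xw = X * weight[:, None]
        w = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return w

# Illustrative synthetic data (an assumption, not from the patent).
rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(300, 2))
y = X @ np.array([0.4, 0.6]) + rng.normal(0.0, 0.01, size=300)
w4 = least_n_recursive(X, y, N=4)  # minimum fourth-power-error solution
print(w4)
```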
- The image optimization processing consists of learning processing and mapping processing.
- In the learning processing, it is determined in step S230 whether or not the user has operated the operation unit 202. If it is determined that the user has not operated it, the process returns to step S230. If it is determined in step S230 that the operation unit 202 has been operated, the process proceeds to step S231.
- In step S231, the teacher data generating unit 231 of the learning unit 221 generates teacher data from the input signal and outputs it to the least-N-th power error coefficient calculating unit 234, while the student data generating unit 232 generates student data from the input signal and outputs it to the prediction tap extracting unit 233; the process then proceeds to step S232.
- As the data used to generate the student data and the teacher data (hereinafter referred to as learning data, as appropriate), for example, the input signal from the present back to a point a predetermined time in the past can be adopted. Instead of using the input signal, dedicated data may be stored in advance as the learning data.
- In step S232, the prediction tap extracting unit 233 takes each item of teacher data in turn as the pixel of interest, generates a prediction tap for each pixel of interest from the student data input from the student data generating unit 232, outputs it to the least-N-th power error coefficient calculating unit 234, and the process proceeds to step S233.
- In step S233, the least-N-th power error coefficient calculating unit 234 determines whether or not an operation signal specifying that the coefficient set is to be calculated by the least-N-th power error method using the recursive method (the second method) has been input from the operation unit 202. If, for example, the user has operated the operation unit 202 and the recursive method is not specified, that is, the direct method (the first method) is specified, the process proceeds to step S234.
- In step S234, it is determined whether or not the coefficients a, b, and c specifying the weight α_s of Equation (50) (and thereby the exponent N) have been input. If, for example, it is determined that the user has operated the operation unit 202 and values specifying the coefficients a, b, and c have been input, the process proceeds to step S235.
- In step S235, the least-N-th power error coefficient calculating unit 234 solves the problem of minimizing the sum E of Equation (48) with the weight α_s defined by the input coefficients a, b, and c, thereby obtaining the prediction coefficients w_1, w_2, w_3, …, w_M as a solution by the least-N-th power error method of the exponent N corresponding to the weight α_s, that is, the coefficient set; stores it in the coefficient memory 235; and the process returns to step S230.
- On the other hand, if it is determined in step S233 that the recursive method has been selected, the process proceeds to step S236.
- In step S236, the least-N-th power error coefficient calculating unit 234 determines whether or not information specifying the exponent N has been input, and repeats this processing until the exponent N is input. If, for example, it is determined that the user has operated the operation unit 202 and input information specifying the exponent N, the process proceeds to step S237.
- In step S237, the least-N-th power error coefficient calculating unit 234 obtains a coefficient set by the solution based on the least square error method. Then, in step S238, the least-N-th power error coefficient calculating unit 234, using the predicted values obtained from the coefficient set obtained by the least square error method, recursively obtains, as described with reference to Equations (51) to (54), the coefficient set by the least-N-th power error method for the exponent N input from the operation unit 202, stores it in the coefficient memory 235, and the process returns to step S230.
- In the mapping processing, in step S241, the tap extracting unit 251 of the mapping processing unit 222 takes, as the frame of interest, the frame of the image as the output signal corresponding to the frame of the image as the current input signal. Among the pixels of the frame of interest, it selects, for example in raster scan order, a pixel not yet set as the pixel of interest, extracts the prediction tap for that pixel of interest from the input signal, and outputs it to the product-sum operation unit 252.
- In step S242, the product-sum operation unit 252 reads the prediction coefficients from the coefficient memory 235 of the learning unit 221, and performs the product-sum operation of Equation (39) on the prediction tap input from the tap extracting unit 251 and the prediction coefficients read from the coefficient memory 235. The product-sum operation unit 252 thereby obtains the pixel value (predicted value) of the pixel of interest.
- In step S243, the tap extracting unit 251 determines whether or not all the pixels of the frame of interest have been set as the pixel of interest. If it is determined that they have not, the process returns to step S241, and the same processing is repeated with the next pixel in the raster scan order of the frame of interest as the new pixel of interest.
- If it is determined in step S243 that all the pixels of the frame of interest have been set as the pixel of interest, the process proceeds to step S244, and the display unit 203 displays the frame of interest consisting of the pixels obtained by the product-sum operation unit 252.
- The process then returns to step S241, and the tap extracting unit 251 repeats the same processing with the next frame as the new frame of interest.
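- The mapping loop of steps S241 to S244 can be sketched as follows, assuming a 3×3 prediction-tap structure and replication padding at the frame edges (both are assumptions for illustration; the patent does not fix the tap shape or edge handling here):

```python
import numpy as np

def map_frame(frame, coeffs):
    """Mapping sketch: for every pixel (raster order), extract a 3x3
    prediction tap around it and output the product-sum with the
    coefficient set (Equation (39))."""
    padded = np.pad(frame, 1, mode="edge")   # replicate edge pixels
    h, w = frame.shape
    out = np.empty_like(frame, dtype=float)
    for i in range(h):
        for j in range(w):
            taps = padded[i:i + 3, j:j + 3].ravel()  # 9-tap prediction tap
            out[i, j] = taps @ coeffs                # product-sum operation
    return out

# Identity coefficient set: only the centre tap is 1, so output == input.
coeffs = np.zeros(9); coeffs[4] = 1.0
frame = np.arange(16, dtype=float).reshape(4, 4)
print(np.array_equal(map_frame(frame, coeffs), frame))  # True
```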
- In the mapping processing, the user views the image displayed on the display unit 203, and if the image does not suit his or her preference, operates the operation unit 202 to specify the direct method or the recursive method and to specify the exponent N of the least-N-th power error method. The prediction coefficients obtained by the least-N-th power error method are thereby changed in the learning processing, and the image as the output signal obtained by the mapping processing can be adapted to the user's own preference.
- As an example, the sums of the square errors and of the cube errors were compared. When the coefficients a, b, and c take the above values, the exponent N of the N-th power error e^N of Equation (48) corresponds to the case of being greater than 2.
- The sum of the square errors is 10,160,281 when using the coefficient set of the least-squares criterion and 10,828,594 when using the coefficient set of the least-N-th power criterion; that is, for the sum of square errors, the result obtained with the coefficient set of the least-squares criterion is smaller than that obtained with the coefficient set of the least-N-th power criterion.
- On the other hand, the sum of the cube errors is 165,988,823 when using the coefficient set of the least-squares criterion and 161,283,660 when using the coefficient set of the least-N-th power criterion; that is, for the sum of cube errors, the result obtained with the coefficient set of the least-N-th power criterion is smaller than that obtained with the coefficient set of the least-squares criterion.
- Therefore, by performing the mapping processing (the product-sum operation of Equation (39)) using the coefficient set of the least-squares criterion, an image can be obtained as an output signal having a smaller sum of square errors, and by performing the mapping processing using the coefficient set of the least-N-th power criterion obtained with the above values of the coefficients a, b, and c, an image can be obtained as an output signal having a smaller sum of cube errors.
- In the image optimization processing, the exponent N is changed by the user operating the operation unit 202 (in the direct method, the coefficients a, b, and c specifying the exponent N are changed; in the recursive method, the exponent N itself is changed). As a result, which exponent N of the least-N-th power error method is used as the learning criterion (learning system) for the prediction coefficients (coefficient set) is set. In other words, the learning algorithm for obtaining the prediction coefficients is changed, so it can be said that the "structure of processing" is changed so that an image of the user's preference is obtained.
- FIG. 37 shows another configuration example of the optimizing device of FIG. 29.
- The configuration of the optimizing device 201 of FIG. 37 is the same as that of the optimizing device 201 of FIG. 29 except that an internal information generating unit 261 is added, and a description of the common parts is therefore omitted.
- The internal information generating unit 261 reads, as internal information of the processing unit 211, for example, the prediction coefficients stored in the coefficient memory 235, converts the prediction coefficient information into an image signal, and outputs it to the display unit 203 for display.
- The image optimization processing of FIG. 38 also consists of learning processing and mapping processing, as in the case of FIG. 35. In the learning processing, in steps S250 through S258, the same processes as those in steps S230 through S238 of FIG. 35 are performed.
- In step S259, the internal information generating unit 261 reads the coefficient set stored in the coefficient memory 235 as internal information, generates a displayable image signal based on each value included in the coefficient set, and outputs it to the display unit 203 for display.
- The image generated by the internal information generating unit 261 and displayed on the display unit 203 may take a form such as, for example, the three-dimensional distribution diagram shown in FIG. 39 or the two-dimensional distribution diagram shown in FIG. 40. That is, in FIG. 39, the coordinates corresponding to the positions of the prediction taps extracted from the input signal are shown as positions on the x-y plane (Tap Position (x) and Tap Position (y)), and at the coordinates corresponding to each tap position, the prediction coefficient (Coeff) by which the pixel value serving as that prediction tap is multiplied is shown. FIG. 40 represents FIG. 39 as a contour diagram.
- Returning to the description of the flowchart in FIG. 38: after the processing in step S259, the process returns to step S250, and the same processing is repeated thereafter.
- By the above image optimization processing, each value (coefficient value) of the coefficient set stored in the coefficient memory 235 of the processing unit 211 is displayed (presented) as internal information relating to the processing, so that the user can grasp the distribution of the coefficient set.
- Here too, the exponent N is changed (in the direct method, the coefficients a, b, and c specifying the exponent N are changed; in the recursive method, the exponent N itself is changed), whereby which exponent N of the least-N-th power error method is used as the learning criterion (learning system) for the prediction coefficients (coefficient set) is set. In other words, since the learning algorithm itself for obtaining the prediction coefficients is changed, it can be said that the "structure of processing" is changed.
- In the above, the coefficient set is displayed, but other internal information about the processing, such as whether the current least-N-th power error method is the direct method or the recursive method, may also be displayed.
- FIG. 41 shows another configuration example of the optimizing device.
- The optimizing device 301 of FIG. 41 includes a processing unit 311, and optimizes an input signal based on an operation signal input from the operation unit 202 and displays it on the display unit 203. Portions corresponding to those in the above-described embodiment are denoted by the same reference numerals, and their description is omitted below as appropriate.
- The coefficient memory 321 of the processing unit 311 of FIG. 41 is basically the same as the coefficient memory 235 of FIG. 31, and stores the coefficient set necessary for the mapping processing unit 222 to execute the mapping processing.
- This coefficient set is basically a coefficient set generated by the learning device 341 of FIG. 43 described later (a coefficient set as an initial value), but it is overwritten and stored as it is changed, as appropriate, by the coefficient changing unit 322. Accordingly, as the overwriting is repeated, the coefficient set will in due course differ from the one generated by the learning device 341.
- Note that the coefficient set as the initial value can be kept in a memory (not shown), and the content stored in the coefficient memory 321 can be returned to the initial coefficient set in response to an operation of the operation unit 202.
- The coefficient changing unit 322 reads the coefficient set (prediction coefficients) stored in the coefficient memory 321 based on the operation signal input from the operation unit 202, changes the value of the prediction coefficient corresponding to each prediction tap (the prediction coefficient multiplied with each prediction tap), and overwrites and stores the result in the coefficient memory 321 again.
- The coefficient read/write unit 331 of the coefficient changing unit 322 is controlled by the change processing unit 332: it reads the coefficient set stored in the coefficient memory 321 and outputs it to the change processing unit 332, and it overwrites and stores in the coefficient memory 321 the prediction coefficients whose values have been changed by the change processing unit 332.
- The change processing unit 332 changes the prediction coefficients read from the coefficient memory 321 by the coefficient read/write unit 331 based on the operation signal.
- The teacher data generating unit 351 of the learning device 341 is the same as the teacher data generating unit 231 of the learning unit 221 of FIG. 30: it generates teacher data from an image signal prepared in advance as learning data, and outputs the teacher data to the normal equation generating unit 354.
- The student data generating unit 352 is the same as the student data generating unit 232 of FIG. 30: it generates student data from the learning data and outputs it to the prediction tap extracting unit 353.
- The prediction tap extracting unit 353 is the same as the prediction tap extracting unit 233 of FIG. 30: taking the teacher data to be processed as the pixel of interest, it extracts from the student data prediction taps having the same tap structure as those formed by the tap extracting unit 251 (FIG. 31) constituting the mapping processing unit 222, and outputs them to the normal equation generating unit 354.
- The normal equation generating unit 354 generates the normal equation of Equation (46) from the teacher data y of the pixel of interest input from the teacher data generating unit 351 and the prediction taps x_1, x_2, …, x_M. When the normal equation generating unit 354 has obtained the normal equation of Equation (46) using all the teacher data as the pixel of interest, it outputs the normal equation to the coefficient determining unit 355.
- The coefficient determining unit 355 obtains a coefficient set by solving the input normal equation (Equation (46) above) by, for example, the Cholesky method.
- In step S271, the teacher data generating unit 351 generates teacher data from the learning data and outputs it to the normal equation generating unit 354, while the student data generating unit 352 generates student data from the learning data and outputs it to the prediction tap extracting unit 353; the process then proceeds to step S272.
- In step S272, the prediction tap extracting unit 353 sequentially takes each item of teacher data as the pixel of interest, extracts the prediction tap from the student data for each pixel of interest, outputs it to the normal equation generating unit 354, and the process proceeds to step S273.
- In step S273, the normal equation generating unit 354 uses each item of teacher data and the corresponding set of prediction taps to compute the summation (Σ) forming each component of the matrix on the left side of Equation (46) and the summation (Σ) forming each component of the vector on the right side, thereby generating the normal equation, which it outputs to the coefficient determining unit 355.
- In step S274, the coefficient determining unit 355 solves the normal equation input from the normal equation generating unit 354, obtains a coefficient set by the so-called least square error method, and, in step S275, stores it in the coefficient memory 321.
- In this manner, the coefficient set serving as the basis (the coefficient set as the initial value) is stored in the coefficient memory 321.
- In the above, the coefficient set obtained by the least square error method has been described, but a coefficient set obtained by another method may also be used.
- Suppose that the coefficient set stored in the coefficient memory 321 is a coefficient set consisting of 49 prediction coefficients, and that, when the position of each prediction tap is plotted on the horizontal axis (for example, each prediction tap is numbered and the number is taken as the horizontal axis) and the coefficient value of the prediction coefficient multiplied with the prediction tap at each tap position is plotted on the vertical axis, the distribution is as shown in FIG. 46.
- In order that the gain of the input signal and the gain of the output signal obtained by processing with the prediction coefficients be the same, the coefficients must be normalized so that the sum of the coefficient values is 1 (each coefficient is divided by the sum of the values of all coefficients); however, it is difficult for the user to manipulate the individual coefficients so that their sum remains 1.
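- The normalization described above (dividing each coefficient by the sum of all coefficient values) can be sketched as:

```python
import numpy as np

def normalize_coefficients(coeffs):
    """Normalize so the coefficient values sum to 1, keeping the gain of the
    output signal equal to the gain of the input signal."""
    return coeffs / coeffs.sum()

c = np.array([0.2, 0.5, 0.5, 0.8])   # illustrative values; sum is 2.0
n = normalize_coefficients(c)
print(n.sum())
```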
- The tap position t in FIG. 48 is the tap position designated by the arrow in FIG. 47.
- When the operation signal indicates that the coefficient value of the coefficient corresponding to the single tap position designated by the arrow in FIG. 49 is to be changed by more than a predetermined threshold S11 (the amount of change is larger than the threshold S11), the change processing unit 332 changes the coefficient values of the other coefficients so that the distribution shown in FIG. 46 changes to the distribution shown in FIG. 50.
- That is, the change processing unit 332 changes the coefficient value of the coefficient corresponding to each tap position so that the distribution of coefficient values changes, like a spring model, according to the distance from the tap position corresponding to the coefficient whose value has been changed. Specifically, when the distribution of the coefficient set obtained by learning is as shown in FIG. 46 and the coefficient value at the tap position t is raised, the change processing unit 332 changes the coefficient values so that, as shown in FIG. 50, the closer a position is to the tap position t, the more its coefficient value increases, and conversely, the farther a position is from the tap position t, the more its coefficient value decreases, while the sum of the coefficient values remains 1. A model in which the distribution changes like a spring in this way, as in FIG. 50, is called the spring model.
- Conversely, when the coefficient value at the tap position t is lowered, the coefficient value at a position close to the tap position t is lowered according to the closeness of the position, and the coefficient value at a position far from the tap position t is raised according to the distance of the position.
- On the other hand, when the amount of change of the coefficient at the tap position t is small, the change processing unit 332 changes the coefficient values so that, as shown in FIG. 51, coefficients taking extreme values of the same polarity as the coefficient at the tap position t change in the same direction as the coefficient at the tap position t, and coefficients taking extreme values of the polarity different from that of the coefficient at the tap position t change in the opposite direction.
- In the above, the case where a positive coefficient value is raised has been described, but when a positive coefficient value is reduced, that is, changed in the negative direction, a positive coefficient value changes in the negative direction and a negative coefficient value changes in the positive direction. Further, when a negative coefficient value is increased, a positive coefficient value changes in the negative direction and a negative coefficient value changes in the positive direction, and when a negative coefficient value is decreased, a positive coefficient value changes in the positive direction and a negative coefficient value changes in the negative direction. In this model, in each case, the coefficient values change in a direction that maintains the balance as a whole; it is therefore called the equilibrium model.
- In this way, when the amount of change of the coefficient corresponding to the tap operated by the user is larger than the threshold S11, the change processing unit 332 changes the coefficient values corresponding to the other taps using the spring model shown in FIG. 50, and when the amount of change is smaller than the threshold S11, it changes them using the equilibrium model shown in FIG. 51. This is because, when the amount of change of one coefficient is large, its effect on the balance of the coefficient values is large, and a change that merely maintains the overall balance would be unnatural, whereas when the amount of change is small, the change has little effect on the overall balance, so a change that maintains the balance as a whole is made.
- The model for changing the coefficients other than the coefficient whose value has been changed is not limited to these; any model may be used as long as the coefficient values are changed so that their sum as a whole becomes 1.
- Also, in the above, the model for changing the other coefficients is switched according to the magnitude of the change in the coefficient changed by operating the operation unit 202, but the model may instead be fixed.
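- A sketch of the two change models follows, under loudly labeled assumptions: the spring and equilibrium models are described only qualitatively (FIGS. 50 and 51), so the exponential distance profile, the uniform redistribution, and the final renormalization used here are illustrative choices, not the patented procedure:

```python
import numpy as np

def change_coefficient(coeffs, t, new_value, threshold=0.1):
    # Change the coefficient at tap position t and adjust the rest so the
    # set still sums to 1. The distance profile (spring model) and the
    # uniform redistribution (equilibrium model) are assumptions.
    c = coeffs.astype(float).copy()
    delta = new_value - c[t]
    c[t] = new_value
    dist = np.abs(np.arange(len(c)) - t)
    if abs(delta) >= threshold:
        # Spring model (FIG. 50): nearby taps follow the changed tap,
        # with the effect decaying with distance.
        follow = delta * np.exp(-dist.astype(float))
        follow[t] = 0.0
        c += follow
    else:
        # Equilibrium model (FIG. 51): the other taps absorb the change
        # evenly, maintaining the overall balance.
        c[dist != 0] -= delta / (len(c) - 1)
    return c / c.sum()  # renormalize so the coefficient values sum to 1

c0 = np.array([0.1, 0.2, 0.4, 0.2, 0.1])         # sums to 1
c1 = change_coefficient(c0, t=2, new_value=0.6)   # large change: spring model
c2 = change_coefficient(c0, t=2, new_value=0.45)  # small change: equilibrium model
print(c1.sum(), c2.sum())
```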
- the image optimizing process includes a coefficient change process and a mask change process. Since the mubbing process consisting of the rubbing process is the same as the mapping process described in FIGS. 35 and 38, only the coefficient changing process will be described here.
- step S291 the change processing unit 332 (FIG. 42) of the coefficient changing unit 322 determines whether or not an operation signal for operating the coefficient value has been input from the operation unit 202. That is, when the user sees the image displayed on the display unit 203 and determines that the user's preference is met, the user now uses the coefficient set stored in the coefficient memory 3 2 1 (FIG. 41). Although the mapping process is executed, if it is determined that the mapping does not meet the user's own preference, an operation of changing the coefficient set stored in the coefficient memory 321 used for the mapping process is performed.
- step S291 determines whether an operation signal for operating a coefficient has been input, that is, one of the coefficients stored in the coefficient memory 3221 is changed. If the operation unit 202 has been operated, the process proceeds to step S292.
- step S292 the change processing section 3332 controls the coefficient read / write section 331 ⁇ to read the coefficient set stored in the coefficient memory 3221, and in step S2 9 Proceed to 3.
- In step S293, the change processing unit 332 determines whether or not the coefficient value input as the operation signal has changed by a predetermined threshold S11 or more from the value previously included in the coefficient set. If it is determined in step S293 that the change between the value input as the operation signal and the value in the coefficient set stored in the coefficient memory 321 is equal to or greater than the threshold S11, the process proceeds to step S294.
- In step S294, the change processing unit 332 changes the value of each coefficient included in the coefficient set by the spring model as shown in FIG. 50, and the process proceeds to step S295.
- If it is determined in step S293 that the change between the value input as the operation signal and the value in the coefficient set stored in the coefficient memory 321 is not equal to or greater than the threshold S11, the process proceeds to step S296.
- In step S296, the change processing unit 332 changes the value of each coefficient included in the coefficient set by the equilibrium model as shown in FIG. 51, and the process proceeds to step S295.
- In step S295, the change processing unit 332 controls the coefficient read/write unit 331 to overwrite and store the changed coefficient set values in the coefficient memory 321. Then, the process returns to step S291, and the subsequent processing is repeated.
- If it is determined in step S291 that the coefficient value has not been operated, that is, when the user judges that the image displayed on the display unit 203 is an image to the user's preference, the process returns to step S291, and the same processing is repeated thereafter.
- the user can change the coefficient set used for the mapping processing and execute the processing optimal for the user.
- changing the value of each coefficient in the coefficient set changes the "processing contents" of the mapping process performed by the mapping processing unit 311.
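As a concrete illustration, the coefficient-change flow of steps S291 to S296 (threshold test, spring-model or equilibrium-model update, then storing a set that still sums to 1) can be sketched as follows. This is a minimal sketch: the two redistribution rules standing in for the spring model of FIG. 50 and the equilibrium model of FIG. 51 are assumptions, not the patent's exact formulas; only the sum-to-1 constraint is taken from the text.

```python
def update_coefficients(coeffs, idx, new_value, threshold):
    """Set coeffs[idx] to new_value, then adjust the remaining taps so
    that the whole coefficient set still sums to 1."""
    coeffs = list(coeffs)
    delta = new_value - coeffs[idx]
    coeffs[idx] = new_value
    others = [i for i in range(len(coeffs)) if i != idx]
    if abs(delta) >= threshold:
        # stand-in for the "spring model" (FIG. 50): taps whose values are
        # closer to the new value absorb more of the compensation
        weights = [1.0 / (1.0 + abs(coeffs[i] - new_value)) for i in others]
    else:
        # stand-in for the "equilibrium model" (FIG. 51): spread the
        # compensation evenly so the overall balance is maintained
        weights = [1.0] * len(others)
    total = sum(weights)
    for w, i in zip(weights, others):
        coeffs[i] -= delta * w / total
    return coeffs

print(update_coefficients([0.25, 0.25, 0.25, 0.25], 0, 0.4, threshold=0.1))
```

Whichever rule is selected, the returned set sums to 1, which is the only property the text requires of a replacement model.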
- in this case, each coefficient set stored in the coefficient memory 321 is changed to one generated by the least-N-th-power error method corresponding to the exponent N input from the operation unit 202 based on a user operation; that is, it can be said that the coefficient set is generated by a different coefficient-set generation algorithm.
- Next, referring to FIG. 53, an embodiment will be described in which the internal information generation unit 371 is provided in the optimizing device 301 of FIG. 41.
- the configuration is the same as that of the optimizing device 301 shown in FIG. 41, except that the internal information generation unit 371 is provided.
- the internal information generation unit 371 reads out, for example, the coefficient set stored in the coefficient memory 321 as internal information relating to the processing, converts it into an image signal that can be displayed on the display unit 203, and outputs it to the display unit 203 for display.
- This image optimizing process, like the image optimizing process performed by the optimizing device 301 in FIG. 41, includes a coefficient changing process and a mapping process; since the mapping process is the same as the mapping process described with reference to FIGS. 35 and 38, only the coefficient changing process will be described here.
- In steps S311 to S316, the same processes as those in steps S291 to S296 of FIG. 52 are performed.
- In step S315, as in step S295 of FIG. 52, after the changed coefficient set is stored in the coefficient memory 321, the process proceeds to step S317.
- In step S317, the internal information generation unit 371 reads out each coefficient value of the coefficient set stored in the coefficient memory 321, converts them into an image signal that can be displayed on the display unit 203, and outputs it to the display unit 203 for display.
- At this time, the display unit 203 can display each coefficient value of the coefficient set in the form of, for example, the three-dimensional distribution diagram shown in FIG. 39 or the two-dimensional distribution diagram shown in FIG.
- After the processing in step S317, the process returns to step S311, and the same processing is repeated thereafter.
- As described above, in the coefficient changing process shown in FIG. 54, the values of the coefficient set stored in the coefficient memory 321 are displayed as internal information, so that the user can operate the operation unit 202, while viewing the coefficient values, so as to obtain a coefficient set for executing processing optimal for the user.
- Note that the output signal may be determined by calculating not the linear expression of expression (39) but a higher-order expression of second order or higher.
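As a sketch of this remark, a second-order mapping adds pairwise product terms of the prediction taps to the linear expression; the helper names and the dictionary `w2` of second-order coefficients are illustrative assumptions, not notation from the text.

```python
def map_linear(taps, w):
    # first-order mapping: y = sum_i w[i] * taps[i]  (the form of expression (39))
    return sum(wi * xi for wi, xi in zip(w, taps))

def map_second_order(taps, w, w2):
    # hypothetical second-order extension: adds sum_{i<=j} w2[(i, j)] * x_i * x_j
    y = map_linear(taps, w)
    n = len(taps)
    for i in range(n):
        for j in range(i, n):
            y += w2.get((i, j), 0.0) * taps[i] * taps[j]
    return y

print(map_linear([1.0, 2.0], [0.5, 0.5]))                       # 1.5
print(map_second_order([1.0, 2.0], [0.5, 0.5], {(0, 1): 1.0}))  # 3.5
```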
- Based on the operation signal input from the operation unit 402, the feature amount detection unit 411 of the optimization device 401 detects, for example, two designated feature amounts for each pixel of the image signal as the input signal, and outputs information on the detected feature amounts to the processing determination unit 412. Further, the feature amount detection unit 411 stores the image signal as the input signal in the internal buffer 421 until the telop is extracted from the input image signal, and outputs the image signal to the processing unit 413.
- the operation unit 402 is the same as the operation unit 202 in FIGS. 41 and 53. Note that the feature amount detection unit 411 is not limited to detecting only the two designated types of feature amounts for each pixel of the image signal as the input signal; it may also detect many types of feature amounts simultaneously and output the two designated ones from among them, or detect and output two or more types of feature amounts simultaneously.
- the processing determination unit 412 determines, for example, on a pixel-by-pixel basis, the processing to be performed on the image signal by the subsequent processing unit 413, based on the feature amounts input from the feature amount detection unit 411, and outputs the determined processing content to the processing unit 413.
- the processing unit 413 applies the processing of the processing content input from the processing determination unit 412, in units of pixels, to the image signal as the input signal read from the buffer 421, and outputs the processed image signal to the display unit 403 for display.
- the buffer 421 of the feature amount detection unit 411 temporarily stores the image signal as the input signal and supplies it to the processing unit 413.
- the feature amount extraction unit 422 extracts the two types of feature amounts selected by the feature amount selection unit 423 from the image signal as the input signal, and outputs the extracted feature amounts to the processing determination unit 412.
- the feature amount selection unit 423 supplies information specifying the feature amount to be extracted from the input signal to the feature amount extraction unit 422 based on the operation signal input from the operation unit 402.
- the selectable feature amounts include, for example, the luminance value of each pixel of the image signal, the Laplacian, the Sobel, the inter-frame difference, the inter-field difference, the background difference, and values obtained from each of these feature amounts within a predetermined range (total sum, average, dynamic range, maximum value, minimum value, median value, or variance); other feature amounts may also be used.
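Minimal sketches of a few of these selectable feature amounts, assuming a grayscale frame stored as a list of rows; the 4-neighbour Laplacian kernel and the square window statistics are common textbook choices, not necessarily the exact definitions used here.

```python
def laplacian(img, y, x):
    # 4-neighbour Laplacian at interior pixel (y, x)
    return (img[y - 1][x] + img[y + 1][x] + img[y][x - 1] + img[y][x + 1]
            - 4 * img[y][x])

def frame_difference(cur, prev, y, x):
    # inter-frame difference at the same pixel position
    return cur[y][x] - prev[y][x]

def window_stats(img, y, x, r=1):
    # statistics over the (2r+1) x (2r+1) window centred on (y, x)
    vals = [img[j][i]
            for j in range(y - r, y + r + 1)
            for i in range(x - r, x + r + 1)]
    return {"sum": sum(vals),
            "average": sum(vals) / len(vals),
            "dynamic_range": max(vals) - min(vals)}

flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
print(laplacian(flat, 1, 1), window_stats(flat, 1, 1)["dynamic_range"])  # 0 0
```

On a flat region both the Laplacian and the dynamic range vanish, which is why these features respond to edge-rich areas such as telop characters.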
- the feature amount recognition unit 431 of the processing determination unit 412 recognizes the types of the plurality of feature amounts input from the feature amount detection unit 411, and outputs the feature amounts themselves, together with information indicating the recognized types, to the processing content determination unit 432.
- based on the information indicating the types of the feature amounts input from the feature amount recognition unit 431 and the feature amounts themselves, the processing content determination unit 432 determines the processing content set in advance for each feature amount stored in the processing content database 433, and outputs the determined processing content to the processing unit 413.
- the processing content recognition unit 441 of the processing unit 413 recognizes the processing content input from the processing determination unit 412 and instructs the processing execution unit 442 to execute the recognized processing.
- the processing execution unit 442 performs the specified processing, for each pixel, on the input signal input via the buffer 421, based on the commands from the processing content recognition unit 441.
- the processed image signal is converted into an image signal that can be displayed on the display unit 403, output to the display unit 403, and displayed. Since the optimizing device 401 shown in FIG. 55 extracts a telop from an image signal, the processing content is whether or not to extract each pixel as a telop portion (whether or not to display it as the telop).
- In step S331, the feature amount extraction unit 422 of the feature amount detection unit 411 determines whether or not two types of feature amounts have been selected by the feature amount selection unit 423, and repeats the process until the selection is made. That is, the process of step S331 is repeated until, based on an operation signal corresponding to the types of feature amounts input by the user operating the operation unit 402, information indicating the selected feature amounts is input from the feature amount selection unit 423 to the feature amount extraction unit 422. When it is determined that the information for selecting the feature amounts has been input from the feature amount selection unit 423, that is, that the user has operated the operation unit 402 and selected two types of feature amounts, the process proceeds to step S332.
- In step S332, the feature amount extraction unit 422 extracts each of the two selected types of feature amounts from the image signal as the input signal for each pixel, and outputs them to the processing determination unit 412.
- the buffer 421 stores an image signal as an input signal.
- In step S333, the processing determination unit 412 determines the processing content for each pixel based on the two types of input feature amounts, and outputs it to the processing unit 413. More specifically, the feature amount recognition unit 431 identifies the two types of input feature amounts and outputs the identified types and the feature amounts themselves to the processing content determination unit 432, which determines the processing content from the two types of feature amounts input for each pixel.
- here, the processing content database 433 stores a table called an LUT (Look Up Table) that associates, for each combination (feature amount A, feature amount B) of two arbitrary types of feature amounts A and B, the values of the feature amounts A and B with the processing content (in this case, information on whether or not the pixel is a telop) for the pixel having those feature amounts.
- the processing content determination unit 432 refers to the LUT based on the combination (feature amount A, feature amount B) of the pixel of interest currently being processed, determines the corresponding processing content, that is, whether or not the pixel is to be processed as a telop, and outputs it to the processing unit 413.
- the LUT is generated by, for example, extracting a plurality of feature amounts in advance from an image containing only a telop and associating their combinations with information indicating a telop. The details of the LUT will be described later.
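The LUT lookup described above might be sketched as follows, under the assumption that each feature amount is quantized into a fixed number of bins so that a (feature amount A, feature amount B) pair indexes a table of telop/non-telop flags; the bin counts and value ranges are illustrative, not taken from the text.

```python
def make_lut(bins_a, bins_b):
    # every combination starts as "not telop"
    return [[False] * bins_b for _ in range(bins_a)]

def quantize(value, lo, hi, bins):
    # map a feature value onto a bin index, clamping to the valid range
    i = int((value - lo) / (hi - lo) * bins)
    return min(max(i, 0), bins - 1)

def decide(lut, feat_a, feat_b, range_a=(0.0, 255.0), range_b=(0.0, 255.0)):
    ia = quantize(feat_a, range_a[0], range_a[1], len(lut))
    ib = quantize(feat_b, range_b[0], range_b[1], len(lut[0]))
    return lut[ia][ib]  # True -> process the pixel as telop

lut = make_lut(4, 4)
lut[3][3] = True  # e.g. a combination learned in advance from a telop-only image
print(decide(lut, 250.0, 250.0), decide(lut, 10.0, 10.0))  # True False
```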
- In step S334, the processing unit 413 processes the image signal as the input signal input via the buffer 421 according to the processing content input from the processing determination unit 412, converts it into an image signal that can be displayed on the display unit 403, and outputs it to the display unit 403 for display.
- More specifically, the processing content recognition unit 441 of the processing unit 413 recognizes the processing content input from the processing determination unit 412 and instructs the processing execution unit 442 to execute the determined processing on the corresponding pixels.
- the processing execution unit 442 reads out the image signal as the input signal stored in the buffer 421, executes the processing corresponding to each pixel, converts the result into an image signal that can be displayed on the display unit 403, and outputs it to the display unit 403 for display.
- In step S335, the feature amount detection unit 411 determines whether or not the telop is regarded as having been extracted. That is, when the user views the image displayed on the display unit 403 and judges that the telop has not been extracted, the user operates the operation unit 402 again to change the combination of feature amounts and retry the telop extraction processing. When the operation signal from the operation unit 402 corresponding to this operation is input, the process returns to step S331, and the subsequent processes are repeated.
- On the other hand, when the user subjectively judges that the telop has been extracted, the user operates the operation unit 402 to input an operation signal indicating the end of the processing to the feature amount detection unit 411, at which time the process ends.
- That is, the processing in steps S331 to S335 is repeated until the user views the image displayed on the display unit 403 and judges that the telop has been extracted.
- In this way, with the combination of feature amounts optimal for the user, the telop can be extracted from the image signal as the input signal.
- two types of feature values are used to determine the processing content.
- the processing content may be determined based on other types of feature values.
- Alternatively, the combination of feature amounts may be switched sequentially in a predetermined order by an operation signal corresponding to a predetermined operation of the operation unit 402 by the user (for example, a button operation specifying up/down), so that the user can switch the feature amounts without being particularly aware of their types.
- As described above, the types of feature amounts detected by the feature amount detection unit 411 are changed according to the user's operation of the operation unit 402 so that the telop is detected by the processing unit 413. Since a change in the types of feature amounts detected by the feature amount detection unit 411 means a change in the algorithm by which the processing determination unit determines the processing content, it can be said that the "structure of processing" of the feature amount detection unit 411 is also changed.
- the feature amount detection unit 411 can detect various feature amounts, but some of them, such as the Laplacian, require parameters, such as filter coefficients, to be set.
- the parameters for detecting a feature amount can also be changed according to the operation of the operation unit 402. With such a parameter change, the type of feature amount detected by the feature amount detection unit 411 does not change, but the value of the detected feature amount changes; therefore, the change of a feature-detection parameter can be said to be a change of the "contents of processing" of the feature amount detection unit 411.
- the optimization device 501 of FIG. 60 is basically the same as the optimization device 401 of FIG. 55 except for the configuration in which the internal information generation unit 511 is provided.
- the internal information generation unit 511 of the optimization device 501, for example, extracts the feature amount selection information output from the feature amount selection unit 423 of the feature amount detection unit 411, and displays the types of the currently selected feature amounts on the display unit 403.
- this processing is basically the same as the telop extraction optimizing processing by the optimizing device 401 in FIG. 55 described with reference to the flowchart in FIG. 59, except that a process of displaying information indicating the types of the selected feature amounts is added. That is, in step S341, the feature amount extraction unit 422 of the feature amount detection unit 411 determines whether or not two types of feature amounts have been selected by the feature amount selection unit 423, and repeats the process until the selection is made. When it is determined that the information for selecting the feature amounts has been input from the feature amount selection unit 423, that is, that the user has operated the operation unit 402 and selected two types of feature amounts, the process proceeds to step S342.
- In step S342, the internal information generation unit 511 extracts from the feature amount selection unit 423 the information indicating the two selected types of feature amounts, and displays the names of the two selected feature amount types on the display unit 403.
- In steps S343 to S346, the same processing as in steps S332 to S335 of FIG. 59 is performed.
- By the above processing, the types of the currently selected feature amounts, which are internal information relating to the processing of the feature amount detection unit 411, are displayed (presented). Thus, while grasping the types of the currently selected feature amounts, the user can set an optimal combination of feature amounts so that the telop can be accurately extracted from the image signal as the input signal.
- Alternatively, the internal information generation unit 511 may generate, as internal information, for example, the distribution of the two types of feature amounts detected for each pixel by the feature amount detection unit 411, and display it on the display unit 403 as shown in FIGS. 65 and 67 described later. Further, as described above, when a parameter for detecting a feature amount is changed according to the operation of the operation unit 402, the internal information generation unit 511 can also display (present) the parameter on the display unit 403 as internal information.
- Next, referring to FIG. 62, a configuration example of the optimizing device 601 will be described, in which an internal information generation unit 611 that generates internal information from the processing determination unit 412 is provided instead of the internal information generation unit 511 of FIG. 60. The optimizing device 601 of FIG. 62 has the same configuration as the optimizing device 501 of FIG. 60 except for this point.
- the internal information generation unit 611 generates, as internal information, based on, for example, the processing content determined by the processing content determination unit 432 of the processing determination unit 412 and the two types of feature amounts actually detected, a distribution map (for example, FIGS. 65 and 67) of the pixels extracted as telop and the pixels not extracted as telop, with the two types of feature amounts as axes, and displays it on the display unit 403.
- This processing is basically the same as the telop extraction optimizing processing by the optimizing device 501 of FIG. 60 described with reference to the flowchart in FIG. 61; the difference is that processing is added to display, with the selected types of feature amounts as axes, the distribution of whether or not pixels have been extracted as telop.
- In step S351, the feature amount extraction unit 422 of the feature amount detection unit 411 determines whether or not two types of feature amounts have been selected by the feature amount selection unit 423, and repeats the process until the selection is made. When it is determined that the information for selecting the feature amounts has been input from the feature amount selection unit 423, that is, that the user has operated the operation unit 402 and selected two types of feature amounts, the process proceeds to step S352.
- In step S352, the feature amount extraction unit 422 extracts the two selected types of feature amounts for each pixel from the image signal as the input signal, and outputs them to the processing determination unit 412.
- the buffer 421 stores an image signal as an input signal.
- In step S353, the processing determination unit 412 determines the processing content for each pixel based on the two types of input feature amounts and outputs it to the processing unit 413.
- In step S354, the processing unit 413 processes the image signal as the input signal read from the buffer 421 according to the processing content input from the processing determination unit 412, converts it into an image signal that can be displayed on the display unit 403, and outputs it to the display unit 403 for display.
- In step S355, the internal information generation unit 611 generates, as internal information, a distribution map obtained by plotting the processing content determined by the processing content determination unit 432 of the processing determination unit 412, with the two types of feature amounts as axes, and displays it on the display unit 403.
- In step S356, the feature amount detection unit 411 determines whether or not the telop is regarded as having been extracted.
- If, in step S356, an operation signal corresponding to the user's operation is input from the operation unit 402 and it is determined that the telop has not been extracted, the process returns to step S351, and the subsequent processing is repeated.
- When, in step S356, the user operates the operation unit 402 to input an operation signal indicating the end of the process to the feature amount detection unit 411, the process ends.
- In step S355, the internal information generation unit 611 generates a distribution indicating, with respect to the two types of feature amounts detected from the image signal, whether or not each pixel has been extracted as a telop; for example, it is displayed as a two-dimensional distribution diagram as shown in FIG. 65. In the example of FIG. 65, the Laplacian and the inter-frame difference were selected as the two types of feature amounts. The circles in the figure indicate the pixels from which the telop was extracted, and the x marks indicate the pixels from which the telop was not extracted.
- In the case of FIG. 65, the pixels from which the telop is extracted and the pixels from which it is not extracted are in a state where there is no clear boundary between them (the distribution of the telop pixels and that of the non-telop pixels are not separated).
- In such a state, the telop is often not accurately extracted from the background image.
- That is, a state occurs in which the telop portion is not delimited along its proper boundary, and erroneous boundaries 621 and 622 appear.
- In this case, the user judges that the telop has not been extracted, and as a result, the processes of steps S351 to S356 are repeated. Suppose that, by repeating this process, the selected feature amounts become, for example, the Laplacian sum (17 pixels x 17 pixels) (the sum of the Laplacians of the pixels in the range of 17 pixels x 17 pixels centered on the pixel of interest) and the luminance DR (the dynamic range of the luminance values of the pixels in the range of 17 pixels x 17 pixels centered on the pixel of interest), and a distribution map as shown in FIG. 67 is generated. At this time, in FIG. 67, the distribution of pixels extracted as telop and the distribution of pixels not extracted as telop are visually separated.
- the telop of the input image and the other parts are distributed so as to be separated.
- Therefore, by subjecting the Laplacian sum and the luminance DR detected from each pixel as feature amounts to threshold processing or the like, the telop portion can be accurately extracted as shown in FIG. 68.
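The threshold processing mentioned here can be sketched as follows; the threshold values and the sample feature maps are hypothetical, and only the idea that telop pixels show both a large Laplacian sum and a large luminance DR is taken from the text.

```python
def is_telop_pixel(laplacian_sum, luminance_dr, t_lap=500, t_dr=64):
    # t_lap and t_dr are hypothetical thresholds: telop regions tend to have
    # both a large local Laplacian sum (many edges) and a large luminance DR
    return laplacian_sum >= t_lap and luminance_dr >= t_dr

# tiny 2x2 example feature maps (hypothetical values)
lap_sum_map = [[900, 20], [650, 10]]
lum_dr_map = [[120, 5], [80, 12]]
mask = [[is_telop_pixel(l, d) for l, d in zip(lr, dr)]
        for lr, dr in zip(lap_sum_map, lum_dr_map)]
print(mask)  # [[True, False], [True, False]]
```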
- As described above, by repeating the processing of steps S351 to S356 until the user, viewing the two-dimensional distribution over the two selected types of feature amounts together with the image displayed on the display unit 403, can judge that the telop has been extracted, the distribution of the telop pixels and the background pixels over the feature amounts is displayed as internal information relating to the processing of the processing determination unit 412. While grasping this distribution, the user operates the operation unit 402 so that the telop can be accurately extracted from the image signal as the input signal, and can thereby set the combination of feature amounts optimal for the user.
- the configuration of the optimization device 701 is the same as that of the optimization device 601 shown in FIG. 62, except that a processing determination unit 711 is provided instead of the processing determination unit 412.
- the processing determination unit 711 changes the contents of the LUT of the processing content database 433 based on the operation signal from the operation unit 702 (which is the same as the operation unit 402), and, based on the feature amounts input from the feature amount detection unit 411, determines for each pixel the processing that the subsequent processing unit 413 performs on the image signal, and outputs the determined processing content to the processing unit 413.
- the processing content determination unit 721 changes, based on the operation signal input from the operation unit 702, the LUT that determines the processing content for each combination of the two types of feature amounts stored in the processing content database 433. More specifically, in a state where the image signal as the input signal is displayed as it is, the LUT is set so that the pixels in the area designated as the telop via the operation unit 702 are processed as telop and the pixels in the other area are processed as background.
- After the LUT is changed, the processing content determination unit 721 determines, based on the information identifying the feature amounts input from the feature amount recognition unit 431 and the feature amounts themselves, the processing content set in advance for each feature amount stored in the processing content database 433, and outputs the determined processing content to the processing unit 413.
- In step S361, the image signal as the input signal is displayed on the display unit 403 as it is. More specifically, the buffer 421 of the feature amount detection unit 411 receives and stores the image signal as the input signal, and the processing unit 413 reads out the image signal stored in the buffer 421 and outputs it, as it is without processing, to the display unit 403 for display.
- In step S362, the processing content determination unit 721 of the processing determination unit 711 determines whether or not the telop and the background have been designated via the operation unit 702. That is, for example, it is assumed that an unprocessed image signal is displayed as shown in FIG. 72 in the process of step S361. At this time, the user operates the pointer 741 via the operation unit 702 (by dragging or clicking) and roughly designates the telop portion by the range 742 or the like. The process is repeated until, for example, as shown in FIG. 73, the telop portion 752 inside the range 742 and the background portion 751 outside it are designated. When both have been designated, each designated pixel position is stored in a built-in memory (not shown), and the process proceeds to step S363.
- In step S363, the feature amount selection unit 423 of the feature amount detection unit 411 determines whether or not an operation signal selecting two predetermined types of feature amounts has been input from the operation unit 702, and repeats the process until the two types of feature amounts are designated. When the two predetermined types of feature amounts are selected, the process proceeds to step S364.
- In step S364, the feature amount extraction unit 422 extracts from the input signal the two types of feature amounts selected based on the information, input to the feature amount selection unit 423, for selecting the feature amount types, and outputs them to the processing determination unit 711.
- In step S365, the internal information generation unit 611 generates a two-dimensional distribution map with the two types of feature amounts as axes, based on the two types of feature amounts input to the processing determination unit 711 and the information on the pixel positions designated as the telop and the background, and in step S366 displays the two-dimensional distribution map on the display unit 403. More specifically, the feature amount recognition unit 431 of the processing determination unit 711 recognizes the types of the feature amounts and outputs information indicating the types and the feature amounts themselves to the processing content determination unit 721; in addition to the information indicating the feature amounts and their types, the processing content determination unit 721 outputs the internally stored information indicating the pixel positions designated as the telop and the background to the internal information generation unit 611. Based on the information indicating the pixel positions designated as the telop and the background, the internal information generation unit 611 generates, for example, the two-dimensional feature amount distribution map shown in FIG. 74. In the example of FIG. 74, the Laplacian and the inter-frame difference are selected as the feature amounts; the circles in the figure indicate the telop pixels, and the x marks indicate the background pixels.
- In step S367, the processing content determination unit 721 determines whether or not an operation signal indicating that the user has judged the telop and the background to be separated has been input.
- For example, in the two-dimensional distribution diagram of FIG. 74, the distribution of the circles indicating the telop and the distribution of the x marks indicating the background are not completely separated.
- In this case, for example, a boundary portion 753 that belongs to neither the background 751 nor the telop 752 appears, and the background and the telop are not separated.
- When the user thus judges that the telop has not been extracted and tries to change the feature amounts again, the operation unit 702 outputs, in accordance with the user's operation, an operation signal indicating that separation has not been achieved to the processing content determination unit 721 of the processing determination unit 711.
- In this case, in step S367, it is determined that an operation signal indicating that the telop and the background are judged to be separated has not been input, the process returns to step S363, and the subsequent processing is repeated. By this processing, two types of feature amounts are selected again.
- When, by repeating this processing, a two-dimensional distribution in which the telop and the background are separated is displayed, for example, as shown in FIG. 75, the user operates the operation unit 702 to output an operation signal indicating that they are separated to the processing content determination unit 721 of the processing determination unit 711. In this case, in step S367, it is determined that an operation signal indicating that the telop and the background are separated has been input, and the process proceeds to step S368.
- In step S368, the processing content determination unit 721 determines whether or not a telop portion has been designated on the two-dimensional distribution. That is, it is determined whether or not an operation signal designating, with the pointer 741 on the displayed distribution, an area in which a large number of circles indicating the telop are distributed, for example, the area 761 in FIG. 75, has been input from the operation unit 702, and the processing is repeated until a range is designated. If it is determined that the range has been designated, the process proceeds to step S369.
- In step S369, based on the operation signal input from the operation unit 702 designating the range 761 in FIG. 75, the processing content determination unit 721 changes the processing content on the LUT, which indicates for each combination of the feature amounts whether or not the pixel is extracted as a telop, determines the processing content according to the changed LUT, and outputs it to the processing unit 413; the processing unit 413 then extracts the telop from the image signal as the input signal input through the buffer 421 and displays it on the display unit 403.
- More specifically, the processing content determination unit 721 updates the LUT of the processing content database 433, based on the information indicating the range on the two-dimensional distribution as shown in FIG. 75, so that combinations of the two types of feature amounts corresponding to pixels within that range are extracted as the telop, determines the processing content of each pixel according to the updated LUT, and outputs it to the processing unit 413.
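The LUT update just described can be sketched as follows: the LUT holds one entry per combination of the two quantized feature amounts, and designating a rectangular range on the two-dimensional distribution (such as range 761) marks the covered combinations for telop extraction. The quantization into 64 bins and the boolean representation are assumptions for illustration.

```python
import numpy as np

BINS = 64  # quantization levels per feature axis (an assumed value)

def make_lut():
    """One entry per combination of the two quantized feature amounts:
    True = 'extract this combination as telop'."""
    return np.zeros((BINS, BINS), dtype=bool)

def designate_range(lut, f1_lo, f1_hi, f2_lo, f2_hi):
    """Mark every feature combination inside the user-designated
    rectangle (such as range 761) for telop extraction."""
    lut[f1_lo:f1_hi + 1, f2_lo:f2_hi + 1] = True
    return lut

def apply_lut(lut, f1_index, f2_index):
    """Per-pixel decision: is this pixel's feature combination a telop?"""
    return lut[f1_index, f2_index]
```

Narrowing the designated rectangle, as in the re-designation of range 781 below, simply marks fewer combinations as telop.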
- In step S370, the processing content determination unit 721 determines whether or not it has been determined that the telop has been extracted, that is, whether, from the viewpoint of the user, the telop has been extracted or not. For example, when the output image is displayed on the display unit 403 by the processing of step S369 as shown in FIG. 76, the boundaries 771 and 772 between the telop portion and the background portion are not the telop itself, so it cannot be said that the telop has been completely extracted. Therefore, when the user determines that the telop has not been extracted in this way, the user operates the operation unit 702 to input an operation signal indicating that the telop has not been extracted.
- In this case, it is determined in step S370 that the telop has not been extracted, and the process proceeds to step S371.
- In step S371, it is determined whether or not the range of the telop on the two-dimensional distribution is to be re-designated; if so, the process returns to step S368, and the subsequent processing is repeated.
- For example, if re-designation of the range of the telop is selected in the processing of step S371, then in step S368, as shown in the figure, a range 781 narrower than the range 761 in FIG. 75 is set, narrowing the area in which the distribution of circles indicating the telop exists (narrowing to a range in which a larger proportion of the enclosed marks are the circles to be extracted as the telop).
- That is, even if the range 761 on the distribution of the feature amounts in FIG. 75 is set as the telop portion, as shown in FIG. 76 the boundaries 771 and 772 between the telop portion and the background portion are not the telop itself, so the telop is not completely extracted.
- Therefore, the user operates the operation unit 702 to set the range 781, narrower than the range 761, as the range on the distribution of the feature amounts (the range designated as the telop portion).
- the range on the distribution of the feature value extracted as the telop is narrowed, that is, the background portion is more easily removed.
- On the other hand, the telop itself also becomes harder to extract, so the user repeats this processing while looking at the extracted telop, searching for the state in which the telop is extracted optimally.
- When the user determines in step S370 that the telop has been extracted, the user operates the operation unit 702 to input an operation signal indicating that the telop has been extracted to the processing content determination unit 721 of the processing determination unit 711; it is therefore determined that the telop has been extracted, and the processing ends.
- If it is determined in step S371 that the range of the telop is not to be re-designated, the process returns to step S366, and the subsequent processing is repeated.
- As described above, the user first designates a telop and a background portion on the image signal as the input signal, selects two types of feature amounts, and then narrows down, on the resulting two-dimensional distribution, the range designated as the telop; in this way, a telop extraction process that suits the user's preference can be realized.
- In other words, once it is determined in step S365 that designation of the telop and the background has been instructed, it is sufficient that the two types of feature amounts are selected and the range of the telop on the two-dimensional distribution is designated, so that information usable as a template for narrowing down the telop can be generated.
- In the above example, depending on the combination of the two types of feature amounts selected by the user, the processing is divided into processing a pixel as a telop or as the background; in other words, the "processing content" is changed.
- In the above description, the types of feature amounts are determined by the user designating two types of feature amounts with the operation unit 402. However, as shown in the figure, the combination of the two types of feature amounts may instead be changed simply by sending an up or down instruction as an operation signal using two predetermined operation buttons. That is, as the initial state, the processing is executed with the combination of the feature amounts a and b shown in state A; when down is instructed, the processing is executed with the combination of the feature amounts b and c shown in state B; and when down is instructed again, the processing is executed with the combination of the feature amounts c and d shown in state C. Conversely, when up is instructed in state C, the state returns to state B, and when up is instructed in state B, the state returns to state A. In this way, the user can change the feature amounts one after another without being particularly conscious of their types, so that the combination of feature amounts for efficient telop extraction can be narrowed down.
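The up/down switching of feature-amount combinations can be modeled as a tiny state machine. The states A to C and the feature amounts a to d follow the example above; clamping at both ends of the sequence is an assumption, since the text only describes up returning from C to B and from B to A.

```python
# States A, B, C from the example, each a combination of two feature
# amounts (a-d as in the text).
STATES = [("a", "b"), ("b", "c"), ("c", "d")]  # indices 0, 1, 2 = A, B, C

def step(state_index, button):
    """Advance the combination on a 'down' instruction, go back on
    'up'; the sequence is clamped at both ends (an assumption)."""
    if button == "down":
        return min(state_index + 1, len(STATES) - 1)
    if button == "up":
        return max(state_index - 1, 0)
    return state_index
```

Two buttons thus cycle through all prepared combinations without the user naming any feature type explicitly.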
- Next, referring to FIG. 80, the configuration of an optimizing device 801 provided with a feature amount detection unit 811 capable of generating new feature amounts from existing feature amounts, in place of the feature amount detection unit 411 of the optimizing device 401, will be described.
- That is, the configuration in FIG. 80 is the same as that of the optimizing device 401 except that the feature amount detection unit 811 is provided in place of the feature amount detection unit 411.
- the operation unit 802 is the same as the operation unit 402.
- the configuration of the feature amount detection unit 811 in FIG. 80 will be described.
- The buffer 421 and the feature amount extraction unit 422 are the same as those of the feature amount detection unit 411 shown in FIG. 56.
- The feature amount selection unit 821 controls the feature amount extraction unit 422 based on operation information designating feature amounts input from the operation unit 802, causing it to extract the two designated types from the feature amounts prepared in advance and output them to the processing determination unit 412, or outputs feature amounts stored in advance in the feature amount database 823 to the processing determination unit 412. More specifically, the feature amount database 823 stores feature amount information describing each type of feature amount and the method of detecting it.
- The feature amount extraction unit 422 reads out, from the feature amount database 823, the feature amount information corresponding to the type of feature amount selected by the feature amount selection unit 821, and detects the selected feature amount from the input signal according to the detection method recorded in that feature amount information.
- For example, the inter-frame difference is the difference between the pixel of interest (x, y) in the current frame and the pixel (x, y) at the same spatial position one frame before; other feature amounts include the inter-field difference and the background difference.
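A minimal sketch of how the feature amount database 823 might pair each feature type with its detection method, using the difference-based feature amounts mentioned here; the uniform function signature and the dictionary layout are assumptions for illustration.

```python
import numpy as np

def inter_frame_difference(frame, prev_frame, background=None):
    """|pixel (x, y) of the current frame - pixel (x, y) one frame before|."""
    return np.abs(frame - prev_frame)

def background_difference(frame, prev_frame, background):
    """|current pixel - stored background pixel| at each position."""
    return np.abs(frame - background)

# Role of the feature amount database 823 in miniature: each feature
# type is stored together with its detection method.
FEATURE_DATABASE = {
    "inter-frame difference": inter_frame_difference,
    "background difference": background_difference,
}

def detect(feature_name, frame, prev_frame, background):
    """Look up the detection method for the selected feature type and apply it."""
    return FEATURE_DATABASE[feature_name](frame, prev_frame, background)
```

Adding a newly processed feature amount then amounts to registering one more (name, detection method) entry in the dictionary.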
- The feature amount processing unit 822 generates a new feature amount from the feature amounts stored in the feature amount database 823 based on the operation signal input by the user. More specifically, it generates new feature amount information from the stored feature amount information, and the feature amount extraction unit 422 extracts the new feature amount based on that information.
- For example, the DR of the feature amount A (dynamic range: the value obtained by reading out the values of the feature amount A of a plurality of pixels present at predetermined positions and taking, for each pixel, the difference between the minimum and maximum of those values) may serve as the feature amount information corresponding to a new feature amount A'. Similarly, the maximum value, the median value, the minimum value, the sum, the variance, the number of pixels having a value equal to or greater than a threshold (the threshold can be set), or a linear combination of a plurality of feature amounts may be obtained and used as new feature amount information.
- In the linear combination, A, B, and C are the coefficients applied to the respective feature amounts.
- The feature amounts stored in the feature amount database 823 are of the types A to C extracted by the feature amount extraction unit 422 and of the types A' to C' processed from the feature amounts A to C by the feature amount processing unit 822 (what is actually stored is the respective feature amount information, that is, information indicating each type and its detection method).
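The processing operations listed here can be illustrated as follows: a new feature amount A' is produced by taking the dynamic range (DR) of feature amount A over a small neighborhood, and another by a linear combination of existing feature amounts. The neighborhood handling (edge padding) and the function names are assumptions for illustration.

```python
import numpy as np

def neighborhood_dr(feature, scale=3):
    """New feature A' = dynamic range (maximum - minimum) of feature A
    over a scale x scale area around each pixel; edges are padded by
    repeating border values (an assumption)."""
    h, w = feature.shape
    r = scale // 2
    p = np.pad(feature.astype(float), r, mode="edge")
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            window = p[y:y + scale, x:x + scale]
            out[y, x] = window.max() - window.min()
    return out

def linear_combination(features, coefficients):
    """New feature amount as a linear combination of existing ones."""
    return sum(c * f.astype(float) for c, f in zip(coefficients, features))
```

The `scale` parameter corresponds to the 3 × 3 or 5 × 5 area mentioned later for the scale setting field.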
- As the feature amount information stored in the feature amount database 823, information designated at the time of extraction by the feature amount extraction unit 422 may be stored, information stored in advance in the feature amount database 823 may be processed by the feature amount processing unit 822, or information may be stored in advance by another method.
- In step S381, the feature amount extraction unit 422 of the feature amount detection unit 811 determines whether or not two types of feature amounts have been selected by the feature amount selection unit 821; if not, the process proceeds to step S386. If, for example, it is determined that information for selecting the feature amounts has been input from the feature amount selection unit 821, that is, that the user has operated the operation unit 802 to select two types of feature amounts, the process proceeds to step S382.
- In step S382, the feature amount extraction unit 422 extracts the two selected feature amounts for each pixel from the image signal as the input signal, and outputs them to the processing determination unit 412.
- the buffer 421 stores an image signal as an input signal.
- the processing determining unit 412 determines the processing content for each pixel based on the two types of input feature amounts and outputs the processing content to the processing unit 413.
- In step S384, the processing unit 413 processes the image signal as the input signal input via the buffer 421 according to the processing content input from the processing determination unit 412, converts it into an image signal that can be displayed on the display unit 403, and outputs it to the display unit 403 for display.
- In step S385, the feature amount detection unit 811 determines whether or not the telop is regarded as having been extracted. That is, when the user looks at the image displayed on the display unit 403 and determines that the telop has not been extracted, the user operates the operation unit 802 again in order to change the combination of the feature amounts and retry the telop extraction processing. When an operation signal corresponding to this operation is input from the operation unit 802, the process proceeds to step S386.
- In step S386, the feature amount processing unit 822 of the feature amount detection unit 811 determines whether or not processing of a feature amount has been instructed; if not, the process returns to step S381.
- On the other hand, when the feature amount processing unit 822 determines in step S386 that an operation signal instructing processing of a feature amount has been input from the operation unit 802, the process proceeds to step S387.
- In step S387, the feature amount processing unit 822 determines whether or not a base feature amount has been designated, and repeats the processing until information designating the base feature amount is input. For example, when the operation unit 802 is operated and an operation signal designating the feature amount A is input, it is determined that the feature amount A of the image signal as the input signal has been designated as the base, and the process proceeds to step S388.
- In step S388, the feature amount processing unit 822 determines whether or not the processing content has been instructed, and repeats the processing until it is instructed. For example, when the operation unit 802 is operated and an operation signal instructing DR is input, it is determined that the processing content has been designated, and the process proceeds to step S389.
- In step S389, the feature amount processing unit 822 processes the designated feature amount according to the designated processing content to obtain a new feature amount, stores it in the feature amount database 823, and the process returns to step S381. That is, in this case, the feature amount processing unit 822 reads out the feature amount A from the feature amount database 823 and obtains its DR, which is the designated processing content, thereby generating a new feature amount A'. It then stores the result in the feature amount database 823, and the process returns to step S381.
- If it is determined in step S385, by the user's subjective judgment, that the telop has been extracted, the user operates the operation unit 802 so that an operation signal indicating the end of the processing is input to the feature amount detection unit 811, at which point the processing ends. That is, according to the above-described processing, the processing of steps S381 to S389 is repeated until the user sees the image displayed on the display unit 403 and determines that the telop has been extracted.
- In this manner, the processing content is determined according to the two types of feature amounts designated by the user, and the telop is extracted from the image signal as the input signal; it can therefore be said that the "processing content" has been changed so that the user can obtain the desired output signal.
- Furthermore, since the two axes of the feature amounts (the two types of feature amounts to be selected) are switched and new feature amounts are set (the types of feature amounts increase), the algorithm that determines the processing content (for example, whether or not to process a pixel as a telop) changes with the combination of the feature amounts; it can therefore be said that the "processing structure" of the "processing content" is also changed.
- In general, an algorithm for telop extraction is obtained by switching the types of features to be extracted by trial and error. However, in a method using fixed features, changing the assumed features requires rewriting the program or recreating the system itself; heuristically obtaining the optimal algorithm for telop extraction therefore requires recreating the system many times, which is quite difficult in practice.
- In contrast, since the optimizing device of the present invention can extract new feature amounts in real time and present the feature amount distribution, trial and error by the user is easy, and the possibility of finding the optimal feature amounts for telop extraction is improved.
- Next, referring to FIG. 83, the configuration of an optimizing device 901 in which a feature amount detection unit 911 and an internal information generation unit 912 are provided in the optimizing device 801 of FIG. 80 will be described.
- That is, the optimizing device 901 is basically the same as the optimizing device 801 shown in FIG. 80, except that the feature amount detection unit 911 is provided in place of the feature amount detection unit 811, and an internal information generation unit 912 is newly added.
- The feature amount detection unit 911 has basically the same configuration as the feature amount detection unit 811 in FIG. 81, except that a feature amount selection unit 921 is provided in place of the feature amount selection unit 821 and a feature amount processing unit 922 is provided in place of the feature amount processing unit 822. The basic functions are the same in each case; the differences are that the feature amount selection unit 921 supplies information on the types of the selected feature amounts to the internal information generation unit 912, and that the feature amount processing unit 922 outputs image information related to the instruction of the processing content to the display unit 403.
- The internal information generation unit 912 is similar to the internal information generation unit 511 described earlier.
- the operation unit 902 is the same as the operation unit 402.
- In step S391, the feature amount extraction unit 422 of the feature amount detection unit 911 determines whether or not two types of feature amounts have been selected by the feature amount selection unit 921; if not, the process proceeds to step S397. If, for example, it is determined that information for selecting the feature amounts has been input from the feature amount selection unit 921, the process proceeds to step S392.
- In step S392, the internal information generation unit 912 extracts information indicating the types of the two selected feature amounts from the feature amount selection unit 921, and displays the names of the two selected feature amount types on the display unit 403.
- In step S393, the feature amount extraction unit 422 extracts the selected two types of feature amounts for each pixel from the image signal as the input signal, and outputs them to the processing determination unit 412.
- the buffer 421 stores an image signal as an input signal.
- the processing determining unit 412 determines the processing content for each pixel based on the two types of input feature amounts and outputs the processing content to the processing unit 413.
- In step S395, the processing unit 413 processes the image signal as the input signal input through the buffer 421 according to the processing content input from the processing determination unit 412, converts it into an image signal that can be displayed on the display unit 403, and outputs it to the display unit 403 for display.
- In step S396, the feature amount detection unit 911 determines whether or not the telop is regarded as having been extracted. That is, when the user looks at the image displayed on the display unit 403 and determines that the telop has not been extracted, the user operates the operation unit 902 again in order to change the combination of the feature amounts and retry the telop extraction processing. When an operation signal corresponding to this operation is input from the operation unit 902, the process proceeds to step S397.
- In step S397, the feature amount processing unit 922 of the feature amount detection unit 911 determines whether or not processing of a feature amount has been instructed; if not, the process returns to step S391. On the other hand, when the feature amount processing unit 922 determines in step S397 that an operation signal instructing processing of a feature amount has been input from the operation unit 902, the process proceeds to step S398.
- In step S398, the feature amount processing unit 922 displays, for example, an instruction screen for the processing content as shown in FIG. 86 on the display unit 403.
- On this screen, a basic feature amount display section 931 is provided on the left side of the figure, and the feature amounts currently stored in the feature amount database 823 are displayed there as basic feature amounts. In this case, the feature amounts A to C and A' to C' are displayed. On the right side, a processing content selection box 932 is displayed.
- In the selection box 932, DR, maximum value, minimum value, median value, sum, variance, number of pixels taking values equal to or greater than a threshold, and linear combination are displayed; a field 932a for setting the threshold is provided, and a field 932b for selecting the feature amounts used when linear combination is selected is also provided.
- a scale setting field 933 for setting the scale of each value is displayed.
- The scale value indicates the area around the pixel of interest; for example, when detecting DR, it is a value indicating the area of pixels required, such as 3 pixels × 3 pixels or 5 pixels × 5 pixels.
- In step S399, the feature amount processing unit 922 determines whether or not a base feature amount has been designated, and repeats the processing until information designating the base feature amount is input. For example, when the operation unit 902 is operated and an operation signal designating the feature amount A is input, it is determined that the feature amount A of the image signal as the input signal has been designated as the base, and the process proceeds to step S400.
- In step S400, the feature amount processing unit 922 determines whether or not the processing content has been instructed, and repeats the processing until it is instructed. For example, when the operation unit 902 is operated and an operation signal instructing DR is input, it is determined that the processing content has been designated, and the process proceeds to step S401.
- In step S401, the feature amount processing unit 922 processes the designated feature amount according to the designated processing content, stores the processed feature amount in the feature amount database 823, and the process returns to step S391.
- If it is determined that the telop has been extracted, the user operates the operation unit 902 to input an operation signal indicating the end of the processing to the feature amount detection unit 911, at which point the processing ends.
- As described above, by operating while viewing the displayed base feature amounts and the processing content, the user can recognize the feature amounts with which optimal processing can be realized, and can thereafter immediately designate the feature amounts that realize the processing optimal for that user. Moreover, by inputting the input information necessary for generating new feature amounts in accordance with the display screen, the number of types of feature amounts the user can select is increased, so that many combinations of feature amounts can be set efficiently.
- In this manner, the processing content is determined according to the two types of feature amounts designated by the user, and the telop is extracted from the image signal as the input signal; the "processing content" is thus changed so that the user can obtain the desired output signal.
- Furthermore, since the two axes of the feature amounts (the two types of feature amounts to be selected) are switched and new feature amounts are set (the types of feature amounts increase), the algorithm that determines the processing content (for example, whether or not to process a pixel as a telop) changes with the combination of the feature amounts; it can therefore be said that the "processing structure" of the "processing content" is also changed.
- Next, referring to FIG. 87, the configuration of an optimizing device 1001 having an internal information generation unit 1011 in place of the internal information generation unit 912 of the optimizing device 901 of FIG. 83 will be described.
- Its configuration is the same as that of the optimizing device 901 of FIG. 83 except that the internal information generation unit 912 is replaced with the internal information generation unit 1011 and the processing determination unit 412 is further replaced with the processing determination unit 711.
- The internal information generation unit 1011 is basically the same as the internal information generation unit 912, but additionally reads out the information on the processing content determined for each pixel by the processing content determination unit 721 of the processing determination unit 711 and displays it on the display unit 403 (the function of the internal information generation unit 611 of FIG. 69 is added).
- the operation unit 1002 is the same as the operation unit 402.
- In step S411, the buffer 421 of the feature amount detection unit 911 receives and stores the image signal as the input signal, and the processing unit 413 reads out the stored image signal and outputs it to the display unit 403 as it is, without processing, for display.
- In step S412, the processing content determination unit 721 of the processing determination unit 711 determines whether or not a telop and a background have been designated via the operation unit 1002; when they have been designated, the process proceeds to step S413.
- In step S413, the feature amount extraction unit 422 of the feature amount detection unit 911 determines whether or not two types of feature amounts have been selected by the feature amount selection unit 921; if not, the process proceeds to step S421. If, for example, it is determined that information for selecting the feature amounts has been input from the feature amount selection unit 921, the process proceeds to step S414.
- In step S414, the internal information generation unit 912 extracts information indicating the two selected feature amounts from the feature amount selection unit 921, and displays (presents) the names of the two selected feature amount types on the display unit 403.
- In step S415, the feature amount extraction unit 422 extracts the selected two types of feature amounts for each pixel from the image signal as the input signal, and outputs them to the processing determination unit 711.
- the buffer 421 stores an image signal as an input signal.
- the processing determining unit 711 determines the processing content for each pixel based on the two types of input feature amounts, and outputs the processing content to the processing unit 413.
- In step S417, the internal information generation unit 1011 generates a two-dimensional distribution map with the two types of feature amounts as axes, based on the two types of feature amounts input to the processing determination unit 711 and the information on the pixel positions designated as the telop and the background; in step S418, the two-dimensional distribution map is displayed on the display unit 403.
- More specifically, the feature amount recognition unit 431 of the processing determination unit 711 recognizes the types of the feature amounts and outputs information indicating the types together with the feature amounts themselves to the processing content determination unit 721; the processing content determination unit 721 outputs, in addition to the feature amounts and the information indicating their types, the information indicating the pixel positions designated as the telop and the background to the internal information generation unit 1011; and the internal information generation unit 1011 generates the two-dimensional distribution map of the feature amounts based on the information indicating the pixel positions designated as the telop and the background.
- In step S419, the processing unit 413 processes the image signal as the input signal input via the buffer 421 according to the processing content input from the processing determination unit 711, converts it into an image signal that can be displayed on the display unit 403, and outputs it to the display unit 403 for display.
- In step S420, the feature amount detection unit 911 determines whether or not the telop is regarded as having been extracted. That is, when the user looks at the image displayed on the display unit 403 and determines that the telop has not been extracted, the user operates the operation unit 1002 again in order to change the combination of the feature amounts and retry the telop extraction processing. When an operation signal corresponding to this operation is input from the operation unit 1002, the process proceeds to step S421.
- In step S421, the feature amount processing unit 922 of the feature amount detection unit 911 determines whether or not processing of a feature amount has been instructed; if not, the process returns to step S413. On the other hand, when the feature amount processing unit 922 determines in step S421 that an operation signal instructing processing of a feature amount has been input from the operation unit 1002, the process proceeds to step S422.
- In step S422, the feature amount processing unit 922 displays the instruction screen for the processing content (FIG. 86).
- In step S423, the feature amount processing unit 922 determines whether or not a base feature amount has been designated, and repeats the processing until information designating the base feature amount is input. If it is determined that a base feature amount has been designated, the process proceeds to step S424.
- In step S424, the feature amount processing unit 922 determines whether or not the processing content has been instructed, and repeats the processing until it is instructed. If it is determined that the processing content has been input, the process proceeds to step S425. In step S425, the feature amount processing unit 922 processes the designated base feature amount according to the designated processing content, stores the processed feature amount in the feature amount database 823, and the process returns to step S413.
- If it is determined in step S420, by the user's subjective judgment, that the telop has been extracted, the user operates the operation unit 1002 so that an operation signal indicating the end of the processing is input to the feature amount detection unit 911, at which point the processing ends.
- That is, the processing of steps S411 to S425 is repeated until the user looks at the image displayed on the display unit 403 and determines that the telop has been extracted.
- By increasing the types of feature amounts that the user can select, many combinations of feature amounts can be set, making it possible to execute the processing optimal for the user.
- a processing instruction screen necessary for generating a new feature by processing an existing feature is displayed, the user can efficiently execute the processing according to the display.
- Next, referring to FIG. 89, the configuration of an optimizing device 1101 will be described in which a feature amount detection unit 1111, a processing determination unit 1112, and an operation unit 1102 are provided in place of the feature amount detection unit 411, the processing determination unit 412, and the operation unit 402 of the optimization device 401 of FIG. 56.
- The optimizing device 1101 shown in FIG. 89 has the same configuration as the optimization device 401 except that the feature amount detection unit 1111, the processing determination unit 1112, and the operation unit 1102 are newly provided.
- The feature amount detection unit 1111 differs from the feature amount detection unit 411 of FIG. 56 in that the feature amount selection unit 423 is not provided and the feature amount extraction unit 422 extracts two preset types of feature amounts; the configuration is otherwise the same.
- The processing determination unit 1112 updates the LUT stored in the processing content database 433: new history information is stored, and the LUT is changed according to that history information. The configuration of the processing determination unit 1112 will be described later with reference to FIG. 90.
- the operation unit 1102 is the same as the operation unit 402.
- The processing determination unit 1112 of FIG. 90 has the same configuration as the processing determination unit 412, except that a processing content determination unit 1121 is provided in place of the processing content determination unit 432 and a history memory 1122 is added.
- The processing content determination unit 1121 stores the history information of operations for changing the LUT stored in the processing content database 433 in the history memory 1122, and changes the LUT based on that history information. Its other functions are the same as those of the processing content determination unit 432 of the processing determination unit 412.
- In step S431, the feature amount extraction unit 422 of the feature amount detection unit 1111 extracts the two predetermined types of feature amounts from the image signal as the input signal and outputs them to the processing determination unit 1112. At this time, the image signal as the input signal is also stored in the buffer 421.
- In step S432, the processing content determination unit 1121 of the processing determination unit 1112 refers, based on the types and values of the feature amounts input from the feature amount recognition unit 431, to the LUT stored in the processing content database 433, determines the processing content for each pixel, and outputs it to the processing unit 413.
- In step S433, the processing unit 413 processes each pixel according to the processing content input from the processing determination unit 1112 and outputs the result to the display unit 403 for display.
- In step S434, the processing content determination unit 1121 of the processing determination unit 1112 determines whether or not an operation signal for changing the LUT has been input from the operation unit 1102. That is, the user sees the image displayed on the display unit 403, subjectively determines whether or not the preferred processing has been performed, operates the operation unit 1102 accordingly, and a corresponding operation signal is input.
- If, in step S434, an operation signal requesting a change of the LUT has been input, that is, if the user's preferred processing has not been performed, the process proceeds to step S435.
- In step S435, it is determined whether the LUT change processing can be executed by the auto LUT change processing.
- Here, the LUT change processing includes the manual LUT change processing and the auto LUT change processing. The details of determining whether or not the auto LUT change processing is possible will be described later.
- If it is determined in step S435 that the auto LUT change processing is not possible, the process proceeds to step S436, where the manual LUT change processing is executed.
- The LUT, as shown in FIG. 92, is a table showing the processing content determined for each combination of the two feature amounts.
- FIG. 92 shows the case where feature amounts A and B are used as the two types of feature amounts.
- In FIG. 92, each feature amount is classified into eight stages (64 classes in total). Each feature amount is normalized to a value from 0 to 1.0; the value Va of feature amount A is divided, from the left, into the ranges 0 ≤ Va < 1/8, 1/8 ≤ Va < 2/8, 2/8 ≤ Va < 3/8, 3/8 ≤ Va < 4/8, 4/8 ≤ Va < 5/8, 5/8 ≤ Va < 6/8, 6/8 ≤ Va < 7/8, and 7/8 ≤ Va ≤ 8/8, and the value Vb of feature amount B is divided, from the top, into the ranges 0 ≤ Vb < 1/8, 1/8 ≤ Vb < 2/8, 2/8 ≤ Vb < 3/8, 3/8 ≤ Vb < 4/8, 4/8 ≤ Vb < 5/8, 5/8 ≤ Vb < 6/8, 6/8 ≤ Vb < 7/8, and 7/8 ≤ Vb ≤ 8/8.
- Each processing content is classified into the three types X, Y, and Z in the figure according to the combination of feature amounts in each range: for example, the processing content is X in the range where 0 ≤ Va < 3/8 and 0 ≤ Vb < 3/8, Y in the range where 4/8 ≤ Va < 6/8 and 4/8 ≤ Vb < 6/8, and Z in the other ranges.
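- The classification above can be sketched as follows. This is an illustrative reconstruction (function and variable names are not from the patent), assuming two feature amounts normalized to [0, 1.0] and quantized into eight stages each:

```python
# Hypothetical sketch of the LUT lookup described above: two feature
# amounts, each normalized to [0, 1.0], are quantized into 8 stages
# (64 classes in total), and the addressed cell selects X, Y, or Z.
def quantize(v, stages=8):
    """Map a normalized feature value in [0, 1.0] to a stage 0..7."""
    return min(int(v * stages), stages - 1)

def default_content(va, vb):
    """Initial assignment following the example ranges of FIG. 92 (assumed)."""
    if va < 3 / 8 and vb < 3 / 8:
        return "X"
    if 4 / 8 <= va < 6 / 8 and 4 / 8 <= vb < 6 / 8:
        return "Y"
    return "Z"

# Build the 8x8 LUT: lut[row][col], where row indexes feature B (top down)
# and col indexes feature A (left to right); cell centers sample the ranges.
lut = [[default_content((col + 0.5) / 8, (row + 0.5) / 8)
        for col in range(8)] for row in range(8)]

def look_up(lut, va, vb):
    """Return the processing content for a pixel with features (va, vb)."""
    return lut[quantize(vb)][quantize(va)]
```

The table, not the rule, is what the processing determination unit consults at run time, which is why later sections can rewrite individual cells.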
- The processing content can be specified in various ways. For example, as shown in FIGS. 93 to 95, the prediction taps used for processing by the processing unit 413 can be designated for the target pixel.
- FIG. 93 shows the processing content X. With the target pixel denoted P0, taps P1 and P2 are set spatially in the x direction centered on the target pixel P0, taps P3 and P4 are similarly set in the y direction around the target pixel P0, and taps P5 and P6 are set after and before the target pixel P0 in the time direction (for example, at the same pixel position, tap P6 one frame before and tap P5 one frame after). That is, the processing content X is processing by a so-called spatiotemporal filter.
- FIG. 94 shows the processing content Y. In place of the taps P3 and P4 of the spatiotemporal filter of FIG. 93, a tap P12 at a timing still earlier than the tap P6 in the time direction and a tap P11 still later than the tap P5 are set. That is, the processing content Y is processing by a so-called temporal filter.
- FIG. 95 shows the processing content Z. In place of the taps P5 and P6, taps further away from the target pixel in the x direction are set: a tap P21 at a position further from the target pixel than the tap P1, and a tap P22 at a position further from the target pixel than the tap P2. That is, the processing content Z is processing by a so-called spatial filter.
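- The three tap layouts can be expressed as sets of offsets from the target pixel. The following sketch is hypothetical: the exact offsets are assumed from the description of FIGS. 93 to 95, with dt counted in frames (negative meaning an earlier frame):

```python
# Illustrative tap layouts as (dx, dy, dt) offsets from the target pixel P0.
# The offsets are assumptions, not taken verbatim from the figures.
TAPS = {
    "X": [(0, 0, 0), (-1, 0, 0), (1, 0, 0),   # P0, P1, P2: x direction
          (0, -1, 0), (0, 1, 0),              # P3, P4: y direction
          (0, 0, -1), (0, 0, 1)],             # P6, P5: one frame before/after
    "Y": [(0, 0, 0), (-1, 0, 0), (1, 0, 0),   # temporal filter: more frames
          (0, 0, -1), (0, 0, 1),
          (0, 0, -2), (0, 0, 2)],             # P12, P11: further in time
    "Z": [(0, 0, 0), (-1, 0, 0), (1, 0, 0),   # spatial filter: wider in x
          (0, -1, 0), (0, 1, 0),
          (-2, 0, 0), (2, 0, 0)],             # P21, P22: further in x
}

def extract_taps(frames, t, x, y, content):
    """Collect the tap pixel values for the target pixel (x, y) in frame t."""
    return [frames[t + dt][y + dy][x + dx] for dx, dy, dt in TAPS[content]]
```

Changing a cell of the LUT thus changes which of these offset sets is applied at pixels whose feature pair falls into that cell.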
- The types of processing are not limited to the three shown in the example of FIG. 92, and may of course be divided into other types.
- For example, a binarization process that divides the image into white and black may be performed.
- The binarization may also specify, as in the above-described example, whether or not each pixel is extracted as a telop portion.
- Furthermore, the number of types of processing content may be three or more.
- Next, the manual LUT change processing in step S436 of FIG. 91 will be described.
- In step S441, the processing content determination unit 1121 determines whether a pixel position and processing content have been designated by an operation signal from the operation unit 1102, and repeats the processing until they are designated. That is, for example, when the screen shown in FIG. 97 is displayed on the display unit 403, the user operates a pointer 1131 on the image displayed on the display unit 403 at the pixel position where the processing is to be changed; then, for example, a drop-down list 1132 is displayed as shown in FIG. 97, from which one of the processing contents X, Y, and Z can be designated.
- When this designation is made, it is determined in step S441 that the pixel position and the processing content have been designated, and the process proceeds to step S442.
- Here, the pixel position P41 is selected, and the processing content X is selected.
- In step S442, the processing content determination unit 1121 reads out the combination of the two types of feature amounts for the designated pixel position. More specifically, the processing content determination unit 1121 reads out, from among the feature amounts detected by the feature amount detection unit 1111, the combination of the two types of feature amounts corresponding to the designated pixel.
- In step S443, the processing content determination unit 1121 changes the processing content corresponding to the combination of those feature amounts to the processing content designated in step S441.
- In step S444, the processing content determination unit 1121 stores the changed pixel position and processing content in the history memory 1122.
- In step S445, the processing content determination unit 1121 determines whether or not there is another LUT change. If it is determined that LUT changes are to continue, that is, if an operation signal instructing another LUT change is input from the operation unit 1102, the process returns to step S441; if it is determined that there is no further LUT change, that is, if an operation signal indicating the end of the LUT changes is input, the processing ends.
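- Steps S441 to S445 amount to rewriting the one LUT cell addressed by the designated pixel's feature pair and recording the change. A minimal sketch, with hypothetical names and 0-indexed feature stages:

```python
# Manual LUT change (steps S441-S445), simplified. `features` maps a pixel
# position to its quantized (feature A stage, feature B stage) pair; the
# `history` list stands in for the history memory 1122.
history = []

def manual_change(lut, features, pixel_pos, new_content):
    a_stage, b_stage = features[pixel_pos]             # S442: read the feature pair
    lut[b_stage][a_stage] = new_content                # S443: change the LUT cell
    history.append(((a_stage, b_stage), new_content))  # S444: record in the history
```

Because the change is keyed by the feature pair, every other pixel sharing that pair is affected, not only the pixel the user pointed at.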
- That is, in step S442, the feature amounts of the pixel at the pixel position P41 are obtained, and in step S443, as shown in FIG. 97, the processing content is changed to X; in other words, the processing content at the position (5, 3) on the LUT is changed from processing content Y to processing content X.
- Furthermore, not only the designated cell but also a region of the LUT near the changed position may be changed to X or Y. That is, for each cell, the distance from the position (4, 2) on the LUT is compared with the distance from the position (5, 2) on the LUT, and the processing content of every cell is changed to that of the nearer changed position. In this case, the left half area becomes the processing content X, and the right half area becomes the processing content Y.
- For example, when the processing contents of the feature amount combinations at the positions (4, 2) and (7, 7) on the LUT are changed to X and those at the positions (5, 2) and (4, 5) are changed to Y, then, as shown in FIG. 103B, the cells at (1, 1), (2, 1), (3, 1), (4, 1), (1, 2), (2, 2), (3, 2), (4, 2), (1, 3), (2, 3), (3, 3), and so on, which are nearer to a position changed to X, are changed to X.
- Similarly, when the processing contents of the feature amount combinations at the positions (4, 2) and (7, 7) on the LUT are changed to X and those at the positions (2, 3), (5, 2), (4, 5), and (7, 4) are changed to Y, then, as shown in FIG. 104B, the processing content of the cells at (3, 1), (3, 2), (4, 1), (4, 2), (4, 3), (5, 7), (5, 8), (6, 6), (6, 7), (6, 8), (7, 6), (7, 7), (7, 8), (8, 6), (8, 7), and (8, 8) is X, and that of the remaining cells is Y.
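- The nearest-changed-position rule described above can be sketched as follows. This is an assumed reading: each cell takes the content of the closest explicitly changed position, using Euclidean distance on the 8x8 grid, with positions 1-indexed as in the text:

```python
# Propagate a few explicitly changed LUT positions over the whole table:
# every cell adopts the processing content of the nearest changed position.
def propagate(changed, size=8):
    """changed: {(col, row): content}; returns a full size x size LUT dict."""
    lut = {}
    for col in range(1, size + 1):
        for row in range(1, size + 1):
            nearest = min(changed,
                          key=lambda p: (p[0] - col) ** 2 + (p[1] - row) ** 2)
            lut[(col, row)] = changed[nearest]
    return lut

# Example from the text: (4, 2) changed to X, (5, 2) changed to Y splits
# the table into a left half of X and a right half of Y.
lut = propagate({(4, 2): "X", (5, 2): "Y"})
```

With only two changed positions on the same row, the split is the vertical bisector between them, which matches the left-half/right-half description.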
- If it is determined in step S435 that the auto LUT change processing is possible, the process proceeds to step S437, where the auto LUT change processing is executed.
- In step S461, the processing content determination unit 1121 obtains the group existing for each processing content in the distribution of the update history stored in the history memory 1122. That is, as shown in FIG. 106, the history memory 1122 stores, separately from the LUT, a history table in which the positions on the LUT designated for change by the manual LUT change processing described above and the designated processing contents are recorded.
- The history table shown in FIG. 106 indicates that instructions were issued to change the processing contents at (3, 3), (8, 3), (2, 5), and (6, 6) on the LUT, and that the processing contents X, X, X, and Y were designated, respectively.
- Here, a group for each processing content refers to a region in which cells of that processing content exist on the history table at a predetermined density or more and occupy a predetermined area or more.
- For example, a group 1151 is formed on the history table as shown in FIG. 107, and the processing content determination unit 1121 obtains this group 1151.
- The group 1151 is a group of the processing content X; groups are likewise obtained for each of the other processing contents.
- Whether or not the auto LUT change processing is possible in step S435 of FIG. 91 is determined based on whether or not such a processing content group exists: if a group exists for some processing content, the auto LUT change processing is determined to be possible; otherwise, it is determined not to be possible.
- In step S462, the processing content determination unit 1121 detects the centroid position of the group obtained in step S461. That is, for example, in the case of FIG. 107, where the group 1151 is formed, the centroid 1161 is obtained from all the positions on the history table at which the processing content X in the group 1151 is designated.
- In step S463, the processing content determination unit 1121 changes the processing contents of the cells on the LUT corresponding to the feature amount combinations of the positions on the history table existing within a predetermined range from the centroid position of the group to the processing content constituting the group, and the processing ends. That is, in FIG. 107, all the processing contents on the LUT corresponding to the positions on the history table existing within the range 1162, a circle of predetermined radius centered on the centroid position 1161, are changed to the processing content X constituting the group.
- For example, when the LUT is configured as shown in FIG. 108, the processing content determination unit 1121 changes the processing content at (2, 3) on the LUT, which lies within the range, to the processing content X of the group, while the processing content at (3, 4) on the LUT, which lies outside the range, is maintained as it is. Through this processing, the processing contents (cell information) on the LUT are automatically changed. This processing may be executed repeatedly at predetermined time intervals, not only at the timing when the user instructs the LUT change processing.
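- The auto LUT change of steps S461 to S463 can be sketched as below. The grouping criterion is simplified here to "all history entries sharing a processing content" rather than the density-and-area test; names, radius, and the return convention are illustrative:

```python
# Simplified auto LUT change (steps S461-S463): take the centroid of the
# history positions designated with `content`, then rewrite every LUT cell
# whose recorded position lies within `radius` of that centroid.
def auto_change(lut, history, content, radius=2.0):
    """history: list of ((col, row), content) entries; lut: dict cell -> content."""
    pts = [pos for pos, c in history if c == content]
    if not pts:                      # no group found: auto change not possible
        return False
    # Step S462: centroid of the positions designated with this content.
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    # Step S463: change cells within the radius to the group's content.
    for pos, _ in history:
        if (pos[0] - cx) ** 2 + (pos[1] - cy) ** 2 <= radius ** 2:
            lut[pos] = content
    return True
```

The `False` return corresponds to the case where step S435 would fall back to the manual LUT change processing.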
- If, in step S434, a change of the LUT has not been instructed, that is, if the user has seen the image displayed on the display unit 403 and determined that the preferred image has been generated, the processing ends.
- Next, referring to FIG. 110, the configuration of an optimizing device will be described in which a processing determination unit 1181 is provided in place of the processing determination unit 1112 of the optimizing device 1101 of FIG. 89, and an internal information generation unit 1182 is newly provided.
- The configuration in FIG. 110 is the same as that of the optimizing device 1101 in FIG. 89 except for the newly provided processing determination unit 1181 and internal information generation unit 1182.
- The processing determination unit 1181 stores history information for updating the LUT stored in the processing content database 1191 (FIG. 111) and changes the LUT according to that history information, and it also supplies the LUT stored in the processing content database 1191 to the internal information generation unit 1182.
- the internal information generation unit 1182 reads the LUT stored in the processing content database 1191, converts the LUT into information that can be displayed on the display unit 403, and outputs the information to the display unit 403 for display.
- The processing determination unit 1181 of FIG. 111 has the same configuration as the processing determination unit 1112 except that a processing content database 1191 is provided in place of the processing content database 433 of FIG. 90.
- The processing content database 1191 stores the LUT and supplies the LUT information to the internal information generation unit 1182 as needed; its other functions are the same as those of the processing content database 433 of FIG. 90.
- Next, the telop extraction optimizing processing performed by the optimizing device of FIG. 110 will be described with reference to the flowchart of FIG. 112.
- This telop extraction optimization processing is basically the same as the processing described with reference to the flowchart of FIG. 91: the processing of steps S471 to S473 and S475 to S478 corresponds to that of steps S431 to S437 in FIG. 91.
- In step S474, the internal information generation unit 1182 reads the LUT in the processing content database 1191 of the processing determination unit 1181, converts it into an image signal that can be displayed on the display unit 403, and outputs it to the display unit 403 for display (presentation); the process then proceeds to step S475, and the subsequent processing is repeated. Since the LUT is displayed (presented) by such processing, the user can change the LUT while recognizing, from the image displayed on the display unit 403, the processing performed on the image signal as the input signal and the changes in the LUT.
- Note that the internal information generation unit 1182 may read the LUT stored in the processing content database 1191 and display it on the display unit 403 in a state in which, for example, the processing contents on the LUT can be directly operated from the operation unit 1102, so that the processing contents on the LUT can be changed directly.
- In step S481, the processing content determination unit 1121 determines whether or not a position on the LUT has been designated, and repeats the processing until one is designated. If, for example, as shown in FIG. 114, the position (5, 3) on the LUT displayed on the display unit 403, whose processing content is set to Y, is designated, it is determined that a position on the LUT has been designated, and the process proceeds to step S482.
- In step S482, the internal information generation unit 1182 causes the display unit 403 to display the designated position on the LUT. That is, in the case of FIG. 114, a position display frame 1192 is displayed at the designated (5, 3).
- In step S483, the processing content determination unit 1121 determines whether or not the processing content has been designated, and repeats the processing until it is designated. For example, as shown in FIG. 114, a drop-down list 1193 is displayed at the position where the pointer 1191 is operated (for example, by right-clicking the mouse as the operation unit 1102); if the user designates any of the displayed processing contents X, Y, and Z by operating the operation unit 1102, it is determined that the processing content has been designated, and the process proceeds to step S484.
- In step S484, the processing content determination unit 1121 changes the processing content to the designated processing content and ends the processing. That is, in the case of FIG. 114, since "X" displayed in the drop-down list 1193 is selected, the processing content of (5, 3) on the LUT is changed from Y to X. By performing the above processing, the processing contents set on the LUT can be changed directly; by operating on the LUT while viewing an image processed with the processing contents registered in the LUT, the user can easily set the processing contents that suit the user's preference.
- In the manual LUT change processing described above, the processing contents of the cells on the LUT designated by the user's operation are changed by that operation, so it can be said that the "contents of processing" are changed by the user's operation.
- Furthermore, when the change history stored in the history memory 1122 has accumulated to some extent and a group is detected, the algorithm for changing the LUT switches from the manual LUT change processing to the auto LUT change processing, so it can be said that the "structure of processing" is changed.
- Further, since the LUT is displayed as internal information relating to the processing of the processing determination unit 1112, and the processing contents on the LUT can be changed while viewing the displayed LUT, the user can recognize the correspondence between the contents of the LUT and the image displayed on the display unit 403.
- Next, referring to FIG. 116, the configuration of an optimizing device 1201 will be described as another embodiment, in which the feature amount detection unit 1111 is provided and a processing unit 1221 is provided in place of the processing unit 413 of the optimization device 401.
- The feature amount detection unit 1111 is the same as that in the configuration of the optimization device 1101 of FIG. 89.
- The processing unit 1221, based on the information on the processing content input from the processing determination unit 412, performs mapping processing on the input signal read from the buffer 421 using, for example, a coefficient set obtained by learning, and outputs the result to the display unit 403 for display.
- Further, the method of learning the coefficient set is changed based on an operation signal from the operation unit 1202.
- The operation unit 1202 is the same as the operation unit 402. Next, the configuration of the processing unit 1221 will be described with reference to FIG. 117.
- The learning device 1221 learns, on the basis of the image signal as the input signal read from the buffer 421 of the feature amount detection unit 1111, the coefficient sets used for the mapping processing of the mapping processing unit 1222 by the least-N-th-power error method for each processing content, and stores them in the coefficient memory 1237. Further, the learning device 1221 changes the value of the exponent N of the least-N-th-power error method based on the operation signal input from the operation unit 1202, and learns the coefficient sets accordingly.
- The mapping processing unit 1222 reads the corresponding coefficient set from the coefficient memory 1237 of the learning device 1221 based on the processing content input from the processing determination unit 412, performs mapping processing on the image signal as the input signal read from the buffer 421 of the feature amount detection unit 1111, and outputs the result to the display unit 403 for display.
- The teacher data generation unit 1231 is the same as the teacher data generation unit 231 in FIG. 30; it generates teacher data from the input signal as learning data and outputs it to the least-N-th-power error coefficient calculation unit 1236.
- The student data generation unit 1232, similarly to the student data generation unit 232 in FIG. 30, generates student data from the input signal as learning data and outputs it to the feature amount extraction unit 1233 and the prediction tap extraction unit 1235.
- The feature amount extraction unit 1233 is the same as the feature amount extraction unit 422 of the feature amount detection unit 1111; it extracts feature amounts from the student data and outputs them to the processing determination unit 1234.
- The processing determination unit 1234 is the same as the processing determination unit 412; it determines the processing content based on the feature amounts input from the feature amount extraction unit 1233 and outputs it to the least-N-th-power error coefficient calculation unit 1236.
- The prediction tap extraction unit 1235 sequentially sets the teacher data as the target pixel, extracts from the student data the pixels serving as the prediction taps for each target pixel, and outputs them to the least-N-th-power error coefficient calculation unit 1236.
- The least-N-th-power error coefficient calculation unit 1236 is similar in basic configuration and processing to the least-N-th-power error coefficient calculation unit 234 in FIG. 30. Based on the information, input from the operation unit 1202, specifying the value of the exponent N required for the least-N-th-power error method, it calculates a coefficient set by the least-N-th-power error method from the prediction taps input from the prediction tap extraction unit 1235 and the teacher data, and outputs it to the coefficient memory 1237, where it is overwritten and stored. However, the least-N-th-power error coefficient calculation unit 1236 in FIG. 118 differs from the least-N-th-power error coefficient calculation unit 234 in that it generates a coefficient set for each processing content input from the processing determination unit 1234.
- The coefficient memory 1237 stores the coefficient sets output from the least-N-th-power error coefficient calculation unit 1236 for each processing content; FIG. 118 shows coefficient sets A to N stored for each processing content.
- The prediction tap extraction unit 251, which is the same as that of the mapping processing unit 222 in FIG. 31, extracts the prediction taps for the target pixel from the input signal supplied from the buffer 421 and outputs them to the product-sum operation unit 1251.
- The product-sum operation unit 1251 is the same as the product-sum operation unit 252 in FIG. 31; it executes the product-sum operation using the values of the extracted prediction taps (pixels) input from the prediction tap extraction unit 251 and the coefficient set stored in the coefficient memory 1237 of the learning device 1221 to generate the target pixel, applies this to all pixels, and outputs the result as an output signal to the display unit 403 for display. Note that, among the coefficient sets A to N stored in the coefficient memory 1237, the product-sum operation unit 1251 uses the coefficient set corresponding to the processing content supplied from the processing determination unit 412.
- In step S501, it is determined whether or not the user has operated the operation unit 1202. If it is determined that the operation unit 1202 has not been operated, the process returns to step S501. If it is determined in step S501 that the operation unit 1202 has been operated, the process proceeds to step S502.
- In step S502, the teacher data generation unit 1231 of the learning device 1221 generates teacher data from the input signal and outputs it to the least-N-th-power error coefficient calculation unit 1236, while the student data generation unit 1232 generates student data from the input signal and outputs it to the feature amount extraction unit 1233 and the prediction tap extraction unit 1235; the process then proceeds to step S503.
- As the data used to generate the teacher data and the student data (the learning data), for example, the input signal input from a point in time a predetermined time in the past up to the present can be adopted. Instead of using the input signal, dedicated data may also be stored in advance as the learning data.
- In step S503, the feature amount extraction unit 1233 extracts the feature amounts from the student data at the position corresponding to the target pixel (teacher data) and outputs them to the processing determination unit 1234.
- In step S504, the processing determination unit 1234 determines the processing content for the target pixel based on the feature amounts input from the feature amount extraction unit 1233 and outputs it to the least-N-th-power error coefficient calculation unit 1236.
- For example, the processing determination unit 1234 may vector-quantize one or more feature amounts from the feature amount extraction unit 1233 and use the quantization result as the information on the processing content. In this case, unlike the processing determination unit 1112 of FIG. 89, no LUT or the like is stored.
- In step S505, the prediction tap extraction unit 1235 generates, with each item of teacher data as the target pixel, the prediction taps for each target pixel from the student data input from the student data generation unit 1232, outputs them to the least-N-th-power error coefficient calculation unit 1236, and the process proceeds to step S506.
- In step S506, the least-N-th-power error coefficient calculation unit 1236 determines whether or not an operation signal specifying that the coefficient set is to be calculated by the least-N-th-power error method using the recursive method (second method) has been input from the operation unit 1202. If, for example, the user has operated the operation unit 1202 and it is determined that the recursive method is not specified, that is, that the direct method (first method) is specified, the process proceeds to step S507, where it is determined whether or not the coefficients a, b, and c that specify the weight αs of equation (50) (that is, that specify the exponent N) have been input. If, for example, it is determined that the user has operated the operation unit 1202 and input values specifying the coefficients a, b, and c, the process proceeds to step S508.
- In step S508, with the coefficients a, b, and c of the weight αs input, the least-N-th-power error coefficient calculation unit 1236 obtains the prediction coefficients w1, w2, w3, ..., wM as the solution, by the least-N-th-power error method with the exponent N corresponding to the weight αs, that minimizes equation (48) described above. That is, a coefficient set is obtained for each processing content input from the processing determination unit 1234 and stored in the coefficient memory 1237, and the process returns to step S501.
- On the other hand, if it is determined in step S506 that the recursive method has been selected, the process proceeds to step S509.
- In step S509, the least-N-th-power error coefficient calculation unit 1236 determines whether or not information specifying the exponent N has been input, and repeats the processing until the exponent N is input. If it is determined that the user has operated the operation unit 1202 to input information specifying the exponent N, the process proceeds to step S510.
- In step S510, the least-N-th-power error coefficient calculation unit 1236 obtains a coefficient set by the solution based on the least-squares error method as a basis.
- In step S511, the least-N-th-power error coefficient calculation unit 1236, using the predicted values obtained from the coefficient set obtained by the least-squares error method, as described with reference to equations (51) to (54), recursively obtains the coefficient set by the least-N-th-power error method for the exponent N input from the operation unit 1202, for each processing content input from the processing determination unit 1234, stores it in the coefficient memory 1237, and the process returns to step S501.
- Through the above processing, coefficient sets learned for each processing content are stored in the coefficient memory 1237.
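- One way to realize the recursive least-N-th-power error learning of steps S509 to S511 is iteratively reweighted least squares: start from the ordinary least-squares solution and, on each pass, reweight every sample by |error|^(N-2) computed from the previous pass's predictions. This is a sketch under that assumption, not the patent's exact formulation (equations (48) to (54) are not reproduced here):

```python
import numpy as np

def learn_coefficients(taps, teacher, N=2.0, passes=5):
    """taps: (samples, M) student prediction taps; teacher: (samples,) values.

    Returns the M prediction coefficients; N=2 reduces to plain least squares.
    """
    w = np.ones(len(teacher))              # first pass: ordinary least squares
    coef = np.zeros(taps.shape[1])
    for _ in range(passes):
        # Weighted normal equations: (X^T W X) c = X^T W y.
        A = taps * w[:, None]
        coef = np.linalg.lstsq(A.T @ taps, A.T @ teacher, rcond=None)[0]
        # Reweight by |error|^(N-2); epsilon avoids 0**negative for small errors.
        err = np.abs(teacher - taps @ coef) + 1e-12
        w = err ** (N - 2)
    return coef
```

For N > 2 the reweighting emphasizes samples with large prediction error; for N < 2 it de-emphasizes them, which is the qualitative effect of varying the exponent described in the text.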
- Next, the mapping processing in the image optimization processing performed by the optimization device 1201 in FIG. 116 will be described with reference to the flowchart in FIG.
- step S ⁇ b> 521 the feature detection unit 111 detects a feature of a pixel of an input signal at a position corresponding to a target pixel (output signal) from an image signal as an input signal, and detects the detected feature. Is output to the processing determining unit 4 1 2.
- the processing determining unit 4112 determines the processing content based on the feature amount input from the feature amount detecting unit 1111 and outputs the processing content to the processing unit 1221.
- the processing of the processing determining unit 4 12 determines the processing content by performing the same processing as the processing determining unit 1 2 3 4 of FIG. Therefore, as described above, the processing determining unit 123 4 performs vector quantization of one or a plurality of feature amounts from the feature amount extracting unit 123, and uses the quantization result as information of processing contents.
- Note, however, that the processing determining unit 412 in FIG. 116 does not store an LUT or the like, unlike the processing determining unit 1112 in FIG.
- In step S523, the tap extracting unit 251 of the mapping processing unit 1222 of the processing unit 1211 sets, as the frame of interest, the frame of the image serving as the output signal that corresponds to the current frame of the image serving as the input signal, and, among the pixels of the frame of interest, sets as the pixel of interest a pixel that has not yet been the pixel of interest, for example in raster-scan order. The tap extracting unit 251 then extracts the prediction tap for that pixel of interest from the input signal and outputs it to the product-sum operation unit 1251.
- In step S524, the product-sum operation unit 1251 of the mapping processing unit 1222 reads, from the coefficient memory 1237 of the learning device 1221, the coefficient set corresponding to the processing content input from the processing determining unit 412.
- In step S525, the product-sum operation unit 1251 uses the prediction coefficients corresponding to the processing content read from the coefficient memory 1237 of the learning device 1221 and, according to equation (39), performs a product-sum operation between the prediction tap input from the tap extracting unit 251 and the coefficient set read from the coefficient memory 1237. The product-sum operation unit 1251 thereby obtains the pixel value (predicted value) of the pixel of interest. The process then proceeds to step S526, where the tap extracting unit 251 determines whether all the pixels of the frame of interest have been set as the pixel of interest; if not, the process returns to step S523, and the same processing is repeated with the next not-yet-processed pixel, in raster-scan order of the frame of interest, as the new pixel of interest.
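The product-sum operation of step S525 is a linear prediction: the output pixel value is the inner product of the prediction taps with the coefficient set selected by the processing content. A minimal sketch, with illustrative names (the coefficient memory is modeled as a dictionary keyed by processing-content information):

```python
import numpy as np

def map_pixel(prediction_taps, coefficient_sets, content_id):
    """Product-sum operation (sketch of equation (39)): the predicted pixel
    value is the inner product of the prediction taps with the coefficient
    set selected by the processing content decided for this pixel."""
    coeffs = coefficient_sets[content_id]  # read from the coefficient memory
    return float(np.dot(coeffs, prediction_taps))
```

Running this per pixel in raster-scan order, with the content_id supplied by the processing determination step, corresponds to one pass of steps S523 to S526 over the frame of interest.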
- If it is determined in step S526 that all the pixels of the frame of interest have been set as the pixel of interest, the process proceeds to step S527, and the display unit 403 displays the frame of interest consisting of the pixels obtained by the product-sum operation unit 1251.
- Then, in step S521, the feature amount detection unit 411 detects the feature amount of the pixel of the input signal at the position corresponding to the pixel of interest (in the output signal) from the image signal serving as the input signal, and the same processing is repeated thereafter with the next frame as a new frame of interest.
- As described above, each time the user operates the operation unit 1202 to change the exponent N (in the direct method, the coefficients a, b, and c are changed; in the recursive method, the exponent N itself is changed), which least-Nth-power error method, that is, which exponent N, is adopted as the learning criterion (the learning system) is set. In other words, the learning algorithm for obtaining the coefficients is changed. It can therefore be said that the "structure of processing" is changed so that an image suiting the user's preference is obtained.
- The internal information generation unit 1312 reads, as internal information, the coefficient sets stored for each processing content from the coefficient memory 1321 of the processing unit 1311, converts them into displayable information, outputs the result, and causes it to be displayed.
- The basic configuration is the same, except that a coefficient memory 1321, identical in function to the coefficient memory 1237, is provided in its place and is connected to the internal information generation unit 1312, by which the coefficient sets stored for each processing content are read out.
- The image optimization process of the optimization device 1301 in FIG. 122, like that of the optimization device 1201 in FIG. 116, consists of a learning process and a mapping process.
- In the learning process, in steps S541 to S551, the same processes as those in steps S501 to S511 in FIG. 120 are performed.
- In step S552, the internal information generation unit 1312 reads the coefficient sets stored in the coefficient memory 1321 as internal information, generates a displayable image signal based on each value included in the coefficient sets, outputs it to the display unit 403, and causes it to be displayed.
- The image generated by the internal information generation unit 1312 and displayed on the display unit 403 can take a form such as the three-dimensional distribution diagram shown in FIG., or a two-dimensional distribution map of the coefficient values.
- After the processing in step S552, the process returns to step S541, and the same processing is repeated thereafter.
- As described above, each value (coefficient value) of the coefficient sets stored in the coefficient memory 1321 of the processing unit 1311 is displayed (presented) as internal information relating to the processing, and the user, while viewing this display, changes the exponent N, whereby the learning algorithm for obtaining the coefficient sets is changed. It can therefore be said that the "structure of processing" is changed so that an image suiting the user's preference is obtained.
- In the above case the coefficient sets themselves are displayed; in addition, other internal information about the processing, such as whether the least-Nth-power error method currently in use is the direct method or the recursive method, may also be displayed.
- Next, with reference to FIG. 126, the configuration of an optimization device 1401 in which a processing unit 1411 is provided in place of the processing unit 1211 of the optimization device 1201 of FIG. 116 will be described.
- The configuration of the processing unit 1411 is basically the same as that of the processing unit 311 of the optimization device 301 of FIG. The input signal is optimized based on the processing content input from the processing determining unit 412 and displayed on the display unit 403.
- The coefficient memory 1421 stores a plurality of coefficient sets, one for each processing content, which are the coefficient sets necessary for the mapping processing by the mapping processing unit 1222; in the figure, coefficient sets A to N are stored. These coefficient sets are generated in advance by learning by the learning device 1441 shown in FIG.
- The teacher data generation unit 1451, student data generation unit 1452, feature amount extraction unit 1453, processing determination unit 1454, and prediction tap extraction unit 1455 of the learning device 1441 correspond to, and are the same as, the teacher data generation unit 1231, student data generation unit 1232, feature amount extraction unit 1233, processing determination unit 1234, and prediction tap extraction unit 1235 of the learning device 1221 described above.
- The normal equation generation unit 1456 is the same as the normal equation generation unit 354 in FIG. 43 in that it generates a normal equation based on the input teacher data and prediction taps and outputs it to the coefficient determination unit 1457, except that it generates and outputs a normal equation for each piece of processing-content information input from the processing determination unit 1454.
- The coefficient determination unit 1457 is the same as the coefficient determination unit 355 of FIG. 43 in that it solves the input normal equations to generate coefficient sets, except that it generates a coefficient set in association with each piece of processing-content information.
- In step S591, the teacher data generation unit 1451 generates teacher data from the learning data and outputs it to the normal equation generation unit 1456, while the student data generation unit 1452 generates student data from the learning data and outputs it to the feature amount extraction unit 1453 and the prediction tap extraction unit 1455, and the process proceeds to step S592.
- In step S592, the prediction tap extraction unit 1455, taking each piece of teacher data in turn as the pixel of interest, extracts prediction taps from the student data for each pixel of interest, outputs them to the normal equation generation unit 1456, and proceeds to step S593.
- In step S593, the feature amount extraction unit 1453 extracts the feature amount of the student data (pixel) at the position corresponding to the pixel of interest (teacher data) and outputs it to the processing determination unit 1454.
- In step S594, the processing determination unit 1454 determines the processing content for each pixel based on the feature amount extracted by the feature amount extraction unit 1453, and outputs the determined processing content to the normal equation generation unit 1456.
- The processing determination unit 1454 may, for example, vector-quantize one or more feature amounts and use the quantization result as the processing-content information. Accordingly, no LUT is stored in the processing determination unit 1454.
- In step S595, the normal equation generation unit 1456 uses each set of teacher data and prediction taps to compute the summations (Σ) that form the components of the matrix on the left side and of the vector on the right side of the normal equation, generates a normal equation for each piece of processing-content information input from the processing determination unit 1454, and outputs these to the coefficient determination unit 1457.
- In step S596, the coefficient determination unit 1457 solves, by the so-called least-squares-error method, the normal equations input from the normal equation generation unit 1456 for each piece of processing-content information, thereby obtaining a coefficient set for each piece of processing-content information, and stores them in the coefficient memory 1421 in step S597.
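Steps S595 and S596 accumulate, separately for each processing content, the left-side matrix (the sum of tap outer products) and the right-side vector (the sum of tap-times-teacher products) of the normal equation, and then solve each system for its coefficient set. A self-contained sketch, with illustrative names and a dictionary standing in for the per-content accumulators:

```python
import numpy as np
from collections import defaultdict

def learn_coefficient_sets(samples, n_taps):
    """samples: iterable of (content_id, prediction_taps, teacher_value).
    Accumulates the normal equation A w = b, with A the summed x x^T matrix
    (left side) and b the summed x*y vector (right side), separately per
    processing content, then solves each system for its coefficient set."""
    A = defaultdict(lambda: np.zeros((n_taps, n_taps)))
    b = defaultdict(lambda: np.zeros(n_taps))
    for cid, x, y in samples:
        x = np.asarray(x, dtype=float)
        A[cid] += np.outer(x, x)  # components of the left-side matrix
        b[cid] += x * y           # components of the right-side vector
    return {cid: np.linalg.solve(A[cid], b[cid]) for cid in A}
```

Each returned entry corresponds to one coefficient set stored in the coefficient memory, keyed by the processing-content information that selected it during accumulation.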
- In this way, a basic coefficient set (a coefficient set serving as an initial value) is stored in the coefficient memory 1421 for each piece of processing-content information.
- In the above description the coefficient sets are obtained by the least-squares-error method, but they may be obtained by another method; for example, they may be coefficient sets obtained by the least-Nth-power-error method described above.
- This image optimization process consists of a coefficient change process and a mapping process. Since the mapping process is the same as the mapping process described with reference to FIGS. 122 and 125, only the coefficient change process will be described here.
- In step S611, the change processing unit 332 of the coefficient changing unit 322 determines whether an operation signal for operating a coefficient value has been input from the operation unit 1202. That is, when the user views the image displayed on the display unit 403 and finds that it suits his or her preference, the mapping process is simply performed using the coefficient sets stored for each piece of processing-content information in the coefficient memory 1421; when the user finds that it does not, the user performs an operation to change the coefficient sets, stored in the coefficient memory 1421, that are used for the mapping process. When it is determined in step S611 that an operation signal for operating a coefficient has been input, that is, when the operation unit 1202 has been operated so as to change one of the coefficient values stored in the coefficient memory 1421, the process proceeds to step S612.
- In step S612, the change processing unit 332 controls the coefficient read/write unit 331 to read the coefficient set stored in the coefficient memory 1421, and proceeds to step S613.
- In step S613, the change processing unit 332 determines whether the coefficient value input as the operation signal has changed by a predetermined threshold S11 or more from the value previously included in the coefficient set. When, for example, it is determined in step S613 that the difference between the value input as the operation signal and the value in the coefficient set stored in the coefficient memory 1421 is equal to or greater than the threshold S11, the process proceeds to step S614.
- In step S614, the change processing unit 332 changes the value of each coefficient included in the coefficient set using the spring model shown in FIG. 50, and proceeds to step S616.
- On the other hand, when it is determined in step S613 that the difference between the value input as the operation signal and the value in the coefficient set stored in the coefficient memory 1421 is less than the threshold S11, the process proceeds to step S615.
- In step S615, the change processing unit 332 changes the value of each coefficient included in the coefficient set using the equilibrium model shown in FIG. 51, and the process proceeds to step S616.
- In step S616, the change processing unit 332 controls the coefficient read/write unit 331 to overwrite the coefficient memory 1421 with the changed coefficient set values. The process then returns to step S611, and the subsequent processing is repeated.
- When it is determined in step S611 that the coefficient value is not being operated, that is, when the user finds that the image displayed on the display unit 403 is already to his or her preference, the process returns to step S611, and the same processing is repeated thereafter.
- Through the above coefficient change process, the user can change the coefficient sets, stored for each piece of processing-content information, that are used in the mapping process, and thereby have processing optimal for the user executed. Note that changing the value of each coefficient in a coefficient set changes the "contents of processing" of the mapping process by the mapping processing unit 1222.
- When the coefficient sets stored in the coefficient memory 1421 have been determined by the least-Nth-power error method, coefficient sets corresponding to a plurality of exponents N may, for example, be stored in the coefficient memory 1421 in advance and changed to the coefficient set corresponding to a specified exponent N according to an operation signal from the operation unit 1202 based on the user's operation. In this case, each coefficient set stored in the coefficient memory 1421 is changed to one generated by the least-Nth-power error method corresponding to the exponent N input from the operation unit 1202 based on the user's operation. That is, since the learning algorithm for obtaining the coefficient sets is changed to the one corresponding to the exponent N, it can be said that the "structure of processing" is changed.
- Next, the configuration of an optimization device 1501 in which an internal information generation unit 1521 is added to the optimization device 1401 of FIG. 126 will be described. In the optimization device 1501, a processing unit 1511 is provided in place of the processing unit 1411 in accordance with the provision of the internal information generation unit 1521; except for this point, the configuration is the same as that of the optimization device 1401 in FIG. 126.
Landscapes
- Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Automation & Control Theory (AREA)
- Data Mining & Analysis (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Picture Signal Circuits (AREA)
- Studio Devices (AREA)
- Position Input By Displaying (AREA)
- Electrically Operated Instructional Devices (AREA)
- Facsimile Image Signal Circuits (AREA)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP02700666A EP1363235A4 (en) | 2001-02-21 | 2002-02-21 | SIGNAL PROCESSING DEVICE |
JP2002566439A JP4707083B2 (ja) | 2001-02-21 | 2002-02-21 | 信号処理装置 |
US10/258,033 US7814039B2 (en) | 2001-02-21 | 2002-02-21 | Signal processing device and method which learn a prediction coefficient by least-Nth-power error minimization of student teacher data error to detect a telop within image signals which are input signals by selectively outputting input signals following decision of the processing deciding means |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2001045249 | 2001-02-21 | ||
JP2001-45249 | 2001-02-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2002067193A1 true WO2002067193A1 (fr) | 2002-08-29 |
Family
ID=18907082
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2002/001541 WO2002067192A1 (fr) | 2001-02-21 | 2002-02-21 | Dispositif de traitement de signaux |
PCT/JP2002/001542 WO2002067193A1 (fr) | 2001-02-21 | 2002-02-21 | Dispositif de traitement de signaux |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2002/001541 WO2002067192A1 (fr) | 2001-02-21 | 2002-02-21 | Dispositif de traitement de signaux |
Country Status (7)
Country | Link |
---|---|
US (2) | US7814039B2 (ja) |
EP (2) | EP1363234B1 (ja) |
JP (4) | JP4707083B2 (ja) |
KR (2) | KR100877457B1 (ja) |
CN (2) | CN1253830C (ja) |
DE (1) | DE60231480D1 (ja) |
WO (2) | WO2002067192A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7672536B2 (en) | 2003-06-27 | 2010-03-02 | Sony Corporation | Signal processing device, signal processing method, program, and recording medium |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE60232831D1 (de) * | 2002-02-21 | 2009-08-13 | Sony Corp | Signalverarbeitungsvorrichtung |
NO319660B1 (no) * | 2003-11-17 | 2005-09-05 | Tandberg Telecom As | Fremgangsmåte for interpolering av pixelverdier |
EP1699211B1 (en) * | 2005-03-04 | 2008-07-23 | Sennheiser Communications A/S | A learning headset |
US7453771B2 (en) * | 2005-12-19 | 2008-11-18 | Caterpillar Inc. | Apparatus and method for reducing noise for moveable target |
KR101376556B1 (ko) * | 2006-01-17 | 2014-04-02 | 코닌클리케 필립스 엔.브이. | 사이클로스테이션너리 툴박스를 이용하여 잡음에 삽입된텔레비전 신호의 존재 검출 |
JP5061882B2 (ja) * | 2007-12-21 | 2012-10-31 | ソニー株式会社 | 画像処理装置、画像処理方法、およびプログラム、並びに学習装置 |
JP5618128B2 (ja) * | 2010-02-22 | 2014-11-05 | ソニー株式会社 | 符号化装置、符号化方法、およびプログラム |
CN101916393B (zh) * | 2010-07-14 | 2012-09-26 | 中国科学院半导体研究所 | 具有图像分割功能的脉冲耦合神经网络的实现电路 |
CN102929730B (zh) * | 2012-10-31 | 2015-02-25 | 成都主导科技有限责任公司 | 一种数据校正方法、装置及系统 |
US20150193699A1 (en) * | 2014-01-08 | 2015-07-09 | Civitas Learning, Inc. | Data-adaptive insight and action platform for higher education |
CN104636785B (zh) * | 2015-02-28 | 2018-08-07 | 北京慧眼食珍科技有限公司 | 带有二维码软件安装信息的二维码、生成方法与识别方法 |
CN104636784A (zh) * | 2015-02-28 | 2015-05-20 | 立德高科(北京)数码科技有限责任公司 | 一种二维码及其生成方法与识别方法 |
WO2017061189A1 (ja) * | 2015-10-05 | 2017-04-13 | シャープ株式会社 | 画像復号装置または画像符号化装置のための画像予測装置 |
KR20180046679A (ko) * | 2016-10-28 | 2018-05-09 | 삼성전자주식회사 | 지문 인식 방법 및 이를 포함하는 전자 기기 |
JP6803241B2 (ja) * | 2017-01-13 | 2020-12-23 | アズビル株式会社 | 時系列データ処理装置および処理方法 |
JP6638695B2 (ja) * | 2017-05-18 | 2020-01-29 | トヨタ自動車株式会社 | 自動運転システム |
JP2019049875A (ja) * | 2017-09-11 | 2019-03-28 | トヨタ自動車株式会社 | 作動許可認証装置 |
JP6833660B2 (ja) * | 2017-11-08 | 2021-02-24 | 株式会社東芝 | 信頼度監視システム、信頼度評価方法、及びプログラム |
JP6812334B2 (ja) * | 2017-12-12 | 2021-01-13 | 日本電信電話株式会社 | 学習型自律システム用学習データ生成装置、学習型自律システム用学習データ生成方法、プログラム |
CN108174126B (zh) * | 2017-12-28 | 2020-09-18 | 北京空间机电研究所 | 一种基于可见光图像的ccd信号采样位置精确选取方法 |
JP6852141B2 (ja) | 2018-11-29 | 2021-03-31 | キヤノン株式会社 | 情報処理装置、撮像装置、情報処理装置の制御方法、および、プログラム |
CN110907694A (zh) * | 2020-02-07 | 2020-03-24 | 南京派格测控科技有限公司 | 功率放大器的输入电流的计算方法及装置 |
KR102399635B1 (ko) * | 2020-09-28 | 2022-05-18 | 경북대학교 산학협력단 | 저전력-저용량 임베디드 장비에 적용 가능한 신호 최적화 장치, 방법 및 이를 수행하기 위한 프로그램을 기록한 기록매체 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS63173181A (ja) * | 1987-01-13 | 1988-07-16 | Olympus Optical Co Ltd | 画像処理方法 |
US6031543A (en) * | 1995-09-28 | 2000-02-29 | Fujitsu Limited | Image processing apparatus for correcting color space coordinates and method |
EP1001371A1 (en) * | 1998-11-09 | 2000-05-17 | Sony Corporation | Data processing apparatus and data processing methods |
JP2001319228A (ja) * | 2000-02-28 | 2001-11-16 | Sharp Corp | 画像処理装置および画像処理システム |
Family Cites Families (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4261054A (en) * | 1977-12-15 | 1981-04-07 | Harris Corporation | Real-time adaptive power control in satellite communications systems |
US4228538A (en) * | 1977-12-15 | 1980-10-14 | Harris Corporation | Real-time adaptive power control in satellite communications systems |
US5303306A (en) * | 1989-06-06 | 1994-04-12 | Audioscience, Inc. | Hearing aid with programmable remote and method of deriving settings for configuring the hearing aid |
JP2792633B2 (ja) * | 1990-02-09 | 1998-09-03 | 株式会社日立製作所 | 制御装置 |
KR950009484B1 (ko) | 1990-06-06 | 1995-08-23 | 미쓰이세끼유 가가꾸고오교오 가부시끼가이샤 | 폴리올레핀 수지 조성물 |
US6259824B1 (en) * | 1991-03-12 | 2001-07-10 | Canon Kabushiki Kaisha | Image processing apparatus utilizing a neural network to improve printed image quality |
JPH0580626A (ja) * | 1991-03-12 | 1993-04-02 | Canon Inc | 情報処理装置 |
JP3096507B2 (ja) | 1991-11-29 | 2000-10-10 | 松下電器産業株式会社 | 操作内容学習装置および空気調和機制御装置 |
US5412735A (en) * | 1992-02-27 | 1995-05-02 | Central Institute For The Deaf | Adaptive noise reduction circuit for a sound reproduction system |
JP3435713B2 (ja) | 1992-11-27 | 2003-08-11 | 株式会社デンソー | 学習パターン選択型追加学習装置 |
JP3048811B2 (ja) | 1992-11-27 | 2000-06-05 | 三洋電機株式会社 | コントラスト自動調整装置 |
US5619619A (en) * | 1993-03-11 | 1997-04-08 | Kabushiki Kaisha Toshiba | Information recognition system and control system using same |
JPH06289782A (ja) * | 1993-04-07 | 1994-10-18 | Matsushita Electric Ind Co Ltd | 相互認証方法 |
US5649065A (en) * | 1993-05-28 | 1997-07-15 | Maryland Technology Corporation | Optimal filtering by neural networks with range extenders and/or reducers |
DE69425100T2 (de) * | 1993-09-30 | 2001-03-15 | Koninkl Philips Electronics Nv | Dynamisches neuronales Netzwerk |
DE4343411C2 (de) * | 1993-12-18 | 2001-05-17 | Blue Chip Music Gmbh | Gitarren-Signalanalyseeinrichtung |
US5796609A (en) * | 1996-09-13 | 1998-08-18 | Honeywell-Measurex Corporation | Method and apparatus for internal model control using a state variable feedback signal |
US5974235A (en) * | 1996-10-31 | 1999-10-26 | Sensormatic Electronics Corporation | Apparatus having flexible capabilities for analysis of video information |
US6046878A (en) * | 1997-04-30 | 2000-04-04 | Seagate Technology, Inc. | Object positioning using discrete sliding mode control with variable parameters |
US5987444A (en) * | 1997-09-23 | 1999-11-16 | Lo; James Ting-Ho | Robust neutral systems |
JPH11176272A (ja) | 1997-12-12 | 1999-07-02 | Toa Corp | スイッチの誤操作防止装置及びスイッチの誤操作防止装置を備えた音響信号処理装置 |
JP4258045B2 (ja) * | 1998-11-09 | 2009-04-30 | ソニー株式会社 | ノイズ低減装置、ノイズ低減方法、および記録媒体 |
JP4147647B2 (ja) * | 1998-11-09 | 2008-09-10 | ソニー株式会社 | データ処理装置およびデータ処理方法、並びに記録媒体 |
JP2000250603A (ja) * | 1999-03-02 | 2000-09-14 | Yamaha Motor Co Ltd | 総合特性最適化方法 |
US6342810B1 (en) * | 1999-07-13 | 2002-01-29 | Pmc-Sierra, Inc. | Predistortion amplifier system with separately controllable amplifiers |
JP2001045249A (ja) | 1999-07-26 | 2001-02-16 | Canon Inc | ファクシミリ装置 |
-
2002
- 2002-02-21 KR KR1020027013945A patent/KR100877457B1/ko not_active IP Right Cessation
- 2002-02-21 EP EP02700665A patent/EP1363234B1/en not_active Expired - Lifetime
- 2002-02-21 CN CNB028011120A patent/CN1253830C/zh not_active Expired - Fee Related
- 2002-02-21 JP JP2002566439A patent/JP4707083B2/ja not_active Expired - Fee Related
- 2002-02-21 US US10/258,033 patent/US7814039B2/en not_active Expired - Fee Related
- 2002-02-21 WO PCT/JP2002/001541 patent/WO2002067192A1/ja active Application Filing
- 2002-02-21 CN CNB028010639A patent/CN1271564C/zh not_active Expired - Fee Related
- 2002-02-21 WO PCT/JP2002/001542 patent/WO2002067193A1/ja active Application Filing
- 2002-02-21 DE DE60231480T patent/DE60231480D1/de not_active Expired - Lifetime
- 2002-02-21 EP EP02700666A patent/EP1363235A4/en not_active Ceased
- 2002-02-21 JP JP2002566438A patent/JP4269214B2/ja not_active Expired - Fee Related
- 2002-02-21 US US10/258,099 patent/US7516107B2/en not_active Expired - Fee Related
- 2002-02-21 KR KR1020027014074A patent/KR100874062B1/ko not_active IP Right Cessation
-
2007
- 2007-12-27 JP JP2007337535A patent/JP4811399B2/ja not_active Expired - Fee Related
- 2007-12-27 JP JP2007337534A patent/JP4761170B2/ja not_active Expired - Fee Related
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS63173181A (ja) * | 1987-01-13 | 1988-07-16 | Olympus Optical Co Ltd | 画像処理方法 |
US6031543A (en) * | 1995-09-28 | 2000-02-29 | Fujitsu Limited | Image processing apparatus for correcting color space coordinates and method |
EP1001371A1 (en) * | 1998-11-09 | 2000-05-17 | Sony Corporation | Data processing apparatus and data processing methods |
JP2001319228A (ja) * | 2000-02-28 | 2001-11-16 | Sharp Corp | 画像処理装置および画像処理システム |
Non-Patent Citations (1)
Title |
---|
See also references of EP1363235A4 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7672536B2 (en) | 2003-06-27 | 2010-03-02 | Sony Corporation | Signal processing device, signal processing method, program, and recording medium |
US7672526B2 (en) | 2003-06-27 | 2010-03-02 | Sony Corporation | Signal processing device, signal processing method, program, and recording medium |
Also Published As
Publication number | Publication date |
---|---|
JP4269214B2 (ja) | 2009-05-27 |
CN1271564C (zh) | 2006-08-23 |
CN1460228A (zh) | 2003-12-03 |
JP2008146661A (ja) | 2008-06-26 |
DE60231480D1 (de) | 2009-04-23 |
US7516107B2 (en) | 2009-04-07 |
EP1363235A1 (en) | 2003-11-19 |
CN1253830C (zh) | 2006-04-26 |
JP4707083B2 (ja) | 2011-06-22 |
JPWO2002067193A1 (ja) | 2004-06-24 |
US20030142837A1 (en) | 2003-07-31 |
JP4761170B2 (ja) | 2011-08-31 |
JP2008159062A (ja) | 2008-07-10 |
KR20030005286A (ko) | 2003-01-17 |
US7814039B2 (en) | 2010-10-12 |
US20040034721A1 (en) | 2004-02-19 |
EP1363234B1 (en) | 2009-03-11 |
KR100877457B1 (ko) | 2009-01-07 |
JPWO2002067192A1 (ja) | 2004-06-24 |
EP1363234A1 (en) | 2003-11-19 |
KR20020089511A (ko) | 2002-11-29 |
WO2002067192A1 (fr) | 2002-08-29 |
EP1363235A4 (en) | 2006-11-29 |
KR100874062B1 (ko) | 2008-12-12 |
JP4811399B2 (ja) | 2011-11-09 |
EP1363234A4 (en) | 2006-11-22 |
CN1460227A (zh) | 2003-12-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2002067193A1 (fr) | Dispositif de traitement de signaux | |
WO2003071479A1 (fr) | Processeur de signaux | |
JP7466289B2 (ja) | 映像処理装置及びその動作方法 | |
JP4591786B2 (ja) | 信号処理装置 | |
CN110874578B (zh) | 一种基于强化学习的无人机视角车辆识别追踪方法 | |
JP2020008896A (ja) | 画像識別装置、画像識別方法及びプログラム | |
JP7111088B2 (ja) | 画像検索装置、学習方法及びプログラム | |
CN117218498B (zh) | 基于多模态编码器的多模态大语言模型训练方法及系统 | |
US20220088775A1 (en) | Robot control device, robot system, and robot control method | |
WO2021132099A1 (ja) | 学習支援装置、学習装置、学習支援方法及び学習支援プログラム | |
CN114693790B (zh) | 基于混合注意力机制的自动图像描述方法与系统 | |
KR20100022958A (ko) | 학습 장치, 학습 방법, 정보 가공 장치, 정보 가공 방법, 및 프로그램 | |
JP4591785B2 (ja) | 信号処理装置 | |
CN112947466B (zh) | 一种面向自动驾驶的平行规划方法、设备及存储介质 | |
CN116101205A (zh) | 基于车内摄像头的智能座舱车内智能感知系统 | |
JP2009075937A (ja) | 機器動作設定装置 | |
CN112907535B (zh) | 一种用于超声图像采集教学任务的辅助系统 | |
US20220381582A1 (en) | Dynamic Parameterization of Digital Maps | |
Tian et al. | Vehicle Music Automation Control System Based on Machine Vision. | |
JP2937390B2 (ja) | 画像変換方法 | |
CN112967293A (zh) | 一种图像语义分割方法、装置及存储介质 | |
CN116453043A (zh) | 一种基于背景建模和目标检测的厨房老鼠检测方法 | |
PRIYA et al. | Grey Level Image Enhancement Technique using Particle Swarm Optimization Algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): CN JP KR US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2002700666 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020027013945 Country of ref document: KR |
|
ENP | Entry into the national phase |
Ref country code: JP Ref document number: 2002 566439 Kind code of ref document: A Format of ref document f/p: F |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 028011120 Country of ref document: CN |
|
WWP | Wipo information: published in national office |
Ref document number: 1020027013945 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 10258033 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 2002700666 Country of ref document: EP |