WO2001097510A1 - Image processing system and method, program, and recording medium - Google Patents

Image processing system and method, program, and recording medium

Info

Publication number
WO2001097510A1
WO2001097510A1 (PCT/JP2001/005117)
Authority
WO
WIPO (PCT)
Prior art keywords
image signal
image
pixel
signal
pixels
Prior art date
Application number
PCT/JP2001/005117
Other languages
English (en)
Japanese (ja)
Inventor
Tetsujiro Kondo
Yasunobu Node
Katsuhisa Shinmei
Original Assignee
Sony Corporation
Shiraki, Hisakazu
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2000179341A external-priority patent/JP4407015B2/ja
Priority claimed from JP2000179342A external-priority patent/JP4470282B2/ja
Application filed by Sony Corporation, Shiraki, Hisakazu filed Critical Sony Corporation
Priority to US10/049,553 priority Critical patent/US7085318B2/en
Publication of WO2001097510A1 publication Critical patent/WO2001097510A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/21 Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117 Filters, e.g. for pre-processing or post-processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/40 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/59 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/91 Television signal processing therefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/01 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0125 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level, one of the standards being a high definition standard
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/21 Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • H04N5/213 Circuitry for suppressing or minimising impulsive noise
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/01 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0135 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level, involving interpolation processes
    • H04N7/0137 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level, involving interpolation processes dependent on presence/absence of motion, e.g. of motion zones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/01 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0135 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level, involving interpolation processes
    • H04N7/014 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level, involving interpolation processes involving the use of motion vectors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/01 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0135 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level, involving interpolation processes
    • H04N7/0145 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level, involving interpolation processes, the interpolation being class adaptive, i.e. it uses the information of class which is determined for a pixel based upon certain characteristics of the neighbouring pixels

Definitions

  • Image processing apparatus, image processing method, program, and recording medium
  • The present invention relates to an image processing device, an image processing method, a program, and a recording medium, applicable, for example, to a noise elimination device and a noise elimination method for removing noise from an image signal, and to an image conversion device and an image conversion method for converting an input image signal into an image signal having a higher resolution. Background art
  • FIG. 1 shows a configuration in which image signals are accumulated over time, a configuration known as a motion adaptive recursive filter.
  • the input image signal is supplied to an addition circuit 2 through an amplifier 1 that performs amplitude adjustment for each pixel.
  • The frame memory 3 stores the output image signal of the frame immediately before the current frame of the input image signal (hereinafter, the current frame).
  • the image signal stored in the frame memory 3 is sequentially read out for each pixel corresponding to each pixel position of the input image signal, and is supplied to the addition circuit 2 through the amplifier 4 that performs amplitude adjustment.
  • The addition circuit 2 adds the pixel of the current frame passed through the amplifier 1 and the pixel of the previous frame passed through the amplifier 4, outputs the sum as the output image signal, and supplies it to the frame memory 3.
  • The stored image signal is thereby rewritten with the output image signal of the addition circuit 2.
  • the input image signal of the current frame is also supplied to the subtraction circuit 5 for each pixel. Further, the image signal of the previous frame stored in the frame memory 3 is sequentially read out for each pixel corresponding to each pixel position of the input image signal and supplied to the subtraction circuit 5.
  • the subtraction circuit 5 outputs the difference between the pixel value of the current frame at the same pixel position on the image and the pixel value of the previous frame.
  • the difference output from the subtraction circuit 5 is supplied to an absolute value conversion circuit 6, converted into an absolute value, and then supplied to a threshold value processing circuit 7.
  • The threshold value processing circuit 7 compares the absolute value of the pixel difference supplied to it with a predetermined threshold value, and determines for each pixel whether the pixel belongs to a moving part or a stationary part. That is, when the absolute value of the pixel difference is smaller than the threshold value, the threshold value processing circuit 7 determines that the input pixel is in a stationary portion, and when the absolute value of the pixel difference is larger than the threshold value, it determines that the input pixel is in a moving portion.
  • The result of the static/motion determination in the threshold value processing circuit 7 is supplied to the weighting coefficient generation circuit 8.
  • The weighting coefficient generation circuit 8 sets the value of the weighting coefficient k (0 ≤ k ≤ 1) according to the result of the static/motion determination in the threshold processing circuit 7, supplies the coefficient k to the amplifier 1, and supplies the coefficient 1 − k to the amplifier 4.
  • The amplifier 1 multiplies its input signal by k, and the amplifier 4 multiplies its input signal by 1 − k.
  • the output of the addition circuit 2 is a value obtained by weighting and adding the pixel value of the current frame and the pixel value of the previous frame from the frame memory 3.
  • Since the stored signal in the frame memory 3 is rewritten every frame by the output image signal from the addition circuit 2, the pixel values of a plurality of frames are integrated in the still portions of the image signal stored in the frame memory 3. Therefore, assuming that the noise changes randomly from frame to frame, it is gradually reduced by the weighted addition, and the noise of the image signal stored in the frame memory 3 (the same as the output image signal) is reduced.
  • In the stationary part, the output is thus the signal after noise removal.
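As a hedged illustration of the filter just described, the per-pixel behaviour can be sketched in Python. The function and constant names, the threshold value, and the particular value of k are illustrative assumptions, not taken from the patent:

```python
# Sketch of the motion adaptive recursive filter of Fig. 1.
# THRESHOLD, K_STILL and K_MOVING are assumed values for illustration.

THRESHOLD = 16   # static/motion decision threshold of circuit 7 (assumed)
K_STILL = 0.25   # weight k applied to the current frame in still parts
K_MOVING = 1.0   # k = 1 passes moving pixels through unchanged

def filter_pixel(current, previous):
    """Weighted addition of the current-frame pixel and the
    previous-frame pixel held in the frame memory 3."""
    # Subtraction + absolute value + threshold processing (circuits 5-7)
    k = K_STILL if abs(current - previous) < THRESHOLD else K_MOVING
    # Amplifier 1 multiplies by k, amplifier 4 by (1 - k); adder 2 sums them
    return k * current + (1.0 - k) * previous

def filter_frame(frame, frame_memory):
    """Filter one frame and rewrite the frame memory with the output."""
    out = [filter_pixel(c, p) for c, p in zip(frame, frame_memory)]
    frame_memory[:] = out  # stored signal rewritten every frame
    return out
```

Repeated application on a still pixel gradually averages away frame-to-frame noise, while a large frame difference switches the pixel through unfiltered.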
  • Noise removal by the above-described motion adaptive recursive filter has the following problems.
  • When the noise level is high, a moving part may be mistaken for a still part; in such a case, image quality degradation such as blur is observed. In addition, noise cannot be removed from moving parts.
  • A noise eliminator using class classification adaptive processing has been proposed by the present applicant.
  • With it, noise can be removed from both stationary and moving parts.
  • For stationary parts, however, the above-described motion adaptive recursive filter has better noise removal performance.
  • the present invention is also effective when applied to a resolution conversion device for increasing the resolution of an input image signal in addition to the noise removal processing.
  • Current television systems include the so-called standard systems, in which the number of scanning lines per frame is 525 or 625, and high-definition systems, in which the number of scanning lines per frame is larger, for example a high-definition system with 1125 lines.
  • To display an image on such a system, an image signal having the resolution of the standard system must be converted into an image signal having a resolution matching the high-definition system.
  • This resolution conversion is referred to as up conversion as appropriate. Conventionally, various resolution conversion devices for image signals using methods such as linear interpolation have been proposed; for example, up conversion by accumulation type processing and up conversion by class classification adaptive processing have been proposed.
  • The resolution conversion device based on accumulation type processing can output a converted image with little deterioration for still image portions, but image degradation occurs for image portions with large motion.
  • The resolution conversion device using class classification adaptive processing can obtain a converted output image with little deterioration for moving image portions, but cannot obtain a very good image in stationary portions.
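For context, the simplest up conversion technique mentioned above, linear interpolation, can be sketched as follows. This is only an illustrative baseline with a hypothetical function name; the patent's accumulation type and class classification adaptive converters are more elaborate:

```python
# Baseline up conversion: double the number of scanning lines by
# linear interpolation. Each input line is kept, and the average of
# each pair of neighbouring lines is inserted between them.

def upconvert_lines(lines):
    """lines: list of scan lines, each a list of pixel values.
    Returns a list with twice as many lines."""
    out = []
    for i, line in enumerate(lines):
        out.append(line)
        # Last line has no successor; repeat it for the inserted line
        nxt = lines[i + 1] if i + 1 < len(lines) else line
        out.append([(a + b) / 2 for a, b in zip(line, nxt)])
    return out
```

Such interpolation cannot add detail that is absent from the input, which is why the accumulation type and class classification adaptive approaches were proposed.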
  • An object of the present invention is to provide an image processing apparatus, an image processing method, a program, and a recording medium capable of performing good processing as a whole by combining the advantages of the configuration that accumulates image signals over time and the configuration of class classification adaptive processing.
  • The invention according to claim 1 is an image processing apparatus that receives an input image signal and generates an output image signal of higher quality than the input image signal, the apparatus having:
  • storage means for storing an image signal of the same quality as the output image signal; first signal processing means for generating a first image signal of higher quality than the input image by adding the input image signal and the image stored in the storage means, and for storing the first image signal in the storage means;
  • second signal processing means for extracting a feature based on the input image signal according to the position of the pixel of interest in the output image signal, classifying the pixel of interest into one of a plurality of classes according to the feature, and generating a second image signal of higher quality than the input image by operating on the input image signal with a predetermined calculation method corresponding to the classified class; and
  • output selection means for making a determination based on the first image signal and the second image signal and selecting one of the first and second image signals as the output image signal.
  • The invention according to claim 26 is an image processing method for receiving an input image signal and generating an output image signal of higher quality than the input image signal, the method comprising: a first signal processing step of storing an image signal of the same quality as the output image signal in storage means, generating a first image signal of higher quality than the input image by adding the input image signal and the stored image, and storing the first image signal in the storage means;
  • and a second signal processing step of extracting a feature based on the input image signal according to the position of the pixel of interest in the output image signal, classifying the pixel of interest into one of a plurality of classes according to the feature, and operating on the input image signal with a predetermined calculation method corresponding to the classified class.
  • The invention according to claim 51 is a program for causing a computer to execute image processing for generating an output image signal of higher quality than an input image signal.
  • An image signal of the same quality as the output image signal is stored in storage means; a first image signal of higher quality than the input image is generated by adding the input image signal and the stored image, and the first image signal is stored in the storage means.
  • A feature based on the input image signal is extracted according to the position of the pixel of interest in the output image signal, the pixel of interest is classified into one of a plurality of classes according to the feature, and a second image signal of higher quality than the input image is generated by operating on the input image signal with a predetermined calculation method corresponding to the classified class.
  • The invention according to claim 52 is a computer-readable recording medium that records a program for causing a computer to execute image processing for generating an output image signal of higher quality than an input image signal.
  • An image signal of the same quality as the output image signal is stored in storage means; a first image signal of higher quality than the input image is generated by adding the input image signal and the stored image, and the first image signal is stored in the storage means.
  • A feature based on the input image signal is extracted according to the position of the pixel of interest in the output image signal, the pixel of interest is classified into one of a plurality of classes according to the feature, and, in a second signal processing step, a second image signal of higher quality than the input image is generated by operating on the input image signal with a predetermined calculation method corresponding to the classified class.
  • FIG. 1 is a block diagram showing an example of a conventional motion adaptive recursive filter.
  • FIG. 2 is a block diagram showing a basic configuration of the present invention.
  • FIG. 3 is a block diagram showing an embodiment of the present invention.
  • FIG. 4 is a block diagram of an example of a noise elimination circuit by accumulation type processing according to an embodiment of the present invention.
  • FIG. 5 is a flowchart showing an example of processing of a noise elimination circuit by accumulation type processing according to an embodiment of the present invention.
  • FIG. 6 is a block diagram showing an example of a class classification adaptive noise elimination circuit according to one embodiment.
  • FIG. 7 is a schematic diagram illustrating an example of a class tap and a prediction tap.
  • FIG. 8 is a block diagram showing an example of a feature detection circuit constituting a part of the classification adaptive noise elimination circuit.
  • FIG. 9 is a schematic diagram for explaining an example of a feature detection circuit.
  • FIG. 10 is a block diagram showing a configuration at the time of learning for generating the coefficient data used in the classifying adaptive noise elimination circuit.
  • FIG. 11 is a flowchart for explaining the processing in a case where an embodiment of the present invention is implemented by software.
  • FIG. 12 is a flowchart showing a processing flow of the motion adaptive recursive filter.
  • FIG. 13 is a flowchart showing the flow of noise removal processing by the classification adaptive processing.
  • FIG. 14 is a flowchart showing the flow of processing at the time of learning for generating coefficient data used in the classifying adaptive noise elimination circuit.
  • FIG. 15 is a block diagram of another embodiment of the present invention.
  • FIG. 16 is a schematic diagram for explaining a resolution conversion process performed by another embodiment.
  • FIG. 17 is a block diagram showing an example of a configuration of a resolution conversion section by a storage process in another embodiment.
  • FIG. 18 is a schematic diagram for explaining the conversion processing of the resolution conversion unit by the accumulation processing.
  • FIG. 19 is a schematic diagram for explaining the conversion processing of the resolution conversion unit by the accumulation processing.
  • FIG. 20 is a block diagram showing an example of the configuration of a resolution conversion unit based on the classification adaptive processing.
  • FIG. 21 is a schematic diagram for explaining the processing operation of the resolution conversion unit by the class classification adaptive processing.
  • FIG. 22 is a block diagram showing an example of a feature detection circuit in the resolution conversion unit based on the classification adaptive processing.
  • FIG. 23 is a schematic diagram for explaining the operation of the feature detection circuit.
  • FIG. 24 is a block diagram showing a configuration at the time of learning for generating coefficient data used in the resolution conversion unit by the classification adaptive processing.
  • FIG. 25 is a diagram illustrating a process of selecting an output image signal according to another embodiment.
  • FIG. 26 is a flowchart for explaining a process of selecting an output image signal in another embodiment.
  • FIG. 27 is a flowchart for explaining a process when another embodiment of the present invention is processed by software.
  • FIG. 28 is a flowchart showing the flow of the conversion process of the resolution conversion unit by the accumulation process.
  • FIG. 29 is a flowchart showing the flow of the resolution conversion processing by the classification adaptive processing.
  • FIG. 30 is a flowchart showing a flow of a learning process for generating coefficient data used in the resolution conversion process by the class classification adaptive process.
  • FIG. 2 shows the overall configuration of the present invention.
  • The input image signal is supplied to an accumulation processing unit 100 and a class classification adaptive processing unit 200.
  • the accumulation processing section 100 is a processing section having a configuration for accumulating image signals as time passes.
  • The class classification adaptive processing unit 200 detects a feature based on the input image signal according to the position of the pixel of interest in the output image signal, classifies the pixel of interest into one of a plurality of classes according to the feature, and generates an output image signal by operating on the input image signal with a predetermined calculation method corresponding to the classified class.
  • The output image signal of the accumulation type processing unit 100 and the output image signal of the class classification adaptive processing unit 200 are supplied to the selection circuit 301 and the output judgment circuit 302 of the output selection unit 300.
  • The output judgment circuit 302 determines, based on the output image signals of the two processing units, which output image signal is appropriate to output, and generates a selection signal corresponding to the result of this determination.
  • The selection signal is supplied to the selection circuit 301, which selects one of the two output image signals.
  • the accumulation processing unit 100 has the same configuration as the above-described motion adaptive recursive filter. Then, by repeating the weighted addition of the current frame and the previous frame, noise removal is satisfactorily performed on the pixels in the stationary portion.
  • the classification adaptive processing unit 200 is a noise removing unit based on the classification adaptive processing.
  • The noise elimination unit using the class classification adaptive processing extracts pixels at the same position in each of a plurality of frames, classifies the noise component of a pixel based on the change of the pixel between frames, and removes the noise component from the input image signal by an arithmetic processing set in advance corresponding to the classified class; noise is therefore removed in both the moving part and the stationary part.
  • For stationary parts, however, the accumulation type noise elimination unit, which can accumulate information over many frames, has a larger noise elimination effect than the noise elimination unit based on the class classification adaptive processing.
  • The output selection unit 300 therefore determines the stillness of the image in units of a predetermined number of pixels and, according to the determination result, selects the output image signal from the accumulation type noise removal unit in the stationary part and the output image signal from the class classification adaptive noise removal unit in the moving part, so that an output image signal from which noise is removed in both the stationary part and the moving part can be obtained.
  • In another embodiment, the storage type processing unit 100 is configured to form a high-resolution image signal by storing image information in a frame memory over a long period in the time direction. According to this configuration, a converted output image signal with little deterioration can be obtained for a still image, or for an image in which the entire screen is simply panned or tilted.
  • the classification adaptive processing unit 200 is a resolution conversion unit based on the classification adaptive processing.
  • This resolution conversion unit classifies the pixel of interest in the image based on the input image signal according to a feature of a plurality of pixels including the pixel of interest and its temporally and spatially surrounding pixels, and generates a high-resolution output image signal by generating a plurality of pixels of the high-resolution image corresponding to the pixel of interest through an image conversion calculation process set in advance corresponding to the classified class. Therefore, the resolution conversion unit based on the class classification adaptive processing can obtain a converted output image signal with little deterioration even in the moving part. For the stationary part, however, the storage type resolution conversion unit, which handles image information over a longer period in the time direction, can perform better resolution conversion.
  • The output selection unit 300 selects and outputs, for each pixel or every predetermined number of pixels, either the image signal from one resolution conversion unit or the image signal from the other, so that a high-quality converted output image with little deterioration can be obtained.
  • The input image signal is supplied, for each pixel, to the motion adaptive recursive filter 11 forming an example of the accumulation type processing unit 100 and to the class classification adaptive noise elimination circuit 12 forming an example of the class classification adaptive processing unit 200.
  • As the motion adaptive recursive filter 11, a configuration similar to that of the example of FIG. 1 described above can be used. The output image signal of the motion adaptive recursive filter 11 is supplied to the output selection unit 13 corresponding to the output selection unit 300.
  • the classifying adaptive noise elimination circuit 12 extracts pixels of each frame at the same position among a plurality of frames, classifies the noise components of the pixels based on a change between the frames of the pixels, and An output image signal from which a noise component has been removed is generated from an input image signal by an arithmetic processing set in advance corresponding to the classified class, and its detailed configuration will be described later.
  • the output image signal from the classification adaptive noise elimination circuit 12 is also supplied to the output selection unit 13.
  • The output selection unit 13 includes a static/motion determination circuit 14, a delay circuit 15 for timing adjustment, and a selection circuit 16. The output image signal from the motion adaptive recursive filter 11 is supplied to the selection circuit 16 through the delay circuit 15, while the image signal output from the class classification adaptive noise elimination circuit 12 is supplied to the selection circuit 16 as it is.
  • the output image signal from the motion adaptive recursive filter 11 and the output image signal from the classification adaptive noise elimination circuit 12 are supplied to a static / motion determination circuit 14.
  • the static / movement determining circuit 14 determines whether each pixel is a stationary portion or a moving portion from the two output image signals, and supplies the determination output as a selection control signal to the selection circuit 16.
  • In the output of the motion adaptive recursive filter 11, the pixels in the still part of the image have been noise-removed, but the pixels in the moving part of the image are output as they are, without noise removal.
  • In the output of the class classification adaptive noise elimination circuit 12, noise elimination is performed irrespective of whether a pixel is in the still part or the moving part of the image.
  • The static/motion determining circuit 14 determines whether each pixel belongs to a static portion or a moving portion of the image by using the above-described properties. It has a difference value calculation circuit 141 that calculates the difference between the pixel value of the output image signal from the motion adaptive recursive filter 11 and the pixel value of the output image signal from the class classification adaptive noise elimination circuit 12, an absolute value conversion circuit 142 that converts the difference value from the difference value calculation circuit 141 into an absolute value, and a comparison judgment circuit 143.
  • The comparison judgment circuit 143 compares the absolute difference value from the absolute value conversion circuit 142 with a threshold; when the value is larger than the threshold it determines that the pixel belongs to a moving portion, and when it is smaller, to a still portion.
  • The comparison judgment circuit 143 controls the selection circuit 16 so as to select the output image signal from the motion adaptive recursive filter 11 for a pixel determined to be in a still part of the image, and controls the selection circuit 16 so as to select the image signal output from the class classification adaptive noise elimination circuit 12 for a pixel determined to be in the moving part of the image.
  • From the selection circuit 16, that is, from the output selection unit 13, the output image signal from the motion adaptive recursive filter, which can accumulate information over many frames and remove noise well, is output for the stationary part, and the output image signal from the class classification adaptive noise elimination circuit 12 is output for the moving part. Therefore, an output image signal from which noise has been removed over both the stationary part and the moving part is obtained from the output selection unit 13.
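The per-pixel selection logic described above can be sketched as follows. This is a minimal illustration; the function name and the threshold value are assumptions, not from the patent:

```python
# Sketch of the output selection unit 13: the difference between the
# two noise-removed signals decides still vs. moving for each pixel.

THRESHOLD = 8  # assumed comparison threshold of circuit 143

def select_output(recursive_out, classadapt_out):
    """Per pixel: small difference -> still part -> recursive-filter output;
    large difference -> moving part -> class-adaptive output."""
    selected = []
    for r, c in zip(recursive_out, classadapt_out):
        # Circuits 141/142: difference, then absolute value
        if abs(r - c) < THRESHOLD:   # comparison judgment circuit 143
            selected.append(r)       # still part
        else:
            selected.append(c)       # moving part
    return selected
```

The decision works because the two outputs agree closely where both removed noise (still parts) and diverge where only the class-adaptive circuit did (moving parts).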
  • The motion adaptive recursive filter 11 is not limited to the configuration shown in FIG. 1; for example, the configuration shown in FIG. 4 may be used.
  • reference numeral 101 denotes a delay circuit for time alignment
  • reference numeral 104 denotes a motion vector detection circuit.
  • the input image signal passed through the delay circuit 101 is supplied to the synthesizing circuit 102.
  • the image stored in the storage memory 103 is supplied to the synthesizing circuit 102 via the shift circuit 105.
  • the combined output of the combining circuit 102 is stored in the storage memory 103.
  • the stored image in the storage memory 103 is taken out as an output and supplied to the motion vector detection circuit 104.
  • the motion vector detection circuit 104 detects a motion vector between the input image signal and the image stored in the storage memory 103.
  • The shift circuit 105 shifts the position of the image read from the storage memory 103 horizontally and/or vertically according to the motion vector detected by the motion vector detection circuit 104. Since motion compensation is thus performed by the shift circuit 105, the synthesizing circuit 102 adds pixels spatially located at the same position, as described below.
  • Output of the synthesizing circuit 102 = (pixel value of input image × N + pixel value of accumulated image × M) / (N + M), where N and M are predetermined coefficients.
  • The storage memory 103 accumulates the result of adding the pixel data over a plurality of frame periods. By this processing, noise components having no correlation between frames can be removed.
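The motion-compensated accumulation performed by the synthesizing circuit 102 and the storage memory 103 can be sketched as follows. The function name, the wrap-around shift via np.roll, and the coefficient values are illustrative assumptions.

```python
import numpy as np

def accumulate(input_frame, stored, shift, N=1, M=7):
    """Sketch of one pass of the synthesizing circuit 102 loop.

    `shift` is the (dy, dx) motion vector detected against the stored
    image; the stored image is shifted so that spatially coincident
    pixels are combined as (input*N + stored*M) / (N + M).
    """
    compensated = np.roll(stored, shift, axis=(0, 1))   # motion compensation
    return (input_frame * N + compensated * M) / (N + M)
```

With N = 1 and a larger M, the accumulated image dominates and uncorrelated noise in the new frame is attenuated.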
  • Fig. 5 is a flowchart showing the flow of the operation when the processing of the configuration shown in Fig. 4 is performed by software.
  • In step S51, a motion vector is detected between an image region on the accumulated image and the corresponding image region on the input image.
  • In step S52, the position of the stored image is shifted based on the detected motion vector.
  • The input image and the position-shifted stored image are synthesized and stored (step S53).
  • In step S54, the stored image is read from the storage memory and output.
  • Next, the class classification adaptive noise elimination circuit used in this embodiment will be described in detail.
  • In class classification adaptive processing, class classification is performed according to the three-dimensional (spatio-temporal) distribution of the signal level of the input image signal, and prediction coefficients obtained in advance by learning for each class are stored in a memory. An optimum estimated value (that is, a pixel value after noise removal) is then output by an arithmetic process according to a weighted addition formula using these prediction coefficients.
  • In this embodiment, noise removal is performed by carrying out the class classification adaptive processing in consideration of the motion of the image. That is, the pixel region referred to for detecting the noise component and the pixel region used for the arithmetic process that removes the noise are cut out according to the motion estimated from the input image signal, and based on these, an image from which noise has been removed by the class classification adaptive processing is output.
  • FIG. 6 shows the overall configuration of the class classification adaptive noise elimination circuit used in this embodiment.
  • the input image signal to be processed is supplied to a frame memory 21.
  • The frame memory 21 stores the supplied one-frame image, and supplies the image one frame before to the frame memory 22.
  • The frame memory 22 stores the supplied one-frame image, and supplies the image one frame before to the frame memory 23. In this way, images of newer frames are stored in the frame memories 21, 22, and 23 in this order.
  • In the following description, the case where the frame memory 22 stores the current frame, the frame memory 21 stores the frame one frame after the current frame, and the frame memory 23 stores the frame one frame before the current frame is taken as an example.
  • However, the storage contents of the frame memories 21, 22, and 23 are not limited to this.
  • images at a time interval of two frames may be stored.
  • five frame memories may be provided to store images of five consecutive frames.
  • a field memory can be used instead of the frame memory.
  • The image data of the subsequent frame, the current frame, and the previous frame stored in the frame memories 21, 22, and 23 are supplied to the motion vector detection unit 24, the motion vector detection unit 25, the first area cutout section 26, and the second area cutout section 27.
  • The motion vector detection unit 24 detects a motion vector for the pixel of interest between the current frame image stored in the frame memory 22 and the previous frame image stored in the frame memory 23. Further, the motion vector detection unit 25 detects the motion vector of the pixel of interest between the image of the current frame stored in the frame memory 22 and the image of the subsequent frame stored in the frame memory 21.
  • The motion vector (motion direction and motion amount) of the pixel of interest detected by each of the motion vector detection units 24 and 25 is supplied to the first area cutout section 26 and the second area cutout section 27.
  • As a method of detecting a motion vector, a block matching method, an estimation using correlation coefficients, a gradient method, or the like can be used.
  • The first area cutout unit 26 cuts out pixels from the image data of each frame supplied thereto, referring to the motion vectors detected by the motion vector detection units 24 and 25, and supplies the extracted pixel values to the feature detection unit 28.
  • The feature detection unit 28 generates a class code representing information related to the noise component based on the output of the first area cutout unit 26, as described later, and supplies the generated class code to the coefficient ROM 29. Since the pixels extracted by the first area cutout unit 26 are used in this way for generating a class code, they are called class taps.
  • The coefficient ROM 29 stores prediction coefficients determined in advance by learning as described later, for each class, more specifically, at addresses associated with the class codes.
  • The coefficient ROM 29 outputs the prediction coefficient stored at the address corresponding to the class code generated by the feature detection unit 28. The second area cutout unit 27 cuts out pixels from the frame memories 21, 22, and 23, and the estimation operation unit 30 receives the output of the second area cutout unit 27 and the prediction coefficient from the coefficient ROM 29, and performs a weighting operation as shown in the following equation (1) to generate a predicted image signal from which noise has been removed:

  y = w1·x1 + w2·x2 + … + wn·xn (1)

  where x1, …, xn are the extracted pixel values and w1, …, wn are the prediction coefficients.
  • Since the pixel values extracted by the second area cutout unit 27 are used in the weighted addition for generating the predicted image signal, they are called prediction taps.
  • FIG. 7 shows the tap structures of the class taps and the prediction taps cut out by the first area cut-out unit 26 and the second area cut-out unit 27, respectively.
  • a target pixel to be predicted is indicated by a black circle
  • a pixel cut out as a class tap or a prediction tap is indicated by a circle with a shadow.
  • Fig. 7A shows an example of the structure of a basic class tap. From the current frame f[0] including the pixel of interest and the frames temporally before and after it, that is, f[-1] and f[+1], the pixel at the same spatial position as the pixel of interest is cut out as a class tap.
  • the class tap has a tap structure in which only one pixel is extracted in each of the previous frame f [-1], the current frame f [0], and the subsequent frame f [+1].
  • When there is no motion, the pixel at the same pixel position in each of the previous frame f[-1], the current frame f[0], and the subsequent frame f[+1] is extracted as a class tap for noise detection. In that case, the pixel position of the class tap in each frame to be processed is constant, and the tap structure does not change.
  • When there is motion, the first area cutout unit 26 extracts, from each of the previous frame f[-1], the current frame f[0], and the subsequent frame f[+1], the pixel at the position corresponding to the motion vector. In other words, the pixel at the position shifted according to the motion vector is extracted.
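A minimal sketch of this motion-corrected class tap extraction; the function name, array indexing, and the assumption that the motion vectors keep the taps in bounds are illustrative, not taken from the patent.

```python
import numpy as np

def extract_class_taps(prev_f, cur_f, next_f, pos, mv_prev, mv_next):
    """Cut out the one-pixel-per-frame class tap with motion correction.

    pos is the (row, col) of the pixel of interest in the current frame;
    mv_prev / mv_next are the motion vectors detected toward the
    previous and subsequent frames, so the tap follows the moving pixel.
    """
    r, c = pos
    taps = [
        prev_f[r + mv_prev[0], c + mv_prev[1]],  # motion-corrected pixel in f[-1]
        cur_f[r, c],                             # pixel of interest in f[0]
        next_f[r + mv_next[0], c + mv_next[1]],  # motion-corrected pixel in f[+1]
    ]
    return np.array(taps)
```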
  • FIG. 7B shows an example of a basic prediction tap structure extracted by the second region cutout unit 27. From the pixel data of the frame of interest and the image data of the frames temporally located before and after the frame of interest, a total of 13 pixels including the pixel of interest and, for example, 12 pixels surrounding the pixel of interest are obtained. It is cut out as a prediction tap.
  • FIGS. 7C and 7D show the case where the cut-out position is temporally shifted according to the motion vector output from the motion vector detection units 24 and 25.
  • FIG. 7E shows a case where the motion vector in the frame of interest is (0, 0), the motion vector in the previous frame is (-1, -1), and the motion vector in the subsequent frame is (0, 0).
  • the cutout positions of the class taps and prediction taps in the entire frame are translated in accordance with the motion vector.
  • Due to the motion correction, the class taps extracted by the first area cutout unit 26 become corresponding pixels on the image across a plurality of frames, and the prediction taps extracted by the second area cutout unit 27 likewise become corresponding pixels on the image across a plurality of frames.
  • The number of frame memories may be increased to, for example, five instead of three, so as to store the current frame and the two frames before and after it. In that case, a class tap structure may be used in which only the pixel of interest is extracted from the current frame, and a pixel corresponding to the pixel of interest is extracted from each of the two preceding and two succeeding frames. The pixel region to be extracted is then expanded temporally, so that more effective noise removal can be performed.
  • The feature detection unit 28 detects, as a feature, the variation in the pixel values of the pixels of the three frames extracted as class taps by the first area cutout unit 26. Then, a class code corresponding to the level variation of the noise component is output to the coefficient ROM 29. That is, the feature detection unit 28 classifies the level variation of the noise component of the target pixel into classes, and outputs a class code indicating to which of the classes it belongs.
  • More specifically, the feature detection unit 28 performs ADRC (Adaptive Dynamic Range Coding) on the output of the first area cutout unit 26, and generates a class code consisting of the ADRC output, which represents the level variation of the pixels corresponding to the pixel of interest over a plurality of frames.
  • FIG. 8 shows an example of the feature detecting section 28.
  • Fig. 8 shows the generation of class code by 1-bit ADRC.
  • From each of the frame memories 21, 22, and 23, the pixel of interest of the current frame and the two pixels corresponding to it in the preceding and succeeding frames, a total of three pixels, are supplied to the dynamic range detection circuit 281. The value of each pixel is represented by, for example, 8 bits.
  • The dynamic range detection circuit 281 outputs, as its output, the calculated dynamic range DR, the minimum value MIN, and the pixel value PX of each of the three input pixels.
  • The pixel values PX of the three pixels from the dynamic range detection circuit 281 are sequentially supplied to a subtraction circuit 282, and the minimum value MIN is subtracted from each pixel value PX. By removing the minimum value MIN from each pixel value PX, normalized pixel values are supplied to the comparison circuit 283.
  • The output (DR/2) of the bit shift circuit 284, which reduces the dynamic range DR to 1/2, is supplied to the comparison circuit 283, and the magnitude relationship between the normalized pixel value and DR/2 is detected.
  • When the normalized pixel value is equal to or larger than DR/2, the 1-bit comparison output of the comparison circuit 283 is set to "1"; otherwise, the comparison output is set to "0".
  • The comparison circuit 283 generates the 3-bit ADRC output by parallelizing the comparison outputs of the three pixels obtained sequentially.
  • the dynamic range DR is supplied to a bit number conversion circuit 285, and the number of bits is converted from 8 bits to, for example, 5 bits by quantization. Then, the dynamic range converted into the number of bits and the 3-bit ADRC output are supplied to the coefficient ROM 29 as a class code.
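The 1-bit ADRC class code generation of FIG. 8 can be sketched as follows, assuming 8-bit pixel values. Packing the quantized dynamic range and the ADRC bits into one integer is an illustrative choice, not necessarily the circuit's exact code layout.

```python
def adrc_class_code(taps, dr_bits=5):
    """Sketch of the 1-bit ADRC class code of Fig. 8 (layout assumed).

    Each tap is re-quantized to one bit by comparing (tap - MIN) with
    DR/2, and the 8-bit dynamic range DR is requantized to `dr_bits`
    bits; the concatenation of both parts forms the class code.
    """
    mn, mx = min(taps), max(taps)
    dr = mx - mn
    adrc = 0
    for p in taps:                       # 1 bit per tap, first tap in the MSB
        adrc = (adrc << 1) | (1 if (p - mn) >= dr / 2 else 0)
    dr_code = dr >> (8 - dr_bits)        # 8-bit DR quantized to dr_bits bits
    return (dr_code << len(taps)) | adrc
```

For the three taps [10, 200, 10], DR = 190 and only the middle tap exceeds DR/2, giving the ADRC pattern 010.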
  • If there is no noise and the motion correction is accurate, the pixel value should not fluctuate, or should fluctuate only slightly, between the target pixel of the current frame and the corresponding pixels of the previous and subsequent frames. Therefore, when a change in pixel value is detected, it can be determined that the change is caused by noise.
  • For example, the pixel values of the class taps extracted from the temporally continuous frames t-1, t, and t+1 are subjected to 1-bit ADRC processing, generating, for example, a 3-bit ADRC output [010]. In addition, the dynamic range DR converted to 5 bits is output. The 3-bit ADRC output represents the noise level variation for the pixel of interest, and the noise level itself is represented by the code obtained by converting the dynamic range DR into 5 bits.
  • the purpose of converting 8 bits to 5 bits is to avoid having too many classes.
  • The class code generated by the feature detection unit 28 thus consists of, for example, a 3-bit code related to the noise level variation in the time direction obtained as a result of ADRC, and a code of, for example, 5 bits related to the noise level obtained from the dynamic range DR. By using the dynamic range DR for the class classification, motion and noise can be distinguished, and differences in noise level can also be distinguished.
  • a noise-free input image signal (referred to as a teacher signal) used for learning is supplied to a noise adding unit 31 and a normal equation adding unit 32.
  • the noise adding unit 31 adds a noise component to the input image signal to generate a noise-added image (referred to as a student signal), and supplies the generated student signal to the frame memory 21.
  • The frame memories 21, 22, and 23 store the images of the student signal of three temporally continuous frames, respectively.
  • In the following description, the case where the frame memory 22 stores the image of the current frame and the frame memories 21 and 23 respectively store the images of the frame after and the frame before the current frame is taken as an example. However, the storage contents of the frame memories 21, 22, and 23 are not limited to this.
  • the class code generated by the feature detecting unit 28 and the prediction tap extracted by the second region extracting unit 27 are supplied to the normal equation adding unit 32.
  • the teacher signal is further supplied to the normal equation addition unit 32.
  • The normal equation adding unit 32 performs a process of generating normal equations for coefficient generation based on these three types of inputs, and the prediction coefficient determination unit 33 determines a prediction coefficient for each class code from the normal equations. The prediction coefficient determination unit 33 then supplies the determined prediction coefficients to the memory 34.
  • the memory 34 stores the supplied prediction coefficients for each class.
  • the prediction coefficients stored in the memory 34 and the prediction coefficients stored in the coefficient ROM 29 (FIG. 6) are the same.
  • The prediction coefficients are undetermined coefficients, and learning is performed by inputting a plurality of teacher signal data for each class. When the number of teacher signal data is denoted by m, the following equation (2) is set from equation (1):

  y_k = w1·x_k1 + w2·x_k2 + … + wn·x_kn (k = 1, 2, …, m) (2)
  • The prediction coefficients are determined so as to minimize the error vector e, whose elements e_k are defined by the following equation (3), that is, so as to minimize the squared error sum of equation (4). In other words, the prediction coefficients are uniquely determined by the so-called least squares method.

  e_k = y_k - (w1·x_k1 + w2·x_k2 + … + wn·x_kn) (3)

  e^2 = Σ_{k=1}^{m} e_k^2 (4)

  • To minimize equation (4), each prediction coefficient wi should be determined so that the partial differential value of e^2 with respect to wi becomes 0 for each value of i:

  ∂e^2/∂wi = Σ_{k=1}^{m} 2·(∂e_k/∂wi)·e_k = -Σ_{k=1}^{m} 2·x_ki·e_k = 0 (5)
  • Next, the specific procedure for determining each prediction coefficient wi from equation (5) will be described. If X_ji and Y_i are defined as in equations (6) and (7), equation (5) can be written in the form of the matrix equation (the so-called normal equation) of equation (8) below:

  X_ji = Σ_{k=1}^{m} x_ki·x_kj (6)

  Y_i = Σ_{k=1}^{m} x_ki·y_k (7)

  [X_ji]·[w_i] = [Y_i] (8)
  • The prediction coefficient determination unit 33 calculates each parameter in the normal equation (8) based on the above three types of inputs, and further performs the calculation process of solving the normal equation (8) according to a general matrix solution method, such as the sweeping-out method, to calculate the prediction coefficients.
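For one class, the least squares determination of equations (2) to (8) amounts to building and solving the normal equation; the sketch below uses np.linalg.solve as a stand-in for the sweeping-out (Gaussian elimination) method mentioned above, and the function name is an assumption.

```python
import numpy as np

def learn_prediction_coefficients(X, y):
    """Least-squares determination of the prediction coefficients w for
    one class, as in equations (2)-(8).

    X holds one prediction-tap vector per row (student pixels), y the
    corresponding teacher pixel values; the normal equation
    (X^T X) w = X^T y is then solved directly.
    """
    A = X.T @ X          # matrix of sums x_ki * x_kj, cf. equation (6)
    b = X.T @ y          # vector of sums x_ki * y_k,  cf. equation (7)
    return np.linalg.solve(A, b)
```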
  • As a method of adding noise in the noise adding unit 31, any of several methods can be used; the following is one example. The noise component is extracted as the difference between a signal obtained by performing processing using an RF system on a flat image signal and the image signal component obtained by removing the noise from that signal by frame addition. The extracted noise component is then added to the input image signal.
  • As described above, when performing the class classification adaptive processing to remove noise from the image signal, the class classification adaptive noise elimination circuit 12 extracts, as class taps, for example, the pixel of interest and the pixels corresponding to it, detects the variation of the noise level between frames based on the data of the class taps, and generates a class code corresponding to the detected variation of the noise level.
  • The pixels to be used for the noise component detection processing (class taps) and the pixels to be used for the prediction calculation processing (prediction taps) are extracted so as to estimate the motion between frames and to correct the estimated motion.
  • Then, an image signal from which noise has been removed is calculated by a linear combination of the prediction taps and the prediction coefficients.
  • Since a prediction coefficient that accurately corresponds to the inter-frame variation of the noise component is used, it is possible to effectively remove the noise component. Furthermore, even if there is motion, the noise level can be detected correctly and the noise can be removed. In particular, it is possible to prevent the image from being blurred because a moving part is erroneously determined to be a stationary part, as can happen in the motion adaptive recursive filter described with reference to FIG. 1.
  • Further, by adopting a class tap structure having no spatial spread within a frame (for example, only the pixel of interest is extracted from the current frame, and a pixel corresponding to the pixel of interest is extracted from each frame temporally before and after the current frame), the influence of spatial blur factors on the processing can be reduced. That is, it is possible to reduce the occurrence of blurring in the output image signal due to, for example, the influence of edges and the like.
  • In the class classification adaptive noise elimination circuit, noise removal is performed independently of motion. For a perfectly stationary part, however, it is inferior to the motion adaptive recursive filter, which can accumulate information over many frames. In this embodiment, therefore, for the stationary part, the output of the motion adaptive recursive filter as shown in FIG. 1 or FIG. 4 is selected and output, and for the moving part, the output of the class classification adaptive noise elimination circuit as shown in FIG. 6 is selected and output, so that an image signal output with good noise elimination can be obtained in both the moving part and the stationary part of the image.
  • The class taps and prediction taps of the first area cutout section 26 and the second area cutout section 27 in the description of the class classification adaptive noise elimination circuit are merely examples, and it goes without saying that they are not limited to these.
  • In the above description, the feature detection unit 28 uses a 1-bit ADRC encoding circuit, but a multi-bit ADRC encoding circuit or another encoding circuit may be used instead.
  • the selection between the output of the motion adaptive recursive filter 11 and the output of the classification adaptive noise elimination circuit 12 is performed on a pixel-by-pixel basis.
  • However, the selection may be performed in units of pixel blocks or objects each including a plurality of pixels, or in units of frames. In that case, the still/motion determination circuit performs the still/motion determination in the selected units.
  • In the above example, the output of one motion adaptive recursive filter and the output of one class classification adaptive noise elimination circuit are selected as alternatives, but it is also possible to provide a plurality of noise removing circuits with different processing and to select an output image signal from among them.
  • FIG. 11 is a flowchart showing the flow of a noise removal process according to an embodiment of the present invention. As shown in steps S1 and S2, the class classification adaptive noise elimination process and the motion adaptive recursive filter process are performed in parallel. The difference between the outputs obtained in each process is calculated (step S3).
  • In step S4, the difference is converted into an absolute value, and in determination step S5, it is determined whether or not the absolute value of the difference is large. If it is determined that the absolute value of the difference is large, the output of the class classification adaptive noise elimination is selected (step S6). Otherwise, the output of the motion adaptive recursive filter is selected (step S7). This completes the processing for one pixel.
  • FIG. 12 is a flowchart showing details of the processing S2 of the motion adaptive recursive filter.
  • a first step S11 an initial input image is stored in a frame memory.
  • the next step S12 the difference between the image in the frame memory and the next input image (frame difference) is calculated. This difference is converted into an absolute value in step S13.
  • the absolute difference is compared with a threshold value in step S14.
  • If the absolute difference is larger than the threshold value, the pixel belongs to a moving part, and the weight coefficient k by which the input image signal is multiplied is set to 1 (step S15). That is, the weighting factor (1 - k) by which the output signal of the frame memory is multiplied is set to 0.
  • Otherwise, k is set to a value within the range of 0 to 0.5 in step S16.
  • step S17 the pixel in the frame memory and the pixel at the same position of the next input image are weighted and added.
  • the addition result is stored in the frame memory (step S18).
  • The process then returns to step S12. The addition result is also output (step S19).
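Steps S11 to S19 can be sketched as the following per-pixel loop; the threshold and the still-part weight k are assumed values, and the function name is illustrative.

```python
import numpy as np

def motion_adaptive_recursive_filter(frames, thresh=10, k_still=0.2):
    """Sketch of steps S11-S19 of the motion adaptive recursive filter.

    For each new frame, pixels whose absolute frame difference exceeds
    the threshold are treated as moving (k = 1, memory ignored); still
    pixels are blended as k*input + (1-k)*memory and written back.
    """
    mem = frames[0].astype(np.float64)             # S11: store initial frame
    for f in frames[1:]:
        f = f.astype(np.float64)
        diff = np.abs(f - mem)                     # S12/S13: |frame difference|
        k = np.where(diff > thresh, 1.0, k_still)  # S14-S16: choose weight k
        mem = k * f + (1.0 - k) * mem              # S17/S18: weighted add, store
    return mem                                     # S19: output
```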
  • FIG. 13 is a flowchart showing details of the classification adaptive noise elimination process S1.
  • In step S21, a motion vector is detected between the current frame and the previous frame.
  • In step S22, a motion vector is detected between the current frame and the next frame.
  • In step S23, the first region is cut out; that is, the class taps are extracted.
  • In step S24, the extracted class taps are subjected to the feature detection processing.
  • the coefficient corresponding to the detected feature is read out of the coefficients obtained by the learning processing in advance (step S25).
  • step S26 a second area (prediction tap) is cut out.
  • step S27 an estimation operation is performed using the coefficient and the prediction tap, and an output from which noise has been removed is obtained.
  • In cutting out the first and second regions, the cutout positions are changed using the motion vectors detected in steps S21 and S22.
  • FIG. 14 is a flowchart showing a flow of a learning process for obtaining a coefficient used in the classification adaptive noise elimination process.
  • In step S31, a student signal is generated by adding noise to an image signal without noise (the teacher signal).
  • In step S32, a motion vector is detected between the current frame and the previous frame.
  • In step S33, a motion vector is detected between the current frame and the next frame. The region cutout positions are changed according to these detected motion vectors.
  • step S34 the first area (class tap) is cut out.
  • Feature detection is performed based on the extracted class taps (step S35).
  • step S36 a second region (prediction tap) is cut out.
  • In step S37, based on the teacher image signal, the data of the prediction taps, and the detected features, the data necessary for solving a normal equation having the prediction coefficients as its solution are calculated.
  • step S38 it is determined whether the addition of the normal equations has been completed. If not, the process returns to step S31. If it is determined that the processing has been completed, a prediction coefficient is determined in step S39. The obtained prediction coefficients are stored in the memory and used in the noise removal processing.
  • As described above, in this embodiment, for the stationary part of the image, the output of the noise elimination circuit having a large noise elimination effect for stationary parts, such as the motion adaptive recursive filter, is selectively output, and for the moving part, the output of the noise elimination circuit that can remove noise in moving parts, such as the class classification adaptive noise elimination circuit, is selectively output, so that an image signal output from which noise has been removed well in both the moving part and the stationary part of the image is obtained.
  • In another embodiment, an image signal of the standard television system (hereinafter referred to as SD) is used as the input image signal and is converted into an output image signal of a high-definition system (hereinafter referred to as HD).
  • As shown in FIG. 16, for each pixel of interest of the SD image, four pixels of the HD image are created, whereby the resolution is converted.
  • FIG. 15 is a block diagram showing a configuration of another embodiment.
  • The input image signal is supplied to the high-density storage resolution conversion circuit 111, which constitutes an example of a storage-based resolution conversion unit, and to the class classification adaptive processing resolution conversion circuit 112, which constitutes an example of a resolution conversion unit based on class classification adaptive processing.
  • The high-density storage resolution conversion circuit 111 includes a frame memory for storing image signals of HD-equivalent images, and generates an HD-equivalent output image signal in the frame memory by accumulating the SD input image signal in the frame memory while correcting the pixel position with reference to the motion between the image based on the image signal stored in the frame memory and the image based on the SD input image signal. The detailed configuration will be described later.
  • the converted image signal equivalent to HD from the high-density storage resolution conversion circuit 111 is supplied to the output selection unit 113.
  • The class classification adaptive processing resolution conversion circuit 112 detects features of a target pixel in the image based on the SD input image signal from a plurality of pixels including the target pixel and its temporally and spatially surrounding pixels. Then, the target pixel is classified into a class based on the detected features, and a plurality of pixels of the HD image corresponding to the target pixel are generated by an image conversion operation set in advance for the classified class, whereby a high-resolution output image signal is generated. The detailed configuration will be described later.
  • the converted image signal corresponding to HD from the class classification adaptive processing resolution conversion circuit 112 is also supplied to the output selection unit 113.
  • The output selection unit 113 includes a judgment circuit 114, which will be described in detail later, and a selection circuit 115. The converted image signal from the high-density storage resolution conversion circuit 111 and the converted image signal from the class classification adaptive processing resolution conversion circuit 112 are each supplied to the selection circuit 115, and are also supplied to the judgment circuit 114.
  • The judgment circuit 114 determines, from the two converted image signals, the motion and the activity of the image based on the image signals in units of a predetermined number of pixels, and, according to the determination result, generates a selection control signal that causes the selection circuit 115 to select either the converted image signal from the high-density storage resolution conversion circuit 111 or the converted image signal from the class classification adaptive processing resolution conversion circuit 112 in units of the predetermined number of pixels. In this example, it is determined which converted image signal to select for each pixel, and the determination output is supplied to the selection circuit 115 as the selection control signal.
  • FIG. 17 shows a configuration example of the high-density storage resolution conversion circuit 111 used in this embodiment.
  • This high-density storage resolution conversion circuit 111 is effective for resolution conversion of an image that is static or has a simple full-screen pan/tilt motion, excluding scene changes, zooms, and the like.
  • The high-density storage resolution conversion circuit 111 includes a frame memory 210 as shown in FIG. 17. This frame memory 210 stores each pixel value of an image signal of one frame having a resolution equivalent to an HD image (see FIG. 16).
  • the SD input image signal is first supplied to the linear interpolation unit 211.
  • the linear interpolation unit 211 generates an image signal having the number of pixels equivalent to the HD image from the SD input image signal by linear interpolation, and outputs the generated image signal to the motion vector detection unit 212.
  • The processing in the linear interpolation unit 211 is performed in order to perform matching with the same image size when detecting a motion vector between the SD input image and the HD-equivalent image in the frame memory 210.
  • The motion vector detection unit 212 detects the motion vector between the output image of the linear interpolation unit 211 and the HD-equivalent image stored in the frame memory 210.
  • As a method of detecting the motion vector, for example, representative point matching over the entire screen is performed.
  • The accuracy of the detected motion vector is one pixel unit in the HD-equivalent image; that is, in terms of the SD input image signal, the accuracy is finer than one pixel.
  • The motion vector detected by the motion vector detection unit 212 is supplied to the phase shift unit 213. The phase shift unit 213 shifts the phase of the SD input image signal in accordance with the supplied motion vector, and supplies it to the image storage processing unit 214.
  • The image storage processing unit 214 combines the image signal stored in the frame memory 210 with the SD input image signal that has been phase-shifted by the phase shift unit 213, and the signal stored in the frame memory 210 is rewritten by the combined signal.
  • FIGS. 18 and 19 show conceptual diagrams of the processing in the image storage processing unit 214.
  • FIGS. 18 and 19 show the accumulation processing only in the vertical direction for the sake of simplicity, but the accumulation processing is similarly performed in the horizontal direction.
  • FIGS. 18A and 19A show SD input image signals.
  • black circles indicate pixels actually existing on the SD image, and white circles indicate non-existent pixels.
  • In the example shown, the motion vector detection unit 212 has detected a motion of three pixels in the vertical direction in the HD-equivalent image, so the phase shift unit 213 shifts the phase of the SD input image signal by three pixels in the vertical direction.
  • the accuracy of the detected motion vector is one pixel equivalent to HD as described above, and the pixel position in the SD input image signal after the phase shift is shown in FIG. 19B.
  • Each pixel after the phase shift and the corresponding pixel in the HD-equivalent image signal of the frame memory 210 (FIG. 18B, FIG. 19C) are added, and the corresponding pixel of the frame memory 210 is rewritten by the added output pixel.
  • motion compensation is performed for the motion of the SD image, and the pixels of the HD accumulated image and the pixels of the SD input image at the same position are added.
  • weighting may be performed between the HD accumulated image and the SD input image.
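The motion-compensated accumulation described above can be illustrated with a short sketch. This is a simplified one-dimensional model, not the patented circuit: the function name, the 2x SD-to-HD density factor, and the blending weight are all illustrative assumptions.

```python
def accumulate_1d(stored_hd, sd_input, motion_hd, weight=0.5):
    """1-D sketch of the accumulation in the image storage processing:
    each SD input pixel i nominally occupies HD grid position 2*i; it is
    shifted by `motion_hd` (a motion vector in HD-pixel units) and blended
    into the stored HD-equivalent frame with the given weight
    (weight=0.5 is a plain average)."""
    out = list(stored_hd)
    for i, px in enumerate(sd_input):
        pos = 2 * i + motion_hd  # SD pixel i lands on HD position 2*i, then shifts
        if 0 <= pos < len(out):
            out[pos] = (1 - weight) * out[pos] + weight * px
    return out
```

With weight = 0.5 the stored pixel and the newly arrived input pixel are simply averaged; a different weight corresponds to the weighted addition between the HD accumulated image and the SD input mentioned above.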
  • the original SD image is shifted according to the motion vector with an accuracy of one pixel unit of the HD image, and is stored in the frame memory 210.
  • the image stored in the frame memory 210 is an HD equivalent image as shown in FIG. 18B or FIG. 19C.
  • FIG. 18 and FIG. 19 illustrate only the vertical direction, but the image is similarly converted from the SD image to the HD-equivalent image in the horizontal direction.
  • the image signal stored in the frame memory 210 by the above-described storage processing is supplied to the output selection unit 113 as the HD output image signal, that is, as the output of the high-density storage resolution conversion circuit 111. Since the HD output image signal from the high-density storage resolution conversion circuit 111 is generated by accumulating the image at high density in the time direction as described above, an HD output image free of deterioration and aliasing can be obtained for SD input images that contain still portions or only simple pan-and-tilt motion (excluding zooms, scene changes, and the like).
  • the class classification adaptive processing resolution conversion circuit 112, which performs SD-to-HD conversion, can also obtain a high-quality HD output image.
  • in class classification adaptive processing, the target pixel of the SD input image signal is classified according to its characteristics, and prediction coefficients obtained in advance by learning for each class are stored in a memory.
  • optimal estimated pixel values of a plurality of HD pixels corresponding to the pixel of interest are then output by an arithmetic operation according to a weighted addition formula using these prediction coefficients.
  • FIG. 20 shows an example of the overall configuration of the class classification adaptive processing resolution conversion circuit 112 used in this embodiment.
  • the SD input image signal to be processed is supplied to a field memory 221.
  • this field memory 221 always stores the SD image signal of one field before. The SD input image signal and the SD image signal of one field before stored in the field memory 221 are supplied to the first area cutout section 222 and the second area cutout section 223.
  • the first area cutout unit 222 cuts out a plurality of pixels (hereinafter referred to as class taps) from the SD input image signal and the SD image signal in order to extract the feature of the pixel of interest in the SD input image signal.
  • the first area cutout unit 222 supplies the values of the extracted pixels to the feature detection unit 224.
  • the feature detection unit 224 generates a class code representing the feature of the pixel of interest from the pixel of interest in the first area and its temporally and spatially surrounding pixels, and supplies the generated class code to the coefficient ROM 225.
  • the plurality of pixels cut out by the first area cutout unit 222 are used for generating a class code, they are called class taps as described above.
  • the coefficient ROM 225 stores in advance, for each class, prediction coefficients determined by learning as will be described later, at addresses associated with the class codes. The coefficient ROM 225 receives the class code supplied from the feature detection unit 224 as an address, and outputs the prediction coefficients corresponding to that address.
  • the second area cutout unit 223 cuts out a pixel area for prediction (second area) from the SD input image signal and the SD image signal of the previous field stored in the field memory 221.
  • a plurality of prediction pixels including the target pixel are extracted from this area, and the values of the extracted pixels are supplied to the estimation calculation unit 226.
  • the estimation calculation unit 226 performs the operation shown in the following equation (11) to obtain a plurality of pixel values of the HD image corresponding to the target pixel of the SD image, thereby generating a predicted HD image signal.
  • the pixel values extracted by the second area cutout unit 223 are used in weighted addition for generating a predicted HD image signal, and are therefore referred to as prediction taps.
  • Equation (11) is similar to equation (1) in the above-described embodiment.
  • y = w1 × x1 + w2 × x2 + … + wn × xn … (11), where x1, x2, …, xn are the prediction taps and w1, w2, …, wn are the prediction coefficients.
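Equation (11) is a plain weighted sum, which can be sketched as follows; the tap values and the flat coefficient set below are made-up numbers for illustration only.

```python
def predict_pixel(prediction_taps, coefficients):
    """Estimate one HD pixel value as the weighted sum of equation (11):
    y = w1*x1 + w2*x2 + ... + wn*xn."""
    assert len(prediction_taps) == len(coefficients)
    return sum(w * x for w, x in zip(coefficients, prediction_taps))

# Hypothetical 9-tap example (values are illustrative, not learned coefficients).
taps = [100, 102, 98, 101, 99, 100, 103, 97, 100]
coeffs = [1.0 / 9] * 9  # a flat kernel, just for demonstration
y = predict_pixel(taps, coeffs)
```

In the actual circuit the coefficients would come from the coefficient ROM, selected by the class code of the pixel of interest.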
  • next, an example of the class taps cut out by the first area cutout unit 222 will be described with reference to FIG. 21.
  • the plurality of pixels cut out are assumed to be as shown in FIG. 21, taken from the field containing the pixel of interest and the field before it.
  • in FIG. 21, pixels indicated by black circles are pixels in the n-th field (for example, an odd field), and pixels indicated by white circles are pixels in the (n+1)-th field (for example, an even field). The class taps consist of the pixel of interest and a plurality of pixels temporally and spatially adjacent to it.
  • when the pixel of interest is a pixel in the n-th field, the class taps have the structure shown in FIG. 21A. From the n-th field, seven pixels, namely the pixel of interest, the pixels above and below it, and two pixels on each side of it, are cut out as class taps, and from the preceding field, six pixels spatially adjacent to the pixel of interest are cut out as class taps. A total of 13 pixels are therefore cut out as class taps.
  • when the pixel of interest is a pixel in the (n+1)-th field, the class taps have the structure shown in FIG. 21B. From the (n+1)-th field, three pixels, namely the pixel of interest and one pixel on each side of it, are cut out as class taps, and from the preceding field, six pixels spatially adjacent to the pixel of interest are cut out as class taps. A total of nine pixels are therefore cut out as class taps. In this example, a tap structure similar to the class taps described above is also used for the prediction taps cut out by the second area cutout unit 223. Next, a configuration example of the feature detection unit 224 will be described.
  • the pattern of the plurality of pixel values cut out as class taps by the first area cutout unit 222 characterizes the pixel of interest.
  • there are a plurality of such pixel value patterns corresponding to the class taps, and each pixel value pattern is regarded as one class.
  • the feature detection unit 224 classifies the feature of the pixel of interest using the plurality of pixel values cut out as class taps by the first area cutout unit 222, and outputs a class code indicating to which of a plurality of predetermined classes the class taps correspond.
  • specifically, the feature detection unit 224 performs ADRC (Adaptive Dynamic Range Coding) on the output of the first area cutout unit 222, and generates the ADRC output as a class code representing the feature of the pixel of interest.
  • FIG. 22 shows an example of the feature detecting section 224.
  • Fig. 22 shows the generation of class code by 1-bit ADRC.
  • 13 or 9 pixels are supplied as class taps from the first area cutout section 222 to the dynamic range detection circuit 121.
  • the value of each pixel is represented by, for example, 8 bits.
  • the dynamic range detection circuit 121 outputs the calculated dynamic range DR, the minimum value MIN, and the pixel value PX of each of the plurality of input pixels.
  • the pixel values PX of the plurality of pixels from the dynamic range detection circuit 121 are sequentially supplied to a subtraction circuit 122, where the minimum value MIN is subtracted from each pixel value PX.
  • the normalized pixel values are supplied to the comparison circuit 123.
  • the output (DR/2) of the bit shift circuit 124, which halves the dynamic range DR, is also supplied to the comparison circuit 123, and the magnitude relationship between each normalized pixel value and DR/2 is detected.
  • when the normalized pixel value is equal to or larger than DR/2, the 1-bit comparison output of the comparison circuit 123 is set to "1".
  • otherwise, the 1-bit comparison output of the comparison circuit 123 is set to "0". The comparison circuit 123 then parallelizes the comparison outputs sequentially obtained for the plurality of pixels serving as class taps, and generates a 13-bit or 9-bit ADRC output.
  • the dynamic range DR is supplied to a bit number conversion circuit 125 and requantized from 8 bits to, for example, 5 bits. The requantized dynamic range and the ADRC output are then supplied to the coefficient ROM 225 as the class code. Note that if multi-bit ADRC is performed instead of 1-bit ADRC, the feature of the target pixel can of course be classified in more detail.
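The 1-bit ADRC classification described above (subtract MIN, compare each value with DR/2, concatenate the comparison bits, and append the requantized dynamic range) can be sketched as follows. The function name and the bit-packing order are illustrative assumptions, not taken from the patent.

```python
def adrc_class_code(class_taps, dr_bits=5):
    """1-bit ADRC over a set of class taps (8-bit pixel values):
    subtract the minimum, compare each normalized value with DR/2,
    pack the comparison bits, and requantize DR to dr_bits bits."""
    mn, mx = min(class_taps), max(class_taps)
    dr = mx - mn
    # One comparison bit per tap: 1 if (px - MIN) >= DR/2, else 0.
    bits = [1 if (px - mn) >= dr / 2 else 0 for px in class_taps]
    adrc = 0
    for b in bits:
        adrc = (adrc << 1) | b  # concatenate comparison outputs
    # Requantize the 8-bit dynamic range to dr_bits bits.
    dr_q = dr >> (8 - dr_bits)
    return adrc, dr_q
```

For 13 class taps this yields a 13-bit ADRC output plus a 5-bit dynamic range code, which together serve as the class code (the ROM address).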
  • next, learning for obtaining the prediction coefficients will be described. An HD image signal used for learning (referred to as a teacher signal) is supplied to a thinning processing unit 131 and a normal equation adding unit 132.
  • the thinning processing unit 131 performs thinning on the HD image signal to generate an SD image signal (referred to as a student signal), and supplies the generated student signal to the field memory 221.
  • as in FIG. 20, the field memory 221 stores one field of the student signal, delayed by one field in time.
  • the class code generated by the feature detection unit 224 and the prediction taps cut out by the second area cutout unit 223 are supplied to the normal equation adding unit 132.
  • the teacher signal is also supplied to the normal equation adding unit 132.
  • the normal equation adding unit 132 generates normal equations from these three kinds of inputs, and the prediction coefficient determination unit 133 determines the prediction coefficients for each class code from the normal equations.
  • the prediction coefficient determination unit 133 supplies the determined prediction coefficient to the memory 134.
  • the memory 134 stores the supplied prediction coefficients.
  • the prediction coefficients stored in the memory 134 are the same as the prediction coefficients stored in the coefficient ROM 225 (FIG. 20).
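The normal-equation learning described above amounts to per-class least squares: for each class, sums of products of prediction taps and teacher pixels are accumulated, and the resulting linear system is solved for the prediction coefficients. The following is a minimal sketch with illustrative names; in practice one accumulator would be kept per class code.

```python
class NormalEquationAdder:
    """Accumulates the sums A = X^T X and b = X^T y needed for
    least-squares prediction coefficients (a sketch of the roles of
    the normal equation adding unit and coefficient determination unit)."""
    def __init__(self, n_taps):
        self.n = n_taps
        self.A = [[0.0] * n_taps for _ in range(n_taps)]
        self.b = [0.0] * n_taps

    def add(self, taps, teacher):
        # One (prediction taps, teacher pixel) training pair.
        for i in range(self.n):
            for j in range(self.n):
                self.A[i][j] += taps[i] * taps[j]
            self.b[i] += taps[i] * teacher

    def solve(self):
        # Gaussian elimination with partial pivoting on the normal equations.
        n, A, b = self.n, [row[:] for row in self.A], self.b[:]
        for k in range(n):
            p = max(range(k, n), key=lambda r: abs(A[r][k]))
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
            for r in range(k + 1, n):
                f = A[r][k] / A[k][k]
                for c in range(k, n):
                    A[r][c] -= f * A[k][c]
                b[r] -= f * b[k]
        w = [0.0] * n
        for k in range(n - 1, -1, -1):
            s = sum(A[k][c] * w[c] for c in range(k + 1, n))
            w[k] = (b[k] - s) / A[k][k]
        return w
```

If the teacher pixels really are a fixed weighted sum of the taps, the solver recovers those weights exactly; with real images it returns the least-squares best fit for the class.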
  • as described above, the class classification adaptive processing resolution conversion circuit 112 classifies the feature of the target pixel of the SD image into a class and, based on the classified class, creates a plurality of pixels of the HD image corresponding to the pixel of interest by an estimation operation using prediction coefficients prepared in advance.
  • with the class classification adaptive processing resolution conversion circuit 112, a converted image signal with little deterioration can be obtained regardless of whether the image is still or moving. However, for completely still portions and for simple whole-image motion such as pans and tilts, it is inferior to the converted image signal from the high-density storage resolution conversion circuit 111, which can accumulate the information of the image over many frames.
  • accordingly, the resolution-converted output image signal with less deterioration is appropriately output from the output selection unit 113.
  • the determination circuit 114 determines which resolution conversion output is to be selected and, based on the determination output, controls the selection so that the appropriate resolution-converted output image signal is obtained.
  • next, the details of the determination circuit 114 and the selection operation it performs will be described.
  • the converted image signal from the high-density accumulation resolution conversion circuit 111 and the converted image signal from the class classification adaptive processing resolution conversion circuit 112 are supplied to the difference value calculation circuit 241, and the difference value between the two is calculated.
  • the difference value is converted to an absolute value by the absolute value conversion circuit 242 and supplied to the comparison determination circuit 243.
  • the comparison judgment circuit 243 judges whether the absolute value of the difference value from the absolute value conversion circuit 242 is larger than a predetermined value, and supplies the judgment result to the selection signal generation circuit 249.
  • when the absolute value of the difference value is larger than the predetermined value, the pixel is judged to belong to a moving part, and a selection control signal is generated and supplied to the selection circuit 115 so that the resolution-converted image signal from the class classification adaptive processing resolution conversion circuit 112 is selected by the selection circuit 115.
  • this is because, for moving parts, it is better to use the converted image signal from the class classification adaptive processing resolution conversion circuit 112, which can respond to motion.
  • the difference value calculation circuit 241, the absolute value conversion circuit 242, and the comparison judgment circuit 243 thus constitute a still/motion judgment circuit for the image.
  • when the absolute value of the difference value is not larger than the predetermined value, the selection signal generation circuit 249 generates, as described below, a selection control signal so that the selection circuit 115 outputs whichever of the converted image signal from the high-density storage resolution conversion circuit 111 and the converted image signal from the class classification adaptive processing resolution conversion circuit 112 has the higher activity, and supplies the signal to the selection circuit 115. By outputting the pixel with the higher activity, a sharper image without blurring can be output.
  • for the activity calculation, the converted image signal from the high-density accumulation resolution conversion circuit 111 and the converted image signal from the class classification adaptive processing resolution conversion circuit 112 are supplied to the area cutout sections 244 and 245, respectively.
  • from the HD-equivalent resolution conversion signals output from the high-density storage resolution conversion circuit 111 and the class classification adaptive processing resolution conversion circuit 112, the area cutout sections 244 and 245 cut out, as the pixels of the activity calculation area, a plurality of pixels before and after the target pixel of the SD image, for example as shown by the broken lines in FIG. 25B and FIG. 25C.
  • the plurality of pixels cut out as the activity calculation area are supplied to detectors 246 and 247, which detect the dynamic range as the activity, and the activity (in this example, the dynamic range) within each area is detected. The detection outputs are supplied to a comparison circuit 248, the magnitudes of the two dynamic ranges are compared, and the comparison output is supplied to the selection signal generation circuit 249.
  • when the judgment output of the comparison judgment circuit 243 indicates that the absolute value of the difference value is smaller than the predetermined threshold value, the selection signal generation circuit 249, based on the output of the comparison circuit 248, supplies to the selection circuit 115 a selection control signal for selecting and outputting the resolution conversion output whose activity calculation area has the larger dynamic range.
  • the operation of the determination circuit 114 and the selection circuit 115 will be further described with reference to the flowchart of FIG. 26.
  • the operation of this flowchart corresponds to the case where the determination circuit 114 is realized by software processing.
  • here, an example will be described in which the appropriate one of the output of the high-density accumulation resolution conversion circuit 111 and the output of the class classification adaptive processing resolution conversion circuit 112 is selected for each pixel.
  • first, the difference value between the two pixels is calculated (step S101), and it is determined whether or not the absolute value of the difference value is larger than a threshold value (step S102).
  • if it is larger, the converted output image signal from the class classification adaptive processing resolution conversion circuit 112 is selected and output (step S107).
  • if it is not larger, the two activities (in this example, the dynamic ranges) are calculated in units of the activity calculation area described above (steps S103 and S104).
  • the two calculated activities are compared (step S105), and the pixel with the larger activity is output (steps S106 and S108).
  • an image without blurring with higher activity is selected and output.
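Putting the judgment together, the per-pixel selection can be sketched as follows. The function and parameter names are illustrative assumptions; the activity here is the dynamic range of the cut-out area, as in the text.

```python
def select_output_pixel(hd_accum_px, class_adapt_px, accum_area, adapt_area, threshold):
    """Per-pixel selection sketch of the determination/selection logic:
    if the two converted pixels differ by more than `threshold` (a moving
    part), take the class-adaptive result; otherwise take the pixel whose
    surrounding area has the larger activity (dynamic range)."""
    if abs(hd_accum_px - class_adapt_px) > threshold:
        return class_adapt_px                      # motion: accumulation unreliable
    dr_accum = max(accum_area) - min(accum_area)   # activity of each candidate area
    dr_adapt = max(adapt_area) - min(adapt_area)
    return hd_accum_px if dr_accum >= dr_adapt else class_adapt_px
```

The same comparison could equally be applied per block, per object, or per frame, as noted below.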
  • in the above description, the dynamic range within a specific area surrounded by a dotted line as shown in FIG. 25 is used as the activity, but the present invention is not limited to this.
  • for example, the variance within a specific area, the sum of absolute differences between the target pixel and the pixels on both sides of it, or the like can also be used.
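The alternative activity measures mentioned above can be sketched as follows; these are illustrative helper functions, not definitions from the patent.

```python
def activity_dynamic_range(area):
    """Dynamic range of the area: max minus min pixel value."""
    return max(area) - min(area)

def activity_variance(area):
    """Population variance of the pixel values in the area."""
    m = sum(area) / len(area)
    return sum((v - m) ** 2 for v in area) / len(area)

def activity_sad(center, left, right):
    """Sum of absolute differences between the target pixel
    and the pixels on both sides of it."""
    return abs(center - left) + abs(center - right)
```

Any of these could replace the dynamic range in the selection step; they all grow with local detail and shrink for blurred, flat regions.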
  • as for the selection processing, the case where selection is performed in pixel units has been described.
  • however, the selection is not limited to pixel units, and may be performed in block units, object units, frame units, or the like.
  • in the above description, the output of one high-density storage resolution conversion circuit and the output of one class classification adaptive processing resolution conversion circuit are selected alternatively.
  • it is also possible to provide a plurality of class classification adaptive processing resolution conversion circuits and select an output image signal from among them.
  • the class taps and prediction taps of the first area cutout section 222 and the second area cutout section 223 in the description of the class classification adaptive processing are merely examples, and it goes without saying that they are not limited thereto. Also, although the class taps and the prediction taps have the same structure in the above description, they need not have the same structure.
  • in the above embodiment, the conversion from an SD image to an HD image has been described as an example.
  • however, the class classification adaptive processing and the high-density accumulation are not limited to the above-described embodiment.
  • FIG. 27 is a flowchart showing the flow of the resolution conversion processing of one embodiment. As shown in steps S111 and S112, the resolution conversion processing by the class classification adaptive processing and the resolution conversion processing by the high-density accumulation processing are performed in parallel. The output obtained in each process is processed by the output determination process (step S113). Then, in step S114, an output is selected according to the determination result in step S113. Thus, the processing for one pixel is completed.
  • FIG. 28 is a flowchart showing details of the resolution conversion processing S112 by the high-density accumulation processing.
  • first, in step S121, an initial input frame image is linearly interpolated to form an image having the number of pixels of HD.
  • the image after this interpolation is stored in the frame memory (step S122).
  • in step S123, linear interpolation is similarly performed for the next frame.
  • in step S124, a motion vector is detected using the two frame images obtained by the linear interpolation.
  • step S125 the input SD image is phase-shifted by the detected motion vector.
  • the phase-shifted image undergoes an image accumulation process (step S126).
  • step S127 the accumulation result is stored in the frame memory. Then, an image is output from the frame memory (step S128).
  • FIG. 29 is a flowchart showing details of the resolution conversion processing S111 by the class classification adaptive processing.
  • first, the first area is cut out, that is, the class taps are extracted.
  • feature detection is performed on the extracted class taps.
  • the coefficients corresponding to the detected feature are read out (step S133).
  • next, the second area (prediction taps) is cut out.
  • finally, an estimation operation is performed using the coefficients and the prediction taps, and an up-converted output (HD image) is obtained.
  • FIG. 30 is a flowchart showing the flow of the learning process for obtaining the coefficients used in the resolution conversion processing by the class classification adaptive processing.
  • first, a high-resolution HD signal (teacher signal) is input.
  • in step S142, the first area (class taps) is cut out.
  • feature detection is performed based on the extracted class taps (step S143).
  • in step S144, the second area (prediction taps) is cut out.
  • in step S145, based on the teacher image signal, the prediction tap data, and the detected feature, the data necessary for the normal equations from which the prediction coefficients are solved is computed.
  • in step S146, it is determined whether the addition to the normal equations has been completed. If not, the process returns to step S142 (the first area cutout process).
  • step S147 a prediction coefficient is determined. The obtained prediction coefficients are stored in the memory and used in the resolution conversion processing.
  • as described above, the result of the high-density accumulation, which can handle information in the time direction over a long period, and the result of the class classification adaptive processing can be selected for each pixel, so that high-quality images can be output.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Picture Signal Circuits (AREA)
  • Television Systems (AREA)

Abstract

The invention concerns an image processing system for receiving an input image signal and generating an output image signal of higher quality, the system comprising first and second signal processing devices. The first signal processing device performs accumulation-type processing and, to that end, comprises a memory for storing an image signal of the same quality as the output image signal. The first signal processing device then adds the input image signal to the image as stored in the memory, so as to generate a first image signal of higher quality than the input signal, and stores the first image signal in the memory. The second signal processing device performs class classification adaptive processing and generates a second image signal of higher quality than the input image by extracting, from the input image signal, pixels according to the position of a pixel of interest in the output image signal, by classifying the pixel of interest into one of a plurality of classes according to its characteristics, and by processing the input image signal with a processing scheme predetermined according to the classified class. The image processing system according to the invention also comprises an output selection device capable of making a decision on the basis of the first image signal and the second image signal, in order to select one of these two signals as the output image signal.
PCT/JP2001/005117 2000-06-15 2001-06-15 Systeme et procede de traitement d'images, programme et support d'enregistrement WO2001097510A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/049,553 US7085318B2 (en) 2000-06-15 2001-06-15 Image processing system, image processing method, program, and recording medium

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2000179341A JP4407015B2 (ja) 2000-06-15 2000-06-15 ノイズ除去装置およびノイズ除去方法
JP2000179342A JP4470282B2 (ja) 2000-06-15 2000-06-15 画像処理装置および画像処理方法
JP2000-179342 2000-06-15
JP2000-179341 2000-06-15

Publications (1)

Publication Number Publication Date
WO2001097510A1 true WO2001097510A1 (fr) 2001-12-20

Family

ID=26593971

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2001/005117 WO2001097510A1 (fr) 2000-06-15 2001-06-15 Systeme et procede de traitement d'images, programme et support d'enregistrement

Country Status (2)

Country Link
KR (1) KR100816593B1 (fr)
WO (1) WO2001097510A1 (fr)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004055775A1 (fr) * 2002-12-13 2004-07-01 Sony Corporation Appareil de traitement de signaux d'images, procede de traitement de signaux d'images, programme de mise en pratique dudit procede et support lisible par ordinateur dans lequel ledit programme a ete enregistre
WO2004072898A1 (fr) * 2003-02-13 2004-08-26 Sony Corporation Dispositif, procede et programme de traitement de signaux
WO2004077354A1 (fr) * 2003-02-25 2004-09-10 Sony Corporation Dispositif, procédé et programme de traitement d'images
WO2005066897A1 (fr) * 2004-01-06 2005-07-21 Sony Corporation Dispositif, procede, support d'enregistrement, et programme d'infographie
JP2007142633A (ja) * 2005-11-16 2007-06-07 Kddi Corp ショット境界検出装置
US7729558B2 (en) 2002-11-20 2010-06-01 Sony Corporation Image signal, processing device and processing method, coefficient data generation device and generation method used for the same, program for executing the methods and computer readable medium containing the program
WO2010064316A1 (fr) * 2008-12-05 2010-06-10 オリンパス株式会社 Dispositif, procédé et programme de traitement d'image
US8204336B2 (en) 2008-07-16 2012-06-19 Panasonic Corporation Removing noise by adding the input image to a reference image
US8411205B2 (en) 2007-07-11 2013-04-02 Olympus Corporation Noise reducing image processing apparatus
JP7487611B2 (ja) 2020-08-24 2024-05-21 株式会社リコー 画像形成装置及びプログラム

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01143583A (ja) * 1987-11-30 1989-06-06 Nec Corp 画像信号の雑音除去方法
JPH04101579A (ja) * 1990-08-21 1992-04-03 Toshiba Corp テレビジョン信号の処理装置
JPH06121194A (ja) * 1992-10-08 1994-04-28 Sony Corp ノイズ除去回路
JPH114415A (ja) * 1997-06-12 1999-01-06 Sony Corp 画像変換装置、画像変換方法、学習装置、学習方法、および、伝送媒体
JP2000059652A (ja) * 1998-08-07 2000-02-25 Sony Corp ノイズ除去装置及びノイズ除去方法

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0612294A (ja) * 1992-06-26 1994-01-21 Sekisui Chem Co Ltd 監視装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01143583A (ja) * 1987-11-30 1989-06-06 Nec Corp 画像信号の雑音除去方法
JPH04101579A (ja) * 1990-08-21 1992-04-03 Toshiba Corp テレビジョン信号の処理装置
JPH06121194A (ja) * 1992-10-08 1994-04-28 Sony Corp ノイズ除去回路
JPH114415A (ja) * 1997-06-12 1999-01-06 Sony Corp 画像変換装置、画像変換方法、学習装置、学習方法、および、伝送媒体
JP2000059652A (ja) * 1998-08-07 2000-02-25 Sony Corp ノイズ除去装置及びノイズ除去方法

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7729558B2 (en) 2002-11-20 2010-06-01 Sony Corporation Image signal, processing device and processing method, coefficient data generation device and generation method used for the same, program for executing the methods and computer readable medium containing the program
CN100407755C (zh) * 2002-12-13 2008-07-30 索尼株式会社 用于处理图像信号的设备和方法
WO2004055775A1 (fr) * 2002-12-13 2004-07-01 Sony Corporation Appareil de traitement de signaux d'images, procede de traitement de signaux d'images, programme de mise en pratique dudit procede et support lisible par ordinateur dans lequel ledit programme a ete enregistre
US7595800B2 (en) 2003-02-13 2009-09-29 Sony Corporation Signal processing device, method, and program
US7609292B2 (en) 2003-02-13 2009-10-27 Sony Corporation Signal processing device, method, and program
US7734113B2 (en) 2003-02-13 2010-06-08 Sony Corporation Signal processing device, method, and program
US7576777B2 (en) 2003-02-13 2009-08-18 Sony Corporation Signal processing device, method, and program
US7590304B2 (en) 2003-02-13 2009-09-15 Sony Corporation Signal processing device, method, and program
WO2004072898A1 (fr) * 2003-02-13 2004-08-26 Sony Corporation Dispositif, procede et programme de traitement de signaux
US7593594B2 (en) 2003-02-13 2009-09-22 Sony Corporation Signal processing device, method, and program
US7668393B2 (en) 2003-02-13 2010-02-23 Sony Corporation Signal processing device, method, and program
US7593601B2 (en) 2003-02-25 2009-09-22 Sony Corporation Image processing device, method, and program
WO2004077354A1 (fr) * 2003-02-25 2004-09-10 Sony Corporation Dispositif, procédé et programme de traitement d'images
WO2005066897A1 (fr) * 2004-01-06 2005-07-21 Sony Corporation Dispositif, procede, support d'enregistrement, et programme d'infographie
US7899208B2 (en) 2004-01-06 2011-03-01 Sony Corporation Image processing device and method, recording medium, and program for tracking a desired point in a moving image
JP2007142633A (ja) * 2005-11-16 2007-06-07 Kddi Corp ショット境界検出装置
JP4510749B2 (ja) * 2005-11-16 2010-07-28 Kddi株式会社 ショット境界検出装置
US8411205B2 (en) 2007-07-11 2013-04-02 Olympus Corporation Noise reducing image processing apparatus
US8204336B2 (en) 2008-07-16 2012-06-19 Panasonic Corporation Removing noise by adding the input image to a reference image
WO2010064316A1 (fr) * 2008-12-05 2010-06-10 オリンパス株式会社 Dispositif, procédé et programme de traitement d'image
JP7487611B2 (ja) 2020-08-24 2024-05-21 株式会社リコー 画像形成装置及びプログラム

Also Published As

Publication number Publication date
KR20020062274A (ko) 2002-07-25
KR100816593B1 (ko) 2008-03-24

Similar Documents

Publication Publication Date Title
US7265791B2 (en) Method and apparatus for de-interlacing video signal
US7085318B2 (en) Image processing system, image processing method, program, and recording medium
US20100202711A1 (en) Image processing apparatus, image processing method, and program
KR20060047595A (ko) 적응 시간적인 예측을 채용하는 움직임 벡터 추정
NL1027270C2 (nl) Deïnterlinieringsinrichting met een ruisverminderings/verwijderingsinrichting.
KR20080033094A (ko) 모션이 보상된 이미지를 위한 보간 방법 및 이 방법의구현을 위한 디바이스
KR20070094796A (ko) 디인터레이싱 방법, 장치 및 시스템
US6930728B2 (en) Scan conversion apparatus
JPWO2006025396A1 (ja) 画像処理装置および画像処理プログラム
WO2001097510A1 (fr) Systeme et procede de traitement d'images, programme et support d'enregistrement
US8139151B2 (en) Moving image processing apparatus, control method thereof, and program
JP4407015B2 (ja) ノイズ除去装置およびノイズ除去方法
KR20070094521A (ko) 화상 처리 장치 및 화상처리 방법과 프로그램
AU2004200237B2 (en) Image processing apparatus with frame-rate conversion and method thereof
JP4746909B2 (ja) ビデオシーケンスの補助データ処理
Park et al. Covariance-based adaptive deinterlacing method using edge map
JP4886479B2 (ja) 動きベクトル補正装置、動きベクトル補正プログラム、補間フレーム生成装置及び映像補正装置
JP2010118940A (ja) 画像処理装置、画像処理方法、及びプログラム
JP3723995B2 (ja) 画像情報変換装置および方法
JP4470282B2 (ja) 画像処理装置および画像処理方法
JPH0730859A (ja) フレーム補間装置
JP4250807B2 (ja) フィールド周波数変換装置および変換方法
JP2006186504A (ja) 画像処理装置および方法、記録媒体、並びにプログラム
JPH08317347A (ja) 画像情報変換装置
WO2005013615A1 (fr) Procede et appareil de desentrelacement adaptatif bases sur un champ de phase corrigee, et programmes memoires de supports d'informations permettant d'executer ce procede de desentrelacement adaptatif

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP KR US

WWE Wipo information: entry into national phase

Ref document number: 1020027001976

Country of ref document: KR

WWP Wipo information: published in national office

Ref document number: 1020027001976

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 10049553

Country of ref document: US