WO2024047994A1 - Input information generation device, image processing device, input information generation method, learning device, program, and learning method for noise reduction device - Google Patents


Info

Publication number
WO2024047994A1
Authority
WO
WIPO (PCT)
Prior art keywords
image, video, input, unit, images
Application number
PCT/JP2023/021204
Other languages
English (en)
Japanese (ja)
Inventor
宏 能地
ピヤワト スワンウイタヤ
Original Assignee
LeapMind株式会社
Priority date
Filing date
Publication date
Priority claimed from JP2022137843A (published as JP2024033920A)
Priority claimed from JP2022137834A (published as JP2024033913A)
Application filed by LeapMind株式会社
Publication of WO2024047994A1



Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/72 - Data preparation, e.g. statistical preprocessing of image or video features
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/14 - Picture signal circuitry for video frequency region
    • H04N5/21 - Circuitry for suppressing or minimising disturbance, e.g. moiré or halo

Definitions

  • the present invention relates to an input information generation device, an image processing device, an input information generation method, a learning device, a program, and a learning method for a noise reduction device.
  • This application claims priority based on Japanese Patent Application No. 2022-137834 and Japanese Patent Application No. 2022-137843, both filed in Japan on August 31, 2022, the entire contents of which are incorporated herein by reference.
  • When capturing an image with an imaging device, if the amount of ambient light is insufficient or if settings of the imaging device such as shutter speed, aperture, or ISO sensitivity are inappropriate, the resulting image may be of low quality.
  • the present invention aims to provide a technology that can convert a low-quality video to a high-quality video using lightweight calculations.
  • One aspect of the present invention is an input information generation device including: an image acquisition unit that acquires, as input images, a plurality of frames including at least a target frame for which input information is to be generated, from among the frames constituting a moving image; an input conversion unit that converts the pixel values of the input images into input data having a smaller number of bits than the number of bits representing the pixel values of the input images; a synthesis unit that combines the plurality of pieces of converted input data into one piece of composite data; and an output unit that outputs the composite data.
  • One aspect of the present invention is the input information generation device according to (1) above, in which the moving image is a color moving image, the image acquisition unit acquires the pixel values of each color from one frame as a plurality of different images, and the input conversion unit converts each of the plurality of images acquired from the one frame into the input data.
  • One aspect of the present invention is the input information generation device according to (1) or (2) above, in which the image acquisition unit acquires images of a plurality of frames that are consecutively adjacent to the target frame, both before and after it, from among the frames constituting the video.
  • One aspect of the present invention is the input information generation device according to any one of (1) to (3) above, in which, after the output unit outputs the composite data for the target frame, the image acquisition unit sets a frame adjacent to the target frame as the new target frame and acquires a plurality of frames including at least that target frame as the input images.
  • One aspect of the present invention is the above input information generation device further including an integration unit that integrates a plurality of adjacent frames, which are frames other than the target frame among the plurality of frames acquired by the image acquisition unit, into one integrated frame by performing an operation based on the pixel values of the adjacent frames, in which the input conversion unit converts the pixel values of the target frame into the input data and further converts the pixel values of the integrated frame into the input data, and the synthesis unit combines the plurality of pieces of input data converted based on the target frame and the plurality of pieces of input data converted based on the integrated frame into one piece of the composite data.
  • One aspect of the present invention is the input information generation device according to (5) above, in which the integrating unit sets the average value of the pixel values of the plurality of adjacent frames as the pixel value of the integrated frame.
  • One aspect of the present invention is the input information generation device according to (6) above, in which the integrating unit calculates the pixel values of the integrated frame by computing a weighted average of the plurality of adjacent frames according to their temporal distance from the target frame.
  • One aspect of the present invention is the input information generation device according to (6) above, in which the integrating unit excludes frames with large brightness changes, among the frames constituting the video, from the targets for calculating the average value.
  • The input information generation device includes an average value temporary storage unit that stores the average value of the pixel values of predetermined frames among the frames constituting the moving image, and the integrating unit calculates the pixel values of the integrated frame by an operation based on the value stored in the average value temporary storage unit and the target frame.
  • The input information generation device further includes an imaging condition acquisition unit that acquires the imaging conditions of the moving image, and an adjustment unit that adjusts the average value stored in the average value temporary storage unit according to the acquired imaging conditions.
  • The input information generation device further includes a comparison unit that compares the value stored in the average value temporary storage unit with the pixel value of the target frame. When, as a result of the comparison by the comparison unit, the difference is less than or equal to a predetermined value, the integration unit calculates the pixel value of the integrated frame as a moving average based on the value stored in the average value temporary storage unit and the target frame; when the difference is not less than or equal to the predetermined value, the integration unit sets the pixel value of the target frame as the pixel value of the integrated frame.
  • One aspect of the present invention is the input information generation device according to (5) above, in which the integrating unit uses a randomly specified frame among the adjacent frames acquired by the image acquisition unit as the integrated frame.
  • One aspect of the present invention is an image processing device including the input information generation device according to any one of (1) to (12) above, and a convolutional neural network that uses the composite data output by the input information generation device as input information.
  • One aspect of the present invention is an input information generation method including: an image acquisition step of acquiring, as input images, a plurality of frames including at least a target frame for which input information is to be generated, from among the frames constituting a moving image; an input conversion step of converting the pixel values of the input images into input data having a smaller number of bits than the number of bits representing the pixel values of the input images; a synthesis step of combining the plurality of pieces of converted input data into one piece of composite data; and an output step of outputting the composite data.
  • (A1) One aspect of the present invention is a learning device including: an image acquisition unit that acquires first image information including at least one image, and second image information including at least one image that captures the same subject as the subject captured in the image included in the first image information and that is of lower image quality than the image included in the first image information; a video information generation unit that cuts out a plurality of images at different positions, each being a part of the acquired first image information, and combines the plurality of cut-out images to generate first video information, and that cuts out a plurality of images at different positions, each being a part of the acquired second image information, and combines the plurality of cut-out images to generate second video information; and a learning unit that learns to infer a high-quality video from a low-quality video based on teacher data including the first video information and the second video information generated by the video information generation unit.
  • One aspect of the present invention is the learning device according to (A1) above, in which the second image information includes a plurality of images that capture the same subject as the subject captured in the image included in the first image information and on each of which different noise is superimposed, and the video information generation unit generates the second video information by cutting out a different portion from each of the plurality of images included in the second image information.
  • One aspect of the present invention is the learning device according to (A1) or (A2) above, in which the plurality of images included in the second image information are images captured at different times that are close to each other.
  • One aspect of the present invention is the learning device according to any one of (A1) to (A3) above, in which the video information generation unit generates the first video information by cutting out different portions from one image included in the first image information.
  • One aspect of the present invention is the learning device according to any one of (A1) to (A4) above, in which the video information generation unit cuts out the plurality of images at positions shifted in a predetermined direction so that the cut-out images are located at different positions.
  • In the above learning device, the video information generation unit cuts out the plurality of images at positions each shifted by a predetermined amount in a predetermined direction.
  • One aspect of the present invention is the learning device according to (A6) above, in which the predetermined direction in which the video information generation unit cuts out the images is calculated by affine transformation.
  • One aspect of the present invention is the learning device according to (A6) above, further including a trajectory vector acquisition unit that acquires a trajectory vector, in which the predetermined direction in which the video information generation unit cuts out the images is calculated based on the acquired trajectory vector.
  • One aspect of the present invention is a learning device including: an image acquisition unit that acquires image information including at least one image; a cutout unit that cuts out a plurality of images at different positions, each being a part of the acquired image information; a first video information generation unit that combines the plurality of cut-out images to generate first video information; a noise superposition unit that superimposes noise on each of the plurality of images cut out by the cutout unit; a second video information generation unit that combines the plurality of noise-superimposed images to generate second video information; and a learning unit that learns to infer a high-quality video from a low-quality video based on teacher data including the first video information generated by the first video information generation unit and the second video information generated by the second video information generation unit.
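The following is a minimal, hedged sketch of this kind of training-pair generation: windows are cut out of one still image at positions shifted along a direction to form a synthetic "video", and a noisy copy of each crop serves as the low-quality counterpart. The function name, the shift step, and the Gaussian-noise model are illustrative assumptions, not the claimed implementation.

```python
# Illustrative sketch of generating (first video information, second video
# information) training pairs from a single high-quality still image.
import numpy as np

def make_training_pair(still, num_frames=5, crop=256, step=(2, 3), sigma=10.0,
                       rng=None):
    """Return (clean_clip, noisy_clip), each of shape (num_frames, crop, crop)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = still.shape[:2]
    clean_frames, noisy_frames = [], []
    for i in range(num_frames):
        # Cut out a window shifted by a predetermined amount per frame.
        y = min(i * step[0], h - crop)
        x = min(i * step[1], w - crop)
        patch = still[y:y + crop, x:x + crop].astype(np.float32)
        clean_frames.append(patch)
        # Superimpose different noise on each cut-out image (assumed Gaussian).
        noisy_frames.append(patch + rng.normal(0.0, sigma, patch.shape))
    return np.stack(clean_frames), np.stack(noisy_frames)
```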
  • One aspect of the present invention is a program that causes a computer to execute: an image acquisition step of acquiring first image information including at least one image, and second image information including at least one image that captures the same subject as the subject captured in the image included in the first image information and that is of lower image quality than the image included in the first image information; a video information generation step of cutting out a plurality of images at different positions, each being a part of the acquired first image information, combining the plurality of cut-out images to generate first video information, cutting out a plurality of images at different positions, each being a part of the acquired second image information, and combining the plurality of cut-out images to generate second video information; and a learning step of learning to infer a high-quality video from a low-quality video based on teacher data including the first video information and the second video information generated in the video information generation step.
  • One aspect of the present invention is a learning method including: an image acquisition step of acquiring first image information including at least one image, and second image information including at least one image that captures the same subject as the subject captured in the image included in the first image information and that is of lower image quality than the image included in the first image information; a video information generation step of cutting out a plurality of images at different positions, each being a part of the acquired first image information, combining the plurality of cut-out images to generate first video information, cutting out a plurality of images at different positions, each being a part of the acquired second image information, and combining the plurality of cut-out images to generate second video information; and a learning step of learning to infer a high-quality video from a low-quality video based on teacher data including the generated first video information and second video information.
  • FIG. 1 is a block diagram illustrating an example of a functional configuration of a high-quality video generation system according to a first embodiment.
  • FIG. 2 is a diagram illustrating an example of a convolutional neural network according to the first embodiment.
  • FIG. 3 is a diagram illustrating frames constituting a moving image according to the first embodiment.
  • FIG. 4 is a diagram for explaining an overview of an input information generation method according to the first embodiment.
  • FIG. 5 is a block diagram illustrating an example of a functional configuration for input information generation according to the first embodiment.
  • FIG. 6 is a block diagram illustrating an example of the functional configuration of an input conversion unit according to the first embodiment.
  • FIG. 7 is a diagram for explaining an overview of an input information generation method according to a second embodiment.
  • FIG. 8 is a block diagram illustrating an example of a functional configuration for input information generation according to the second embodiment.
  • FIG. 9 is a block diagram illustrating an example of a functional configuration for input information generation according to the third embodiment.
  • A block diagram illustrating an example of a functional configuration for input information generation according to a fourth embodiment.
  • A diagram for explaining an overview of a learning system according to a fifth embodiment.
  • A diagram showing an example of the functional configuration of a learning device according to the fifth embodiment.
  • FIG. 12 is a diagram for explaining an example of the position of an image cut out from a high-quality image by the learning device according to the fifth embodiment.
  • FIG. 12 is a diagram for explaining an example of the position of an image cut out from a low-quality image by the learning device according to the fifth embodiment.
  • FIG. 12 is a diagram for explaining an example of the direction in which the learning device according to the fifth embodiment cuts out images.
  • FIG. 12 is a diagram illustrating an example of a functional configuration of a learning device according to a fifth embodiment when the learning device generates a moving image based on a trajectory vector.
  • FIG. 12 is a diagram for explaining an example of the position of an image cut out from a still image when a learning device according to a modification of the fifth embodiment generates a moving image based on a trajectory vector.
  • FIG. 12 is a flowchart illustrating an example of a series of operations of a learning method for a noise reduction device according to a modification of the fifth embodiment.
  • A diagram for explaining an overview of a learning system according to a sixth embodiment.
  • A diagram showing an example of the functional configuration of a video information generation unit according to the sixth embodiment.
  • the input information generation device, image processing device, and input information generation method according to the present embodiment receive low-quality video information with superimposed noise as input and generate high-quality video information from which noise has been removed.
  • Low-quality videos include videos with low image quality.
  • High-quality videos include videos with high image quality.
  • An example of a high-quality moving image is a moving image with high image quality captured with low ISO sensitivity and a long exposure.
  • An example of a low-quality moving image is a moving image with low image quality captured with high ISO sensitivity and a short exposure.
  • In the following, image quality deterioration due to noise will be described as an example of a cause of low-quality video, but the present embodiment is widely applicable to factors other than noise that degrade video quality.
  • Factors that can degrade video quality include, for example, a decrease in resolution or color shift due to optical aberrations, a decrease in resolution due to camera shake or subject shake, uneven black levels due to dark current or circuit characteristics, ghosts and flare caused by high-brightness subjects, and signal level abnormalities.
  • noise includes streak-like noise that occurs in the horizontal or vertical direction of an image, noise that occurs in a fixed pattern in an image, and the like. Further, noise specific to moving images, such as flicker-like noise that fluctuates between consecutive frames, may be included.
  • The input information generation device, image processing device, and input information generation method according to the present embodiment improve the image quality of each frame included in a video by image processing, thereby improving the quality of the video.
  • As the video to be processed, a video captured by an imaging device may be used, or a video prepared in advance may be used.
  • In the following description, a video of low quality may be referred to as a low-image-quality video or a noise video.
  • Likewise, a video of high quality may be referred to as a high-image-quality video.
  • The video targeted by the input information generation device, image processing device, and input information generation method according to the present embodiment may be a video captured by a CCD camera using a CCD (Charge Coupled Device) image sensor.
  • The moving image targeted by the input information generation device, image processing device, and input information generation method according to the present embodiment may also be a video captured by a CMOS camera using a CMOS (Complementary Metal Oxide Semiconductor) image sensor.
  • the video targeted by the input information generation device, image processing device, and input information generation method according to the present embodiment may be a color video or a monochrome video.
  • The video targeted by the input information generation device, image processing device, and input information generation method according to the present embodiment may also be a video captured by an infrared camera using an infrared sensor or the like that acquires non-visible light components.
  • FIG. 1 is a block diagram showing an example of the functional configuration of a high-quality video generation system according to the first embodiment.
  • An example of the functional configuration of the high-quality video generation system 1 will be described with reference to the same figure.
  • the high-quality video generation system 1 includes an imaging device 100, an input information generation device 10, and a convolutional neural network 200 (hereinafter referred to as "CNN 200") as its functions.
  • the input information generation device 10 and CNN 200 perform image processing on each frame constituting the moving image captured by the imaging device 100.
  • the input information generation device 10 and the CNN 200 include trained models that have been trained in advance.
  • a configuration including the input information generation device 10 and CNN 200 may be referred to as an image processing device 2.
  • the high-quality moving image generation system 1 may be configured to include an encoding unit that compresses and encodes the output of the image processing device 2, and a predetermined memory that holds the results compressed and encoded by the encoder.
  • the imaging device 100 images a moving image.
  • the moving image captured by the imaging device 100 is a low-quality moving image that is subject to quality improvement.
  • the imaging device 100 may be, for example, a surveillance camera installed in a dark (low amount of light) location.
  • the imaging device 100 images a low-quality moving image due to insufficient light, for example.
  • the imaging device 100 outputs the captured moving image to the input information generation device 10.
  • A moving image captured by the imaging device 100 becomes an input to the image processing device 2. Therefore, the video output from the imaging device 100 to the input information generation device 10 may be referred to as video information IM.
  • both the imaging device 100 and the image processing device 2 may exist within a housing of a smartphone, a tablet terminal, or the like. That is, the high-quality video generation system 1 may exist as an element constituting an edge device. Further, the imaging device 100 may be connected to the image processing device 2 via a predetermined communication network. That is, the high-quality video generation system 1 may exist by having components connected to each other via a predetermined communication network. Further, the imaging device 100 may be configured to include a plurality of lenses and a plurality of image sensors respectively corresponding to the plurality of lenses. As a specific example of such a configuration, the imaging device 100 may include a plurality of lenses and image sensors so as to acquire images with different angles of view.
  • the images acquired from the respective image sensors can be said to be spatially adjacent to each other.
  • the high-quality video generation system 1 is applicable not only to a plurality of temporally adjacent images such as a video, but also to a plurality of spatially adjacent images.
  • the input information generation device 10 acquires video information IM from the imaging device 100.
  • the input information generation device 10 generates input information IN based on the acquired video information IM.
  • Input information IN is generated for each frame that constitutes moving image information IM.
  • the input information IN may be generated based on a target frame and other frames determined based on the target frame.
  • the other frame determined based on the frame may be a frame temporally adjacent to the target frame.
  • the CNN 200 is a convolutional neural network that uses the data output by the input information generation device 10 as input information IN. An example of the CNN 200 will be described with reference to FIG. 2.
  • FIG. 2 is a diagram showing an example of the CNN 200 according to the first embodiment. Details of the CNN 200 will be explained in detail with reference to the figure.
  • CNN 200 is a neural network with a multilayer structure.
  • the CNN 200 is a multilayer network including an input layer 210 to which input information IN is input, a convolution layer 220 to perform convolution operations, a pooling layer 230 to perform pooling, and an output layer 240. In at least a portion of the CNN 200, the convolution layer 220 and the pooling layer 230 are alternately connected.
  • CNN200 is a model widely used for image recognition and video recognition.
  • the CNN 200 may further include layers having other functions, such as a fully connected layer.
  • The pooling layer 230 may include a quantization layer that performs a quantization operation to reduce the number of bits of the operation result of the convolution layer 220. Specifically, when the result of the convolution operation in the convolution layer 220 is 16 bits, the quantization layer performs an operation to reduce the number of bits of that result to 8 bits or less.
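As a rough illustration of that bit reduction, the sketch below rescales a 16-bit activation map so that every value fits in 8 bits or fewer. The uniform min-max rescaling is an assumption made for illustration; the patent does not specify the quantization rule.

```python
# Hedged sketch: reduce the bit width of convolution results to out_bits.
import numpy as np

def quantize_activations(conv_out_16bit, out_bits=8):
    """Rescale and clip activations so each value fits in out_bits bits."""
    levels = (1 << out_bits) - 1                      # e.g. 255 for 8 bits
    x = conv_out_16bit.astype(np.float32)
    x = (x - x.min()) / max(float(x.max() - x.min()), 1e-8)  # normalize to [0, 1]
    return np.clip(np.round(x * levels), 0, levels).astype(np.uint8)
```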
  • the CNN 200 may adopt a configuration in which the outputs of each of the plurality of convolutional layers 220 and pooling layers 230 included in the CNN 200 are used as intermediate outputs and inputs to other layers.
  • The CNN 200 may configure a U-Net by using the outputs of the plurality of convolutional layers 220 and pooling layers 230 included in the CNN 200 as intermediate outputs and inputs to other layers.
  • the CNN 200 includes an encoder section that extracts a feature amount by a convolution operation, and a decoder section that performs a deconvolution operation based on the extracted feature amount.
  • Input information IN is input to the input layer 210.
  • Input information IN is generated based on the input image.
  • the input image is a frame image that constitutes a moving image.
  • the input information generation device 10 according to this embodiment generates input information IN from an input image.
  • the elements of the input information IN may be, for example, 2-bit unsigned integers (0, 1, 2, 3).
  • the elements of the input data may be, for example, 4-bit or 8-bit integers.
  • the convolution layer 220 performs a convolution operation on the input information IN input to the input layer 210.
  • the convolution layer 220 performs a convolution operation on the low-bit input information IN.
  • the convolution layer 220 outputs predetermined output data to the pooling layer 230 as a result of performing a predetermined convolution operation.
  • the pooling layer extracts a representative value of a certain area based on the result of the convolution operation performed by the convolution layer 220. Specifically, the pooling layer 230 compresses the output data of the convolution layer 220 by performing an operation such as average pooling or MAX pooling on the output data of the convolution operation output by the convolution layer 220.
  • the output layer 240 is a layer that outputs the results of the CNN 200.
  • the output layer 240 may output the results of the CNN 200 using, for example, an identity function or a softmax function.
  • the layer provided before the output layer 240 may be the convolution layer 220, the pooling layer 230, or another layer.
  • FIG. 3 is a diagram showing frames constituting a moving image according to the first embodiment.
  • frames used by the input information generation device 10 to generate input information IN will be described.
  • the figure shows a plurality of consecutive frames constituting a moving image.
  • Frames F1 to F7 shown in the figure are examples of a plurality of consecutive frames constituting a moving image.
  • each frame is a RAW image that has not been compressed and encoded, and each pixel is expressed with 12 or 14 bits.
  • the number of pixels in each frame is the number of pixels necessary to satisfy a predetermined video format such as 1920x1080 or 4096x2160.
  • the processing target of the CNN 200 will be described as a RAW image, but the processing target is not limited to this. If the image to be processed contains sufficient signal components, the image that has been subjected to processing such as compression encoding may be used as the target.
  • the input information generation device 10 generates input information IN based on a target frame TF, which is a target frame, and an adjacent frame AF, which is a frame adjacent to the target frame TF.
  • the adjacent frame AF is, for example, a frame that is consecutively adjacent before or after the target frame TF.
  • two frames before and after the target frame TF are set as adjacent frames AF. That is, when the target frame TF is the frame F4, the frame F2, the frame F3, the frame F5, and the frame F6 are the adjacent frames AF.
  • the number of adjacent frames AF is not limited to this example, and may be one frame before and after the target frame TF, or three frames before and after the target frame TF. Further, the adjacent frames AF are not limited to the example of adjacent frames before and after the target frame TF, but may be only frames adjacent to either the front or the rear of the target frame TF, for example. Furthermore, the adjacent frame AF does not need to be continuous with the target frame TF; for example, when frame F4 is the target frame TF, the adjacent frame AF may be frames F2, F6, etc. that are not continuous with frame F4.
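The following small sketch shows one way the target frame TF and its adjacent frames AF could be selected (two consecutive frames before and after the target, as in the example above). The boundary handling at the start and end of the video is an assumption for illustration.

```python
# Hedged sketch: pick a target frame and its adjacent frames from a sequence.
def pick_frames(frames, target_index, before=2, after=2):
    """Return (target_frame, adjacent_frames) for the given index."""
    lo = max(0, target_index - before)
    hi = min(len(frames), target_index + after + 1)
    adjacent = [frames[i] for i in range(lo, hi) if i != target_index]
    return frames[target_index], adjacent
```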
  • FIG. 4 is a diagram for explaining an overview of the input information generation method according to the first embodiment.
  • a method for generating input information IN by the input information generation device 10 will be described with reference to the same figure.
  • the figure shows a frame at time t-2, a frame at time t-1, a frame at time t, a frame at time t+1, and a frame at time t+2.
  • the frame at time t corresponds to the above-described target frame TF
  • the frames at time t-2, time t-1, time t+1, and time t+2 correspond to adjacent frames AF.
  • Since each frame includes a large number of pixels, a circuit that processes an entire frame at once becomes large-scale. Therefore, when processing is performed in the CNN 200, it is preferable to divide each frame into pieces of a predetermined size. In this embodiment, as an example, a case where each frame is divided into a plurality of patches of size 256 × 256 will be described.
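A minimal sketch of that patch division is shown below; the tiling scheme (non-overlapping tiles, clipped at the edges) is an assumption, since the text only states the 256 × 256 patch size.

```python
# Hedged sketch: divide a frame into 256x256 patches for patch-wise processing.
import numpy as np

def split_into_patches(frame, patch=256):
    """Yield (y, x, tile) covering the frame; edge tiles are clipped."""
    h, w = frame.shape[:2]
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            yield y, x, frame[y:y + patch, x:x + patch]
```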
  • Each frame is configured to include image data of 4 channels: R (Red), G (Green) ⁇ 2 channels, and B (Blue).
  • the input information generation device 10 performs quantization and vectorization of each channel.
  • the nine-channel data may be data in which pixel values are quantized using different threshold values.
  • the input information generation device 10 outputs the combined 180 channel data to the input layer of the CNN 200 as input information IN.
  • the input information generation device 10 may generate the input information IN based on three-channel data including RGB, for example. Further, in the illustrated example, a case has been described in which 9-channel data is generated by performing quantization and vectorization based on image data, but the aspect of the present embodiment is not limited to this example. It is preferable that the number of data to be generated is a number that allows efficient calculation after synthesis.
  • For example, the input information generation device 10 may generate M channels of data (M is a natural number of 1 or more) for each of the N channels (N is a natural number of 1 or more) of image data constituting one frame.
  • the value of N ⁇ M is preferably a value close to a multiple of 32 (or 64).
  • FIG. 5 is a block diagram illustrating an example of a functional configuration for generating input information according to the first embodiment.
  • the input information generation device 10 includes an image acquisition section 11, an input conversion section 12, a composition section 13, and an output section 14.
  • the input information generation device 10 includes a CPU (Central Processing Unit) (not shown), a storage device such as a ROM (Read only memory) or a RAM (Random access memory), etc., which are connected via a bus.
  • the input information generation device 10 functions as a device including an image acquisition section 11, an input conversion section 12, a composition section 13, and an output section 14 by executing an input information generation program.
  • the input information generation device 10 is realized using hardware such as an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), or an FPGA (Field-Programmable Gate Array). Good too.
  • the input information generation program may be recorded on a computer-readable recording medium.
  • the computer-readable recording medium is, for example, a portable medium such as a flexible disk, magneto-optical disk, ROM, or CD-ROM, or a storage device such as a hard disk built into a computer system.
  • the input information generation program may be transmitted via a telecommunications line.
  • a memory in which moving image information IM of a moving image captured by the imaging device 100 is stored is referred to as a first memory M1
  • A memory in which the input information IN generated by the input information generation device 10 is stored is referred to as a second memory M2.
  • the first memory M1 and the second memory M2 are storage devices such as ROM or RAM.
  • the image acquisition unit 11 acquires image information IMD that includes the input image used for processing from among the video information IM stored in the first memory M1. Specifically, the image acquisition unit 11 acquires, as input images, a plurality of frames including at least a target frame TF for which input information IN is to be generated, among a plurality of frames constituting a moving image. For example, the image acquisition unit 11 acquires an adjacent frame AF as an input image in addition to the target frame TF.
  • the adjacent frames AF may be a plurality of frames that are consecutively adjacent to each other before and after the target frame TF.
  • the image acquisition unit 11 acquires pixel values of each color from one frame as a plurality of different images. For example, if the imaging device 100 uses an image sensor employing a Bayer array, the image acquisition unit 11 acquires four channels of RGGB image information from one frame. The pixel value of the image acquired by the image acquisition unit 11 includes multi-bit elements.
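The sketch below shows one way the four RGGB channels could be acquired from a single Bayer-array RAW frame, as described above. The assumed phase layout (R in the top-left corner) and even frame dimensions are illustrative assumptions; real sensors differ.

```python
# Hedged sketch: split an HxW Bayer mosaic into four (H/2)x(W/2) colour planes.
import numpy as np

def bayer_to_rggb(raw):
    r  = raw[0::2, 0::2]   # red sites (assumed phase)
    g1 = raw[0::2, 1::2]   # first green sites
    g2 = raw[1::2, 0::2]   # second green sites
    b  = raw[1::2, 1::2]   # blue sites
    return np.stack([r, g1, g2, b])   # shape (4, H/2, W/2)
```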
  • the input conversion unit 12 acquires image information IMD from the image acquisition unit 11.
  • The input conversion unit 12 converts the pixel values of the input images of the plurality of frames included in the image information IMD into low-bit input data IND based on comparisons with a plurality of threshold values. Since each input image is a RAW image whose pixel values include multi-bit (for example, 12-bit or 14-bit) elements, the input conversion unit 12 converts the pixel values of the input image, based on the plurality of threshold values, into input data IND having a smaller number of bits (for example, 2 bits or 1 bit, which is fewer than 8 bits) than the number of bits representing those pixel values.
  • the input conversion unit 12 outputs the converted input data IND to the synthesis unit 13.
  • the input conversion unit 12 performs conversion for each color. That is, the input conversion unit 12 converts each of the plurality of images acquired from one frame into input data IND.
  • FIG. 6 is a block diagram showing an example of the functional configuration of the input conversion section according to the first embodiment.
  • the input conversion section 12 includes a plurality of conversion sections 121 and a threshold storage section 122.
  • As an example of the plurality of conversion units 121, the input conversion unit 12 includes a conversion unit 121-1, a conversion unit 121-2, ..., and a conversion unit 121-n (n is a natural number of 1 or more).
  • the number of converters 121 included in the input converter 12 may be the number of input data IND that the input converter 12 generates from one channel of input images. That is, when the input converter 12 converts a one-channel input image into nine-channel input data IND, the input converter 12 includes nine converters 121, converters 121-1 to 121-9.
  • the image data of the input image has a matrix-like data structure in which pixel data is multivalued, with each element in the x-axis direction and y-axis direction having more than 8 bits.
  • Each element is quantized into low-bit input data (for example, 2 bits or 1 bit, which is fewer than 8 bits).
  • the conversion unit 121 compares each element of the input image with a predetermined threshold.
  • the conversion unit 121 quantizes each element of the input image based on the comparison result.
  • the conversion unit 121 quantizes, for example, a 12-bit input image into a 2-bit or 1-bit value.
  • The conversion unit 121 may perform quantization by comparison with a number of threshold values corresponding to the number of bits after conversion. For example, one threshold value is sufficient for conversion to 1 bit, and three threshold values may be used for conversion to 2 bits. In other words, one threshold value may be used when the quantization performed by the conversion unit 121 is 1-bit quantization, and three threshold values may be used when it is 2-bit quantization. Note that if a large number of threshold values would be required, as in conversion to 8 bits, quantization may be performed using a function, a table, or the like instead of threshold values.
  • Each conversion unit 121 performs quantization on the same element using independent thresholds.
  • the input converter 12 outputs a vector including elements corresponding to the number of converters 121 as the calculation result (input data IND) for one channel of input.
  • The bit precision of the converted result, which is the output of the conversion unit 121, may be changed as appropriate based on the bit precision of the input image.
  • the threshold storage unit 122 stores a plurality of threshold values used in calculations performed by the conversion unit 121.
  • the threshold value stored in the threshold storage unit 122 is a predetermined value, and is set corresponding to each of the plurality of conversion units 121. Note that each threshold value may be a learning target parameter, and may be determined and updated in the learning step.
  • the mode of the input converter 12 is not limited to this.
  • For example, when the input image is image data that includes elements of three or more channels including color components, the conversion units 121 may be divided into a plurality of corresponding groups, and the corresponding elements may be input to and converted by each group.
  • Some kind of conversion processing may be applied in advance to the elements to be input to a predetermined conversion unit 121, and which conversion unit 121 the elements are input to may be switched depending on the presence or absence of such pre-processing.
  • The number of conversion units 121 does not need to be fixed, and may be determined as appropriate depending on the structure of the neural network or on hardware information. Note that if it is necessary to compensate for a decrease in calculation accuracy due to quantization by the conversion units 121, it is preferable to set the number of conversion units 121 to be greater than or equal to the bit precision of each element of the input image. More generally, it is preferable to set the number of conversion units 121 to be greater than the difference in bit precision of the input image before and after quantization. Specifically, when quantizing an input image whose pixel values are represented by 8 bits down to 1 bit, it is preferable to set the number of conversion units 121 to 7 or more (for example, 16 or 32), corresponding to the 7-bit difference.
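Here is a sketch, under stated assumptions, of how the conversion units 121 could work: each unit compares every element of one input channel against its own thresholds (one threshold for 1-bit output, three for 2-bit output), and the input conversion unit emits one low-bit channel per unit. The threshold values shown are placeholders; as noted above, they may also be learned parameters.

```python
# Hedged sketch of conversion units 121 with independent threshold sets.
import numpy as np

def conversion_unit(channel, thresholds):
    """Quantize one 2-D channel by counting how many thresholds each element
    exceeds (e.g. 3 thresholds -> codes 0..3, i.e. a 2-bit value)."""
    t = np.asarray(thresholds, dtype=np.float32).reshape(-1, 1, 1)
    return (channel[None, :, :] >= t).sum(axis=0).astype(np.uint8)

def input_conversion(channel, threshold_sets):
    """Apply n conversion units with independent thresholds to the same channel,
    producing n low-bit channels (the input data IND for that channel)."""
    return np.stack([conversion_unit(channel, ts) for ts in threshold_sets])
```

For instance, nine sets of three thresholds each would turn one colour channel into nine 2-bit channels, matching the nine-channel example discussed with FIG. 4.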
  • the combining unit 13 combines (concats) the plurality of converted input data IND into one data.
  • Data obtained by combining a plurality of input data is also referred to as composite data CD.
  • the combining process by the combining unit 13 may be a process of arranging (or connecting) a plurality of input data IND into one data.
  • the output section 14 outputs the composite data CD synthesized by the synthesis section 13.
  • the composite data CD may be temporarily stored in the second memory M2.
  • the composite data CD is, in other words, the input information IN input to the input layer 210 of the CNN 200.
  • After generating the input information IN for the target frame TF, the input information generation device 10 generates input information IN for the frame following the target frame TF.
  • The next frame may be a frame temporally continuous with the target frame TF. That is, after the output unit 14 outputs the composite data CD regarding the target frame TF, the image acquisition unit 11 shifts the target frame TF by one frame, setting a frame adjacent to the current target frame TF as the new target frame TF. In this way, the input information generation device 10 acquires a plurality of frames including at least the target frame TF as input images and generates the composite data CD.
  • The input information generation device 10 generates input information IN for all frames included in the video information IM. Note that, although an example was described in which the input information generation device 10 generates input information IN for all frames included in the video information IM, the aspect of the present embodiment is not limited to this example. For example, the input information generation device 10 may generate input information IN every predetermined number of frames. Furthermore, although the high-quality video generation system 1 converts the video into a high-quality video based on the video information IM, the output format is not limited to the video format. For example, the high-quality video generation system 1 may generate still images from videos. That is, this embodiment can also be applied when frames included in the video information IM are extracted to generate still images, by using the frames extracted from the video as target frames TF.
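To tie the pieces together, the following hedged end-to-end sketch follows the first-embodiment flow: for each target frame, the target and its adjacent frames are split into RGGB channels, every channel is quantized into several low-bit channels, and everything is concatenated into one piece of composite data (4 channels × 9 quantizations × 5 frames = 180 channels in the FIG. 4 example). It reuses the illustrative helpers sketched earlier (pick_frames, bayer_to_rggb, input_conversion), which are assumptions rather than the claimed circuitry.

```python
# Hedged sketch: build input information IN for every frame of a video.
import numpy as np

def make_input_information(frames, target_index, threshold_sets):
    target, adjacent = pick_frames(frames, target_index)   # boundary frames get fewer AF
    channels = []
    for frame in [target] + adjacent:
        for plane in bayer_to_rggb(frame):                  # 4 colour planes per frame
            channels.append(input_conversion(plane, threshold_sets))
    return np.concatenate(channels, axis=0)                 # composite data CD

def process_video(frames, threshold_sets):
    """Shift the target frame one after another and yield the composite data
    (input information IN) for each frame of the video."""
    for t in range(len(frames)):
        yield make_input_information(frames, t, threshold_sets)
```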
  • As described above, the input information generation device 10 includes the image acquisition unit 11, and thereby acquires, as input images, a plurality of frames including at least the target frame TF for which the input information IN is to be generated, from among the frames constituting the moving image.
  • Further, the input information generation device 10 includes the input conversion unit 12, and thereby converts the pixel values of the acquired input images of the plurality of frames into input data IND having a smaller number of bits (for example, 2 bits or 1 bit, which is fewer than 8 bits) than the number of bits representing the pixel values of the input images (for example, 12 bits).
  • The input information generation device 10 also includes the synthesis unit 13, which combines the plurality of pieces of converted input data IND into one piece of composite data CD, and the output unit 14, which outputs the synthesized composite data CD. That is, according to the present embodiment, the input information generation device 10 generates the input information IN by combining a plurality of pieces of input data IND obtained from a plurality of images including at least the target frame TF. The generated input information IN is input to the input layer 210 of the CNN 200.
  • the input information IN is information with lower bits than the input image in each element.
  • the CNN 200 can process low-bit information. Therefore, according to this embodiment, the processing of the CNN 200 can be reduced in weight.
  • Further, the input information IN is information generated based on a plurality of images. Therefore, when improving the quality of a video based on the input information IN, the processing can also take into account the plurality of frames adjacent to the target frame TF, which makes it possible to remove noise with high accuracy. Therefore, according to this embodiment, a low-quality video can be converted into a high-quality video with lightweight computation. Note that the processing of the CNN 200 is not limited to noise removal.
  • the moving image information IM includes, for example, pixel values of each color of RGB.
  • The image acquisition unit 11 acquires the pixel values of each color from one frame as a plurality of different images, and the input conversion unit 12 converts each of the plurality of images acquired from the one frame into different input data IND. Therefore, according to this embodiment, more accurate image processing can be performed, and noise can be removed with even higher accuracy.
  • the image acquisition unit 11 acquires images of a plurality of frames consecutively adjacent to each of the front and rear of the target frame TF, among the plurality of frames constituting the moving image.
  • the input information IN is generated based on the information of the frames adjacent before and after the target frame TF, so that more accurate image processing can be performed. Therefore, according to this embodiment, noise can be removed with high accuracy.
  • Further, after the composite data CD for the target frame TF is output, the image acquisition unit 11 sets a frame adjacent to the target frame TF as the new target frame TF and acquires a plurality of frames including at least that target frame TF as input images. That is, the input information generation device 10 generates input information IN for each of the plurality of frames included in the video by shifting the target frame TF one after another. Therefore, according to this embodiment, a low-quality video can be converted into a high-quality video.
  • The input information generation device 10 according to the first embodiment reads a plurality of frames including the target frame TF and the adjacent frames AF and performs quantization on each of the plurality of frames. Therefore, the number of frames to be quantized is large, and the computational load on the first layer becomes large. The second embodiment attempts to solve this problem and further reduce the computational load.
  • FIG. 7 is a diagram for explaining an overview of the input information generation method according to the second embodiment.
  • a method for generating input information IN by the input information generation device 10A according to the second embodiment will be described with reference to the same figure.
  • a frame at time t is shown as the target frame TF.
  • a frame at time t-2, a frame at time t-1, a frame at time t+1, and a frame at time t+2 are shown as adjacent frames AF.
  • Each frame is configured to include image data of four channels of RGGB.
  • the input information generation device 10A performs quantization and vectorization of each channel for the target frame TF.
  • The input information generation device 10A performs quantization and vectorization of each channel on the average image of the adjacent frames AF. That is, the second embodiment differs from the first embodiment in that the average image of the adjacent frames AF is quantized and vectorized, instead of quantizing and vectorizing each adjacent frame AF individually.
  • the input information generation device 10A converts each of the target frame TF and the average image of the adjacent frame AF into 64 channels of data, so a total of 128 channels of data is generated.
  • The averaging process may be performed by calculating a simple average of pixel values. Furthermore, the input information generation device 10A may generate an average image for each color by calculating the average for each color. That is, the input information generation device 10A may generate an average image for each of the R, G, and B images of the adjacent frames AF from time t-2 to time t+2.
  • quantization and vectorization are performed after calculating the average of adjacent frames AF, but the aspect of the present embodiment is not limited to this example.
  • the input information generation device 10A may be configured to calculate the average after performing quantization and vectorization, for example.
  • input information IN is generated by combining these.
  • 128 channels of data are generated.
  • The amount of information in the combined input information IN is smaller than that of the input information IN according to the first embodiment.
  • the amount of data representing the target frame TF is increased compared to the first embodiment.
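The second-embodiment variant could look like the hedged sketch below: the adjacent frames AF are first integrated into one average image, and only the target frame TF and that integrated frame IF are quantized and combined, so the composite data is smaller than in the first embodiment (128 channels in the example above). The helper names reuse the earlier illustrative sketches and are assumptions, not the claimed circuit.

```python
# Hedged sketch of the second embodiment: quantize the target frame and the
# average (integrated) image of the adjacent frames, then concatenate.
import numpy as np

def make_input_information_v2(frames, target_index, threshold_sets):
    target, adjacent = pick_frames(frames, target_index)
    integrated = np.mean(np.stack(adjacent).astype(np.float32), axis=0)  # simple average
    channels = []
    for frame in (target, integrated):
        for plane in bayer_to_rggb(frame):
            channels.append(input_conversion(plane, threshold_sets))
    return np.concatenate(channels, axis=0)
```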
  • FIG. 8 is a block diagram illustrating an example of a functional configuration for generating input information according to the second embodiment.
  • An example of the functional configuration of the input information generation device 10A will be described with reference to the same figure.
  • the input information generation device 10A differs from the input information generation device 10 in that it further includes an integration section 15, an input conversion section 12A instead of the input conversion section 12, and a synthesis section 13A instead of the synthesis section 13.
  • the same components as the input information generation device 10 may be given the same reference numerals and the description thereof may be omitted.
  • the integrating unit 15 acquires information regarding adjacent frames AF from the image acquiring unit 11.
  • the adjacent frame AF is a frame other than the target frame TF among the plurality of frames acquired by the image acquisition unit 11.
  • the integrating unit 15 performs a process of integrating a plurality of adjacent frames AF into one integrated frame IF by performing calculations based on the pixel values of the adjacent frames AF.
  • the integrating unit 15 outputs the integrated frame IF obtained as a result of the calculation to the input converting unit 12A.
  • the integrating unit 15 may perform the integrating process by, for example, obtaining a simple average of adjacent frames AF.
  • the integrating unit 15 takes, for example, the average value of the pixel values of the plurality of adjacent frames AF as the pixel value of the integrated frame IF.
  • the aspect of the integration process of the integration unit 15 is not limited to the example of calculating a simple average.
  • the integrating unit 15 may perform the integrating process by calculating a weighted average, for example.
  • the weighted average is calculated according to the temporal distance from the target frame TF.
  • For example, the integrating unit 15 may reduce the weight by multiplying the pixel values of the frames at time t-2 and time t+2, which are temporally distant from time t, by 0.7, which is smaller than 1, and increase the weight by multiplying the pixel values of the frames at time t-1 and time t+1 by 1.3, which is greater than 1.
  • In this way, the integrating unit 15 may calculate the pixel values of the integrated frame IF by calculating a weighted average of the plurality of adjacent frames AF according to their temporal distance from the target frame TF. By integrating using a weighted average, the degree of contribution of the adjacent frames AF to the target frame TF can be reflected in the integrated frame IF. Note that even for the same temporal distance from the target frame TF, the magnitude of the weight may differ depending on whether the adjacent frame is before or after the target frame TF. For example, the weight may be increased if the adjacent frame AF precedes the target frame TF and decreased if it follows the target frame TF.
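A minimal sketch of that weighted-average integration is shown below: frames temporally farther from the target are multiplied by the smaller weight (0.7) and closer frames by the larger weight (1.3). Normalizing by the weight sum is an assumption made so the result stays in the pixel-value range.

```python
# Hedged sketch: weighted average of adjacent frames by temporal distance.
import numpy as np

def integrate_weighted(adjacent, offsets, far_w=0.7, near_w=1.3):
    """adjacent: list of frames; offsets: temporal distance of each frame from
    the target frame (e.g. [-2, -1, 1, 2])."""
    weights = np.array([far_w if abs(o) >= 2 else near_w for o in offsets],
                       dtype=np.float32)
    stack = np.stack(adjacent).astype(np.float32)
    return np.tensordot(weights, stack, axes=1) / weights.sum()
```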
  • the input conversion unit 12A converts the pixel value of the target frame TF acquired from the image acquisition unit 11 into input data IND based on comparison with a plurality of threshold values. Further, the input conversion unit 12A converts the pixel values of the integrated frame IF obtained from the integration unit 15 into input data based on comparison with a plurality of threshold values. The input conversion section 12A outputs the converted input data IND to the synthesis section 13A.
  • the synthesis unit 13A synthesizes a plurality of input data IND converted based on the target frame TF and a plurality of input data IND converted based on the integrated frame IF into one composite data CD.
  • As described above, the input information generation device 10A further includes the integrating unit 15, and thereby integrates a plurality of adjacent frames AF into one integrated frame IF by performing an operation based on the pixel values of the adjacent frames AF.
  • the input conversion unit 12A converts the pixel value of the target frame TF into input data IND based on comparison with a plurality of threshold values, and further converts the pixel value of the integrated frame IF into input data IND based on comparison with a plurality of threshold values. Convert to IND.
  • the synthesizing unit 13A synthesizes a plurality of input data IND converted based on the target frame TF and a plurality of input data IND converted based on the integrated frame IF into one composite data CD. That is, the input information generation device 10A does not quantize and vectorize the adjacent frame AF like the target frame TF, but calculates an integrated frame IF based on a plurality of adjacent frames AF, and quantizes the integrated frame IF. and vectorization. Therefore, the input information IN synthesized by the synthesizing section 13A is smaller than the amount of information according to the first embodiment, and the calculation load on the first layer can be reduced.
  • The integrating unit 15 sets the average value of the pixel values of the plurality of adjacent frames AF as the pixel value of the integrated frame IF. That is, the integrating unit 15 takes the simple average of the adjacent frames AF as the integrated frame IF. Therefore, according to the present embodiment, input information IN having a smaller amount of information than the input information IN according to the first embodiment can be generated by a simple calculation, and the computational load on the first layer can be reduced.
  • the integrating unit 15 calculates the pixel value of the integrated frame IF by calculating a weighted average of the plurality of adjacent frames AF according to the temporal distance from the target frame TF. . Therefore, according to the present embodiment, the integrated frame IF can be generated in consideration of the degree of contribution of the adjacent frame AF to the target frame TF. Therefore, by using the input information IN generated by the input information generation device 10A, the CNN 200 can perform image processing with higher accuracy.
  • the input information generation device 10A includes the integrating unit 15 to calculate the average value of adjacent frames AF.
  • calculating the average value for each frame causes duplication of processing and is not efficient. Therefore, the third embodiment attempts to solve this problem and further lighten the calculation load.
  • FIG. 9 is a block diagram illustrating an example of a functional configuration for generating input information according to the third embodiment.
  • An example of the functional configuration of the input information generation device 10B will be described with reference to the same figure.
  • the input information generation device 10B differs from the input information generation device 10A in that it further includes an average value temporary storage section 16, an imaging condition acquisition section 17, and an adjustment section 18, and in that it includes an integration section 15B instead of the integration section 15.
  • the same components as the input information generation device 10A may be given the same reference numerals and the description thereof may be omitted.
  • the average value temporary storage unit 16 stores the average value of the pixel values of a predetermined frame among the frames making up the moving image.
  • the value stored in the average value temporary storage section 16 is calculated by the integration section 15B.
  • the integration unit 15B acquires information about the target frame TF from the image acquisition unit 11, and acquires the stored value SV from the average value temporary storage unit 16.
  • the integrating unit 15B calculates the pixel value of the integrated frame IF by calculation based on the target frame TF and the stored value SV, which is the value stored in the average value temporary storage unit 16.
  • the integrating unit 15B stores the calculated value in the average value temporary storage unit 16 as the calculated value CV. That is, the value stored in the average value temporary storage unit 16 is updated every time the integration unit 15B calculates a new target frame TF.
  • the input information generation device 10B thereby calculates a moving average based on the target frame TF (a minimal sketch follows below). Note that for the first frame, since no stored value SV exists yet in the average value temporary storage unit 16, the integrating unit 15B may perform the calculation based only on the target frame TF.
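  • A minimal sketch of this integration with a temporary storage, assuming an exponential moving average with a blending factor alpha (the factor value and the class name are illustrative assumptions):

```python
import numpy as np

class MovingAverageIntegrator:
    """Sketch of the integration unit 15B: keeps one stored average (the
    average value temporary storage unit 16) and updates it per target frame."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha      # contribution of the current target frame
        self.stored = None      # stored value SV

    def integrate(self, target_frame):
        tf = target_frame.astype(np.float32)
        if self.stored is None:
            # First frame: no stored value yet, so use the target frame only.
            integrated = tf
        else:
            # Moving average of the stored value and the current target frame.
            integrated = self.alpha * tf + (1.0 - self.alpha) * self.stored
        self.stored = integrated  # calculated value CV overwrites the storage
        return integrated
```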
  • the imaging condition acquisition unit 17 acquires video imaging conditions from the imaging device 100.
  • the video imaging conditions acquired by the imaging condition acquisition unit 17 may be, for example, settings of the imaging device such as shutter speed, aperture, or ISO sensitivity. Further, the video imaging conditions acquired by the imaging condition acquisition unit 17 may include other information regarding the operation and drive of the imaging device 100.
  • the adjustment unit 18 adjusts the value (average value) stored in the average value temporary storage unit 16 according to the imaging condition acquired by the imaging condition acquisition unit 17.
  • when the imaging conditions of the imaging device 100 change, the relationship between the pixel values of the target frame TF and the past moving average value also changes.
  • For example, if the ISO sensitivity is doubled by a change in the settings of the imaging device 100, the pixel values of the target frame TF become brighter relative to the past moving average value, so the moving average suddenly appears darker by comparison. In that case, by doubling the value stored in the average value temporary storage unit 16, the integrating unit 15B can continue calculating the moving average value while still making use of the value stored in the average value temporary storage unit 16.
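  • This adjustment could look like the following sketch, which assumes that pixel values scale linearly with ISO sensitivity (the function name and the linear-gain assumption are illustrative):

```python
def adjust_stored_average(stored_average, previous_iso, current_iso):
    """Sketch of the adjustment unit 18: when the ISO sensitivity changes,
    scale the stored moving average by the same ratio so that it remains
    comparable to newly captured target frames."""
    gain_ratio = current_iso / previous_iso
    return stored_average * gain_ratio

# Example: if the ISO doubles from 800 to 1600, the stored average is doubled.
# adjusted = adjust_stored_average(stored_average, 800, 1600)
```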
  • if the shooting scene of the moving image changes, the CNN 200 may not be able to properly perform image processing on the target frame TF. Therefore, when the shooting scene of the moving image has changed, it is preferable not to take a moving average with past frames.
  • therefore, the adjustment unit 18 may be configured to reset the value stored in the average value temporary storage unit 16 when the video shooting scene changes, so that the integration unit 15B starts calculating a new moving average value. Whether or not the shooting scene of the video has changed may be determined based on, for example, whether the power button, the shooting button, or the stop button of the imaging device 100 has been turned on or off.
  • in addition, a frame with a large change in brightness may be inserted into the moving image.
  • possible causes of such a frame include a case where a light source comes into view due to a change in the imaging angle, or a case where a car's headlights are reflected.
  • the integrating unit 15B may exclude a frame with a large luminance change among the plurality of frames from the targets for calculating the average value. By excluding frames with large brightness changes from the average value calculation target, it is possible to prevent the moving average value from being dragged by frames with large brightness changes.
  • whether the brightness change is large may be determined by comparing the pixel value of the immediately preceding target frame TF with the pixel value of the target frame TF to be calculated, and determining whether the difference is less than or equal to a threshold value (see the sketch below).
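  • For example, the decision to exclude a frame could be made as in the sketch below; the mean-absolute-difference metric and the threshold value are assumptions made for illustration:

```python
import numpy as np

def should_exclude(previous_target, current_target, threshold=30.0):
    """Return True when the brightness change between the previous and the
    current target frame is too large to fold into the moving average."""
    diff = np.mean(np.abs(current_target.astype(np.float32)
                          - previous_target.astype(np.float32)))
    return diff > threshold
```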
  • the input information generation device 10B further includes the average value temporary storage unit 16 to store the average value of the pixel values of a predetermined frame among the frames constituting the moving image. Further, according to the input information generation device 10B, the integrating unit 15B calculates the pixel value of the integrated frame IF by calculation based on the value stored in the average value temporary storage unit 16 and the target frame TF. That is, the input information generation device 10B calculates the pixel value of the integrated frame IF based on the target frame TF and the stored moving average value. Therefore, the input information generation device 10B has a lighter calculation load than the input information generation device 10A. Therefore, according to this embodiment, the calculation load can be further reduced.
  • the integrating unit 15B excludes frames with large luminance changes, from among the plurality of frames making up the video, from the targets for calculating the average value. Therefore, according to the present embodiment, when the brightness suddenly changes significantly, excluding that frame from the calculation of the moving average value prevents the pixel values of the integrated frame IF from being dragged by the pixel values of the frame whose brightness changed significantly.
  • the input information generation device 10B further includes the imaging condition acquisition unit 17 to acquire the imaging conditions of the moving image, and further includes the adjustment unit 18 to adjust the average value stored in the average value temporary storage section 16 according to the acquired imaging conditions. Therefore, according to this embodiment, the average value can be adjusted in response to changes in the imaging conditions, and the moving average value can continue to be calculated even if the imaging conditions change.
  • the input information generation device 10B calculates a moving average by including the average value temporary storage section 16.
  • the integrating unit 15B calculates the moving average of the entire image.
  • the input information generation device 10B generates input data IN based on the target frame TF and the moving average, and the CNN 200 removes noise from the moving image based on the generated input data IN. Since the integrating unit 15B calculates a moving average over the entire image, if a moving subject is captured in part of the video, afterimages may occur in that part as a result of noise removal.
  • the fourth embodiment attempts to solve this problem.
  • FIG. 10 is a block diagram illustrating an example of a functional configuration for generating input information according to the fourth embodiment.
  • An example of the functional configuration of an input information generation device 10C according to the fourth embodiment will be described with reference to the same figure.
  • the input information generation device 10C is different from the input information generation device 10B in that it further includes a comparison section 19 and includes an integration section 15C instead of the integration section 15B.
  • the input information generation device 10C may include the imaging condition acquisition section 17 and the adjustment section 18, as in the input information generation device 10B, or may not include them. In the illustrated example, a case is described in which the input information generation device 10C does not include the imaging condition acquisition section 17 and the adjustment section 18.
  • the same components as the input information generation device 10B may be given the same reference numerals and the description thereof may be omitted.
  • the comparison unit 19 acquires the stored value SV from the average value temporary storage unit 16 and acquires the target frame TF from the integration unit 15C.
  • the comparison unit 19 compares the acquired stored value SV stored in the average value temporary storage unit 16 and the pixel value of the target frame TF.
  • the comparison unit 19 may compare the entire image, each pixel, or each patch composed of a plurality of pixels.
  • the comparison unit 19 outputs the comparison result to the integration unit 15C as a comparison result CR.
  • the comparison result CR may include a difference between pixel values, or may include information about the result of comparing the difference with a predetermined threshold.
  • the integration unit 15C acquires the comparison result CR from the comparison unit 19 and acquires the stored value SV from the average value temporary storage unit 16. Based on the comparison result CR produced by the comparison unit 19, if the difference is less than or equal to a predetermined value, the integration unit 15C calculates a moving average based on the stored value SV stored in the average value temporary storage unit 16 and the target frame TF, and sets the calculated value as the pixel value of the integrated frame IF. Further, based on the comparison result CR, if the difference exceeds the predetermined value, the integration unit 15C sets the pixel value of the target frame TF as the pixel value of the integrated frame IF.
  • the integration process by the integration unit 15C may be performed for the entire image, for each pixel, or for each patch made up of multiple pixels.
  • that is, the pixel value of the integrated frame IF is the moving average value at locations where the difference is less than or equal to the predetermined value (i.e., locations with small movement), and the pixel value of the target frame TF at locations where the difference exceeds the predetermined value (i.e., locations with large movement).
  • the integrating unit 15C stores the calculated result in the average value temporary storage unit 16 as a calculated value CV.
  • a subject with large movements is not included in the average image, and a background with small movements is included in the average image.
  • the calculation performed by the input information generation device 10C can also be called a selective average.
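  • A per-pixel sketch of this selective average, combining the comparison unit 19 and the integration unit 15C, is shown below; the threshold, the blending factor, and the per-pixel granularity are illustrative assumptions:

```python
import numpy as np

def selective_average(stored, target_frame, threshold=20.0, alpha=0.2):
    """Where the difference to the stored average is small (static areas),
    blend toward the moving average; where it is large (moving subject),
    keep the pixel of the target frame TF."""
    tf = target_frame.astype(np.float32)
    sv = stored.astype(np.float32)
    diff = np.abs(tf - sv)
    blended = alpha * tf + (1.0 - alpha) * sv          # moving-average branch
    integrated = np.where(diff <= threshold, blended, tf)
    return integrated  # also written back to the storage as the calculated value CV
```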
  • Note that instead of making a binary choice between including or not including the target frame in the average image, the integrating unit 15C may apply a coefficient. For example, based on the comparison result CR produced by the comparison unit 19, if the difference is less than or equal to a predetermined value, the integrating unit 15C may calculate a moving average based on the value stored in the average value temporary storage unit 16 multiplied by a predetermined coefficient (for example, 0.9) and the target frame TF, and use the result as the pixel value of the integrated frame IF. Conversely, if the difference exceeds the predetermined value, the integrating unit 15C may calculate a moving average based on the stored value multiplied by a smaller predetermined coefficient (for example, 0.1) and the target frame TF, and use the result as the pixel value of the integrated frame IF.
  • As described above, the input information generation device 10C further includes the comparison unit 19 to compare the stored value SV stored in the average value temporary storage unit 16 with the pixel value of the target frame TF. If the difference is less than or equal to a predetermined value as a result of the comparison by the comparison unit 19, the integration unit 15C calculates a moving average based on the stored value SV stored in the average value temporary storage unit 16 and the target frame TF, and uses it as the pixel value of the integrated frame IF. Further, if the difference exceeds the predetermined value as a result of the comparison by the comparison unit 19, the integrating unit 15C sets the pixel value of the target frame TF as the pixel value of the integrated frame IF.
  • the pixel value of the integrated frame IF is determined by distinguishing between a subject with large movement and a background with small movement, and selectively performing averaging processing.
  • the input information generation device 10C generates input data IN based on the target frame TF and the integrated frame IF. Therefore, according to the present embodiment, since the pixel values of the adjacent frame AF are not reflected in the pixel values of the integrated frame IF in areas where there is large movement, it is possible to prevent the problem of occurrence of afterimages.
  • any one of the frames adjacent to the target frame TF may be specified by a predetermined algorithm and used as the integrated frame IF.
  • the predetermined algorithm may be one that randomly identifies one frame among frames adjacent to the target frame TF.
  • the integrating unit 15 sets a randomly specified frame among the adjacent frames AF acquired by the image acquiring unit 11 as an integrated frame IF.
  • note that aspects of the present invention are not limited to any one of the first to fourth embodiments described above; any of the first to fourth embodiments may be used selectively based on predetermined conditions.
  • the predetermined conditions may be video shooting conditions, shooting mode, exposure conditions, type of subject, etc.
  • as in any one of the first to fourth embodiments, it is preferable to perform learning using not only the target frame TF but also the adjacent frames AF. Calculations related to learning do not necessarily need to be executed in the image processing device 2; results such as parameters learned in advance by a dedicated learning device may be included in the CNN 200 as a trained model.
  • a learning model is trained using a combination of a noise image on which noise is superimposed and a high-quality image as training data.
  • the training data is created by capturing images of the same object with different exposure settings using an imaging device to obtain a high-quality image and a noise image.
  • machine learning requires a large amount of training data, and creating training data by capturing images using a camera is time-consuming. Therefore, a technique is known in which training data is created by adding random noise to a high-quality image (for example, see Japanese Patent Application Laid-Open No. 2021-071936). It is known to use such conventional techniques to create training data for inferring a high quality image from a low quality image by adding random noise to a high quality image.
  • the present invention aims to provide a technology that can generate training data for inferring a high-quality video from a low-quality video.
  • a learning model is trained to infer a high-quality video from which noise is removed by inputting low-quality video information with superimposed noise.
  • here, the term low-quality video includes low-quality moving images, and the term high-quality video includes high-quality moving images.
  • Teacher data used for learning by the learning device, program, and learning method of the noise reduction device according to the present embodiment is generated from a still image of a subject.
  • a still image captured of a subject may be a single high-quality image, or a combination of multiple images of the same subject (one or more high-quality images and one or more low-quality images).
  • a plurality of images of the same subject may be captured under different imaging conditions.
  • alternatively, the images captured of the subject may be any other combination that includes at least one image.
  • a high-quality image is, for example, an image captured with low ISO sensitivity and a long exposure.
  • a high quality image may be referred to as GT (Ground Truth).
  • an example of a low-quality image is an image captured with high ISO sensitivity and a short exposure.
  • in the following, image quality degradation due to noise is described as the example of a low-quality image, but the present embodiment is widely applicable to factors other than noise that degrade image quality.
  • factors that reduce image quality include reduced resolution or color shift due to optical aberrations, reduced resolution due to camera shake or subject blur, uneven black levels due to dark current or circuitry, ghosts and flare caused by high-brightness objects, and signal level abnormalities.
  • a low-quality image may also be referred to as a noise image.
  • a high-quality image may also be referred to as GT.
  • a low-quality video may also be referred to as a noise video.
  • a high-quality video may also be referred to as GT.
  • Images targeted by the learning device may be still images or frames included in a video.
  • the data format may be a format that has not undergone compression encoding processing, such as a Raw format, or a format that has undergone compression encoding processing, such as a Jpeg format or an MPEG format.
  • the image targeted by the learning device according to the present embodiment may be an image captured by a CCD camera using a CCD (Charge Coupled Devices) image sensor. Further, the image targeted by the learning device according to the present embodiment may be an image captured by a CMOS camera using a complementary metal oxide semiconductor (CMOS) image sensor. Further, the image targeted by the learning device according to the present embodiment may be a color image or a monochrome image. Further, the image targeted by the learning device according to the present embodiment may be an image captured by an infrared camera using an infrared sensor or the like to obtain a non-visible light component.
  • FIG. 11 is a diagram for explaining an overview of the learning system according to the fifth embodiment.
  • An overview of the learning system 1001 will be explained with reference to the same figure.
  • a learning system 1001 shown in the figure is an example of a configuration in the learning stage of machine learning.
  • the learning system 1001 trains the learning model 1040 using teacher data TD generated based on images captured by the imaging device 1020.
  • the learning system 1001 is equipped with an imaging device 1020 to capture a high-quality image 1031 and a low-quality image 1032.
  • the high-quality image 1031 and the low-quality image 1032 are images of the same subject.
  • the high-quality image 1031 and the low-quality image 1032 are captured at the same angle of view and imaging angle, but with different settings such as ISO sensitivity and exposure time.
  • although one high-quality image 1031 is shown, there may be a plurality of high-quality images 1031. Likewise, although a plurality of low-quality images 1032 are shown, there may be only one low-quality image 1032.
  • the plurality of low-quality images 1032 are different images captured with different settings such as ISO sensitivity and exposure time.
  • the imaging device 1020 may be, for example, a smartphone having communication means, a tablet terminal, or the like. Further, the imaging device 1020 may be a surveillance camera or the like having communication means.
  • the learning system 1001 generates a high-quality video 1033 from a high-quality image 1031 and a low-quality video 1034 from a low-quality image 1032.
  • the high-quality video 1033 is preferably generated from one high-quality image 1031
  • the low-quality video 1034 is preferably generated from a plurality of low-quality images 1032.
  • a high-quality video 1033 and a low-quality video 1034 generated from a high-quality image 1031 and a low-quality image 1032 captured from the same subject are associated with each other.
  • a high-quality video 1033 and a low-quality video 1034 that correspond to each other are input to the learning model 1040 for learning as teacher data TD.
  • the high-quality video 1033 and low-quality video 1034 that correspond to each other may be temporarily stored in a predetermined storage device for later learning. That is, the learning system 1001 may generate a plurality of teacher data TD in advance before learning that is performed later. Further, the high quality image 1031 and the low quality image 1032 captured by the imaging device 1020 may be temporarily stored in a predetermined storage device. In this case, the learning system 1001 may store a plurality of combinations of mutually corresponding high-quality images 1031 and low-quality images 1032, and generate teacher data TD during learning.
  • the learning model 1040 is trained using the teacher data TD generated by the learning system 1001. Specifically, the learning model 1040 is trained to infer high quality videos from low quality videos. In other words, the learning model 1040 after learning infers a high-quality video using a low-quality video as input, and outputs the inference result. That is, the learned model 1040 after learning may be used in a noise reduction device for removing noise from a low-quality video.
  • the high-quality image 1031 and low-quality image 1032 captured by the imaging device 1020 are stored in a predetermined storage device that temporarily stores information.
  • the predetermined storage device may be provided in the imaging device 1020, or may be provided in a cloud server or the like. That is, the learning system 1001 may be configured as an edge device or may include an edge device and a cloud server. Furthermore, the learning of the learning model 1040 may also utilize a GPU or the like provided on the server.
  • FIG. 12 is a diagram showing an example of the functional configuration of the learning device according to the fifth embodiment.
  • the functional configuration of the learning device 1010 will be explained with reference to the same figure.
  • the learning device 1010 is used to implement the learning system 1001 described above.
  • the learning device 1010 generates a high-quality video 1033 and a low-quality video 1034 based on the high-quality image 1031 and low-quality image 1032 captured by the imaging device 1020.
  • the learning device 1010 causes the learning model 1040 to learn using the generated high-quality video 1033 and low-quality video 1034 as teacher data TD.
  • the learning device 1010 includes an image acquisition section 1011, a video information generation section 1012, and a learning section 1013.
  • the learning device 1010 includes a CPU (Central Processing Unit), a storage device such as a ROM (Read only memory) or a RAM (Random access memory), etc., which are connected via a bus (not shown).
  • the learning device 1010 functions as a device including an image acquisition section 1011, a video information generation section 1012, and a learning section 1013 by executing a learning program.
  • the learning device 1010 may be realized using hardware such as an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), or an FPGA (Field-Programmable Gate Array).
  • the learning program may be recorded on a computer-readable recording medium.
  • the computer-readable recording medium is, for example, a portable medium such as a flexible disk, magneto-optical disk, ROM, or CD-ROM, or a storage device such as a hard disk built into a computer system.
  • the learning program may be transmitted via a telecommunications line.
  • the image acquisition unit 1011 acquires image information I from the imaging device 1020.
  • Image information I includes first image information I1 and second image information I2.
  • the first image information I1 includes at least one high-quality image 1031.
  • the second image information I2 includes at least one low-quality image 1032.
  • the same subject as the subject captured in the high-quality image 1031 included in the first image information I1 is captured in the low-quality image 1032 included in the second image information I2.
  • the image included in the second image information I2 has lower image quality than the image included in the first image information I1.
  • the image acquisition unit 1011 outputs the acquired image information I to the video information generation unit 1012.
  • the video information generation unit 1012 generates video information M by cutting out a plurality of parts of the images included in the image information I and connecting the cut-out images as frame images at a predetermined time interval (which can also be called a frame rate).
  • the frame rate may be, for example, 60 [FPS (frames per second)].
  • the position of the image cut out by the video information generation unit 1012 may be different for each frame.
  • the size of the cut out images may be fixed, and the video information generation unit 1012 may cut out a plurality of images at positions moved by a predetermined number of pixels (bit number) in a predetermined direction.
  • the size of the image to be cut out may be fixed to 256 pixels x 256 pixels.
  • for example, the video information generation unit 1012 may cut out an image at a position shifted by 10 pixels for each frame (a minimal sketch follows below). If the amount of shift is too large, the change in the image from frame to frame becomes too large and the resulting moving image looks unnatural, so it is preferable to set an upper limit so that the shift does not exceed a predetermined amount. It is preferable to determine the amount of shift and the limit based on the shooting angle of view, shooting resolution, focal length of the optical system, distance to the subject, shooting frame rate, and the like. Furthermore, since the speed of a falling subject increases with acceleration, the amount of shift may be increased for frames temporally farther from the target image.
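  • A minimal sketch of this cutting-out process, assuming a fixed 256 x 256 crop shifted by a constant number of pixels per frame (the function name and the parameter values are assumptions):

```python
def crops_from_still(image, num_frames=5, crop=256, shift=(10, 0), start=(0, 0)):
    """Cut a sequence of crop x crop frames out of one still image, shifting
    the window by `shift` pixels per frame. Connecting the crops in time
    yields a simple synthetic video (bounds checking omitted for brevity)."""
    frames = []
    (y, x), (dy, dx) = start, shift
    for _ in range(num_frames):
        frames.append(image[y:y + crop, x:x + crop].copy())
        y, x = y + dy, x + dx
    return frames  # e.g. displayed at 60 FPS to form the video
```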
  • the video information generation unit 1012 generates first video information M1 from the images included in the first image information I1, and generates second video information M2 from the images included in the second image information I2. That is, the video information generation unit 1012 cuts out a plurality of images at different positions that are part of the first image information I1, and generates the first video information M1 by combining the plurality of cut out images. Further, the video information generation unit 1012 cuts out a plurality of images at different positions that are part of the second image information I2, and generates the second video information M2 by combining the plurality of cut out images. Generating a moving image by combining a plurality of images may mean converting the plurality of images into a file format that displays them at predetermined time intervals depending on the frame rate. The video information generation unit 1012 outputs information including the generated first video information M1 and second video information M2 as video information M to the learning unit 1013.
  • the sizes of the plurality of images cut out by the video information generation unit 1012 and the cutout positions may be arbitrarily determined. However, it is preferable that the position to be cut out from the image included in the first image information I1 and the position to be cut out from the image included in the second image information I2 are approximately the same position. This is because the first video information M1, which is a high-quality video, and the second video information M2, which is a low-quality video, should be of the same subject.
  • the learning unit 1013 acquires video information M from the video information generation unit 1012.
  • the learning unit 1013 causes the learning model 1040 to learn by inputting the acquired video information M to the learning model 1040 as teacher data TD.
  • the learning model 1040 is trained to infer high quality videos from low quality videos. That is, the learning unit 1013 causes learning to infer a high-quality video from a low-quality video based on the teacher data TD that includes the first video information M1 and the second video information M2 generated by the video information generation unit 1012. .
  • the learning model 1040 can also be said to be trained to reason to remove noise from the input video.
  • a method of generating a high-quality video from a high-quality image (described with reference to FIG. 13) and a method of generating a low-quality video from a low-quality image (described with reference to FIG. 14) will be explained.
  • note that a high-quality video and a low-quality video may be generated using methods similar to each other. That is, a low-quality video may also be generated by the method described with reference to FIG. 13, and a high-quality video may also be generated by the method described with reference to FIG. 14.
  • FIG. 13 is a diagram for explaining an example of the position of an image cut out from a high-quality image by the learning device according to the fifth embodiment.
  • An example of the position of an image cut out from a high-quality image by the learning device 1010 will be described with reference to the same figure.
  • FIG. 13A shows an image I-11 that is an example of an image included in the first image information I1.
  • FIG. 13(B) shows an example of a case where a plurality of images are cut out from image I-11 shown in FIG. 13(A) as image I-12.
  • the ball B which is the subject, is captured in the image I-11.
  • the video information generation unit 1012 generates a video from the still image I-11 by cutting out a plurality of images from the image I-11 and temporally connecting the cut-out images.
  • the image I-12 shown in FIG. 13(B) shows a plurality of cut-out images CI, which are images cut out by the video information generation unit 1012. Specifically, cut-out images CI-11 to cut-out images CI-15 are shown as examples of images cut out by the video information generation unit 1012. When cutout images CI-11 to cutout images CI-15 are not distinguished, they may be simply written as cutout images CI.
  • the cutout images CI-11 to CI-15 are each shifted by a predetermined number of pixels in the vertical and horizontal directions.
  • by temporally connecting these cut-out images, a moving image is generated in which cutout image CI-11 is displayed at a certain time t1, cutout image CI-12 at time t2, cutout image CI-13 at time t3, cutout image CI-14 at time t4, and cutout image CI-15 at time t5.
  • it is suitable to determine the shift direction and shift amount of the image cut out by the video information generation unit 1012 based on shooting conditions such as the shooting angle of view, shooting resolution, focal length of the optical system, distance to the subject, and shooting frame rate. Furthermore, when simulating a falling object, since the speed increases with acceleration, it is preferable to gradually increase the shift amount.
  • the high-quality video (first video information M1) generated by the learning device 1010 is a high-quality video without superimposed noise. Therefore, it is ideal that noise is not superimposed on an image that is a still image for generating a moving image. Further, ideally, each frame of a high-quality video generated from an image without superimposed noise should also be free from superimposed noise. Therefore, it is preferable that the video information generation unit 1012 generates a video from a single image on which no noise is superimposed. That is, it is preferable that the video information generation unit 1012 generates the first video information M1 by cutting out different parts from one high-quality image included in the first image information I1.
  • FIG. 14 is a diagram for explaining an example of the position of an image cut out from a low-quality image by the learning device according to the fifth embodiment.
  • An example of the position of an image cut out from a low-quality image by the learning device 1010 will be described with reference to the same figure.
  • the learning device 1010 cuts out images of different frames from a plurality of low-quality images. Images I-21 to I-25, which are different images, are shown in FIGS. 14(A) to 14(E), respectively.
  • the learning device 1010 cuts out images of different frames from images I-21 to I-25.
  • the compositions of images I-21 to I-25, which are low-quality images, are similar to that of image I-11 shown in FIG. 13(A). That is, the ball B is imaged at the same position in images I-21 to I-25. Images I-21 to I-25 differ from image I-11 in that different noise is superimposed on each of them, for example because they were captured under different imaging conditions.
  • the video information generation unit 1012 cuts out a cutout image CI-21 from the image I-21, a cutout image CI-22 from the image I-22, a cutout image CI-23 from the image I-23, a cutout image CI-24 from the image I-24, and a cutout image CI-25 from the image I-25.
  • the cutout images CI-21 to CI-25 are each shifted by a predetermined number of pixels in the vertical and horizontal directions.
  • by temporally connecting these cut-out images, a moving image is generated in which cutout image CI-21 is displayed at a certain time t1, cutout image CI-22 at time t2, cutout image CI-23 at time t3, cutout image CI-24 at time t4, and cutout image CI-25 at time t5. Since different noise is superimposed on each of the cutout images CI-21 to CI-25, different noise is superimposed on the generated moving image at each time.
  • the low-quality video (second video information M2) generated by the learning device 1010 is a low-quality video on which noise is superimposed. If a video were created by cutting out multiple different positions from a single image with superimposed noise, the same noise would be included at every moment (in other words, the noise would not change over time), and such a video may not be appropriate as a low-quality video. Therefore, in this embodiment, a low-quality video is generated by cutting out from a plurality of different low-quality images, each of which captures the same subject as the subject captured in the high-quality image.
  • that is, the second image information I2 includes a plurality of images in which the same subject as the subject captured in the image included in the first image information I1 is captured, with different noise superimposed on each image.
  • the plurality of images included in the second image information I2 may be images captured at different times close to each other.
  • the video information generation unit 1012 generates the second video information M2 by cutting out different parts from each of the plurality of images included in the second image information I2. Note that it is not necessary to prepare as many low-quality images as there are frames; the images may be cut out multiple times from the plurality of images so that the same image is not used in consecutive frames, and the order in which the plurality of images are used may be random (see the sketch below).
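  • A sketch of generating the low-quality video from several noisy stills of the same scene, assuming the stills are cycled through in a shuffled order so that consecutive frames carry different noise (names and values are assumptions):

```python
import random

def noisy_video_from_stills(noisy_images, num_frames, crop=256,
                            shift=(10, 0), start=(0, 0)):
    """Build a low-quality video: each frame is cut from a *different* noisy
    still of the same scene, so the superimposed noise changes per frame."""
    order = list(range(len(noisy_images)))
    random.shuffle(order)            # assumed policy: random, non-repeating order
    frames = []
    (y, x), (dy, dx) = start, shift
    for i in range(num_frames):
        src = noisy_images[order[i % len(order)]]
        frames.append(src[y:y + crop, x:x + crop].copy())
        y, x = y + dy, x + dx
    return frames
```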
  • FIG. 15 is a diagram for explaining an example of the direction in which the learning device according to the fifth embodiment cuts out.
  • in the example described above, positions moved by a predetermined number of pixels in both the vertical and horizontal directions are cut out.
  • the video information generation unit 1012 may cut out positions moved in other directions.
  • Another example of the direction in which the video information generation unit 1012 cuts out the cutout image CI will be described with reference to FIGS. 15(A) to 15(C).
  • FIG. 15(A) shows image I-31.
  • FIG. 15(A) is an example of a case where a position moved only in the lateral direction (horizontal direction) is extracted.
  • the video information generation unit 1012 fixes the y-coordinate in the vertical direction and changes only the x-coordinate in the horizontal direction, thereby cutting out the cut-out images CI at a plurality of different positions. By cutting out the image in this way, it is possible to generate a moving image in which the subject moves laterally (horizontally). Similarly, the video information generation unit 1012 may cut out the cutout image CI at a position moved only in the vertical direction (vertical direction).
  • the video information generation unit 1012 may cut out the cutout image CI at a position moved in both the vertical and horizontal directions. In this case, the amount of movement in the vertical direction and the amount of movement in the lateral direction may be different from each other.
  • FIG. 15(B) shows image I-32.
  • FIG. 15(B) is an example of a case where a position moved in the rotational direction is extracted.
  • the video information generation unit 1012 cuts out the cutout images CI at a plurality of different positions by moving the cutout position along an arc with rotation center O and radius r.
  • the video information generation unit 1012 cuts out a position rotated counterclockwise. By cutting out in this way, it is possible to generate a moving image in which the subject moves in the rotational direction.
  • the position of the center of rotation O and the size of the radius r may differ from frame to frame.
  • FIG. 15(C) shows image I-33.
  • FIG. 15C is an example of enlarging and reducing the cutting position.
  • the size of the cutout image CI is constant. Therefore, the video information generation unit 1012 enlarges or reduces the image I and cuts it out while maintaining the size of the cut-out image CI.
  • for example, if the size of the cutout image CI is fixed at 256 pixels x 256 pixels, the video information generation unit 1012 enlarges or reduces the image I so that the cut-out region fits within the size of the cutout image CI. By cutting out the image in this way, it is possible to generate a moving image in which the subject appears to be zoomed in or zoomed out.
  • the cutout positions described with reference to FIGS. 15(A) to 15(C) are examples of this embodiment, and the video information generation unit 1012 can generate a video by cutting out and connecting other different positions. Information may be generated.
  • the video information generation unit 1012 may cut out the cutout image CI, for example, by combining the cutout methods described with reference to FIGS. 15(A) to 15(C). In this case, it is possible to generate a moving image in which, for example, the moving image is horizontally or vertically moved and then rotated, or moved and then enlarged or reduced.
  • the movement of the cutout position as described above may be calculated by an affine transformation. That is, the predetermined direction in which the video information generation unit 1012 cuts out an image can also be described as being calculated by an affine transformation (a sketch follows below).
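  • One possible way to derive the per-frame cutout positions with an affine transformation is sketched below; the specific rotation, scaling, and translation parameters are placeholders, not values from the embodiment:

```python
import numpy as np

def affine_crop_centers(center, num_frames, angle_per_frame_deg=5.0,
                        scale_per_frame=1.0, translate=(0.0, 0.0), origin=(0.0, 0.0)):
    """Generate one crop-center position per frame by repeatedly applying a
    2-D affine map (rotation + scaling about `origin`, then translation)."""
    theta = np.deg2rad(angle_per_frame_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]]) * scale_per_frame
    t = np.asarray(translate, dtype=np.float64)
    o = np.asarray(origin, dtype=np.float64)
    p = np.asarray(center, dtype=np.float64)
    centers = []
    for _ in range(num_frames):
        centers.append(tuple(p))
        p = rot @ (p - o) + o + t   # one affine step per frame
    return centers
```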
  • the video information generation unit 1012 may generate a video by cutting out a part of the image and then moving it.
  • for example, the video information generation unit 1012 cuts out an image of 256 pixels x 256 pixels and generates a plurality of images by moving the cut-out image in a predetermined direction.
  • the video information generation unit 1012 generates a video by connecting the cut out images. That is, the video information generation unit 1012 may cut out a plurality of images at different positions by shifting the plurality of cut out images in a predetermined direction. Note that by moving the image after cutting it out, an area where no data exists will occur around the image. However, by predefining the peripheral portion of the image as the margin, it is possible to exclude it from the range of the image to be learned, and to prevent problems from occurring in the later learning stage.
  • the video information generation unit 1012 generates a video by cutting out an image that has been moved in a direction calculated by some method such as affine transformation.
  • the learning device 1010 can generate a moving image by cutting out an image in which the object is moved in a direction based on the trajectory of the actual movement, and can generate training data that is more effective for machine learning.
  • An example of such a case will be described as a modification of the fifth embodiment with reference to FIGS. 16 and 17.
  • the video information generation unit 1012 may generate the video after performing correction to add pseudo subject blur to the still image for which the video is to be created.
  • for example, subject blur may be added by performing a predetermined averaging process along the shift direction or by performing a process that lowers the resolution (see the sketch below).
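  • Pseudo subject blur could be approximated, for example, by averaging shifted copies of the still image along the shift direction, as in the sketch below; the tap count and shift are assumptions, and a production implementation might instead use a proper motion-blur kernel:

```python
import numpy as np

def directional_blur(image, shift=(0, 3), taps=5):
    """Average the image with copies of itself displaced along the shift
    direction to imitate subject blur in that direction."""
    img = image.astype(np.float32)
    acc = np.zeros_like(img)
    dy, dx = shift
    for k in range(taps):
        acc += np.roll(img, (k * dy, k * dx), axis=(0, 1))
    return acc / taps
```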
  • FIG. 16 is a diagram illustrating an example of the functional configuration of a learning device according to a modification of the fifth embodiment when the learning device generates a moving image based on a trajectory vector.
  • An example of the functional configuration of a learning device 1010A according to a modification of the fifth embodiment will be described with reference to the same figure.
  • a learning system 1001A according to a modification of the fifth embodiment differs from the learning system 1001 in that it further includes a trajectory vector generation device 1050.
  • the learning device 1010A differs from the learning device 1010 in that it further includes a trajectory vector acquisition unit 1014.
  • the learning device 1010A differs from the learning device 1010 in that the learning device 1010A includes a video information generating section 1012A instead of the video information generating section 1012.
  • the same components as the learning device 1010 may be given the same reference numerals and the description thereof may be omitted.
  • the trajectory vector generation device 1050 acquires information regarding the trajectory of the object captured in the video. Video information is input to the trajectory vector generation device 1050, and the trajectory vector generation device 1050 analyzes the trajectory of the object imaged based on the input video information. Trajectory vector generation device 1050 outputs the analyzed result as trajectory vector TV.
  • the trajectory vector TV indicates the trajectory of the object captured in the video information.
  • Trajectory vector generation device 1050 acquires trajectory vector TV from video information using, for example, conventional technology such as optical flow. Note that the trajectory vector TV may include coordinate information indicating the trajectory of the movement of the object in addition to or in place of the vector information.
  • the trajectory vector acquisition unit 1014 acquires the trajectory vector TV from the trajectory vector generation device 1050.
  • the trajectory vector acquisition unit 1014 outputs the acquired trajectory vector TV to the video information generation unit 1012A.
  • the moving image for which the trajectory vector TV has been acquired by the trajectory vector generation device 1050 and the image acquired by the image acquisition unit 1011 may have a predetermined relationship.
  • the image acquisition unit 1011 may acquire, as an image, one frame of a video whose trajectory vector TV has been acquired by the trajectory vector generation device 1050.
  • however, the present embodiment is not limited to this example, and the video for which the trajectory vector TV is acquired by the trajectory vector generation device 1050 and the image acquired by the image acquisition unit 1011 need not have a predetermined relationship.
  • the video information generation unit 1012A acquires image information I from the image acquisition unit 1011 and acquires the trajectory vector TV from the trajectory vector acquisition unit 1014.
  • the video information generation unit 1012A generates video information based on the acquired image information I and trajectory vector TV.
  • the video information generation unit 1012A determines the cutting direction of the cutout image CI and the amount of shift per frame based on the trajectory indicated by the trajectory vector TV. That is, the predetermined direction in which the video information generation unit 1012A cuts out the image is calculated based on the acquired trajectory vector TV.
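  • The per-frame crop centers could be sampled along the trajectory vector TV, for example by linear interpolation along a polyline of trajectory points, as in the sketch below (the sampling scheme and function name are assumptions):

```python
import numpy as np

def centers_from_trajectory(trajectory_points, num_frames):
    """Sample crop-center coordinates per frame along a trajectory given as a
    polyline of (y, x) points, spacing the samples evenly by path length."""
    pts = np.asarray(trajectory_points, dtype=np.float64)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # segment lengths
    cum = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative length
    samples = np.linspace(0.0, cum[-1], num_frames)
    ys = np.interp(samples, cum, pts[:, 0])
    xs = np.interp(samples, cum, pts[:, 1])
    return list(zip(ys, xs))
```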
  • FIG. 17 is a diagram for explaining an example of the position of an image cut out from a still image when a learning device according to a modification of the fifth embodiment generates a moving image based on a trajectory vector.
  • An example of the position coordinates of the cut-out image CI in the case of generating a moving image based on the trajectory vector TV will be described with reference to the same figure.
  • FIG. 17A shows an image I-41 that is an example of an image included in the first image information I1.
  • FIG. 17B shows an example of a plurality of cut out images CI cut out from the image I-41.
  • the image I-41 shows a trajectory vector TV that is the trajectory of the ball B, which is the subject.
  • the trajectory vector TV represents a vector in which the ball B falls from the upper right direction in the figure to the lower center direction, bounces at the lower center point, and then moves toward the upper left direction in the figure.
  • the video information generation unit 1012A cuts out cutout images CI at position coordinates based on the trajectory vector TV shown in the image I-41 and temporally connects the cut-out images, thereby generating a moving image from the still image I-41.
  • FIG. 17(B) shows an example of a cut-out image CI, which is an image cut out by the video information generation unit 1012.
  • cutout images CI-41 to cutout images CI-49 are shown.
  • the cutout images CI-41 to CI-49 are located at coordinates based on the trajectory vector TV. That is, the cutout image CI-41 is located in the upper right direction in the figure, and the cutout position moves toward the center and lower in the figure as it approaches the cutout image CI-45. Further, the cutout position moves toward the upper left in the figure from cutout image CI-45 to cutout image CI-49.
  • FIG. 18 is a flowchart illustrating an example of a series of operations of the learning method of the noise reduction device according to the fifth embodiment. An example of a series of operations of the learning method of the noise reduction device using the learning device 1010 will be described with reference to the same figure.
  • Step S110 First, the image acquisition unit 1011 acquires an image.
  • the image acquisition unit 1011 acquires first image information I1 that includes a high-quality image and second image information I2 that includes a low-quality image.
  • the step of acquiring an image by the image acquisition unit 1011 may be referred to as an image acquisition step or an image acquisition process.
  • Step S130 the video information generation unit 1012 cuts out a part of the acquired image.
  • the video information generation unit 1012 cuts out a plurality of cut images CI from the acquired image.
  • the video information generation unit 1012 cuts out a plurality of cutout images CI from each of the high quality image included in the first image information I1 and the low quality image included in the second image information I2. Note that it is preferable that the position coordinates cut out from each of the high-quality image included in the first image information I1 and the low-quality image included in the second image information I2 are the same.
  • it is also preferable that the position coordinates to be cut out from each of the high-quality image included in the first image information I1 and the low-quality image included in the second image information I2 be determined by taking into account the deviation caused by the time difference between their captures. More specifically, it is preferable to change the position coordinates to be cut out from the high-quality image included in the first image information I1 or from the low-quality image included in the second image information I2 in a direction that reduces the amount of deviation caused by the time difference.
  • Step S150 the video information generation unit 1012 connects the cut out images to generate a video.
  • the video information generation unit 1012 generates a high-quality video by connecting multiple images cut out from high-quality images, and generates a low-quality video by connecting multiple images cut out from low-quality images.
  • the steps of generating video information in step S130 and step S150 may be referred to as a video information generation step or a video information generation process.
  • Step S170 the learning unit 1013 uses the combination of the generated high-quality video and low-quality video as teacher data TD and learns to infer a high-quality video from a low-quality video. This step may be referred to as a learning step or a learning process.
  • the learning device 1010 includes the image acquisition unit 1011 to acquire the first image information I1 and the second image information I2.
  • the first image information I1 includes at least one image, and the second image information I2 includes at least one image that captures the same subject as the subject captured in the image included in the first image information I1 and that has lower quality than the images included in the first image information I1.
  • the learning device 1010 includes the video information generation unit 1012, which cuts out a plurality of images at different positions that are part of the first image information I1 and generates the first video information M1 by combining the plurality of cut-out images.
  • similarly, the video information generation unit 1012 cuts out a plurality of images at different positions that are part of the second image information I2 and generates the second video information M2 by combining the plurality of cut-out images. Further, the learning device 1010 includes the learning unit 1013, which trains the learning model to infer a high-quality video from a low-quality video based on the teacher data TD that includes the first video information M1 and the second video information M2 generated by the video information generation unit 1012. That is, according to the present embodiment, the learning device 1010 does not need to acquire training data consisting of low-quality videos and high-quality videos by shooting videos, as was conventionally required, and can generate the training data from still images. Therefore, according to this embodiment, training data for inferring a high-quality video from a low-quality video can be easily generated.
  • the learning device 1010 can generate a plurality of different moving images from the same still image. Therefore, according to this embodiment, since a huge amount of teacher data TD is generated, it is not necessary to prepare a huge amount of still images, and many moving images can be generated from a small number of still images. Therefore, according to this embodiment, the time required to capture images for use in learning can be shortened.
  • the second image information I2 includes a plurality of images in which the same subject as the subject captured in the image included in the first image information I1 is captured, with mutually different noise superimposed on each image.
  • the video information generation unit 1012 generates the second video information M2 by cutting out different parts from each of the plurality of images included in the second image information I2. That is, according to the present embodiment, a low-quality moving image with superimposed noise is generated based on a plurality of different low-quality images with superimposed noise. Therefore, the second video information M2 generated according to the present embodiment has different noise superimposed on each frame, and can be generated by reproducing a low-quality video with noise superimposed more accurately.
  • the plurality of images included in the second image information I2 are images taken at different times that are close to each other. That is, low-quality images for generating a low-quality video are captured at close times.
  • the close time may be, for example, 1/60th of a second.
  • in a moving image, noise peculiar to moving images, having a temporal component, may be superimposed. Images captured at different but close times contain this noise specific to moving images. Therefore, according to the present embodiment, since the learning device 1010 generates a moving image based on images captured at different but close times, it can reproduce noise peculiar to moving images having a temporal component.
  • the video information generation unit 1012 generates the first video information M1 by cutting out a different part from one image included in the first image information I1. That is, according to this embodiment, a high-quality video is generated based on one image. Therefore, according to this embodiment, it is possible to easily generate a high-quality moving image without having to capture many high-quality images.
  • the video information generation unit 1012 cuts out a plurality of images at different positions by shifting the plurality of cut out images by different amounts in a predetermined direction. That is, according to this embodiment, the learning device 1010 cuts out the image and then shifts it in a predetermined direction. In other words, after cutting out an image, the learning device 1010 performs processing based on the small image that has been cut out, without requiring processing based on the large image. Therefore, according to this embodiment, the learning device 1010 can lighten the processing.
  • the video information generation unit 1012 cuts out a plurality of images at positions shifted by a predetermined number of bits in a predetermined direction.
  • the video information generation unit 1012 generates a video by connecting the cut out images. That is, the subject imaged in the video generated by the video information generation unit 1012 appears to move in a predetermined direction in the video. Therefore, according to this embodiment, a moving image can be easily generated from a still image.
  • the predetermined direction in which the video information generation unit 1012 cuts out an image is calculated by affine transformation.
  • the predetermined direction in which the video information generation unit 1012 cuts out the image is the direction in which the subject moves in the video. Therefore, according to this embodiment, the learning device 1010 can generate a video in which the subject moves in various directions.
  • the learning device 1010 further includes the trajectory vector acquisition unit 1014 to acquire the trajectory vector TV. Further, the predetermined direction in which the video information generation unit 1012 cuts out the image is calculated based on the acquired trajectory vector TV.
  • the trajectory vector TV is information on a vector indicating the trajectory along which a subject actually moves in an actually captured moving image. Therefore, according to this embodiment, a video can be generated based on the trajectory of the subject's actual movement (a minimal code sketch of the crop-and-shift procedure described above follows this item).
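The specification gives no code, but the crop-and-shift procedure summarized above can be illustrated with a short sketch. This is a minimal sketch under assumptions made here for illustration only: the function name, the NumPy representation, the patch size, and the expression of the direction as a (dy, dx) vector are not taken from the patent.

```python
import numpy as np

def synthesize_video_from_still(image, num_frames, patch_size, direction, step):
    """Cut out patches at positions shifted along `direction` and stack them
    into a synthetic video of shape (num_frames, H, W[, C])."""
    dy, dx = direction
    h, w = patch_size
    frames = []
    for t in range(num_frames):
        top = int(round(t * step * dy))
        left = int(round(t * step * dx))
        # Keep the crop window inside the source image.
        top = max(0, min(top, image.shape[0] - h))
        left = max(0, min(left, image.shape[1] - w))
        frames.append(image[top:top + h, left:left + w])
    return np.stack(frames, axis=0)

# Example: an 8-frame clip in which the subject appears to move to the right.
still = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
clip = synthesize_video_from_still(still, num_frames=8,
                                   patch_size=(128, 128),
                                   direction=(0.0, 1.0), step=4)
print(clip.shape)  # (8, 128, 128, 3)
```

In this sketch the direction vector plays the role of the trajectory vector TV: changing it changes the apparent movement of the subject in the generated clip.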
  • FIG. 19 is a diagram for explaining an overview of the learning system according to the sixth embodiment.
  • An overview of a learning system 1001B according to the sixth embodiment will be described with reference to the same figure.
  • the same components as those in the fifth embodiment may be given the same reference numerals and the description thereof may be omitted.
  • the imaging device 1020 captures a high-quality image 1031.
  • the low-quality image 1032 is generated based on the high-quality image 1031 by the learning device 1010B according to the sixth embodiment.
  • the low-quality image 1032 is generated, for example, by subjecting the high-quality image 1031 to image processing and superimposing noise. That is, according to the present embodiment, the imaging device 1020 captures only the high-quality image 1031 and does not need to capture the low-quality image 1032.
  • FIG. 20 is a diagram illustrating an example of the functional configuration of the video information generation section according to the sixth embodiment.
  • the video information generation unit 1012B included in the learning device 1010B will be described with reference to the same figure.
  • a learning device 1010B according to the sixth embodiment differs from the learning device 1010 in that it includes a video information generating section 1012B instead of the video information generating section 1012.
  • the video information generation section 1012B includes a cutting section 1121, a noise superimposition section 1123, a first video information generation section 1125, and a second video information generation section 1127.
  • the cutting unit 1121 acquires an image from the image acquiring unit 1011.
  • in the present embodiment, since the learning device 1010B acquires a high-quality image from the imaging device 1020, the cutout unit 1121 acquires that high-quality image from the image acquisition unit 1011.
  • the cutout unit 1121 cuts out a plurality of cutout images CI that are part of the acquired high-quality image and have different positional coordinates.
  • the cutout unit 1121 outputs the cutout image CI to the first moving image information generation unit 1125 and the noise superimposition unit 1123.
  • the noise superimposition unit 1123 acquires the cutout image CI cut out by the cutout unit 1121.
  • the noise superimposition unit 1123 superimposes noise on the acquired cutout image CI.
  • the noise superimposition unit 1123 obtains a plurality of cutout images CI cut out at a plurality of position coordinates, and superimposes noise on each of the obtained cutout images CI.
  • the noise superimposed by the noise superimposing unit 1123 may be modeled in advance.
  • the modeled noises include shot noise due to fluctuations in the number of photons, noise that occurs when the light incident on the image sensor is converted into electrons, noise that occurs when the converted electrons are converted into analog voltage values, and noise that occurs when the analog voltage values are converted into digital signals (a minimal sketch of such a model is shown after these items).
  • the intensity of the superimposed noise may be adjusted by a predetermined method. It is preferable that the noise superimposition unit 1123 superimposes different noises on each of the plurality of cut-out images CI.
  • the noise superimposition unit 1123 outputs the image after superimposing noise to the second moving image information generation unit 1127 as a noise image NI.
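The noise superimposition described above can be made concrete with a small sensor-noise sketch. It is only an approximation of the modeled noise of this embodiment; the specific distributions, full-well value, read-noise level, and bit depth below are assumptions chosen for illustration, not values from the specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def superimpose_modeled_noise(clean_patch, full_well=1000.0,
                              read_noise_std=3.0, adc_bits=10):
    """Superimpose a simple physics-inspired noise model on one cut-out image CI
    (values in [0, 1]) and return the resulting noise image NI."""
    # Shot noise: the number of photons (photo-electrons) collected by each
    # pixel fluctuates according to a Poisson distribution.
    electrons = rng.poisson(clean_patch * full_well).astype(np.float64)
    # Read noise: converting the electrons to an analog voltage adds a
    # roughly Gaussian perturbation.
    voltage = electrons + rng.normal(0.0, read_noise_std, size=electrons.shape)
    # Quantization noise: the analog voltage is converted to a digital code.
    max_code = 2 ** adc_bits - 1
    digital = np.clip(np.round(voltage / full_well * max_code), 0, max_code)
    return digital / max_code

# Calling the function once per cut-out image draws different noise each time.
patch = np.full((128, 128), 0.5)
noisy_1 = superimpose_modeled_noise(patch)
noisy_2 = superimpose_modeled_noise(patch)   # a different noise realization
```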
  • the first video information generation unit 1125 acquires a plurality of cut out images CI from the cut out unit 1121.
  • the first video information generation unit 1125 generates first video information M1 by combining the plurality of cut out images.
  • the first video information generation unit 1125 outputs the generated first video information M1 to the learning unit 1013.
  • the second video information generation unit 1127 acquires a plurality of noise images NI from the noise superimposition unit 1123.
  • the second video information generation unit 1127 generates second video information M2 by combining a plurality of noise images NI on which noise is superimposed.
  • the second video information generation unit 1127 outputs the generated second video information M2 to the learning unit 1013.
  • the learning unit 1013 acquires the first video information M1 from the first video information generation unit 1125 and the second video information M2 from the second video information generation unit 1127.
  • the learning unit 1013 trains the learning model 1040 based on the first video information M1 and the second video information M2 generated by the video information generation unit 1012B.
  • the learning device 1010B includes the image acquisition unit 1011 to acquire image information I including at least one high-quality image. Further, the learning device 1010B includes a video information generation unit 1012B to generate both high-quality videos and low-quality videos from high-quality images.
  • the video information generation unit 1012B includes a cutting unit 1121 to cut out a plurality of images at different positions that are part of the acquired image information I. Furthermore, the video information generation unit 1012B includes a noise superimposition unit 1123 to superimpose noise on each of the plurality of images cut out by the cutout unit 1121.
  • the video information generation unit 1012B includes the first video information generation unit 1125, which generates the first video information M1, a high-quality video, by combining the plurality of images cut out by the cutout unit 1121, and the second video information generation unit 1127, which generates the second video information M2, a low-quality video, by combining the plurality of images on which noise has been superimposed by the noise superimposition unit 1123.
  • thereby, based on the first video information M1 generated by the first video information generation unit 1125 and the second video information M2 generated by the second video information generation unit 1127, the learning device 1010B generates a high-quality video and a low-quality video from a single high-quality image and trains the learning model 1040 to infer a high-quality video from the low-quality video (the end-to-end flow is sketched after the next item).
  • inferring a high-quality video from a low-quality video corresponds to noise removal. Therefore, according to the present embodiment, a noise removal model can be trained easily without spending time acquiring the teacher data TD.
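As the forward reference above indicates, the flow through the cutout unit 1121, the noise superimposition unit 1123, the first video information generation unit 1125, and the second video information generation unit 1127 can be sketched by combining the two code sketches given earlier. The pairing of M2 as the model input and M1 as the training target, and all default parameter values, are assumptions for illustration only.

```python
import numpy as np

def generate_teacher_data(high_quality_image, num_frames=8,
                          patch_size=(128, 128), direction=(0.0, 1.0), step=4):
    """Generate one (second video information M2, first video information M1)
    pair from a single high-quality still image, reusing the sketches above."""
    # Cutout unit 1121: cut-out images CI at different positions.
    clean_clip = synthesize_video_from_still(
        high_quality_image.astype(np.float64) / 255.0,
        num_frames, patch_size, direction, step)
    # First video information generation unit 1125: high-quality video M1.
    m1 = clean_clip
    # Noise superimposition unit 1123 and second video information generation
    # unit 1127: different noise on every frame gives the low-quality video M2.
    m2 = np.stack([superimpose_modeled_noise(f) for f in clean_clip], axis=0)
    return m2, m1  # teacher data TD: (model input, training target)

high_quality_image = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
m2, m1 = generate_teacher_data(high_quality_image)
print(m2.shape, m1.shape)  # (8, 128, 128) (8, 128, 128)
```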
  • in the embodiment described above, a high-quality video is generated from a high-quality image, a low-quality image is generated by superimposing noise on the high-quality image, and a low-quality video is generated based on the generated low-quality image.
  • the learning device 1010 may create the teacher data TD based only on low-quality images. That is, a low-quality video may be generated from a low-quality image, a high-quality image may be generated by further removing noise from the low-quality video, and a high-quality video may be generated based on the generated high-quality image.
  • the number of images used to generate the moving image may be one or multiple.
  • the learning device 1010 and learning device 1010A described in the fifth embodiment and the learning device 1010B described in the sixth embodiment are examples used for learning a learning model 1040 that infers a high-quality video from a low-quality video.
  • the learning model 1040 may be configured to have, after inferring a high-quality video from the low-quality video, a function of detecting a specific subject such as a person in the high-quality video, or a function of recognizing characters on signboards and the like in the high-quality video. That is, the high-quality video inferred by the learning model 1040 is not limited to a video for viewing, and may also be used for purposes such as object detection.
  • the ideal training data is a video that includes as much of the expected movement of the subject as possible.
  • SYMBOLS 1...High-quality video generation system, 2...Image processing device, 10...Input information generation device, 11...Image acquisition unit, 12...Input conversion unit, 13...Synthesis unit, 14...Output unit, 15...Integration unit, 16...Average value temporary storage unit, 17...Imaging condition acquisition unit, 18...Adjustment unit, 19...Comparison unit, 100...Imaging device, 200...CNN, 210...Input layer, 220...Convolution layer, 230...Pooling layer, 240...Output layer, IM...Video information, IN...Input information, TF...Target frame, AF...Adjacent frame, IF...Integrated frame, M1...First memory, M2...Second memory, IMD...Image information, IND...Input data, CD...Synthetic data, SV...Stored value, CV...Calculated value, CR...Comparison result, 1001...Learning system, 1010...Learning device, 1011...Image acquisition unit, 1012...Video information generation unit, 1013...Learning unit, 1014...Trajectory vector acquisition unit, 1020...Imaging device, 1031...High-quality image, 1032...Low-quality image, 1033...High-quality video, 1034...Low-quality video, 1040...Learning model, 1050...Trajectory vector generation device, TD...Teacher data, I...Image information, I1...First image information, I2...Second image information, M...Video information, M1...First video information, M2...Second video information, TV...Trajectory vector, 1121...Cutout unit, 1123...Noise superimposition unit, 1125...First video information generation unit, 1127...Second video information generation unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

This input information generation device comprises: an image acquisition unit that acquires, as input images, a plurality of frames including at least a target frame that is the target of input information generation, among the frames constituting a video; an input conversion unit that converts pixel values of the input images of the plurality of acquired frames into a plurality of items of two-bit input data; a synthesis unit that synthesizes the plurality of items of converted input data into one item of composite data; and an output unit that outputs the synthesized composite data.
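The abstract describes converting the pixel values of several acquired frames into two-bit input data and synthesizing them into a single item of composite data. The following is a minimal sketch of that idea only; the uniform most-significant-bit quantization, the stacking along a new axis, and all names are assumptions of this illustration, not the conversion or synthesis method defined in the specification.

```python
import numpy as np

def to_two_bit(frame_u8):
    """Quantize 8-bit pixel values into 2-bit input data (codes 0..3)."""
    return frame_u8.astype(np.uint8) >> 6   # keep the two most significant bits

def make_composite_data(input_frames_u8):
    """Convert each acquired frame (the target frame and its neighbours) to
    2-bit input data and synthesize them into one composite array."""
    quantized = [to_two_bit(frame) for frame in input_frames_u8]
    return np.stack(quantized, axis=0)       # one item of composite data

frames = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(3)]
composite = make_composite_data(frames)
print(composite.shape, composite.max())      # (3, 64, 64) 3
```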
PCT/JP2023/021204 2022-08-31 2023-06-07 Dispositif de génération d'informations d'entrée, dispositif de traitement d'image, procédé de génération d'informations d'entrée, dispositif d'apprentissage, programme, et procédé d'apprentissage pour dispositif de réduction de bruit WO2024047994A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2022137843A JP2024033920A (ja) 2022-08-31 2022-08-31 学習装置、プログラム及びノイズ低減装置の学習方法
JP2022-137843 2022-08-31
JP2022-137834 2022-08-31
JP2022137834A JP2024033913A (ja) 2022-08-31 2022-08-31 入力情報生成装置、画像処理装置及び入力情報生成方法

Publications (1)

Publication Number Publication Date
WO2024047994A1 true WO2024047994A1 (fr) 2024-03-07

Family

ID=90099234

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/021204 WO2024047994A1 (fr) 2022-08-31 2023-06-07 Dispositif de génération d'informations d'entrée, dispositif de traitement d'image, procédé de génération d'informations d'entrée, dispositif d'apprentissage, programme, et procédé d'apprentissage pour dispositif de réduction de bruit

Country Status (1)

Country Link
WO (1) WO2024047994A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011171843A (ja) * 2010-02-16 2011-09-01 Fujifilm Corp 画像処理方法及び装置並びにプログラム
JP2019106059A (ja) * 2017-12-13 2019-06-27 日立オートモティブシステムズ株式会社 演算システム、サーバ、車載装置
JP2020010331A (ja) * 2018-07-03 2020-01-16 株式会社ユビタス 画質を向上させる方法
CN113055674A (zh) * 2021-03-24 2021-06-29 电子科技大学 一种基于两阶段多帧协同的压缩视频质量增强方法

Similar Documents

Publication Publication Date Title
CN110728648B (zh) 图像融合的方法、装置、电子设备及可读存储介质
US8189960B2 (en) Image processing apparatus, image processing method, program and recording medium
US10021313B1 (en) Image adjustment techniques for multiple-frame images
US8077214B2 (en) Signal processing apparatus, signal processing method, program and recording medium
JP4898761B2 (ja) オブジェクト追跡を用いたデジタル画像の手ぶれ補正装置および方法
US9288392B2 (en) Image capturing device capable of blending images and image processing method for blending images thereof
US11184553B1 (en) Image signal processing in multi-camera system
CN105141841B (zh) 摄像设备及其方法
KR20150108774A (ko) 비디오 시퀀스를 프로세싱하는 방법, 대응하는 디바이스, 컴퓨터 프로그램 및 비일시적 컴퓨터 판독가능 매체
CN113170158B (zh) 视频编码器和编码方法
CN116711317A (zh) 用于图像处理的高动态范围技术选择
JP4916378B2 (ja) 撮像装置、画像処理装置、画像ファイル及び階調補正方法
US20180197282A1 (en) Method and device for producing a digital image
JP2010220207A (ja) 画像処理装置及び画像処理プログラム
CN112750092A (zh) 训练数据获取方法、像质增强模型与方法及电子设备
WO2020090176A1 (fr) Dispositif de traitement d'image et procédé de traitement d'image
JP4879363B1 (ja) 画像処理システム
WO2024047994A1 (fr) Dispositif de génération d'informations d'entrée, dispositif de traitement d'image, procédé de génération d'informations d'entrée, dispositif d'apprentissage, programme, et procédé d'apprentissage pour dispositif de réduction de bruit
EP4167134A1 (fr) Système et procédé pour maximiser la précision d'inférence à l'aide d'ensembles de données recapturés
CN109218602B (zh) 影像撷取装置、影像处理方法及电子装置
JP5202277B2 (ja) 撮像装置
JP2024033913A (ja) 入力情報生成装置、画像処理装置及び入力情報生成方法
JP4462017B2 (ja) 欠陥検出補正装置、撮像装置および欠陥検出補正方法
CN113973175A (zh) 一种快速的hdr视频重建方法
JP2009296224A (ja) 撮像手段及びプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23859764

Country of ref document: EP

Kind code of ref document: A1